This repository implements the classic deep reinforcement learning algorithms using PyTorch. The aim of this repository is to provide clear code for people learning deep reinforcement learning algorithms. More algorithms will be added in the future, and the existing code will be maintained.
- Deep Q-Learning Network (DQN)
  - Basic DQN
  - Double Q network (see the sketch after this list)
  - Dueling Network Architecture
- Deep Deterministic Policy Gradient (DDPG)
- Advantage Actor-Critic (A2C)
- Trust Region Policy Optimization (TRPO)
- Proximal Policy Optimization (PPO)
- Actor Critic using Kronecker-Factored Trust Region (ACKTR)
- Soft Actor-Critic (SAC)
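As a quick illustration of the DQN variants above, here is a minimal sketch of how the Double DQN target differs from the vanilla DQN target. All function and tensor names are assumptions for the example, not the repository's actual code:

```python
import torch

def dqn_target(target_net, next_obs, rewards, dones, gamma=0.99):
    # Vanilla DQN: the target network both selects and evaluates the
    # next action, which tends to overestimate Q-values.
    next_q = target_net(next_obs).max(dim=1)[0]
    return rewards + gamma * (1.0 - dones) * next_q

def double_dqn_target(q_net, target_net, next_obs, rewards, dones, gamma=0.99):
    # Double DQN: the online network selects the action and the target
    # network evaluates it, which reduces the overestimation bias.
    next_actions = q_net(next_obs).argmax(dim=1, keepdim=True)
    next_q = target_net(next_obs).gather(1, next_actions).squeeze(1)
    return rewards + gamma * (1.0 - dones) * next_q
```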
🚩 2018-10-17 - In this update, most of the algorithms have been improved and more experiments with plots have been added (except for DDPG). PPO now supports Atari games and MuJoCo environments. TRPO is much more stable and achieves better results!
🚩 2019-07-15 - In this update, installing the OpenAI Baselines is no longer needed; the useful functions have been integrated into the rl_utils module. DDPG has also been re-implemented and supports more results. The README has been revised, and the code structure has been slightly adjusted.
🚩 2019-07-26 - In this update, the revised repository is made public. To keep the repository small, I rebuilt it and deleted the previous version, but a backup is kept on Google Drive.
🚩 2019-11-13 - Changed the code structure of the repo: all algorithms have been moved to the rl_algorithms/ folder. Added the Soft Actor-Critic method; the experiment plots will be added soon.
- Add prioritized experience replay.
- Stop relying on OpenAI Baselines' pre-processing functions.
- Improve DDPG - a PyTorch Hindsight Experience Replay (HER) with DDPG is already implemented; you could check it here.
- Upload pre-trained models to Google Drive (will update soon!).
- pytorch=1.0.1
- gym=0.12.5
- mpi4py
- mujoco-py
- opencv-python
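A quick way to check that the core dependencies are importable (the version pins above are what the code was tested against; mujoco-py is left out here because it also needs a local MuJoCo installation):

```python
import torch
import gym
import mpi4py
import cv2  # provided by opencv-python

print("torch:", torch.__version__)    # expected: 1.0.1
print("gym:", gym.__version__)        # expected: 0.12.5
print("mpi4py:", mpi4py.__version__)
```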
- Install our `rl_utils` module: `pip install -e .`
- Install mujoco: please follow the instructions on the official website.
- Install Box2d:
  - `sudo apt-get install swig` (or `brew install swig` on macOS)
  - `pip install gym[box2d]`
  - `pip install box2d box2d-kengz`
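A small smoke test to confirm the Box2d install works (the environment id follows the gym 0.12.x naming used in this repo):

```python
import gym

# Creating and stepping a Box2D env verifies that swig/box2d are set up.
env = gym.make("BipedalWalker-v2")
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
print("Box2D OK, observation shape:", obs.shape)
```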
- Train the agent (details can be found in each folder):
  - `cd rl_algorithms/<target_algo_folder>/`
  - `python train.py --<arguments you need>`
- Play the demo:
  - `cd rl_algorithms/<target_algo_folder>/`
  - `python demo.py --<arguments you need>`
- rl algorithms:
  - `arguments.py`: contains the parameters used in training.
  - `<rl-name>_agent.py`: contains the core of each reinforcement learning algorithm.
  - `models.py`: the network structures for the policy and value functions.
  - `utils.py`: useful functions, such as action selection (see the sketch after this list).
  - `train.py`: the script to train the agent.
  - `demo.py`: visualizes the trained agent.
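For example, the action-selection helper in `utils.py` for the value-based agents could be an epsilon-greedy rule along these lines (a sketch with assumed names and signature, not the repo's exact code):

```python
import numpy as np
import torch

def select_actions(q_values, epsilon):
    # Epsilon-greedy: explore with probability epsilon, otherwise act
    # greedily w.r.t. the current Q-estimates (q_values: 1 x num_actions).
    if np.random.uniform() < epsilon:
        return np.random.randint(q_values.shape[1])
    return torch.argmax(q_values, dim=1).item()
```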
- rl_utils module:
  - `env_wrapper/`: pre-processing functions for the Atari games and wrappers to create environments.
  - `experience_replay/`: experience replay for the off-policy RL algorithms (see the sketch after this list).
  - `logger/`: functions to record log information during training.
  - `mpi_utils/`: tools for MPI training.
  - `running_filter/`: running-mean filter functions to normalize observations in the MuJoCo environments.
  - `seeds/`: functions to set up random seeds for reproducible training.
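As an example of what `experience_replay/` provides for the off-policy methods, a minimal uniform replay buffer could look like the following (illustrative; the module's actual API may differ):

```python
import random
from collections import deque

import numpy as np

class ReplayBuffer:
    """Uniform experience replay for off-policy algorithms (DQN, DDPG, SAC)."""

    def __init__(self, capacity):
        # Bounded buffer: the oldest transitions are evicted first.
        self.buffer = deque(maxlen=capacity)

    def add(self, obs, action, reward, next_obs, done):
        self.buffer.append((obs, action, reward, next_obs, done))

    def sample(self, batch_size):
        # Uniformly sample a batch and stack each field into an array.
        batch = random.sample(self.buffer, batch_size)
        obs, actions, rewards, next_obs, dones = map(np.array, zip(*batch))
        return obs, actions, rewards, next_obs, dones

    def __len__(self):
        return len(self.buffer)
```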
Demo environments: Atari Env (BreakoutNoFrameskip-v4), Box2d Env (BipedalWalker-v2), Mujoco Env (Hopper-v2).
[1] A Brief Survey of Deep Reinforcement Learning
[2] The Beta Policy for Continuous Control Reinforcement Learning
[3] Playing Atari with Deep Reinforcement Learning
[4] Deep Reinforcement Learning with Double Q-learning
[5] Dueling Network Architectures for Deep Reinforcement Learning
[6] Continuous control with deep reinforcement learning
[7] Continuous Deep Q-Learning with Model-based Acceleration
[8] Asynchronous Methods for Deep Reinforcement Learning
[9] Trust Region Policy Optimization
[10] Proximal Policy Optimization Algorithms
[11] Soft Actor-Critic Algorithms and Applications
[12] Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation