# PPO Experiments

This repo contains most of the code for the blog post "Advancements in PPO", which you can read at https://go281.user.srcf.net/blog/ppo.

## Install

```sh
pip install -r requirements.txt
```

## Run

Train agents with experiment tracking:

```sh
python ppo_critic_first.py --gym-id CartPole-v1 --cuda --seed 0
```
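For context, the core objective that PPO training scripts like the one above optimize is the clipped surrogate loss. The snippet below is a minimal illustrative sketch in NumPy (the function name and shapes are hypothetical, not taken from this repo's code):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, clip_eps=0.2):
    """Per-sample PPO clipped surrogate objective (to be maximized).

    ratio:     pi_new(a|s) / pi_old(a|s) for each sample
    advantage: advantage estimate for each sample
    """
    unclipped = ratio * advantage
    # Clipping the ratio caps how much a single update can benefit
    # from moving the policy far from the old one.
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return np.minimum(unclipped, clipped)

# A ratio of 1.5 with positive advantage is capped at 1.2 * A:
print(ppo_clip_objective(np.array([1.5]), np.array([1.0])))  # [1.2]
```

In a real training loop (as in the scripts here), the mean of this objective over a minibatch is maximized by gradient ascent, alongside a value loss and an entropy bonus.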

## Acknowledgements

This code is modified from the incredible blog post "The 37 Implementation Details of Proximal Policy Optimization" and its accompanying code: https://github.com/vwxyzjn/ppo-implementation-details.

For more of the project's code, see https://github.com/George-Ogden/spinningup.