This is a pip package implementing Reinforcement Learning algorithms for non-stationary environments, built on the OpenAI Gym toolkit. It contains both dynamic environments, i.e. environments whose transition and reward functions depend on time, and implementations of several algorithms.
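As a minimal sketch of what "non-stationary" means here, the toy MDP below has a reward function that depends explicitly on the time step. It is a hypothetical illustration, not an environment from this package:

```python
class NonStationaryMDP:
    """Toy MDP on an integer line whose reward depends on the time step.

    Hypothetical example for illustration, not part of dyna-gym: the
    rewarded direction of movement flips every 10 time steps.
    """

    def __init__(self):
        self.t = 0
        self.state = 0

    def reset(self):
        self.t = 0
        self.state = 0
        return self.state

    def step(self, action):
        # Transition: action 1 moves right, anything else moves left.
        move = 1 if action == 1 else -1
        self.state += move
        # Time-dependent reward: the sign flips every 10 steps.
        sign = 1 if (self.t // 10) % 2 == 0 else -1
        reward = sign * move
        self.t += 1
        return self.state, reward
```

An optimal policy for this MDP must track the time step: always moving right earns +1 per step at first, then -1 per step once the reward flips.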
The implemented environments are listed below and can be found in dyna-gym/dyna_gym/envs.
For each environment, the id passed as an argument to the gym.make function is written in bold.
- CartPoleDynamicTransition-v0. A cart pole environment with a time-varying direction of the gravitational force;
Cart pole in the CartPoleDynamicTransition-v0 environment. The red bar indicates the direction of the gravitational force.
- CartPoleDynamicReward-v1. A cart pole environment with a double objective: to balance the pole and to keep the position of the cart along the x-axis within a time-varying interval;
Cart pole in the CartPoleDynamicReward-v1 environment. The two red dots correspond to the limiting interval.
- CartPoleDynamicReward-v2. A cart pole environment with a time-varying cone into which the pole should balance.
Cart pole in the CartPoleDynamicReward-v2 environment. The two black lines correspond to the limiting angle interval.
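For intuition, a time-varying gravity direction such as the one in CartPoleDynamicTransition-v0 can be modeled as an angle that oscillates with time. The schedule below is a hypothetical illustration only; see the environment's source in dyna-gym/dyna_gym/envs for the actual dynamics:

```python
import math

def gravity_direction(t, period=100.0, max_tilt=math.pi / 6):
    """Hypothetical schedule: the gravity angle (radians, 0 = straight
    down) oscillates sinusoidally with time. Illustration only."""
    return max_tilt * math.sin(2.0 * math.pi * t / period)

def gravity_vector(t, g=9.8):
    """Gravity as an (x, y) force vector for the tilted direction."""
    theta = gravity_direction(t)
    return (g * math.sin(theta), -g * math.cos(theta))
```

At t = 0 the force points straight down, (0, -9.8); a quarter period later it is tilted by the maximum angle. Because the direction changes over time, a policy tuned for one instant degrades later, which is what makes the environment non-stationary.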
The implemented algorithms are listed below and can be found in dyna-gym/dyna_gym/agents.
- Random action selection;
- Vanilla MCTS algorithm (random tree policy);
- UCT algorithm;
- OLUCT algorithm;
- Online Asynchronous Dynamic Programming with tree structure.
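As a reminder of the idea behind the UCT tree policy (a generic sketch, not this package's implementation), each step down the tree selects the child maximizing the UCB1 score, which trades off the empirical mean return against an exploration bonus:

```python
import math

def ucb1_score(mean_value, child_visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score used by UCT's tree policy: exploitation term plus an
    exploration bonus that shrinks as the child is visited more often."""
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    return mean_value + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_child(children):
    """Return the index of the child with the highest UCB1 score.

    `children` is a list of (mean_value, visit_count) tuples; the parent
    visit count is taken as the sum of child visits.
    """
    parent_visits = sum(v for _, v in children)
    scores = [ucb1_score(m, v, parent_visits) for m, v in children]
    return max(range(len(children)), key=scores.__getitem__)
```

With equal visit counts the child with the higher mean wins; an unvisited child always wins, which is what forces every action to be tried at least once. Vanilla MCTS replaces this selection rule with uniformly random choice.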
Type the following commands to install the package:
cd dyna-gym
pip install -e .
Examples are provided in the example/ directory. You can run them with your installed version of Python.
Edited June 12, 2018.
The package depends on several standard Python modules and common packages. An up-to-date list is the following: copy; csv; gym; itertools; logging; math; matplotlib; numpy; random; setuptools; statistics.
Some algorithms also require non-standard libraries: scikit-learn (see its website); LWPR (see the git repository for a Python 3 binding).