The Action Branching Agents repository provides a set of deep reinforcement learning agents that incorporate the action branching architecture into existing reinforcement learning algorithms.
Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial increase of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks that require fine control of actions via discretization. To address this problem, we propose the action branching architecture, a novel neural architecture featuring a shared network module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension.
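The output-count argument above can be illustrated with a minimal sketch. The layer sizes, weights, and dimension/bin counts below are hypothetical placeholders, not the repository's actual network: the point is only that a shared module feeding one small head per action dimension needs `N * n` outputs, whereas a single flat discrete head would need `n ** N`.

```python
import numpy as np

rng = np.random.default_rng(0)

state_dim, hidden_dim = 8, 32
num_dims, bins_per_dim = 4, 5  # hypothetical: 4 action dimensions, 5 bins each

# Shared network module: one linear layer with random placeholder weights.
W_shared = rng.normal(size=(state_dim, hidden_dim))

# One branch per action dimension, each emitting `bins_per_dim` scores.
W_branches = [rng.normal(size=(hidden_dim, bins_per_dim)) for _ in range(num_dims)]

def branching_forward(state):
    """Return one score vector per action dimension."""
    shared = np.tanh(state @ W_shared)       # shared representation
    return [shared @ W for W in W_branches]  # independent branch outputs

state = rng.normal(size=state_dim)
outputs = branching_forward(state)

# Linear growth: num_dims * bins_per_dim branch outputs,
# versus bins_per_dim ** num_dims for a flat joint-action head.
total_branched = sum(o.size for o in outputs)
print(total_branched, bins_per_dim ** num_dims)  # 20 vs 625
```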
Branching Dueling Q-Network (BDQ) is a novel agent that incorporates the proposed action branching architecture into the Deep Q-Network (DQN) algorithm and adapts a selection of its extensions: Double Q-Learning, Dueling Network Architectures, and Prioritized Experience Replay.
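As a hedged sketch of how a dueling architecture can combine with branching: a common state-value estimate is aggregated with per-branch advantages (here, centered by each branch's mean advantage, one of the aggregation schemes discussed in the paper), and greedy action selection is then performed independently per branch. The numbers below are made up for illustration.

```python
import numpy as np

V = 1.5                              # shared state-value estimate (hypothetical)
A = np.array([[2.0, 0.0, -2.0],      # advantages for branch 1 (3 sub-actions)
              [1.0, 1.0, -2.0]])     # advantages for branch 2

# Per-branch dueling aggregation: subtract each branch's mean advantage,
# then add the common state value.
Q = V + (A - A.mean(axis=1, keepdims=True))

# Each branch selects its sub-action independently.
greedy = Q.argmax(axis=1)
print(Q)
print(greedy)  # [0 0]
```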
As we show in the paper, BDQ is able to solve numerous continuous control domains via discretization of the action space. Most remarkably, BDQ performs well on the Humanoid-v1 domain, which has a total of approximately 6.5 × 10^25 discrete actions after discretization.
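To make the quoted figure concrete: Humanoid-v1 has 17 continuous action dimensions, and discretizing each into 33 bins (a bin count inferred here from the quoted total, so treat it as an assumption) yields a flat joint-action space of roughly 6.5 × 10^25, while a branching agent only needs 17 × 33 = 561 network outputs.

```python
# Assumed discretization: 17 action dimensions, 33 bins per dimension.
bins, dims = 33, 17

flat_actions = bins ** dims     # joint actions a flat discrete agent must enumerate
branch_outputs = bins * dims    # outputs a branching architecture needs

print(f"{flat_actions:.1e} joint actions vs {branch_outputs} branch outputs")
```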
You can clone this repository with `git clone https://github.com/atavakol/action-branching-agents.git`.
You can readily train a new model for any continuous control domain compatible with the OpenAI Gym by running the `train_continuous.py` script from the agent's main directory. Alternatively, you can evaluate a pre-trained model included in the agent's `trained_models` directory by running the `enjoy_continuous.py` script from the same location.
If you use this work, please cite it with the following BibTeX entry:
@inproceedings{tavakoli2018branching,
  title={Action Branching Architectures for Deep Reinforcement Learning},
  author={Tavakoli, Arash and Pardo, Fabio and Kormushev, Petar},
  booktitle={AAAI Conference on Artificial Intelligence},
  pages={4131--4138},
  year={2018}
}