Action Branching Agents

Overview

The Action Branching Agents repository provides a set of deep reinforcement learning agents, implemented in TensorFlow, that incorporate the action branching architecture (from our AAAI 2018 paper) into existing reinforcement learning algorithms.

[Figure: the action branching architecture, with a shared network module followed by one branch per action dimension]

Motivation

Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial growth of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks, where fine control requires a fine-grained discretization of each dimension. To address this problem, we propose the action branching architecture, a novel neural architecture featuring a shared network module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension.
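To make the scaling concrete, here is a small, purely illustrative comparison of output counts for a flat discrete-action head versus a branched head (the numbers are examples, not values from the paper):

# Illustrative arithmetic only: output counts for a discretized action space.
action_dims = 6         # number of action dimensions (degrees of freedom)
bins_per_dim = 10       # sub-actions per dimension after discretization

flat_outputs = bins_per_dim ** action_dims     # one output per joint action: 1,000,000
branched_outputs = action_dims * bins_per_dim  # one branch per dimension:    60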

Supported Agents

  • Branching Dueling Q-Network (BDQ) (code, paper)

BDQ

Branching Dueling Q-Network (BDQ) is a novel agent that incorporates the proposed action branching architecture into the Deep Q-Network (DQN) algorithm and adapts a selection of its extensions: Double Q-Learning, Dueling Network Architectures, and Prioritized Experience Replay.
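Below is a minimal sketch of such a branching dueling head, written for TensorFlow 2 / Keras; it is not the repository's implementation (which targets an earlier TensorFlow API), and the layer widths are illustrative assumptions:

import tensorflow as tf

class BranchingDuelingQNetwork(tf.keras.Model):
    """Sketch of a branching dueling Q-network head (illustrative, not the repo's code)."""

    def __init__(self, num_branches, sub_actions_per_branch):
        super().__init__()
        # Shared representation module.
        self.shared = tf.keras.Sequential([
            tf.keras.layers.Dense(512, activation="relu"),
            tf.keras.layers.Dense(256, activation="relu"),
        ])
        # Common state-value stream (dueling architecture).
        self.value = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        # One advantage branch per action dimension.
        self.branches = [
            tf.keras.Sequential([
                tf.keras.layers.Dense(128, activation="relu"),
                tf.keras.layers.Dense(sub_actions_per_branch),
            ])
            for _ in range(num_branches)
        ]

    def call(self, states):
        shared = self.shared(states)
        value = self.value(shared)  # shape: (batch, 1)
        q_per_branch = []
        for branch in self.branches:
            adv = branch(shared)    # shape: (batch, sub_actions_per_branch)
            # Mean-subtracted dueling aggregation, applied per branch.
            q = value + adv - tf.reduce_mean(adv, axis=1, keepdims=True)
            q_per_branch.append(q)
        return q_per_branch

# Example usage: 6 action dimensions with 11 sub-actions each.
net = BranchingDuelingQNetwork(num_branches=6, sub_actions_per_branch=11)
q_values = net(tf.random.normal([4, 17]))  # list of 6 tensors, each of shape (4, 11)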

As we show in the paper, BDQ is able to solve numerous continuous control domains via discretization of the action space. Most remarkably, we have shown that BDQ performs well on the Humanoid-v1 domain, which has a total of approximately 6.5 × 10^25 discrete actions after discretization.

[Demo clips: Reacher3DOF-v0, Reacher4DOF-v0, Reacher5DOF-v0, Reacher6DOF-v0, Reacher-v1, Hopper-v1, Walker2d-v1, Humanoid-v1]

Getting Started

You can clone this repository with:

git clone https://github.com/atavakol/action-branching-agents.git

Train

You can readily train a new model for any continuous control domain compatible with the OpenAI Gym by running the train_continuous.py script from the agent's main directory.
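For example, to train the BDQ agent (the path below is an assumption about the repository layout; adjust it to the agent's actual main directory):

cd agents/bdq
python train_continuous.py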

Evaluate

Alternatively, you can evaluate a pre-trained model included in the agent's trained_models directory by running the enjoy_continuous.py script from the agent's main directory.
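For example, again assuming the BDQ agent's main directory is agents/bdq:

cd agents/bdq
python enjoy_continuous.py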

Citation

If you use this work, please cite it with the following BibTeX entry:

@inproceedings{tavakoli2018branching,
  title={Action Branching Architectures for Deep Reinforcement Learning},
  author={Tavakoli, Arash and Pardo, Fabio and Kormushev, Petar},
  booktitle={AAAI Conference on Artificial Intelligence},
  pages={4131--4138},
  year={2018}
}
