RL4CO has been accepted as an oral presentation at the NeurIPS 2023 GLFrontiers Workshop! 🎉
An extensive Reinforcement Learning (RL) for Combinatorial Optimization (CO) benchmark. Our goal is to provide a unified framework for RL-based CO algorithms, and to facilitate reproducible research in this field, decoupling the science from the engineering.
RL4CO is built upon:
- TorchRL: official PyTorch framework for RL algorithms and vectorized environments on GPUs
- TensorDict: a library to easily handle heterogeneous data such as states, actions and rewards (sketched below)
- PyTorch Lightning: a lightweight PyTorch wrapper for high-performance AI research
- Hydra: a framework for elegantly configuring complex applications
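For instance, TensorDict lets a batch of heterogeneous state fields travel through the pipeline as a single object. A minimal, standalone sketch (the field names `locs` and `visited` are just illustrative):

```python
import torch
from tensordict import TensorDict

# A batch of 4 states, each with 20 node coordinates and a visited mask
td = TensorDict(
    {"locs": torch.rand(4, 20, 2), "visited": torch.zeros(4, 20, dtype=torch.bool)},
    batch_size=[4],
)
td["reward"] = torch.zeros(4)  # new keys can be added on the fly
print(td["locs"].shape)  # torch.Size([4, 20, 2])
```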
We provide several utilities and a modular design. For autoregressive policies, we modularize reusable components such as environment embeddings, which can easily be swapped to solve new problems, as sketched below.
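A minimal sketch of such a swap, assuming the policy constructor accepts an `init_embedding` module (the keyword names here reflect the API at the time of writing and may vary across versions):

```python
import torch.nn as nn

from rl4co.envs import TSPEnv
from rl4co.models import AttentionModel, AttentionModelPolicy

# Custom initial embedding: projects raw node features
# (2D coordinates for TSP) into the model's hidden space
class MyInitEmbedding(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.project = nn.Linear(2, embed_dim)

    def forward(self, td):
        return self.project(td["locs"])

env = TSPEnv(num_loc=20)
# Assumption: embedding modules are swappable via keyword arguments
policy = AttentionModelPolicy(env_name=env.name, init_embedding=MyInitEmbedding(128))
model = AttentionModel(env, policy=policy)
```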
RL4CO is now available for installation on pip!

```bash
pip install rl4co
```
To get started, we recommend checking out our quickstart notebook or the minimalistic example below.
The following command installs the bleeding-edge `main` version, which is useful for staying up to date with the latest developments, for instance if a bug has been fixed since the last official release but a new release hasn't been rolled out yet:

```bash
pip install -U git+https://github.com/ai4co/rl4co.git
```
If you want to develop RL4CO, we recommend installing it locally with `pip` in editable mode:

```bash
git clone https://github.com/ai4co/rl4co && cd rl4co
pip install -e .
```
We recommend using a virtual environment such as `conda` to install `rl4co` locally.
Train model with default configuration (AM on TSP environment):

```bash
python run.py
```
Change experiment: train the model with a chosen experiment configuration from `configs/experiment/` (e.g. `routing/am`, with an environment of 42 cities):

```bash
python run.py experiment=routing/am env.num_loc=42
```
Disable logging:

```bash
python run.py experiment=routing/am logger=none '~callbacks.learning_rate_monitor'
```

Note that `~` is used to disable a callback that would need a logger.
Create a sweep over hyperparameters (`-m` for multirun):

```bash
python run.py -m experiment=routing/am train.optimizer.lr=1e-3,1e-4,1e-5
```
Here is a minimalistic example training the Attention Model with greedy rollout baseline on TSP in less than 30 lines of code:
```python
from rl4co.envs import TSPEnv
from rl4co.models import AttentionModel
from rl4co.utils import RL4COTrainer

# Environment, Model, and Lightning Module
env = TSPEnv(num_loc=20)
model = AttentionModel(
    env,
    baseline="rollout",
    train_data_size=100_000,
    test_data_size=10_000,
    optimizer_kwargs={"lr": 1e-4},
)

# Trainer
trainer = RL4COTrainer(max_epochs=3)

# Fit the model
trainer.fit(model)

# Test the model
trainer.test(model)
```
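After training, you can roll out the trained policy greedily on fresh instances. A short sketch continuing from the example above; the `phase`, `decode_type`, and `return_actions` arguments follow our quickstart notebook and should be treated as assumptions if your version differs:

```python
import torch

# Greedy rollout of the trained policy on new instances
device = "cuda" if torch.cuda.is_available() else "cpu"
td_init = env.reset(batch_size=[3]).to(device)
policy = model.policy.to(device)
out = policy(td_init.clone(), env, phase="test", decode_type="greedy", return_actions=True)
print(out["reward"])  # for TSP, rewards are negative tour lengths
```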
Other examples can be found in our documentation!
Run tests with `pytest` from the root directory:

```bash
pytest tests
```
Installing `PyG` via `conda` seems to update Torch itself, and we have found that this update introduces some bugs with `torchrl`. At the moment, we recommend installing `PyG` with `pip`:

```bash
pip install torch_geometric
```
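If you want to confirm that the install did not bump your Torch version, a quick check (nothing RL4CO-specific) is:

```python
import torch
import torch_geometric

# Print the Torch / PyG pairing to verify the versions are as expected
print(torch.__version__, torch_geometric.__version__)
```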
Have a suggestion, request, or found a bug? Feel free to open an issue or submit a pull request. If you would like to contribute, please check out our contribution guidelines here. We welcome and look forward to all contributions to RL4CO!
We are also on Slack if you have any questions or would like to discuss RL4CO with us. We are open to collaborations and would love to hear from you 🚀
If you find RL4CO valuable for your research or applied projects, please cite it as follows:
```bibtex
@inproceedings{berto2023rl4co,
    title={{RL}4{CO}: a Unified Reinforcement Learning for Combinatorial Optimization Library},
    author={Federico Berto and Chuanbo Hua and Junyoung Park and Minsu Kim and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Joungho Kim and Jinkyoo Park},
    booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning},
    year={2023},
    url={https://openreview.net/forum?id=YXSJxi8dOV},
    note={\url{https://github.com/ai4co/rl4co}}
}
```
We invite you to join our AI4CO community, an open research group in Artificial Intelligence (AI) for Combinatorial Optimization (CO)!