This repository contains illustrative diagrams and demonstration videos for the proposed safety-aware human-in-the-loop reinforcement learning (SaHiL-RL) approach.
⏳ The source code will be released once the paper is accepted.
🍺 In the meantime, we are more than happy to discuss the details of our algorithm if you are interested. Please feel free to contact us.
Email: [email protected]
sahil1_lanechange.mp4
sahil_uncooperated.mp4
sahil_cooperated.mp4
sahil_unobserved.mp4
Create a conda virtual environment with a name of your choice, e.g. `hil-rl`:
```shell
conda create -n hil-rl python=3.7
conda activate hil-rl
conda install gym==0.19.0
pip install cpprb tqdm pyyaml scipy matplotlib pandas casadi
```
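After installing, a quick sanity check can confirm that the core dependencies import cleanly (this helper is illustrative, not part of the repository; note that `pyyaml` is imported as `yaml`):

```python
# Report which of the required packages are not importable in the
# current environment; an empty list means the install succeeded.
import importlib.util

def missing_packages(names):
    return [n for n in names if importlib.util.find_spec(n) is None]

print(missing_packages(["gym", "cpprb", "tqdm", "yaml", "scipy",
                        "matplotlib", "pandas", "casadi"]))
```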
Select the PyTorch build that matches your CUDA version and device (CPU/GPU):
```shell
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
```
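To verify that the installed build actually sees your GPU, a small check such as the following can help (this helper is illustrative and degrades gracefully if `torch` is not yet installed):

```python
# Report the installed torch version and whether CUDA is usable.
def cuda_status():
    try:
        import torch
        return f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"
    except ImportError:
        return "torch not installed"

print(cuda_status())
```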
```shell
# Download SMARTS.
git clone https://github.com/huawei-noah/SMARTS.git
cd <path/to/SMARTS>

# Install the system requirements.
bash utils/setup/install_deps.sh

# Install SMARTS.
pip install -e '.[camera_obs,test,train]'

# Install extra dependencies.
pip install -e '.[extras]'
```
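A successful import is the simplest signal that the editable install worked (this snippet is a sketch, not part of SMARTS itself):

```python
# Return True if the smarts package can be imported from the
# current environment, False otherwise.
def smarts_installed():
    try:
        import smarts  # noqa: F401
        return True
    except ImportError:
        return False

print("SMARTS installed:", smarts_installed())
```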
```shell
cd <path/to/Human-in-the-loop-RL>
scl scenario build --clean scenario/straight_with_left_turn/
scl envision start
```
Then open http://localhost:8081/ in your browser.
```shell
python main.py
```
To evaluate a trained policy, set `mode` to `evaluation` in `config.yaml` and run:
```shell
python main.py
```
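For reference, the change amounts to a one-line edit of `config.yaml`. Only the `mode: evaluation` setting is taken from the instructions above; the surrounding keys and the name of the training mode are assumptions:

```yaml
# config.yaml (illustrative fragment; other keys unchanged)
mode: evaluation   # switch back to the training mode to resume training
```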