CrowdNav_Cai

This repository is a Stable-Baselines3-based crowd navigation simulator built on 2D lidar scans, for training and testing crowd navigation or map-based navigation.

You can run online reinforcement learning on navigation tasks with stable-baselines3 and sb3-contrib!

We have added GRU+PPO for users (please see the results in my previous repository).

Simulator rendering

  1. ${\textsf{\color{pink}Pink}}$ humans cannot be seen by the 2D scan, while ${\textsf{\color{blue}blue}}$ humans can (this is for CrowdNav testing or training).

  2. The ${\textsf{\color{pink}pink}}$ line is a path generated by the Dijkstra algorithm, the ${\textsf{\color{pink}pink}}$ triangle is a subgoal produced by look-ahead planning, and the ${\textsf{\color{Yellow}yellow}}$ triangle is the goal.

  3. Map applied (with walls).

  4. Walls only (in other words, only the map is applied).

How to install

OS: Ubuntu 20.04, Python: 3.8.x

1. Make the workspace directories

mkdir catkin_ws && cd catkin_ws && mkdir src && cd src

2. Clone and install this Python package

git clone https://github.com/CAI23sbP/CrowdNav_Cai.git && cd CrowdNav_Cai && pip3 install -e . && cd .. 

3. Clone the Python sub-packages

3.1. Clone and install pymap2d

git clone https://github.com/CAI23sbP/pymap2d.git && cd pymap2d && pip3 install -e . && cd ..

3.2. Clone and build Python-RVO2 (version: danieldugas-0.0.2)

git clone https://github.com/danieldugas/Python-RVO2.git && cd Python-RVO2 && python3 setup.py build && python3 setup.py install && cd ..

3.3. Clone and install range_libc

git clone https://github.com/CAI23sbP/range_libc.git && cd range_libc/pywrapper && pip3 install -e . && cd ../..

How to train

1. Set your config

See the config files in crowd_nav/configs/*.yaml.
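
A minimal sketch of inspecting one of these configs, assuming they are plain YAML files readable with PyYAML (the actual loading helper used by train.py may differ):

```python
# Sketch only: peek at a training config before launching a run.
# Assumes the configs are plain YAML; train.py's own loader may differ.
import yaml

with open("crowd_nav/configs/base_config.yaml", "r") as f:
    config = yaml.safe_load(f)

print(list(config.keys()))  # top-level sections of the config
```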

2. Train

python3 train.py --model_name default_model --config_name base_config --device cuda

How to test

1. Test your model

If you want to test the example model, see the Example weight section below.

python3 test.py --n_eval 100 --weight_path Your_model_name --render True --config_name base_config
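
For reference, an evaluation run with stable-baselines3's built-in utilities looks roughly like the sketch below. Here make_navigation_env() is a hypothetical placeholder for the crowd navigation environment, and the weight is assumed to have been saved with model.save(); test.py in this repository handles these details itself.

```python
# Rough sketch of an evaluation loop using stable-baselines3 utilities.
# make_navigation_env() is a hypothetical placeholder, not this repository's API,
# and the weight is assumed to have been saved with model.save().
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

env = make_navigation_env()  # placeholder: build the crowd navigation env
model = PPO.load("model_weight/example/Last.pt", env=env)

mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=100, render=True)
print(f"mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")
```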

Here is the test rendering (from the example weight):

10humans.mp4
general.mp4

How to make your agent

1. Make an environment (with a reward and an observation manager)

For details, see example_scan_sim.py (path: crowd_sim/envs/).
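
As a purely illustrative sketch of the general shape such an environment takes (the observation layout, class name, and reward terms below are assumptions, not this repository's API; example_scan_sim.py is the real reference):

```python
# Illustrative sketch only: a scan-based navigation env in the Gym style.
# Observation layout, names, and reward terms are assumptions, not this repo's API.
import gym
import numpy as np
from gym import spaces


class ScanNavEnv(gym.Env):
    def __init__(self, n_beams: int = 360):
        super().__init__()
        # 2D lidar scan plus relative goal (distance, heading)
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(n_beams + 2,), dtype=np.float32
        )
        # linear and angular velocity commands
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)

    def reset(self):
        return np.zeros(self.observation_space.shape, dtype=np.float32)

    def step(self, action):
        obs = np.zeros(self.observation_space.shape, dtype=np.float32)
        reward = 0.0  # e.g. progress toward the goal minus collision penalties
        done = False
        return obs, reward, done, {}
```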

2. Make a policy

For details, see example.py (path: crowd_sim/envs/policy/network_policies/).
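
The policy base class is defined inside this repository, so the snippet below is only a rough illustration of the shape a network policy usually takes (an object mapping an observation to an action); example.py shows the actual interface.

```python
# Rough illustration only; the real base class and method names live in
# crowd_sim/envs/policy/network_policies/ and may differ from this sketch.
import numpy as np


class MyNetworkPolicy:
    def __init__(self, model):
        self.model = model  # e.g. a trained stable-baselines3 model

    def predict(self, observation: np.ndarray) -> np.ndarray:
        action, _ = self.model.predict(observation, deterministic=True)
        return action
```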

3. Make a model (with a feature extractor)

If you want to use the imitation, stable-baselines3, or sb3-contrib libraries, you must implement a feature extractor. For details, see example_extractor.py (path: drl_utils/algorithms/extractors/).
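
For instance, a custom extractor subclasses stable-baselines3's BaseFeaturesExtractor; the layer sizes below are arbitrary, and example_extractor.py shows the extractor actually used in this repository.

```python
# Example of a custom feature extractor in the stable-baselines3 style.
# Layer sizes are arbitrary; see example_extractor.py for the repository's own extractor.
import torch as th
import torch.nn as nn
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor


class ScanExtractor(BaseFeaturesExtractor):
    def __init__(self, observation_space, features_dim: int = 128):
        super().__init__(observation_space, features_dim)
        n_input = int(observation_space.shape[0])  # assumes a flat observation vector
        self.net = nn.Sequential(
            nn.Linear(n_input, 256),
            nn.ReLU(),
            nn.Linear(256, features_dim),
            nn.ReLU(),
        )

    def forward(self, observations: th.Tensor) -> th.Tensor:
        return self.net(observations)
```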

4. Make your config file

For details, see base_config.yaml (path: crowd_nav/configs/).

5. Make train_<your_model_name>.py

For details, see train.py.
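
At its core such a script wires the environment, the feature extractor, and the algorithm together. A minimal sketch with stable-baselines3, reusing the hypothetical ScanNavEnv and ScanExtractor sketched above (train.py itself additionally handles configs, logging, and callbacks):

```python
# Minimal training sketch with stable-baselines3.
# ScanNavEnv and ScanExtractor are the hypothetical sketches above, not this
# repository's classes; train.py also handles configs, logging, and callbacks.
from stable_baselines3 import PPO

env = ScanNavEnv()
model = PPO(
    "MlpPolicy",
    env,
    policy_kwargs=dict(
        features_extractor_class=ScanExtractor,
        features_extractor_kwargs=dict(features_dim=128),
    ),
    device="cuda",
    verbose=1,
)
model.learn(total_timesteps=100_000)
model.save("model_weight/default_model")
```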

6. Add your model in test.py

Example weight

  1. Create a new folder: model_weight/example

  2. Download the model weight: Link

  3. Move the weight into model_weight

  4. Test with the command below

python3 test.py --model_path example/Last.pt

Reference code

[1] CrowdNav

[2] NavRep

[3] CrowdNav_DSRNN

[4] Pas_CrowdNav

Contact us

[email protected]