DMFuser: Distilled Multi-Task End-to-end Sensor Fusion for Autonomous Driving

Demo video: yt5s.io-DMFuser.mp4

Contents

  1. Setup
  2. Dataset
  3. Training
  4. Evaluation

Setup

Clone the repo, setup CARLA 0.9.10.1, and build the conda environment:

git clone git@github.com:pagand/e2etransfuser.git
cd e2etransfuser
chmod +x setup_carla.sh
./setup_carla.sh
conda env create -f environment.yml
conda activate tfuse

1- If you have 10 < CUDA <= 10.2:

pip install torch-scatter -f https://data.pyg.org/whl/torch-1.11.0+cu102.html
pip install mmcv-full==1.5.3 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.11.0/index.html 

2- If you have CUDA > 10.2:

pip uninstall torch torchvision torchaudio #(run twice)
pip install torch==1.12.1 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.12.1%2Bcu113.html
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.12.0/index.html

3- Alternatively:

pip uninstall torch torchvision torchaudio #(run twice)

3-1- Install the correct version of PyTorch for your CUDA, either from the previous-versions page or the get-started-locally page. Replace each {version} with the correct compatible version.

conda install pytorch=={version1} torchvision=={version2} cudatoolkit={version3} -c pytorch

3-2- Install torch-scatter by finding your closest CUDA/PyTorch version at this address, then replace {address} with it.

pip install torch-scatter -f {address}

3-3- Install mmcv-full according to your PyTorch and CUDA versions by choosing the correct prebuilt package available at this address.

4- Install Hugging Face transformers, or follow the link:

pip install transformers
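
After installing, a quick sanity check along these lines (a minimal sketch, not part of the repo) confirms that the CUDA-enabled stack imports cleanly and the versions match the option you chose above:

# Environment sanity check (illustrative; not part of the repo).
import torch
import torch_scatter
import mmcv
import transformers

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
print("torch-scatter:", torch_scatter.__version__)
print("mmcv:", mmcv.__version__)
print("transformers:", transformers.__version__)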

Dataset

Our dataset is generated via a privileged agent which we call the autopilot (/transfuser_pami/team_code_autopilot/autopilot.py) in 8 CARLA towns using the routes and scenario files provided in this folder. See the transfuser_pami/tools/dataset folder for detailed documentation regarding the training routes and scenarios.

The dataset is structured as follows (a minimal loading sketch follows the list):

- Scenario
    - Town
        - Route
            - rgb: camera images
            - depth: corresponding depth images
            - semantics: corresponding segmentation images
            - lidar: 3d point cloud in .npy format
            - topdown: topdown segmentation maps
            - label_raw: 3d bounding boxes for vehicles
            - measurements: contains ego-agent's position, velocity and other metadata
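
For orientation, a route folder can be walked with a sketch like the following; the per-frame file naming and the JSON measurement format are assumptions for illustration, not guarantees of the repo:

# Illustrative sketch for browsing one route folder.
# File naming and the JSON measurement format are assumptions, not repo guarantees.
import os, json
import numpy as np
from PIL import Image

route_dir = "data/Scenario/Town/Route"  # hypothetical example path
for fname in sorted(os.listdir(os.path.join(route_dir, "rgb"))):
    frame = os.path.splitext(fname)[0]
    rgb = Image.open(os.path.join(route_dir, "rgb", fname))
    lidar = np.load(os.path.join(route_dir, "lidar", frame + ".npy"))
    with open(os.path.join(route_dir, "measurements", frame + ".json")) as f:
        meas = json.load(f)
    print(frame, rgb.size, lidar.shape, meas.get("speed"))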

Option 1: Data generation

We have provided the scripts for data generation that we used to train our autopilot agent. To generate data, the first step is to launch a CARLA server:

cd transfuser_pami
./CarlaUE4.sh --world-port=2000 -opengl

For more information on running CARLA servers (e.g. on a machine without a display), see the official documentation. Once the server is running, use the script below for generating training data:

./leaderboard/scripts/datagen.sh <carla root> <working directory of this repo (*/transfuser/)>

The main variables to set in this script are SCENARIOS and ROUTES.

Option 2: Downloading dataset

A minimal dataset (210 GB) without the long scenario, where the camera is mounted at a height of 2.3 m, can be downloaded by running:

cd transfuser_pami
chmod +x download_data.sh
./download_data.sh

The data with the camera mounted at 1.8 m can be downloaded from the HuggingFace dataset.

Dataset Augmentation

To generate long routes and add them to the current data, apply the data-generation procedure to the long routes. We have added [Town01long, Town02long, Town03long, Town04long, Town06long] for training and Town05long for validation.

To augment the vehicular control for the next n steps, use the script below.

cd utilx
python augmentcontroldata.py
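
Conceptually, this augmentation attaches the controls of the following n frames to each frame. The sketch below illustrates the idea; the key names and the default n are placeholders, not the script's actual interface:

# Conceptual sketch of n-step control augmentation; keys and n are placeholders.
def augment_n_step(measurements, n=3):
    """For each frame i, attach the controls of frames i+1 .. i+n."""
    augmented = []
    for i, frame in enumerate(measurements):
        future = measurements[i + 1 : i + 1 + n]
        frame = dict(frame)  # shallow copy so the input list is untouched
        frame["future_controls"] = [
            {"steer": f["steer"], "throttle": f["throttle"], "brake": f["brake"]}
            for f in future
        ]
        augmented.append(frame)
    return augmented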

Training

You can train different baselines. For each method, follow the corresponding section. Check the config file in each folder accordingly.

DMFuser

The model will be saved in a newly created log folder. To train the model:

cd LetFuser
python train.py

Alternatively, you can download the latest model checkpoint files from this directory.

X13

The model will be saved in a newly created log folder. To train the model:

cd x13
python train.py

To predict the expert's driving records for task-wise evaluation:

python3 predict_expert.py

Transfuser PAMI

The code for training via imitation learning is provided in train.py.
A minimal example of running the training script on a single machine:

cd transfuser_pami/team_code_transfuser
python train.py --batch_size 10 --logdir /path/to/logdir --root_dir /path/to/dataset_root/ --parallel_training 0

The training script has many more useful features documented at the start of the main function. One of them is parallel training. The script has to be started differently when training on a multi-gpu node:

cd team_code_transfuser
CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=16 OPENBLAS_NUM_THREADS=1 torchrun --nnodes=1 --nproc_per_node=2 --max_restarts=0 --rdzv_id=1234576890 --rdzv_backend=c10d train.py --logdir /path/to/logdir --root_dir /path/to/dataset_root/ --parallel_training 1

Enumerate the GPUs you want to train on with CUDA_VISIBLE_DEVICES. Set OMP_NUM_THREADS to the number of CPUs available on your system. Set OPENBLAS_NUM_THREADS=1 if you want to avoid threads spawning other threads. Set --nproc_per_node to the number of available GPUs on your node.
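
For orientation, the pattern that torchrun launches looks roughly like the sketch below. This is a generic DistributedDataParallel skeleton with a stand-in model, not the repo's train.py:

# Generic DDP skeleton launched by torchrun; not the repo's train.py.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")     # rank/world size come from torchrun
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun per process
    torch.cuda.set_device(local_rank)
    model = torch.nn.Linear(10, 2).cuda(local_rank)  # stand-in for the real model
    model = DDP(model, device_ids=[local_rank])
    # ... training loop over a DistributedSampler-backed DataLoader ...
    dist.destroy_process_group()

if __name__ == "__main__":
    main()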

Evaluation

Longest6 benchmark

We make some minor modifications to the CARLA leaderboard code for the Longest6 benchmark, which are documented here. See the leaderboard/data/longest6 folder for a description of Longest6 and how to evaluate on it.

Pretrained agents

Pre-trained agent files for all 4 methods can be downloaded from AWS:

mkdir model_ckpt
wget https://s3.eu-central-1.amazonaws.com/avg-projects/transfuser/models_2022.zip -P model_ckpt
unzip model_ckpt/models_2022.zip -d model_ckpt/
rm model_ckpt/models_2022.zip
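
To verify the download, a checkpoint can be inspected with plain PyTorch; the sketch below assumes the zip contains standard .pth state-dict files, which is an assumption about its contents:

# Inspect a downloaded checkpoint; assumes standard PyTorch state-dict files.
import torch

state = torch.load("model_ckpt/some_model.pth", map_location="cpu")  # hypothetical filename
if isinstance(state, dict):
    for key in list(state)[:10]:  # print the first few parameter names
        print(key)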

Running an agent

To evaluate a model, we first launch a CARLA server:

./CarlaUE4.sh --world-port=2000 -opengl

Once the CARLA server is running, evaluate an agent with the script:

./leaderboard/scripts/local_evaluation.sh <carla root> <working directory of this repo (*/transfuser/)>

By editing the arguments in local_evaluation.sh, we can benchmark performance on the Longest6 routes. You can evaluate both privileged agents (such as autopilot.py) and sensor-based models. To evaluate the sensor-based models use submission_agent.py as the TEAM_AGENT and point to the folder you downloaded the model weights into for the TEAM_CONFIG. The code is automatically configured to use the correct method based on the args.txt file in the model folder.
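
That selection mechanism reads args.txt from the model folder; a sketch of that kind of lookup is shown below. The file format (JSON) and the key names are assumptions for illustration, not the repo's actual schema:

# Sketch of selecting a method from an args.txt in the model folder.
# The file format and keys are assumptions, not the repo's actual schema.
import os, json

def load_method(team_config_dir):
    with open(os.path.join(team_config_dir, "args.txt")) as f:
        args = json.load(f)  # assuming JSON-formatted training args
    return args.get("backbone", "transfuser")

print(load_method("/path/to/model_ckpt/models_2022"))  # example with a placeholder path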

You can look at qualitative examples of the expected driving behavior of TransFuser on the Longest6 routes here.

Credits

This repository heavily depends on the following repos:

  • End-to-end driving with Semantic Depth Cloud Github

  • DATA from TransFuser PAMI 2022 paper

  • Transfuser CVPR 2021 Github.

  • CvT network CVPR 2021 Github, paper

  • TCP NeurIPS 2022 Github

Cite

@inproceedings{aganddmfuser,
  title={DMFuser: Distilled Multi-Task Learning for End-to-end Transformer-Based Sensor Fusion in Autonomous Driving},
  author={Agand, Pedram and Mahdavian, Mohammad and Savva, Manolis and Chen, Mo},
  booktitle={2024 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  pages={},
  year={2024},
  organization={IEEE}
}
