This repository contains the official code for
"LC-MSM: Language-Conditioned Masked Segmentation Model for Unsupervised Domain Adaptation"
| Method | GTA to Cityscapes (mIoU) | SYNTHIA to Cityscapes (mIoU) |
|---|---|---|
| DAFormer | 68.3 | 60.9 |
| LC-MSM (Single-resolution) | 71.8 | 62.8 |
| HRDA | 73.8 | 65.8 |
| LC-MSM (Multi-resolution) | 76.0 | 68.2 |
- Ubuntu 20.04
- Python 3.8.5
- torch >= 1.8.0
- torchvision
- mmcv-full
- open-clip
- tqdm
To use this code, please first install `mmcv-full` by following the official guidelines ([mmcv](https://github.com/open-mmlab/mmcv)).
The remaining requirements can be installed with the following commands:

```shell
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install open_clip_torch
```
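Optionally, you can verify the environment before running any experiments with a short Python check. This is a minimal sketch; it only assumes the packages listed above are installed:

```python
# Quick environment check: confirm the pinned packages import
# and report their versions before running any experiments.
import torch
import torchvision
import mmcv
import open_clip
import tqdm

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__)
print("mmcv:", mmcv.__version__)
print("open_clip:", open_clip.__version__)
print("tqdm:", tqdm.__version__)
```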
- Cityscapes: Please download `leftImg8bit_trainvaltest.zip` and `gtFine_trainvaltest.zip` from here.
- GTA: Please download all image and label packages from here.
- SYNTHIA: Please download SYNTHIA-RAND-CITYSCAPES from here.
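After downloading and extracting, a quick path check can confirm the data is in place. The folder layout below is an assumption modeled on common UDA repositories (a `data/` root with `cityscapes`, `gta`, and `synthia` subfolders); adjust it to your actual structure:

```python
# Hypothetical sanity check for the unpacked datasets. The folder names
# below are an assumption, not this repository's required layout; adjust
# DATA_ROOT and the relative paths to match your setup.
from pathlib import Path

DATA_ROOT = Path("data")  # assumed data root
expected = [
    "cityscapes/leftImg8bit", "cityscapes/gtFine",
    "gta/images", "gta/labels",
    "synthia/RGB", "synthia/GT/LABELS",
]
for rel in expected:
    path = DATA_ROOT / rel
    print(f"{'OK     ' if path.is_dir() else 'MISSING'} {path}")
```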
Please download the pre-trained MiT-B5 weights via the provided shell script:

```shell
sh tools/download_checkpoints.sh
```
The datasets above are expected to be preprocessed into COCO format. If you are starting from the raw JSON label files, you can preprocess them with the following scripts:

```shell
python tools/convert_datasets/gta.py /your/path/
python tools/convert_datasets/cityscapes.py /your/path/
python tools/convert_datasets/synthia.py /your/path/
```
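For reference, a common step in such conversion scripts is remapping raw label IDs to the 19 Cityscapes train IDs. The sketch below is illustrative only and is not this repository's exact implementation; the mapping is the standard one from `cityscapesscripts`:

```python
# Illustrative sketch (not the repository's script): remap Cityscapes-style
# label IDs to the 19 train IDs, writing unmapped classes as 255 (ignore).
import numpy as np
from PIL import Image

# Standard Cityscapes labelId -> trainId mapping.
ID_TO_TRAINID = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7,
                 21: 8, 22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14,
                 28: 15, 31: 16, 32: 17, 33: 18}

def labelids_to_trainids(label_path: str, out_path: str) -> None:
    label = np.asarray(Image.open(label_path), dtype=np.uint8)
    out = np.full_like(label, 255)  # unmapped classes -> ignore index
    for label_id, train_id in ID_TO_TRAINID.items():
        out[label == label_id] = train_id
    Image.fromarray(out).save(out_path)
```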
For convenience, we provide an annotated config file for the adaptation model. Before training, the data paths in the dataset config file should be modified to point to your data; a sketch of what this edit looks like follows below.
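For orientation, the relevant part of a dataset config typically looks like the excerpt below. The field names are assumptions in the style of mmsegmentation-based UDA configs and may differ from the actual config in this repository:

```python
# Hypothetical dataset config excerpt: point the source (GTA) and
# target (Cityscapes) roots at your local data. Field names follow the
# usual mmsegmentation convention and may differ in this repository.
data = dict(
    train=dict(
        source=dict(data_root='/your/path/gta'),
        target=dict(data_root='/your/path/cityscapes'),
    ),
    val=dict(data_root='/your/path/cityscapes'),
)
```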
A training job can be launched using:
```shell
python run_experiment.py --config configs/daformer/gta2cs_uda_lc_msm.py
```
Checkpoints will be saved automatically in `work_dirs` unless you specify a different directory. A trained model can then be evaluated with:

```shell
sh test.sh path/to/checkpoint/directory
```