# SDA-OD

## Approaches

Domain shift is addressed in two steps. First, to bridge the domain gap, an unpaired image-to-image translator is trained to construct a fake target domain by translating source images into images that resemble the target domain. Second, an adaptive CenterNet is designed to align feature-level distributions between the two domains through adversarial learning.
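For intuition, the sketch below shows how feature-level adversarial alignment with a gradient reversal layer (suggested by the `grl` experiment id used later) is commonly implemented in PyTorch. The class names, layer sizes, and discriminator architecture are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient pushes the backbone toward domain-invariant features.
        return -ctx.lambd * grad_output, None


class DomainDiscriminator(nn.Module):
    """Classifies whether a feature map comes from the (fake) source or the target domain."""

    def __init__(self, in_channels):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, features, lambd=1.0):
        reversed_features = GradientReversal.apply(features, lambd)
        return self.classifier(reversed_features)  # logits for a binary domain loss
```

In this kind of setup, backbone features from the fake target (translated source) and the real target images are fed to the discriminator, which is trained with a binary cross-entropy domain loss; the reversed gradient drives the detector backbone to produce features the discriminator cannot separate.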

## How to use the code

Please refer to INSTALL.md for installation instructions.

## Datasets

You can download the datasets from their official sites: Cityscapes, Foggy Cityscapes, BDD100K, and Sim10k.

After downloading, the data needs to be converted into COCO format. The folder structure for our experiments should look like this:
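The original directory listing is not reproduced here. As an illustration only, a typical COCO-style layout, with dataset directory names inferred from the `--source_dataset`/`--target_dataset` flags and the `--data_dir` path used below, might look like:

```
/root/dataset/
├── cityscapes/
│   ├── annotations/      # COCO-format JSON annotation files
│   └── images/
├── fake_cityscapes/       # source images translated by the CycleGAN step
│   ├── annotations/
│   └── images/
├── foggy_cityscapes/
│   ├── annotations/
│   └── images/
├── bdd100k/
└── sim10k/
```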

## Training

### Step 1: CycleGAN

The source code for the CycleGAN model was made publicly available by its authors here.
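As a rough example, assuming the widely used pytorch-CycleGAN-and-pix2pix implementation is the codebase referred to above, training a clear-to-foggy translator and then generating the fake target domain might look like the following (the dataset root and experiment name are placeholders):

```
# Train an unpaired translator between clear and foggy driving images (illustrative flags).
python train.py --dataroot ./datasets/city2foggy --name city2foggy_cyclegan --model cycle_gan

# Translate the source images to build the fake target domain.
python test.py --dataroot ./datasets/city2foggy --name city2foggy_cyclegan --model cycle_gan
```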

### Step 2: Adaptive CenterNet

The script below gives an example of training a model initialized from a pre-trained model:

```
python main.py ctdet --source_dataset fake_cityscapes --target_dataset foggy_cityscapes --exp_id grl_C2F --batch_size 32 --data_dir /root/dataset/ --load_model ./pre-trained-model/ctdet_coco_dla_2x.pth
```

## Evaluation

Our proposed method is evaluated in domain shift scenarios based on driving datasets.

### Example: Clear-to-Haze Adaptation Scenario

You can download the checkpoint and run prediction or evaluation:

```
python test.py ctdet --exp_id checkout --source_dataset foggy_cityscapes --not_prefetch_test --data_dir /root/dataset/ --load_model ./sda_save.pth
```

The results show that our method outperforms state-of-the-art methods and is effective for object detection under domain shift.


## Prediction

The image detection results can be viewed with the following command:

```
python demo.py ctdet --demo ./images --load_model ./sda_save.pth
```