Official (PyTorch) implementation of the NeurIPS 2022 Spotlight paper "Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks" by Jianan Zhou*, Jianing Zhu*, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, and Masashi Sugiyama.
```bibtex
@inproceedings{zhou2022adversarial,
  title={Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks},
  author={Jianan Zhou and Jianing Zhu and Jingfeng Zhang and Tongliang Liu and Gang Niu and Bo Han and Masashi Sugiyama},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=s7SukMH7ie9}
}
```
How can we equip machine learning models with adversarial robustness when all given labels in a dataset are wrong (i.e., complementary labels)?
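For intuition, a complementary label specifies a class that an instance does *not* belong to. Under the common uniform assumption, it can be generated as in the minimal sketch below (illustrative only, not code from this repo):

```python
import random

def sample_complementary_label(true_label: int, num_classes: int) -> int:
    """Pick a class the instance does NOT belong to, uniformly at random."""
    candidates = [c for c in range(num_classes) if c != true_label]
    return random.choice(candidates)
```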
- Python 3.8
- SciPy
- PyTorch 1.11.0
- AutoAttack
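One possible way to set up the environment (the exact pins and `torchvision` are our assumptions, not requirements pinned by this repo; AutoAttack is typically installed from its GitHub repository):

```bash
pip install scipy torch==1.11.0 torchvision
pip install git+https://github.com/fra31/auto-attack  # AutoAttack
```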
Please refer to Section 5 and Appendix D.1 of our paper for the detailed setups.
```bash
# Two-stage baseline for MNIST/Kuzushiji
python main.py --dataset 'kuzushiji' --model 'cnn' --method 'log' --framework 'two_stage' --cl_epochs 50 --adv_epochs 50 --cl_lr 0.001 --at_lr 0.01
# Two-stage baseline for CIFAR10/SVHN
python main.py --dataset 'cifar10' --model 'resnet18' --method 'log' --framework 'two_stage' --cl_epochs 50 --adv_epochs 70 --cl_lr 0.01 --at_lr 0.01

# Complementary baselines (e.g., LOG) for MNIST/Kuzushiji
python main.py --dataset 'kuzushiji' --model 'cnn' --method 'log' --framework 'one_stage' --adv_epochs 100 --at_lr 0.01 --scheduler 'none'
# Complementary baselines (e.g., LOG) for CIFAR10/SVHN
python main.py --dataset 'cifar10' --model 'resnet18' --method 'log' --framework 'one_stage' --adv_epochs 120 --at_lr 0.01 --scheduler 'none'

# Ours (LOG-CE) for MNIST/Kuzushiji
python main.py --dataset 'kuzushiji' --model 'cnn' --method 'log_ce' --framework 'one_stage' --adv_epochs 100 --at_lr 0.01 --scheduler 'cosine' --sch_epoch 50 --warmup_epoch 10
# Ours (LOG-CE) for CIFAR10/SVHN
python main.py --dataset 'cifar10' --model 'resnet18' --method 'log_ce' --framework 'one_stage' --adv_epochs 120 --at_lr 0.01 --scheduler 'cosine' --sch_epoch 40 --warmup_epoch 40
```
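Conceptually, the `--warmup_epoch` and `--scheduler 'cosine'` flags suggest that attack generation starts from the complementary (LOG) loss and gradually shifts toward a CE loss on model pseudo-labels, making attacks gradually more informative. Below is a hypothetical weight schedule illustrating that idea; `loss_weight` is our name for the sketch, not a function in this repo, and the exact weighting in `main.py` may differ:

```python
import math

def loss_weight(epoch: int, warmup_epoch: int, sch_epoch: int) -> float:
    """Hypothetical mixing weight w in [0, 1]: attack loss = (1 - w) * LOG + w * CE."""
    if epoch < warmup_epoch:
        return 0.0  # warmup: purely complementary (less informative) attacks
    t = min(epoch - warmup_epoch, sch_epoch) / sch_epoch
    return 0.5 * (1.0 - math.cos(math.pi * t))  # cosine ramp from 0 to 1
```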
```bash
# Supported Datasets (CIFAR-100 is currently not supported under the SCL setting, i.e., complementary learning fails on CIFAR-100 in our experiments)
--dataset - ['mnist', 'kuzushiji', 'fashion', 'cifar10', 'svhn', 'cifar100']
```
```bash
# Complementary Loss Functions
--method - ['free', 'nn', 'ga', 'pc', 'forward', 'scl_exp', 'scl_nl', 'mae', 'mse', 'ce', 'gce', 'phuber_ce', 'log', 'exp', 'l_uw', 'l_w', 'log_ce', 'exp_ce']
```
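For intuition, the `log` baseline above is commonly defined as `-log(1 - p_cl)`, i.e., it penalizes probability mass placed on the complementary class. A minimal sketch under that assumption follows (the repo's exact implementation may differ):

```python
import torch
import torch.nn.functional as F

def log_complementary_loss(logits: torch.Tensor, comp_labels: torch.Tensor) -> torch.Tensor:
    """-log(1 - p_cl): discourage predicting the complementary label."""
    probs = F.softmax(logits, dim=1)
    p_cl = probs.gather(1, comp_labels.view(-1, 1)).squeeze(1)
    return -torch.log(1.0 - p_cl + 1e-12).mean()  # small eps for numerical stability
```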
```bash
# Multiple Complementary Labels (MCLs)
--cl_num - (1-9) the number of complementary labels per instance; (0) MCLs following the data distribution of ICML 2020 - "Learning with Multiple Complementary Labels"
```
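As a hedged illustration of `--cl_num 0`: the ICML 2020 MCL paper samples label sets uniformly over all non-empty subsets of the non-true classes, which makes the set size `s` follow `P(s) ∝ C(K-1, s)`. A sketch under that assumption (the repo's generation code may differ):

```python
import math
import random

def sample_mcls(true_label: int, num_classes: int) -> set:
    """Sample multiple complementary labels: draw the set size s with
    P(s) proportional to C(K-1, s), then a uniform s-subset of non-true classes."""
    k = num_classes
    weights = [math.comb(k - 1, s) for s in range(1, k)]
    s = random.choices(range(1, k), weights=weights)[0]
    candidates = [c for c in range(k) if c != true_label]
    return set(random.sample(candidates, s))
```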
- [NeurIPS 2017] Learning from Complementary Labels
- [ECCV 2018] Learning with Biased Complementary Labels
- [ICML 2019] Complementary-Label Learning for Arbitrary Losses and Models
- [ICML 2020] Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels
- [ICML 2020] Learning with Multiple Complementary Labels
We thank the authors of "Complementary-Label Learning for Arbitrary Losses and Models" for their open-source code and issue discussions. Other related codebases can be found on the corresponding author's homepage. We also thank the anonymous reviewers of NeurIPS 2022 for their constructive comments.
Please contact [email protected] or [email protected] if you have any questions regarding the paper or the implementation.