I am a research fellow at the School of Computing and Information Systems, Singapore Management University, supervised by Prof. Jun Sun. I also work closely with Prof. Xingjun Ma at Fudan University. I completed my Ph.D. at Xidian University under the supervision of Prof. Xixiang Lyu. My research publications are listed on Google Scholar.
- Understanding the effectiveness of backdoor attacks
- Robust training against backdoor attacks
- Design and implement a general defense framework for backdoor attacks
- Yige Li, Xingjun Ma, et al., “Multi-Trigger Backdoor Attacks: More Triggers, More Threats”, under submission, 2024.
- Yige Li, Xixiang Lyu, et al., “Reconstructive Neuron Pruning for Backdoor Defense”, ICML 2023.
- Yige Li, Xixiang Lyu, et al., “Anti-Backdoor Learning: Training Clean Models on Poisoned Data”, NeurIPS 2021.
- Yige Li, Xixiang Lyu, et al., “Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks”, ICLR 2021.
Neural Attention Distillation (NAD)
- A simple and universal defense that erases backdoor triggers via knowledge distillation; effective against 6 state-of-the-art backdoor attacks
- Requires only a small amount of clean data (5% of the training set)
- Requires only a few epochs of fine-tuning (2-10 epochs)
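The core of NAD is aligning the student's intermediate attention maps with those of a teacher fine-tuned on clean data. A minimal NumPy sketch of that distillation loss is below; the function names are illustrative, and the attention map here is one common activation-based formulation (channel-wise mean of squared activations, L2-normalized), not a line-by-line reproduction of the paper's implementation:

```python
import numpy as np

def attention_map(feat):
    """Collapse a (C, H, W) activation tensor to a normalized spatial
    attention map: mean of squared channel activations, flattened."""
    a = (feat ** 2).mean(axis=0).ravel()
    return a / (np.linalg.norm(a) + 1e-8)

def nad_distill_loss(student_feats, teacher_feats):
    """Sum of L2 distances between the student's and teacher's
    normalized attention maps across the chosen layers."""
    return sum(
        np.linalg.norm(attention_map(s) - attention_map(t))
        for s, t in zip(student_feats, teacher_feats)
    )
```

During fine-tuning this term would be added (with a weight) to the ordinary cross-entropy loss on the small clean set, pulling the backdoored student's attention toward the clean teacher's.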
Anti-Backdoor Learning (ABL)
- Simple, effective, and universal; defends against 10 state-of-the-art backdoor attacks
- Requires isolating only 1% of the training data as suspicious examples
- A novel strategy that helps companies, research institutes, and government agencies train backdoor-free machine learning models
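ABL exploits the observation that backdoored examples are fitted much faster than clean ones, so they sit at the bottom of the per-sample loss distribution early in training. A hedged sketch of the isolation stage, with an illustrative function name:

```python
import numpy as np

def isolate_suspicious(per_sample_losses, isolation_rate=0.01):
    """ABL stage 1 (sketch): flag the lowest-loss samples as likely
    poisoned, since backdoor triggers are learned fastest.

    per_sample_losses: 1-D array of training losses after a few epochs.
    Returns the indices of the isolated (suspicious) samples.
    """
    n = len(per_sample_losses)
    k = max(1, int(n * isolation_rate))  # e.g. 1% of the data
    return np.argsort(per_sample_losses)[:k]
```

In the second stage, training would continue with the sign of the loss gradient flipped on the isolated subset (gradient ascent), so the model unlearns the trigger while fitting the remaining data normally.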