A collection of papers and resources related to large model-based data augmentation methods.

A Survey on Data Augmentation in Large Model Era

Yue Zhou*1   Chenlu Guo*1   Xu Wang1   Yi Chang1   Yuan Wu#1

1 Jilin University
(*: Co-first authors, #: Corresponding authors)

Papers and resources on data augmentation using large models

The papers are organized according to our survey: A Survey on Data Augmentation in Large Model Era.

NOTE: Since the arXiv version of the paper cannot be updated in real time, please refer to this repository for the latest information. We welcome contributions through pull requests or issue reports to improve the survey; your efforts will be credited in the Acknowledgements section.

Related projects:

  • Evaluation of large language models: [LLM-eval]

Table of Contents
  1. News and Updates
  2. Approaches
  3. Applications
  4. Data Post Processing
  5. Contributing
  6. Citation
  7. Acknowledgements

News and Updates

Approaches

Image Augmentation

Prompt-driven approaches

Text Prompt-driven

  1. Camdiff: Camouflage image augmentation via diffusion model. Luo, X.-J. et al. arXiv 2023. [paper][code]
  2. Diffedit: Diffusion-based semantic image editing with mask guidance. Couairon, G. et al. arXiv 2022. [paper]
  3. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. Nichol, A. et al. arXiv 2021. [paper][code]
  4. It is all about where you start: Text-to-image generation with seed selection. Samuel, D. et al. arXiv 2023. [paper]
  5. Plug-and-play diffusion features for text-driven image-to-image translation. Tumanyan, N. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper][code]
  6. Prompt-to-prompt image editing with cross attention control. Hertz, A. et al. arXiv 2022. [paper][code]
  7. Localizing Object-level Shape Variations with Text-to-Image Diffusion Models. Patashnik, O. et al. arXiv 2023. [paper][code]
  8. Sine: Single image editing with text-to-image diffusion models. Zhang, Z. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper][code]
  9. Text2live: Text-driven layered image and video editing. Bar-Tal, O. et al. European conference on computer vision. [paper][code]
  10. Diffusionclip: Text-guided diffusion models for robust image manipulation. Kim, G. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper][code]
  11. StyleGAN-NADA: CLIP-guided domain adaptation of image generators. Gal, R. et al. ACM Transactions on Graphics (TOG). [paper][code]
  12. Diversify your vision datasets with automatic diffusion-based augmentation. Dunlap, L. et al. arXiv 2023. [paper][code]
  13. Effective data augmentation with diffusion models. Trabucco, B. et al. arXiv 2023. [paper][code]
  14. Imagic: Text-based real image editing with diffusion models. Kawar, B. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper]
  15. TTIDA: Controllable Generative Data Augmentation via Text-to-Text and Text-to-Image Models. Yin, Y. et al. arXiv 2023. [paper][code]
  16. Blended diffusion for text-driven editing of natural images. Avrahami, O. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper][code]
  17. Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing. Wang, K. et al. arXiv 2023. [paper][code]
  18. Instructpix2pix: Learning to follow image editing instructions. Brooks, T. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper][code]
  19. Expressive text-to-image generation with rich text. Ge, S. et al. Proceedings of the IEEE/CVF International Conference on Computer Vision. [paper][code]
  20. GeNIe: Generative Hard Negative Images Through Diffusion. Koohpayegani, S. et al. arXiv 2023. [paper][code]
  21. Semantic Generative Augmentations for Few-Shot Counting. Doubinsky, P. et al. arXiv 2023. [paper]
  22. InstaGen: Enhancing Object Detection by Training on Synthetic Dataset. Feng, C. et al. arXiv 2024. [paper][code]
  23. Cross domain generative augmentation: Domain generalization with latent diffusion models. Hemati, S. et al. arXiv 2023. [paper]

Visual Prompt-driven

  1. ImageBrush: Learning Visual In-Context Instructions for Exemplar-Based Image Manipulation. Sun, Y. et al. arXiv 2023. [paper]
  2. Diffusion-based data augmentation for nuclei image segmentation. Yu, X. et al. International Conference on Medical Image Computing and Computer-Assisted Intervention. [paper][code]
  3. Image Augmentation with Controlled Diffusion for Weakly-Supervised Semantic Segmentation. Wu, W. et al. arXiv 2023. [paper]
  4. More control for free! image synthesis with semantic diffusion guidance. Liu, X. et al. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. [paper][code]

Multimodal Prompt-driven

  1. Visual instruction inversion: Image editing via visual prompting. Nguyen, T. et al. arXiv 2023. [paper][code]
  2. In-context learning unlocked for diffusion models. Wang, Z. et al. arXiv 2023. [paper][code]
  3. Smartbrush: Text and shape guided object inpainting with diffusion model. Xie, S. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper]
  4. ReVersion: Diffusion-Based Relation Inversion from Images. Huang, Z. et al. arXiv 2023. [paper][code]
  5. Gligen: Open-set grounded text-to-image generation. Li, Y. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper][code]
  6. Adding conditional control to text-to-image diffusion models. Zhang, L. et al. Proceedings of the IEEE/CVF International Conference on Computer Vision. [paper][code]
  7. Boosting Dermatoscopic Lesion Segmentation via Diffusion Models with Visual and Textual Prompts. Du, S. et al. arXiv 2023. [paper]
  8. Generative Data Augmentation Improves Scribble-supervised Semantic Segmentation. Schnell, J. et al. arXiv 2023. [paper]
  9. Chameleon: Foundation Models for Fairness-aware Multi-modal Data Augmentation to Enhance Coverage of Minorities. Erfanian, M. et al. arXiv 2024. [paper][code]
  10. Diffusion-based Data Augmentation for Object Counting Problems. Wang, Z. et al. arXiv 2024. [paper]

Subject-driven approaches

  1. An image is worth one word: Personalizing text-to-image generation using textual inversion. Gal, R. et al. arXiv 2022. [paper][code]
  2. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. Ruiz, N. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper][code]
  3. Instantbooth: Personalized text-to-image generation without test-time finetuning. Shi, J. et al. arXiv 2023. [paper]
  4. Unified multi-modal latent diffusion for joint subject and text conditional image generation. Ma, Y. et al. arXiv 2023. [paper]
  5. Multi-concept customization of text-to-image diffusion. Kumari, N. et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [paper][code]
  6. Blip-diffusion: Pre-trained subject representation for controllable text-to-image generation and editing. Li, D. et al. arXiv 2023. [paper][code]
  7. FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention. Xiao, G. et al. arXiv 2023. [paper][code]
  8. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. Wei, Y. et al. arXiv 2023. [paper][code]
  9. Subject-driven text-to-image generation via apprenticeship learning. Chen, W. et al. arXiv 2023. [paper]

Text Augmentation

Label-based approaches

  1. Augmented sbert: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. Thakur, N. et al. arXiv 2020. [paper][code]
  2. Data augmentation using pre-trained transformer models. Kumar, V. et al. arXiv 2020. [paper][code]
  3. Data augmentation for intent classification with off-the-shelf large language models. Sahu, G. et al. arXiv 2022. [paper][code]
  4. GPT3Mix: Leveraging large-scale language models for text augmentation. Yoo, K. et al. arXiv 2021. [paper][code]
  5. Augmenting text for spoken language understanding with Large Language Models. Sharma, R. et al. arXiv 2023. [paper]
  6. Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges. Samuel, V. et al. arXiv 2023. [paper]
  7. Can large language models aid in annotating speech emotional data? uncovering new frontiers. Latif, S. et al. arXiv 2023. [paper]
  8. Generative Data Augmentation using LLMs improves Distributional Robustness in Question Answering. Chowdhury, A. et al. arXiv 2023. [paper]
  9. MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering. Chen, X. et al. arXiv 2023. [paper]
  10. Text Data Augmentation in Low-Resource Settings via Fine-Tuning of Large Language Models. Kaddour, J. et al. arXiv 2023. [paper]
  11. Improving Audio Captioning Models with Fine-grained Audio Features, Text Embedding Supervision, and LLM Mix-up Augmentation. Wu, S. et al. arXiv 2023. [paper][code]
  12. LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named Entity Recognition. Ye, J. et al. arXiv 2024. [paper]

Generated content-based approaches

  1. Augesc: Dialogue augmentation with large language models for emotional support conversation. Zheng, C. et al. Findings of the Association for Computational Linguistics: ACL 2023. [paper][code]
  2. Chataug: Leveraging chatgpt for text data augmentation. Dai, H. et al. arXiv 2023. [paper][code]
  3. Coca: Contrastive captioners are image-text foundation models. Yu, J. et al. arXiv 2022. [paper][code]
  4. DAGAM: Data Augmentation with Generation And Modification. Jo, B. et al. arXiv 2022. [paper][code]
  5. Data augmentation for neural machine translation using generative language model. Oh, S. et al. arXiv 2023. [paper]
  6. Deep Transformer based Data Augmentation with Subword Units for Morphologically Rich Online ASR. Tarján, B. et al. arXiv 2020. [paper]
  7. Flipda: Effective and robust data augmentation for few-shot learning. Zhou, J. et al. arXiv 2021. [paper][code]
  8. Genius: Sketch-based language model pre-training via extreme and selective masking for text generation and augmentation. Guo, B. et al. arXiv 2022. [paper][code]
  9. Inpars: Data augmentation for information retrieval using large language models. Bonifacio, L. et al. arXiv 2022. [paper][code]
  10. SkillBot: Towards Data Augmentation using Transformer language model and linguistic evaluation. Khatri, S. et al. 2022 Human-Centered Cognitive Systems (HCCS). [paper]
  11. Textual data augmentation for efficient active learning on tiny datasets. Quteineh, H. et al. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). [paper]
  12. Wanli: Worker and ai collaboration for natural language inference dataset creation. Liu, A. et al. arXiv 2022. [paper][code]
  13. EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets. Lu, H. et al. arXiv 2023. [paper]
  14. Tuning language models as training data generators for augmentation-enhanced few-shot learning. Meng, Y. et al. International Conference on Machine Learning. [paper][code]
  15. Generating training data with language models: Towards zero-shot language understanding. Meng, Y. et al. Advances in Neural Information Processing Systems. [paper][code]
  16. ICLEF: In-Context Learning with Expert Feedback for Explainable Style Transfer. Saakyan, A. et al. arXiv 2023. [paper][code]
  17. Natural Language Dataset Generation Framework for Visualizations Powered by Large Language Models. Ko, H. et al. arXiv 2023. [paper][code]
  18. PULSAR at MEDIQA-Sum 2023: Large Language Models Augmented by Synthetic Dialogue Convert Patient Dialogues to Medical Records. Schlegel, V. et al. arXiv 2023. [paper][code]
  19. Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning. Gao, J. et al. The Eleventh International Conference on Learning Representations. [paper][code]
  20. Resolving the Imbalance Issue in Hierarchical Disciplinary Topic Inference via LLM-based Data Augmentation. Cai, X. et al. arXiv 2023. [paper]
  21. Just-in-Time Security Patch Detection--LLM At the Rescue for Data Augmentation. Tang, X. et al. arXiv 2023. [paper][code]
  22. DAIL: Data Augmentation for In-Context Learning via Self-Paraphrase. Li, D. et al. arXiv 2023. [paper]
  23. ZeroShotDataAug: Generating and Augmenting Training Data with ChatGPT. Ubani, S. et al. arXiv 2023. [paper]
  24. Large Language Models as Data Augmenters for Cold-Start Item Recommendation. Wang, J. et al. arXiv 2024. [paper]

Paired Augmentation

  1. Mixgen: A new multi-modal data augmentation. Hao, X. et al. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. [paper][code]
  2. PromptMix: Text-to-image diffusion models enhance the performance of lightweight networks. Bakhtiarnia, A. et al. arXiv 2023. [paper]
  3. Towards reporting bias in visual-language datasets: bimodal augmentation by decoupling object-attribute association. Wu, Q. et al. arXiv 2023. [paper]

Applications

Natural Language Processing

Text classification

  1. Chataug: Leveraging chatgpt for text data augmentation. Dai, H. et al. arXiv 2023. [paper][code]
  2. DAGAM: Data Augmentation with Generation And Modification. Jo, B. et al. arXiv 2022. [paper][code]
  3. Data augmentation using pre-trained transformer models. Kumar, V. et al. arXiv 2020. [paper][code]
  4. Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning. Gao, J. et al. The Eleventh International Conference on Learning Representations. [paper][code]
  5. Resolving the Imbalance Issue in Hierarchical Disciplinary Topic Inference via LLM-based Data Augmentation. Cai, X. et al. arXiv 2023. [paper]
  6. Genius: Sketch-based language model pre-training via extreme and selective masking for text generation and augmentation. Guo, B. et al. arXiv 2022. [paper][code]
  7. Tuning language models as training data generators for augmentation-enhanced few-shot learning. Meng, Y. et al. International Conference on Machine Learning. [paper][code]
  8. ICLEF: In-Context Learning with Expert Feedback for Explainable Style Transfer. Saakyan, A. et al. arXiv 2023. [paper][code]
  9. DAIL: Data Augmentation for In-Context Learning via Self-Paraphrase. Li, D. et al. arXiv 2023. [paper]

Question answering

  1. MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering. Chen, X. et al. arXiv 2023. [paper]
  2. Generative Data Augmentation using LLMs improves Distributional Robustness in Question Answering. Chowdhury, A. et al. arXiv 2023. [paper]
  3. Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges. Samuel, V. et al. arXiv 2023. [paper]
  4. CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration. Sachdeva, R. et al. arXiv 2023. [paper][code]

Machine translation

  1. EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets. Lu, H. et al. arXiv 2023. [paper]
  2. Data augmentation for neural machine translation using generative language model. Oh, S. et al. arXiv 2023. [paper]

Natural language inference

  1. EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets. Lu, H. et al. arXiv 2023. [paper]
  2. Wanli: Worker and ai collaboration for natural language inference dataset creation. Liu, A. et al. arXiv 2022. [paper][code]

Dialogue summarization

  1. EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets. Lu, H. et al. arXiv 2023. [paper]
  2. PULSAR at MEDIQA-Sum 2023: Large Language Models Augmented by Synthetic Dialogue Convert Patient Dialogues to Medical Records. Schlegel, V. et al. arXiv 2023. [paper][code]

Others

  1. Augesc: Dialogue augmentation with large language models for emotional support conversation. Zheng, C. et al. Findings of the Association for Computational Linguistics: ACL 2023. [paper][code]
  2. Augmented sbert: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. Thakur, N. et al. arXiv 2020. [paper]
  3. Inpars: Data augmentation for information retrieval using large language models. Bonifacio, L. et al. arXiv 2022. [paper][code]
  4. EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets. Lu, H. et al. arXiv 2023. [paper]
  5. Large Language Models as Data Augmenters for Cold-Start Item Recommendation. Wang, J. et al. arXiv 2024. [paper]
  6. LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named Entity Recognition. Ye, J. et al. arXiv 2024. [paper]

Computer Vision

Image classification

  1. It is all about where you start: Text-to-image generation with seed selection. Samuel, D. et al. arXiv 2023. [paper]
  2. Diversify your vision datasets with automatic diffusion-based augmentation. Dunlap, L. et al. arXiv 2023. [paper][code]
  3. Effective data augmentation with diffusion models. Trabucco, B. et al. arXiv 2023. [paper][code]
  4. TTIDA: Controllable Generative Data Augmentation via Text-to-Text and Text-to-Image Models. Yin, Y. et al. arXiv 2023. [paper][code]
  5. Boosting Unsupervised Contrastive Learning Using Diffusion-Based Data Augmentation From Scratch. Zang, Z. et al. arXiv 2023. [paper][code]
  6. GeNIe: Generative Hard Negative Images Through Diffusion. Koohpayegani, S. et al. arXiv 2023. [paper][code]
  7. Chameleon: Foundation Models for Fairness-aware Multi-modal Data Augmentation to Enhance Coverage of Minorities. Erfanian, M. et al. arXiv 2024. [paper][code]
  8. Cross domain generative augmentation: Domain generalization with latent diffusion models. Hemati, S. et al. arXiv 2023. [paper]

Semantic segmentation

  1. EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model. Zhang, Z. et al. arXiv 2023. [paper]
  2. Boosting Dermatoscopic Lesion Segmentation via Diffusion Models with Visual and Textual Prompts. Du, S. et al. arXiv 2023. [paper]
  3. Diffusion-based data augmentation for nuclei image segmentation. Yu, X. et al. International Conference on Medical Image Computing and Computer-Assisted Intervention. [paper][code]
  4. Image Augmentation with Controlled Diffusion for Weakly-Supervised Semantic Segmentation. Wu, W. et al. arXiv 2023. [paper]
  5. Generative Data Augmentation Improves Scribble-supervised Semantic Segmentation. Schnell, J. et al. arXiv 2023. [paper]

Object detection

  1. The Big Data Myth: Using Diffusion Models for Dataset Generation to Train Deep Detection Models. Voetman, R. et al. arXiv 2023. [paper]
  2. WoVoGen: World Volume-aware Diffusion for Controllable Multi-camera Driving Scene Generation. Lu, J. et al. arXiv 2023. [paper][code]
  3. InstaGen: Enhancing Object Detection by Training on Synthetic Dataset. Feng, C. et al. arXiv 2024. [paper][code]
  4. Diffusion-based Data Augmentation for Object Counting Problems. Wang, Z. et al. arXiv 2024. [paper]

Audio signal processing

  1. Improving Audio Captioning Models with Fine-grained Audio Features, Text Embedding Supervision, and LLM Mix-up Augmentation. Wu, S. et al. arXiv 2023. [paper][code]
  2. Augmenting text for spoken language understanding with Large Language Models. Sharma, R. et al. arXiv 2023. [paper]
  3. Can large language models aid in annotating speech emotional data? uncovering new frontiers. Latif, S. et al. arXiv 2023. [paper]
  4. Deep Transformer based Data Augmentation with Subword Units for Morphologically Rich Online ASR. Tarján, B. et al. arXiv 2020. [paper]
  5. Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance. Kim, J. et al. arXiv 2023. [paper][code]

Data Post Processing

Top-K Selection

  1. Inpars: Data augmentation for information retrieval using large language models. Bonifacio, L. et al. arXiv 2022. [paper][code]
  2. Generating training data with language models: Towards zero-shot language understanding. Meng, Y. et al. Advances in Neural Information Processing Systems. [paper][code]
  3. Strata: Self-training with task augmentation for better few-shot learning. Vu, T. et al. arXiv 2021. [paper][code]

Model-based Approaches

  1. CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration. Sachdeva, R. et al. arXiv 2023. [paper][code]
  2. Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges. Samuel, V. et al. arXiv 2023. [paper]
  3. Augmenting text for spoken language understanding with Large Language Models. Sharma, R. et al. arXiv 2023. [paper]
  4. Data augmentation for intent classification with off-the-shelf large language models. Sahu, G. et al. arXiv 2022. [paper][code]
  5. Flipda: Effective and robust data augmentation for few-shot learning. Zhou, J. et al. arXiv 2021. [paper][code]
  6. Improving Audio Captioning Models with Fine-grained Audio Features, Text Embedding Supervision, and LLM Mix-up Augmentation. Wu, S. et al. arXiv 2023. [paper][code]

Score-based Approaches

  1. DiffuseExpand: Expanding dataset for 2D medical image segmentation using diffusion models. Shao, S. et al. arXiv 2023. [paper][code]
  2. Image Augmentation with Controlled Diffusion for Weakly-Supervised Semantic Segmentation. Wu, W. et al. arXiv 2023. [paper]
  3. Generative Data Augmentation using LLMs improves Distributional Robustness in Question Answering. Chowdhury, A. et al. arXiv 2023. [paper]
  4. Augesc: Dialogue augmentation with large language models for emotional support conversation. Zheng, C. et al. Findings of the Association for Computational Linguistics: ACL 2023. [paper][code]
  5. Wanli: Worker and ai collaboration for natural language inference dataset creation. Liu, A. et al. arXiv 2022. [paper][code]

Cluster-based Approaches

  1. Diffusion-based data augmentation for nuclei image segmentation. Yu, X. et al. International Conference on Medical Image Computing and Computer-Assisted Intervention. [paper][code]

Contributing

We welcome contributions to LLM-data-aug-survey! If you'd like to contribute, please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your modifications.
  3. Submit a pull request with a clear description of the changes you made.

Feel free to open an issue if you have any additions or comments.

Citation

If you find this project useful in your research or work, please consider citing it:

@article{zhou2024survey,
  title={A Survey on Data Augmentation in Large Model Era},
  author={Zhou, Yue and Guo, Chenlu and Wang, Xu and Chang, Yi and Wu, Yuan},
  journal={arXiv preprint arXiv:2401.15422},
  year={2024}
}

Acknowledgements
