
A Survey on Generative Modeling with Limited Data, Few Shots, and Zero Shot

Milad Abdollahzadeh, Touba Malekzadeh*, Christopher T. H. Teo*, Keshigeyan Chandrasegaran*, Guimeng Liu, Ngai-Man Cheung
(* Equal contribution, Corresponding author)

This repo contains a curated list of papers, with links to public code implementations where available, for Generative Modeling under Data Constraint (GM-DC). For each work, we identify the generative task(s) addressed, the approach taken, and the type of generative model used.

We first define the generative tasks, and then provide our comprehensive list of GM-DC works, grouped by approach, with these details for each work.

⭐ Overview

In machine learning, generative modeling aims to learn to generate new data statistically similar to the training data distribution. In this paper, we survey learning generative models under limited data, few shots and zero shot, referred to as Generative Modeling under Data Constraint (GM-DC). This is an important topic when data acquisition is challenging, e.g. healthcare applications. We discuss background, challenges, and propose two taxonomies: one on GM-DC tasks and another on GM-DC approaches. Importantly, we study interactions between different GM-DC tasks and approaches. Furthermore, we highlight research gaps, research trends, and potential avenues for future exploration.

🌏 News

  • Oct 28, 2024: The slides for our ICIP tutorial on "Generative Modeling for Limited Data, Few Shots and Zero Shot" can be found here.
  • July 28, 2023: First release (113 works included)!

Generative Task Definitions

We define 8 generative tasks under data constraints based on a rigorous review of the literature. These tasks are described in the following table:

| Task | Description & Example |
| --- | --- |
| uGM-1 | **Description:** Given $K$ samples from a domain $\mathcal{D}$, learn to generate diverse and high-quality samples from $\mathcal{D}$. <br>**Example:** ADA learns a StyleGAN2 using 1k images from AFHQ-Dog. |
| uGM-2 | **Description:** Given a generator pre-trained on a source domain $\mathcal{D}_s$ and $K$ samples from a target domain $\mathcal{D}_t$, learn to generate diverse and high-quality samples from $\mathcal{D}_t$. <br>**Example:** CDC adapts a GAN pre-trained on FFHQ (human faces) to sketches using 10 samples. |
| uGM-3 | **Description:** Given a generator pre-trained on a source domain $\mathcal{D}_s$ and a text prompt describing a target domain $\mathcal{D}_t$, learn to generate diverse and high-quality samples from $\mathcal{D}_t$. <br>**Example:** StyleGAN-NADA adapts a GAN pre-trained on FFHQ to the painting domain using the prompt "Fernando Botero Painting". |
| cGM-1 | **Description:** Given $K$ samples with class labels from a domain $\mathcal{D}$, learn to generate diverse and high-quality samples from $\mathcal{D}$ conditioned on the class labels. <br>**Example:** CbC trains a conditional generator on 20 classes of ImageNet Carnivores using 100 images per class. |
| cGM-2 | **Description:** Given a generator pre-trained on the seen classes $C_{seen}$ of a domain $\mathcal{D}$ and $K$ labeled samples from unseen classes $C_{unseen}$ of $\mathcal{D}$, learn to generate diverse and high-quality samples conditioned on the class labels of $C_{unseen}$. <br>**Example:** LoFGAN learns from 85 classes of Flowers to generate images for an unseen class with only 3 samples. |
| cGM-3 | **Description:** Given a generator pre-trained on a source domain $\mathcal{D}_s$ and $K$ labeled samples from a target domain $\mathcal{D}_t$, learn to generate diverse and high-quality samples from $\mathcal{D}_t$ conditioned on the class labels. <br>**Example:** VPT adapts a conditional generator pre-trained on ImageNet to Places365 with 500 images per class. |
| IGM | **Description:** Given $K$ samples (usually $K=1$) and assuming a rich internal distribution of patches within these samples, learn to generate diverse and high-quality samples with the same internal patch distribution. <br>**Example:** SinDDM trains a generator using a single image of Marina Bay Sands and generates variants of it. |
| SGM | **Description:** Given a pre-trained generator, $K$ samples of a particular subject, and a text prompt, learn to generate diverse and high-quality samples containing the same subject. <br>**Example:** DreamBooth trains a generator using 4 images of a particular backpack and, guided by a text prompt, renders that backpack in the Grand Canyon. |

Please refer to our survey for a more detailed discussion of these generative tasks, including the attributes of each task and the range of data limitation addressed by each.
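
To make these task setups concrete, below is a minimal, hypothetical PyTorch sketch of the uGM-2 setting: a generator pre-trained on a source domain is adapted to a target domain with only $K$ samples via adversarial fine-tuning. The toy modules, image resolution, and hyperparameters are our own illustrative assumptions and do not reproduce any specific method listed here.

```python
import torch
import torch.nn as nn

K = 10  # few-shot budget: number of target-domain samples (illustrative)

# Toy stand-ins; a real method would load a generator pre-trained on a large
# source domain (e.g., FFHQ) rather than a freshly initialized module.
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32))
discriminator = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 1))

target_samples = torch.randn(K, 3 * 32 * 32)  # stand-in for the K target images

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    z = torch.randn(K, 64)

    # Discriminator update: K real target samples vs. K generated samples.
    fake = generator(z).detach()
    d_loss = bce(discriminator(target_samples), torch.ones(K, 1)) + \
             bce(discriminator(fake), torch.zeros(K, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: move the (pre-trained) generator toward the target domain.
    g_loss = bce(discriminator(generator(z)), torch.ones(K, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In actual GM-DC methods, the central design choices are which generator parameters to update (all weights, a small subset, or lightweight added modules) and how to regularize the adapted model so it neither overfits the $K$ samples nor forgets useful source-domain knowledge.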

Transfer Learning

Click to expand/collapse 50 works
  • Transferring GANs: generating images from limited data
    ECCV 2018
    [Paper] [Official Code]
  • Image Generation from Small Datasets via Batch Statistics Adaptation
    ICCV 2019
    [Paper] [Official Code]
  • Freeze the Discriminator: a Simple Baseline for Fine-tuning GANs
    CVPR 2020-W
    [Paper] [Official Code]
  • On Leveraging Pretrained GANs for Generation with Limited Data
    ICML 2020
    [Paper] [Official Code]
  • Few-Shot Image Generation with Elastic Weight Consolidation
    NeurIPS 2020
    [Paper]
  • GAN Memory with No Forgetting
    NeurIPS 2020
    [Paper] [Official Code]
  • Few-Shot Adaptation of Generative Adversarial Networks
    arXiv 2020
    [Paper] [Official Code]
  • Effective Knowledge Transfer from GANs to Target domains with Few Images
    CVPR 2021
    [Paper] [Official Code]
  • Few-Shot Image Generation via Cross-domain Correspondence
    CVPR 2021
    [Paper] [Official Code]
  • Efficient Conditional GAN Transfer with Knowledge Propagation across Classes
    CVPR 2021
    [Paper] [Official Code]
  • CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks
    NeurIPS 2021
    [Paper] [Official Code]
  • Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation
    NeurIPS 2021-W
    [Paper]
  • Instance-Conditioned GAN
    NeurIPS 2021
    [Paper] [Official Code]
  • Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
    arXiv 2021
    [Paper] [Official Code]
  • One-Shot Generative Domain Adaptation
    arXiv 2021
    [Paper] [Official Code]
  • When, Why, and Which Pre-trained GANs are useful?
    ICLR 2022
    [Paper] [Official Code]
  • Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks
    ICLR 2022
    [Paper] [Official Code]
  • A Closer Look at Few-Shot Image Generation
    CVPR 2022
    [Paper]
  • Few shot generative model adaption via relaxed spatial structural alignment
    CVPR 2022
    [Paper] [Official Code]
  • JoJoGAN: One Shot Face Stylization
    ECCV 2022
    [Paper] [Official Code]
  • Few-shot Image Generation via Adaptation-Aware Kernel Modulation
    NeurIPS 2022
    [Paper] [Official Code]
  • Universal Domain Adaptation for Generative Adversarial Networks
    NeurIPS 2022
    [Paper] [Official Code]
  • Generalized One-shot Domain Adaptation of Generative Adversarial Networks
    NeurIPS 2022
    [Paper] [Official Code]
  • Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks
    NeurIPS 2022
    [Paper] [Official Code]
  • StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
    ACM-TOG 2022
    [Paper] [Official Code]
  • DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains
    SIGGRAPH-Asia 2022
    [Paper] [Official Code]
  • Exploiting Knowledge Distillation for Few-Shot Image Generation
    arXiv 2022
    [Paper]
  • Few-shot Artistic Portraits Generation with Contrastive Transfer Learning
    arXiv 2022
    [Paper]
  • Dynamic Weighted Semantic Correspondence for Few-Shot Image Generative Adaptation
    ACM-MM 2022
    [Paper]
  • Fair Generative Models via Transfer Learning
    AAAI 2023
    [Paper] [Official Code]
  • Progressive Few-Shot Adaptation of Generative Model with Align-Free Spatial Correlation
    AAAI 2023
    [Paper] [Official Code]
  • Few-shot Cross-domain Image Generation via Inference-time Latent-code Learning
    ICLR 2023
    [Paper] [Official Code]
  • Exploring Incompatible Knowledge Transfer in Few-shot Image Generation
    CVPR 2023
    [Paper] [Official Code]
  • Zero-shot Generative Model Adaptation via Image-specific Prompt Learning
    CVPR 2023
    [Paper] [Official Code]
  • Visual Prompt Tuning for Generative Transfer Learning
    CVPR 2023
    [Paper] [Official Code]
  • SINE: SINgle Image Editing with Text-to-Image Diffusion Models
    CVPR 2023
    [Paper] [Official Code]
  • DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
    CVPR 2023
    [Paper]
  • Multi-Concept Customization of Text-to-Image Diffusion
    CVPR 2023
    [Paper] [Official Code]
  • Specialist Diffusion: Plug-and-Play Sample-Efficient Fine-Tuning of Text-to-Image Diffusion Models to Learn Any Unseen Style
    CVPR 2023
    [Paper]
  • Target-Aware Generative Augmentations for Single-Shot Adaptation
    ICML 2023
    [Paper] [Official Code]
  • MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
    ICML 2023
    [Paper] [Official Code]
  • Data-Dependent Domain Transfer GANs for Image Generation with Limited Data
    ACM-TMCCA 2023
    [Paper]
  • One-Shot Adaptation of GAN in Just One CLIP
    TPAMI 2023
    [Paper] [Official Code]
  • Few-shot Image Generation via Masked Discrimination
    arXiv 2023
    [Paper]
  • Few-shot Image Generation via Latent Space Relocation
    arXiv 2023
    [Paper]
  • Faster Few-Shot Face Image Generation with Features of Specific Group Using Pivotal Tuning Inversion and PCA
    ICAIIC 2023
    [Paper]
  • Few-shot Image Generation with Diffusion Models
    arXiv 2023
    [Paper]
  • Rethinking cross-domain semantic relation for few-shot image generation
    Applied Intelligence 2023
    [Paper] [Official Code]
  • An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
    arXiv 2023
    [Paper] [Official Code]
  • BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
    arXiv 2023
    [Paper] [Official Code]

Data Augmentation

Click to expand/collapse 12 works
  • Consistency Regularization for Generative Adversarial Networks
    ICLR 2020
    [Paper] [Official Code]
  • Training generative adversarial networks with limited data
    NeurIPS 2020
    [Paper] [Official Code]
  • Differentiable Augmentation for Data-efficient GAN Training
    NeurIPS 2020
    [Paper] [Official Code]
  • Image Augmentations for GAN Training
    arXiv 2020
    [Paper]
  • Improved Consistency Regularization for GANs
    AAAI 2021
    [Paper]
  • Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data
    NeurIPS 2021
    [Paper] [Official Code]
  • Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective
    NeurIPS 2021
    [Paper] [Official Code]
  • Self-Supervised GANs with Label Augmentation
    NeurIPS 2021
    [Paper] [Official Code]
  • On Data Augmentation for GAN Training
    TIP 2021
    [Paper] [Official Code]
  • Adaptive Feature Interpolation for Low-Shot Image Generation
    ECCV 2022
    [Paper] [Official Code]
  • Diffusion-GAN: Training GANs with Diffusion
    ICLR 2023
    [Paper] [Official Code]
  • Faster and More Data-Efficient Training of Diffusion Models
    arXiv 2023
    [Paper]
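
A common thread in the works above is applying the same random, differentiable transforms to both real and generated images before they reach the discriminator, so that gradients still flow back to the generator through the augmentation. The sketch below illustrates this general recipe with two toy transforms; the specific augmentations and their strengths are illustrative assumptions, not taken from any single paper.

```python
import torch

def diff_augment(x: torch.Tensor) -> torch.Tensor:
    """Differentiable augmentations for an image batch of shape (N, C, H, W)."""
    # Random per-sample brightness shift (differentiable w.r.t. x).
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)
    # Random per-sample horizontal flip.
    flip = torch.rand(x.size(0), device=x.device) < 0.5
    x = torch.where(flip.view(-1, 1, 1, 1), x.flip(-1), x)
    return x

# Inside a GAN training step, augment both branches identically:
#   d_real = discriminator(diff_augment(real_images))
#   d_fake = discriminator(diff_augment(generator(z)))  # gradients reach G
```

Augmenting only the real images would let the discriminator key on augmentation artifacts and leak them into generated samples, which is why these methods augment both branches (and, in ADA, adapt the augmentation probability during training).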

Network Architectures

Click to expand/collapse 11 works
  • Towards Faster and Stabilized GAN Training for High-Fidelity Few-Shot Image Synthesis
    ICLR 2021
    [Paper] [Official Code]
  • Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective
    NeurIPS 2021
    [Paper] [Official Code]
  • Projected GANs Converge Faster
    NeurIPS 2021
    [Paper] [Official Code]
  • Prototype Memory and Attention Mechanisms for Few Shot Image Generation
    ICLR 2022
    [Paper] [Official Code]
  • Collapse by conditioning: Training class-conditional GANs with limited data
    ICLR 2022
    [Paper] [Official Code]
  • Ensembling Off-the-shelf Models for GAN Training
    CVPR 2022
    [Paper] [Official Code]
  • Hierarchical Context Aggregation for Few-Shot Generation
    ICML 2022
    [Paper] [Official Code]
  • Improving GANs with A Dynamic Discriminator
    NeurIPS 2022
    [Paper] [Official Code]
  • Re-GAN: Data-Efficient GANs Training via Architectural Reconfiguration
    CVPR 2023
    [Paper] [Official Code]
  • Introducing editable and representative attributes for few-shot image generation
    Engineering Applications of AI 2023
    [Paper] [Official Code]
  • Toward a better image synthesis GAN framework for high-fidelity few-shot datasets via NAS and contrastive learning
    Elsevier KBS 2023
    [Paper] [Official Code]

Multi-Task Objectives

Click to expand/collapse 25 works
  • Image Augmentations for GAN Training
    arXiv 2020
    [Paper]
  • Regularizing generative adversarial networks under limited data
    CVPR 2021
    [Paper] [Official Code]
  • Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation
    NeurIPS 2021-W
    [Paper]
  • Data-Efficient Instance Generation from Instance Discrimination
    NeurIPS 2021
    [Paper] [Official Code]
  • Diffusion-Decoding Models for Few-Shot Conditional Generation
    NeurIPS 2021
    [Paper] [Official Code]
  • Generative Co-training for Generative Adversarial Networks with Limited Data
    AAAI 2022
    [Paper] [Official Code]
  • Prototype Memory and Attention Mechanisms for Few Shot Image Generation
    ICLR 2022
    [Paper] [Official Code]
  • A Closer Look at Few-Shot Image Generation
    CVPR 2022
    [Paper]
  • Few-shot Image Generation with Mixup-based Distance Learning
    ECCV 2022
    [Paper] [Official Code]
  • Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs
    ECCV 2022
    [Paper] [Official Code]
  • Any-resolution Training for High-resolution Image Synthesis
    ECCV 2022
    [Paper] [Official Code]
  • DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data
    NeurIPS 2022
    [Paper] [Official Code]
  • Masked Generative Adversarial Networks are Data-Efficient Generation Learners
    NeurIPS 2022
    [Paper]
  • Exploiting Knowledge Distillation for Few-Shot Image Generation
    arXiv 2022
    [Paper]
  • Few-shot Artistic Portraits Generation with Contrastive Transfer Learning
    arXiv 2022
    [Paper]
  • Few-Shot Diffusion Models
    arXiv 2022
    [Paper] [Official Code]
  • Few-shot image generation based on contrastive meta-learning generative adversarial network
    Visual Computer 2022
    [Paper]
  • Diffusion-GAN: Training GANs with Diffusion
    ICLR 2023
    [Paper] [Official Code]
  • KD-DLGAN: Data Limited Image Generation via Knowledge Distillation
    CVPR 2023
    [Paper]
  • Adaptive IMLE for Few-shot Pretraining-free Generative Modelling
    ICML 2023
    [Paper] [Official Code]
  • Few-shot Image Generation via Masked Discrimination
    arXiv 2023
    [Paper]
  • Faster and More Data-Efficient Training of Diffusion Models
    arXiv 2023
    [Paper]
  • Towards high diversity and fidelity image synthesis under limited data
    Information Sciences 2023
    [Paper] [Official Code]
  • Regularizing Label-Augmented Generative Adversarial Networks Under Limited Data
    IEEE Access 2023
    [Paper]
  • Dynamically Masked Discriminator for Generative Adversarial Networks
    arXiv 2023
    [Paper]

Exploiting Frequency Components

Click to expand/collapse 4 works
  • Generative Co-training for Generative Adversarial Networks with Limited Data
    AAAI 2022
    [Paper] [Official Code]
  • Frequency-Aware GAN for High-Fidelity Few-Shot Image Generation
    ECCV 2022
    [Paper] [Official Code]
  • Improving GANs with A Dynamic Discriminator
    NeurIPS 2022
    [Paper] [Official Code]
  • Exploiting Frequency Components for Training GANs under Limited Data
    NeurIPS 2022
    [Paper] [Official Code]
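
These works share the observation that, under limited data, GAN training tends to neglect or mishandle high-frequency content, so they explicitly expose frequency information to the discriminator. The sketch below shows one hypothetical way to do this: compute an FFT-based high-frequency residual and feed it alongside the raw image. The cutoff value and how the map is consumed are illustrative assumptions.

```python
import torch

def high_frequency_map(x: torch.Tensor, cutoff: int = 4) -> torch.Tensor:
    """Remove low frequencies from a (N, C, H, W) batch and return the residual."""
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h, w = x.shape[-2:]
    cy, cx = h // 2, w // 2
    freq[..., cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0  # zero low freqs
    return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real

# A frequency-aware discriminator could then score the image together with its
# high-frequency residual, e.g. torch.cat([x, high_frequency_map(x)], dim=1).
```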

Meta-learning

Click to expand/collapse 17 works
  • Data Augmentation Generative Adversarial Networks
    arXiv 2017
    [Paper] [Official Code]
  • Few-shot Generative Modelling with Generative Matching Networks
    AISTATS 2018
    [Paper]
  • Few-shot Image Generation with Reptile
    arXiv 2019
    [Paper] [Official Code]
  • A domain adaptive few shot generation framework
    arXiv 2020
    [Paper]
  • Matching-based Few-shot Image Generation
    ICME 2020
    [Paper] [Official Code]
  • Fusing-and-Filling GAN for Few-shot Image Generation
    ACM-MM 2020
    [Paper] [Official Code]
  • Fusing Local Representations for Few-shot Image Generation
    ICCV 2021
    [Paper] [Official Code]
  • Fast Adaptive Meta-Learning for Few-Shot Image Generation
    TMM 2021
    [Paper] [Official Code]
  • Frequency-Aware GAN for High-Fidelity Few-Shot Image Generation
    ECCV 2022
    [Paper] [Official Code]
  • Towards Diverse Few-shot Image Generation with Sample-Specific Delta
    ECCV 2022
    [Paper] [Official Code]
  • Few-shot image generation based on contrastive meta-learning generative adversarial network
    Visual Computer 2022
    [Paper]
  • Few-shot Image Generation Using Discrete Content Representation
    ACM-MM 2022
    [Paper]
  • The Euclidean Space is Evil: Hyperbolic Attribute Editing for Few-shot Image Generation
    arXiv 2022
    [Paper]
  • Where is My Spot? Few-shot Image Generation via Latent Subspace Optimization
    CVPR 2023
    [Paper] [Official Code]
  • Attribute Group Editing for Reliable Few-shot Image Generation
    CVPR 2023
    [Paper] [Official Code]
  • Adaptive multi-scale modulation generative adversarial network for few-shot image generation
    Applied Intelligence 2023
    [Paper]
  • Stable Attribute Group Editing for Reliable Few-shot Image Generation
    arXiv 2023
    [Paper] [Official Code]

Modeling Internal Patch Distribution

Click to expand/collapse 8 works
  • SinGAN: Learning a Generative Model from a Single Natural Image
    ICCV 2019
    [Paper] [Official Code]
  • Learning to generate samples from single images and videos
    CVPR 2021-W
    [Paper] [Official Code]
  • Improved Techniques for Training Single-Image GANs
    WACV 2021
    [Paper] [Official Code]
  • SinDiffusion: Learning a Diffusion Model from a Single Natural Image
    arXiv 2022
    [Paper] [Official Code]
  • Learning and Blending the Internal Distributions of Single Images by Spatial Image-Identity Conditioning
    arXiv 2022
    [Paper]
  • SinFusion: Training Diffusion Models on a Single Image or Video
    ICML 2023
    [Paper] [Official Code]
  • SinDDM: A Single Image Denoising Diffusion Model
    ICML 2023
    [Paper] [Official Code]
  • Diverse Attribute Transfer for Few-Shot Image Synthesis
    VISIGRAPP 2023
    [Paper] [Official Code]
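
Single-image generative models rely on the fact that one natural image already contains a rich internal distribution of patches, which serves as the effective training set. As a minimal illustration of where this training signal comes from, the sketch below enumerates all overlapping patches of a single image with `unfold`; the patch size and stride are illustrative, and actual methods such as SinGAN train multi-scale patch-level generators and discriminators rather than sampling patches directly.

```python
import torch

image = torch.randn(1, 3, 128, 128)  # stand-in for the single training image

# Extract all overlapping 16x16 patches with stride 8: the "dataset" hiding
# inside one image.
patches = image.unfold(2, 16, 8).unfold(3, 16, 8)       # (1, 3, 15, 15, 16, 16)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, 16, 16)
print(patches.shape)  # torch.Size([225, 3, 16, 16])
```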

Citation

If you find this repo useful, please cite our paper:

@article{abdollahzadeh2023survey,
  title={A Survey on Generative Modeling with Limited Data, Few Shots, and Zero Shot},
  author={Milad Abdollahzadeh and Touba Malekzadeh and Christopher T. H. Teo and Keshigeyan Chandrasegaran and Guimeng Liu and Ngai-Man Cheung},
  year={2023},
  eprint={2307.14397},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}