
# 🌀 ControlNeXt

ControlNeXt is our official implementation for controllable generation, supporting both images and videos and incorporating diverse forms of control information. In this project, we propose a new method that reduces trainable parameters by up to 90% compared with ControlNet, achieving faster convergence and outstanding efficiency. The method can be combined directly with LoRA techniques to alter style and ensure more stable generation. Please refer to the examples for more details.
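To illustrate the core idea of reducing trainable parameters, here is a hedged sketch (not the project's actual architecture): the large denoising backbone is frozen, and only a small control branch receives gradients. The module shapes below are made up for illustration and are far smaller than a real diffusion UNet.

```python
# Toy sketch: freeze a large "backbone" and train only a small control
# branch, so the trainable parameters are a tiny fraction of the total.
# All layer sizes are illustrative, not the real ControlNeXt modules.
import torch.nn as nn

base_unet = nn.Sequential(           # stand-in for the frozen diffusion backbone
    nn.Conv2d(4, 320, 3, padding=1),
    nn.Conv2d(320, 320, 3, padding=1),
    nn.Conv2d(320, 4, 3, padding=1),
)
for p in base_unet.parameters():
    p.requires_grad_(False)          # backbone stays frozen during training

control_module = nn.Sequential(      # small trainable control branch
    nn.Conv2d(3, 32, 3, padding=1),  # takes the control image (e.g. a pose map)
    nn.Conv2d(32, 4, 3, padding=1),
)

trainable = sum(p.numel() for p in control_module.parameters())
total = trainable + sum(p.numel() for p in base_unet.parameters())
print(f"trainable fraction: {trainable / total:.1%}")
```

In this toy setup the control branch accounts for well under 1% of all parameters, which is the spirit of the "up to 90% fewer trainable parameters" claim above.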

We provide an online demo of ControlNeXt-SDXL. Due to the high resource requirements of SVD, we are unable to offer it online.

This project is still undergoing iterative development. The code and model may be updated at any time. More information will be provided later.

## Experiences

We share more training experiences there and in the Issues. We spent a lot of time discovering these insights and now share them with all of you. We hope they help!

## Model Zoo

  • ControlNeXt-SDXL [ Link ] : Controllable image generation, built upon Stable Diffusion XL. It uses fewer trainable parameters, converges faster, runs more efficiently, and can be integrated with LoRA.

  • ControlNeXt-SDXL-Training [ Link ] : The training scripts for our ControlNeXt-SDXL [ Link ].

  • ControlNeXt-SVD-v2 [ Link ] : Generates videos controlled by a sequence of human poses. The v2 version adds several improvements: a higher-quality collected training dataset, larger training and inference batch frames, higher generation resolution, enhanced human-related video generation through continual training, and pose alignment at inference to improve overall performance.

  • ControlNeXt-SVD-v2-Training [ Link ] : The training scripts for our ControlNeXt-SVD-v2 [ Link ].

  • ControlNeXt-SVD [ Link ] : Generates videos controlled by a sequence of human poses. This can be seen as an attempt to replicate the implementation of AnimateAnyone; however, our model is built upon Stable Video Diffusion and employs a more concise architecture.

  • ControlNeXt-SD1.5 [ Link ] : Controllable image generation, built upon Stable Diffusion 1.5. It uses fewer trainable parameters, converges faster, runs more efficiently, and can be integrated with LoRA.

  • ControlNeXt-SD1.5-Training [ Link ] : The training scripts for our ControlNeXt-SD1.5 [ Link ].

  • ControlNeXt-SD3 [ Link ] : We regret to inform you that ControlNeXt-SD3 was trained with protected, private data and code, and therefore cannot be released.

## 🎥 Examples

For more examples, please refer to our Project page.


If you can't load the videos, you can also directly download them from here and here. Or you can view them from our Project Page or BiliBili.


If you can't load the videos, you can also directly download them from here.


Styles shown: DreamShaper, Anything v3.

If you find this work useful, please consider citing:

```bibtex
@article{peng2024controlnext,
  title={ControlNeXt: Powerful and Efficient Control for Image and Video Generation},
  author={Peng, Bohao and Wang, Jian and Zhang, Yuechen and Li, Wenbo and Yang, Ming-Chang and Jia, Jiaya},
  journal={arXiv preprint arXiv:2408.06070},
  year={2024}
}
```