🌀 ControlNeXt-SVD-v2

This is our implementation of ControlNeXt based on Stable Video Diffusion. It can be seen as an attempt to replicate the implementation of AnimateAnyone with a more concise and efficient architecture.

Compared to image generation, video generation poses significantly greater challenges. While directly training the generation model with our method is feasible, we also employ various engineering strategies to enhance performance, even though they are independent of the academic algorithm itself.

Please refer to Examples for an intuitive look at the results.
Please refer to Base Model for more details about the base model we use.
Please refer to Inference for more details regarding installation and inference.
Please refer to Advanced Performance for tips on achieving better results.
Please refer to Limitations for more details about the limitations of the current work.

Examples

If you can't load the videos, you can also directly download them from here and here. Or you can view them on our Project Page or BiliBili.

02.mp4
02-1.mp4
01.mp4
01-1.mp4

03-1.mp4

04-1.mp4

Base Model

For the v2 version, we adopt the following improvements:

  • We have collected a higher-quality, higher-resolution dataset to train our model.
  • We have extended the training and inference batch frames to 24.
  • We have extended the video resolution to 576 × 1024.
  • We conduct extensive continual training of SVD on human-related videos to enhance its ability to generate human-related content.
  • We adopt fp32.
  • We adopt pose alignment during inference, following the related works.

Inference

  1. Clone our repository
  2. cd ControlNeXt-SVD-v2
  3. Download the pretrained weights into pretrained/ from here. (For more details, please refer to Base Model.)
  4. Download the DWPose weights, including dw-ll_ucoco_384.onnx and yolox_l.onnx, into pretrained/DWPose. For more details, please refer to DWPose:
pretrained/
├───DWPose/
│   ├───dw-ll_ucoco_384.onnx
│   └───yolox_l.onnx
├───unet.bin
└───controlnet.bin
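
Before moving on, you can run a quick sanity check to confirm the weights landed in the expected locations. This is a small helper sketch based only on the tree above, not a script from the repository; adjust the paths if your layout differs.

# check_weights.py -- verify the weight layout shown above (hypothetical helper, not in the repo)
import os

EXPECTED_FILES = [
    "pretrained/DWPose/dw-ll_ucoco_384.onnx",
    "pretrained/DWPose/yolox_l.onnx",
    "pretrained/unet.bin",
    "pretrained/controlnet.bin",
]

missing = [p for p in EXPECTED_FILES if not os.path.isfile(p)]
if missing:
    print("Missing weight files:")
    for p in missing:
        print("  -", p)
else:
    print("All expected weight files found.")
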
  5. Run the script:
CUDA_VISIBLE_DEVICES=0 python run_controlnext.py \
  --pretrained_model_name_or_path stabilityai/stable-video-diffusion-img2vid-xt-1-1 \
  --output_dir outputs \
  --max_frame_num 240 \
  --guidance_scale 3 \
  --batch_frames 24 \
  --sample_stride 2 \
  --overlap 6 \
  --height 1024 \
  --width 576 \
  --controlnext_path pretrained/controlnet.bin \
  --unet_path pretrained/unet.bin \
  --validation_control_video_path examples/video/02.mp4 \
  --ref_image_path examples/ref_imgs/01.jpeg

--pretrained_model_name_or_path : the pretrained base model; we pretrain and fine-tune our models based on SVD-XT1.1
--controlnext_path : the model path of ControlNeXt (a lightweight module)
--unet_path : the model path of the UNet
--ref_image_path : the path to the reference image
--overlap : the number of overlapped frames for long-video generation
--sample_stride : the stride used to sample the conditional controls; set it to 1 for smoother generation at the cost of more computation
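
To see how --batch_frames and --overlap cover a long pose sequence, here is a minimal sketch of the windowing schedule. It is only an illustration under the assumption of a fixed sliding window with shared overlap frames; the actual scheduling and blending in run_controlnext.py may differ, and the function name below is hypothetical.

# Hypothetical sketch: how overlapping windows could cover a long video.
def window_starts(total_frames: int, batch_frames: int = 24, overlap: int = 6):
    # Each pass generates batch_frames frames; consecutive windows share
    # `overlap` frames, which can be blended to smooth the seams between passes.
    stride = batch_frames - overlap  # new frames contributed per window
    starts = list(range(0, max(total_frames - batch_frames, 0) + 1, stride))
    if starts[-1] + batch_frames < total_frames:
        starts.append(total_frames - batch_frames)  # final window flush with the end
    return starts

# With the command above (--max_frame_num 240), windows start at 0, 18, 36, ..., 216.
print(window_starts(240))
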

  6. Face Enhancement (optional, recommended when faces come out badly)

Currently, the model is not specifically trained for IP consistency, as there are already many mature tools available. Additionally, alternatives like Animate Anyone also adopt such post-processing techniques.

a. Clone Face Fusion:
git clone https://github.com/facefusion/facefusion

b. Ensure to enter the directory:
cd facefusion

c. Install FaceFusion (we recommend creating a new conda virtual environment to avoid conflicts):
python install.py

d. Run the command:

python run.py \
  -s ../outputs/demo.jpg \
  -t ../outputs/demo.mp4 \
  -o ../outputs/out.mp4 \
  --headless \
  --execution-providers cuda  \
  --face-selector-mode one 

-s: the reference image
-t: the path to the original video
-o: the path to store the refined video
--headless: run without the GUI
--execution-providers cuda: use CUDA for acceleration (if available; otherwise the CPU is usually sufficient)

Advanced Performance

In this section, we share additional details and our own experience for enhancing video generation. These factors are algorithm-independent and unrelated to the academic contribution, yet crucial for achieving superior results. Many closely related works incorporate these strategies.

Reference Image

It is crucial to ensure that the reference image is clear and easy to understand, and in particular that the face in the reference image is aligned with the pose.

Face Enhancement

Most related works utilize face enhancement as part of the post-processing. This is especially relevant when generating videos based on images of unfamiliar individuals, such as friends, who were not included in the base model's pretraining and are therefore unseen and OOD data.

We recommend FaceFusion for post-processing. Please let us know if you have a better solution.

Please refer to FaceFusion for more details.

Continuous Fine-tuning

To significantly enhance performance on a specific pose sequence, you can continuously fine-tune the model for just a few hundred steps.

We will release the related fine-tuning code later.

Pose Generation

We adopt DWPose for pose generation and follow the related works (1, 2) to align the pose.
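
As a rough illustration of what pose alignment means here: the driving keypoints are rescaled and translated so that the driving skeleton matches the scale and position of the skeleton detected in the reference image. The snippet below is a hedged sketch of that idea only; the actual alignment follows the referenced works, and the function name and array shapes are assumptions.

import numpy as np

def align_pose(driving_kpts: np.ndarray, ref_kpts: np.ndarray) -> np.ndarray:
    # Hypothetical sketch: driving_kpts has shape (T, K, 2), ref_kpts has shape (K, 2).
    # Use the first driving frame as the source geometry.
    src = driving_kpts[0]
    src_center, ref_center = src.mean(axis=0), ref_kpts.mean(axis=0)
    src_size = src.max(axis=0) - src.min(axis=0)
    ref_size = ref_kpts.max(axis=0) - ref_kpts.min(axis=0)
    scale = ref_size / np.maximum(src_size, 1e-6)  # per-axis scale factor
    # Apply the same scale and translation to every frame of the driving sequence.
    return (driving_kpts - src_center) * scale + ref_center
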

Limitations

IP Consistency

We did not prioritize maintaining IP consistency during the development of the generation model and now rely on a helper model for face enhancement.

However, additional training can be implemented to ensure IP consistency moving forward.

This also leaves a possible direction for further improvement.

Base model

The base model plays a crucial role in generating human features, particularly hands and faces. We encourage collaboration to improve the base model for enhanced human-related video generation.

TODO

  • Training and fine-tuning code