LongAnimateDiff

Sapir Weissbuch, Naomi Ken Korem, Daniel Shalem, Yoav HaCohen | Lightricks Research

Demo: Hugging Face Spaces

We are pleased to release the "LongAnimateDiff" model, which has been trained to generate videos with a variable frame count, ranging from 16 to 64 frames. This model is compatible with the original AnimateDiff model. For optimal results, we recommend using a motion scale of 1.15.

We release two models:

  1. The LongAnimateDiff model, capable of generating videos with frame counts ranging from 16 to 64. You can download the weights from either Google Drive or HuggingFace.
  2. A specialized model designed to generate 32-frame videos, which typically produces higher-quality videos than the 16-64-frame LongAnimateDiff model. Please download the weights from Google Drive or HuggingFace (a download sketch follows this list).
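
As a convenience, here is a minimal sketch of fetching the weights with the huggingface_hub CLI; the repository id below is an assumption, so check the HuggingFace model page for the actual id and checkpoint file names.

# Sketch: download via the huggingface_hub CLI (pip install huggingface_hub).
# The repo id is an assumption; replace it and pick the checkpoint file
# listed on the actual model page.
huggingface-cli download Lightricks/LongAnimateDiff --local-dir ./LongAnimateDiff-weights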

Update: December 27, 2023

  • We are releasing version 1.1 of the LongAnimateDiff model, which generates higher-quality 64-frame videos.

Results

(Sample results, shown in the original README as animated GIFs, one per prompt: "A young man is dancing in a party", "A teddy bear is drawing a portrait", "A hamster is riding an auto rickshaw", "A swan swims in a lake", "A young man is dancing in a nice Paris street", "A cat is sitting next to a wall", "A gorilla is eating a banana", "A drone is flying in the sky above the mountains", "A swan swims in the lake", "A ginger woman in space future", "Photo portrait of an old lady with glasses", "Small fish swimming in an aquarium".)

Installation and Usage

ComfyUI usage

You can run our models using the ComfyUI framework. Place the downloaded motion-module weights in the 'AnimateDiff models' folder of your ComfyUI installation, then load and run the graph below.

(Screenshot: example ComfyUI workflow graph.)
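
A minimal sketch of where the weights typically go, assuming a default ComfyUI layout with an AnimateDiff custom node installed; the exact folder name depends on the node you use, so treat the paths below as assumptions.

# Sketch, assuming a default ComfyUI install; the motion-module folder name
# varies by AnimateDiff custom node (some use models/animatediff_models).
cd ComfyUI
mkdir -p models/animatediff_models
mv /path/to/downloaded_long_animatediff.ckpt models/animatediff_models/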

AnimateDiff codebase usage

Note: our models work better with a motion scale greater than 1. Motion scale is not implemented in the official AnimateDiff repository, so using ComfyUI is recommended.
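
To make the recommendation concrete, here is an illustrative Python sketch of how "motion scale" is commonly applied in community AnimateDiff implementations; the function and argument names are hypothetical, and this is not code from this repository.

# Illustrative sketch only; names are hypothetical, not this repo's API.
import torch

def apply_motion_module(hidden_states: torch.Tensor,
                        motion_module,
                        motion_scale: float = 1.15) -> torch.Tensor:
    # Scale the motion module's residual contribution: a scale of 1.0
    # reproduces the unscaled model, while values > 1 exaggerate motion.
    motion_out = motion_module(hidden_states)
    return hidden_states + motion_scale * (motion_out - hidden_states)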

# Clone the original AnimateDiff codebase and set up its conda environment
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
conda env create -f environment.yaml
conda activate animatediff
# Clone this repository inside AnimateDiff so the config paths below resolve
git clone https://github.com/Lightricks/LongAnimateDiff.git
# Download the RealisticVision checkpoint used by the 32-frame config
bash download_bashscripts/5-RealisticVision.sh
# Generate a 32-frame video (point --pretrained_model_path at an SD 1.5 base model)
python -m scripts.animate --config LongAnimateDiff/configs/RealisticVision-32-animate.yaml --inference_config LongAnimateDiff/configs/long-inference.yaml --L 32 --pretrained_model_path {path to sd-1-5 base model}

To run the 64-frame model:

  • Modify the temporal_position_encoding_max_len parameter in LongAnimateDiff/configs/long-inference.yaml to 128 (a sketch of this edit follows the command below).
  • Download the model from Google Drive / HuggingFace and place it in models/Motion_Module.
  • Download epicRealismNaturalSin from civit.ai and save it as models/DreamBooth_LoRA/epicRealismNaturalSin.safetensors.

# Generate a 32-, 48-, or 64-frame video with the 64-frame model
python -m scripts.animate --config LongAnimateDiff/configs/EpicRealism-64-animate.yaml --inference_config LongAnimateDiff/configs/long-inference.yaml --L {select number from 32|48|64} --pretrained_model_path {path to sd-1-5 base model}
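
For reference, a minimal sketch of the config edit from the first step above; only temporal_position_encoding_max_len is taken from this README, and the surrounding keys are assumptions modeled on typical AnimateDiff inference configs.

# LongAnimateDiff/configs/long-inference.yaml (sketch; surrounding keys are
# assumptions modeled on typical AnimateDiff inference configs)
unet_additional_kwargs:
  motion_module_kwargs:
    temporal_position_encoding: true
    temporal_position_encoding_max_len: 128  # raise to 128 for the 64-frame model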

Disclaimer

This project is released for academic use. We disclaim responsibility for user-generated content. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Use the generative model responsibly, adhering to ethical and legal standards.

Acknowledgements

https://github.com/guoyww/AnimateDiff
