SDXL mm training (crowdsource/crowdfunding) + bunch of small questions #388
arturkolotilov started this conversation in General
Replies: 1 comment
-
Amazing, I don't have an answer, but just a question about the videos. Are those video workflows that use the refiner? How did you manage to integrate it for those experiments?
-
Hi Jared, first of all I want to say thank you! You've created some of the most amazing and important stuff. I've been experimenting with AD for a whole month now and have a number of questions.
The AD SD15 motion module is great in terms of consistency, but the overall aesthetic and style of the images is a bit less cinematic compared to SDXL.
With SDXL, though, I'm struggling: it's always a trade-off between consistency and image quality. My hypothesis is that quality could improve significantly with an SDXL base + SDXL refiner workflow, but the AD node with the SDXL motion module does not accept the SDXL refiner and asks for an SDXL base model only.
Maybe there is some trick or an easy way to change something in the code and plug the SDXL refiner model into the AD node? (A sketch of the base + refiner handoff I have in mind is below, after these options.)
OR, if there is no shortcut, maybe I can help you somehow to train an SDXL motion module so it will generate jaw-dropping output?
OR maybe I'm simply doing something wrong, and we could arrange a 1-2 hour consulting/master-class session on a paid basis?
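To be concrete, here is the kind of two-stage handoff I'm hypothesizing about, sketched in plain diffusers rather than ComfyUI/AnimateDiff, so it's only an illustration of the base + refiner split, not a working AD workflow (the model IDs are the official Stability AI SDXL checkpoints; the prompt and step counts are just placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Base pipeline: runs the first part of the denoising schedule.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# Refiner pipeline: finishes the schedule on the base's latents,
# sharing the second text encoder and VAE to save memory.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

prompt = "cinematic wide shot, golden hour, 35mm film still"

# Base denoises roughly the first 80% of the steps and hands off latents.
latents = base(
    prompt=prompt,
    num_inference_steps=30,
    denoising_end=0.8,
    output_type="latent",
).images

# Refiner takes over for the remaining ~20% of the steps.
image = refiner(
    prompt=prompt,
    num_inference_steps=30,
    denoising_start=0.8,
    image=latents,
).images[0]

image.save("base_plus_refiner.png")
```

My question is essentially whether a refiner stage like this can coexist with the AD motion module, or whether the node has to stay base-only.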
Here are some videos from my experiments
sdxl https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/170331051/6c2cb52f-b236-4dd4-93a6-48ebaa420f7c
sd15 https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/170331051/d2f3ad64-df1f-4dd2-bc1f-70efeea744c8