Does this need too much training data to get results like these? I'm sure there's a pre-trained model just for Natalie Portman. I wonder if we can train DFL using only a two-minute video.
I don't think so; I just found this out today and I don't know what to do. I have the Wav2Lip video I created, and he said to use the Wav2Lip-GAN video as the DST for DeepFaceLab, but I don't know where to start.
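For anyone stuck at the same step, here is a minimal sketch of that handoff, assuming the standard Wav2Lip repo layout and a typical DeepFaceLab workspace folder (all paths and file names are placeholders):

```python
import shutil
import subprocess
from pathlib import Path

# 1) Lip-sync the destination clip with the Wav2Lip-GAN checkpoint.
#    inference.py with --checkpoint_path, --face and --audio is the standard
#    Wav2Lip invocation; the input clip and audio file here are placeholders.
subprocess.run(
    ["python", "inference.py",
     "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
     "--face", "destination_clip.mp4",
     "--audio", "speech.wav"],
    cwd="Wav2Lip",
    check=True,
)

# 2) Wav2Lip writes the synced clip to results/result_voice.mp4 by default.
#    Copy it into the DFL workspace as data_dst.mp4, then run DFL's usual
#    extract / train / merge steps on it.
workspace = Path("DeepFaceLab/workspace")  # assumed workspace location
shutil.copy("Wav2Lip/results/result_voice.mp4", workspace / "data_dst.mp4")
```

From there DFL treats the Wav2Lip output like any other destination video; the swap/restoration happens on top of the already-synced mouth.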
It is a good idea. DFL serves as a restoration step: it reconstructs the proper shapes, and it also usually "de-agifies" the faces, because the models average/smooth the features.
I had a related idea, but syncing without Wav2Lip and letting DFL smooth the roughly rendered lips or smooth out the artifacts.
@sokffa Re the "too much data" question: I guess not. It depends on the video, and you can also create a medley from many sources and pack a very good set into 2 minutes (or better, the respective frames rather than one video, because they could come from very different sources, lighting, etc.).
25 or 30 fps x 120 s is 3000-3600 frames, which is decent (especially for deepfakes of interviews like this one, shot in constant lighting, etc.)
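As a back-of-the-envelope check, and one way to pool such a medley, here is a small sketch; it assumes ffmpeg is on the PATH and the clip names are placeholders:

```python
import subprocess
from pathlib import Path

FPS = 25          # extraction rate; 25-30 fps is typical for interview footage
BUDGET_S = 120    # roughly 2 minutes of usable footage in total
print(f"Expected frames: ~{FPS * BUDGET_S}")  # 25 fps * 120 s = 3000 frames

# Dump frames from several clips into one folder so the face extractor
# sees a single mixed source set (clip names are placeholders).
clips = ["interview_a.mp4", "interview_b.mp4", "closeup_broll.mp4"]
out_dir = Path("data_src_frames")
out_dir.mkdir(exist_ok=True)

for i, clip in enumerate(clips):
    subprocess.run(
        ["ffmpeg", "-i", clip, "-vf", f"fps={FPS}",
         str(out_dir / f"clip{i}_%05d.png")],
        check=True,
    )
```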
https://www.youtube.com/watch?v=Kwhqj93wyXU