I noticed that this project implements diffusion-based motion generation using only 50 timesteps for both training and sampling, which is far fewer than the T = 1000 used in the original DDPM; DDIM accelerates sampling, but it still trains with the full 1000 steps.
I'm very interested in the theoretical foundation and implementation details behind this choice, as I couldn't find any reference to training with such a reduced timestep count in the original DDPM/DDIM papers.
Could you please share:
The research paper or theoretical work this implementation is based on?
If this is a novel approach, what modifications were made to enable stable training with reduced timesteps?
Any empirical observations or ablation studies that led to choosing 50 as the optimal number of timesteps?
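To make the question concrete, here is my rough sketch (schedule endpoints and shapes are my own assumptions, not taken from this repo) of what naively reusing a standard DDPM linear beta schedule at T = 50 looks like. Notice that the final cumulative signal coefficient stays far from zero, which is why I suspect the schedule must be modified for stable training:

```python
import numpy as np

# Hypothetical sketch: standard DDPM forward process, but with T = 50
# instead of the usual T = 1000. Beta endpoints below are the common
# 1000-step defaults, reused naively; this is an assumption, not the
# repo's actual schedule.
T = 50
betas = np.linspace(1e-4, 0.02, T)   # linear beta schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative product: \bar{alpha}_t

rng = np.random.default_rng(0)
x0 = rng.standard_normal((1, 66))    # e.g. one pose vector (shape is illustrative)

# q(x_t | x_0): closed-form forward noising used during training.
t = int(rng.integers(0, T))          # uniformly sampled training timestep
noise = rng.standard_normal(x0.shape)
x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# For x_T to be (approximately) pure noise we would want
# alpha_bars[-1] close to 0; with the 1000-step endpoints kept at
# T = 50 it remains around 0.6, so substantial signal survives.
print(alpha_bars[-1])
```

So my guess is that either the beta endpoints are rescaled for T = 50 or some other adjustment is made; I'd love to know which.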
I’d really appreciate your help!