Thanks for sharing this great work! May I ask what could be the possible reason if the trained model always produces the same, static output, even when starting from quite different Gaussian noise? This is more of a general question; any insights would be greatly appreciated!
Thanks
Two things that can cause this come to mind: (1) too little data causes overfitting, and (2) the data is not normalized properly, so the noise at x_T cannot cover it completely. To debug (2), try visualizing the training inputs x_t for different values of t.
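For reference, here is a minimal sketch of that debugging step, assuming a standard DDPM forward process q(x_t | x_0) = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps; the noise schedule and tensor shapes below are illustrative assumptions, not this repo's actual config:

```python
# Sketch: sample x_t at several timesteps via the DDPM forward process
# and check that x_T looks like pure standard Gaussian noise.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear schedule (assumption)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product alpha_bar_t

def noised_input(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) for a single timestep t."""
    eps = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps

x0 = torch.randn(1, 263, 196)  # hypothetical (batch, features, frames) motion tensor
for t in [0, 250, 500, 750, T - 1]:
    xt = noised_input(x0, t)
    # If x0 is normalized correctly, mean/std at t = T-1 should be close to (0, 1).
    print(f"t={t:4d}  mean={xt.mean().item():+.3f}  std={xt.std().item():.3f}")
```

If the statistics at t = T-1 are far from (0, 1), the normalization is the likely culprit.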
Thank you for the reply! For (1), the dataset currently being used is larger than the ones in the paper, with more than 10k sequences and 800k frames. For (2), the noise at x_T is completely Gaussian, but I had indeed forgotten the normalization. Thanks for pointing it out.
One more question I'd like to ask: the model currently predicts x_0 directly; how does that compare to predicting the noise? Predicting x_0 from the Gaussian noise x_T during training yields training pairs (x_0, x_T), but x_T changes every time, which seems to make the mapping between them challenging to learn.
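For concreteness, here is a minimal sketch contrasting the two objectives under a standard DDPM forward process; note that in either case the network is trained on (x_t, t) for all timesteps t, not on x_T alone. The `model` signature and schedule below are illustrative assumptions, not this repo's actual code:

```python
# Sketch: one training step's loss, either predicting x_0 or the noise eps.
import torch

def diffusion_loss(model, x0, alpha_bar, predict_x0: bool):
    t = torch.randint(0, len(alpha_bar), (x0.shape[0],))
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast alpha_bar_t
    eps = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps        # x_t ~ q(x_t | x_0)
    pred = model(xt, t)                                  # hypothetical signature
    if predict_x0:
        # x_0-prediction: regress the clean sample directly.
        return ((pred - x0) ** 2).mean()
    # eps-prediction: regress the injected noise; x_0 is then recovered as
    #   x_0 = (x_t - sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_bar_t),
    # so the two objectives differ only by a t-dependent reweighting.
    return ((pred - eps) ** 2).mean()
```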
Meanwhile, may I respectfully ask why the motion generation process is repeated 3 times? Can we directly generate N motion sequences as a batch in one run?