FVD score not conform to what's reported in the paper #15
That's odd; it runs fine on my end (around 104 FVD). Are you running the command below?
Yes, but I modified the script so that it works with BAIR in MP4 format; I assume this wouldn't make much difference? PS: I changed
It might make FVD a little worse, since FVD is sensitive to noise, and saving as MP4 adds compression noise. But it shouldn't be that much worse. 1000+ FVD suggests to me that either the samples or the real test examples are incorrect. Does sampling produce good-quality samples? Similarly, it might help to visualize some instances of the test set to double-check that the data is correct.
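One cheap way to check how much noise the MP4 round trip adds is to compare the decoded frames against the originals with PSNR. A minimal NumPy sketch, assuming you have already loaded both versions as uint8 arrays of shape (T, H, W, 3) (the loading step is not shown):

```python
import numpy as np

def psnr(orig: np.ndarray, decoded: np.ndarray) -> float:
    """Peak signal-to-noise ratio (dB) between two uint8 frame stacks."""
    mse = np.mean((orig.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames, no re-encoding loss
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

As a rough rule of thumb, PSNR well above 35 dB means the re-encoding noise is mild; if it is much lower, the MP4 conversion itself could plausibly move FVD.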
I also tried sampling MP4s with the pre-trained model first, then calculated FVD with the original implementation, and got around 500. I visualized the generated clips: their first frames (the conditioning frames) match the corresponding ground-truth clips, but the motion of the robot arm does not match the ground truth. This is normal, right? Since the model is only conditioned on the first frame, not on the motion. By the way, thanks for the quick reply :)
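When two evaluation paths disagree this much, it can help to separate the I3D feature extraction from the distance computation. FVD is the Fréchet distance between Gaussians fitted to real and generated I3D features; below is a minimal sketch of just that second step, assuming the (N, D) feature arrays come from your own I3D extractor (not shown here):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two (N, D) feature sets."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # sqrtm can pick up tiny imaginary parts from numerical error; drop them.
    covmean = linalg.sqrtm(cov_r @ cov_f).real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Feeding the same feature array in twice should give a distance of roughly zero; if it doesn't, the distance code, rather than the samples, is the problem.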
Yes, the motion most likely would not match the ground truth. I'm not sure what the cause of this issue is. Could you share what some samples look like?
Hi, I tested the BAIR pre-trained VideoGPT model, but your evaluation script reported an FVD of 1000+, while FVD* was around 100. Could there be a mistake in the evaluation script?