Adding FID statistics calculation as an option (can now do "train", "eval", or "fid_stats") #5
base: main
Conversation
After quickly looking through the code, I think you should always disable uniform dequantization, and in addition:
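For illustration, here is a minimal sketch of what disabling uniform dequantization at dataset-construction time could look like. It assumes score_sde-style helpers (a `datasets.get_dataset` with `uniform_dequantization` and `evaluation` flags); treat the exact signature and return values as assumptions, not the reviewer's actual suggested edit:

```python
# Minimal sketch: build the dataset for FID statistics with uniform
# dequantization forced off. The get_dataset signature and return values
# are assumptions modeled on score_sde-style code.
import datasets  # the repo's dataset module (assumed)

def get_fid_dataset(config):
  # Uniform dequantization adds noise in [0, 1) to integer pixel values,
  # which is useful for likelihood training but would corrupt the reference
  # images used for FID, so it is disabled unconditionally here.
  train_ds, eval_ds, _ = datasets.get_dataset(
      config,
      uniform_dequantization=False,  # always off when collecting FID stats
      evaluation=True)               # single epoch, no shuffling/repeat (assumed)
  return train_ds
```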
Removed lines that should not be there.
Alright, will change this.
Will now test to see if I can get the same FID on a model.
Very close results, but not exactly the same, sadly! FID: 534.9150390625. Let me know if you figure out anything that should be changed. At least it seems very close now.
Our stats files were computed on TPUs, where they replace … Why are the FID scores so large?
Then that might be fine; floating-point errors are acceptable. It's trained for 8 iterations on a tiny batch 😂.
Good news: I tried your pre-trained model (cifar10_continuous_ve) at checkpoint 24, computing FID on only 2k samples (so it finishes quickly enough). The results from two runs with the FID statistics from the new code:
The results from one run with the FID statistics from your Google Drive:
So the new code works. Thanks for your help! Alexia
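For reference, the FID numbers compared above come from the standard Fréchet distance between two Gaussians fitted to Inception features. A self-contained sketch of computing it directly from two stats files follows; the `"mu"`/`"sigma"` key names and file paths are assumptions, since the repo's stats files may store raw pooled features instead:

```python
# Frechet distance between two Gaussian image statistics:
# FID = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))
import numpy as np
import scipy.linalg

def fid_from_stats(mu1, sigma1, mu2, sigma2, eps=1e-6):
  diff = mu1 - mu2
  # sqrtm can pick up small imaginary components from numerical error.
  covmean, _ = scipy.linalg.sqrtm(sigma1 @ sigma2, disp=False)
  if not np.isfinite(covmean).all():
    # Regularize near-singular covariance products.
    offset = np.eye(sigma1.shape[0]) * eps
    covmean, _ = scipy.linalg.sqrtm((sigma1 + offset) @ (sigma2 + offset), disp=False)
  covmean = np.real(covmean)
  return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * np.trace(covmean)

# Hypothetical usage: compare stats from the new mode against the reference.
# a = np.load("new_stats.npz"); b = np.load("drive_stats.npz")
# print(fid_from_stats(a["mu"], a["sigma"], b["mu"], b["sigma"]))
```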
Hey, thanks! I am using @AlexiaJM's fork with my custom config. I get the following error:
Update
Feel free to correct me if I am wrong. I am not sure if this is the best/right solution.
With these small changes, you can get the FID statistics by running with --mode "fid_stats". It loops through the dataset for one epoch and extracts the FID statistics. This makes it easier to add new datasets.
The only issue is that I am not getting the same FID when evaluating a model with the Google Drive FID statistics as opposed to the ones produced by this new mode. Can you verify that my implementation is correct?
I could be misusing the scaler or inverse scaler.
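For concreteness, here is a hedged sketch of what such an fid_stats mode might look like, including where the scaler question bites: Inception features should be computed on images in their original range, so the training scaler (e.g. mapping [0, 1] to [-1, 1]) must not be applied before feature extraction, or the inverse scaler must undo it first. Helper names (`evaluation.get_inception_model`, `evaluation.run_inception_distributed`, the `"pool_3"` key) follow score_sde conventions, but their exact signatures are assumptions:

```python
# Hedged sketch of an fid_stats mode: one pass over the dataset, accumulate
# Inception pool_3 features, then save mean/covariance. Helper names and
# signatures are assumptions modeled on score_sde; adapt to the actual repo.
import numpy as np
import datasets, evaluation  # repo modules (assumed)

def compute_fid_stats(config, stats_path="fid_stats.npz"):  # hypothetical path
  # One epoch over the clean dataset, no uniform dequantization.
  train_ds, _, _ = datasets.get_dataset(
      config, uniform_dequantization=False, evaluation=True)
  inception_model = evaluation.get_inception_model()

  pools = []
  for batch in iter(train_ds):
    # The pipeline yields images in [0, 1]. Do NOT apply the training scaler
    # here (it would shift images to e.g. [-1, 1] and skew the statistics);
    # instead convert straight to uint8 as Inception preprocessing expects.
    images = np.clip(batch["image"]._numpy() * 255.0, 0, 255).astype(np.uint8)
    latents = evaluation.run_inception_distributed(images, inception_model)
    pools.append(latents["pool_3"])

  pools = np.concatenate(pools, axis=0)
  np.savez_compressed(stats_path,
                      mu=np.mean(pools, axis=0),
                      sigma=np.cov(pools, rowvar=False))
```

If the mismatch against the Google Drive stats persists, the scaler is the first thing to audit: applying `scaler()` before feature extraction, or forgetting `inverse_scaler()` on already-scaled batches, changes the mean and covariance and therefore the resulting FID.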