Generator loss is different from the original article in Alpha_WGAN_ADNI_train.ipynb notebook. #11
Comments
So, have you tested which one is better?
I have changed the loss functions of the generator and the discriminator. I recommend checking whether there is mode collapse (i.e., whether the discriminator or the generator wins outright) and looking at more recent work on 3D generation. Have you tried using spectral normalisation? That can be a big improvement.
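To make the spectral-normalisation suggestion concrete: the technique rescales a weight matrix by its largest singular value, usually estimated by power iteration. A minimal pure-Python sketch of that estimate (not the repo's code; in the actual PyTorch notebook one would typically wrap the critic's layers with `torch.nn.utils.spectral_norm` instead):

```python
import math
import random

def mat_vec(W, v):
    # W @ v for a list-of-lists matrix
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def mat_t_vec(W, u):
    # W.T @ u
    return [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(len(W[0]))]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def spectral_norm(W, n_iters=100):
    """Estimate the largest singular value of W by power iteration --
    the quantity spectral normalization divides critic weights by."""
    random.seed(0)
    u = [random.gauss(0, 1) for _ in W]
    for _ in range(n_iters):
        v = mat_t_vec(W, u)
        nv = norm(v)
        v = [x / nv for x in v]
        u = mat_vec(W, v)
        nu = norm(u)
        u = [x / nu for x in u]
    # u and v now approximate the top singular vectors, so u.T @ W @ v ~ sigma_max
    return sum(ui * wi for ui, wi in zip(u, mat_vec(W, v)))
```

Dividing each weight matrix by this estimate bounds the critic's Lipschitz constant, which is why it tends to stabilise GAN training.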
Thanks for your reply. I previously used a 3D GAN and also found mode collapse in the output (different noise inputs produced the same voxels). If I run into the problem here, what can I do to solve it? Train the generator more often and the discriminator less often? Thank you again!
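The update-ratio idea being asked about can be sketched as a plain alternating loop (a hedged illustration with hypothetical `update_d`/`update_g` one-step callbacks, not this repo's training loop):

```python
def train(n_steps, update_d, update_g, g_steps_per_d_step=2):
    """Alternate critic and generator updates, running the generator
    step more often when the critic tends to overpower it.
    update_d / update_g are hypothetical callbacks that each perform
    one optimizer step for their network."""
    for _ in range(n_steps):
        update_d()  # one discriminator/critic step
        for _ in range(g_steps_per_d_step):
            update_g()  # several generator steps per critic step
```

Note that WGAN-style setups usually do the opposite (more critic steps per generator step), so the right ratio is something to tune while watching both losses.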
The difficulty can also come from the resolution of the images. At high resolution it is much harder to keep the training stable. I think you will need more than one morning of training: this architecture requires a lot of computing power and time. For example, in one of my projects I trained for 6 days and the results improved dramatically.
In the article https://arxiv.org/pdf/1908.02498.pdf, the generator loss is calculated using only the d_loss and the l1_loss; the c_loss is used only in the lossCodeDiscriminator calculation.
Please, let me know if what I said is correct.
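If that reading is right, the split could be sketched like this (a pure-Python stand-in with WGAN-style means over illustrative score lists; `lam` and the function names are assumptions for illustration, not the notebook's identifiers):

```python
def mean(xs):
    return sum(xs) / len(xs)

def generator_loss(d_fake_scores, x_real, x_rec, lam=10.0):
    """Generator objective as described in the comment above:
    adversarial term from the image critic (d_loss) plus an L1
    reconstruction term (l1_loss). No c_loss term appears here."""
    d_loss = -mean(d_fake_scores)  # push the critic's scores on fakes up
    l1_loss = mean([abs(a - b) for a, b in zip(x_real, x_rec)])
    return d_loss + lam * l1_loss

def code_discriminator_loss(c_real_scores, c_fake_scores):
    """c_loss enters only the code discriminator's objective
    (WGAN-style: raise scores on prior codes, lower on encoded ones)."""
    return mean(c_fake_scores) - mean(c_real_scores)
```

The WGAN-style signs are an assumption based on the paper's use of a Wasserstein critic; the key point is just which terms feed which objective.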