Confusion about the my_layer_norm function and GAN loss function #2
Comments
I have the same confusion about the GAN loss part... the GAN loss in the code does not seem to do adversarial training.
The same question.
g_fake_label = torch.ones_like(g_fake).cuda() is wrong for a GAN; g_fake_label = torch.zeros_like(g_fake).cuda() is right. The author hides the problem with parser.add_argument('--adv_weight', type=float, default=0.01, help='loss weight for adversarial loss'), so the adversarial loss has almost no effect when training netG.
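For what it's worth, here is the textbook BCE-with-logits convention as a sketch (not this repo's exact loss; d_real_logits and d_fake_logits are placeholder names for the discriminator outputs):

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def d_loss(d_real_logits, d_fake_logits):
    # Discriminator: push scores on real patches toward 1
    # and scores on generated patches toward 0.
    real = bce(d_real_logits, torch.ones_like(d_real_logits))
    fake = bce(d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake

def g_loss(d_fake_logits):
    # Generator: push the discriminator's score on fakes toward
    # the *real* label. If a codebase labels real as 0 instead of 1,
    # this target has to flip to zeros_like as well, otherwise the
    # generator is trained to make its output look fake -- i.e. no
    # adversarial pressure.
    return bce(d_fake_logits, torch.ones_like(d_fake_logits))
```

Since this repo labels real images with zeros (see the original report below), the consistent generator target here would indeed be zeros.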
Hi, thank you for your excellent work; I got a good result when running the demo. But while reading the source code, I ran into some questions about the GAN loss function.
In the paper, the discriminator loss for fake_img should be self.loss_fn(d_fake, gauss(1 - mask)), but I see that you just do gauss(mask). Is there something wrong with my understanding?
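For reference, such a soft patch target could be built roughly like this (a sketch; gaussian_blur, the kernel size, and the sigma are my assumptions, not necessarily what the repo uses):

```python
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def soft_fake_target(mask, d_out_size):
    # mask: (N, 1, H, W), 1 inside the hole, 0 elsewhere.
    # Blur the binary mask so the patch labels fade smoothly at
    # the hole boundary, then resize to the discriminator's output
    # resolution. Kernel size and sigma are illustrative.
    soft = gaussian_blur(mask, kernel_size=[7, 7], sigma=[3.0, 3.0])
    return F.interpolate(soft, size=d_out_size,
                         mode='bilinear', align_corners=False)
```

Note that gauss(mask) and gauss(1 - mask) label opposite regions as fake, so the two can only agree if the real/fake labels are flipped as well, which may be the same flip as my next point.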
What's more, the discriminator loss for real_img should be self.loss_fn(d_real, d_real_label), where d_real_label is torch.ones(...), but you write it as torch.zeros(...).
By the way, could you explain what my_layer_norm does in the AOT block?
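My guess is that it is a per-sample, per-channel spatial normalization applied to the gating branch before the sigmoid, along these lines (a sketch from my reading; the constants are assumptions and may differ from the released code):

```python
import torch

def my_layer_norm(feat):
    # Normalize each channel of each sample over its spatial
    # dimensions (H, W), then rescale so the gating branch sees a
    # wider range of values before the sigmoid. The constants 2,
    # -1, and 5 here are illustrative assumptions.
    mean = feat.mean((2, 3), keepdim=True)
    std = feat.std((2, 3), keepdim=True) + 1e-9
    feat = 2 * (feat - mean) / std - 1.0
    return 5 * feat
```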
Thanks.