tensorflow 2.0 implementation #16
In your code you use recon_loss_A_l = tf.losses.absolute_difference(A, ABA, ...). In TensorFlow 2.0 there is no tf.losses.absolute_difference and no tf.losses.Reduction.MEAN anymore. Could you give me a hint what to use instead? Thanks in advance.
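For context on the question above: in TF 2.x the removed `tf.losses.absolute_difference` with `tf.losses.Reduction.MEAN` reduces to a mean absolute error, which can be written as `tf.reduce_mean(tf.abs(A - ABA))` or computed with `tf.keras.losses.MeanAbsoluteError()`. A minimal NumPy sketch of the same computation (variable names `A`/`ABA` taken from the snippet above):

```python
import numpy as np

def recon_loss_mae(A, ABA):
    """Mean absolute error, equivalent to TF 1.x
    tf.losses.absolute_difference(A, ABA, reduction=tf.losses.Reduction.MEAN).
    In TF 2.x, use tf.reduce_mean(tf.abs(A - ABA))
    or tf.keras.losses.MeanAbsoluteError()(A, ABA)."""
    return np.mean(np.abs(A - ABA))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
ABA = np.array([[1.5, 2.0], [2.0, 4.0]])
print(recon_loss_mae(A, ABA))  # 0.375
```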
hi @Skylion007, I have a question about the losses. In https://github.com/brownvc/ganimorph/blob/master/GAN.py, inside def _build_multigpu_gan_trainer, you get the costs with model.build_graph() and return [model.d_loss, model.g_loss]. Then you minimize these two losses with the optimizer. But there are four neural nets, right? Two generators and two discriminators, separated in your code with tf.variable_scope("A") and ("B"). Doesn't one need two g_loss and two d_loss values for the optimizer? How is this solved in your code? Or what am I not understanding? :)
There are two generators, intertwined CycleGAN-style, and two discriminators. You only need a single g_loss and d_loss for the model. The two directions, A->B and B->A, are separated by the appropriate variable scopes. The discriminators are likewise separated in a similar way, to distinguish between samples in domain A and samples in domain B. Each generator is updated by at least two separately computed gradients per step using weight sharing. TensorFlow takes care of all this via appropriate variable scoping. Hope this helps.
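The scoping mechanism described above can be sketched as follows. The scope names `gen/A`, `gen/B`, `discrim/A`, `discrim/B` below are illustrative assumptions, not the exact names in GANimorph; the point is only that one optimizer per loss selects all variables whose names share a prefix, which mirrors TF 1.x `tf.trainable_variables(scope=...)`:

```python
# Hypothetical sketch: how a single g_loss/d_loss can still update two
# generators/discriminators each, via variable-name (scope) prefixes.
# Scope names here are assumptions for illustration.
all_variables = [
    "gen/A/conv1/kernel", "gen/A/conv1/bias",
    "gen/B/conv1/kernel", "gen/B/conv1/bias",
    "discrim/A/conv1/kernel", "discrim/B/conv1/kernel",
]

def vars_in_scope(prefix):
    # Mirrors tf.trainable_variables(scope=prefix): filter by name prefix.
    return [v for v in all_variables if v.startswith(prefix)]

g_vars = vars_in_scope("gen")      # both generators -> one g_loss
d_vars = vars_in_scope("discrim")  # both discriminators -> one d_loss
print(len(g_vars), len(d_vars))  # 4 2
```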
Does that mean you apply the same g_loss/d_loss to both generators/discriminators? I am mostly following this tutorial, https://www.tensorflow.org/beta/tutorials/generative/cyclegan, which I have largely implemented with your code, but there the gradients seem to be computed from separate losses. That does not map onto your code, because it builds one g_loss/d_loss from all the image losses (A, B, ABA, BAB), so I cannot build two g_loss/d_loss values from (A, ABA) and (B, BAB). I don't really know how to describe my confusion :) I will look into it more, thanks.
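To illustrate why a single g_loss suffices: in a CycleGAN-style setup the combined generator loss sums the adversarial terms for both directions plus both cycle reconstructions, and gradients flow from that one scalar to each generator's own variables. A minimal NumPy sketch (the lambda weight and loss decomposition are illustrative assumptions, not the exact GANimorph formulation):

```python
import numpy as np

def mae(x, y):
    # mean absolute error between two arrays
    return np.mean(np.abs(x - y))

def total_g_loss(A, B, ABA, BAB, adv_A, adv_B, lam=10.0):
    """One scalar loss covering both generators: adversarial terms for
    each direction plus both cycle reconstructions (A->B->A and B->A->B).
    Gradients w.r.t. each generator's variables flow from this single
    value, so separate per-generator losses are unnecessary."""
    cycle = mae(A, ABA) + mae(B, BAB)
    return adv_A + adv_B + lam * cycle

A = np.ones((2, 2)); ABA = A + 0.1   # imperfect A->B->A reconstruction
B = np.zeros((2, 2)); BAB = B + 0.2  # imperfect B->A->B reconstruction
print(total_g_loss(A, B, ABA, BAB, adv_A=0.5, adv_B=0.5))  # ~4.0
```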
hi,
I started a TensorFlow 2.0 implementation of GANimorph. First, to learn more about GANs; second, it seemed more usable with GCP and more GPUs/TPUs (which would reduce training time), for hyperparameter tuning on GCP, and other things. It can also be trained/developed for free with https://colab.research.google.com/notebooks/welcome.ipynb (GPU/TPU support), where you can clone the GitHub repository from inside a Colab notebook. Very simple.
It would be nice if someone could review the code or help write it :)
Or download/clone it:
https://github.com/flobotics/colab.git
flo