
tensorflow 2.0 implementation #16

Open
flobotics opened this issue May 4, 2019 · 5 comments
flobotics commented May 4, 2019

hi,
I started a TensorFlow 2.0 implementation of GANimorph. First, to learn more about GANs; second, it seemed more usable with GCP and more GPUs/TPUs (which would reduce training time), or for hyperparameter tuning on GCP and other things. It can also be trained/developed for free with https://colab.research.google.com/notebooks/welcome.ipynb (GPU/TPU support), where you can clone the GitHub repository from inside a Colab notebook. Very simple.

It would be nice if someone could review the code or help write it :)

Or download/clone it:
https://github.com/flobotics/colab.git

flo

flobotics commented May 11, 2019

In your code you use:

recon_loss_A_l = tf.losses.absolute_difference(A, ABA,
    reduction=tf.losses.Reduction.MEAN)

In TensorFlow 2.0 there is no tf.losses.absolute_difference or tf.losses.Reduction.MEAN anymore; could you give me a hint what to use instead?

thanks in advance
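For reference, a minimal TensorFlow 2.x sketch of a possible replacement (the tensor values below are illustrative, not from the repository): the TF1 call above is a mean absolute error, which can be written directly with tf.reduce_mean/tf.abs or via tf.keras.losses.MeanAbsoluteError.

```python
import tensorflow as tf

# Hypothetical stand-ins for the original A and reconstructed ABA tensors.
A = tf.constant([[1.0, 2.0], [3.0, 4.0]])
ABA = tf.constant([[1.5, 2.0], [2.0, 4.0]])

# TF1: tf.losses.absolute_difference(A, ABA, reduction=tf.losses.Reduction.MEAN)
# TF2, written directly:
recon_loss_A_l = tf.reduce_mean(tf.abs(A - ABA))

# TF2, via the Keras losses API (same value for a full-tensor mean):
mae = tf.keras.losses.MeanAbsoluteError()
recon_loss_A_keras = mae(A, ABA)
```

Both forms reduce to a scalar mean over all elements, matching the old Reduction.MEAN behavior.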

flobotics commented
Got it running, but it does not seem to work correctly yet. The resulting images are mostly red or blue; could it be something with the RGB channel layers, perhaps?

Training with human images sometimes gives a result that looks like a human, but mostly it looks as described:

(image: ganipic)
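One common cause of red- or blue-tinted outputs is a BGR/RGB channel-order mismatch (for example, OpenCV reads images in BGR order while most TensorFlow pipelines assume RGB). A hypothetical NumPy sketch of the check/fix, with made-up pixel values:

```python
import numpy as np

# Hypothetical 2x2 image whose last axis is in BGR order
# (e.g. as loaded by cv2.imread).
bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Reversing the channel axis swaps BGR <-> RGB.
rgb = bgr[..., ::-1]
```

If images look correct after this swap at load time (or just before saving results), the channel order was the culprit rather than the model.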

flobotics commented

Hi @Skylion007, I have a question about the losses. In https://github.com/brownvc/ganimorph/blob/master/GAN.py, inside def _build_multigpu_gan_trainer, you compute the costs with model.build_graph() and return [model.d_loss, model.g_loss]. You then minimize these two losses with the optimizer. But there are four neural nets, aren't there? Two generators and two discriminators, separated in your code by tf.variable_scope("A") and ("B"). Doesn't one need two g_loss and two d_loss for the optimizer? How is this solved in your code? Or what am I not understanding? :)

Skylion007 (Contributor) commented

There are two generators that are intertwined in CycleGAN-style fashion, and two discriminators. You only need a single g_loss and d_loss for the model. The two directions, A->B and B->A, are separated by the appropriate variable scope. The discriminators are likewise separated in a similar way, to distinguish between samples in domain A and samples in domain B. Each generator is updated by at least two separately computed gradients per step using weight sharing. TensorFlow takes care of all this via appropriate weight scoping. Hope this helps.
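A hedged TF2/Keras sketch of this idea, using two hypothetical stand-in generators (the real GANimorph generators are convolutional, and the real loss has more terms): one combined g_loss is differentiated with respect to the union of both generators' variables, so a single scalar loss updates both networks in one optimizer step.

```python
import tensorflow as tf

# Hypothetical stand-ins for the two generators (A->B and B->A).
gen_AB = tf.keras.layers.Dense(2, name="gen_AB")
gen_BA = tf.keras.layers.Dense(2, name="gen_BA")

x = tf.ones((1, 2))
gen_AB(x)  # build the layers once so their variables exist
gen_BA(x)

# One variable list covering BOTH generators.
g_vars = gen_AB.trainable_variables + gen_BA.trainable_variables
opt = tf.keras.optimizers.SGD(0.1)

with tf.GradientTape() as tape:
    fake_B = gen_AB(x)          # A -> B
    cyc_A = gen_BA(fake_B)      # B -> A (cycle)
    g_loss = tf.reduce_mean(tf.abs(x - cyc_A))  # single combined loss

# Gradients of the one g_loss flow into both generators' weights.
grads = tape.gradient(g_loss, g_vars)
opt.apply_gradients(zip(grads, g_vars))
```

The equivalent of the TF1 variable scopes here is simply which variables are included in g_vars; splitting the loss into two per-direction losses is not required as long as the variable list covers both nets.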

flobotics commented

Does that mean you apply the same g_loss/d_loss to both generators/discriminators? I am mostly following this tutorial, https://www.tensorflow.org/beta/tutorials/generative/cyclegan, which I have largely implemented with your code, but there the gradients seem to be computed with separate losses. That does not work with your code, because it builds the one g_loss/d_loss from all image losses (A, B, ABA, BAB), so I cannot build two g_loss/d_loss pairs from (A, ABA) and (B, BAB). I don't really know how to describe my confusion :) I will look into it more, thanks.
