How to generate images from the PixelCNN? #13
Comments
I am having the same issue! I am wondering whether `train_dataset._label_encoder()` is constant: after training, when I run the evaluation script, the labels seem to be mapped to a different integer encoding in every run. Also, is there a reason that shuffle is turned off for the training data? Thanks!
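On the non-deterministic label encoding: one way to keep the mapping stable across runs (a minimal sketch, assuming the encoder is simply a dict built while enumerating class names, which may not match the repo's actual `_label_encoder` internals) is to sort the class names before assigning integer ids, and to persist the mapping alongside the checkpoint so that the evaluation script reuses the exact same encoding:

```python
import json

def build_label_encoder(class_names):
    # Sorting makes the name -> id assignment deterministic across runs,
    # independent of the order in which class folders are discovered.
    return {name: idx for idx, name in enumerate(sorted(class_names))}

def save_label_encoder(encoder, path):
    with open(path, 'w') as f:
        json.dump(encoder, f)

def load_label_encoder(path):
    with open(path) as f:
        return json.load(f)
```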
@Hanzy1996 Can you please share the code you used to sample images from the trained PixelCNN model?
@mitkina @Hanzy1996 this is what I was able to achieve (see attached images). @Hanzy1996 is this similar to what you generate?
@enk100 the CIFAR10 images look similar to what I got.
Hey guys, could you please share the code for how you sample from the PixelCNN and then generate these images? I did what @Hanzy1996 suggested but I get really bad images. Can you please write out the steps to go from sampling the PixelCNN to images using the given functions?
To generate fake samples, just add the following at the end of the main method in pixelcnn_prior.py. The label tensor contains the class indices (each int 0-9 corresponds to one of the ten classes in the CIFAR10 dataset); here there are eight samples for each of the first eight classes:

```python
labels = torch.LongTensor([0,0,0,0,0,0,0,0, 1,1,1,1,1,1,1,1,
                           2,2,2,2,2,2,2,2, 3,3,3,3,3,3,3,3,
                           4,4,4,4,4,4,4,4, 5,5,5,5,5,5,5,5,
                           6,6,6,6,6,6,6,6, 7,7,7,7,7,7,7,7]).cuda()
latents = prior.generate(labels)   # sample latent index grids from the PixelCNN prior
samps = model.decode(latents)      # decode the indices with the VQ-VAE decoder
fixed_grid = make_grid(samps, nrow=8, range=(-1, 1), normalize=True)
writer.add_image('generation', fixed_grid, 0)  # assuming `writer` is the SummaryWriter created in main()
```

Then check them in TensorBoard by inserting the following in a Jupyter notebook:

```
%load_ext tensorboard
%tensorboard --logdir logs/pixelcnn_prior
```
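If you prefer a self-contained script instead of editing pixelcnn_prior.py, a rough sketch along the same lines is below. It assumes the `VectorQuantizedVAE` and `GatedPixelCNN` classes from this repo's modules.py, an 8x8 latent grid for 32x32 CIFAR-10 images, and a `generate(label, shape, batch_size)` signature; the checkpoint paths and constructor arguments are placeholders to replace with your own:

```python
import torch
from torchvision.utils import save_image

from modules import VectorQuantizedVAE, GatedPixelCNN  # classes from this repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Placeholder hyperparameters -- match whatever you trained with.
num_channels, hidden_size, k = 3, 256, 512

model = VectorQuantizedVAE(num_channels, hidden_size, k).to(device)
model.load_state_dict(torch.load('models/vqvae.pt', map_location=device))  # hypothetical path
model.eval()

prior = GatedPixelCNN(k, 64, 15, n_classes=10).to(device)
prior.load_state_dict(torch.load('models/pixelcnn_prior.pt', map_location=device))  # hypothetical path
prior.eval()

# Eight samples for each of the first eight CIFAR-10 classes.
labels = torch.arange(8, device=device).repeat_interleave(8)

with torch.no_grad():
    latents = prior.generate(labels, shape=(8, 8), batch_size=labels.size(0))
    samples = model.decode(latents)

save_image(samples, 'samples.png', nrow=8, normalize=True)
```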
The PixelCNN learns to model the prior q(z), in both the paper and the code. For any given classes/labels, the PixelCNN should model their prior q(z), as shown in the code:
pytorch-vqvae/modules.py, line 262 (commit 8d123c0)
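For context, conditional sampling from a PixelCNN prior is autoregressive over the latent grid: each codebook index is sampled from the model's softmax given the indices generated so far and the class label. A rough sketch of that idea (assuming a forward pass of the form `pixelcnn(x, labels)` returning logits of shape (B, K, H, W); the repo's own `generate` may differ in detail):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_latents(pixelcnn, labels, shape=(8, 8)):
    """Autoregressively sample a grid of discrete latent indices,
    one position at a time, conditioned on class labels."""
    device = next(pixelcnn.parameters()).device
    batch_size = labels.size(0)
    x = torch.zeros(batch_size, *shape, dtype=torch.long, device=device)
    for i in range(shape[0]):
        for j in range(shape[1]):
            logits = pixelcnn(x, labels)                  # (B, K, H, W)
            probs = F.softmax(logits[:, :, i, j], dim=-1)  # distribution over K codes
            x[:, i, j] = torch.multinomial(probs, 1).squeeze(-1)
    return x
```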
I first generate the latent indices for some given classes, as in the code:
pytorch-vqvae/modules.py, line 262 (commit 8d123c0)
After I get the indices from q(z), I try to generate images from them using the decoder of the VQ-VAE:
pytorch-vqvae/modules.py, line 142 (commit 8d123c0)
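Decoding goes from the sampled index grid back to pixel space by looking up the codebook embeddings and running the convolutional decoder. Roughly, assuming the model exposes `codebook.embedding` and `decoder` attributes as in this repo (a sketch, not the exact code at the referenced line):

```python
import torch

@torch.no_grad()
def decode_indices(vqvae, latents):
    """Map a (B, H, W) grid of codebook indices to images by embedding
    lookup followed by the VQ-VAE decoder."""
    # (B, H, W) -> (B, H, W, D) -> (B, D, H, W)
    z_q = vqvae.codebook.embedding(latents).permute(0, 3, 1, 2).contiguous()
    return vqvae.decoder(z_q)
```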
However, these generated images look very unrealistic, unlike the reconstruction results.
Can we evaluate the PixelCNN based on the generated images? How can I get realistic images from the prior learned by the PixelCNN?
Best wishes!
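One way to narrow down where quality is lost (a suggestion, not part of the repo): decode the indices of real test images and the indices sampled from the prior side by side. If only the prior samples look bad, the VQ-VAE decoder is fine and the problem lies with the PixelCNN prior, or with a label encoding that differs between training and sampling, as raised above. The sketch assumes the VQ-VAE exposes `encode`/`decode` for index grids and that `model`, `prior`, `images`, and `labels` are already in scope:

```python
import torch
from torchvision.utils import save_image

# `model` is the trained VQ-VAE, `prior` the trained PixelCNN prior, and
# `images`/`labels` a batch from the test loader (all assumed to be in scope).
with torch.no_grad():
    # Reconstruction path: encode real images to codebook indices, then decode.
    recon_latents = model.encode(images)
    recons = model.decode(recon_latents)

    # Prior path: sample codebook indices from the PixelCNN, then decode.
    sampled_latents = prior.generate(labels, shape=recon_latents.shape[1:],
                                     batch_size=labels.size(0))
    samples = model.decode(sampled_latents)

save_image(torch.cat([recons, samples]), 'recons_vs_samples.png',
           nrow=recons.size(0), normalize=True)
```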