A Guide To Run The Code #8
Comments
Sorry, but I have a question. Is it necessary to train the VGG16 ourselves, or can we just use the pretrained model downloaded from PyTorch (using features[:18] for the VGG forward pass)?
@Jian-danai I think we can just use the pretrained model from PyTorch. I have run the code and the result was still good. The aim of VGG16 is to extract high-level features, and the pretrained model can do this too. By the way, training a VGG model on ImageNet ourselves takes a lot of time lol 😏.
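For reference, a minimal sketch of what is being discussed, assuming torchvision's stock VGG16; the variable names and the 256x256 input are placeholders, not the repo's code:

```python
import torch
import torchvision.models as models

# Pretrained VGG16 from torchvision; features[:18] stops right after conv4_1,
# which is the kind of high-level feature map discussed above.
vgg = models.vgg16(pretrained=True).features[:18].eval()
for p in vgg.parameters():
    p.requires_grad = False  # frozen feature extractor only

with torch.no_grad():
    x = torch.randn(1, 3, 256, 256)  # placeholder for a 256x256 face image
    feat = vgg(x)                    # -> torch.Size([1, 512, 32, 32])
```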
By the way, can you change the batch size? When I change it from 1 to 2, I get this error: Traceback (most recent call last):
Note that the masks and the facial images are loaded together, so when you change the batch_size (e.g. batch_size=2) you also have to change the corresponding expand() lines (see the sketch below). expand(a, b, c, d) creates a tensor of shape (a, b, c, d), and in this case a is the batch size. By the way, there is no real need to change the batch_size: generating makeup images is a 'specific' task, and a larger batch means a heavier demand on GPU memory. With batch=1 (no modification to the author's code except the VGG change I mentioned before), training takes about 5000 MB of GPU memory (I checked via the 'nvidia-smi' command). You may exceed your GPU memory if you enlarge the batch size.
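A minimal sketch of the kind of change meant here (not the author's exact lines; the mask shape is a placeholder):

```python
import torch

batch_size = 2                               # e.g. after raising it from 1

# expand(a, b, c, d) broadcasts a tensor to shape (a, b, c, d);
# the first argument has to match the batch size.
mask = torch.zeros(1, 1, 256, 256)           # placeholder per-sample mask
mask = mask.expand(batch_size, 3, 256, 256)  # was expand(1, 3, 256, 256)
print(mask.shape)                            # torch.Size([2, 3, 256, 256])
```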
Thanks for your reply, but I don't understand why enlarging the batch size is meaningless. If I have enough GPU memory, will a larger batch_size give higher training speed or higher accuracy, or not? One more question: the multi-GPU mode doesn't work, right? (I haven't found any code for multi-GPU computation....)
I am checking on this. The author did not implement multi-GPU; you need to do it yourself~~
@TalentedMUSE Hey, is there any chance you could share your code?
Hi, does anyone know why the BeautyGAN authors chose 6 residual blocks in the generator instead of the 9 that CycleGAN uses for 256*256 training images? I also found that some unofficial TensorFlow implementations use 9 blocks, for example https://github.com/baldFemale/beautyGAN-tf-Implement. It's really weird!
@yql0612 They are not all black. If you open the image and have a look, you will find that there are some grey segmentation labels.
I browsed all the pictures and found that they are still black. If it is convenient, could you provide the dataset you are looking at? Thank you very much.~
@yql0612 I have opened your screenshot in a new tab and found it is not all black. If you look carefully, you can see some face segmentation labels whose grey value is slightly bigger than 0.
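A quick way to check this yourself (the path is a placeholder for one of the files in the "seg" folder):

```python
import numpy as np
from PIL import Image

seg = np.array(Image.open('seg/makeup/some_image.png'))  # placeholder path
print(np.unique(seg))  # class indices such as [0 1 2 ...], not just [0]
```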
OK, I see... thanks~
When I trained with this project, I ran into some problems as follows: But when I added the txt to the /test/makeup folder, another problem arose like this:
Thank you for your tips! They help a lot!
@wangyyff I think you should double-check your txt file. Please follow the guideline here: #8 (comment). Are you using the Makeup Transfer dataset or a custom dataset?
I wrote the txt like the guideline and put it in the right place, but I am sorry, I don't know what a custom or Makeup Transfer dataset is. Could you please tell me more?
I am so sorry! I have solved this problem, but now there is a new problem! I don't have the folder "addings" or the file "vgg_conv.pth". Where can I find them???
Download the VGG model from "https://bethgelab.org/media/uploads/pytorch_models/vgg_conv.pth".
Thanks for all your help!! Now there's another problem; maybe I'm just being foolish!! All the images are in "makeup" and the txts are in "makeup_final". Where do I need to make changes?
@Hellboykun I think your results are not that bad. But I found the original pretrained model is very stable. Sadly, it is TensorFlow, and they don't provide the training code!!
@wangyyff sorry, I haven't checked this thread lately. Have you solved your problem?
@thaoshibe Holan's model works very well, but unfortunately it's not clear how he implemented it. It was suggested that the data-processing part of makeup.py should be revised; I don't know if that works. If you train a good model, I hope you can share it. Thank you very much.
Hello! Could you share your code with me? I am new to this! Thanks a lot!
Stay tuned. I'll upload it in the next couple of days!
@Hellboykun @wangyyff @pirate-zhang I've created a repo for my modifications (dataloader, etc.). You can find it here: https://github.com/thaoshibe/BeautyGAN-pytorch-reimplementation. Not sure if I did anything wrong (haha), but I hope it helps.
Thank you very much!
Hi, writer! Could you share your code with me? Thanks a lot!
I have met the same problem in my experiments on some images. It seems that this method isn't very robust to illumination: some shadows on the original face may still be transferred to the target face. I think changing the loss function or tuning the hyperparameters may help.
great!~
My honor~😄
Can the model process video in real time?
After many hours, I can finally run the code, 2333. Here are some tips for running it:
The names are duplicated in every row because the mask pics share the same file names as the face images; the masks are in the "seg" folder of the MT dataset.
You can write a Python script to automatically read the names of the pics. As for me, I chose 2400 makeup pics for training and the remaining 300 for testing. Remember to duplicate the name! You can organize the dataset like this (a sketch of one possible layout and list-generation script follows):
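A sketch of such a script, assuming the placeholder layout shown in the comments; adapt the folder and file names to wherever you put the MT dataset:

```python
import os
import random

# Assumed (placeholder) layout:
#   data/images/makeup/      face images
#   data/segs/makeup/        masks from the MT "seg" folder, same file names
#   data/makeup_train.txt    list files produced by this script
#   data/makeup_test.txt
IMAGE_DIR = 'data/images/makeup'
TRAIN_TXT = 'data/makeup_train.txt'
TEST_TXT = 'data/makeup_test.txt'

names = sorted(os.listdir(IMAGE_DIR))
random.seed(0)
random.shuffle(names)
train, test = names[:2400], names[2400:2700]  # 2400 for training, 300 for testing


def write_list(path, names):
    with open(path, 'w') as f:
        for name in names:
            # write the name twice: the mask in the "seg" folder shares
            # the same file name as the face image
            f.write('{} {}\n'.format(name, name))


write_list(TRAIN_TXT, train)
write_list(TEST_TXT, test)
```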
Then you should change the paths in makeup.py to match. For example:
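Purely illustrative placeholders; the real option names are whatever makeup.py (or its argument parser) actually uses:

```python
# Map these placeholders onto the corresponding options in makeup.py;
# the names below are hypothetical, not the repo's own.
DATA_ROOT = './data/images'             # makeup / non-makeup face images
SEG_ROOT = './data/segs'                # the "seg" masks from the MT dataset
TRAIN_LIST = './data/makeup_train.txt'  # list files written by the script above
TEST_LIST = './data/makeup_test.txt'
```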
You can just download the VGG model from the PyTorch model zoo and then write a forward function of your own to grab the 4th conv layer (a sketch follows):
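A sketch of such a wrapper, assuming torchvision's VGG16 rather than the bethgelab checkpoint; the class name and the cut-off index (18, i.e. up to conv4_1, matching features[:18] mentioned earlier) are my own choices:

```python
import torch
import torch.nn as nn
import torchvision.models as models


class VGGConv4(nn.Module):
    """Frozen VGG16 that returns the conv4_1 feature map."""

    def __init__(self):
        super().__init__()
        features = models.vgg16(pretrained=True).features
        self.slice = nn.Sequential(*list(features.children())[:18])  # up to conv4_1
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, x):
        # x: a batch of RGB images, e.g. (N, 3, 256, 256)
        return self.slice(x)


if __name__ == '__main__':
    vgg = VGGConv4().eval()
    with torch.no_grad():
        print(vgg(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 512, 32, 32])
```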
Finally:
(At the moment the network speed at my home really sucks... it is so hard to download the ImageNet dataset, and it's hard to make the parameters match since the author has made some modifications to the VGG. I think the method above should work for you~)
Lastly, I am really grateful for the work the author has done. It helps me a lot! Great thanks!