
A Guide To Run The Code #8

Open
TalentedMUSE opened this issue Feb 24, 2020 · 44 comments

Comments

@TalentedMUSE

TalentedMUSE commented Feb 24, 2020

After many hours, I can finally run the code, lol. Here are some tips to run the code:

  1. train_MAKEMIX.txt lists the names of the with-makeup pics in the MT dataset (train_SYMIX lists the non-makeup pics). It looks like this:
    [screenshot of the txt file: the same filename repeated in each row]

The names are duplicated in every row because the mask pics share the same name. The mask pics are in the "seg" folder of the MT dataset.

You can write a Python script to read the names of the pics automatically (see the sketch at the end of this comment). As for me, I chose 2400 makeup pics for training and the remaining 300 for testing. Remember to duplicate the name!

  2. You can organize the dataset like this:
    [screenshot of the directory layout]
    Then you should change the paths in makeup.py accordingly. For example:
    [screenshot of the edited paths]

  3. You can just download the VGG model from the PyTorch model zoo:

import torchvision.models as models

# self.vgg = net.VGG()
# self.vgg.load_state_dict(torch.load('vgg_conv.pth'))
self.vgg = models.vgg16(pretrained=True)

And then, write a forward function of your own to grab the output of the 4th conv block:

# print the vgg16 model and you will find that the 4th block's conv (conv4_1)
# sits at index 17 in model.features
def vgg_forward(self, model, x):
    for i in range(18):
        x = model.features[i](x)
    return x

Finally:

vgg_org = self.vgg_forward(self.vgg, org_A)
vgg_org = Variable(vgg_org.data).detach()
vgg_fake_A = self.vgg_forward(self.vgg, fake_A)
g_loss_A_vgg = self.criterionL2(vgg_fake_A, vgg_org) * self.lambda_A * self.lambda_vgg

......

(At this time the network speed at my home really sucks... It is so hard to download the ImageNet dataset, and it's hard to make the parameters match since the author has made some modifications to the VGG. I think the method above will work for you~)
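For step 1, here is the kind of script I mean: a minimal sketch, assuming the makeup images sit in 'images/makeup' (the paths, and the separator between the two names, are my guesses; adjust them to your layout and to how makeup.py parses a row):

import os

makeup_dir = 'images/makeup'  # placeholder path: wherever your MT makeup pics live

names = sorted(os.listdir(makeup_dir))
train, test = names[:2400], names[2400:]  # 2400 for training, the rest for testing

# each row repeats the filename, since the mask in 'seg' shares the same name
with open('train_MAKEMIX.txt', 'w') as f:
    for name in train:
        f.write('{} {}\n'.format(name, name))

with open('test_MAKEMIX.txt', 'w') as f:
    for name in test:
        f.write('{} {}\n'.format(name, name))

Do the same for the non-makeup pics to produce train_SYMIX.txt and test_SYMIX.txt.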

At last, I am really grateful for the work that the author has done. It helps me a lot! Great thanks!


@Jian-danai

Jian-danai commented Mar 24, 2020

Sorry, but I have a question. Is it necessary to train the vgg16 by ourselves, or can we just use the pretrained model downloaded from PyTorch (using features[:18] for the vgg forward)?

@TalentedMUSE
Author

@Jian-danai I think we can just use the pretrained model from PyTorch. I ran the code and the result was still good. The aim of vgg16 is to extract high-level features, and the pretrained model can do this too. By the way, training a vgg model on ImageNet ourselves takes a lot of time lol 😏.

@Jian-danai

@Jian-danai I think we can just use the pretrained model from PyTorch. I ran the code and the result was still good. The aim of vgg16 is to extract high-level features, and the pretrained model can do this too. By the way, training a vgg model on ImageNet ourselves takes a lot of time lol 😏.
I see. Thank you very much.

@Jian-danai

By the way, can you change the batch size? When I change the batch size from 1 to 2, I get this error:

Traceback (most recent call last):
File "train.py", line 83, in
train_net()
File "train.py", line 60, in train_net
solver.train()
File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/solver_makeup.py", line 375, in train
g_A_lip_loss_his = self.criterionHis(fake_A, ref_B, mask_A_lip, mask_B_lip, index_A_lip) * self.lambda_his_lip
File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/solver_makeup.py", line 239, in criterionHis
mask_src = mask_src.expand(1, 3, mask_src.size(2), mask_src.size(2)).squeeze()
RuntimeError: The expanded size of the tensor (1) must match the existing size (2) at non-singleton dimension 0. Target sizes: [1, 3, 256, 256]. Tensor sizes: [2, 3, 256, 256]

@TalentedMUSE
Author

Note that the masks and the facial images are loaded together:

for self.i, (img_A, img_B, mask_A, mask_B) in enumerate(self.data_loader_train):

So when you change the batch_size (e.g. batch_size=2), you should change lines like this accordingly:

mask_src = mask_src.expand(2, 3, mask_src.size(2), mask_src.size(2)).squeeze()

Because expand(a, b, c, d) broadcasts the tensor to shape (a, b, c, d) (it can only expand dimensions of size 1); in this case, a is the batch size.

By the way, there is no real need to change the batch_size, because generating makeup images is a 'specific' task, and a larger batch means a heavier demand on GPU memory. With batch=1 (no modification to the author's code except the vgg change I mentioned before), training takes about 5000 MB of GPU memory (I checked via the 'nvidia-smi' command). You may exceed your GPU memory if you enlarge the batch size.
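A batch-size-agnostic variant (my suggestion, not the repo's code) avoids hard-coding the batch at all; in expand, -1 means "keep this dimension as it is":

# assuming mask_src has shape (batch, 1, H, W)
mask_src = mask_src.expand(-1, 3, -1, -1).squeeze()
mask_tar = mask_tar.expand(-1, 3, -1, -1).squeeze()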

@Jian-danai

Jian-danai commented Mar 25, 2020

Thanks for your reply, but I don't understand why enlarging the batch size is meaningless. If I have enough GPU memory, won't a larger batch_size give faster training or better accuracy?
I have tried modifying 'train.py' with
parser.add_argument('--batch_size', default='2', type=int, help='batch_size')
and then modifying 'solver_makeup.py' with
mask_src = mask_src.expand(2, 3, mask_src.size(2), mask_src.size(2)).squeeze()
mask_tar = mask_tar.expand(2, 3, mask_tar.size(2), mask_tar.size(2)).squeeze()
but I still get this error (I had actually already tried modifying these two files yesterday):
Traceback (most recent call last):
File "train.py", line 83, in
train_net()
File "train.py", line 60, in train_net
solver.train()
File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/solver_makeup.py", line 378, in train
g_A_lip_loss_his = self.criterionHis(fake_A, ref_B, mask_A_lip, mask_B_lip, index_A_lip) * self.lambda_his_lip
File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/solver_makeup.py", line 248, in criterionHis
input_match = histogram_matching(input_masked, target_masked, index)
File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/ops/histogram_matching.py", line 51, in histogram_matching
dst_align = [dstImg[i, index[0], index[1]] for i in range(0, 3)]
File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/ops/histogram_matching.py", line 51, in
dst_align = [dstImg[i, index[0], index[1]] for i in range(0, 3)]
IndexError: index 220 is out of bounds for axis 1 with size 3

(The number in 'index 220' changes, e.g. to 'index 3', each time I run it.)
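For what it's worth, the traceback hints that histogram_matching (and the pixel index lists fed to it) assume a single (3, H, W) image, so the batch dimension breaks the indexing. A heavily hedged sketch of one possible workaround; index_per_sample is a hypothetical per-image index list, not something the repo actually provides:

# hypothetical workaround inside criterionHis: run the per-image
# histogram matching sample by sample instead of on the whole batch
input_match = torch.stack([
    histogram_matching(input_masked[b], target_masked[b], index_per_sample[b])
    for b in range(input_masked.size(0))
])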

One more question: the multi-GPU mode doesn't work, right? (I have not found any code for multi-GPU computation....)

@TalentedMUSE
Author

I am checking on this. The author did not implement multi-GPU; you need to do it yourself~~
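If you want to add it yourself, the usual starting point is torch.nn.DataParallel. A minimal sketch, assuming the solver keeps its networks in attributes like self.G and self.D (these names are guesses, not necessarily the repo's):

import torch
import torch.nn as nn

# replicate each network's forward pass across all visible GPUs
if torch.cuda.device_count() > 1:
    self.G = nn.DataParallel(self.G)
    self.D = nn.DataParallel(self.D)
self.G.cuda()
self.D.cuda()

Note that the histogram-matching loss discussed above still assumes batch size 1, so wrapping the networks alone probably won't be enough.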

@DanielMao2015

@TalentedMUSE Hey, is there any chance you could share your code?

@pao-hui

pao-hui commented Jun 29, 2020

Hi, does anyone know why the BeautyGAN authors chose 6 residual blocks in the generator rather than 9 as in CycleGAN, given that the training images are 256*256?

I also find that some unofficial TensorFlow implementations use 9 blocks, for example https://github.com/baldFemale/beautyGAN-tf-Implement

It's really weird!

@yql0612

yql0612 commented Aug 5, 2020

I am checking on this; The author did not implement Multi-GPU, you need to do it yourself~~

Thanks for your sharing, but I found that the dataset mask pictures all look black, with no other visible content. Is that normal?
[screenshot of a mask image]

@DateBro

DateBro commented Aug 5, 2020

@yql0612 They are not all black. If you open the image and have a look, you will find that there are some grey segmentation labels.

@yql0612

yql0612 commented Aug 5, 2020 via email

@DateBro

DateBro commented Aug 5, 2020

@yql0612 I opened your screenshot in a new tab and found it is not all black. If you look carefully, you can find some face segmentations whose grey values are slightly greater than 0.
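If you want to see the labels without squinting, rescale them to the full grey range; a small sketch (the filename is a placeholder):

import numpy as np
from PIL import Image

mask = np.array(Image.open('seg/makeup/xxx.png'))  # placeholder path
print(np.unique(mask))  # the distinct label ids: 0 for background plus small integers

# stretch the label ids to 0-255 so the face regions become visible
vis = (mask.astype(np.float32) / max(int(mask.max()), 1) * 255).astype(np.uint8)
Image.fromarray(vis).save('mask_vis.png')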

@yql0612

yql0612 commented Aug 5, 2020 via email

@yql0612

yql0612 commented Aug 26, 2020

@yql0612 They are not all black. If you open the image and have a look you will find that there are some grey segmentation labels.

When I train following this project, I meet this problem:
File "/home/data_mount_2/yql/makeup_model/BeautyGAN_pytorch-master/data_loaders/makeup.py", line 79, in __getitem__
image_B = Image.open(os.path.join(self.image_path, "test/makeup", getattr(self, "test_" + self.cls_B + "_filenames")[index % getattr(self, 'num_of_test_' + self.cls_list[1] + '_data')])).convert("RGB")
File "/home/data_mount_2/yql/conda/envs/PSGAN/lib/python3.7/site-packages/PIL/Image.py", line 2878, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/home/data_mount_2/yql/dataset/BeautyGAN/data/test/makeup/train_MAKEMIX.txt'

But when I add the txt to the /test/makeup folder, another problem arises:
File "/home/data_mount_2/yql/makeup_model/BeautyGAN_pytorch-master/data_loaders/makeup.py", line 79, in __getitem__
image_B = Image.open(os.path.join(self.image_path, "test/makeup", getattr(self, "test_" + self.cls_B + "_filenames")[index % getattr(self, 'num_of_test_' + self.cls_list[1] + '_data')])).convert("RGB")
File "/home/data_mount_2/yql/conda/envs/PSGAN/lib/python3.7/site-packages/PIL/Image.py", line 2931, in open
"cannot identify image file %r" % (filename if filename else fp)
PIL.UnidentifiedImageError: cannot identify image file '/home/data_mount_2/yql/dataset/BeautyGAN/data/test/makeup/train_MAKEMIX.txt'
I have tried many times to solve this, but it still does not work. I sincerely need your help, thank you very much!

@Zteat

Zteat commented Sep 25, 2020

Thank you for your tips! It helps a lot!

@wangyyff

[screenshot of an error]
Has anyone met this problem?? NEED HELP!!!!

@thaoshibe

(quotes the screenshot and question above)

Could you check if your .txt file is empty or not?

@wangyyff

(quotes the exchange above)

My txt is like this; I don't know what else to do. Please help!!
[screenshot of the txt file]

@thaoshibe

thaoshibe commented Oct 17, 2020

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)

Are you using Makeup Transfer Dataset or Custom dataset?


@wangyyff

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)

Are you using Makeup Transfer Dataset or Custom dataset?

I wrote the txt like the guideline and put it in the right place, but I am sorry that I don't know what a custom or Makeup Transfer dataset is. Could you please tell me more?

@wangyyff

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)

Are you using Makeup Transfer Dataset or Custom dataset?

I am so sorry! I have solved that problem, but now there is a new one: I don't have the folder "addings" or the file "vgg_conv.pth". Where can I find them???

@Zteat

Zteat commented Oct 20, 2020

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)
Are you using Makeup Transfer Dataset or Custom dataset?

I am so sorry! I have solved this problem but now there is a new problem! I didn't have the folder "addings" and the file "vgg_conv.pth". Where can I find them???

Download the VGG model from "https://bethgelab.org/media/uploads/pytorch_models/vgg_conv.pth".
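If it's more convenient, you can also fetch it from Python; a small sketch, where the 'addings/' destination is a guess based on the folder mentioned above:

import os
import torch.hub

os.makedirs('addings', exist_ok=True)
torch.hub.download_url_to_file(
    'https://bethgelab.org/media/uploads/pytorch_models/vgg_conv.pth',
    'addings/vgg_conv.pth')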

@wangyyff

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)
Are you using Makeup Transfer Dataset or Custom dataset?

I am so sorry! I have solved this problem but now there is a new problem! I didn't have the folder "addings" and the file "vgg_conv.pth". Where can I find them???

download the vgg model in "https://bethgelab.org/media/uploads/pytorch_models/vgg_conv.pth".

Thanks for all your help!! Now there's another problem; maybe I'm just being foolish!! All the images are in "makeup" and the txts are in "makeup_final". Where do I need to rewrite?
[screenshot of the folder layout]

@flyz1

flyz1 commented Nov 15, 2020

(quotes TalentedMUSE's guide above in full)

Hello, I'm just starting to learn, so I don't understand a lot. Would you please share your code? Thanks a million!

@TomatoBoy90

(quotes TalentedMUSE's guide and flyz1's question above)

He changed solver_makeup.py for the VGG part and created 4 txt files (train_MAKEMIX, test_MAKEMIX, train_SYMIX, test_SYMIX).

@flyz1

flyz1 commented Dec 10, 2020

(quotes the guide, the earlier question, and TomatoBoy90's reply above)

Thanks for your reply! But after trying for many days I still cannot run the code. Would you mind sharing your code with me? Thanks a million!

@thaoshibe

thaoshibe commented Feb 2, 2021

@Hellboykun I think your results are not that bad.
[screenshot of makeup transfer results]
Mine look terrible. (Left to right: Source | Reference | Output)

But I found the original pretrained model is very stable.
Check Honlan's GitHub: https://github.com/Honlan/BeautyGAN
Or go directly to Google Drive: https://drive.google.com/drive/folders/1pgVqnF2-rnOxcUQ3SO4JwHUFTdiSe5t9

Sadly, it is TensorFlow, and they don't provide the training code!!

@thaoshibe

@wangyyff Sorry, I didn't check this thread lately. Have you solved your problem?
Actually I rewrote the data loader code, so if you haven't figured it out, I can share the code!!

@Hellboykun

@thaoshibe Honlan's model works very well, but unfortunately it's not clear how he implemented it. It was suggested that the data-processing part of makeup.py should be revised; I don't know if that works. If you train a good model, I hope you can share it. Thank you very much.

@pirate-zhang

(quotes TalentedMUSE's guide above in full)

Hi, thanks for your guide! I'm just starting to learn, so I don't understand a lot. Would you please share your code? Thanks a lot!

@pirate-zhang

(quotes thaoshibe's offer above)

Hello! Could you share your code with me? I am new to this! Thanks a lot!

@thaoshibe

(quotes the request above)

Stay tuned. I'll upload it in the next couple of days!

@thaoshibe

@Hellboykun @wangyyff @pirate-zhang I've created a repo for my modification (dataloader, etc).

You can find it here: https://github.com/thaoshibe/BeautyGAN-pytorch-reimplementation
Not sure if I did anything wrong (haha), but I hope it helps.

@pirate-zhang

pirate-zhang commented Mar 15, 2021 via email

@xwh130

xwh130 commented Apr 17, 2021

Hi, author! Could you share your code with me? Thanks a lot!

@TalentedMUSE
Author

(quotes thaoshibe's comment above about the unstable results and Honlan's pretrained model)

I have met the same problem in my experiments on some images. It seems this method isn't very robust to illumination: some shadows on the original face may still be transferred to the target face. I think changing the loss function or tuning the hyperparameters may help.

@TalentedMUSE
Author

(quotes the repo announcement above)

great!~

@TalentedMUSE
Author

Thank you for your tips! It helps a lot!

My pleasure~😄

@NaVi-JackMartin

Can the model process video in real time?

@yql0612

yql0612 commented May 20, 2022 via email

@pirate-zhang

pirate-zhang commented May 20, 2022 via email
