Some error using the connectivity loss function on GPU #10

Open
kelisiya opened this issue Nov 15, 2019 · 12 comments

@kelisiya

I'm trying to reproduce your paper. The CNN returns a tensor, but the loss function uses NumPy, so I convert the tensor to NumPy, compute the loss, and then convert the NumPy result back to a tensor. I found that the connectivity loss can't work this way.
How did you deal with it?
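
A minimal sketch of the round-trip described above (`pred` is just a stand-in for the CNN output): the NumPy detour requires detaching from autograd, so the rebuilt loss tensor carries no gradient back to the network.

```python
import torch

pred = torch.rand(1, 1, 4, 4, requires_grad=True)  # stand-in for the CNN output

loss_np = (pred.detach().numpy() ** 2).mean()      # loss computed in NumPy, outside autograd
loss = torch.tensor(loss_np, requires_grad=True)   # fresh leaf tensor, disconnected from pred

loss.backward()
print(pred.grad)  # None: no gradient ever reaches the network output
```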

@poppinace
Copy link
Owner

Hi @kelisiya,
Sorry for the late reply due to the CVPR deadline.
Do you mean the training loss, or calculating the evaluation errors?
The connectivity loss is not used in training; it is only used when evaluating matte quality. It should be fine if you follow my instructions to run the code.

@kelisiya
Author

> Hi @kelisiya,
> Sorry for the late reply due to the CVPR deadline.
> Do you mean the training loss, or calculating the evaluation errors?
> The connectivity loss is not used in training; it is only used when evaluating matte quality. It should be fine if you follow my instructions to run the code.

It works. I switched your backbone to DIM and used the alpha loss and gradient loss, and finally I can train your model.
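
For reference, a common formulation of those two losses — a sketch only, where the alpha loss follows DIM's smoothed-L1 form and the `eps` value and the simple sum are assumptions, not necessarily what was used here:

```python
import torch

def alpha_loss(pred, gt, eps=1e-6):
    # DIM-style alpha prediction loss (smoothed L1 / Charbonnier)
    return torch.sqrt((pred - gt) ** 2 + eps ** 2).mean()

def grad_loss(pred, gt):
    # L1 distance between horizontal/vertical finite-difference gradients
    dx = (pred[..., :, 1:] - pred[..., :, :-1]) - (gt[..., :, 1:] - gt[..., :, :-1])
    dy = (pred[..., 1:, :] - pred[..., :-1, :]) - (gt[..., 1:, :] - gt[..., :-1, :])
    return dx.abs().mean() + dy.abs().mean()

pred = torch.rand(1, 1, 32, 32, requires_grad=True)
gt = torch.rand(1, 1, 32, 32)
loss = alpha_loss(pred, gt) + grad_loss(pred, gt)
loss.backward()  # gradients flow end to end, unlike the NumPy round-trip
```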

@poppinace
Owner

@kelisiya Nice, let me know if your model achieves better results:)

@kelisiya
Author

kelisiya commented Dec 5, 2019 via email

@poppinace
Owner

@kelisiya It is normal that the network's output is not bounded by [0, 1] due to the nature of regression. This is why postprocessing is required to eliminate unreasonable outputs.
However, you should not use the clip operator (`clamp` in PyTorch) during training, because the gradient will be clipped to zero as well. This may be why the loss does not decrease.
The clip operator should only be applied at inference.
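
A minimal sketch of that saturation effect on a toy 3-element output (`pred` here is hypothetical):

```python
import torch

pred = torch.tensor([-0.5, 0.3, 1.4], requires_grad=True)
pred.clamp(0.0, 1.0).sum().backward()
print(pred.grad)  # tensor([0., 1., 0.]): zero gradient wherever the clamp saturates

# At inference only, clip the raw regression output into [0, 1]:
with torch.no_grad():
    alpha = pred.clamp(0.0, 1.0)
```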

@kelisiya
Author

kelisiya commented Dec 5, 2019 via email

@poppinace
Owner

I don't use cv.normalize.
I also tried sigmoid, but did not find it necessary.

@kelisiya
Author

kelisiya commented Dec 5, 2019 via email

@poppinace
Owner

Exactly!

@kelisiya
Author

kelisiya commented Dec 5, 2019 via email

@kelisiya
Author

kelisiya commented Dec 5, 2019 via email

@poppinace
Owner

Of course you should normalize the alpha before calculating the loss.
Yes, the input is concatenated like in DIM.
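
A minimal sketch of that preprocessing, assuming 8-bit inputs and a DIM-style 4-channel input (RGB image concatenated with the trimap); all names and shapes here are placeholders:

```python
import torch

# Hypothetical 8-bit inputs, e.g. as loaded from disk
image = torch.randint(0, 256, (1, 3, 320, 320)).float()
trimap = torch.randint(0, 256, (1, 1, 320, 320)).float()
alpha_gt = torch.randint(0, 256, (1, 1, 320, 320)).float()

image, trimap = image / 255.0, trimap / 255.0
alpha_gt = alpha_gt / 255.0  # normalize ground-truth alpha to [0, 1] before the loss

x = torch.cat([image, trimap], dim=1)  # 4-channel DIM-style input
print(x.shape)  # torch.Size([1, 4, 320, 320])
```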
