
GPU memory increase during DIM 1K-dataset testing #8

Open
wuyujack opened this issue Oct 27, 2019 · 1 comment
@wuyujack
Hi Hao,

I ran the testing code with both the DIM pretrained model and the IndexNet Matting pretrained model. The GPU I used is a 2080 Ti and the PyTorch version is 1.0.

During testing with the IndexNet pretrained model, I observe that GPU memory keeps increasing, from 5000+ MB up to 10200+ MB. The DIM pretrained model only takes approximately 2680 MB to 4000+ MB for the first 700+ iterations, but then it also suddenly jumps to 10800+ MB at around iteration 800 of 1000. As for speed, IndexNet Matting (avg 5.88 Hz) is much slower than the DIM model (avg 10.75 Hz).

It seems that you use the original size of the DIM images for inference. Is it normal to see increasing memory usage during inference for both the DIM pretrained model and IndexNet Matting?

By the way, the DIM pretrained model you provided does not seem to be consistent with the evaluation scores you report on GitHub. Here is what I get at the last iteration:

test: 1000/1000, sad: 14.25, SAD: 59.47, MSE: 0.0205, framerate: 10.11Hz/10.75Hz

For IndexNet Matting I get:

test: 1000/1000, sad: 11.49, SAD: 45.65, MSE: 0.0131, framerate: 4.98Hz/5.88Hz

which seems to match the results reported in your paper.

Regards,
Mingfu

@poppinace (Owner)

@wuyujack Hi Mingfu,

In the Deep Matting code, I downsample the image so that it can be processed on the GPU. The performance I reported in the paper is computed on the original size. Please read the code; I have corresponding comments there. On a 2080 Ti, memory is not sufficient for inference on the original-size image unless you have a Volta card. I actually test Deep Matting on the CPU, and the final performance should be re-calculated using the MATLAB code provided.
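The downsample-then-restore pattern described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: `model`, `image`, and the function name are hypothetical, and it assumes an image tensor of shape `(N, C, H, W)` and a model that outputs an alpha map of the same spatial size as its input.

```python
import torch
import torch.nn.functional as F

def infer_downsampled(model, image, scale=0.5):
    """Run matting at reduced resolution to fit in GPU memory,
    then upsample the predicted alpha back to the original size
    so the evaluation is done at full resolution."""
    h, w = image.shape[-2:]
    small = F.interpolate(image, scale_factor=scale,
                          mode='bilinear', align_corners=False)
    with torch.no_grad():          # inference only, no autograd graph
        alpha = model(small)
    # restore the prediction to the original spatial size
    return F.interpolate(alpha, size=(h, w),
                         mode='bilinear', align_corners=False)
```

Note that metrics such as SAD and MSE computed on the upsampled alpha can differ slightly from metrics computed on a prediction made at the original resolution, which is why the final numbers should come from the full-size evaluation.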

I have not encountered this memory problem before. Memory usage can change with the size of the input image, but in your case it seems that something is being kept alive across iterations. It is weird!
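For reference, steadily growing GPU memory during inference is a common symptom of holding references to graph-attached tensors across iterations (for example, appending raw model outputs to a list, or running without `torch.no_grad()`). A minimal sketch of a loop that avoids this, with hypothetical `model` and `images` names:

```python
import torch

def run_inference(model, images):
    """Evaluate a model on a list of image tensors without
    accumulating autograd state or GPU-resident results."""
    model.eval()
    preds = []
    with torch.no_grad():          # no autograd graph is built
        for img in images:
            out = model(img)
            # detach and move off the GPU so no reference
            # pins device memory across iterations
            preds.append(out.detach().cpu())
    return preds
```

If the testing script stores outputs (or per-image losses) without `.detach()`/`.item()`, each iteration keeps its computation graph alive, which matches the monotonic growth Mingfu observed.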

Hao
