Ways to clear memory. #513
I'm currently fighting high memory consumption. Using the following code, called between iterations, I was able to significantly reduce the problem:

```python
del image
gc.collect()
gc.collect()
torch.cuda.empty_cache()
```

with `image` being a previously rendered image. While looking for solutions, I found the `clean_up()` method (Lines 45 to 64 in 152352f). Now I'm wondering if some of these methods might help to further alleviate my problem. Unfortunately, I couldn't find any documentation for them. Therefore, I'd be grateful if somebody could shed some light on them and assess whether they are suitable for reducing memory load while still inside an optimization loop, i.e. without messing with the optimization. Thanks!
Hi,

The `path` integrator has a significant memory/efficiency cost when used in AD settings. I'd strongly recommend using `prb` instead; take a look at Figure 6 of the Path Replay Backpropagation paper.

The `wrap_ad` function has some caveats, but they usually relate to runtime rather than memory consumption.

I can't find the thread about it anymore, but typically `torch` also likes to reserve a huge chunk of your VRAM. So that might also be an avenue to look into.

The `clean_up()` method you found is very "strict". Because it's used between unit tests, it practically flushes the entire JIT's state, which is rarely something you want. Anyway, this should not be a significant amount o…
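For what it's worth, switching integrators as suggested above is a small change to the scene/integrator dictionary. A sketch, assuming Mitsuba 3 with a CUDA AD variant available; the `max_depth` value is purely illustrative:

```python
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

# 'prb' (path replay backpropagation) instead of 'path': it computes
# gradients with constant memory in AD mode (see Fig. 6 of the PRB paper).
integrator = mi.load_dict({
    'type': 'prb',
    'max_depth': 6,  # illustrative value
})
```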