I notice that there seems to be a memory leak during training.
On my K80, I set the batch size to 24 and the GPU memory consumption is about 4000 MB. However, as training goes on, the GPU memory consumption keeps increasing, and a RuntimeError: CUDA error: out of memory is raised about 30 minutes later. If I set the batch size to 16, the error does not occur, but memory usage still grows.
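One way to confirm that allocations really grow with the number of iterations (rather than the first batch simply being too large) is to log the allocated CUDA memory at a fixed interval. A minimal sketch, assuming PyTorch; the interval and where it is called from are arbitrary, and memory_reserved is named memory_cached in older PyTorch releases:

```python
import torch

def log_cuda_memory(step, interval=100):
    # Print allocated and reserved CUDA memory every `interval` steps.
    # A steadily growing "allocated" value across iterations points to tensors
    # (often ones still attached to the autograd graph) being kept alive.
    if step % interval == 0:
        allocated_mb = torch.cuda.memory_allocated() / 1024 ** 2
        reserved_mb = torch.cuda.memory_reserved() / 1024 ** 2
        print(f"step {step}: allocated {allocated_mb:.0f} MB, reserved {reserved_mb:.0f} MB")
```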
@amjltc295 I have the same issue: RuntimeError: CUDA error: out of memory. Have you solved the problem yet?
Or do you have any idea how to debug this kind of error?
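In case it helps anyone hitting this: a common cause of this symptom in PyTorch training loops (not necessarily the cause in this repository) is keeping a reference to a tensor that is still attached to the autograd graph, for example when accumulating the loss for logging. A minimal, self-contained sketch of the leaky pattern and the fix; the toy model and data are placeholders:

```python
import torch
import torch.nn as nn

# Toy model and data, only to make the sketch runnable.
model = nn.Linear(128, 10).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

running_loss = 0.0
for step in range(1000):
    inputs = torch.randn(24, 128, device="cuda")
    targets = torch.randint(0, 10, (24,), device="cuda")

    loss = criterion(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Leaky pattern: `running_loss += loss` keeps every iteration's graph alive,
    # so allocated GPU memory grows until CUDA reports out of memory.
    # Converting to a Python float with .item() releases the graph each step.
    running_loss += loss.item()
```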