What size is your tensor? This depends a lot on the size of the tensor, because unfolding it to perform the underlying calculations can demand a lot of memory.
I've experimented with different sizes, but still have the same issue. For example, a tensor of shape (42, 2628, 27, 27), which is roughly half the size I need, still runs out of memory. There is plenty of CPU memory available, so if the tensor could be stored on the CPU between iterations, that would help. Memory seems to accumulate progressively with each iteration. Otherwise, perhaps it could be chunked differently for the GPU computations.
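For a rough sense of scale, here is a back-of-the-envelope estimate of that tensor's footprint (a sketch only; float32 vs. float64 and the number of live copies are assumptions, since actual usage depends on the backend and on which intermediates it keeps on the GPU):

```python
import numpy as np

shape = (42, 2628, 27, 27)
n_elements = int(np.prod(shape))            # ~80.5 million entries
gb_float32 = n_elements * 4 / 1024**3       # ~0.3 GB per copy in float32
gb_float64 = n_elements * 8 / 1024**3       # ~0.6 GB per copy in float64
print(f"one dense copy: {gb_float32:.2f} GB (float32), {gb_float64:.2f} GB (float64)")
# Each unfolding, reconstruction, or error term kept on the GPU adds another
# copy of roughly this size, so copies that are not released between
# iterations can accumulate toward a 16 GB card's limit.
```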
Can you try running the decomposition with a fixed number of factors (e.g. 10 factors) and see if you still get this error?
If so, the issue is the amount of memory on your GPU, which from what I see is 16 GB.
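If it helps, here is a minimal sketch of such a test run with TensorLy on the GPU, assuming a non-negative CP/PARAFAC model under the hood; the placeholder data, the float32 cast, and rank=10 are my own choices for illustration, not the library's actual call:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

tl.set_backend('pytorch')

data = np.random.rand(42, 300, 27, 27)                      # placeholder for the real tensor
tensor = tl.tensor(data, dtype=tl.float32, device='cuda')   # float32 halves memory vs float64

# Fixed number of factors (rank=10) to check whether a single run fits in 16 GB
weights, factors = non_negative_parafac(tensor, rank=10, n_iter_max=100, init='random')
```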
In my opinion, the issue is the size of your tensor combined with the limited memory on your GPU. I would recommend prioritizing ligand-receptor pairs (e.g. by signaling pathway, expression level, among other options) to reduce the number of interactions from 2,628 to ~200-500 pairs.
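A hypothetical way to apply that kind of prioritization before decomposing (the scores, pair names, and placeholder tensor below are made up for illustration; in practice you would rank by your own expression values or pathway annotations):

```python
import numpy as np
import pandas as pd

n_pairs = 2628
lr_scores = pd.Series(np.random.rand(n_pairs),
                      index=[f"pair_{i}" for i in range(n_pairs)])   # placeholder scores
top_pairs = lr_scores.nlargest(300).index                            # keep ~200-500 pairs
keep_idx = np.where(lr_scores.index.isin(top_pairs))[0]

tensor = np.random.rand(42, n_pairs, 27, 27).astype(np.float32)      # stand-in for the real tensor
tensor_small = tensor[:, keep_idx, :, :]                             # shape (42, 300, 27, 27)
```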
The memory seems to accumulate at each iteration. Is it possible to release it, or chunk it differently, to circumvent this issue?
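In case it is useful, a general pattern (not this library's internal loop) for keeping GPU memory flat across repeated runs with PyTorch is to move each result to the CPU, drop the GPU references, and release the cached blocks before the next run. The `run_once` function and shapes below are placeholders:

```python
import gc
import torch

def run_once(tensor_gpu):
    """Placeholder for one decomposition run that returns GPU tensors."""
    return [tensor_gpu.mean(dim=d) for d in range(tensor_gpu.ndim)]

results_cpu = []
for i in range(5):
    x = torch.rand(42, 300, 27, 27, device='cuda')    # stand-in for one run's input
    out = run_once(x)
    results_cpu.append([t.cpu() for t in out])         # keep results on the CPU
    del x, out                                         # drop GPU references
    gc.collect()
    torch.cuda.empty_cache()                           # return cached blocks to the driver
    print(torch.cuda.memory_allocated() / 1024**2, "MiB still allocated")
```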