Hey Albert. For what model, how many qubits, and how much available RAM? And do you mean there was a leak in the sense that the memory usage kept increasing as more models were trained?
Hi Joseph. Actually, any model and any qubit count caused the leak. The process would allocate all available GPU memory, causing execution errors due to lack of memory. The usage didn't increase as models were trained; the memory was simply fully allocated from the start.
The fact that this issue only happened to me makes me think it could be related to my Python/scikit-learn version, since it's definitely caused by scikit-learn.
Anyway, I think it would be useful to open this issue so others know about it, in case it happens to somebody else too. 😉
I was getting a memory leak when using run_hyperparameter_search.py.
As stated here, removing n_jobs=-1 fixed the issue.
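For reference, a minimal sketch of the kind of change being described, assuming the script runs a scikit-learn grid search; the estimator, parameter grid, and data below are hypothetical stand-ins, not the actual contents of run_hyperparameter_search.py:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hypothetical data and estimator, only to illustrate the n_jobs change.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
param_grid = {"C": [0.1, 1, 10]}

# Before: n_jobs=-1 spawns one worker process per CPU core, which in this
# setup appeared to duplicate state per worker and exhaust GPU memory.
# search = GridSearchCV(SVC(), param_grid, n_jobs=-1)

# After: drop n_jobs so the search runs in a single process (the default),
# which avoided the memory problem reported above.
search = GridSearchCV(SVC(), param_grid)
search.fit(X, y)
print(search.best_params_)
```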