Runtime error: CUDA out of memory #75
Comments
Hello, are you sure? Check nvidia-smi to see whether another process is still using the GPU. Also, be sure to use the light version of the model with batch_size=1, and decrease the other parameters that affect the model architecture until the model fits on your GPU.
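The "decrease parameters until the model fits" advice above can be automated with a simple retry loop. This is a hedged, framework-free sketch: `try_step` and `fake_step` are hypothetical helpers (not part of this repo), and the `"out of memory"` substring check mirrors the message PyTorch puts in its `RuntimeError` on CUDA OOM.

```python
def find_max_batch_size(try_step, start=16):
    """Halve the batch size until one training step succeeds.

    try_step(batch_size) should run a single forward/backward pass and
    raise RuntimeError containing "out of memory" on CUDA OOM, which is
    what PyTorch does. Returns the first batch size that fits, or None
    if even batch size 1 does not fit.
    """
    bs = start
    while bs >= 1:
        try:
            try_step(bs)
            return bs
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise  # unrelated error: do not swallow it
            bs //= 2
    return None

# Simulated check: pretend any batch size above 2 runs out of memory.
def fake_step(bs):
    if bs > 2:
        raise RuntimeError("CUDA out of memory. Tried to allocate ...")

print(find_max_batch_size(fake_step))  # -> 2
```

In a real PyTorch loop you would also call `torch.cuda.empty_cache()` after catching the OOM before retrying, so the allocator releases the cached blocks from the failed attempt.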
Same problem
Same problem: even with "--light=True", memory runs out after 1k steps... NVIDIA GeForce RTX 3060, 12 GB GPU memory...
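Running out of memory only after ~1k steps (rather than immediately) usually means the training loop is slowly leaking, e.g. by appending the raw loss tensor to a history list so its whole autograd graph stays alive. In PyTorch the usual fix is to store `loss.item()` (a plain float) instead. A framework-free sketch of that pattern, where the hypothetical `Loss` class stands in for a tensor that pins a large graph:

```python
class Loss:
    """Stand-in for a framework loss tensor that keeps its whole
    computation graph (here: a large buffer) alive while referenced."""
    def __init__(self, value):
        self.value = value
        self.graph = bytearray(10**6)  # ~1 MB pinned per retained loss

    def item(self):
        return self.value  # plain Python float, no graph attached

history_bad, history_good = [], []
for step in range(100):
    loss = Loss(0.5 / (step + 1))
    history_bad.append(loss)          # leak: 100 graphs stay alive
    history_good.append(loss.item())  # safe: only floats are kept

print(sum(len(l.graph) for l in history_bad))  # -> 100000000 (~100 MB retained)
print(history_good[:2])                        # -> [0.5, 0.25]
```

The same logic applies to any tensor you log across steps: detach it or convert it to a Python number before keeping a reference past the current iteration.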
In my opinion, there are better solutions for unpaired style transfer, such as VSAIT (Unpaired Image Translation via Vector Symbolic Architectures), for example.
Thanks @kirill-ionkin for pointing it out, I was trying VSAIT recently; for reference I found it here: https://github.com/facebookresearch/vsait |
I was trying to run this model on a 2080 Ti, but it always said I did not have enough GPU memory.
There is no other process using the GPU.