
I got an RTX 3080 Ti, but it keeps telling me I run out of CUDA memory? #36

Open
Josuke2018 opened this issue Oct 2, 2022 · 1 comment


@Josuke2018

RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 12.00 GiB total capacity; 10.70 GiB already allocated; 0 bytes free; 11.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I don't know what's wrong, but I have an RTX 3080 Ti.
It has at least 12 GB of VRAM available to use.
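
As the error message itself suggests, one thing worth trying is setting max_split_size_mb through the PYTORCH_CUDA_ALLOC_CONF environment variable to reduce fragmentation. A minimal sketch, assuming the variable is set before any CUDA allocations are made (the 128 MiB value is an illustrative assumption, not a project-specific recommendation):

```python
import os

# Must be set before the first CUDA allocation (ideally before importing torch).
# 128 is an illustrative split size, not a value from this project's docs.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Quick sanity check of how much VRAM is actually free on GPU 0.
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"free: {free_bytes / 2**30:.2f} GiB / total: {total_bytes / 2**30:.2f} GiB")
```

If the free figure printed here is already far below 12 GiB before the model loads, something else on the machine is holding the memory.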

@nicolai256
Owner

Hmm, maybe you have too many other programs using the GPU, or you're not using the right config file.
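
A quick way to check the first guess (other programs holding VRAM) is to list per-process GPU memory usage with nvidia-smi. A small sketch, assuming the NVIDIA driver's nvidia-smi tool is installed and on PATH:

```python
import subprocess

# Lists every process currently holding GPU memory, with its usage in MiB.
result = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory",
     "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```

Closing browsers, other PyTorch sessions, or anything listed there before launching the script frees that memory for this run.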
