
How to solve CUDA error: out of memory while doing inference for my diarization model #13

Open
Ataullha opened this issue Jul 16, 2024 · 1 comment

Ataullha commented Jul 16, 2024

ERROR - An error occurred: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

I'm using a 12 GB NVIDIA GeForce RTX 2050 with CUDA compilation tools, release 11.8.

How can I solve this, or how can I use batching / a batch_size while doing inference?
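For reference, a minimal sketch of one common workaround, assuming a PyTorch-based diarization pipeline (the issue doesn't name the library, and `model` / `segments` below are hypothetical placeholders for the model and the pre-cut audio chunks): run inference in small batches under `torch.inference_mode()`, keep results on the CPU, and release cached GPU memory between batches.

```python
import torch

def run_batched_inference(model, segments, batch_size=4, device="cuda"):
    # `model` and `segments` are placeholders; segments are assumed to be
    # equally shaped tensors (e.g. fixed-length audio windows).
    model = model.to(device).eval()
    outputs = []
    with torch.inference_mode():  # no autograd buffers are kept
        for i in range(0, len(segments), batch_size):
            batch = torch.stack(segments[i:i + batch_size]).to(device)
            outputs.append(model(batch).cpu())  # move results off the GPU
            del batch
            torch.cuda.empty_cache()  # free cached blocks between batches
    return torch.cat(outputs)
```

Lowering `batch_size` (or the audio chunk length) trades speed for peak GPU memory; whether the library exposes such a parameter directly depends on the specific diarization framework being used.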
