```
ERROR - An error occurred: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```
I'm using a 12GB NVIDIA GeForce RTX 2050 with CUDA compilation tools, release 11.8.

How can I solve this, or how can I use batching (a `batch_size`) while doing inference?
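A common way to avoid CUDA out-of-memory errors during inference is to split the inputs into fixed-size batches, run the model on one batch at a time, and move results back to CPU memory promptly. Below is a minimal sketch; `model`, `inputs`, and the batch size of 8 are placeholder assumptions, not part of any specific library here — adapt them to your own code:

```python
def make_batches(items, batch_size):
    """Split a sequence into consecutive chunks of at most batch_size items."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

# Hypothetical PyTorch usage (assumes `model` and `inputs` exist):
#
# import torch
# model.eval()
# results = []
# with torch.inference_mode():              # skip autograd buffers -> less GPU memory
#     for batch in make_batches(inputs, batch_size=8):
#         x = torch.stack(batch).to("cuda")
#         results.append(model(x).cpu())    # move outputs off the GPU right away
# outputs = torch.cat(results)
```

If a batch size of 8 still runs out of memory, lower it until the run fits; peak GPU usage scales roughly with the batch size.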