Trying to run Llama 3.1 (and even TinyLlama, which I was previously able to use) on wgpu, I get an error due to lack of memory.
On WSL:

```
Loading record...
thread 'main' panicked at /home/laggui/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cubecl-runtime-0.2.0/src/memory_management/dynamic.rs:156:9:
No memory pool big enough to reserve 262144000 bytes.
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
On Windows:

```
Loading record...
thread 'main' panicked at C:\Users\guila\.cargo\registry\src\index.crates.io-6f17d22bba15001f\wgpu-22.1.0\src\backend\wgpu_core.rs:3411:5:
wgpu error: Validation Error

Caused by:
  In Device::create_buffer
    Not enough memory left.

note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: process didn't exit successfully: `target\release\examples\chat.exe` (exit code: 101)
```
My notebook only has 6 GB of VRAM, but I can run with any other backend without a problem. I know other users on Discord have reported a similar issue, even with 4090s.
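For reference, the reservation that fails on WSL is 262144000 bytes, which works out to exactly 250 MiB — well below 6 GB of VRAM, so the panic may reflect how the runtime's memory pools are sized or fragmented rather than the device being truly out of memory (an assumption on my part, not a confirmed diagnosis). A quick sanity check of the arithmetic:

```rust
fn main() {
    // The allocation size from the cubecl-runtime panic message above.
    let bytes: u64 = 262_144_000;

    // Convert to MiB (1 MiB = 1024 * 1024 bytes).
    let mib = bytes / (1024 * 1024);

    // 262144000 / 1048576 = 250, i.e. the failed reservation is 250 MiB,
    // a small fraction of a 6 GB card's VRAM.
    println!("{bytes} bytes = {mib} MiB");
}
```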