I have installed the CUDA version of the llm-rs library. However, even though I have set `use_gpu=True` in the `SessionConfig`, the GPU is not used when running the code; instead, CPU usage stays at 100% during execution.
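For reference, a minimal sketch of the setup being described, assuming the llm-rs Python bindings; the `GptNeoX` class name and the model file path are assumptions here, not taken from the report, so adjust them to your installed version:

```python
# Hypothetical reconstruction of the reported setup (llm-rs Python
# bindings; the GptNeoX class name and model path are assumptions).
try:
    from llm_rs import GptNeoX, SessionConfig

    # use_gpu=True is set as described in the report.
    session_config = SessionConfig(use_gpu=True)

    model = GptNeoX(
        "redpajama-chat-3b.bin",  # hypothetical local model path
        session_config=session_config,
    )
    print(model.generate("Hello!"))
except ImportError:
    # Degrade gracefully when llm-rs is not installed.
    print("llm-rs is not installed; install the CUDA build to reproduce")
```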
Additional Information:
I am using the "RedPajama Chat 3B" model from Rustformers. The model can be found at the following link: RedPajama Chat 3B Model.
Terminal output:
PS C:\Users\andri\Downloads\chatwaifu> python main.py
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA P106-100, compute capability 6.1
Currently only LLaMA-based models are accelerated by Metal/CUDA/OpenCL. If you use another architecture, such as GPT-NeoX, it will fall back to CPU-only inference. What you are seeing in your stdout is your GPU being initialized, but the model is then not offloaded to the GPU, as we haven't implemented acceleration for this architecture in rustformers/llm yet.
I will probably create some sort of table in the rustformers/llm repo that shows which architectures are accelerated on which platform, and then link to it to avoid further confusion.
We are planning to bring CUDA acceleration to GPT-NeoX, GPT-2, etc., but it will take some time, as all internal operations of these models need to be implemented as CUDA kernels in the ggml repo. Currently only LLaMA and Falcon can be completely offloaded onto the GPU and get full acceleration.
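The support matrix described above can be sketched as a small helper; this is a hypothetical illustration of the current state, not part of rustformers/llm:

```python
# Hypothetical sketch of the acceleration support described above:
# only LLaMA and Falcon can currently be fully offloaded to the GPU;
# GPT-NeoX, GPT-2, etc. fall back to CPU-only inference.
GPU_OFFLOAD_SUPPORT = {
    "llama": True,     # Metal/CUDA/OpenCL
    "falcon": True,
    "gpt-neox": False,  # CPU fallback (e.g. RedPajama models)
    "gpt2": False,      # CPU fallback
}


def uses_gpu(architecture: str, use_gpu_flag: bool) -> bool:
    """True only if use_gpu is set AND the architecture is accelerated."""
    return use_gpu_flag and GPU_OFFLOAD_SUPPORT.get(architecture.lower(), False)


# RedPajama Chat 3B is GPT-NeoX based, so use_gpu=True has no effect:
print(uses_gpu("gpt-neox", True))  # prints False
print(uses_gpu("llama", True))     # prints True
```

This is why the GPU is initialized (the CUDA device is detected) but inference still runs entirely on the CPU for this model.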
I appreciate the plan to create a table in the rustformers/llm repository, showing which architectures are supported with acceleration on specific platforms. That will definitely help avoid confusion in the future. Thanks again for the explanation.