How to run Llama3-8b instruct model on multiple GPUs? #7086
aitechguy0105 started this conversation in General
Replies: 2 comments 1 reply
- Why would you run it on multiple GPUs?
- To run benchmarks. Other models seem to be supported on multiple GPUs.
- Does llama.cpp support running Llama3-8B on multiple GPUs?
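For reference, llama.cpp's CUDA builds do expose multi-GPU options. A minimal sketch of a two-GPU run is below; the binary name and model path are assumptions (the CLI binary has been renamed over time, and the GGUF filename here is a placeholder), so adjust both to match your build:

```shell
# Offload all layers to GPU and split the model across two GPUs.
# -ngl 99          : offload (up to) 99 layers, i.e. effectively all of them
# --split-mode layer : distribute whole layers across GPUs (the default mode)
# --tensor-split 1,1 : split the work evenly between GPU 0 and GPU 1
# Model path is a placeholder -- point it at your own GGUF file.
./llama-cli \
  -m ./models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf \
  -ngl 99 \
  --split-mode layer \
  --tensor-split 1,1 \
  -p "Hello"
```

`--split-mode row` splits individual tensors across GPUs instead of whole layers, and `--main-gpu` picks which device holds small intermediate buffers; which mode benchmarks faster depends on the interconnect between the GPUs.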