use local llama.cpp branch #1183
Hello, I'm trying to test a modified branch of llama.cpp with llama-cpp-python, but I can't figure out how to force it to use my branch. Is there a documented way to do this cleanly?
Answered by vriesdemichael, Feb 23, 2024
Replies: 1 comment
To install llama-cpp-python from source, you need to initialize the llama.cpp submodule so that the llama.cpp source can be found inside the cloned repo. You can then switch this submodule to your own dev branch of llama.cpp (you may need to add your forked repo as a remote inside the submodule before you can check the branch out).
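A rough sketch of that workflow, assuming your fork lives at github.com/yourname/llama.cpp with a branch named my-dev-branch (both hypothetical placeholders); the submodule path vendor/llama.cpp follows the llama-cpp-python repo layout:

```shell
# Clone llama-cpp-python together with its llama.cpp submodule
git clone --recurse-submodules https://github.com/abetlen/llama-cpp-python.git
cd llama-cpp-python

# Enter the submodule and point it at your fork
# (yourname/my-dev-branch are placeholders for your actual fork and branch)
cd vendor/llama.cpp
git remote add myfork https://github.com/yourname/llama.cpp.git
git fetch myfork
git checkout myfork/my-dev-branch

# Back at the repo root, build and install against the modified submodule
cd ../..
pip install -e .
```

Reinstalling (`pip install -e .`) after each submodule change rebuilds the bindings against your modified llama.cpp source.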
Answer selected by bmtwl