Support the new format of llama.cpp "gguf"? #993
Labels: enhancement (New feature or request), primordial (Related to the primordial version of PrivateGPT, which is now frozen in favour of the new PrivateGPT)
A new file format has been introduced: ggerganov/llama.cpp#2398
I want to use my own model with privateGPT, so I downloaded one from HF, but then realized the HF format needs to be converted to ggml.
A previous version of llama.cpp (https://codeload.github.com/ggerganov/llama.cpp/zip/refs/tags/master-9e232f0) had a Python script (convert.py) that could convert the HF format to ggml, but it no longer works (and maybe it didn't before either).
The latest llama.cpp includes a convert.py script that converts the HF format to the new "gguf" format, and other formats can also be converted to "gguf". (I don't know whether introducing this new format is a good idea, but these guys have done a lot of work on it.) So the problem is that privateGPT cannot load GGUF-format models. Would it be a good idea to support the new GGUF format?
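For reference, a minimal sketch of the conversion workflow described above, using convert.py from the current llama.cpp tree (the model directory and output paths here are placeholders, not taken from this issue):

```shell
# Clone llama.cpp, which ships the GGUF conversion script
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert a local Hugging Face model directory to GGUF.
# ./models/my-hf-model is a placeholder path; --outtype f16 keeps
# weights in 16-bit floats (f32 and q8_0 are also accepted).
python convert.py ./models/my-hf-model \
    --outtype f16 \
    --outfile ./models/my-model-f16.gguf
```

The resulting .gguf file is what a GGUF-aware loader (e.g. a current llama.cpp build) would consume; supporting GGUF in privateGPT would mean accepting this file format at model-load time.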