If I want to change the default model dolphin-2.2.1-mistral-7b.Q5_K_M.gguf to another model, such as Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf or a custom model, how should I modify the configuration?
Hello,
I'd be happy to explain. The config contains many options, but for this purpose the `llm` option is the important one. The items listed under it (`llama`, `hugging_face`, etc.) are example configs for the LLM backend to use; only the first one is actually used. For you that should be `llama`, although the new config also allows using the TaskProcessing tasks (`nc_texttotext`) for answer generation.
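As a rough sketch, the relevant part of the config might look like the following. The exact key names and defaults can differ between versions, so treat this as illustrative and check the `config.yaml` shipped with your release:

```yaml
llm:
  # Only the first listed backend is used.
  llama:
    # Replace the model filename with the GGUF file you placed
    # in the app's data directory (example filename below).
    model: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
```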
Now, to answer the actual question:
place the GGUF model file inside `/nc_app_context_chat_backend_data/` in the `nc_app_context_chat_backend` Docker container. Use this command for that:
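A command along these lines should work; the model filename here is just an example, and your container name may differ depending on how the app was deployed:

```shell
# Copy the GGUF model file from the host into the container's data directory
docker cp Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf \
  nc_app_context_chat_backend:/nc_app_context_chat_backend_data/

# Verify the file arrived
docker exec nc_app_context_chat_backend ls /nc_app_context_chat_backend_data/
```

After copying the file, update the model name in the config to match and restart the container so the backend picks up the new model.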