
[install-help]: Is it possible to change the default model? #73

Open
aef5748 opened this issue Sep 18, 2024 · 1 comment
Labels
help wanted Extra attention is needed

Comments

@aef5748

aef5748 commented Sep 18, 2024

If I want to change the default model dolphin-2.2.1-mistral-7b.Q5_K_M.gguf to another model such as Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf or a custom-defined model, how should I modify it?

@aef5748 aef5748 added the help wanted Extra attention is needed label Sep 18, 2024
@kyteinsky
Contributor

Hello,
I'd be happy to explain. The config contains many options, but for this one the llm option is the important one. All the items listed there (llama, hugging_face, etc.) are example configs for the LLM model to load / the LLM backend to use. Right now only the first one is used. That should be llama for you, but the new config also allows using the TaskProcessing tasks for answer generation (nc_texttotext).

Now, to answer the real question,

  1. Place the GGUF model file inside /nc_app_context_chat_backend_data/ inside the docker container nc_app_context_chat_backend. Use this command for that:
docker cp <path_to_gguf_file> nc_app_context_chat_backend:/nc_app_context_chat_backend_data/
  2. Modify the config file (/nc_app_context_chat_backend_data/config.yaml) and set the new model's filename at llm->llama->model_path
  3. Restart the docker container:
docker restart nc_app_context_chat_backend

For llama, you can see all the available options here: https://api.python.langchain.com/en/latest/llms/langchain_community.llms.llamacpp.LlamaCpp.html
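The steps above can be sketched as a config fragment. Only llm->llama->model_path is named in this thread; the surrounding keys and the example option shown in comments are assumptions and may differ in your version of the backend, so compare against the config.yaml already in the container:

```yaml
# /nc_app_context_chat_backend_data/config.yaml (sketch; exact layout may differ)
llm:
  llama:
    # filename of the GGUF file you copied into /nc_app_context_chat_backend_data/
    model_path: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
    # any LlamaCpp option from the linked langchain reference should be usable
    # here as well, e.g. context size (assumption, check your existing config):
    # n_ctx: 8192
```

After saving the file, restart the container (step 3 above) so the new model is loaded.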
