How to use a local LLM with Ollama? #136
I do not remember having tested locally installed LLMs; pinging @onuratakan to see whether he has encountered this issue. That said, your implementation DOES seem right to me; there may just be a few details missing for it to be complete. Are you sure your LLM is handled by LangChain? If so, are you sure the support lives in langchain-core? If not, your issue likely comes from the fact that your model is calling langchain_core. I thought Gemma was handled by Google through langchain-google-vertexai (from langchain_google_vertexai import GemmaVertexAIModelGarden, GemmaChatVertexAIModelGarden) and its GemmaVertexAIModelGarden class. You may find more information about Gemma and LangChain at https://ai.google.dev/gemma/docs/integrations/langchain
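As a quick sanity check that LangChain can drive a locally served Ollama model at all, something like the minimal sketch below should work (the gemma:7b tag and the default Ollama port are assumptions about the local setup):

```python
# Minimal sanity check: call a locally served Ollama model through LangChain.
# Assumes `ollama serve` is running and the model tag matches `ollama list`.
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

llm = ChatOllama(model="gemma:7b", base_url="http://localhost:11434")  # tag/port are assumptions
reply = llm.invoke([HumanMessage(content="Say hello in one short sentence.")])
print(reply.content)
```

If this call already fails, the problem sits in the model/LangChain integration rather than in this project's code.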
@onuratakan thanks!
GPT-4o runs fine for me. But when I switched to the local model, I got several error messages:
- EXCEPTION: 'function' object has no attribute 'name'
- EXCEPTION: generator raised StopIteration
- EXCEPTION: 'messages' (also occurred)
`ollama list` output:
I also modified the relevant files to match my local model.
Is this format correct? Shown below:
File: `llm_settings.py`
File: `llm.py`
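Roughly, the kind of wiring I mean looks like the sketch below (simplified, with illustrative names rather than the project's actual llm_settings.py / llm.py contents; the gemma:7b tag is just my local model):

```python
# Simplified sketch of wiring a local Ollama model into a settings/factory pair.
# LLM_SETTINGS and get_model are illustrative names, not the project's real code.
from langchain_community.chat_models import ChatOllama
from langchain_openai import ChatOpenAI

# llm_settings.py (sketch): declare each model and which provider serves it
LLM_SETTINGS = {
    "gpt-4o": {"provider": "openai"},
    "gemma:7b": {"provider": "ollama"},  # local tag as reported by `ollama list` (assumption)
}

# llm.py (sketch): build the matching LangChain chat model for the selected name
def get_model(model_name: str):
    provider = LLM_SETTINGS[model_name]["provider"]
    if provider == "ollama":
        return ChatOllama(model=model_name, base_url="http://localhost:11434")
    return ChatOpenAI(model=model_name)
```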
How can I solve this problem?
Has anyone else encountered this situation?