Local RAG agent with LLaMA3 error: Ollama call failed with status code 400. Details: {"error":"unexpected server status: 1"} #346
Comments
I have been experiencing the same issue. It seems to happen at random when Ollama is called.
I'm seeing the same issue. The retry referenced here seems to help at times, but not 100%: langchain-ai/langchain#20773 (comment)
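For anyone who wants to try the retry approach, here is a minimal sketch of what a retry wrapper around one of the notebook's grader calls could look like. The prompt text, retry count, delay, and broad exception handling are illustrative assumptions, not taken from the notebook or the linked PR.

```python
# Illustrative retry wrapper around a JSON-mode ChatOllama grader call.
# The prompt and retry parameters are assumptions for demonstration only.
import time

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate

local_llm = "llama3"
llm = ChatOllama(model=local_llm, format="json", temperature=0)

prompt = PromptTemplate(
    template=(
        "Is this document relevant to the question? "
        "Answer with a JSON object containing a single key 'score' (yes or no).\n"
        "Document: {document}\nQuestion: {question}"
    ),
    input_variables=["document", "question"],
)
grader = prompt | llm | JsonOutputParser()


def grade_with_retry(inputs: dict, max_attempts: int = 3, delay: float = 2.0):
    """Invoke the grader, retrying when the intermittent 400 error surfaces."""
    last_exc = None
    for attempt in range(1, max_attempts + 1):
        try:
            return grader.invoke(inputs)
        except Exception as exc:  # ChatOllama raises the 400 as an exception
            last_exc = exc
            print(f"Attempt {attempt} failed: {exc}")
            time.sleep(delay)
    raise last_exc
```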
I can second this exactly.
The workaround works (sometimes) after updating Ollama to 0.1.9.
Still an issue; not really functional on Ollama 0.1.32. EDIT: resolved. Solved for me by ensuring that other Ollama instances on the system (other Ubuntu instances under WSL, or on the host Windows machine) were off, or uninstalled in the case of the Windows version. Ollama may have a bug related to stopping the server.
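If you suspect a stray server (e.g. one left running under WSL or on the Windows host) is answering instead of the one you intend, a quick sanity check like the sketch below can confirm which Ollama instance and models the notebook is actually talking to. It assumes the default port 11434; adjust if you run Ollama elsewhere.

```python
# Sanity check (an assumption, not part of the notebook): confirm the Ollama
# server on the default port responds and has the llama3 model pulled.
import requests

OLLAMA_URL = "http://localhost:11434"

resp = requests.get(OLLAMA_URL, timeout=5)
print(resp.status_code, resp.text)  # expect: 200 "Ollama is running"

tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json()
print([m["name"] for m in tags.get("models", [])])  # should include llama3
```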
This seems to be more of an Ollama issue in this case? Or is there something specific to this notebook that you want fixed?
It's a terrific notebook and I'd love to see it working with Ollama and Llama 3. I believe the issue affects every Llama 3 implementation, so fixing it would help greatly.
Checked other resources
Example Code
The notebook example code in https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_rag_agent_llama3_local.ipynb
Error Message and Stack Trace (if applicable)
Description
Running the example code I get the above error. This does not happen with Mistral, so I assume my Ollama setup is OK. I also get the first "yes" from llama3 if I'm not mistaken, so I suspect it's related to something not working here:
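One way to check whether the failure is in Ollama itself rather than in the notebook's chain is a raw call to the generate endpoint with JSON mode enabled. The sketch below is an assumption (default port, `llama3` tag, arbitrary prompt), not the notebook cell referenced above; if it also returns the 400 "unexpected server status" error, the problem lies with Ollama rather than the notebook.

```python
# Minimal check against the raw Ollama API, outside LangChain/LangGraph.
import requests

payload = {
    "model": "llama3",
    "prompt": "Answer with a single JSON object: is the sky blue?",
    "format": "json",
    "stream": False,
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
print(resp.status_code)
print(resp.json() if resp.ok else resp.text)
```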
System Info
Ubuntu 22.04.4 LTS
Anaconda and VS Code.