[Request]: Ollama Support #17
Comments
Hi @ericrallen, I'm the maintainer of LiteLLM. We allow you to create a proxy server to call 100+ LLMs, and I think it can solve your problem (I'd love your feedback if it does not).

Try it here: https://docs.litellm.ai/docs/proxy_server

Using LiteLLM Proxy Server:

```python
import openai

openai.api_base = "http://0.0.0.0:8000/"  # proxy url
print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
```

Creating a proxy server:

Ollama models

```shell
$ litellm --model ollama/llama2 --api_base http://localhost:11434
```

Hugging Face models

```shell
$ export HUGGINGFACE_API_KEY=my-api-key # [OPTIONAL]
$ litellm --model claude-instant-1
```

Anthropic

```shell
$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1
```

PaLM

```shell
$ export PALM_API_KEY=my-palm-key
$ litellm --model palm/chat-bison
```

---
Hey there, @ishaan-jaff! While I think LiteLLM introduces some interesting functionality, I can't really see how it would be practical to integrate with this plugin, which runs inside of an Electron app, but I might be missing how it could be easily integrated.

---
Would love to get jmorganca/ollama#751 merged in to make it easier for Obsidian plugins to find Ollama hosts on the local network without needing to manually enter IP addresses.

---
**Is your feature request related to a problem? Please describe.**

Add support for Ollama models.

**Describe the solution you'd like**

- If Ollama is running, populate the models dropdown with the available local models.
- If an Ollama model is selected, submit the request to that model via Ollama's completion endpoint (see the sketch after this list).
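Purely for illustration, here's a minimal TypeScript sketch of what that flow might look like, assuming Ollama's documented local API (`GET /api/tags` to list installed models, `POST /api/generate` for completions). The base URL, helper names, and error handling are hypothetical, not part of the plugin today.

```typescript
// Assumed default Ollama host; a real implementation would make this configurable.
const OLLAMA_BASE_URL = "http://localhost:11434";

interface OllamaTagsResponse {
  models: { name: string }[];
}

// Populate the models dropdown: ask the local Ollama server which models it has.
export async function listOllamaModels(): Promise<string[]> {
  try {
    const response = await fetch(`${OLLAMA_BASE_URL}/api/tags`);
    if (!response.ok) {
      return [];
    }
    const data = (await response.json()) as OllamaTagsResponse;
    return data.models.map((model) => model.name);
  } catch {
    // Ollama isn't running (or isn't reachable); leave the dropdown empty.
    return [];
  }
}

// Submit a prompt to the selected model via Ollama's completion endpoint.
export async function completeWithOllama(model: string, prompt: string): Promise<string> {
  const response = await fetch(`${OLLAMA_BASE_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = (await response.json()) as { response: string };
  return data.response;
}
```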
**Additional context**

We'll need to add a new `ollama` service with a generic model definition and adapter configuration. We'll also need a formatting utility similar to the `formatChat` utility for the existing `openai` service (a rough sketch follows).
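As a rough illustration of that utility, the sketch below flattens OpenAI-style chat messages into a single prompt string, since Ollama's completion endpoint takes plain text rather than a messages array. The `ChatMessage` shape and function name are hypothetical; the plugin's actual `formatChat` signature isn't shown here and may differ.

```typescript
// Hypothetical message shape, mirroring the OpenAI-style roles the plugin already uses.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Flatten a chat transcript into one prompt string for Ollama's completion endpoint,
// ending with an "assistant:" cue so the model continues as the assistant.
export function formatOllamaPrompt(messages: ChatMessage[]): string {
  return (
    messages.map((message) => `${message.role}: ${message.content}`).join("\n") +
    "\nassistant:"
  );
}
```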