What's the difference between ollama and this? #9565
Daniel-dev22 started this conversation in Ideas
Replies: 2 comments 1 reply
-
Ollama uses llama.cpp under the hood. Both llama-server and ollama support the OpenAI API, and both do so well enough for typical use cases, but last time I checked, llama-server didn't support tool use (ollama does). Managing models (downloading, switching between them, etc.) is also more convenient in ollama. On the other hand (that's just my opinion), setting various parameters is more straightforward in llama-server than in ollama. YMMV.
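To illustrate the overlap mentioned above: since both servers expose an OpenAI-compatible chat endpoint, the same request payload works against either. A minimal sketch (the ports are the usual defaults, 8080 for llama-server and 11434 for ollama, and the model name "llama3" is just a placeholder; adjust for your setup):

```python
import json

# Assumed default local endpoints; both servers serve OpenAI-style
# routes under /v1.
LLAMA_SERVER_URL = "http://localhost:8080/v1/chat/completions"
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload accepted by both servers."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

# POST this as JSON (Content-Type: application/json) to either URL,
# e.g. with urllib.request or the openai client pointed at the /v1 base URL.
payload = build_chat_request("llama3", "Hello!")
print(json.dumps(payload, indent=2))
```

The point is that client code written against the OpenAI chat-completions shape doesn't care which of the two servers is behind the URL.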
-
@marcingomulkiewicz What about speed and memory usage? Is there any difference?
-
New to this genai world. What's the difference between running ollama and running this project in Docker with an Intel iGPU, besides the fact that this project supports the Intel iGPU and ollama doesn't? It also appears the API is different between ollama and this project?
https://hub.docker.com/r/ollama/ollama
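On the API question: ollama has its own native REST API (e.g. POST /api/generate) in addition to an OpenAI-compatible layer under /v1, which is the part shared with llama-server. A sketch of the difference in request shape (port and model name are assumptions, not from this thread):

```python
# ollama's native generate API: a flat "prompt" string.
ollama_native = {
    "url": "http://localhost:11434/api/generate",
    "body": {
        "model": "llama3",          # placeholder model name
        "prompt": "Why is the sky blue?",
        "stream": False,
    },
}

# OpenAI-compatible chat endpoint: a "messages" list instead of "prompt".
# This shape is served by both ollama and llama-server.
openai_style = {
    "url": "http://localhost:11434/v1/chat/completions",
    "body": {
        "model": "llama3",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    },
}
```

So the two projects differ in their native APIs, but code targeting the OpenAI-compatible routes should work against either.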