
On macOS: RuntimeError: Ollama Server is not running, start it using ollama serve in a separate terminal #688

Closed

Oscarjia opened this issue Dec 26, 2024 · 1 comment

Oscarjia commented Dec 26, 2024

System Info

Chip: Apple M3
OS: Sonoma 14.6

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

I have installed the Ollama Mac client. When I follow the start-the-llama-stack-server example, I run:

```
docker run -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env OLLAMA_URL=http://host.docker.internal:11434
```
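The command above assumes both environment variables are already exported. A minimal sketch of what that might look like (the exact values below are assumptions, not from the original report):

```
# Assumed values -- adjust to your setup
export LLAMA_STACK_PORT=5001
# Stack model ID corresponding to ollama's llama3.2:1b
export INFERENCE_MODEL="meta-llama/Llama-3.2-1B-Instruct"
```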

It throws the following exception, even though I checked that the server is running:

```
ollama ps
NAME           ID              SIZE      PROCESSOR    UNTIL
llama3.2:1b    baf6a787fdff    2.8 GB    100% GPU     4 minutes from now
```
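To rule out connectivity problems, the Ollama API can also be probed from both sides (a sketch; curlimages/curl is just a convenient curl image, not part of my setup):

```
# From the Mac host: Ollama should answer on its default port
curl http://localhost:11434/api/tags

# From inside a container: host.docker.internal must resolve to the host
docker run --rm curlimages/curl http://host.docker.internal:11434/api/tags
```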

Error logs

```
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/resolver.py", line 221, in resolve_impls
    impl = await instantiate_provider(
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/resolver.py", line 308, in instantiate_provider
    impl = await fn(*args)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/remote/inference/ollama/__init__.py", line 14, in get_adapter_impl
    await impl.initialize()
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/utils/telemetry/trace_protocol.py", line 101, in async_wrapper
    result = await method(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/remote/inference/ollama/ollama.py", line 131, in initialize
    raise RuntimeError(
RuntimeError: Ollama Server is not running, start it using `ollama serve` in a separate terminal
```

Expected behavior

The stack server starts successfully.
       
hardikjshah (Contributor) commented

Hey @Oscarjia -- you should start an Ollama server in the background before running the stack.

We have some documentation here on how to do that -- https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/ollama.html#setting-up-ollama-server
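Concretely, something along these lines (a sketch based on the linked docs; the model name and keepalive value are just examples):

```
# Keep the model loaded so the stack's startup check can reach it
ollama run llama3.2:1b --keepalive 60m
```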
