From cb8a28c128cf205ae09f8df7e011ae543450e25a Mon Sep 17 00:00:00 2001
From: Aidan Do
Date: Mon, 16 Dec 2024 01:52:28 +1100
Subject: [PATCH] Doc: Ollama command references non-existent file (#632)

# What does this PR do?

Fixes: the Ollama distribution docs tell users to run `llama stack run ./run.yaml`, but no `run.yaml` exists at that path; the run config lives at `./distributions/ollama/run.yaml`.

[Screenshot 2024-12-15 at 22 04 37]

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
---
 docs/source/distributions/self_hosted_distro/ollama.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/distributions/self_hosted_distro/ollama.md b/docs/source/distributions/self_hosted_distro/ollama.md
index c915a7ac31..3fe552a567 100644
--- a/docs/source/distributions/self_hosted_distro/ollama.md
+++ b/docs/source/distributions/self_hosted_distro/ollama.md
@@ -102,7 +102,7 @@ Make sure you have done `pip install llama-stack` and have the Llama Stack CLI a
 export LLAMA_STACK_PORT=5001
 
 llama stack build --template ollama --image-type conda
-llama stack run ./run.yaml \
+llama stack run ./distributions/ollama/run.yaml \
   --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
   --env OLLAMA_URL=http://localhost:11434
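
For reviewers, a minimal end-to-end sketch of the corrected workflow (not part of the patch itself). It assumes the llama-stack repository root as the working directory, and `meta-llama/Llama-3.2-3B-Instruct` is only an example value for `INFERENCE_MODEL`, which the hunk references but never sets.

```bash
# Sketch of the corrected workflow, run from the llama-stack repo root.
# INFERENCE_MODEL is an assumed example value; substitute your own model ID.
export LLAMA_STACK_PORT=5001
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct

# Build the Ollama template, then run it using the path this patch corrects:
# ./distributions/ollama/run.yaml exists in the repo, unlike ./run.yaml.
llama stack build --template ollama --image-type conda
llama stack run ./distributions/ollama/run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env OLLAMA_URL=http://localhost:11434
```

This sketch also assumes an Ollama server is already listening at `http://localhost:11434`, Ollama's default address.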