bug: in ollama llm the version is not shown in the dropdown #869
Comments
Are you running ollama?
I am facing the same issue; the model does not appear there. I followed all the steps, and Ollama is running (checked from http://127.0.0.1:11434/), but the issue persists.
Can you show some screenshots?
You will have to go to Providers from the Settings tab, scroll down to Ollama, and paste the Ollama base URL as http://localhost:11434 (the default port used to interact with Ollama). Refresh and it shows up now.
Damn bro, it worked, thank you for helping me out. But I'm having another problem now; it shows this: Failed to get Ollama models: TypeError: fetch failed
Me too, with Ollama or an OpenAI API key: errno: -4078
Use http://127.0.0.1:11434 instead of localhost:11434
Don't use http://localhost:11434, because IPv6 resolution can turn localhost into ::1:11434, which will cause issues.
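A quick way to confirm which address works is to hit Ollama's model-listing endpoint (GET /api/tags) from Node. This is only a minimal sketch, assuming a default local Ollama install and Node 18+ (for the global fetch):

```ts
// check-ollama.ts — minimal sketch, assumes a default local Ollama install.
// GET /api/tags returns the models installed locally.
async function checkOllama(): Promise<void> {
  const candidates = ['http://127.0.0.1:11434', 'http://localhost:11434'];
  for (const baseUrl of candidates) {
    try {
      const res = await fetch(`${baseUrl}/api/tags`);
      const data = await res.json();
      console.log(`${baseUrl} OK, models:`, data.models?.map((m: { name: string }) => m.name));
    } catch (err) {
      // If localhost resolves to the IPv6 address ::1 and Ollama only listens on IPv4,
      // this branch is the "TypeError: fetch failed" mentioned above.
      console.error(`${baseUrl} failed:`, (err as Error).message);
    }
  }
}

checkOllama();
```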
@Soumyaranjan-17 bruhhh it worked 😭😭😭 thanks. Also, I am only able to use 0.5b-1.5b models; anything above that just makes my laptop lag. Laptop: 16GB LPDDR5X.
@afhanxd Congratulations bro... but I think Qwen 7B should work on your computer, as it works on mine too 😅 If you are facing any issues, show some screenshots of the errors and config.
@Mayur1804 Have you installed the model that you are using?
The same here... can you help? Current setup, with a Windows 11 host:
LOGS:
Yes. You can see in one of my screenshots that it says Ollama is running.
YMMV, but hope this might help someone landing here.

Worth noting that if you're running ollama or bolt via https, both services should be using the same scheme, or the request to list models may fail in your browser (check the console for mixed http/https errors), for example ollama via http and bolt via https.

ollama & bolt via traefik w/o origins: if you're running bolt with SSL and ollama in another container, you need to enable SSL for ollama as well, or you'll see CORS issues in the browser console. You also need to pass something like -e OLLAMA_ORIGINS="*" (not in prod, obviously); see https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-allow-additional-web-origins-to-access-ollama

With bolt and ollama both running over SSL via traefik, and the origins env flag mentioned above, the model list loads correctly.

Also, fwiw: if you're using podman, you may need to update the compose file from "docker" to "containers", and it may need to be run as root rather than rootless.
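The mixed-content point above can be caught early in the browser. Here is a hypothetical sketch (not bolt.diy's actual code; the helper name is made up) that warns when the page is served over https but the Ollama base URL is plain http, which browsers will block:

```ts
// mixed-content-check.ts — hypothetical sketch, not bolt.diy's implementation.
// Browsers block plain-http requests from a page served over https ("mixed content"),
// which is one way the Ollama model-list request can fail silently.
function warnOnSchemeMismatch(ollamaBaseUrl: string): void {
  const pageScheme = window.location.protocol;           // e.g. "https:"
  const ollamaScheme = new URL(ollamaBaseUrl).protocol;  // e.g. "http:"

  if (pageScheme === 'https:' && ollamaScheme === 'http:') {
    console.warn(
      `Page is served over ${pageScheme} but Ollama is at ${ollamaScheme}//..., ` +
        'so the browser will likely block the model-list request as mixed content. ' +
        'Serve both over https (or both over http), and set OLLAMA_ORIGINS if CORS errors appear.',
    );
  }
}

// Usage:
warnOnSchemeMismatch('http://127.0.0.1:11434');
```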
@oliverbarreto

// Backend: check if we're running in Docker
const isDocker = process.env.RUNNING_IN_DOCKER === 'true';
// Rewrite loopback addresses so the container can reach Ollama on the host
baseUrl = isDocker ? baseUrl.replace('localhost', 'host.docker.internal') : baseUrl;
baseUrl = isDocker ? baseUrl.replace('127.0.0.1', 'host.docker.internal') : baseUrl;

If Bolt.diy is running in Docker and using Ollama, the OLLAMA_API_BASE_URL environment variable must be set. Otherwise, when the code reaches this point, it will throw an error and exit because baseUrl is empty.
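To make that failure mode more obvious, a guard along these lines could be used. This is only a hypothetical sketch (getOllamaBaseUrl is an illustrative name, not bolt.diy's actual API), assuming the two environment variables mentioned above:

```ts
// Hypothetical sketch — getOllamaBaseUrl is an illustrative helper, not bolt.diy's API.
function getOllamaBaseUrl(): string {
  let baseUrl = process.env.OLLAMA_API_BASE_URL ?? '';

  // Fail fast with a clear message instead of crashing later on an empty URL.
  if (!baseUrl) {
    throw new Error(
      'OLLAMA_API_BASE_URL is not set. When running Bolt.diy in Docker with Ollama, ' +
        'set it (for example to http://host.docker.internal:11434), otherwise the model list cannot be fetched.',
    );
  }

  // Inside Docker, loopback addresses point at the container, not the host running Ollama.
  const isDocker = process.env.RUNNING_IN_DOCKER === 'true';
  if (isDocker) {
    baseUrl = baseUrl
      .replace('localhost', 'host.docker.internal')
      .replace('127.0.0.1', 'host.docker.internal');
  }

  return baseUrl;
}
```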
Describe the bug
The model version is not shown in the dropdown for the 7b version of Ollama.
Link to the Bolt URL that caused the error
http://localhost:3000/chat/3
Steps to reproduce
Go to the home page
Select the Ollama model
Expected behavior
The model version is not shown (it should appear in the dropdown).
Screen Recording / Screenshot
Platform
Provider Used
brave
Model Used
ollama
Additional context
No response