Need some assistance with Text to Image failing on fresh AIO GPU docker for windows Local-AI Instance #3496
-
Hey everyone! I am receiving an error on a fresh image built with:

```shell
docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-12
```

When attempting to generate any images using the default Stable Diffusion model, I receive the following error:

```
could not load model (no success): Unexpected err=RepositoryNotFoundError('401 Client Error. (Request ID: Root=1-66de0c53-15c3eb6052783e1c1e555e4d;71959834-58b2-4e87-be6c-cffa8a049f20)

Repository Not Found for url: https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5/revision/main.
Please make sure you specified the correct
```

I have attempted to load different AIO images and now always get this error. I suspect it is because the https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5/revision/main URL no longer leads anywhere. This error happens when I attempt to run any of the models in the Image Generation section of the default LocalAI gallery. I must be doing something wrong on my end, since I can't find any other people with this error. Any advice for a newbie such as myself?
-
Hello, I'm not sure you're doing something wrong; like you, I suspect the invalid https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5/revision/main URL. I got the same issue.
Everything worked a few days ago, and I didn't change anything. I just tried to reinstall LocalAI and all the models, with the same result.
-
As far as I understand, the first time you generate an image with the `diffusers` backend (the default one), LocalAI needs to download a model, the one supposed to be at https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5/revision/main. But this URL is not valid anymore, so the download fails. As a workaround, I think we can use the `stablediffusion-cpp` backend instead of `diffusers` (see https://localai.io/features/image-generation/#stablediffusion-cpp). I tried on my Docker instance, but got this error:
Strange, because:
I am using a container image, so I don't understand, but never mind. Let's try to rebuild everything. So here I am; I don't know what to do next... If someone has an idea! Maybe the best is to wait for an official fix :/

EDIT: I found a dirty workaround:

```shell
# Connect to your container:
docker exec -it <container name or id> bash
# Replace "runwayml" occurrences with "stable-diffusion-v1-5" in all files in "/build":
find /build/ -type f -print0 | xargs -0 sed -i 's/runwayml/stable-diffusion-v1-5/g'
```
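To see what that substitution actually does before running it in-place, here is a harmless dry run on a made-up sample line (the sample is hypothetical; the real configs live under `/build` in the container). The point is that the old `runwayml/stable-diffusion-v1-5` repo id becomes `stable-diffusion-v1-5/stable-diffusion-v1-5`, which appears to be the organization the model was moved to on Hugging Face:

```shell
# Dry run of the sed substitution on a sample line (no files are touched).
sample='model: runwayml/stable-diffusion-v1-5'
echo "$sample" | sed 's/runwayml/stable-diffusion-v1-5/g'
# -> model: stable-diffusion-v1-5/stable-diffusion-v1-5
```

Dropping the `-i` flag and piping a sample through `sed` is a cheap way to sanity-check any bulk rewrite before letting `find ... | xargs sed -i` loose on a whole directory.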
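For the backend-switch workaround, LocalAI selects backends per model via a YAML config. A minimal sketch of what that might look like, where the model name `sd-cpp`, the backend identifier `stablediffusion`, and the `model` value are all assumptions to be checked against the linked documentation, not values confirmed in this thread:

```yaml
# Hypothetical model config sketch; the field names (name, backend,
# parameters.model) follow LocalAI's model YAML format, but the backend id
# and model value here are assumptions — verify against the docs above.
name: sd-cpp
backend: stablediffusion
parameters:
  model: stable-diffusion-v1-5
```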