Dependency issues with uvicorn #1892
Comments
Same issue here.
I manually installed all of the subsequent dependencies as the errors popped up, and that fixed it for me. This shouldn't happen, though.
I solved the problem by installing pip with conda before installing llama-cpp-python. I guess this is more a problem with conda than with llama-cpp-python (see conda/conda#10401 (comment)).
Interesting! I never had to manually install pip. Thanks, that works.
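A minimal sketch of that workaround, assuming a fresh Miniforge environment (the environment name is illustrative):

conda create -n myenv python=3.10
conda activate myenv
conda install pip
pip install llama-cpp-python

Installing pip through conda ensures the pip inside the environment is the one resolving llama-cpp-python's dependencies, rather than a base or system pip.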
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python to do.
When running the command
python3 -m llama_cpp.server --model model.gguf --port 8080
I expect the locally hosted model to be served at localhost:8080.
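For context, llama_cpp.server exposes an OpenAI-compatible API, so once the server is up, a request along these lines (prompt is a placeholder; port taken from the command above) should return a completion:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Hello"}]}'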
Current Behavior
Please provide a detailed written description of what llama-cpp-python did, instead.
ModuleNotFoundError: No module named 'uvicorn'
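One likely fix, sketched here on the assumption that the missing server dependencies are the cause: uvicorn, fastapi, and related packages are not pulled in by a bare pip install llama-cpp-python, but ship as an optional extra:

pip install 'llama-cpp-python[server]'

The quotes keep the shell (zsh on macOS by default) from interpreting the brackets as a glob.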
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
MacBook Pro (M1)
macOS Sequoia 15.2
Python 3.10
Miniforge3
GNU Make 3.81
Apple clang version 16.0.0 (clang-1600.0.26.6)
Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
conda create -n myenv python=3.10
conda activate myenv
pip install llama-cpp-python
python3 -m llama_cpp.server --model llama-3.1-q4_k_m.gguf --port 8080
A local clone of llama.cpp works perfectly fine and I can run inference, but I need to use llama-cpp-python for a project.
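Given the conda angle raised in the comments, a quick diagnostic worth running in the activated environment (a sketch, not part of the original report) is to confirm which pip is actually being used:

which pip
python3 -m pip --version

If the reported paths do not point inside the myenv environment, pip is coming from the base environment or the system, which matches the failure mode discussed in conda/conda#10401.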
Failure Logs
Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.