
Dependency issues with uvicorn #1892

Open
4 tasks done
jaslatendresse opened this issue Jan 8, 2025 · 4 comments

Comments

@jaslatendresse

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python to do.

When running the command `python3 -m llama_cpp.server --model model.gguf --port 8080`, I expect the locally hosted model to be served at localhost:8080.

Current Behavior

Please provide a detailed written description of what llama-cpp-python did, instead.

ModuleNotFoundError: No module named 'uvicorn'

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

  • Physical (or virtual) hardware you are using

MacBook Pro (M1)

  • Operating System

macOS Sequoia 15.2

  • SDK version

python 3.10
miniforge3
GNU Make 3.81
Apple clang version 16.0.0 (clang-1600.0.26.6)

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

  1. conda create -n myenv python=3.10
  2. conda activate myenv
  3. pip install llama-cpp-python
  4. python3 -m llama_cpp.server --model llama-3.1-q4_k_m.gguf --port 8080

Local clone of llama.cpp works perfectly fine and I can do inference, but I need to use llama-cpp-python for a project.

Failure Logs

Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.

Traceback (most recent call last):
  File "/Users/jasminelatendresse/miniforge3/envs/server310/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/jasminelatendresse/miniforge3/envs/server310/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/jasminelatendresse/miniforge3/envs/server310/lib/python3.10/site-packages/llama_cpp/server/__main__.py", line 31, in <module>
    import uvicorn
ModuleNotFoundError: No module named 'uvicorn'
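Before starting the server, it can help to confirm exactly which of its import-time dependencies are absent from the active interpreter. The sketch below probes for them with `importlib`; the module list is taken from the errors reported later in this thread, and `llama-cpp-python[server]` is the optional extra the project documents for pulling in the server dependencies:

```python
import importlib.util

# Modules the llama_cpp.server entry point needs at import time
# (names taken from the errors reported in this thread).
SERVER_DEPS = ("uvicorn", "fastapi", "starlette", "anyio", "sse_starlette")

def missing_modules(names=SERVER_DEPS):
    """Return the subset of `names` the current interpreter cannot import."""
    return [name for name in names if importlib.util.find_spec(name) is None]

if __name__ == "__main__":
    missing = missing_modules()
    if missing:
        print("Missing:", ", ".join(missing))
        print("Try: pip install 'llama-cpp-python[server]'")
    else:
        print("All server dependencies are importable.")
```

Running this in the same environment as the failing command shows whether the problem is one missing package or several at once.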

@BenjaminMarechalEVITECH

Same issue here.
`pip install uvicorn` does not fix it; other modules are missing as well, such as anyio, starlette, fastapi, sse_starlette, ...

@jaslatendresse
Author

> Same issue here. `pip install uvicorn` does not fix it, other modules are missing, like anyio, starlette, fastapi, sse_starlette, ...

I manually installed each missing dependency as the errors appeared, and that fixed it for me. This shouldn't happen, though.

@BenjaminMarechalEVITECH

BenjaminMarechalEVITECH commented Jan 15, 2025

I solved the problem by installing pip with conda before installing llama-cpp-python:

  • `conda activate myenv`
  • `conda install pip`
  • `pip install llama-cpp-python`

I guess this is more a problem with conda than with llama-cpp-python (see conda/conda#10401 (comment)).
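A quick way to check for the failure mode described above (an environment without its own pip silently falling back to a base or system pip, so packages land outside the env) is to compare where the shell resolves `pip` and `python3`. A minimal sketch, assuming a POSIX shell and the env name from this thread:

```shell
# Show which executables the shell resolves; inside a healthy conda env
# both should live under the same environment prefix
# (e.g. .../miniforge3/envs/myenv/bin).
echo "pip:    $(command -v pip)"
echo "python: $(command -v python3)"

# Installing through the interpreter itself ("python3 -m pip install ...")
# guarantees packages land in the environment that will later import them.
```

If the two paths point at different prefixes, `python3 -m pip install llama-cpp-python` sidesteps the mismatch regardless of which pip the shell finds first.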

@jaslatendresse
Author

> I solved the problem by installing pip with conda before installing llama-cpp-python
>
> * `conda activate myenv`
> * `conda install pip`
> * `pip install llama-cpp-python`
>
> I guess this is more a problem with conda than with llama-cpp-python (see conda/conda#10401 (comment)).

Interesting! I never had to manually install pip. Thanks, that works.
