Issues: abetlen/llama-cpp-python
#1872: Using a pre-built wheel currently requires specifying the right version, e.g. llama-cpp-python==0.3.4 (opened Dec 19, 2024 by eeegnu; see the install example after this list)
#1869: chatml-function-calling not adding tool description to the prompt (opened Dec 16, 2024 by undo76, 2 comments)
#1865: Confusion regarding the operation/terminology of speculative decoding and sampling (opened Dec 15, 2024 by MushroomHunting)
#1857: server chat/completion API fails with "coroutine object not callable" in llama_proxy (opened Dec 9, 2024 by PurnaChandraPanda)
"tool_calls" not returning on native http request on a llama cpp server
#1856
opened Dec 7, 2024 by
celsowm
4 tasks done
#1855: llama_get_logits_ith: invalid logits id -1, reason: no logits (opened Dec 7, 2024 by devashishraj)
#1853: With an Intel GPU on Windows, llama_perf_context_print reports invalid performance metrics (opened Dec 2, 2024 by dnoliver)
#1852: Windows with Intel GPU fails to build if Ninja is not the selected backend (opened Dec 2, 2024 by dnoliver)
#1851: Intel GPU not enabled when using -DLLAVA_BUILD=OFF (opened Dec 2, 2024 by dnoliver; see the build example after this list)
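
A note on #1872: installing a pre-built wheel currently means pinning an exact version rather than letting pip resolve the latest one. A minimal sketch, using the version quoted in the issue title together with the CPU wheel index documented in the project README:

    # Pin the exact wheel version named in the issue title.
    pip install llama-cpp-python==0.3.4 \
      --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu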
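A note on #1856: "native HTTP request" here means calling the server's OpenAI-compatible chat completions endpoint directly instead of going through a client SDK. A hedged sketch of such a request; the host, port, model name, and the get_weather tool are illustrative assumptions, not taken from the issue:

    # Hypothetical tool-calling request against a locally running server.
    curl http://localhost:8000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "llama",
        "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
        "tools": [{
          "type": "function",
          "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"]
            }
          }
        }]
      }'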
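A note on #1851: llama-cpp-python forwards build options to CMake through the CMAKE_ARGS environment variable, so a source build combining the flag from the issue title with Intel GPU support might look like the sketch below. Only -DLLAVA_BUILD=OFF comes from the issue title; the SYCL switch is an assumption based on llama.cpp's GGML_SYCL option.

    # Sketch: GGML_SYCL is assumed to be llama.cpp's Intel GPU (SYCL) switch;
    # -DLLAVA_BUILD=OFF is the flag named in the issue title.
    CMAKE_ARGS="-DGGML_SYCL=ON -DLLAVA_BUILD=OFF" pip install llama-cpp-python --no-cache-dir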