Used to report low-severity bugs in llama.cpp (e.g. cosmetic issues, non-critical UI glitches)
Used to report medium-severity bugs in llama.cpp (e.g. malfunctioning features that are still usable)
Indicates that this may be ready to merge soon and is just holding out in case of objections
Testing and feedback with results are needed
The OP should provide more details about the issue
Issues specific to consuming flake.nix, or generally concerned with ❄ Nix-based llama.cpp deployment
Issues specific to Nvidia GPUs
Marker for potentially obsolete PR
Qualcomm's QNN (AI Engine Direct) SDK
Further information is requested
Generally require in-depth knowledge of LLMs or GPUs
Trivial code changes that most beginner devs (or those who want a break) can tackle, e.g. a UI fix
Generally require more time to grok, but are manageable at beginner-to-intermediate expertise levels
GGUF split model sharding
https://en.wikipedia.org/wiki/SYCL - GPU programming language
Requires sync with the ggml repo after merging