- llama.cpp from https://github.com/ggerganov/llama.cpp with CUDA enabled (found under `/opt/llama.cpp`)
- Python bindings from https://github.com/abetlen/llama-cpp-python (found under `/opt/llama-cpp-python`)
Warning
Starting with version 0.1.79, the model format has changed from GGML to GGUF. Existing GGML models can be converted using the `convert-llama-ggmlv3-to-gguf.py` script in llama.cpp (or you can often find the GGUF conversions on HuggingFace Hub).
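For reference, a conversion might look something like the command below - the script location, flag names, and filenames here are illustrative and can differ between llama.cpp versions, so check the script's `--help` first:

```bash
# illustrative example - verify the script path and flags against your llama.cpp checkout
python3 /opt/llama.cpp/convert-llama-ggmlv3-to-gguf.py \
    --input  llama-2-7b.ggmlv3.q4_K_S.bin \
    --output llama-2-7b.Q4_K_S.gguf
```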
There are two branches of this container for backwards compatibility:

- `llama_cpp:gguf` (the default, which tracks upstream master)
- `llama_cpp:ggml` (which still supports the GGML model format)
There are a couple of patches applied to the legacy GGML fork:

- fixed `__fp16` typedef in llama.h on ARM64 (use `half` with NVCC)
- parsing of BOS/EOS tokens (see ggerganov/llama.cpp#1931)
You can use llama.cpp's built-in `main` tool to run GGUF models (from HuggingFace Hub or elsewhere):
./run.sh --workdir=/opt/llama.cpp/bin $(./autotag llama_cpp) /bin/bash -c \
'./main --model $(huggingface-downloader TheBloke/Llama-2-7B-GGUF/llama-2-7b.Q4_K_S.gguf) \
--prompt "Once upon a time," \
--n-predict 128 --ctx-size 192 --batch-size 192 \
--n-gpu-layers 999 --threads $(nproc)'
> the `--model` argument expects a .gguf filename (typically the Q4_K_S quantization is used)
> if you're trying to load Llama-2-70B, add the `--gqa 8` flag
To use the Python API and `benchmark.py` instead:
./run.sh --workdir=/opt/llama.cpp/bin $(./autotag llama_cpp) /bin/bash -c \
'python3 benchmark.py --model $(huggingface-downloader TheBloke/Llama-2-7B-GGUF/llama-2-7b.Q4_K_S.gguf) \
--prompt "Once upon a time," \
--n-predict 128 --ctx-size 192 --batch-size 192 \
--n-gpu-layers 999 --threads $(nproc)'
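`benchmark.py` drives the llama-cpp-python bindings; for reference, here is a minimal sketch of using those bindings directly from Python. It mirrors the model and parameters from the command above, and the keyword arguments may vary between llama-cpp-python versions:

```python
# minimal llama-cpp-python sketch (parameter values mirror the CLI example above)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# download the quantized GGUF weights from HuggingFace Hub (cached locally)
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-GGUF",
    filename="llama-2-7b.Q4_K_S.gguf",
)

# load the model, offloading all layers to the GPU
llm = Llama(
    model_path=model_path,
    n_gpu_layers=999,  # offload every layer to CUDA
    n_ctx=192,         # context size
    n_batch=192,       # batch size
)

# run a completion and print the generated text
output = llm("Once upon a time,", max_tokens=128)
print(output["choices"][0]["text"])
```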
| Model | Quantization | Memory (MB) |
|---|---|---|
| TheBloke/Llama-2-7B-GGUF | llama-2-7b.Q4_K_S.gguf | 5,268 |
| TheBloke/Llama-2-13B-GGUF | llama-2-13b.Q4_K_S.gguf | 8,609 |
| TheBloke/LLaMA-30b-GGUF | llama-30b.Q4_K_S.gguf | 19,045 |
| TheBloke/Llama-2-70B-GGUF | llama-2-70b.Q4_K_S.gguf | 37,655 |
CONTAINERS
| `llama_cpp:0.2.57` | |
|---|---|
| Aliases | `llama_cpp` |
| Requires | L4T ['>=34.1.0'] |
| Dependencies | build-essential cuda cudnn python cmake numpy huggingface_hub |
| Dependants | langchain langchain:samples text-generation-webui:1.7 text-generation-webui:6a7cd01 text-generation-webui:main |
| Dockerfile | Dockerfile |
CONTAINER IMAGES
| Repository/Tag | Date | Arch | Size |
|---|---|---|---|
| dustynv/llama_cpp:ggml-r35.2.1 | 2023-12-05 | arm64 | 5.2GB |
| dustynv/llama_cpp:ggml-r35.3.1 | 2023-12-06 | arm64 | 5.2GB |
| dustynv/llama_cpp:ggml-r35.4.1 | 2023-12-19 | arm64 | 5.2GB |
| dustynv/llama_cpp:ggml-r36.2.0 | 2023-12-19 | arm64 | 5.1GB |
| dustynv/llama_cpp:gguf-r35.2.1 | 2023-12-15 | arm64 | 5.1GB |
| dustynv/llama_cpp:gguf-r35.3.1 | 2023-12-19 | arm64 | 5.2GB |
| dustynv/llama_cpp:gguf-r35.4.1 | 2023-12-15 | arm64 | 5.1GB |
| dustynv/llama_cpp:gguf-r36.2.0 | 2023-12-19 | arm64 | 5.1GB |
| dustynv/llama_cpp:r35.2.1 | 2023-08-29 | arm64 | 5.2GB |
| dustynv/llama_cpp:r35.3.1 | 2023-08-15 | arm64 | 5.2GB |
| dustynv/llama_cpp:r35.4.1 | 2023-08-13 | arm64 | 5.1GB |
| dustynv/llama_cpp:r36.2.0 | 2024-02-22 | arm64 | 5.3GB |
Container images are compatible with other minor versions of JetPack/L4T:
• L4T R32.7 containers can run on other versions of L4T R32.7 (JetPack 4.6+)
• L4T R35.x containers can run on other versions of L4T R35.x (JetPack 5.1+)
RUN CONTAINER
To start the container, you can use `jetson-containers run` and `autotag`, or manually put together a `docker run` command:
# automatically pull or build a compatible container image
jetson-containers run $(autotag llama_cpp)
# or explicitly specify one of the container images above
jetson-containers run dustynv/llama_cpp:r36.2.0
# or if using 'docker run' (specify image and mounts/etc)
sudo docker run --runtime nvidia -it --rm --network=host dustynv/llama_cpp:r36.2.0
`jetson-containers run` forwards arguments to `docker run` with some defaults added (like `--runtime nvidia`, mounting a `/data` cache, and detecting devices)

`autotag` finds a container image that's compatible with your version of JetPack/L4T - either locally, pulled from a registry, or by building it.
To mount your own directories into the container, use the `-v` or `--volume` flags:
jetson-containers run -v /path/on/host:/path/in/container $(autotag llama_cpp)
To launch the container running a command, as opposed to an interactive shell:
jetson-containers run $(autotag llama_cpp) my_app --abc xyz
You can pass any options to it that you would to `docker run`, and it'll print out the full command that it constructs before executing it.
BUILD CONTAINER
If you use `autotag` as shown above, it'll ask to build the container for you if needed. To manually build it, first do the system setup, then run:
jetson-containers build llama_cpp
The dependencies from above will be built into the container, and it'll be tested during the build. Run it with `--help` for build options.