llama.cpp — server
sha256:e2eb7a9e0dc79f03fbc13fc06995fffc4d1a76eba70df927aee7797b3373a98b
Install from the command line:
$ docker pull ghcr.io/septa2112/llama.cpp:server
Or use it as a base image in a Dockerfile:
FROM ghcr.io/septa2112/llama.cpp:server
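The base-image line above can be extended into a minimal Dockerfile. This is a sketch only: the model file and the `/models` path are placeholders supplied by you, not contents of this package.

```dockerfile
# Minimal sketch: extend the server image with a baked-in model.
# model.gguf is a hypothetical local file; /models is an arbitrary
# destination directory, not something this image defines.
FROM ghcr.io/septa2112/llama.cpp:server
COPY model.gguf /models/model.gguf
```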
linux/amd64
$ docker pull ghcr.io/septa2112/llama.cpp:server--b1-c35e586@sha256:153333d7d66928bfb650677b071ed70fbcae7366815d7bf28750887efffa3795
linux/arm64
$ docker pull ghcr.io/septa2112/llama.cpp:server--b1-c35e586@sha256:f6756d33b5c5e08522b50be78d85c73ba0ff80902f9d579cd048592b008684aa
unknown/unknown
$ docker pull ghcr.io/septa2112/llama.cpp:server--b1-c35e586@sha256:bdd756af0d764746950d9e6ee22feed3bd344f60623b253c6ece93d2afc27d2d
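After pulling the image for your platform, it can be run directly. The invocation below is a hedged sketch: it assumes the image's entrypoint is the upstream llama.cpp server binary (which accepts flags such as `-m`, `--host`, and `--port`), and `model.gguf` is a placeholder for a GGUF model you provide; neither the model nor the `./models` directory ships with this image.

```shell
# Mount a local models directory and publish the server port.
# Assumptions: the entrypoint is the llama.cpp HTTP server, and
# ./models/model.gguf is a GGUF model file you supply yourself.
docker run --rm -p 8080:8080 -v "$PWD/models:/models" \
  ghcr.io/septa2112/llama.cpp:server \
  -m /models/model.gguf --host 0.0.0.0 --port 8080
```

If those assumptions hold, the server then answers HTTP requests on localhost port 8080.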
Details
- llama.cpp
- Septa2112
- Septa2112/llama.cpp
- 10 days ago
Download activity
- Total downloads 0
- Last 30 days 0
- Last week 0
- Today 0