Actions: ggerganov/llama.cpp

CI

11,704 workflow runs

Fix missing file renames in Makefile due to changes in commit ae8de6d…
CI #16843: Commit 3952a22 pushed by slaren
November 19, 2024 22:18 · 1h 22m 3s · master

llama : add .clang-format file
CI #16842: Pull request #10415 opened by slaren
November 19, 2024 22:17 · 57m 11s · sl/clang-format

add cmake rvv support (#10411)
CI #16840: Commit 42ae10b pushed by slaren
November 19, 2024 20:10 · 1h 21m 22s · master

sync : ggml
CI #16839: Commit 9fe0fb0 pushed by ggerganov
November 19, 2024 18:03 · 1h 38m 34s · master

cmake: force MSVC compiler charset to utf-8 (#9989)
CI #16838: Commit 342397d pushed by slaren
November 19, 2024 17:42 · 1h 53m 17s · master

sync : ggml
CI #16837: Pull request #10412 opened by ggerganov
November 19, 2024 17:16 · 1h 51m 28s · sync

Add required ggml-base and backend libs to cmake pkg (#10407)
CI #16836: Commit 2a11b6b pushed by slaren
November 19, 2024 16:10 · 2h 53m 2s · master

cmake: Add RISC-V compiler support
CI #16835: Pull request #10411 opened by lhpqaq
November 19, 2024 16:06 · 58m 58s · lhpqaq:rvv

vulkan: copy iq4_nl LUT into shared memory
CI #16834: Pull request #10409 opened by jeffbolznv
November 19, 2024 15:05 · 6h 17m 38s · jeffbolznv:iq4_nl

cuda : fix CUDA_FLAGS not being applied (#10403)
CI #16831: Commit 3ee6382 pushed by slaren
November 19, 2024 13:29 · 54m 54s · master

cuda : fix CUDA_FLAGS not being applied
CI #16830: Pull request #10403 opened by slaren
November 19, 2024 12:28 · 59m 45s · sl/fix-cmake-cuda-flags

llama : handle KV shift for recurrent models
CI #16829: Pull request #10402 opened by ggerganov
November 19, 2024 12:20 · 53m 7s · gg/llama-can-shift-cont

llama : add check for KV cache shifts (#10401)
CI #16828: Commit 8e752a7 pushed by ggerganov
November 19, 2024 11:29 · 1h 22m 33s · master

llama : add check for KV cache shifts
CI #16827: Pull request #10401 opened by ggerganov
November 19, 2024 10:02 · 51m 32s · gg/llama-can-shift

llama : add OLMo November 2024 support (#10394)
CI #16826: Commit a88ad00 pushed by ggerganov
November 19, 2024 09:04 · 58m 33s · master

sycl : Add option to set the SYCL architecture for all targets (#10266)
CI #16825: Commit 2a1507c pushed by Alcpz
November 19, 2024 08:02 · 1h 36m 46s · master

vulkan: Optimize soft_max (#10301)
CI #16824: Commit b3e5859 pushed by 0cc4m
November 19, 2024 07:25 · 1h 36m 6s · master

CANN Support Ascend310P to accelerate F32 and F16 LLM Model
CI #16823: Pull request #10216 synchronized by leo-pony
November 19, 2024 07:23 · 1h 16m 52s · leo-pony:ascend310PAdapt

CANN: Add Ascend CANN build ci
CI #16822: Pull request #10217 synchronized by xuedinge233
November 19, 2024 07:09 · 1h 21m 43s · xuedinge233:master

CANN Support Ascend310P to accelerate F32 and F16 LLM Model
CI #16821: Pull request #10216 synchronized by leo-pony
November 19, 2024 07:09 · 14m 45s · leo-pony:ascend310PAdapt

CANN: Add Ascend CANN build ci
CI #16820: Pull request #10217 synchronized by xuedinge233
November 19, 2024 03:11 · 58m 30s · xuedinge233:master

sycl: Revert MUL_MAT_OP support changes (#10385)
CI #16819: Commit 557924f pushed by NeoZhangJianyu
November 19, 2024 00:50 · 56m 43s · master
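
A listing like the one above can also be pulled programmatically. The sketch below uses GitHub's real REST endpoint `GET /repos/{owner}/{repo}/actions/runs`; the helpers `fetch_runs` and `format_run` are hypothetical names written for illustration, and the exact line format is an approximation of the page, not GitHub's API output.

```python
# Sketch: fetch and format GitHub Actions workflow runs, assuming the
# documented fields of GET /repos/{owner}/{repo}/actions/runs
# (display_title, name, run_number, event, head_sha, actor, head_branch).
import json
import urllib.request


def fetch_runs(owner: str, repo: str, per_page: int = 25) -> list[dict]:
    """Fetch the most recent workflow runs for a repository (network call)."""
    url = (f"https://api.github.com/repos/{owner}/{repo}"
           f"/actions/runs?per_page={per_page}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["workflow_runs"]


def format_run(run: dict) -> str:
    """Render one run roughly in the style of the listing above."""
    if run["event"] == "push":
        action = f"Commit {run['head_sha'][:7]} pushed by"
    else:
        # e.g. "Pull_request event triggered by" for pull_request events
        action = f"{run['event'].capitalize()} event triggered by"
    return (f"{run['display_title']}\n"
            f"{run['name']} #{run['run_number']}: {action} "
            f"{run['actor']['login']} on {run['head_branch']}")


if __name__ == "__main__":
    # Offline sample shaped like one API item (values taken from the listing).
    sample = {
        "display_title": "sync : ggml",
        "name": "CI",
        "run_number": 16839,
        "event": "push",
        "head_sha": "9fe0fb0aa00000",
        "actor": {"login": "ggerganov"},
        "head_branch": "master",
    }
    print(format_run(sample))
    # Prints:
    # sync : ggml
    # CI #16839: Commit 9fe0fb0 pushed by ggerganov on master
```

Calling `fetch_runs("ggerganov", "llama.cpp")` and mapping `format_run` over the result would reproduce a listing like this one; unauthenticated requests are rate-limited, so a token header may be needed for repeated use.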