Actions: PygmalionAI/aphrodite-engine · ruff workflow

1,641 workflow runs

torch.compile: fix functionalization
ruff #1604: Pull request #1045 opened by AlpinDale
December 27, 2024 03:25 · 26s · fix_functionalization

model: add support for MiniCPM-3 (#1044)
ruff #1603: Commit ce7b602 pushed by AlpinDale
December 27, 2024 03:25 · 26s · main

model: add support for MiniCPM-3
ruff #1602: Pull request #1044 opened by AlpinDale
December 27, 2024 03:17 · 22s · minicpm3

rocm: add custom paged attention kernels for ROCm (#1043)
ruff #1601: Commit 4a7cb8f pushed by AlpinDale
December 27, 2024 03:08 · 26s · main

rocm: add custom paged attention kernels for ROCm
ruff #1600: Pull request #1043 opened by AlpinDale
December 27, 2024 03:08 · 27s · rocm_paged_attn

xpu: bump IPEX to 2.3, support GQA (#1042)
ruff #1599: Commit 6951928 pushed by AlpinDale
December 27, 2024 02:58 · 23s · main

xpu: bump IPEX to 2.3, support GQA
ruff #1598: Pull request #1042 opened by AlpinDale
December 27, 2024 02:58 · 25s · ipex_23

torch.compile: allow adding custom compile backends via plugins
ruff #1596: Pull request #1041 opened by AlpinDale
December 27, 2024 02:51 · 22s · compile_plugin

fix: skip loading extra bias for Qwen2-VL GPTQ (#1040)
ruff #1595: Commit e3f5bae pushed by AlpinDale
December 27, 2024 02:48 · 23s · main

fix: skip loading extra bias for Qwen2-VL GPTQ
ruff #1594: Pull request #1040 opened by AlpinDale
December 27, 2024 02:48 · 24s · qwen2vl-bias

tests: map physical device indices for test utils
ruff #1593: Commit 18acf7e pushed by AlpinDale
December 27, 2024 02:44 · 21s · main

core: factor out input preprocessing into a separate class (#1039)
ruff #1592: Commit 05be608 pushed by AlpinDale
December 27, 2024 02:42 · 25s · main

fix: grouped_topk return type (#1038)
ruff #1590: Commit fd07406 pushed by AlpinDale
December 27, 2024 02:27 · 26s · main

fix: grouped_topk return type
ruff #1589: Pull request #1038 opened by AlpinDale
December 27, 2024 02:27 · 23s · fix_grouped_topk

fix: multi-step + flashinfer with cuda graphs (#1036)
ruff #1586: Commit c951a54 pushed by AlpinDale
December 27, 2024 02:20 · 27s · main

fix: multi-step + flashinfer with cuda graphs
ruff #1585: Pull request #1036 opened by AlpinDale
December 27, 2024 02:20 · 26s · flashinfer_graph

model: add support for DeepSeek-V3 model
ruff #1582: Pull request #1034 opened by AlpinDale
December 27, 2024 01:31 · 28s · deepseek_v3

multi-step: add support for flashinfer attention backend (#1033)
ruff #1581: Commit 1390915 pushed by AlpinDale
December 27, 2024 00:42 · 24s · main