
Actions: vllm-project/vllm

codespell

3,880 workflow run results


[DO NOT MERGE] VLM offline benchmark with MMMU-Pro vision
codespell #3893: Pull request #11196 synchronize by comaniac
December 16, 2024 22:51 · 20s · ywang96:mmmu-pro-offline
[core] platform agnostic executor
codespell #3892: Pull request #11243 opened by youkaichao
December 16, 2024 22:46 · 22s · youkaichao:remove_allargs
[Docs] hint to enable use of GPU performance counters in profiling to…
codespell #3887: Commit 35ffa68 pushed by mgoin
December 16, 2024 22:20 · 21s · main
[Bugfix] Fix request cancellation without polling
codespell #3886: Pull request #11190 synchronize by joerunde
December 16, 2024 22:19 · 21s · joerunde:cancel-fix
[V1] Prefix caching for multimodal language models
codespell #3885: Pull request #11187 synchronize by comaniac
December 16, 2024 22:17 · 20s · comaniac:v1-vlm-cache
[Bugfix] Fix request cancellation without polling
codespell #3884: Pull request #11190 synchronize by joerunde
December 16, 2024 22:09 · 25s · joerunde:cancel-fix
[V1] Prefix caching for multimodal language models
codespell #3883: Pull request #11187 synchronize by comaniac
December 16, 2024 22:01 · 25s · comaniac:v1-vlm-cache
[V1][VLM] Proper memory profiling for image language models
codespell #3882: Pull request #11210 synchronize by ywang96
December 16, 2024 21:53 · 22s · ywang96:mm_profiling