feat: implement inference server by using vllm #760

Triggered via pull request on October 23, 2024 21:50
Status: Cancelled
Total duration: 1h 27m 51s

preset-image-build.yml

on: pull_request
determine-models (0s)
Matrix: build-models
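
The job graph above shows a common fan-out pattern: determine-models computes which images to build, and build-models expands into one job per model. A minimal sketch of how preset-image-build.yml might wire this up, assuming determine-models publishes the model list as a JSON job output (the output name, runner label, and model names here are illustrative assumptions; the workflow name is taken from the annotation below):

name: Build and Push Preset Models

on: pull_request

jobs:
  determine-models:
    runs-on: ubuntu-latest
    outputs:
      # Hypothetical output carrying the models to build as a JSON array.
      matrix: ${{ steps.set.outputs.matrix }}
    steps:
      - id: set
        run: echo 'matrix=["model-a","model-b"]' >> "$GITHUB_OUTPUT"

  build-models:
    needs: determine-models
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Fan out: one build job per entry produced by determine-models.
        model: ${{ fromJson(needs.determine-models.outputs.matrix) }}
    steps:
      - run: echo "Building ${{ matrix.model }}"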

Annotations

1 error
determine-models: Canceling since a higher priority waiting request for 'Build and Push Preset Models-zhuangqh/support-vllm' exists
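
This message is GitHub Actions concurrency cancellation: the group name combines the workflow name ('Build and Push Preset Models') with the PR branch (zhuangqh/support-vllm), so a newer push to the same branch supersedes and cancels this run. A minimal sketch of a concurrency block that would produce this behavior; the exact group expression used by the workflow is an assumption:

concurrency:
  # Resolves to e.g. "Build and Push Preset Models-zhuangqh/support-vllm";
  # a newer run in the same group cancels this one. The expression below is
  # an assumed reading of the annotation above, not confirmed from the workflow.
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true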