
feat: implement inference server by using vllm #762


Triggered via pull request: October 23, 2024 23:34
Status: Cancelled
Total duration: 7d 1h 12m 45s
Artifacts: none listed

preset-image-build.yml

on: pull_request
determine-models (0s)
Matrix: build-models
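The job graph above shows a common fan-out pattern: a `determine-models` job computes the model list, and `build-models` expands it with a build matrix. A minimal sketch of how such a workflow could be wired, assuming job outputs feed the matrix; the job names match the run graph, but the step contents and model names are placeholders, not the repository's actual configuration:

```yaml
# Hypothetical sketch of preset-image-build.yml's fan-out structure.
name: preset-image-build

on: pull_request

jobs:
  determine-models:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set.outputs.matrix }}
    steps:
      - id: set
        # Placeholder model list; the real job presumably derives this
        # from the pull request's changed files or a preset registry.
        run: echo 'matrix=["model-a","model-b"]' >> "$GITHUB_OUTPUT"

  build-models:
    needs: determine-models
    runs-on: ubuntu-latest
    strategy:
      matrix:
        model: ${{ fromJson(needs.determine-models.outputs.matrix) }}
    steps:
      - run: echo "Building image for ${{ matrix.model }}"
```

With this shape, each entry emitted by `determine-models` becomes one `build-models` matrix job, which is why the run graph shows `Matrix: build-models` fanning out from a single upstream node.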

Annotations

1 error
determine-models: Canceled by the server.