feat: implement inference server by using vllm #760
Triggered via pull request on October 23, 2024 at 21:50
Status: Cancelled
Total duration: 1h 27m 51s
Artifacts: –
preset-image-build.yml
on: pull_request
Jobs:
- determine-models (0s)
- Matrix: build-models
Annotations: 1 error
- determine-models: Canceling since a higher priority waiting request for 'Build and Push Preset Models-zhuangqh/support-vllm' exists
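This kind of cancellation is produced by a `concurrency` setting in the workflow, which cancels or supersedes an in-flight run when a newer run claims the same group. A minimal sketch of such a setting (the group expression here is illustrative, not taken from `preset-image-build.yml`):

```yaml
# Hypothetical concurrency block: runs sharing the same group key
# (workflow name + branch) preempt one another, so an older run is
# cancelled when a newer push or PR update queues a higher-priority run.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.ref_name }}
  cancel-in-progress: true
```

With `cancel-in-progress: true`, GitHub cancels the running job outright; without it, the newer run waits in the queue and only the pending (not yet started) run is superseded.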