feat: implement inference server by using vllm #746
Triggered via pull request — October 14, 2024 22:35
Status: Cancelled
Total duration: 10h 3m 5s
Artifacts: –
Workflow: preset-image-build.yml (on: pull_request)
Jobs: determine-models (0s) · Matrix: build-models
Annotations (1 error)
determine-models: Canceling since a higher priority waiting request for 'Build and Push Preset Models-zhuangqh/support-vllm' exists
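The cancellation annotation is the standard behavior of a GitHub Actions `concurrency` group: when a newer run is queued for the same group (here, workflow name plus branch, "Build and Push Preset Models-zhuangqh/support-vllm"), the older run is cancelled. A minimal sketch of the kind of configuration that produces this message — the job layout and trigger details are assumptions, not copied from the actual preset-image-build.yml:

```yaml
# Hypothetical sketch of a workflow whose concurrency group yields the
# "Canceling since a higher priority waiting request ... exists" annotation.
name: Build and Push Preset Models

on:
  pull_request:

concurrency:
  # One active run per workflow + PR branch; pushing a new commit to the
  # branch queues a higher-priority run and cancels the in-progress one.
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true

jobs:
  determine-models:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
```

With `cancel-in-progress: true`, a long-running build (such as the 10h run above) is terminated as soon as a newer commit on the same branch triggers the workflow, which matches the cancelled status shown.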