feat: implement inference server by using vllm #1269

Triggered via pull request October 14, 2024 22:35
Status: Cancelled
Total duration: 10h 3m 6s
Artifacts

kaito-e2e.yml

on: pull_request
Matrix: run-e2e
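
The workflow triggers on pull_request and fans the e2e job out through a matrix (run-e2e). Below is a minimal sketch of how such a matrix job might look; the key name node-provisioner and the make target are illustrative assumptions, not the actual contents of kaito-e2e.yml:

```yaml
# Hypothetical sketch of the run-e2e matrix job; the real kaito-e2e.yml may differ.
on: pull_request

jobs:
  run-e2e:
    strategy:
      matrix:
        # One entry per provisioner under test; this run shows "run-e2e (gpuprovisioner)".
        node-provisioner: [gpuprovisioner]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: e2e-tests-${{ matrix.node-provisioner }}
        # Assumed entry point for the e2e suite; for illustration only.
        run: make e2e-test
```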

Annotations

1 error
run-e2e (gpuprovisioner) / e2e-tests-gpuprovisioner
Canceling since a higher priority waiting request for 'pr-e2e-test-zhuangqh/support-vllm' exists
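
This sole error is not a test failure: it is the message GitHub Actions emits when a run is superseded within its concurrency group, here 'pr-e2e-test-zhuangqh/support-vllm' (a newer run for the same PR branch took priority). A minimal sketch of the kind of concurrency block that yields such a group name follows; the expressions and the cancel-in-progress setting are assumptions, not the actual kaito-e2e.yml:

```yaml
# Sketch only; the real workflow may build the group name differently.
concurrency:
  # Key the group on the PR's source fork and branch so each PR has at most one
  # active e2e run; here the group resolved to 'pr-e2e-test-zhuangqh/support-vllm'.
  group: pr-e2e-test-${{ github.event.pull_request.head.repo.owner.login }}/${{ github.head_ref }}
  # Assumed: allow a newer run for the same group to replace an older one.
  cancel-in-progress: true
```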