rename wheels to manylinux and remove unused action (#167) (#168)

Co-authored-by: dhuangnm <[email protected]>
dhuangnm and dhuangnm authored Apr 5, 2024
1 parent 92356b3 commit a3297db
Showing 2 changed files with 3 additions and 49 deletions.
48 changes: 0 additions & 48 deletions .github/actions/nm-build-vllm-whl/action.yml

This file was deleted.

4 changes: 3 additions & 1 deletion .github/actions/nm-build-vllm/action.yml
@@ -48,7 +48,9 @@ runs:
      BASE=$(./.github/scripts/convert-version ${{ inputs.python }})
      ls -alh dist
      WHL_FILEPATH=$(find dist -iname "*${BASE}*.whl")
-     WHL=$(basename ${WHL_FILEPATH})
+     RENAME=$(echo ${WHL_FILEPATH} | sed -e 's/linux_x86_64/manylinux_2_17_x86_64/')
+     mv ${WHL_FILEPATH} ${RENAME}
+     WHL=$(basename ${RENAME})
      echo "whl=${WHL}" >> "$GITHUB_OUTPUT"
      if [ ${SUCCESS} -ne 0 ]; then
        exit 1
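The rename step in this diff can be sketched in isolation as below. The wheel filename here is a made-up example (the real one is discovered with `find` at build time), but the `sed` substitution mirrors the expression in the action: wheels built on a bare Linux host carry a generic `linux_x86_64` platform tag, and retagging them as `manylinux_2_17_x86_64` marks them as conforming to the manylinux_2_17 policy so pip and package indexes will accept them.

```shell
# Hypothetical wheel filename, for illustration only.
WHL_FILEPATH="dist/vllm-0.2.0-cp310-cp310-linux_x86_64.whl"

# Rewrite the platform tag, as the action does.
RENAME=$(echo "${WHL_FILEPATH}" | sed -e 's/linux_x86_64/manylinux_2_17_x86_64/')

# Strip the directory to get the published wheel name.
WHL=$(basename "${RENAME}")
echo "${WHL}"   # vllm-0.2.0-cp310-cp310-manylinux_2_17_x86_64.whl
```

Note that this only renames the file; it does not audit the wheel's shared-library dependencies against the manylinux_2_17 (glibc >= 2.17) baseline, which is what a tool like `auditwheel repair` would verify.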

1 comment on commit a3297db

@github-actions

bigger_is_better

Benchmark suite — Current: a3297db, Previous: 3d151aa

Common configuration for both benchmarks: VLLM Engine throughput (synthetic); model: NousResearch/Llama-2-7b-chat-hf; max_model_len: 4096; benchmark_throughput parameters: use-all-available-gpus, input-len: 256, output-len: 128, num-prompts: 1000; GPU: NVIDIA A10G x 1; vllm_version: 0.2.0; python_version: 3.10.12 (main, Mar 7 2024, 18:39:53) [GCC 9.4.0]; torch_version: 2.1.2+cu121

request_throughput: 3.9495861322289345 prompts/s
token_throughput: 1516.641074775911 tokens/s

This comment was automatically generated by a workflow using github-action-benchmark.
