
Commit

Update utils.py (#174)
Fix an error in the Docker build.

---------

Co-authored-by: dhuangnm <[email protected]>
dhuangnm authored Apr 8, 2024
1 parent a3297db commit e752ec7
Showing 1 changed file with 5 additions and 3 deletions.
vllm/utils.py (5 additions, 3 deletions):

@@ -119,11 +119,13 @@ def is_hip() -> bool:
 
 @lru_cache(maxsize=None)
 def is_cpu() -> bool:
-    from importlib.metadata import version
+    from importlib.metadata import PackageNotFoundError, version
 
     # UPSTREAM SYNC: needed for nm-vllm
-    is_cpu_flag = "cpu" in version("nm-vllm")
-    return is_cpu_flag
+    try:
+        return "cpu" in version("nm-vllm")
+    except PackageNotFoundError:
+        return False

1 comment on commit e752ec7

@github-actions

bigger_is_better

Common configuration for both benchmarks: VLLM Engine throughput, synthetic; model NousResearch/Llama-2-7b-chat-hf; max_model_len 4096; input-len 256; output-len 128; num-prompts 1000; GPU NVIDIA A10G x 1; vllm 0.2.0; Python 3.10.12; torch 2.1.2+cu121.

Metric              Current (e752ec7)            Previous (58811df)           Ratio
request_throughput  3.944863382305427 prompts/s  3.948645440160918 prompts/s  1.00
token_throughput    1514.8275388052841 tokens/s  1516.2798490217926 tokens/s  1.00

This comment was automatically generated by workflow using github-action-benchmark.
