
Commit eea1811
Fix `make style` error
Signed-off-by: yuanwu <[email protected]>
yuanwu2017 committed Sep 14, 2024
1 parent 285ee0a commit eea1811
1 changed file: optimum/gptq/quantizer.py (1 addition, 3 deletions)
@@ -546,9 +546,7 @@ def tmp(_, input, output):
 
         if self.bits == 4:
             # device not on gpu
-            if device.type != "cuda" or (
-                has_device_map and any(d in devices for d in ["cpu", "disk", "hpu"])
-            ):
+            if device.type != "cuda" or (has_device_map and any(d in devices for d in ["cpu", "disk", "hpu"])):
                 if not self.disable_exllama:
                     logger.warning(
                         "Found modules on cpu/disk. Using Exllama/Exllamav2 backend requires all the modules to be on GPU. Setting `disable_exllama=True`"
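
For context, the condition reformatted in this commit decides whether the GPTQ quantizer can keep the Exllama/Exllamav2 kernels enabled; those kernels require every 4-bit module to live on a CUDA device. Below is a minimal, illustrative sketch of that decision in isolation. The names `device`, `has_device_map`, `devices`, and the cpu/disk/hpu check come from the diff; the standalone helper and the `hf_device_map` argument are assumptions made for the example, not the actual quantizer API.

    from typing import Optional

    import torch


    def should_disable_exllama(device: torch.device, hf_device_map: Optional[dict] = None) -> bool:
        """Return True when the Exllama/Exllamav2 backend must be disabled.

        The backend needs every 4-bit module on a CUDA device, so a non-CUDA
        main device or any module offloaded to cpu/disk/hpu rules it out.
        """
        has_device_map = hf_device_map is not None
        devices = set(hf_device_map.values()) if has_device_map else set()
        return device.type != "cuda" or (has_device_map and any(d in devices for d in ["cpu", "disk", "hpu"]))


    # Example: a model with its lm_head offloaded to CPU cannot use the Exllama kernels.
    print(should_disable_exllama(torch.device("cuda:0"), {"model.layers.0": 0, "lm_head": "cpu"}))  # True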
