Disable Qwen2 VL test for with logits conv test (#463)
## Summary
<!--- This is a required section; please describe the main purpose of
this proposed code change. --->

<!---
## Details
This is an optional section; is there anything specific that reviewers
should be aware of?
--->

## Testing Done
<!--- This is a required section; please describe how this change was
tested. --->

<!-- 
Replace BLANK with your device type. For example, A100-80G-PCIe

Complete the following tasks before sending your PR, and replace `[ ]`
with
`[x]` to indicate you have done them. 
-->

- Hardware Type: <BLANK>
- [ ] run `make test` to ensure correctness
- [ ] run `make checkstyle` to ensure code style
- [ ] run `make test-convergence` to ensure convergence
ByronHsu authored Dec 10, 2024
1 parent 62a3c7d commit c33583a
Showing 1 changed file with 9 additions and 4 deletions.
13 changes: 9 additions & 4 deletions test/convergence/test_mini_models_with_logits.py
@@ -18,7 +18,9 @@
 
 import pytest
 import torch
+import transformers
 from datasets import load_from_disk
+from packaging import version
 from torch.utils.data import DataLoader
 from transformers.models.gemma import GemmaConfig, GemmaForCausalLM
 from transformers.models.gemma2 import Gemma2Config, Gemma2ForCausalLM
@@ -538,8 +540,9 @@ def run_mini_model(
             5e-3,
             1e-5,
             marks=pytest.mark.skipif(
-                not QWEN2_VL_AVAILABLE,
-                reason="Qwen2-VL not available in this version of transformers",
+                not QWEN2_VL_AVAILABLE
+                or version.parse(transformers.__version__) >= version.parse("4.47.0"),
+                reason="Qwen2-VL not available in this version of transformers or transformers version >= 4.47.0",
             ),
         ),
         pytest.param(
@@ -558,8 +561,10 @@ def run_mini_model(
                 not supports_bfloat16(), reason="bfloat16 not supported on this GPU"
             ),
             pytest.mark.skipif(
-                not QWEN2_VL_AVAILABLE,
-                reason="Qwen2-VL not available in this version of transformers",
+                not QWEN2_VL_AVAILABLE
+                or version.parse(transformers.__version__)
+                >= version.parse("4.47.0"),
+                reason="Qwen2-VL not available in this version of transformers or transformers version >= 4.47.0",
             ),
         ],
     ),
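The change above extends the existing `skipif` condition with a version gate: the Qwen2-VL convergence test is now skipped either when the model class is unavailable or when the installed `transformers` is 4.47.0 or newer, compared via `packaging.version` rather than string comparison. A minimal sketch of that condition in isolation (the helper name `should_skip_qwen2_vl` is hypothetical, for illustration only):

```python
# Sketch of the skip condition added in this commit. The helper name
# should_skip_qwen2_vl is hypothetical; the real condition lives inline
# in a pytest.mark.skipif in test_mini_models_with_logits.py.
from packaging import version


def should_skip_qwen2_vl(qwen2_vl_available: bool, transformers_version: str) -> bool:
    """Skip when Qwen2-VL is unavailable or transformers >= 4.47.0."""
    return (
        not qwen2_vl_available
        or version.parse(transformers_version) >= version.parse("4.47.0")
    )


if __name__ == "__main__":
    print(should_skip_qwen2_vl(True, "4.46.3"))   # test runs on older transformers
    print(should_skip_qwen2_vl(True, "4.47.0"))   # gated off from 4.47.0 onward
    print(should_skip_qwen2_vl(False, "4.46.3"))  # skipped when model is unavailable
```

Using `version.parse` avoids the pitfalls of lexicographic string comparison (e.g. `"4.9" > "4.10"` as strings), which is why the diff compares parsed `Version` objects.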
