
merge recent changes from ROCm/xformers (#1182) #3058

Triggered via push: January 5, 2025 11:13
Status: Success
Total duration: 2m 13s
Workflow: gh-pages.yml (on: push)

Annotations

20 errors and 2 warnings
test_mem_eff_attention.test_paged_attention_ck[gappy-128-128-5]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 5, 8, 128) (torch.bfloat16) key : shape=(1, 640, 8, 128) (torch.bfloat16) value : shape=(1, 640, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-256-8192-5]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 5, 8, 128) (torch.bfloat16) key : shape=(1, 40960, 8, 128) (torch.bfloat16) value : shape=(1, 40960, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-256-128-1]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 1, 8, 128) (torch.bfloat16) key : shape=(1, 128, 8, 128) (torch.bfloat16) value : shape=(1, 128, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-128-128-1]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 1, 8, 128) (torch.bfloat16) key : shape=(1, 128, 8, 128) (torch.bfloat16) value : shape=(1, 128, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-256-4096-1]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 1, 8, 128) (torch.bfloat16) key : shape=(1, 4096, 8, 128) (torch.bfloat16) value : shape=(1, 4096, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-128-64-128]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 128, 8, 128) (torch.bfloat16) key : shape=(1, 8192, 8, 128) (torch.bfloat16) value : shape=(1, 8192, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-128-128-128]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 128, 8, 128) (torch.bfloat16) key : shape=(1, 16384, 8, 128) (torch.bfloat16) value : shape=(1, 16384, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[-128-64-5]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 5, 8, 128) (torch.bfloat16) key : shape=(1, 320, 8, 128) (torch.bfloat16) value : shape=(1, 320, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalCausalWithOffsetPaddedKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[-256-8192-5]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 5, 8, 128) (torch.bfloat16) key : shape=(1, 40960, 8, 128) (torch.bfloat16) value : shape=(1, 40960, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalCausalWithOffsetPaddedKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-256-8192-128]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 128, 8, 128) (torch.bfloat16) key : shape=(1, 1048576, 8, 128) (torch.bfloat16) value : shape=(1, 1048576, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[-256-2048-128]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 128, 8, 128) (torch.bfloat16) key : shape=(1, 262144, 8, 128) (torch.bfloat16) value : shape=(1, 262144, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalCausalWithOffsetPaddedKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-256-2048-1]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 1, 8, 128) (torch.bfloat16) key : shape=(1, 2048, 8, 128) (torch.bfloat16) value : shape=(1, 2048, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[-128-2048-1]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 1, 8, 128) (torch.bfloat16) key : shape=(1, 2048, 8, 128) (torch.bfloat16) value : shape=(1, 2048, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalCausalWithOffsetPaddedKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[-256-4096-5]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 5, 8, 128) (torch.bfloat16) key : shape=(1, 20480, 8, 128) (torch.bfloat16) value : shape=(1, 20480, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalCausalWithOffsetPaddedKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[-128-4096-128]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 128, 8, 128) (torch.bfloat16) key : shape=(1, 524288, 8, 128) (torch.bfloat16) value : shape=(1, 524288, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalCausalWithOffsetPaddedKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-128-2048-5]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 5, 8, 128) (torch.bfloat16) key : shape=(1, 10240, 8, 128) (torch.bfloat16) value : shape=(1, 10240, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-128-8192-5]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 5, 8, 128) (torch.bfloat16) key : shape=(1, 40960, 8, 128) (torch.bfloat16) value : shape=(1, 40960, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-128-64-5]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 5, 8, 128) (torch.bfloat16) key : shape=(1, 320, 8, 128) (torch.bfloat16) value : shape=(1, 320, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[gappy-128-8192-128]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 128, 8, 128) (torch.bfloat16) key : shape=(1, 1048576, 8, 128) (torch.bfloat16) value : shape=(1, 1048576, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalGappyKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
test_mem_eff_attention.test_paged_attention_ck[-128-8192-128]: tests/test_mem_eff_attention.py#L2418
ValueError: Operator `memory_efficient_attention` does not support inputs: query : shape=(1, 128, 8, 128) (torch.bfloat16) key : shape=(1, 1048576, 8, 128) (torch.bfloat16) value : shape=(1, 1048576, 8, 128) (torch.bfloat16) attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalCausalWithOffsetPaddedKeysMask'> p : 0.0 `ckF` is not supported because: bf16 is only supported on A100+ GPUs operator wasn't built - see `python -m xformers.info` for more info
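All twenty failures share the same root cause: the runner that executed this job rejects bfloat16 inputs (the `ckF` support check requires an A100-class GPU) and the `ckF` operator was not compiled into the installed build. The sketch below is a minimal, hypothetical illustration of that failure mode; it assumes only that torch and xformers are importable, it omits the paged-attention `attn_bias` the tests actually pass, and the tensor shapes simply mirror the first failing case.

```python
import torch
import xformers.ops as xops

def bf16_attention_supported() -> bool:
    # On CUDA, the "A100+" requirement in the error corresponds to compute
    # capability >= 8.0; ROCm builds apply their own, separate support check.
    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability()
    return major >= 8

if torch.cuda.is_available():
    dtype = torch.bfloat16 if bf16_attention_supported() else torch.float16
    # Shapes mirror the first failing case: (batch, seq_len, heads, head_dim).
    q = torch.randn(1, 5, 8, 128, device="cuda", dtype=dtype)
    k = torch.randn(1, 640, 8, 128, device="cuda", dtype=dtype)
    v = torch.randn(1, 640, 8, 128, device="cuda", dtype=dtype)
    # Raises the ValueError seen above when no built operator supports the inputs.
    out = xops.memory_efficient_attention(q, k, v)
else:
    print("No GPU available; run `python -m xformers.info` to see which operators were built.")
```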
deploy
Your workflow is using a version of actions/cache that is scheduled for deprecation, actions/cache@v2. Please update your workflow to use either v3 or v4 of actions/cache to avoid interruptions. Learn more: https://github.blog/changelog/2024-12-05-notice-of-upcoming-releases-and-breaking-changes-for-github-actions/#actions-cache-v1-v2-and-actions-toolkit-cache-package-closing-down
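Addressing this warning means bumping the `actions/cache` version in gh-pages.yml. A hedged sketch of what the updated step might look like follows; the step name, cache path, and key are illustrative placeholders, not taken from the actual workflow.

```yaml
# Illustrative only: path and key are placeholders, not copied from gh-pages.yml.
- name: Cache pip dependencies
  uses: actions/cache@v4   # previously actions/cache@v2, which is being retired
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
```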
deploy
The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
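The `set-output` deprecation is resolved by writing step outputs to the `$GITHUB_OUTPUT` environment file instead of emitting the workflow command. A hedged sketch follows; the step id and output name are hypothetical, since the actual deploy step is not shown here.

```yaml
# Illustrative only: step id and output name are hypothetical.
- name: Compute docs version
  id: docs
  # old form: echo "::set-output name=version::${GITHUB_REF_NAME}"
  run: echo "version=${GITHUB_REF_NAME}" >> "$GITHUB_OUTPUT"

- name: Deploy
  run: echo "Deploying docs for ${{ steps.docs.outputs.version }}"
```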