Currently the attention kernel does not work correctly in some special cases. One example is with the following shapes:
q.shape = torch.Size([4, 32, 1, 128])
k.shape = torch.Size([4, 32, 20, 128])
v.shape = torch.Size([4, 32, 20, 128])
https://github.com/triton-lang/kernels/blob/main/kernels/flash_attention.py#L23
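For reference, here is a minimal sketch of the failing shape case. It builds tensors with the shapes above and computes attention with PyTorch's scaled_dot_product_attention as a baseline; the actual failure occurs in the Triton kernel linked above, and the device/dtype choices here are assumptions for illustration only.

import torch

# Shapes from the failing case: a single-token query (seq_len 1) attending to
# a 20-token key/value cache, with batch 4, 32 heads, head_dim 128.
q = torch.randn(4, 32, 1, 128, device="cuda", dtype=torch.float16)
k = torch.randn(4, 32, 20, 128, device="cuda", dtype=torch.float16)
v = torch.randn(4, 32, 20, 128, device="cuda", dtype=torch.float16)

# Baseline result; the Triton flash attention kernel should match this output
# for the same inputs, but currently does not handle this shape combination.
ref = torch.nn.functional.scaled_dot_product_attention(q, k, v)
print(ref.shape)  # torch.Size([4, 32, 1, 128])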
Repro steps:
CUDA_LAUNCH_BLOCKING=1 python3.9 -m main llama_chat_completion --profile=False --benchmark=False --ckpt_dir="models/llama/meta-llama/Meta-Llama-3-8B-Instruct/original" --tokenizer_path="models/llama/meta-llama/Meta-Llama-3-8B-Instruct/original/tokenizer.model" --use_triton=True