gpt_big_code: make flash attention impl quantization friendly #4569

Annotations

2 warnings

Check code quality (3.10, ubuntu-22.04) — succeeded on Sep 25, 2024 in 11s