Releases · philipturner/metal-flash-attention
v1.0.1
v1.0.0
FlashAttention, dense and block-sparse.
The dense version consistently outperforms MPSGraph by 3-5x, and in some edge cases by as much as 20x. MPSGraph is the API that Apple currently recommends for machine learning workloads on Metal.
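For context, the core trick in FlashAttention is an online (streaming) softmax: attention is accumulated tile by tile, rescaling the running output whenever a larger row maximum appears, so the full score matrix never has to materialize. Below is a minimal scalar sketch of that recurrence for a single query row; it is illustrative only, not this repo's Metal kernel.

```swift
import Foundation

// Streaming softmax-weighted sum: the recurrence at the heart of
// FlashAttention, shown for one query row on the CPU. `scores` are
// the q·k dot products; `values` are the matching V entries.
func streamingAttention(scores: [Float], values: [Float]) -> Float {
    var runningMax = -Float.infinity // m: largest score seen so far
    var runningSum: Float = 0        // l: sum of exp(score - m)
    var accumulator: Float = 0       // unnormalized output

    for (s, v) in zip(scores, values) {
        let newMax = max(runningMax, s)
        let rescale = exp(runningMax - newMax) // correct earlier terms
        runningSum = runningSum * rescale + exp(s - newMax)
        accumulator = accumulator * rescale + exp(s - newMax) * v
        runningMax = newMax
    }
    return accumulator / runningSum // equals softmax(scores) · values
}
```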
The block-sparse version indirectly supports (and accelerates) triangular causal masks, but its work distribution is suboptimal: it is sometimes 60% faster than the theoretical ceiling for dense attention, sometimes no faster than dense, and performance is nondeterministic. In this respect it is on par with FlashAttention-2 (https://github.com/Dao-AILab/flash-attention).
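The "indirect" causal-mask support is possible because a triangular mask has a regular tile structure: tiles entirely above the diagonal can be skipped, tiles straddling the diagonal need per-element masking, and tiles below it run dense. A hedged sketch of that three-way classification follows; the tile size and the enum are illustrative assumptions, not this repo's API.

```swift
// Classify one tile of the attention score matrix under a triangular
// causal mask (query i may attend to keys j <= i). Illustrative only;
// the block-sparse kernels use their own layout and dispatch logic.
enum TileKind { case skipped, partiallyMasked, dense }

func classifyTile(row: Int, col: Int, tileSize: Int) -> TileKind {
    let firstQuery = row * tileSize           // smallest i in the tile
    let lastQuery = firstQuery + tileSize - 1 // largest i in the tile
    let firstKey = col * tileSize             // smallest j in the tile
    let lastKey = firstKey + tileSize - 1     // largest j in the tile

    if firstKey > lastQuery { return .skipped } // fully above the diagonal
    if lastKey <= firstQuery { return .dense }  // fully at/below the diagonal
    return .partiallyMasked                     // straddles the diagonal
}
```

Skipping the fully masked tiles is where the speedup over dense comes from: for a square causal mask, roughly half of all tiles fall in the skipped class.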
v0.2.0-alpha
Added support for fused transposes and batched GEMM.
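For reference, a fused transpose means the kernel reads an operand in transposed order during the multiply instead of materializing a transposed copy first, and batching means one call covers many independent matrices. The naive CPU sketch below shows only the semantics; the flattened row-major layout and function name are assumptions, not this library's API.

```swift
// Batched GEMM with optional fused transposes: C[b] = op(A[b]) · op(B[b]),
// where op is identity or transpose. Each matrix is a flattened row-major
// buffer; transposition is "fused" by swapping indices at read time.
func batchedGEMM(
    a: [[Float]], b: [[Float]], m: Int, n: Int, k: Int,
    transposeA: Bool, transposeB: Bool
) -> [[Float]] {
    return zip(a, b).map { (aMat, bMat) in
        var c = [Float](repeating: 0, count: m * n)
        for i in 0..<m {
            for j in 0..<n {
                var sum: Float = 0
                for p in 0..<k {
                    // Fused transpose: an index swap, not a copy pass.
                    let aVal = transposeA ? aMat[p * m + i] : aMat[i * k + p]
                    let bVal = transposeB ? bMat[j * k + p] : bMat[p * n + j]
                    sum += aVal * bVal
                }
                c[i * n + j] = sum
            }
        }
        return c
    }
}
```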
v0.1.0-alpha
Initial alpha release.