
v1.0.0

@philipturner philipturner released this 27 Jul 20:03

FlashAttention, dense and block-sparse.

The dense version consistently outperforms MPSGraph by 3-5x; in some edge cases, the gap grows to 20x. MPSGraph is the modern API that Apple recommends for using Metal in machine learning applications.

The block-sparse version indirectly supports (and accelerates) triangular causal masks, but its work distribution is sub-optimal. It is sometimes 60% faster than the theoretical ceiling for dense attention, and sometimes no faster than dense; performance is nondeterministic. This matches the behavior of FlashAttention-2 (https://github.com/Dao-AILab/flash-attention).
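To illustrate why block sparsity accelerates a triangular causal mask, here is a hedged sketch (not code from this release; the function name and block layout are hypothetical). Tiles of the attention matrix fall into three classes: tiles entirely below the diagonal need no masking, tiles straddling the diagonal need per-element masking, and tiles entirely above the diagonal can be skipped outright — roughly halving the work for long sequences.

```python
def classify_blocks(seq_len: int, block_size: int) -> list[list[str]]:
    """Classify each (row_block, col_block) tile of a causal attention mask.

    Illustrative only: real kernels operate on GPU threadgroups, not a
    Python grid. Element (i, j) is visible iff j <= i (causal masking).
    """
    n = (seq_len + block_size - 1) // block_size  # number of blocks per axis
    grid = []
    for r in range(n):
        row = []
        for c in range(n):
            r_lo, r_hi = r * block_size, min((r + 1) * block_size, seq_len) - 1
            c_lo, c_hi = c * block_size, min((c + 1) * block_size, seq_len) - 1
            if c_hi <= r_lo:
                row.append("dense")    # fully visible: compute without masking
            elif c_lo > r_hi:
                row.append("skip")     # fully masked: never scheduled
            else:
                row.append("partial")  # straddles the diagonal: mask per element
        grid.append(row)
    return grid
```

For a sequence of 4 tokens with 2x2 blocks, this yields one dense tile, one skipped tile, and two partial tiles along the diagonal. The nondeterminism noted above comes from how the GPU distributes the surviving (dense and partial) tiles across cores, not from this classification, which is static.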