Microsoft Research Asia
Beijing, China
Pinned
- customized-flash-attention (Public, forked from Dao-AILab/flash-attention)
  Fast and memory-efficient exact attention
- microsoft/nnfusion (Public)
  A flexible and efficient deep neural network (DNN) compiler that generates a high-performance executable from a DNN model description.