LoRA + FlashAttention2 speed up? #846
Unanswered · zhoumengbo asked this question in Q&A
When fine-tuning Mistral with LoRA, does FlashAttention2 help speed up training? If so, how significant is the speedup, and where does the acceleration primarily come from?
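For concreteness, here is a minimal sketch of the setup in question: loading Mistral with the FlashAttention2 backend and attaching LoRA adapters via `peft`. It assumes a recent `transformers` (which accepts `attn_implementation="flash_attention_2"` in `from_pretrained`), the `flash-attn` package installed on a supported GPU, and illustrative LoRA hyperparameters, not a recommended configuration.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load Mistral with the FlashAttention2 kernel backend.
# FlashAttention2 requires fp16/bf16 weights.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

# Illustrative LoRA config targeting the attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```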