⚠️ Please check that this feature request hasn't been suggested before.
I searched previous Ideas in Discussions and didn't find any similar feature requests.
I searched previous Issues and didn't find any similar feature requests.
🔖 Feature description
Support sequence / context parallelism to allow SFT on sequences longer than 128k tokens on A100/H100 GPUs. With only 8×H100 GPUs, we can currently manage SFT on sequences of no more than about 64k tokens.
✔️ Solution
Axolotl is built on Accelerate and can already integrate with frameworks such as DeepSpeed to use their features, but there is still no straightforward way to use sequence / context parallelism through these integrations. This repo may offer some clues: https://github.com/jzhang38/EasyContext . It seems we would only need to monkeypatch the model's attention and adjust the dataloading so that each rank receives its shard of the sequence, as sketched below.
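For illustration only, here is a minimal sketch of the dataloading side of sequence parallelism: each rank keeps only its contiguous slice of a long sequence, and a distributed attention kernel (e.g. ring attention, which EasyContext patches in) handles cross-rank attention. The function name, batch keys, and hook point are assumptions, not Axolotl's actual API.

```python
import torch
import torch.distributed as dist

def shard_sequence_for_rank(batch, rank=None, world_size=None):
    """Hypothetical helper: split each sample along the sequence dimension
    so every rank only holds its local chunk. `batch` is assumed to be a
    dict of (bsz, seq_len) tensors; the real hook into Axolotl's dataloader
    would differ."""
    rank = dist.get_rank() if rank is None else rank
    world_size = dist.get_world_size() if world_size is None else world_size

    sharded = {}
    for key in ("input_ids", "labels", "position_ids"):
        if key not in batch:
            continue
        tensor = batch[key]
        seq_len = tensor.shape[1]
        assert seq_len % world_size == 0, "seq_len must divide evenly across ranks"
        chunk = seq_len // world_size
        # Each rank keeps its contiguous slice; position_ids stay global so
        # rotary embeddings remain correct on every shard.
        sharded[key] = tensor[:, rank * chunk:(rank + 1) * chunk].contiguous()
    return sharded
```

The other half, which this sketch does not cover, is replacing the model's attention implementation with one that exchanges keys/values (or attention partials) across ranks, since each rank's chunk must still attend to tokens held by the others.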
❓ Alternatives
No response
📝 Additional Context
No response
Acknowledgements
My issue title is concise, descriptive, and in title casing.
I have searched the existing issues to make sure this feature has not been requested yet.
I have provided enough information for the maintainers to understand and evaluate this request.