The documentation mentions that fine-tuning Mamba2 8B should be possible on two 80 GB A100s, which makes sense: assuming everything is fp32, the expected memory consumption is:
32 GB for model parameters
32 GB for gradients
32 × 2 = 64 GB for optimizer states (Adam momentum and variance)
This sums to 128 GB; in practice, however, fine-tuning Mamba2 8B with the NeMo 1 scripts takes around 240 GB.
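For reference, here is a minimal sketch of the arithmetic above (assuming fp32 everywhere and Adam with two optimizer states per parameter; activations and framework overhead are deliberately not counted):

```python
# Back-of-the-envelope fp32 training memory estimate for an 8B-parameter model.
# Assumes Adam with two fp32 states per parameter; activation memory,
# gradient buckets, and framework overhead are ignored.
def fp32_training_memory_gb(n_params: float) -> float:
    bytes_per_param = 4                           # fp32
    weights   = n_params * bytes_per_param        # model parameters
    grads     = n_params * bytes_per_param        # gradients
    optimizer = 2 * n_params * bytes_per_param    # Adam exp_avg + exp_avg_sq
    return (weights + grads + optimizer) / 1e9

print(f"{fp32_training_memory_gb(8e9):.0f} GB")  # -> 128 GB
```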
I would appreciate any information or explanation regarding this difference.
Thanks in advance for your assistance!