CUDA Device does not support bfloat16 #15
Comments
It looks like you're encountering a compatibility issue with bfloat16 on your current CUDA device. As the error suggests, switching the data type to float16 should resolve this problem. If you're using Mamba, make sure your environment is set up to support the necessary CUDA features. If you have further questions or need assistance, feel free to reach out!
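For reference, here is a minimal sketch of how the dtype switch could be handled generically in PyTorch: it probes bfloat16 support on the current GPU and falls back to float16 when the device (e.g. a V100 or RTX 20xx card) predates Ampere. The tiny linear model is only a placeholder for illustration, not the project's actual model.

```python
import torch

# bfloat16 requires compute capability >= 8.0 (Ampere or newer);
# older GPUs must fall back to float16 for mixed-precision autocast.
amp_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16

model = torch.nn.Linear(128, 128).cuda()      # placeholder model for illustration
x = torch.randn(8, 128, device="cuda")

with torch.autocast(device_type="cuda", dtype=amp_dtype):
    y = model(x)                              # runs in bf16 where supported, fp16 otherwise
```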
You might need to adjust the value of 'eps' used in the paper to match the precision you are working with. When using float16 (precision='16-mixed'), the limited numerical range can sometimes lead to instability, such as NaNs or Infs in the loss. Consider increasing 'eps' slightly to maintain numerical stability during training.
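As a rough illustration of that suggestion, the sketch below shows a hypothetical Lightning module trained with precision='16-mixed' where the optimizer's eps is raised above its default. The module, layer sizes, and the eps value of 1e-4 are assumptions for the example, not values taken from the paper or this repository.

```python
import torch
import lightning as L


class LitModule(L.LightningModule):           # hypothetical stand-in for the real model
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(128, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        # Adam's default eps of 1e-8 is far below float16's smallest normal
        # value (~6e-5), so it effectively vanishes and can yield NaN/Inf;
        # a larger eps (assumed 1e-4 here) keeps the update numerically stable.
        return torch.optim.Adam(self.parameters(), lr=1e-3, eps=1e-4)


trainer = L.Trainer(precision="16-mixed", max_epochs=1)
# trainer.fit(LitModule(), train_dataloaders=...)  # supply your own dataloader
```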
File "/home/.conda/envs/spmba/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 234, in __init__
    raise RuntimeError('Current CUDA Device does not support bfloat16. Please switch dtype to float16.')
RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16.