
Use float or higher accumulator for layernorm internally when calculating Mean #2891

Open
umangyadav opened this issue Mar 15, 2024 · 1 comment

@umangyadav
Member

make_array<vec_value_type>(vec_value_type{0}, vec_value_type{0}),

Computing x^2/n can cause accuracy issues at lower precision, e.g. fp16.
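A minimal NumPy sketch (not MIGraphX code) of why x^2 is risky in fp16: fp16's largest finite value is 65504, so squaring even moderately large inputs overflows to infinity, while the same arithmetic done in float32 and cast back at the end is safe.

```python
import numpy as np

# fp16: squaring a moderately large value overflows (max finite fp16 is 65504).
x = np.float16(300.0)
print(x * x)  # inf, since 300^2 = 90000 exceeds the fp16 range

# float32: do the square (and any division) in higher precision,
# then cast the final, smaller result back to fp16.
xf = np.float32(x)
print(np.float16(xf * xf / np.float32(1000)))  # 90.0
```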

Internally, it can use a higher-precision accumulator.
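To illustrate the accumulation point, here is a hypothetical NumPy sketch (not the actual kernel): summing many fp16 values in an fp16 accumulator compounds rounding error at every step, while accumulating in float32 and casting once at the end stays close to the true mean.

```python
import numpy as np

x = np.full(4096, 0.1, dtype=np.float16)  # 4096 copies of fp16 0.1

# fp16 accumulator: each partial sum is rounded back to fp16, so once the
# running sum grows large, small addends are rounded away and the sum stalls.
acc16 = np.float16(0.0)
for v in x:
    acc16 = np.float16(acc16 + v)
mean16 = acc16 / np.float16(len(x))

# float32 accumulator: sum in higher precision, cast once at the end.
mean32 = np.float16(x.astype(np.float32).sum() / len(x))

print(mean16, mean32)  # the fp16-accumulated mean drifts far from 0.1
```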

@pfultz2
Collaborator

pfultz2 commented Mar 18, 2024

So #2883 will upgrade large reduce_mean operations to use float accumulation by default, and #2853 will update the x^2/n calculation to use float as well.
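The fix described above can be sketched end to end in NumPy (this is an illustrative reference, not the MIGraphX implementation): keep fp16 inputs and outputs, but perform the mean and x^2/n reductions in float32 before casting back.

```python
import numpy as np

def layernorm_fp16(x, eps=1e-5):
    """Layernorm over an fp16 vector with float32 internal reductions."""
    xf = x.astype(np.float32)                 # accumulate in higher precision
    mean = xf.mean()
    var = (xf * xf).mean() - mean * mean      # E[x^2] - E[x]^2, in float32
    y = (xf - mean) / np.sqrt(var + eps)
    return y.astype(np.float16)               # cast back to fp16 at the end

rng = np.random.default_rng(0)
x = rng.standard_normal(1024).astype(np.float16)
print(layernorm_fp16(x)[:4])  # normalized fp16 output
```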
