
tests: benchdnn: softmax: relax threshold for ACL #1819 #1822

Merged

Conversation

@jondea (Contributor) commented Mar 6, 2024

ACL softmax accumulates in F16, but oneDNN now expects accumulation in F32, so the tests fail. This commit partially reverts 6727bbe.

Fixes #1819
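To see why a relaxed threshold is needed, here is a minimal NumPy sketch (not the ACL or oneDNN code itself) showing how softmax accumulated in F16 drifts from the same computation accumulated in F32; the input size and seed are illustrative assumptions.

```python
import numpy as np

def softmax_f16_io(x, acc_dtype):
    # F16 in, F16 out, but the exp/sum accumulation runs in acc_dtype.
    x = x.astype(np.float16)
    shifted = x - x.max()  # subtract max for numerical stability
    e = np.exp(shifted.astype(acc_dtype))
    return (e / e.sum(dtype=acc_dtype)).astype(np.float16)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float16)

out_f16 = softmax_f16_io(x, np.float16)  # F16 accumulation (ACL-like)
out_f32 = softmax_f16_io(x, np.float32)  # F32 accumulation (oneDNN's expectation)

# Summing thousands of terms in F16 loses precision relative to F32, so a
# test threshold derived from F32 accumulation can flag the F16 result.
max_diff = np.abs(out_f16.astype(np.float32) - out_f32.astype(np.float32)).max()
print(max_diff)
```

Both results are valid softmax outputs in F16; the nonzero difference between them is what the relaxed benchdnn threshold has to tolerate.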

Checklist

General

  • Do all unit and benchdnn tests (make test and make test_benchdnn_*) pass locally for each commit?
  • Have you formatted the code using clang-format?

Bug fixes

  • Have you included information on how to reproduce the issue (either in a GitHub issue or in this PR)?
  • Have you added relevant regression tests? N/A, there are existing tests

@mgouicem (Contributor) left a comment


LGTM. As mentioned in the related issue, you can rely on the accumulation data type attribute (exposed internally as attr.acc_mode_).

@vpirogov vpirogov added this to the v3.5 milestone Mar 7, 2024
@mgouicem mgouicem merged commit e44b78f into oneapi-src:main Mar 12, 2024
10 checks passed
@vpirogov vpirogov added the platform:cpu-aarch64 Codeowner: @oneapi-src/onednn-cpu-aarch64 label May 21, 2024
Labels
platform:cpu-aarch64 Codeowner: @oneapi-src/onednn-cpu-aarch64
Projects
None yet
Development

Successfully merging this pull request may close these issues.

test_benchdnn_modeC_softmax_ci_cpu fails due to F16 accumulation
3 participants