
TEST: Use deterministic seeds in tests #2230

Merged

Conversation

@david-cortes-intel (Contributor) commented Dec 19, 2024

Description

This PR modifies tests parameterized by non-deterministic seeds to use fixed seeds.

Randomly generated seeds make it hard to compare a specific test across runs (e.g. under different configurations) and harder to reproduce potential failures.
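
As an illustration of the pattern described above (a minimal sketch, not the actual tests touched by this PR; the test name, data shapes, and seed values are hypothetical), parameterizing over fixed seeds instead of seeds drawn at collection time keeps each run identical and reproducible:

```python
# Illustrative sketch only; names and values are hypothetical.
import numpy as np
import pytest

# Before: seeds drawn at collection time differ on every run, so results
# cannot be compared across runs or configurations.
# SEEDS = [np.random.randint(0, 2**31 - 1) for _ in range(3)]

# After: fixed seeds keep the generated data identical from run to run.
SEEDS = [0, 42, 12345]


@pytest.mark.parametrize("seed", SEEDS)
def test_fit_is_reproducible(seed):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((100, 5))
    y = rng.standard_normal(100)
    # ... fit an estimator on (X, y) and assert on its output;
    # with a fixed seed, any failure can be reproduced exactly.
    assert X.shape == (100, 5) and y.shape == (100,)
```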


The PR should start as a draft, then move to the ready-for-review state after CI passes and all applicable checkboxes are checked.
This approach ensures that reviewers don't spend extra time asking for standard requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a docs-only update doesn't require the performance checkboxes, while a PR changing actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.

Performance

Not applicable.

@david-cortes-intel added the testing label (Tests for sklearnex/daal4py/onedal4py & patching sklearn) Dec 19, 2024
@david-cortes-intel marked this pull request as draft December 19, 2024 13:28
@samir-nasibli (Contributor) left a comment

👍

codecov bot commented Dec 19, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Flag Coverage Δ
github 83.18% <ø> (ø)

Flags with carried forward coverage won't be shown.

@david-cortes-intel marked this pull request as ready for review December 19, 2024 14:40
@david-cortes-intel merged commit d952af1 into uxlfoundation:main Dec 19, 2024
27 of 29 checks passed
@icfaust (Contributor) commented Dec 21, 2024

@david-cortes-intel @samir-nasibli I wish I had been consulted on this change: 1) I introduced this code; 2) proper management of determinism shouldn't rely on assumptions about oneDAL's functionality (e.g. that default seeds exist in oneDAL); 3) if run-to-run determinism is necessary for certain tests in multiple configurations, we should be doing those comparisons using only subsets of the tests. If this is with respect to another testing framework, we should define and publish requirements for it. Please note that fuzz testing is planned as an initiative on the horizon. As a note, I do not think reverting is necessary; what is done is done (not a big deal).
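
One possible way to reconcile the two concerns raised here (reproducibility without hard-coding a single seed everywhere) would be a central, overridable seed source. The sketch below is purely illustrative and not part of this PR or the repository; the option name, environment variable, and default value are assumptions:

```python
# Hypothetical conftest.py sketch: runs are deterministic by default, but a
# different seed can be injected for fuzz-style or cross-configuration runs.
import os

import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--global-seed",
        default=os.environ.get("TEST_GLOBAL_SEED", "42"),
        help="Base seed for randomized tests (deterministic by default).",
    )


@pytest.fixture
def global_seed(request):
    seed = int(request.config.getoption("--global-seed"))
    # Print the seed so any failure report shows how to reproduce the run.
    print(f"global seed: {seed}")
    return seed
```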

Labels
testing Tests for sklearnex/daal4py/onedal4py & patching sklearn
Projects
None yet
3 participants