Improve compareperf exit-code logic #91

Open
ldoktor opened this issue Oct 29, 2020 · 0 comments
Labels
enhancement New feature or request

Comments

@ldoktor
Collaborator

ldoktor commented Oct 29, 2020

Due to the jittery nature of perf testing we usually get some failures that are only outliers, but they still cause compareperf to return a non-zero exit code. It'd be nice to define the conditions under which we consider a comparison good. This would be very useful for build reporting as well as for bisection.

One way to judge would be to specify a percentage of acceptable failures (in total, per group, ...). Another would be to focus on reference builds and be more lenient towards builds with high jitter, while still failing when the failure rate is stable. Of course we can add multiple metrics to allow the operators to define better rules; a rough sketch of such a threshold-based check is below.
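Purely as a hedged sketch of what the threshold rule could look like (the `ComparisonResult` type, the `evaluate` function and the threshold names are made up for illustration and are not part of the current run-perf API):

```python
# Hypothetical sketch: decide the compareperf exit code from total and
# per-group failure percentages. All names here are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class ComparisonResult:
    group: str       # e.g. "fio", "uperf" (hypothetical grouping)
    test: str
    passed: bool


def evaluate(results: List[ComparisonResult],
             max_fail_percent: float = 10.0,
             max_group_fail_percent: float = 25.0) -> int:
    """Return 0 (good comparison) or 1 (bad) based on acceptable-failure thresholds."""
    if not results:
        return 1  # nothing was compared, treat as failure

    # Overall check: tolerate a small percentage of outlier failures
    total_failed = sum(1 for r in results if not r.passed)
    if 100.0 * total_failed / len(results) > max_fail_percent:
        return 1

    # Per-group check: a single consistently failing group should still fail the build
    groups = {}
    for r in results:
        groups.setdefault(r.group, []).append(r.passed)
    for passes in groups.values():
        failed = passes.count(False)
        if 100.0 * failed / len(passes) > max_group_fail_percent:
            return 1
    return 0
```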

Alternatively we could just report some aggregated info and let the users process it afterwards, but better built-in handling would still be useful IMO.

Bonus: it'd be nice to support ML-based identification and models for deciding the build status, but that is currently out of scope.

ldoktor added the enhancement label on Oct 29, 2020