add yardstick validate subcommand #126

Closed
willmurphyscode opened this issue Aug 24, 2023 · 1 comment · Fixed by #380
Labels: enhancement (New feature or request)

@willmurphyscode
Contributor

What would you like to be added:

Right now, yardstick can capture the differences between the outputs of different tools, but it has no notion of a label provider, so tools that compare yardstick results to known labels implement two mechanisms on their own:

  1. A mechanism to fetch some labels to use in comparison. (grype uses git submodules for this mechanism.)
  2. A mechanism to run a quality gate, for example so that too great a deviation from known labels fails a job in CI. Grype's gate.py is an example.

The core request is that a yardstick.yaml file should be able to configure two additional things: where do my labels come from, and which comparison methods and thresholds count as success versus failure?

One way we've discussed this is: a label source and a comparator should be sort of like "tools" in yardstick's config model.
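To make that concrete, a config along these lines might work. This is only a sketch; every key name below (label-source, validation, comparator, fail-below) is hypothetical and not part of any implemented yardstick schema:

```yaml
# Hypothetical yardstick.yaml sketch -- key names are illustrative only.
label-source:
  # where labels come from, analogous to how tools are configured today
  git:
    repo: https://github.com/anchore/vulnerability-match-labels
    path: labels/

validation:
  # which comparison to run, and what counts as pass vs. fail
  comparator: f1-score
  fail-below: 0.95
```

Under this model, a label source and a comparator would be declared in config just like tools are, and the quality gate would read its pass/fail criteria from the same file.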

Why is this needed:

Right now, we're maintaining a fair amount of Python code across different repos that fetches labels and runs comparisons in similar, but not identical, ways. We should promote this duplicated code into a new configurable section in yardstick.

Additional context:

@willmurphyscode added the enhancement label Aug 24, 2023
@willmurphyscode
Contributor Author

willmurphyscode commented Sep 29, 2023

Currently there are quality gates in both grype and vunnel.

Recently, both gates had a failure mode where they passed when they should not have; that is, gate.py was exiting zero despite results that should have failed the gate. This seems like evidence that the quality gate needs to be better tested, but setting up testing for one-off Python scripts in different repos isn't sustainable. The ask here is that the functionality implemented by gate.py in these two repos (and possibly others I missed) should become a CLI command in yardstick.

Right now, the comparisons used are different, but this difficulty seems surmountable for two reasons: first, some of the differences are accidental; second, we could employ a strategy pattern or plugin model for the differences that can't be refactored away.
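A strategy pattern for the gate could be sketched roughly as follows. None of these names come from yardstick's actual API; this is a minimal illustration of the idea that each repo's comparison becomes one pluggable strategy, and the gate only exits zero when every configured strategy passes:

```python
# Illustrative sketch of pluggable comparator strategies for a quality gate.
# All names here are hypothetical, not yardstick's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GateResult:
    passed: bool
    reason: str

# a comparator takes the labeled matches and the tool's matches
# and decides whether this aspect of the gate passes
Comparator = Callable[[set, set], GateResult]

def no_new_false_negatives(labeled: set, found: set) -> GateResult:
    """Fail if the tool missed any labeled match."""
    missed = labeled - found
    if missed:
        return GateResult(False, f"{len(missed)} labeled match(es) not found by tool")
    return GateResult(True, "all labeled matches found")

def max_false_positive_rate(threshold: float) -> Comparator:
    """Build a comparator that fails when too many found matches are unlabeled."""
    def compare(labeled: set, found: set) -> GateResult:
        extra = found - labeled
        rate = len(extra) / len(found) if found else 0.0
        if rate > threshold:
            return GateResult(False, f"false positive rate {rate:.2f} exceeds {threshold}")
        return GateResult(True, f"false positive rate {rate:.2f} within {threshold}")
    return compare

def run_gate(comparators: list[Comparator], labeled: set, found: set) -> bool:
    # the gate passes only if every configured comparator passes
    results = [compare(labeled, found) for compare in comparators]
    for result in results:
        print("PASS" if result.passed else "FAIL", "-", result.reason)
    return all(result.passed for result in results)
```

Comparisons that are shared across repos would ship with yardstick; repo-specific ones could be registered the same way, which is what makes the accidental differences refactorable.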

@willmurphyscode willmurphyscode self-assigned this Aug 26, 2024
@willmurphyscode changed the title from "yardstick should know about quality gates" to "add yardstick validate subcommand" Sep 20, 2024