Less verbose output from successful cb-check #451
base: main
Conversation
I agree that the full output is too verbose, but this might not be verbose enough. Most people will see "Correctly parsed following benchmarks: unnamed" because the top-level "name" field is optional.
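For instance (an illustrative sketch only, with field names recalled from the cb JSON format rather than taken from this PR), a perfectly valid submission can omit the top-level name entirely, so the summary line would always say "unnamed" for such projects:

    {
      "results": [
        { "name": "my-test",
          "metrics": [ { "name": "build-time", "value": 12.3, "units": "s" } ] }
      ]
    }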
I'd suggest something like
    Format.printf "Correctly parsed the following benchmarks:@.";
    Cb_schema.S.merge [] validated
    |> List.iter (fun ({ Schema.benchmark_name; _ } as t) ->
           match benchmark_name with
           | Some name -> Format.printf "%s@." name
           | None -> Format.printf "%a@." pp (Cb_schema.S.to_json t))
Let me know what you think :)
Yes, I agree. I find the following a bit confusing: because of the merging, I think that printing the whole outputs might actually be better, since the merging behaviour is non-intuitive.
Also, I'm wondering if I understand the cb-check use cases correctly. AFAIK it currently works like this:
Most projects have only one benchmark battery and so don't specify a toplevel name (I'm fine with "unnamed" personally, or "default").
If one does specify different names, then it creates multiple tabs in the sidebar on the left of the website (so basically, it allows creating multiple benchmark batteries that don't pollute the main page; sketched below). I don't think this is used much atm, but you can imagine running multiple programs in…
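For example (again just a sketch, with assumed field names and made-up battery names), two different programs could each submit their own battery by setting a different top-level name, and each would get its own tab:

    { "name": "macro-benchmarks", "results": [ ... ] }
    { "name": "micro-benchmarks", "results": [ ... ] }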
Yeah sorry this isn't documented much either... The intent is that a benchmark can produce each metric one by one in a streaming way, as soon as it computes it, rather than accumulate and print them all at the end:
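Roughly like this (a sketch only; the test and metric names are made up and the field names are assumed from the cb JSON format), with the benchmark printing one small JSON document per metric as soon as the value is known:

    { "results": [ { "name": "my-test", "metrics": [ { "name": "setup-time", "value": 0.3, "units": "s" } ] } ] }
    { "results": [ { "name": "my-test", "metrics": [ { "name": "run-time", "value": 4.2, "units": "s" } ] } ] }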
This should give you the same as if you printed a single output:
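i.e. (same illustrative fields as above) roughly equivalent to printing once:

    { "results": [ { "name": "my-test",
                     "metrics": [ { "name": "setup-time", "value": 0.3, "units": "s" },
                                  { "name": "run-time", "value": 4.2, "units": "s" } ] } ] }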
If you print the same metric multiple times, then the results get accumulated to compute their min/mean/max (and display the error margin on the graphs). This enables running the same benchmark in a for loop if more precision is needed. Finally, by printing metrics asap, the website can display the updated graphs while the benchmark is still running :) (so by running the important metrics first, one does not need to wait for the full benchmark to complete before seeing the impact of the changes)
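For example (still the same illustrative sketch), printing run-time from a for loop:

    { "results": [ { "name": "my-test", "metrics": [ { "name": "run-time", "value": 4.1, "units": "s" } ] } ] }
    { "results": [ { "name": "my-test", "metrics": [ { "name": "run-time", "value": 4.3, "units": "s" } ] } ] }
    { "results": [ { "name": "my-test", "metrics": [ { "name": "run-time", "value": 4.2, "units": "s" } ] } ] }

would be folded into a single run-time series with its min/mean/max.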
Thanks for the great answer @art-w! Can we add some of this information to the README? I think it would make the new-user experience much better. :)
Resolves #450