Where can I find the latest results of the benchmark? #145
What kind of "results" are you thinking of? As far as I know, this set of benchmarks has not yet been used for a survey of BDD packages. If such a survey were done, more BDD packages should be added (see also #12). I had not planned to maintain a document with such information in this repository. The best I can do is create a list of papers that have used this set of benchmarks. But those papers each focus on a specific BDD package; that is, they often give a relative rather than an absolute view.
Adding a list of papers that contain benchmark results would be excellent.
With respect to: List of Publications
I have pushed a list of publications in dc57f16. @nhusung, I assume Configuring BDD Compilation Techniques for Feature Models did not use this benchmarking suite, but rather that the newly added CNF benchmark is a generalisation for future work? Hence, it should not be added to the list.
Yes, for that paper, we directly used …
With respect to: Results
There are a lot of different variables one can tweak with respect to (1) the machine of choice, (2) the common settings used, and (3) the settings of each individual BDD package.
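Purely as an illustration of these three kinds of variables (none of the names or settings below come from this repository; they are hypothetical), one could imagine capturing a single run's configuration like this:

```python
from dataclasses import dataclass, field


@dataclass
class BenchmarkConfig:
    """Hypothetical description of a single benchmark run."""
    # (1) the machine of choice
    machine: str
    # (2) the common settings shared by all BDD packages
    memory_limit_mib: int = 8 * 1024   # memory made available to each package
    variable_order: str = "static"     # static order vs. dynamic reordering
    # (3) the settings of each individual BDD package
    package_settings: dict = field(default_factory=dict)


# Example with two (made-up) per-package settings.
config = BenchmarkConfig(
    machine="AMD EPYC 7302, 256 GiB RAM",
    package_settings={
        "buddy": {"initial_table_size": "2^26"},
        "adiar": {"block_size": "2 MiB"},
    },
)
```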
There exist the following comparative studies of BDD packages. Most of them are (unsurprisingly) from the 90s, so a more up-to-date one would be beneficial to the community.
Again, note that there are quite a lot of choices to be made when trying to make a "fair" comparison.
Things to work out
I'd have to think a bit about how I/we could do something like this (and how to keep it somewhat up to date). It very quickly turns into me (1) conducting a survey and/or (2) creating a BDD competition. I'm not entirely sure I am keen on taking on either right now. These are just a few of the things one would have to make decisions on.
For this to be of any use, one would then also need to include more BDD packages (see also #12).
I think the best approach would be to provide all the relevant data and let the user decide. And since BDDs might be used in similar ways to SAT solvers, we can take inspiration from the SAT competition.
In conclusion, 3 tracks:
Notes on "BDD Competition" Rules
To quote Cimatti et al.: "BDD-based and SAT-based model checking are often able to solve different classes of problems, and can therefore be seen as complementary techniques." But yes, taking inspiration from the SAT competition and similar is a good idea anyway.
The disk-based track would merely be Adiar beating CAL by several orders of magnitude. For that, you can just read my papers. So, this track can just be skipped, and Adiar and CAL excluded from the whole ordeal.
The motivating examples for the BeeDeeDee package and the Flix compiler rely on the BDD package being thread-safe. So yes, even though multi-threading is important for speed-ups (especially due to the CPU developments in the past decade), thread-safety opens up new usages of BDDs (unlike multi-threading). Hence, a thread-safe track would actually be useful. Many of the current benchmarks could probably be made multi-threaded themselves.
Notes on Hardware
One could probably make this work as multiple larger GitHub Actions workflows - assuming they are somewhat consistent. The overall idea would be something akin to:
Then, a final step can combine them into a single readable document (set up as the GitHub page). This whole thing can then be run whenever relevant.
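As a minimal sketch of that combination step (the directory layout, file names, and CSV columns below are assumptions, not something this repository defines): a script could read one results CSV per machine/workflow and emit a single Markdown page for GitHub Pages.

```python
#!/usr/bin/env python3
"""Combine per-machine benchmark results into one Markdown page.

Assumes each workflow uploads a CSV named '<machine>.csv' with the
columns 'package', 'benchmark', and 'time_s' -- a hypothetical layout.
"""
import csv
from pathlib import Path

RESULTS_DIR = Path("results")    # one CSV per machine/workflow run
OUTPUT = Path("docs/index.md")   # published via GitHub Pages

lines = ["# Benchmark results", ""]

for csv_file in sorted(RESULTS_DIR.glob("*.csv")):
    # One section per machine, with a small results table.
    lines += [f"## {csv_file.stem}", "",
              "| package | benchmark | time (s) |", "|---|---|---|"]
    with csv_file.open(newline="") as f:
        for row in csv.DictReader(f):
            lines.append(f"| {row['package']} | {row['benchmark']} | {row['time_s']} |")
    lines.append("")

OUTPUT.parent.mkdir(parents=True, exist_ok=True)
OUTPUT.write_text("\n".join(lines) + "\n")
```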
Edit (Steffan): Tasks