
Create benchmarks #6

Open
jaantollander opened this issue Sep 4, 2020 · 1 comment
Labels
enhancement New feature or request

Comments

jaantollander commented Sep 4, 2020

Benchmarks for decision programming on different kinds of influence diagrams. Here are some ideas on what to measure:

  • Hard lower bound versus soft lower bound with the positive path utility for path probability variables.
  • The effect of lazy cuts on performance.
  • The effect of limited-memory influence diagrams on performance (compared to no-forgetting).
  • Performance comparison between the expected value and conditional value-at-risk objectives.
  • Different Gurobi settings.
  • Memory usage might also be interesting.

Measuring performance requires random sampling of influence diagrams with different attributes, such as the number of nodes, limited memory, and inactive chance nodes. The random.jl module is well suited for this purpose. We also need to agree on good metrics for the benchmarks.
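As a rough sketch of what such a sampling harness could look like (note that `random_influence_diagram` and `solve_and_measure` below are hypothetical placeholders for whatever random.jl actually exposes, not real functions):

```julia
using Random

# Reproducible benchmark instances.
Random.seed!(42)

# Sweep over diagram sizes; in practice we would also vary memory limits
# and the number of inactive chance nodes.
for n_nodes in (5, 10, 20)
    # Hypothetical sampler from random.jl.
    diagram = random_influence_diagram(n_nodes; n_inactive = 0)
    # Hypothetical helper that builds the model, solves it, and records stats.
    stats = solve_and_measure(diagram)
    println("nodes = $n_nodes, solve time = $(stats.time) s")
end
```

The important design point is the fixed seed: every benchmark run should see the same random instances, otherwise timings are not comparable across library versions.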

@jandelmi mentioned analyzing the model generated by Gurobi, which might be useful here as well.

# Reach through the JuMP backend to the raw Gurobi model and write it to
# disk in LP format for inspection. Note the internal field path may vary
# with the JuMP/Gurobi.jl versions in use.
backend = JuMP.backend(model)
gmodel = backend.optimizer.model.inner
Gurobi.write_model(gmodel, "gurobi_model.lp")
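On recent JuMP versions the same export can be done without reaching into backend internals, via `JuMP.write_to_file`. A minimal sketch (with a toy model standing in for the decision-programming model, and no solver attached since writing a file does not require one):

```julia
using JuMP

# Toy model as a stand-in for the real decision-programming model.
model = Model()
@variable(model, x >= 0)
@objective(model, Min, x)

# JuMP infers the output format (here LP) from the file extension.
write_to_file(model, "model.lp")
```

This avoids depending on the private layout of the Gurobi.jl wrapper, so it is less likely to break across versions.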
@jaantollander jaantollander added the enhancement New feature or request label Sep 4, 2020
@jaantollander
Contributor Author

We can use BenchmarkTools.jl to measure model-creation performance, namely time and allocations. If needed, we can also implement regression testing, which measures the performance impact of changes to the decision programming library.
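A minimal sketch of what this could look like, with a stand-in workload in place of the actual model-construction routine:

```julia
using BenchmarkTools

# Stand-in workload; in practice this would be the model-creation
# routine of the decision programming library.
build(n) = [i^2 for i in 1:n]

# Collect timing and allocation samples.
b = @benchmark build(1_000)

println(minimum(b))  # best-case timing estimate
println(b.allocs)    # number of allocations per sample
```

For regression testing, BenchmarkTools provides `judge`, which compares two trial estimates (e.g. before and after a change) and reports whether the difference is an improvement, a regression, or within noise tolerance.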
