Benchmarks for decision programming on different kinds of influence diagrams. Here are some ideas on what to measure:

- Hard lower bound versus soft lower bound with the positive path utility for the path probability variables.
- The effect of lazy cuts on performance.
- The effect of limited-memory influence diagrams on performance, compared to no-forgetting.
- Performance comparison between the expected value and conditional value-at-risk objectives.
- Different Gurobi settings.
- Memory usage might also be interesting to track.
Measuring performance requires random sampling of influence diagrams with different attributes, such as the number of nodes, limited memory, and inactive chance nodes. The `random.jl` module is well suited for this purpose. We also need to agree on good metrics for the benchmarks.
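As an illustration of the sampling dimensions mentioned above (this is not the `random.jl` API, whose actual generator should be used in practice; the struct and parameter names here are hypothetical), a parameter sweep over diagram attributes might look like:

```julia
# Hypothetical specification of the attributes we want to vary when
# sampling random influence diagrams. The real generator lives in
# random.jl; its exact signature is not shown in this issue.
struct DiagramSpec
    n_chance::Int      # number of chance nodes
    n_decision::Int    # number of decision nodes
    memory_limit::Int  # limited-memory bound on information arcs
end

# Cartesian sweep over the attributes: 3 × 2 × 2 = 12 configurations.
specs = [DiagramSpec(c, d, m) for c in (4, 8, 16), d in (2, 4), m in (1, 2)]
println(length(specs))
```

Each `DiagramSpec` would then be passed to the random-diagram generator, and the benchmark metrics recorded per configuration.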
@jandelmi mentioned analyzing the model generated by Gurobi, which might be useful here as well.
We can use BenchmarkTools.jl to measure model-creation performance, namely time and allocations. If needed, we can also implement regression testing to measure the performance impact of changes to the decision programming library.
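As a minimal sketch of how BenchmarkTools.jl could capture time and allocations for model creation (the `build_model` function below is a placeholder standing in for the actual decision-programming model builder, which is not shown in this issue):

```julia
using BenchmarkTools

# Placeholder for the model-creation routine under test; substitute
# the real decision-programming model builder here.
build_model(n) = [rand(n) for _ in 1:n]

# Interpolating the argument with $ avoids measuring global-variable
# access, so the trial reflects only the function itself.
b = @benchmark build_model($100)

# Minimum-time estimate: time in nanoseconds and allocation count,
# both useful as metrics for regression tracking.
println(minimum(b).time)
println(minimum(b).allocs)
```

Storing these numbers per commit would give us the regression-testing baseline mentioned above.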