Create benchmarks #6

Open
@jaantollander

Description

We should create benchmarks for decision programming on different kinds of influence diagrams. Here are some ideas on what to measure:

  • Hard versus soft lower bounds on the path probability variables when the positive path utility is used.
  • The effect of lazy cuts on performance.
  • The effect of limited-memory influence diagrams on performance, compared to the no-forgetting assumption.
  • Performance comparison between the expected value and conditional value at risk objectives.
  • Different Gurobi settings (a parameter-sweep sketch follows this list).
  • Memory usage might also be interesting to track.
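
For the Gurobi settings comparison, a parameter sweep could be structured roughly as in the sketch below. The parameters named are standard Gurobi parameters; `diagram` and `build_decision_model` are placeholders for however the instance and model are actually constructed.

using JuMP, Gurobi

# Sketch of sweeping a few standard Gurobi parameters on one instance.
# `diagram` and `build_decision_model` are placeholders, not existing API.
settings = [
    ["MIPFocus" => 0],               # balanced search (default)
    ["MIPFocus" => 1],               # prioritize finding feasible solutions
    ["MIPFocus" => 2, "Cuts" => 2],  # prioritize proving optimality, aggressive cuts
]

for params in settings
    model = build_decision_model(diagram)  # placeholder
    set_optimizer(model, Gurobi.Optimizer)
    for (name, value) in params
        set_optimizer_attribute(model, name, value)
    end
    optimize!(model)
    @info "Gurobi run" params solve_time(model) objective_value(model)
end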

Measuring performance requires random sampling of influence diagrams with different attributes such as the number of nodes, limited memory, and inactive chance nodes. The random.jl module is suited for this purpose. We also need to agree on good metrics for the benchmarks.
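
As a rough starting point, the harness could look something like the following. `generate_random_diagram` and `build_decision_model` are placeholders for whatever random.jl and the model construction code end up providing; only the measurement part relies on existing functionality (Base's @timed on Julia 1.5+ and JuMP's solution queries). Note that @timed only sees Julia-side allocations, not Gurobi's internal memory use.

using JuMP, Gurobi

# Sketch of a benchmark harness over randomly sampled influence diagrams.
# `generate_random_diagram` and `build_decision_model` are placeholders.
function benchmark_instance(n_nodes; seed = 1)
    diagram = generate_random_diagram(n_nodes; seed = seed)  # placeholder
    model = build_decision_model(diagram)                    # placeholder
    set_optimizer(model, Gurobi.Optimizer)
    set_optimizer_attribute(model, "OutputFlag", 0)
    stats = @timed optimize!(model)
    return (nodes = n_nodes,
            wall_time = stats.time,           # Julia-side wall-clock time
            solver_time = solve_time(model),  # solver-reported runtime
            julia_bytes = stats.bytes,        # Julia allocations only
            objective = objective_value(model))
end

results = [benchmark_instance(n; seed = s) for n in (4, 8, 16) for s in 1:10]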

@jandelmi mentioned analyzing the model that is passed to Gurobi, which might be useful here as well. For example:

using JuMP, Gurobi
# Write the raw Gurobi model behind the JuMP model to an LP file for inspection.
backend = JuMP.backend(model)
gmodel = backend.optimizer.model.inner  # exact field chain depends on the Gurobi.jl version
Gurobi.write_model(gmodel, "gurobi_model.lp")
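
For basic model-size metrics, JuMP's own introspection API should also be enough, without reaching into Gurobi internals; a minimal sketch (`model_statistics` is just an illustrative helper name):

using JuMP

# Sketch: collect model-size metrics through standard JuMP introspection.
function model_statistics(model::Model)
    n_vars = num_variables(model)
    n_cons = sum(num_constraints(model, F, S)
                 for (F, S) in list_of_constraint_types(model))
    return (variables = n_vars, constraints = n_cons)
end

# JuMP can also write the model out directly; the format is inferred from the extension.
write_to_file(model, "model.lp")

Variable and constraint counts, together with solve times, could serve as part of the shared metrics across benchmark instances.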
