From 588895a67c3d323afaae39a6cb73beffbfafb3ff Mon Sep 17 00:00:00 2001
From: tmigot
Date: Tue, 6 Aug 2024 17:44:48 -0400
Subject: [PATCH] fix benchmark doc

---
 docs/src/benchmark.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/src/benchmark.md b/docs/src/benchmark.md
index 7d449bfa..c258f8d9 100644
--- a/docs/src/benchmark.md
+++ b/docs/src/benchmark.md
@@ -12,13 +12,13 @@ using OptimizationProblems.PureJuMP
 ```
 We select the problems from `PureJuMP` submodule of `OptimizationProblems` converted in [NLPModels](https://github.com/JuliaSmoothOptimizers/NLPModels.jl) using [NLPModelsJuMP](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl).
 ``` @example ex1
-problems = (MathOptNLPModel(eval(Meta.parse(problem))(), name=problem) for problem ∈ OptimizationProblems.meta[!, :name])
+problems = (MathOptNLPModel(OptimizationProblems.PureJuMP.eval(Meta.parse(problem))(), name=problem) for problem ∈ OptimizationProblems.meta[!, :name])
 ```
 The same can be achieved using `OptimizationProblems.ADNLPProblems` instead of `OptimizationProblems.PureJuMP` as follows:
 ``` @example ex1
 using ADNLPModels
 using OptimizationProblems.ADNLPProblems
-ad_problems = (OptimizationProblems.ADNLPProblems.eval(Meta.parse(problem))() for problem ∈ OptimizationProblems.meta[!, :name])
 ```
 We also define a dictionary of solvers that will be used for our benchmark. We consider here `JSOSolvers.lbfgs` and `JSOSolvers.trunk`.
 
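The last context line of the patched page mentions a dictionary of solvers built from `JSOSolvers.lbfgs` and `JSOSolvers.trunk`, but that code falls outside this hunk. Below is a minimal sketch of what such a dictionary typically looks like; the dictionary name, the `max_time` keyword, and the `bmark_solvers` call from SolverBenchmark are illustrative assumptions, not taken from this patch.

```julia
using JSOSolvers

# Sketch (assumption): map a solver name to a callable applying it to one model.
# The max_time value is illustrative, not taken from the documentation page.
solvers = Dict(
  :lbfgs => model -> lbfgs(model, max_time = 30.0),
  :trunk => model -> trunk(model, max_time = 30.0),
)

# A dictionary in this shape is what SolverBenchmark.bmark_solvers expects,
# e.g. `stats = bmark_solvers(solvers, ad_problems)` (assumed usage).
```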