

add ScaledModel #123

Open · wants to merge 2 commits into base: main

Conversation

frapac

@frapac frapac commented Jul 25, 2024

Following a suggestion by @dpo


codecov bot commented Jul 25, 2024

Codecov Report

Attention: Patch coverage is 99.01961% with 1 line in your changes missing coverage. Please review.

Project coverage is 97.56%. Comparing base (40f0c0f) to head (051c3d2).
Report is 12 commits behind head on main.

| Files | Patch % | Lines |
| --- | --- | --- |
| src/scaled-model.jl | 99.01% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #123      +/-   ##
==========================================
+ Coverage   97.40%   97.56%   +0.15%     
==========================================
  Files           6        7       +1     
  Lines         888      986      +98     
==========================================
+ Hits          865      962      +97     
- Misses         23       24       +1     

☔ View full report in Codecov by Sentry.


Breakage test results on downstream packages (latest / stable status badges not captured in this export):
ADNLPModels.jl
AmplNLReader.jl
CUTEst.jl
CaNNOLeS.jl
DCI.jl
FletcherPenaltySolver.jl
JSOSolvers.jl
LLSModels.jl
NLPModelsIpopt.jl
NLPModelsJuMP.jl
NLPModelsTest.jl
Percival.jl
QuadraticModels.jl
SolverBenchmark.jl
SolverTools.jl

Member

@tmigot tmigot left a comment


Thanks @frapac for the PR! Here is a first pass of comments. Sorry if I ask for a lot of clarification.
By the way, would you have a more general use case that would serve as a basis for a tutorial?

```
with ``σf`` a scalar defined as
```
σf = min(1, max_gradient / norm(g0, Inf))
```
Member


What is max_gradient?
Do we want to add a max as well to avoid having σf too small? Maybe this bound should depend on the eltype.
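For reference, the rule quoted above only clamps the factor from above at 1; the floor suggested here could be a second, eltype-dependent clamp. A minimal sketch, assuming illustrative names (`objective_scaling` and `min_scaling` are not the PR's actual API; the default cap of 100 is the value Ipopt uses for its gradient-based scaling):

```julia
using LinearAlgebra

# Hypothetical sketch of gradient-based objective scaling with both bounds.
# `max_gradient` caps the scaled gradient norm; `min_scaling` is an
# eltype-dependent floor so σf cannot become vanishingly small.
function objective_scaling(g0::AbstractVector{T}; max_gradient = T(100)) where {T}
    min_scaling = sqrt(eps(T))                        # floor depends on eltype T
    σf = min(one(T), max_gradient / norm(g0, Inf))    # shrink only, never enlarge
    return max(σf, min_scaling)                       # suggested lower clamp
end
```

With this sketch, a small gradient leaves the objective unscaled (σf = 1), while a huge gradient is scaled down no further than `sqrt(eps(T))`.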

Author


I have added more explanations in the docstring.

return nlp.scaling_obj * NLPModels.obj(nlp.nlp, x)
end

function NLPModels.cons!(nlp::ScaledModel, x::AbstractVector, c::AbstractVector)
Member


This is more of a general comment on future work. We recently split the constraint API into linear and nonlinear parts. Would it make sense, in future work, to have two different scalings for linear and nonlinear constraints?

Member


Actually, it would be better to have cons_lin! and cons_nln! instead of cons! anyway.

Author


I would prefer to keep the interface as is. As far as I understand, cons! calls cons_lin! and cons_nln! internally by default, and here the scaling does not depend on the nature of the constraint.
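To illustrate why the scaling can stay agnostic to the constraint type: if the wrapper applies one elementwise factor per constraint row after delegating to the inner evaluation, linear and nonlinear rows are treated identically. The names below (`ScaledWrapper`, `scaled_cons!`, `scaling_cons`, `inner_cons!`) are illustrative stand-ins, not the PR's actual code:

```julia
# Sketch of a wrapper that scales constraints elementwise. `inner_cons!`
# stands in for NLPModels.cons! on the wrapped model; the same row scaling
# applies to every constraint, linear or nonlinear.
struct ScaledWrapper{F, V}
    inner_cons!::F       # (x, c) -> c, evaluates the unscaled constraints
    scaling_cons::V      # one positive scaling factor per constraint row
end

function scaled_cons!(m::ScaledWrapper, x::AbstractVector, c::AbstractVector)
    m.inner_cons!(x, c)          # evaluate unscaled constraints in place
    c .*= m.scaling_cons         # apply the row scaling
    return c
end
```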

Member


Okay, but calling a solver on a ScaledModel would then fail with cons_nln! unimplemented.

@@ -0,0 +1,236 @@
export ScaledModel

struct IpoptScaling{T}
Member


Is this really specific to Ipopt?

Author


I renamed the scaling ConservativeScaling. It is not specific to Ipopt; other solvers use this scaling as well.
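A hypothetical sketch of what a solver-agnostic ConservativeScaling strategy could look like, following the σ = min(1, max_gradient / ‖g‖∞) rule discussed above. Everything here except the name ConservativeScaling is an assumption for illustration, not the PR's actual definition:

```julia
using LinearAlgebra

# "Conservative" in the sense that factors are capped at 1: functions are
# only ever scaled down, never amplified.
struct ConservativeScaling{T}
    max_gradient::T     # cap on the scaled gradient norm (Ipopt defaults to 100)
end

# One factor per row of `G`, where each row holds a function's gradient at x0.
# The eps(T) guard avoids division by zero for all-zero rows.
function scale_factors(s::ConservativeScaling{T}, G::AbstractMatrix{T}) where {T}
    return [min(one(T), s.max_gradient / max(norm(view(G, i, :), Inf), eps(T)))
            for i in axes(G, 1)]
end
```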

the gradient and the Jacobian evaluated at the initial point ``x0``.

"""
struct ScaledModel{T, S, M} <: NLPModels.AbstractNLPModel{T, S}
Member


And the same comment applies throughout the file.
