add ScaledModel #123
base: main
Conversation
Codecov Report

```
@@           Coverage Diff            @@
##             main     #123    +/-   ##
========================================
+ Coverage   97.40%   97.56%   +0.15%
========================================
  Files           6        7       +1
  Lines         888      986      +98
========================================
+ Hits          865      962      +97
- Misses         23       24       +1
```
Thanks @frapac for the PR! Here is a first pass of comments. Sorry if I ask for a lot of clarification.
By the way, would you have a more general use case that would serve as a basis for a tutorial?
with ``σf`` a scalar defined as

```julia
σf = min(1, max_gradient / norm(g0, Inf))
```
What is `max_gradient`? Do we want to add a max as well, to avoid having `σf` become too small? Maybe this bound should depend on the eltype.
I have added more explanations in the docstring.
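To make the discussion concrete, here is a minimal sketch of the scaling rule with the eltype-dependent floor suggested above. `max_gradient` comes from the formula under review; `min_scaling` and the function name are hypothetical, not part of the package's actual API:

```julia
using LinearAlgebra

# Conservative objective scaling: clamp the factor between a floor and 1.
# The floor depends on the eltype T, as suggested in the review.
function objective_scaling(g0::AbstractVector{T};
                           max_gradient::T = T(100),
                           min_scaling::T = sqrt(eps(T))) where {T}
    σf = min(one(T), max_gradient / norm(g0, Inf))
    return max(σf, min_scaling)  # guard against a vanishing scaling factor
end
```

With `Float64`, a gradient of norm `1e20` would then yield `σf ≈ 1.5e-8` instead of `1e-18`, keeping the scaled problem numerically meaningful.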
```julia
    return nlp.scaling_obj * NLPModels.obj(nlp.nlp, x)
end

function NLPModels.cons!(nlp::ScaledModel, x::AbstractVector, c::AbstractVector)
```
This is more of a general comment on future work. We recently split the constraint API into nonlinear and linear parts. Would it make sense, in future work, to have two different scalings for linear and nonlinear constraints?
Actually, in any case it would be better to have `cons_lin!` and `cons_nln!` instead of `cons!`.
I would prefer to keep the interface as is. As far as I understand, `cons!` calls `cons_lin!` and `cons_nln!` internally by default, and here the scaling does not depend on the nature of the constraint.
Okay, but calling a solver on a `ScaledModel` would raise a `cons_nln!`-not-implemented error.
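One way to address this, sketched under the assumption that the wrapper stores its constraint factors in a hypothetical `scaling_cons` field, is to forward the split API as well, scaling each slice by the corresponding factors (NLPModels stores the linear and nonlinear constraint indices in `meta.lin` and `meta.nln`):

```julia
# Hypothetical sketch: implement the split constraint API on ScaledModel so
# that solvers calling cons_lin!/cons_nln! directly do not hit a
# not-implemented error. Field names are illustrative, not the PR's code.
function NLPModels.cons_nln!(nlp::ScaledModel, x::AbstractVector, c::AbstractVector)
    NLPModels.cons_nln!(nlp.nlp, x, c)           # unscaled nonlinear residuals
    c .*= view(nlp.scaling_cons, nlp.nlp.meta.nln)  # apply nonlinear factors
    return c
end

function NLPModels.cons_lin!(nlp::ScaledModel, x::AbstractVector, c::AbstractVector)
    NLPModels.cons_lin!(nlp.nlp, x, c)           # unscaled linear residuals
    c .*= view(nlp.scaling_cons, nlp.nlp.meta.lin)  # apply linear factors
    return c
end
```

This keeps a single vector of scaling factors while still satisfying solvers that bypass `cons!`.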
src/scaled-model.jl (outdated)

```
@@ -0,0 +1,236 @@
export ScaledModel

struct IpoptScaling{T}
```
Is this really specific to Ipopt?
I renamed the scaling `ConservativeScaling`. It is not specific to Ipopt; other solvers use this scaling as well.
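For readers following along, the renamed strategy type might look like the following sketch; the field and default value are assumptions for illustration, not the PR's actual definition:

```julia
# Illustrative sketch of the renamed scaling-strategy type. The scaling
# factors themselves are computed from the gradient and Jacobian at x0,
# capped by max_gradient, as described in the docstring under review.
struct ConservativeScaling{T}
    max_gradient::T
end

# Hypothetical convenience constructor with a common default cap.
ConservativeScaling(; max_gradient = 100.0) = ConservativeScaling(max_gradient)
```

Keeping the strategy as its own type makes it easy to add alternative scalings later without touching `ScaledModel` itself.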
```julia
the gradient and the Jacobian evaluated at the initial point ``x0``.
"""
struct ScaledModel{T, S, M} <: NLPModels.AbstractNLPModel{T, S}
```
And the same comment applies throughout the file.
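As a usage sketch of the wrapper discussed in this thread (the constructor signature is an assumption based on the docstring, and `ADNLPModel` is only one convenient way to build a test problem):

```julia
using NLPModels, ADNLPModels

# Rosenbrock test problem with the standard starting point.
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])

# Wrap it so evaluations are scaled by factors computed at x0
# (hypothetical constructor call; see the docstring under review).
scaled = ScaledModel(nlp)

# Evaluations on the wrapper return scaled values.
obj(scaled, scaled.meta.x0)
```

A tutorial built around a badly scaled model (e.g. one with constraints in very different units) would make a good basis for the general use case requested above.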
Following a suggestion by @dpo