
update diagonal QN models #107

Merged: 4 commits into main from diag-qn-updates on Feb 26, 2024

Conversation

@dpo (Member) commented Feb 26, 2024

This update follows an update of diagonal QN operators in LinearOperators.jl.

See JuliaSmoothOptimizers/LinearOperators.jl#316
@dpo dpo requested a review from tmigot February 26, 2024 02:58
@dpo (Member, Author) commented Feb 26, 2024

@tmigot Is there a reason for the SpectralGradientModel to exist separately from DiagonalQNModel? It’s just a special case.
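
For readers who don't know the two models: a diagonal quasi-Newton model keeps a Hessian approximation B = Diagonal(d), while the spectral gradient model keeps B = σI, i.e., the case where all diagonal entries coincide. A minimal sketch of that relationship, assuming the classical weak-secant update (the exact formulas implemented in LinearOperators.jl may differ):

```julia
using LinearAlgebra

# Weak-secant diagonal update (illustrative, not necessarily the exact
# formula used by LinearOperators.jl): pick D⁺ = Diagonal(d) + τ * Diagonal(s .^ 2)
# so that the weak secant equation s' * D⁺ * s = s' * y holds.
function diagonal_qn_update(d::AbstractVector, s::AbstractVector, y::AbstractVector)
    τ = (dot(s, y) - dot(s .^ 2, d)) / dot(s .^ 2, s .^ 2)
    return d .+ τ .* s .^ 2
end

# Spectral gradient: restrict D to σI; the same weak secant equation,
# σ⁺ * (s' * s) = s' * y, yields the Barzilai-Borwein coefficient.
spectral_update(s::AbstractVector, y::AbstractVector) = dot(s, y) / dot(s, s)
```

Constraining the diagonal to a multiple of the identity collapses the update to a single scalar, which is why SpectralGradientModel looks like a special case of DiagonalQNModel.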

A repository bot (Contributor) posted breakage test results: a table checking downstream packages (ADNLPModels.jl, AmplNLReader.jl, CUTEst.jl, CaNNOLeS.jl, DCI.jl, FletcherPenaltySolver.jl, JSOSolvers.jl, LLSModels.jl, NLPModelsIpopt.jl, NLPModelsJuMP.jl, NLPModelsTest.jl, Percival.jl, QuadraticModels.jl, SolverBenchmark.jl, SolverTools.jl) against the latest and stable releases. The pass/fail badges did not survive extraction; the same table was reposted after each push.

@tmigot (Member) commented Feb 26, 2024

There was no specific reason, and I think your PR clarifies it. The changes look good to me, and the tests pass when I run them locally. So we just need a new (breaking) release of LinearOperators.jl after JuliaSmoothOptimizers/LinearOperators.jl#316.
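
For anyone wanting to reproduce that local test run before the breaking release is tagged, a minimal sketch, assuming the updated operators live on the LinearOperators.jl default branch (the rev is an assumption; use whichever branch carries PR #316):

```julia
using Pkg

# Point the active environment at the in-development LinearOperators.jl
# (rev = "main" is an assumption, not a confirmed branch name).
Pkg.add(url = "https://github.com/JuliaSmoothOptimizers/LinearOperators.jl", rev = "main")

# Run this package's test suite against it.
Pkg.test()
```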

codecov bot commented Feb 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.40%. Comparing base (1f127a9) to head (26dd0f3).
The report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #107   +/-   ##
=======================================
  Coverage   97.40%   97.40%           
=======================================
  Files           6        6           
  Lines         887      888    +1     
=======================================
+ Hits          864      865    +1     
  Misses         23       23           


@dpo (Member, Author) commented Feb 26, 2024

I believe the AmplNLReader failure should go away after this PR has been merged.

@tmigot (Member) left a review:

LGTM, thanks!
AmplNLReader.jl fails because we haven't updated it to NLPModels 0.19 and 0.20, since the linear API hasn't been implemented there yet. So I would say that's an expected failure.

@dpo dpo merged commit 40f0c0f into main Feb 26, 2024
47 of 49 checks passed
@dpo dpo deleted the diag-qn-updates branch February 26, 2024 22:25