From 3b039804d20e5b5b62d310c02b2552a2cfa99587 Mon Sep 17 00:00:00 2001
From: Ronny Bergmann
Date: Fri, 17 Nov 2023 11:26:27 +0100
Subject: [PATCH] Fix references.

---
 docs/src/plans/index.md                         | 2 +-
 docs/src/plans/objective.md                     | 1 -
 docs/src/solvers/ChambollePock.md               | 4 ++--
 docs/src/solvers/augmented_Lagrangian_method.md | 2 +-
 docs/src/solvers/difference_of_convex.md        | 2 +-
 docs/src/solvers/exact_penalty_method.md        | 2 +-
 docs/src/solvers/gradient_descent.md            | 9 ++++-----
 docs/src/solvers/index.md                       | 2 +-
 docs/src/solvers/subgradient.md                 | 2 +-
 9 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/docs/src/plans/index.md b/docs/src/plans/index.md
index 54bf3f4c76..857f324b94 100644
--- a/docs/src/plans/index.md
+++ b/docs/src/plans/index.md
@@ -9,7 +9,7 @@ information is required about both the optimisation task or “problem” at han
 This together is called a __plan__ in `Manopt.jl` and it consists of two data structures:
 
 * The [Manopt Problem](@ref ProblemSection) describes all _static_ data of a task, most prominently the manifold and the objective.
-* The [Solver State](@refsec:solver-state) describes all _varying_ data and parameters for the solver that is used. This also means that each solver has its own data structure for the state.
+* The [Solver State](@ref sec-solver-state) describes all _varying_ data and parameters for the solver that is used. This also means that each solver has its own data structure for the state.
 
 By splitting these two parts, one problem can be define an then be solved using different solvers.
 
diff --git a/docs/src/plans/objective.md b/docs/src/plans/objective.md
index d4e3642a0c..1fe163dd5e 100644
--- a/docs/src/plans/objective.md
+++ b/docs/src/plans/objective.md
@@ -20,7 +20,6 @@ InplaceEvaluation
 evaluation_type
 ```
 
-
 ## Decorators for objectives
 
 An objective can be decorated using the following trait and function to initialize
diff --git a/docs/src/solvers/ChambollePock.md b/docs/src/solvers/ChambollePock.md
index 930c22cddf..5cfbbb551a 100644
--- a/docs/src/solvers/ChambollePock.md
+++ b/docs/src/solvers/ChambollePock.md
@@ -20,7 +20,7 @@ such that ``Λ(\mathcal C) \subset \mathcal D``.
 The algorithm is available in four variants: exact versus linearized (see `variant`)
 as well as with primal versus dual relaxation (see `relax`). For more details, see
 [BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez:2021](@citet*).
-In the following we note the case of the exact, primal relaxed Riemannian Chambolle—Pock algorithm.
+The following describes the case of the exact, primal relaxed Riemannian Chambolle—Pock algorithm.
 
 Given base points ``m∈\mathcal C``, ``n=Λ(m)∈\mathcal D``,
 initial primal and dual values ``p^{(0)} ∈\mathcal C``, ``ξ_n^{(0)} ∈T_n^*\mathcal N``,
@@ -67,7 +67,7 @@ ChambollePock!
 ChambollePockState
 ```
 
-## Useful Terms
+## Useful terms
 
 ```@docs
 primal_residual
diff --git a/docs/src/solvers/augmented_Lagrangian_method.md b/docs/src/solvers/augmented_Lagrangian_method.md
index ada799cf92..cbb2a773f2 100644
--- a/docs/src/solvers/augmented_Lagrangian_method.md
+++ b/docs/src/solvers/augmented_Lagrangian_method.md
@@ -15,7 +15,7 @@ CurrentModule = Manopt
 AugmentedLagrangianMethodState
 ```
 
-## Helping Functions
+## Helping functions
 
 ```@docs
 AugmentedLagrangianCost
diff --git a/docs/src/solvers/difference_of_convex.md b/docs/src/solvers/difference_of_convex.md
index 89a314cdd1..0cb427101d 100644
--- a/docs/src/solvers/difference_of_convex.md
+++ b/docs/src/solvers/difference_of_convex.md
@@ -18,7 +18,7 @@ difference_of_convex_proximal_point
 difference_of_convex_proximal_point!
 ```
 
-## Manopt Solver States
+## Solver states
 
 ```@docs
 DifferenceOfConvexState
diff --git a/docs/src/solvers/exact_penalty_method.md b/docs/src/solvers/exact_penalty_method.md
index 9f17a3f32c..2ad1fffc89 100644
--- a/docs/src/solvers/exact_penalty_method.md
+++ b/docs/src/solvers/exact_penalty_method.md
@@ -15,7 +15,7 @@ CurrentModule = Manopt
 ExactPenaltyMethodState
 ```
 
-## Helping Functions
+## Helping functions
 
 ```@docs
 ExactPenaltyCost
diff --git a/docs/src/solvers/gradient_descent.md b/docs/src/solvers/gradient_descent.md
index e5ccf90cb9..db3b818961 100644
--- a/docs/src/solvers/gradient_descent.md
+++ b/docs/src/solvers/gradient_descent.md
@@ -15,7 +15,7 @@ CurrentModule = Manopt
 GradientDescentState
 ```
 
-## Direction Update Rules
+## Direction update rules
 
 A field of the options is the `direction`, a [`DirectionUpdateRule`](@ref), which by default
 [`IdentityUpdateRule`](@ref) just evaluates the gradient but can be enhanced for example to
@@ -27,7 +27,7 @@ AverageGradient
 Nesterov
 ```
 
-## Debug Actions
+## Debug actions
 
 ```@docs
 DebugGradient
@@ -35,7 +35,7 @@ DebugGradientNorm
 DebugStepsize
 ```
 
-## Record Actions
+## Record actions
 
 ```@docs
 RecordGradient
@@ -47,8 +47,7 @@ RecordStepsize
 
 The [`gradient_descent`](@ref) solver requires the following functions of a manifold to be available
 
-* A [retract!](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/)ion; it is recommended to set the [`default_retraction_method`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/#ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}) to a favourite retraction,
-for this case it does not have to be specified.
+* A [retract!](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/)ion; it is recommended to set the [`default_retraction_method`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/#ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}) to a favourite retraction. If this default is set, a `retraction_method=` does not have to be specified.
 * By default gradient descent uses [`ArmijoLinesearch`](@ref) which requires [`max_stepsize`](@ref)`(M)` to be set and an implementation of [`inner`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#ManifoldsBase.inner-Tuple%7BAbstractManifold,%20Any,%20Any,%20Any%7D)`(M, p, X)`.
 * By default the stopping criterion uses the [`norm`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#LinearAlgebra.norm-Tuple{AbstractManifold,%20Any,%20Any}) as well, to check for a small gradient, but if you implemented `inner` from the last point, the norm is provided already.
 * By default the tangent vector storing the gradient is initialized calling [`zero_vector`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#ManifoldsBase.zero_vector-Tuple{AbstractManifold,%20Any})`(M,p)`.
diff --git a/docs/src/solvers/index.md b/docs/src/solvers/index.md
index e6e3866ff2..ea03bcb583 100644
--- a/docs/src/solvers/index.md
+++ b/docs/src/solvers/index.md
@@ -31,7 +31,7 @@ The following algorithms are currently available
 [Primal-dual Riemannian semismooth Newton Algorithm](@ref PDRSSNSolver) | [`primal_dual_semismooth_Newton`](@ref), [`PrimalDualSemismoothNewtonState`](@ref) (using [`TwoManifoldProblem`](@ref)) | ``f=F+G(Λ\cdot)``, ``\operatorname{prox}_{σ F}`` & diff., ``\operatorname{prox}_{τ G^*}`` & diff., ``Λ``
 [Quasi-Newton Method](@ref quasiNewton) | [`quasi_Newton`](@ref), [`QuasiNewtonState`](@ref) | ``f``, ``\operatorname{grad} f`` |
 [Steihaug-Toint Truncated Conjugate-Gradient Method](@ref tCG) | [`truncated_conjugate_gradient_descent`](@ref), [`TruncatedConjugateGradientState`](@ref) | ``f``, ``\operatorname{grad} f``, ``\operatorname{Hess} f`` |
-[Subgradient Method](@refsec-subgradient-method) | [`subgradient_method`](@ref), [`SubGradientMethodState`](@ref) | ``f``, ``∂ f`` |
+[Subgradient Method](@ref sec-subgradient-method) | [`subgradient_method`](@ref), [`SubGradientMethodState`](@ref) | ``f``, ``∂ f`` |
 [Stochastic Gradient Descent](@ref StochasticGradientDescentSolver) | [`stochastic_gradient_descent`](@ref), [`StochasticGradientDescentState`](@ref) | ``f = \sum_i f_i``, ``\operatorname{grad} f_i`` |
 [The Riemannian Trust-Regions Solver](@ref trust_regions) | [`trust_regions`](@ref), [`TrustRegionsState`](@ref) | ``f``, ``\operatorname{grad} f``, ``\operatorname{Hess} f`` |
 
diff --git a/docs/src/solvers/subgradient.md b/docs/src/solvers/subgradient.md
index 9a6b6966f8..f5e1b31ec6 100644
--- a/docs/src/solvers/subgradient.md
+++ b/docs/src/solvers/subgradient.md
@@ -1,4 +1,4 @@
-# [Subgradient method](@idsec-subgradient-method)
+# [Subgradient method](@id sec-subgradient-method)
 
 ```@docs
 subgradient_method
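
A note on the convention these hunks fix: in Documenter.jl cross-references, the `@id`/`@ref` marker must be separated from its label by a space, which is what distinguishes the working `(@ref sec-subgradient-method)` from the broken `(@refsec-subgradient-method)`. A minimal sketch of the pattern, using a hypothetical label `sec-example`:

```markdown
# [Example section](@id sec-example)

Link here from any other docs page with [this reference](@ref sec-example).
```

Without the space, the parenthesised text is treated as a plain link target rather than a cross-reference, so the anchor does not resolve.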