Commit 3b03980: Fix references.

kellertuer committed Nov 17, 2023 (1 parent: a7edf1e)
Showing 9 changed files with 12 additions and 14 deletions.
2 changes: 1 addition & 1 deletion docs/src/plans/index.md
@@ -9,7 +9,7 @@ information is required about both the optimisation task or “problem” at hand
This together is called a __plan__ in `Manopt.jl` and it consists of two data structures:

* The [Manopt Problem](@ref ProblemSection) describes all _static_ data of a task, most prominently the manifold and the objective.
-* The [Solver State](@refsec:solver-state) describes all _varying_ data and parameters for the solver that is used. This also means that each solver has its own data structure for the state.
+* The [Solver State](@ref sec-solver-state) describes all _varying_ data and parameters for the solver that is used. This also means that each solver has its own data structure for the state.

By splitting these two parts, one problem can be defined and then solved using different solvers.
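For illustration, a minimal sketch of this split (assuming the `Sphere` manifold from Manifolds.jl; the cost and gradient are illustrative), solving one task with two different solvers:

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]                # a fixed point on the sphere
f(M, p) = distance(M, p, q)^2      # the objective (static data of the problem)
grad_f(M, p) = -2 * log(M, p, q)   # its Riemannian gradient

p0 = [1.0, 0.0, 0.0]
# one problem, two solvers: each call builds its own solver state internally
p1 = gradient_descent(M, f, grad_f, p0)
p2 = quasi_Newton(M, f, grad_f, p0)
```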

1 change: 0 additions & 1 deletion docs/src/plans/objective.md
@@ -20,7 +20,6 @@ InplaceEvaluation
evaluation_type
```

-
## Decorators for objectives

An objective can be decorated using the following trait and function to initialize
4 changes: 2 additions & 2 deletions docs/src/solvers/ChambollePock.md
@@ -20,7 +20,7 @@ such that ``Λ(\mathcal C) \subset \mathcal D``.
The algorithm is available in four variants: exact versus linearized (see `variant`)
as well as with primal versus dual relaxation (see `relax`). For more details, see
[BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez:2021](@citet*).
-In the following we note the case of the exact, primal relaxed Riemannian Chambolle—Pock algorithm.
+The following describes the case of the exact, primal relaxed Riemannian Chambolle–Pock algorithm.

Given base points ``m∈\mathcal C``, ``n=Λ(m)∈\mathcal D``,
initial primal and dual values ``p^{(0)} ∈\mathcal C``, ``ξ_n^{(0)} ∈T_n^*\mathcal N``,
@@ -67,7 +67,7 @@ ChambollePock!
ChambollePockState
```
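A hedged call sketch of choosing between these variants; the positional arguments, the proximal maps `prox_F` and `prox_G_dual`, and the adjoint `adjoint_DΛ` are placeholders defined elsewhere, so consult the docstrings above for the exact argument order:

```julia
# sketch only: all arguments below are placeholders for user-provided data
p = ChambollePock(M, N, cost, p0, ξ0, m, n,
    prox_F, prox_G_dual, adjoint_DΛ;
    variant=:exact,  # or :linearized
    relax=:primal)   # or :dual
```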

-## Useful Terms
+## Useful terms

```@docs
primal_residual
2 changes: 1 addition & 1 deletion docs/src/solvers/augmented_Lagrangian_method.md
@@ -15,7 +15,7 @@ CurrentModule = Manopt
AugmentedLagrangianMethodState
```

-## Helping Functions
+## Helping functions

```@docs
AugmentedLagrangianCost
2 changes: 1 addition & 1 deletion docs/src/solvers/difference_of_convex.md
@@ -18,7 +18,7 @@ difference_of_convex_proximal_point
difference_of_convex_proximal_point!
```

-## Manopt Solver States
+## Solver states

```@docs
DifferenceOfConvexState
2 changes: 1 addition & 1 deletion docs/src/solvers/exact_penalty_method.md
@@ -15,7 +15,7 @@ CurrentModule = Manopt
ExactPenaltyMethodState
```

-## Helping Functions
+## Helping functions

```@docs
ExactPenaltyCost
9 changes: 4 additions & 5 deletions docs/src/solvers/gradient_descent.md
@@ -15,7 +15,7 @@ CurrentModule = Manopt
GradientDescentState
```

-## Direction Update Rules
+## Direction update rules

A field of the options is the `direction`, a [`DirectionUpdateRule`](@ref); the default, [`IdentityUpdateRule`](@ref), just evaluates the gradient, but it can be enhanced, for example, to

@@ -27,15 +27,15 @@ AverageGradient
Nesterov
```
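For example, a hedged sketch of exchanging the default rule (assuming the rule constructors take the manifold as their first argument, which may vary between versions):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2
grad_f(M, p) = -2 * log(M, p, q)

# sketch: average the last 10 gradients instead of using the plain gradient
p_avg = gradient_descent(M, f, grad_f, [1.0, 0.0, 0.0];
    direction=AverageGradient(M; n=10))
```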

-## Debug Actions
+## Debug actions

```@docs
DebugGradient
DebugGradientNorm
DebugStepsize
```

-## Record Actions
+## Record actions

```@docs
RecordGradient
@@ -47,8 +47,7 @@ RecordStepsize

The [`gradient_descent`](@ref) solver requires the following functions of a manifold to be available (a combined usage sketch follows this list):

-* A [retract!](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/)ion; it is recommended to set the [`default_retraction_method`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/#ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}) to a favourite retraction,
-for this case it does not have to be specified.
+* A [retract!](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/)ion; it is recommended to set the [`default_retraction_method`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/retractions/#ManifoldsBase.default_retraction_method-Tuple{AbstractManifold}) to a favourite retraction. If this default is set, a `retraction_method=` does not have to be specified.
* By default gradient descent uses [`ArmijoLinesearch`](@ref), which requires [`max_stepsize`](@ref)`(M)` to be set and an implementation of [`inner`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#ManifoldsBase.inner-Tuple%7BAbstractManifold,%20Any,%20Any,%20Any%7D)`(M, p, X, Y)`.
* By default the stopping criterion also uses the [`norm`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#LinearAlgebra.norm-Tuple{AbstractManifold,%20Any,%20Any}) to check for a small gradient; if you implemented `inner` from the previous point, the norm is already provided.
* By default the tangent vector storing the gradient is initialized calling [`zero_vector`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#ManifoldsBase.zero_vector-Tuple{AbstractManifold,%20Any})`(M,p)`.
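
A combined usage sketch of these points (the concrete manifold, cost, retraction, and tolerance are illustrative assumptions, not requirements of the solver):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2
grad_f(M, p) = -2 * log(M, p, q)

p_opt = gradient_descent(M, f, grad_f, [1.0, 0.0, 0.0];
    retraction_method=ProjectionRetraction(),           # explicit choice instead of the default
    stepsize=ArmijoLinesearch(M),                       # the default line search, stated explicitly
    stopping_criterion=StopWhenGradientNormLess(1e-8))  # relies on norm/inner from above
```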
2 changes: 1 addition & 1 deletion docs/src/solvers/index.md
@@ -31,7 +31,7 @@ The following algorithms are currently available
[Primal-dual Riemannian semismooth Newton Algorithm](@ref PDRSSNSolver) | [`primal_dual_semismooth_Newton`](@ref), [`PrimalDualSemismoothNewtonState`](@ref) (using [`TwoManifoldProblem`](@ref)) | ``f=F+G(Λ\cdot)``, ``\operatorname{prox}_{σ F}`` & diff., ``\operatorname{prox}_{τ G^*}`` & diff., ``Λ``
[Quasi-Newton Method](@ref quasiNewton) | [`quasi_Newton`](@ref), [`QuasiNewtonState`](@ref) | ``f``, ``\operatorname{grad} f`` |
[Steihaug-Toint Truncated Conjugate-Gradient Method](@ref tCG) | [`truncated_conjugate_gradient_descent`](@ref), [`TruncatedConjugateGradientState`](@ref) | ``f``, ``\operatorname{grad} f``, ``\operatorname{Hess} f`` |
-[Subgradient Method](@refsec-subgradient-method) | [`subgradient_method`](@ref), [`SubGradientMethodState`](@ref) | ``f``, ``∂ f`` |
+[Subgradient Method](@ref sec-subgradient-method) | [`subgradient_method`](@ref), [`SubGradientMethodState`](@ref) | ``f``, ``∂ f`` |
[Stochastic Gradient Descent](@ref StochasticGradientDescentSolver) | [`stochastic_gradient_descent`](@ref), [`StochasticGradientDescentState`](@ref) | ``f = \sum_i f_i``, ``\operatorname{grad} f_i`` |
[The Riemannian Trust-Regions Solver](@ref trust_regions) | [`trust_regions`](@ref), [`TrustRegionsState`](@ref) | ``f``, ``\operatorname{grad} f``, ``\operatorname{Hess} f`` |
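
For instance, matching the ``f``, ``∂ f`` row of this table, a minimal sketch of the subgradient method (the manifold and cost are illustrative; `∂f` returns one element of the subdifferential):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)          # nonsmooth at p = q
function ∂f(M, p)                    # one subgradient of the distance
    d = distance(M, p, q)
    return d == 0 ? zero_vector(M, p) : -log(M, p, q) / d
end

p_opt = subgradient_method(M, f, ∂f, [1.0, 0.0, 0.0])
```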

2 changes: 1 addition & 1 deletion docs/src/solvers/subgradient.md
@@ -1,4 +1,4 @@
-# [Subgradient method](@idsec-subgradient-method)
+# [Subgradient method](@id sec-subgradient-method)

```@docs
subgradient_method
