diff --git a/Changelog.md b/Changelog.md
index d85e97052a..4a6efb6a97 100644
--- a/Changelog.md
+++ b/Changelog.md
@@ -48,11 +48,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ### Added

-* The `AdaptiveWNGrad` stepsize is now availablbe as a new stepsize functor.
+* The `AdaptiveWNGrad` stepsize is now available as a new stepsize functor.

 ### Fixed

-* Levenberg-Marquardt now posesses its parameters `initial_residual_values` and
+* Levenberg-Marquardt now possesses its parameters `initial_residual_values` and
   `initial_jacobian_f` also as keyword arguments, such that their default initialisations can be adapted, if necessary

@@ -84,7 +84,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ### Added

-* More details on the Count and Cache toturial
+* More details on the Count and Cache tutorial

 ### Changed

@@ -104,7 +104,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
   using LRU Caches as a weak dependency. For now this works with cost and gradient evaluations
 * A `ManifoldCountObjective` as a decorator for objectives to enable counting of calls to for example the cost and the gradient
 * adds a `return_objective` keyword, that switches the return of a solver to a tuple `(o, s)`,
-  where `o` is the (possibly decorated) objective, and `s` os the “classical” solver return (state or point).
+  where `o` is the (possibly decorated) objective, and `s` is the “classical” solver return (state or point).
   This way the counted values can be accessed and the cache can be reused.
 * change solvers on the mid level (from `solver(M, objective, p)`) to also accept decorated objectives

@@ -123,7 +123,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ### Added

-* the sub solver for `trust_regions` is now costumizable, i.e. can be exchanged.
+* the sub solver for `trust_regions` is now customizable, i.e. can be exchanged.

 ### Changed

diff --git a/docs/src/plans/problem.md b/docs/src/plans/problem.md
index d445d6f483..5a4678e394 100644
--- a/docs/src/plans/problem.md
+++ b/docs/src/plans/problem.md
@@ -18,7 +18,7 @@ Usually, such a problem is determined by the manifold or domain of the optimisat
 DefaultManoptProblem
 ```

-The exception to these are the primal dual-based solvers ([Chambolle-Pock](@ref ChambollePockSolver) and the [PD Semismooth Newton](@ref PDRSSNSolver)]), which both need two manifolds as their domain(s), hence thre also exists a
+The exception to these are the primal dual-based solvers ([Chambolle-Pock](@ref ChambollePockSolver) and the [PD Semismooth Newton](@ref PDRSSNSolver)), which both need two manifolds as their domain(s), hence there also exists a

 ```@docs
 TwoManifoldProblem
diff --git a/docs/src/references.md b/docs/src/references.md
index 7776cf358c..b3beda4a65 100644
--- a/docs/src/references.md
+++ b/docs/src/references.md
@@ -1,6 +1,6 @@
 # Literature

-This is all literature mentioned / referenced in the `Manopt.jl` documenation.
+This is all literature mentioned / referenced in the `Manopt.jl` documentation.
 Usually you will find a small reference section at the end of every documentation page that contains references.

 ```@bibliography
diff --git a/docs/src/solvers/index.md b/docs/src/solvers/index.md
index 01c55769d7..808bf7425f 100644
--- a/docs/src/solvers/index.md
+++ b/docs/src/solvers/index.md
@@ -101,7 +101,7 @@ Then it would call the iterate process.
 ### The manual call

-If you generate the correctsponding `problem` and `state` as the previous step does, you can
+If you generate the corresponding `problem` and `state` as the previous step does, you can
 also use the third (lowest level) and just call

 ```
diff --git a/docs/src/solvers/truncated_conjugate_gradient_descent.md b/docs/src/solvers/truncated_conjugate_gradient_descent.md
index b2d825af88..c6d797fccc 100644
--- a/docs/src/solvers/truncated_conjugate_gradient_descent.md
+++ b/docs/src/solvers/truncated_conjugate_gradient_descent.md
@@ -109,7 +109,7 @@ is to stop as soon as an iteration ``k`` is reached for which
 holds, where ``0 < κ < 1`` and ``θ > 0`` are chosen in advance. This is realized in this method by [`StopWhenResidualIsReducedByFactorOrPower`](@ref).

-It can be shown shown that under appropriate conditions the iterates ``x_k``
+It can be shown that under appropriate conditions the iterates ``x_k``
 of the underlying trust-region method converge to nondegenerate critical points with an order of convergence of at least ``\min \left( θ + 1, 2 \right)``, see [Absil, Mahony, Sepulchre, Princeton University Press, 2008](@cite AbsilMahonySepulchre:2008).
diff --git a/docs/src/tutorials/GeodesicRegression.md b/docs/src/tutorials/GeodesicRegression.md
index 7a4d529788..477d931807 100644
--- a/docs/src/tutorials/GeodesicRegression.md
+++ b/docs/src/tutorials/GeodesicRegression.md
@@ -34,7 +34,7 @@ highlighted = 4;
 ## Time Labeled Data

 If for each data item $d_i$ we are also given a time point $t_i\in\mathbb R$, which are pairwise different,
-then we can use the least squares error to state the objetive function as [Fletcher:2013](@cite)
+then we can use the least squares error to state the objective function as [Fletcher:2013](@cite)

 ``` math
 F(p,X) = \frac{1}{2}\sum_{i=1}^n d_{\mathcal M}^2(γ_{p,X}(t_i), d_i),
@@ -362,7 +362,7 @@ where $t = (t_1,\ldots,t_n) \in \mathbb R^n$ is now an additional parameter of t
 We write $F_1(p, X)$ to refer to the function on the tangent bundle for fixed values of $t$ (as the one in the last part) and $F_2(t)$ for the function $F(p, X, t)$ as a function in $t$ with fixed values $(p, X)$.

-For the Euclidean case, there is no neccessity to optimize with respect to $t$, as we saw
+For the Euclidean case, there is no necessity to optimize with respect to $t$, as we saw
 above for the initialization of the fixed time points.
 On a Riemannian manifold this can be stated as a problem on the product manifold $\mathcal N = \mathrm{T}\mathcal M \times \mathbb R^n$, i.e.
@@ -380,7 +380,7 @@ N = M × Euclidean(length(t2))
 ```

 In this tutorial we present an approach to solve this using an alternating gradient descent scheme.
-To be precise, we define the cost funcion now on the product manifold
+To be precise, we define the cost function now on the product manifold

 ``` julia
 struct RegressionCost2{T}
@@ -430,7 +430,7 @@ function (a::RegressionGradient2a!)(N, Y, x)
 end
 ```

-Finally, we addionally look for a fixed point $x=(p,X) ∈ \mathrm{T}\mathcal M$ at
+Finally, we additionally look for a fixed point $x=(p,X) ∈ \mathrm{T}\mathcal M$ at
 the gradient with respect to $t∈\mathbb R^n$, i.e. the second component, which is given by

 ``` math
diff --git a/docs/src/tutorials/HowToDebug.md b/docs/src/tutorials/HowToDebug.md
index fb5c1e3f6b..10b870cb10 100644
--- a/docs/src/tutorials/HowToDebug.md
+++ b/docs/src/tutorials/HowToDebug.md
@@ -74,7 +74,7 @@ There is two more advanced variants that can be used.
 The first is a tuple of a
 We can for example change the way the `:ϵ` is printed by adding a format string and use [`DebugCost`](@ref)`()` which is equivalent to using `:Cost`.
-Especially with the format change, the lines are more coniststent in length.
+Especially with the format change, the lines are more consistent in length.

 ``` julia
 p2 = exact_penalty_method(
diff --git a/docs/src/tutorials/InplaceGradient.md b/docs/src/tutorials/InplaceGradient.md
index 06b24bb1ab..e408fac188 100644
--- a/docs/src/tutorials/InplaceGradient.md
+++ b/docs/src/tutorials/InplaceGradient.md
@@ -1,7 +1,7 @@
 # Speedup using Inplace Evaluation
 Ronny Bergmann

-When it comes to time critital operations, a main ingredient in Julia is given by
+When it comes to time critical operations, a main ingredient in Julia is given by
 mutating functions, i.e. those that compute in place without additional memory allocations.
 In the following, we illustrate how to do this with `Manopt.jl`.
diff --git a/joss/paper.md b/joss/paper.md
index 4eac402e07..861cbb4249 100644
--- a/joss/paper.md
+++ b/joss/paper.md
@@ -108,7 +108,7 @@ since its norm is approximately `0.858`. But even projecting this back onto the
 In the following figure the data `pts` (teal) and the resulting mean (orange) as well as the projected Euclidean mean (small, cyan) are shown.

-![40 random points `pts` and the result from the gradient descent to compute the `x_mean` (orange) compared to a projection of their (Eucliean) mean onto the sphere (cyan).](src/img/MeanIllustr.png)
+![40 random points `pts` and the result from the gradient descent to compute the `x_mean` (orange) compared to a projection of their (Euclidean) mean onto the sphere (cyan).](src/img/MeanIllustr.png)

 In order to print the current iteration number, change, and cost every iteration as well as the stopping reason, you can provide a `debug` keyword with the corresponding symbols interleaved with strings. The Symbol `:Stop` indicates that the reason for stopping should be printed at the end. The last integer in this array specifies that debugging information should be printed only every $i$th iteration. While `:x` could be used to also print the current iterate, this usually takes up too much space.
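
As a usage illustration for the `debug` keyword described in that last hunk, a minimal sketch follows. It is not part of the patch above: the cost `f`, the gradient `grad_f`, and the random data `pts` are assumptions chosen only to make the call runnable, while the `debug` vector itself follows the description in the paper text (symbols interleaved with strings, `:Stop` for the stopping reason, and a trailing integer for the printing interval).

``` julia
# Minimal sketch of the `debug` keyword described above; the cost, gradient,
# and data below are assumptions, not taken from the patch.
using Manopt, Manifolds, Random

Random.seed!(42)
M = Sphere(2)
pts = [rand(M) for _ in 1:40]   # 40 random points on the sphere

# Riemannian mean: cost and Riemannian gradient, written with the log map
f(M, p) = sum(distance(M, p, q)^2 for q in pts) / (2 * length(pts))
grad_f(M, p) = -sum(log(M, p, q) for q in pts) / length(pts)

# Symbols and strings are interleaved; `:Stop` prints the stopping reason at
# the end, and the trailing `10` prints a debug line only every 10th iteration.
x_mean = gradient_descent(M, f, grad_f, first(pts);
    debug=[:Iteration, " | ", :Change, " | ", :Cost, "\n", :Stop, 10],
)
```

With this, every tenth iteration prints its number, the last change, and the current cost, and the stopping reason is reported once the solver finishes.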