diff --git a/.github/workflows/documenter.yml b/.github/workflows/documenter.yml
index 2ec4cc523a..51527538a3 100644
--- a/.github/workflows/documenter.yml
+++ b/.github/workflows/documenter.yml
@@ -53,11 +53,4 @@ jobs:
         run: "docs/make.jl --quarto --prettyurls"
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }}
-      - name: "vale.sh spell check"
-        uses: errata-ai/vale-action@reviewdog
-        with:
-          files: docs/src
-          fail_on_error: true
-          filter_mode: nofilter
-          vale_flags: "--config=docs/.vale.ini"
+          DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }}
\ No newline at end of file
diff --git a/Changelog.md b/Changelog.md
index 520f1dce95..3425f1763d 100644
--- a/Changelog.md
+++ b/Changelog.md
@@ -5,11 +5,62 @@ All notable Changes to the Julia package `Manopt.jl` will be documented in this
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
-## [unreleased]
+## [0.4.44] December 12, 2023
+
+Formally one could consider this version breaking, since a few functions
+have been moved that were used in example scripts in earlier versions (0.3.x).
+These examples are now available again within [ManoptExamples.jl](https://juliamanifolds.github.io/ManoptExamples.jl/stable/), and with their
+“reappearance” the corresponding costs, gradients, differentials, adjoint differentials, and proximal maps
+have been moved there as well.
+This is nevertheless not considered breaking, since the functions were only used in the old, removed examples.
+We still document each of the moved functions below. They have been partly renamed,
+and their documentation and testing have been extended.
 
 ### Changed
 
 * Bumped and added dependencies on all 3 Project.toml files, the main one, the docs/, and the tutorials/ one.
+* `artificial_S2_lemniscate` is available as [`ManoptExamples.Lemniscate`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.Lemniscate-Tuple{Number}) – and works on arbitrary manifolds now.
+* `artificial_S1_signal` is available as [`ManoptExamples.artificial_S1_signal`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificial_S1_signal)
+* `artificial_S1_slope_signal` is available as [`ManoptExamples.artificial_S1_slope_signal`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificial_S1_slope_signal)
+* `artificial_S2_composite_bezier_curve` is available as [`ManoptExamples.artificial_S2_composite_Bezier_curve`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificial_S2_composite_Bezier_curve-Tuple{})
+* `artificial_S2_rotation_image` is available as [`ManoptExamples.artificial_S2_rotation_image`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificial_S2_rotation_image)
+* `artificial_S2_whirl_image` is available as [`ManoptExamples.artificial_S2_whirl_image`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificial_S2_whirl_image)
+* `artificial_S2_whirl_patch` is available as [`ManoptExamples.artificial_S2_whirl_patch`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificial_S2_whirl_patch)
+* `artificialIn_SAR_image` is available as [`ManoptExamples.artificialIn_SAR_image`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificialIn_SAR_image-Tuple{Integer})
+* `artificial_SPD_image` is available as [`ManoptExamples.artificial_SPD_image`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificial_SPD_image)
+* `artificial_SPD_image2` is available as [`ManoptExamples.artificial_SPD_image2`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/data/#ManoptExamples.artificial_SPD_image2)
+* `adjoint_differential_forward_logs` is available as [`ManoptExamples.adjoint_differential_forward_logs`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.adjoint_differential_forward_logs-Union{Tuple{TPR},%20Tuple{TSize},%20Tuple{TM},%20Tuple{𝔽},%20Tuple{ManifoldsBase.PowerManifold{𝔽,%20TM,%20TSize,%20TPR},%20Any,%20Any}}%20where%20{𝔽,%20TM,%20TSize,%20TPR})
+* `adjoint_differential_bezier_control` is available as [`ManoptExamples.adjoint_differential_Bezier_control_points`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.adjoint_differential_Bezier_control_points-Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector{%3C:ManoptExamples.BezierSegment},%20AbstractVector,%20AbstractVector})
+* `BezierSegment` is available as [`ManoptExamples.BezierSegment`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.BezierSegment)
+* `cost_acceleration_bezier` is available as [`ManoptExamples.acceleration_Bezier`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.acceleration_Bezier-Union{Tuple{P},%20Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector{P},%20AbstractVector{%3C:Integer},%20AbstractVector{%3C:AbstractFloat}}}%20where%20P)
+* `cost_L2_acceleration_bezier` is available as [`ManoptExamples.L2_acceleration_Bezier`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.L2_acceleration_Bezier-Union{Tuple{P},%20Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector{P},%20AbstractVector{%3C:Integer},%20AbstractVector{%3C:AbstractFloat},%20AbstractFloat,%20AbstractVector{P}}}%20where%20P)
+* `costIntrICTV12` is available as [`ManoptExamples.Intrinsic_infimal_convolution_TV12`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.Intrinsic_infimal_convolution_TV12-Tuple{ManifoldsBase.AbstractManifold,%20Vararg{Any,%205}})
+* `costL2TV` is available as [`ManoptExamples.L2_Total_Variation`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.L2_Total_Variation-NTuple{4,%20Any})
+* `costL2TV12` is available as [`ManoptExamples.L2_Total_Variation_1_2`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.L2_Total_Variation_1_2-Tuple{ManifoldsBase.PowerManifold,%20Vararg{Any,%204}})
+* `costL2TV2` is available as [`ManoptExamples.L2_second_order_Total_Variation`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.L2_second_order_Total_Variation-Tuple{ManifoldsBase.PowerManifold,%20Any,%20Any,%20Any})
+* `costTV` is available as [`ManoptExamples.Total_Variation`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.Total_Variation)
+* `costTV2` is available as [`ManoptExamples.second_order_Total_Variation`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.second_order_Total_Variation)
+* `de_casteljau` is available as [`ManoptExamples.de_Casteljau`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.de_Casteljau-Tuple{ManifoldsBase.AbstractManifold,%20Vararg{Any}})
+* `differential_forward_logs` is available as [`ManoptExamples.differential_forward_logs`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.differential_forward_logs-Tuple{ManifoldsBase.PowerManifold,%20Any,%20Any})
+* `differential_bezier_control` is available as [`ManoptExamples.differential_Bezier_control_points`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.differential_Bezier_control_points-Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector{%3C:ManoptExamples.BezierSegment},%20AbstractVector,%20AbstractVector{%3C:ManoptExamples.BezierSegment}})
+* `forward_logs` is available as [`ManoptExamples.forward_logs`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.forward_logs-Union{Tuple{TPR},%20Tuple{TSize},%20Tuple{TM},%20Tuple{𝔽},%20Tuple{ManifoldsBase.PowerManifold{𝔽,%20TM,%20TSize,%20TPR},%20Any}}%20where%20{𝔽,%20TM,%20TSize,%20TPR})
+* `get_bezier_degree` is available as [`ManoptExamples.get_Bezier_degree`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.get_Bezier_degree-Tuple{ManifoldsBase.AbstractManifold,%20ManoptExamples.BezierSegment})
+* `get_bezier_degrees` is available as [`ManoptExamples.get_Bezier_degrees`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.get_Bezier_degrees-Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector{%3C:ManoptExamples.BezierSegment}})
+* `get_bezier_inner_points` is available as [`ManoptExamples.get_Bezier_inner_points`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.get_Bezier_inner_points-Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector{%3C:ManoptExamples.BezierSegment}})
+* `get_bezier_junction_tangent_vectors` is available as [`ManoptExamples.get_Bezier_junction_tangent_vectors`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.get_Bezier_junction_tangent_vectors-Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector{%3C:ManoptExamples.BezierSegment}})
+* `get_bezier_junctions` is available as [`ManoptExamples.get_Bezier_junctions`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.get_Bezier_junctions)
+* `get_bezier_points` is available as [`ManoptExamples.get_Bezier_points`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.get_Bezier_points)
+* `get_bezier_segments` is available as [`ManoptExamples.get_Bezier_segments`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.get_Bezier_segments-Union{Tuple{P},%20Tuple{ManifoldsBase.AbstractManifold,%20Vector{P},%20Any},%20Tuple{ManifoldsBase.AbstractManifold,%20Vector{P},%20Any,%20Symbol}}%20where%20P)
+* `grad_acceleration_bezier` is available as [`ManoptExamples.grad_acceleration_Bezier`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.grad_acceleration_Bezier-Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector,%20AbstractVector{%3C:Integer},%20AbstractVector})
+* `grad_L2_acceleration_bezier` is available as [`ManoptExamples.grad_L2_acceleration_Bezier`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.grad_L2_acceleration_Bezier-Union{Tuple{P},%20Tuple{ManifoldsBase.AbstractManifold,%20AbstractVector{P},%20AbstractVector{%3C:Integer},%20AbstractVector,%20Any,%20AbstractVector{P}}}%20where%20P)
+* `grad_intrinsic_infimal_convolution_TV12` is available as [`ManoptExamples.grad_intrinsic_infimal_convolution_TV12`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.grad_intrinsic_infimal_convolution_TV12-Tuple{ManifoldsBase.AbstractManifold,%20Vararg{Any,%205}})
+* `grad_TV` is available as [`ManoptExamples.grad_Total_Variation`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.grad_Total_Variation)
+* `project_collaborative_TV` is available as [`ManoptExamples.project_collaborative_TV`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.project_collaborative_TV)
+* `prox_parallel_TV` is available as [`ManoptExamples.prox_parallel_TV`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.prox_parallel_TV)
+* `grad_TV2` is available as [`ManoptExamples.grad_second_order_Total_Variation`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.grad_second_order_Total_Variation)
+* `prox_TV` is available as [`ManoptExamples.prox_Total_Variation`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.prox_Total_Variation)
+* `prox_TV2` is available as [`ManoptExamples.prox_second_order_Total_Variation`](https://juliamanifolds.github.io/ManoptExamples.jl/stable/objectives/#ManoptExamples.prox_second_order_Total_Variation-Union{Tuple{T},%20Tuple{ManifoldsBase.AbstractManifold,%20Any,%20Tuple{T,%20T,%20T}},%20Tuple{ManifoldsBase.AbstractManifold,%20Any,%20Tuple{T,%20T,%20T},%20Int64}}%20where%20T)
 
 ## [0.4.43] – November 19, 2023
 
@@ -137,7 +188,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ### Added
 
-* The `AdaptiveWNGrad` stepsize is now available as a new stepsize functor.
+* The `AdaptiveWNGrad` stepsize is available as a new stepsize functor.
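For the functions moved to ManoptExamples.jl in the 0.4.44 entry above, migrating existing scripts amounts to loading that package and switching to the new names. The following is a minimal, hypothetical sketch only; the call signatures are assumed to match the former Manopt.jl ones and should be checked against the linked ManoptExamples documentation.

```julia
# Hypothetical migration sketch for the renames listed in the 0.4.44 entry above.
# Signatures are assumed to match the former Manopt.jl ones; check the linked docs.
using Manifolds, ManoptExamples

img = ManoptExamples.artificial_S2_whirl_image(32)           # was Manopt.artificial_S2_whirl_image(32)
M = PowerManifold(Sphere(2), NestedPowerRepresentation(), size(img)...)
f(N, p) = ManoptExamples.L2_Total_Variation(N, img, 0.5, p)  # was costL2TV(N, img, 0.5, p)
f(M, img)  # evaluate the L2 data term plus total variation at the data itself
```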
### Fixed diff --git a/Project.toml b/Project.toml index 758df65858..f616d432a1 100644 --- a/Project.toml +++ b/Project.toml @@ -1,7 +1,7 @@ name = "Manopt" uuid = "0fc0a36d-df90-57f3-8f93-d78a9fc72bb5" authors = ["Ronny Bergmann "] -version = "0.4.43" +version = "0.4.44" [deps] ColorSchemes = "35d6a980-a343-548e-a6ea-1d62b119f2f4" @@ -47,6 +47,7 @@ LRUCache = "1.4" ManifoldDiff = "0.3.8" Manifolds = "0.9" ManifoldsBase = "0.15" +ManoptExamples = "0.1.4" Markdown = "1.6" PolynomialRoots = "1" Printf = "1.6" @@ -62,9 +63,11 @@ ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210" JuMP = "4076af6c-e467-56ae-b986-b466b2749572" LRUCache = "8ac3fa9e-de4c-5943-b1dc-09c6b5f20637" LineSearches = "d3d80556-e9d4-5f37-9878-2ab0fcc64255" +ManifoldDiff = "af67fdf4-a580-4b9f-bbec-742ef357defd" Manifolds = "1cead3c2-87b3-11e9-0ccd-23c62b72b94e" +ManoptExamples = "5b8d5e80-5788-45cb-83d6-5e8f1484217d" Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80" Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40" [targets] -test = ["Test", "ForwardDiff", "JuMP", "Manifolds", "Plots", "LineSearches", "LRUCache"] +test = ["Test", "ForwardDiff", "JuMP", "Manifolds", "ManoptExamples", "ManifoldDiff", "Plots", "LineSearches", "LRUCache"] diff --git a/docs/Project.toml b/docs/Project.toml index 6d614570bd..424afa482a 100644 --- a/docs/Project.toml +++ b/docs/Project.toml @@ -22,8 +22,8 @@ Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c" [compat] BenchmarkTools = "1.3" -Colors = "0.12" CSV = "0.10" +Colors = "0.12" CondaPkg = "0.2" DataFrames = "1" Documenter = "1" @@ -32,9 +32,9 @@ FiniteDifferences = "0.12" Images = "0.26" JLD2 = "0.4" JuMP = "1" +LRUCache = "1" LineSearches = "7" Literate = "2" -LRUCache = "1" Manifolds = "0.8.81, 0.9" ManifoldsBase = "0.14.12, 0.15" Manopt = "0.4" diff --git a/docs/make.jl b/docs/make.jl index 1bc8163c57..375e8523e3 100755 --- a/docs/make.jl +++ b/docs/make.jl @@ -10,12 +10,12 @@ docs/make.jl Render the `Manopt.jl` documenation with optinal arguments Arguments -* `--exclude-docs` - exclude the tutorials from the menu of Documenter, +* `--exclude-tutorials` - exclude the tutorials from the menu of Documenter, this can be used if you do not have Quarto installed to still be able to render the docs locally on this machine. This option should not be set on CI. -* `--help` - print this help and exit without rendering the documentation -* `--prettyurls` – toggle the prettyurls part to true (which is otherwise only true on CI) -* `--quarto` – run the Quarto notebooks from the `tutorials/` folder before generating the documentation +* `--help` - print this help and exit without rendering the documentation +* `--prettyurls` – toggle the prettyurls part to true (which is otherwise only true on CI) +* `--quarto` – run the Quarto notebooks from the `tutorials/` folder before generating the documentation this has to be run locally at least once for the `tutorials/*.md` files to exist that are included in the documentation (see `--exclude-tutorials`) for the alternative. If they are generated ones they are cached accordingly. @@ -91,7 +91,7 @@ for (md_file, doc_file) in end end -## Build titorials menu +## Build tutorials menu tutorials_menu = "How to..." => [ "🏔️ Get started: optimize." 
=> "tutorials/Optimize.md", @@ -112,6 +112,8 @@ makedocs(; format=Documenter.HTML(; prettyurls=(get(ENV, "CI", nothing) == "true") || ("--prettyurls" ∈ ARGS), assets=["assets/favicon.ico", "assets/citations.css"], + size_threshold_warn=200 * 2^10, # raise slightly from 100 to 200 KiB + size_threshold=300 * 2^10, # raise slightly 200 to to 300 KiB ), modules=[ Manopt, @@ -180,22 +182,7 @@ makedocs(; "Debug Output" => "plans/debug.md", "Recording values" => "plans/record.md", ], - "Functions" => [ - "Introduction" => "functions/index.md", - "Bézier curves" => "functions/bezier.md", - "Cost functions" => "functions/costs.md", - "Differentials" => "functions/differentials.md", - "Adjoint Differentials" => "functions/adjoint_differentials.md", - "Gradients" => "functions/gradients.md", - "Proximal Maps" => "functions/proximal_maps.md", - "Specific Manifold Functions" => "functions/manifold.md", - ], - "Helpers" => [ - "Checks" => "helpers/checks.md", - "Data" => "helpers/data.md", - "Error Measures" => "helpers/errorMeasures.md", - "Exports" => "helpers/exports.md", - ], + "Helpers" => ["Checks" => "helpers/checks.md", "Exports" => "helpers/exports.md"], "Contributing to Manopt.jl" => "contributing.md", "Extensions" => "extensions.md", "Notation" => "notation.md", diff --git a/docs/src/functions/adjoint_differentials.md b/docs/src/functions/adjoint_differentials.md deleted file mode 100644 index 19eee1e133..0000000000 --- a/docs/src/functions/adjoint_differentials.md +++ /dev/null @@ -1,13 +0,0 @@ -# [Adjoint differentials](@id adjointDifferentialFunctions) - -```@autodocs -Modules = [Manopt] -Pages = ["adjoint_differentials.jl"] -``` - -# Literature - -```@bibliography -Pages = ["adjoint_differentials.md"] -Canonical=false -``` \ No newline at end of file diff --git a/docs/src/functions/bezier.md b/docs/src/functions/bezier.md deleted file mode 100644 index 229215fb38..0000000000 --- a/docs/src/functions/bezier.md +++ /dev/null @@ -1,13 +0,0 @@ -# [Bézier curves](@id BezierCurves) - -```@autodocs -Modules = [Manopt] -Pages = ["bezier_curves.jl"] -``` - -# Literature - -```@bibliography -Pages = ["bezier.md"] -Canonical=false -``` \ No newline at end of file diff --git a/docs/src/functions/costs.md b/docs/src/functions/costs.md deleted file mode 100644 index ec12bb0ca0..0000000000 --- a/docs/src/functions/costs.md +++ /dev/null @@ -1,8 +0,0 @@ -# [Cost functions](@id CostFunctions) - -The following cost functions are available - -```@autodocs -Modules = [Manopt] -Pages = ["costs.jl"] -``` diff --git a/docs/src/functions/differentials.md b/docs/src/functions/differentials.md deleted file mode 100644 index 3e4d6a5606..0000000000 --- a/docs/src/functions/differentials.md +++ /dev/null @@ -1,6 +0,0 @@ -# [Differentials](@id DifferentialFunctions) - -```@autodocs -Modules = [Manopt] -Pages = ["functions/differentials.jl"] -``` diff --git a/docs/src/functions/gradients.md b/docs/src/functions/gradients.md deleted file mode 100644 index 245dc17a20..0000000000 --- a/docs/src/functions/gradients.md +++ /dev/null @@ -1,24 +0,0 @@ -# [Gradients](@id GradientFunctions) - -For a function ``f:\mathcal M→ℝ`` -the Riemannian gradient ``\operatorname{grad}f(x)`` at ``x∈\mathcal M`` -is given by the unique tangent vector fulfilling -``\langle \operatorname{grad}f(x), ξ\rangle_x = D_xf[ξ],\quad -\forall ξ ∈ T_x\mathcal M,`` -where ``D_xf[ξ]`` denotes the differential of ``f`` at ``x`` with respect to -the tangent direction (vector) ``ξ`` or in other words the directional -derivative. 
- -This page collects the available gradients. - -```@autodocs -Modules = [Manopt] -Pages = ["gradients.jl"] -``` - -# Literature - -```@bibliography -Pages = ["gradients.md"] -Canonical=false -``` \ No newline at end of file diff --git a/docs/src/functions/index.md b/docs/src/functions/index.md deleted file mode 100644 index 4220b52aa9..0000000000 --- a/docs/src/functions/index.md +++ /dev/null @@ -1,8 +0,0 @@ -# Functions - -There are several functions required within optimization, most prominently -[cost functions](@ref CostFunctions) and [gradients](@ref GradientFunctions). This package includes -several cost functions and corresponding gradients, but also corresponding -[proximal maps](@ref proximalMapFunctions) for variational methods -manifold-valued data. Most of these functions require the evaluation of -[Differential](@ref DifferentialFunctions)s or their [adjoints](@ref adjointDifferentialFunctions)s. \ No newline at end of file diff --git a/docs/src/functions/manifold.md b/docs/src/functions/manifold.md deleted file mode 100644 index 0f46f9116e..0000000000 --- a/docs/src/functions/manifold.md +++ /dev/null @@ -1,8 +0,0 @@ -# Specific manifold functions - -This small section extends the functions available from [ManifoldsBase.jl](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/) and [Manifolds.jl](https://juliamanifolds.github.io/Manifolds.jl/stable/), especially a few random generators, that are simpler than the available functions. - -```@autodocs -Modules = [Manopt] -Pages = ["manifold_functions.jl"] -``` diff --git a/docs/src/functions/proximal_maps.md b/docs/src/functions/proximal_maps.md deleted file mode 100644 index 58c51ee4ca..0000000000 --- a/docs/src/functions/proximal_maps.md +++ /dev/null @@ -1,34 +0,0 @@ -# [Proximal maps](@id proximalMapFunctions) - -For a function ``\varphi:\mathcal M →ℝ`` the proximal map is defined -as - -```math -\displaystyle\operatorname{prox}_{λ\varphi}(x) -= \operatorname*{argmin}_{y ∈ \mathcal M} d_{\mathcal M}^2(x,y) + λ\varphi(y), -\quad λ > 0, -``` - -where ``d_{\mathcal M}: \mathcal M \times \mathcal M → ℝ`` denotes -the geodesic distance on ``\mathcal M``. While it might still be difficult to -compute the minimizer, there are several proximal maps known (locally) in closed -form. Furthermore if ``x^{\star} ∈ \mathcal M`` is a minimizer of ``\varphi``, then - -```math -\operatorname{prox}_{λ\varphi}(x^\star) = x^\star. -``` - -This page lists all proximal maps available within Manopt. To add you own, just -extend the `functions/proximal_maps.jl` file. - -```@autodocs -Modules = [Manopt] -Pages = ["proximal_maps.jl"] -``` - -# Literature - -```@bibliography -Pages = ["proximal_maps.md"] -Canonical=false -``` \ No newline at end of file diff --git a/docs/src/helpers/data.md b/docs/src/helpers/data.md deleted file mode 100644 index 15e23d0a8e..0000000000 --- a/docs/src/helpers/data.md +++ /dev/null @@ -1,18 +0,0 @@ - -# [Data](@id Data) - -For some manifolds there are artificial or real application data available -that can be loaded using the following data functions. Note that these need -additionally [`Manifolds.jl`](https://juliamanifolds.github.io/Manifolds.jl/latest/) to be loaded. 
- -```@autodocs -Modules = [Manopt] -Pages = ["artificialDataFunctions.jl"] -``` - -# Literature - -```@bibliography -Pages = ["data.md"] -Canonical=false -``` \ No newline at end of file diff --git a/docs/src/helpers/errorMeasures.md b/docs/src/helpers/errorMeasures.md deleted file mode 100644 index af7b6ca2c1..0000000000 --- a/docs/src/helpers/errorMeasures.md +++ /dev/null @@ -1,6 +0,0 @@ -# [Error Measures](@id ErrorMeasures) - -```@docs -meanSquaredError -meanAverageError -``` diff --git a/docs/src/index.md b/docs/src/index.md index cd40716fa8..167f3dd36f 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -73,10 +73,6 @@ This project is build upon [ManifoldsBase.jl](https://juliamanifolds.github.io/M The notation in the documentation aims to follow the same [notation](https://juliamanifolds.github.io/Manifolds.jl/stable/misc/notation.html) from these packages. -### Functions on manifolds - -Several functions are available, implemented on an arbitrary manifold, [cost functions](@ref CostFunctions), [differentials](@ref DifferentialFunctions) and their [adjoints](@ref adjointDifferentialFunctions), and [gradients](@ref GradientFunctions) as well as [proximal maps](@ref proximalMapFunctions). - ### Visualization To visualize and interpret results, `Manopt.jl` aims to provide both easy plot functions as well as [exports](@ref Exports). Furthermore a system to get [debug](@ref DebugSection) during the iterations of an algorithms as well as [record](@ref RecordSection) capabilities, for example to record a specified tuple of values per iteration, most prominently [`RecordCost`](@ref) and diff --git a/docs/src/references.bib b/docs/src/references.bib index 588a65f25d..14af657d5a 100644 --- a/docs/src/references.bib +++ b/docs/src/references.bib @@ -162,29 +162,32 @@ @article{BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez:2021 } @article{BergmannLausSteidlWeinmann:2014:1, - AUTHOR = {Bergmann, Ronny and Laus, Friederike and Steidl, Gabriele and Weinmann, Andreas}, - DOI = {10.1137/140969993}, - JOURNAL = {SIAM Journal on Imaging Sciences}, - NUMBER = {4}, - PAGES = {2916--2953}, - TITLE = {Second order differences of cyclic data and applications in variational denoising}, - VOLUME = {7}, - YEAR = {2014}, - NOTE = {arxiv: [1405.5349](https://arxiv.org/abs/1405.5349)} + AUTHOR = {Bergmann, Ronny and Laus, Friederike and Steidl, Gabriele and Weinmann, Andreas}, + EPRINT = {1405.5349}, + EPRINTTYPE = {arXiv}, + DOI = {10.1137/140969993}, + JOURNAL = {SIAM Journal on Imaging Sciences}, + NUMBER = {4}, + PAGES = {2916--2953}, + TITLE = {Second order differences of cyclic data and applications in variational denoising}, + VOLUME = {7}, + YEAR = {2014} } @article{BergmannPerschSteidl:2016, - AUTHOR = {Bergmann, Ronny and Persch, Johannes and Steidl, Gabriele}, - YEAR = {2016}, - DOI = {10.1137/15M1052858}, - JOURNAL = {SIAM Journal on Imaging Sciences}, - NUMBER = {4}, - PAGES = {901--937}, - TITLE = {A parallel Douglas Rachford algorithm for minimizing ROF-like functionals on images with values in symmetric Hadamard manifolds}, - VOLUME = {9}, - NOTE = {arxiv: [1512.02814](https://arxiv.org/abs/1512.02814)} + AUTHOR = {Bergmann, Ronny and Persch, Johannes and Steidl, Gabriele}, + DOI = {10.1137/15M1052858}, + EPRINT = {1512.02814}, + EPRINTTYPE = {arXiv}, + JOURNAL = {SIAM Journal on Imaging Sciences}, + NUMBER = {4}, + PAGES = {901--937}, + TITLE = {A parallel Douglas Rachford algorithm for minimizing ROF-like functionals on images with values in symmetric Hadamard manifolds}, + VOLUME = 
{9}, + YEAR = {2016} } + @incollection{BorckmansIshtevaAbsil:2010, DOI = {10.1007/978-3-642-15461-4_2}, YEAR = {2010}, diff --git a/docs/src/solvers/DouglasRachford.md b/docs/src/solvers/DouglasRachford.md index c503d1c2e3..3f78fe84a9 100644 --- a/docs/src/solvers/DouglasRachford.md +++ b/docs/src/solvers/DouglasRachford.md @@ -62,6 +62,12 @@ DouglasRachfordState For specific [`DebugAction`](@ref)s and [`RecordAction`](@ref)s see also [Cyclic Proximal Point](@ref CPPSolver). +Furthermore, this solver has a short hand notation for the involved [`reflect`](@ref)ion. + +```@docs +reflect +``` + ## [Technical details](@id sec-dr-technical-details) The [`DouglasRachford`](@ref) solver requires the following functions of a manifold to be available diff --git a/docs/src/solvers/cyclic_proximal_point.md b/docs/src/solvers/cyclic_proximal_point.md index 73fcd55356..54e8c3b467 100644 --- a/docs/src/solvers/cyclic_proximal_point.md +++ b/docs/src/solvers/cyclic_proximal_point.md @@ -6,7 +6,7 @@ The Cyclic Proximal Point (CPP) algorithm aims to minimize F(x) = \sum_{i=1}^c f_i(x) ``` -assuming that the [proximal maps](@ref proximalMapFunctions) ``\operatorname{prox}_{λ f_i}(x)`` +assuming that the proximal maps ``\operatorname{prox}_{λ f_i}(x)`` are given in closed form or can be computed efficiently (at least approximately). The algorithm then cycles through these proximal maps, where the type of cycle diff --git a/ext/ManoptManifoldsExt/ManoptManifoldsExt.jl b/ext/ManoptManifoldsExt/ManoptManifoldsExt.jl index 9582aac4fe..3bf0a52688 100644 --- a/ext/ManoptManifoldsExt/ManoptManifoldsExt.jl +++ b/ext/ManoptManifoldsExt/ManoptManifoldsExt.jl @@ -4,8 +4,6 @@ using ManifoldsBase: exp, log, ParallelTransport, vector_transport_to using Manopt import Manopt: max_stepsize, - prox_TV2, - grad_TV2, alternating_gradient_descent, alternating_gradient_descent!, get_gradient, @@ -26,8 +24,6 @@ end const NONMUTATINGMANIFOLDS = Union{Circle,PositiveNumbers,Euclidean{Tuple{}}} include("manifold_functions.jl") -include("nonmutating_manifolds_functions.jl") -include("artificialDataFunctionsManifolds.jl") include("ChambollePockManifolds.jl") include("alternating_gradient.jl") end diff --git a/ext/ManoptManifoldsExt/artificialDataFunctionsManifolds.jl b/ext/ManoptManifoldsExt/artificialDataFunctionsManifolds.jl deleted file mode 100644 index d1331eb42a..0000000000 --- a/ext/ManoptManifoldsExt/artificialDataFunctionsManifolds.jl +++ /dev/null @@ -1,143 +0,0 @@ - -function Manopt.artificial_S2_composite_bezier_curve() - M = Sphere(2) - d0 = [0.0, 0.0, 1.0] - d1 = [0.0, -1.0, 0.0] - d2 = [-1.0, 0.0, 0.0] - d3 = [0.0, 0.0, -1.0] - # - # control points - where b1- and b2- are constructed by the C1 condition - # - # We define three segments: 1 - b00 = d0 # also known as p0 - ξ0 = π / (8.0 * sqrt(2.0)) .* [1.0, -1.0, 0.0] # staring direction from d0 - b01 = exp(M, d0, ξ0) # b0+ - ξ1 = π / (4.0 * sqrt(2)) .* [1.0, 0.0, 1.0] - # b02 or b1- and b11 or b1+ are constructed by this vector with opposing sign - # to achieve a C1 curve - b02 = exp(M, d1, ξ1) - b03 = d1 - # 2 - b10 = d1 - b11 = exp(M, d1, -ξ1) # yields c1 condition - ξ2 = -π / (4 * sqrt(2)) .* [0.0, 1.0, -1.0] - b12 = exp(M, d2, ξ2) - b13 = d2 - # 3 - b20 = d2 - b21 = exp(M, d2, -ξ2) - ξ3 = π / (8.0 * sqrt(2)) .* [-1.0, 1.0, 0.0] - b22 = exp(M, d3, ξ3) - b23 = d3 - # hence the matrix of controlpoints for the curve reads - return [ - BezierSegment([b00, b01, b02, b03]), - BezierSegment([b10, b11, b12, b13]), - BezierSegment([b20, b21, b22, b23]), - ] -end - -function 
Manopt.artificial_S2_rotation_image( - pts::Int=64, rotations::Tuple{Float64,Float64}=(0.5, 0.5) -) - M = Sphere(2) - img = fill(zeros(3), pts, pts) - north = [1.0, 0.0, 0.0] - Rxy(a) = [cos(a) -sin(a) 0.0; sin(a) cos(a) 0.0; 0.0 0.0 1] - Rxz(a) = [cos(a) 0.0 -sin(a); 0.0 1.0 0.0; sin(a) 0.0 cos(a)] - for i in 1:pts - for j in 1:pts - x = i / pts * 2π * rotations[1] - y = j / pts * 2π * rotations[2] - img[i, j] = Rxy(x + y) * Rxz(x - y) * [0, 0, 1] - end - end - return img -end - -function Manopt.artificial_S2_lemniscate(p, t::Float64, a::Float64=π / 2.0) - M = Sphere(2) - tP = 2.0 * Float64(p[1] >= 0.0) - 1.0 # Take north or south pole - base = [0.0, 0.0, tP] - xc = a * (cos(t) / (sin(t)^2 + 1.0)) - yc = a * (cos(t) * sin(t) / (sin(t)^2 + 1.0)) - tV = vector_transport_to(M, base, [xc, yc, 0.0], p, ParallelTransport()) - return exp(M, p, tV) -end - -function Manopt.artificial_S2_whirl_image(pts::Int=64) - M = Sphere(2) - img = artificial_S2_rotation_image(pts, (0.5, 0.5)) - # Set WhirlPatches - sc = pts / 64 - patchSizes = floor.(sc .* [9, 9, 9, 9, 11, 11, 11, 15, 15, 15, 17, 21]) - patchCenters = - Integer.( - floor.( - sc .* - [[35, 7] [25, 41] [32, 25] [7, 60] [10, 5] [41, 58] [11, 41] [23, 56] [ - 38, 45 - ] [16, 28] [55, 42] [51, 16]], - ), - ) - patchSigns = [1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1] - for i in 1:length(patchSizes) - pS = Integer(patchSizes[i]) - pSH = Integer(floor((patchSizes[i] - 1) / 2)) - pC = patchCenters[:, i] - pC = max.(Ref(1), pC .- pS) .+ pS - pSgn = patchSigns[i] - s = pS % 2 == 0 ? 1 : 0 - r = [pC[1] .+ ((-pSH):(pSH + s)), pC[2] .+ ((-pSH):(pSH + s))] - patch = artificial_S2_whirl_patch(pS) - if pSgn == -1 # opposite ? - patch = -patch - end - img[r...] = patch - end - return img -end - -function Manopt.artificial_SPD_image2(pts=64, fraction=0.66) - Zl = 4.0 * Matrix{Float64}(I, 3, 3) - # create a first matrix - α = 2.0 * π / 3 - β = π / 3 - B = [1.0 0.0 0.0; 0.0 cos(β) -sin(β); 0.0 sin(β) cos(β)] - A = [cos(α) -sin(α) 0.0; sin(α) cos(α) 0.0; 0.0 0.0 1.0] - Zo = Matrix(Symmetric(A * B * Diagonal([2.0, 4.0, 8.0]) * B' * A')) - # create a second matrix - α = -4.0 * π / 3 - β = -π / 3 - B = [1.0 0.0 0.0; 0.0 cos(β) -sin(β); 0.0 sin(β) cos(β)] - A = [cos(α) -sin(α) 0.0; sin(α) cos(α) 0.0; 0.0 0.0 1.0] - Zt = A * B * Diagonal([8.0 / sqrt(2.0), 8.0, sqrt(2.0)]) * B' * A' - data = fill(Matrix{Float64}(I, 3, 3), pts, pts) - M = SymmetricPositiveDefinite(3) - for row in 1:pts - for col in 1:pts - # (a) from Zo a part to Zt - C = Zo - if (row > 1) # in X direction - C = exp( - M, - C, - log(M, C, Zt), - (row - 1) / (2 * (pts - 1)) + ((row > fraction * pts) ? 1 / 2 : 0.0), - ) - end - if (col > 1) # and then in Y direction - C = exp( - M, - C, - vector_transport_to( - M, Symmetric(Zo), log(M, Zo, Zl), Symmetric(C), ParallelTransport() - ), - (col - 1.0) / (pts - 1), - ) - end - data[row, col] = C - end - end - return data -end diff --git a/ext/ManoptManifoldsExt/manifold_functions.jl b/ext/ManoptManifoldsExt/manifold_functions.jl index 1c49520c05..c022f5996f 100644 --- a/ext/ManoptManifoldsExt/manifold_functions.jl +++ b/ext/ManoptManifoldsExt/manifold_functions.jl @@ -58,22 +58,3 @@ function mid_point!(M::Sphere, y, p, q, x) y .= exp(M, p, 0.5 * X) return y end - -function prox_TV2(::Euclidean, λ, pointTuple::Tuple{T,T,T}, p::Int=1) where {T} - w = [1.0, -2.0, 1.0] - x = [pointTuple...] - if p == 1 # Example 3.2 in Bergmann, Laus, Steidl, Weinmann, 2014. 
- m = min.(Ref(λ), abs.(x .* w) / (dot(w, w))) - s = sign.(sum(x .* w)) - return x .- m .* s .* w - elseif p == 2 # Theorem 3.6 ibd. - t = λ * sum(x .* w) / (1 + λ * dot(w, w)) - return x .- t .* w - else - throw( - ErrorException( - "Proximal Map of TV2(Euclidean,λ,pT,p) not implemented for p=$(p) (requires p=1 or 2)", - ), - ) - end -end diff --git a/ext/ManoptManifoldsExt/nonmutating_manifolds_functions.jl b/ext/ManoptManifoldsExt/nonmutating_manifolds_functions.jl deleted file mode 100644 index 5c771bd51b..0000000000 --- a/ext/ManoptManifoldsExt/nonmutating_manifolds_functions.jl +++ /dev/null @@ -1,73 +0,0 @@ -function grad_TV2(M::NONMUTATINGMANIFOLDS, q, p::Int=1) - c = mid_point(M, q[1], q[3], q[2]) # nearest mid point of x and z to y - d = distance(M, q[2], c) - innerLog = -log(M, c, q[2]) - X = [zero_vector(M, q[i]) for i in 1:3] - if p == 2 - X[1] = adjoint_differential_shortest_geodesic_startpoint( - M, q[1], q[3], 1 / 2, innerLog - ) - X[2] = -log(M, q[2], c) - X[3] = adjoint_differential_shortest_geodesic_endpoint( - M, q[1], q[3], 1 / 2, innerLog - ) - else - if d > 0 # gradient case (subdifferential contains zero, see above) - X[1] = adjoint_differential_shortest_geodesic_startpoint( - M, q[1], q[3], 1 / 2, innerLog / (d^(2 - p)) - ) - X[2] = -log(M, q[2], c) / (d^(2 - p)) - X[3] = adjoint_differential_shortest_geodesic_endpoint( - M, q[1], q[3], 1 / 2, innerLog / (d^(2 - p)) - ) - end - end - return X -end -function prox_TV2(::NONMUTATINGMANIFOLDS, λ, pointTuple::Tuple{T,T,T}, p::Int=1) where {T} - w = [1.0, -2.0, 1.0] - x = [pointTuple...] - if p == 1 # Theorem 3.5 in Bergmann, Laus, Steidl, Weinmann, 2014. - sr_dot_xw = sym_rem(sum(x .* w)) - m = min(λ, abs(sr_dot_xw) / (dot(w, w))) - s = sign(sr_dot_xw) - return sym_rem.(x .- m .* s .* w) - elseif p == 2 # Theorem 3.6 ibd. - t = λ * sym_rem(sum(x .* w)) / (1 + λ * dot(w, w)) - return sym_rem.(x - t .* w) - else - throw( - ErrorException( - "Proximal Map of TV2(Circle,λ,pT,p) not implemented for p=$(p) (requires p=1 or 2)", - ), - ) - end -end -function prox_TV2(M::PowerManifold{𝔽,N}, λ, x, p::Int=1) where {𝔽,N<:NONMUTATINGMANIFOLDS} - power_size = power_dimensions(M) - R = CartesianIndices(power_size) - d = length(size(x)) - minInd = first(R).I - maxInd = last(R).I - y = deepcopy(x) - for k in 1:d # for all directions - ek = CartesianIndex(ntuple(i -> (i == k) ? 
1 : 0, d)) #k th unit vector - for l in 0:2 - for i in R # iterate over all pixel - if (i[k] % 3) == l - JForward = i.I .+ ek.I #i + e_k - JBackward = i.I .- ek.I # i - e_k - if all(JForward .<= maxInd) && all(JBackward .>= minInd) - (y[M, JBackward...], y[M, i.I...], y[M, JForward...]) = prox_TV2( - M.manifold, - λ, - (y[M, JBackward...], y[M, i.I...], y[M, JForward...]), - p, - ) - end - end # if mod 3 - end # i in R - end # for mod 3 - end # directions - return y -end diff --git a/src/Manopt.jl b/src/Manopt.jl index 71cd5e2f74..5aeacc80dc 100644 --- a/src/Manopt.jl +++ b/src/Manopt.jl @@ -20,20 +20,8 @@ using Dates: Millisecond, Nanosecond, Period, canonicalize, value using LinearAlgebra: Diagonal, I, eigen, eigvals, tril, Symmetric, dot, cholesky, eigmin, opnorm using ManifoldDiff: - adjoint_Jacobi_field, - adjoint_Jacobi_field!, - adjoint_differential_exp_argument, - adjoint_differential_exp_argument!, - adjoint_differential_exp_basepoint, - adjoint_differential_exp_basepoint!, adjoint_differential_log_argument, adjoint_differential_log_argument!, - adjoint_differential_log_basepoint, - adjoint_differential_log_basepoint!, - adjoint_differential_shortest_geodesic_endpoint, - adjoint_differential_shortest_geodesic_endpoint!, - adjoint_differential_shortest_geodesic_startpoint, - adjoint_differential_shortest_geodesic_startpoint!, differential_exp_argument, differential_exp_argument!, differential_exp_basepoint, @@ -141,14 +129,6 @@ using SparseArrays using Statistics: cor, cov, mean, std include("plans/plan.jl") -# Functions -include("functions/bezier_curves.jl") -include("functions/adjoint_differentials.jl") -include("functions/costs.jl") -include("functions/differentials.jl") -include("functions/gradients.jl") -include("functions/proximal_maps.jl") -include("functions/manifold_functions.jl") # solvers general framework include("solvers/solver.jl") # specific solvers @@ -177,10 +157,8 @@ include("solvers/subgradient.jl") include("solvers/debug_solver.jl") include("solvers/record_solver.jl") include("helpers/checks.jl") -include("helpers/errorMeasures.jl") include("helpers/exports/Asymptote.jl") include("helpers/LineSearchesTypes.jl") -include("data/artificialDataFunctions.jl") include("deprecated.jl") """ @@ -442,6 +420,8 @@ export ApproxHessianFiniteDifference, ApproxHessianSymmetricRankOne, ApproxHessi export update_hessian!, update_hessian_basis! export ExactPenaltyCost, ExactPenaltyGrad, AugmentedLagrangianCost, AugmentedLagrangianGrad export AdaptiveRagularizationWithCubicsModelObjective +export ExactPenaltyCost, ExactPenaltyGrad +export SmoothingTechnique, LinearQuadraticHuber, LogarithmicSumOfExponentials # # Stepsize export Stepsize @@ -473,65 +453,10 @@ export get_active_stopping_criteria, get_stopping_criteria, get_reason, get_stopping_criterion export update_stopping_criterion! # -# Data functions -export artificial_S1_signal, artificial_S1_slope_signal, artificialIn_SAR_image -export artificial_SPD_image, artificial_SPD_image2 -export artificial_S2_whirl_image, artificial_S2_whirl_patch -export artificial_S2_rotation_image -export artificial_S2_whirl_patch, artificial_S2_lemniscate -export artificial_S2_composite_bezier_curve -# # Exports export asymptote_export_S2_signals, asymptote_export_S2_data, asymptote_export_SPD export render_asymptote # -# Coeffs & Helpers for differentials -# -# Adjoint differentials -export adjoint_differential_forward_logs, adjoint_differential_forward_logs! -export adjoint_differential_bezier_control, adjoint_differential_bezier_control! 
-# -# Differentials -export differential_forward_logs, differential_forward_logs! -export differential_bezier_control, differential_bezier_control! -# -# Functions -export costL2TV, costL2TVTV2, costL2TV2, costTV, costTV2, costIntrICTV12 -export cost_L2_acceleration_bezier, cost_acceleration_bezier -export ExactPenaltyCost, ExactPenaltyGrad -export SmoothingTechnique, LinearQuadraticHuber, LogarithmicSumOfExponentials -# Gradients -export grad_TV, - grad_TV!, - grad_TV2, - grad_TV2!, - grad_intrinsic_infimal_convolution_TV12, - forward_logs, - forward_logs!, - grad_distance, - grad_distance!, - grad_acceleration_bezier, - grad_L2_acceleration_bezier -# Proximal maps -export prox_distance, prox_distance! -export prox_TV, prox_TV! -export prox_parallel_TV, prox_parallel_TV! -export prox_TV2, prox_TV2! -export project_collaborative_TV, project_collaborative_TV! -# Error measures -export meanSquaredError, meanAverageError -# -# Bézier -export BezierSegment, - de_casteljau, - get_bezier_degrees, - get_bezier_degree, - get_bezier_inner_points, - get_bezier_junction_tangent_vectors, - get_bezier_junctions, - get_bezier_points, - get_bezier_segments -# # Debugs export DebugSolverState, DebugAction, DebugGroup, DebugEntry, DebugEntryChange, DebugEvery export DebugChange, diff --git a/src/data/artificialDataFunctions.jl b/src/data/artificialDataFunctions.jl deleted file mode 100644 index 6534cf865c..0000000000 --- a/src/data/artificialDataFunctions.jl +++ /dev/null @@ -1,324 +0,0 @@ -# -# generate Artificial Data on several manifolds -# -# -# -@doc raw""" - artificialIn_SAR_image([pts=500]) -generate an artificial InSAR image, i.e. phase valued data, of size `pts` x -`pts` points. - -This data set was introduced for the numerical examples in [Bergmann et. al., SIAM J Imag Sci, 2014](@cite BergmannLausSteidlWeinmann:2014:1). -""" -function artificialIn_SAR_image(pts::Integer) - # variables - # rotation of ellipse - aEll = 35.0 - cosE = cosd(aEll) - sinE = sind(aEll) - aStep = 45.0 - cosA = cosd(aStep) - sinA = sind(aStep) - # main and minor axis of the ellipse - axes_inv = [6, 25] - # values for the hyperboloid - mid_point = [0.275; 0.275] - radius = 0.18 - values = [range(-0.5, 0.5; length=pts)...] - # Steps - aSteps = 60.0 - cosS = cosd(aSteps) - sinS = sind(aSteps) - l = 0.075 - midP = [-0.475, -0.0625]#.125, .55] - img = zeros(Float64, pts, pts) - for j in eachindex(values), i in eachindex(values) - # ellipse - Xr = cosE * values[i] - sinE * values[j] - Yr = cosE * values[j] + sinE * values[i] - v = axes_inv[1] * Xr^2 + axes_inv[2] * Yr^2 - k1 = v <= 1.0 ? 10.0 * pi * Yr : 0.0 - # circle - Xr = cosA * values[i] - sinA * values[j] - Yr = cosA * values[j] + sinA * values[i] - v = ((Xr - mid_point[1])^2 + (Yr - mid_point[2])^2) / radius^2 - k2 = v <= 1.0 ? 4.0 * pi * (1.0 - v) : 0.0 - # - Xr = cosS * values[i] - sinS * values[j] - Yr = cosS * values[j] + sinS * values[i] - k3 = 0.0 - for m in 1:8 - in_range = (abs(Xr + midP[1] + m * l) + abs(Yr + midP[2] + m * l)) ≤ l - k3 += in_range ? 2 * pi * (m / 8) : 0.0 - end - img[i, j] = mod(k1 + k2 + k3 - pi, 2 * pi) + pi - end - return img -end - -@doc raw""" - artificial_S1_slope_signal([pts=500, slope=4.]) - -Creates a Signal of (phase-valued) data represented on the -[`Circle`](hhttps://juliamanifolds.github.io/Manifolds.jl/latest/manifolds/circle.html) with increasing slope. - -# Optional -* `pts` – (`500`) number of points to sample the function. 
-* `slope` – (`4.0`) initial slope that gets increased afterwards - -This data set was introduced for the numerical examples in [Bergmann et. al., SIAM J Imag Sci, 2014](@cite BergmannLausSteidlWeinmann:2014:1) - - -""" -function artificial_S1_slope_signal(pts::Integer=500, slope::Float64=4.0) - t = range(0.0, 1.0; length=pts) - f = zero(t) - f[t .<= 1 / 6] .= -π / 2 .+ slope * π / 8 * t[t .<= 1 / 6] - # In the following terms, the first max - f[(1 / 6 .< t) .& (t .<= 1 / 3)] .= - max(f[f .!= 0]...) .- slope * π / 4 * 1 / 6 .+ - slope * π / 4 .* t[(1 / 6 .< t) .& (t .<= 1 / 3)] - f[(1 / 3 .< t) .& (t .<= 1 / 2)] .= - max(f[f .!= 0]...) .- slope * π / 2 * 1 / 3 .+ - slope * π / 2 * t[(1 / 3 .< t) .& (t .<= 1 / 2)] - f[(1 / 2 .< t) .& (t .<= 2 / 3)] .= - max(f[f .!= 0]...) .- slope * π * 1 / 2 .+ - slope * π * t[(1 / 2 .< t) .& (t .<= 2 / 3)] - f[(2 / 3 .< t) .& (t .<= 5 / 6)] .= - max(f[f .!= 0]...) .- slope * 2 * π * 2 / 3 .+ - slope * 2 * π * t[(2 / 3 .< t) .& (t .<= 5 / 6)] - f[5 / 6 .< t] .= - max(f[f .!= 0]...) .- slope * 4 * π * 5 / 6 .+ slope * 4 * π * t[5 / 6 .< t] - return mod.(f .+ Float64(π), Ref(2 * π)) .- Float64(π) -end - -@doc raw""" - artificial_S1_signal([pts=500]) - -generate a real-valued signal having piecewise constant, linear and quadratic -intervals with jumps in between. If the resulting manifold the data lives on, -is the [`Circle`](hhttps://juliamanifolds.github.io/Manifolds.jl/latest/manifolds/circle.html) -the data is also wrapped to ``[-\pi,\pi)``. This is data for an example from [Bergmann et. al., SIAM J Imag Sci, 2014](@cite BergmannLausSteidlWeinmann:2014:1). - -# Optional -* `pts` – (`500`) number of points to sample the function -""" -function artificial_S1_signal(pts::Integer=500) - t = range(0.0, 1.0; length=pts) - f = artificial_S1_signal.(t) - return mod.(f .+ Float64(π), Ref(2 * π)) .- Float64(π) -end -@doc raw""" - artificial_S1_signal(x) -evaluate the example signal ``f(x), x ∈ [0,1]``, -of phase-valued data introduces in Sec. 5.1 of [Bergmann et. al., SIAM J Imag Sci, 2014](@cite BergmannLausSteidlWeinmann:2014:1) -for values outside that interval, this Signal is `missing`. -""" -function artificial_S1_signal(x::Real) - if x < 0 - y = missing - elseif x <= 1 / 4 - y = -24 * π * (x - 1 / 4)^2 + 3 / 4 * π - elseif x <= 3 / 8 - y = 4 * π * x - π / 4 - elseif x <= 1 / 2 - y = -π * x - 3 * π / 8 - elseif x <= (3 * 0 + 19) / 32 - y = -(0 + 7) / 8 * π - elseif x <= (3 * 1 + 19) / 32 - y = -(1 + 7) / 8 * π - elseif x <= (3 * 2 + 19) / 32 - y = -(2 + 7) / 8 * π - elseif x <= (3 * 3 + 19) / 32 - y = -(3 + 7) / 8 * π - elseif x <= 1 - y = 3 / 2 * π * exp(8 - 1 / (1 - x)) - 3 / 4 * π - else - y = missing - end - return y -end - -function artificial_S2_whirl_image end - -@doc raw""" - artificial_S2_whirl_image([pts::Int=64]) - -Generate an artificial image of data on the 2 sphere, - -# Arguments -* `pts` – (`64`) size of the image in `pts`×`pts` pixel. - -This example dataset was used in the numerical example in Section 5.5 of [Laus et al., SIAM J Imag Sci., 2017](@cite LausNikolovaPerschSteidl:2017) - -It is based on [`artificial_S2_rotation_image`](@ref) extended by small whirl patches. -""" -artificial_S2_whirl_image(::Int) -function artificial_S2_rotation_image end - -@doc raw""" - artificial_S2_rotation_image([pts=64, rotations=(.5,.5)]) - -Create an image with a rotation on each axis as a parametrization. 
- -# Optional Parameters -* `pts` – (`64`) number of pixels along one dimension -* `rotations` – (`(.5,.5)`) number of total rotations performed on the axes. - -This dataset was used in the numerical example of Section 5.1 of [Bačák et al., SIAM J Sci Comput, 2016](@cite BacakBergmannSteidlWeinmann:2016). -""" -artificial_S2_rotation_image() - -@doc raw""" - artificial_S2_whirl_patch([pts=5]) - -create a whirl within the `pts`×`pts` patch of -[Sphere](https://juliamanifolds.github.io/Manifolds.jl/stable/manifolds/sphere.html)(@ref)`(2)`-valued image data. - -These patches are used within [`artificial_S2_whirl_image`](@ref). - -# Optional Parameters -* `pts` – (`5`) size of the patch. If the number is odd, the center is the north - pole. -""" -function artificial_S2_whirl_patch(pts::Int=5) - patch = fill([0.0, 0.0, -1.0], pts, pts) - scaleFactor = sqrt((pts - 1)^2 / 2) * 3 / π - for i in 1:pts - for j in 1:pts - if i != (pts + 1) / 2 || j != (pts + 1) / 2 - α = atan((j - (pts + 1) / 2), (i - (pts + 1) / 2)) - β = sqrt((j - (pts + 1) / 2)^2 + (i - (pts + 1) / 2)^2) / scaleFactor - patch[i, j] = [ - sin(α) * sin(π / 2 - β), -cos(α) * sin(π / 2 - β), cos(π / 2 - β) - ] - end - end - end - return patch -end - -function artificial_S2_composite_bezier_curve end - -@doc raw""" - artificial_S2_composite_bezier_curve() - -Create the artificial curve in the `Sphere(2)` consisting of 3 segments between the four -points - -````math -p_0 = \begin{bmatrix}0&0&1\end{bmatrix}^{\mathrm{T}}, -p_1 = \begin{bmatrix}0&-1&0\end{bmatrix}^{\mathrm{T}}, -p_2 = \begin{bmatrix}-1&0&0\end{bmatrix}^{\mathrm{T}}, -p_3 = \begin{bmatrix}0&0&-1\end{bmatrix}^{\mathrm{T}}, -```` - -where each segment is a cubic Bézier curve, i.e. each point, except ``p_3`` has a first point -within the following segment ``b_i^+``, ``i=0,1,2`` and a last point within the previous -segment, except for ``p_0``, which are denoted by ``b_i^-``, ``i=1,2,3``. -This curve is differentiable by the conditions ``b_i^- = \gamma_{b_i^+,p_i}(2)``, ``i=1,2``, -where ``\gamma_{a,b}`` is the [`shortest_geodesic`](https://juliamanifolds.github.io/ManifoldsBase.jl/stable/functions/#ManifoldsBase.shortest_geodesic-Tuple{AbstractManifold,%20Any,%20Any}) connecting ``a`` and ``b``. -The remaining points are defined as - -````math -\begin{aligned} - b_0^+ &= \exp_{p_0}\frac{\pi}{8\sqrt{2}}\begin{pmatrix}1&-1&0\end{pmatrix}^{\mathrm{T}},& - b_1^+ &= \exp_{p_1}-\frac{\pi}{4\sqrt{2}}\begin{pmatrix}-1&0&1\end{pmatrix}^{\mathrm{T}},\\ - b_2^+ &= \exp_{p_2}\frac{\pi}{4\sqrt{2}}\begin{pmatrix}0&1&-1\end{pmatrix}^{\mathrm{T}},& - b_3^- &= \exp_{p_3}-\frac{\pi}{8\sqrt{2}}\begin{pmatrix}-1&1&0\end{pmatrix}^{\mathrm{T}}. -\end{aligned} -```` - -This example was used within minimization of acceleration of the paper [Bergmann, Gousenbourger, Front. Appl. Math. Stat., 2018](@cite BergmannGousenbourger:2018). -""" -artificial_S2_composite_bezier_curve() - -@doc raw""" - artificial_SPD_image([pts=64, stepsize=1.5]) - -create an artificial image of symmetric positive definite matrices of size -`pts`×`pts` pixel with a jump of size `stepsize`. - -This dataset was used in the numerical example of Section 5.2 of [Bačák et al., SIAM J Sci Comput, 2016](@cite BacakBergmannSteidlWeinmann:2016). 
-""" -function artificial_SPD_image(pts::Int=64, stepsize=1.5) - r = range(0; stop=1 - 1 / pts, length=pts) - v1 = abs.(2 * pi .* r .- pi) - v2 = pi .* r - v3 = range(0; stop=3 * (1 - 1 / pts), length=2 * pts) - data = fill(Matrix{Float64}(I, 3, 3), pts, pts) - for row in 1:pts - for col in 1:pts - A = [cos(v1[col]) -sin(v1[col]) 0.0; sin(v1[col]) cos(v1[col]) 0.0; 0.0 0.0 1.0] - B = [1.0 0.0 0.0; 0.0 cos(v2[row]) -sin(v2[row]); 0.0 sin(v2[row]) cos(v2[row])] - C = [ - cos(v1[mod(col - row, pts) + 1]) 0 -sin(v1[mod(col - row, pts) + 1]) - 0.0 1.0 0.0 - sin(v1[mod(col - row, pts) + 1]) 0.0 cos(v1[mod(col - row, pts) + 1]) - ] - scale = [ - 1 + stepsize / 2 * ((row + col) > pts ? 1 : 0) - 1 + v3[row + col] - stepsize * (col > pts / 2 ? 1 : 0) - 4 - v3[row + col] + stepsize * (row > pts / 2 ? 1 : 0) - ] - data[row, col] = Matrix(Symmetric(A * B * C * Diagonal(scale) * C' * B' * A')) - end - end - return data -end - -function artificial_SPD_image2 end - -@doc raw""" - artificial_SPD_image2([pts=64, fraction=.66]) - -create an artificial image of symmetric positive definite matrices of size -`pts`×`pts` pixel with right hand side `fraction` is moved upwards. - -This data set was introduced in the numerical examples of Section of [Bergmann, Presch, Steidl, SIAM J Imag Sci, 2016](@cite BergmannPerschSteidl:2016) -""" -artificial_SPD_image2(pts, fraction) - -@doc raw""" - artificial_S2_lemniscate(p, t::Float64; a::Float64=π/2) - -Generate a point from the signal on the [`Sphere`](https://juliamanifolds.github.io/Manifolds.jl/stable/manifolds/sphere.html) ``\mathbb S^2`` by -creating the [Lemniscate of Bernoulli](https://en.wikipedia.org/wiki/Lemniscate_of_Bernoulli) -in the tangent space of `p` sampled at `t` and use exp` to obtain a point on -the [`Sphere`](https://juliamanifolds.github.io/Manifolds.jl/stable/manifolds/sphere.html). - -# Input -* `p` – the tangent space the Lemniscate is created in -* `t` – value to sample the Lemniscate at - -# Optional Values - * `a` – (`π/2`) defines a half axis of the Lemniscate to cover a - half sphere. - -This dataset was used in the numerical example of Section 5.1 of [Bačák et al., SIAM J Sci Comput, 2016](@cite BacakBergmannSteidlWeinmann:2016). -""" -artificial_S2_lemniscate(p, t::Float64, a::Float64=π / 2.0) - -@doc raw""" - artificial_S2_lemniscate(p [,pts=128,a=π/2,interval=[0,2π]) - -Generate a Signal on the [`Sphere`](https://juliamanifolds.github.io/Manifolds.jl/stable/manifolds/sphere.html) ``\mathbb S^2`` by creating the -[Lemniscate of Bernoulli](https://en.wikipedia.org/wiki/Lemniscate_of_Bernoulli) -in the tangent space of `p` sampled at `pts` points and use `exp` to get a -signal on the [`Sphere`](https://juliamanifolds.github.io/Manifolds.jl/stable/manifolds/sphere.html). - -# Input -* `p` – the tangent space the Lemniscate is created in -* `pts` – (`128`) number of points to sample the Lemniscate -* `a` – (`π/2`) defines a half axis of the Lemniscate to cover a - half sphere. -* `interval` – (`[0,2*π]`) range to sample the lemniscate at, the default value - refers to one closed curve - -This dataset was used in the numerical example of Section 5.1 of [Bačák et al., SIAM J Sci Comput, 2016](@cite BacakBergmannSteidlWeinmann:2016). 
-""" -function artificial_S2_lemniscate( - p, pts::Integer=128, a::Float64=π / 2.0, interval::Array{Float64,1}=[0.0, 2.0 * π] -) - return artificial_S2_lemniscate.(Ref(p), range(interval[1], interval[2]; length=pts), a) -end diff --git a/src/functions/adjoint_differentials.jl b/src/functions/adjoint_differentials.jl deleted file mode 100644 index cf89312912..0000000000 --- a/src/functions/adjoint_differentials.jl +++ /dev/null @@ -1,284 +0,0 @@ -@doc raw""" - adjoint_differential_bezier_control(M::AbstractManifold, b::BezierSegment, t, η) - adjoint_differential_bezier_control!( - M::AbstractManifold, - Y::BezierSegment, - b::BezierSegment, - t, - η, - ) - -evaluate the adjoint of the differential of a Bézier curve on the manifold `M` -with respect to its control points `b` based on a point `t```∈[0,1]`` on the -curve and a tangent vector ``η∈T_{β(t)}\mathcal M``. -This can be computed in place of `Y`. - -See [`de_casteljau`](@ref) for more details on the curve. -""" -function adjoint_differential_bezier_control(M::AbstractManifold, b::BezierSegment, t, η) - n = length(b.pts) - if n == 2 - return BezierSegment([ - adjoint_differential_shortest_geodesic_startpoint(M, b.pts[1], b.pts[2], t, η), - adjoint_differential_shortest_geodesic_endpoint(M, b.pts[1], b.pts[2], t, η), - ]) - end - c = [b.pts, [similar.(b.pts[1:l]) for l in (n - 1):-1:2]...] - for i in 2:(n - 1) # casteljau on the tree -> forward with interims storage - c[i] .= shortest_geodesic.(Ref(M), c[i - 1][1:(end - 1)], c[i - 1][2:end], Ref(t)) - end - Y = [η, [similar(η) for i in 1:(n - 1)]...] - for i in (n - 1):-1:1 # propagate adjoints -> backward without interims storage - Y[1:(n - i + 1)] .= - [ # take previous results and add start&end point effects - adjoint_differential_shortest_geodesic_startpoint.( - Ref(M), c[i][1:(end - 1)], c[i][2:end], Ref(t), Y[1:(n - i)] - )..., - zero_vector(M, c[i][end]), - ] .+ [ - zero_vector(M, c[i][1]), - adjoint_differential_shortest_geodesic_endpoint.( - Ref(M), c[i][1:(end - 1)], c[i][2:end], Ref(t), Y[1:(n - i)] - )..., - ] - end - return BezierSegment(Y) -end -function adjoint_differential_bezier_control!( - M::AbstractManifold, Y::BezierSegment, b::BezierSegment, t, η -) - n = length(b.pts) - if n == 2 - adjoint_differential_shortest_geodesic_startpoint!( - M, Y.pts[1], b.pts[1], b.pts[2], t, η - ) - adjoint_differential_shortest_geodesic_endpoint!( - M, Y.pts[2], b.pts[1], b.pts[2], t, η - ) - return Y - end - c = [b.pts, [similar.(b.pts[1:l]) for l in (n - 1):-1:2]...] 
- for i in 2:(n - 1) # casteljau on the tree -> forward with interims storage - c[i] .= shortest_geodesic.(Ref(M), c[i - 1][1:(end - 1)], c[i - 1][2:end], Ref(t)) - end - Y.pts[1] = η - for i in (n - 1):-1:1 # propagate adjoints -> backward without interims storage - Y.pts[1:(n - i + 1)] .= - [ # take previous results and add start&end point effects - adjoint_differential_shortest_geodesic_startpoint.( - Ref(M), c[i][1:(end - 1)], c[i][2:end], Ref(t), Y.pts[1:(n - i)] - )..., - zero_vector(M, c[i][end]), - ] .+ [ - zero_vector(M, c[i][1]), - adjoint_differential_shortest_geodesic_endpoint.( - Ref(M), c[i][1:(end - 1)], c[i][2:end], Ref(t), Y.pts[1:(n - i)] - )..., - ] - end - return Y -end - -@doc raw""" - adjoint_differential_bezier_control( - M::AbstractManifold, - b::BezierSegment, - t::AbstractVector, - X::AbstractVector, - ) - adjoint_differential_bezier_control!( - M::AbstractManifold, - Y::BezierSegment, - b::BezierSegment, - t::AbstractVector, - X::AbstractVector, - ) -evaluate the adjoint of the differential of a Bézier curve on the manifold `M` -with respect to its control points `b` based on a points `T```=(t_i)_{i=1}^n`` that -are pointwise in `` t_i∈[0,1]`` on the curve and given corresponding tangential -vectors ``X = (η_i)_{i=1}^n``, ``η_i∈T_{β(t_i)}\mathcal M`` -This can be computed in place of `Y`. - -See [`de_casteljau`](@ref) for more details on the curve and [Bergmann, Gousenbourger, Front. Appl. Math. Stat., 2018](@cite BergmannGousenbourger:2018) -""" -function adjoint_differential_bezier_control( - M::AbstractManifold, b::BezierSegment, t::AbstractVector, X::AbstractVector -) - effects = [bt.pts for bt in adjoint_differential_bezier_control.(Ref(M), Ref(b), t, X)] - return BezierSegment(sum(effects)) -end -function adjoint_differential_bezier_control!( - M::AbstractManifold, - Y::BezierSegment, - b::BezierSegment, - t::AbstractVector, - X::AbstractVector, -) - Z = BezierSegment(similar.(Y.pts)) - fill!.(Y.pts, zero(eltype(first(Y.pts)))) - for i in 1:length(t) - adjoint_differential_bezier_control!(M, Z, b, t[i], X[i]) - Y.pts .+= Z.pts - end - return Y -end - -@doc raw""" - adjoint_differential_bezier_control( - M::AbstractManifold, - B::AbstractVector{<:BezierSegment}, - t, - X - ) - adjoint_differential_bezier_control!( - M::AbstractManifold, - Y::AbstractVector{<:BezierSegment}, - B::AbstractVector{<:BezierSegment}, - t, - X - ) - -evaluate the adjoint of the differential of a composite Bézier curve on the -manifold `M` with respect to its control points `b` based on a points `T```=(t_i)_{i=1}^n`` -that are pointwise in ``t_i∈[0,1]`` on the curve and given corresponding tangential -vectors ``X = (η_i)_{i=1}^n``, ``η_i∈T_{β(t_i)}\mathcal M`` -This can be computed in place of `Y`. - -See [`de_casteljau`](@ref) for more details on the curve. -""" -function adjoint_differential_bezier_control( - M::AbstractManifold, B::AbstractVector{<:BezierSegment}, t, X -) - Y = broadcast(b -> BezierSegment(zero_vector.(Ref(M), b.pts)), B) # Double broadcast - return adjoint_differential_bezier_control!(M, Y, B, t, X) -end -function adjoint_differential_bezier_control!( - M::AbstractManifold, - Y::AbstractVector{<:BezierSegment}, - B::AbstractVector{<:BezierSegment}, - t, - X, -) - # doubly nested broadcast on the Array(Array) of CPs (note broadcast _and_ .) 
- if (0 > t) || (t > length(B)) - error( - "The parameter ", - t, - " to evaluate the composite Bézier curve at is outside the interval [0,", - length(B), - "].", - ) - end - for y in Y - fill!.(y.pts, zero(eltype(first(y.pts)))) - end - seg = max(ceil(Int, t), 1) - localT = ceil(Int, t) == 0 ? 0.0 : t - seg + 1 - adjoint_differential_bezier_control!(M, Y[seg], B[seg], localT, X) - return Y -end -@doc raw""" - adjoint_differential_bezier_control( - M::AbstractManifold, - T::AbstractVector, - X::AbstractVector, - ) - adjoint_differential_bezier_control!( - M::AbstractManifold, - Y::AbstractVector{<:BezierSegment}, - T::AbstractVector, - X::AbstractVector, - ) - -Evaluate the adjoint of the differential with respect to the controlpoints at several times `T`. -This can be computed in place of `Y`. - -See [`de_casteljau`](@ref) for more details on the curve. -""" -function adjoint_differential_bezier_control( - M::AbstractManifold, - B::AbstractVector{<:BezierSegment}, - T::AbstractVector, - X::AbstractVector, -) - Y = broadcast(b -> BezierSegment(zero_vector.(Ref(M), b.pts)), B) # Double broadcast - return adjoint_differential_bezier_control!(M, Y, B, T, X) -end -function adjoint_differential_bezier_control!( - M::AbstractManifold, - Y::AbstractVector{<:BezierSegment}, - B::AbstractVector{<:BezierSegment}, - T::AbstractVector, - X::AbstractVector, -) - Z = [BezierSegment(similar.(y.pts)) for y in Y] - for j in 1:length(T) # for all times - adjoint_differential_bezier_control!(M, Z, B, T[j], X[j]) - for i in 1:length(Z) - Y[i].pts .+= Z[i].pts - end - end - return Y -end - -@doc raw""" - Y = adjoint_differential_forward_logs(M, p, X) - adjoint_differential_forward_logs!(M, Y, p, X) - -Compute the adjoint differential of [`forward_logs`](@ref) ``F`` occurring, -in the power manifold array `p`, the differential of the function - -``F_i(p) = \sum_{j ∈ \mathcal I_i} \log_{p_i} p_j`` - -where ``i`` runs over all indices of the `PowerManifold` manifold `M` and ``\mathcal I_i`` -denotes the forward neighbors of ``i`` -Let ``n`` be the number dimensions of the `PowerManifold` manifold (i.e. `length(size(x)`)). -Then the input tangent vector lies on the manifold ``\mathcal M' = \mathcal M^n``. -The adjoint differential can be computed in place of `Y`. - -# Input - -* `M` – a `PowerManifold` manifold -* `p` – an array of points on a manifold -* `X` – a tangent vector to from the n-fold power of `p`, where n is the `ndims` of `p` - -# Output - -`Y` – resulting tangent vector in ``T_p\mathcal M`` representing the adjoint - differentials of the logs. -""" -function adjoint_differential_forward_logs( - M::PowerManifold{𝔽,TM,TSize,TPR}, p, X -) where {𝔽,TM,TSize,TPR} - Y = zero_vector(M, p) - return adjoint_differential_forward_logs!(M, Y, p, X) -end -function adjoint_differential_forward_logs!( - M::PowerManifold{𝔽,TM,TSize,TPR}, Y, p, X -) where {𝔽,TM,TSize,TPR} - power_size = power_dimensions(M) - d = length(power_size) - N = PowerManifold(M.manifold, TPR(), power_size..., d) - R = CartesianIndices(Tuple(power_size)) - maxInd = last(R).I - # since we add things in Y, make sure we start at zero. - zero_vector!(M, Y, p) - for i in R # iterate over all pixel - for k in 1:d # for all direction combinations - I = [i.I...] # array of index - J = I .+ 1 .* (1:d .== k) #i + e_k is j - if all(J .<= maxInd) # is this neighbor in range? - j = CartesianIndex{d}(J...) # neighbour index as Cartesian Index - Y[M, I...] = - Y[M, I...] 
+ adjoint_differential_log_basepoint( - M.manifold, p[M, I...], p[M, J...], X[N, I..., k] - ) - Y[M, J...] = - Y[M, J...] + adjoint_differential_log_argument( - M.manifold, p[M, J...], p[M, I...], X[N, I..., k] - ) - end - end # directions - end # i in R - return Y -end diff --git a/src/functions/bezier_curves.jl b/src/functions/bezier_curves.jl deleted file mode 100644 index e859aae37c..0000000000 --- a/src/functions/bezier_curves.jl +++ /dev/null @@ -1,330 +0,0 @@ -@doc doc""" - BezierSegment - -A type to capture a Bezier segment. With ``n`` points, a Bézier segment of degree ``n-1`` -is stored. On the Euclidean manifold, this yields a polynomial of degree ``n-1``. - -This type is mainly used to encapsulate the points within a composite Bezier curve, which -consist of an `AbstractVector` of `BezierSegments` where each of the points might -be a nested array on a `PowerManifold` already. - -Not that this can also be used to represent tangent vectors on the control points of a segment. - -See also: [`de_casteljau`](@ref). - -# Constructor - BezierSegment(pts::AbstractVector) - -Given an abstract vector of `pts` generate the corresponding Bézier segment. -""" -struct BezierSegment{T<:AbstractVector{S} where {S}} - pts::T -end -#BezierSegment(pts::T) where {T <: AbstractVector{S} where S} = BezierSegment{T}(pts) -Base.show(io::IO, b::BezierSegment) = print(io, "BezierSegment($(b.pts))") - -@doc raw""" - de_casteljau(M::AbstractManifold, b::BezierSegment NTuple{N,P}) -> Function - -return the [Bézier curve](https://en.wikipedia.org/wiki/Bézier_curve) -``β(⋅;b_0,…,b_n): [0,1] → \mathcal M`` defined by the control points -``b_0,…,b_n∈\mathcal M``, ``n∈\mathbb N``, as a [`BezierSegment`](@ref). -This function implements de Casteljau's algorithm [Casteljau, 1959](@cite deCasteljau:1959), [Casteljau, 1963](@cite deCasteljau:1963) generalized -to manifolds by [Popiel, Noakes, J Approx Theo, 2007](@cite PopielNoakes:2007): Let ``γ_{a,b}(t)`` denote the -shortest geodesic connecting ``a,b∈\mathcal M``. Then the curve is defined by the recursion - -````math -\begin{aligned} - β(t;b_0,b_1) &= \gamma_{b_0,b_1}(t)\\ - β(t;b_0,…,b_n) &= \gamma_{β(t;b_0,…,b_{n-1}), β(t;b_1,…,b_n)}(t), -\end{aligned} -```` - -and `P` is the type of a point on the `Manifold` `M`. - - de_casteljau(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) -> Function - -Given a vector of Bézier segments, i.e. a vector of control points -``B=\bigl( (b_{0,0},…,b_{n_0,0}),…,(b_{0,m},… b_{n_m,m}) \bigr)``, -where the different segments might be of different degree(s) ``n_0,…,n_m``. The resulting -composite Bézier curve ``c_B:[0,m] → \mathcal M`` consists of ``m`` segments which are -Bézier curves. - -````math -c_B(t) := - \begin{cases} - β(t; b_{0,0},…,b_{n_0,0}) & \text{ if } t ∈[0,1]\\ - β(t-i; b_{0,i},…,b_{n_i,i}) & \text{ if } - t∈(i,i+1], \quad i∈\{1,…,m-1\}. - \end{cases} -```` - -````julia -de_casteljau(M::AbstractManifold, b::BezierSegment, t::Real) -de_casteljau(M::AbstractManifold, B::AbstractVector{<:BezierSegment}, t::Real) -de_casteljau(M::AbstractManifold, b::BezierSegment, T::AbstractVector) -> AbstractVector -de_casteljau( - M::AbstractManifold, - B::AbstractVector{<:BezierSegment}, - T::AbstractVector -) -> AbstractVector -```` - -Evaluate the Bézier curve at time `t` or at times `t` in `T`. -""" -de_casteljau(M::AbstractManifold, ::Any...) 
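A hedged sketch of the two call styles documented above, evaluating a single segment and a composite curve; it reuses the setup of the first sketch (`using Manopt, Manifolds; M = Sphere(2)`), and the second segment below is an illustrative choice that starts where the first one ends.

```julia
b = BezierSegment([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
β = de_casteljau(M, b)        # a function t ↦ β(t) on [0,1]
β(0.25)

B = [b, BezierSegment([[0.0, 0.0, 1.0], [0.0, -1.0, 0.0], [1.0, 0.0, 0.0]])]
de_casteljau(M, B, 1.5)       # the composite curve is parametrized on [0, length(B)]
```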
-function de_casteljau(M::AbstractManifold, b::BezierSegment) - if length(b.pts) == 2 - return t -> shortest_geodesic(M, b.pts[1], b.pts[2], t) - else - return t -> shortest_geodesic( - M, - de_casteljau(M, BezierSegment(b.pts[1:(end - 1)]), t), - de_casteljau(M, BezierSegment(b.pts[2:end]), t), - t, - ) - end -end -function de_casteljau(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) - length(B) == 1 && return de_casteljau(M, B[1]) - return function (t) - ((0 > t) || (t > length(B))) && throw( - DomainError( - "Parameter $(t) outside of domain of the composite Bézier curve [0,$(length(B))].", - ), - ) - return de_casteljau( - M, B[max(ceil(Int, t), 1)], ceil(Int, t) == 0 ? 0.0 : t - ceil(Int, t) + 1 - ) - end -end -# the direct evaluation can be done iteratively -function de_casteljau(M::AbstractManifold, b::BezierSegment, t::Real) - if length(b.pts) == 2 - return shortest_geodesic(M, b.pts[1], b.pts[2], t) - else - c = deepcopy(b.pts) - for l in length(c):-1:2 # casteljau on the tree -> forward with interims storage - c[1:(l - 1)] .= shortest_geodesic.(Ref(M), c[1:(l - 1)], c[2:l], Ref(t)) - end - end - return c[1] -end -function de_casteljau(M::AbstractManifold, B::AbstractVector{<:BezierSegment}, t::Real) - ((0 > t) || (t > length(B))) && throw( - DomainError( - "Parameter $(t) outside of domain of the composite Bézier curve [0,$(length(B))].", - ), - ) - return de_casteljau( - M, B[max(ceil(Int, t), 1)], ceil(Int, t) == 0 ? 0.0 : t - ceil(Int, t) + 1 - ) -end -de_casteljau(M::AbstractManifold, b, T::AbstractVector) = de_casteljau.(Ref(M), Ref(b), T) - -@doc raw""" - get_bezier_junction_tangent_vectors(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) - get_bezier_junction_tangent_vectors(M::AbstractManifold, b::BezierSegment) - -returns the tangent vectors at start and end points of the composite Bézier curve -pointing from a junction point to the first and last -inner control points for each segment of the composite Bezier curve specified by -the control points `B`, either a vector of segments of controlpoints. -""" -function get_bezier_junction_tangent_vectors( - M::AbstractManifold, B::AbstractVector{<:BezierSegment} -) - return cat( - [[log(M, b.pts[1], b.pts[2]), log(M, b.pts[end], b.pts[end - 1])] for b in B]..., - ; - dims=1, - ) -end -function get_bezier_junction_tangent_vectors(M::AbstractManifold, b::BezierSegment) - return get_bezier_junction_tangent_vectors(M, [b]) -end - -@doc raw""" - get_bezier_junctions(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) - get_bezier_junctions(M::AbstractManifold, b::BezierSegment) - -returns the start and end point(s) of the segments of the composite Bézier curve -specified by the control points `B`. For just one segment `b`, its start and end points -are returned. -""" -function get_bezier_junctions( - ::AbstractManifold, B::AbstractVector{<:BezierSegment}, double_inner::Bool=false -) - return cat( - [double_inner ? [b.pts[[1, end]]...] : [b.pts[1]] for b in B]..., - double_inner ? [] : [last(last(B).pts)]; - dims=1, - ) -end -function get_bezier_junctions(::AbstractManifold, b::BezierSegment, ::Bool=false) - return b.pts[[1, end]] -end - -@doc raw""" - get_bezier_inner_points(M::AbstractManifold, B::AbstractVector{<:BezierSegment} ) - get_bezier_inner_points(M::AbstractManifold, b::BezierSegment) - -returns the inner (i.e. despite start and end) points of the segments of the -composite Bézier curve specified by the control points `B`. 
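For the composite curve `B` from the previous sketch, the accessors introduced here extract the junctions and the remaining control points (a hedged illustration under the same assumptions).

```julia
get_bezier_junctions(M, B)      # start and end point of each segment, without duplicates
get_bezier_inner_points(M, B)   # the control points that are not junctions
```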
For a single segment `b`, -its inner points are returned -""" -function get_bezier_inner_points(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) - return cat([[get_bezier_inner_points(M, b)...] for b in B]...; dims=1) -end -function get_bezier_inner_points(::AbstractManifold, b::BezierSegment) - return b.pts[2:(end - 1)] -end - -@doc raw""" - get_bezier_points( - M::AbstractManifold, - B::AbstractVector{<:BezierSegment}, - reduce::Symbol=:default - ) - get_bezier_points(M::AbstractManifold, b::BezierSegment, reduce::Symbol=:default) - -returns the control points of the segments of the composite Bézier curve -specified by the control points `B`, either a vector of segments of -controlpoints or a. - -This method reduces the points depending on the optional `reduce` symbol - -* `:default` – no reduction is performed -* `:continuous` – for a continuous function, the junction points are doubled at - ``b_{0,i}=b_{n_{i-1},i-1}``, so only ``b_{0,i}`` is in the vector. -* `:differentiable` – for a differentiable function additionally - ``\log_{b_{0,i}}b_{1,i} = -\log_{b_{n_{i-1},i-1}}b_{n_{i-1}-1,i-1}`` holds. - hence ``b_{n_{i-1}-1,i-1}`` is omitted. - -If only one segment is given, all points of `b` – i.e. `b.pts` is returned. -""" -function get_bezier_points( - M::AbstractManifold, B::AbstractVector{<:BezierSegment}, reduce::Symbol=:default -) - return get_bezier_points(M, B, Val(reduce)) -end -function get_bezier_points( - ::AbstractManifold, B::AbstractVector{<:BezierSegment}, ::Val{:default} -) - return cat([[b.pts...] for b in B]...; dims=1) -end -function get_bezier_points( - ::AbstractManifold, B::AbstractVector{<:BezierSegment}, ::Val{:continuous} -) - return cat([[b.pts[1:(end - 1)]...] for b in B]..., [last(last(B).pts)]; dims=1) -end -function get_bezier_points( - ::AbstractManifold, B::AbstractVector{<:BezierSegment}, ::Val{:differentiable} -) - return cat( - [first(B).pts[1]], [first(B).pts[2]], [[b.pts[3:end]...] for b in B]..., ; dims=1 - ) -end -get_bezier_points(::AbstractManifold, b::BezierSegment, ::Symbol=:default) = b.pts - -@doc raw""" - get_bezier_degree(M::AbstractManifold, b::BezierSegment) - -return the degree of the Bézier curve represented by the tuple `b` of control points on -the manifold `M`, i.e. the number of points minus 1. -""" -get_bezier_degree(::AbstractManifold, b::BezierSegment) = length(b.pts) - 1 - -@doc raw""" - get_bezier_degrees(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) - -return the degrees of the components of a composite Bézier curve represented by tuples -in `B` containing points on the manifold `M`. -""" -function get_bezier_degrees(M::AbstractManifold, B::AbstractVector{<:BezierSegment}) - return get_bezier_degree.(Ref(M), B) -end - -@doc raw""" - get_bezier_segments(M::AbstractManifold, c::AbstractArray{P}, d[, s::Symbol=:default]) - -returns the array of [`BezierSegment`](@ref)s `B` of a composite Bézier curve reconstructed -from an array `c` of points on the manifold `M` and an array of degrees `d`. - -There are a few (reduced) representations that can get extended; -see also [`get_bezier_points`](@ref). For ease of the following, let ``c=(c_1,…,c_k)`` -and ``d=(d_1,…,d_m)``, where ``m`` denotes the number of components the composite Bézier -curve consists of. Then - -* `:default` – ``k = m + \sum_{i=1}^m d_i`` since each component requires one point more than - its degree. The points are then ordered in tuples, i.e. 
- ````math - B = \bigl[ [c_1,…,c_{d_1+1}], (c_{d_1+2},…,c_{d_1+d_2+2}],…, [c_{k-m+1+d_m},…,c_{k}] \bigr] - ```` -* `:continuous` – ``k = 1+ \sum_{i=1}{m} d_i``, since for a continuous curve start and end - point of successive components are the same, so the very first start point and the end - points are stored. - ````math - B = \bigl[ [c_1,…,c_{d_1+1}], [c_{d_1+1},…,c_{d_1+d_2+1}],…, [c_{k-1+d_m},…,b_{k}) \bigr] - ```` -* `:differentiable` – for a differentiable function additionally to the last explanation, also - the second point of any segment was not stored except for the first segment. - Hence ``k = 2 - m + \sum_{i=1}{m} d_i`` and at a junction point ``b_n`` with its given prior - point ``c_{n-1}``, i.e. this is the last inner point of a segment, the first inner point - in the next segment the junction is computed as - ``b = \exp_{c_n}(-\log_{c_n} c_{n-1})`` such that the assumed differentiability holds -""" -function get_bezier_segments( - M::AbstractManifold, c::Array{P,1}, d, s::Symbol=:default -) where {P} - ((length(c) == d[1]) && (length(d) == 1)) && return Tuple(c) - return get_bezier_segments(M, c, d, Val(s)) -end -function get_bezier_segments( - ::AbstractManifold, c::Array{P,1}, d, ::Val{:default} -) where {P} - endindices = cumsum(d .+ 1) - startindices = endindices - d - return [BezierSegment(c[si:ei]) for (si, ei) in zip(startindices, endindices)] -end -function get_bezier_segments( - ::AbstractManifold, c::Array{P,1}, d, ::Val{:continuous} -) where {P} - length(c) != (sum(d) + 1) && error( - "The number of control points $(length(c)) does not match (for degrees $(d) expected $(sum(d)+1) points.", - ) - nums = d .+ [(i == length(d)) ? 1 : 0 for i in 1:length(d)] - endindices = cumsum(nums) - startindices = cumsum(nums) - nums .+ 1 - return [ - [ # for all append the start of the new also as last - BezierSegment([c[startindices[i]:endindices[i]]..., c[startindices[i + 1]]]) for - i in 1:(length(startindices) - 1) - ]..., # despite for the last - BezierSegment(c[startindices[end]:endindices[end]]), - ] -end -function get_bezier_segments( - M::AbstractManifold, c::Array{P,1}, d, ::Val{:differentiable} -) where {P} - length(c) != (sum(d .- 1) + 2) && error( - "The number of control points $(length(c)) does not match (for degrees $(d) expected $(sum(d.-1)+2) points.", - ) - nums = d .+ [(i == 1) ? 1 : -1 for i in 1:length(d)] - endindices = cumsum(nums) - startindices = cumsum(nums) - nums .+ 1 - return [ # for all append the start of the new also as last - BezierSegment(c[startindices[1]:endindices[1]]), - [ - BezierSegment([ - c[endindices[i - 1]], - exp( - M, - c[endindices[i - 1]], - -log(M, c[endindices[i - 1]], c[endindices[i - 1] - 1]), - ), - c[startindices[i]:endindices[i]]..., - ]) for i in 2:length(startindices) - ]..., # despite for the last - ] -end diff --git a/src/functions/costs.jl b/src/functions/costs.jl deleted file mode 100644 index b54a23dbae..0000000000 --- a/src/functions/costs.jl +++ /dev/null @@ -1,291 +0,0 @@ - -@doc raw""" - cost_acceleration_bezier( - M::AbstractManifold, - B::AbstractVector{P}, - degrees::AbstractVector{<:Integer}, - T::AbstractVector{<:AbstractFloat}, - ) where {P} - -compute the value of the discrete Acceleration of the composite Bezier curve - -```math -\sum_{i=1}^{N-1}\frac{d^2_2 [ B(t_{i-1}), B(t_{i}), B(t_{i+1})]}{\Delta_t^3} -``` - -where for this formula the `pts` along the curve are equispaced and denoted by -``t_i``, ``i=1,…,N``, and ``d_2`` refers to the second order absolute difference [`costTV2`](@ref) -(squared). 
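The reduced representations can be round-tripped. The following sketch (same assumptions, same `B` as above) stores the composite curve as a plain point vector plus degrees and reconstructs the segments, which is exactly the form the acceleration cost below expects.

```julia
pts  = get_bezier_points(M, B, :differentiable)   # reduced C¹ representation
degs = get_bezier_degrees(M, B)
B2   = get_bezier_segments(M, pts, degs, :differentiable)
# B2 matches B whenever the original curve is differentiable at the junctions
```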
Note that the Bézier-curve is given in reduces form as a point on a `PowerManifold`, -together with the `degrees` of the segments and assuming a differentiable curve, the segments -can internally be reconstructed. - -This acceleration discretization was introduced in [Bergmann, Gousenbourger, Front. Appl. Math. Stat., 2018](@cite BergmannGousenbourger:2018). - -# See also - -[`grad_acceleration_bezier`](@ref), [`cost_L2_acceleration_bezier`](@ref), [`grad_L2_acceleration_bezier`](@ref) -""" -function cost_acceleration_bezier( - M::AbstractManifold, - B::AbstractVector{P}, - degrees::AbstractVector{<:Integer}, - T::AbstractVector{<:AbstractFloat}, -) where {P} - Bt = get_bezier_segments(M, B, degrees, :differentiable) - p = de_casteljau(M, Bt, T) - n = length(T) - f = p[[1, 3:n..., n]] - b = p[[1, 1:(n - 2)..., n]] - d = distance.(Ref(M), p, shortest_geodesic.(Ref(M), f, b, Ref(0.5))) .^ 2 - samplingFactor = 1 / (((max(T...) - min(T...)) / (n - 1))^3) - return samplingFactor * sum(d) -end -@doc raw""" - cost_L2_acceleration_bezier(M,B,pts,λ,d) - -compute the value of the discrete Acceleration of the composite Bezier curve -together with a data term, i.e. - -````math -\frac{λ}{2}\sum_{i=0}^{N} d_{\mathcal M}(d_i, c_B(i))^2+ -\sum_{i=1}^{N-1}\frac{d^2_2 [ B(t_{i-1}), B(t_{i}), B(t_{i+1})]}{\Delta_t^3} -```` - -where for this formula the `pts` along the curve are equispaced and denoted by -``t_i`` and ``d_2`` refers to the second order absolute difference [`costTV2`](@ref) -(squared), the junction points are denoted by ``p_i``, and to each ``p_i`` corresponds -one data item in the manifold points given in `d`. For details on the acceleration -approximation, see [`cost_acceleration_bezier`](@ref). -Note that the Bézier-curve is given in reduces form as a point on a `PowerManifold`, -together with the `degrees` of the segments and assuming a differentiable curve, the -segments can internally be reconstructed. - - -# See also - -[`grad_L2_acceleration_bezier`](@ref), [`cost_acceleration_bezier`](@ref), [`grad_acceleration_bezier`](@ref) -""" -function cost_L2_acceleration_bezier( - M::AbstractManifold, - B::AbstractVector{P}, - degrees::AbstractVector{<:Integer}, - T::AbstractVector{<:AbstractFloat}, - λ::AbstractFloat, - d::AbstractVector{P}, -) where {P} - Bt = get_bezier_segments(M, B, degrees, :differentiable) - p = get_bezier_junctions(M, Bt) - return cost_acceleration_bezier(M, B, degrees, T) + - λ / 2 * sum((distance.(Ref(M), p, d)) .^ 2) -end - -@doc raw""" - costIntrICTV12(M, f, u, v, α, β) - -Compute the intrinsic infimal convolution model, where the addition is replaced -by a mid point approach and the two functions involved are [`costTV2`](@ref) -and [`costTV`](@ref). The model reads - -```math -E(u,v) = - \frac{1}{2}\sum_{i ∈ \mathcal G} - d_{\mathcal M}\bigl(g(\frac{1}{2},v_i,w_i),f_i\bigr) - +\alpha\bigl( β\mathrm{TV}(v) + (1-β)\mathrm{TV}_2(w) \bigr). -``` -""" -function costIntrICTV12(M::AbstractManifold, f, u, v, α, β) - IC = 1 / 2 * distance(M, shortest_geodesic(M, u, v, 0.5), f)^2 - TV12 = β * costTV(M, u) + (1 - β) * costTV2(M, v) - return IC + α * TV12 -end - -@doc raw""" - costL2TV(M, f, α, x) - -compute the ``ℓ^2``-TV functional on the `PowerManifold manifold `M` for given -(fixed) data `f` (on `M`), a nonnegative weight `α`, and evaluated at `x` (on `M`), -i.e. 
- -```math -E(x) = d_{\mathcal M}^2(f,x) + \alpha \operatorname{TV}(x) -``` - -# See also - -[`costTV`](@ref) -""" -costL2TV(M, f, α, x) = 1 / 2 * distance(M, f, x)^2 + α * costTV(M, x) - -@doc raw""" - costL2TVTV2(M, f, α, β, x) - -compute the ``ℓ^2``-TV-TV2 functional on the `PowerManifold` manifold `M` for -given (fixed) data `f` (on `M`), nonnegative weight `α`, `β`, and evaluated -at `x` (on `M`), i.e. - -```math -E(x) = d_{\mathcal M}^2(f,x) + \alpha\operatorname{TV}(x) - + β\operatorname{TV}_2(x) -``` - -# See also - -[`costTV`](@ref), [`costTV2`](@ref) -""" -function costL2TVTV2(M::PowerManifold, f, α, β, x) - return 1 / 2 * distance(M, f, x)^2 + α * costTV(M, x) + β * costTV2(M, x) -end - -@doc raw""" - costL2TV2(M, f, β, x) - -compute the ``ℓ^2``-TV2 functional on the `PowerManifold` manifold `M` -for given data `f`, nonnegative parameter `β`, and evaluated at `x`, i.e. - -```math -E(x) = d_{\mathcal M}^2(f,x) + β\operatorname{TV}_2(x) -``` - -# See also - -[`costTV2`](@ref) -""" -function costL2TV2(M::PowerManifold, f, β, x) - return 1 / 2 * distance(M, f, x)^2 + β * costTV2(M, x) -end - -@doc raw""" - costTV(M, x, p) - -Compute the ``\operatorname{TV}^p`` functional for a tuple `pT` of points -on a manifold `M`, i.e. - -```math -E(x_1,x_2) = d_{\mathcal M}^p(x_1,x_2), \quad x_1,x_2 ∈ \mathcal M -``` - -# See also - -[`grad_TV`](@ref), [`prox_TV`](@ref) -""" -function costTV(M::AbstractManifold, x::Tuple{T,T}, p::Int=1) where {T} - return distance(M, x[1], x[2])^p -end -@doc raw""" - costTV(M,x [,p=2,q=1]) - -Compute the ``\operatorname{TV}^p`` functional for data `x`on the `PowerManifold` -manifold `M`, i.e. ``\mathcal M = \mathcal N^n``, where ``n ∈ \mathbb N^k`` denotes -the dimensions of the data `x`. -Let ``\mathcal I_i`` denote the forward neighbors, i.e. with ``\mathcal G`` as all -indices from ``\mathbf{1} ∈ \mathbb N^k`` to ``n`` we have -``\mathcal I_i = \{i+e_j, j=1,…,k\}\cap \mathcal G``. -The formula reads - -```math -E^q(x) = \sum_{i ∈ \mathcal G} - \bigl( \sum_{j ∈ \mathcal I_i} d^p_{\mathcal M}(x_i,x_j) \bigr)^{q/p}. -``` - -# See also - -[`grad_TV`](@ref), [`prox_TV`](@ref) -""" -function costTV(M::PowerManifold, x, p=1, q=1) - power_size = power_dimensions(M) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - maxInd = last(R) - cost = fill(0.0, Tuple(power_size)) - for k in 1:d # for all directions - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - for i in R # iterate over all pixel - j = i + ek # compute neighbor - if all(map(<=, j.I, maxInd.I)) # is this neighbor in range? - cost[i] += costTV(M.manifold, (x[M, Tuple(i)...], x[M, Tuple(j)...]), p) - end - end - end - cost = (cost) .^ (1 / p) - if q > 0 - return sum(cost .^ q)^(1 / q) - else - return cost - end -end -@doc raw""" - costTV2(M,(x1,x2,x3) [,p=1]) - -Compute the ``\operatorname{TV}_2^p`` functional for the 3-tuple of points -`(x1,x2,x3)`on the manifold `M`. Denote by - -```math - \mathcal C = \bigl\{ c ∈ \mathcal M \ |\ g(\tfrac{1}{2};x_1,x_3) \text{ for some geodesic }g\bigr\} -``` - -the set of mid points between ``x_1`` and ``x_3``. Then the function reads - -```math -d_2^p(x_1,x_2,x_3) = \min_{c ∈ \mathcal C} d_{\mathcal M}(c,x_2). 
-``` - -# See also - -[`grad_TV2`](@ref), [`prox_TV2`](@ref) -""" -function costTV2(M::MT, x::Tuple{T,T,T}, p=1) where {MT<:AbstractManifold,T} - # note that here mid_point returns the closest to x2 from the e midpoints between x1 x3 - return 1 / p * distance(M, mid_point(M, x[1], x[3]), x[2])^p -end -@doc raw""" - costTV2(M,x [,p=1]) - -compute the ``\operatorname{TV}_2^p`` functional for data `x` on the -`PowerManifold` manifoldmanifold `M`, i.e. ``\mathcal M = \mathcal N^n``, -where ``n ∈ \mathbb N^k`` denotes the dimensions of the data `x`. -Let ``\mathcal I_i^{\pm}`` denote the forward and backward neighbors, respectively, -i.e. with ``\mathcal G`` as all indices from ``\mathbf{1} ∈ \mathbb N^k`` to ``n`` we -have ``\mathcal I^\pm_i = \{i\pm e_j, j=1,…,k\}\cap \mathcal I``. -The formula then reads - -```math -E(x) = \sum_{i ∈ \mathcal I,\ j_1 ∈ \mathcal I^+_i,\ j_2 ∈ \mathcal I^-_i} -d^p_{\mathcal M}(c_i(x_{j_1},x_{j_2}), x_i), -``` - -where ``c_i(⋅,⋅)`` denotes the mid point between its two arguments that is -nearest to ``x_i``. - -# See also - -[`grad_TV2`](@ref), [`prox_TV2`](@ref) -""" -function costTV2(M::PowerManifold, x, p::Int=1, Sum::Bool=true) - Tt = Tuple(power_dimensions(M)) - R = CartesianIndices(Tt) - d = length(Tt) - minInd, maxInd = first(R), last(R) - cost = fill(0.0, Tt) - for k in 1:d # for all directions - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - for i in R # iterate over all pixel - jF = i + ek # compute forward neighbor - jB = i - ek # compute backward neighbor - if all(map(<=, jF.I, maxInd.I)) && all(map(>=, jB.I, minInd.I)) # are neighbors in range? - cost[i] += costTV2( - M.manifold, - (x[M, Tuple(jB)...], x[M, Tuple(i)...], x[M, Tuple(jF)...]), - p, - ) - end - end # i in R - end # directions - if p != 1 - cost = (cost) .^ (1 / p) - end - if Sum - return sum(cost) - else - return cost - end -end diff --git a/src/functions/differentials.jl b/src/functions/differentials.jl deleted file mode 100644 index 96a5a6a9e6..0000000000 --- a/src/functions/differentials.jl +++ /dev/null @@ -1,252 +0,0 @@ -@doc raw""" - differential_bezier_control(M::AbstractManifold, b::BezierSegment, t::Float, X::BezierSegment) - differential_bezier_control!( - M::AbstractManifold, - Y, - b::BezierSegment, - t, - X::BezierSegment - ) - -evaluate the differential of the Bézier curve with respect to its control points -`b` and tangent vectors `X` given in the tangent spaces of the control points. The result -is the “change” of the curve at `t```∈[0,1]``. The computation can be done in place of `Y`. - -See [`de_casteljau`](@ref) for more details on the curve. 
-""" -function differential_bezier_control( - M::AbstractManifold, b::BezierSegment, t, X::BezierSegment -) - # iterative, because recursively would be too many Casteljau evals - Y = similar(first(X.pts)) - return differential_bezier_control!(M, Y, b, t, X) -end -function differential_bezier_control!( - M::AbstractManifold, Y, b::BezierSegment, t, X::BezierSegment -) - # iterative, because recursively would be too many Casteljau evals - Z = similar(X.pts) - c = deepcopy(b.pts) - for l in length(c):-1:2 - Z[1:(l - 1)] .= - differential_shortest_geodesic_startpoint.( - Ref(M), c[1:(l - 1)], c[2:l], Ref(t), X.pts[1:(l - 1)] - ) .+ - differential_shortest_geodesic_endpoint.( - Ref(M), c[1:(l - 1)], c[2:l], Ref(t), X.pts[2:l] - ) - c[1:(l - 1)] = shortest_geodesic.(Ref(M), c[1:(l - 1)], c[2:l], Ref(t)) - end - return copyto!(M, Y, Z[1]) -end -@doc raw""" - differential_bezier_control( - M::AbstractManifold, - b::BezierSegment, - T::AbstractVector, - X::BezierSegment, - ) - differential_bezier_control!( - M::AbstractManifold, - Y, - b::BezierSegment, - T::AbstractVector, - X::BezierSegment, - ) - -evaluate the differential of the Bézier curve with respect to its control points -`b` and tangent vectors `X` in the tangent spaces of the control points. The result -is the “change” of the curve at the points `T`, elementwise in ``t∈[0,1]``. -The computation can be done in place of `Y`. - -See [`de_casteljau`](@ref) for more details on the curve. -""" -function differential_bezier_control( - M::AbstractManifold, b::BezierSegment, T::AbstractVector, X::BezierSegment -) - return differential_bezier_control.(Ref(M), Ref(b), T, Ref(X)) -end -function differential_bezier_control!( - M::AbstractManifold, Y, b::BezierSegment, T::AbstractVector, X::BezierSegment -) - return differential_bezier_control!.(Ref(M), Y, Ref(b), T, Ref(X)) -end -@doc raw""" - differential_bezier_control( - M::AbstractManifold, - B::AbstractVector{<:BezierSegment}, - t, - X::AbstractVector{<:BezierSegment} - ) - differential_bezier_control!( - M::AbstractManifold, - Y::AbstractVector{<:BezierSegment} - B::AbstractVector{<:BezierSegment}, - t, - X::AbstractVector{<:BezierSegment} - ) - -evaluate the differential of the composite Bézier curve with respect to its -control points `B` and tangent vectors `Ξ` in the tangent spaces of the control -points. The result is the “change” of the curve at `t```∈[0,N]``, which depends -only on the corresponding segment. Here, ``N`` is the length of `B`. -The computation can be done in place of `Y`. - -See [`de_casteljau`](@ref) for more details on the curve. -""" -function differential_bezier_control( - M::AbstractManifold, - B::AbstractVector{<:BezierSegment}, - t, - X::AbstractVector{<:BezierSegment}, -) - if (0 > t) || (t > length(B)) - return throw( - DomainError( - t, - "The parameter $(t) to evaluate the composite Bézier curve at is outside the interval [0,$(length(B))].", - ), - ) - end - seg = max(ceil(Int, t), 1) - localT = ceil(Int, t) == 0 ? 
0.0 : t - seg + 1 - Y = differential_bezier_control(M, B[seg], localT, X[seg]) - if (Integer(t) == seg) && (seg < length(B)) # boundary case, -> seg-1 is also affecting the boundary differential - Y .+= differential_bezier_control(M, B[seg + 1], localT - 1, X[seg + 1]) - end - return Y -end -function differential_bezier_control!( - M::AbstractManifold, - Y, - B::AbstractVector{<:BezierSegment}, - t, - X::AbstractVector{<:BezierSegment}, -) - if (0 > t) || (t > length(B)) - return throw( - DomainError( - t, - "The parameter $(t) to evaluate the composite Bézier curve at is outside the interval [0,$(length(B))].", - ), - ) - end - seg = max(ceil(Int, t), 1) - localT = ceil(Int, t) == 0 ? 0.0 : t - seg + 1 - differential_bezier_control!(M, Y, B[seg], localT, X[seg]) - if (Integer(t) == seg) && (seg < length(B)) # boundary case, -> seg-1 is also affecting the boundary differential - Y .+= differential_bezier_control(M, B[seg + 1], localT - 1, X[seg + 1]) - end - return Y -end - -@doc raw""" - differential_bezier_control( - M::AbstractManifold, - B::AbstractVector{<:BezierSegment}, - T::AbstractVector - Ξ::AbstractVector{<:BezierSegment} - ) - differential_bezier_control!( - M::AbstractManifold, - Θ::AbstractVector{<:BezierSegment} - B::AbstractVector{<:BezierSegment}, - T::AbstractVector - Ξ::AbstractVector{<:BezierSegment} - ) - -evaluate the differential of the composite Bézier curve with respect to its -control points `B` and tangent vectors `Ξ` in the tangent spaces of the control -points. The result is the “change” of the curve at the points in `T`, which are elementwise -in ``[0,N]``, and each depending the corresponding segment(s). Here, ``N`` is the -length of `B`. For the mutating variant the result is computed in `Θ`. - -See [`de_casteljau`](@ref) for more details on the curve and [Bergmann, Gousenbourger, Front. Appl. Math. Stat., 2018](@cite BergmannGousenbourger:2018). -""" -function differential_bezier_control( - M::AbstractManifold, - B::AbstractVector{<:BezierSegment}, - T::AbstractVector, - Ξ::AbstractVector{<:BezierSegment}, -) - return differential_bezier_control.(Ref(M), Ref(B), T, Ref(Ξ)) -end -function differential_bezier_control!( - M::AbstractManifold, - Y, - B::AbstractVector{<:BezierSegment}, - T::AbstractVector, - Ξ::AbstractVector{<:BezierSegment}, -) - return differential_bezier_control!.(Ref(M), Y, Ref(B), T, Ref(Ξ)) -end - -@doc raw""" - Y = differential_forward_logs(M, p, X) - differential_forward_logs!(M, Y, p, X) - -compute the differential of [`forward_logs`](@ref) ``F`` on the `PowerManifold` manifold -`M` at `p` and direction `X` , in the power manifold array, the differential of the function - -```math -F_i(x) = \sum_{j ∈ \mathcal I_i} \log_{p_i} p_j, \quad i ∈ \mathcal G, -``` - -where ``\mathcal G`` is the set of indices of the `PowerManifold` manifold `M` -and ``\mathcal I_i`` denotes the forward neighbors of ``i``. - -# Input -* `M` – a `PowerManifold` manifold -* `p` – a point. -* `X` – a tangent vector. - -# Output -* `Y` – resulting tangent vector in ``T_x\mathcal N`` representing the differentials of the - logs, where ``\mathcal N`` is the power manifold with the number of dimensions added - to `size(x)`. The computation can also be done in place. -""" -function differential_forward_logs(M::PowerManifold, p, X) - power_size = power_dimensions(M) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - maxInd = last(R).I - d2 = (d > 1) ? 
ones(Int, d + 1) + (d - 1) * (1:(d + 1) .== d + 1) : 1 - if d > 1 - N = PowerManifold(M.manifold, NestedPowerRepresentation(), power_size..., d) - else - N = PowerManifold(M.manifold, NestedPowerRepresentation(), power_size...) - end - Y = zero_vector(N, repeat(p; inner=d2)) - return differential_forward_logs!(M, Y, p, X) -end -function differential_forward_logs!(M::PowerManifold, Y, p, X) - power_size = power_dimensions(M) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - maxInd = last(R).I - e_k_vals = [1 * (1:d .== k) for k in 1:d] - if d > 1 - N = PowerManifold(M.manifold, NestedPowerRepresentation(), power_size..., d) - else - N = PowerManifold(M.manifold, NestedPowerRepresentation(), power_size...) - end - for i in R # iterate over all pixel - for k in 1:d # for all direction combinations - I = i.I # array of index - J = I .+ e_k_vals[k] #i + e_k is j - if all(J .<= maxInd) - # this is neighbor in range, - # collects two, namely in kth direction since xi appears as base and arg - Y[N, I..., k] = - differential_log_basepoint( - M.manifold, p[M, I...], p[M, J...], X[M, I...] - ) .+ differential_log_argument( - M.manifold, p[M, I...], p[M, J...], X[M, J...] - ) - else - Y[N, I..., k] = zero_vector(M.manifold, p[M, I...]) - end - end # directions - end # i in R - return Y -end diff --git a/src/functions/gradients.jl b/src/functions/gradients.jl deleted file mode 100644 index e77c784c3e..0000000000 --- a/src/functions/gradients.jl +++ /dev/null @@ -1,522 +0,0 @@ -@doc raw""" - grad_acceleration_bezier( - M::AbstractManifold, - B::AbstractVector, - degrees::AbstractVector{<:Integer} - T::AbstractVector - ) - -compute the gradient of the discretized acceleration of a (composite) Bézier curve ``c_B(t)`` -on the `Manifold` `M` with respect to its control points `B` given as a point on the -`PowerManifold` assuming C1 conditions and known `degrees`. The curve is -evaluated at the points given in `T` (elementwise in ``[0,N]``, where ``N`` is the -number of segments of the Bézier curve). The [`get_bezier_junctions`](@ref) are fixed for -this gradient (interpolation constraint). For the unconstrained gradient, -see [`grad_L2_acceleration_bezier`](@ref) and set ``λ=0`` therein. This gradient is computed using -`adjoint_Jacobi_field`s. For details, see [Bergmann, Gousenbourger, Front. Appl. Math. Stat., 2018](@cite BergmannGousenbourger:2018). -See [`de_casteljau`](@ref) for more details on the curve. - -# See also - -[`cost_acceleration_bezier`](@ref), [`grad_L2_acceleration_bezier`](@ref), [`cost_L2_acceleration_bezier`](@ref). 
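The acceleration gradient can be sketched directly in the reduced representation from the round-trip example above (`pts`, `degs`); the evaluation times below are an illustrative equispaced choice, not prescribed by this file.

```julia
T = collect(range(0.0, 2.0; length=11))     # equispaced times on [0, length(B)]
grad_acceleration_bezier(M, pts, degs, T)   # tangent vectors in the same reduced form as pts
```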
-""" -function grad_acceleration_bezier( - M::AbstractManifold, - B::AbstractVector, - degrees::AbstractVector{<:Integer}, - T::AbstractVector, -) - gradB = _grad_acceleration_bezier(M, B, degrees, T) - Bt = get_bezier_segments(M, B, degrees, :differentiable) - for k in 1:length(Bt) # we interpolate so we do not move end points - zero_vector!(M, gradB[k].pts[end], Bt[k].pts[end]) - zero_vector!(M, gradB[k].pts[1], Bt[k].pts[1]) - end - zero_vector!(M, gradB[end].pts[end], Bt[end].pts[end]) - return get_bezier_points(M, gradB, :differentiable) -end -function grad_acceleration_bezier(M::AbstractManifold, b::BezierSegment, T::AbstractVector) - gradb = _grad_acceleration_bezier(M, b.pts, [get_bezier_degree(M, b)], T)[1] - zero_vector!(M, gradb.pts[1], b.pts[1]) - zero_vector!(M, gradb.pts[end], b.pts[end]) - return gradb -end - -@doc raw""" - grad_L2_acceleration_bezier( - M::AbstractManifold, - B::AbstractVector{P}, - degrees::AbstractVector{<:Integer}, - T::AbstractVector, - λ, - d::AbstractVector{P} - ) where {P} - -compute the gradient of the discretized acceleration of a composite Bézier curve -on the `Manifold` `M` with respect to its control points `B` together with a -data term that relates the junction points `p_i` to the data `d` with a weight -``λ`` compared to the acceleration. The curve is evaluated at the points -given in `pts` (elementwise in ``[0,N]``), where ``N`` is the number of segments of -the Bézier curve. The summands are [`grad_distance`](@ref) for the data term -and [`grad_acceleration_bezier`](@ref) for the acceleration with interpolation constrains. -Here the [`get_bezier_junctions`](@ref) are included in the optimization, i.e. setting ``λ=0`` -yields the unconstrained acceleration minimization. Note that this is ill-posed, since -any Bézier curve identical to a geodesic is a minimizer. - -Note that the Bézier-curve is given in reduces form as a point on a `PowerManifold`, -together with the `degrees` of the segments and assuming a differentiable curve, the segments -can internally be reconstructed. - -# See also - -[`grad_acceleration_bezier`](@ref), [`cost_L2_acceleration_bezier`](@ref), [`cost_acceleration_bezier`](@ref). -""" -function grad_L2_acceleration_bezier( - M::AbstractManifold, - B::AbstractVector{P}, - degrees::AbstractVector{<:Integer}, - T::AbstractVector, - λ, - d::AbstractVector{P}, -) where {P} - gradB = _grad_acceleration_bezier(M, B, degrees, T) - Bt = get_bezier_segments(M, B, degrees, :differentiable) - # add start and end data grad - # include data term - for k in 1:length(Bt) - gradB[k].pts[1] .+= λ * grad_distance(M, d[k], Bt[k].pts[1]) - if k > 1 - gradB[k - 1].pts[end] .+= λ * grad_distance(M, d[k], Bt[k].pts[1]) - end - end - gradB[end].pts[end] .+= λ * grad_distance(M, d[end], Bt[end].pts[end]) - return get_bezier_points(M, gradB, :differentiable) -end - -# common helper for the two acceleration grads -function _grad_acceleration_bezier( - M::AbstractManifold, - B::AbstractVector, - degrees::AbstractVector{<:Integer}, - T::AbstractVector, -) - Bt = get_bezier_segments(M, B, degrees, :differentiable) - n = length(T) - m = length(Bt) - p = de_casteljau(M, Bt, T) - center = p - forward = p[[1, 3:n..., n]] - backward = p[[1, 1:(n - 2)..., n]] - mid = mid_point.(Ref(M), backward, forward) - # where the point of interest appears... - dt = (max(T...) 
- min(T...)) / (n - 1) - inner = -2 / ((dt)^3) .* log.(Ref(M), mid, center) - asForward = - adjoint_differential_shortest_geodesic_startpoint.( - Ref(M), forward, backward, Ref(0.5), inner - ) - asCenter = -2 / ((dt)^3) * log.(Ref(M), center, mid) - asBackward = - adjoint_differential_shortest_geodesic_endpoint.( - Ref(M), forward, backward, Ref(0.5), inner - ) - # effect of these to the control points is the preliminary gradient - grad_B = [ - BezierSegment(a.pts .+ b.pts .+ c.pts) for (a, b, c) in zip( - adjoint_differential_bezier_control(M, Bt, T[[1, 3:n..., n]], asForward), - adjoint_differential_bezier_control(M, Bt, T, asCenter), - adjoint_differential_bezier_control(M, Bt, T[[1, 1:(n - 2)..., n]], asBackward), - ) - ] - for k in 1:(length(Bt) - 1) # add both effects of left and right segments - X = grad_B[k + 1].pts[1] + grad_B[k].pts[end] - grad_B[k].pts[end] .= X - grad_B[k + 1].pts[1] .= X - end - # include c0 & C1 condition - for k in length(Bt):-1:2 - m = length(Bt[k].pts) - # updates b- - X1 = - grad_B[k - 1].pts[end - 1] .+ adjoint_differential_shortest_geodesic_startpoint( - M, Bt[k - 1].pts[end - 1], Bt[k].pts[1], 2.0, grad_B[k].pts[2] - ) - # update b+ - though removed in reduced form - X2 = - grad_B[k].pts[2] .+ adjoint_differential_shortest_geodesic_startpoint( - M, Bt[k].pts[2], Bt[k].pts[1], 2.0, grad_B[k - 1].pts[end - 1] - ) - # update p - effect from left and right segment as well as from c1 cond - X3 = - grad_B[k].pts[1] .+ adjoint_differential_shortest_geodesic_endpoint( - M, Bt[k - 1].pts[m - 1], Bt[k].pts[1], 2.0, grad_B[k].pts[2] - ) - # store - grad_B[k - 1].pts[end - 1] .= X1 - grad_B[k].pts[2] .= X2 - grad_B[k].pts[1] .= X3 - grad_B[k - 1].pts[end] .= X3 - end - return grad_B -end - -@doc raw""" - grad_distance(M,y,x[, p=2]) - grad_distance!(M,X,y,x[, p=2]) - -compute the (sub)gradient of the distance (squared), in place of `X`. - -```math -f(x) = \frac{1}{p} d^p_{\mathcal M}(x,y) -``` - -to a fixed point `y` on the manifold `M` and `p` is an -integer. The gradient reads - -```math - \operatorname{grad}f(x) = -d_{\mathcal M}^{p-2}(x,y)\log_xy -``` - -for ``p\neq 1`` or ``x\neq y``. Note that for the remaining case ``p=1``, -``x=y`` the function is not differentiable. In this case, the function returns the -corresponding zero tangent vector, since this is an element of the subdifferential. - -# Optional - -* `p` – (`2`) the exponent of the distance, i.e. the default is the squared - distance -""" -function grad_distance(M, y, x, p::Int=2) - if p == 2 - return -log(M, x, y) - elseif p == 1 && x == y - return zero_vector(M, x) - else - return -distance(M, x, y)^(p - 2) * log(M, x, y) - end -end -function grad_distance!(M, X, y, x, p::Int=2) - log!(M, X, x, y) - if p == 2 - X .*= -one(eltype(X)) - elseif p == 1 && x == y - X = zero_vector(M, x) - else - X .*= -distance(M, x, y)^(p - 2) - end - return X -end - -@doc raw""" - grad_u,⁠ grad_v = grad_intrinsic_infimal_convolution_TV12(M, f, u, v, α, β) - -compute (sub)gradient of the intrinsic infimal convolution model using the mid point -model of second order differences, see [`costTV2`](@ref), i.e. for some ``f ∈ \mathcal M`` -on a `PowerManifold` manifold ``\mathcal M`` this function computes the (sub)gradient of - -```math -E(u,v) = -\frac{1}{2}\sum_{i ∈ \mathcal G} d_{\mathcal M}(g(\frac{1}{2},v_i,w_i),f_i) -+ \alpha -\bigl( -β\mathrm{TV}(v) + (1-β)\mathrm{TV}_2(w) -\bigr), -``` -where both total variations refer to the intrinsic ones, [`grad_TV`](@ref) and -[`grad_TV2`](@ref), respectively. 
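Two hedged calls illustrating `grad_distance` as defined above (same `M = Sphere(2)` as before); the second call shows the subgradient convention at the non-differentiable point.

```julia
q = [0.0, 1.0, 0.0]         # the fixed data point
p = [1.0, 0.0, 0.0]
grad_distance(M, q, p)      # equals -log(M, p, q) for the default exponent 2
grad_distance(M, q, q, 1)   # p = 1 at q itself: the zero tangent vector from the subdifferential
```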
-""" -function grad_intrinsic_infimal_convolution_TV12(M::AbstractManifold, f, u, v, α, β) - c = mid_point(M, u, v, f) - iL = log(M, c, f) - return adjoint_differential_shortest_geodesic_startpoint(M, u, v, 1 / 2, iL) + - α * β * grad_TV(M, u), - adjoint_differential_shortest_geodesic_endpoint(M, u, v, 1 / 2, iL) + - α * (1 - β) * grad_TV2(M, v) -end -@doc raw""" - X = grad_TV(M, (x,y)[, p=1]) - grad_TV!(M, X, (x,y)[, p=1]) - -compute the (sub) gradient of ``\frac{1}{p}d^p_{\mathcal M}(x,y)`` with respect -to both ``x`` and ``y`` (in place of `X` and `Y`). -""" -function grad_TV(M::AbstractManifold, q::Tuple{T,T}, p=1) where {T} - if p == 2 - return (-log(M, q[1], q[2]), -log(M, q[2], q[1])) - else - d = distance(M, q[1], q[2]) - if d == 0 # subdifferential containing zero - return (zero_vector(M, q[1]), zero_vector(M, q[2])) - else - return (-log(M, q[1], q[2]) / (d^(2 - p)), -log(M, q[2], q[1]) / (d^(2 - p))) - end - end -end -function grad_TV!(M::AbstractManifold, X, q::Tuple{T,T}, p=1) where {T} - d = distance(M, q[1], q[2]) - if d == 0 # subdifferential containing zero - zero_vector!(M, X[1], q[1]) - zero_vector!(M, X[2], q[2]) - return X - end - log!(M, X[1], q[1], q[2]) - log!(M, X[2], q[2], q[1]) - if p == 2 - X[1] .*= -1 - X[2] .*= -1 - else - X[1] .*= -1 / (d^(2 - p)) - X[2] .*= -1 / (d^(2 - p)) - end - return X -end -@doc raw""" - X = grad_TV(M, λ, x[, p=1]) - grad_TV!(M, X, λ, x[, p=1]) - -Compute the (sub)gradient ``\partial F`` of all forward differences occurring, -in the power manifold array, i.e. of the function - -```math -F(x) = \sum_{i}\sum_{j ∈ \mathcal I_i} d^p(x_i,x_j) -``` - -where ``i`` runs over all indices of the `PowerManifold` manifold `M` -and ``\mathcal I_i`` denotes the forward neighbors of ``i``. - -# Input -* `M` – a `PowerManifold` manifold -* `x` – a point. - -# Output -* X – resulting tangent vector in ``T_x\mathcal M``. The computation can also be done in place. -""" -function grad_TV(M::PowerManifold, x, p::Int=1) - power_size = power_dimensions(M) - rep_size = representation_size(M.manifold) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - maxInd = last(R) - X = zero_vector(M, x) - c = costTV(M, x, p, 0) - for i in R # iterate over all pixel - di = 0.0 - for k in 1:d # for all direction combinations - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - j = i + ek # compute neighbor - if all(map(<=, j.I, maxInd.I)) # is this neighbor in range? - if p != 1 - g = (c[i] == 0 ? 1 : 1 / c[i]) .* grad_TV(M.manifold, (x[i], x[j]), p) # Compute TV on these - else - g = grad_TV(M.manifold, (x[i], x[j]), p) # Compute TV on these - end - X[i] += g[1] - X[j] += g[2] - end - end # directions - end # i in R - return X -end -function grad_TV!(M::PowerManifold, X, x, p::Int=1) - power_size = power_dimensions(M) - rep_size = representation_size(M.manifold) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - maxInd = last(R) - c = costTV(M, x, p, 0) - g = [zero_vector(M.manifold, x[first(R)]), zero_vector(M.manifold, x[first(R)])] - for i in R # iterate over all pixel - di = 0.0 - for k in 1:d # for all direction combinations - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - j = i + ek # compute neighbor - if all(map(<=, j.I, maxInd.I)) # is this neighbor in range? 
- grad_TV!(M.manifold, g, (x[i], x[j]), p) # Compute TV on these - if p != 1 - (c[i] != 0) && (g[1] .*= 1 / c[i]) - (c[i] != 0) && (g[2] .*= 1 / c[i]) - end - X[i] += g[1] - X[j] += g[2] - end - end # directions - end # i in R - return X -end - -@doc raw""" - Y = forward_logs(M,x) - forward_logs!(M, Y, x) - -compute the forward logs ``F`` (generalizing forward differences) occurring, -in the power manifold array, the function - -```math -F_i(x) = \sum_{j ∈ \mathcal I_i} \log_{x_i} x_j,\quad i ∈ \mathcal G, -``` - -where ``\mathcal G`` is the set of indices of the `PowerManifold` manifold `M` and -``\mathcal I_i`` denotes the forward neighbors of ``i``. This can also be done in place of `ξ`. - -# Input -* `M` – a `PowerManifold` manifold -* `x` – a point. - -# Output -* `Y` – resulting tangent vector in ``T_x\mathcal M`` representing the logs, where - ``\mathcal N`` is the power manifold with the number of dimensions added to `size(x)`. - The computation can be done in place of `Y`. -""" -function forward_logs(M::PowerManifold{𝔽,TM,TSize,TPR}, p) where {𝔽,TM,TSize,TPR} - power_size = power_dimensions(M) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - sX = size(p) - maxInd = last(R).I - if d > 1 - d2 = fill(1, d + 1) - d2[d + 1] = d - else - d2 = 1 - end - sN = d > 1 ? [power_size..., d] : [power_size...] - N = PowerManifold(M.manifold, TPR(), sN...) - xT = repeat(p; inner=d2) - X = zero_vector(N, xT) - e_k_vals = [1 * (1:d .== k) for k in 1:d] - for i in R # iterate over all pixel - for k in 1:d # for all direction combinations - I = i.I - J = I .+ 1 .* e_k_vals[k] #i + e_k is j - if all(J .<= maxInd) # is this neighbor in range? - j = CartesianIndex{d}(J...) # neighbour index as Cartesian Index - X[N, i.I..., k] = log(M.manifold, p[M, i.I...], p[M, j.I...]) - end - end # directions - end # i in R - return X -end -function forward_logs!(M::PowerManifold{𝔽,TM,TSize,TPR}, X, p) where {𝔽,TM,TSize,TPR} - power_size = power_dimensions(M) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - sX = size(p) - maxInd = last(R).I - if d > 1 - d2 = fill(1, d + 1) - d2[d + 1] = d - else - d2 = 1 - end - sN = d > 1 ? [power_size..., d] : [power_size...] - N = PowerManifold(M.manifold, TPR(), sN...) - e_k_vals = [1 * (1:d .== k) for k in 1:d] - for i in R # iterate over all pixel - for k in 1:d # for all direction combinations - I = i.I - J = I .+ 1 .* e_k_vals[k] #i + e_k is j - if all(J .<= maxInd) # is this neighbor in range? - j = CartesianIndex{d}(J...) # neighbour index as Cartesian Index - X[N, i.I..., k] = log(M.manifold, p[M, i.I...], p[M, j.I...]) - end - end # directions - end # i in R - return X -end - -@doc raw""" - Y = grad_TV2(M, q[, p=1]) - grad_TV2!(M, Y, q[, p=1]) - -computes the (sub) gradient of ``\frac{1}{p}d_2^p(q_1, q_2, q_3)`` with respect -to all three components of ``q∈\mathcal M^3``, where ``d_2`` denotes the second order -absolute difference using the mid point model, i.e. let - -```math -\mathcal C = \bigl\{ c ∈ \mathcal M \ |\ g(\tfrac{1}{2};q_1,q_3) \text{ for some geodesic }g\bigr\} -``` -denote the mid points between ``q_1`` and ``q_3`` on the manifold ``\mathcal M``. -Then the absolute second order difference is defined as - -```math -d_2(q_1,q_2,q_3) = \min_{c ∈ \mathcal C_{q_1,q_3}} d(c, q_2). -``` - -While the (sub)gradient with respect to ``q_2`` is easy, the other two require -the evaluation of an `adjoint_Jacobi_field`. 
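On a small `PowerManifold` signal, the TV cost and its (sub)gradient from above can be checked by hand; this sketch keeps the earlier assumptions and uses three illustrative unit vectors as data.

```julia
N = PowerManifold(Sphere(2), NestedPowerRepresentation(), 3)
q = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
costTV(N, q)    # sum of distances of forward neighbors, here 2 ⋅ π/2 = π
grad_TV(N, q)   # one (sub)gradient tangent vector per entry of q
```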
-""" -function grad_TV2(M::AbstractManifold, q, p::Int=1) - X = [zero_vector(M, x) for x in q] - return grad_TV2!(M, X, q, p) -end -function grad_TV2!(M::AbstractManifold, X, q, p::Int=1) - c = mid_point(M, q[1], q[3], q[2]) # nearest mid point of x and z to y - d = distance(M, q[2], c) - innerLog = -log(M, c, q[2]) - if p == 2 - X[1] .= adjoint_differential_shortest_geodesic_startpoint( - M, q[1], q[3], 1 / 2, innerLog - ) - log!(M, X[2], q[2], c) - X[2] .*= -1 - X[3] .= adjoint_differential_shortest_geodesic_endpoint( - M, q[1], q[3], 1 / 2, innerLog - ) - else - if d == 0 # subdifferential containing zero - for i in 1:3 - zero_vector!(M, X[i], q[i]) - end - else - X[1] .= adjoint_differential_shortest_geodesic_startpoint( - M, q[1], q[3], 1 / 2, innerLog / (d^(2 - p)) - ) - log!(M, X[2], q[2], c) - X[2] .*= -1 / (d^(2 - p)) - X[3] .= adjoint_differential_shortest_geodesic_endpoint( - M, q[1], q[3], 1 / 2, innerLog / (d^(2 - p)) - ) - end - end - return X -end -@doc raw""" - grad_TV2(M::PowerManifold, q[, p=1]) - -computes the (sub) gradient of ``\frac{1}{p}d_2^p(q_1,q_2,q_3)`` -with respect to all ``q_1,q_2,q_3`` occurring along any array dimension in the -point `q`, where `M` is the corresponding `PowerManifold`. -""" -function grad_TV2(M::PowerManifold, q, p::Int=1) - X = zero_vector(M, q) - return grad_TV2!(M, X, q, p) -end -function grad_TV2!(M::PowerManifold, X, q, p::Int=1) - power_size = power_dimensions(M) - rep_size = representation_size(M.manifold) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - minInd, maxInd = first(R), last(R) - c = costTV2(M, q, p, false) - for i in R # iterate over all pixel - di = 0.0 - for k in 1:d # for all direction combinations - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - jF = i + ek # compute forward neighbor - jB = i - ek # compute backward neighbor - if all(map(<=, jF.I, maxInd.I)) && all(map(>=, jB.I, minInd.I)) # are neighbors in range? - if p != 1 - g = - (c[i] == 0 ? 1 : 1 / c[i]) .* - grad_TV2(M.manifold, (q[jB], q[i], q[jF]), p) # Compute TV2 on these - else - g = grad_TV2(M.manifold, (q[jB], q[i], q[jF]), p) # Compute TV2 on these - end - X[M, jB.I...] = g[1] - X[M, i.I...] = g[2] - X[M, jF.I...] = g[3] - end - end # directions - end # i in R - return X -end diff --git a/src/functions/proximal_maps.jl b/src/functions/proximal_maps.jl deleted file mode 100644 index 378dad66a2..0000000000 --- a/src/functions/proximal_maps.jl +++ /dev/null @@ -1,557 +0,0 @@ -@doc raw""" - y = prox_distance(M,λ,f,x [, p=2]) - prox_distance!(M, y, λ, f, x [, p=2]) - -compute the proximal map ``\operatorname{prox}_{λ\varphi}`` with -parameter λ of ``φ(x) = \frac{1}{p}d_{\mathcal M}^p(f,x)``. -For the mutating variant the computation is done in place of `y`. - -# Input -* `M` – a manifold `M` -* `λ` – the prox parameter -* `f` – a point ``f ∈ \mathcal M`` (the data) -* `x` – the argument of the proximal map - -# Optional argument -* `p` – (`2`) exponent of the distance. - -# Output -* `y` – the result of the proximal map of ``φ`` -""" -function prox_distance(M::AbstractManifold, λ, f, x, p::Int=2) - d = distance(M, f, x) - if p == 2 - t = λ / (1 + λ) - elseif p == 1 - t = (λ < d) ? 
λ / d : 1.0 - else - throw( - ErrorException( - "Proximal Map of distance(M,f,x) not implemented for p=$(p) (requires p=1 or 2)", - ), - ) - end - return exp(M, x, log(M, x, f), t) -end -function prox_distance!(M::AbstractManifold, y, λ, f, x, p::Int=2) - d = distance(M, f, x) - if p == 2 - t = λ / (1 + λ) - elseif p == 1 - t = (λ < d) ? λ / d : 1.0 - else - throw( - ErrorException( - "Proximal Map of distance(M,f,x) not implemented for p=$(p) (requires p=1 or 2)", - ), - ) - end - return exp!(M, y, x, log(M, x, f), t) -end - -@doc raw""" - [y1,y2] = prox_TV(M, λ, [x1,x2] [,p=1]) - prox_TV!(M, [y1,y2] λ, [x1,x2] [,p=1]) - -Compute the proximal map ``\operatorname{prox}_{λ\varphi}`` of -``φ(x,y) = d_{\mathcal M}^p(x,y)`` with -parameter `λ`. - -# Input - -* `M` – a manifold `M` -* `λ` – a real value, parameter of the proximal map -* `(x1,x2)` – a tuple of two points, - -# Optional - -(default is given in brackets) -* `p` – (1) exponent of the distance of the TV term - -# Output - -* `(y1,y2)` – resulting tuple of points of the ``\operatorname{prox}_{λφ}(```(x1,x2)```)``. - The result can also be computed in place. -""" -function prox_TV(M::AbstractManifold, λ::Number, x::Tuple{T,T}, p::Int=1) where {T} - d = distance(M, x[1], x[2]) - if p == 1 - t = min(0.5, λ / d) - elseif p == 2 - t = λ / (1 + 2 * λ) - else - throw( - ErrorException( - "Proximal Map of TV(M,x1,x2,p) not implemented for p=$(p) (requires p=1 or 2)", - ), - ) - end - return (exp(M, x[1], log(M, x[1], x[2]), t), exp(M, x[2], log(M, x[2], x[1]), t)) -end -function prox_TV!(M::AbstractManifold, y, λ::Number, x::Tuple{T,T}, p::Int=1) where {T} - d = distance(M, x[1], x[2]) - if p == 1 - t = min(0.5, λ / d) - elseif p == 2 - t = λ / (1 + 2 * λ) - else - throw( - ErrorException( - "Proximal Map of TV(M,x1,x2,p) not implemented for p=$(p) (requires p=1 or 2)", - ), - ) - end - X1 = log(M, x[1], x[2]) - X2 = log(M, x[2], x[1]) - exp!(M, y[1], x[1], X1, t) - exp!(M, y[2], x[2], X2, t) - return y -end -@doc raw""" - ξ = prox_TV(M,λ,x [,p=1]) - -compute the proximal maps ``\operatorname{prox}_{λ\varphi}`` of -all forward differences occurring in the power manifold array, i.e. -``\varphi(xi,xj) = d_{\mathcal M}^p(xi,xj)`` with `xi` and `xj` are array -elements of `x` and `j = i+e_k`, where `e_k` is the ``k``th unit vector. -The parameter `λ` is the prox parameter. - -# Input -* `M` – a manifold `M` -* `λ` – a real value, parameter of the proximal map -* `x` – a point. - -# Optional -(default is given in brackets) -* `p` – (1) exponent of the distance of the TV term - -# Output -* `y` – resulting point containing with all mentioned proximal - points evaluated (in a cyclic order). The computation can also be done in place -""" -function prox_TV(M::PowerManifold, λ, x, p::Int=1) - y = deepcopy(x) - power_size = power_dimensions(M) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - maxInd = last(R).I - for k in 1:d # for all directions - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - for l in 0:1 - for i in R # iterate over all pixel - if (i[k] % 2) == l - J = i.I .+ ek.I #i + e_k is j - if all(J .<= maxInd) # is this neighbor in range? - j = CartesianIndex(J...) 
# neighbour index as Cartesian Index - (y[i], y[j]) = prox_TV(M.manifold, λ, (y[i], y[j]), p) # Compute TV on these - end - end - end # i in R - end # even odd - end # directions - return y -end -function prox_TV!(M::PowerManifold, y, λ, x, p::Int=1) - power_size = power_dimensions(M) - R = CartesianIndices(Tuple(power_size)) - d = length(power_size) - copyto!(M, y, x) - maxInd = last(R).I - for k in 1:d # for all directions - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - for l in 0:1 - for i in R # iterate over all pixel - if (i[k] % 2) == l # even/odd splitting - J = i.I .+ ek.I #i + e_k is j - if all(J .<= maxInd) # is this neighbor in range? - j = CartesianIndex(J...) # neighbour index as Cartesian Index - prox_TV!(M.manifold, [y[i], y[j]], λ, (y[i], y[j]), p) # Compute TV on these - end - end - end # i in R - end # even odd - end # directions - return y -end -@doc raw""" - y = prox_parallel_TV(M, λ, x [,p=1]) - prox_parallel_TV!(M, y, λ, x [,p=1]) - -compute the proximal maps ``\operatorname{prox}_{λφ}`` of -all forward differences occurring in the power manifold array, i.e. -``φ(x_i,x_j) = d_{\mathcal M}^p(x_i,x_j)`` with `xi` and `xj` are array -elements of `x` and `j = i+e_k`, where `e_k` is the ``k``th unit vector. -The parameter `λ` is the prox parameter. - -# Input -* `M` – a `PowerManifold` manifold -* `λ` – a real value, parameter of the proximal map -* `x` – a point - -# Optional -(default is given in brackets) -* `p` – (`1`) exponent of the distance of the TV term - -# Output -* `y` – resulting Array of points with all mentioned proximal - points evaluated (in a parallel within the arrays elements). - The computation can also be done in place. - -*See also* [`prox_TV`](@ref) -""" -function prox_parallel_TV(M::PowerManifold, λ, x::AbstractVector, p::Int=1) - R = CartesianIndices(x[1]) - d = ndims(x[1]) - if length(x) != 2 * d - throw( - ErrorException( - "The number of inputs from the array ($(length(x))) has to be twice the data dimensions ($(d)).", - ), - ) - end - maxInd = Tuple(last(R)) - # create an array for even/odd splitted proxes along every dimension - y = deepcopy(x) - yV = reshape(y, d, 2) - xV = reshape(x, d, 2) - for k in 1:d # for all directions - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - for l in 0:1 # even odd - for i in R # iterate over all pixel - if (i[k] % 2) == l - J = i.I .+ ek.I #i + e_k is j - if all(J .<= maxInd) # is this neighbor in range? - j = CartesianIndex(J...) # neighbour index as Cartesian Index - # parallel means we apply each (direction even/odd) to a separate copy of the data. - (yV[k, l + 1][i], yV[k, l + 1][j]) = prox_TV( - M.manifold, λ, (xV[k, l + 1][i], xV[k, l + 1][j]), p - ) # Compute TV on these - end - end - end # i in R - end # even odd - end # directions - return y -end -function prox_parallel_TV!( - M::PowerManifold, y::AbstractVector, λ, x::AbstractVector, p::Int=1 -) - R = CartesianIndices(x[1]) - d = ndims(x[1]) - if length(x) != 2 * d - throw( - ErrorException( - "The number of inputs from the array ($(length(x))) has to be twice the data dimensions ($(d)).", - ), - ) - end - maxInd = Tuple(last(R)) - # init y - for i in 1:length(x) - copyto!(M, y[i], x[i]) - end - yV = reshape(y, d, 2) - xV = reshape(x, d, 2) - for k in 1:d # for all directions - ek = CartesianIndex(ntuple(i -> (i == k) ? 
1 : 0, d)) #k th unit vector - for l in 0:1 # even odd - for i in R # iterate over all pixel - if (i[k] % 2) == l - J = i.I .+ ek.I #i + e_k is j - if all(J .<= maxInd) # is this neighbor in range? - j = CartesianIndex(J...) # neighbour index as Cartesian Index - # parallel means we apply each (direction even/odd) to a separate copy of the data. - prox_TV!( - M.manifold, - [yV[k, l + 1][i], yV[k, l + 1][j]], - λ, - (xV[k, l + 1][i], xV[k, l + 1][j]), - p, - ) # Compute TV on these in place of y - end - end - end # i in R - end # even odd - end # directions - return y -end - -@doc raw""" - (y1,y2,y3) = prox_TV2(M,λ,(x1,x2,x3),[p=1], kwargs...) - prox_TV2!(M, y, λ,(x1,x2,x3),[p=1], kwargs...) - -Compute the proximal map ``\operatorname{prox}_{λ\varphi}`` of -``\varphi(x_1,x_2,x_3) = d_{\mathcal M}^p(c(x_1,x_3),x_2)`` with -parameter `λ`>0, where ``c(x,z)`` denotes the mid point of a shortest -geodesic from `x1` to `x3` that is closest to `x2`. -The result can be computed in place of `y`. - -# Input - -* `M` – a manifold -* `λ` – a real value, parameter of the proximal map -* `(x1,x2,x3)` – a tuple of three points - -* `p` – (`1`) exponent of the distance of the TV term - -# Optional -`kwargs...` – parameters for the internal [`subgradient_method`](@ref) - (if `M` is neither `Euclidean` nor `Circle`, since for these a closed form - is given) - -# Output -* `(y1,y2,y3)` – resulting tuple of points of the proximal map. - The computation can also be done in place. -""" -function prox_TV2( - M::AbstractManifold, - λ, - x::Tuple{T,T,T}, - p::Int=1; - stopping_criterion::StoppingCriterion=StopAfterIteration(10), - kwargs..., -) where {T} - (p != 1) && throw( - ErrorException( - "Proximal Map of TV2(M, λ, x, p) not implemented for p=$(p) (requires p=1) on general manifolds.", - ), - ) - PowX = [x...] - PowM = PowerManifold(M, NestedPowerRepresentation(), 3) - xR = PowX - F(M, x) = 1 / 2 * distance(M, PowX, x)^2 + λ * costTV2(M, x) - ∂F(PowM, x) = log(PowM, x, PowX) + λ * grad_TV2(PowM, x) - subgradient_method!(PowM, F, ∂F, xR; stopping_criterion=stopping_criterion, kwargs...) - return (xR...,) -end -function prox_TV2!( - M::AbstractManifold, - y, - λ, - x::Tuple{T,T,T}, - p::Int=1; - stopping_criterion::StoppingCriterion=StopAfterIteration(10), - kwargs..., -) where {T} - (p != 1) && throw( - ErrorException( - "Proximal Map of TV2(M, λ, x, p) not implemented for p=$(p) (requires p=1) on general manifolds.", - ), - ) - PowX = [x...] - PowM = PowerManifold(M, NestedPowerRepresentation(), 3) - copyto!(M, y, PowX) - F(M, x) = 1 / 2 * distance(M, PowX, x)^2 + λ * costTV2(M, x) - ∂F!(M, y, x) = log!(M, y, x, PowX) + λ * grad_TV2!(M, y, x) - subgradient_method!( - PowM, - F, - ∂F!, - y; - stopping_criterion=stopping_criterion, - evaluation=InplaceEvaluation(), - kwargs..., - ) - return y -end -@doc raw""" - y = prox_TV2(M, λ, x[, p=1]) - prox_TV2!(M, y, λ, x[, p=1]) - -compute the proximal maps ``\operatorname{prox}_{λ\varphi}`` of -all centered second order differences occurring in the power manifold array, i.e. -``\varphi(x_k,x_i,x_j) = d_2(x_k,x_i.x_j)``, where ``k,j`` are backward and forward -neighbors (along any dimension in the array of `x`). -The parameter `λ` is the prox parameter. - -# Input -* `M` – a manifold `M` -* `λ` – a real value, parameter of the proximal map -* `x` – a points. - -# Optional -(default is given in brackets) -* `p` – (`1`) exponent of the distance of the TV term - -# Output -* `y` – resulting point with all mentioned proximal points evaluated (in a cyclic order). 
- The computation can also be done in place. -""" -function prox_TV2(M::PowerManifold{N,T}, λ, x, p::Int=1) where {N,T} - y = deepcopy(x) - return prox_TV2!(M, y, λ, x, p) -end -function prox_TV2!(M::PowerManifold{N,T}, y, λ, x, p::Int=1) where {N,T} - power_size = power_dimensions(M) - R = CartesianIndices(power_size) - d = length(size(x)) - minInd = first(R).I - maxInd = last(R).I - copyto!(M, y, x) - for k in 1:d # for all directions - ek = CartesianIndex(ntuple(i -> (i == k) ? 1 : 0, d)) #k th unit vector - for l in 0:2 - for i in R # iterate over all pixel - if (i[k] % 3) == l - JForward = i.I .+ ek.I #i + e_k - JBackward = i.I .- ek.I # i - e_k - all(JForward .<= maxInd) && - all(JBackward .>= minInd) && - prox_TV2!( - M.manifold, - [y[M, JBackward...], y[M, i.I...], y[M, JForward...]], - λ, - (y[M, JBackward...], y[M, i.I...], y[M, JForward...]), - p, - ) - end # if mod 3 - end # i in R - end # for mod 3 - end # directions - return y -end -@doc raw""" - project_collaborative_TV(M, λ, x, Ξ[, p=2,q=1]) - project_collaborative_TV!(M, Θ, λ, x, Ξ[, p=2,q=1]) - -compute the projection onto collaborative Norm unit (or α-) ball, i.e. of the function - -```math -F^q(x) = \sum_{i∈\mathcal G} - \Bigl( \sum_{j∈\mathcal I_i} - \sum_{k=1}^d \lVert X_{i,j}\rVert_x^p\Bigr)^\frac{q}{p}, -``` - -where ``\mathcal G`` is the set of indices for ``x∈\mathcal M`` and ``\mathcal I_i`` -is the set of its forward neighbors. -The computation can also be done in place of `Θ`. - -This is adopted from the paper [Duran, Möller, Sbert, Cremers, SIAM J Imag Sci, 2016](@cite DuranMoelleSbertCremers:2016), -see their Example 3 for details. -""" -function project_collaborative_TV(N::PowerManifold, λ, x, Ξ, p=2.0, q=1.0, α=1.0) - pdims = power_dimensions(N) - if length(pdims) == 1 - d = 1 - s = 1 - iRep = (1,) - else - d = pdims[end] - s = length(pdims) - 1 - if s != d - throw( - ErrorException( - "the last dimension ($(d)) has to be equal to the number of the previous ones ($(s)) but its not.", - ), - ) - end - iRep = (Integer.(ones(d))..., d) - end - if q == 1 # Example 3 case 2 - if p == 1 - normΞ = norm.(Ref(N.manifold), x, Ξ) - return max.(normΞ .- λ, 0.0) ./ ((normΞ .== 0) .+ normΞ) .* Ξ - end - if p == 2 # Example 3 case 3 - norms = sqrt.(sum(norm.(Ref(N.manifold), x, Ξ) .^ 2; dims=d + 1)) - if length(iRep) > 1 - norms = repeat(norms; inner=iRep) - end - # if the norm is zero add 1 to avoid division by zero, also then the - # nominator is already (max(-λ,0) = 0) so it stays zero then - return max.(norms .- λ, 0.0) ./ ((norms .== 0) .+ norms) .* Ξ - end - throw(ErrorException("The case p=$p, q=$q is not yet implemented")) - elseif q == Inf - if p == 2 - norms = sqrt.(sum(norm.(Ref(N.manifold), x, Ξ) .^ 2; dims=d + 1)) - if length(iRep) > 1 - norms = repeat(norms; inner=iRep) - end - elseif p == 1 - norms = sum(norm.(Ref(N.manifold), x, Ξ); dims=d + 1) - if length(iRep) > 1 - norms = repeat(norms; inner=iRep) - end - elseif p == Inf - norms = norm.(Ref(N.manifold), x, Ξ) - else - throw(ErrorException("The case p=$p, q=$q is not yet implemented")) - end - return (α .* Ξ) ./ max.(Ref(α), norms) - end # end q - return throw(ErrorException("The case p=$p, q=$q is not yet implemented")) -end -function project_collaborative_TV(N::PowerManifold, λ, x, Ξ, p::Int, q::Float64=1.0, α=1.0) - return project_collaborative_TV(N, λ, x, Ξ, Float64(p), q, α) -end -function project_collaborative_TV(N::PowerManifold, λ, x, Ξ, p::Float64, q::Int, α=1.0) - return project_collaborative_TV(N, λ, x, Ξ, p, Float64(q), α) -end -function 
project_collaborative_TV(N::PowerManifold, λ, x, Ξ, p::Int, q::Int, α=1.0) - return project_collaborative_TV(N, λ, x, Ξ, Float64(p), Float64(q), α) -end - -function project_collaborative_TV!(N::PowerManifold, Θ, λ, x, Ξ, p=2.0, q=1.0, α=1.0) - pdims = power_dimensions(N) - if length(pdims) == 1 - d = 1 - s = 1 - iRep = (1,) - else - d = pdims[end] - s = length(pdims) - 1 - if s != d - throw( - ErrorException( - "the last dimension ($d) has to be equal to the number of the previous ones ($s) but its not.", - ), - ) - end - iRep = (Integer.(ones(d))..., d) - end - if q == 1 # Example 3 case 2 - if p == 1 - normΞ = norm.(Ref(N.manifold), x, Ξ) - Θ .= max.(normΞ .- λ, 0.0) ./ ((normΞ .== 0) .+ normΞ) .* Ξ - return Θ - elseif p == 2 # Example 3 case 3 - norms = sqrt.(sum(norm.(Ref(N.manifold), x, Ξ) .^ 2; dims=d + 1)) - if length(iRep) > 1 - norms = repeat(norms; inner=iRep) - end - # if the norm is zero add 1 to avoid division by zero, also then the - # nominator is already (max(-λ,0) = 0) so it stays zero then - Θ .= max.(norms .- λ, 0.0) ./ ((norms .== 0) .+ norms) .* Ξ - return Θ - else - throw(ErrorException("The case p=$p, q=$q is not yet implemented")) - end - elseif q == Inf - if p == 2 - norms = sqrt.(sum(norm.(Ref(N.manifold), x, Ξ) .^ 2; dims=d + 1)) - (length(iRep) > 1) && (norms = repeat(norms; inner=iRep)) - elseif p == 1 - norms = sum(norm.(Ref(N.manifold), x, Ξ); dims=d + 1) - (length(iRep) > 1) && (norms = repeat(norms; inner=iRep)) - elseif p == Inf - norms = norm.(Ref(N.manifold), x, Ξ) - else - throw(ErrorException("The case p=$p, q=$q is not yet implemented")) - end - Θ .= (α .* Ξ) ./ max.(Ref(α), norms) - return Θ - end # end q - return throw(ErrorException("The case p=$p, q=$q is not yet implemented")) -end -function project_collaborative_TV!( - N::PowerManifold, Θ, λ, x, Ξ, p::Int, q::Float64=1.0, α=1.0 -) - return project_collaborative_TV!(N, Θ, λ, x, Ξ, Float64(p), q, α) -end -function project_collaborative_TV!(N::PowerManifold, Θ, λ, x, Ξ, p::Float64, q::Int, α=1.0) - return project_collaborative_TV!(N, Θ, λ, x, Ξ, p, Float64(q), α) -end -function project_collaborative_TV!(N::PowerManifold, Θ, λ, x, Ξ, p::Int, q::Int, α=1.0) - return project_collaborative_TV!(N, Θ, λ, x, Ξ, Float64(p), Float64(q), α) -end diff --git a/src/helpers/errorMeasures.jl b/src/helpers/errorMeasures.jl deleted file mode 100644 index 5dde1926ad..0000000000 --- a/src/helpers/errorMeasures.jl +++ /dev/null @@ -1,24 +0,0 @@ -@doc raw""" - meanSquaredError(M, p, q) - -Compute the (mean) squared error between the two -points `p` and `q` on the (power) manifold `M`. -""" -function meanSquaredError(M::mT, p, q) where {mT<:AbstractManifold} - return distance(M, p, q)^2 -end -function meanSquaredError(M::PowerManifold, x, y) - return 1 / prod(power_dimensions(M)) * sum(distance.(Ref(M.manifold), x, y) .^ 2) -end -@doc raw""" - meanSquaredError(M,x,y) - -Compute the (mean) squared error between the two -points `x` and `y` on the `PowerManifold` manifold `M`. 
-""" -function meanAverageError(M::AbstractManifold, x, y) - return distance(M, x, y) -end -function meanAverageError(M::PowerManifold, x, y) - return 1 / prod(power_dimensions(M)) * sum(distance.(Ref(M.manifold), x, y)) -end diff --git a/src/functions/manifold_functions.jl b/src/plans/Douglas_Rachford_plan.jl similarity index 83% rename from src/functions/manifold_functions.jl rename to src/plans/Douglas_Rachford_plan.jl index 4f3851fcaf..889e59aefb 100644 --- a/src/functions/manifold_functions.jl +++ b/src/plans/Douglas_Rachford_plan.jl @@ -33,12 +33,13 @@ retraction, respectively. This can also be done in place of `q`. ## Keyword arguments -* `retraction_method` - (`default_retraction_metiod(M, typeof(p))`) the retraction to use in the reflection -* `inverse_retraction_method` - (`default_inverse_retraction_method(M, typeof(p))`) the inverse retraction to use within the reflection + +* `retraction_method` (`default_retraction_metiod(M, typeof(p))`) the retraction to use in the reflection +* `inverse_retraction_method` (`default_inverse_retraction_method(M, typeof(p))`) the inverse retraction to use within the reflection and for the `reflect!` additionally -* `X` (`zero_vector(M,p)`) a temporary memory to compute the inverse retraction in place. +* `X` (`zero_vector(M,p)`) a temporary memory to compute the inverse retraction in place. otherwise this is the memory that would be allocated anyways. Passing `X` to `reflect` will just have no effect. diff --git a/src/plans/plan.jl b/src/plans/plan.jl index 215fd6184f..aaaf62b3b0 100644 --- a/src/plans/plan.jl +++ b/src/plans/plan.jl @@ -52,6 +52,7 @@ include("frank_wolfe_plan.jl") include("quasi_newton_plan.jl") include("nonlinear_least_squares_plan.jl") include("difference_of_convex_plan.jl") +include("Douglas_Rachford_plan.jl") include("primal_dual_plan.jl") include("higher_order_primal_dual_plan.jl") diff --git a/test/functions/test_adjoint_differentials.jl b/test/functions/test_adjoint_differentials.jl deleted file mode 100644 index bba27ec616..0000000000 --- a/test/functions/test_adjoint_differentials.jl +++ /dev/null @@ -1,25 +0,0 @@ -using Manifolds, Manopt, Test, ManifoldsBase - -@testset "Differentials (on Sphere(2))" begin - # The Adjoint Differentials test using the same variables as the differentials - # test - p = [1.0, 0.0, 0.0] - q = [0.0, 1.0, 0.0] - M = Sphere(2) - X = log(M, p, q) - # Tept differentials (1) Dp of Log_pq - Y = similar(X) - Mp = PowerManifold(M, NestedPowerRepresentation(), 3) - pP = [p, q, p] - qP = [p, p, q] - XP = [X, zero_vector(M, p), -X] - YP = similar.(XP) - ZP = adjoint_differential_forward_logs(Mp, pP, XP) - @test norm(Mp, pP, ZP - [-X, X, zero_vector(M, p)]) ≈ 0 atol = 4 * 10.0^(-16) - adjoint_differential_forward_logs!(Mp, YP, pP, XP) - @test isapprox(Mp, pP, YP, ZP) - ZP = [[0.0, π / 2, 0.0], [0.0, 0.0, 0.0], [π / 2, 0.0, 0.0]] - @test Manopt.adjoint_differential_log_argument(Mp, pP, qP, XP) == ZP - Manopt.adjoint_differential_log_argument!(Mp, YP, pP, qP, XP) - @test ZP == YP -end diff --git a/test/functions/test_bezier.jl b/test/functions/test_bezier.jl deleted file mode 100644 index 342156dfd3..0000000000 --- a/test/functions/test_bezier.jl +++ /dev/null @@ -1,186 +0,0 @@ -using Manopt, Manifolds, Test - -@testset "Bezier Tests" begin - @testset "General Bezier Tests" begin - repr(BezierSegment([[0.0, 0.0], [0.0, 0.0]])) == - "BezierSegment([[0.0, 0.0], [0.0, 0.0]])" - end - @testset "Spherical Test" begin - M = Sphere(2) - pC = [0.0, 1.0, 0.0] - pT = exp(M, pC, [0.0, 0.0, 0.7]) - pB = exp(M, 
pC, [0.0, 0.0, -0.7]) - B = [ - BezierSegment(shortest_geodesic(M, pT, pC, [0.0, 1 / 3, 2 / 3, 1.0])), - BezierSegment(shortest_geodesic(M, pC, pB, [0.0, 1 / 3, 2 / 3, 1.0])), - ] - # this is equispaced, so the pure cost is zero and the gradient is a zero-vector - t = collect(range(0.0, 1.0; length=5)) - pts = shortest_geodesic(M, pT, pB, t) - pts2 = de_casteljau(M, B, 2 .* t) - @test sum(distance.(Ref(M), pts, pts2)) < 10 * eps() - aX = log(M, pT, pC) - aT1 = adjoint_differential_bezier_control(M, BezierSegment([pT, pC]), 0.5, aX).pts - aT1a = BezierSegment(similar.(aT1)) - adjoint_differential_bezier_control!(M, aT1a, BezierSegment([pT, pC]), 0.5, aX) - @test aT1a.pts == aT1 - aT2 = [ - Manopt.adjoint_differential_shortest_geodesic_startpoint(M, pT, pC, 0.5, aX), - Manopt.adjoint_differential_shortest_geodesic_endpoint(M, pT, pC, 0.5, aX), - ] - @test aT1 ≈ aT2 - # - @test sum( - norm.( - grad_acceleration_bezier( - M, B[1], collect(range(0.0, 1.0; length=20)) - ).pts .- - [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], - ), - ) ≈ 0 atol = 2e-12 - - # cost and gradient - T = collect(range(0.0, 2.0; length=51)) - degrees = get_bezier_degrees(M, B) - Bvec = get_bezier_points(M, B, :differentiable) - Mp = PowerManifold(M, NestedPowerRepresentation(), length(Bvec)) - @test cost_acceleration_bezier(M, Bvec, degrees, T) ≈ 0 atol = 10^-10 - z = zero_vector(Mp, Bvec) - distance(Mp, grad_acceleration_bezier(M, Bvec, degrees, T), z) - @test norm(Mp, Bvec, grad_acceleration_bezier(M, Bvec, degrees, T) - z) ≈ 0 atol = - 2e-12 - - d = [pT, exp(M, pC, [0.3, 0.0, 0.0]), pB] - λ = 3.0 - - # cost and gradient with data term - @test cost_L2_acceleration_bezier(M, Bvec, degrees, T, λ, [pT, pC, pB]) ≈ 0 atol = - 10^(-10) - @test cost_L2_acceleration_bezier(M, Bvec, degrees, T, λ, d) ≈ - λ / 2 * distance(M, d[2], pC) .^ 2 - # when the data are the junctions - @test norm( - Mp, Bvec, grad_L2_acceleration_bezier(M, Bvec, degrees, T, λ, [pT, pC, pB]) - z - ) ≈ 0 atol = 2e-12 - z[4][1] = -0.9 - @test norm(Mp, Bvec, grad_L2_acceleration_bezier(M, Bvec, degrees, T, λ, d) - z) ≈ 0 atol = - 2e-12 - # when the data is weighted with zero - @test cost_L2_acceleration_bezier(M, Bvec, degrees, T, 0.0, d) ≈ 0 atol = 10^(-10) - z[4][1] = 0.0 - @test norm(Mp, Bvec, grad_L2_acceleration_bezier(M, Bvec, degrees, T, 0.0, d) - z) ≈ - 0 atol = 2e-12 - end - @testset "de Casteljau variants" begin - M = Sphere(2) - B = artificial_S2_composite_bezier_curve() - b = B[2] - b2s = BezierSegment([b.pts[1], b.pts[end]]) - # (a) 2 points -> geo special case - f1 = de_casteljau(M, b2s) # fct -> recursive - pts1 = f1.([0.0, 0.5, 1.0]) - pts2 = de_casteljau(M, b2s, [0.0, 0.5, 1.0]) - @test pts1 ≈ pts2 - # (b) one segment - f2 = de_casteljau(M, b) # fct -> recursive - pts3 = f2.([0.0, 0.5, 1.0]) - pts4 = de_casteljau(M, b, [0.0, 0.5, 1.0]) - @test pts3 ≈ pts4 - # (c) whole composites - f3 = de_casteljau(M, B) # fct -> recursive - pts5 = f3.([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]) - pts6 = de_casteljau(M, B, [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]) - @test pts5 ≈ pts6 - @test_throws DomainError f3(-0.1) - @test_throws DomainError de_casteljau(M, B, -0.1) - @test_throws DomainError f3(3.5) - @test_throws DomainError de_casteljau(M, B, 3.5) - end - @testset "Spherical Data" begin - M = Sphere(2) - B = artificial_S2_composite_bezier_curve() - @test de_casteljau(M, B, [0.0, 1.0, 2.0, 3.0]) ≈ - [B[1].pts[1], B[2].pts[1], B[3].pts[1], B[3].pts[4]] - @test get_bezier_junction_tangent_vectors(M, B) ≈ [ - log(M, B[1].pts[1], 
B[1].pts[2]), - log(M, B[1].pts[4], B[1].pts[3]), - log(M, B[2].pts[1], B[2].pts[2]), - log(M, B[2].pts[4], B[2].pts[3]), - log(M, B[3].pts[1], B[3].pts[2]), - log(M, B[3].pts[4], B[3].pts[3]), - ] - @test get_bezier_junction_tangent_vectors(M, B[1]) ≈ - [log(M, B[1].pts[1], B[1].pts[2]), log(M, B[1].pts[4], B[1].pts[3])] - @test get_bezier_junctions(M, B[1]) == B[1].pts[[1, end]] - @test get_bezier_inner_points(M, B) == - [B[1].pts[2], B[1].pts[3], B[2].pts[2], B[2].pts[3], B[3].pts[2], B[3].pts[3]] - @test get_bezier_inner_points(M, B[1]) == [B[1].pts[2], B[1].pts[3]] - - @test get_bezier_points(M, B) == cat([[b.pts...] for b in B]...; dims=1) - @test get_bezier_points(M, B, :continuous) == - cat([[b.pts[[1:3]...]...] for b in B]..., [B[3].pts[4]]; dims=1) - @test get_bezier_points(M, B, :differentiable) == - cat([B[1].pts[[1, 2]]...], [b.pts[[3, 4]] for b in B]...; dims=1) - @test get_bezier_points(M, B[1]) == B[1].pts - # for segments just check that they - d = get_bezier_degrees(M, B) - A = get_bezier_segments(M, get_bezier_points(M, B), d) - @test [A[i].pts for i in 1:3] == [B[i].pts for i in 1:3] - A = get_bezier_segments(M, get_bezier_points(M, B, :continuous), d, :continuous) - @test [A[i].pts for i in 1:3] == [B[i].pts for i in 1:3] - A = get_bezier_segments( - M, get_bezier_points(M, B, :differentiable), d, :differentiable - ) - @test [A[i].pts for i in 1:3] == [B[i].pts for i in 1:3] - - # out of range - @test_throws ErrorException adjoint_differential_bezier_control( - M, B, 7.0, zero_vector(M, B[1].pts[1]) - ) - # a shortcut to evaluate the adjoint at several points is equal to separate evals - b = B[2] - Xi = [log(M, b.pts[1], b.pts[2]), -log(M, b.pts[4], b.pts[3])] - Xs = adjoint_differential_bezier_control(M, b, [0.0, 1.0], Xi) - @test isapprox( - Xs.pts, - adjoint_differential_bezier_control(M, b, 0.0, log(M, b.pts[1], b.pts[2])).pts + - adjoint_differential_bezier_control(M, b, 1.0, -log(M, b.pts[4], b.pts[3])).pts, - ) - Ys = BezierSegment(similar.(Xs.pts)) - adjoint_differential_bezier_control!(M, Ys, b, [0.0, 1.0], Xi) - @test isapprox(Xs.pts, Ys.pts) - # differential - X = BezierSegment([ - log(M, b.pts[1], b.pts[2]), [zero_vector(M, b.pts[i]) for i in 2:4]... 
- ]) - Ye = zero(X.pts[1]) - @test differential_bezier_control(M, b, 0.0, X) ≈ X.pts[1] - differential_bezier_control!(M, Ye, b, 0.0, X) - @test Ye ≈ X.pts[1] - dT1 = differential_bezier_control.(Ref(M), Ref(b), [0.0, 1.0], Ref(X)) - dT2 = differential_bezier_control(M, b, [0.0, 1.0], X) - dT3 = similar.(dT2) - differential_bezier_control!(M, dT3, b, [0.0, 1.0], X) - @test dT1 ≈ dT2 - @test dT3 == dT3 - X2 = [ - BezierSegment([[0.0, 0.0, 0.0] for i in 1:4]), - X, - BezierSegment([[0.0, 0.0, 0.0] for i in 1:4]), - ] - @test_throws DomainError differential_bezier_control(M, B, 20.0, X2) - dbT2a = differential_bezier_control(M, B, 1.0, X2) - dbT3a = similar(dbT2a) - @test_throws DomainError differential_bezier_control!(M, dbT3a, B, 20.0, X2) - differential_bezier_control!(M, dbT3a, B, 1.0, X2) - @test dbT2a == dbT3a - @test dbT2a ≈ X.pts[1] - dbT2 = differential_bezier_control(M, B, [1.0, 2.0], X2) - dbT1 = differential_bezier_control.(Ref(M), Ref(B), [1.0, 2.0], Ref(X2)) - @test dT1 ≈ dbT1 - @test dbT2 ≈ dbT1 - dbT3 = similar.(dbT2) - differential_bezier_control!(M, dbT3, B, [1.0, 2.0], X2) - @test dbT2 == dbT3 - end -end diff --git a/test/functions/test_costs.jl b/test/functions/test_costs.jl deleted file mode 100644 index 684fb748b0..0000000000 --- a/test/functions/test_costs.jl +++ /dev/null @@ -1,27 +0,0 @@ -using Manifolds, Manopt, Test, ManifoldsBase - -@testset "Test Costs" begin - M = Sphere(2) - N = PowerManifold(M, NestedPowerRepresentation(), 3, 3) - f = repeat([[1.0, 0.0, 0.0]], 3, 3) - x = repeat([[1.0, 0.0, 0.0]], 3, 3) - x[2, 1] = [0.0, 1.0, 0.0] - - @test costIntrICTV12(N, f, f, f, 0.0, 0.0) == 0.0 - @test costIntrICTV12(N, f, x, x, 0.0, 0.0) == 1 / 2 * distance(N, x, f)^2 - @test costIntrICTV12(N, x, x, x, 2.0, 0.5) ≈ costTV2(N, x) + costTV(N, x) - @test costIntrICTV12(N, x, x, x, 1.0, 0.0) ≈ costTV2(N, x) - @test costIntrICTV12(N, x, x, x, 1.0, 1.0) ≈ costTV(N, x) - @test costL2TV2(N, f, 1.0, x) == 1 / 2 * distance(N, f, x)^2 + 1.0 * costTV2(N, x) - # - @test costL2TV(N, f, 1.0, f) ≈ 0.0 - @test costL2TV(N, x, 1.0, x) ≈ 3 * π / 2 - @test costL2TVTV2(N, f, 0.0, 1.0, x) ≈ 1 / 2 * distance(N, x, f)^2 + costTV2(N, x) - @test costL2TVTV2(N, f, 1.0, 1.0, x) ≈ - 1 / 2 * distance(N, x, f)^2 + costTV(N, x) + costTV2(N, x) - - @test costTV2(M, Tuple(x[1:3, 1])) ≈ π / 2 - @test costTV(N, x, 1, 2) ≈ sqrt(5 / 4) * π - @test sum(costTV2(N, x, 1, false)) == costTV2(N, x) - @test costTV2(N, f, 2) == 0 -end diff --git a/test/functions/test_differentials.jl b/test/functions/test_differentials.jl deleted file mode 100644 index 15727e9d9d..0000000000 --- a/test/functions/test_differentials.jl +++ /dev/null @@ -1,66 +0,0 @@ -using Manifolds, Manopt, Test, ManifoldsBase - -@testset "Differentials" begin - p = [1.0, 0.0, 0.0] - q = [0.0, 1.0, 0.0] - M = Sphere(2) - X = log(M, p, q) - Y = similar(X) - @testset "forward logs" begin - N = PowerManifold(M, NestedPowerRepresentation(), 3) - x = [p, q, p] - y = [p, p, q] - V = [X, zero_vector(M, p), -X] - Y = Manopt.differential_log_argument(M, p, q, -X) - W = similar.(V) - @test norm( - N, - x, - differential_forward_logs(N, x, V) - [-X, [π / 2, 0.0, 0.0], zero_vector(M, p)], - ) ≈ 0 atol = 8 * 10.0^(-16) - differential_forward_logs!(N, W, x, V) - @test norm(N, x, W - [-X, [π / 2, 0.0, 0.0], zero_vector(M, p)]) ≈ 0 atol = - 8 * 10.0^(-16) - @test isapprox(N, x, Manopt.differential_log_argument(N, x, y, V), [V[1], V[2], Y]) - Manopt.differential_log_argument!(N, W, x, y, V) - @test isapprox(N, x, W, [V[1], V[2], Y]) - end - @testset "forward logs on a 
multivariate power manifold" begin - S = Sphere(2) - M = PowerManifold(S, NestedPowerRepresentation(), 2, 2) - p = [zeros(3) for i in [1, 2], j in [1, 2]] - p[1, 1] = [1.0, 0.0, 0.0] - p[1, 2] = 1 / sqrt(2) .* [1.0, 1.0, 0.0] - p[2, 1] = 1 / sqrt(2) .* [1.0, 0.0, 1.0] - p[2, 2] = [0.0, 1.0, 0.0] - t1 = forward_logs(M, p) - @test t1[1, 1, 1] ≈ log(S, p[1, 1], p[2, 1]) - @test t1[1, 1, 2] ≈ log(S, p[1, 1], p[1, 2]) - @test t1[1, 2, 1] ≈ log(S, p[1, 2], p[2, 2]) - @test t1[1, 2, 2] ≈ log(S, p[1, 2], p[1, 2]) atol = 1e-15 - @test t1[2, 1, 1] ≈ log(S, p[2, 1], p[2, 1]) atol = 1e-15 - @test t1[2, 1, 2] ≈ log(S, p[2, 1], p[2, 2]) - @test t1[2, 2, 1] ≈ log(S, p[2, 2], p[2, 2]) - @test t1[2, 2, 2] ≈ log(S, p[2, 2], p[2, 2]) - t1a = zero.(t1) - forward_logs!(M, t1a, p) - @test all(t1 .== t1a) - X = zero_vector(M, p) - X[1, 1] .= [0.0, 0.5, 0.5] - t2 = differential_forward_logs(M, p, X) - a = - Manopt.differential_log_basepoint(S, p[1, 1], p[2, 1], X[1, 1]) + - Manopt.differential_log_argument(S, p[1, 1], p[2, 1], X[2, 1]) - @test t2[1, 1, 1] ≈ a - @test t2[1, 2, 1] ≈ zero_vector(S, p[1, 2]) atol = 1e-17 - @test t2[2, 1, 1] ≈ zero_vector(S, p[2, 1]) atol = 1e-17 - @test t2[2, 2, 1] ≈ zero_vector(S, p[2, 2]) atol = 1e-17 - b = - Manopt.differential_log_basepoint(S, p[1, 1], p[1, 2], X[1, 1]) + - Manopt.differential_log_argument(S, p[1, 1], p[1, 2], X[1, 2]) - @test t2[1, 1, 2] ≈ b - @test t2[1, 2, 2] ≈ zero_vector(S, p[1, 2]) atol = 1e-17 - @test t2[2, 1, 2] ≈ zero_vector(S, p[2, 1]) atol = 1e-17 - @test t2[2, 2, 2] ≈ zero_vector(S, p[2, 2]) atol = 1e-17 - end -end diff --git a/test/functions/test_gradients.jl b/test/functions/test_gradients.jl deleted file mode 100644 index b639551c10..0000000000 --- a/test/functions/test_gradients.jl +++ /dev/null @@ -1,115 +0,0 @@ -using Manifolds, Manopt, Test, ManifoldsBase - -@testset "gradients" begin - @testset "Circle (Allocating)" begin - M = Circle() - N = PowerManifold(M, 4) - x = [0.1, 0.2, 0.3, 0.5] - tvTestξ = [-1.0, 0.0, 0.0, 1.0] - @test grad_TV(N, x) == tvTestξ - @test grad_TV(M, (x[1], x[1])) == (zero_vector(M, x[1]), zero_vector(M, x[1])) - @test norm(N, x, grad_TV(N, x, 2) - tvTestξ) ≈ 0 - tv2Testξ = [0.0, 1.0, -1.0, 1.0] - @test grad_TV2(N, x) == tv2Testξ - @test norm(N, x, forward_logs(N, x) - [0.1, 0.1, 0.2, 0.0]) ≈ 0 atol = 10^(-16) - @test norm( - N, - x, - grad_intrinsic_infimal_convolution_TV12(N, x, x, x, 1.0, 1.0)[1] - - [-1.0, 0.0, 0.0, 1.0], - ) ≈ 0 - @test norm(N, x, grad_intrinsic_infimal_convolution_TV12(N, x, x, x, 1.0, 1.0)[2]) ≈ - 0 - x2 = [0.1, 0.2, 0.3] - N2 = PowerManifold(M, size(x2)...) 
- @test grad_TV2(N2, x2) == zeros(3) - @test grad_TV2(N2, x2, 2) == zeros(3) - @test grad_TV(M, (0.0, 0.0), 2) == (0.0, 0.0) - # 2d forward logs - N3 = PowerManifold(M, 2, 2) - N3C = PowerManifold(M, 2, 2, 2) - x3 = [0.1 0.2; 0.3 0.5] - x3C = cat(x3, x3; dims=3) - tC = cat([0.2 0.3; 0.0 0.0], [0.1 0.0; 0.2 0.0]; dims=3) - @test norm(N3C, x3C, forward_logs(N3, x3) - tC) ≈ 0 atol = 10^(-16) - - M = Circle() - p = 0 - q = π / 4 - @test grad_distance(M, p, q) == q - p - @test grad_distance(M, p, q, 1) == -distance(M, q, p)^(-1) * log(M, q, p) - p = q - @test grad_distance(M, p, q, 1) == zero_vector(M, p) - end - @testset "Sphere (Mutating)" begin - M = Sphere(2) - p = [0.0, 0.0, 1.0] - q = [0.0, 1.0, 0.0] - r = [1.0, 0.0, 0.0] - @testset "Gradient of the distance function" begin - X = zero_vector(M, p) - grad_distance!(M, X, p, q) - Y = grad_distance(M, p, q) - Z = [0.0, 0.0, -π / 2] # known solution - @test X == Y - @test X == Z - U = zero_vector(M, p) - grad_distance!(M, U, p, q, 1) - V = grad_distance(M, p, q, 1) - W = -distance(M, q, p)^(-1) * log(M, q, p) # solution - @test U == V - @test U == W - w = q - U = zero_vector(M, w) - grad_distance!(M, U, w, q, 1) - V = grad_distance(M, w, q, 1) - W = zero_vector(M, w) # solution - @test U == V - @test U == W - end - @testset "Gradient of total variation" begin - Y = grad_TV(M, (p, q)) - Z = [[0.0, -1.0, 0.0], [0.0, 0.0, -1.0]] - X = similar.(Z) - grad_TV!(M, X, (p, q)) - @test [y for y in Y] == X - @test [y for y in Y] ≈ Z - N = PowerManifold(M, NestedPowerRepresentation(), 3) - s = [p, q, r] - Y2 = grad_TV(N, s) - Z2 = [[0.0, -1.0, 0.0], [-1.0, 0.0, -1.0], [0.0, -1.0, 0.0]] - X2 = zero_vector(N, s) - grad_TV!(N, X2, s) - @test Y2 == Z2 - @test X2 == Z2 - Y2a = grad_TV(N, s, 2) - X2a = zero_vector(N, s) - grad_TV!(N, X2a, s, 2) - @test Y2a == X2a - N2 = PowerManifold(M, NestedPowerRepresentation(), 2) - Y3 = grad_TV(M, (p, q), 2) - X3 = zero_vector(N2, [p, q]) - grad_TV!(M, X3, (p, q), 2) - @test [y for y in Y3] == X3 - Y4 = grad_TV(M, (p, p)) - X4 = zero_vector(N2, [p, q]) - grad_TV!(M, X4, (p, p)) - @test [y for y in Y4] == X4 - end - @testset "Grad of second order total variation" begin - N = PowerManifold(M, NestedPowerRepresentation(), 3) - s = [p, q, r] - X = zero_vector(N, s) - grad_TV2!(M, X, s) - Y = grad_TV2(M, s) - Z = -1 / sqrt(2) .* [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]] - @test Y == X - @test Y ≈ Z - Y2 = grad_TV2(M, s, 2) - Z2 = -1.110720734539 .* [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]] - @test Y2 ≈ Z2 - s2 = [p, shortest_geodesic(M, p, q, 0.5), q] - @test grad_TV2(M, s2) == [zero_vector(M, se) for se in s2] - end - end -end diff --git a/test/functions/test_proximal_maps.jl b/test/functions/test_proximal_maps.jl deleted file mode 100644 index c44357437a..0000000000 --- a/test/functions/test_proximal_maps.jl +++ /dev/null @@ -1,177 +0,0 @@ -using Manifolds, Manopt, Test, Dates - -@testset "proximal maps" begin - # - # prox_TV - p = [1.0, 0.0, 0.0] - q = [0.0, 1.0, 0.0] - M = Sphere(2) - N = PowerManifold(M, NestedPowerRepresentation(), 2) - @test_throws ErrorException prox_distance(M, 1.0, p, q, 3) - @test_throws ErrorException prox_distance!(M, p, 1.0, p, q, 3) - @test distance( - M, prox_distance(M, distance(M, p, q) / 2, p, q, 1), shortest_geodesic(M, p, q, 0.5) - ) < eps() - t = similar(p) - prox_distance!(M, t, distance(M, p, q) / 2, p, q, 1) - @test t == prox_distance(M, distance(M, p, q) / 2, p, q, 1) - (r, s) = prox_TV(M, π / 4, (p, q)) - X = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]] - prox_TV!(M, X, π / 
4, (p, q)) - @test norm(r - s) < eps(Float64) - @test norm(X[1] - s) < eps(Float64) - @test norm(X[2] - r) < eps(Float64) - # i.e. they are moved together - @test distance(M, r, s) < eps(Float64) - (t, u) = prox_TV(M, π / 8, (p, q)) - @test_throws ErrorException prox_TV(M, π, (p, q), 3) - @test_throws ErrorException prox_TV!(M, [p, q], π, (p, q), 3) - # they cross correlate - @test ( - abs(t[1] - u[2]) < eps(Float64) && - abs(t[2] - u[1]) < eps(Float64) && - abs(t[3] - u[3]) < eps(Float64) - ) - @test distance(M, t, u) ≈ π / 4 # and have moved half their distance - # - (v, w) = prox_TV(M, 1.0, (p, q), 2) - vC, wC = shortest_geodesic(M, p, q, [1 / 3, 2 / 3]) - @test distance(M, v, vC) ≈ 0 - @test distance(M, w, wC) < eps() - P = [similar(p), similar(q)] - prox_TV!(M, P, 1.0, (p, q), 2) - @test P == [v, w] - # prox_TV on Power - T = prox_TV(N, π / 8, [p, q]) - @test distance(N, T, [t, u]) ≈ 0 - # parallelprox_TV - N2 = PowerManifold(M, NestedPowerRepresentation(), 3) - r = geodesic(M, p, q, 0.5) - s, t = prox_TV(M, π / 16, (r, q)) - u, v = prox_TV(M, π / 16, (p, r)) - y = prox_parallel_TV(N2, π / 16, [[p, r, q], [p, r, q]]) - yM = [similar.(ye) for ye in y] - prox_parallel_TV!(N2, yM, π / 16, [[p, r, q], [p, r, q]]) - @test y == yM - @test distance(N2, y[1], [p, s, t]) ≈ 0 # even indices in first comp - @test distance(N2, y[2], [u, v, q]) ≈ 0 # odd in second - # dimensions of x have to fit, here they don't - @test_throws ErrorException prox_parallel_TV(N2, π / 16, [[p, r, q]]) - @test_throws ErrorException prox_parallel_TV!(N2, yM, π / 16, [[p, r, q]]) - # prox_TV2 - p2, r2, q2 = prox_TV2(M, 1.0, (p, r, q)) - y = [similar(p) for _ in 1:3] - prox_TV2!(M, y, 1.0, (p, r, q)) - @test y ≈ [p2, r2, q2] - sum(distance.(Ref(M), [p, r, q], [p2, r2, q2])) ≈ 0 - @test_throws ErrorException prox_TV2(M, 1.0, (p, r, q), 2) # since prox_TV is only defined for p=1 - distance( - PowerManifold(M, NestedPowerRepresentation(), 3), - [p2, r2, q2], - prox_TV2(PowerManifold(M, NestedPowerRepresentation(), 3), 1.0, [p, r, q]), - ) ≈ 0 - # Circle - M2 = Circle() - N2 = PowerManifold(M2, 3) - pS, rS, qS = [-0.5, 0.1, 0.5] - d = sum([pS, rS, qS] .* [1.0, -2.0, 1.0]) - m = min(0.3, abs(Manifolds.sym_rem(d) / 6)) - s = sign(Manifolds.sym_rem(d)) - pSc, rSc, qSc = Manifolds.sym_rem.([pS, rS, qS] .- m .* s .* [1.0, -2.0, 1.0]) - pSr, rSr, qSr = prox_TV2(M2, 0.3, (pS, rS, qS)) - @test sum(distance.(Ref(M2), [pSc, rSc, qSc], [pSr, rSr, qSr])) ≈ 0 - # p=2 - t = 0.3 * Manifolds.sym_rem(d) / (1 + 0.3 * 6.0) - @test sum( - distance.( - Ref(M2), - [prox_TV2(M2, 0.3, (pS, rS, qS), 2)...], - [pS, rS, qS] .- t .* [1.0, -2.0, 1.0], - ), - ) ≈ 0 - @test prox_TV2(N2, 0.3, [pS, rS, qS]) == [pSr, rSr, qSr] - # others fail - @test_throws ErrorException prox_TV2(M2, 0.3, (pS, rS, qS), 3) - # Rn - M3 = Euclidean(1) - pR, rR, qR = [pS, rS, qS] - m = min.(Ref(0.3), abs.([pR, rR, qR] .* [1.0, -2.0, 1.0]) / 6) - s = sign(d) # we can reuse d - pRc, rRc, qRc = [pR, rR, qR] .- m .* s .* [1.0, -2.0, 1.0] - pRr, rRr, qRr = prox_TV2(M3, 0.3, (pR, rR, qR)) - @test sum(distance.(Ref(M3), [pRc, rRc, qRc], [pRr, rRr, qRr])) ≈ 0 - # p=2 - t = 0.3 * d / (1 + 0.3 * 6.0) - @test sum( - distance.( - Ref(M3), - [prox_TV2(M3, 0.3, (pR, rR, qR), 2)...], - [pR, rR, qR] .- t .* [1.0, -2.0, 1.0], - ), - ) ≈ 0 - # others fail - @test_throws ErrorException prox_TV2(M3, 0.3, (pR, rR, qR), 3) - # - # collaborative integer tests - # - @test_throws ErrorException prox_TV2(M3, 0.3, (pS, rS, qS), 3) - ξR, ηR, νR = [pS, rS, qS] - N3 = PowerManifold(M3, 3) - P = [pR rR qR] 
- Ξ = [ξR ηR νR] - Θ = similar(Ξ) - @test project_collaborative_TV(N3, 0.0, P, Ξ, 1, 1) == Ξ - project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 1, 1) - @test Θ == Ξ - @test project_collaborative_TV(N3, 0.0, P, Ξ, 1.0, 1) == Ξ - project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 1.0, 1) - @test Θ == Ξ - @test project_collaborative_TV(N3, 0.0, P, Ξ, 1, 1.0) == Ξ - project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 1, 1.0) - @test Θ == Ξ - @test project_collaborative_TV(N3, 0.0, P, Ξ, 1.0, 1.0) == Ξ - project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 1.0, 1.0) - @test Θ == Ξ - - @test project_collaborative_TV(N3, 0.0, P, Ξ, 2, 1) == Ξ - project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 2, 1) - @test Θ == Ξ - @test norm(N3, P, project_collaborative_TV(N3, 0.0, P, Ξ, 2, Inf)) ≈ norm(Ξ) - project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 2, Inf) - @test norm(N3, P, Θ) ≈ norm(Ξ) - @test sum(abs.(project_collaborative_TV(N3, 0.0, P, Ξ, 1, Inf))) ≈ 1.0 - project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 1, Inf) - @test sum(abs.(Θ)) ≈ 1.0 - @test norm(N3, P, project_collaborative_TV(N3, 0.0, P, Ξ, Inf, Inf)) ≈ norm(Ξ) - project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, Inf, Inf) - @test norm(N3, P, Θ) ≈ norm(Ξ) - @test_throws ErrorException project_collaborative_TV(N3, 0.0, P, Ξ, 3, 3) - @test_throws ErrorException project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 3, 3) - @test_throws ErrorException project_collaborative_TV(N3, 0.0, P, Ξ, 3, 1) - @test_throws ErrorException project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 3, 1) - @test_throws ErrorException project_collaborative_TV(N3, 0.0, P, Ξ, 3, Inf) - @test_throws ErrorException project_collaborative_TV!(N3, Θ, 0.0, P, Ξ, 3, Inf) - - @testset "Multivariate project collaborative TV" begin - S = Sphere(2) - M = PowerManifold(S, NestedPowerRepresentation(), 2, 2, 2) - p = [zeros(3) for i in [1, 2], j in [1, 2], k in [1, 2]] - p[1, 1, 1] = [1.0, 0.0, 0.0] - p[1, 2, 1] = 1 / sqrt(2) .* [1.0, 1.0, 0.0] - p[2, 1, 1] = 1 / sqrt(2) .* [1.0, 0.0, 1.0] - p[2, 2, 1] = [0.0, 1.0, 0.0] - p[:, :, 2] = deepcopy(p[:, :, 1]) - X = zero_vector(M, p) - X[1, 1, 1] .= [0.0, 0.5, 0.5] - Y = zero_vector(M, p) - @test norm(project_collaborative_TV(M, 1, p, X, 2, 1)) ≈ 0 - project_collaborative_TV!(M, Y, 1, p, X, 2, 1) - @test norm(Y) ≈ 0 - @test norm(project_collaborative_TV(M, 0.5, p, X, 2, 1)) ≈ (norm(X[1, 1, 1]) - 0.5) - project_collaborative_TV!(M, Y, 0.5, p, X, 2, 1) - @test norm(Y) ≈ (norm(X[1, 1, 1]) - 0.5) - Nf = PowerManifold(S, NestedPowerRepresentation(), 2, 2, 1) - @test_throws ErrorException project_collaborative_TV(Nf, 1, p, X, 2, 1) - @test_throws ErrorException project_collaborative_TV!(Nf, Y, 1, p, X, 2, 1) - end -end diff --git a/test/helpers/test_data.jl b/test/helpers/test_data.jl deleted file mode 100644 index 815de23bdc..0000000000 --- a/test/helpers/test_data.jl +++ /dev/null @@ -1,31 +0,0 @@ -@testset "Data" begin - @test artificialIn_SAR_image(2) == 2 * π * ones(2, 2) - - @test artificial_S1_slope_signal(20, 0.0) == repeat([-π / 2], 20) - - @test ismissing(artificial_S1_signal(-1.0)) - @test ismissing(artificial_S1_signal(2.0)) - @test artificial_S1_signal(2) == [-3 * π / 4, -3 * π / 4] - - # for the remainder check data types only - @test length(artificial_S1_signal(20)) == 20 - - @test size(artificial_S2_whirl_image(64)) == (64, 64) - @test length(artificial_S2_whirl_image(64)[1, 1]) == 3 - - @test size(artificial_S2_rotation_image(64)) == (64, 64) - @test length(artificial_S2_rotation_image(64)[1, 1]) == 3 - - @test size(artificial_S2_whirl_patch(8)) == (8, 8) - @test 
length(artificial_S2_whirl_patch(8)[1, 1]) == 3 - - @test size(artificial_SPD_image(8)) == (8, 8) - @test size(artificial_SPD_image(8)[1, 1]) == (3, 3) - - @test size(artificial_SPD_image2(8)) == (8, 8) - @test size(artificial_SPD_image2(8)[1, 1]) == (3, 3) - @test eltype(artificial_SPD_image2(8)) == Array{Float64,2} - - @test length(artificial_S2_lemniscate([0.0, 0.0, 1.0], 20)) == 20 - @test length(artificial_S2_lemniscate([0.0, 0.0, 1.0], 20)[1]) == 3 -end diff --git a/test/helpers/test_error_measures.jl b/test/helpers/test_error_measures.jl deleted file mode 100644 index f891b10993..0000000000 --- a/test/helpers/test_error_measures.jl +++ /dev/null @@ -1,17 +0,0 @@ -@testset "Manopt.jl Error Measures" begin - M = Sphere(2) - N = PowerManifold(M, NestedPowerRepresentation(), 2) - using Random: seed! - seed!(42) - d = Manifolds.uniform_distribution(M, [1.0, 0.0, 0.0]) - w = rand(d) - x = rand(d) - y = rand(d) - z = rand(d) - a = [w, x] - b = [y, z] - @test meanSquaredError(M, x, y) == distance(M, x, y)^2 - @test meanSquaredError(N, a, b) == 1 / 2 * (distance(M, w, y)^2 + distance(M, x, z)^2) - @test meanAverageError(M, x, y) == distance(M, x, y) - @test meanAverageError(N, a, b) == 1 / 2 * sum(distance.(Ref(M), a, b)) -end diff --git a/test/functions/test_manifold.jl b/test/helpers/test_manifold_extra_functions.jl similarity index 95% rename from test/functions/test_manifold.jl rename to test/helpers/test_manifold_extra_functions.jl index 675f99e363..7c00c81730 100644 --- a/test/functions/test_manifold.jl +++ b/test/helpers/test_manifold_extra_functions.jl @@ -59,6 +59,8 @@ Random.seed!(42) @test mid_point(M, p, q, 1.0) ≈ π / 2 @test mid_point(M, p, q, -1.0) ≈ -π / 2 @test mid_point(M, 0, π / 2) ≈ π / 4 + # Without being too far away -> classical mid point + @test mid_point(M, 0, 0.1, π / 2) == mid_point(M, 0, 0.1) end @testset "max_stepsize" begin diff --git a/test/plans/test_higher_order_primal_dual_plan.jl b/test/plans/test_higher_order_primal_dual_plan.jl index 702a0bea09..fb15ae381a 100644 --- a/test/plans/test_higher_order_primal_dual_plan.jl +++ b/test/plans/test_higher_order_primal_dual_plan.jl @@ -1,5 +1,7 @@ using Manopt, Manifolds, ManifoldsBase, Test - +using ManoptExamples: forward_logs, adjoint_differential_forward_logs +using ManifoldDiff: + differential_shortest_geodesic_startpoint, differential_shortest_geodesic_startpoint! 
@testset "Test higher order primal dual plan" begin # Perform an really easy test, just compute a mid point # @@ -134,10 +136,10 @@ using Manopt, Manifolds, ManifoldsBase, Test end function Dprox_F(M, λ, p, X) - return Manopt.differential_shortest_geodesic_startpoint(M, p, data, λ / (α + λ), X) + return differential_shortest_geodesic_startpoint(M, p, data, λ / (α + λ), X) end function Dprox_F!(M, Y, λ, p, X) - Manopt.differential_shortest_geodesic_startpoint!(M, Y, p, data, λ / (α + λ), X) + differential_shortest_geodesic_startpoint!(M, Y, p, data, λ / (α + λ), X) return Y end function Dprox_G_dual(N, n, λ, X, Y) diff --git a/test/plans/test_primal_dual_plan.jl b/test/plans/test_primal_dual_plan.jl index 1ebb2b0402..feeaa037c4 100644 --- a/test/plans/test_primal_dual_plan.jl +++ b/test/plans/test_primal_dual_plan.jl @@ -1,4 +1,15 @@ -using Manopt, Manifolds, ManifoldsBase, Test +using Manopt, Manifolds, ManifoldsBase, ManifoldDiff, ManoptExamples, Test + +using ManoptExamples: + forward_logs, + forward_logs!, + adjoint_differential_forward_logs, + adjoint_differential_forward_logs!, + project_collaborative_TV, + project_collaborative_TV!, + differential_forward_logs, + differential_forward_logs! +using ManifoldDiff: prox_distance, prox_distance!, adjoint_differential_log_argument include("../utils/dummy_types.jl") diff --git a/test/plans/test_proximal_plan.jl b/test/plans/test_proximal_plan.jl index 7beb1a3cb9..469cc7af3b 100644 --- a/test/plans/test_proximal_plan.jl +++ b/test/plans/test_proximal_plan.jl @@ -1,5 +1,7 @@ using LRUCache, Manopt, Manifolds, Test import Manopt: get_proximal_map, get_proximal_map! +using ManifoldDiff: prox_distance, adjoint_differential_log_basepoint + function get_proximal_map(M, o::ManifoldProximalMapObjective, λ, p) return get_proximal_map(M, o, λ, p, 1) end diff --git a/test/runtests.jl b/test/runtests.jl index 3d1f361186..603ff34f9d 100644 --- a/test/runtests.jl +++ b/test/runtests.jl @@ -27,20 +27,10 @@ include("utils/example_tasks.jl") include("plans/test_stopping_criteria.jl") include("plans/test_subgradient_plan.jl") end - @testset "Function Tests " begin - include("functions/test_adjoint_differentials.jl") - include("functions/test_bezier.jl") - include("functions/test_differentials.jl") - include("functions/test_costs.jl") - include("functions/test_gradients.jl") - include("functions/test_proximal_maps.jl") - include("functions/test_manifold.jl") - end - @testset "Helper & Data Tests" begin - include("helpers/test_error_measures.jl") - include("helpers/test_data.jl") + @testset "Helper Tests " begin include("helpers/test_checks.jl") include("helpers/test_linesearches.jl") + include("helpers/test_manifold_extra_functions.jl") end @testset "Solver Tests " begin include("solvers/test_adaptive_regularization_with_cubics.jl") diff --git a/test/solvers/test_ChambollePock.jl b/test/solvers/test_ChambollePock.jl index bb87372316..81f5ffbea8 100644 --- a/test/solvers/test_ChambollePock.jl +++ b/test/solvers/test_ChambollePock.jl @@ -1,4 +1,5 @@ using Manopt, Manifolds, ManifoldsBase, Test +using ManoptExamples: forward_logs, adjoint_differential_forward_logs @testset "Chambolle-Pock" begin # diff --git a/test/solvers/test_Douglas_Rachford.jl b/test/solvers/test_Douglas_Rachford.jl index dfc388f20d..a2b228f78a 100644 --- a/test/solvers/test_Douglas_Rachford.jl +++ b/test/solvers/test_Douglas_Rachford.jl @@ -1,4 +1,6 @@ using Manifolds, Manopt, Test +using ManifoldDiff: prox_distance + @testset "DouglasRachford" begin # Though this seems a strange way, it is a 
way to compute the mid point M = Sphere(2) diff --git a/test/solvers/test_cyclic_proximal_point.jl b/test/solvers/test_cyclic_proximal_point.jl index 08a9351971..39513be8f3 100644 --- a/test/solvers/test_cyclic_proximal_point.jl +++ b/test/solvers/test_cyclic_proximal_point.jl @@ -1,13 +1,17 @@ using Manifolds, Manopt, Test, Dates, LRUCache +using ManifoldDiff: prox_distance, prox_distance! +using ManoptExamples: prox_Total_Variation, prox_Total_Variation!, L2_Total_Variation +using ManoptExamples: artificial_S1_signal, Lemniscate @testset "Cyclic Proximal Point" begin @testset "Allocating" begin n = 100 N = PowerManifold(Circle(), n) q = artificial_S1_signal(n) - f(M, p) = costL2TV(M, q, 0.5, p) + f(M, p) = L2_Total_Variation(M, q, 0.5, p) proxes = ( - (N, λ, p) -> prox_distance(N, λ, q, p), (N, λ, p) -> prox_TV(N, 0.5 * λ, p) + (N, λ, p) -> prox_distance(N, λ, q, p), + (N, λ, p) -> prox_Total_Variation(N, 0.5 * λ, p), ) q2 = cyclic_proximal_point( N, f, proxes, q; λ=i -> π / (2 * i), stopping_criterion=StopAfterIteration(100) @@ -43,14 +47,15 @@ using Manifolds, Manopt, Test, Dates, LRUCache n = 3 M = Sphere(2) N = PowerManifold(M, NestedPowerRepresentation(), n) - q = artificial_S2_lemniscate([0.0, 0.0, 1.0], n) - f(N, p) = costL2TV(N, q, 0.5, p) + q = [Lemniscate(t) for t in range(0, 2π; length=n)] + f(N, p) = L2_Total_Variation(N, q, 0.5, p) proxes! = ( (N, qr, λ, p) -> prox_distance!(N, qr, λ, q, p), - (N, q, λ, p) -> prox_TV!(N, q, 0.5 * λ, p), + (N, q, λ, p) -> prox_Total_Variation!(N, q, 0.5 * λ, p), ) proxes = ( - (N, λ, p) -> prox_distance(N, λ, q, p), (N, λ, p) -> prox_TV(N, 0.5 * λ, p) + (N, λ, p) -> prox_distance(N, λ, q, p), + (N, λ, p) -> prox_Total_Variation(N, 0.5 * λ, p), ) s1 = cyclic_proximal_point( N, f, proxes, q; λ=i -> π / (2 * i), stopping_criterion=StopAfterIteration(100) @@ -89,14 +94,15 @@ using Manifolds, Manopt, Test, Dates, LRUCache n = 3 M = Sphere(2) N = PowerManifold(M, NestedPowerRepresentation(), n) - q = artificial_S2_lemniscate([0.0, 0.0, 1.0], n) - f(N, x) = costL2TV(N, q, 0.5, x) + q = [Lemniscate(t) for t in range(0, 2π; length=n)] + f(N, x) = L2_Total_Variation(N, q, 0.5, x) proxes! 
= ( (N, qr, λ, p) -> prox_distance!(N, qr, λ, q, p), - (N, q, λ, p) -> prox_TV!(N, q, 0.5 * λ, p), + (N, q, λ, p) -> prox_Total_Variation!(N, q, 0.5 * λ, p), ) proxes = ( - (N, λ, p) -> prox_distance(N, λ, q, p), (N, λ, p) -> prox_TV(N, 0.5 * λ, p) + (N, λ, p) -> prox_distance(N, λ, q, p), + (N, λ, p) -> prox_Total_Variation(N, 0.5 * λ, p), ) for i in 1:2 mpo1 = ManifoldProximalMapObjective(f, proxes) @@ -123,9 +129,10 @@ using Manifolds, Manopt, Test, Dates, LRUCache M = Euclidean(3) p = ones(3) O = CyclicProximalPointState(M, p) - f(M, p) = costL2TV(M, q, 0.5, p) + f(M, p) = L2_Total_Variation(M, q, 0.5, p) proxes = ( - (M, λ, p) -> prox_distance(M, λ, q, p), (M, λ, p) -> prox_TV(M, 0.5 * λ, p) + (M, λ, p) -> prox_distance(M, λ, q, p), + (M, λ, p) -> prox_Total_Variation(M, 0.5 * λ, p), ) s = CyclicProximalPointState( M, f; stopping_criterion=StopAfterIteration(1), λ=i -> i diff --git a/test/solvers/test_gradient_descent.jl b/test/solvers/test_gradient_descent.jl index de84700e19..3a0cc64750 100644 --- a/test/solvers/test_gradient_descent.jl +++ b/test/solvers/test_gradient_descent.jl @@ -1,5 +1,7 @@ using Manopt, Manifolds, Test, Random +using ManifoldDiff: grad_distance + @testset "Gradient Descent" begin @testset "allocating Circle" begin M = Circle() diff --git a/test/solvers/test_primal_dual_semismooth_Newton.jl b/test/solvers/test_primal_dual_semismooth_Newton.jl index f0c3583168..d7d08fdbad 100644 --- a/test/solvers/test_primal_dual_semismooth_Newton.jl +++ b/test/solvers/test_primal_dual_semismooth_Newton.jl @@ -1,4 +1,6 @@ using Manopt, Manifolds, ManifoldsBase, Test +using ManoptExamples: adjoint_differential_forward_logs +using ManifoldDiff: differential_shortest_geodesic_startpoint @testset "PD-RSSN" begin # diff --git a/tutorials/CountAndCache.qmd b/tutorials/CountAndCache.qmd index 8aa02dd3c9..ee01b8a8ec 100644 --- a/tutorials/CountAndCache.qmd +++ b/tutorials/CountAndCache.qmd @@ -61,7 +61,8 @@ Pkg.activate("."); # for reproducibility use the local tutorial environment. ```{julia} #| output: false -using Manopt, Manifolds, Random, LRUCache, LinearAlgebra +using Manopt, Manifolds, Random, LRUCache, LinearAlgebra, ManifoldDiff +using ManifoldDiff: grad_distance ``` ## Counting diff --git a/tutorials/HowToRecord.qmd b/tutorials/HowToRecord.qmd index 02a291c367..73480088aa 100644 --- a/tutorials/HowToRecord.qmd +++ b/tutorials/HowToRecord.qmd @@ -30,7 +30,8 @@ Pkg.activate("."); # for reproducibility use the local tutorial environment. Let's first load the necessary packages. ```{julia} -using Manopt, Manifolds, Random +using Manopt, Manifolds, Random, ManifoldDiff +using ManifoldDiff: grad_distance Random.seed!(42); ``` diff --git a/tutorials/Optimize.qmd b/tutorials/Optimize.qmd index 7e257752aa..f331eb7868 100644 --- a/tutorials/Optimize.qmd +++ b/tutorials/Optimize.qmd @@ -63,7 +63,8 @@ Let's assume you have already installed both `Manopt.jl` and `Manifolds.jl` in J Then we can get started by loading both packages as well as `Random.jl` for persistency in this tutorial. 
 ```{julia}
-using Manopt, Manifolds, Random, LinearAlgebra
+using Manopt, Manifolds, Random, LinearAlgebra, ManifoldDiff
+using ManifoldDiff: grad_distance, prox_distance
 Random.seed!(42);
 ```
diff --git a/tutorials/Project.toml b/tutorials/Project.toml
index 3623da7067..e1bb797b96 100644
--- a/tutorials/Project.toml
+++ b/tutorials/Project.toml
@@ -18,7 +18,7 @@ Distributions = "0.25"
 FiniteDifferences = "0.12"
 IJulia = "1"
 LRUCache = "1.4"
-ManifoldDiff = "0.3"
+ManifoldDiff = "0.3.9"
 Manifolds = "0.8.81, 0.9"
 ManifoldsBase = "0.14.12, 0.15"
 Manopt = "0.4.22"
diff --git a/tutorials/StochasticGradientDescent.qmd b/tutorials/StochasticGradientDescent.qmd
index e42d5d6050..83032cd964 100644
--- a/tutorials/StochasticGradientDescent.qmd
+++ b/tutorials/StochasticGradientDescent.qmd
@@ -33,7 +33,8 @@ Pkg.activate("."); # for reproducibility use the local tutorial environment.
 ```

 ```{julia}
-using Manifolds, Manopt, Random, BenchmarkTools
+using Manifolds, Manopt, Random, BenchmarkTools, ManifoldDiff
+using ManifoldDiff: grad_distance
 Random.seed!(42);
 ```
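The test and tutorial hunks above all follow the same pattern: functionality that previously shipped with `Manopt.jl` is now imported explicitly, the proximal maps and gradients of the distance from `ManifoldDiff.jl`, and the total-variation costs, proximal maps, and artificial data from `ManoptExamples.jl`. The following sketch mirrors the updated `cyclic_proximal_point` test; it is illustrative only: the signal length, the TV weight `0.5`, and the stopping criterion are simply the values used in that test, and it assumes the package versions pinned in the `tutorials/Project.toml` hunk above.

```julia
using Manopt, Manifolds
using ManifoldDiff: prox_distance               # formerly Manopt.prox_distance
using ManoptExamples: artificial_S1_signal,     # formerly Manopt.artificial_S1_signal
    L2_Total_Variation,                         # formerly Manopt.costL2TV
    prox_Total_Variation                        # formerly Manopt.prox_TV

n = 100
N = PowerManifold(Circle(), n)
q = artificial_S1_signal(n)

# cost: squared distance to the data q plus a total-variation term weighted by 0.5
f(M, p) = L2_Total_Variation(M, q, 0.5, p)
# one proximal map per summand, as expected by cyclic_proximal_point
proxes = (
    (M, λ, p) -> prox_distance(M, λ, q, p),
    (M, λ, p) -> prox_Total_Variation(M, 0.5 * λ, p),
)

p_res = cyclic_proximal_point(
    N, f, proxes, q; λ=i -> π / (2 * i), stopping_criterion=StopAfterIteration(100)
)
```

The mutating variants (`prox_distance!`, `prox_Total_Variation!`) follow the same renaming, as the in-place test hunks above show.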