
Commit

rand -> allocate_result
mateuszbaran committed Dec 16, 2023
1 parent f79d857 commit bc795ae
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions src/plans/stepsize.jl
@@ -215,7 +215,7 @@ last step size.
this returns an initial guess. The default uses the last obtained stepsize
as well as for internal use
- * `candidate_point` (`rand(M)`)
+ * `candidate_point` (`allocate_result(M, rand)`)
Furthermore the following fields act as safeguards
@@ -554,7 +554,7 @@ mutable struct NonmonotoneLinesearch{
M::AbstractManifold=DefaultManifold();
bb_min_stepsize::Float64=1e-3,
bb_max_stepsize::Float64=1e3,
- candidate_point::P=rand(M),
+ candidate_point::P=allocate_result(M, rand),
initial_stepsize::Float64=1.0,
memory_size::Int=10,
retraction_method::TRM=default_retraction_method(M),
@@ -770,9 +770,9 @@ Generate a Wolfe-Powell linesearch
## Keyword Arguments
- * `candidate_point` (`rand(M)`) memory for an internims candidate
- * `candidate_tangent` (`zero_vector(M; vector_at=candidate_point)`) memory for a gradient
- * `candidate_direcntion` (`zero_vector(M; vector_at=candidate_point)`) memory for a direction
+ * `candidate_point` (`allocate_result(M, rand)`) memory for an internims candidate
+ * `candidate_tangent` (`allocate_result(M, zero_vector, candidate_point)`) memory for a gradient
+ * `candidate_direcntion` (`allocate_result(M, zero_vector, candidate_point)`) memory for a direction
* `max_stepsize` ([`max_stepsize`](@ref)`(M, p)`) – largest stepsize allowed here.
* `retraction_method` – (`ExponentialRetraction()`) the retraction to use
* `stop_when_stepsize_less` - (`0.0`) smallest stepsize when to stop (the last one before is taken)
@@ -1088,7 +1088,7 @@ as well as the internal fields
# Constructor
- AdaptiveWNGrad(M=DefaultManifold, grad_f=(M,p) -> zero_vector(M,rand(M)), p=rand(M); kwargs...)
+ AdaptiveWNGrad(M=DefaultManifold, grad_f=(M, p) -> zero_vector(M, rand(M)), p=rand(M); kwargs...)
Where all above fields with defaults are keyword arguments.
An additional keyword arguments
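All six replacements follow one pattern: keyword defaults that previously computed a concrete value (`rand(M)` for a candidate point, `zero_vector(M; vector_at=...)` for tangent vectors) now only allocate memory of the matching type via `allocate_result` from ManifoldsBase.jl, since the docstrings describe these fields as memory for internal use. A minimal sketch of the difference follows; it is not part of the commit, assumes ManifoldsBase.jl is loaded, uses the `DefaultManifold` already appearing in the constructor defaults above, and the variable names are illustrative only.

    using ManifoldsBase
    using ManifoldsBase: allocate_result  # allocation helper from ManifoldsBase.jl

    M = ManifoldsBase.DefaultManifold(3)  # manifold used as default in the constructors above

    # Old default: draws an actual random point on M.
    p_old = rand(M)

    # New default: only allocates storage of the type and size that rand(M) would
    # return; the contents are scratch memory until the linesearch writes a
    # candidate point into it.
    p_scratch = allocate_result(M, rand)

    # Same pattern for tangent-vector memory: allocate a container for a tangent
    # vector at the candidate point instead of computing zero_vector eagerly.
    X_scratch = allocate_result(M, zero_vector, p_scratch)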
