Add wavefield preconditioner #584
base: dev
Conversation
Force-pushed from b6d4920 to 8701960.
You need to take the square root of the sum, not the sum of the square roots.
How stupid of me, thank you! This now works as expected :)
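For context, the distinction being discussed is between the norm-style reduction sqrt(Σᵢ xᵢ) and the incorrect Σᵢ sqrt(xᵢ). A minimal numpy sketch (array names hypothetical, not the PR's code):

```python
import numpy as np

# Hypothetical per-mode intensities; the review comment is about the
# order of operations when reducing them to a single amplitude map.
intensity = np.random.rand(4, 256, 256)  # modes x pixels

wrong = np.sqrt(intensity).sum(axis=0)   # sum of square roots: incorrect
right = np.sqrt(intensity.sum(axis=0))   # square root of the sum: correct
```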
Force-pushed from c00cdd1 to 4a74199.
Force-pushed from 0b31028 to 1cd1e03.
@daurer now we just need the four CUDA kernels. Then I can add some tests for the cupy and pycuda kernels and we should be good to go.
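The kernels themselves are not shown in this excerpt. As a rough illustration of the kind of reduction involved, here is a plain-cupy sketch of accumulating the probe power over scan positions into an object-sized buffer (all names hypothetical; this is not one of the PR's actual kernels):

```python
import cupy as cp

def accumulate_probe_power(fluence, probe, rows, cols):
    # Accumulate |probe|^2 into an object-sized buffer at each scan position.
    # fluence: (H, W) float32; probe: (k, k) complex64; rows/cols: corner indices.
    k = probe.shape[-1]
    power = cp.abs(probe) ** 2
    for r, c in zip(rows, cols):
        fluence[r:r + k, c:c + k] += power
    return fluence
```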
The secret sauce of the LSQ-ML algorithm seems to be what they call average update directions:
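(The formula was embedded as an image in the original comment; roughly, in the notation of the LSQ-ML paper, the averaged update directions for object O and probe P take the form below, with Δψⱼ the exit-wave update at scan position j.)

```latex
\Delta O = \frac{\sum_j P_j^{*}\,\Delta\psi_j}{\sum_j \lvert P_j \rvert^{2}}
\qquad
\Delta P = \frac{\sum_j O_j^{*}\,\Delta\psi_j}{\sum_j \lvert O_j \rvert^{2}}
```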
In reality this is just the ML update directions scaled by 1/L, where L is the Lipschitz constant of the exit-wave error, as shown by Maiden et al. in "Further improvements to the ptychographical iterative engine". Note that in the optimization literature, 1/L is known as the classic step size with guaranteed convergence for constant-step gradient descent on convex functions with L-Lipschitz gradients:
https://math.stackexchange.com/questions/3587312/the-biggest-step-size-with-guaranteed-convergence-for-constant-step-size-gradien
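As a toy illustration of the 1/L step size (not PtyPy code): for a quadratic f(x) = ½ xᵀAx the gradient is Ax and L = λ_max(A), so a constant step of 1/L converges to the minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + np.eye(20)          # symmetric positive definite
L = np.linalg.eigvalsh(A).max()   # Lipschitz constant of the gradient Ax

x = rng.standard_normal(20)
for _ in range(500):
    x -= (1.0 / L) * (A @ x)      # gradient descent with step size 1/L

print(np.linalg.norm(x))          # shrinks toward 0, the unique minimizer
```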
This PR adds an option to ML called "wavefield_precond" that enables this "average update" preconditioner. The results on the Moonflower test data shown in the images below suggest that it is highly effective at accelerating ML.
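For reference, enabling the option in a standard PtyPy parameter tree would look something like the fragment below (a sketch of the engine branch only; of these names, just wavefield_precond comes from this PR):

```python
import ptypy.utils as u

p = u.Param()
p.engines = u.Param()
p.engines.engine00 = u.Param()
p.engines.engine00.name = 'ML'
p.engines.engine00.numiter = 200
p.engines.engine00.wavefield_precond = True  # the option added by this PR
```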
Vanilla ML (all parameters default, 200 iterations): [reconstruction image]
ML with wavefield_precond=True (all other parameters default, 200 iterations): [reconstruction image]