faster solver for computing confidence maps #6767

Closed · wyli opened this issue Jul 25, 2023 · 1 comment · Fixed by #7876
wyli (Contributor) commented Jul 25, 2023

Follow-up of #6709:

The issue, though, is that the process for installing Octave and oct2py is more complex than for our other dependencies, requiring a separate install and a correctly configured PATH variable. This is also an issue for our CI/CD system, since we do need to test this transform. The library you're using, oct2py, also seems like a rather small project whose long-term support is not assured. I'd still recommend we look into some other solver (something wrapping Eigen?) to avoid this dependency.

Hello, for now I've removed the Octave dependency; the code now uses only the SciPy solver.
As far as I know, there is no decomposition or iterative method that would make
the algorithm faster while exactly preserving the results. I am also sharing some metrics
comparing Octave and SciPy. I believe SciPy is a reasonable choice, since its sparse solver
is also likely to be optimized over time.
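
For illustration, here is a minimal sketch of the direct-solver path, assuming the confidence map reduces to a sparse linear system L x = b (a tiny synthetic Laplacian stands in for the real image system; this is not the MONAI code itself):

```python
import numpy as np
from scipy.sparse import csc_matrix, diags
from scipy.sparse.linalg import spsolve

n = 1000
# Tiny 1-D Laplacian-like SPD matrix standing in for the image system.
lap = csc_matrix(diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)))
rhs = np.random.rand(n)

# spsolve performs a sparse LU factorization: exact up to floating-point
# error and deterministic, but costly for large images.
x = spsolve(lap, rhs)
```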

[image: comparison of Octave and SciPy results]

For an image with 118677 pixels (around 350 × 350), on an i7-13700K, with 10 runs each:

SciPy mean time: 5.764 ± 0.207 s
Octave mean time: 0.777 ± 0.019 s
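
As a rough sketch, the timing could have been gathered like this (reusing `lap`, `rhs`, and `spsolve` from the sketch above; the exact benchmark script is not shown in this thread):

```python
import time
import numpy as np

def bench(solve, lap, rhs, runs=10):
    # Time `runs` repeated solves and report mean and standard deviation.
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        solve(lap, rhs)
        times.append(time.perf_counter() - t0)
    return np.mean(times), np.std(times)

mean_t, std_t = bench(spsolve, lap, rhs)
print(f"SciPy mean time: {mean_t:.3f} +- {std_t:.3f}")
```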

Originally posted by @MrGranddy in #6709 (comment)

MrGranddy (Contributor) commented Jul 25, 2023

I recently achieved satisfactory results with the Conjugate Gradient method. It does not, of course, solve the system exactly, so there will be some minor differences compared to the original results, but I believe it is close enough; even if it is not, users can always trade accuracy for speed according to their needs if the parameters controlling that trade-off are exposed. I will push the new version soon.

Basically, two parameters control the algorithm: max_iter, the maximum number of iterations taken by the algorithm, and tol, the numerical tolerance used to check convergence; a lower tol means better accuracy but slower operation. max_iter can also be used to put an upper limit on the time required.
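
A sketch of what the iterative path could look like with SciPy's Conjugate Gradient solver (reusing `lap` and `rhs` from the first sketch; parameter names on the MONAI side may differ):

```python
from scipy.sparse.linalg import cg

# `maxiter` and `tol` map to the max_iter / tol knobs described above.
# Note: newer SciPy releases rename `tol` to `rtol`.
x_cg, info = cg(lap, rhs, tol=1e-6, maxiter=200)
# info == 0 means CG converged to the tolerance; info > 0 means it
# stopped at maxiter first, trading accuracy for bounded runtime.
```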

If you have any comments now, I can incorporate them before the initial commit, for example on how best to implement this "two-mode" behaviour (one deterministic, one iterative) in MONAI; a sketch of one possible shape follows below. You can of course also wait until the initial commit if that makes more sense.
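
One hypothetical shape for that two-mode behaviour, just to make the idea concrete (names are placeholders, not a proposal for the final API):

```python
from scipy.sparse.linalg import cg, spsolve

def solve_system(lap, rhs, mode="exact", tol=1e-6, max_iter=200):
    # "exact": deterministic sparse LU; "iterative": CG with a
    # user-controlled accuracy/speed trade-off.
    if mode == "exact":
        return spsolve(lap, rhs)
    x, _ = cg(lap, rhs, tol=tol, maxiter=max_iter)
    return x
```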

Here is a showcase of the new algorithm:

[image: showcase of the new algorithm]
