Rework LevenbergMarquardt to use the vector function functionality #432
Conversation
adapt the vectorial plan such that the functions are ordered alphabetically.
…e with smoothing implemented.
This was faster than I thought. Tests should be up and running again for the rework. For LM this is even non-breaking, since what mainly changed is the internal representation of the vector function and its Jacobian. Smoothing should be covered as well. For general vector functions there is one small (breaking) change, though that was more of a bug, I think.
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files

@@ Coverage Diff @@
##           master     #432    +/- ##
=======================================
  Coverage   99.87%   99.88%
=======================================
  Files          78       78
  Lines        8278     8384   +106
=======================================
+ Hits         8268     8374   +106
  Misses         10       10

☔ View full report in Codecov by Sentry.
There we are, everything has been tested. LM can now be used with much more diverse inputs for the Jacobian, for example as gradients of the components, and smoothing is available, though I am not yet 100% sure it converges. Do you, @Affie, have a concrete example you wanted to test this on?
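To illustrate what is meant by “inputs for the Jacobian, also as gradients of the components”, here is a small Euclidean sketch. This is purely illustrative, the function names are mine and not Manopt's API: the point is just that the derivative data can come either as one full Jacobian matrix or as one gradient per component (row).

```julia
# Two equivalent representations of the derivative of a residual function
# f: ℝ² → ℝ³ (Euclidean toy case; names are hypothetical, not Manopt API).
using LinearAlgebra

f(x) = [x[1]^2 - 1, x[1] * x[2], sin(x[2])]            # residual vector in ℝ³

# (1) one full 3×2 Jacobian matrix
jacobian_f(x) = [2x[1] 0.0; x[2] x[1]; 0.0 cos(x[2])]

# (2) one gradient per component, i.e. the rows of the Jacobian
component_gradients(x) = [[2x[1], 0.0], [x[2], x[1]], [0.0, cos(x[2])]]

x = [0.3, 0.7]
@assert jacobian_f(x) ≈ reduce(vcat, (g' for g in component_gradients(x)))
```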
I've taken a look and I think we need to generalize robustification a bit. Both Ceres and Triggs consider a more general objective, where the smoothing function is applied to groups (blocks) of residuals rather than to single components. I think for simplicity we can assume that each group has the same size, but we need to let it be arbitrarily large.
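For reference, the objective Ceres describes has roughly the form (paraphrasing its documentation; the grouping into residual blocks is the point here):

$$\min_x \; \frac{1}{2} \sum_i \rho_i\left( \lVert f_i(x_{i_1}, \ldots, x_{i_k}) \rVert^2 \right)$$

where each $f_i$ is a residual block of some size and $\rho_i$ is the per-block smoothing/robust loss.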
I thought about that a bit as well. My main problem is that I could do this current rework in about a week. It still got quite technical for the vectorial function, but it is nice to have the Jacobians therein covered nicely now. For arbitrary blocks, however, the current structure of the smoothing is useless; that would need a complete rewrite of this work, and I am not sure how that could be done or how much time it would take. So if this is useless without blocks, we should just drop the smoothing from this PR and only keep the vectorial rework.
OK, then I think it's better to do the vectorial rework here and add smoothing when we have time to work on blocks.
Sounds fair. Sad for the whole factory I built, but it is what it is; then I wasted my energy on that. We are also missing the proper …
We can finish smoothing later, it's certainly not a wasted effort. Maybe a little bit was lost on the premature integration, but finishing this will be a good next project IMO (after LieGroups.jl and GeometricKalman.jl).
Yeah, I will try to put it aside on another branch then.
I reworked this PR again. There is now a “start-smoothing” branch to keep the current approach. There was something I did not like anyway: Ceres does smoothing only after taking the absolute value and squaring the residuals. That way “Huber” is more like “Huber on the sqrt” to cancel out the squaring. Currently these modifiers (smoothings) cannot have parameters, which is also something one could improve in a new PR.
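To make that convention concrete, here is a tiny sketch (my own, not Ceres or Manopt code) of a Huber-type loss in the “applied to the squared residual” convention: the result is quadratic for small residuals and linear in |r| for large ones.

```julia
# Ceres-style convention (sketch): the loss ρ acts on s = r², so "Huber" is
# effectively Huber applied to sqrt(s), cancelling the squaring.
huber_on_squared(s; δ = 1.0) = s <= δ^2 ? s : 2δ * sqrt(s) - δ^2

r = [0.1, 0.5, 3.0]                  # residuals
loss = huber_on_squared.(r .^ 2)     # ≈ r² for |r| ≤ δ, ≈ 2δ|r| − δ² beyond
```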
Hi, I can create a toy example in RoME, but I'm not following all of the technical discussion here. It looks like smoothing was removed from this PR; is it still needed to test anything?
Hi! We need to derive efficient update rules to add smoothing to Manopt, and it appears to be more work than initially anticipated. We already have two big ongoing projects, so this will take some time. The rework in the PR makes nonlinear least squares in Manopt a bit more flexible, but it probably won't be useful to you until smoothing is added.
I would still be interested in the example, but the main problem for now is that the Ceres link in the issue and the literature therein do the rescaling/updates for non-smooth objectives only in the Euclidean case, and no one has ever done that on manifolds. As Mateusz said, this is something we both find interesting and want to derive, but we have no time estimate for when this will happen, so we omit it from this PR for now. Yes, then this PR “only” reworks internals of LM, but that is still nice to have; just from a feature perspective, it is super boring.
LGTM up to the …
Hi, here is a draft PR with a little toy example triangulating a point on SE2 with bearing measurements to 4 points on TranslationGroup(2), with one outlier: JuliaRobotics/RoME.jl#767
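For readers without a RoME.jl background, the residuals in such an example look roughly like the following wrapped bearing errors. This is a plain-Julia sketch of the setup described above, not the actual RoME.jl code, and the numbers are made up:

```julia
# A pose (x, y, θ) observes bearings to four known 2D landmarks; each residual
# is the wrapped difference between predicted and measured bearing.
landmarks = [[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]]
bearings  = [0.05, π / 2, π, -π / 2 + 0.8]      # the last one plays the outlier

wrap(a) = mod(a + π, 2π) - π
function bearing_residuals(pose)
    x, y, θ = pose
    return [wrap(atan(l[2] - y, l[1] - x) - θ - b) for (l, b) in zip(landmarks, bearings)]
end

bearing_residuals((0.1, -0.2, 0.0))
```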
Thanks! We will try that when working on robustness. |
… is now unified to m.
This makes the interface for Levenberg-Marquardt a bit more flexible.
🛣️🗺️
- `VectorGradientFunction`
- `get_jacobian` (formerly “get-gradient-from-jacobian”) to “pass through to the new inner `vgf`” without smoothing

The main thing still to figure out is the new Jacobian to return in the case of smoothing and how to adapt the regularisation parameter $\lambda_k$ for this case. This last point especially I have not yet understood, neither from the Ceres documentation linked above nor from the nearly-the-same phrasing at the end of Section 4.3 in Triggs et al.
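For context on where $\lambda_k$ enters in the unsmoothed case, here is a minimal Euclidean sketch of one damped step (my own illustration, not Manopt's implementation); the open question above is what the Jacobian and the $\lambda_k$ update should become once a smoothing $\rho$ is composed with the residuals.

```julia
using LinearAlgebra

# One (Euclidean) Levenberg-Marquardt step: solve the damped normal equations
# (JᵀJ + λI) δ = -Jᵀ r; λ is afterwards adapted from how well the quadratic
# model predicted the actual decrease (gain ratio).
lm_step(r, J, λ) = (J' * J + λ * I) \ (-(J' * r))

# toy residual and (constant) Jacobian
r(x) = [x[1] - 1.0, 2.0 * (x[2] - 0.5)]
J(x) = [1.0 0.0; 0.0 2.0]

x, λ = [0.0, 0.0], 0.1
x += lm_step(r(x), J(x), λ)   # λ would be increased/decreased based on the gain ratio
```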