Problems with quasi-Newton and PositiveDefinite matrix #401
Without an MWE (a minimal (non-)working example), or at least the error message and where it appears, I can only ask more questions.
To start with, could you attach the stack trace for that exception? Maybe there is a relatively simple solution. Our SPD implementation already does some numerical stabilization, so maybe we missed a few spots where it would be beneficial to do it.
I will check to come up with a minimal working example. Currently the code I am using is here.
The stacktrace is here: stacktrace.txt
v^TAv itself would be (locally, geodesically) convex, but in such long code I do not directly see how far you deviate from just that, so of course sums of these are also fine. Matrix size is okay, it is not too large.

Concerning the point with the “stops before converged” for gradient descent – why does it stop? (you can set …) At first glance from the error (distance in line 73 on the input …). In such long code I also do not directly see KL and chi ;) But I can try to find time in the evening today and take a closer look.

edit: I narrowed it further down, it seems to be the locking condition check (that is not called when alpha is zero) – I can only check this further on a concrete (failing) example, but it might be that we already need a tolerance for the alpha. Though it is interesting that this did not cause problems until now.
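A generic illustration of the tolerance idea from the edit above (a sketch only, not Manopt.jl's actual internals; the two function names are hypothetical):

```julia
# Generic illustration (not Manopt.jl's internals): an exact-zero check on a
# step length can miss values that are numerically, but not exactly, zero.
is_zero_exact(alpha) = alpha == 0.0
is_zero_with_tol(alpha; atol=1e-14) = abs(alpha) <= atol

alpha = 1e-17                 # a line search can return such a tiny step
is_zero_exact(alpha)          # false – an exact-zero guard would not trigger
is_zero_with_tol(alpha)       # true – a tolerance treats it as a zero step
```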
I tried figuring it out too, but I can't tell where the numerical error could have been introduced.
The default one is …
Sorry for the late answer.
So finally I think this is mostly a problem of numerical errors in my loss formulation, but I also guess that Manopt should keep the matrix positive definite even if there are numerical errors (?). I will try to get a simple failing example running and check for numerical problems in my formulation. But I am on holiday for the next two weeks, so it will take some time.
Thanks for coming back to this.
I have two questions on this.
But if you have an idea of how one could keep them SPD, sure, we can check how we could introduce that.
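One generic way to keep an iterate numerically SPD is to clamp its eigenvalues from below before factorizing. A minimal sketch, assuming a hypothetical helper `clamp_spd` and an arbitrary floor value (this is not what Manopt.jl does internally):

```julia
using LinearAlgebra

# Push a symmetric matrix back to a numerically safe SPD matrix by raising
# all eigenvalues to at least `floor` (name and value are illustrative).
function clamp_spd(p::AbstractMatrix; floor::Real=1e-10)
    E = eigen(Symmetric(p))            # symmetric eigendecomposition
    λ = max.(E.values, floor)          # clamp tiny/negative eigenvalues
    return Symmetric(E.vectors * Diagonal(λ) * E.vectors')
end
```

`cholesky(clamp_spd(p))` then succeeds in cases where `cholesky(p)` throws a `PosDefException`, at the cost of slightly perturbing `p`.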
a) constrained optimisation on the set of matrices (JuMP), or b) Riemannian optimisation on the SPD manifold (Manopt.jl). While I have a personal preference for b) for a few reasons, both are valid approaches. So if you conclude a) works better in your scenario, that is for sure also just fine.
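For completeness, a rough sketch of what option a) could look like in JuMP, with a toy objective; the solver choice (SCS) and the data matrix `C` are assumptions for illustration only:

```julia
using JuMP, LinearAlgebra
import SCS  # any SDP-capable solver; SCS is just an illustrative choice

n = 3
C = Matrix{Float64}(I, n, n)        # placeholder data for a toy objective

model = Model(SCS.Optimizer)
@variable(model, X[1:n, 1:n], PSD)  # X is constrained to the closed PSD cone
@constraint(model, tr(X) == 1)      # normalization so the toy problem is bounded
@objective(model, Min, tr(C * X))
optimize!(model)
value.(X)                           # the minimizer, a PSD matrix
```

One caveat: JuMP's `PSD` variable models the closed cone, so strict positive definiteness is only available indirectly (e.g. by requiring `X - εI` to be PSD), whereas Riemannian optimisation keeps iterates in the open SPD cone by construction, up to the numerical issues discussed here.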
Thanks a lot for this information.
Sure, both are valid ways to go, and it might depend on the actual application which option performs better. Thanks still for the interest in Manopt.jl :)
I am optimizing over positive definite matrices, and with gradient descent everything works quite well. I tried to switch to quasi-Newton; it works most of the time, but sometimes I get a PosDefException in the line search.
I tried to change the line search, but I also get it sometimes with AdaptiveWNGradient, and I run into similar problems with a constant stepsize.
I could imagine that this is because the eigenvalues are too close to 0 and the Cholesky factorization fails, even though the matrix is, strictly speaking, still SPD.
Does anyone have any insights on this issue?
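For reference, a minimal sketch of the kind of setup described above. The cost here is a Riemannian-mean toy problem, a stand-in assumption rather than the actual loss from the thread, and the data are random:

```julia
using Manopt, Manifolds, LinearAlgebra, Random

Random.seed!(42)
n = 3
M = SymmetricPositiveDefinite(n)

# a handful of random SPD "data" matrices (stand-ins for the real problem data)
data = [(B = randn(n, n); B * B' + 0.1 * I) for _ in 1:5]

# Riemannian mean: cost and gradient on the SPD manifold
f(M, p) = sum(distance(M, p, d)^2 for d in data) / (2 * length(data))
grad_f(M, p) = -sum(log(M, p, d) for d in data) / length(data)

p0 = Matrix{Float64}(I, n, n)
p_gd = gradient_descent(M, f, grad_f, p0)  # the solver that "works quite well"
p_qn = quasi_Newton(M, f, grad_f, p0)      # the solver that intermittently threw

# Illustration of the suspected failure mode: Cholesky on a barely-SPD matrix.
P = [1.0 1.0; 1.0 1.0+1e-18]  # SPD in exact arithmetic, singular in Float64
cholesky(P)                   # throws PosDefException
```

For the toy mean cost, both solver calls should converge without problems; plugging in the actual loss would be the quickest route to a reproducible `PosDefException`.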