This repository has been archived by the owner on Dec 6, 2023. It is now read-only.
I haven't been able to find appropriate documentation for the stopping criteria used in SAG(A) and coordinate descent. Is it violation of the KKT conditions? It would be great to make this explicit in the documentation, and to verify that the stopping criteria are consistent across solvers.
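For reference, a KKT-violation test for the Lasso can be sketched as below. This is a minimal illustration of the idea, not the exact check any particular solver implements: for the problem `min 0.5*||y - Xw||^2 + alpha*||w||_1`, optimality requires `X_j^T (Xw - y) = -alpha*sign(w_j)` on the active set and `|X_j^T (Xw - y)| <= alpha` elsewhere.

```python
import numpy as np

def kkt_violation(X, y, w, alpha):
    """Maximum KKT residual for the Lasso objective
    0.5 * ||y - X w||^2 + alpha * ||w||_1 (illustrative sketch)."""
    g = X.T @ (X @ w - y)                   # gradient of the smooth part
    nonzero = w != 0
    viol = np.abs(g + alpha * np.sign(w))   # residual on the active set
    # For zero coordinates, only |g_j| <= alpha is required:
    viol[~nonzero] = np.maximum(np.abs(g[~nonzero]) - alpha, 0.0)
    return viol.max() if viol.size else 0.0
```

A solver would stop once this quantity falls below some tolerance.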
Optionally, it would also be nice to be able to monitor the loss on a validation set to do early stopping, as is done with dedicated callbacks in e.g. keras, but that is a feature that should land in scikit-learn.
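The validation-based early stopping mentioned above could look roughly like the following sketch: plain gradient descent on least squares, stopped once the validation loss has not improved for a chosen number of iterations. The `patience` mechanism and all parameter names here are illustrative assumptions, not an existing API.

```python
import numpy as np

def fit_with_early_stopping(X_tr, y_tr, X_val, y_val,
                            lr=0.01, patience=5, max_iter=1000):
    """Gradient descent on 0.5 * mean((Xw - y)^2), with early stopping
    on a held-out validation set (illustrative sketch)."""
    w = np.zeros(X_tr.shape[1])
    best_loss, best_w, wait = np.inf, w.copy(), 0
    for _ in range(max_iter):
        grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
        w -= lr * grad
        val_loss = 0.5 * np.mean((X_val @ w - y_val) ** 2)
        if val_loss < best_loss:
            best_loss, best_w, wait = val_loss, w.copy(), 0
        else:
            wait += 1                  # no improvement this iteration
            if wait >= patience:
                break                  # stop and keep the best weights
    return best_w, best_loss
```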
arthurmensch changed the title from "Documentation: stopping criterion" to "[Documentation] Stopping criterion" on Feb 20, 2017.
Yes, we are looking at the residuals of the KKT conditions, "normalized" by the residuals at the first iteration, if I remember correctly. Anyway, it would be cool to have the stopping criteria you mention.
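The relative test described above (current KKT residual compared against the residual at the start) can be sketched inside a coordinate-descent loop for the Lasso. This is a hedged illustration under the assumptions stated in the comments, not the project's actual implementation:

```python
import numpy as np

def cd_lasso(X, y, alpha, tol=1e-4, max_iter=1000):
    """Cyclic coordinate descent for 0.5 * ||y - Xw||^2 + alpha * ||w||_1,
    stopping when the max KKT violation falls below `tol` times the
    violation measured before the first iteration (illustrative sketch)."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ w                       # current residual

    def max_violation():
        g = -X.T @ r                    # gradient of the smooth part
        return np.where(w != 0,
                        np.abs(g + alpha * np.sign(w)),
                        np.maximum(np.abs(g) - alpha, 0.0)).max()

    viol0 = max_violation()             # normalization constant
    for _ in range(max_iter):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            r += X[:, j] * w[j]         # remove coordinate j's contribution
            rho = X[:, j] @ r
            # Soft-thresholding update for coordinate j:
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
            r -= X[:, j] * w[j]
        if max_violation() <= tol * max(viol0, 1e-12):
            break                       # relative KKT test satisfied
    return w
```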