Replies: 1 comment
-
My bad, the f(x) in each loop is not the same.
-
I don't see a Discussions tab in the main CORN repo, so I am posting this here.
The time and memory complexity of the current CORN loss function is O(N * K), where N is the number of examples and K is the number of ranks.
I think it can be computed in O(N) (or O(N * log(K)) if you are lazy and sort instead of counting). This matters for large K, e.g. the age dataset.
Basically, you just need to sort the examples by their label values. The two sum loops can then be collapsed into one, since log(f(x)) and log(1 - f(x)) are unchanged across loops, and the indicator function telescopes because the training subsets are nested (each subset contains the next).
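For reference, here is a minimal NumPy sketch of the naive O(N * K) computation being discussed: one binary task per rank threshold, each trained on a nested subset of the data. This is my own reconstruction of the CORN loss from its definition, not the repo's actual implementation, so details (normalization, subset indexing) may differ from coral-pytorch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corn_loss_naive(logits, labels, num_classes):
    """Naive CORN loss sketch (assumed form, not the repo's code).

    logits: (N, K-1) array, one column of conditional logits per task.
    Task k (k = 0..K-2) is trained only on the subset with label > k - 1,
    so the subsets are nested: S_0 contains S_1 contains S_2 ...
    The double loop below is the O(N * K) computation discussed above;
    note each task uses a *different* column f_k(x), which is why the
    two sums cannot simply be merged.
    """
    total, count = 0.0, 0
    for k in range(num_classes - 1):
        mask = labels > k - 1                  # subset S_k (label >= k)
        if not mask.any():
            continue
        p = sigmoid(logits[mask, k])           # f_k(x) for this task only
        y = (labels[mask] > k).astype(float)   # binary target: label > k
        total += -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
        count += int(mask.sum())
    return total / count
```

The inner masked sum over N examples, repeated for each of the K - 1 tasks, is where the O(N * K) cost comes from.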