This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
I couldn't find an official code release for the paper arxiv.org/abs/2210.07277, which proposes an extension to MSN (Masked Siamese Networks) that allows arbitrary feature priors.
It looks like the main difference is a change to a single term in the loss function. Is that correct? How would I implement the changes described in the PMSN paper?
As far as I can tell, the previous MSN loss is

$$\frac{1}{M}\sum_{i=1}^{M} H(p_i^+, p_i) - \lambda H(\bar{p}),$$

where $\bar{p}$ is the mean of the anchor predictions and $-\lambda H(\bar{p})$ is the mean-entropy-maximization regularizer.

The new loss, Prior Matching for Siamese Networks (PMSN), replaces that regularizer with a KL term against a power-law prior:

$$\frac{1}{M}\sum_{i=1}^{M} H(p_i^+, p_i) + \lambda\, \mathrm{KL}(\bar{p}\,\|\,p_\tau), \qquad p_\tau(k) \propto k^{-\tau}.$$
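If that reading is right, the swap only touches the regularizer. Here is a minimal numpy sketch of the two regularizers, assuming the mean prediction $\bar{p}$ over the $K$ prototypes has already been computed; the function names and the $\tau$ default are my own, not from an official release:

```python
import numpy as np

def power_law_prior(k, tau):
    # Power-law target distribution p_tau(i) ∝ i^{-tau} over k prototypes.
    p = np.arange(1, k + 1, dtype=np.float64) ** (-tau)
    return p / p.sum()

def entropy(p, eps=1e-12):
    # Shannon entropy H(p); eps guards against log(0).
    return -np.sum(p * np.log(p + eps))

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) with the same numerical guard.
    return np.sum(p * np.log((p + eps) / (q + eps)))

def msn_regularizer(mean_probs, lam=1.0):
    # MSN: maximize the entropy of the mean prediction,
    # i.e. subtract lam * H(p_bar) from the loss.
    return -lam * entropy(mean_probs)

def pmsn_regularizer(mean_probs, lam=1.0, tau=0.25):
    # PMSN: instead pull the mean prediction toward a power-law
    # prior by adding lam * KL(p_bar || p_tau) to the loss.
    prior = power_law_prior(len(mean_probs), tau)
    return lam * kl_divergence(mean_probs, prior)
```

Note that with $\tau = 0$ the prior is uniform and the KL term reduces (up to a constant) to the negative-entropy term, which matches the paper's framing of MSN's regularizer as an implicit uniform prior.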
In case anyone stumbles upon this, the lightly package has a nice implementation of PMSN. I believe they're using a regularization weight of $\lambda = 1$.