I am not one of the authors, so the answer below is my own understanding and may not be accurate.
Possible after modification.
In my understanding, loss.py should be applicable to any data whose loss function is cross-entropy (CE).
In this method, filtering out noisy IDs is no different from using the small-loss trick: rank the samples by loss and flag the ones with the largest losses (e.g., the top 5%) as possibly noisy. You can aggregate the noisy candidates over epochs and analyse which samples frequently have large losses; see the sketch below.
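A minimal sketch of that idea, assuming a standard PyTorch setup (this is not the authors' code; the function name `flag_noisy_candidates`, the 5% ratio, and the assumption that the loader yields sample indices are all illustrative):

```python
from collections import Counter

import torch
import torch.nn.functional as F

# Counts how often each sample lands in the high-loss bucket across epochs.
noisy_counts = Counter()

def flag_noisy_candidates(model, loader, ratio=0.05, device="cuda"):
    """Rank all samples by per-sample CE loss and return the dataset
    indices of the highest-loss `ratio` fraction as noisy candidates.

    Assumes `loader` yields (inputs, targets, sample_indices) so losses
    can be attributed to fixed dataset positions.
    """
    model.eval()
    all_losses, all_indices = [], []
    with torch.no_grad():
        for x, y, idx in loader:
            logits = model(x.to(device))
            # reduction="none" keeps one loss value per sample
            loss = F.cross_entropy(logits, y.to(device), reduction="none")
            all_losses.append(loss.cpu())
            all_indices.append(idx)
    losses = torch.cat(all_losses)
    indices = torch.cat(all_indices)
    k = max(1, int(ratio * len(losses)))
    top = torch.topk(losses, k).indices  # positions of the k largest losses
    return indices[top].tolist()

# After each training epoch, accumulate the candidates; samples flagged
# in many epochs are the strongest label-noise suspects, e.g.:
#   noisy_counts.update(flag_noisy_candidates(model, eval_loader))
#   suspects = [i for i, c in noisy_counts.most_common() if c >= min_epochs]
```

Aggregating over epochs, as in the commented usage above, makes the selection more robust than a single-epoch ranking, since hard-but-clean samples tend to drop out of the high-loss bucket as training progresses.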
Hi,
thanks for sharing your implementation. I have some questions about it:
Thanks!