In the `LabelSmoothingCrossEntropy` function, the first half of the total loss that is finally returned is `loss*self.eps/c`, where `c` is the number of classes. I have seen formulas in some places where this denominator is the number of classes minus one. Could you clarify whether the minus one is needed?
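For context, the two conventions the question refers to can be written as follows (a sketch based on the standard label-smoothing formulation; here $\epsilon$ is the smoothing factor `self.eps`, $c$ the number of classes, $y$ the true class, and $q_i$ the smoothed target distribution):

```latex
% Original formulation (Szegedy et al.): spread epsilon uniformly over
% all c classes, including the true class y.
q_i = (1 - \epsilon)\,\delta_{i,y} + \frac{\epsilon}{c}

% Variant: keep 1 - epsilon on the true class and spread epsilon only
% over the other c - 1 classes.
q_i =
\begin{cases}
1 - \epsilon, & i = y \\[4pt]
\dfrac{\epsilon}{c - 1}, & i \neq y
\end{cases}
```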
It is true that some implementations divide by the number of classes minus one, but I don't think the minus one is necessary, for three reasons:

1. In the formula in the original label-smoothing paper, the denominator is the number of classes.
2. The `LabelSmoothingCrossEntropy` implemented in fastai also does not subtract one.
3. In practice, whether or not you subtract one has no real effect on the final optimization; the only thing it changes is the magnitude of the loss for each batch.
I'll stick with the fastai implementation.
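To make this concrete, here is a minimal PyTorch sketch in the spirit of the fastai implementation (a reconstruction for illustration, not the exact fastai source; the usage snippet at the bottom is hypothetical). Note the `self.eps / c` term with no minus one:

```python
import torch
import torch.nn.functional as F
from torch import nn

class LabelSmoothingCrossEntropy(nn.Module):
    def __init__(self, eps: float = 0.1, reduction: str = "mean"):
        super().__init__()
        self.eps = eps
        self.reduction = reduction

    def forward(self, output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        c = output.size(-1)                        # number of classes: the divisor in question
        log_preds = F.log_softmax(output, dim=-1)
        # Uniform (smoothing) part: -sum_i log p_i over classes, one value per sample.
        loss = -log_preds.sum(dim=-1)
        if self.reduction == "mean":
            loss = loss.mean()
        elif self.reduction == "sum":
            loss = loss.sum()
        # eps/c weights the uniform part; (1 - eps) weights the usual NLL on the target.
        return loss * self.eps / c + (1 - self.eps) * F.nll_loss(
            log_preds, target, reduction=self.reduction
        )

# Hypothetical usage:
criterion = LabelSmoothingCrossEntropy(eps=0.1)
logits = torch.randn(4, 10)                 # batch of 4 samples, 10 classes
target = torch.randint(0, 10, (4,))
print(criterion(logits, target))
```

Replacing `c` with `c - 1` here would only rescale the uniform term by a constant factor `c / (c - 1)`, which is equivalent to using a slightly larger `eps`, so the optimization behaviour is essentially the same, as noted in point 3 above.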
Thank you very much!