I noticed that your LR notes include a link explaining why cross entropy, rather than MSE, is used as the loss for classification problems.
The explanation in that link is that MSE leads to many local minima and a non-smooth loss curve (but isn't the derivative of MSE a linear function? Why would it not be smooth?).
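As a rough check of that claim, here is a minimal numerical sketch (my own illustration, not taken from the linked post): for a single example with label y = 1, viewed as a function of the logit, the second derivative of squared error after a sigmoid changes sign, while the second derivative of cross entropy stays nonnegative. MSE alone is a parabola, but composed with the sigmoid it is no longer convex, which is where the plateaus/local minima come from in larger models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single training example with label y = 1; treat the loss as a function
# of the logit z (what gradient descent actually moves through w).
y = 1.0
z = np.linspace(-8.0, 8.0, 2001)

mse = (sigmoid(z) - y) ** 2                                            # squared error after the sigmoid
ce = -(y * np.log(sigmoid(z)) + (1 - y) * np.log(1 - sigmoid(z)))      # cross entropy after the sigmoid

# Numerical second derivatives via repeated central differences.
d2_mse = np.gradient(np.gradient(mse, z), z)
d2_ce = np.gradient(np.gradient(ce, z), z)

print("min d2(MSE)/dz^2:", d2_mse.min())  # negative -> sigmoid + MSE is non-convex
print("min d2(CE)/dz^2 :", d2_ce.min())   # ~0 or positive -> sigmoid + CE is convex
```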
I have also studied this question for a long time, and I think the best explanation is that regression and classification rest on different assumptions:
regression assumes that P(y|x) follows a normal (Gaussian) distribution, while classification assumes that P(y|x) follows a Bernoulli distribution.
I have written up a full derivation here: https://github.com/pluszeroplus/Deep-Learning/blob/master/loss/why%20not%20MSE.pdf
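For readers who don't want to open the PDF, here is a condensed sketch of that line of argument in my own notation (so it may differ from the PDF): maximum likelihood under the two distributional assumptions recovers the two losses.

```latex
% Regression: assume P(y|x) = N(y; f(x), sigma^2).
% Maximizing the likelihood is the same as minimizing the sum of squared errors:
-\log \prod_{i=1}^{N} \mathcal{N}\bigl(y_i;\, f(x_i), \sigma^2\bigr)
  = \frac{1}{2\sigma^2} \sum_{i=1}^{N} \bigl(y_i - f(x_i)\bigr)^2 + \frac{N}{2}\log\bigl(2\pi\sigma^2\bigr)

% Classification: assume P(y|x) = Bernoulli(f(x)) with y in {0, 1}.
% Maximizing the likelihood is the same as minimizing the cross entropy:
-\log \prod_{i=1}^{N} f(x_i)^{y_i}\bigl(1 - f(x_i)\bigr)^{1-y_i}
  = -\sum_{i=1}^{N} \Bigl[\, y_i \log f(x_i) + (1 - y_i)\log\bigl(1 - f(x_i)\bigr) \Bigr]
```

So both losses are negative log-likelihoods; they just correspond to different assumptions about how y is distributed given x.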