Hello author, I took a look at the cls_ohem part of the code.

label_prob effectively takes the second cls_prob for pos samples (gt_label == 1) and the first cls_prob for every other label, and then computes -tf.log(label_prob).

That is fine for pos samples, but for neg samples the loss should be -tf.log(1 - label_prob), and I don't see a -tf.log(1 - label_prob) term anywhere in the code.

The valid_inds below selects both pos and neg samples, so when it is multiplied with the loss, the -tf.log(label_prob) terms of the neg samples are not removed. I therefore think the loss computation here has a problem.

I hope everyone can discuss this.
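For reference, a minimal sketch of the gather step described above, assuming cls_prob is an [N, 2] two-class softmax output (column 0 = P(neg), column 1 = P(pos)); the function name gather_label_prob and the exact indexing are illustrative, not the repo's verbatim code:

```python
import tensorflow as tf

def gather_label_prob(cls_prob, label):
    """Pick each sample's predicted probability for its ground-truth class.

    cls_prob: [N, 2] two-class softmax output (column 0 = P(neg), column 1 = P(pos)).
    label:    [N] int32 labels, 0 for neg samples, 1 for pos samples.
    """
    num_row = tf.shape(cls_prob)[0]
    # Flatten to [N * 2] and index row * 2 + label:
    # label 1 picks the second column, label 0 the first,
    # matching the behavior described above.
    flat = tf.reshape(cls_prob, [-1])
    label_prob = tf.gather(flat, tf.range(num_row) * 2 + label)
    return -tf.math.log(label_prob + 1e-10)  # per-sample cross-entropy loss
```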
The author's code is equivalent to using softmax_cross_entropy; in that case the cross-entropy is just -label · log(p), i.e. -log of the probability assigned to the ground-truth class. Since cls_prob is a two-class softmax, for a neg sample label_prob is P(neg) = 1 - P(pos), so the -tf.log(1 - ...) term is already there, just expressed through the first column.
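A quick numerical check of this equivalence (a self-contained sketch assuming TF2 eager mode; the tensor values are made-up examples):

```python
import tensorflow as tf

# Made-up example: row 0 is a neg sample (label 0), row 1 is a pos sample (label 1).
cls_prob = tf.constant([[0.9, 0.1],
                        [0.2, 0.8]])
label = tf.constant([0, 1])

# Gather the ground-truth-class probability, as the gather step above does.
idx = tf.stack([tf.range(tf.shape(label)[0]), label], axis=1)
loss_gather = -tf.math.log(tf.gather_nd(cls_prob, idx))

# Because the two softmax columns sum to 1, the neg-sample term
# -log(P(neg)) equals -log(1 - P(pos)): the term the issue calls missing.
p_pos = cls_prob[:, 1]
loss_manual = tf.stack([-tf.math.log(1.0 - p_pos[0]),  # neg sample
                        -tf.math.log(p_pos[1])])       # pos sample

tf.debugging.assert_near(loss_gather, loss_manual)
print(loss_gather.numpy())  # ~[0.1054, 0.2231]
```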