My question concerns the maximum-likelihood-estimation-based model for training relevance-based word embeddings. In the paper (https://dl.acm.org/doi/10.1145/3077136.3080831), the authors mention that negative sampling cannot be used for this training and that the softmax has to be approximated instead. Have you used hierarchical softmax anywhere in your implementation?
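For reference, here is a minimal sketch of what I mean by a hierarchical softmax approximation of P(w | q), where the normalization over the full vocabulary is replaced by a product of binary decisions along a root-to-leaf path. This is not taken from the paper or this repository; the use of a complete binary tree (rather than, say, a Huffman tree) and names like `embed_dim` and `inner` are my own assumptions for illustration.

```python
# Illustrative sketch only -- not the paper's or the repo's implementation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class HierarchicalSoftmax:
    """P(word | query) as a product of binary decisions on a root-to-leaf path."""

    def __init__(self, vocab_size, embed_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One vector per internal node of a complete binary tree with
        # `vocab_size` leaves (there are vocab_size - 1 internal nodes).
        self.inner = rng.normal(scale=0.1, size=(vocab_size - 1, embed_dim))
        self.vocab_size = vocab_size

    def path(self, word_id):
        """Yield (internal_node_index, direction) pairs from root to the leaf."""
        # Heap layout: internal nodes 0..V-2, leaves V-1..2V-2.
        node = word_id + self.vocab_size - 1
        steps = []
        while node > 0:
            parent = (node - 1) // 2
            direction = 1.0 if node == 2 * parent + 1 else -1.0  # left=+1, right=-1
            steps.append((parent, direction))
            node = parent
        return reversed(steps)

    def prob(self, query_vec, word_id):
        """P(word_id | query_vec) without summing over the whole vocabulary."""
        p = 1.0
        for node, direction in self.path(word_id):
            p *= sigmoid(direction * np.dot(self.inner[node], query_vec))
        return p

# Tiny check: probabilities over an 8-word vocabulary sum to ~1.
hs = HierarchicalSoftmax(vocab_size=8, embed_dim=16)
q = np.random.default_rng(1).normal(size=16)
print(sum(hs.prob(q, w) for w in range(8)))
```

Since sigmoid(x) + sigmoid(-x) = 1 at every internal node, the leaf probabilities are properly normalized, which is why this can stand in for the full softmax in the MLE objective where negative sampling is not applicable.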