Embeddings learned from a classification task depend on the underlying data, the labels, the embedding size, and how well the model fits. Perhaps those words co-occur frequently in your dataset. You could also experiment with the size of the embedding layer (think of it as the number of features representing each word) and retrain the model.
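One way to put word relationships in context is to pull the weight matrix out of the trained embedding layer and inspect nearest neighbors by cosine similarity. A minimal sketch, assuming a small vocabulary and a hypothetical learned weight matrix `emb` (here filled with random values as a stand-in for weights you would extract from your own trained model):

```python
import numpy as np

# Hypothetical stand-in for learned weights: in practice, take the rows of
# your trained model's embedding layer (one row per vocabulary word).
rng = np.random.default_rng(0)
vocab = ["good", "great", "bad", "terrible", "movie"]
emb = rng.normal(size=(len(vocab), 8))  # embedding size = 8 features per word

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest(word, k=2):
    # Rank all other words by similarity to `word` in the embedding space.
    i = vocab.index(word)
    sims = [(cosine(emb[i], emb[j]), vocab[j])
            for j in range(len(vocab)) if j != i]
    return [w for _, w in sorted(sims, reverse=True)[:k]]

print(nearest("good"))
```

Words that the classifier treats interchangeably for predicting the label tend to end up close in this space, which may not match intuitions from general-purpose embeddings like word2vec.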
For direct word embeddings the output made sense.
But how do we interpret the relationships between words produced by embeddings learned from classification?
Is there a way to put this in better context?