
Problem in training with Russian data #32

Open
Insaned79 opened this issue Dec 11, 2018 · 4 comments

Comments

@Insaned79

After training, I manage to get some answers, but most often I get a strange "unk" instead of words in the answer.

@jdagnin

jdagnin commented Feb 4, 2019

These 'unk' tokens come from the input processing script, e.g. data/twitter/data.py, line 13: UNK = 'unk'
I think it is used for unknown words, but I am also not sure how to reduce the occurrence of 'unk' in the output. Cleaner data and longer training times do seem to help, but I still see 'unk' even after 40 epochs.
When I used the provided sample Twitter input data for training, I did not see 'unk' in the output. Could someone help out?
Edit: I found that increasing the vocabulary size and cleaning the raw data help a lot (see the sketch below).
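For context, here is a minimal sketch of how this kind of preprocessing typically produces 'unk' (only the UNK = 'unk' constant is confirmed from data/twitter/data.py; the vocabulary cap and function names below are illustrative, not the repo's exact code):

```python
from collections import Counter

UNK = 'unk'        # data/twitter/data.py, line 13
VOCAB_SIZE = 8000  # hypothetical cap; the repo uses a similar fixed limit

def build_vocab(tokenized_lines, vocab_size=VOCAB_SIZE):
    # Count word frequencies over the whole corpus.
    freq = Counter(word for line in tokenized_lines for word in line)
    # Keep only the most frequent words; everything else maps to UNK.
    index2word = [UNK] + [word for word, _ in freq.most_common(vocab_size)]
    word2index = {word: i for i, word in enumerate(index2word)}
    return word2index, index2word

def encode(line, word2index):
    # Any word outside the top-N vocabulary falls back to UNK's index,
    # which is why rare words come back as 'unk' at inference time.
    return [word2index.get(word, word2index[UNK]) for word in line]
```

Raising the vocabulary cap (and cleaning the raw text so the frequency counts are not polluted by noise) directly shrinks the set of words that fall into the UNK bucket, which matches the observation in the edit above.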

@Sadler2

Sadler2 commented Feb 15, 2019

Words in Russian may have 30+ different forms (cases, grammatical gender, etc.), so without any preprocessing the effective coverage of your vocabulary becomes pretty low. That's why you get so many unks. There are three possible solutions: 1) preprocess using word2vec, 2) convert every word to its base form and add form markers separately (see the sketch below), or 3) greatly increase the vocabulary size.
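As a sketch of option 2, a morphological analyzer such as pymorphy2 (a common choice for Russian; using it here is my suggestion, not something from this repo) can collapse the many inflected forms into a single dictionary form before the vocabulary is built:

```python
import pymorphy2  # pip install pymorphy2

morph = pymorphy2.MorphAnalyzer()

def lemmatize(tokens):
    # Map each inflected form to its normal (dictionary) form, so
    # "кошки", "кошке", "кошкой" all count as the single entry "кошка".
    return [morph.parse(token)[0].normal_form for token in tokens]

print(lemmatize(["кошки", "кошке", "кошкой"]))  # ['кошка', 'кошка', 'кошка']
```

The grammatical information that lemmatization discards can be re-attached as separate marker tokens (e.g. from parse(token)[0].tag), so the model still sees case and gender without each inflected form consuming its own vocabulary slot.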

@kananos

kananos commented Apr 25, 2019

This problem is due to your answer data: it contains the unk symbol, which means it has words that are not in the vocab.
Increase the vocab size, or delete all QA pairs whose answer contains UNK (see the sketch below).
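A minimal sketch of the filtering suggestion (the parallel question/answer list layout and the 'unk' token are assumptions based on this repo's twitter data format):

```python
UNK = 'unk'

def drop_unk_pairs(questions, answers):
    # Remove any QA pair whose answer still contains the UNK placeholder,
    # so the model never learns to emit 'unk' as a target word.
    kept = [(q, a) for q, a in zip(questions, answers)
            if UNK not in a.split()]
    return [q for q, _ in kept], [a for _, a in kept]
```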

@carlitoselmago

I have a similar issue with the French language. My corpus data is full of emojis; I tried adding them as valid characters in the data whitelist, but they don't seem to appear. As I understand it, emojis are part of the utf8mb4 charset but get processed as standard UTF-8. Using Python 3, this shouldn't be a problem, right?
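For what it's worth, utf8mb4 is a MySQL-specific label; Python 3 strings are full Unicode, so a plain UTF-8 round trip preserves emoji. The more likely culprit is a character whitelist filter. A quick sketch to check (the EN_WHITELIST constant is my reconstruction of the style used in data/twitter/data.py, not a verified copy):

```python
# Python 3 handles emoji in UTF-8 without loss.
text = "salut 😀"
assert text.encode("utf-8").decode("utf-8") == text

# But a character whitelist silently drops anything not listed in it.
EN_WHITELIST = "0123456789abcdefghijklmnopqrstuvwxyz "  # assumed, per data.py style

def filter_line(line, whitelist=EN_WHITELIST):
    return "".join(ch for ch in line if ch in whitelist)

print(filter_line("salut 😀"))  # 'salut ' -- the emoji is stripped here
```

If the emoji vanishes at this step, the whitelist (not the encoding) is what needs extending, e.g. by appending the emoji characters you want to keep to the whitelist string.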
