About the sentence result #18

Open
lydemo opened this issue Apr 18, 2018 · 2 comments

Comments


lydemo commented Apr 18, 2018

I've noticed that the longer I train (the more batches), the better the generated sentences become. However, some generated sentences are exactly the same as sentences in my training corpus. Is that possible? I'm wondering whether the model genuinely generates sentences like that, or just 'copies' them.

spiglerg (Owner) commented May 8, 2018

That sounds like overfitting. You may try adding a regularizer to the network's weights, decreasing the number of parameters, or increasing the size of the training set.
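For concreteness, here is a minimal sketch of the first remedy, dropout between the stacked LSTM layers plus an L2 penalty on the weights, written against the TensorFlow 1.x API current at the time of this thread. This is not the repository's actual code; keep_prob and l2_strength are placeholder assumptions to be tuned on validation loss.

import tensorflow as tf

vocab_size = 65      # assumption: char-level vocabulary size
lstm_size = 256
num_layers = 2
keep_prob = 0.7      # assumption: dropout keep probability, tune on validation loss
l2_strength = 1e-4   # assumption: L2 coefficient, tune on validation loss

def make_cell():
    cell = tf.nn.rnn_cell.LSTMCell(lstm_size)
    # Dropout on each layer's output is a cheap regularizer for RNNs.
    return tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=keep_prob)

stacked = tf.nn.rnn_cell.MultiRNNCell([make_cell() for _ in range(num_layers)])
inputs = tf.placeholder(tf.float32, [None, None, vocab_size])
outputs, _ = tf.nn.dynamic_rnn(stacked, inputs, dtype=tf.float32)

# L2 penalty over the weight matrices (biases are conventionally excluded);
# add this term to the cross-entropy loss before optimizing.
l2_penalty = l2_strength * tf.add_n(
    [tf.nn.l2_loss(v) for v in tf.trainable_variables()
     if 'bias' not in v.name.lower()])

Of the three remedies, more training data usually helps the most when it is available; dropout and L2 are the standard fallbacks when the corpus is fixed.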

@niranjan8129

@spiglerg I am facing an overfitting issue. You suggested "adding a regularizer to the network's weights or decrease the number of parameters (or increase the size of the training set)". Are you referring to the settings below? If so, which ones should I increase or decrease, and to what exact values?

lstm_size = 256   # LSTM hidden-state size (was 128)
num_layers = 2    # number of stacked LSTM layers
batch_size = 128  # sequences per training batch (was 128)
time_steps = 100  # unrolled sequence length (was 50)
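For what it's worth, "decrease the number of parameters" maps onto lstm_size and num_layers; batch_size and time_steps affect memory use and gradient estimates but not the model's capacity. A rough back-of-the-envelope count (an illustration only, assuming a char-level vocabulary of about 65 symbols, not values endorsed by the maintainer):

def lstm_params(input_dim, hidden):
    # An LSTM layer has 4 gate blocks, each a weight matrix over the
    # concatenated [input, hidden] vector plus a bias vector.
    return 4 * ((input_dim + hidden) * hidden + hidden)

vocab_size = 65  # assumption: char-level vocabulary size
for hidden in (256, 128):
    total = lstm_params(vocab_size, hidden) + lstm_params(hidden, hidden)
    print(hidden, total)  # 256 -> 855,040 weights; 128 -> 230,912

So dropping lstm_size back to 128 cuts the two-layer weight count by roughly 3.7x, which is the most direct way to act on the "decrease the number of parameters" advice.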
