
Why is the write operation decomposed into two parts, "an erase followed by an add"? #15

Open
ylqfp opened this issue Apr 27, 2016 · 1 comment

Comments

@ylqfp

ylqfp commented Apr 27, 2016

The paper says:
"Taking inspiration from the input and forget gates in LSTM, we decompose each write into two parts: an erase followed by an add."
Why is the write decomposed this way?
Thanks!

@shawntan
Owner

shawntan commented May 1, 2016

The LSTM follows the same idea: it first 'forgets' part of its state and then adds the new input. That is what that line in the paper refers to.
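To make the analogy concrete, here is a minimal NumPy sketch of the NTM write step as described in the paper (memory `M`, attention weights `w`, erase vector `e`, add vector `a`). This is an illustrative sketch, not code from this repo; the function and variable names are my own.

```python
import numpy as np

def ntm_write(M, w, e, a):
    """One NTM write: erase, then add.

    M: memory matrix, shape (N slots, W width)
    w: attention weights over slots, shape (N,), entries in [0, 1]
    e: erase vector, shape (W,), entries in (0, 1)
    a: add vector, shape (W,)
    """
    # Erase step: slot i is scaled element-wise by (1 - w[i] * e),
    # analogous to the LSTM forget gate wiping old state.
    M_erased = M * (1.0 - np.outer(w, e))
    # Add step: new content is written in proportion to the same
    # weights, analogous to the LSTM input gate admitting new input.
    return M_erased + np.outer(w, a)

# Example: focus entirely on slot 1, erase it fully, write new content.
M = np.ones((4, 3))
w = np.array([0.0, 1.0, 0.0, 0.0])
e = np.ones(3)
a = np.array([5.0, 6.0, 7.0])
M_new = ntm_write(M, w, e, a)
# Slot 1 now holds a; the other slots are untouched because w is 0 there.
```

Splitting the write this way lets the controller do either a pure erase (`a = 0`), a pure accumulate (`e = 0`), or a full overwrite, all with differentiable operations.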

Just to be clear: I was in no way involved with the writing of the paper. This repo just happens to be one of the popular implementations of the NTM currently.
