This repository has been archived by the owner on Aug 18, 2021. It is now read-only.

Bahdanau Decoder Implementation #23

Closed
sidoki opened this issue May 22, 2017 · 6 comments

Comments

@sidoki

sidoki commented May 22, 2017

Hi @spro,

Thanks for the really great explanation of the decoder, especially the Bahdanau decoder. But I'm a little bit confused about this line in the init function of the BahdanauAttnDecoderRNN class:

self.attn = GeneralAttn(hidden_size)

I can't find any class that defines GeneralAttn. Is this a built-in class? Could you please elaborate? Thanks again!

@spro
Owner

spro commented May 22, 2017

Good catch. The attention mechanism was originally split into three separate modules (GeneralAttn, DotAttn, ConcatAttn) instead of one module with an argument to choose the scoring strategy. Furthermore, the Bahdanau decoder actually used the "concat" strategy, so that line should be self.attn = Attn("concat", hidden_size)
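For context, a minimal sketch of what that unified Attn module might look like, with a method argument selecting among the three scoring strategies. This is an illustrative reconstruction, not the tutorial's exact code; the class and parameter names follow the comment above, but the internals are assumptions based on the standard dot / general / concat attention scores:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attn(nn.Module):
    """Hypothetical unified attention module: one class with a `method`
    argument instead of separate GeneralAttn / DotAttn / ConcatAttn classes."""

    def __init__(self, method, hidden_size):
        super().__init__()
        self.method = method
        self.hidden_size = hidden_size
        if method == 'general':
            # score(h, s) = h . W s
            self.attn = nn.Linear(hidden_size, hidden_size)
        elif method == 'concat':
            # score(h, s) = v . tanh(W [h; s])
            self.attn = nn.Linear(hidden_size * 2, hidden_size)
            self.v = nn.Parameter(torch.rand(hidden_size))

    def score(self, hidden, encoder_output):
        # hidden, encoder_output: (hidden_size,)
        if self.method == 'dot':
            return torch.dot(hidden, encoder_output)
        elif self.method == 'general':
            return torch.dot(hidden, self.attn(encoder_output))
        elif self.method == 'concat':
            energy = torch.tanh(self.attn(torch.cat((hidden, encoder_output))))
            return torch.dot(self.v, energy)

    def forward(self, hidden, encoder_outputs):
        # hidden: (1, hidden_size); encoder_outputs: (seq_len, 1, hidden_size)
        seq_len = encoder_outputs.size(0)
        energies = torch.stack([
            self.score(hidden.squeeze(0), encoder_outputs[i].squeeze(0))
            for i in range(seq_len)
        ])
        # Normalized attention weights, shaped (1, 1, seq_len) for bmm
        return F.softmax(energies, dim=0).unsqueeze(0).unsqueeze(0)

attn = Attn('concat', 8)
weights = attn(torch.zeros(1, 8), torch.randn(5, 1, 8))
```

With this in place, the decoder can instantiate any of the three strategies by name, e.g. Attn("concat", hidden_size).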

@sidoki
Author

sidoki commented May 23, 2017

Cool, Thanks for the clarification!

@sidoki sidoki closed this as completed May 23, 2017
@rafaelvalle

Can you please change that line in the notebook?

@poweihuang17

Still not changed. I hope somebody can fix it.

@anantzoid

anantzoid commented Oct 30, 2018

#119 fixes this and some more issues with Bahdanau decoder.

@kyquang97

@anantzoid it's still not fixed in the tutorial.
