mask zero and activation in HATT #28
Comments
Thanks for the implementation! The issues (2 & 3) you mentioned are also covered in issue #24. Can you do a pull & push so that everyone can benefit?
Hi, I have implemented the new attention layer but I get an error: `AttributeError: module 'keras.backend' has no attribute 'bias_add'`. Can someone help me?
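That `AttributeError` usually means the installed Keras predates `K.bias_add` (upgrading Keras also fixes it). A minimal sketch of a version-safe workaround, assuming the layer computes something like `uit = K.bias_add(K.dot(x, self.W), self.b)`:

```python
from keras import backend as K

def bias_add_compat(x, bias):
    """Add a 1-D bias, falling back to broadcasting on old Keras versions."""
    if hasattr(K, 'bias_add'):   # available in newer Keras backends
        return K.bias_add(x, bias)
    return x + bias              # broadcasting add; equivalent for a 1-D bias
```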
I have pushed a new version of this implementation; you can review the full code in my repo.
Thanks! I will check it.
How can I derive the attention weights and identify the important words for the classification? I have read the last update in your post, but I don't understand your approach.
It's not a fixed weight. Don't confuse it with the context vector or the weights learned in the attention layer. You need to do a forward pass to derive the importance of sentences and words; different sentences and words will give different results. Please read the paper.
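For illustration, a minimal sketch of that forward pass (not the repo's exact code): expose the GRU output as a sub-model, then recompute the attention distribution with the layer's learned weights. The layer name `'bi_gru'` and the variables `model` / `att_layer` are assumptions.

```python
import numpy as np
from keras.models import Model

# `model` is the trained classifier; `att_layer` is its attention layer.
# Expose the GRU output that feeds the attention layer (layer name assumed).
gru_out = Model(inputs=model.input,
                outputs=model.get_layer('bi_gru').output)

W, b, u = att_layer.get_weights()
u = u.squeeze()

def word_importance(x):
    h = gru_out.predict(x)                 # (batch, timesteps, features)
    uit = np.tanh(np.dot(h, W) + b)        # same transform as the layer
    ait = np.exp(np.dot(uit, u))           # unnormalized attention scores
    ait /= ait.sum(axis=1, keepdims=True)  # softmax over timesteps
    return ait                             # one weight per word position
```

Sorting each row of `ait` then gives the most important word positions for that input.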
We used this code to build our project, but we found the accuracy dropped. So we reviewed the code and found the following issues.
We made the above changes, and the accuracy increased by 4-5 percent over the baseline in our task (text classification).
We give our `AttLayer` class below; its input is the direct output of the GRU, without an additional Dense layer:
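The class body itself did not survive in this excerpt, so here is a minimal sketch of a mask-aware attention layer in the Yang et al. (2016) style the thread discusses; it is an assumed reconstruction of the general shape, not the commenter's exact code.

```python
from keras import backend as K
from keras.layers import Layer

class AttLayer(Layer):
    """Attention over timesteps (Yang et al., 2016), mask-aware sketch."""

    def __init__(self, attention_dim=100, **kwargs):
        self.attention_dim = attention_dim
        self.supports_masking = True  # accept the mask from mask_zero=True
        super(AttLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.W = self.add_weight(name='W',
                                 shape=(input_shape[-1], self.attention_dim),
                                 initializer='glorot_uniform', trainable=True)
        self.b = self.add_weight(name='b', shape=(self.attention_dim,),
                                 initializer='zeros', trainable=True)
        self.u = self.add_weight(name='u', shape=(self.attention_dim,),
                                 initializer='glorot_uniform', trainable=True)
        super(AttLayer, self).build(input_shape)

    def compute_mask(self, inputs, mask=None):
        return None  # the weighted sum collapses the time axis

    def call(self, x, mask=None):
        # uit = tanh(x.W + b); ait = softmax(uit . u) over timesteps
        uit = K.tanh(K.dot(x, self.W) + self.b)
        ait = K.exp(K.squeeze(K.dot(uit, K.expand_dims(self.u, -1)), -1))
        if mask is not None:
            ait *= K.cast(mask, K.floatx())  # zero out padded positions
        ait /= K.sum(ait, axis=1, keepdims=True) + K.epsilon()
        return K.sum(x * K.expand_dims(ait, -1), axis=1)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[-1])
```

With `supports_masking = True`, the mask produced by `Embedding(..., mask_zero=True)` propagates through the GRU and zeroes out padded positions before the softmax, which is the "mask zero" point in the issue title.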