performance comparison #6
Test performance of BERT-CRF: epoch 1 vs. epoch 3.
The two results above show that using only the topic_id as a multi-task learning signal does not help performance. Next thing to try: make event extraction conditioned on the topic, while also keeping the multi-task learning objective.
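As a rough illustration of the multi-task setup described above, here is a minimal sketch (assumed shapes and names, not the repo's actual code) where a shared encoder output feeds both a per-token event head, standing in for the CRF loss, and a sentence-level topic classifier, with the two losses combined by an assumed weight `topic_weight`:

```python
import numpy as np

rng = np.random.default_rng(0)

hidden = rng.normal(size=(8, 16))        # 8 tokens, 16-dim shared encoder output
num_event_types, num_topics = 5, 3

W_event = rng.normal(size=(16, num_event_types))  # event-extraction head
W_topic = rng.normal(size=(16, num_topics))       # auxiliary topic head

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Per-token event-type cross-entropy (a stand-in for the CRF log-likelihood).
event_logits = hidden @ W_event
event_gold = rng.integers(0, num_event_types, size=8)
event_loss = -np.log(softmax(event_logits)[np.arange(8), event_gold]).mean()

# Sentence-level topic cross-entropy from the mean-pooled encoding.
topic_logits = hidden.mean(axis=0) @ W_topic
topic_gold = 1  # hypothetical gold topic_id
topic_loss = -np.log(softmax(topic_logits)[topic_gold])

topic_weight = 0.5  # assumed hyperparameter
joint_loss = event_loss + topic_weight * topic_loss
```

In this setup the topic only shapes the shared encoder through the auxiliary loss; the extraction head never sees the topic directly, which matches the experiment reported below.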
Condition on the topic by adding the topic embedding directly to the sentence embedding.
The topic information may not help because most of the topics are tail topics, e.g. token: military conflict, test_count: 258, train_count: 981.
Add the topic-to-event-type distribution as a prior (similar to adding a per-topic vocabulary into the extraction; will this work? not sure).
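One way the prior idea above could look, as a sketch (the counts and the `weight` hyperparameter are made up for illustration): estimate P(event_type | topic) from training co-occurrence counts with smoothing, then add its log to the model's event-type logits.

```python
import numpy as np

# Hypothetical topic-by-event-type co-occurrence counts from training data
# (rows: topics, cols: event types).
counts = np.array([[30.0, 5.0, 1.0],
                   [ 2.0, 40.0, 8.0],
                   [ 1.0, 3.0, 20.0]])

# Laplace-smoothed conditional distribution P(event_type | topic).
prior = (counts + 1.0) / (counts + 1.0).sum(axis=1, keepdims=True)

def biased_logits(logits, topic_id, weight=1.0):
    # `weight` (assumed) controls how strongly the prior biases the model.
    return logits + weight * np.log(prior[topic_id])

# With uniform model scores, the prior alone decides the prediction.
logits = np.zeros(3)
biased = biased_logits(logits, topic_id=0)
```

The smoothing matters for exactly the tail-topic problem mentioned above: topics with few training examples get a prior closer to uniform instead of a sharp, unreliable one.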
The topic should help extraction under two assumptions: 1. given a topic, certain event types happen more often than others.
Test performance using the topic as a multi-task learning signal: it only enhances the encoder; the topic is not used as input for event extraction.
epoch 1
('lose Mention', u'cfcec8e30722564d5bf43fcb5f739cd8')
('lose Mention', u'04cdc0c45303f7024417e4d94f9a7c13')
('lose Mention', u'66b0e5014943dde6c74c048086b7a0a3')
('Micro_F1:', 67.776451242312973)
('Micro_Precision:', 66.690309424594375)
('Micro_Recall:', 68.898557362033429)
('Macro_F1:', 50.955298281699037)
('Macro_Precision:', 55.190862598324387)
('Macro_Recall:', 50.812833395810941)
epoch 2
('lose Mention', u'395e263d21484d8998d853b0b1b6ec5a')
('lose Mention', u'063e9a5b7265bc3ae0ea731c5f06545b')
('lose Mention', u'362cbeafe8c757195425faa40a88ff61')
('Micro_F1:', 67.062822261786408)
('Micro_Precision:', 67.72205099467638)
('Micro_Recall:', 66.416304098923746)
('Macro_F1:', 58.714489780943694)
('Macro_Precision:', 63.478851556871454)
('Macro_Recall:', 58.704903536659984)
epoch 3
('lose Mention', u'f0ae798aa5b9014e3c8e47aefb281330')
('lose Mention', u'd3f3a2e2f335d8c50f7612c29aa0cb2a')
('lose Mention', u'1418f6cb3c97797a7542e7ec6ac04427')
('lose Mention', u'913c2d91f695ccb637386f211cd8d94d')
('lose Mention', u'f697f918553fc11d73d709bf3029603d')
('Micro_F1:', 68.213119095965396)
('Micro_Precision:', 65.833084820858005)
('Micro_Recall:', 70.771696817036869)
('Macro_F1:', 60.237205257354191)
('Macro_Precision:', 61.145512218719368)
('Macro_Recall:', 62.252039674965374)
epoch 10
('Micro_F1:', 64.949966644429608)
('Micro_Precision:', 63.125135076723581)
('Micro_Recall:', 66.883444011907486)
('Macro_F1:', 59.76302032588179)
('Macro_Precision:', 58.998255594751669)
('Macro_Recall:', 61.678292061307531)