Have you tried to use BERT to improve the performance of JMEE? #4
Comments
Hi @xiaoya6666, I couldn't achieve the performance of the JMEE paper either.
I think JMEE's word embeddings could be replaced with BERT, but I haven't tried it yet.
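For concreteness, here is a minimal sketch of what such a replacement could look like, assuming PyTorch and the Hugging Face transformers library; this is only an illustration of the idea, not code from JMEE or this repo, and first-subtoken pooling is one assumed choice among several.

```python
# Sketch: producing one contextual BERT vector per word, to feed wherever
# JMEE expects a static word embedding. Assumes Hugging Face `transformers`.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def embed_sentence(words):
    """Return one 768-d vector per original word (first-subtoken pooling)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state.squeeze(0)  # (num_subtokens, 768)
    # Keep the first sub-token of each word so lengths match the word sequence.
    first = [enc.word_ids().index(i) for i in range(len(words))]
    return hidden[first]  # (len(words), 768)

vecs = embed_sentence(["He", "was", "shot", "dead", "yesterday"])
print(vecs.shape)  # torch.Size([5, 768])
```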
@xiaoya6666 I also tried to reproduce JMEE, but couldn't achieve the results in the paper. I tried using BERT to replace JMEE's word embeddings, but event-detection F1 only reached 0.69. It seems the GCN in the JMEE paper doesn't actually bring an obvious improvement. Maybe I combined the GCN and BERT in an inappropriate way.
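For reference, one of many possible ways to stack a GCN on top of BERT token vectors is sketched below. This is not the commenter's actual code; the adjacency matrix is assumed to come from a dependency parse with self-loops added.

```python
# Sketch: a single mean-aggregating GCN layer over BERT token vectors.
# `adj` is an assumed (seq_len, seq_len) dependency adjacency matrix.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # Average each token's neighbours, then transform: h' = ReLU(W (A_norm h)).
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(adj @ h / deg))

h = torch.randn(12, 768)   # BERT vectors for a 12-subtoken sentence
adj = torch.eye(12)        # placeholder graph: self-loops only
h2 = GCNLayer()(h, adj)    # (12, 768) graph-contextualised vectors
```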
@bowbowbow Excuse me, could you tell me how to set the parameters in this BERT model? Thank you very much!
Excuse me, did you run into overfitting when you replaced the word embeddings with BERT in JMEE? And is the F1 score of 0.69 for argument classification or trigger classification? I found the model overfits very badly when I add entity-type embeddings.
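As an aside, one common way to curb overfitting from a sparse feature like entity types is to apply dropout to that embedding alone; the sketch below is only an illustration of this idea, and the type count and dimensions are assumptions, not values from the model above.

```python
# Sketch: concatenating a small entity-type embedding to token vectors,
# with dropout on the sparse feature as a simple regulariser.
import torch
import torch.nn as nn

NUM_ENTITY_TYPES = 8  # assumed: e.g. PER, ORG, GPE, ... plus "none"

class WithEntityTypes(nn.Module):
    def __init__(self, token_dim=768, ent_dim=50, p_drop=0.5):
        super().__init__()
        self.ent_emb = nn.Embedding(NUM_ENTITY_TYPES, ent_dim)
        self.dropout = nn.Dropout(p_drop)

    def forward(self, token_vecs, ent_type_ids):
        ent = self.dropout(self.ent_emb(ent_type_ids))  # regularise the sparse feature
        return torch.cat([token_vecs, ent], dim=-1)     # (seq_len, 768 + 50)
```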
I hadn't noticed that problem. My trigger prediction reaches about 70%, and if everything were predicted as O, wouldn't performance drop? My trigger prediction is fairly stable; the main issue is that argument extraction won't improve.

(Replying to Hanlard, who wrote: "From what I observed, once the step count (batch_size=8) goes past about 20, all triggers get predicted as 'O'; the total loss then reduces to trigger_loss alone, argument_loss stops decreasing, and role classification can no longer improve. I feel the model is a bit too simple...")
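The collapse described above, where every trigger is predicted as "O" and only trigger_loss keeps moving, is a classic symptom of class imbalance. One standard countermeasure, shown here purely as an illustrative sketch (the 0.1 weight and label layout are assumptions, not tuned values), is to down-weight the "O" class in the loss:

```python
# Sketch: down-weighting the dominant "O" class so trigger labels keep gradient.
import torch
import torch.nn as nn

NUM_TRIGGER_LABELS = 34   # assumed: 33 ACE event subtypes + "O"
O_INDEX = 0               # assumed: "O" is label 0

weights = torch.ones(NUM_TRIGGER_LABELS)
weights[O_INDEX] = 0.1    # "O" contributes 10x less to the loss
trigger_loss_fn = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, NUM_TRIGGER_LABELS)          # per-token trigger logits
gold = torch.randint(0, NUM_TRIGGER_LABELS, (16,))    # dummy gold labels
loss = trigger_loss_fn(logits, gold)
```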
This BERT code already follows JMEE's argument-extraction method, doesn't it?

(Replying to Hanlard, who wrote: "My mistake, sorry. For the arguments, I think the JMEE model is worth referring to.")
@xiaoya6666 Hello, is your 70% trigger result from the BERT code or from JMEE? Could we discuss the details over QQ (2512156864)?
From the BERT code; I fine-tuned the bert-base-uncased model.
I also got an F1 for trigger classification of around 69 with BERT plus a linear classification layer. But this is well below the results reported in the literature (https://www.aclweb.org/anthology/P19-1522.pdf, https://www.aclweb.org/anthology/K19-1061.pdf), where F1 ranges from 73 to 80. I don't think a CRF will help much.
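For context, the "BERT + linear classification layer" setup the comment refers to is essentially token classification. A minimal sketch using the off-the-shelf Hugging Face class follows; the label count of 34 is an assumption about the tag set, not a value from this repo.

```python
# Sketch: BERT with a linear token-classification head for trigger detection.
import torch
from transformers import BertForTokenClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=34  # assumed: 33 event subtypes + "O"
)

enc = tokenizer("Protesters clashed with police", return_tensors="pt")
logits = model(**enc).logits   # (1, seq_len, 34) per-subtoken trigger scores
pred = logits.argmax(-1)       # most likely label for each sub-token
```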
I sent emails to all of the authors who report around 80% results, but no one replied to me. Moreover, the paper gives no hyperparameter details, so I still could not reproduce the effect. I suspect their experimental results are not genuine.
Remember, results announced in papers by first-time authors very often cannot be reproduced.
Hi,
Thank you for sharing.
I'm interested to know whether you have tried using BERT to improve the performance of JMEE.
I tried to reproduce JMEE, but I can't achieve the results in the paper.