Hi!

I am just a fan of state-of-the-art results, not an NLP researcher.

I found this SOTA result on dependency parsing using a GNN on NLP-progress:
https://www.aclweb.org/anthology/P19-1237
Source: https://github.com/sebastianruder/NLP-progress/blob/master/english/dependency_parsing.md

They reach 97.3 in POS, 95.97 in UAS, and 94.31 in LAS. I believe they use a GAT, and they claim to be the first to use a GNN for this fundamental NLP task.

I was thinking that, since you made what seems like a major, task-generic advance in GNNs, it would be interesting for you to collaborate with those researchers. I dream of the day NLP tasks reach 99.99% performance, and this collaboration could (I may be talking nonsense) bring us closer to that goal: making NLP reliable.

This is an interesting idea, though I don't think I'll have the time to include this dataset in the evaluation here anytime soon. I tried to have a look at the implementation listed in the paper, but the linked repository (https://github.com/AntNLP/gnn-dep-parsing) seems not to exist or to be private. A first step may be to contact the authors and ask them to actually publish their code :-)