Home
Takahiro Kubo edited this page Apr 11, 2018
- Slot1
  - Given: sentence
  - Predict: E#A labels of the sentence
  - ex: "The food is expensive but service is good." => FOOD#PRICE, SERVICE#QUALITY
- Slot2
  - Given: sentence and E#A label
  - Predict: opinion target
  - ex: "The food is delicious"; where is the FOOD#QUALITY target? => "food"
- Slot3
  - Given: sentence and E#A label
  - Predict: polarity
  - ex: "The food is delicious"; what is the polarity for "food" / FOOD#QUALITY? => "positive"
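The three slots above can be sketched as input/output pairs. This is only an illustration of the task definitions using the example sentences; the variable names and structures are not the dataset's actual format:

```python
# Illustrative input/output pairs for the three slots, using the
# example sentences from the task description above.

# Slot1: sentence -> E#A (entity#attribute) labels
slot1_input = "The food is expensive but service is good."
slot1_output = ["FOOD#PRICE", "SERVICE#QUALITY"]

# Slot2: (sentence, E#A label) -> opinion target
slot2_input = ("The food is delicious", "FOOD#QUALITY")
slot2_output = "food"

# Slot3: (sentence, E#A label) -> polarity of the target
slot3_input = ("The food is delicious", "FOOD#QUALITY")
slot3_output = "positive"
```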
- Slot1
  - task: multi-label classification
  - feature: 1,000 most frequent unigrams of the training data, excluding stop words
  - model: SVM (linear kernel)
    - labels whose predicted probability exceeds the threshold (0.2) are assigned
  - evaluation: F1 (micro-averaging)
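The two distinctive pieces of the Slot1 baseline can be sketched as follows. This is not the repository's actual code: the stop-word list is a placeholder, and the classifier producing the per-label probabilities (the linear-kernel SVMs) is assumed to exist elsewhere.

```python
from collections import Counter

STOP_WORDS = {"the", "is", "but"}  # placeholder stop-word list
THRESHOLD = 0.2                    # probability threshold from the baseline

def build_vocabulary(tokenized_sentences, size=1000):
    """Return the `size` most frequent unigrams, excluding stop words."""
    counts = Counter(word
                     for sentence in tokenized_sentences
                     for word in sentence
                     if word not in STOP_WORDS)
    return [word for word, _ in counts.most_common(size)]

def assign_labels(label_probs, threshold=THRESHOLD):
    """Assign every E#A label whose predicted probability exceeds the threshold."""
    return [label for label, p in label_probs.items() if p > threshold]

vocab = build_vocabulary([["the", "food", "is", "expensive"],
                          ["service", "is", "good"]])
# Hypothetical per-label probabilities from the trained classifiers.
labels = assign_labels({"FOOD#PRICE": 0.70,
                        "SERVICE#QUALITY": 0.25,
                        "AMBIENCE#GENERAL": 0.05})
```

With the probabilities above, `labels` contains the two E#A labels whose probability exceeds 0.2.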
- Slot2
  - task: target detection
  - model: dictionary ({"category": "word"}) built from the opinions in the training data. The sentence is searched using the words registered under the category, and the first match is used as the target. If no target is found, NULL is returned.
  - evaluation: F1 (micro-averaging), discarding NULL targets
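The dictionary lookup described above can be sketched as below. This is a minimal sketch under the assumption that the dictionary maps each E#A category to candidate target words collected from the training data; the function and variable names are illustrative:

```python
def detect_target(sentence, category, dictionary):
    """Return the first dictionary word for `category` found in the
    sentence, or "NULL" if no word matches."""
    for word in dictionary.get(category, []):
        if word in sentence:
            return word
    return "NULL"

# Hypothetical dictionary built from training-data opinions.
dictionary = {"FOOD#QUALITY": ["food", "dish"]}
target = detect_target("The food is delicious", "FOOD#QUALITY", dictionary)
```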
- Slot3
  - task: sentiment (0/1 prediction)
  - feature: 1,000 most frequent unigrams of the training data, excluding stop words + index of the E#A category
  - evaluation: accuracy (number of correctly predicted polarities of the (gold) aspect categories / total number of the gold aspect categories)
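The Slot3 accuracy metric above can be sketched as follows; the keying of gold and predicted polarities by (sentence id, E#A category) pairs is an assumption made for illustration:

```python
def slot3_accuracy(gold, predicted):
    """Correctly predicted polarities over the total number of gold
    aspect categories. `gold` and `predicted` map
    (sentence_id, E#A category) -> polarity."""
    correct = sum(1 for key, polarity in gold.items()
                  if predicted.get(key) == polarity)
    return correct / len(gold)

gold = {(1, "FOOD#QUALITY"): "positive",
        (2, "SERVICE#QUALITY"): "negative"}
pred = {(1, "FOOD#QUALITY"): "positive",
        (2, "SERVICE#QUALITY"): "positive"}
score = slot3_accuracy(gold, pred)  # one of two gold categories correct
```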
Baseline scores

| | SemEval-2016: Restaurant(EN) | chABSA-dataset |
|---|---|---|
| Slot1 | 59.928 | 44.772 |
| Slot2 | 44.071 | 15.256 |
| Slot3 | 76.484 | 75.886 |
Best model scores

| | SemEval-2016: Restaurant(EN) | chABSA-dataset |
|---|---|---|
| Slot1 | 73.031 | ? |
| Slot2 | 72.34 | ? |
| Slot3 | 88.126 | ? |