The emotion classification model's performance is almost the same as a random guess #75
Furthermore, I also tried the pre-trained model `transformer_semeval.clf` with the command `!python3 run_classifier.py --load path-to-downloaded-models/transformer_semeval.clf --text-key Tweet --data data/semeval/test.csv --model transformer --write-results results.csv` in a Jupyter notebook, and the results were also terrible.
Did you solve the problem? I'm currently dealing with the same issue.
@YipengUva I am also trying to use the fine-tuned classifier for inference by running the same command you mentioned, but it fails with a segmentation fault (core dumped). Do you have any idea how to fix this? Also, what did you do to fix your issue?
Sorry, I didn't encounter this problem for this task. As for how to fix it, you could search online; it seems to happen when multiple cores are occupied by other processes or terminals on the server.
Also, I didn't fix my issue. The performance is still similar to a random guess.
Regards, Yipeng
Hi, I repeated the emotion classification experiment and got terrible results. I couldn't figure out what the issue is.
The experiment is run with the command `!python3 experiments/run_clf_multihead.py --text-key Tweet --train data/semeval/train.csv --val data/semeval/val.csv --process-fn process_tweet`.
This first step produces a series of classifiers in `transformer_multihead`.
Then I ran `!python3 run_classifier.py --load transformer_multihead/model_ep0.clf --text-key Tweet --data data/semeval/val.csv --model transformer --write-results results/semeval/val_result.csv` on the validation set.
The performance is evaluated per emotion label with balanced accuracy, F1 score, and ROC AUC, using the metrics module from the scikit-learn package. The results (one column per label) are as follows.
balanced accuracy 0.500876 0.500000 0.537070 0.500000 0.500000 0.500000 0.499412 0.500593
f1_score 0.525000 0.245545 0.488992 0.240318 0.622084 0.460469 0.000000 0.092672
ROC 0.537700 0.450639 0.549253 0.474326 0.508107 0.481694 0.504079 0.500841
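For reference, a per-label evaluation like the one above can be sketched with scikit-learn as follows. This is a minimal illustration, not the repo's evaluation script: the random arrays stand in for the real ground-truth labels and model outputs from `val_result.csv`, and the number of labels (8) is inferred from the columns above.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, f1_score, roc_auc_score

# Stand-ins for the real data: y_true would come from the SemEval
# validation labels, y_score from the classifier's output probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(200, 8))   # binary labels, 8 emotions
y_score = rng.random(size=(200, 8))          # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)        # hard predictions at 0.5

for j in range(y_true.shape[1]):
    bacc = balanced_accuracy_score(y_true[:, j], y_pred[:, j])
    f1 = f1_score(y_true[:, j], y_pred[:, j], zero_division=0)
    auc = roc_auc_score(y_true[:, j], y_score[:, j])
    print(f"label {j}: balanced_acc={bacc:.3f} f1={f1:.3f} roc_auc={auc:.3f}")
```

On random inputs like these, balanced accuracy and ROC AUC hover around 0.5, which is exactly the pattern in the table above; that is what suggests the classifier's outputs carry no signal for most labels.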
Is there anything I can do to make it work?
Regards, Yipeng