[FEATURE] Add code snippets to run MeaningBERT locally #1
Comments
Thank you for your interest in improving Deepparse.
I do not have time this week to investigate the problem; I will look at it next week. I may have pushed the wrong model online, as I did not try the model after pushing it to HuggingFace.
@DennisDavari, I have investigated the problem.
P.S. If you have/create a dataset, I would be more than happy to retrain the model and integrate it here.
@DennisDavari, I am working on a better fix. I lost part of the data augmentation dataset between the article and the model releases, so the data augmentation is different, and this version's performance seems to be lower. I have created a better data augmentation procedure (I do not know why I did not do this at first) and am currently training the model to validate whether its performance matches the article. I will make sure to keep you posted.
Thank you for the update! I am looking forward to the model!
@DennisDavari, I have pushed a better model version, but I sometimes get strange results. I have improved the data augmentation approach and released a new version of the dataset (and model). I am working on a third version.
Thanks a lot for your effort! Since I am finishing my master's thesis soon, I wanted to ask roughly when you expect to finish the third version. I am asking because I am deciding whether to use the current version for my thesis or wait for the third version.
I am training V3 right now. It should take maybe 2-3 days.
I am waiting for the last training run to see if I can get better quantitative metrics. I have also created a Metrics Card to simplify the use of MeaningBERT and quickly fixed some errors. See here.
@DennisDavari, I just pushed V3 and released the weights. Quantitative metrics are better, but I still observe some errors. I have fixed some issues with the metric card, so I recommend using the metric module. Here is a code snippet:

```python
import evaluate

documents = ["He wanted to make them pay.", "This sandwich looks delicious.", "He wants to eat."]
simplifications = ["He wanted to make them pay.", "This sandwich looks delicious.",
                   "Whatever, whenever, this is a sentence."]

meaning_bert = evaluate.load("davebulaval/meaningbert")

print(meaning_bert.compute(documents=documents, simplifications=simplifications))
```
Is your feature request related to a problem? Please describe.
How do you actually use the MeaningBERT metric? I wasn't able to reproduce sensible results with this model.
Describe the solution you'd like
Provide a Python code snippet that can be used to run the model locally.
Describe alternatives you've considered
I tried this code to run the model locally:
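It was something along these lines, a minimal sketch assuming MeaningBERT is published as a standard sequence-classification checkpoint loadable with `transformers` (the model id `davebulaval/MeaningBERT` is inferred from the metric id in this thread and is an assumption):

```python
# Sketch: load MeaningBERT locally, assuming a standard sequence-classification
# checkpoint on the HuggingFace hub; the model id below is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("davebulaval/MeaningBERT")
model = AutoModelForSequenceClassification.from_pretrained("davebulaval/MeaningBERT")
model.eval()

document = "He wanted to make them pay."
simplification = "He wanted to make them pay."

# Encode the sentence pair together, as for a cross-encoder-style model,
# and read the regression score from the logits.
inputs = tokenizer(document, simplification, return_tensors="pt",
                   truncation=True, padding=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```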
Even though the code executes successfully, I don't get sensible results: for completely identical sentences I get a low score, and for completely unrelated sentences I get a high score.
Additional context
I tried to verify whether the results are as they should be by comparing the local model's results with the remote model's. However, when I use the model via the "Compute" button on HuggingFace, I don't get any value as a result, and when I use the model via the Inference API, I always get the value 1. This is the code I used to access the model via the Inference API:
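It followed the standard Inference API request pattern, roughly as sketched below; the exact payload shape for passing a sentence pair (`"text"`/`"text_pair"`) and the model id are assumptions:

```python
# Sketch of querying the HuggingFace Inference API; the model id and the
# sentence-pair payload shape ("text"/"text_pair") are assumptions.
import requests

API_URL = "https://api-inference.huggingface.co/models/davebulaval/MeaningBERT"
headers = {"Authorization": "Bearer <your HuggingFace API token>"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

result = query({"inputs": {"text": "He wanted to make them pay.",
                           "text_pair": "Whatever, whenever, this is a sentence."}})
print(result)
```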