Result in the CWMT2018 QE dataset #6

Open
chencong-jxnu opened this issue Sep 3, 2020 · 0 comments

I trained the TransQuest model on the WMT2020 QE dataset, and the test result on the en-zh Task 2 is 0.5999. I would like to know how TransQuest performs on the CWMT2018 (China Workshop on Machine Translation, 2018) QE dataset, so I also trained the model on the CWMT2018 QE data. The test results for en-zh and zh-en are 0.516 and 0.568 respectively. I am not sure whether there is a problem with my training setup, so I was wondering if you could provide a reference result on the CWMT2018 QE dataset. A sketch of my training and evaluation script is below.
Thank you for your help.
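
For reference, this is roughly the fine-tuning and evaluation loop I used, based on my reading of the sentence-level examples in this repository. The TSV file names, column layout, and training arguments are placeholders for my local setup, not something shipped with TransQuest:

```python
import pandas as pd
import torch
from scipy.stats import pearsonr

from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

# Placeholder TSVs: one source sentence (text_a), one MT output (text_b),
# and a sentence-level quality score (labels) per row.
train_df = pd.read_csv("cwmt18_enzh_train.tsv", sep="\t")
test_df = pd.read_csv("cwmt18_enzh_test.tsv", sep="\t")

# Sentence-level regression model initialised from XLM-R large;
# the args dict only overrides a few defaults (assumed settings, not the
# exact config from the repository examples).
model = MonoTransQuestModel(
    "xlmroberta",
    "xlm-roberta-large",
    num_labels=1,
    use_cuda=torch.cuda.is_available(),
    args={"regression": True, "num_train_epochs": 3, "overwrite_output_dir": True},
)

model.train_model(train_df)

# Predict quality scores for the held-out pairs and report Pearson correlation,
# which is the number quoted above.
predictions, _ = model.predict(test_df[["text_a", "text_b"]].values.tolist())
print("Pearson r:", pearsonr(predictions, test_df["labels"])[0])
```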
