A collection of machine translation benchmarks, forked from Helsinki-NLP/OPUS-MT-testsets.
MeMAD-project/OPUS-MT-eval
About
Benchmarks for evaluating MT models.
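
These test sets are typically used by translating the source side with an MT model and comparing the output against the reference side with an automatic metric. Below is a minimal sketch of such an evaluation using BLEU; the sacrebleu package and the file names are assumptions for illustration, not part of this repository.

```python
# Minimal sketch: score MT output against a benchmark reference with BLEU.
# Assumes the third-party sacrebleu package (pip install sacrebleu);
# the file names below are hypothetical placeholders.
import sacrebleu


def read_lines(path: str) -> list[str]:
    """Read one translation segment per line."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]


# Hypothetical files: hypotheses and references aligned line by line.
hypotheses = read_lines("system-output.trg")
references = read_lines("testset-reference.trg")

# corpus_bleu takes the hypotheses plus a list of reference streams
# (one inner list per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```

The same aligned-line convention works for other metrics; for instance, sacrebleu also provides chrF via sacrebleu.corpus_chrf with the same call shape.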
Languages
- Smalltalk 34.4%
- JavaScript 31.4%
- Ruby 29.5%
- Handlebars 1.2%
- Nearley 1.1%
- UrWeb 0.8%
- Other 1.6%