
Welcome to the OCELoT wiki!

Our Mission

Project OCELoT aims to implement an Open, Collaborative Evaluation Leaderboard of Translations. The goal is to provide infrastructure for running repeated evaluation campaigns, based on human evaluation methods, across a wide variety of content domains. Simple but bold, yes.

Current Focus

Work on OCELoT started in 2019 as part of the Machine Translation Marathon in the Americas, hosted at the University of Maryland. The one-week hackathon resulted in a simple proof-of-concept version of the OCELoT vision. In 2020, we are extending the project by implementing support for both 1) the Fifth Conference on Machine Translation (WMT20) and 2) the International Workshop on Spoken Language Translation (IWSLT20).

Timeline

  • 5/25: code ownership transferred to AppraiseDev organisation;
  • 5/29: OCELoT.MTEval.org upgraded for WMT20;
  • 6/01: start bug bash for WMT20 instance;
  • 6/08: WMT20 test set available from OCELoT;
  • 6/15: WMT20 test week ends.