
ELIXIR-EXCELERATE Benchmarking JSON Schemas

These are the latest benchmarking concepts modelled so far using JSON Schema Draft-07, available in the 1.0.x folder. All of them have _id and _schema attributes.

Sample JSON files can be validated against these schemas using the scripts located in the extended JSON Schema validators repository or with JSON Schema Validator. There is also a script to generate a chart describing these schemas.
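As a minimal sketch (not the repository's own validation scripts), a sample document could be checked against one of these Draft-07 schemas with the generic Python jsonschema package; the schema and sample file names below are hypothetical placeholders:

```python
# Minimal sketch: validate a sample JSON document against one of the 1.0.x
# Draft-07 schemas using the generic Python "jsonschema" package.
# The file names used here are hypothetical placeholders.
import json

from jsonschema import Draft7Validator

with open("1.0.x/community.json") as fh:      # hypothetical schema file name
    schema = json.load(fh)

with open("sample_community.json") as fh:     # hypothetical sample document
    instance = json.load(fh)

validator = Draft7Validator(schema)
errors = list(validator.iter_errors(instance))
for error in errors:
    print(f"{list(error.absolute_path)}: {error.message}")

if not errors:
    print("sample_community.json is valid against the schema")
```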

Benchmarking JSON Schema 1.0.x

  • Community: The description of a benchmarking community, like CASP, CAFA, Quest for Orthologs, etc.

  • Contact: A reference contact for a community, tool or metrics.

  • Reference: A bibliographic reference, used to document a community, a contact, a tool, a dataset, a benchmarking event or metrics.

  • Tool: A tool which can be used in the lifecycle of one or more benchmarking communities.

  • Metrics: Defined metrics which can be computed from a dataset.

  • Dataset: Any one of the datasets involved in the benchmarking event lifecycle. Datasets can be interrelated (for data provenance) and cross-referenced from the other concepts.

  • BenchmarkingEvent: A benchmarking event is defined as a set of challenges coordinated by a community, either attended or unattended.

  • Challenge: A challenge is composed of a set of one or more test actions, related to the participants involved in the challenge.

  • TestAction: The involvement of a tool in a challenge, taking as input the datasets defined for the challenge and generating the result datasets in the format agreed by the community. The generated datasets are later related to metrics datasets, which hold the metrics agreed by the community for the challenge and are used to assess the quality of the results (see the sketch after this list).

  • idSolv: This side concept is used to model CURIEs which are not yet registered in identifiers.org.
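To make the cross-referencing between concepts more concrete, here is a purely illustrative sketch of hypothetical Dataset and TestAction instances; apart from _id and _schema, every property name, identifier, and URI below is an assumption for illustration only, since the actual attributes are defined by the schemas in the 1.0.x folder.

```python
# Purely illustrative: hypothetical in-memory instances showing how concepts
# could reference each other by _id. All property names other than _id and
# _schema, as well as the identifiers and URIs, are assumptions; the real
# attributes are defined by the schemas in the 1.0.x folder.

input_dataset = {
    "_id": "EX:dataset0001",                         # hypothetical CURIE-style id
    "_schema": "https://example.org/1.0.x/dataset",  # hypothetical schema URI
    "type": "input",
    "challenge_id": "EX:challenge0001",              # cross-reference to a Challenge
}

result_dataset = {
    "_id": "EX:dataset0002",
    "_schema": "https://example.org/1.0.x/dataset",
    "type": "result",
    "depends_on": ["EX:dataset0001"],                # data provenance link
}

test_action = {
    "_id": "EX:testaction0001",
    "_schema": "https://example.org/1.0.x/testAction",
    "tool_id": "EX:tool0001",                        # the Tool taking part in the Challenge
    "challenge_id": "EX:challenge0001",
    "incoming": [input_dataset["_id"]],              # datasets consumed by the tool
    "outgoing": [result_dataset["_id"]],             # result datasets produced
}

# A later step would relate result_dataset to a metrics dataset computed with
# the Metrics agreed by the community, used to assess the quality of the results.
```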