
Evaluation, Reproducibility, Benchmarks Meeting 12


Minutes of meeting 12

Date: 23rd June 2021

Present: Keyvan, Carole, Jens, Annika, Lena, Nicola, Paul


TOP 1: MONAI Board Meeting Report (Carole and Nicola)

  • Include open datasets (for challenges)
  • Many challenge organizers are keen to provide data for the framework
  • Q1: Put data on AWS?
    • Decathlon data already on AWS
    • Options other than AWS? Long-term support is needed
  • Q2: How much should MONAI be involved in the process?
    • Data should only be downloadable once the user has agreed to the rules
    • Should MONAI or the challenge organizers be in charge?
    • Desired: consistent rights and settings for all datasets (a common standard)
    • Proposal: Involve lawyers?
  • Q3: Licensing
  • Q4: Are there people in the WG who would manage the data part?

TOP 2: Delphi process on metrics

  • Form expert groups for specific topics (segmentation, detection, classification, biomedical, cross-cutting topics)
    • For each task, define a problem fingerprint (e.g. size of structures, class imbalance for classification). For each scenario, pick metrics (proposal based on mathematical properties, popularity, resource-related issues, etc.) => expert decision (see the fingerprint sketch after this list)
      • Experts should give a list of scenarios that we will present in the paper
    • Structuring of the metrics
      • Expert groups go through the current metric list and group the metrics into must-have metrics (main figure) and “appendix” metrics
      • Comments per metric: reason for choosing it
    • What are the most under-considered pitfalls? What would be most interesting for the readers?
    • Cross-cutting group: Aspects that are not yet considered at all (e.g. Aggregation)
  • Preliminary proposal on metric structuring:
    • End goal of the figure: build clusters of very similar metrics to pick from (see the clustering sketch after this list)
    • Improvements: include ground-truth (GT) information (before the confusion matrix and for contour metrics)
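
The problem-fingerprint idea above can be illustrated with a small, hypothetical sketch: a task is described by a few properties, and a simple rule set maps those properties to a preliminary metric proposal that experts would then review. All names, rules, and metric choices below are illustrative assumptions, not decisions of the working group.

```python
from dataclasses import dataclass

@dataclass
class ProblemFingerprint:
    """Hypothetical problem fingerprint for one scenario (illustrative only)."""
    task: str                # e.g. "segmentation", "classification", "detection"
    small_structures: bool   # structures only a few voxels in size
    class_imbalance: bool    # strongly imbalanced class distribution

def propose_metrics(fp: ProblemFingerprint) -> list[str]:
    """Return a preliminary metric proposal for expert review (rules are assumptions)."""
    if fp.task == "segmentation":
        proposal = ["Dice similarity coefficient"]
        if fp.small_structures:
            # boundary-based metrics are often added when overlap metrics
            # become unstable for very small structures
            proposal.append("Normalized surface distance")
    elif fp.task == "classification":
        # imbalance-aware metrics instead of plain accuracy
        proposal = ["Balanced accuracy", "F1 score"] if fp.class_imbalance else ["Accuracy"]
    elif fp.task == "detection":
        proposal = ["Average precision"]
    else:
        proposal = []
    return proposal

if __name__ == "__main__":
    fp = ProblemFingerprint(task="segmentation", small_structures=True, class_imbalance=False)
    print(propose_metrics(fp))  # ['Dice similarity coefficient', 'Normalized surface distance']
```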
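
The metric-cluster goal can likewise be sketched: metrics whose scores are strongly correlated across a set of cases would end up in the same cluster, so that only one representative per cluster needs to be picked. The scores and the correlation threshold below are made-up illustrations, not data from the working group.

```python
import numpy as np

# Hypothetical scores of three metrics on the same four cases (illustrative values only)
scores = {
    "Dice": np.array([0.91, 0.75, 0.60, 0.88]),
    "IoU": np.array([0.84, 0.60, 0.43, 0.79]),
    "Hausdorff distance": np.array([3.0, 12.0, 25.0, 4.5]),
}

def cluster_metrics(scores, threshold=0.95):
    """Greedily group metrics whose Pearson correlation exceeds the threshold."""
    clusters = []
    for name in scores:
        for cluster in clusters:
            representative = cluster[0]
            r = np.corrcoef(scores[name], scores[representative])[0, 1]
            if r >= threshold:
                cluster.append(name)
                break
        else:
            # no sufficiently similar cluster found: start a new one
            clusters.append([name])
    return clusters

print(cluster_metrics(scores))  # [['Dice', 'IoU'], ['Hausdorff distance']]
```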