
Evaluation, Reproducibility, Benchmarks Meeting 13


Minutes of meeting 13

Date: 27th July 2021

Present: Carole, Annika, Lena, Nicola


TOP 1: MONAI open datasets

  • See discussion from last meeting (June 23rd 2021)
  • NCI offered to assist with hosting the data, but they are limited to specific applications, which may make it difficult to host all available datasets
  • Upcoming call with Prerna to discuss the details
    • Goals:
      1. Retrospective challenge organizer involvement
      2. Prospective support for future challenge organizers
      • It would be great if we could be ready for MICCAI 2022 challenges
    • Carole will be our representative (and link to the data WG)
    • Survey to incorporate community feedback on which platforms are currently used by challenge organizers to host the data (2-3 questions)
      • Invite MICCAI 2021 challenge organizers
      • Possible questions:
        • Are they interested in providing their datasets to MONAI?
        • Which platforms are they currently using and which ones would be okay to use (if they are interested in MONAI linking)?
      • Finalize and send at the beginning of September
    • Boundary conditions from MONAI:
      • It would be nice to find solutions that work for all challenges instead of having many very different APIs (see the interface sketch after this list)
      • Who builds the integration? MONAI should probably do as much of it as possible
    • A question may be added to the structured challenge submission system for MICCAI 2022 challenge submissions, asking whether organizers would be interested
      • in linking with MONAI in general
      • alternatively, present one working solution and ask whether it would be feasible for them
    • Open questions for Prerna:
      • Could she help with regulatory support?
      • Which data hosting options are (not) possible from MONAI’s side?
      • How should we deal with more complex use cases? Example: Challenge data can only be used for academic purposes, not for companies and startups. How can we make sure this holds?
  • Questions that should be added to the structured challenge submission system
    • Is registration needed to access data?
    • Which data hosting platform is used?
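
The "one working solution" idea from the boundary conditions above could take the form of a thin, platform-agnostic interface. Below is a minimal, hypothetical sketch in plain Python; `ChallengeDataset`, `LocalFolderDataset`, and all method names are illustrative assumptions, not an actual MONAI API:

```python
# Hypothetical sketch of a unified challenge-dataset interface.
# All names here are illustrative assumptions, not an actual MONAI API.
import os
from abc import ABC, abstractmethod
from typing import List


class ChallengeDataset(ABC):
    """One adapter per hosting platform, so every challenge exposes the
    same interface regardless of where its data actually lives."""

    # Does the hosting platform require a user account? (Mirrors the
    # proposed submission-system question "Is registration needed?")
    requires_registration: bool = False

    @abstractmethod
    def fetch(self, root_dir: str) -> None:
        """Download the data into root_dir, after any required login."""

    def list_cases(self, root_dir: str) -> List[str]:
        """Return the case identifiers found under root_dir."""
        return sorted(os.listdir(root_dir))


class LocalFolderDataset(ChallengeDataset):
    """Trivial adapter for data already on disk. Real adapters would wrap
    the platform-specific download APIs reported in the survey."""

    def fetch(self, root_dir: str) -> None:
        os.makedirs(root_dir, exist_ok=True)  # nothing to download
```

With an interface like this, MONAI would build one adapter per hosting platform (e.g. the platforms named in the survey responses) rather than one integration per challenge, and the `requires_registration` flag maps directly onto the proposed submission-system question.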

TOP 2: Taskforce report (metric implementation)

  • The metrics taskforce collected feedback on the metric implementations and tutorials and sent it to the developer team

TOP 3: Delphi process - metrics matter

  • Annika won the best short oral presentation audience award at MIDL 2021
  • Several companies approached Nicola and said that they really like the dynamic arXiv paper
  • Intermediate questionnaires for expert groups
    • Goals:
      • Finalize the list of task-specific pitfalls: based on the "problem characteristics" compiled by the consortium, generate a comprehensive list of potential pitfalls, including illustrations and references (an extension of our arXiv paper https://arxiv.org/abs/2104.05642)
      • Come up with a candidate list of medical/biological scenarios for which we want to provide concrete best practice recommendations in the paper.
    • Grouped the problem properties into several categories
      • Reformulations:
        • Size of structures (e.g. size relative to pixel or image size; see the Dice sketch after this list)
        • Variability of sizes of structures (intra and inter image)
        • Acquisition protocols (e.g. angle or viewpoint)
        • Presence of artifacts or artificial structures (such as text overlay), OR presence of image-modifying effects (such as artifacts or artificial structures)
      • Add a comment box at the end of every category
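
To make the "size of structures" pitfall concrete, here is a minimal numpy illustration. It is not taken from the paper; `dice` and `square_mask` are hypothetical helpers written for this example:

```python
# The same one-pixel boundary error costs a small structure far more
# Dice than a large one.
import numpy as np


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient for two binary masks."""
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())


def square_mask(size: int, side: int) -> np.ndarray:
    """size x size binary mask with a side x side square in one corner."""
    mask = np.zeros((size, size), dtype=bool)
    mask[:side, :side] = True
    return mask


for side in (2, 20):
    ref = square_mask(64, side)
    pred = np.roll(ref, 1, axis=1)  # same shape, shifted by one pixel
    print(f"{side}x{side} structure, 1-pixel shift: Dice = {dice(pred, ref):.2f}")
# Prints Dice = 0.50 for the 2x2 square vs 0.95 for the 20x20 square.
```

This is the kind of small illustration the finalized pitfall list could pair with each problem property.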