diff --git a/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv b/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv index 28dead5b0d..379055222d 100644 --- a/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv +++ b/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv @@ -279,69 +279,69 @@ "278","qbi-hackathon","QBI hackathon","The QBI hackathon","The QBI hackathon is a 48-hour event connecting the vibrant Bay Area developer community with the scientists from UCSF, UCB and UCSC, during which we work together on the cutting edge biomedical problems. Advances in computer vision, AI, and machine learning have enabled computers to pick out cat videos, recognize people’s faces from photos, play video games and drive cars. More recently, application of deep neural nets to protein structure prediction completely revolutionized the field. We look forward to seeing how far we can push science ahead when we apply these latest algorithms to biomedically relevant light microscopy, electron microscopy, and proteomics data. If you love FFTs, transformers, language models, topological data processing, or simply writing code, this is your chance to apply your skills to make an impact on global healthcare. 
Beyond the actual event, we hope to establish a better connection between talented developers and scientists in the Bay Area, so that we...","","https://www.eventbrite.com/e/qbi-hackathon-2023-tickets-633794304827?aff=oddtdtcreator","completed","intermediate","14","","2023-11-04","2023-11-05","2023-10-06 21:22:51","2023-11-15 22:49:20" "279","niddk-central-repository-data-centric-challenge","NIDDK Central Repository Data-Centric Challenge","Enhance NIDDK datasets for future Artificial Intelligence (AI) applications","The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Central Repository (https://repository.niddk.nih.gov/home/) is conducting a Data Centric Challenge aimed at augmenting existing Repository data for future secondary research including data-driven discovery by artificial intelligence (AI) researchers. The NIDDK Central Repository (NIDDK-CR) program strives to increase the utilization and impact of the resources under its guardianship. However, lack of standardization and consistent metadata within and across studies limit the ability of secondary researchers to easily combine datasets from related studies to generate new insights using data science methods. In the fall of 2021, the NIDDK-CR began implementing approaches to augment data quality to improve AI-readiness by making research data FAIR (findable, accessible, interoperable, and reusable) via a small pilot project utilizing Natural Language Processing (NLP) to tag study variables. In 2022, the NIDD...","","https://www.challenge.gov/?challenge=niddk-central-repository-data-centric-challenge","completed","intermediate","14","","2023-09-20","2023-11-03","2023-10-18 16:58:17","2023-11-15 22:49:26" "280","stanford-ribonanza-rna-folding","Stanford Ribonanza RNA Folding","A path to programmable medicine and scientific breakthroughs","Ribonucleic acid (RNA) is essential for most biological functions. 
A better understanding of how to manipulate RNA could help usher in an age of programmable medicine, including first cures for pancreatic cancer and Alzheimer’s disease as well as much-needed antibiotics and new biotechnology approaches for climate change. But first, researchers must better understand each RNA molecule's structure, an ideal problem for data science.","","https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding","active","intermediate","8","","2023-08-23","2023-11-24","2023-10-23 20:58:06","2023-11-15 22:49:31" -"281","uls23","Universal Lesion Segmentation Challenge '23","Advancements, challenges, and a universal solution emerges","Significant advancements have been made in ai-based automatic segmentation models for tumours. Medical challenges focusing on e.g. Liver, kidney, or lung tumours have resulted in large performance improvements for segmenting these types of lesions. However, in clinical practice there is a need for versatile and robust models capable of quickly segmenting the many possible lesions types in the thorax-abdomen area. Developing a universal lesion segmentation (uls) model that can handle this diversity of lesions types requires a well-curated and varied dataset. Whilst there has been previous work on uls [6-8], most research in this field has made extensive use of a single partially annotated dataset [9], containing only the long- and short-axis diameters on a single axial slice. 
Furthermore, a test set containing 3d segmentation masks used during evaluation on this dataset by previous publications is not publicly available.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/747/ULS23_logo_aoB8tlx.png","https://uls23.grand-challenge.org/","active","intermediate","5","","2023-10-29","2024-03-17","2023-11-02 15:35:22","2023-11-15 22:09:20" -"282","vessel12","VESSEL12","Assess methods for blood vessels in lung CT images","The vessel12 challenge compares methods for automatic (and semi-automatic) segmentation of blood vessels in the lungs from ct images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/1/logo.png","https://vessel12.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2014.07.003","2011-11-25","2012-04-01","2023-11-08 00:42:00","2023-11-14 19:47:17" +"281","uls23","Universal Lesion Segmentation Challenge '23","Advancements, challenges, and a universal solution emerges","Significant advancements have been made in AI-based automatic segmentation models for tumours. Medical challenges focusing on e.g. liver, kidney, or lung tumours have resulted in large performance improvements for segmenting these types of lesions. However, in clinical practice there is a need for versatile and robust models capable of quickly segmenting the many possible lesion types in the thorax-abdomen area. Developing a universal lesion segmentation (ULS) model that can handle this diversity of lesion types requires a well-curated and varied dataset. Whilst there has been previous work on ULS [6-8], most research in this field has made extensive use of a single partially annotated dataset [9], containing only the long- and short-axis diameters on a single axial slice. 
Furthermore, a test set containing 3D segmentation masks used during evaluation on this dataset by previous publications is not publicly available.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/747/ULS23_logo_aoB8tlx.png","https://uls23.grand-challenge.org/","active","intermediate","5","","2023-10-29","2024-03-17","2023-11-02 15:35:22","2023-11-17 21:29:35" +"282","vessel12","VESSEL12","Assess methods for blood vessels in lung CT images","The VESSEL12 challenge compares methods for automatic (and semi-automatic) segmentation of blood vessels in the lungs from CT images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/1/logo.png","https://vessel12.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2014.07.003","2011-11-25","2012-04-01","2023-11-08 00:42:00","2023-11-17 21:30:05" "283","crass","CRASS","Invites participants to submit clavicle segmentation results","Crass stands for chest radiograph anatomical structure segmentation. 
The challenge currently invites participants to send in results for clavicle segmentation algorithms.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/5/logo.png","https://crass.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:09:56" -"284","anode09","ANODE09","Automatic pulmonary nodule detection systems in chest CT scans","Anode09 is an initiative to compare systems that perform automatic detection of pulmonary nodules in chest ct scans on a single common database, with a single evaluation protocol.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/7/logo.png","https://anode09.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2010.05.005","\N","\N","2023-11-08 00:42:00","2023-11-14 19:47:31" -"285","cause07","CAUSE07","Compares algorithms for caudate nucleus segmentation in brain MRI scans","The goal of cause07 is to compare different algorithms to segment the caudate nucleaus from brain mri scans.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/8/logo.png","https://cause07.grand-challenge.org/","completed","intermediate","5","","2007-10-26","\N","2023-11-08 00:42:00","2023-11-11 01:43:29" +"284","anode09","ANODE09","Automatic pulmonary nodule detection systems in chest CT scans","ANODE09 is an initiative to compare systems that perform automatic detection of pulmonary nodules in chest CT scans on a single common database, with a single evaluation protocol.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/7/logo.png","https://anode09.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2010.05.005","\N","\N","2023-11-08 00:42:00","2023-11-17 23:17:55" +"285","cause07","CAUSE07","Compares algorithms for caudate nucleus segmentation in brain MRI scans","The goal of CAUSE07 is to compare different algorithms to segment the caudate nucleus from brain MRI 
scans.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/8/logo.png","https://cause07.grand-challenge.org/","completed","intermediate","5","","2007-10-26","\N","2023-11-08 00:42:00","2023-11-17 21:34:10" "286","subsolidnodules","Subsolid Nodules","We present results of our segmentation method for subsolid lung nodules","We are presenting results of our segmentation method for subsolid lung nodules.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/10/logo.png","https://subsolidnodules.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 21:24:01" -"287","caddementia","CADDementia","Classification in AD, MCI, and healthy controls using MRI data","We seek algorithms that perform multi-class classification of patients with alzheimer‚äôs disease (ad), patients with mild cognitive impairment (mci) and healthy controls (cn) using multi-center structural mri data.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/17/logo3_100.png","https://caddementia.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.neuroimage.2015.01.048","\N","\N","2023-11-08 00:42:00","2023-11-14 19:59:26" -"288","mitos-atypia-14","MITOS-ATYPIA-14","Mitosis detection and nuclear atypia on breast cancer H&E stained images","Mitos & atypia 14 contest, hosted by conference icpr 2014detection of mitosis and evaluation of nuclear atypia on breast cancer h&e stained images","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/20/logo_mitos_atypia.png","https://mitos-atypia-14.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:48:16" -"289","lola11","LOLA11","Segmentation of lungs and lobes in chest CT scans","The goal of lola11 (lobe and lung analysis 2011) is to compare methods for (semi-)automatic segmentation of the lungs and lobes from chest computed tomography scans. 
Any team, whether from academia or industry, can join.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/39/lola11_web_GVIrfhf.png","https://lola11.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:48:24" -"290","promise12","PROMISE12","Segmentation algorithms for MRI of the prostate","The goal of this challenge is to compare interactive and (semi)-automatic segmentation algorithms for mri of the prostate.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/40/promise12.png","https://promise12.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2013.12.002","\N","\N","2023-11-08 00:42:00","2023-11-14 19:48:34" +"287","caddementia","CADDementia","Classification in AD, MCI, and healthy controls using MRI data","We seek algorithms that perform multi-class classification of patients with Alzheimer's disease (AD), patients with mild cognitive impairment (MCI) and healthy controls (CN) using multi-center structural MRI data.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/17/logo3_100.png","https://caddementia.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.neuroimage.2015.01.048","\N","\N","2023-11-08 00:42:00","2023-11-17 23:18:33" +"288","mitos-atypia-14","MITOS-ATYPIA-14","Mitosis detection and nuclear atypia on breast cancer H&E stained images","MITOS & ATYPIA 14 contest, hosted by the ICPR 2014 conference: detection of mitosis and evaluation of nuclear atypia on breast cancer H&E stained images","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/20/logo_mitos_atypia.png","https://mitos-atypia-14.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:19:06" +"289","lola11","LOLA11","Segmentation of lungs and lobes in chest CT scans","The goal of LOLA11 (LObe and Lung Analysis 2011) is to compare methods for (semi-)automatic segmentation of the lungs and lobes 
from chest computed tomography scans. Any team, whether from academia or industry, can join.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/39/lola11_web_GVIrfhf.png","https://lola11.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:19:28" +"290","promise12","PROMISE12","Segmentation algorithms for MRI of the prostate","The goal of this challenge is to compare interactive and (semi)-automatic segmentation algorithms for MRI of the prostate.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/40/promise12.png","https://promise12.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2013.12.002","\N","\N","2023-11-08 00:42:00","2023-11-14 19:48:34" "291","camelyon16","CAMELYON16","Evaluating algorithms for automated cancer metastasis detection","The goal of this challenge is to evaluate new and existing algorithms for automated detection of cancer metastasis in digitized lymph node tissue sections. 
Two large datasets from both the radboud university medical center and the university medical center utrecht are provided.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/65/logo.png","https://camelyon16.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1001/jama.2017.14585","2015-11-25","2016-04-01","2023-11-08 00:42:00","2023-11-11 01:44:54" "292","isbi-aida","ISBI-AIDA","The isbi challenge focuses on evaluating endoscopic image analysis methods","The aim of this challenge is to bring together the community of researchers working on the various types of optical endoscopy at its multiple scales and different needs, to provide reference databases and reference results both for the imaging community and those interested in the translation to the clinical practice.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/67/logo.png","https://isbi-aida.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:50:12" -"293","luna16","LUNA16","Nodule detection algorithms for chest CT in a large-scale setting","The luna16 challenge will focus on a large-scale evaluation of automatic nodule detection algorithms for chest ct.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/71/luna16_logo.png","https://luna16.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2017.06.015","\N","\N","2023-11-08 00:42:00","2023-11-14 19:48:42" +"293","luna16","LUNA16","Nodule detection algorithms for chest CT in a large-scale setting","The LUNA16 challenge will focus on a large-scale evaluation of automatic nodule detection algorithms for chest CT.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/71/luna16_logo.png","https://luna16.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2017.06.015","\N","\N","2023-11-08 00:42:00","2023-11-17 23:19:48" "294","camelyon17","CAMELYON17","Automated detection and 
classification of breast cancer metastases","Automated detection and classification of breast cancer metastases in whole-slide images of histological lymph node sections. This task has high clinical relevance and would normally require extensive microscopic assessment by pathologists.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/80/camelyon17_logo.png","https://camelyon17.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1109/tmi.2018.2867350","2016-11-16","\N","2023-11-08 00:42:00","2023-11-11 01:45:01" -"295","retouch","RETOUCH","Detecting retinal fluid in optical coherence tomography images","Retinal oct fluid challenge (retouch) compares automated algorithms that are able to detect and segment different types of retinal fluid in optical coherence tomography (oct).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/111/retouch-logo.png","https://retouch.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1109/tmi.2019.2901398","2017-04-03","\N","2023-11-08 00:42:00","2023-11-15 18:47:07" +"295","retouch","RETOUCH","Detecting retinal fluid in optical coherence tomography images","Retinal OCT fluid challenge (RETOUCH) compares automated algorithms that are able to detect and segment different types of retinal fluid in optical coherence tomography (OCT).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/111/retouch-logo.png","https://retouch.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1109/tmi.2019.2901398","2017-04-03","\N","2023-11-08 00:42:00","2023-11-17 23:20:14" "296","cataracts","CATARACTS","Image-based tool detection algorithms for cataract surgery","The challenge on automatic tool annotation for cataract surgery aims at evaluating image-based tool detection algorithms in the context of the most common surgical procedure in the 
world.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/130/logo.png","https://cataracts.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2018.11.008","\N","\N","2023-11-08 00:42:00","2023-11-14 19:48:56" -"297","tadpole","TADPOLE","Assesses Alzheimer's disease prediction of longitudinal evolution","The alzheimer‚äôs disease prediction of longitudinal evolution (tadpole) challenge is brought to you by the europond consortium in collaboration with the alzheimer‚äôs disease neuroimaging initiative (adni).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/141/logo_gC1i5c5.png","https://tadpole.grand-challenge.org/","completed","intermediate","5","https://arxiv.org/abs/2002.03419","\N","\N","2023-11-08 00:42:00","2023-11-14 19:49:01" +"297","tadpole","TADPOLE","Assesses Alzheimer's disease prediction of longitudinal evolution","The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) challenge is brought to you by the EuroPOND consortium in collaboration with the Alzheimer's Disease Neuroimaging Initiative (ADNI).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/141/logo_gC1i5c5.png","https://tadpole.grand-challenge.org/","completed","intermediate","5","https://arxiv.org/abs/2002.03419","\N","\N","2023-11-08 00:42:00","2023-11-17 23:21:11" "298","coronare","CoronARe","Methods in coronary artery reconstruction using C-arm angiography","Coronare ranks state-of-the-art methods in symbolic and tomographic coronary artery reconstruction from interventional c-arm rotational angiography. Specifically, we will benchmark the performance of the methods using accurately pre-processed data, and study the effects of imperfect pre-processing conditions (segmentation and background subtraction errors). 
The evaluation will be performed in a controlled environment using digital phantom images.","","https://coronare.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:49:15" "299","iciar2018-challenge","ICIAR 2018","Automatic detection of cancerous regions in breast cancer histology images","Can you develop a method for automatic detection of cancerous regions in breast cancer histology images?","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/176/logo_small.png","https://iciar2018-challenge.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1016/j.media.2019.05.010","2023-01-09","\N","2023-11-08 00:42:00","2023-11-14 19:49:22" -"300","sliver07","SLIVER07","Liver segmentation in clinical 3D CT scans in this competition","The goal of this competition is to compare different algorithms to segment the liver from clinical 3d computed tomography (ct) scans.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/178/splash2.jpg","https://sliver07.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1109/tmi.2009.2013851","\N","\N","2023-11-08 00:42:00","2023-11-14 19:49:27" -"301","rocc","ROCC","DR disease in retina OCT volumes","Retinal oct classification challenge (rocc) is organized as a one day challenge in conjunction with mvip2017. The goal of this challenge is to call different automated algorithms that are able to detect dr disease from normal retina on a common dataset of oct volumes, acquired with topcon sd-oct devices.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/180/logo.jpg","https://rocc.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:59:48" -"302","idrid","IDRiD","Retinal lesion segmentation, optic disc/fovea detection, and DR grading","This challenge evaluates automated techniques for analysis of fundus photographs. 
We target segmentation of retinal lesions like exudates, microaneurysms, and hemorrhages and detection of the optic disc and fovea. Also, we seek grading of fundus images according to the severity level of dr and dme.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/183/g2385.png","https://idrid.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2019.101561","\N","\N","2023-11-08 00:42:00","2023-11-14 19:49:35" +"300","sliver07","SLIVER07","Liver segmentation in clinical 3D CT scans in this competition","The goal of this competition is to compare different algorithms to segment the liver from clinical 3D computed tomography (CT) scans.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/178/splash2.jpg","https://sliver07.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1109/tmi.2009.2013851","\N","\N","2023-11-08 00:42:00","2023-11-17 23:21:37" +"301","rocc","ROCC","DR disease in retina OCT volumes","Retinal OCT Classification Challenge (ROCC) is organized as a one-day challenge in conjunction with MVIP2017. The goal of this challenge is to call different automated algorithms that are able to detect DR disease from normal retina on a common dataset of OCT volumes, acquired with Topcon SD-OCT devices.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/180/logo.jpg","https://rocc.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:22:17" +"302","idrid","IDRiD","Retinal lesion segmentation, optic disc/fovea detection, and DR grading","This challenge evaluates automated techniques for analysis of fundus photographs. We target segmentation of retinal lesions like exudates, microaneurysms, and hemorrhages and detection of the optic disc and fovea. 
Also, we seek grading of fundus images according to the severity level of DR and DME.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/183/g2385.png","https://idrid.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2019.101561","\N","\N","2023-11-08 00:42:00","2023-11-17 23:22:36" "303","empire10","EMPIRE10","Chest CT images; assess the accuracy of algorithms","The EMPIRE10 challenge was launched in early 2010 with an initial set of 20 scan pairs to be registered by participants in their own facility. This was followed in September by a workshop at the MICCAI 2010 conference where participants registered a further 10 scan pairs live within a 3 hour timeframe. This process and the results obtained are described in detail in Murphy et al., ""Evaluation of registration methods on thoracic CT: the EMPIRE10 challenge."", IEEE Trans Med Imaging. 2011 Nov;30(11):1901-20. Please cite this publication if you wish to reference the EMPIRE10 challenge. From this point forward all participants will be judged based on the full set of 30 scan pairs. 
New participants and new submissions are always welcome - in this way we hope that the EMPIRE10 website will continue to reflect the state of the art in registration of pulmonary CT images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/186/logo.png","https://empire10.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1109/tmi.2011.2158349","\N","\N","2023-11-08 00:42:00","2023-11-15 22:51:00" -"304","lumic","LUMIC","CT chest images using an anthropomorphic digital phantom","The lumic challenge tests the accuracy in registration between pre- and post-contrast ct chest images for algorithms, using an anthropomophic digital phantom.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/203/lumiclogo.png","https://lumic.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:50:22" +"304","lumic","LUMIC","CT chest images using an anthropomorphic digital phantom","The LUMIC challenge tests the accuracy in registration between pre- and post-contrast CT chest images for algorithms, using an anthropomorphic digital phantom.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/203/lumiclogo.png","https://lumic.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:22:52" "305","continuousregistration","Continuous Registration","Submit your lung and brain registration method","Submit your method for lung and brain registration on https://github.com/superelastix/superelastix! 
Your method is easily accessible to end-users and automatically compiled, tested, and benchmarked weekly on several different data sets.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/207/logo.png","https://continuousregistration.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:50:44" "306","drive","DRIVE","Develop a system to automatically segment vessels in human retina fundus images","The DRIVE database has been established to enable comparative studies on segmentation of blood vessels in retinal images. Retinal vessel segmentation and delineation of morphological attributes of retinal blood vessels, such as length, width, tortuosity, branching patterns and angles are utilized for the diagnosis, screening, treatment, and evaluation of various cardiovascular and ophthalmologic diseases such as diabetes, hypertension, arteriosclerosis and chorodial neovascularization. Automatic detection and analysis of the vasculature can assist in the implementation of screening programs for diabetic retinopathy, can aid research on the relationship between vessel tortuosity and hypertensive retinopathy, vessel diameter measurement in relation with diagnosis of hypertension, and computer-assisted laser surgery. Automatic generation of retinal maps and extraction of branch points have been used for temporal or multimodal image registration and retinal image mosaic synthesis. Mor...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/210/logo_drive.PNG","https://drive.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:51:30" "307","curious2018","CuRIOUS 2018","MICCAI Challenge 2018 for brainshift correction with intra-operative ultrasound","Early brain tumor resection can effectively improve the patient’s survival rate. 
However, resection quality and safety can often be heavily affected by intra-operative brain tissue shift due to factors, such as gravity, drug administration, intracranial pressure change, and tissue removal. Such tissue shift can displace the surgical target and vital structures (e.g., blood vessels) shown in pre-operative images while these displacements may not be directly visible in the surgeon’s field of view. Intra-operative ultrasound (iUS) is a robust and relatively inexpensive technique to track intra-operative tissue shift and surgical tools, but to help update pre-surgical plans with this information, accurate and robust image registration algorithms are needed to relate pre-surgical MRI to iUS images. Despite the great progress so far, medical image registration techniques still have not made into the surgical room to directly benefit the patients with brain tumors. This challege/worksh...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/221/curious2018_logo.png","https://curious2018.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1109/tmi.2019.2935060","\N","\N","2023-11-08 00:42:00","2023-11-15 22:51:50" -"308","refuge","REFUGE","Algorithms for glaucoma detection and optic disc/cup segmentation","The goal of the retinal fundus glaucoma challenge (refuge) is to evaluate and compare automated algorithms for glaucoma detection and optic disc/cup segmentation on a common dataset of retinal fundus images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/229/logo_refuge_200x200.png","https://refuge.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:50:59" +"308","refuge","REFUGE","Algorithms for glaucoma detection and optic disc/cup segmentation","The goal of the REtinal FUndus Glaucoma Challenge (REFUGE) is to evaluate and compare automated algorithms for glaucoma detection and optic disc/cup segmentation on a common dataset of retinal fundus 
images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/229/logo_refuge_200x200.png","https://refuge.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:23:34" "309","monuseg","MoNuSeg","Segmenting nuclei from H&E stained histopathological images","This challenge will showcase the best nuclei segmentation techniques that will work on a diverse set of H&E stained histology images obtained from different hospitals spanning multiple patients and organs. This will enable the training and testing of readily usable (or generalized) nuclear segmentation softwares.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/238/monuseg_logo.png","https://monuseg.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:52:09" -"310","paves","PAVES","Mra images for lower limb arterial occlusive disease planning","Peripheral artery:vein enhanced segmentation (paves) is the challenge focussed on providing easily interpretable and relevant images that can be readily understood by clinicians (vascular interventional radiologists & vascular surgeons) from mra datasets where the venous and arterial vasculature may be equally enhanced. The setting is lower limb arterial occlusive disease where imaging of the below knee arterial vasculature is critical in planning limb salvage interventions. However, the competing demands of the high spatial resolution needed to image small vessels versus imaging time constraints where there is often a very short arteriovenous transit time for contrast passage form arterial to venous compartments makes imaging challenging. 
While dynamic mra techniques can usually allow arterial imaging without venous ‚äòcontamination‚äô these necessarily sacrifice spatial resolution.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/243/paveslogo.png","https://paves.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:51:11" +"310","paves","PAVES","MRA images for lower limb arterial occlusive disease planning","Peripheral Artery:Vein Enhanced Segmentation (PAVES) is the challenge focussed on providing easily interpretable and relevant images that can be readily understood by clinicians (vascular interventional radiologists & vascular surgeons) from MRA datasets where the venous and arterial vasculature may be equally enhanced. The setting is lower limb arterial occlusive disease where imaging of the below knee arterial vasculature is critical in planning limb salvage interventions. However, the competing demands of the high spatial resolution needed to image small vessels versus imaging time constraints where there is often a very short arteriovenous transit time for contrast passage from arterial to venous compartments makes imaging challenging. While dynamic MRA techniques can usually allow arterial imaging without venous 'contamination' these necessarily sacrifice spatial resolution.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/243/paveslogo.png","https://paves.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:23:47" "311","prostatex","PROSTATEx","Clinical significance of prostate lesions using MRI data","This challenge is the live continuation of the offline PROSTATEx Challenge (""SPIE-AAPM-NCI Prostate MR Classification Challenge”) that was held in conjunction with the 2017 SPIE Medical Imaging Symposium. 
In this challenge, the task is to predict the clinical significance of prostate lesions found in MRI data.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/258/prostatex-logo.jpg","https://prostatex.grand-challenge.org/","completed","intermediate","5","","\N","2022-04-30","2023-11-08 00:42:00","2023-11-15 22:52:25" -"312","hc18","HC18","Measuring fetal head circumference using 2D ultrasound images","Automated measurement of fetal head circumference using 2d ultrasound images","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/265/HC18_LogoV1.png","https://hc18.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:51:33" +"312","hc18","HC18","Measuring fetal head circumference using 2D ultrasound images","Automated measurement of fetal head circumference using 2D ultrasound images","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/265/HC18_LogoV1.png","https://hc18.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:23:58" "313","anhir","ANHIR","Aligning multi-stained histology tissue samples","The challenge focuses on comparing the accuracy (using manually annotated landmarks) and the approximate speed of automatic non-linear registration methods for aligning microscopy images of multi-stained histology tissue samples.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/285/logo_sq_OdYGo3e.png","https://anhir.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1109/tmi.2020.2986331","\N","\N","2023-11-08 00:42:00","2023-11-14 19:51:43" -"314","breastpathq","BreastPathQ: Cancer Cellularity Challenge 2019","Develop a method for analyzing histology patches","Spie-aapm-nci breastpathq:cancer circularity challenge 2019: participants will be tasked to develop an automated method for analyzing histology patches extracted from whole slide images and assign a score reflecting cancer cellularity for tumor burden 
assessment in each.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/296/spie_with_overlays.png","https://breastpathq.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1117/1.jmi.8.3.034501","\N","\N","2023-11-08 00:42:00","2023-11-14 19:51:52"
-"315","chaos","CHAOS","Segment liver in CT data and liver, spleen, and kidneys in MRI data","In this challenge, you segment the liver in ct data, and segment liver, spleen, and kidneys in mri data.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/298/logo_8sv4fA4_SWcTFEs.png","https://chaos.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1016/j.media.2020.101950","\N","\N","2023-11-08 00:42:00","2023-11-14 19:52:00"
-"316","ead2019","EAD2019","Address multi-class artefact detection, region segmentation, and detection","Endoscopic artefact detection (ead) is a core problem and needed for realising robust computer-assisted tools. The ead challenge has 3 tasks: 1) multi-class artefact detection, 2) region segmentation, 3) detection generalisation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/302/1772_A_M62_00022_1.jpg","https://ead2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:52:38"
+"314","breastpathq","BreastPathQ: Cancer Cellularity Challenge 2019","Develop a method for analyzing histology patches","SPIE-AAPM-NCI BreastPathQ: Cancer Cellularity Challenge 2019: Participants will be tasked to develop an automated method for analyzing histology patches extracted from whole slide images and assign a score reflecting cancer cellularity for tumor burden assessment in each.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/296/spie_with_overlays.png","https://breastpathq.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1117/1.jmi.8.3.034501","\N","\N","2023-11-08 00:42:00","2023-11-17 23:27:13"
+"315","chaos","CHAOS","Segment liver in CT data and 
liver, spleen, and kidneys in MRI data","In this challenge, you segment the liver in CT data, and segment liver, spleen, and kidneys in MRI data.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/298/logo_8sv4fA4_SWcTFEs.png","https://chaos.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1016/j.media.2020.101950","\N","\N","2023-11-08 00:42:00","2023-11-17 21:31:53" +"316","ead2019","EAD2019","Address multi-class artefact detection, region segmentation, and detection","Endoscopic artefact detection (EAD) is a core problem and needed for realising robust computer-assisted tools. The EAD challenge has 3 tasks: 1) multi-class artefact detection, 2) region segmentation, 3) detection generalisation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/302/1772_A_M62_00022_1.jpg","https://ead2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:25:06" "317","acdc-lunghp","ACDC-LungHP","Methods for whole-slide lung histopathology images","Automatic cancer detection and classification in whole-slide lung histopathology","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/305/logo.png","https://acdc-lunghp.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:52:50" -"318","palm","PALM","Pathological Myopia diagnosis and fundus lesion segmentation in patients","The pathologic myopia challenge (palm) focuses on the investigation and development of algorithms associated with the diagnosis of pathological myopia (pm) and segmentation of lesions in fundus photos from pm patients.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/307/palm-logo.jpg","https://palm.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:59:58" -"319","ichallenges","iChallenges","Eye image modalities, including REFUGE, PALM, RETOUCH, among others","We organized a serial of challenges on 
different eye image modalities, such as refuge, palm, retouch, etc.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/345/ichallenge.png","https://ichallenges.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:53:04"
+"318","palm","PALM","Pathological Myopia diagnosis and fundus lesion segmentation in patients","The pathologic myopia challenge (PALM) focuses on the investigation and development of algorithms associated with the diagnosis of pathological myopia (PM) and segmentation of lesions in fundus photos from PM patients.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/307/palm-logo.jpg","https://palm.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:25:23"
+"319","ichallenges","iChallenges","Eye image modalities, including REFUGE, PALM, RETOUCH, among others","We organized a series of challenges on different eye image modalities, such as REFUGE, PALM, RETOUCH, etc.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/345/ichallenge.png","https://ichallenges.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:25:37"
"320","decathlon-10","Decathlon","Test machine learning algorithm generalizability across 10 different tasks","The medical segmentation decathlon challenge tests the generalisability of machine learning algorithms when applied to 10 different semantic segmentation task.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/350/background_dark_logo.png","https://decathlon-10.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1038/s41467-022-30695-9","2022-07-21","2023-08-20","2023-11-08 00:42:00","2023-11-14 19:53:19"
-"321","lyon19","LYON19","Develop methods for automatic lymphocyte detection in IHC stained specimens","Automatic lymphocyte detection in ihc stained 
specimens.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/355/ban2_kpuoTJg.png","https://lyon19.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 22:40:36" -"322","kits19","KiTS19","Participate in the segmentation challenge for kidneys and kidney tumors in 2019","2019 kidney and kidney tumor segmentation challenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/360/Screenshot_from_2019-01-02_17-23-36.png","https://kits19.grand-challenge.org/","active","intermediate","5","https://arxiv.org/abs/1912.01054","\N","\N","2023-11-08 00:42:00","2023-11-08 00:58:20" -"323","paip2019","PAIP 2019","Address liver cancer segmentation and viable tumor burden estimation","Paip2019: liver cancer segmentation task 1: liver cancer segmentation task 2: viable tumor burden estimation","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/370/Untitled_design.png","https://paip2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:53:27" -"324","orcascore","orCaScore","Coronary artery calcium scoring in cardiac CT scans","The purpose of the orcascore challenge is to compare methods for automatic and semi-automatic coronary artery calcium scoring in cardiac ct scans. This evaluation framework was launched at the miccai 2014 workshops in boston, usa, where we organized the challenge on automatic coronary calcium scoring.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/378/00b670348bfdf4464e00c44310ec259f.jpg","https://orcascore.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1118/1.4945696","\N","\N","2023-11-08 00:42:00","2023-11-14 19:53:35" -"325","curious2019","curious2019","MRI to intra-operative ultrasound (iUS) before and after tumor resection","Miccai challenge 2019 for correction of brainshift with intra-operative ultrasound. 
Taks 1: register pre-operative mri to ius before tumor resection;taks 2: register ius after tumor resection to ius before tumor resection","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/380/CuRIOUS.png","https://curious2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:53:40" -"326","patchcamelyon","PatchCamelyon","Detect breast cancer metastasis in lymph nodes","Patchcamelyon is a new and challenging image classification dataset of 327.680 color images (96 x 96px) extracted from histopathology images of the camelyon16 challenge. The goal is to detect breast cancer metastasis in lymph nodes.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/381/Screen_Shot_2019-05-05_at_21.43.25.png","https://patchcamelyon.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:53:47" +"321","lyon19","LYON19","Develop methods for automatic lymphocyte detection in IHC stained specimens","Automatic Lymphocyte detection in IHC stained specimens.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/355/ban2_kpuoTJg.png","https://lyon19.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:27:28" +"322","kits19","KiTS19","Participate in the segmentation challenge for kidneys and kidney tumors in 2019","2019 Kidney and Kidney Tumor Segmentation Challenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/360/Screenshot_from_2019-01-02_17-23-36.png","https://kits19.grand-challenge.org/","active","intermediate","5","https://arxiv.org/abs/1912.01054","\N","\N","2023-11-08 00:42:00","2023-11-17 23:27:36" +"323","paip2019","PAIP 2019","Address liver cancer segmentation and viable tumor burden estimation","PAIP2019: Liver Cancer Segmentation Task 1: Liver Cancer Segmentation Task 2: Viable Tumor Burden 
Estimation","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/370/Untitled_design.png","https://paip2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:27:53"
+"324","orcascore","orCaScore","Coronary artery calcium scoring in cardiac CT scans","The purpose of the orCaScore challenge is to compare methods for automatic and semi-automatic coronary artery calcium scoring in cardiac CT scans. This evaluation framework was launched at the MICCAI 2014 workshops in Boston, USA, where we organized the Challenge on Automatic Coronary Calcium Scoring.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/378/00b670348bfdf4464e00c44310ec259f.jpg","https://orcascore.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1118/1.4945696","\N","\N","2023-11-08 00:42:00","2023-11-17 23:28:02"
+"325","curious2019","curious2019","MRI to intra-operative ultrasound (iUS) before and after tumor resection","MICCAI Challenge 2019 for Correction of Brainshift with Intra-Operative Ultrasound. Task 1: Register pre-operative MRI to iUS before tumor resection; Task 2: Register iUS after tumor resection to iUS before tumor resection","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/380/CuRIOUS.png","https://curious2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:28:13"
+"326","patchcamelyon","PatchCamelyon","Detect breast cancer metastasis in lymph nodes","PatchCamelyon is a new and challenging image classification dataset of 327,680 color images (96 x 96 px) extracted from histopathology images of the CAMELYON16 challenge. 
The goal is to detect breast cancer metastasis in lymph nodes.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/381/Screen_Shot_2019-05-05_at_21.43.25.png","https://patchcamelyon.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:28:22" "327","endovissub2019-scared","Stereo Correspondence and Reconstruction of Endoscopic Data","Address stereo correspondence and reconstruction challenges in endoscopic data","Stereo correspondence and reconstruction of endoscopic data","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/385/c7714704.jpg","https://endovissub2019-scared.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:58:30" -"328","verse2019","VerSe`19","Vertebrae labelling and segmentation on 150 CT scans","Vertebrae labelling and segmentation on a spine dataset on an unprecedented 150 ct scans with voxel-level vertebral annotations.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/388/logo_border.png","https://verse2019.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:54:26" -"329","gleason2019","Gleason2019","Methods for prostate cancer from H&E-stained histopathology images","Miccai 2019 automatic prostate gleason grading challenge: this challenge aims at the automatic gleason grading of prostate cancer from h&e-stained histopathology images. This task is of critical importance because gleason score is a strong prognostic predictor. 
On the other hand, it is very challenging because of the large degree of heterogeneity in the cellular and glandular patterns associated with each gleason grade, leading to significant inter-observer variability, even among expert pathologists.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/391/GLEASON2019.png","https://gleason2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 20:00:08" +"328","verse2019","VerSe`19","Vertebrae labelling and segmentation on 150 CT scans","Vertebrae labelling and segmentation on a spine dataset on an unprecedented 150 CT scans with voxel-level vertebral annotations.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/388/logo_border.png","https://verse2019.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 21:30:59" +"329","gleason2019","Gleason2019","Methods for prostate cancer from H&E-stained histopathology images","MICCAI 2019 automatic prostate gleason grading challenge: this challenge aims at the automatic gleason grading of prostate cancer from h&e-stained histopathology images. This task is of critical importance because gleason score is a strong prognostic predictor. 
On the other hand, it is very challenging because of the large degree of heterogeneity in the cellular and glandular patterns associated with each gleason grade, leading to significant inter-observer variability, even among expert pathologists.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/391/GLEASON2019.png","https://gleason2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 20:00:08" "330","age","AGE","Assess automatic methods for angle closure glaucoma evaluation","Angle closure glaucoma evaluation challenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/394/icon.png","https://age.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-11 01:48:53" "331","amd","iChallenge-AMD","Tackle challenges in age-related macular degeneration diagnosis and analysis","Age-related macular degeneration challenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/395/%E6%BC%94%E7%A4%BA%E6%96%87%E7%A8%BF1-2.png","https://amd.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:58:34" -"332","structseg2019","StructSeg2019","Automated structure segmentation for radiotherapy planning","Welcome to automatic structure segmentation for radiotherapy planning challenge 2019. This competition is part of the miccai 2019 challenge.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/398/logo_aAzg3xS.jpg","https://structseg2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:54:50" +"332","structseg2019","StructSeg2019","Automated structure segmentation for radiotherapy planning","Welcome to automatic structure segmentation for radiotherapy planning challenge 2019. 
This competition is part of the MICCAI 2019 challenge.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/398/logo_aAzg3xS.jpg","https://structseg2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:54:50" "333","digestpath2019","DigestPath2019","Algorithms for signet ring cell detection and colonoscopy tissue screening","The challenge aims to evaluate algorithms for signet ring cell detection and colonoscopy tissue screening in digestive system pathological images. It introduces the first public dataset for these tasks, providing expert-level annotations to advance research on automatic pathological object detection and lesion segmentation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/399/digestpath-logo2.png","https://digestpath2019.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 18:40:29" "334","odir2019","ODIR-2019","Compete in recognizing ocular diseases using morphological features","Peking university international competition on ocular disease intelligent recognition","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/402/logo.jpg","https://odir2019.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 20:27:41" -"335","lysto","Lymphocyte Assessment Hackathon","Workshop for lymphocyte assessment in computational pathology","Lymphocyte assessment hackathon in conjunction with the miccai compay 2019 workshop on computational pathology","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/421/lysto_square.png","https://lysto.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:55:01" +"335","lysto","Lymphocyte Assessment Hackathon","Workshop for lymphocyte assessment in computational pathology","Lymphocyte assessment hackathon in conjunction with the MICCAI compay 2019 workshop on computational 
pathology","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/421/lysto_square.png","https://lysto.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:55:01" "336","aasce19","AASCE","Develop accurate automated methods for estimating spinal curvature","Accurate automated spinal curvature estimation","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/424/logo.png","https://aasce19.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:58:39" "337","ecdp2020","HEROHE","Identify HER2-positive from HER2-negative breast cancer specimens","Unlike previous challenges, this proposes to find an image analysis algorithm to identify her2-positive from her2-negative breast cancer specimens evaluating only the morphological features present on the he slide, without the staining patterns of ihc.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/433/ECDP2020_square.jpg","https://ecdp2020.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:55:07" "338","monusac-2020","MoNuSAC 2020","Address segmentation and classification of nuclei in multi-organ images","Multi-organ nuclei segmentation and classification challenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/445/logo.PNG","https://monusac-2020.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.13140/rg.2.2.12290.02244/1","\N","\N","2023-11-08 00:42:00","2023-11-08 00:58:42" "339","endocv","EndoCV2020","Focus on artefact detection and disease detection in endoscopic images","Endoscopy computer vision challenge (endocv2020) introduces two core sub-themes in endoscopy: 1) artefact detection and segmentation (ead2020) and 2) disease detection and segmentation 
(edd2020).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/462/endoLogo.png","https://endocv.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:55:17" -"340","lndb","LNDb Challenge","Determine nodule detection and characterization for lung cancer screening","Lung cancer screening and fleischner follow-up determination in chest ct through nodule detection, segmentation and characterization","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/470/thumbnail_lndb.png","https://lndb.grand-challenge.org/","active","intermediate","5","https://arxiv.org/abs/1911.08434","\N","\N","2023-11-08 00:42:00","2023-11-14 19:55:31" -"341","verse2020","VerSe'20","Label and segment vertebrae on a diverse CT dataset","Vertebrae labelling and segmentation on a multi-centre, multi-scanner, and anatomically-diverse ct dataset.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/473/logo_border.png","https://verse2020.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:55:39" -"342","paip2020","PAIP2020","Classify molecular subtypes in colorectal cancers, predict MSI","Built on the success of its predecessor, paip2020 is the second challenge organized by the pathology ai platform (paip) and the seoul national university hospital (snuh). Paip2020 will proceed to not only detect whole tumor areas in colorectal cancers but also to classify their molecular subtypes, which will lead to characterization of their heterogeneity with respect to prognoses and therapeutic responses. All participants should predict one of the molecular carcinogenesis pathways, i.e., microsatellite instability(msi) in colorectal cancer, by performing digital image analysis without clinical tests. This task has a high clinical relevance as the currently used procedure requires an extensive microscopic assessment by pathologists. 
Therefore, those automated algorithms would reduce the workload of pathologists as a diagnostic assistance.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/480/paip2020_thumb_640x640.jpg","https://paip2020.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:55:52"
-"343","ribfrac","RibFrac","Benchmark rib fracture detection and classification on 660 CT scans","Rib fracture detection and classification challenge: a large-scale benchmark of 660 ct scans with ~5,000 rib fractures (around 80gb)","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/482/challenge-logo-white_b8a8xbr.jpg","https://ribfrac.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:55:59"
+"340","lndb","LNDb Challenge","Determine nodule detection and characterization for lung cancer screening","Lung cancer screening and Fleischner follow-up determination in chest CT through nodule detection, segmentation and characterization","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/470/thumbnail_lndb.png","https://lndb.grand-challenge.org/","active","intermediate","5","https://arxiv.org/abs/1911.08434","\N","\N","2023-11-08 00:42:00","2023-11-17 21:31:00"
+"341","verse2020","VerSe'20","Label and segment vertebrae on a diverse CT dataset","Vertebrae labelling and segmentation on a multi-centre, multi-scanner, and anatomically-diverse CT dataset.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/473/logo_border.png","https://verse2020.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 21:31:02"
+"342","paip2020","PAIP2020","Classify molecular subtypes in colorectal cancers, predict MSI","Built on the success of its predecessor, PAIP2020 is the second challenge organized by the Pathology AI Platform (PAIP) and the Seoul National University Hospital (SNUH). 
PAIP2020 will proceed to not only detect whole tumor areas in colorectal cancers but also to classify their molecular subtypes, which will lead to characterization of their heterogeneity with respect to prognoses and therapeutic responses. All participants should predict one of the molecular carcinogenesis pathways, i.e., microsatellite instability (MSI) in colorectal cancer, by performing digital image analysis without clinical tests. This task has a high clinical relevance as the currently used procedure requires an extensive microscopic assessment by pathologists. Therefore, those automated algorithms would reduce the workload of pathologists as a diagnostic assistance.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/480/paip2020_thumb_640x640.jpg","https://paip2020.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 21:33:31"
+"343","ribfrac","RibFrac","Benchmark rib fracture detection and classification on 660 CT scans","Rib fracture detection and classification challenge: a large-scale benchmark of 660 CT scans with ~5,000 rib fractures (around 80 GB)","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/482/challenge-logo-white_b8a8xbr.jpg","https://ribfrac.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 21:31:05"
"344","tn-scui2020","Thyroid Nodule Segmentation and Classification","Develop algorithms for thyroid nodules using a diverse ultrasound dataset","The main topic of this tn-scui2020 challenge is finding automatic algorithms to accurately classify the thyroid nodules in ultrasound images. It will provide the biggest public dataset of thyroid nodule with over 4500 patient cases from different ages, genders, and were collected using different ultrasound machines. Each ultrasound image is provided with its ground truth class (benign or maglinant) and a detailed delineation of the nodule. 
This challenge will provide a unique opportunity for participants from different backgrounds (e.g. academia, industry, and government, etc.) To compare their algorithms in an impartial way.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/484/Capture8888.PNG","https://tn-scui2020.grand-challenge.org/","active","intermediate","5","","2022-01-20","\N","2023-11-08 00:42:00","2023-11-14 19:56:07" "345","learn2reg","Learn2Reg","Address challenges in learning from small datasets","Challenge on medical image registration addressing: learning from small datasets; estimating large deformations; dealing with multi-modal scans; and learning from noisy annotations","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/486/logo_warped.png","https://learn2reg.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1109/tmi.2022.3213983","\N","\N","2023-11-08 00:42:00","2023-11-14 19:56:18" "346","asoca","Automated Segmentation Of Coronary Arteries","Develop automated methods for segmentation of coronary arteries","Automated segmentation of coronary arteries","","https://asoca.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 17:59:56" @@ -351,19 +351,19 @@ "350","surgvisdom","SurgVisDom","VR simulations to overcome data privacy concerns in context-aware models","Exploring visual domain adaptation using vr simulations to overcome data privacy concerns in context-aware models.","","https://surgvisdom.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-14 19:58:25" "351","cada","CADA","Cerebral aneurysm image analysis challenge","Cerebral aneurysms are local dilations of arterial blood vessels caused by a weakness of the vessel wall. Subarachnoid hemorrhage (sah) caused by the rupture of a cerebral aneurysm is a life-threatening condition associated with high mortality and morbidity. 
The mortality rate is above 40%, and even in case of survival cognitive impairment can affect patients for a long time. Major goals in image analysis are the detection and risk assessment of aneurysms. We, therefore, subdivided the challenge into three categories. The first task is finding the aneurysm; the second task is the accurate segmentation to allow for a longitudinal assessment of the development of suspicious aneurysms. The third task is the estimation of the rupture risk of the aneurysm.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/531/data-first-row-2.png","https://cada.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1007/978-3-030-72862-5","\N","\N","2023-11-08 00:42:00","2023-11-11 01:54:15" "352","dfu2020","Diabetic Foot Ulcer Challenge 2020","Diabetic foot ulcer challenge 2020","Diabetic foot ulcer challenge 2020","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/532/bb.png","https://dfu2020.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1016/j.compbiomed.2021.104596","\N","\N","2023-11-08 00:42:00","2023-11-08 22:47:03" -"353","covid-ct","CT diagnosis of COVID-19","COVID-19 CT Image Diagnosis Competition","Coronavirus disease 2019 (covid-19) has infected more than 1.3 million individuals all over the world and caused more than 106,000 deaths. One major hurdle in controlling the spreading of this disease is the inefficiency and shortage of medical tests. To mitigate the inefficiency and shortage of existing tests for covid-19, we propose this competition to encourage the development of effective deep learning techniques to diagnose covid-19 based on ct images. The problem we want to solve is to classify each ct image into positive covid-19 (the image has clinical findings of covid-19) or negative covid-19 ( the image does not have clinical findings of covid-19). 
It‚äôs a binary classification problem based on ct images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/537/covid-CT2.png","https://covid-ct.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-14 20:25:11","2023-11-08 00:59:01"
-"354","autoimplant","AutoImplant","MICCAI 2020 Cranial Implant Design","The miccai 2020 cranial implant design challenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/540/logos.PNG","https://autoimplant.grand-challenge.org/","completed","intermediate","5","https://arxiv.org/abs/2006.12449","\N","\N","2023-11-14 20:25:44","2023-11-08 00:59:03"
+"353","covid-ct","CT diagnosis of COVID-19","COVID-19 CT Image Diagnosis Competition","Coronavirus disease 2019 (COVID-19) has infected more than 1.3 million individuals all over the world and caused more than 106,000 deaths. One major hurdle in controlling the spreading of this disease is the inefficiency and shortage of medical tests. To mitigate the inefficiency and shortage of existing tests for COVID-19, we propose this competition to encourage the development of effective deep learning techniques to diagnose COVID-19 based on CT images. The problem we want to solve is to classify each CT image into positive COVID-19 (the image has clinical findings of COVID-19) or negative COVID-19 (the image does not have clinical findings of COVID-19). 
It's a binary classification problem based on CT images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/537/covid-CT2.png","https://covid-ct.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-14 20:25:11","2023-11-17 21:32:02" +"354","autoimplant","AutoImplant","MICCAI 2020 Cranial Implant Design","The MICCAI 2020 cranial implant design challenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/540/logos.PNG","https://autoimplant.grand-challenge.org/","completed","intermediate","5","https://arxiv.org/abs/2006.12449","\N","\N","2023-11-14 20:25:44","2023-11-08 00:59:03" "355","cada-rre","CADA - Rupture Risk Estimation","Cerebral aneurysm challenge: detect, segment, and assess rupture risk","Cerebral aneurysms are local dilations of arterial blood vessels caused by a weakness of the vessel wall. Subarachnoid hemorrhage (sah) caused by the rupture of a cerebral aneurysm is a life-threatening condition associated with high mortality and morbidity. The mortality rate is above 40%, and even in case of survival cognitive impairment can affect patients for a long time. Major goals in image analysis are the detection and risk assessment of aneurysms. We, therefore, subdivided the challenge into three categories. The first task is finding the aneurysm; the second task is the accurate segmentation to allow for a longitudinal assessment of the development of suspicious aneurysms. The third task is the estimation of the rupture risk of the aneurysm.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/541/data-first-row-2.png","https://cada-rre.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-11 01:59:43" "356","cada-as","CADA - Aneurysm Segmentation","Cerebral aneurysm image analysis: detect, segment, assess risk","Cerebral aneurysms are local dilations of arterial blood vessels caused by a weakness of the vessel wall. 
Subarachnoid hemorrhage (sah) caused by the rupture of a cerebral aneurysm is a life-threatening condition associated with high mortality and morbidity. The mortality rate is above 40%, and even in case of survival cognitive impairment can affect patients for a long time. Major goals in image analysis are the detection and risk assessment of aneurysms. We, therefore, subdivided the challenge into three categories. The first task is finding the aneurysm; the second task is the accurate segmentation to allow for a longitudinal assessment of the development of suspicious aneurysms. The third task is the estimation of the rupture risk of the aneurysm.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/543/data-first-row-2.png","https://cada-as.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:10" "357","panda","The PANDA challenge","PANDA Challenge: prostate cancer grading","The panda challenge: prostate cancer grade assessment using the gleason grading system","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/544/panda_logo_notext.png","https://panda.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:10" "358","pathvqachallenge","Pathology Visual Question Answering","Pathology visual question answering","Pathology visual question answering","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/548/grand_challenge3.jpg","https://pathvqachallenge.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-11 01:50:06" "359","qubiq","QUBIQ","Biomedical image segmentation uncertainties","Quantification of uncertainties in biomedical image segmentation challenge","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/552/brain.png","https://qubiq.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:15" 
-"360","lodopab","LoDoPaB-CT","Low-dose CT reconstruction challenge","Low-dose ct reconstruction in the setting of the lodopab-ct dataset.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/555/logo_white_bg.png","https://lodopab.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1038/s41597-021-00893-z","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:16" -"361","apples-ct","Apples-CT","Ct reconstruction for apple defect detection","High-throughput ct image reconstruction and defect detection for apples","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/557/logo.png","https://apples-ct.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:18" +"360","lodopab","LoDoPaB-CT","Low-dose CT reconstruction challenge","Low-dose CT reconstruction in the setting of the LoDoPaB-CT dataset.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/555/logo_white_bg.png","https://lodopab.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1038/s41597-021-00893-z","\N","\N","2023-11-08 00:42:00","2023-11-17 21:31:13" +"361","apples-ct","Apples-CT","CT reconstruction for apple defect detection","High-throughput CT image reconstruction and defect detection for apples","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/557/logo.png","https://apples-ct.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:18" "362","riadd","RIADD (ISBI-2021)","Retinal image analysis for disease detection","Retinal image analysis for multi-disease detection","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/562/Logo_ISBI_640_OO4Fuj9.png","https://riadd.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:20" "363","mitoem","MitoEM","3D mitochondria segmentation benchmark","Large-scale 3d mitochondria instance segmentation 
benchmark","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/566/logo2.png","https://mitoem.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:22" "364","a-afma","A-AFMA","Automated prenatal ultrasound measurement","Prenatal ultrasound (us) measurement of amniotic fluid is an important part of fetal surveillance as it provides a non-invasive way of assessing if there is oligohydramnios (insufficient amniotic fluid) and polyhydramnios (excess amniotic fluid), which are associated with numerous problems both during pregnancy and after birth. In this image analysis challenge, we aim to attract attention from the image analysis community to work on the problem of automated measurement of the mvp using the predefined ultrasound video clip based on a linear-sweep protocol [1]. We define two tasks. The first task is to automatically detect amniotic fluid and the maternal bladder. The second task is to identify the appropriate points for mvp measurement given the selected frame of the video clip, and calculate the length of the connected line between these points. 
The data was collected from women in the second trimester of pregnancy, as part of the pure study at the john radcliffe hospital in oxford, uk.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/567/Figure_3_MVP_example.png","https://a-afma.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-11 07:00:24" -"365","covid-segmentation","COVID-19 LUNG CT LESION SEGMENTATION CHALLENGE - 2020","SARS-CoV-2 lung lesion segmentation","This challenge will create the platform to evaluate emerging methods for the segmentation and quantification of lung lesions caused by sars-cov-2 infection from ct images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/569/Challenge_Image.png","https://covid-segmentation.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:45:40" +"365","covid-segmentation","COVID-19 LUNG CT LESION SEGMENTATION CHALLENGE - 2020","SARS-CoV-2 lung lesion segmentation","This challenge will create the platform to evaluate emerging methods for the segmentation and quantification of lung lesions caused by SARS-CoV-2 infection from CT images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/569/Challenge_Image.png","https://covid-segmentation.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:09:24" "366","valdo","Where is VALDO?","Vascular lesion detection challenge 2021","This challenge aims at promoting the development of new solutions for the automated segmentation of such very sparse and small objects while leveraging weak and noisy labels. 
The central objective of this challenge is to facilitate quantification of CSVD in brain MRI scans.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/570/LogoVALDO.png","https://valdo.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:45:10" "367","segpc-2021","SegPC-2021","Plasma cell cancer segmentation challenge","This challenge is positioned towards robust segmentation of cells which is the first stage to build such a tool for plasma cell cancer, namely, multiple myeloma (mm), which is a type of blood cancer.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/574/logo_fRPkhwS.png","https://segpc-2021.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:29" "368","endocv2021","EndoCV2021","Endoscopy Computer Vision Challenge 2021","Computer-aided detection, localization, and segmentation methods can help improve colonoscopy procedures. Even though many methods have been built to tackle automatic detection and segmentation of polyps, benchmarking and development of computer vision methods remains an open problem. This is mostly due to the lack of datasets or challenges that incorporate highly heterogeneous dataset appealing participants to test for generalisation abilities of the methods. We aim to build a comprehensive, well-curated, and defined dataset from 6 different centres worldwide and provide 5 datasets types that include: i) multi-centre train-test split from 5 centres ii) polyp size-based split (participants should do this by themselves if of interest), iii) data centre wise split, iv) modality split (only test) and v) one hidden centre test. Participants will be evaluated on all types to address strength and weaknesses of each participants’ method. 
Both detection bounding boxes and pixel-wise segme...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/575/endoLogo-2021_AdZmuvg.jpg","https://endocv2021.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:38:42" @@ -375,15 +375,15 @@ "374","cholectriplet2021","CholecTriplet 2021","EndoVis sub-challenge for surgical action","This sub-challenge focuses on exploiting machine learning methods for the online automatic recognition of surgical actions as a series of triplets. Participants will develop and compete with algorithms to recognize action triplets directly from the provided surgical videos. This novel challenge investigates the state-of-the-art on surgical fine-grained activity recognition and will establish a new promising research direction in computer-assisted surgery.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/596/logo-challenge.png","https://cholectriplet2021.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:37:21" "375","paip2021","PAIP2021","PAIP 2021 challenge: perineural invasion","PAIP 2021 challenge aims to promote the development of a common algorithm for automatic detection of perineural invasion in resected specimens of multi-organ cancers. PAIP 2021 challenge will have a technical impact in the following fields: detection of composite targets (nerve and tumor) and common modeling for target images in multiple backgrounds. 
This challenge will provide a good opportunity to overcome the limitations of current disease-organ-specific modeling and develop a technological approach to the universality of histology in multiple organs.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/598/640-640.png","https://paip2021.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:37:00" "376","flare","FLARE21","Abdominal organ segmentation challenge","Abdominal organ segmentation plays an important role in clinical practice, and to some extent, it seems to be a solved problem because the state-of-the-art methods have achieved inter-observer performance in several benchmark datasets. However, most of the existing abdominal datasets only contain single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether the excellent performance can be generalized on more diverse datasets. Moreover, many SOTA methods use model ensembles to boost performance, but these solutions usually have a large model size and cost extensive computational resources, which are impractical to be deployed in clinical practice. To address these limitations, we organize the Fast and Low GPU Memory Abdominal Organ Segmentation challenge that has two main features: (1) the dataset is large and diverse, includes 511 cases from 11 medical centers. (2) we not only focus on segmentation accuracy but also segmentation efficiency, whi...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/599/logo_hDqJ8uG.gif","https://flare.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1016/j.media.2022.102616","\N","\N","2023-11-08 00:42:00","2023-11-15 22:36:39" -"377","nucls","NuCLS","Triple-negative breast cancer nuclei challenge","Classification, Localization And Segmentation Of Nuclei In Scanned Ffpe H&E Stained Slides Of Triple-Negative Breast Cancer From The Cancer Genome Atlas. See: Amgad Et Al. 2021. 
Arxiv:2102.09099 [Cs.Cv].","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/601/TCGA-AR-A0U4-DX1_id-5ea40a88ddda5f8398990ccf_left-42405_top-70784_bo_PgpXdUu.png","https://nucls.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:51" -"378","bcsegmentation","Breast Cancer Segmentation","Triple-negative breast cancer segmentation","Semantic Segmentation Of Histologic Regions In Scanned Ffpe H&E Stained Slides Of Triple-Negative Breast Cancer From The Cancer Genome Atlas. See: Amgad M, Elfandy H, ..., Gutman Da, Cooper Lad. Structured Crowdsourcing Enables Convolutional Segmentation Of Histology Images. Bioinformatics. 2019. Doi: 10.1093/Bioinformatics/Btz083","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/602/BCSegmentationLogo.png","https://bcsegmentation.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 00:59:52" +"377","nucls","NuCLS","Triple-negative breast cancer nuclei challenge","Classification, Localization and Segmentation of nuclei in scanned FFPE H&E stained slides of triple-negative breast cancer from The Cancer Genome Atlas. See: Amgad et al. 2021. arXiv:2102.09099 [cs.CV].","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/601/TCGA-AR-A0U4-DX1_id-5ea40a88ddda5f8398990ccf_left-42405_top-70784_bo_PgpXdUu.png","https://nucls.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:29:28" +"378","bcsegmentation","Breast Cancer Segmentation","Triple-negative breast cancer segmentation","Semantic segmentation of histologic regions in scanned FFPE H&E stained slides of triple-negative breast cancer from The Cancer Genome Atlas. See: Amgad M, Elfandy H, ..., Gutman DA, Cooper LAD. Structured crowdsourcing enables convolutional segmentation of histology images. Bioinformatics. 2019. 
doi: 10.1093/bioinformatics/btz083","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/602/BCSegmentationLogo.png","https://bcsegmentation.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:29:37" "379","feta","FeTA - Fetal Tissue Annotation Challenge","Fetal tissue annotation challenge","The Fetal Tissue Annotation and Segmentation Challenge (FeTA) is a multi-class, multi-institution image segmentation challenge part of MICCAI 2022. The goal of FeTA is to develop generalizable automatic multi-class segmentation methods for the segmentation of developing human brain tissues that will work with data acquired at different hospitals. The challenge provides manually annotated, super-resolution reconstructed MRI data of human fetal brains which will be used for training and testing automated multi-class image segmentation algorithms. In FeTA 2021, we used the first publicly available dataset of fetal brain MRI to encourage teams to develop automatic brain tissue segmentation algorithms. This year, FeTA 2022 takes it to the next level by launching a multi-center challenge for the development of image segmentation algorithms that will be generalizable to different hospitals with unseen data. 
We will include data from two institutions in the training dataset, and there wi...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/604/FeTA_logo_640.png","https://feta.grand-challenge.org/","upcoming","intermediate","5","","2024-03-21","2024-04-26","2023-11-08 00:42:00","2023-11-15 22:36:15" "380","fastpet-ld","fastPET-LD","PET scan ""hot spots"" detection challenge","In this challenge, we provide 2 training datasets of 68 cases each: the first one was acquired at Sheba medical center (Israel) nuclear medicine department with a very-short exposure of 30s pbp, while the second is the same data followed by a denoising step implemented by a fully convolutional Dnn architecture trained under perceptual loss [1,2]. The purpose of this challenge is the detection of “hot spots”, that is locations that have an elevated standard uptake value (SUV) and potential clinical significance. Corresponding CT scans are also provided. The ground truth, common to both datasets, was generated by Dr. Liran Domachevsky, chair of nuclear medicine at Sheba medical center. It consists of a 3-D segmentation map of the hot spots as well as an Excel file containing the position and size of a 3D cuboid bounding box for each hot spot.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/605/IMG_19052021_144815_600_x_600_pixel.jpg","https://fastpet-ld.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:35:52" "381","autoimplant2021","AutoImplant 2021","Automatic cranial implant design challenge","Please see our AutoImplant 2020 website for an overview of the cranial implant design topic. Our 2nd AutoImplant Challenge (referred to as AutoImplant 2021) sees the (not limited to) following three major improvements compared to the prior edition, besides a stronger team: Real craniotomy defective skulls will be provided in the evaluation phase. 
Task specific metrics (e.g., boundary Dice Score) that are optimally in agreement with the clinical criteria of cranial implant design will be implemented and used. Besides a metric-based scoring and ranking system, neurosurgeons will be invited to verify, score and rank the participants-submitted cranial implants based their clinical usability (for the real cases in Task 2).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/607/AutoImplant_2021_Logo.png","https://autoimplant2021.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1109/tmi.2021.3077047","\N","\N","2023-11-08 00:42:00","2023-11-16 17:41:01" "382","dfu-2021","DFUC2021","Diabetic foot ulcer challenge 2021","We have received approval from the UK National Health Service (NHS) Re-search Ethics Committee (REC) to use these images for the purpose of research. The NHS REC reference number is 15/NW/0539. Foot images with DFU were collected from the Lancashire Teaching Hospital over the past few years. Three cameras were used for capturing the foot images, Kodak DX4530, Nikon D3300and Nikon COOLPIX P100. The images were acquired with close-ups of the full foot at a distance of around 30–40 cm with the parallel orientation to the plane of an ulcer. The use of flash as the primary light source was avoided, and instead, adequate room lights were used to get the consistent colours in images. Images were acquired by a podiatrist and a consultant physician with specialization in the diabetic foot, both with more than 5 years professional experience. As a pre-processing stage, we have discarded photographs with out of focus and blurry artefacts. 
The DFUC2021 consists of 15,683 DFU patche...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/608/footsnap_logo.png","https://dfu-2021.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1007/978-3-030-94907-5_7","\N","\N","2023-11-08 00:42:00","2023-11-16 17:41:08" -"383","saras-mesad","SARAS-MESAD","MICCAI 2021 multi-domain surgeon action detection","This Challenge Is Organized Under Miccai 2021, The 24Th International Conference On Medical Image Computing And Computer Assisted Intervention. The Event Will Be Held From September 27Th To October 1St 2021 In Strasbourg, France. The Challenge Focuses On Multi-Domain Surgeon Action Detection.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/609/Screenshot_3.png","https://saras-mesad.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 01:00:12" +"383","saras-mesad","SARAS-MESAD","MICCAI 2021 multi-domain surgeon action detection","This challenge is organized under MICCAI 2021, the 24th International Conference on Medical Image Computing and Computer Assisted Intervention. The event will be held from September 27th to October 1st 2021 in Strasbourg, France. The challenge focuses on multi-domain surgeon action detection.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/609/Screenshot_3.png","https://saras-mesad.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:30:14" "384","node21","NODE21","NODE21: nodule generation and detection","Among both men and women, lung cancer causes the greatest number of cancer deaths worldwide. Symptoms of lung cancer typically occur at an advanced stage of the disease, when treatment has a reduced chance of success. Early detection is therefore a key factor in reducing mortality rates from lung cancer. 
Pulmonary nodules, detected through imaging, are the initial manifestation of lung cancer, visible well before clinical symptoms or signs emerge. They can be visible on a chest radiograph (CXR), and chest radiography is by far the most common radiological exam in the world. Thus, CXR plays a critical role in the accurate identification of nodules in the drive towards early detection of lung cancer. Pulmonary nodules are frequently encountered as incidental findings in patients undergoing routine examination or CXR imaging for issues unrelated to lung cancer.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/612/node21logo.jpg","https://node21.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:34:20" -"385","wsss4luad","WSSS4LUAD","WSSS4LUAD semantic segmentation challenge","The Wsss4Luad Dataset Contains Over 10,000 Patches Of Lung Adenocarcinoma From Whole Slide Images From Guangdong Provincial People'S Hospital And Tcga With Image-Level Annotations. The Goal Of This Challenge Is To Perform Semantic Segmentation For Differentiating Three Important Types Of Tissues In The Wsis Of Lung Adenocarcinoma, Including Cancerous Epithelial Region, Cancerous Stroma Region And Normal Region. Paticipants Have To Use Image-Level Annotations To Give Pixel-Level Prediction.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/621/%E6%88%AA%E5%B1%8F2021-07-05_%E4%B8%8A%E5%8D%8810.17.09.png","https://wsss4luad.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 01:00:14" +"385","wsss4luad","WSSS4LUAD","WSSS4LUAD semantic segmentation challenge","The WSSS4LUAD dataset contains over 10,000 patches of lung adenocarcinoma from whole slide images from Guangdong Provincial People's Hospital and TCGA with image-level annotations. 
The goal of this challenge is to perform semantic segmentation for differentiating three important types of tissues in the WSIs of lung adenocarcinoma, including cancerous epithelial region, cancerous stroma region and normal region. Participants have to use image-level annotations to give pixel-level predictions.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/621/%E6%88%AA%E5%B1%8F2021-07-05_%E4%B8%8A%E5%8D%8810.17.09.png","https://wsss4luad.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:30:23" "386","ski10","SKI10","SKI10 cartilage and bone segmentation challenge","Welcome to the SKI10 website. The goal of SKI10 was to compare different algorithms for cartilage and bone segmentation from knee MRI data. Knee cartilage segmentation is a clinically relevant segmentation problem that has gained considerable importance in recent years. Among others, it is used to quantify cartilage deterioration for the diagnosis of Osteoarthritis and to optimize surgical planning of knee implants. See the SKI10 paper in the SKI10 Zenodo repository for further details. SKI10 started out as one of the three competitions of the Grand Challenge Workshop 2010, organized in conjunction with the MICCAI 2010 conference.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/624/ski10sq-big.png","https://ski10.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:31:16" "387","emsig","EMSIG","EMSIG hackathon: radar-based activity recognition","Welcome to the EMSIG Hackathon 2021, organised by EMSIG (www.emsig.org.uk/), the University of Glasgow, Edinburgh Napier University, UCL, DSTL and BAE Systems plc. The goal of the challenge is to evaluate and compare algorithms for human activity recognition based on radar data. 
We invite the UK radar community to participate by developing and testing existing and novel automated classification methods.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/628/EMSIGgroup_nYL722C.png","https://emsig.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:30:58" "388","qubiq21","QUBIQ2021","Quantification of uncertainties challenge 2021","The QUBIQ challenge deals with benchmarking algorithms that quantify uncertainties in biomedical image segmentation. Participants will work on binary segmentation tasks, all of which with multiple annotations from domain experts. To be segmented are various pathologies and anatomical structures, such as brain, kidney, or prostate, in MR or CT image data. A successful algorithm will segment these structures and reproduce the distribution of the experts’ annotations.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/629/brain.png","https://qubiq21.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-15 22:30:44" @@ -398,20 +398,20 @@ "397","cholectriplet2022","Surgical Action Triplet Detection 2022","Dounding box localization of the regions of action triplets","Formalizing surgical activities as triplets of the used instruments, actions performed, and target anatomies acted upon provides a better, comprehensive and fine-grained modeling of surgical activities. Automatic recognition of these triplet activities directly from surgical videos would facilitate the development of intra-operative decision support systems that are more helpful, especially for safety, in the operating room (OR). Our previous EndoVIS challenge, CholecTriplet2021 (MICCAI 2021), and existing works on surgical action triplet recognition tackles this as a multi-label classification of all possible combinations. 
For better clinical utility, real-time modeling of tool-tissue interaction will go beyond determining the presence of these action triplets, to also include estimating their locations in each video frame. Hence, this challenge extends our previous challenge on action triplet recognition to also include bounding box localization of the...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/649/t50-logo-2022_kz11kSp.png","https://cholectriplet2022.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1007/978-3-030-59716-0_35","\N","\N","2023-11-08 00:42:00","2023-11-15 21:57:01" "398","endocv2022","EndoCV2022","Endoscopic video sequence detection and segmentation","Accurate detection of artefacts is a core challenge in a wide-range of endoscopic applications addressing multiple different disease areas. The importance of precise detection of these artefacts is essential for high-quality endoscopic video acquisition crucial for realising reliable computer assisted endoscopy tools for improved patient care. In particular, colonoscopy requires colon preparation and cleaning to obtain improved adenoma detection rate. Computer aided systems can help to guide both expert and trainee endoscopists to obtain consistent high quality surveillance and detect, localize and segment widely known cancer precursor lesion, “polyps”. While deep learning has been successfully applied in the medical imaging, generalization is still an open problem. Generalizability issue of deep learning models need to be clearly defined and tackled to build more reliable technology for clinical translation. 
Inspired by the enthusiasm of participants on our previous challenges, t...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/650/EndoCV2022-logo-v2.jpg","https://endocv2022.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.media.2021.102002","2022-02-20","\N","2023-11-08 00:42:00","2023-11-15 21:56:32" "399","mela","MELA2022","MICCAI 2022 MELA challenge: ct scan benchmark","The mediastinum is the common site of various lesions, including hyperplasia, cysts, tumors, and lymph nodes transferred from the lungs, which might cause serious problems due to their location. Therefore, the detection of mediastinal lesions has important indications for the early screening and diagnosis of related diseases. Computer-aided diagnosis methods have been developed to assist doctors in interpreting massive computed tomography (CT) scans. However, few prior studies investigate deep learning methods on this labor-intensive task. This challenge establishes a large-scale benchmark dataset to automatically detect mediastinal lesions from 1100 CT scans, consisting of 770 CTs for training, 110 CTs for validation, and 220 CTs for testing. Each annotation file includes coordinates of the bounding box of each mediastinal lesion region per CT scan for serving the task of detection. 
We hope this challenge could facilitate the research and application of automatic mediastinal lesi...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/651/LOGO-%E8%93%9D-900.png","https://mela.grand-challenge.org/","completed","intermediate","5","","2022-07-02","2022-07-17","2023-11-08 00:42:00","2023-11-15 21:56:16" -"400","kipa22","KiPA22 (Regular Challenge)","Kidney and artery segmentation challenge","The Challenge Is Aimed To Segment Kidney, Renal Tumors, Arteries, And Veins From Computed Tomography Angiography (Cta) Images In One Inference.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/654/logo3_%E5%89%AF%E6%9C%AC.png","https://kipa22.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1016/j.media.2021.102055","2022-07-01","\N","2023-11-08 00:42:00","2023-11-11 01:55:06" -"401","parse2022","Parse2022","Pulmonary artery segmentation challenge 2022","It Is Of Significant Clinical Interest To Study Pulmonary Artery Structures In The Field Of Medical Image Analysis. One Prerequisite Step Is To Segment Pulmonary Artery Structures From Ct With High Accuracy And Low Time-Consuming. The Segmentation Of Pulmonary Artery Structures Benefits The Quantification Of Its Morphological Changes For Diagnosis Of Pulmonary Hypertension And Thoracic Surgery. However, Due To The Complexity Of Pulmonary Artery Topology, Automated Segmentation Of Pulmonary Artery Topology Is A Challenging Task. Besides, The Open Accessible Large-Scale Ct Data With Well Labeled Pulmonary Artery Are Scarce (The Large Variations Of The Topological Structures From Different Patients Make The Annotation An Extremely Challenging Process). The Lack Of Well Labeled Pulmonary Artery Hinders The Development Of Automatic Pulmonary Artery Segmentation Algorithm. 
Hence, We Try To Host The First Pulmonary Artery Segmentation Challenge In Miccai 2022 (Named Parse2022) To Start A...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/658/logo.jpg","https://parse2022.grand-challenge.org/","active","intermediate","5","","2023-06-30","\N","2023-11-08 00:42:00","2023-11-08 22:47:26" +"400","kipa22","KiPA22 (Regular Challenge)","Kidney and artery segmentation challenge","The challenge aims to segment the kidney, renal tumors, arteries, and veins from computed tomography angiography (CTA) images in one inference.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/654/logo3_%E5%89%AF%E6%9C%AC.png","https://kipa22.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1016/j.media.2021.102055","2022-07-01","\N","2023-11-08 00:42:00","2023-11-17 23:31:41" +"401","parse2022","Parse2022","Pulmonary artery segmentation challenge 2022","It is of significant clinical interest to study pulmonary artery structures in the field of medical image analysis. One prerequisite step is to segment pulmonary artery structures from CT with high accuracy and low time consumption. The segmentation of pulmonary artery structures benefits the quantification of their morphological changes for the diagnosis of pulmonary hypertension and thoracic surgery. However, due to the complexity of pulmonary artery topology, automated segmentation of pulmonary artery topology is a challenging task. Besides, openly accessible large-scale CT data with well-labeled pulmonary arteries are scarce (the large variations of the topological structures from different patients make the annotation an extremely challenging process). The lack of well-labeled pulmonary arteries hinders the development of automatic pulmonary artery segmentation algorithms. 
Hence, we try to host the first Pulmonary ARtery SEgmentation challenge in MICCAI 2022 (Named Parse2022) to start a...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/658/logo.jpg","https://parse2022.grand-challenge.org/","active","intermediate","5","","2023-06-30","\N","2023-11-08 00:42:00","2023-11-17 23:31:47" "402","tdsc-abus2023","TDSC-ABUS2023","Automated 3D breast ultrasound tumor challenge","Tumor Detection, Segmentation And Classification Challenge On Automated 3D Breast Ultrasound","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/662/logo.png","https://tdsc-abus2023.grand-challenge.org/","completed","intermediate","5","","2023-07-15","2023-08-20","2023-11-08 00:42:00","2023-11-08 01:00:36" "403","instance","INSTANCE2022","Intracranial hemorrhage segmentation challenge 2022","Participants are required to segment Intracranial Hemorrhage region in Non-Contrast head CT (NCCT).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/667/Logo_%E9%A1%B5%E9%9D%A2_2.png","https://instance.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1109/jbhi.2021.3103850","2022-07-14","\N","2023-11-08 00:42:00","2023-11-15 21:55:33" "404","bcnb","BCNB","Early breast cancer core-needle biopsy dataset","Breast cancer (BC) has become the greatest threat to women’s health worldwide. Clinically, identification of axillary lymph node (ALN) metastasis and other tumor clinical characteristics such as ER, PR, and so on, are important for evaluating the prognosis and guiding the treatment for BC patients. Several studies intended to predict the ALN status and other tumor clinical characteristics by clinicopathological data and genetic testing score. However, due to the relatively poor predictive values and high genetic testing costs, these methods are often limited. 
Recently, deep learning (DL) has enabled rapid advances in computational pathology; DL can perform high-throughput feature extraction on medical images and analyze the correlation between primary tumor features and the above status. So far, there is no relevant research on preoperatively predicting ALN metastasis and other tumor clinical characteristics based on WSIs of primary BC samples.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/668/BCNB-logo_CUGMa0V.png","https://bcnb.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.3389/fonc.2021.759007","\N","\N","2023-11-08 00:42:00","2023-11-16 17:41:33" "405","ravir","RAVIR","Retinal arteries and veins segmentation dataset","The retinal vasculature provides important clues in the diagnosis and monitoring of systemic diseases including hypertension and diabetes. The microvascular system is of primary involvement in such conditions, and the retina is the only anatomical site where the microvasculature can be directly observed. The objective assessment of retinal vessels has long been considered a surrogate biomarker for systemic vascular diseases, and with recent advancements in retinal imaging and computer vision technologies, this topic has become the subject of renewed attention. In this paper, we present a novel dataset, dubbed RAVIR, for the semantic segmentation of Retinal Arteries and Veins in Infrared Reflectance (IR) imaging. 
It enables the creation of deep learning-based models that distinguish extracted vessel type without extensive post-processing.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/673/IR_Case_022.png","https://ravir.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1109/jbhi.2022.3163352","2022-07-18","\N","2023-11-08 00:42:00","2023-11-16 17:41:34" -"406","dfuc2022","DFUC 2022","Diabetic foot ulcer (DFU) segmentation challenge","Diabetes Is A Global Epidemic Affecting Around 425 Million People And Expected To Rise To 629 Million By 2045. Diabetic Foot Ulcer (Dfu) Is A Severe Condition That Can Result From The Disease. The Rise Of The Condition Over The Last Decades Is A Challenge For Healthcare Systems. Cases Of Dfu Usually Lead To Severe Conditions That Greatly Prolongs Treatment And Result In Limb Amputation Or Death. Recent Research Focuses On Creating Detection Algorithms To Monitor Their Condition To Improve Patient Care And Reduce Strain On Healthcare Systems. Work Between Manchester Metropolitan University, Lancashire Teaching Hospitals And Manchester University Nhs Foundation Trust Has Created An International Repository Of Up To 11,000 Dfu Images. Analysis Of Ulcer Regions Is A Key For Dfu Management. Delineation Of Ulcers Is Time-Consuming. With Effort From The Lead Scientists Of The Uk, Us, India And New Zealand, This Challenge Promotes Novel Work In Dfu Segmentation And Promote Interdisciplina...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/674/footsnap_logo.png","https://dfuc2022.grand-challenge.org/","active","intermediate","5","https://arxiv.org/abs/2204.11618","2022-06-20","\N","2023-11-08 00:42:00","2023-11-08 01:00:48" +"406","dfuc2022","DFUC 2022","Diabetic foot ulcer (DFU) segmentation challenge","Diabetes is a global epidemic affecting around 425 million people and expected to rise to 629 million by 2045. Diabetic Foot Ulcer (DFU) is a severe condition that can result from the disease. 
The rise of the condition over the last decades is a challenge for healthcare systems. Cases of DFU usually lead to severe conditions that greatly prolong treatment and result in limb amputation or death. Recent research focuses on creating detection algorithms to monitor patients' condition to improve patient care and reduce strain on healthcare systems. Work between Manchester Metropolitan University, Lancashire Teaching Hospitals and Manchester University NHS Foundation Trust has created an international repository of up to 11,000 DFU images. Analysis of ulcer regions is key for DFU management. Delineation of ulcers is time-consuming. With effort from the lead scientists of the UK, US, India and New Zealand, this challenge promotes novel work in DFU segmentation and promotes interdisciplina...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/674/footsnap_logo.png","https://dfuc2022.grand-challenge.org/","active","intermediate","5","https://arxiv.org/abs/2204.11618","2022-06-20","\N","2023-11-08 00:42:00","2023-11-17 23:32:01" "407","atlas","ATLAS R2.0 - Stroke Lesion Segmentation","Anatomical tracings of lesions after stroke","Accurate lesion segmentation is critical in stroke rehabilitation research for the quantification of lesion burden and accurate image processing. Current automated lesion segmentation methods for T1-weighted (T1w) MRIs, commonly used in rehabilitation research, lack accuracy and reliability. Manual segmentation remains the gold standard, but it is time-consuming, subjective, and requires significant neuroanatomical expertise. However, many methods developed with ATLAS v1.2 report low accuracy, are not publicly accessible or are improperly validated, limiting their utility to the field. Here we present ATLAS v2.0 (N=1271), a larger dataset of T1w stroke MRIs and manually segmented lesion masks that includes training (public, n=655), test (masks hidden, n=300), and generalizability (completely hidden, n=316) data. 
Algorithm development using this larger sample should lead to more robust solutions, and the hidden test and generalizability datasets allow for unbiased performance eval...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/676/ATLAS_Logo_square.png","https://atlas.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1101/2021.12.09.21267554","2022-09-18","\N","2023-11-08 00:42:00","2023-11-15 21:54:33" -"408","3dteethseg","3D Teeth Scan Segmentation and Labeling Challenge MICCAI2022","Teeth segmentation in orthodontic CAD systems","Computer-Aided Design (Cad) Tools Have Become Increasingly Popular In Modern Dentistry For Highly Accurate Treatment Planning. In Particular, In Orthodontic Cad Systems, Advanced Intraoral Scanners (Ioss) Are Now Widely Used As They Provide Precise Digital Surface Models Of The Dentition. Such Models Can Dramatically Help Dentists Simulate Teeth Extraction, Move, Deletion, And Rearrangement And Therefore Ease The Prediction Of Treatment Outcomes. Although Ioss Are Becoming Widespread In Clinical Dental Practice, There Are Only Few Contributions On Teeth Segmentation/Labeling Available In The Literature And No Publicly Available Database. A Fundamental Issue That Appears With Ios Data Is The Ability To Reliably Segment And Identify Teeth In Scanned Observations. Teeth Segmentation And Labelling Is Difficult As A Result Of The Inherent Similarities Between Teeth Shapes As Well As Their Ambiguous Positions On Jaws.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/680/Grand-Challenge-Logo_2.jpg","https://3dteethseg.grand-challenge.org/","completed","intermediate","5","","2022-07-01","2022-08-15","2023-11-08 00:42:00","2023-11-11 02:00:03" +"408","3dteethseg","3D Teeth Scan Segmentation and Labeling Challenge MICCAI2022","Teeth segmentation in orthodontic CAD systems","Computer-aided design (CAD) tools have become increasingly popular in modern dentistry for highly accurate treatment planning. 
In particular, in orthodontic CAD systems, advanced intraoral scanners (IOSs) are now widely used as they provide precise digital surface models of the dentition. Such models can dramatically help dentists simulate teeth extraction, movement, deletion, and rearrangement and therefore ease the prediction of treatment outcomes. Although IOSs are becoming widespread in clinical dental practice, there are only a few contributions on teeth segmentation/labeling available in the literature and no publicly available database. A fundamental issue that appears with IOS data is the ability to reliably segment and identify teeth in scanned observations. Teeth segmentation and labelling is difficult as a result of the inherent similarities between teeth shapes as well as their ambiguous positions on jaws.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/680/Grand-Challenge-Logo_2.jpg","https://3dteethseg.grand-challenge.org/","completed","intermediate","5","","2022-07-01","2022-08-15","2023-11-08 00:42:00","2023-11-17 23:32:08" "409","flare22","MICCAI FLARE 2022","Fast and low-resource abdominal organ segmentation","We extend the FLARE 2021 Challenge from fully supervised settings to a semi-supervised setting that focuses on how to use unlabeled data. Specifically, we provide a small number of labeled cases (50) and a large number of unlabeled cases (2000) in the training set, 50 visible cases for validation, and 200 hidden cases for testing. The segmentation targets include 13 organs: liver, spleen, pancreas, right kidney, left kidney, stomach, gallbladder, esophagus, aorta, inferior vena cava, right adrenal gland, left adrenal gland, and duodenum. In addition to the typical Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD), our evaluation metrics also focus on the inference speed and resource (GPU, CPU) consumption. Compared to the FLARE 2021 challenge, the dataset is 4x larger and the segmentation targets are increased to 13 organs. 
Moreover, the resource-related metrics are changed to the area under GPU memory-time curve and the area under CPU utilization-time curve rat...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/683/challenge-logo_xJtvQKE.png","https://flare22.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1109/tpami.2021.3100536","2023-01-01","\N","2023-11-08 00:42:00","2023-11-15 21:54:18" -"410","aggc22","AGGC22","Segment the Circle of Willis vessel components for both CTA and MRA","Driving Innovation In Computational Pathology For Prostate Cancer Diagnosis. Develop Algorithms To Identify Gleason Patterns In H&E-Stained Whole Slide Images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/684/logo_yEwrCqI.PNG","https://aggc22.grand-challenge.org/","active","intermediate","5","","2022-06-29","\N","2023-11-08 00:42:00","2023-11-15 22:43:05" +"410","aggc22","AGGC22","Segment the Circle of Willis vessel components for both CTA and MRA","Driving innovation in computational pathology for prostate cancer diagnosis. Develop algorithms to identify Gleason patterns in H&E-stained whole slide images.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/684/logo_yEwrCqI.PNG","https://aggc22.grand-challenge.org/","active","intermediate","5","","2022-06-29","\N","2023-11-08 00:42:00","2023-11-17 23:32:15" "411","autopet","autoPET","Whole-body FDG-PET/CT lesion segmentation","Automatic tumor lesion segmentation in whole-body FDG-PET/CT on a large-scale database of 1014 studies of 900 patients (training database) acquired at a single site: accurate and fast lesion segmentation; avoidance of false positives (brain, bladder, etc.) 
Testing will be performed on 150 studies (held-out test database) with 100 studies originating from the same hospital as the training database and 50 are drawn from a different hospital with similar acquisition protocol to assess algorithm robustness and generalizability","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/686/autopet-5.png","https://autopet.grand-challenge.org/","completed","intermediate","5","","2022-05-02","2022-09-04","2023-11-08 00:42:00","2023-11-15 21:53:42" -"412","acrobat","ACROBAT 2023","Acrobat challenge: WSI registration in breast cancer","The Acrobat Challenge Aims To Advance The Development Of Wsi Registration Algorithms That Can Align Wsis Of Ihc-Stained Breast Cancer Tissue Sections To Corresponding Tissue Regions That Were Stained With H&E. All Wsis Originate From Routine Diagnostic Workflows.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/687/final_logo_1280x1280_qgi9ILO.png","https://acrobat.grand-challenge.org/","active","intermediate","5","","2022-06-15","\N","2023-11-08 00:42:00","2023-11-08 22:47:46" -"413","surgt","SurgT: Surgical Tracking","Surgical video tracking for trajectory estimation","This Challenge Consists Of Surgical Videos With A Target Bounding Box And The Participants Are Expected To Develop Visual Tracking Algorithms To Estimate The Trajectory Of The Bounding Box Throughout The Video-Sequence.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/688/Screenshot_from_2022-06-27_09-41-30.png","https://surgt.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-08 22:47:48" +"412","acrobat","ACROBAT 2023","Acrobat challenge: WSI registration in breast cancer","The ACROBAT challenge aims to advance the development of WSI registration algorithms that can align WSIs of IHC-stained breast cancer tissue sections to corresponding tissue regions that were stained with H&E. 
All WSIs originate from routine diagnostic workflows.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/687/final_logo_1280x1280_qgi9ILO.png","https://acrobat.grand-challenge.org/","active","intermediate","5","","2022-06-15","\N","2023-11-08 00:42:00","2023-11-17 23:32:30" +"413","surgt","SurgT: Surgical Tracking","Surgical video tracking for trajectory estimation","This challenge consists of surgical videos with a target bounding box and the participants are expected to develop visual tracking algorithms to estimate the trajectory of the bounding box throughout the video-sequence.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/688/Screenshot_from_2022-06-27_09-41-30.png","https://surgt.grand-challenge.org/","completed","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:32:36" "414","slcn","Surface Learning for Clinical Neuroimaging","Developmental phenotypes prediction from cortical imaging","The goal of this challenge will therefore be to elicit submissions of novel methods for registration-free or registration-robust cortical phenotype regression, with emphasis on interpretable or explainable machine learning methods which deliver biomarkers predictive of risk for neurodevelopmental impairment. 
These will be benchmarked on the tasks of regression of gestational age at birth (seen as a correlate of prematurity) on both registered and native space cortical surface data.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/689/SLCN_Logo.png","https://slcn.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1016/j.neuroimage.2018.01.054","2022-04-24","2022-07-15","2023-11-08 00:42:00","2023-11-15 21:53:02" "415","p2ilf","Preoperative to Intraoperative Laparoscopy Fusion","Preoperative to intraoperative laparoscopy fusion","Augmented reality (AR) in laparoscopic liver surgery needs key landmark detection in intraoperative 2D laparoscopic images and its registration with the preoperative 3D model from CT/MRI data. Such AR techniques are vital to surgeons as they enable precise tumor localisation for surgical removal. A full resection of targeted tumor minimises the risk of recurrence. However, the task of automatic anatomical curve segmentation (considered as landmarks), and its registration to 3D models is a non-trivial and complex task. Most developed methods in this domain are built around traditional methodologies in computer vision. 
This challenge is designed to challenge participants to deploy machine learning methods for two tasks - Task I: segmentation of five key anatomical curves from laparoscopic video images and 3D model, including ridge (L, R), ligament, silhouettes, liver boundary; Task 2: matching these segmented curves to the 3D liver model from volumetric data (CT/MR).","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/690/logo_V3.png","https://p2ilf.grand-challenge.org/","completed","intermediate","5","","2022-09-02","2022-09-14","2023-11-08 00:42:00","2023-11-15 21:52:39" "416","hecktor","MICCAI HECKTOR 2022","Head and neck tumor segmentation in PET/CT","Following the success of the first two editions of the HECKTOR challenge in 2020 and 2021, this challenge will be presented at the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022. Two tasks are proposed this year (participants can choose to participate in either or both tasks): Task 1: The automatic segmentation of Head and Neck (H&N) primary tumors and lymph nodes (new!) in FDG-PET/CT images; Task 2: The prediction of patient outcomes, namely Recurrence-Free Survival (RFS) from the FDG-PET/CT images and available clinical data.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/691/grandchallenge_logo_d8JNqKz.png","https://hecktor.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1007/978-3-030-98253-9","2022-08-26","2024-05-01","2023-11-08 00:42:00","2023-11-15 21:52:24" @@ -421,10 +421,10 @@ "420","surgtoolloc","Surgical Tool Localization in endoscopic videos","Surgical tool localization in endoscopic videos","The ability to automatically detect and track surgical instruments in endoscopic video will enable many transformational interventions. 
Assessing surgical performance and efficiency, identifying skilled tool use and choreography, and planning operational and logistical aspects of OR resources are just some of the applications that would benefit. Unfortunately obtaining the annotations needed to train machine learning models to identify and localize surgical tools is a difficult task. Annotating bounding boxes frame-by-frame in video is tedious and time consuming, yet a wide variety of surgical tools and surgeries must be captured for robust training. Moreover, ongoing annotator training is needed to stay up to date with surgical instrument innovation. In robot-assisted surgery however, potentially informative data like timestamps of instrument installation and removal can be programmatically harvested. The ability to use only tool presence labels to localize tools would significan...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/698/grand-challenge_logo.png","https://surgtoolloc.grand-challenge.org/","completed","intermediate","5","","2022-08-29","2022-09-08","2023-11-08 00:42:00","2023-11-15 21:50:20" "421","curious2022","Brain shift with Intraoperative Ultrasound - Segmentation tasks","Brain shift with intraoperative ultrasound segmentation","Early brain tumor resection can effectively improve the patient’s survival rate. However, resection quality and safety can often be heavily affected by intra-operative brain tissue shift due to factors, such as gravity, drug administration, intracranial pressure change, and tissue removal. Such tissue shift can displace the surgical target and vital structures (e.g., blood vessels) shown in pre-operative images while these displacements may not be directly visible in the surgeon’s field of view. Intra-operative ultrasound (iUS) is a robust and relatively inexpensive technique to track intra-operative tissue shift and surgical tools. 
Automatic algorithms for brain tissue segmentation in iUS, especially of brain tumors and the resection cavity, can greatly facilitate the robustness and accuracy of brain shift correction through image registration, and allow easy interpretation of the iUS. This has the potential to improve surgical outcomes and patient survival rate. The challenge is an ex...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/699/CuRIOUS_logo.png","https://curious2022.grand-challenge.org/","completed","intermediate","5","","2022-08-15","2022-09-13","2023-11-08 00:42:00","2023-11-15 21:50:06" "422","vessel-wall-segmentation-2022","Carotid Vessel Wall Segmentation and Atherosclerosis Diagnosis","Carotid vessel wall segmentation and diagnosis","In this challenge, the task is to segment the vessel wall from 3D-VISTA images and diagnose the atherosclerotic lesions with high accuracy and robustness. Then the clinically usable measurements such as wall thickness (the difference between the lumen and outer wall contours), lumen area, or stenosis percentage can be derived from the vessel wall segmentation. In addition, the identification of the lumen and outer wall boundary of the vessel wall is also critical for the diagnosis of lesions. 
In summary, this challenge focuses on carotid vessel wall segmentation and atherosclerotic lesion diagnosis.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/700/%E5%9B%BE%E7%89%871.png","https://vessel-wall-segmentation-2022.grand-challenge.org/","completed","intermediate","5","","2022-07-08","2022-08-01","2023-11-08 00:42:00","2023-11-16 17:41:55" -"423","crossmoda2022","Cross-Modality Domain Adaptation: Segmentation & Classification","CrossMoDA 2022: unsupervised domain adaptation","The Crossmoda 2022 Challenge Is The Second Edition Of The First Large And Multi-Class Medical Dataset For Unsupervised Cross-Modality Domain Adaptation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/701/squarelogo_2022.png","https://crossmoda2022.grand-challenge.org/","active","intermediate","5","","2022-05-11","\N","2023-11-08 00:42:00","2023-11-08 22:48:22" -"424","atm22","Multi-site, Multi-Domain Airway Tree Modeling (ATM‚Äô22)","Airway segmentation in x-ray CT for pulmonary diseases","Airway Segmentation Is A Crucial Step For The Analysis Of Pulmonary Diseases Including Asthma, Bronchiectasis, And Emphysema. The Accurate Segmentation Based On X-Ray Computed Tomography (Ct) Enables The Quantitative Measurements Of Airway Dimensions And Wall Thickness, Which Can Reveal The Abnormality Of Patients With Chronic Obstructive Pulmonary Disease (Copd). 
Besides, The Extraction Of Patient-Specific Airway Models From Ct Images Is Required For Navigatiisted Surgery.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/702/logo_xqf7twK.png","https://atm22.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1007/978-3-031-16431-6_48","2022-08-17","\N","2023-11-08 00:42:00","2023-11-08 01:01:09" +"423","crossmoda2022","Cross-Modality Domain Adaptation: Segmentation & Classification","CrossMoDA 2022: unsupervised domain adaptation","The CrossMoDA 2022 challenge is the second edition of the first large and multi-class medical dataset for unsupervised cross-modality Domain Adaptation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/701/squarelogo_2022.png","https://crossmoda2022.grand-challenge.org/","active","intermediate","5","","2022-05-11","\N","2023-11-08 00:42:00","2023-11-17 23:32:53" +"424","atm22","Multi-site, Multi-Domain Airway Tree Modeling (ATM‚Äô22)","Airway segmentation in x-ray CT for pulmonary diseases","Airway segmentation is a crucial step for the analysis of pulmonary diseases including asthma, bronchiectasis, and emphysema. The accurate segmentation based on X-Ray computed tomography (CT) enables the quantitative measurements of airway dimensions and wall thickness, which can reveal the abnormality of patients with chronic obstructive pulmonary disease (COPD). 
Besides, the extraction of patient-specific airway models from CT images is required for navigation-assisted surgery.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/702/logo_xqf7twK.png","https://atm22.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1007/978-3-031-16431-6_48","2022-08-17","\N","2023-11-08 00:42:00","2023-11-17 23:32:58" "425","ps-fh-aop-2023","FH-PS-AOP challenge","Fetal head and pubic symphysis segmentation","The task of the FH-PS-AOP grand challenge is to automatically segment 700 FH-PSs from transperineal ultrasound images in the devised Set 2 (test set), given the availability of Set 1, consisting of 401 images. Set 2 is held private and therefore not released to the potential participants to prevent algorithm tuning, but instead the algorithms have to be submitted in the form of Docker containers that will be run by organizers on Set 2. The challenge is organized by taking into account the current guidelines for biomedical image analysis competitions, in particular the recommendations of the Biomedical Image Analysis Challenges (BIAS) initiative for transparent challenge reporting.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/703/F2_WDBTbsq.tif","https://ps-fh-aop-2023.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.1007/s11517-022-02747-1","2023-03-27","2023-09-20","2023-11-08 00:42:00","2023-11-16 17:41:56" -"426","shifts","Shifts Challenge 2022","Shifts challenge 2022: distributional shift and uncertainty","The Goal Of The Shifts Challenge 2022 Is To Raise Awareness Among The Research Community About The Problems Of Distributional Shift, Robustness, And Uncertainty Estimation, And To Identify New Solutions To Address Them. 
The Competition Will Consist Of Two New Tracks: White Matter Multiple Sclerosis (Ms) Lesion Segmentation In 3D Magnetic Resonance Imaging (Mri) Of The Brain And Marine Cargo Vessel Power Estimation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/704/logo_1200.png","https://shifts.grand-challenge.org/","active","intermediate","5","https://arxiv.org/abs/2206.15407","2022-09-15","2024-04-08","2023-11-08 00:42:00","2023-11-08 01:01:12" +"426","shifts","Shifts Challenge 2022","Shifts challenge 2022: distributional shift and uncertainty","The goal of the Shifts Challenge 2022 is to raise awareness among the research community about the problems of distributional shift, robustness, and uncertainty estimation, and to identify new solutions to address them. The competition will consist of two new tracks: white matter Multiple Sclerosis (MS) lesion segmentation in 3D Magnetic Resonance Imaging (MRI) of the brain and marine cargo vessel power estimation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/704/logo_1200.png","https://shifts.grand-challenge.org/","active","intermediate","5","https://arxiv.org/abs/2206.15407","2022-09-15","2024-04-08","2023-11-08 00:42:00","2023-11-17 23:33:07" "427","megc2022","ACMMM MEGC2022: Facial Micro-Expression Grand Challenge","Facial macro- and micro-expressions spotting","The unseen testing set (MEGC2022-testSet) contains 10 long videos, including 5 long videos from SAMM (SAMM Challenge dataset) and 5 clips cropped from different videos in CAS(ME)3. The frame rate for the SAMM Challenge dataset is 200 fps and the frame rate for CAS(ME)3 is 30 fps. 
The participants should test on this unseen dataset.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/705/acmmm2022_logo.png","https://megc2022.grand-challenge.org/","active","intermediate","5","https://doi.org/10.1109/fg47880.2020.00029","2022-05-23","\N","2023-11-08 00:42:00","2023-11-16 17:39:17" "428","midog2022","MItosis DOmain Generalization Challenge 2022","Mitosis domain generalization challenge 2022","Motivation: Mitosis detection is a key component of tumor prognostication for various tumors. Modern deep learning architectures provide detection accuracies for mitosis that are on the level of human experts. Mitosis is known to be relevant for many tumor types, yet, when trained on one tumor / tissue type, the performance will typically drop significantly on another. Scope: Detect mitotic figures (cells undergoing cell division) from histopathology images (object detection). You will be provided with images from 6 different tumor types, 5 out of which are labeled. In total the set consists of 405 cases and includes 9501 mitotic figure annotations in the training set. Evaluation will be done on ten different tumor types with the F1 score as main metric.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/706/midog_compact.png","https://midog2022.grand-challenge.org/","completed","intermediate","5","https://doi.org/10.5281/zenodo.6362337","2022-08-04","2022-08-30","2023-11-08 00:42:00","2023-11-16 17:39:11" "429","isles22","Ischemic Stroke Lesion Segmentation Challenge","Ischemic stroke lesion segmentation challenge","The goal of this challenge is to evaluate automated methods of stroke lesion segmentation in MR images. Participants are tasked with automatically generating lesion segmentation masks from DWI, ADC and FLAIR MR modalities. The task consist on a single phase of algorithms evaluation. 
Participants will submit their segmentation model (""algorithm"") via a Docker container which will then be used to generate predictions on a hidden dataset.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/707/Slide1_N1qHO1K.png","https://isles22.grand-challenge.org/","active","intermediate","5","","2022-07-15","2030-12-06","2023-11-08 00:42:00","2023-11-15 21:48:03" @@ -434,16 +434,16 @@ "433","2023paip","PAIP 2023: TC prediction in pancreatic and colon cancer","Tumor cellularity prediction in pancreatic and colon cancer","Tumor cellularity (TC) is used to compute the residual tumor burden in several organs, such as the breast and colon. The TC is measured based on semantic cell segmentation, which accurately classifies and delineates individual cells. However, manual analysis of TC is impractical in clinics because of the large volumes of pathological images and is unreliable owing to inconsistent TC values among pathologists. Essentially, tumor cellularity should be calculated by individual cell counting; however, manual counting is impossible, and human pathologists cannot avoid individual differences in diagnostic performance. Automatic image analysis is the ideal method for solving this problem, and it can efficiently reduce the workload of pathologists.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/716/PAIP2023-640.png","https://2023paip.grand-challenge.org/","active","intermediate","5","","2023-02-15","\N","2023-11-08 00:42:00","2023-11-16 17:39:26" "434","snemi3d","SNEMI3D: 3D Segmentation of neurites in EM images","IEEE ISBI 2013 challenge: multimodal segmentation","In this challenge, a full stack of electron microscopy (EM) slices will be used to train machine-learning algorithms for the purpose of automatic segmentation of neurites in 3D. 
This imaging technique visualizes the resulting volumes in a highly anisotropic way, i.e., the x- and y-directions have a high resolution, whereas the z-direction has a low resolution, primarily dependent on the precision of serial cutting. EM produces the images as a projection of the whole section, so some of the neural membranes that are not orthogonal to a cutting plane can appear very blurred. None of these problems led to major difficulties in the manual labeling of each neurite in the image stack by an expert human neuro-anatomist.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/717/logo.png","https://snemi3d.grand-challenge.org/","active","intermediate","5","","2013-01-15","\N","2023-11-08 00:42:00","2023-11-16 17:39:27" "435","han-seg2023","The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge","Endometrial carcinoma prediction on whole-slide images","Cancer in the region of the head and neck (HaN) is one of the most prominent cancers, for which radiotherapy represents an important treatment modality that aims to deliver a high radiation dose to the targeted cancerous cells while sparing the nearby healthy organs-at-risk (OARs). A precise three-dimensional spatial description, i.e. segmentation, of the target volumes as well as OARs is required for optimal radiation dose distribution calculation, which is primarily performed using computed tomography (CT) images. However, the HaN region contains many OARs that are poorly visible in CT, but better visible in magnetic resonance (MR) images. Although attempts have been made towards the segmentation of OARs from MR images, so far there has been no evaluation of the impact the combined analysis of CT and MR images has on the segmentation of OARs in the HaN region. 
The Head and Neck Organ-at-Risk Multi-Modal Segmentation Challenge aims to promote the development of new and applicatio...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/718/logo.jpg","https://han-seg2023.grand-challenge.org/","active","intermediate","5","","2023-03-26","2023-12-15","2023-11-08 00:42:00","2023-11-16 17:39:30" -"436","endo-aid","Endometrial Carcinoma Detection in Pipelle biopsies","Non-rigid registration challenge for expansion microscopy","Evaluation Platform As Reference Benchmark For Algorithms That Can Predict Endometrial Carcinoma On Whole-Slide Images Of Pipelle Sampled Endometrial Slides Stained In H&E, Based On The Test Data Set Used In Our Project.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/719/logo-challenge.png","https://endo-aid.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-11 02:00:55" +"436","endo-aid","Endometrial Carcinoma Detection in Pipelle biopsies","Non-rigid registration challenge for expansion microscopy","Evaluation platform as reference benchmark for algorithms that can predict endometrial carcinoma on whole-slide images of Pipelle sampled endometrial slides stained in H&E, based on the test data set used in our project.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/719/logo-challenge.png","https://endo-aid.grand-challenge.org/","active","intermediate","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:33:27" "437","rnr-exm","Robust Non-rigid Registration Challenge for Expansion Microscopy","Xray projectomic reconstruction with skeleton segmentation","Despite the wide adoption of ExM, there are few public benchmarks to evaluate the registration pipeline, which limits the development of robust methods for real-world deployment. To address this issue, we have launched RnR-ExM, a challenge that releases 24 pairs of 3D image volumes from three different species. 
Participants are asked to align these pairs and submit dense deformation fields for assessment. Half of the volume pairs (the validation and test set) have annotated cell structures (nuclei, blood vessels) as registration landmarks.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/720/RnR-ExM_Logo.png","https://rnr-exm.grand-challenge.org/","active","intermediate","5","","2023-02-17","2028-03-16","2023-11-08 00:42:00","2023-11-16 17:39:32" "438","xpress","Xray Projectomic Reconstruction Extracting Segment with Skeleton","Automated lesion segmentation in PET/CT - domain generalization","In this task, we provide volumetric XNH images of cortical white matter axons from the mouse brain at 100 nm per voxel isotropic resolution. Additionally, we provide ground truth annotations for axon trajectories. Manual voxel-wise annotation of ground truth is a time-consuming bottleneck for training segmentation networks. On the other hand, skeleton-based ground truth is much faster to annotate, and sufficient to determine connectivity. Therefore, we encourage participants to develop methods to leverage skeleton-based training. To this end, we provide two types of training (validation) sets: a small volume of voxel-wise annotations and a larger volume with skeleton-based annotations. The participants will have the flexibility to use either or both of the provided annotations to train their models, and are challenged to submit an accurate voxel-wise prediction on the test volume. 
Entries will be evaluated on how accurately the submitted segmentations agree with the ground-truth s...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/721/XPRESS_logo_sq2-01.png","https://xpress.grand-challenge.org/","active","intermediate","5","","2023-02-06","\N","2023-11-08 00:42:00","2023-11-16 17:39:34" "439","autopet-ii","autoPET-II","Automated lesion segmentation in PET/CT - domain generalization challenge","Positron Emission Tomography / Computed Tomography (PET/CT) is an integral part of the diagnostic workup for various malignant solid tumor entities. Due to its wide applicability, Fluorodeoxyglucose (FDG) is the most widely used PET tracer in an oncological setting reflecting glucose consumption of tissues, e.g. typically increased glucose consumption of tumor lesions. As part of the clinical routine analysis, PET/CT is mostly analyzed in a qualitative way by experienced medical imaging experts. Additional quantitative evaluation of PET information would potentially allow for more precise and individualized diagnostic decisions. A crucial initial processing step for quantitative PET/CT analysis is segmentation of tumor lesions enabling accurate feature extraction, tumor characterization, oncologic staging and image-based therapy response assessment. Manual lesion segmentation is however associated with enormous effort and cost and is thus infeasible in clinical routine. Automatio...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/722/autopet-5.png","https://autopet-ii.grand-challenge.org/","completed","intermediate","5","","2023-02-28","2023-09-24","2023-11-08 00:42:00","2023-11-15 21:45:49" -"440","toothfairy","ToothFairy: Cone-Beam Computed Tomography Segmentation Challenge","Toothfairy challenge: inferior alveolar canal segmentation","This Is The First Edition Of The Toothfairy Challenge Organized By The University Of Modena And Reggio Emilia With The Collaboration Of Raudboud University. 
This Challenge Aims At Pushing The Development Of Deep Learning Frameworks To Segment The Inferior Alveolar Canal (Iac) By Incrementally Extending The Amount Of Publicly Available 3D-Annotated Cone Beam Computed Tomography (Cbct) Scans. Cbct Modality Is Becoming Increasingly Important For Treatment Planning And Diagnosis In Implant Dentistry And Maxillofacial Surgery. The Three-Dimensional Information Acquired With Cbct Can Be Crucial To Plan A Vast Number Of Surgical Interventions With The Aim Of Preserving Noble Anatomical Structures Such As The Inferior Alveolar Canal (Iac), Which Contains The Homonymous Nerve (Inferior Alveolar Nerve, Ian). Deep Learning Models Can Support Medical Personnel In Surgical Planning Procedures By Providing A Voxel-Level Segmentation Of The Ian Automatically Extracted From Cbct Scans.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/723/logo.jpg","https://toothfairy.grand-challenge.org/","active","intermediate","5","","2023-06-30","\N","2023-11-08 00:42:00","2023-11-08 22:42:36" -"441","spider","SPIDER","Lumbar SPIDER challenge: MRI segmentation of spinal structures","The Lumbar Spider Challenge Focuses On The Segmentation Of Three Anatomical Structures In Lumbar Spine Mri: Vertebrae, Intervertebral Discs (Ivds), And Spinal Canal. The Segmentation Task Requires Participants To Produce Separate Masks For Each Vertebra, Ivd, And The Spinal Canal In The Lumbar Spine Mri Volume. 
The Numbering Of The Vertebrae And Ivds Is Not Specific And May Vary Across Different Cases.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/724/SPIDER_logo_square_jsl2NDu.png","https://spider.grand-challenge.org/","active","intermediate","5","","2023-07-26","2024-04-30","2023-11-08 00:42:00","2023-11-11 02:00:38" -"442","lnq2023","LNQ2023","3D lymph node segmentation for comprehensive disease evaluation","Accurate Lymph Node Size Estimation Is Critical For Staging Cancer Patients, Initial Therapeutic Management, And In Longitudinal Scans, Assessing Response To Therapy. Current Standard Practice For Quantifying Lymph Node Size Is Based On A Variety Of Criteria That Use Unidirectional Or Bidirectional Measurements On Just One Or A Few Nodes, Typically On Just One Axial Slice. But Humans Have Hundreds Of Lymph Nodes, Any Number Of Which May Be Enlarged To Various Degrees Due To Disease Or Immune Response. While A Normal Lymph Node May Be Approximately 5Mm In Diameter, A Diseased Lymph Node May Be Several Cm In Diameter. The Mediastinum, The Anatomical Area Between The Lungs And Around The Heart, May Contain Ten Or More Lymph Nodes, Often With Three Or More Enlarged Greater Than 1Cm. Accurate Segmentation In 3D Would Provide More Information To Evaluate Lymph Node Disease.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/725/LNQ-square.png","https://lnq2023.grand-challenge.org/","completed","intermediate","5","","2023-05-01","2023-09-30","2023-11-08 00:42:00","2023-11-08 01:01:29" +"440","toothfairy","ToothFairy: Cone-Beam Computed Tomography Segmentation Challenge","Toothfairy challenge: inferior alveolar canal segmentation","This is the first edition of the ToothFairy challenge organized by the University of Modena and Reggio Emilia with the collaboration of Raudboud University. 
This challenge aims to push the development of deep learning frameworks to segment the Inferior Alveolar Canal (IAC) by incrementally extending the number of publicly available 3D-annotated Cone Beam Computed Tomography (CBCT) scans. The CBCT modality is becoming increasingly important for treatment planning and diagnosis in implant dentistry and maxillofacial surgery. The three-dimensional information acquired with CBCT can be crucial to plan a vast number of surgical interventions with the aim of preserving noble anatomical structures such as the Inferior Alveolar Canal (IAC), which contains the homonymous nerve (Inferior Alveolar Nerve, IAN). Deep learning models can support medical personnel in surgical planning procedures by providing a voxel-level segmentation of the IAN automatically extracted from CBCT scans.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/723/logo.jpg","https://toothfairy.grand-challenge.org/","active","intermediate","5","","2023-06-30","\N","2023-11-08 00:42:00","2023-11-17 23:33:47"
The numbering of the vertebrae and IVDs is not specific and may vary across different cases.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/724/SPIDER_logo_square_jsl2NDu.png","https://spider.grand-challenge.org/","active","intermediate","5","","2023-07-26","2024-04-30","2023-11-08 00:42:00","2023-11-17 23:34:08" +"442","lnq2023","LNQ2023","3D lymph node segmentation for comprehensive disease evaluation","Accurate lymph node size estimation is critical for staging cancer patients, initial therapeutic management, and in longitudinal scans, assessing response to therapy. Current standard practice for quantifying lymph node size is based on a variety of criteria that use unidirectional or bidirectional measurements on just one or a few nodes, typically on just one axial slice. But humans have hundreds of lymph nodes, any number of which may be enlarged to various degrees due to disease or immune response. While a normal lymph node may be approximately 5mm in diameter, a diseased lymph node may be several cm in diameter. The mediastinum, the anatomical area between the lungs and around the heart, may contain ten or more lymph nodes, often with three or more enlarged greater than 1cm. Accurate segmentation in 3D would provide more information to evaluate lymph node disease.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/725/LNQ-square.png","https://lnq2023.grand-challenge.org/","completed","intermediate","5","","2023-05-01","2023-09-30","2023-11-08 00:42:00","2023-11-17 23:34:13" "443","arcade","ARCADE-MICCAI2023","ARCADE 2023: automatic region-based coronary artery disease diagnostics","Coronary artery disease (CAD) is a condition that affects blood supply of heart, due to buildup of atherosclerotic plaque in the coronary arteries. CAD is one of the leading death causes around the world. 
The most common diagnostic procedure for CAD is coronary angiography, which uses contrast material and X-rays to observe lesions in the arteries. This procedure shows blood flow in the coronary arteries in real time, allowing precise detection of stenosis and control of intraventricular interventions and stent insertions. Coronary angiography is a useful diagnostic method for planning necessary revascularization procedures based on the calculated occlusion and the affected segment of the coronary arteries. The development of an automated analytical tool for lesion detection and localization is a promising strategy for increasing the effectiveness of detection and treatment strategies for CAD.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/726/aRCADE__1.jpg","https://arcade.grand-challenge.org/","completed","intermediate","5","","2023-06-07","2023-09-20","2023-11-08 00:42:00","2023-11-15 21:45:02"
The challenging task is reconstructing high-quality ultrasound images from low-quality ones. A tota...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/727/logo.png","https://ultrasoundenhance2023.grand-challenge.org/","completed","intermediate","5","","2023-07-10","2023-08-31","2023-11-08 00:42:00","2023-11-15 21:44:45" -"445","multicenteraorta","SEG.A. - Segmentation of the Aorta","Aortic vessel tree segmentation challenge in CT images","Segmentation, Modeling And Visualization Of The Arterial Tree Are Still A Challenge In Medical Image Analysis. The Main Track Of This Challenge Deals With The Fully Automatic Segmentation Of The Aortic Vessel Tree In Computed Tomography Images. Optionally, Teams Can Submit Tailored Solutions For Meshing And Visualization Of The Vessel Tree.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/729/logo_final_miccai.jpg","https://multicenteraorta.grand-challenge.org/","completed","intermediate","5","https://arxiv.org/abs/2108.02998","2023-06-15","2023-08-15","2023-11-08 00:42:00","2023-11-08 01:01:33" +"445","multicenteraorta","SEG.A. - Segmentation of the Aorta","Aortic vessel tree segmentation challenge in CT images","Segmentation, modeling and visualization of the arterial tree are still a challenge in medical image analysis. The main track of this challenge deals with the fully automatic segmentation of the aortic vessel tree in computed tomography images. 
Optionally, teams can submit tailored solutions for meshing and visualization of the vessel tree.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/729/logo_final_miccai.jpg","https://multicenteraorta.grand-challenge.org/","completed","intermediate","5","https://arxiv.org/abs/2108.02998","2023-06-15","2023-08-15","2023-11-08 00:42:00","2023-11-17 23:34:24" "446","sppin","Surgical Planning in Pediatric Neuroblastoma","Pediatric neuroblastoma surgical planning challenge","Neuroblastoma: Neuroblastoma is one of the most common cancers in children, accounting for 15% of pediatric cancer related deaths. This tumor originates from the symphatic nervous system, and is often located in the abdomen. Treatment of neuroblastoma includes surgical resection of the tumor, but complete resection of the tumor is often challenging. Surgical planning in Neuroblastoma: Surgical procedures can be complicated due to the neuroblastoma often being in proximity or even encasing organs and vessels in the affected area. These structures can include abdominal organs such as kidneys, liver, pancreas and spleen or big abdominal vessels such as the aorta and renal veins. During surgical planning it is essential to have a clear understanding of the neuroblastoma in relation to the relevant anatomy. Currently, magnetic resonance imaging (MRI) is used as pre-operative imaging. 
Studying 3D models of the tumor and relevant structures guides surgeons in the pre-operative understan...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/730/SPINN_Logo_SB_2023_03_11-09_TnwZJgK.png","https://sppin.grand-challenge.org/","completed","intermediate","5","","2023-08-10","2023-09-01","2023-11-08 00:42:00","2023-11-11 01:52:06" "447","medfm2023","Foundation Model Prompting for Medical Image Classification","Model adaptation for medical image classification challenge","In the past few years, deep learning foundation models have been trendy, especially in computer vision and natural language processing. As a result, many milestone works have been proposed, such as Vision Transformers (ViT), Generative Pretrained Transformer (GPT), and Contrastive Language-Image Pretraining (CLIP). They aim to solve many downstream tasks by utilizing the robust representation learning and generalization abilities of foundation models.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/731/logo640.png","https://medfm2023.grand-challenge.org/","active","intermediate","5","","2023-07-14","2033-10-15","2023-11-08 00:42:00","2023-11-16 17:39:48" "448","dentex","DENTEX - MICCAI23","Dental enumeration and diagnosis on panoramic x-rays","Panoramic X-rays are widely used in dental practice to provide a comprehensive view of the oral cavity and aid in treatment planning for various dental conditions. However, interpreting these images can be a time-consuming process that can distract clinicians from essential clinical activities. 
Moreover, misdiagnosis is a significant concern, as general practitioners may lack specialized training in radiology, and communication errors can occur due to work exhaustion.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/732/logo_diseased.png","https://dentex.grand-challenge.org/","completed","intermediate","5","https://arxiv.org/abs/2303.06500","2023-04-30","2023-09-01","2023-11-08 00:42:00","2023-11-16 17:39:45" @@ -461,6 +461,6 @@ "460","pegs-dream-challenge","PEGS DREAM Challenge","","","","https://www.synapse.org/pegs","upcoming","intermediate","1","","\N","\N","2023-11-13 22:48:02","2023-11-16 16:20:18" "461","fda-data-centric-challenge","FDA Data-Centric Challenge","","","","https://www.synapse.org/fda_data_centric","upcoming","intermediate","1","","\N","\N","2023-11-13 22:49:41","2023-11-16 16:18:38" "462","ai-institute-for-dynamic-systems","AI Institute for Dynamic Systems","","","","https://www.synapse.org/#!Synapse:syn52052735","upcoming","intermediate","1","","\N","\N","2023-11-13 22:51:53","2023-11-17 0:13:33" -"463","competition-nih-alzheimers-adrd-1","PREPARE Phase 1 - Find IT!","Help the NIH discover novel approaches for the early prediction of Alzheimer...","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. 
This first phase, Find IT!: Data for Early Prediction, is focused on finding, curating, or contributing data to create representative and open datasets that can be used for the early prediction of AD/ADRD.","","https://www.drivendata.org/competitions/253/competition-nih-alzheimers-adrd-1/","active","intermediate","19","","2023-09-01","2024-01-31","2023-11-16 21:57:03","2023-11-17 0:14:09" -"464","prepare-phase-2-build-it","PREPARE Phase 2 - Build IT!","","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. This second phase, Build IT!: Algorithms and Approaches, is focused on advancing algorithms and analytic approaches for early prediction of AD/ADRD, with an emphasis on explainability of predictions.","","","upcoming","intermediate","19","","2024-09-01","\N","2023-11-17 00:09:25","2023-11-17 0:14:48" -"465","prepare-phase-3-put-it-all-together","PREPARE Phase 3 - Put IT All Together!","","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. 
Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. This third phase, Put IT All Together!: Proof of Principle Demonstration, is for the top solvers from Phase 2 demonstrate algorithmic approaches on diverse datasets and share their results at an innovation event.","","","upcoming","intermediate","19","","2025-03-01","\N","2023-11-17 00:09:26","2023-11-17 0:14:54" +"463","competition-nih-alzheimers-adrd-1","PREPARE Phase 1 - Find IT!","Help the NIH discover novel approaches for the early prediction of Alzheimer","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. 
This first phase, Find IT!: Data for Early Prediction, is focused on finding, curating, or contributing data to create representative and open datasets that can be used for the early prediction of AD/ADRD.","","https://www.drivendata.org/competitions/253/competition-nih-alzheimers-adrd-1/","active","intermediate","19","","2023-09-01","2024-01-31","2023-11-16 21:57:03","2023-11-17 0:14:09" +"464","prepare-phase-2-build-it","PREPARE Phase 2 - Build IT!","Help the NIH discover novel approaches for the early prediction of Alzheimer","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. This second phase, Build IT!: Algorithms and Approaches, is focused on advancing algorithms and analytic approaches for early prediction of AD/ADRD, with an emphasis on explainability of predictions.","","","upcoming","intermediate","19","","2024-09-01","\N","2023-11-17 00:09:25","2023-11-17 0:18:47" +"465","prepare-phase-3-put-it-all-together","PREPARE Phase 3 - Put IT All Together!","Help the NIH discover novel approaches for the early prediction of Alzheimer","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. 
Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. This third phase, Put IT All Together!: Proof of Principle Demonstration, is for the top solvers from Phase 2 to demonstrate algorithmic approaches on diverse datasets and share their results at an innovation event.","","","upcoming","intermediate","19","","2025-03-01","\N","2023-11-17 00:09:26","2023-11-17 0:18:48"