update statuses (Sage-Bionetworks#2419)
vpchung authored Jan 3, 2024
1 parent dad6440 · commit b4c94b7
Showing 1 changed file with 2 additions and 2 deletions.
@@ -433,7 +433,7 @@
"432","auto-rtp","Fully Automated Radiotherapy Treatment Planning Challenge","Automated radiotherapy treatment planning in prostate cancer","Participants will be provided with simulation CTs for ten prostate cancer patients, together with a treatment intent/prescription (in a machine readable format). The cases will be a mix of prostate only and prostate + nodes. Participants are asked to generate a treatment plan in an as-automated-as-possible way, including contouring and plan generation. No manual intervention on contouring or planning should be performed, but manual steps to transfer data between systems are permitted if required. Freedom is given to participants with respect to the ""treatment machine"" the plan is designed for. However, it is expected that all participants produce a plan that is deliverable in clinically reasonable time frame.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/713/AUTO-RTP_Logo.png","https://auto-rtp.grand-challenge.org/","active","5","","2023-06-05","\N","2023-11-08 00:42:00","2023-11-16 17:39:25"
"433","2023paip","PAIP 2023: TC prediction in pancreatic and colon cancer","Tumor cellularity prediction in pancreatic and colon cancer","Tumor cellularity (TC) is used to compute the residual tumor burden in several organs, such as the breast and colon. The TC is measured based on semantic cell segmentation, which accurately classifies and delineates individual cells. However, manual analysis of TC is impractical in clinics because of the large volumes of pathological images and is unreliable owing to inconsistent TC values among pathologists. Essentially, tumor cellularity should be calculated by individual cell counting; however, manual counting is impossible, and human pathologists cannot avoid individual differences in diagnostic performance. Automatic image analysis is the ideal method for solving this problem, and it can efficiently reduce the workload of pathologists.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/716/PAIP2023-640.png","https://2023paip.grand-challenge.org/","active","5","","2023-02-15","\N","2023-11-08 00:42:00","2023-11-16 17:39:26"
"434","snemi3d","SNEMI3D: 3D Segmentation of neurites in EM images","IEEE ISBI 2013 challenge: multimodal segmentation","In this challenge, a full stack of electron microscopy (EM) slices will be used to train machine-learning algorithms for the purpose of automatic segmentation of neurites in 3D. This imaging technique visualizes the resulting volumes in a highly anisotropic way, i.e., the x- and y-directions have a high resolution, whereas the z-direction has a low resolution, primarily dependent on the precision of serial cutting. EM produces the images as a projection of the whole section, so some of the neural membranes that are not orthogonal to a cutting plane can appear very blurred. None of these problems led to major difficulties in the manual labeling of each neurite in the image stack by an expert human neuro-anatomist.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/717/logo.png","https://snemi3d.grand-challenge.org/","active","5","","2013-01-15","\N","2023-11-08 00:42:00","2023-11-16 17:39:27"
"435","han-seg2023","The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge","Endometrial carcinoma prediction on whole-slide images","Cancer in the region of the head and neck (HaN) is one of the most prominent cancers, for which radiotherapy represents an important treatment modality that aims to deliver a high radiation dose to the targeted cancerous cells while sparing the nearby healthy organs-at-risk (OARs). A precise three-dimensional spatial description, i.e. segmentation, of the target volumes as well as OARs is required for optimal radiation dose distribution calculation, which is primarily performed using computed tomography (CT) images. However, the HaN region contains many OARs that are poorly visible in CT, but better visible in magnetic resonance (MR) images. Although attempts have been made towards the segmentation of OARs from MR images, so far there has been no evaluation of the impact the combined analysis of CT and MR images has on the segmentation of OARs in the HaN region. The Head and Neck Organ-at-Risk Multi-Modal Segmentation Challenge aims to promote the development of new and applicatio...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/718/logo.jpg","https://han-seg2023.grand-challenge.org/","active","5","","2023-03-26","2023-12-15","2023-11-08 00:42:00","2023-11-16 17:39:30"
"435","han-seg2023","The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge","Endometrial carcinoma prediction on whole-slide images","Cancer in the region of the head and neck (HaN) is one of the most prominent cancers, for which radiotherapy represents an important treatment modality that aims to deliver a high radiation dose to the targeted cancerous cells while sparing the nearby healthy organs-at-risk (OARs). A precise three-dimensional spatial description, i.e. segmentation, of the target volumes as well as OARs is required for optimal radiation dose distribution calculation, which is primarily performed using computed tomography (CT) images. However, the HaN region contains many OARs that are poorly visible in CT, but better visible in magnetic resonance (MR) images. Although attempts have been made towards the segmentation of OARs from MR images, so far there has been no evaluation of the impact the combined analysis of CT and MR images has on the segmentation of OARs in the HaN region. The Head and Neck Organ-at-Risk Multi-Modal Segmentation Challenge aims to promote the development of new and applicatio...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/718/logo.jpg","https://han-seg2023.grand-challenge.org/","completed","5","","2023-03-26","2023-12-15","2023-11-08 00:42:00","2023-11-16 17:39:30"
"436","endo-aid","Endometrial Carcinoma Detection in Pipelle biopsies","Non-rigid registration challenge for expansion microscopy","Evaluation platform as reference benchmark for algorithms that can predict endometrial carcinoma on whole-slide images of Pipelle sampled endometrial slides stained in H&E, based on the test data set used in our project.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/719/logo-challenge.png","https://endo-aid.grand-challenge.org/","active","5","","\N","\N","2023-11-08 00:42:00","2023-11-17 23:33:27"
"437","rnr-exm","Robust Non-rigid Registration Challenge for Expansion Microscopy","Xray projectomic reconstruction with skeleton segmentation","Despite the wide adoption of ExM, there are few public benchmarks to evaluate the registration pipeline, which limits the development of robust methods for real-world deployment. To address this issue, we have launched RnR-ExM, a challenge that releases 24 pairs of 3D image volumes from three different species. Participants are asked to align these pairs and submit dense deformation fields for assessment. Half of the volume pairs (the validation and test set) have annotated cell structures (nuclei, blood vessels) as registration landmarks.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/720/RnR-ExM_Logo.png","https://rnr-exm.grand-challenge.org/","active","5","","2023-02-17","2028-03-16","2023-11-08 00:42:00","2023-11-16 17:39:32"
"438","xpress","Xray Projectomic Reconstruction Extracting Segment with Skeleton","Automated lesion segmentation in PET/CT - domain generalization","In this task, we provide volumetric XNH images of cortical white matter axons from the mouse brain at 100 nm per voxel isotropic resolution. Additionally, we provide ground truth annotations for axon trajectories. Manual voxel-wise annotation of ground truth is a time-consuming bottleneck for training segmentation networks. On the other hand, skeleton-based ground truth is much faster to annotate, and sufficient to determine connectivity. Therefore, we encourage participants to develop methods to leverage skeleton-based training. To this end, we provide two types of training (validation) sets: a small volume of voxel-wise annotations and a larger volume with skeleton-based annotations. The participants will have the flexibility to use either or both of the provided annotations to train their models, and are challenged to submit an accurate voxel-wise prediction on the test volume. Entries will be evaluated on how accurately the submitted segmentations agree with the ground-truth s...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/721/XPRESS_logo_sq2-01.png","https://xpress.grand-challenge.org/","active","5","","2023-02-06","\N","2023-11-08 00:42:00","2023-11-16 17:39:34"
@@ -479,4 +479,4 @@
"478","brain-to-text-benchmark-24","Brain-to-Text Benchmark '24","Develop new and improved algorithms for decoding speech from the brain","People with ALS or brainstem stroke can lose the ability to move, rendering them “locked-in” their own bodies and unable to communicate. Speech brain-computer interfaces (BCIs) can restore communication by decoding what someone is trying to say directly from their brain activity. Once deciphered, the person’s intended message can be spoken for them or typed as text on a computer. We recently showed that a speech BCI can decode speech at 62 words per minute with a 23% word error rate, demonstrating the potential of a high-performance speech BCI. Nevertheless, word error rates are not yet low enough for fluent communication. The goal of this competition is to foster the development of new and improved algorithms for decoding speech from the brain. Improved accuracies will make it more likely that a speech BCI can be clinically translated, improving the lives of those with paralysis. We hope that this baseline can also serve as an indicator of progress in the field and provide a st...","https://evalai.s3.amazonaws.com/media/logos/35b2c474-c1be-41ae-97a4-49446766f9b1.png","https://eval.ai/web/challenges/challenge-page/2099/overview","active","16","","2023-06-01","2024-06-01","2023-12-12 21:54:25","2023-12-12 22:38:33"
"479","vqa-answertherapy-2024","VQA-AnswerTherapy 2024","Grounding all answers for each visual question","Visual Question Answering (VQA) is a task of predicting the answer to a question about an image. Given that different people can provide different answers to a visual question, we aim to better understand why with answer groundings. To achieve this goal, we introduce the VQA-AnswerTherapy dataset, the first dataset that visually grounds each unique answer to each visual question. We offer this work as a valuable foundation for improving our understanding and handling of annotator differences. This work can inform how to account for annotator differences for other related tasks such as image captioning, visual dialog, and open-domain VQA (e.g., VQAs found on Yahoo!Answers and Stack Exchange). This work also contributes to ethical AI by enabling revisiting how VQA models are developed and evaluated to consider the diversity of plausible answer groundings rather than a single (typically majority) one.","https://evalai.s3.amazonaws.com/media/logos/e63bc0a0-cd35-4418-b32b-4ef2b9c61ce2.png","https://eval.ai/web/challenges/challenge-page/1910/overview","upcoming","16","","2024-01-30","\N","2023-12-12 22:41:48","2023-12-12 23:20:41"
"480","vqa-challenge-2021","VQA Challenge 2021","Answer open-ended, free-form natural language questions about images","Recent progress in computer vision and natural language processing has demonstrated that lower-level tasks are much closer to being solved. We believe that the time is ripe to pursue higher-level tasks, one of which is Visual Question Answering (VQA), where the goal is to be able to understand the semantics of scenes well enough to be able to answer open-ended, free-form natural language questions (asked by humans) about images. VQA Challenge 2021 is the 6th edition of the VQA Challenge on the VQA v2.0 dataset introduced in Goyal et al., CVPR 2017. The 2nd, 3rd, 4th and 5th editions of the VQA Challenge were organized in CVPR 2017, CVPR 2018, CVPR 2019 and CVPR 2020 on the VQA v2.0 dataset. The 1st edition of the VQA Challenge was organized in CVPR 2016 on the 1st edition (v1.0) of the VQA dataset introduced in Antol et al., ICCV 2015.","https://evalai.s3.amazonaws.com/media/logos/85d3b99e-b3a7-498a-a142-3325eab17138.png","https://eval.ai/web/challenges/challenge-page/830/overview","completed","16","","2021-02-24","2021-05-07","2023-12-12 22:42:59","2023-12-12 23:00:07"
"481","ntx-hackathon-2023-sleep-states","NTX Hackathon 2023 - Sleep States","Speculate on possible use-cases of Neurotechnology and BCI","This competition is dedicated to advancing the use of machine learning and deep learning techniques in the realm of Brain-Computer Interface (BCI). It focuses on analyzing EEG data obtained from IDUN Guardian Earbuds. Electroencephalography (EEG) is a non-invasive method of recording electrical activity in the brain. Its high-resolution, real-time data is crucial in various clinical and consumer applications. In clinical environments, EEG is instrumental in diagnosing and monitoring neurological disorders like epilepsy, sleep disorders, and brain injuries. It's also used for assessing brain function in patients under anesthesia or in comas. The real-time aspect of EEG data is vital for clinicians to make informed decisions about diagnosis and treatment, such as pinpointing the onset and location of a seizure. Beyond clinical use, EEG has significant applications in understanding human cognition. Researchers utilize EEG to explore cognitive processes including attention, percepti...","https://miniodis-rproxy.lisn.upsaclay.fr/coda-v2-prod-public/logos/2023-12-02-1701542051/06a6dc054e4b/NTXHackathon23-Logo-Black-Blue-2048.png","https://www.codabench.org/competitions/1777/","active","10","","2023-12-01","2023-12-15","2023-12-12 23:22:24","2023-12-12 23:30:24"
"481","ntx-hackathon-2023-sleep-states","NTX Hackathon 2023 - Sleep States","Speculate on possible use-cases of Neurotechnology and BCI","This competition is dedicated to advancing the use of machine learning and deep learning techniques in the realm of Brain-Computer Interface (BCI). It focuses on analyzing EEG data obtained from IDUN Guardian Earbuds. Electroencephalography (EEG) is a non-invasive method of recording electrical activity in the brain. Its high-resolution, real-time data is crucial in various clinical and consumer applications. In clinical environments, EEG is instrumental in diagnosing and monitoring neurological disorders like epilepsy, sleep disorders, and brain injuries. It's also used for assessing brain function in patients under anesthesia or in comas. The real-time aspect of EEG data is vital for clinicians to make informed decisions about diagnosis and treatment, such as pinpointing the onset and location of a seizure. Beyond clinical use, EEG has significant applications in understanding human cognition. Researchers utilize EEG to explore cognitive processes including attention, percepti...","https://miniodis-rproxy.lisn.upsaclay.fr/coda-v2-prod-public/logos/2023-12-02-1701542051/06a6dc054e4b/NTXHackathon23-Logo-Black-Blue-2048.png","https://www.codabench.org/competitions/1777/","completed","10","","2023-12-01","2023-12-15","2023-12-12 23:22:24","2023-12-12 23:30:24"
