Projects
Build and train a convolutional neural network to automatically segment brain tissue from PET images. For more info, check out: https://github.com/tfunck/pet_brainmask_convnet
Using EEG recordings from patients with epilepsy, detect whether a seizure is currently occurring. The data was obtained from 22 patients over several hours, across 23 separate electrodes. A label of 0 indicates interictal periods (non-seizure), and a label of 1 indicates a seizure is occurring at that time. Possible targets are listed below.
- Detect whether a seizure is occurring in a given input (binary target).
- Detect when a seizure is happening in a given input.
- Predict whether a seizure will happen within 60 minutes of the current sample (note: the labels will need some minor processing).
- Predict seizures in patients whose data has not been seen (leave-one-patient-out cross-validation).
The data will be available on-site and on ElementAI's machines, and is about 38GB.
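For the leave-one-patient-out target above, the splitting logic can be sketched as follows. This is a minimal illustration; the `(patient_id, features, label)` tuple layout is a hypothetical example, not the actual on-site data format:

```python
from collections import defaultdict

def leave_one_patient_out(samples):
    """Yield (held_out_patient, train, test) splits where each test fold
    holds all EEG windows from a single patient, so evaluation measures
    generalization to unseen patients.

    `samples` is a list of (patient_id, features, label) tuples
    (hypothetical layout for this sketch)."""
    by_patient = defaultdict(list)
    for sample in samples:
        by_patient[sample[0]].append(sample)
    for held_out in sorted(by_patient):
        test = by_patient[held_out]
        train = [s for pid, group in by_patient.items()
                 if pid != held_out for s in group]
        yield held_out, train, test
```

Each fold's test set then contains only windows from one patient, and the model is retrained once per patient.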
Using the Human Connectome Project structural T1 and T2 scans, learn to produce full-resolution brain masks. A label of 0 indicates 'not brain' and a label of 1 indicates 'brain'. Possible targets are listed below.
- Given co-registered T1/T2 scans, predict the brain mask.
- Given either a T1 or T2 scan, predict the brain mask.
The data will be available on-site and on ElementAI's machines. Use of the data requires you to sign up for HCP. The size of the data is 74GB for T1, 74GB for T2, and 300MB for the brain masks.
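Whatever architecture is used, predicted masks can be scored against the reference masks with the Dice overlap. A minimal NumPy sketch (the function name and binary array layout are ours, not part of the HCP release):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice overlap between a predicted binary brain mask and the
    reference mask (1 = brain, 0 = not brain). Returns a value in
    [0, 1]; 1 means perfect agreement."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)
```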
Train automatic segmentation of healthy brain tissues in T1 images. Data generously provided by Neuromorphometrics. This data was used for the MICCAI 2012 Grand Challenge on Multi-Atlas Labeling.
The Stanford Center for Reproducible Neuroscience has been working on quality control of MRI images using an automatic pipeline that computes 64 image quality metrics and uses them to train an automatic classifier, but has not been able to generalize it to new sites with different MRI parameters. Read their pre-print here: http://www.biorxiv.org/content/early/2017/07/15/111294
The code for mriqc is here: https://github.com/poldracklab/mriqc They would like us to try to learn their QC labels and see whether deep learning can generalize better than their random forests/SVMs trained on imaging metrics.
Here's a writeup by Carolina Makowski about one possible quality control protocol for manual labeling: https://www.dropbox.com/s/4it50fjez1sibta/Structural_MR_QC_forLepageLab_10-08-2016.pdf?dl=0
https://openfmri.org/dataset/ds000116/
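As a point of comparison for the metric-based approach described above, a trivial baseline over per-scan image-quality-metric vectors (such as the 64 IQMs mriqc computes) can be sketched as a logistic regression in plain NumPy. This is our toy stand-in, not the mriqc classifier:

```python
import numpy as np

def train_qc_classifier(iqms, labels, lr=0.1, steps=500):
    """Fit a logistic regression by gradient descent on image-quality
    metric vectors (rows of `iqms`) with binary pass/fail `labels`.
    Returns the learned weights, with a bias term appended."""
    X = np.c_[iqms, np.ones(len(iqms))]        # append a bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid probabilities
        w -= lr * X.T @ (p - labels) / len(labels)
    return w

def predict_qc(w, iqms):
    """Predict 0/1 QC labels with the trained weights."""
    X = np.c_[iqms, np.ones(len(iqms))]
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

The interesting question for the project is whether a deep model trained on the images themselves beats this kind of metric-based classifier across sites.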
Using a voxel-based 3D GAN, train a generator/discriminator convolutional neural network to generate synthetic FLAIR/T1 from T1/FLAIR, enabling the use of complete datasets in instances of missing modalities. https://github.com/ravnoor/MRI-GAN
Train a neural network to reconstruct dynamically growing axonal arbours, imaged with 3D two-photon microscopy of individually labeled neurons in living animals, using a database of manual reconstructions. The primary objective will be to track changes in individual branches over time.
http://ruthazerlab.mcgill.ca/downloads/stack.gif
http://ruthazerlab.mcgill.ca/downloads/axon.gif
The ultimate clinical problem is the detection of epileptogenic brain lesions on MRI scans, particularly in patients whose scans appear visually normal. Multimodal MRI data on healthy controls and on patients with visible, manually delineated lesions will be available on ElementAI machines. The aim is to detect abnormalities in a given patient's set of images by comparison against a database of controls.
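The comparison-to-controls idea can be sketched as a voxel-wise z-score map: voxels where the patient deviates strongly from the control distribution become candidate abnormalities. This is a hedged illustration of the concept, not the project's analysis pipeline (images are assumed co-registered):

```python
import numpy as np

def zscore_map(patient_img, control_imgs, eps=1e-7):
    """Voxel-wise z-score of one patient's image against a stack of
    control images (controls stacked along axis 0). Large |z| values
    flag voxels that deviate from the control distribution."""
    mu = control_imgs.mean(axis=0)
    sd = control_imgs.std(axis=0)
    return (patient_img - mu) / (sd + eps)   # eps avoids divide-by-zero
```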