> Tools like TorchIO are a symptom of the maturation of medical AI research using deep learning techniques.
>
> Jack Clark, Policy Director at OpenAI (link).
*(Figure: queue for patch-based training.)*
TorchIO is a Python package with a set of tools to efficiently read, preprocess, sample, augment, and write 3D medical images in deep learning applications written in PyTorch. It includes intensity and spatial transforms for data augmentation and preprocessing. The transforms cover typical computer vision operations, such as random affine transformations, as well as domain-specific ones, such as simulating intensity artifacts caused by MRI magnetic field inhomogeneity or by motion in k-space.
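As a brief illustration, the snippet below is a minimal sketch that composes a few preprocessing and augmentation transforms and feeds image patches to a standard PyTorch `DataLoader` through a TorchIO queue. It assumes a recent TorchIO release; the file paths (`t1.nii.gz`, `seg.nii.gz`), patch size, and queue sizes are placeholder values chosen for the example.

```python
# Minimal sketch of a TorchIO pipeline (placeholder paths and sizes).
import torch
import torchio as tio

# A subject groups the images (and label maps) belonging to one patient.
subject = tio.Subject(
    t1=tio.ScalarImage('t1.nii.gz'),  # placeholder path
    seg=tio.LabelMap('seg.nii.gz'),   # placeholder path
)

# Compose preprocessing and augmentation transforms: a typical computer
# vision operation (random affine) and domain-specific ones simulating
# MRI bias field and k-space motion artifacts.
transform = tio.Compose([
    tio.RescaleIntensity(out_min_max=(0, 1)),
    tio.RandomAffine(),
    tio.RandomBiasField(),
    tio.RandomMotion(),
])

dataset = tio.SubjectsDataset([subject], transform=transform)

# Queue for patch-based training: patches are sampled from the
# (transformed) volumes and served to a standard PyTorch DataLoader.
queue = tio.Queue(
    subjects_dataset=dataset,
    max_length=100,
    samples_per_volume=10,
    sampler=tio.data.UniformSampler(patch_size=64),
)
# num_workers must be 0 here; the queue manages its own workers.
loader = torch.utils.data.DataLoader(queue, batch_size=4, num_workers=0)

for batch in loader:
    patches = batch['t1'][tio.DATA]  # tensor of shape (4, 1, 64, 64, 64)
    # ... forward pass of your model goes here
```

The queue loads and samples patches in the background, so the training loop only receives ready-to-use tensors; see the documentation linked below for the complete API.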
This package has been greatly inspired by NiftyNet, which is no longer actively maintained.
If you like this repository, please give it a star!
If you use this package for your research, please cite the paper:
BibTeX entry:
```bibtex
@article{perez-garcia_torchio_2020,
    title = {{TorchIO}: a {Python} library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning},
    shorttitle = {{TorchIO}},
    url = {http://arxiv.org/abs/2003.04696},
    urldate = {2020-03-11},
    journal = {arXiv:2003.04696 [cs, eess, stat]},
    author = {P{\'e}rez-Garc{\'i}a, Fernando and Sparks, Rachel and Ourselin, Sebastien},
    month = mar,
    year = {2020},
    note = {arXiv: 2003.04696},
    keywords = {Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Statistics - Machine Learning},
}
```
This project is supported by the Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) (University College London) and the School of Biomedical Engineering & Imaging Sciences (BMEIS) (King's College London).
See Getting started for installation instructions and a Hello, World! example.
Longer usage examples can be found in the notebooks.
All the documentation is hosted on Read the Docs.
Please open a new issue if you think something is missing.
Thanks goes to all these people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!