Tumor segmentation
All segmentation models were trained using the AGU-Net architecture, set up with 5 levels, convolution blocks of [16, 32, 128, 256, 256] filters, multi-scale input, deep supervision, and a single attention scheme. The full details for the preoperative segmentation tasks can be found in this article. For the glioblastoma early postoperative segmentation use case, more details are available in this article. The dataset, preprocessing, and training setup of each model are summarized below.
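For reference, the hyperparameters listed above can be gathered in one place as in the following sketch. This is illustrative only: the key names are hypothetical and do not mirror the project's actual training code.

```python
# Illustrative only: the AGU-Net hyperparameters described above, collected in a
# plain dictionary. Key names are hypothetical, not the project's actual API.
agunet_config = {
    "levels": 5,
    "filters_per_level": [16, 32, 128, 256, 256],
    "multi_scale_input": True,       # downsampled copies of the input fed to deeper levels
    "deep_supervision": True,        # auxiliary losses on intermediate decoder outputs
    "attention_scheme": "single",
    "patch_shape": (160, 160, 160),  # voxels
    "optimizer": {"name": "Adam", "learning_rate": 5e-4},
    "effective_batch_size": 32,      # reached through gradient accumulation
    "early_stopping_patience": 15,   # epochs without validation loss improvement
}
```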
- Dataset: The model was trained on 2125 Gd-enhanced T1-weighted MRI volumes, coming from fourteen different hospitals.
- Preprocessing: (i) resampling to an isotropic resolution of 0.75 mm³, (ii) skull-stripping, and (iii) intensity clipping to remove the 0.05% highest values, followed by normalization to [0, 1] (see the sketch after this block).
- Training: patch-wise, with patches of 160x160x160 voxels, from scratch with the Adam optimizer (learning rate of 5e-4) and an effective batch size of 32 samples obtained through gradient accumulation, stopped after 15 consecutive epochs without validation loss improvement.
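A minimal NumPy/SciPy sketch of preprocessing steps (i) and (iii) is given below, assuming the volume is already loaded as an array together with its voxel spacing; skull-stripping is handled by a separate tool and is omitted. The function name and the interpretation of the target spacing (0.75 mm per axis) are assumptions, not the project's actual implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(volume: np.ndarray, spacing: tuple) -> np.ndarray:
    """Sketch of the preprocessing described above: isotropic resampling,
    clipping of the 0.05% highest intensities, and min-max normalization
    to [0, 1]. Skull-stripping is assumed to be done elsewhere."""
    # (i) resample to an isotropic target spacing
    target = 0.75  # interpreted as 0.75 mm per axis; adjust if the intent differs
    factors = [s / target for s in spacing]
    volume = zoom(volume, factors, order=1)  # trilinear interpolation

    # (iii) clip the 0.05% highest intensities, then normalize to [0, 1]
    upper = np.percentile(volume, 99.95)
    volume = np.clip(volume, volume.min(), upper)
    volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
    return volume.astype(np.float32)
```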
- Dataset: The model was trained on 678 FLAIR MRI volumes, gathered from four different hospitals.
- Preprocessing: (i) resampling to an isotropic resolution of 0.75 mm³, (ii) tight clipping around the patient head, and (iii) intensity clipping to remove the 0.05% highest values, followed by normalization to [0, 1].
- Training: patch-wise, with patches of 160x160x160 voxels, from scratch with the Adam optimizer (learning rate of 5e-4) and an effective batch size of 32 samples obtained through gradient accumulation (see the sketch after this block), stopped after 15 consecutive epochs without validation loss improvement.
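The gradient accumulation mentioned in the training bullets can be sketched as follows in PyTorch. This is not the framework or code used to train the models; `model`, `loader`, and `criterion` are placeholders.

```python
import torch

def train_one_epoch(model, loader, criterion, optimizer, accumulation_steps=32):
    """Sketch of gradient accumulation: patches are processed one at a time and
    gradients are accumulated until an effective batch size of 32 is reached,
    at which point a single optimizer update is applied."""
    model.train()
    optimizer.zero_grad()
    for step, (patch, target) in enumerate(loader, start=1):
        loss = criterion(model(patch), target)
        # scale so the accumulated gradient matches a true batch of this size
        (loss / accumulation_steps).backward()
        if step % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()

# optimizer matching the description above:
# optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
```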
- Dataset: The model was trained on 706 Gd-enhanced T1-weighted MRI volumes, collected from different hospitals and clinics in Trondheim, Norway.
- Preprocessing: (i) resampling to an isotropic resolution of 0.75 mm³, (ii) tight clipping around the patient head (see the sketch after this block), and (iii) intensity clipping to remove the 0.05% highest values, followed by normalization to [0, 1].
- Training: patch-wise, with patches of 160x160x160 voxels, from scratch with the Adam optimizer (learning rate of 5e-4) and an effective batch size of 32 samples obtained through gradient accumulation, stopped after 15 consecutive epochs without validation loss improvement.
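The "tight clipping around the patient head" step can be approximated by cropping the volume to the bounding box of non-background voxels, as in the sketch below. The background threshold and margin are assumptions; the actual implementation may differ.

```python
import numpy as np

def crop_to_head(volume: np.ndarray, threshold: float = 0.0, margin: int = 5) -> np.ndarray:
    """Sketch of tight clipping around the patient head: crop to the bounding
    box of voxels above a background threshold, with a small safety margin."""
    coords = np.argwhere(volume > threshold)
    lower = np.maximum(coords.min(axis=0) - margin, 0)
    upper = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(lo, hi) for lo, hi in zip(lower, upper))
    return volume[slices]
```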
- Dataset: The model was trained on 394 Gd-enhanced T1-weighted MRI volumes, coming from two different hospitals.
- Preprocessing: (i) resampling to an isotropic resolution of 0.75 mm³, (ii) tight clipping around the patient head, and (iii) intensity clipping to remove the 0.05% highest values, followed by normalization to [0, 1].
- Training: patch-wise, with patches of 160x160x160 voxels, from scratch with the Adam optimizer (learning rate of 5e-4) and an effective batch size of 32 samples obtained through gradient accumulation, stopped after 15 consecutive epochs without validation loss improvement (see the sketch after this block).
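Finally, the stopping criterion shared by all models (15 consecutive epochs without validation loss improvement) can be expressed as a small helper. Again, this is only an illustrative sketch, not the actual training code.

```python
class EarlyStopping:
    """Sketch of the stopping criterion used for all models above: training
    halts after 15 consecutive epochs without validation loss improvement."""

    def __init__(self, patience: int = 15):
        self.patience = patience
        self.best_loss = float("inf")
        self.epochs_without_improvement = 0

    def should_stop(self, val_loss: float) -> bool:
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience
```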