This repository contains an implementation of 3D binary segmentation. It is still under construction.

This repository is based on PyTorch 1.4. To use this code, first clone the repository and install the environment. To run it on your own dataset: if your volumetric data is in H x W x D, set the flag `transpose_dim` in your config YAML file to 1; if the data is in D x H x W, set `transpose_dim` to 0. You will also have to prepare your dataset in the following structure:
```
path_to_dataset
└───labelled
|   └───imgs # labelled scans
|   └───lbls # labels
└───unlabelled
|   └───imgs # unlabelled scans
└───test
    └───imgs # testing scans
    └───lbls # testing labels
```
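As a sanity check for the `transpose_dim` flag, here is a minimal sketch of loading a scan and moving the depth axis first. It assumes NIfTI data and nibabel; `load_volume` and the file name are hypothetical, not part of this repo:

```python
import nibabel as nib
import numpy as np

def load_volume(path, transpose_dim=1):
    # Hypothetical helper: load a NIfTI scan as a float32 array.
    vol = nib.load(path).get_fdata().astype(np.float32)
    if transpose_dim == 1:
        # Stored as (H, W, D): move depth first -> (D, H, W).
        vol = np.transpose(vol, (2, 0, 1))
    return vol  # with transpose_dim == 0 the volume is assumed already (D, H, W)

# Hypothetical file name, following the layout above:
vol = load_volume('path_to_dataset/labelled/imgs/scan_001.nii', transpose_dim=1)
print(vol.shape)  # (D, H, W)
```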
Then, to train the model, run the following with your own custom YAML config file:
```
python main.py \
-c config/exp.yaml
```
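For orientation, here is a minimal sketch of how an entry point like `main.py` might parse the `-c` flag and load the config, assuming PyYAML; the actual `main.py` may differ:

```python
import argparse

import yaml

def parse_config():
    # Parse the -c/--config flag used in the command above.
    parser = argparse.ArgumentParser(description='3D binary segmentation')
    parser.add_argument('-c', '--config', required=True,
                        help='path to the experiment YAML file')
    args = parser.parse_args()
    # safe_load returns a nested dict mirroring the YAML structure.
    with open(args.config) as f:
        return yaml.safe_load(f)

if __name__ == '__main__':
    config = parse_config()
    print(config['train']['iterations'])  # e.g. 10000 with the example config below
```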
Here is an example of the config YAML file:
```yaml
dataset:
  name: lungcancer
  num_workers: 4
  data_dir: '/SAN/medic/PerceptronHead/data/Task06_Lung' # data directory
  data_format: 'nii' # use nii for NIfTI, npy for numpy

logger:
  tag: 'exp_log'

seed: 1024

model:
  input_dim: 1 # channel number of the input volume, e.g. 1 for CT, 4 for BRATS
  output_dim: 1 # output channel number, 1 for binary segmentation with Sigmoid
  width: 8 # number of filters in the first encoder, doubled in each subsequent encoder
  depth: 3 # number of downsampling encoders

train:
  transpose_dim: 1 # use 1 to transpose the input if it is in H x W x D, e.g. 1 for Task06_Lung from the Medical Segmentation Decathlon
  optimizer:
    weight_decay: 0.0005
    lr: 0.001
  iterations: 10000 # number of training iterations; note this is different from epochs
  batch: 1 # batch size of labelled volumes
  temp: 1.0 # temperature scaling on the output, default 1
  contrast: True # random contrast augmentation
  crop_aug: True # random crop augmentation
  gaussian: True # random Gaussian noise augmentation
  new_size_d: 32 # crop size on depth (number of slices)
  new_size_w: 256 # crop size on width
  new_size_h: 256 # crop size on height
  batch_u: 1 # batch size of unlabelled volumes; must be 0 in the supervised setting, larger than 0 enables semi-supervised learning
  mu: 0.5 # prior mean of the threshold distribution; the standard deviation is scaled automatically, see libs.Loss.kld_loss for details
  learn_threshold: 1 # 0 for the original fixed pseudo-label threshold, 1 for learning the threshold
  threshold_flag: 1 # 0 for the original implementation of the Bayesian pseudo label, 1 for a simplified implementation that approximates the mean and learns the variance
  alpha: 1.0 # weight on the unsupervised loss when semi-supervised learning is used
  warmup: 0.1 # ratio between warm-up iterations and total iterations
  warmup_start: 0.1 # ratio between the warm-up starting iteration and total iterations

checkpoint:
  resume: False # resume training or not
  checkpoint_path: '/some/path/to/saved/model' # checkpoint path
```
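The `contrast`, `crop_aug`, and `gaussian` flags switch on simple intensity and spatial augmentations. As a rough illustration of what such intensity augmentations can look like (a sketch with made-up parameter values, not this repo's exact implementation):

```python
import numpy as np

def add_gaussian_noise(vol, sigma=0.1):
    # Additive zero-mean Gaussian noise, a common intensity augmentation.
    return vol + np.random.normal(0.0, sigma, size=vol.shape).astype(vol.dtype)

def random_contrast(vol, lo=0.8, hi=1.2):
    # Scale intensities around the mean by a random factor in [lo, hi].
    factor = np.random.uniform(lo, hi)
    mean = vol.mean()
    return (vol - mean) * factor + mean

# Example on a dummy (D, H, W) volume matching the crop sizes above:
augmented = random_contrast(add_gaussian_noise(np.zeros((32, 256, 256), np.float32)))
```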
The example dataset Task06_Lung can be downloaded from medicaldecathlon.com.
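The `mu`, `learn_threshold`, and `threshold_flag` options relate to the Bayesian pseudo-label threshold in the paper cited below: the pseudo-label threshold is treated as a random variable whose learned distribution is kept close to a prior with mean `mu` via a KL term (see `libs.Loss.kld_loss` for the actual implementation). As a rough illustration only, here is the closed-form KL divergence between two univariate Gaussians:

```python
import torch

def kld_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for univariate Gaussians.
    var_q = torch.exp(logvar_q)
    var_p = torch.exp(logvar_p)
    return 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Illustrative values: a learned threshold posterior vs. a prior centred at mu = 0.5.
mu_q, logvar_q = torch.tensor(0.6), torch.tensor(-2.0)
prior_mu, prior_logvar = torch.tensor(0.5), torch.tensor(-2.0)
print(kld_gaussians(mu_q, logvar_q, prior_mu, prior_logvar))
```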
If you find our paper or code useful for your research, please consider citing:
```
@inproceedings{xu2022bpl,
  title={Bayesian Pseudo Labels: Expectation Maximization for Robust and Efficient Semi-Supervised Segmentation},
  author={Xu, Moucheng and Zhou, Yukun and Jin, Chen and deGroot, Marius and Alexander, Daniel C. and Oxtoby, Neil P. and Hu, Yipeng and Jacob, Joseph},
  booktitle={International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)},
  year={2022}
}
```
For any questions, please contact '[email protected]'.