Unsupervised Learning to Perform Respiratory Motion Correction in PET Imaging
This repository contains the code and datasets required to reproduce the work shown in our paper.
Figure 1: The FlowNet-PET framework.
Figure 2: The convolutional neural network architecture.
- PyTorch: `pip install torch torchvision`
  - We used version 1.10.0, but other recent versions should also work.
- h5py: `pip install h5py`
- configparser: `pip install configparser`
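As an optional sanity check (not part of the original instructions), the snippet below reports which of the required packages are installed. The `check_packages` helper is hypothetical, written here for illustration only:

```python
from importlib import metadata

def check_packages(names):
    """Return a mapping of package name -> installed version (or None if missing)."""
    versions = {}
    for pkg in names:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

# Report the status of each dependency listed above.
for pkg, ver in check_packages(["torch", "torchvision", "h5py"]).items():
    print(f"{pkg}: {ver or 'NOT INSTALLED'}")
```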
The test sets can be downloaded here and the training set can be downloaded here (this file is around 50GB and not necessary for the analysis).
Once downloaded, unzip the files and place each file in the data directory. For instance, after doing this you should have the file path `FlowNet_PET/data/xcat_frames_test_set.h5`.
To install from the command line:

- Install the file downloader:

  ```
  pip install zenodo_get
  ```

- Download the test sets:

  ```
  cd data/
  python -m zenodo_get 6510089
  unzip FlowNetPET_test_sets.zip
  ```

- Download the training set (this file is around 50GB and not necessary for the analysis):

  ```
  cd data/
  python -m zenodo_get 6510358
  unzip xcat_training_set.zip
  ```
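Once the data is in place, you can quickly sanity-check a downloaded file with `h5py`. The `summarize_h5` helper below is illustrative and not part of this repository; it makes no assumptions about the dataset names inside the file, it simply lists whatever the file contains:

```python
import h5py

def summarize_h5(path):
    """Return {dataset_name: shape} for every dataset in an HDF5 file."""
    summary = {}

    def visit(name, obj):
        # visititems walks the full group hierarchy; keep datasets only.
        if isinstance(obj, h5py.Dataset):
            summary[name] = obj.shape

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return summary

# Example usage (adjust the path if you placed the data elsewhere):
# summarize_h5("data/xcat_frames_test_set.h5")
```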
- The model architecture and hyper-parameters are set within a configuration file in the `config` directory. For instance, I have already created the original FlowNet-PET configuration file. You can copy this file under a new name and change whichever parameters you choose.
- If you created a `fnp_2.ini` file in Step 1, you can train this model by running

  ```
  python train_flownet_pet.py fnp_2 -v 500 -ct 15.00
  ```

  which will train your model, displaying the progress every 500 batch iterations and saving the model every 15 minutes. This same command will continue training the network if the model is already saved in the model directory from previous training.
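Since the hyper-parameters live in `configparser`-style `.ini` files, a new configuration can also be written or inspected programmatically. The section and option names below are hypothetical placeholders, not the actual FlowNet-PET parameters; see the original configuration file in the `config` directory for the real names:

```python
import configparser

# Hypothetical .ini content for illustration only; the real parameter
# names are defined in the repository's original configuration file.
example_ini = """
[TRAINING]
batch_size = 16
learning_rate = 0.0001

[ARCHITECTURE]
num_filters = 32
"""

config = configparser.ConfigParser()
config.read_string(example_ini)

# configparser stores strings; use the typed getters to convert.
print(config.getint("TRAINING", "batch_size"))       # 16
print(config.getfloat("TRAINING", "learning_rate"))  # 0.0001
```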
Alternatively, if operating on Compute Canada, you can use the `launch_model.py` script to simultaneously create a new configuration file and launch a series of jobs to train your model.
- Change the load-modules file to include the lines necessary to load your own environment with PyTorch, etc.
- Then, to copy the original FlowNet-PET configuration but use, say, a loss weight of 500 for the invertibility loss term, you could use the command `python launch_model.py fnp_2 -iw 500`. This will launch ten 3-hour jobs on the GPU nodes to finish the training. You can check out the other parameters that can be changed with the command `python launch_model.py -h`.
- Check out the Testing on XCAT Frames notebook to evaluate the trained network on raw XCAT PET frames and compare the results to images with and without motion.
- Check out the Comparing Against RPB notebook to evaluate the trained network on XCAT PET data that is binned based on a clinical breathing trace. This notebook compares the results against images produced by the retrospective phase binning method, which requires a scan duration that is six times longer.
- Check out the Comparisons using MC Data notebook to evaluate the trained network on Monte Carlo PET data that is binned based on a clinical breathing trace. This notebook compares the results against images produced by the retrospective phase binning method, which requires a scan duration that is six times longer.
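The notebooks above compare motion-corrected images against references. As a minimal sketch of one such comparison metric (plain NumPy, written here for illustration and not taken from the notebooks' actual code):

```python
import numpy as np

def mse(img_a, img_b):
    """Mean squared error between two images of the same shape."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    return float(np.mean((a - b) ** 2))

# Toy example: a uniform difference of 2 gives an MSE of 4.
print(mse([[0, 0], [0, 0]], [[2, 2], [2, 2]]))  # 4.0
```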