FlowNet-PET

Unsupervised Learning to Perform Respiratory Motion Correction in PET Imaging

This repository contains the code and datasets required to reproduce the work shown in our paper.

Figure 1: The FlowNet-PET framework.

Figure 2: The convolutional neural network architecture.

Dependencies

- PyTorch: pip install torch torchvision

  • We used version 1.10.0, but other recent versions should also work.

- h5py: pip install h5py

- configparser: pip install configparser
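To confirm that the dependencies are installed, you can query their versions with the standard library. This is just a convenience check, not part of the repository; note that configparser ships with Python 3, so the pip backport may not report a version:

```python
from importlib.metadata import version, PackageNotFoundError

# Report the installed version of each dependency, if present.
for pkg in ("torch", "h5py", "configparser"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed (or provided by the standard library)")
```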

Data download

Option 1

The test sets can be downloaded here, and the training set can be downloaded here (this file is around 50GB and is not necessary for the analysis).

Once downloaded, unzip the files and place each file in the data directory. For instance, after doing this you should have the file path FlowNet_PET/data/xcat_frames_test_set.h5.
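Once the data is in place, you can sanity-check an HDF5 file by listing its datasets with h5py. The snippet below is a sketch: it builds a tiny stand-in file so it is self-contained, and the dataset name "frames" and its shape are hypothetical — the real datasets inside xcat_frames_test_set.h5 may be named differently:

```python
import os
import tempfile

import h5py
import numpy as np

# Create a tiny stand-in for FlowNet_PET/data/xcat_frames_test_set.h5.
# The dataset name "frames" and the array shape are hypothetical.
path = os.path.join(tempfile.mkdtemp(), "xcat_frames_test_set.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("frames", data=np.zeros((2, 4, 4, 4), dtype=np.float32))

# List every top-level dataset with its shape and dtype.
with h5py.File(path, "r") as f:
    for name, ds in f.items():
        print(name, ds.shape, ds.dtype)
```

Pointing the same loop at the downloaded file shows what the test set actually contains.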

Option 2

To download the data from the command line:

  • install the file downloader: pip install zenodo_get

  • download the test sets:

    cd data/
    
    python -m zenodo_get 6510089
    
    unzip FlowNetPET_test_sets.zip
    
  • download the training set (this file is around 50GB and is not necessary for the analysis):

    cd data/
    
    python -m zenodo_get 6510358
    
    unzip xcat_training_set.zip
    


Training the Network

Option 1

  1. The model architecture and hyper-parameters are set within a configuration file in the config directory. For instance, I have already created the original FlowNet-PET configuration file. You can copy this file under a new name and change whichever parameters you choose.

  2. For example, if you created fnp_2.ini in Step 1, you can train that model by running python train_flownet_pet.py fnp_2 -v 500 -ct 15.00, which will display the training progress every 500 batch iterations and save the model every 15 minutes. The same command will continue training the network if a model from previous training is already saved in the model directory.
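Copying and editing a configuration file can also be done programmatically with configparser. The section and option names below are hypothetical placeholders — match them to whatever the original FlowNet-PET .ini actually contains:

```python
import configparser
import os
import tempfile

# Build a minimal stand-in config (section/option names are hypothetical).
config = configparser.ConfigParser()
config["TRAINING"] = {"lr": "1e-4", "batch_size": "8"}

# Write it out under the new model name, e.g. config/fnp_2.ini.
cfg_dir = tempfile.mkdtemp()
cfg_path = os.path.join(cfg_dir, "fnp_2.ini")
with open(cfg_path, "w") as f:
    config.write(f)

# Re-read the copy and change one hyper-parameter before training.
config2 = configparser.ConfigParser()
config2.read(cfg_path)
config2["TRAINING"]["batch_size"] = "16"
print(config2["TRAINING"]["batch_size"])
```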

Option 2

Alternatively, if operating on Compute Canada, you can use the launch_model.py script to simultaneously create a new configuration file and launch a set of jobs to train your model.

  1. Change the load modules file to include the lines necessary to load your own environment with PyTorch, etc.
  2. Then, to copy the original FlowNet-PET configuration but use, say, a loss weight of 500 for the invertibility loss term, you could use the command python launch_model.py fnp_2 -iw 500. This will launch ten 3-hour jobs on the GPU nodes to complete the training. You can check out the other parameters that can be changed with the command python launch_model.py -h.

Analysis notebooks

  1. Check out the Testing on XCAT Frames notebook to evaluate the trained network on raw XCAT PET frames and compare the results to images with and without motion.
  2. Check out the Comparing Against RPB notebook to evaluate the trained network on XCAT PET data that is binned based on a clinical breathing trace. This notebook compares the results against images produced by the retrospective phase binning method, which requires a scan duration that is six times longer.
  3. Check out the Comparisons using MC Data notebook to evaluate the trained network on Monte Carlo PET data that is binned based on a clinical breathing trace. This notebook also compares the results against images produced by the retrospective phase binning method.
