Defacing pipeline

This defacing pipeline is based on affine registration with BRAINSFit and resampling with BRAINSResample. It was developed primarily for input images of brains with a reduced FOV (slabs), for which the affine registration often fails. To assess the quality of the initial, fully automatic defacing step, we provide a second script that creates 3D visualizations of the defaced images in GIF format. As a fallback for cases where the defacing failed, an input transform can be produced by selecting landmark pairs in the template and the input image; this is done in 3D Slicer.

Steps

  1. Affine registration of the input image to a template image (that has a template face mask associated with it)
  2. Transforming the template face mask to the input image space by inverting the affine transform
  3. Applying the resampled template face mask to the input image to remove facial features
  4. Creation of 3D rendering of the defaced image with 3D Slicer
  5. Manual confirmation of the defacing success (or marking as failed) for each input image
  6. If the defacing failed, manual landmark selection in 3D Slicer to produce a manual affine transformation
  7. Repeat steps 2-4 based on the manual affine transformation
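
At the command level, steps 1-3 boil down to two BRAINSTools calls and a masking operation. The sketch below illustrates the idea with placeholder file names; the BRAINSFit/BRAINSResample options shown (e.g. --useAffine, --inverseTransform) are assumptions about the CLI and should be checked against --help for your build, and the face mask is assumed to be 1 outside the face and 0 over the face, as in the PyDeface mask mentioned further down.

import subprocess
import nibabel as nib

# Placeholder paths; adjust to your data and template.
input_img = "input.nii.gz"
template_img = "template.nii.gz"              # template with associated face mask
template_mask = "template_face_mask.nii.gz"
transform = "input_to_template.h5"
resampled_mask = "face_mask_in_input_space.nii.gz"
defaced_img = "input_masked.nii.gz"

# Step 1: affine registration of the input image to the template
# (flag names assumed; verify with BRAINSFit --help).
subprocess.run(["BRAINSFit",
                "--fixedVolume", template_img,
                "--movingVolume", input_img,
                "--useAffine",
                "--outputTransform", transform], check=True)

# Step 2: bring the template face mask into input-image space by applying
# the inverse of the input-to-template transform (the --inverseTransform
# flag is assumed to be available in your BRAINSResample build).
subprocess.run(["BRAINSResample",
                "--inputVolume", template_mask,
                "--referenceVolume", input_img,
                "--warpTransform", transform,
                "--inverseTransform",
                "--interpolationMode", "NearestNeighbor",
                "--outputVolume", resampled_mask], check=True)

# Step 3: multiply the input image by the resampled face mask
# (assumed to be 0 over the face, 1 elsewhere) to remove facial features.
img = nib.load(input_img)
mask = nib.load(resampled_mask).get_fdata()
defaced = img.get_fdata() * (mask > 0)
nib.save(nib.Nifti1Image(defaced, img.affine, img.header), defaced_img)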

Requirements

The code was tested on Ubuntu 20.04 with Python 3.11.

  1. BRAINSFit and BRAINSResample binaries from BRAINSTools (note that the binaries depend on shared libraries that must be installed as well; the scripts were tested with BRAINSFit version 5.4.0 and BRAINSResample version 5.7.0)
  2. 3D Slicer (tested with version 5.2) with the Jupyter Notebook extension installed to create 3D renderings of the defaced images and to perform landmark registration for failed cases
  3. MPV for viewing the 3D renderings and ITK-SNAP for side-by-side comparison of original and defaced images
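
As a quick sanity check before running the notebooks, you can verify that the external tools are reachable; the following sketch assumes the binary names listed below (mpv and itksnap are only needed for the manual assessment in step 5).

import shutil

# Report which of the external tools can be found on the PATH.
# BRAINSFit/BRAINSResample can alternatively be given as full paths
# in the first notebook instead of being on the PATH.
for tool in ["BRAINSFit", "BRAINSResample", "Slicer", "mpv", "itksnap"]:
    print(f"{tool}: {shutil.which(tool) or 'not found on PATH'}")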

Example run

The example run is based on two test images to be defaced, a T1 image IXI002-Guys-0828-T1.nii.gz and a T2 image IXI002-Guys-0828-T2.nii.gz. The T2 image also comes with a manual affine transform Transform_to_template.txt to demonstrate how the pipeline handles previously failed defacing attempts. How to create such a manual transform is explained further down.

The template image and the template mask were taken from PyDeface.

All outputs are written to the example_output directory. If you want to rerun the example, delete the contents of the directory first, because the notebook will not overwrite existing files.
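
For example, a minimal Python snippet to empty and recreate the directory (assuming it is called example_output, as above):

import shutil
from pathlib import Path

out_dir = Path("example_output")
if out_dir.exists():
    shutil.rmtree(out_dir)       # remove all previous outputs
out_dir.mkdir(parents=True)      # recreate an empty output directory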

Steps 1-3: Automatic defacing steps

These steps are performed by the notebook deface_with_brainsfit.ipynb. In the second cell, the paths to the input images and the output directories need to be set. One also needs to set a list of existing manual transforms (if none exist, set it to a list of None entries of the same length as the list of input images). In this example, there is no manual transform for the T1 image but there is one for the T2 image, so the list looks like this:

existing_transform_paths = [None, "./data/example_input_images/IXI002-Guys-0828-T2/Transform_to_template.txt"]

Furthermore, the paths to the BRAINSFit and BRAINSResample binaries need to be set in this cell.
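
Put together, the configuration cell might look roughly like the sketch below. Apart from existing_transform_paths, which is quoted from the notebook above, the variable names and image paths are illustrative and may differ from the actual notebook.

# Illustrative configuration; variable names other than
# existing_transform_paths may differ from the actual notebook cell.
input_image_paths = [
    "./data/example_input_images/IXI002-Guys-0828-T1/IXI002-Guys-0828-T1.nii.gz",
    "./data/example_input_images/IXI002-Guys-0828-T2/IXI002-Guys-0828-T2.nii.gz",
]

# One entry per input image: None triggers automatic registration with
# BRAINSFit, a path to a transform file skips registration and uses it instead.
existing_transform_paths = [
    None,
    "./data/example_input_images/IXI002-Guys-0828-T2/Transform_to_template.txt",
]

output_dir = "./example_output"

# Full paths to the BRAINSTools binaries.
brainsfit_path = "/path/to/BRAINSFit"
brainsresample_path = "/path/to/BRAINSResample"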

For each case, the notebook creates an output directory, for example IXI002-Guys-0828-T1 for the T1 image.

This directory contains intermediate output files and the defaced image: IXI002-Guys-0828-T1_masked.nii.gz

Step 4: Create 3D rendering of the defaced images with 3D Slicer

This step is performed by the notebook facemask_rendering_with_3DSlicer.ipynb. In the third cell, the path to the defaced images and the output directory need to be set. The notebook creates a GIF animation of the 3D rendering of each defaced image and saves it in the output directory.

This script needs to be run with the 3D Slicer Jupyter Notebook kernel. To install the kernel, follow the instructions here. Alternatively, the notebook can be converted to a Python script and run directly in 3D Slicer, using the approach described here.
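
The core of the rendering step can be sketched as follows. This is a rough outline based on Slicer's Python scripting interface and imageio, not the notebook's exact code, and it must run inside 3D Slicer (Jupyter kernel or Python console); the paths are placeholders.

import imageio
import ScreenCapture
import slicer

defaced_path = "example_output/IXI002-Guys-0828-T1/IXI002-Guys-0828-T1_masked.nii.gz"
gif_path = "example_output/IXI002-Guys-0828-T1_masked.gif"

# Load the defaced image and enable default volume rendering for it.
volume_node = slicer.util.loadVolume(defaced_path)
vr_logic = slicer.modules.volumerendering.logic()
display_node = vr_logic.CreateDefaultVolumeRenderingNodes(volume_node)
display_node.SetVisibility(True)

# Rotate the 3D view and capture one frame per rotation step.
view = slicer.app.layoutManager().threeDWidget(0).threeDView()
view.resetFocalPoint()
capture = ScreenCapture.ScreenCaptureLogic()
frames = []
for i in range(36):                       # number of frames in the animation
    view.yaw()                            # rotate by the view's default yaw increment
    frame_path = f"/tmp/frame_{i:03d}.png"
    capture.captureImageFromView(view, frame_path)
    frames.append(imageio.imread(frame_path))

# Assemble the captured frames into a GIF animation.
imageio.mimsave(gif_path, frames, duration=0.1)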

Step 5: Manual confirmation of the defacing

This step is performed by the notebook loop_through_all_defaced_images_and_assess_manually.ipynb. The purpose is to efficiently go through all GIFs created in step 4, manually assess the quality of the defacing for each image, and store the results in a CSV file, so that manual registration can be performed for the failed cases. In the second cell, the paths to the defaced images, the 3D GIFs, and the original images need to be set. The original images are used for side-by-side comparison with the masked images.

The code loops through all defaced images and shows the GIFs in MPV like this (this requires MPV to be installed and added to the PATH): IXI002-Guys-0828-T1_masked.gif

After assessing the quality of the defacing and closing MPV, the user is asked for an input string, which is stored for each image in the CSV file. For example, this can be "y" for successful defacing and "n" for failed defacing. The only special input is "v", which opens the original image and the defaced image in ITK-SNAP for a side-by-side comparison.

The output CSV file is example_output_manual_assessment.csv
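
A stripped-down version of this loop might look like the sketch below. It assumes mpv and itksnap are on the PATH and that ITK-SNAP's -g/-o options load a main image and an additional image; the paths and the exact CSV layout are illustrative and may differ from the notebook.

import csv
import subprocess

# Illustrative triples of (GIF, defaced image, original image);
# in the notebook these lists are built from the configured paths.
cases = [
    ("example_output/IXI002-Guys-0828-T1_masked.gif",
     "example_output/IXI002-Guys-0828-T1/IXI002-Guys-0828-T1_masked.nii.gz",
     "./data/example_input_images/IXI002-Guys-0828-T1/IXI002-Guys-0828-T1.nii.gz"),
]

with open("example_output_manual_assessment.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "assessment"])
    for gif_path, defaced_path, original_path in cases:
        subprocess.run(["mpv", gif_path])                # show the 3D rendering
        answer = input(f"{gif_path}: defacing ok? (y/n/v) ")
        if answer == "v":
            # "v": open original and defaced image in ITK-SNAP for a
            # side-by-side comparison, then ask again.
            subprocess.run(["itksnap", "-g", original_path, "-o", defaced_path])
            answer = input(f"{gif_path}: defacing ok? (y/n) ")
        writer.writerow([defaced_path, answer])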

Step 6: Manual landmark selection in 3D Slicer

This step is performed manually in 3D Slicer.

  1. Open 3D Slicer.
  2. Drag the template image onto the 3D view and confirm the "Add data into the scene" dialog.
  3. Drag the input image onto the 3D view and confirm the "Add data into the scene" dialog.
  4. Under Modules, select the "Data" module: data_module_after_data_import.png
  5. Right-click on the input image and select "Register this ..."
  6. Right-click on the template image and select "Register <input_image_name> to this using ..." --> "Interactive Landmark Registration". This will automatically open the Landmark Registration module. You should see a 3x3 matrix of images (interactive_landmark_registration_overview.png): the first row shows the template image views, the second row shows the input image views, and the third row shows the overlay of both images.
  7. In the Landmark Registration module, scroll to "Landmarks" and click "+Add". Then, place the landmark in the template image, for example on the left eye by scrolling to the correct axial slice with the mouse wheel and placing the landmark with a left-click. The landmark will appear in the other views of the template image and in all views of the input image. First, make sure the landmark is placed correctly in the template image in all views, by dragging the landmark to the correct position. Then, make sure the landmark is placed correctly in the input image in all views: landmark_1.png
  8. Repeat the same for at least two more landmarks, for example on the right eye and a corner of the 4th ventricle: landmark_3.png
  9. To create the landmark-based transform, scroll down in the "Landmark Registration" module to "Registration" and click on "Affine Registration". Then, scroll further down and, under "Linear Registration", click on "Rigid" (even if it is already selected). If the landmarks are placed correctly, the overlaid images in row 3 should now be registered. If not, try placing the landmarks again or add more landmarks: manual_registration_done.png
  10. Finally, save the transform by clicking "File" --> "Save Data" and unselecting all files except "Transform.h5". Change the File Format to "Text (.txt)" and select the correct output directory. In this example, the first defacing script expects the transform to be called "Transform_to_template.txt" and to be located in the same folder as the input image (alternatively, the paths can be adjusted in the script): saving_the_transform.png
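
Before rerunning the pipeline, the saved transform file can optionally be sanity-checked outside Slicer, for example by loading it with SimpleITK (this check is not part of the pipeline itself):

import SimpleITK as sitk

# Read the manually created ITK transform file and print its parameters.
transform = sitk.ReadTransform(
    "./data/example_input_images/IXI002-Guys-0828-T2/Transform_to_template.txt")
print(transform)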

Step 7: Repeat steps 2-4 with the manual affine transformation

Running the first notebook again with the manual transform should produce a successful defacing. Make sure to delete previous outputs first, because the notebook will not overwrite existing files.
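
For example, assuming the T1 case had failed and a manual transform was saved next to its input image in step 6, the list from the first notebook's configuration cell would change as follows before the rerun (paths illustrative):

# Point the previously failed case at its newly created manual transform.
existing_transform_paths = [
    "./data/example_input_images/IXI002-Guys-0828-T1/Transform_to_template.txt",  # was None
    "./data/example_input_images/IXI002-Guys-0828-T2/Transform_to_template.txt",
]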
