FracFormer: Semi-supervised Learning for Vertebrae and Fracture Classification on 3D Radiographs with Transformers

This repository is the official implementation of FracFormer: Semi-supervised Learning for Vertebrae and Fracture Classification on 3D Radiographs with Transformers. The framework applies transformer-based models to detect vertebral fractures, pairing a Vision Transformer for spine detection with a Swin Transformer for fracture identification. This approach leverages the strengths of transformers in medical imaging to improve vertebra and fracture classification accuracy on 3D radiographic data.

Overview

FracFormer Architecture

The architecture consists of:

  1. Vertebrae Network: Predicts vertebra visibility labels (C1–C7).
  2. Fracture Network: Predicts fracture probabilities using pseudo-labels from the Vertebrae Network.

Components

1. Vertebrae Network

  • Uses Vision Transformers (ViT) for vertebra visibility prediction.

2. Fracture Network

  • Employs Swin Transformers for detecting fractures.


Installation

  1. Clone the repository:

    git clone https://github.com/seonokkim/FracFormer.git
    cd FracFormer
  2. Install dependencies:

    pip install -r requirements.txt
  3. Set up directories:

    mkdir dataset models figures utils

Dataset Preparation

  1. Download the dataset from the RSNA Cervical Spine Fracture Detection competition on Kaggle and place it in the dataset directory with the following structure:

    dataset/
    ├── train_images/
    ├── test_images/
    ├── train.csv
    ├── test.csv
    
  2. Preprocess the dataset:

    python dataset/dataset.py
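The actual pipeline lives in dataset/dataset.py; as an illustration of a typical CT preprocessing step for this kind of data (rescale raw values to Hounsfield units, apply a bone window, normalize to [0, 1]), with all constants hypothetical:

```python
import numpy as np

def preprocess_slice(pixels, slope=1.0, intercept=-1024.0,
                     window_center=400.0, window_width=1800.0):
    """Hypothetical CT preprocessing: raw pixel values -> Hounsfield
    units via the rescale slope/intercept, clipped to a bone window,
    then normalized to [0, 1]."""
    hu = pixels * slope + intercept
    lo = window_center - window_width / 2
    hi = window_center + window_width / 2
    hu = np.clip(hu, lo, hi)
    return (hu - lo) / (hi - lo)

# example: a dark background pixel and a dense bone pixel
out = preprocess_slice(np.array([0.0, 3000.0]))
```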

Training

1. Train Vertebrae Network

The first stage predicts vertebra visibility using Vision Transformers:

python models/fracformer.py

This step:

  • Trains the VertebraeNet.
  • Generates pseudo-labels for vertebra visibility (C1–C7).
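A minimal sketch of the pseudo-labeling step, assuming the Vertebrae Network emits per-slice sigmoid probabilities for the seven levels (the threshold and numbers below are illustrative only):

```python
import numpy as np

def visibility_pseudo_labels(vert_probs, threshold=0.5):
    """Convert Vertebrae Network sigmoid outputs (N slices x 7 levels)
    into binary C1-C7 visibility pseudo-labels."""
    return (vert_probs >= threshold).astype(np.int64)

# toy example: 3 slices, 7 vertebra levels (C1..C7)
probs = np.array([[0.9, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0],
                  [0.2, 0.7, 0.9, 0.6, 0.1, 0.0, 0.0],
                  [0.0, 0.1, 0.3, 0.8, 0.9, 0.7, 0.2]])
labels = visibility_pseudo_labels(probs)
```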

2. Train Fracture Network

The second stage detects fractures using Swin Transformers:

python models/fracformer.py

This step:

  • Trains the FractureNet using the Vertebrae Network predictions.
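One plausible shape for this stage, with the visibility pseudo-labels acting as per-slice loss weights; this is a sketch under that assumption, and the repository's exact loss and wiring are in models/fracformer.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fracture_train_step(net, opt, images, frac_targets, vis_weight):
    """One gradient step for the FractureNet. vis_weight, derived from
    the Vertebrae Network's visibility pseudo-labels, down-weights
    slices where no vertebra is visible (a hypothetical formulation)."""
    opt.zero_grad()
    logits = net(images).squeeze(1)            # (N,) fracture logits
    loss = F.binary_cross_entropy_with_logits(
        logits, frac_targets, weight=vis_weight)
    loss.backward()
    opt.step()
    return loss.item()

# toy stand-in for the Swin backbone, just to show the call shape
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
images = torch.randn(4, 3, 8, 8)
targets = torch.tensor([1.0, 0.0, 1.0, 0.0])
weights = torch.tensor([1.0, 1.0, 0.5, 0.0])   # pseudo-visibility weights
loss = fracture_train_step(net, opt, images, targets, weights)
```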

Testing

  1. Ensure trained models are saved in the models/checkpoints directory:

    models/checkpoints/
    ├── vertebraenet_fold0.tph
    ├── vertebraenet_fold1.tph
    ├── fracturenet_fold0.tph
    ├── fracturenet_fold1.tph
    
  2. Run the inference script:

    python test.py
  3. Results:

    • Generates predictions for fractures.
    • Outputs classification reports and metrics such as AUC-ROC.
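For illustration, the reported metrics can be computed with scikit-learn on per-study fracture labels and predicted probabilities (the numbers below are made up):

```python
import numpy as np
from sklearn.metrics import classification_report, roc_auc_score

# toy ground-truth fracture labels and predicted probabilities
y_true = np.array([0, 0, 1, 1, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.2])

report = classification_report(y_true, (y_prob >= 0.5).astype(int))
auc = roc_auc_score(y_true, y_prob)
print(report)
print("AUC-ROC:", auc)
```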

License

This project is licensed under the MIT License. See the LICENSE file for details.


Acknowledgments

This work builds upon the RSNA Cervical Spine Fracture Detection dataset and leverages cutting-edge Transformer architectures (ViT and Swin Transformers).

