COSE: Conformal Segmentation

Repository with the code for our paper:

Luca Mossina, Joseba Dalmau and Léo Andéol (2024). Conformal Semantic Image Segmentation: Post-hoc Quantification of Predictive Uncertainty. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 3574-3584

  • We will present our work at the 2024 CVPR Workshop SAIAD on June 18, 2024.

Idea

We apply Conformal Prediction to semantic image segmentation with multiple classes. Our contributions include:

  • Novel application of Conformal Risk Control (arXiv), by Angelopoulos, A. N., Bates, S., Fisch, A., Lei, L., & Schuster, T. (2022), sketched in code after this list;
  • Novel visualization of conformal sets via heatmaps;
  • Tests on multiple datasets: Cityscapes (automotive), ADE20K (daily scenes), LoveDA (aerial imaging).
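
As a rough illustration of the underlying mechanism (not the code of this repository; all names are hypothetical), the calibration step can be sketched as follows: each pixel's prediction set contains every class whose softmax score is at least $1 - \lambda$, the loss of an image is the fraction of ground-truth pixels left uncovered, and the calibrated $\hat{\lambda}$ is the smallest value whose adjusted empirical risk on the calibration set stays below a target level $\alpha$.

    import numpy as np

    def coverage_loss(softmax, mask, lam):
        # softmax: (K, H, W) class scores; mask: (H, W) integer ground truth.
        # A pixel is covered when its true class enters the prediction set
        # {k : softmax_k >= 1 - lam}; the loss is the uncovered fraction,
        # which is nonincreasing in lam.
        true_scores = np.take_along_axis(softmax, mask[None], axis=0)[0]
        return float(np.mean(true_scores < 1.0 - lam))

    def crc_calibrate(cal_softmaxes, cal_masks, alpha=0.1, B=1.0):
        # Conformal Risk Control (Angelopoulos et al., 2022): pick the
        # smallest lambda whose adjusted empirical risk is <= alpha.
        # B bounds the loss (here 1, since the loss is a fraction).
        n = len(cal_softmaxes)
        for lam in np.linspace(0.0, 1.0, num=200):
            risk = np.mean([coverage_loss(s, m, lam)
                            for s, m in zip(cal_softmaxes, cal_masks)])
            if (n / (n + 1)) * risk + B / (n + 1) <= alpha:
                return lam
        return 1.0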

An example of conformalized segmentation on the Cityscapes dataset:

Get started

This repository relies on libraries from the OpenMMLab codebase (via mmseg & mmengine) to handle the pretrained models and the datasets, and on PyTorch for everything else.

For the moment, you must either choose existing models and datasets from mmsegmentation or adapt your code to this library. We plan to release a more general version that works with plain PyTorch models and dataloaders, with minimal requirements (a softmax output).

1. Make a virtual environment and install our repo

The following steps should ensure that the library and experiments run correctly:

  1. Make a virtual environment named .venv, as specified in the Makefile

    $ make venv
    $ make cose_path
    
  2. Write the project's environment variables to a file named .env, which should not be committed. For example:

    $ DATASET_NAME='Cityscapes'
    $ cd path/to/$DATASET_NAME
    $ echo COSE_DATA_${DATASET_NAME}=$PWD >> ~/projects/vision/cose/.env
    

Repeat these steps for every dataset: "Cityscapes", "ADE20K", "LoveDA".
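
For reference, here is a hypothetical sketch of how such a variable can be read back in Python (assuming the python-dotenv package and the variable name written above; the actual loading code in cose may differ):

    import os
    from dotenv import load_dotenv  # pip install python-dotenv

    load_dotenv()  # read the .env file at the project root
    # Variable name follows the COSE_DATA_<dataset> pattern written above.
    cityscapes_root = os.environ["COSE_DATA_Cityscapes"]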

2. Alternative installation

If the make commands above do not work, try to reproduce the following steps:

  1. Create a virtual environment with the venv package from the Python Standard Library:
    $ python3.9 -m venv .venv
    
  2. You must ensure that your GPU/CUDA drivers, PyTorch and mmsegmentation (and their dependencies) are mutually compatible. This can require some trial and error, uninstalling and reinstalling different versions of the same package (e.g. mmcv below); a quick sanity check is sketched after this list. On our machines, this worked:
    $ .venv/bin/python -m pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
    $ .venv/bin/python -m pip install -r requirements.txt
    $ .venv/bin/python -m pip uninstall mmcv
    $ .venv/bin/mim install mmcv
    
  3. To install the cose package locally in editable mode (so that changes to the source take effect after reloading a notebook):
    $ .venv/bin/python -m pip install --editable .
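
After these steps, a quick sanity check can confirm that the CUDA build of torch and the OpenMMLab packages import correctly (a minimal sketch; version numbers will vary):

    import torch
    # A CUDA-enabled build prints a CUDA version string and True.
    print(torch.__version__, torch.version.cuda, torch.cuda.is_available())

    import mmcv
    import mmseg
    print(mmcv.__version__, mmseg.__version__)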
    

3. Software architecture: pytorch, mmsegmentation, etc.

For conformal segmentation, we only assume that the logits of an inference are available from the prediction. This may require modifying the network code to explicitly return them, instead of only the segmentation mask obtained via an argmax.
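
For instance, here is a hypothetical sketch of extracting per-pixel scores with the mmseg 1.x inference API (the config, checkpoint and image paths are placeholders, and the exact API depends on your mmseg version):

    import torch
    from mmseg.apis import init_model, inference_model

    # Placeholder paths: use a config/checkpoint pair from mmsegmentation.
    model = init_model("config.py", "checkpoint.pth", device="cuda:0")
    result = inference_model(model, "image.png")

    # mmseg 1.x exposes the raw per-class scores as `seg_logits`;
    # conformal segmentation only needs this (K, H, W) tensor.
    logits = result.seg_logits.data          # shape: (K, H, W)
    probs = torch.softmax(logits, dim=0)     # per-pixel class probabilities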

In this repo, we use mmsegmentation and other packages of the OpenMMLab projects. We use their model specifications (PyTorch), their pre-trained weights and their dataset wrappers. Their repositories are vast and actively developed: we only use a small set of their tools, ignoring most of the pre-baked utilities for training and inference.

If you use other models/datasets (e.g. via torch.hub), you will need to adapt your code to their idiosyncrasies (this should be straightforward).

In the future, we would like to make a version that does not depend on mmseg. In the meantime, open an issue if you run into problems.

4. Demo notebooks

TODO: Write some clean and simple notebooks to demo the approach.

In the meantime, have a look at the experiments directory.

5. Interactive web applications

We wrote two simple applications (see src/app) using the Gradio library by HuggingFace. To run them, you must first download the datasets and model weights used in our experiments: see scripts/downloaders/download_mods_weights.ipynb.

  1. Thresholding app: observe how the value of the parameter $\lambda \in [0,1]$ influences the heatmap; this is the value we estimate with the CRC conformal algorithm (a toy sketch of the heatmap computation follows this list).
  2. Conformal heatmap: run inferences with pre-conformalized models.
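
As a rough sketch of the idea behind the thresholding app (hypothetical code, not the app itself): for a given $\lambda$, the heatmap can be taken as the per-pixel size of the conformal prediction set, so larger values mark more uncertain regions.

    import numpy as np

    def conformal_heatmap(softmax, lam):
        # softmax: (K, H, W) per-pixel class scores.
        # Count how many classes enter the prediction set
        # {k : softmax_k >= 1 - lam}: set size 1 = confident pixel,
        # larger sizes = more predictive uncertainty.
        return (softmax >= 1.0 - lam).sum(axis=0)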

Run the experiments

See the README.md in the experiments directory.

Citation

@InProceedings{Mossina_2024_conformal_segmentation,
    author    = {Mossina, Luca and Dalmau, Joseba and And\'eol, L\'eo},
    title     = {Conformal Semantic Image Segmentation: Post-hoc Quantification of Predictive Uncertainty},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {3574-3584}
}

Acknowledgments

We would like to thank our colleagues of the DEEL Project (DEpendable Explainable Learning) for their invaluable feedback.

We work on uncertainty, explainability, OOD detection and other topics in trustworthy and certifiable AI.

Have a look at our open-source projects and publications.
