From 4effbf26a1bbc499cf4f7a4ddb4f8121d3ff7206 Mon Sep 17 00:00:00 2001 From: Constantin Pape Date: Sun, 5 May 2024 22:02:13 +0200 Subject: [PATCH] Update documentation --- micro_sam.html | 370 +++--- micro_sam/bioimageio/model_export.html | 1037 ++++++++--------- micro_sam/evaluation/evaluation.html | 7 +- micro_sam/evaluation/inference.html | 681 +++++------ .../evaluation/instance_segmentation.html | 804 ++++++------- micro_sam/inference.html | 587 +++++----- micro_sam/training/training.html | 2 +- search.js | 2 +- 8 files changed, 1785 insertions(+), 1705 deletions(-) diff --git a/micro_sam.html b/micro_sam.html index 06268058..e0a22068 100644 --- a/micro_sam.html +++ b/micro_sam.html @@ -45,12 +45,12 @@

Contents

  • Using the Python Library
  • -
  • Finetuned models +
  • Finetuned Models
  • FAQ
  • Contribution Guide
  • -
  • Using micro_sam on BAND +
  • Using micro_sam on BAND
  • @@ -111,7 +105,7 @@

    Segment Anything for Microscopy

    -

    Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of Segment Anything by Meta AI and specializes it for microscopy and other bio-imaging data. +

    Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of Segment Anything by Meta AI and specializes it for microscopy and other biomedical imaging data. Its core components are:

      @@ -128,12 +122,12 @@

      We are still working on improving and extending its functionality. The current roadmap includes:

        -
      • Releasing more and better finetuned models.
      • -
      • Integrating parameter efficient training and compressed models for faster fine-tuning.
      • +
      • Releasing more and better finetuned models for the biomedical imaging domain.
      • +
      • Integrating parameter efficient training and compressed models for efficient fine-tuning and faster inference.
      • Improving the 3D segmentation and tracking functionality.
      -

      If you run into any problems or have questions please open an issue or reach out via image.sc using the tag micro-sam.

      +

      If you run into any problems or have questions please open an issue or reach out via image.sc using the tag micro-sam.

      Quickstart

      @@ -142,9 +136,9 @@

      Quickstart

      $ mamba install -c conda-forge micro_sam
       
      -

      We also provide installers for Windows and Linux. For more details on the available installation options check out the installation section.

      +

      We also provide installers for Windows and Linux. For more details on the available installation options, check out the installation section.

      -

      After installing micro_sam you can start napari and select the annotation tool you want to use from Plugins->Segment Anything for Microscopy. Check out the quickstart tutorial video for a short introduction and the annotation tool section for details.

      +

After installing micro_sam you can start napari and select the annotation tool you want to use from Plugins -> Segment Anything for Microscopy. Check out the quickstart tutorial video for a short introduction and the annotation tool section for details.

      The micro_sam python library can be imported via
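For completeness, the corresponding one-liner (nothing beyond the package name is needed):

import micro_sam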

      @@ -163,9 +157,9 @@

      Citation

      If you are using micro_sam in your research please cite

      Installation

      @@ -174,11 +168,11 @@

      Installation

      • From mamba is the recommended way if you want to use all functionality.
      • -
      • From source for setting up a development environment to use the development version and to change and contribute to our software.
      • -
      • From installer to install it without having to use mamba (supported platforms: Windows and Linux, only for CPU users).
      • +
      • From source for setting up a development environment to use the latest version and to change and contribute to our software.
      • +
      • From installer to install it without having to use mamba (supported platforms: Windows and Linux, supports only CPU).
      -

      You can find more information on the installation and how to troubleshoot it in the FAQ section.

      +

      You can find more information on the installation and how to troubleshoot it in the FAQ section.

      From mamba

      @@ -190,18 +184,24 @@

      From mamba

      micro_sam can be installed in an existing environment via:

      -
      $ mamba install -c conda-forge micro_sam
      +
      +
      $ mamba install -c conda-forge micro_sam
       
      +
      -

      or you can create a new environment (here called micro-sam) with it via:

      +

      or you can create a new environment (here called micro-sam) via:

      -
      $ mamba create -c conda-forge -n micro-sam micro_sam
      +
      +
      $ mamba create -c conda-forge -n micro-sam micro_sam
       
      +

If you want to use the GPU you need to install PyTorch from the pytorch channel instead of conda-forge. For example:

      -
      $ mamba create -c pytorch -c nvidia -c conda-forge micro_sam pytorch pytorch-cuda=12.1
      +
      +
      $ mamba create -c pytorch -c nvidia -c conda-forge micro_sam pytorch pytorch-cuda=12.1
       
      +

      You may need to change this command to install the correct CUDA version for your system, see https://pytorch.org/ for details.

      @@ -220,36 +220,46 @@

      From source

1. Clone the repository:
    • -
      $ git clone https://github.com/computational-cell-analytics/micro-sam
      +
      +
      $ git clone https://github.com/computational-cell-analytics/micro-sam
       
      +
2. Enter it:
      -
      $ cd micro-sam
      +
      +
      $ cd micro-sam
       
      +
3. Create the GPU or CPU environment:
      -
      $ mamba env create -f <ENV_FILE>.yaml
      +
      +
      $ mamba env create -f <ENV_FILE>.yaml
       
      +
4. Activate the environment:
      -
      $ mamba activate sam
      +
      +
      $ mamba activate sam
       
      +
5. Install micro_sam:
      -
      $ pip install -e .
      +
      +
      $ pip install -e .
       
      +

      From installer

      @@ -265,25 +275,26 @@

      From installer

      The installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam via mamba instead. They will also not enable using the python library.

      -

      Linux Installer:

      +

      Linux Installer:

      To use the installer:

      • Unpack the zip file you have downloaded.
      • -
      • Make the installer executable: $ chmod +x micro_sam-0.2.0post1-Linux-x86_64.sh
      • -
      • Run the installer: $./micro_sam-0.2.0post1-Linux-x86_64.sh$ +
      • Make the installer executable: $ chmod +x micro_sam-1.0.0post0-Linux-x86_64.sh
      • +
      • Run the installer: ./micro_sam-1.0.0post0-Linux-x86_64.sh
        • You can select where to install micro_sam during the installation. By default it will be installed in $HOME/micro_sam.
        • The installer will unpack all micro_sam files to the installation directory.
      • -
      • After the installation you can start the annotator with the command .../micro_sam/bin/micro_sam.annotator. +
      • After the installation you can start the annotator with the command .../micro_sam/bin/napari.
          -
        • To make it easier to run the annotation tool you can add .../micro_sam/bin to your PATH or set a softlink to .../micro_sam/bin/micro_sam.annotator.
        • +
        • Proceed with the steps described in Annotation Tools
        • +
        • To make it easier to run the annotation tool you can add .../micro_sam/bin to your PATH or set a softlink to .../micro_sam/bin/napari.
      -

      Windows Installer:

      +

      Windows Installer:

      • Unpack the zip file you have downloaded.
      • @@ -293,7 +304,8 @@

        From installer

        • The installer will unpack all micro_sam files to the installation directory.
        -
      • After the installation you can start the annotator by double clicking on .\micro_sam\Scripts\micro_sam.annotator.exe or with the command .\micro_sam\Scripts\micro_sam.annotator.exe from the Command Prompt.
      • +
      • After the installation you can start the annotator by double clicking on .\micro_sam\Scripts\micro_sam.annotator.exe or with the command .\micro_sam\Scripts\napari.exe from the Command Prompt.
      • +
      • Proceed with the steps described in Annotation Tools

+We recommend checking out our latest preprint for details on the results, including how much data is required for finetuning Segment Anything.

      -

      The training logic is implemented in micro_sam.training and is based on torch-em. Check out the finetuning notebook to see how to use it.

      +

The training logic is implemented in micro_sam.training and is based on torch-em. Check out the finetuning notebook to see how to use it. +We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster. +The notebook explains how to train it together with the rest of SAM and how to then use it.
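For orientation, a minimal finetuning sketch. The function and argument names below (default_sam_loader, train_sam, with_segmentation_decoder, patch_shape) are assumptions about the micro_sam.training API, the file paths are placeholders, and the finetuning notebook remains the authoritative reference.

# NOTE: function and argument names are assumed from the micro_sam.training API;
# check the finetuning notebook for the exact usage.
from micro_sam.training import default_sam_loader, train_sam

# Data loaders for training and validation (paths and patch shape are placeholders).
train_loader = default_sam_loader(
    raw_paths="data/train_images.tif", raw_key=None,
    label_paths="data/train_labels.tif", label_key=None,
    patch_shape=(512, 512), batch_size=1,
    with_segmentation_decoder=True,  # also train the extra decoder for automatic instance segmentation
)
val_loader = default_sam_loader(
    raw_paths="data/val_images.tif", raw_key=None,
    label_paths="data/val_labels.tif", label_key=None,
    patch_shape=(512, 512), batch_size=1,
    with_segmentation_decoder=True,
)

train_sam(
    name="sam_finetuned",    # name used for the saved checkpoint
    model_type="vit_b_lm",   # start from one of the finetuned light microscopy models
    train_loader=train_loader,
    val_loader=val_loader,
    n_epochs=25,
    with_segmentation_decoder=True,
)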

      -

      We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster. -The notebook explains how to activate training it together with the rest of SAM and how to then use it.

      +

      More advanced examples, including quantitative and qualitative evaluation, can be found in the finetuning directory, which contains the code for training and evaluating our models. You can find further information on model training in the FAQ section.

      -

      More advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in finetuning, which contains the code for training and evaluating our models. You can find further information on model training in the FAQ section.

      +

      TODO put table with resources here

      -

      Finetuned models

      +

      Finetuned Models

      In addition to the original Segment Anything models, we provide models that are finetuned on microscopy data. -The additional models are available in the bioimage.io modelzoo and are also hosted on zenodo.

      +The additional models are available in the BioImage.IO Model Zoo and are also hosted on Zenodo.

      We currently offer the following models:

        -
      • vit_h: Default Segment Anything model with vit-h backbone.
      • -
      • vit_l: Default Segment Anything model with vit-l backbone.
      • -
      • vit_b: Default Segment Anything model with vit-b backbone.
      • -
      • vit_t: Segment Anything model with vit-tiny backbone. From the Mobile SAM publication.
      • -
      • vit_l_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-l backbone. (zenodo, bioimage.io)
      • -
      • vit_b_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone. (zenodo, diplomatic-bug on bioimage.io)
      • -
      • vit_t_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-t backbone. (zenodo, bioimage.io)
      • -
      • vit_l_em_organelles: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with vit-l backbone. (zenodo, bioimage.io)
      • -
      • vit_b_em_organelles: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with vit-b backbone. (zenodo, bioimage.io)
      • -
      • vit_t_em_organelles: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with vit-t backbone. (zenodo, bioimage.io)
      • +
      • vit_h: Default Segment Anything model with ViT Huge backbone.
      • +
      • vit_l: Default Segment Anything model with ViT Large backbone.
      • +
      • vit_b: Default Segment Anything model with ViT Base backbone.
      • +
      • vit_t: Segment Anything model with ViT Tiny backbone. From the Mobile SAM publication.
      • +
      • vit_l_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Large backbone. (Zenodo) (idealistic-rat on BioImage.IO)
      • +
      • vit_b_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Base backbone. (Zenodo) (diplomatic-bug on BioImage.IO)
      • +
• vit_t_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Tiny backbone. (Zenodo) (faithful-chicken on BioImage.IO)
      • +
• vit_l_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Large backbone. (Zenodo) (humorous-crab on BioImage.IO)
      • +
• vit_b_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Base backbone. (Zenodo) (noisy-ox on BioImage.IO)
      • +
• vit_t_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Tiny backbone. (Zenodo) (greedy-whale on BioImage.IO)

      See the two figures below of the improvements through the finetuned model for LM and EM data.

      @@ -540,7 +551,7 @@

      Finetuned models

      -

      You can select which model to use for annotation by selecting the corresponding name in the embedding menu:

      +

      You can select which model to use in the annotation tools by selecting the corresponding name in the Model: drop-down menu in the embedding menu:

      @@ -561,9 +572,9 @@

      Choosing a Model

      See also the figures above for examples where the finetuned models work better than the default models. We are working on further improving these models and adding new models for other biomedical imaging domains.

      -

      Older Models

      +

      Other Models

      -

      Previous versions of our models are available on zenodo:

      +

      Previous versions of our models are available on Zenodo:

      • vit_b_em_boundaries: for segmenting compartments delineated by boundaries such as cells or neurites in EM.
      • @@ -575,10 +586,23 @@

        Older Models

We do not recommend using these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.

        +

We also provide additional models that were used for experiments in our publication on Zenodo:

        + + +

        FAQ

        Here we provide frequently asked questions and common issues. -If you encounter a problem or question not addressed here feel free to open an issue or to ask your question on image.sc with the tag micro-sam.

        +If you encounter a problem or question not addressed here feel free to open an issue or to ask your question on image.sc with the tag micro-sam.

        Installation questions

        @@ -597,18 +621,26 @@

        3. What is the minimum system requirement for micro_sam?

        From our experience, the micro_sam annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM. -You might encounter some slowness for $leq$ 8GB RAM. The resources micro_sam's annotation tools have been tested on are:

        +You might encounter some slowness for $\leq$ 8GB RAM. The resources micro_sam's annotation tools have been tested on are:

        • Windows:
          • Windows 10 Pro, Intel i5 7th Gen, 8GB RAM
          • +
          • Windows 10 Enterprise LTSC, Intel i7 13th Gen, 32GB RAM
          • +
          • Windows 10 Pro for Workstations, Intel Xeon W-2295, 128GB RAM
        • -
        • Linux: +
        +
          +
        • Linux:

          + +
            +
          • Ubuntu 20.04, Intel i7 11th Gen, 32GB RAM
          • Ubuntu 22.04, Intel i7 12th Gen, 32GB RAM
        • -
        • Mac: +
        • Mac:

          +
          • macOS Sonoma 14.4.1
              @@ -652,20 +684,20 @@

              micro_sam models on your own microscopy data, in case the provided models do not suffice your needs. One caveat: You need to annotate a few objects before-hand (micro_sam has the potential of improving interactive segmentation with only a few annotated objects) to proceed with the supervised finetuning procedure. +
• In addition, you can finetune the Segment Anything / micro_sam models on your own microscopy data, in case the provided models do not meet your needs. One caveat: You need to annotate a few objects beforehand (micro_sam can improve interactive segmentation with only a few annotated objects) to proceed with the supervised finetuning procedure.

            2. Which model should I use for my data?

We currently provide three different kinds of models: the default models vit_h, vit_l, vit_b and vit_t; the models for light microscopy vit_l_lm, vit_b_lm and vit_t_lm; the models for electron microscopy vit_l_em_organelles, vit_b_em_organelles and vit_t_em_organelles. -You should first try the model that best fits the segmentation task your interested in, a lm model for cell or nucleus segmentation in light microscopy or a em_organelles model for segmenting nuclei, mitochondria or other roundish organelles in electron microscopy. +You should first try the model that best fits the segmentation task you're interested in, the lm model for cell or nucleus segmentation in light microscopy or the em_organelles model for segmenting nuclei, mitochondria or other roundish organelles in electron microscopy. If your segmentation problem does not meet these descriptions, or if these models don't work well, you should try one of the default models instead. The letter after vit denotes the size of the image encoder in SAM, h (huge) being the largest and t (tiny) the smallest. The smaller models are faster but may yield worse results. We recommend using either a vit_l or vit_b model; they offer the best trade-off between speed and segmentation quality. You can find more information on model choice here.

            -

            3. I have high-resolution microscopy images, 'micro_sam' does not seem to work.

            +

            3. I have high-resolution microscopy images, micro_sam does not seem to work.

            -

            The Segment Anything model expects inputs of shape 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, or somethimes not work at all. To address this, micro_sam implements tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles. You can activate tiling with the tile_shape parameter, which determines the size of the inner tile and halo, which determines the size of the additional overlap.

            +

            The Segment Anything model expects inputs of shape 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, or sometimes not work at all. To address this, micro_sam implements tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles. You can activate tiling with the tile_shape parameter, which determines the size of the inner tile and halo, which determines the size of the additional overlap.

            • If you are using the micro_sam annotation tools, you can specify the values for the tile_shape and halo via the tile_x, tile_y, halo_x and halo_y parameters in the Embedding Settings drop-down menu.
            • @@ -679,16 +711,16 @@
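For the Python library, a hedged sketch of tiled embedding computation. It assumes get_sam_model and precompute_image_embeddings live in micro_sam.util with the keyword arguments shown; the image path is a placeholder.

import imageio.v3 as imageio
# NOTE: function names and keyword arguments are assumed from micro_sam.util.
from micro_sam.util import get_sam_model, precompute_image_embeddings

image = imageio.imread("large_image.tif")  # a large 2d image (placeholder path)
predictor = get_sam_model(model_type="vit_b_lm")

# tile_shape sets the size of the inner tiles, halo the additional overlap added on each side.
embeddings = precompute_image_embeddings(
    predictor, image, tile_shape=(1024, 1024), halo=(256, 256),
)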

              4. The computation of image embeddings takes very long in napari.

              -

              micro_sam pre-computes the image embeddings produced by the vision transformer backbone in Segment Anything, and (optionally) store them on disc. I fyou are using a CPU, this step can take a while for 3d data or time-series (you will see a progress bar in the command-line interface / on the bootom right of napari). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them over to your laptop / local machine to speed this up.

              +

micro_sam pre-computes the image embeddings produced by the vision transformer backbone in Segment Anything, and (optionally) stores them on disc. If you are using a CPU, this step can take a while for 3d data or time-series (you will see a progress bar in the command-line interface / on the bottom right of napari). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them over to your laptop / local machine to speed this up.

                -
              • You can use the command micro_sam.precompute_embeddings for this (it is installed with the rest of the software). You can specify the location of the precomputed embeddings via the embedding_path argument.
              • -
              • You can cache the computed embedding in the napari tool (to avoid recomputing the embeddings again) by passing the path to store the embeddings in the embeddings_save_path option in the Embedding Settings drop-down. You can later load the precomputed image embeddings by entering the path to the stored embeddings there as well.
              • +
              • You can use the command micro_sam.precompute_embeddings for this (it is installed with the rest of the software). You can specify the location of the pre-computed embeddings via the embedding_path argument.
              • +
              • You can cache the computed embedding in the napari tool (to avoid recomputing the embeddings again) by passing the path to store the embeddings in the embeddings_save_path option in the Embedding Settings drop-down. You can later load the pre-computed image embeddings by entering the path to the stored embeddings there as well.
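As a command-line sketch: embedding_path is the argument named above, while the other flag names are assumptions, so check --help for the authoritative list.

$ micro_sam.precompute_embeddings --help   # list all available arguments
$ micro_sam.precompute_embeddings --input_path image.tif --embedding_path embeddings.zarr --model_type vit_b_lm   # flag names besides embedding_path are assumed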

              5. Can I use micro_sam on a CPU?

              -

              Most other processing steps that are very fast even on a CPU, the automatic segmentation step for the default Segment Anything models (typically called as the "Segment Anything" feature or AMG - Automatic Mask Generation) takes several minutes without a GPU (depending on the image size). For large volumes and time-series, segmenting an object interactively in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).

              +

Most processing steps are very fast even on a CPU; however, the automatic segmentation step for the default Segment Anything models (typically called the "Segment Anything" feature or AMG - Automatic Mask Generation) takes several minutes without a GPU (depending on the image size). For large volumes and time-series, segmenting an object interactively in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).

              HINT: All the tutorial videos have been created on CPU resources.

              @@ -696,20 +728,20 @@

              5. Can I use micro_sam<

              6. I generated some segmentations from another tool, can I use it as a starting point in micro_sam?

              -

              You can save and load the results from the committed_objects layer to correct segmentations you obtained from another tool (e.g. CellPose) or save intermediate annotation results. The results can be saved via File -> Save Selected Layers (s) ... in the napari menu-bar on top (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result parameter in the CLI or python script (2d and 3d segmentation). -If you are using an annotation tool you can load the segmentation you want to edit as segmentation layer and renae it to committed_objects.

              +

              You can save and load the results from the committed_objects layer to correct segmentations you obtained from another tool (e.g. CellPose) or save intermediate annotation results. The results can be saved via File -> Save Selected Layers (s) ... in the napari menu-bar on top (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result parameter in the CLI or python script (2d and 3d segmentation). +If you are using an annotation tool you can load the segmentation you want to edit as segmentation layer and rename it to committed_objects.
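A hedged command-line sketch for loading an external segmentation into the 2d annotator: segmentation_result is the parameter named above, while the annotator entry point and the input flag are assumptions, and the file names are placeholders.

$ micro_sam.annotator_2d --help   # check the exact flag names
$ micro_sam.annotator_2d -i image.tif --segmentation_result cellpose_segmentation.tif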

7. I am using micro_sam for segmenting objects. I would like to report the steps for reproducibility. How can this be done?

              -

              The annotation steps and segmentation results can be saved to a zarr file by providing the commit_path in the commit widget. This file will contain all relevant information to reproduce the segmentation.

              +

              The annotation steps and segmentation results can be saved to a Zarr file by providing the commit_path in the commit widget. This file will contain all relevant information to reproduce the segmentation.

              NOTE: This feature is still under development and we have not implemented rerunning the segmentation from this file yet. See this issue for details.

              -

              8. I want to segment complex objects. Both the default Segment Anything models and the micro_sam generalist models do not work for my data. What should I do?

              +

              8. I want to segment objects with complex structures. Both the default Segment Anything models and the micro_sam generalist models do not work for my data. What should I do?

              -

              micro_sam supports interactive annotation using positive and negative point prompts, box prompts and polygon drawing. You can combine multiple types of prompts to improve the segmentation quality. In case the aforementioned suggestions do not work as desired, micro_sam also supports finetuning a model on your data (see the next section). We recommend the following: a) Check which of the provided models performs relatively good on your data, b) Choose the best model as the starting point to train your own specialist model for the desired segmentation task.

              +

micro_sam supports interactive annotation using positive and negative point prompts, box prompts and polygon drawing. You can combine multiple types of prompts to improve the segmentation quality. In case the aforementioned suggestions do not work as desired, micro_sam also supports finetuning a model on your data (see the next section on finetuning). We recommend the following: a) Check which of the provided models performs relatively well on your data, b) Choose the best model as the starting point to train your own specialist model for the desired segmentation task.

9. I am using the annotation tool and napari outputs the following error: While emitting signal ... an error occurred in callback ... This is not a bug in psygnal. See ... above for details.

              @@ -729,7 +761,7 @@

              12. napari seems to be

              Editing (drawing / erasing) very large 2d images or 3d volumes is known to be slow at the moment, as the objects in the layers are stored in-memory. See the related issue.

              -

              13. While computing the embeddings (and / or automatic segmentation), a window stating: "napari" is not responding. pops up.

              +

              13. While computing the embeddings (and / or automatic segmentation), a window stating: "napari" is not responding pops up.

              This can happen for long running computations. You just need to wait a bit longer and the computation will finish.

              @@ -808,47 +840,46 @@

              Contribution Guide

          -

          Discuss your ideas

          - -

          We welcome new contributions!

          +

          Discuss your ideas

          -

          First, discuss your idea by opening a new issue in micro-sam.

          - -

          This allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

          - -

          You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

          +

          We welcome new contributions! First, discuss your idea by opening a new issue in micro-sam. +This allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

          -

          Clone the repository

          +

          Clone the repository

          We use git for version control.

          Clone the repository, and checkout the development branch:

          -
          git clone https://github.com/computational-cell-analytics/micro-sam.git
          -cd micro-sam
          -git checkout dev
          +
          +
          $ git clone https://github.com/computational-cell-analytics/micro-sam.git
          +$ cd micro-sam
          +$ git checkout dev
           
          +
          -

          Create your development environment

          +

          Create your development environment

          We use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.

          -

          Now you can create the environment, install user and develoepr dependencies, and micro-sam as an editable installation:

          +

          Now you can create the environment, install user and developer dependencies, and micro-sam as an editable installation:

          -
          conda env create environment-gpu.yml
          -conda activate sam
          -python -m pip install requirements-dev.txt
          -python -m pip install -e .
          +
          +
$ mamba env create -f environment_gpu.yaml
          +$ mamba activate sam
+$ python -m pip install -r requirements-dev.txt
          +$ python -m pip install -e .
           
          +
          -

          Make your changes

          +

          Make your changes

          Now it's time to make your code changes.

          Typically, changes are made branching off from the development branch. Checkout dev and then create a new branch to work on your changes.

          -
          git checkout dev
          -git checkout -b my-new-feature
          +
          $ git checkout dev
          +$ git checkout -b my-new-feature
           

          We use google style python docstrings to create documentation for all new code.

          @@ -863,8 +894,10 @@

          Run the tests

          To run the tests:

          -
          pytest
          +
          +
          $ pytest
           
          +

          Writing your own tests

          @@ -890,27 +923,31 @@

          Code coverage

          We also use codecov.io to display the code coverage results from our Github Actions continuous integration.

          -

          Open a pull request

          +

          Open a pull request

          Once you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.

          Remember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev branch like this.

          -

          Optional: Build the documentation

          +

          Optional: Build the documentation

          We use pdoc to build the documentation.

          To build the documentation locally, run this command:

          -
          python build_doc.py
          +
          +
          $ python build_doc.py
           
          +

          This will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).

          If you want to save the HTML files, append --out to the command, like this:

          -
          python build_doc.py --out
          +
          +
          $ python build_doc.py --out
           
          +

          This will save the HTML files into a new directory named tmp.

          @@ -927,7 +964,7 @@

          Optional: Build the documentation

        -

        Optional: Benchmark performance

        +

        Optional: Benchmark performance

        There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.

        @@ -938,26 +975,30 @@

        Optional: Benchmark performance

      • Memory profiling with memray
      -

      Run the benchmark script

      +

      Run the benchmark script

      There is a performance benchmark script available in the micro-sam repository at development/benchmark.py.

      To run the benchmark script:

      -
      python development/benchmark.py --model_type vit_t --device cpu`
      +
      +
+$ python development/benchmark.py --model_type vit_t --device cpu
       
      +

      For more details about the user input arguments for the micro-sam benchmark script, see the help:

      -
      python development/benchmark.py --help
      +
      +
      $ python development/benchmark.py --help
       
      +
      -

      Line profiling

      +

      Line profiling

      For more detailed line by line performance results, we can use line-profiler.

      -

      line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

      +

      line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

      To do line-by-line profiling:

      @@ -972,15 +1013,17 @@

      Line profiling

      For more details about the user input arguments for the micro-sam benchmark script, see the help:

      -
      python development/benchmark.py --help
      +
      +
      $ python development/benchmark.py --help
       
      +
      -

      Snakeviz visualization

      +

      Snakeviz visualization

      For more detailed visualizations of profiling results, we use snakeviz.

      -

      SnakeViz is a browser based graphical viewer for the output of Python’s cProfile module

      +

      SnakeViz is a browser based graphical viewer for the output of Python’s cProfile module.
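For orientation, a generic cProfile plus snakeviz workflow (standard usage of the two tools, not micro_sam-specific instructions; the output file name is a placeholder):

$ python -m cProfile -o program.prof development/benchmark.py --model_type vit_t --device cpu
$ snakeviz program.prof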

        @@ -991,7 +1034,7 @@

        Snakeviz visualization

        For more details about how to use snakeviz, see the documentation.

        -

        Memory profiling with memray

        +

        Memory profiling with memray

        If you need to investigate memory use specifically, we use memray.

        @@ -1001,7 +1044,11 @@

        Memory profiling with memray

        For more details about how to use memray, see the documentation.
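For orientation, a generic memray workflow (standard memray commands; the output file name is a placeholder):

$ memray run -o benchmark.memray.bin development/benchmark.py --model_type vit_t --device cpu
$ memray flamegraph benchmark.memray.bin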

        -

        Using micro_sam on BAND

        +

        Creating a new release

        + +

To create a new release, you have to update the version number in micro_sam/__version__.py in a PR. After this PR is merged, the release will be created automatically by the CI.
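The file only holds the version string, for example (the value below is a placeholder):

# micro_sam/__version__.py (version value is a placeholder)
__version__ = "1.0.0post0"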

        + +

        Using micro_sam on BAND

        BAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam is installed there. In order to use BAND and start micro_sam on it follow these steps:

        @@ -1009,17 +1056,17 @@

        Using micro_sam on BAND

        Start BAND

          -
        • Go to https://band.embl.de/ and click Login. If you have not used BAND before you will need to register for BAND. Currently you can only sign up via a google account.
        • +
        • Go to https://band.embl.de/ and click Login. If you have not used BAND before you will need to register for BAND. Currently you can only sign up via a Google account.
        • Launch a BAND desktop with sufficient resources. It's particularly important to select a GPU. The settings from the image below are a good choice.
        • Go to the desktop by clicking GO TO DESKTOP in the Running Desktops menu. See also the screenshot below.

        image

        -

        Start micro_sam in BAND

        +

        Start micro_sam in BAND

          -
        • Select Applications->Image Analysis->uSAM (see screenshot) +
        • Select Applications -> Image Analysis -> uSAM (see screenshot) image
        • This will open the micro_sam menu, where you can select the tool you want to use (see screenshot). Note: this may take a few minutes. image
        • @@ -1036,10 +1083,10 @@

          Transfering data to BAND

To copy data to and from BAND you can use any cloud storage, e.g. ownCloud, Dropbox or Google Drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit differently in BAND:

            -
          • To copy text into BAND you first need to copy it on your computer (e.g. via selecting it + ctrl + c).
          • -
          • Then go to the browser window with BAND and press ctrl + shift + alt. This will open a side window where you can paste your text via ctrl + v.
          • -
          • Then select the text in this window and copy it via ctrl + c.
          • -
          • Now you can close the side window via ctrl + shift + alt and paste the text in band via ctrl + v
          • +
          • To copy text into BAND you first need to copy it on your computer (e.g. via selecting it + Ctrl + C).
          • +
          • Then go to the browser window with BAND and press Ctrl + Shift + Alt. This will open a side window where you can paste your text via Ctrl + V.
          • +
          • Then select the text in this window and copy it via Ctrl + C.
          • +
• Now you can close the side window via Ctrl + Shift + Alt and paste the text in BAND via Ctrl + V.

The video below shows how to copy over a link from ownCloud and then download the data on BAND using copy and paste:

          @@ -1059,14 +1106,13 @@

          Transfering data to BAND

 6.. include:: ../doc/finetuned_models.md
 7.. include:: ../doc/faq.md
 8.. include:: ../doc/contributing.md
- 9.. include:: ../doc/development.md
-10.. include:: ../doc/band.md
-11"""
-12import os
-13
-14from .__version__ import __version__
-15
-16os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
+ 9.. include:: ../doc/band.md
+10"""
+11import os
+12
+13from .__version__ import __version__
+14
+15os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
    diff --git a/micro_sam/bioimageio/model_export.html b/micro_sam/bioimageio/model_export.html index 8c2486b6..f4fe048e 100644 --- a/micro_sam/bioimageio/model_export.html +++ b/micro_sam/bioimageio/model_export.html @@ -297,8 +297,8 @@

237 ).as_single_block()
238 prediction = pp.predict_sample_block(sample)
239
-240 assert len(prediction) == 3
-241 predicted_mask = prediction[0]
+240 predicted_mask = prediction.blocks["masks"].data.data
+241 assert predicted_mask.shape == mask.shape
242 assert np.allclose(mask, predicted_mask)
243
244 # Run the checks with partial prompts.
@@ -322,268 +322,266 @@

    262 model=model_description, image=image, embeddings=embeddings, **kwargs 263 ).as_single_block() 264 prediction = pp.predict_sample_block(sample) -265 assert len(prediction) == 3 -266 predicted_mask = prediction[0] -267 assert predicted_mask.shape == mask.shape +265 predicted_mask = prediction.blocks["masks"].data.data +266 assert predicted_mask.shape == mask.shape +267 268 -269 -270def export_sam_model( -271 image: np.ndarray, -272 label_image: np.ndarray, -273 model_type: str, -274 name: str, -275 output_path: Union[str, os.PathLike], -276 checkpoint_path: Optional[Union[str, os.PathLike]] = None, -277 **kwargs -278) -> None: -279 """Export SAM model to BioImage.IO model format. -280 -281 The exported model can be uploaded to [bioimage.io](https://bioimage.io/#/) and -282 be used in tools that support the BioImage.IO model format. -283 -284 Args: -285 image: The image for generating test data. -286 label_image: The segmentation correspoding to `image`. -287 It is used to derive prompt inputs for the model. -288 model_type: The type of the SAM model. -289 name: The name of the exported model. -290 output_path: Where the exported model is saved. -291 checkpoint_path: Optional checkpoint for loading the SAM model. -292 """ -293 with tempfile.TemporaryDirectory() as tmp_dir: -294 checkpoint_path, decoder_path = _get_checkpoint(model_type, checkpoint_path, tmp_dir) -295 input_paths, result_paths = _create_test_inputs_and_outputs( -296 image, label_image, model_type, checkpoint_path, tmp_dir, -297 ) -298 input_descriptions = [ -299 # First input: the image data. -300 spec.InputTensorDescr( -301 id=spec.TensorId("image"), -302 axes=[ -303 spec.BatchAxis(), -304 # NOTE: to support 1 and 3 channels we can add another preprocessing. -305 # Best solution: Have a pre-processing for this! 
(1C -> RGB) -306 spec.ChannelAxis(channel_names=[spec.Identifier(cname) for cname in "RGB"]), -307 spec.SpaceInputAxis(id=spec.AxisId("y"), size=spec.ARBITRARY_SIZE), -308 spec.SpaceInputAxis(id=spec.AxisId("x"), size=spec.ARBITRARY_SIZE), -309 ], -310 test_tensor=spec.FileDescr(source=input_paths["image"]), -311 data=spec.IntervalOrRatioDataDescr(type="uint8") -312 ), -313 -314 # Second input: the box prompts (optional) -315 spec.InputTensorDescr( -316 id=spec.TensorId("box_prompts"), -317 optional=True, -318 axes=[ -319 spec.BatchAxis(), -320 spec.IndexInputAxis( -321 id=spec.AxisId("object"), -322 size=spec.ARBITRARY_SIZE -323 ), -324 spec.ChannelAxis(channel_names=[spec.Identifier(bname) for bname in "hwxy"]), -325 ], -326 test_tensor=spec.FileDescr(source=input_paths["box_prompts"]), -327 data=spec.IntervalOrRatioDataDescr(type="int64") -328 ), -329 -330 # Third input: the point prompt coordinates (optional) -331 spec.InputTensorDescr( -332 id=spec.TensorId("point_prompts"), -333 optional=True, -334 axes=[ -335 spec.BatchAxis(), -336 spec.IndexInputAxis( -337 id=spec.AxisId("object"), -338 size=spec.ARBITRARY_SIZE -339 ), -340 spec.IndexInputAxis( -341 id=spec.AxisId("point"), -342 size=spec.ARBITRARY_SIZE -343 ), -344 spec.ChannelAxis(channel_names=[spec.Identifier(bname) for bname in "xy"]), -345 ], -346 test_tensor=spec.FileDescr(source=input_paths["point_prompts"]), -347 data=spec.IntervalOrRatioDataDescr(type="int64") -348 ), -349 -350 # Fourth input: the point prompt labels (optional) -351 spec.InputTensorDescr( -352 id=spec.TensorId("point_labels"), -353 optional=True, -354 axes=[ -355 spec.BatchAxis(), -356 spec.IndexInputAxis( -357 id=spec.AxisId("object"), -358 size=spec.ARBITRARY_SIZE -359 ), -360 spec.IndexInputAxis( -361 id=spec.AxisId("point"), -362 size=spec.ARBITRARY_SIZE -363 ), -364 ], -365 test_tensor=spec.FileDescr(source=input_paths["point_labels"]), -366 data=spec.IntervalOrRatioDataDescr(type="int64") -367 ), -368 -369 # Fifth input: the mask prompts (optional) -370 spec.InputTensorDescr( -371 id=spec.TensorId("mask_prompts"), -372 optional=True, -373 axes=[ -374 spec.BatchAxis(), -375 spec.IndexInputAxis( -376 id=spec.AxisId("object"), -377 size=spec.ARBITRARY_SIZE -378 ), -379 spec.ChannelAxis(channel_names=["channel"]), -380 spec.SpaceInputAxis(id=spec.AxisId("y"), size=256), -381 spec.SpaceInputAxis(id=spec.AxisId("x"), size=256), -382 ], -383 test_tensor=spec.FileDescr(source=input_paths["mask_prompts"]), -384 data=spec.IntervalOrRatioDataDescr(type="float32") -385 ), -386 -387 # Sixth input: the image embeddings (optional) -388 spec.InputTensorDescr( -389 id=spec.TensorId("embeddings"), -390 optional=True, -391 axes=[ -392 spec.BatchAxis(), -393 # NOTE: we currently have to specify all the channel names -394 # (It would be nice to also support size) -395 spec.ChannelAxis(channel_names=[spec.Identifier(f"c{i}") for i in range(256)]), -396 spec.SpaceInputAxis(id=spec.AxisId("y"), size=64), -397 spec.SpaceInputAxis(id=spec.AxisId("x"), size=64), -398 ], -399 test_tensor=spec.FileDescr(source=result_paths["embeddings"]), -400 data=spec.IntervalOrRatioDataDescr(type="float32") -401 ), -402 -403 ] -404 -405 output_descriptions = [ -406 # First output: The mask predictions. 
-407 spec.OutputTensorDescr( -408 id=spec.TensorId("masks"), -409 axes=[ -410 spec.BatchAxis(), -411 # NOTE: we use the data dependent size here to avoid dependency on optional inputs -412 spec.IndexOutputAxis( -413 id=spec.AxisId("object"), size=spec.DataDependentSize(), -414 ), -415 # NOTE: this could be a 3 once we use multi-masking -416 spec.ChannelAxis(channel_names=[spec.Identifier("mask")]), -417 spec.SpaceOutputAxis( -418 id=spec.AxisId("y"), -419 size=spec.SizeReference( -420 tensor_id=spec.TensorId("image"), axis_id=spec.AxisId("y"), -421 ) -422 ), -423 spec.SpaceOutputAxis( -424 id=spec.AxisId("x"), -425 size=spec.SizeReference( -426 tensor_id=spec.TensorId("image"), axis_id=spec.AxisId("x"), -427 ) -428 ) -429 ], -430 data=spec.IntervalOrRatioDataDescr(type="uint8"), -431 test_tensor=spec.FileDescr(source=result_paths["mask"]) -432 ), -433 -434 # The score predictions -435 spec.OutputTensorDescr( -436 id=spec.TensorId("scores"), -437 axes=[ -438 spec.BatchAxis(), -439 # NOTE: we use the data dependent size here to avoid dependency on optional inputs -440 spec.IndexOutputAxis( -441 id=spec.AxisId("object"), size=spec.DataDependentSize(), -442 ), -443 # NOTE: this could be a 3 once we use multi-masking -444 spec.ChannelAxis(channel_names=[spec.Identifier("mask")]), -445 ], -446 data=spec.IntervalOrRatioDataDescr(type="float32"), -447 test_tensor=spec.FileDescr(source=result_paths["score"]) -448 ), -449 -450 # The image embeddings -451 spec.OutputTensorDescr( -452 id=spec.TensorId("embeddings"), -453 axes=[ -454 spec.BatchAxis(), -455 spec.ChannelAxis(channel_names=[spec.Identifier(f"c{i}") for i in range(256)]), -456 spec.SpaceOutputAxis(id=spec.AxisId("y"), size=64), -457 spec.SpaceOutputAxis(id=spec.AxisId("x"), size=64), -458 ], -459 data=spec.IntervalOrRatioDataDescr(type="float32"), -460 test_tensor=spec.FileDescr(source=result_paths["embeddings"]) -461 ) -462 ] -463 -464 architecture_path = os.path.join(os.path.split(__file__)[0], "predictor_adaptor.py") -465 architecture = spec.ArchitectureFromFileDescr( -466 source=Path(architecture_path), -467 callable="PredictorAdaptor", -468 kwargs={"model_type": model_type} -469 ) -470 -471 dependency_file = os.path.join(tmp_dir, "environment.yaml") -472 _write_dependencies(dependency_file, require_mobile_sam=model_type.startswith("vit_t")) -473 -474 weight_descriptions = spec.WeightsDescr( -475 pytorch_state_dict=spec.PytorchStateDictWeightsDescr( -476 source=Path(checkpoint_path), -477 architecture=architecture, -478 pytorch_version=spec.Version(torch.__version__), -479 dependencies=spec.EnvironmentFileDescr(source=dependency_file), -480 ) -481 ) -482 -483 doc_path = _write_documentation(kwargs.get("documentation", None), model_type, tmp_dir) -484 -485 covers = kwargs.get("covers", None) -486 if covers is None: -487 covers = _generate_covers(input_paths, result_paths, tmp_dir) -488 else: -489 assert all(os.path.exists(cov) for cov in covers) -490 -491 # the uploader information is only added if explicitly passed -492 extra_kwargs = {} -493 if "id" in kwargs: -494 extra_kwargs["id"] = kwargs["id"] -495 if "id_emoji" in kwargs: -496 extra_kwargs["id_emoji"] = kwargs["id_emoji"] -497 if "uploader" in kwargs: -498 extra_kwargs["uploader"] = kwargs["uploader"] -499 -500 if decoder_path is not None: -501 extra_kwargs["attachments"] = [spec.FileDescr(source=decoder_path)] -502 -503 model_description = spec.ModelDescr( -504 name=name, -505 inputs=input_descriptions, -506 outputs=output_descriptions, -507 weights=weight_descriptions, -508 
description=kwargs.get("description", DEFAULTS["description"]), -509 authors=kwargs.get("authors", DEFAULTS["authors"]), -510 cite=kwargs.get("cite", DEFAULTS["cite"]), -511 license=spec.LicenseId("CC-BY-4.0"), -512 documentation=Path(doc_path), -513 git_repo=spec.HttpUrl("https://github.com/computational-cell-analytics/micro-sam"), -514 tags=kwargs.get("tags", DEFAULTS["tags"]), -515 covers=covers, -516 **extra_kwargs, -517 # TODO write specific settings in the config -518 # dict with yaml values, key must be a str -519 # micro_sam: ... -520 # config= -521 ) -522 -523 # TODO this requires the new bioimageio.core release -524 # _check_model(model_description, input_paths, result_paths) -525 -526 save_bioimageio_package(model_description, output_path=output_path) +269def export_sam_model( +270 image: np.ndarray, +271 label_image: np.ndarray, +272 model_type: str, +273 name: str, +274 output_path: Union[str, os.PathLike], +275 checkpoint_path: Optional[Union[str, os.PathLike]] = None, +276 **kwargs +277) -> None: +278 """Export SAM model to BioImage.IO model format. +279 +280 The exported model can be uploaded to [bioimage.io](https://bioimage.io/#/) and +281 be used in tools that support the BioImage.IO model format. +282 +283 Args: +284 image: The image for generating test data. +285 label_image: The segmentation correspoding to `image`. +286 It is used to derive prompt inputs for the model. +287 model_type: The type of the SAM model. +288 name: The name of the exported model. +289 output_path: Where the exported model is saved. +290 checkpoint_path: Optional checkpoint for loading the SAM model. +291 """ +292 with tempfile.TemporaryDirectory() as tmp_dir: +293 checkpoint_path, decoder_path = _get_checkpoint(model_type, checkpoint_path, tmp_dir) +294 input_paths, result_paths = _create_test_inputs_and_outputs( +295 image, label_image, model_type, checkpoint_path, tmp_dir, +296 ) +297 input_descriptions = [ +298 # First input: the image data. +299 spec.InputTensorDescr( +300 id=spec.TensorId("image"), +301 axes=[ +302 spec.BatchAxis(size=1), +303 # NOTE: to support 1 and 3 channels we can add another preprocessing. +304 # Best solution: Have a pre-processing for this! 
(1C -> RGB) +305 spec.ChannelAxis(channel_names=[spec.Identifier(cname) for cname in "RGB"]), +306 spec.SpaceInputAxis(id=spec.AxisId("y"), size=spec.ARBITRARY_SIZE), +307 spec.SpaceInputAxis(id=spec.AxisId("x"), size=spec.ARBITRARY_SIZE), +308 ], +309 test_tensor=spec.FileDescr(source=input_paths["image"]), +310 data=spec.IntervalOrRatioDataDescr(type="uint8") +311 ), +312 +313 # Second input: the box prompts (optional) +314 spec.InputTensorDescr( +315 id=spec.TensorId("box_prompts"), +316 optional=True, +317 axes=[ +318 spec.BatchAxis(size=1), +319 spec.IndexInputAxis( +320 id=spec.AxisId("object"), +321 size=spec.ARBITRARY_SIZE +322 ), +323 spec.ChannelAxis(channel_names=[spec.Identifier(bname) for bname in "hwxy"]), +324 ], +325 test_tensor=spec.FileDescr(source=input_paths["box_prompts"]), +326 data=spec.IntervalOrRatioDataDescr(type="int64") +327 ), +328 +329 # Third input: the point prompt coordinates (optional) +330 spec.InputTensorDescr( +331 id=spec.TensorId("point_prompts"), +332 optional=True, +333 axes=[ +334 spec.BatchAxis(size=1), +335 spec.IndexInputAxis( +336 id=spec.AxisId("object"), +337 size=spec.ARBITRARY_SIZE +338 ), +339 spec.IndexInputAxis( +340 id=spec.AxisId("point"), +341 size=spec.ARBITRARY_SIZE +342 ), +343 spec.ChannelAxis(channel_names=[spec.Identifier(bname) for bname in "xy"]), +344 ], +345 test_tensor=spec.FileDescr(source=input_paths["point_prompts"]), +346 data=spec.IntervalOrRatioDataDescr(type="int64") +347 ), +348 +349 # Fourth input: the point prompt labels (optional) +350 spec.InputTensorDescr( +351 id=spec.TensorId("point_labels"), +352 optional=True, +353 axes=[ +354 spec.BatchAxis(size=1), +355 spec.IndexInputAxis( +356 id=spec.AxisId("object"), +357 size=spec.ARBITRARY_SIZE +358 ), +359 spec.IndexInputAxis( +360 id=spec.AxisId("point"), +361 size=spec.ARBITRARY_SIZE +362 ), +363 ], +364 test_tensor=spec.FileDescr(source=input_paths["point_labels"]), +365 data=spec.IntervalOrRatioDataDescr(type="int64") +366 ), +367 +368 # Fifth input: the mask prompts (optional) +369 spec.InputTensorDescr( +370 id=spec.TensorId("mask_prompts"), +371 optional=True, +372 axes=[ +373 spec.BatchAxis(size=1), +374 spec.IndexInputAxis( +375 id=spec.AxisId("object"), +376 size=spec.ARBITRARY_SIZE +377 ), +378 spec.ChannelAxis(channel_names=["channel"]), +379 spec.SpaceInputAxis(id=spec.AxisId("y"), size=256), +380 spec.SpaceInputAxis(id=spec.AxisId("x"), size=256), +381 ], +382 test_tensor=spec.FileDescr(source=input_paths["mask_prompts"]), +383 data=spec.IntervalOrRatioDataDescr(type="float32") +384 ), +385 +386 # Sixth input: the image embeddings (optional) +387 spec.InputTensorDescr( +388 id=spec.TensorId("embeddings"), +389 optional=True, +390 axes=[ +391 spec.BatchAxis(size=1), +392 # NOTE: we currently have to specify all the channel names +393 # (It would be nice to also support size) +394 spec.ChannelAxis(channel_names=[spec.Identifier(f"c{i}") for i in range(256)]), +395 spec.SpaceInputAxis(id=spec.AxisId("y"), size=64), +396 spec.SpaceInputAxis(id=spec.AxisId("x"), size=64), +397 ], +398 test_tensor=spec.FileDescr(source=result_paths["embeddings"]), +399 data=spec.IntervalOrRatioDataDescr(type="float32") +400 ), +401 +402 ] +403 +404 output_descriptions = [ +405 # First output: The mask predictions. 
+406 spec.OutputTensorDescr( +407 id=spec.TensorId("masks"), +408 axes=[ +409 spec.BatchAxis(size=1), +410 # NOTE: we use the data dependent size here to avoid dependency on optional inputs +411 spec.IndexOutputAxis( +412 id=spec.AxisId("object"), size=spec.DataDependentSize(), +413 ), +414 # NOTE: this could be a 3 once we use multi-masking +415 spec.ChannelAxis(channel_names=[spec.Identifier("mask")]), +416 spec.SpaceOutputAxis( +417 id=spec.AxisId("y"), +418 size=spec.SizeReference( +419 tensor_id=spec.TensorId("image"), axis_id=spec.AxisId("y"), +420 ) +421 ), +422 spec.SpaceOutputAxis( +423 id=spec.AxisId("x"), +424 size=spec.SizeReference( +425 tensor_id=spec.TensorId("image"), axis_id=spec.AxisId("x"), +426 ) +427 ) +428 ], +429 data=spec.IntervalOrRatioDataDescr(type="uint8"), +430 test_tensor=spec.FileDescr(source=result_paths["mask"]) +431 ), +432 +433 # The score predictions +434 spec.OutputTensorDescr( +435 id=spec.TensorId("scores"), +436 axes=[ +437 spec.BatchAxis(size=1), +438 # NOTE: we use the data dependent size here to avoid dependency on optional inputs +439 spec.IndexOutputAxis( +440 id=spec.AxisId("object"), size=spec.DataDependentSize(), +441 ), +442 # NOTE: this could be a 3 once we use multi-masking +443 spec.ChannelAxis(channel_names=[spec.Identifier("mask")]), +444 ], +445 data=spec.IntervalOrRatioDataDescr(type="float32"), +446 test_tensor=spec.FileDescr(source=result_paths["score"]) +447 ), +448 +449 # The image embeddings +450 spec.OutputTensorDescr( +451 id=spec.TensorId("embeddings"), +452 axes=[ +453 spec.BatchAxis(size=1), +454 spec.ChannelAxis(channel_names=[spec.Identifier(f"c{i}") for i in range(256)]), +455 spec.SpaceOutputAxis(id=spec.AxisId("y"), size=64), +456 spec.SpaceOutputAxis(id=spec.AxisId("x"), size=64), +457 ], +458 data=spec.IntervalOrRatioDataDescr(type="float32"), +459 test_tensor=spec.FileDescr(source=result_paths["embeddings"]) +460 ) +461 ] +462 +463 architecture_path = os.path.join(os.path.split(__file__)[0], "predictor_adaptor.py") +464 architecture = spec.ArchitectureFromFileDescr( +465 source=Path(architecture_path), +466 callable="PredictorAdaptor", +467 kwargs={"model_type": model_type} +468 ) +469 +470 dependency_file = os.path.join(tmp_dir, "environment.yaml") +471 _write_dependencies(dependency_file, require_mobile_sam=model_type.startswith("vit_t")) +472 +473 weight_descriptions = spec.WeightsDescr( +474 pytorch_state_dict=spec.PytorchStateDictWeightsDescr( +475 source=Path(checkpoint_path), +476 architecture=architecture, +477 pytorch_version=spec.Version(torch.__version__), +478 dependencies=spec.EnvironmentFileDescr(source=dependency_file), +479 ) +480 ) +481 +482 doc_path = _write_documentation(kwargs.get("documentation", None), model_type, tmp_dir) +483 +484 covers = kwargs.get("covers", None) +485 if covers is None: +486 covers = _generate_covers(input_paths, result_paths, tmp_dir) +487 else: +488 assert all(os.path.exists(cov) for cov in covers) +489 +490 # the uploader information is only added if explicitly passed +491 extra_kwargs = {} +492 if "id" in kwargs: +493 extra_kwargs["id"] = kwargs["id"] +494 if "id_emoji" in kwargs: +495 extra_kwargs["id_emoji"] = kwargs["id_emoji"] +496 if "uploader" in kwargs: +497 extra_kwargs["uploader"] = kwargs["uploader"] +498 +499 if decoder_path is not None: +500 extra_kwargs["attachments"] = [spec.FileDescr(source=decoder_path)] +501 +502 model_description = spec.ModelDescr( +503 name=name, +504 inputs=input_descriptions, +505 outputs=output_descriptions, +506 
weights=weight_descriptions, +507 description=kwargs.get("description", DEFAULTS["description"]), +508 authors=kwargs.get("authors", DEFAULTS["authors"]), +509 cite=kwargs.get("cite", DEFAULTS["cite"]), +510 license=spec.LicenseId("CC-BY-4.0"), +511 documentation=Path(doc_path), +512 git_repo=spec.HttpUrl("https://github.com/computational-cell-analytics/micro-sam"), +513 tags=kwargs.get("tags", DEFAULTS["tags"]), +514 covers=covers, +515 **extra_kwargs, +516 # TODO write specific settings in the config +517 # dict with yaml values, key must be a str +518 # micro_sam: ... +519 # config= +520 ) +521 +522 _check_model(model_description, input_paths, result_paths) +523 +524 save_bioimageio_package(model_description, output_path=output_path) @@ -612,263 +610,262 @@

    -
    271def export_sam_model(
    -272    image: np.ndarray,
    -273    label_image: np.ndarray,
    -274    model_type: str,
    -275    name: str,
    -276    output_path: Union[str, os.PathLike],
    -277    checkpoint_path: Optional[Union[str, os.PathLike]] = None,
    -278    **kwargs
    -279) -> None:
    -280    """Export SAM model to BioImage.IO model format.
    -281
    -282    The exported model can be uploaded to [bioimage.io](https://bioimage.io/#/) and
    -283    be used in tools that support the BioImage.IO model format.
    -284
    -285    Args:
    -286        image: The image for generating test data.
    -287        label_image: The segmentation correspoding to `image`.
    -288            It is used to derive prompt inputs for the model.
    -289        model_type: The type of the SAM model.
    -290        name: The name of the exported model.
    -291        output_path: Where the exported model is saved.
    -292        checkpoint_path: Optional checkpoint for loading the SAM model.
    -293    """
    -294    with tempfile.TemporaryDirectory() as tmp_dir:
    -295        checkpoint_path, decoder_path = _get_checkpoint(model_type, checkpoint_path, tmp_dir)
    -296        input_paths, result_paths = _create_test_inputs_and_outputs(
    -297            image, label_image, model_type, checkpoint_path, tmp_dir,
    -298        )
    -299        input_descriptions = [
    -300            # First input: the image data.
    -301            spec.InputTensorDescr(
    -302                id=spec.TensorId("image"),
    -303                axes=[
    -304                    spec.BatchAxis(),
    -305                    # NOTE: to support 1 and 3 channels we can add another preprocessing.
    -306                    # Best solution: Have a pre-processing for this! (1C -> RGB)
    -307                    spec.ChannelAxis(channel_names=[spec.Identifier(cname) for cname in "RGB"]),
    -308                    spec.SpaceInputAxis(id=spec.AxisId("y"), size=spec.ARBITRARY_SIZE),
    -309                    spec.SpaceInputAxis(id=spec.AxisId("x"), size=spec.ARBITRARY_SIZE),
    -310                ],
    -311                test_tensor=spec.FileDescr(source=input_paths["image"]),
    -312                data=spec.IntervalOrRatioDataDescr(type="uint8")
    -313            ),
    -314
    -315            # Second input: the box prompts (optional)
    -316            spec.InputTensorDescr(
    -317                id=spec.TensorId("box_prompts"),
    -318                optional=True,
    -319                axes=[
    -320                    spec.BatchAxis(),
    -321                    spec.IndexInputAxis(
    -322                        id=spec.AxisId("object"),
    -323                        size=spec.ARBITRARY_SIZE
    -324                    ),
    -325                    spec.ChannelAxis(channel_names=[spec.Identifier(bname) for bname in "hwxy"]),
    -326                ],
    -327                test_tensor=spec.FileDescr(source=input_paths["box_prompts"]),
    -328                data=spec.IntervalOrRatioDataDescr(type="int64")
    -329            ),
    -330
    -331            # Third input: the point prompt coordinates (optional)
    -332            spec.InputTensorDescr(
    -333                id=spec.TensorId("point_prompts"),
    -334                optional=True,
    -335                axes=[
    -336                    spec.BatchAxis(),
    -337                    spec.IndexInputAxis(
    -338                        id=spec.AxisId("object"),
    -339                        size=spec.ARBITRARY_SIZE
    -340                    ),
    -341                    spec.IndexInputAxis(
    -342                        id=spec.AxisId("point"),
    -343                        size=spec.ARBITRARY_SIZE
    -344                    ),
    -345                    spec.ChannelAxis(channel_names=[spec.Identifier(bname) for bname in "xy"]),
    -346                ],
    -347                test_tensor=spec.FileDescr(source=input_paths["point_prompts"]),
    -348                data=spec.IntervalOrRatioDataDescr(type="int64")
    -349            ),
    -350
    -351            # Fourth input: the point prompt labels (optional)
    -352            spec.InputTensorDescr(
    -353                id=spec.TensorId("point_labels"),
    -354                optional=True,
    -355                axes=[
    -356                    spec.BatchAxis(),
    -357                    spec.IndexInputAxis(
    -358                        id=spec.AxisId("object"),
    -359                        size=spec.ARBITRARY_SIZE
    -360                    ),
    -361                    spec.IndexInputAxis(
    -362                        id=spec.AxisId("point"),
    -363                        size=spec.ARBITRARY_SIZE
    -364                    ),
    -365                ],
    -366                test_tensor=spec.FileDescr(source=input_paths["point_labels"]),
    -367                data=spec.IntervalOrRatioDataDescr(type="int64")
    -368            ),
    -369
    -370            # Fifth input: the mask prompts (optional)
    -371            spec.InputTensorDescr(
    -372                id=spec.TensorId("mask_prompts"),
    -373                optional=True,
    -374                axes=[
    -375                    spec.BatchAxis(),
    -376                    spec.IndexInputAxis(
    -377                        id=spec.AxisId("object"),
    -378                        size=spec.ARBITRARY_SIZE
    -379                    ),
    -380                    spec.ChannelAxis(channel_names=["channel"]),
    -381                    spec.SpaceInputAxis(id=spec.AxisId("y"), size=256),
    -382                    spec.SpaceInputAxis(id=spec.AxisId("x"), size=256),
    -383                ],
    -384                test_tensor=spec.FileDescr(source=input_paths["mask_prompts"]),
    -385                data=spec.IntervalOrRatioDataDescr(type="float32")
    -386            ),
    -387
    -388            # Sixth input: the image embeddings (optional)
    -389            spec.InputTensorDescr(
    -390                id=spec.TensorId("embeddings"),
    -391                optional=True,
    -392                axes=[
    -393                    spec.BatchAxis(),
    -394                    # NOTE: we currently have to specify all the channel names
    -395                    # (It would be nice to also support size)
    -396                    spec.ChannelAxis(channel_names=[spec.Identifier(f"c{i}") for i in range(256)]),
    -397                    spec.SpaceInputAxis(id=spec.AxisId("y"), size=64),
    -398                    spec.SpaceInputAxis(id=spec.AxisId("x"), size=64),
    -399                ],
    -400                test_tensor=spec.FileDescr(source=result_paths["embeddings"]),
    -401                data=spec.IntervalOrRatioDataDescr(type="float32")
    -402            ),
    -403
    -404        ]
    -405
    -406        output_descriptions = [
    -407            # First output: The mask predictions.
    -408            spec.OutputTensorDescr(
    -409                id=spec.TensorId("masks"),
    -410                axes=[
    -411                    spec.BatchAxis(),
    -412                    # NOTE: we use the data dependent size here to avoid dependency on optional inputs
    -413                    spec.IndexOutputAxis(
    -414                        id=spec.AxisId("object"), size=spec.DataDependentSize(),
    -415                    ),
    -416                    # NOTE: this could be a 3 once we use multi-masking
    -417                    spec.ChannelAxis(channel_names=[spec.Identifier("mask")]),
    -418                    spec.SpaceOutputAxis(
    -419                        id=spec.AxisId("y"),
    -420                        size=spec.SizeReference(
    -421                            tensor_id=spec.TensorId("image"), axis_id=spec.AxisId("y"),
    -422                        )
    -423                    ),
    -424                    spec.SpaceOutputAxis(
    -425                        id=spec.AxisId("x"),
    -426                        size=spec.SizeReference(
    -427                            tensor_id=spec.TensorId("image"), axis_id=spec.AxisId("x"),
    -428                        )
    -429                    )
    -430                ],
    -431                data=spec.IntervalOrRatioDataDescr(type="uint8"),
    -432                test_tensor=spec.FileDescr(source=result_paths["mask"])
    -433            ),
    -434
    -435            # The score predictions
    -436            spec.OutputTensorDescr(
    -437                id=spec.TensorId("scores"),
    -438                axes=[
    -439                    spec.BatchAxis(),
    -440                    # NOTE: we use the data dependent size here to avoid dependency on optional inputs
    -441                    spec.IndexOutputAxis(
    -442                        id=spec.AxisId("object"), size=spec.DataDependentSize(),
    -443                    ),
    -444                    # NOTE: this could be a 3 once we use multi-masking
    -445                    spec.ChannelAxis(channel_names=[spec.Identifier("mask")]),
    -446                ],
    -447                data=spec.IntervalOrRatioDataDescr(type="float32"),
    -448                test_tensor=spec.FileDescr(source=result_paths["score"])
    -449            ),
    -450
    -451            # The image embeddings
    -452            spec.OutputTensorDescr(
    -453                id=spec.TensorId("embeddings"),
    -454                axes=[
    -455                    spec.BatchAxis(),
    -456                    spec.ChannelAxis(channel_names=[spec.Identifier(f"c{i}") for i in range(256)]),
    -457                    spec.SpaceOutputAxis(id=spec.AxisId("y"), size=64),
    -458                    spec.SpaceOutputAxis(id=spec.AxisId("x"), size=64),
    -459                ],
    -460                data=spec.IntervalOrRatioDataDescr(type="float32"),
    -461                test_tensor=spec.FileDescr(source=result_paths["embeddings"])
    -462            )
    -463        ]
    -464
    -465        architecture_path = os.path.join(os.path.split(__file__)[0], "predictor_adaptor.py")
    -466        architecture = spec.ArchitectureFromFileDescr(
    -467            source=Path(architecture_path),
    -468            callable="PredictorAdaptor",
    -469            kwargs={"model_type": model_type}
    -470        )
    -471
    -472        dependency_file = os.path.join(tmp_dir, "environment.yaml")
    -473        _write_dependencies(dependency_file, require_mobile_sam=model_type.startswith("vit_t"))
    -474
    -475        weight_descriptions = spec.WeightsDescr(
    -476            pytorch_state_dict=spec.PytorchStateDictWeightsDescr(
    -477                source=Path(checkpoint_path),
    -478                architecture=architecture,
    -479                pytorch_version=spec.Version(torch.__version__),
    -480                dependencies=spec.EnvironmentFileDescr(source=dependency_file),
    -481            )
    -482        )
    -483
    -484        doc_path = _write_documentation(kwargs.get("documentation", None), model_type, tmp_dir)
    -485
    -486        covers = kwargs.get("covers", None)
    -487        if covers is None:
    -488            covers = _generate_covers(input_paths, result_paths, tmp_dir)
    -489        else:
    -490            assert all(os.path.exists(cov) for cov in covers)
    -491
    -492        # the uploader information is only added if explicitly passed
    -493        extra_kwargs = {}
    -494        if "id" in kwargs:
    -495            extra_kwargs["id"] = kwargs["id"]
    -496        if "id_emoji" in kwargs:
    -497            extra_kwargs["id_emoji"] = kwargs["id_emoji"]
    -498        if "uploader" in kwargs:
    -499            extra_kwargs["uploader"] = kwargs["uploader"]
    -500
    -501        if decoder_path is not None:
    -502            extra_kwargs["attachments"] = [spec.FileDescr(source=decoder_path)]
    -503
    -504        model_description = spec.ModelDescr(
    -505            name=name,
    -506            inputs=input_descriptions,
    -507            outputs=output_descriptions,
    -508            weights=weight_descriptions,
    -509            description=kwargs.get("description", DEFAULTS["description"]),
    -510            authors=kwargs.get("authors", DEFAULTS["authors"]),
    -511            cite=kwargs.get("cite", DEFAULTS["cite"]),
    -512            license=spec.LicenseId("CC-BY-4.0"),
    -513            documentation=Path(doc_path),
    -514            git_repo=spec.HttpUrl("https://github.com/computational-cell-analytics/micro-sam"),
    -515            tags=kwargs.get("tags", DEFAULTS["tags"]),
    -516            covers=covers,
    -517            **extra_kwargs,
    -518            # TODO write specific settings in the config
    -519            # dict with yaml values, key must be a str
    -520            # micro_sam: ...
    -521            # config=
    -522        )
    -523
    -524        # TODO this requires the new bioimageio.core release
    -525        # _check_model(model_description, input_paths, result_paths)
    -526
    -527        save_bioimageio_package(model_description, output_path=output_path)
    +            
    270def export_sam_model(
    +271    image: np.ndarray,
    +272    label_image: np.ndarray,
    +273    model_type: str,
    +274    name: str,
    +275    output_path: Union[str, os.PathLike],
    +276    checkpoint_path: Optional[Union[str, os.PathLike]] = None,
    +277    **kwargs
    +278) -> None:
    +279    """Export SAM model to BioImage.IO model format.
    +280
    +281    The exported model can be uploaded to [bioimage.io](https://bioimage.io/#/) and
    +282    be used in tools that support the BioImage.IO model format.
    +283
    +284    Args:
    +285        image: The image for generating test data.
    +286        label_image: The segmentation corresponding to `image`.
    +287            It is used to derive prompt inputs for the model.
    +288        model_type: The type of the SAM model.
    +289        name: The name of the exported model.
    +290        output_path: Where the exported model is saved.
    +291        checkpoint_path: Optional checkpoint for loading the SAM model.
    +292    """
    +293    with tempfile.TemporaryDirectory() as tmp_dir:
    +294        checkpoint_path, decoder_path = _get_checkpoint(model_type, checkpoint_path, tmp_dir)
    +295        input_paths, result_paths = _create_test_inputs_and_outputs(
    +296            image, label_image, model_type, checkpoint_path, tmp_dir,
    +297        )
    +298        input_descriptions = [
    +299            # First input: the image data.
    +300            spec.InputTensorDescr(
    +301                id=spec.TensorId("image"),
    +302                axes=[
    +303                    spec.BatchAxis(size=1),
    +304                    # NOTE: to support 1 and 3 channels we can add another preprocessing.
    +305                    # Best solution: Have a pre-processing for this! (1C -> RGB)
    +306                    spec.ChannelAxis(channel_names=[spec.Identifier(cname) for cname in "RGB"]),
    +307                    spec.SpaceInputAxis(id=spec.AxisId("y"), size=spec.ARBITRARY_SIZE),
    +308                    spec.SpaceInputAxis(id=spec.AxisId("x"), size=spec.ARBITRARY_SIZE),
    +309                ],
    +310                test_tensor=spec.FileDescr(source=input_paths["image"]),
    +311                data=spec.IntervalOrRatioDataDescr(type="uint8")
    +312            ),
    +313
    +314            # Second input: the box prompts (optional)
    +315            spec.InputTensorDescr(
    +316                id=spec.TensorId("box_prompts"),
    +317                optional=True,
    +318                axes=[
    +319                    spec.BatchAxis(size=1),
    +320                    spec.IndexInputAxis(
    +321                        id=spec.AxisId("object"),
    +322                        size=spec.ARBITRARY_SIZE
    +323                    ),
    +324                    spec.ChannelAxis(channel_names=[spec.Identifier(bname) for bname in "hwxy"]),
    +325                ],
    +326                test_tensor=spec.FileDescr(source=input_paths["box_prompts"]),
    +327                data=spec.IntervalOrRatioDataDescr(type="int64")
    +328            ),
    +329
    +330            # Third input: the point prompt coordinates (optional)
    +331            spec.InputTensorDescr(
    +332                id=spec.TensorId("point_prompts"),
    +333                optional=True,
    +334                axes=[
    +335                    spec.BatchAxis(size=1),
    +336                    spec.IndexInputAxis(
    +337                        id=spec.AxisId("object"),
    +338                        size=spec.ARBITRARY_SIZE
    +339                    ),
    +340                    spec.IndexInputAxis(
    +341                        id=spec.AxisId("point"),
    +342                        size=spec.ARBITRARY_SIZE
    +343                    ),
    +344                    spec.ChannelAxis(channel_names=[spec.Identifier(bname) for bname in "xy"]),
    +345                ],
    +346                test_tensor=spec.FileDescr(source=input_paths["point_prompts"]),
    +347                data=spec.IntervalOrRatioDataDescr(type="int64")
    +348            ),
    +349
    +350            # Fourth input: the point prompt labels (optional)
    +351            spec.InputTensorDescr(
    +352                id=spec.TensorId("point_labels"),
    +353                optional=True,
    +354                axes=[
    +355                    spec.BatchAxis(size=1),
    +356                    spec.IndexInputAxis(
    +357                        id=spec.AxisId("object"),
    +358                        size=spec.ARBITRARY_SIZE
    +359                    ),
    +360                    spec.IndexInputAxis(
    +361                        id=spec.AxisId("point"),
    +362                        size=spec.ARBITRARY_SIZE
    +363                    ),
    +364                ],
    +365                test_tensor=spec.FileDescr(source=input_paths["point_labels"]),
    +366                data=spec.IntervalOrRatioDataDescr(type="int64")
    +367            ),
    +368
    +369            # Fifth input: the mask prompts (optional)
    +370            spec.InputTensorDescr(
    +371                id=spec.TensorId("mask_prompts"),
    +372                optional=True,
    +373                axes=[
    +374                    spec.BatchAxis(size=1),
    +375                    spec.IndexInputAxis(
    +376                        id=spec.AxisId("object"),
    +377                        size=spec.ARBITRARY_SIZE
    +378                    ),
    +379                    spec.ChannelAxis(channel_names=["channel"]),
    +380                    spec.SpaceInputAxis(id=spec.AxisId("y"), size=256),
    +381                    spec.SpaceInputAxis(id=spec.AxisId("x"), size=256),
    +382                ],
    +383                test_tensor=spec.FileDescr(source=input_paths["mask_prompts"]),
    +384                data=spec.IntervalOrRatioDataDescr(type="float32")
    +385            ),
    +386
    +387            # Sixth input: the image embeddings (optional)
    +388            spec.InputTensorDescr(
    +389                id=spec.TensorId("embeddings"),
    +390                optional=True,
    +391                axes=[
    +392                    spec.BatchAxis(size=1),
    +393                    # NOTE: we currently have to specify all the channel names
    +394                    # (It would be nice to also support size)
    +395                    spec.ChannelAxis(channel_names=[spec.Identifier(f"c{i}") for i in range(256)]),
    +396                    spec.SpaceInputAxis(id=spec.AxisId("y"), size=64),
    +397                    spec.SpaceInputAxis(id=spec.AxisId("x"), size=64),
    +398                ],
    +399                test_tensor=spec.FileDescr(source=result_paths["embeddings"]),
    +400                data=spec.IntervalOrRatioDataDescr(type="float32")
    +401            ),
    +402
    +403        ]
    +404
    +405        output_descriptions = [
    +406            # First output: The mask predictions.
    +407            spec.OutputTensorDescr(
    +408                id=spec.TensorId("masks"),
    +409                axes=[
    +410                    spec.BatchAxis(size=1),
    +411                    # NOTE: we use the data dependent size here to avoid dependency on optional inputs
    +412                    spec.IndexOutputAxis(
    +413                        id=spec.AxisId("object"), size=spec.DataDependentSize(),
    +414                    ),
    +415                    # NOTE: this could be a 3 once we use multi-masking
    +416                    spec.ChannelAxis(channel_names=[spec.Identifier("mask")]),
    +417                    spec.SpaceOutputAxis(
    +418                        id=spec.AxisId("y"),
    +419                        size=spec.SizeReference(
    +420                            tensor_id=spec.TensorId("image"), axis_id=spec.AxisId("y"),
    +421                        )
    +422                    ),
    +423                    spec.SpaceOutputAxis(
    +424                        id=spec.AxisId("x"),
    +425                        size=spec.SizeReference(
    +426                            tensor_id=spec.TensorId("image"), axis_id=spec.AxisId("x"),
    +427                        )
    +428                    )
    +429                ],
    +430                data=spec.IntervalOrRatioDataDescr(type="uint8"),
    +431                test_tensor=spec.FileDescr(source=result_paths["mask"])
    +432            ),
    +433
    +434            # The score predictions
    +435            spec.OutputTensorDescr(
    +436                id=spec.TensorId("scores"),
    +437                axes=[
    +438                    spec.BatchAxis(size=1),
    +439                    # NOTE: we use the data dependent size here to avoid dependency on optional inputs
    +440                    spec.IndexOutputAxis(
    +441                        id=spec.AxisId("object"), size=spec.DataDependentSize(),
    +442                    ),
    +443                    # NOTE: this could be a 3 once we use multi-masking
    +444                    spec.ChannelAxis(channel_names=[spec.Identifier("mask")]),
    +445                ],
    +446                data=spec.IntervalOrRatioDataDescr(type="float32"),
    +447                test_tensor=spec.FileDescr(source=result_paths["score"])
    +448            ),
    +449
    +450            # The image embeddings
    +451            spec.OutputTensorDescr(
    +452                id=spec.TensorId("embeddings"),
    +453                axes=[
    +454                    spec.BatchAxis(size=1),
    +455                    spec.ChannelAxis(channel_names=[spec.Identifier(f"c{i}") for i in range(256)]),
    +456                    spec.SpaceOutputAxis(id=spec.AxisId("y"), size=64),
    +457                    spec.SpaceOutputAxis(id=spec.AxisId("x"), size=64),
    +458                ],
    +459                data=spec.IntervalOrRatioDataDescr(type="float32"),
    +460                test_tensor=spec.FileDescr(source=result_paths["embeddings"])
    +461            )
    +462        ]
    +463
    +464        architecture_path = os.path.join(os.path.split(__file__)[0], "predictor_adaptor.py")
    +465        architecture = spec.ArchitectureFromFileDescr(
    +466            source=Path(architecture_path),
    +467            callable="PredictorAdaptor",
    +468            kwargs={"model_type": model_type}
    +469        )
    +470
    +471        dependency_file = os.path.join(tmp_dir, "environment.yaml")
    +472        _write_dependencies(dependency_file, require_mobile_sam=model_type.startswith("vit_t"))
    +473
    +474        weight_descriptions = spec.WeightsDescr(
    +475            pytorch_state_dict=spec.PytorchStateDictWeightsDescr(
    +476                source=Path(checkpoint_path),
    +477                architecture=architecture,
    +478                pytorch_version=spec.Version(torch.__version__),
    +479                dependencies=spec.EnvironmentFileDescr(source=dependency_file),
    +480            )
    +481        )
    +482
    +483        doc_path = _write_documentation(kwargs.get("documentation", None), model_type, tmp_dir)
    +484
    +485        covers = kwargs.get("covers", None)
    +486        if covers is None:
    +487            covers = _generate_covers(input_paths, result_paths, tmp_dir)
    +488        else:
    +489            assert all(os.path.exists(cov) for cov in covers)
    +490
    +491        # the uploader information is only added if explicitly passed
    +492        extra_kwargs = {}
    +493        if "id" in kwargs:
    +494            extra_kwargs["id"] = kwargs["id"]
    +495        if "id_emoji" in kwargs:
    +496            extra_kwargs["id_emoji"] = kwargs["id_emoji"]
    +497        if "uploader" in kwargs:
    +498            extra_kwargs["uploader"] = kwargs["uploader"]
    +499
    +500        if decoder_path is not None:
    +501            extra_kwargs["attachments"] = [spec.FileDescr(source=decoder_path)]
    +502
    +503        model_description = spec.ModelDescr(
    +504            name=name,
    +505            inputs=input_descriptions,
    +506            outputs=output_descriptions,
    +507            weights=weight_descriptions,
    +508            description=kwargs.get("description", DEFAULTS["description"]),
    +509            authors=kwargs.get("authors", DEFAULTS["authors"]),
    +510            cite=kwargs.get("cite", DEFAULTS["cite"]),
    +511            license=spec.LicenseId("CC-BY-4.0"),
    +512            documentation=Path(doc_path),
    +513            git_repo=spec.HttpUrl("https://github.com/computational-cell-analytics/micro-sam"),
    +514            tags=kwargs.get("tags", DEFAULTS["tags"]),
    +515            covers=covers,
    +516            **extra_kwargs,
    +517            # TODO write specific settings in the config
    +518            # dict with yaml values, key must be a str
    +519            # micro_sam: ...
    +520            # config=
    +521        )
    +522
    +523        _check_model(model_description, input_paths, result_paths)
    +524
    +525        save_bioimageio_package(model_description, output_path=output_path)
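
For orientation, here is a minimal, hypothetical sketch of how the updated `export_sam_model` might be called; the file paths, the `vit_b` model type, and the use of imageio for loading the example data are assumptions for illustration, not part of this patch:

# Hypothetical usage sketch for export_sam_model; all paths are placeholders.
import imageio.v3 as imageio
from micro_sam.bioimageio.model_export import export_sam_model

image = imageio.imread("example_image.tif")         # test image used to derive the example tensors
label_image = imageio.imread("example_labels.tif")  # instance labels used to sample prompts

export_sam_model(
    image=image,
    label_image=label_image,
    model_type="vit_b",                # assumed SAM backbone
    name="sam-for-microscopy-example",
    output_path="exported_model.zip",  # packaged BioImage.IO model written by save_bioimageio_package
)
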
     
    diff --git a/micro_sam/evaluation/evaluation.html b/micro_sam/evaluation/evaluation.html index f3dd89f8..7615a1b2 100644 --- a/micro_sam/evaluation/evaluation.html +++ b/micro_sam/evaluation/evaluation.html @@ -178,7 +178,7 @@

    115 list_of_results = [] 116 prediction_folders = sorted(glob(os.path.join(prediction_root, "iteration*"))) 117 for pred_folder in prediction_folders: -118 print("Evaluating", pred_folder) +118 print("Evaluating", os.path.split(pred_folder)[-1]) 119 pred_paths = sorted(glob(os.path.join(pred_folder, "*"))) 120 result = run_evaluation(gt_paths=gt_paths, prediction_paths=pred_paths, save_path=None) 121 list_of_results.append(result) @@ -186,9 +186,6 @@

    123 124 res_df = pd.concat(list_of_results, ignore_index=True) 125 res_df.to_csv(csv_path) -126 -127 -128# TODO function to evaluate full experiment and resave in one table

    @@ -314,7 +311,7 @@

    Returns:
    116 list_of_results = [] 117 prediction_folders = sorted(glob(os.path.join(prediction_root, "iteration*"))) 118 for pred_folder in prediction_folders: -119 print("Evaluating", pred_folder) +119 print("Evaluating", os.path.split(pred_folder)[-1]) 120 pred_paths = sorted(glob(os.path.join(pred_folder, "*"))) 121 result = run_evaluation(gt_paths=gt_paths, prediction_paths=pred_paths, save_path=None) 122 list_of_results.append(result) diff --git a/micro_sam/evaluation/inference.html b/micro_sam/evaluation/inference.html index bb0a7ab5..fc01e371 100644 --- a/micro_sam/evaluation/inference.html +++ b/micro_sam/evaluation/inference.html @@ -467,215 +467,226 @@

    393 prediction_paths, 394 use_masks=False 395) -> None: -396 prompt_generator = IterativePromptGenerator() +396 verbose_embeddings = False 397 -398 gt_ids = np.unique(gt)[1:] +398 prompt_generator = IterativePromptGenerator() 399 -400 # Use multi-masking only if we have a single positive point without box -401 if start_with_box_prompt: -402 use_boxes, use_points = True, False -403 n_positives = 0 -404 multimasking = False -405 else: -406 use_boxes, use_points = False, True -407 n_positives = 1 -408 multimasking = True -409 -410 points, point_labels, boxes = _get_batched_prompts( -411 gt, gt_ids, -412 use_points=use_points, -413 use_boxes=use_boxes, -414 n_positives=n_positives, -415 n_negatives=0, -416 dilation=dilation -417 ) -418 -419 sampled_binary_gt = util.segmentation_to_one_hot(gt.astype("int64"), gt_ids) +400 gt_ids = np.unique(gt)[1:] +401 +402 # Use multi-masking only if we have a single positive point without box +403 if start_with_box_prompt: +404 use_boxes, use_points = True, False +405 n_positives = 0 +406 multimasking = False +407 else: +408 use_boxes, use_points = False, True +409 n_positives = 1 +410 multimasking = True +411 +412 points, point_labels, boxes = _get_batched_prompts( +413 gt, gt_ids, +414 use_points=use_points, +415 use_boxes=use_boxes, +416 n_positives=n_positives, +417 n_negatives=0, +418 dilation=dilation +419 ) 420 -421 for iteration in range(n_iterations): -422 if iteration == 0: # logits mask can not be used for the first iteration. -423 logits_masks = None -424 else: -425 if not use_masks: # logits mask should not be used when not desired. -426 logits_masks = None -427 -428 batched_outputs = batched_inference( -429 predictor, image, batch_size, -430 boxes=boxes, points=points, point_labels=point_labels, -431 multimasking=multimasking, embedding_path=embedding_path, -432 return_instance_segmentation=False, logits_masks=logits_masks -433 ) -434 -435 # switching off multimasking after first iter, as next iters (with multiple prompts) don't expect multimasking -436 multimasking = False -437 -438 masks = torch.stack([m["segmentation"][None] for m in batched_outputs]).to(torch.float32) -439 -440 next_coords, next_labels = _get_batched_iterative_prompts( -441 sampled_binary_gt, masks, batch_size, prompt_generator +421 sampled_binary_gt = util.segmentation_to_one_hot(gt.astype("int64"), gt_ids) +422 +423 for iteration in range(n_iterations): +424 if iteration == 0: # logits mask can not be used for the first iteration. +425 logits_masks = None +426 else: +427 if not use_masks: # logits mask should not be used when not desired. 
+428 logits_masks = None +429 +430 batched_outputs = batched_inference( +431 predictor=predictor, +432 image=image, +433 batch_size=batch_size, +434 boxes=boxes, +435 points=points, +436 point_labels=point_labels, +437 multimasking=multimasking, +438 embedding_path=embedding_path, +439 return_instance_segmentation=False, +440 logits_masks=logits_masks, +441 verbose_embeddings=verbose_embeddings, 442 ) -443 next_coords, next_labels = next_coords.detach().cpu().numpy(), next_labels.detach().cpu().numpy() -444 -445 if points is not None: -446 points = np.concatenate([points, next_coords], axis=1) -447 else: -448 points = next_coords -449 -450 if point_labels is not None: -451 point_labels = np.concatenate([point_labels, next_labels], axis=1) -452 else: -453 point_labels = next_labels -454 -455 if use_masks: -456 logits_masks = torch.stack([m["logits"] for m in batched_outputs]) -457 -458 _save_segmentation(masks, prediction_paths[iteration]) -459 -460 -461def run_inference_with_iterative_prompting( -462 predictor: SamPredictor, -463 image_paths: List[Union[str, os.PathLike]], -464 gt_paths: List[Union[str, os.PathLike]], -465 embedding_dir: Union[str, os.PathLike], -466 prediction_dir: Union[str, os.PathLike], -467 start_with_box_prompt: bool, -468 dilation: int = 5, -469 batch_size: int = 32, -470 n_iterations: int = 8, -471 use_masks: bool = False -472) -> None: -473 """Run segment anything inference for multiple images using prompts iteratively -474 derived from model outputs and groundtruth -475 -476 Args: -477 predictor: The SegmentAnything predictor. -478 image_paths: The image file paths. -479 gt_paths: The ground-truth segmentation file paths. -480 embedding_dir: The directory where the image embeddings will be saved or are already saved. -481 prediction_dir: The directory where the predictions from SegmentAnything will be saved per iteration. -482 start_with_box_prompt: Whether to use the first prompt as bounding box or a single point -483 dilation: The dilation factor for the radius around the ground-truth object -484 around which points will not be sampled. -485 batch_size: The batch size used for batched predictions. -486 n_iterations: The number of iterations for iterative prompting. 
-487 use_masks: Whether to make use of logits from previous prompt-based segmentation -488 """ -489 if len(image_paths) != len(gt_paths): -490 raise ValueError(f"Expect same number of images and gt images, got {len(image_paths)}, {len(gt_paths)}") -491 -492 # create all prediction folders for all intermediate iterations -493 for i in range(n_iterations): -494 os.makedirs(os.path.join(prediction_dir, f"iteration{i:02}"), exist_ok=True) -495 -496 if use_masks: -497 print("The iterative prompting will make use of logits masks from previous iterations.") -498 -499 for image_path, gt_path in tqdm( -500 zip(image_paths, gt_paths), total=len(image_paths), desc="Run inference with iterative prompting for all images" -501 ): -502 image_name = os.path.basename(image_path) -503 -504 # We skip the images that already have been segmented -505 prediction_paths = [os.path.join(prediction_dir, f"iteration{i:02}", image_name) for i in range(n_iterations)] -506 if all(os.path.exists(prediction_path) for prediction_path in prediction_paths): -507 continue -508 -509 assert os.path.exists(image_path), image_path -510 assert os.path.exists(gt_path), gt_path -511 -512 image = imageio.imread(image_path) -513 gt = imageio.imread(gt_path).astype("uint32") -514 gt = relabel_sequential(gt)[0] -515 -516 embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr") -517 -518 _run_inference_with_iterative_prompting_for_image( -519 predictor, image, gt, start_with_box_prompt=start_with_box_prompt, -520 dilation=dilation, batch_size=batch_size, embedding_path=embedding_path, -521 n_iterations=n_iterations, prediction_paths=prediction_paths, use_masks=use_masks -522 ) -523 -524 -525# -526# AMG FUNCTION -527# +443 +444 # switching off multimasking after first iter, as next iters (with multiple prompts) don't expect multimasking +445 multimasking = False +446 +447 masks = torch.stack([m["segmentation"][None] for m in batched_outputs]).to(torch.float32) +448 +449 next_coords, next_labels = _get_batched_iterative_prompts( +450 sampled_binary_gt, masks, batch_size, prompt_generator +451 ) +452 next_coords, next_labels = next_coords.detach().cpu().numpy(), next_labels.detach().cpu().numpy() +453 +454 if points is not None: +455 points = np.concatenate([points, next_coords], axis=1) +456 else: +457 points = next_coords +458 +459 if point_labels is not None: +460 point_labels = np.concatenate([point_labels, next_labels], axis=1) +461 else: +462 point_labels = next_labels +463 +464 if use_masks: +465 logits_masks = torch.stack([m["logits"] for m in batched_outputs]) +466 +467 _save_segmentation(masks, prediction_paths[iteration]) +468 +469 +470def run_inference_with_iterative_prompting( +471 predictor: SamPredictor, +472 image_paths: List[Union[str, os.PathLike]], +473 gt_paths: List[Union[str, os.PathLike]], +474 embedding_dir: Union[str, os.PathLike], +475 prediction_dir: Union[str, os.PathLike], +476 start_with_box_prompt: bool, +477 dilation: int = 5, +478 batch_size: int = 32, +479 n_iterations: int = 8, +480 use_masks: bool = False +481) -> None: +482 """Run segment anything inference for multiple images using prompts iteratively +483 derived from model outputs and groundtruth +484 +485 Args: +486 predictor: The SegmentAnything predictor. +487 image_paths: The image file paths. +488 gt_paths: The ground-truth segmentation file paths. +489 embedding_dir: The directory where the image embeddings will be saved or are already saved. 
+490 prediction_dir: The directory where the predictions from SegmentAnything will be saved per iteration. +491 start_with_box_prompt: Whether to use the first prompt as bounding box or a single point +492 dilation: The dilation factor for the radius around the ground-truth object +493 around which points will not be sampled. +494 batch_size: The batch size used for batched predictions. +495 n_iterations: The number of iterations for iterative prompting. +496 use_masks: Whether to make use of logits from previous prompt-based segmentation. +497 """ +498 if len(image_paths) != len(gt_paths): +499 raise ValueError(f"Expect same number of images and gt images, got {len(image_paths)}, {len(gt_paths)}") +500 +501 # create all prediction folders for all intermediate iterations +502 for i in range(n_iterations): +503 os.makedirs(os.path.join(prediction_dir, f"iteration{i:02}"), exist_ok=True) +504 +505 if use_masks: +506 print("The iterative prompting will make use of logits masks from previous iterations.") +507 +508 for image_path, gt_path in tqdm( +509 zip(image_paths, gt_paths), +510 total=len(image_paths), +511 desc="Run inference with iterative prompting for all images", +512 ): +513 image_name = os.path.basename(image_path) +514 +515 # We skip the images that already have been segmented +516 prediction_paths = [os.path.join(prediction_dir, f"iteration{i:02}", image_name) for i in range(n_iterations)] +517 if all(os.path.exists(prediction_path) for prediction_path in prediction_paths): +518 continue +519 +520 assert os.path.exists(image_path), image_path +521 assert os.path.exists(gt_path), gt_path +522 +523 image = imageio.imread(image_path) +524 gt = imageio.imread(gt_path).astype("uint32") +525 gt = relabel_sequential(gt)[0] +526 +527 embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr") 528 -529 -530def run_amg( -531 checkpoint: Union[str, os.PathLike], -532 model_type: str, -533 experiment_folder: Union[str, os.PathLike], -534 val_image_paths: List[Union[str, os.PathLike]], -535 val_gt_paths: List[Union[str, os.PathLike]], -536 test_image_paths: List[Union[str, os.PathLike]], -537 iou_thresh_values: Optional[List[float]] = None, -538 stability_score_values: Optional[List[float]] = None, -539) -> str: -540 embedding_folder = os.path.join(experiment_folder, "embeddings") # where the precomputed embeddings are saved -541 os.makedirs(embedding_folder, exist_ok=True) -542 -543 predictor = util.get_sam_model(model_type=model_type, checkpoint_path=checkpoint) -544 amg = AutomaticMaskGenerator(predictor) -545 amg_prefix = "amg" -546 -547 # where the predictions are saved -548 prediction_folder = os.path.join(experiment_folder, amg_prefix, "inference") -549 os.makedirs(prediction_folder, exist_ok=True) -550 -551 # where the grid-search results are saved -552 gs_result_folder = os.path.join(experiment_folder, amg_prefix, "grid_search") -553 os.makedirs(gs_result_folder, exist_ok=True) -554 -555 grid_search_values = instance_segmentation.default_grid_search_values_amg( -556 iou_thresh_values=iou_thresh_values, -557 stability_score_values=stability_score_values, -558 ) -559 -560 instance_segmentation.run_instance_segmentation_grid_search_and_inference( -561 amg, grid_search_values, -562 val_image_paths, val_gt_paths, test_image_paths, -563 embedding_folder, prediction_folder, gs_result_folder, -564 ) -565 return prediction_folder -566 -567 -568# -569# INSTANCE SEGMENTATION FUNCTION -570# -571 -572 -573def run_instance_segmentation_with_decoder( -574 checkpoint: 
Union[str, os.PathLike], -575 model_type: str, -576 experiment_folder: Union[str, os.PathLike], -577 val_image_paths: List[Union[str, os.PathLike]], -578 val_gt_paths: List[Union[str, os.PathLike]], -579 test_image_paths: List[Union[str, os.PathLike]], -580) -> str: -581 embedding_folder = os.path.join(experiment_folder, "embeddings") # where the precomputed embeddings are saved -582 os.makedirs(embedding_folder, exist_ok=True) +529 _run_inference_with_iterative_prompting_for_image( +530 predictor, image, gt, start_with_box_prompt=start_with_box_prompt, +531 dilation=dilation, batch_size=batch_size, embedding_path=embedding_path, +532 n_iterations=n_iterations, prediction_paths=prediction_paths, use_masks=use_masks +533 ) +534 +535 +536# +537# AMG FUNCTION +538# +539 +540 +541def run_amg( +542 checkpoint: Union[str, os.PathLike], +543 model_type: str, +544 experiment_folder: Union[str, os.PathLike], +545 val_image_paths: List[Union[str, os.PathLike]], +546 val_gt_paths: List[Union[str, os.PathLike]], +547 test_image_paths: List[Union[str, os.PathLike]], +548 iou_thresh_values: Optional[List[float]] = None, +549 stability_score_values: Optional[List[float]] = None, +550) -> str: +551 embedding_folder = os.path.join(experiment_folder, "embeddings") # where the precomputed embeddings are saved +552 os.makedirs(embedding_folder, exist_ok=True) +553 +554 predictor = util.get_sam_model(model_type=model_type, checkpoint_path=checkpoint) +555 amg = AutomaticMaskGenerator(predictor) +556 amg_prefix = "amg" +557 +558 # where the predictions are saved +559 prediction_folder = os.path.join(experiment_folder, amg_prefix, "inference") +560 os.makedirs(prediction_folder, exist_ok=True) +561 +562 # where the grid-search results are saved +563 gs_result_folder = os.path.join(experiment_folder, amg_prefix, "grid_search") +564 os.makedirs(gs_result_folder, exist_ok=True) +565 +566 grid_search_values = instance_segmentation.default_grid_search_values_amg( +567 iou_thresh_values=iou_thresh_values, +568 stability_score_values=stability_score_values, +569 ) +570 +571 instance_segmentation.run_instance_segmentation_grid_search_and_inference( +572 amg, grid_search_values, +573 val_image_paths, val_gt_paths, test_image_paths, +574 embedding_folder, prediction_folder, gs_result_folder, +575 ) +576 return prediction_folder +577 +578 +579# +580# INSTANCE SEGMENTATION FUNCTION +581# +582 583 -584 predictor, decoder = get_predictor_and_decoder(model_type=model_type, checkpoint_path=checkpoint) -585 segmenter = InstanceSegmentationWithDecoder(predictor, decoder) -586 seg_prefix = "instance_segmentation_with_decoder" -587 -588 # where the predictions are saved -589 prediction_folder = os.path.join(experiment_folder, seg_prefix, "inference") -590 os.makedirs(prediction_folder, exist_ok=True) -591 -592 # where the grid-search results are saved -593 gs_result_folder = os.path.join(experiment_folder, seg_prefix, "grid_search") -594 os.makedirs(gs_result_folder, exist_ok=True) -595 -596 grid_search_values = instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder() -597 -598 instance_segmentation.run_instance_segmentation_grid_search_and_inference( -599 segmenter, grid_search_values, -600 val_image_paths, val_gt_paths, test_image_paths, -601 embedding_dir=embedding_folder, prediction_dir=prediction_folder, -602 result_dir=gs_result_folder, -603 ) -604 return prediction_folder +584def run_instance_segmentation_with_decoder( +585 checkpoint: Union[str, os.PathLike], +586 model_type: str, +587 
experiment_folder: Union[str, os.PathLike], +588 val_image_paths: List[Union[str, os.PathLike]], +589 val_gt_paths: List[Union[str, os.PathLike]], +590 test_image_paths: List[Union[str, os.PathLike]], +591) -> str: +592 embedding_folder = os.path.join(experiment_folder, "embeddings") # where the precomputed embeddings are saved +593 os.makedirs(embedding_folder, exist_ok=True) +594 +595 predictor, decoder = get_predictor_and_decoder(model_type=model_type, checkpoint_path=checkpoint) +596 segmenter = InstanceSegmentationWithDecoder(predictor, decoder) +597 seg_prefix = "instance_segmentation_with_decoder" +598 +599 # where the predictions are saved +600 prediction_folder = os.path.join(experiment_folder, seg_prefix, "inference") +601 os.makedirs(prediction_folder, exist_ok=True) +602 +603 # where the grid-search results are saved +604 gs_result_folder = os.path.join(experiment_folder, seg_prefix, "grid_search") +605 os.makedirs(gs_result_folder, exist_ok=True) +606 +607 grid_search_values = instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder() +608 +609 instance_segmentation.run_instance_segmentation_grid_search_and_inference( +610 segmenter, grid_search_values, +611 val_image_paths, val_gt_paths, test_image_paths, +612 embedding_dir=embedding_folder, prediction_dir=prediction_folder, +613 result_dir=gs_result_folder, +614 ) +615 return prediction_folder @@ -940,68 +951,70 @@

    Arguments:
    -
    462def run_inference_with_iterative_prompting(
    -463    predictor: SamPredictor,
    -464    image_paths: List[Union[str, os.PathLike]],
    -465    gt_paths: List[Union[str, os.PathLike]],
    -466    embedding_dir: Union[str, os.PathLike],
    -467    prediction_dir: Union[str, os.PathLike],
    -468    start_with_box_prompt: bool,
    -469    dilation: int = 5,
    -470    batch_size: int = 32,
    -471    n_iterations: int = 8,
    -472    use_masks: bool = False
    -473) -> None:
    -474    """Run segment anything inference for multiple images using prompts iteratively
    -475        derived from model outputs and groundtruth
    -476
    -477    Args:
    -478        predictor: The SegmentAnything predictor.
    -479        image_paths: The image file paths.
    -480        gt_paths: The ground-truth segmentation file paths.
    -481        embedding_dir: The directory where the image embeddings will be saved or are already saved.
    -482        prediction_dir: The directory where the predictions from SegmentAnything will be saved per iteration.
    -483        start_with_box_prompt: Whether to use the first prompt as bounding box or a single point
    -484        dilation: The dilation factor for the radius around the ground-truth object
    -485            around which points will not be sampled.
    -486        batch_size: The batch size used for batched predictions.
    -487        n_iterations: The number of iterations for iterative prompting.
    -488        use_masks: Whether to make use of logits from previous prompt-based segmentation
    -489    """
    -490    if len(image_paths) != len(gt_paths):
    -491        raise ValueError(f"Expect same number of images and gt images, got {len(image_paths)}, {len(gt_paths)}")
    -492
    -493    # create all prediction folders for all intermediate iterations
    -494    for i in range(n_iterations):
    -495        os.makedirs(os.path.join(prediction_dir, f"iteration{i:02}"), exist_ok=True)
    -496
    -497    if use_masks:
    -498        print("The iterative prompting will make use of logits masks from previous iterations.")
    -499
    -500    for image_path, gt_path in tqdm(
    -501        zip(image_paths, gt_paths), total=len(image_paths), desc="Run inference with iterative prompting for all images"
    -502    ):
    -503        image_name = os.path.basename(image_path)
    -504
    -505        # We skip the images that already have been segmented
    -506        prediction_paths = [os.path.join(prediction_dir, f"iteration{i:02}", image_name) for i in range(n_iterations)]
    -507        if all(os.path.exists(prediction_path) for prediction_path in prediction_paths):
    -508            continue
    -509
    -510        assert os.path.exists(image_path), image_path
    -511        assert os.path.exists(gt_path), gt_path
    -512
    -513        image = imageio.imread(image_path)
    -514        gt = imageio.imread(gt_path).astype("uint32")
    -515        gt = relabel_sequential(gt)[0]
    -516
    -517        embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr")
    -518
    -519        _run_inference_with_iterative_prompting_for_image(
    -520            predictor, image, gt, start_with_box_prompt=start_with_box_prompt,
    -521            dilation=dilation, batch_size=batch_size, embedding_path=embedding_path,
    -522            n_iterations=n_iterations, prediction_paths=prediction_paths, use_masks=use_masks
    -523        )
    +            
    471def run_inference_with_iterative_prompting(
    +472    predictor: SamPredictor,
    +473    image_paths: List[Union[str, os.PathLike]],
    +474    gt_paths: List[Union[str, os.PathLike]],
    +475    embedding_dir: Union[str, os.PathLike],
    +476    prediction_dir: Union[str, os.PathLike],
    +477    start_with_box_prompt: bool,
    +478    dilation: int = 5,
    +479    batch_size: int = 32,
    +480    n_iterations: int = 8,
    +481    use_masks: bool = False
    +482) -> None:
    +483    """Run segment anything inference for multiple images using prompts iteratively
    +484        derived from model outputs and ground truth.
    +485
    +486    Args:
    +487        predictor: The SegmentAnything predictor.
    +488        image_paths: The image file paths.
    +489        gt_paths: The ground-truth segmentation file paths.
    +490        embedding_dir: The directory where the image embeddings will be saved or are already saved.
    +491        prediction_dir: The directory where the predictions from SegmentAnything will be saved per iteration.
    +492        start_with_box_prompt: Whether to use the first prompt as a bounding box or a single point.
    +493        dilation: The dilation factor for the radius around the ground-truth object
    +494            around which points will not be sampled.
    +495        batch_size: The batch size used for batched predictions.
    +496        n_iterations: The number of iterations for iterative prompting.
    +497        use_masks: Whether to make use of logits from previous prompt-based segmentation.
    +498    """
    +499    if len(image_paths) != len(gt_paths):
    +500        raise ValueError(f"Expect same number of images and gt images, got {len(image_paths)}, {len(gt_paths)}")
    +501
    +502    # create all prediction folders for all intermediate iterations
    +503    for i in range(n_iterations):
    +504        os.makedirs(os.path.join(prediction_dir, f"iteration{i:02}"), exist_ok=True)
    +505
    +506    if use_masks:
    +507        print("The iterative prompting will make use of logits masks from previous iterations.")
    +508
    +509    for image_path, gt_path in tqdm(
    +510        zip(image_paths, gt_paths),
    +511        total=len(image_paths),
    +512        desc="Run inference with iterative prompting for all images",
    +513    ):
    +514        image_name = os.path.basename(image_path)
    +515
    +516        # We skip the images that already have been segmented
    +517        prediction_paths = [os.path.join(prediction_dir, f"iteration{i:02}", image_name) for i in range(n_iterations)]
    +518        if all(os.path.exists(prediction_path) for prediction_path in prediction_paths):
    +519            continue
    +520
    +521        assert os.path.exists(image_path), image_path
    +522        assert os.path.exists(gt_path), gt_path
    +523
    +524        image = imageio.imread(image_path)
    +525        gt = imageio.imread(gt_path).astype("uint32")
    +526        gt = relabel_sequential(gt)[0]
    +527
    +528        embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr")
    +529
    +530        _run_inference_with_iterative_prompting_for_image(
    +531            predictor, image, gt, start_with_box_prompt=start_with_box_prompt,
    +532            dilation=dilation, batch_size=batch_size, embedding_path=embedding_path,
    +533            n_iterations=n_iterations, prediction_paths=prediction_paths, use_masks=use_masks
    +534        )
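
As a rough usage sketch (not part of the patch), the refactored `run_inference_with_iterative_prompting` could be driven like this; the model type, glob patterns, and folder names are assumptions:

# Hypothetical driver for run_inference_with_iterative_prompting; paths are placeholders.
from glob import glob

from micro_sam import util
from micro_sam.evaluation.inference import run_inference_with_iterative_prompting

# Load the SAM model used for evaluation (a checkpoint_path kwarg can point to a finetuned model).
predictor = util.get_sam_model(model_type="vit_b")

image_paths = sorted(glob("data/images/*.tif"))
gt_paths = sorted(glob("data/labels/*.tif"))

run_inference_with_iterative_prompting(
    predictor=predictor,
    image_paths=image_paths,
    gt_paths=gt_paths,
    embedding_dir="embeddings",    # image embeddings are cached here as zarr files
    prediction_dir="predictions",  # one sub-folder per prompting iteration
    start_with_box_prompt=True,    # start from a box prompt instead of a single point
)
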
     
    @@ -1021,7 +1034,7 @@
    Arguments:
    around which points will not be sampled.
  • batch_size: The batch size used for batched predictions.
  • n_iterations: The number of iterations for iterative prompting.
  • -
  • use_masks: Whether to make use of logits from previous prompt-based segmentation
  • +
  • use_masks: Whether to make use of logits from previous prompt-based segmentation.
  • @@ -1038,42 +1051,42 @@
    Arguments:
    -
    531def run_amg(
    -532    checkpoint: Union[str, os.PathLike],
    -533    model_type: str,
    -534    experiment_folder: Union[str, os.PathLike],
    -535    val_image_paths: List[Union[str, os.PathLike]],
    -536    val_gt_paths: List[Union[str, os.PathLike]],
    -537    test_image_paths: List[Union[str, os.PathLike]],
    -538    iou_thresh_values: Optional[List[float]] = None,
    -539    stability_score_values: Optional[List[float]] = None,
    -540) -> str:
    -541    embedding_folder = os.path.join(experiment_folder, "embeddings")  # where the precomputed embeddings are saved
    -542    os.makedirs(embedding_folder, exist_ok=True)
    -543
    -544    predictor = util.get_sam_model(model_type=model_type, checkpoint_path=checkpoint)
    -545    amg = AutomaticMaskGenerator(predictor)
    -546    amg_prefix = "amg"
    -547
    -548    # where the predictions are saved
    -549    prediction_folder = os.path.join(experiment_folder, amg_prefix, "inference")
    -550    os.makedirs(prediction_folder, exist_ok=True)
    -551
    -552    # where the grid-search results are saved
    -553    gs_result_folder = os.path.join(experiment_folder, amg_prefix, "grid_search")
    -554    os.makedirs(gs_result_folder, exist_ok=True)
    -555
    -556    grid_search_values = instance_segmentation.default_grid_search_values_amg(
    -557        iou_thresh_values=iou_thresh_values,
    -558        stability_score_values=stability_score_values,
    -559    )
    -560
    -561    instance_segmentation.run_instance_segmentation_grid_search_and_inference(
    -562        amg, grid_search_values,
    -563        val_image_paths, val_gt_paths, test_image_paths,
    -564        embedding_folder, prediction_folder, gs_result_folder,
    -565    )
    -566    return prediction_folder
    +            
    542def run_amg(
    +543    checkpoint: Union[str, os.PathLike],
    +544    model_type: str,
    +545    experiment_folder: Union[str, os.PathLike],
    +546    val_image_paths: List[Union[str, os.PathLike]],
    +547    val_gt_paths: List[Union[str, os.PathLike]],
    +548    test_image_paths: List[Union[str, os.PathLike]],
    +549    iou_thresh_values: Optional[List[float]] = None,
    +550    stability_score_values: Optional[List[float]] = None,
    +551) -> str:
    +552    embedding_folder = os.path.join(experiment_folder, "embeddings")  # where the precomputed embeddings are saved
    +553    os.makedirs(embedding_folder, exist_ok=True)
    +554
    +555    predictor = util.get_sam_model(model_type=model_type, checkpoint_path=checkpoint)
    +556    amg = AutomaticMaskGenerator(predictor)
    +557    amg_prefix = "amg"
    +558
    +559    # where the predictions are saved
    +560    prediction_folder = os.path.join(experiment_folder, amg_prefix, "inference")
    +561    os.makedirs(prediction_folder, exist_ok=True)
    +562
    +563    # where the grid-search results are saved
    +564    gs_result_folder = os.path.join(experiment_folder, amg_prefix, "grid_search")
    +565    os.makedirs(gs_result_folder, exist_ok=True)
    +566
    +567    grid_search_values = instance_segmentation.default_grid_search_values_amg(
    +568        iou_thresh_values=iou_thresh_values,
    +569        stability_score_values=stability_score_values,
    +570    )
    +571
    +572    instance_segmentation.run_instance_segmentation_grid_search_and_inference(
    +573        amg, grid_search_values,
    +574        val_image_paths, val_gt_paths, test_image_paths,
    +575        embedding_folder, prediction_folder, gs_result_folder,
    +576    )
    +577    return prediction_folder
     
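As a quick orientation for readers of this hunk, here is a minimal usage sketch for `run_amg`. The checkpoint and data paths are placeholders, and the import path assumes the function is exposed as `micro_sam.evaluation.inference.run_amg`, the file this hunk documents.

```python
from glob import glob

from micro_sam.evaluation.inference import run_amg

# Hypothetical data layout; replace with your own validation and test splits.
val_images = sorted(glob("data/val/images/*.tif"))
val_labels = sorted(glob("data/val/labels/*.tif"))
test_images = sorted(glob("data/test/images/*.tif"))

prediction_folder = run_amg(
    checkpoint="checkpoints/vit_b/best.pt",  # placeholder checkpoint path
    model_type="vit_b",
    experiment_folder="experiments/amg",     # embeddings, grid-search results and predictions go here
    val_image_paths=val_images,
    val_gt_paths=val_labels,
    test_image_paths=test_images,
)
print("AMG predictions written to:", prediction_folder)
```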
    @@ -1091,38 +1104,38 @@
    Arguments:
    -
    574def run_instance_segmentation_with_decoder(
    -575    checkpoint: Union[str, os.PathLike],
    -576    model_type: str,
    -577    experiment_folder: Union[str, os.PathLike],
    -578    val_image_paths: List[Union[str, os.PathLike]],
    -579    val_gt_paths: List[Union[str, os.PathLike]],
    -580    test_image_paths: List[Union[str, os.PathLike]],
    -581) -> str:
    -582    embedding_folder = os.path.join(experiment_folder, "embeddings")  # where the precomputed embeddings are saved
    -583    os.makedirs(embedding_folder, exist_ok=True)
    -584
    -585    predictor, decoder = get_predictor_and_decoder(model_type=model_type, checkpoint_path=checkpoint)
    -586    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)
    -587    seg_prefix = "instance_segmentation_with_decoder"
    -588
    -589    # where the predictions are saved
    -590    prediction_folder = os.path.join(experiment_folder, seg_prefix, "inference")
    -591    os.makedirs(prediction_folder, exist_ok=True)
    -592
    -593    # where the grid-search results are saved
    -594    gs_result_folder = os.path.join(experiment_folder, seg_prefix, "grid_search")
    -595    os.makedirs(gs_result_folder, exist_ok=True)
    -596
    -597    grid_search_values = instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder()
    -598
    -599    instance_segmentation.run_instance_segmentation_grid_search_and_inference(
    -600        segmenter, grid_search_values,
    -601        val_image_paths, val_gt_paths, test_image_paths,
    -602        embedding_dir=embedding_folder, prediction_dir=prediction_folder,
    -603        result_dir=gs_result_folder,
    -604    )
    -605    return prediction_folder
    +            
    585def run_instance_segmentation_with_decoder(
    +586    checkpoint: Union[str, os.PathLike],
    +587    model_type: str,
    +588    experiment_folder: Union[str, os.PathLike],
    +589    val_image_paths: List[Union[str, os.PathLike]],
    +590    val_gt_paths: List[Union[str, os.PathLike]],
    +591    test_image_paths: List[Union[str, os.PathLike]],
    +592) -> str:
    +593    embedding_folder = os.path.join(experiment_folder, "embeddings")  # where the precomputed embeddings are saved
    +594    os.makedirs(embedding_folder, exist_ok=True)
    +595
    +596    predictor, decoder = get_predictor_and_decoder(model_type=model_type, checkpoint_path=checkpoint)
    +597    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)
    +598    seg_prefix = "instance_segmentation_with_decoder"
    +599
    +600    # where the predictions are saved
    +601    prediction_folder = os.path.join(experiment_folder, seg_prefix, "inference")
    +602    os.makedirs(prediction_folder, exist_ok=True)
    +603
    +604    # where the grid-search results are saved
    +605    gs_result_folder = os.path.join(experiment_folder, seg_prefix, "grid_search")
    +606    os.makedirs(gs_result_folder, exist_ok=True)
    +607
    +608    grid_search_values = instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder()
    +609
    +610    instance_segmentation.run_instance_segmentation_grid_search_and_inference(
    +611        segmenter, grid_search_values,
    +612        val_image_paths, val_gt_paths, test_image_paths,
    +613        embedding_dir=embedding_folder, prediction_dir=prediction_folder,
    +614        result_dir=gs_result_folder,
    +615    )
    +616    return prediction_folder
     
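Analogously, a sketch for `run_instance_segmentation_with_decoder` as documented above. The checkpoint must also provide the additional instance segmentation decoder, and all paths below are again placeholders.

```python
from glob import glob

from micro_sam.evaluation.inference import run_instance_segmentation_with_decoder

val_images = sorted(glob("data/val/images/*.tif"))
val_labels = sorted(glob("data/val/labels/*.tif"))
test_images = sorted(glob("data/test/images/*.tif"))

prediction_folder = run_instance_segmentation_with_decoder(
    checkpoint="checkpoints/vit_b_lm/best.pt",  # placeholder; must contain the decoder weights
    model_type="vit_b",
    experiment_folder="experiments/instance_segmentation_with_decoder",
    val_image_paths=val_images,
    val_gt_paths=val_labels,
    test_image_paths=test_images,
)
```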
diff --git a/micro_sam/evaluation/instance_segmentation.html b/micro_sam/evaluation/instance_segmentation.html
index 3b8cf53d..9a8e6604 100644
--- a/micro_sam/evaluation/instance_segmentation.html
+++ b/micro_sam/evaluation/instance_segmentation.html
@@ -258,218 +258,227 @@

    181 result_dir: Folder to cache the evaluation results per image. 182 embedding_dir: Folder to cache the image embeddings. 183 fixed_generate_kwargs: Fixed keyword arguments for the `generate` method of the segmenter. -184 verbose_gs: Whether to run the gridsearch for individual images in a verbose mode. +184 verbose_gs: Whether to run the grid-search for individual images in a verbose mode. 185 image_key: Key for loading the image data from a more complex file format like HDF5. 186 If not given a simple image format like tif is assumed. 187 gt_key: Key for loading the ground-truth data from a more complex file format like HDF5. 188 If not given a simple image format like tif is assumed. 189 rois: Region of interests to resetrict the evaluation to. 190 """ -191 assert len(image_paths) == len(gt_paths) -192 fixed_generate_kwargs = {} if fixed_generate_kwargs is None else fixed_generate_kwargs -193 -194 duplicate_params = [gs_param for gs_param in grid_search_values.keys() if gs_param in fixed_generate_kwargs] -195 if duplicate_params: -196 raise ValueError( -197 "You may not pass duplicate parameters in 'grid_search_values' and 'fixed_generate_kwargs'." -198 f"The parameters {duplicate_params} are duplicated." -199 ) -200 -201 # Compute all combinations of grid search values. -202 gs_combinations = product(*grid_search_values.values()) -203 # Map each combination back to a valid kwarg input. -204 gs_combinations = [ -205 {k: v for k, v in zip(grid_search_values.keys(), vals)} for vals in gs_combinations -206 ] -207 -208 os.makedirs(result_dir, exist_ok=True) -209 predictor = getattr(segmenter, "_predictor", None) -210 -211 for i, (image_path, gt_path) in tqdm( -212 enumerate(zip(image_paths, gt_paths)), desc="Run instance segmentation grid-search", total=len(image_paths) -213 ): -214 image_name = Path(image_path).stem -215 result_path = os.path.join(result_dir, f"{image_name}.csv") -216 -217 # We skip images for which the grid search was done already. -218 if os.path.exists(result_path): -219 continue -220 -221 assert os.path.exists(image_path), image_path -222 assert os.path.exists(gt_path), gt_path -223 -224 image = _load_image(image_path, image_key, roi=None if rois is None else rois[i]) -225 gt = _load_image(gt_path, gt_key, roi=None if rois is None else rois[i]) -226 -227 if embedding_dir is None: -228 segmenter.initialize(image) -229 else: -230 assert predictor is not None -231 embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr") -232 image_embeddings = util.precompute_image_embeddings(predictor, image, embedding_path, ndim=2) -233 segmenter.initialize(image, image_embeddings) -234 -235 _grid_search_iteration( -236 segmenter, gs_combinations, gt, image_name, -237 fixed_generate_kwargs=fixed_generate_kwargs, result_path=result_path, verbose=verbose_gs, -238 ) -239 -240 -241def run_instance_segmentation_inference( -242 segmenter: Union[AMGBase, InstanceSegmentationWithDecoder], -243 image_paths: List[Union[str, os.PathLike]], -244 embedding_dir: Union[str, os.PathLike], -245 prediction_dir: Union[str, os.PathLike], -246 generate_kwargs: Optional[Dict[str, Any]] = None, -247) -> None: -248 """Run inference for automatic mask generation. -249 -250 Args: -251 segmenter: The class implementing the instance segmentation functionality. -252 image_paths: The input images. -253 embedding_dir: Folder to cache the image embeddings. -254 prediction_dir: Folder to save the predictions. 
-255 generate_kwargs: The keyword arguments for the `generate` method of the segmenter. -256 """ -257 -258 generate_kwargs = {} if generate_kwargs is None else generate_kwargs -259 predictor = segmenter._predictor -260 min_object_size = generate_kwargs.get("min_mask_region_area", 0) +191 verbose_embeddings = False +192 +193 assert len(image_paths) == len(gt_paths) +194 fixed_generate_kwargs = {} if fixed_generate_kwargs is None else fixed_generate_kwargs +195 +196 duplicate_params = [gs_param for gs_param in grid_search_values.keys() if gs_param in fixed_generate_kwargs] +197 if duplicate_params: +198 raise ValueError( +199 "You may not pass duplicate parameters in 'grid_search_values' and 'fixed_generate_kwargs'." +200 f"The parameters {duplicate_params} are duplicated." +201 ) +202 +203 # Compute all combinations of grid search values. +204 gs_combinations = product(*grid_search_values.values()) +205 # Map each combination back to a valid kwarg input. +206 gs_combinations = [ +207 {k: v for k, v in zip(grid_search_values.keys(), vals)} for vals in gs_combinations +208 ] +209 +210 os.makedirs(result_dir, exist_ok=True) +211 predictor = getattr(segmenter, "_predictor", None) +212 +213 for i, (image_path, gt_path) in tqdm( +214 enumerate(zip(image_paths, gt_paths)), desc="Run instance segmentation grid-search", total=len(image_paths) +215 ): +216 image_name = Path(image_path).stem +217 result_path = os.path.join(result_dir, f"{image_name}.csv") +218 +219 # We skip images for which the grid search was done already. +220 if os.path.exists(result_path): +221 continue +222 +223 assert os.path.exists(image_path), image_path +224 assert os.path.exists(gt_path), gt_path +225 +226 image = _load_image(image_path, image_key, roi=None if rois is None else rois[i]) +227 gt = _load_image(gt_path, gt_key, roi=None if rois is None else rois[i]) +228 +229 if embedding_dir is None: +230 segmenter.initialize(image) +231 else: +232 assert predictor is not None +233 embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr") +234 image_embeddings = util.precompute_image_embeddings( +235 predictor, image, embedding_path, ndim=2, verbose=verbose_embeddings +236 ) +237 segmenter.initialize(image, image_embeddings) +238 +239 _grid_search_iteration( +240 segmenter, gs_combinations, gt, image_name, +241 fixed_generate_kwargs=fixed_generate_kwargs, result_path=result_path, verbose=verbose_gs, +242 ) +243 +244 +245def run_instance_segmentation_inference( +246 segmenter: Union[AMGBase, InstanceSegmentationWithDecoder], +247 image_paths: List[Union[str, os.PathLike]], +248 embedding_dir: Union[str, os.PathLike], +249 prediction_dir: Union[str, os.PathLike], +250 generate_kwargs: Optional[Dict[str, Any]] = None, +251) -> None: +252 """Run inference for automatic mask generation. +253 +254 Args: +255 segmenter: The class implementing the instance segmentation functionality. +256 image_paths: The input images. +257 embedding_dir: Folder to cache the image embeddings. +258 prediction_dir: Folder to save the predictions. +259 generate_kwargs: The keyword arguments for the `generate` method of the segmenter. +260 """ 261 -262 for image_path in tqdm(image_paths, desc="Run inference for automatic mask generation"): -263 image_name = os.path.basename(image_path) -264 -265 # We skip the images that already have been segmented. 
-266 prediction_path = os.path.join(prediction_dir, image_name) -267 if os.path.exists(prediction_path): -268 continue -269 -270 assert os.path.exists(image_path), image_path -271 image = imageio.imread(image_path) -272 -273 embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr") -274 image_embeddings = util.precompute_image_embeddings(predictor, image, embedding_path, ndim=2) +262 verbose_embeddings = False +263 +264 generate_kwargs = {} if generate_kwargs is None else generate_kwargs +265 predictor = segmenter._predictor +266 min_object_size = generate_kwargs.get("min_mask_region_area", 0) +267 +268 for image_path in tqdm(image_paths, desc="Run inference for automatic mask generation"): +269 image_name = os.path.basename(image_path) +270 +271 # We skip the images that already have been segmented. +272 prediction_path = os.path.join(prediction_dir, image_name) +273 if os.path.exists(prediction_path): +274 continue 275 -276 segmenter.initialize(image, image_embeddings) -277 masks = segmenter.generate(**generate_kwargs) +276 assert os.path.exists(image_path), image_path +277 image = imageio.imread(image_path) 278 -279 if len(masks) == 0: # the instance segmentation can have no masks, hence we just save empty labels -280 if isinstance(segmenter, InstanceSegmentationWithDecoder): -281 this_shape = segmenter._foreground.shape -282 elif isinstance(segmenter, AMGBase): -283 this_shape = segmenter._original_size -284 else: -285 this_shape = image.shape[-2:] +279 embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr") +280 image_embeddings = util.precompute_image_embeddings( +281 predictor, image, embedding_path, ndim=2, verbose=verbose_embeddings +282 ) +283 +284 segmenter.initialize(image, image_embeddings) +285 masks = segmenter.generate(**generate_kwargs) 286 -287 instances = np.zeros(this_shape, dtype="uint32") -288 else: -289 instances = mask_data_to_segmentation(masks, with_background=True, min_object_size=min_object_size) -290 -291 # It's important to compress here, otherwise the predictions would take up a lot of space. -292 imageio.imwrite(prediction_path, instances, compression=5) -293 +287 if len(masks) == 0: # the instance segmentation can have no masks, hence we just save empty labels +288 if isinstance(segmenter, InstanceSegmentationWithDecoder): +289 this_shape = segmenter._foreground.shape +290 elif isinstance(segmenter, AMGBase): +291 this_shape = segmenter._original_size +292 else: +293 this_shape = image.shape[-2:] 294 -295def evaluate_instance_segmentation_grid_search( -296 result_dir: Union[str, os.PathLike], -297 grid_search_parameters: List[str], -298 criterion: str = "mSA" -299) -> Tuple[Dict[str, Any], float]: -300 """Evaluate gridsearch results. +295 instances = np.zeros(this_shape, dtype="uint32") +296 else: +297 instances = mask_data_to_segmentation(masks, with_background=True, min_object_size=min_object_size) +298 +299 # It's important to compress here, otherwise the predictions would take up a lot of space. +300 imageio.imwrite(prediction_path, instances, compression=5) 301 -302 Args: -303 result_dir: The folder with the gridsearch results. -304 grid_search_parameters: The names for the gridsearch parameters. -305 criterion: The metric to use for determining the best parameters. -306 -307 Returns: -308 The best parameter setting. -309 The evaluation score for the best setting. -310 """ -311 -312 # Load all the grid search results. 
-313 gs_files = glob(os.path.join(result_dir, "*.csv")) -314 gs_result = pd.concat([pd.read_csv(gs_file) for gs_file in gs_files]) -315 -316 # Retrieve only the relevant columns and group by the gridsearch columns. -317 gs_result = gs_result[grid_search_parameters + [criterion]].reset_index() -318 -319 # Compute the mean over the grouped columns. -320 grouped_result = gs_result.groupby(grid_search_parameters).mean().reset_index() -321 -322 # Find the best score and corresponding parameters. -323 best_score, best_idx = grouped_result[criterion].max(), grouped_result[criterion].idxmax() -324 best_params = grouped_result.iloc[best_idx] -325 assert np.isclose(best_params[criterion], best_score) -326 best_kwargs = {k: v for k, v in zip(grid_search_parameters, best_params)} -327 -328 return best_kwargs, best_score +302 +303def evaluate_instance_segmentation_grid_search( +304 result_dir: Union[str, os.PathLike], +305 grid_search_parameters: List[str], +306 criterion: str = "mSA" +307) -> Tuple[Dict[str, Any], float]: +308 """Evaluate gridsearch results. +309 +310 Args: +311 result_dir: The folder with the gridsearch results. +312 grid_search_parameters: The names for the gridsearch parameters. +313 criterion: The metric to use for determining the best parameters. +314 +315 Returns: +316 The best parameter setting. +317 The evaluation score for the best setting. +318 """ +319 +320 # Load all the grid search results. +321 gs_files = glob(os.path.join(result_dir, "*.csv")) +322 gs_result = pd.concat([pd.read_csv(gs_file) for gs_file in gs_files]) +323 +324 # Retrieve only the relevant columns and group by the gridsearch columns. +325 gs_result = gs_result[grid_search_parameters + [criterion]].reset_index() +326 +327 # Compute the mean over the grouped columns. +328 grouped_result = gs_result.groupby(grid_search_parameters).mean().reset_index() 329 -330 -331def save_grid_search_best_params(best_kwargs, best_msa, grid_search_result_dir=None): -332 # saving the best parameters estimated from grid-search in the `results` folder -333 param_df = pd.DataFrame.from_dict([best_kwargs]) -334 res_df = pd.DataFrame.from_dict([{"best_msa": best_msa}]) -335 best_param_df = pd.merge(res_df, param_df, left_index=True, right_index=True) -336 -337 path_name = "grid_search_params_amg.csv" if "pred_iou_thresh" and "stability_score_thresh" in best_kwargs \ -338 else "grid_search_params_instance_segmentation_with_decoder.csv" -339 -340 if grid_search_result_dir is not None: -341 os.makedirs(os.path.join(grid_search_result_dir, "results"), exist_ok=True) -342 res_path = os.path.join(grid_search_result_dir, "results", path_name) -343 else: -344 res_path = path_name -345 -346 best_param_df.to_csv(res_path) +330 # Find the best score and corresponding parameters. 
+331 best_score, best_idx = grouped_result[criterion].max(), grouped_result[criterion].idxmax() +332 best_params = grouped_result.iloc[best_idx] +333 assert np.isclose(best_params[criterion], best_score) +334 best_kwargs = {k: v for k, v in zip(grid_search_parameters, best_params)} +335 +336 return best_kwargs, best_score +337 +338 +339def save_grid_search_best_params(best_kwargs, best_msa, grid_search_result_dir=None): +340 # saving the best parameters estimated from grid-search in the `results` folder +341 param_df = pd.DataFrame.from_dict([best_kwargs]) +342 res_df = pd.DataFrame.from_dict([{"best_msa": best_msa}]) +343 best_param_df = pd.merge(res_df, param_df, left_index=True, right_index=True) +344 +345 path_name = "grid_search_params_amg.csv" if "pred_iou_thresh" and "stability_score_thresh" in best_kwargs \ +346 else "grid_search_params_instance_segmentation_with_decoder.csv" 347 -348 -349def run_instance_segmentation_grid_search_and_inference( -350 segmenter: Union[AMGBase, InstanceSegmentationWithDecoder], -351 grid_search_values: Dict[str, List], -352 val_image_paths: List[Union[str, os.PathLike]], -353 val_gt_paths: List[Union[str, os.PathLike]], -354 test_image_paths: List[Union[str, os.PathLike]], -355 embedding_dir: Union[str, os.PathLike], -356 prediction_dir: Union[str, os.PathLike], -357 result_dir: Union[str, os.PathLike], -358 fixed_generate_kwargs: Optional[Dict[str, Any]] = None, -359 verbose_gs: bool = True, -360) -> None: -361 """Run grid search and inference for automatic mask generation. -362 -363 Please refer to the documentation of `run_instance_segmentation_grid_search` -364 for details on how to specify the grid search parameters. -365 -366 Args: -367 segmenter: The class implementing the instance segmentation functionality. -368 grid_search_values: The grid search values for parameters of the `generate` function. -369 val_image_paths: The input images for the grid search. -370 val_gt_paths: The ground-truth segmentation for the grid search. -371 test_image_paths: The input images for inference. -372 embedding_dir: Folder to cache the image embeddings. -373 prediction_dir: Folder to save the predictions. -374 result_dir: Folder to cache the evaluation results per image. -375 fixed_generate_kwargs: Fixed keyword arguments for the `generate` method of the segmenter. -376 verbose_gs: Whether to run the gridsearch for individual images in a verbose mode. 
-377 """ -378 run_instance_segmentation_grid_search( -379 segmenter, grid_search_values, val_image_paths, val_gt_paths, -380 result_dir=result_dir, embedding_dir=embedding_dir, -381 fixed_generate_kwargs=fixed_generate_kwargs, verbose_gs=verbose_gs, -382 ) -383 -384 best_kwargs, best_msa = evaluate_instance_segmentation_grid_search(result_dir, list(grid_search_values.keys())) -385 best_param_str = ", ".join(f"{k} = {v}" for k, v in best_kwargs.items()) -386 print("Best grid-search result:", best_msa, "with parmeters:\n", best_param_str) -387 -388 save_grid_search_best_params(best_kwargs, best_msa, Path(embedding_dir).parent) -389 -390 generate_kwargs = {} if fixed_generate_kwargs is None else fixed_generate_kwargs -391 generate_kwargs.update(best_kwargs) -392 -393 run_instance_segmentation_inference( -394 segmenter, test_image_paths, embedding_dir, prediction_dir, generate_kwargs -395 ) +348 if grid_search_result_dir is not None: +349 os.makedirs(os.path.join(grid_search_result_dir, "results"), exist_ok=True) +350 res_path = os.path.join(grid_search_result_dir, "results", path_name) +351 else: +352 res_path = path_name +353 +354 best_param_df.to_csv(res_path) +355 +356 +357def run_instance_segmentation_grid_search_and_inference( +358 segmenter: Union[AMGBase, InstanceSegmentationWithDecoder], +359 grid_search_values: Dict[str, List], +360 val_image_paths: List[Union[str, os.PathLike]], +361 val_gt_paths: List[Union[str, os.PathLike]], +362 test_image_paths: List[Union[str, os.PathLike]], +363 embedding_dir: Union[str, os.PathLike], +364 prediction_dir: Union[str, os.PathLike], +365 result_dir: Union[str, os.PathLike], +366 fixed_generate_kwargs: Optional[Dict[str, Any]] = None, +367 verbose_gs: bool = True, +368) -> None: +369 """Run grid search and inference for automatic mask generation. +370 +371 Please refer to the documentation of `run_instance_segmentation_grid_search` +372 for details on how to specify the grid search parameters. +373 +374 Args: +375 segmenter: The class implementing the instance segmentation functionality. +376 grid_search_values: The grid search values for parameters of the `generate` function. +377 val_image_paths: The input images for the grid search. +378 val_gt_paths: The ground-truth segmentation for the grid search. +379 test_image_paths: The input images for inference. +380 embedding_dir: Folder to cache the image embeddings. +381 prediction_dir: Folder to save the predictions. +382 result_dir: Folder to cache the evaluation results per image. +383 fixed_generate_kwargs: Fixed keyword arguments for the `generate` method of the segmenter. +384 verbose_gs: Whether to run the gridsearch for individual images in a verbose mode. 
+385 """ +386 run_instance_segmentation_grid_search( +387 segmenter, grid_search_values, val_image_paths, val_gt_paths, +388 result_dir=result_dir, embedding_dir=embedding_dir, +389 fixed_generate_kwargs=fixed_generate_kwargs, verbose_gs=verbose_gs, +390 ) +391 +392 best_kwargs, best_msa = evaluate_instance_segmentation_grid_search(result_dir, list(grid_search_values.keys())) +393 best_param_str = ", ".join(f"{k} = {v}" for k, v in best_kwargs.items()) +394 print("Best grid-search result:", best_msa, "with parmeters:\n", best_param_str) +395 print() +396 +397 save_grid_search_best_params(best_kwargs, best_msa, Path(embedding_dir).parent) +398 +399 generate_kwargs = {} if fixed_generate_kwargs is None else fixed_generate_kwargs +400 generate_kwargs.update(best_kwargs) +401 +402 run_instance_segmentation_inference( +403 segmenter, test_image_paths, embedding_dir, prediction_dir, generate_kwargs +404 )

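To make the grid-search workflow in the hunk above concrete, here is a sketch of running only the validation grid-search with an AMG segmenter. The import paths and data lists are assumptions based on the code shown in this patch, not part of it.

```python
import os
from glob import glob

from micro_sam import util
from micro_sam.instance_segmentation import AutomaticMaskGenerator
from micro_sam.evaluation import instance_segmentation

predictor = util.get_sam_model(model_type="vit_b")  # or pass checkpoint_path=... for a finetuned model
amg = AutomaticMaskGenerator(predictor)

# Default search grid over pred_iou_thresh and stability_score_thresh.
grid_search_values = instance_segmentation.default_grid_search_values_amg()

val_images = sorted(glob("data/val/images/*.tif"))  # placeholder paths
val_labels = sorted(glob("data/val/labels/*.tif"))

embedding_dir, result_dir = "experiments/amg/embeddings", "experiments/amg/grid_search"
os.makedirs(embedding_dir, exist_ok=True)

instance_segmentation.run_instance_segmentation_grid_search(
    amg, grid_search_values, val_images, val_labels,
    result_dir=result_dir, embedding_dir=embedding_dir,
)
```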
    @@ -670,61 +679,65 @@
    Returns:
    182 result_dir: Folder to cache the evaluation results per image. 183 embedding_dir: Folder to cache the image embeddings. 184 fixed_generate_kwargs: Fixed keyword arguments for the `generate` method of the segmenter. -185 verbose_gs: Whether to run the gridsearch for individual images in a verbose mode. +185 verbose_gs: Whether to run the grid-search for individual images in a verbose mode. 186 image_key: Key for loading the image data from a more complex file format like HDF5. 187 If not given a simple image format like tif is assumed. 188 gt_key: Key for loading the ground-truth data from a more complex file format like HDF5. 189 If not given a simple image format like tif is assumed. 190 rois: Region of interests to resetrict the evaluation to. 191 """ -192 assert len(image_paths) == len(gt_paths) -193 fixed_generate_kwargs = {} if fixed_generate_kwargs is None else fixed_generate_kwargs -194 -195 duplicate_params = [gs_param for gs_param in grid_search_values.keys() if gs_param in fixed_generate_kwargs] -196 if duplicate_params: -197 raise ValueError( -198 "You may not pass duplicate parameters in 'grid_search_values' and 'fixed_generate_kwargs'." -199 f"The parameters {duplicate_params} are duplicated." -200 ) -201 -202 # Compute all combinations of grid search values. -203 gs_combinations = product(*grid_search_values.values()) -204 # Map each combination back to a valid kwarg input. -205 gs_combinations = [ -206 {k: v for k, v in zip(grid_search_values.keys(), vals)} for vals in gs_combinations -207 ] -208 -209 os.makedirs(result_dir, exist_ok=True) -210 predictor = getattr(segmenter, "_predictor", None) -211 -212 for i, (image_path, gt_path) in tqdm( -213 enumerate(zip(image_paths, gt_paths)), desc="Run instance segmentation grid-search", total=len(image_paths) -214 ): -215 image_name = Path(image_path).stem -216 result_path = os.path.join(result_dir, f"{image_name}.csv") -217 -218 # We skip images for which the grid search was done already. -219 if os.path.exists(result_path): -220 continue -221 -222 assert os.path.exists(image_path), image_path -223 assert os.path.exists(gt_path), gt_path -224 -225 image = _load_image(image_path, image_key, roi=None if rois is None else rois[i]) -226 gt = _load_image(gt_path, gt_key, roi=None if rois is None else rois[i]) -227 -228 if embedding_dir is None: -229 segmenter.initialize(image) -230 else: -231 assert predictor is not None -232 embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr") -233 image_embeddings = util.precompute_image_embeddings(predictor, image, embedding_path, ndim=2) -234 segmenter.initialize(image, image_embeddings) -235 -236 _grid_search_iteration( -237 segmenter, gs_combinations, gt, image_name, -238 fixed_generate_kwargs=fixed_generate_kwargs, result_path=result_path, verbose=verbose_gs, -239 ) +192 verbose_embeddings = False +193 +194 assert len(image_paths) == len(gt_paths) +195 fixed_generate_kwargs = {} if fixed_generate_kwargs is None else fixed_generate_kwargs +196 +197 duplicate_params = [gs_param for gs_param in grid_search_values.keys() if gs_param in fixed_generate_kwargs] +198 if duplicate_params: +199 raise ValueError( +200 "You may not pass duplicate parameters in 'grid_search_values' and 'fixed_generate_kwargs'." +201 f"The parameters {duplicate_params} are duplicated." +202 ) +203 +204 # Compute all combinations of grid search values. +205 gs_combinations = product(*grid_search_values.values()) +206 # Map each combination back to a valid kwarg input. 
+207 gs_combinations = [ +208 {k: v for k, v in zip(grid_search_values.keys(), vals)} for vals in gs_combinations +209 ] +210 +211 os.makedirs(result_dir, exist_ok=True) +212 predictor = getattr(segmenter, "_predictor", None) +213 +214 for i, (image_path, gt_path) in tqdm( +215 enumerate(zip(image_paths, gt_paths)), desc="Run instance segmentation grid-search", total=len(image_paths) +216 ): +217 image_name = Path(image_path).stem +218 result_path = os.path.join(result_dir, f"{image_name}.csv") +219 +220 # We skip images for which the grid search was done already. +221 if os.path.exists(result_path): +222 continue +223 +224 assert os.path.exists(image_path), image_path +225 assert os.path.exists(gt_path), gt_path +226 +227 image = _load_image(image_path, image_key, roi=None if rois is None else rois[i]) +228 gt = _load_image(gt_path, gt_key, roi=None if rois is None else rois[i]) +229 +230 if embedding_dir is None: +231 segmenter.initialize(image) +232 else: +233 assert predictor is not None +234 embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr") +235 image_embeddings = util.precompute_image_embeddings( +236 predictor, image, embedding_path, ndim=2, verbose=verbose_embeddings +237 ) +238 segmenter.initialize(image, image_embeddings) +239 +240 _grid_search_iteration( +241 segmenter, gs_combinations, gt, image_name, +242 fixed_generate_kwargs=fixed_generate_kwargs, result_path=result_path, verbose=verbose_gs, +243 ) @@ -756,7 +769,7 @@

    Arguments:
  • result_dir: Folder to cache the evaluation results per image.
  • embedding_dir: Folder to cache the image embeddings.
  • fixed_generate_kwargs: Fixed keyword arguments for the generate method of the segmenter.
  • -
  • verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
  • +
  • verbose_gs: Whether to run the grid-search for individual images in a verbose mode.
  • image_key: Key for loading the image data from a more complex file format like HDF5. If not given a simple image format like tif is assumed.
  • gt_key: Key for loading the ground-truth data from a more complex file format like HDF5. If not given a simple image format like tif is assumed.
  • rois: Region of interests to restrict the evaluation to.

@@ -778,58 +791,62 @@
    Arguments:
    -
    242def run_instance_segmentation_inference(
    -243    segmenter: Union[AMGBase, InstanceSegmentationWithDecoder],
    -244    image_paths: List[Union[str, os.PathLike]],
    -245    embedding_dir: Union[str, os.PathLike],
    -246    prediction_dir: Union[str, os.PathLike],
    -247    generate_kwargs: Optional[Dict[str, Any]] = None,
    -248) -> None:
    -249    """Run inference for automatic mask generation.
    -250
    -251    Args:
    -252        segmenter: The class implementing the instance segmentation functionality.
    -253        image_paths: The input images.
    -254        embedding_dir: Folder to cache the image embeddings.
    -255        prediction_dir: Folder to save the predictions.
    -256        generate_kwargs: The keyword arguments for the `generate` method of the segmenter.
    -257    """
    -258
    -259    generate_kwargs = {} if generate_kwargs is None else generate_kwargs
    -260    predictor = segmenter._predictor
    -261    min_object_size = generate_kwargs.get("min_mask_region_area", 0)
    +            
    246def run_instance_segmentation_inference(
    +247    segmenter: Union[AMGBase, InstanceSegmentationWithDecoder],
    +248    image_paths: List[Union[str, os.PathLike]],
    +249    embedding_dir: Union[str, os.PathLike],
    +250    prediction_dir: Union[str, os.PathLike],
    +251    generate_kwargs: Optional[Dict[str, Any]] = None,
    +252) -> None:
    +253    """Run inference for automatic mask generation.
    +254
    +255    Args:
    +256        segmenter: The class implementing the instance segmentation functionality.
    +257        image_paths: The input images.
    +258        embedding_dir: Folder to cache the image embeddings.
    +259        prediction_dir: Folder to save the predictions.
    +260        generate_kwargs: The keyword arguments for the `generate` method of the segmenter.
    +261    """
     262
    -263    for image_path in tqdm(image_paths, desc="Run inference for automatic mask generation"):
    -264        image_name = os.path.basename(image_path)
    -265
    -266        # We skip the images that already have been segmented.
    -267        prediction_path = os.path.join(prediction_dir, image_name)
    -268        if os.path.exists(prediction_path):
    -269            continue
    -270
    -271        assert os.path.exists(image_path), image_path
    -272        image = imageio.imread(image_path)
    -273
    -274        embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr")
    -275        image_embeddings = util.precompute_image_embeddings(predictor, image, embedding_path, ndim=2)
    +263    verbose_embeddings = False
    +264
    +265    generate_kwargs = {} if generate_kwargs is None else generate_kwargs
    +266    predictor = segmenter._predictor
    +267    min_object_size = generate_kwargs.get("min_mask_region_area", 0)
    +268
    +269    for image_path in tqdm(image_paths, desc="Run inference for automatic mask generation"):
    +270        image_name = os.path.basename(image_path)
    +271
    +272        # We skip the images that already have been segmented.
    +273        prediction_path = os.path.join(prediction_dir, image_name)
    +274        if os.path.exists(prediction_path):
    +275            continue
     276
    -277        segmenter.initialize(image, image_embeddings)
    -278        masks = segmenter.generate(**generate_kwargs)
    +277        assert os.path.exists(image_path), image_path
    +278        image = imageio.imread(image_path)
     279
    -280        if len(masks) == 0:  # the instance segmentation can have no masks, hence we just save empty labels
    -281            if isinstance(segmenter, InstanceSegmentationWithDecoder):
    -282                this_shape = segmenter._foreground.shape
    -283            elif isinstance(segmenter, AMGBase):
    -284                this_shape = segmenter._original_size
    -285            else:
    -286                this_shape = image.shape[-2:]
    +280        embedding_path = os.path.join(embedding_dir, f"{os.path.splitext(image_name)[0]}.zarr")
    +281        image_embeddings = util.precompute_image_embeddings(
    +282            predictor, image, embedding_path, ndim=2, verbose=verbose_embeddings
    +283        )
    +284
    +285        segmenter.initialize(image, image_embeddings)
    +286        masks = segmenter.generate(**generate_kwargs)
     287
    -288            instances = np.zeros(this_shape, dtype="uint32")
    -289        else:
    -290            instances = mask_data_to_segmentation(masks, with_background=True, min_object_size=min_object_size)
    -291
    -292        # It's important to compress here, otherwise the predictions would take up a lot of space.
    -293        imageio.imwrite(prediction_path, instances, compression=5)
    +288        if len(masks) == 0:  # the instance segmentation can have no masks, hence we just save empty labels
    +289            if isinstance(segmenter, InstanceSegmentationWithDecoder):
    +290                this_shape = segmenter._foreground.shape
    +291            elif isinstance(segmenter, AMGBase):
    +292                this_shape = segmenter._original_size
    +293            else:
    +294                this_shape = image.shape[-2:]
    +295
    +296            instances = np.zeros(this_shape, dtype="uint32")
    +297        else:
    +298            instances = mask_data_to_segmentation(masks, with_background=True, min_object_size=min_object_size)
    +299
    +300        # It's important to compress here, otherwise the predictions would take up a lot of space.
    +301        imageio.imwrite(prediction_path, instances, compression=5)
     
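A short sketch of calling `run_instance_segmentation_inference` directly with fixed `generate` parameters. The AMG segmenter, the threshold values and all paths are illustrative assumptions, not prescribed by this patch.

```python
import os
from glob import glob

from micro_sam import util
from micro_sam.instance_segmentation import AutomaticMaskGenerator
from micro_sam.evaluation.instance_segmentation import run_instance_segmentation_inference

predictor = util.get_sam_model(model_type="vit_b")
amg = AutomaticMaskGenerator(predictor)

test_images = sorted(glob("data/test/images/*.tif"))  # placeholder test images

embedding_dir, prediction_dir = "experiments/amg/embeddings", "experiments/amg/inference"
os.makedirs(embedding_dir, exist_ok=True)
os.makedirs(prediction_dir, exist_ok=True)  # one compressed segmentation per input image is written here

run_instance_segmentation_inference(
    amg, test_images,
    embedding_dir=embedding_dir,
    prediction_dir=prediction_dir,
    generate_kwargs={"pred_iou_thresh": 0.88, "stability_score_thresh": 0.95},
)
```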
    @@ -859,40 +876,40 @@
    Arguments:
    -
    296def evaluate_instance_segmentation_grid_search(
    -297    result_dir: Union[str, os.PathLike],
    -298    grid_search_parameters: List[str],
    -299    criterion: str = "mSA"
    -300) -> Tuple[Dict[str, Any], float]:
    -301    """Evaluate gridsearch results.
    -302
    -303    Args:
    -304        result_dir: The folder with the gridsearch results.
    -305        grid_search_parameters: The names for the gridsearch parameters.
    -306        criterion: The metric to use for determining the best parameters.
    -307
    -308    Returns:
    -309        The best parameter setting.
    -310        The evaluation score for the best setting.
    -311    """
    -312
    -313    # Load all the grid search results.
    -314    gs_files = glob(os.path.join(result_dir, "*.csv"))
    -315    gs_result = pd.concat([pd.read_csv(gs_file) for gs_file in gs_files])
    -316
    -317    # Retrieve only the relevant columns and group by the gridsearch columns.
    -318    gs_result = gs_result[grid_search_parameters + [criterion]].reset_index()
    -319
    -320    # Compute the mean over the grouped columns.
    -321    grouped_result = gs_result.groupby(grid_search_parameters).mean().reset_index()
    -322
    -323    # Find the best score and corresponding parameters.
    -324    best_score, best_idx = grouped_result[criterion].max(), grouped_result[criterion].idxmax()
    -325    best_params = grouped_result.iloc[best_idx]
    -326    assert np.isclose(best_params[criterion], best_score)
    -327    best_kwargs = {k: v for k, v in zip(grid_search_parameters, best_params)}
    -328
    -329    return best_kwargs, best_score
    +            
    304def evaluate_instance_segmentation_grid_search(
    +305    result_dir: Union[str, os.PathLike],
    +306    grid_search_parameters: List[str],
    +307    criterion: str = "mSA"
    +308) -> Tuple[Dict[str, Any], float]:
    +309    """Evaluate gridsearch results.
    +310
    +311    Args:
    +312        result_dir: The folder with the gridsearch results.
    +313        grid_search_parameters: The names for the gridsearch parameters.
    +314        criterion: The metric to use for determining the best parameters.
    +315
    +316    Returns:
    +317        The best parameter setting.
    +318        The evaluation score for the best setting.
    +319    """
    +320
    +321    # Load all the grid search results.
    +322    gs_files = glob(os.path.join(result_dir, "*.csv"))
    +323    gs_result = pd.concat([pd.read_csv(gs_file) for gs_file in gs_files])
    +324
    +325    # Retrieve only the relevant columns and group by the gridsearch columns.
    +326    gs_result = gs_result[grid_search_parameters + [criterion]].reset_index()
    +327
    +328    # Compute the mean over the grouped columns.
    +329    grouped_result = gs_result.groupby(grid_search_parameters).mean().reset_index()
    +330
    +331    # Find the best score and corresponding parameters.
    +332    best_score, best_idx = grouped_result[criterion].max(), grouped_result[criterion].idxmax()
    +333    best_params = grouped_result.iloc[best_idx]
    +334    assert np.isclose(best_params[criterion], best_score)
    +335    best_kwargs = {k: v for k, v in zip(grid_search_parameters, best_params)}
    +336
    +337    return best_kwargs, best_score
     
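For example, once the grid-search has written one csv per validation image, the best setting can be read back like this (the result folder is a placeholder):

```python
from micro_sam.evaluation.instance_segmentation import evaluate_instance_segmentation_grid_search

best_kwargs, best_msa = evaluate_instance_segmentation_grid_search(
    result_dir="experiments/amg/grid_search",
    grid_search_parameters=["pred_iou_thresh", "stability_score_thresh"],
)
print("Best mean segmentation accuracy:", best_msa)
print("Best parameters:", best_kwargs)
```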
    @@ -927,22 +944,22 @@
    Returns:
    -
    332def save_grid_search_best_params(best_kwargs, best_msa, grid_search_result_dir=None):
    -333    # saving the best parameters estimated from grid-search in the `results` folder
    -334    param_df = pd.DataFrame.from_dict([best_kwargs])
    -335    res_df = pd.DataFrame.from_dict([{"best_msa": best_msa}])
    -336    best_param_df = pd.merge(res_df, param_df, left_index=True, right_index=True)
    -337
    -338    path_name = "grid_search_params_amg.csv" if "pred_iou_thresh" and "stability_score_thresh" in best_kwargs \
    -339        else "grid_search_params_instance_segmentation_with_decoder.csv"
    -340
    -341    if grid_search_result_dir is not None:
    -342        os.makedirs(os.path.join(grid_search_result_dir, "results"), exist_ok=True)
    -343        res_path = os.path.join(grid_search_result_dir, "results", path_name)
    -344    else:
    -345        res_path = path_name
    -346
    -347    best_param_df.to_csv(res_path)
    +            
    340def save_grid_search_best_params(best_kwargs, best_msa, grid_search_result_dir=None):
    +341    # saving the best parameters estimated from grid-search in the `results` folder
    +342    param_df = pd.DataFrame.from_dict([best_kwargs])
    +343    res_df = pd.DataFrame.from_dict([{"best_msa": best_msa}])
    +344    best_param_df = pd.merge(res_df, param_df, left_index=True, right_index=True)
    +345
    +346    path_name = "grid_search_params_amg.csv" if "pred_iou_thresh" and "stability_score_thresh" in best_kwargs \
    +347        else "grid_search_params_instance_segmentation_with_decoder.csv"
    +348
    +349    if grid_search_result_dir is not None:
    +350        os.makedirs(os.path.join(grid_search_result_dir, "results"), exist_ok=True)
    +351        res_path = os.path.join(grid_search_result_dir, "results", path_name)
    +352    else:
    +353        res_path = path_name
    +354
    +355    best_param_df.to_csv(res_path)
     
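Continuing the example above, the selected parameters can then be stored under a `results` sub-folder; the output directory is again a placeholder.

```python
from micro_sam.evaluation.instance_segmentation import save_grid_search_best_params

# For AMG-style kwargs this writes experiments/amg/results/grid_search_params_amg.csv.
save_grid_search_best_params(best_kwargs, best_msa, grid_search_result_dir="experiments/amg")
```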
    @@ -960,53 +977,54 @@
    Returns:
    -
    350def run_instance_segmentation_grid_search_and_inference(
    -351    segmenter: Union[AMGBase, InstanceSegmentationWithDecoder],
    -352    grid_search_values: Dict[str, List],
    -353    val_image_paths: List[Union[str, os.PathLike]],
    -354    val_gt_paths: List[Union[str, os.PathLike]],
    -355    test_image_paths: List[Union[str, os.PathLike]],
    -356    embedding_dir: Union[str, os.PathLike],
    -357    prediction_dir: Union[str, os.PathLike],
    -358    result_dir: Union[str, os.PathLike],
    -359    fixed_generate_kwargs: Optional[Dict[str, Any]] = None,
    -360    verbose_gs: bool = True,
    -361) -> None:
    -362    """Run grid search and inference for automatic mask generation.
    -363
    -364    Please refer to the documentation of `run_instance_segmentation_grid_search`
    -365    for details on how to specify the grid search parameters.
    -366
    -367    Args:
    -368        segmenter: The class implementing the instance segmentation functionality.
    -369        grid_search_values: The grid search values for parameters of the `generate` function.
    -370        val_image_paths: The input images for the grid search.
    -371        val_gt_paths: The ground-truth segmentation for the grid search.
    -372        test_image_paths: The input images for inference.
    -373        embedding_dir: Folder to cache the image embeddings.
    -374        prediction_dir: Folder to save the predictions.
    -375        result_dir: Folder to cache the evaluation results per image.
    -376        fixed_generate_kwargs: Fixed keyword arguments for the `generate` method of the segmenter.
    -377        verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    -378    """
    -379    run_instance_segmentation_grid_search(
    -380        segmenter, grid_search_values, val_image_paths, val_gt_paths,
    -381        result_dir=result_dir, embedding_dir=embedding_dir,
    -382        fixed_generate_kwargs=fixed_generate_kwargs, verbose_gs=verbose_gs,
    -383    )
    -384
    -385    best_kwargs, best_msa = evaluate_instance_segmentation_grid_search(result_dir, list(grid_search_values.keys()))
    -386    best_param_str = ", ".join(f"{k} = {v}" for k, v in best_kwargs.items())
    -387    print("Best grid-search result:", best_msa, "with parmeters:\n", best_param_str)
    -388
    -389    save_grid_search_best_params(best_kwargs, best_msa, Path(embedding_dir).parent)
    -390
    -391    generate_kwargs = {} if fixed_generate_kwargs is None else fixed_generate_kwargs
    -392    generate_kwargs.update(best_kwargs)
    -393
    -394    run_instance_segmentation_inference(
    -395        segmenter, test_image_paths, embedding_dir, prediction_dir, generate_kwargs
    -396    )
    +            
    358def run_instance_segmentation_grid_search_and_inference(
    +359    segmenter: Union[AMGBase, InstanceSegmentationWithDecoder],
    +360    grid_search_values: Dict[str, List],
    +361    val_image_paths: List[Union[str, os.PathLike]],
    +362    val_gt_paths: List[Union[str, os.PathLike]],
    +363    test_image_paths: List[Union[str, os.PathLike]],
    +364    embedding_dir: Union[str, os.PathLike],
    +365    prediction_dir: Union[str, os.PathLike],
    +366    result_dir: Union[str, os.PathLike],
    +367    fixed_generate_kwargs: Optional[Dict[str, Any]] = None,
    +368    verbose_gs: bool = True,
    +369) -> None:
    +370    """Run grid search and inference for automatic mask generation.
    +371
    +372    Please refer to the documentation of `run_instance_segmentation_grid_search`
    +373    for details on how to specify the grid search parameters.
    +374
    +375    Args:
    +376        segmenter: The class implementing the instance segmentation functionality.
    +377        grid_search_values: The grid search values for parameters of the `generate` function.
    +378        val_image_paths: The input images for the grid search.
    +379        val_gt_paths: The ground-truth segmentation for the grid search.
    +380        test_image_paths: The input images for inference.
    +381        embedding_dir: Folder to cache the image embeddings.
    +382        prediction_dir: Folder to save the predictions.
    +383        result_dir: Folder to cache the evaluation results per image.
    +384        fixed_generate_kwargs: Fixed keyword arguments for the `generate` method of the segmenter.
    +385        verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    +386    """
    +387    run_instance_segmentation_grid_search(
    +388        segmenter, grid_search_values, val_image_paths, val_gt_paths,
    +389        result_dir=result_dir, embedding_dir=embedding_dir,
    +390        fixed_generate_kwargs=fixed_generate_kwargs, verbose_gs=verbose_gs,
    +391    )
    +392
    +393    best_kwargs, best_msa = evaluate_instance_segmentation_grid_search(result_dir, list(grid_search_values.keys()))
    +394    best_param_str = ", ".join(f"{k} = {v}" for k, v in best_kwargs.items())
    +395    print("Best grid-search result:", best_msa, "with parmeters:\n", best_param_str)
    +396    print()
    +397
    +398    save_grid_search_best_params(best_kwargs, best_msa, Path(embedding_dir).parent)
    +399
    +400    generate_kwargs = {} if fixed_generate_kwargs is None else fixed_generate_kwargs
    +401    generate_kwargs.update(best_kwargs)
    +402
    +403    run_instance_segmentation_inference(
    +404        segmenter, test_image_paths, embedding_dir, prediction_dir, generate_kwargs
    +405    )
     
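Finally for this file, an end-to-end sketch with the decoder-based segmenter. It assumes that `get_predictor_and_decoder` and `InstanceSegmentationWithDecoder` are importable from `micro_sam.instance_segmentation`, as used by the code above, and that the checkpoint and data paths exist; all of these are placeholders.

```python
import os
from glob import glob

from micro_sam.instance_segmentation import get_predictor_and_decoder, InstanceSegmentationWithDecoder
from micro_sam.evaluation import instance_segmentation

# Placeholder checkpoint; it must provide the extra decoder weights.
predictor, decoder = get_predictor_and_decoder(model_type="vit_b", checkpoint_path="checkpoints/vit_b_lm/best.pt")
segmenter = InstanceSegmentationWithDecoder(predictor, decoder)

grid_search_values = instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder()

val_images = sorted(glob("data/val/images/*.tif"))
val_labels = sorted(glob("data/val/labels/*.tif"))
test_images = sorted(glob("data/test/images/*.tif"))

root = "experiments/decoder"
folders = {sub: os.path.join(root, sub) for sub in ("embeddings", "inference", "grid_search")}
for folder in folders.values():
    os.makedirs(folder, exist_ok=True)

instance_segmentation.run_instance_segmentation_grid_search_and_inference(
    segmenter, grid_search_values,
    val_images, val_labels, test_images,
    embedding_dir=folders["embeddings"],
    prediction_dir=folders["inference"],
    result_dir=folders["grid_search"],
)
```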
diff --git a/micro_sam/inference.html b/micro_sam/inference.html
index 29b0ffab..c15c0357 100644
--- a/micro_sam/inference.html
+++ b/micro_sam/inference.html
@@ -83,151 +83,155 @@

    26 return_instance_segmentation: bool = True, 27 segmentation_ids: Optional[list] = None, 28 reduce_multimasking: bool = True, - 29 logits_masks: Optional[torch.Tensor] = None - 30): - 31 """Run batched inference for input prompts. - 32 - 33 Args: - 34 predictor: The segment anything predictor. - 35 image: The input image. - 36 batch_size: The batch size to use for inference. - 37 boxes: The box prompts. Array of shape N_PROMPTS x 4. - 38 The bounding boxes are represented by [MIN_X, MIN_Y, MAX_X, MAX_Y]. - 39 points: The point prompt coordinates. Array of shape N_PROMPTS x 1 x 2. - 40 The points are represented by their coordinates [X, Y], which are given - 41 in the last dimension. - 42 point_labels: The point prompt labels. Array of shape N_PROMPTS x 1. - 43 The labels are either 0 (negative prompt) or 1 (positive prompt). - 44 multimasking: Whether to predict with 3 or 1 mask. - 45 embedding_path: Cache path for the image embeddings. - 46 return_instance_segmentation: Whether to return a instance segmentation - 47 or the individual mask data. - 48 segmentation_ids: Fixed segmentation ids to assign to the masks - 49 derived from the prompts. - 50 reduce_multimasking: Whether to choose the most likely masks with - 51 highest ious from multimasking - 52 logits_masks: The logits masks. Array of shape N_PROMPTS x 1 x 256 x 256. - 53 Whether to use the logits masks from previous segmentation. - 54 - 55 Returns: - 56 The predicted segmentation masks. - 57 """ - 58 if multimasking and (segmentation_ids is not None) and (not return_instance_segmentation): - 59 raise NotImplementedError - 60 - 61 if (points is None) != (point_labels is None): - 62 raise ValueError( - 63 "If you have point prompts both `points` and `point_labels` have to be passed, " - 64 "but you passed only one of them." - 65 ) - 66 - 67 have_points = points is not None - 68 have_boxes = boxes is not None - 69 have_logits = logits_masks is not None - 70 if (not have_points) and (not have_boxes): - 71 raise ValueError("Point and/or box prompts have to be passed, you passed neither.") - 72 - 73 if have_points and (len(point_labels) != len(points)): - 74 raise ValueError( - 75 "The number of point coordinates and labels does not match: " - 76 f"{len(point_labels)} != {len(points)}" - 77 ) - 78 - 79 if (have_points and have_boxes) and (len(points) != len(boxes)): - 80 raise ValueError( - 81 "The number of point and box prompts does not match: " - 82 f"{len(points)} != {len(boxes)}" - 83 ) - 84 - 85 if have_logits: - 86 if have_points and (len(logits_masks) != len(point_labels)): - 87 raise ValueError( - 88 "The number of point and logits does not match: " - 89 f"{len(points) != len(logits_masks)}" - 90 ) - 91 elif have_boxes and (len(logits_masks) != len(boxes)): - 92 raise ValueError( - 93 "The number of boxes and logits does not match: " - 94 f"{len(boxes)} != {len(logits_masks)}" - 95 ) - 96 - 97 n_prompts = boxes.shape[0] if have_boxes else points.shape[0] + 29 logits_masks: Optional[torch.Tensor] = None, + 30 verbose_embeddings: bool = True, + 31): + 32 """Run batched inference for input prompts. + 33 + 34 Args: + 35 predictor: The segment anything predictor. + 36 image: The input image. + 37 batch_size: The batch size to use for inference. + 38 boxes: The box prompts. Array of shape N_PROMPTS x 4. + 39 The bounding boxes are represented by [MIN_X, MIN_Y, MAX_X, MAX_Y]. + 40 points: The point prompt coordinates. Array of shape N_PROMPTS x 1 x 2. 
+ 41 The points are represented by their coordinates [X, Y], which are given + 42 in the last dimension. + 43 point_labels: The point prompt labels. Array of shape N_PROMPTS x 1. + 44 The labels are either 0 (negative prompt) or 1 (positive prompt). + 45 multimasking: Whether to predict with 3 or 1 mask. + 46 embedding_path: Cache path for the image embeddings. + 47 return_instance_segmentation: Whether to return a instance segmentation + 48 or the individual mask data. + 49 segmentation_ids: Fixed segmentation ids to assign to the masks + 50 derived from the prompts. + 51 reduce_multimasking: Whether to choose the most likely masks with + 52 highest ious from multimasking + 53 logits_masks: The logits masks. Array of shape N_PROMPTS x 1 x 256 x 256. + 54 Whether to use the logits masks from previous segmentation. + 55 verbose_embeddings: Whether to show progress outputs of computing image embeddings. + 56 + 57 Returns: + 58 The predicted segmentation masks. + 59 """ + 60 if multimasking and (segmentation_ids is not None) and (not return_instance_segmentation): + 61 raise NotImplementedError + 62 + 63 if (points is None) != (point_labels is None): + 64 raise ValueError( + 65 "If you have point prompts both `points` and `point_labels` have to be passed, " + 66 "but you passed only one of them." + 67 ) + 68 + 69 have_points = points is not None + 70 have_boxes = boxes is not None + 71 have_logits = logits_masks is not None + 72 if (not have_points) and (not have_boxes): + 73 raise ValueError("Point and/or box prompts have to be passed, you passed neither.") + 74 + 75 if have_points and (len(point_labels) != len(points)): + 76 raise ValueError( + 77 "The number of point coordinates and labels does not match: " + 78 f"{len(point_labels)} != {len(points)}" + 79 ) + 80 + 81 if (have_points and have_boxes) and (len(points) != len(boxes)): + 82 raise ValueError( + 83 "The number of point and box prompts does not match: " + 84 f"{len(points)} != {len(boxes)}" + 85 ) + 86 + 87 if have_logits: + 88 if have_points and (len(logits_masks) != len(point_labels)): + 89 raise ValueError( + 90 "The number of point and logits does not match: " + 91 f"{len(points) != len(logits_masks)}" + 92 ) + 93 elif have_boxes and (len(logits_masks) != len(boxes)): + 94 raise ValueError( + 95 "The number of boxes and logits does not match: " + 96 f"{len(boxes)} != {len(logits_masks)}" + 97 ) 98 - 99 if (segmentation_ids is not None) and (len(segmentation_ids) != n_prompts): -100 raise ValueError( -101 "The number of segmentation ids and prompts does not match: " -102 f"{len(segmentation_ids)} != {n_prompts}" -103 ) -104 -105 # Compute the image embeddings. -106 image_embeddings = util.precompute_image_embeddings(predictor, image, embedding_path, ndim=2) -107 util.set_precomputed(predictor, image_embeddings) -108 -109 # Determine the number of batches. -110 n_batches = int(np.ceil(float(n_prompts) / batch_size)) -111 -112 # Preprocess the prompts. 
-113 device = predictor.device -114 transform_function = ResizeLongestSide(1024) -115 image_shape = predictor.original_size -116 if have_boxes: -117 boxes = transform_function.apply_boxes(boxes, image_shape) -118 boxes = torch.tensor(boxes, dtype=torch.float32).to(device) -119 if have_points: -120 points = transform_function.apply_coords(points, image_shape) -121 points = torch.tensor(points, dtype=torch.float32).to(device) -122 point_labels = torch.tensor(point_labels, dtype=torch.float32).to(device) -123 -124 masks = amg_utils.MaskData() -125 for batch_idx in range(n_batches): -126 batch_start = batch_idx * batch_size -127 batch_stop = min((batch_idx + 1) * batch_size, n_prompts) -128 -129 batch_boxes = boxes[batch_start:batch_stop] if have_boxes else None -130 batch_points = points[batch_start:batch_stop] if have_points else None -131 batch_labels = point_labels[batch_start:batch_stop] if have_points else None -132 batch_logits = logits_masks[batch_start:batch_stop] if have_logits else None -133 -134 batch_masks, batch_ious, batch_logits = predictor.predict_torch( -135 point_coords=batch_points, -136 point_labels=batch_labels, -137 boxes=batch_boxes, -138 mask_input=batch_logits, -139 multimask_output=multimasking -140 ) -141 -142 # If we expect to reduce the masks from multimasking and use multi-masking, -143 # then we need to select the most likely mask (according to the predicted IOU) here. -144 if reduce_multimasking and multimasking: -145 _, max_index = batch_ious.max(axis=1) -146 batch_masks = torch.cat([batch_masks[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) -147 batch_ious = torch.cat([batch_ious[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) -148 batch_logits = torch.cat([batch_logits[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) -149 -150 batch_data = amg_utils.MaskData(masks=batch_masks.flatten(0, 1), iou_preds=batch_ious.flatten(0, 1)) -151 batch_data["masks"] = (batch_data["masks"] > predictor.model.mask_threshold).type(torch.bool) -152 batch_data["boxes"] = batched_mask_to_box(batch_data["masks"]) -153 batch_data["logits"] = batch_logits -154 -155 masks.cat(batch_data) -156 -157 # Mask data to records. -158 masks = [ -159 { -160 "segmentation": masks["masks"][idx], -161 "area": masks["masks"][idx].sum(), -162 "bbox": amg_utils.box_xyxy_to_xywh(masks["boxes"][idx]).tolist(), -163 "predicted_iou": masks["iou_preds"][idx].item(), -164 "seg_id": idx + 1 if segmentation_ids is None else int(segmentation_ids[idx]), -165 "logits": masks["logits"][idx] -166 } -167 for idx in range(len(masks["masks"])) -168 ] -169 -170 if return_instance_segmentation: -171 masks = mask_data_to_segmentation(masks, with_background=False, min_object_size=0) -172 -173 return masks + 99 n_prompts = boxes.shape[0] if have_boxes else points.shape[0] +100 +101 if (segmentation_ids is not None) and (len(segmentation_ids) != n_prompts): +102 raise ValueError( +103 "The number of segmentation ids and prompts does not match: " +104 f"{len(segmentation_ids)} != {n_prompts}" +105 ) +106 +107 # Compute the image embeddings. +108 image_embeddings = util.precompute_image_embeddings( +109 predictor, image, embedding_path, ndim=2, verbose=verbose_embeddings +110 ) +111 util.set_precomputed(predictor, image_embeddings) +112 +113 # Determine the number of batches. +114 n_batches = int(np.ceil(float(n_prompts) / batch_size)) +115 +116 # Preprocess the prompts. 
+117 device = predictor.device +118 transform_function = ResizeLongestSide(1024) +119 image_shape = predictor.original_size +120 if have_boxes: +121 boxes = transform_function.apply_boxes(boxes, image_shape) +122 boxes = torch.tensor(boxes, dtype=torch.float32).to(device) +123 if have_points: +124 points = transform_function.apply_coords(points, image_shape) +125 points = torch.tensor(points, dtype=torch.float32).to(device) +126 point_labels = torch.tensor(point_labels, dtype=torch.float32).to(device) +127 +128 masks = amg_utils.MaskData() +129 for batch_idx in range(n_batches): +130 batch_start = batch_idx * batch_size +131 batch_stop = min((batch_idx + 1) * batch_size, n_prompts) +132 +133 batch_boxes = boxes[batch_start:batch_stop] if have_boxes else None +134 batch_points = points[batch_start:batch_stop] if have_points else None +135 batch_labels = point_labels[batch_start:batch_stop] if have_points else None +136 batch_logits = logits_masks[batch_start:batch_stop] if have_logits else None +137 +138 batch_masks, batch_ious, batch_logits = predictor.predict_torch( +139 point_coords=batch_points, +140 point_labels=batch_labels, +141 boxes=batch_boxes, +142 mask_input=batch_logits, +143 multimask_output=multimasking +144 ) +145 +146 # If we expect to reduce the masks from multimasking and use multi-masking, +147 # then we need to select the most likely mask (according to the predicted IOU) here. +148 if reduce_multimasking and multimasking: +149 _, max_index = batch_ious.max(axis=1) +150 batch_masks = torch.cat([batch_masks[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) +151 batch_ious = torch.cat([batch_ious[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) +152 batch_logits = torch.cat([batch_logits[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) +153 +154 batch_data = amg_utils.MaskData(masks=batch_masks.flatten(0, 1), iou_preds=batch_ious.flatten(0, 1)) +155 batch_data["masks"] = (batch_data["masks"] > predictor.model.mask_threshold).type(torch.bool) +156 batch_data["boxes"] = batched_mask_to_box(batch_data["masks"]) +157 batch_data["logits"] = batch_logits +158 +159 masks.cat(batch_data) +160 +161 # Mask data to records. +162 masks = [ +163 { +164 "segmentation": masks["masks"][idx], +165 "area": masks["masks"][idx].sum(), +166 "bbox": amg_utils.box_xyxy_to_xywh(masks["boxes"][idx]).tolist(), +167 "predicted_iou": masks["iou_preds"][idx].item(), +168 "seg_id": idx + 1 if segmentation_ids is None else int(segmentation_ids[idx]), +169 "logits": masks["logits"][idx] +170 } +171 for idx in range(len(masks["masks"])) +172 ] +173 +174 if return_instance_segmentation: +175 masks = mask_data_to_segmentation(masks, with_background=False, min_object_size=0) +176 +177 return masks

    @@ -238,7 +242,7 @@

    @torch.no_grad()
    def - batched_inference( predictor: segment_anything.predictor.SamPredictor, image: numpy.ndarray, batch_size: int, boxes: Optional[numpy.ndarray] = None, points: Optional[numpy.ndarray] = None, point_labels: Optional[numpy.ndarray] = None, multimasking: bool = False, embedding_path: Union[str, os.PathLike, NoneType] = None, return_instance_segmentation: bool = True, segmentation_ids: Optional[list] = None, reduce_multimasking: bool = True, logits_masks: Optional[torch.Tensor] = None): + batched_inference( predictor: segment_anything.predictor.SamPredictor, image: numpy.ndarray, batch_size: int, boxes: Optional[numpy.ndarray] = None, points: Optional[numpy.ndarray] = None, point_labels: Optional[numpy.ndarray] = None, multimasking: bool = False, embedding_path: Union[str, os.PathLike, NoneType] = None, return_instance_segmentation: bool = True, segmentation_ids: Optional[list] = None, reduce_multimasking: bool = True, logits_masks: Optional[torch.Tensor] = None, verbose_embeddings: bool = True): @@ -257,151 +261,155 @@

    27 return_instance_segmentation: bool = True, 28 segmentation_ids: Optional[list] = None, 29 reduce_multimasking: bool = True, - 30 logits_masks: Optional[torch.Tensor] = None - 31): - 32 """Run batched inference for input prompts. - 33 - 34 Args: - 35 predictor: The segment anything predictor. - 36 image: The input image. - 37 batch_size: The batch size to use for inference. - 38 boxes: The box prompts. Array of shape N_PROMPTS x 4. - 39 The bounding boxes are represented by [MIN_X, MIN_Y, MAX_X, MAX_Y]. - 40 points: The point prompt coordinates. Array of shape N_PROMPTS x 1 x 2. - 41 The points are represented by their coordinates [X, Y], which are given - 42 in the last dimension. - 43 point_labels: The point prompt labels. Array of shape N_PROMPTS x 1. - 44 The labels are either 0 (negative prompt) or 1 (positive prompt). - 45 multimasking: Whether to predict with 3 or 1 mask. - 46 embedding_path: Cache path for the image embeddings. - 47 return_instance_segmentation: Whether to return a instance segmentation - 48 or the individual mask data. - 49 segmentation_ids: Fixed segmentation ids to assign to the masks - 50 derived from the prompts. - 51 reduce_multimasking: Whether to choose the most likely masks with - 52 highest ious from multimasking - 53 logits_masks: The logits masks. Array of shape N_PROMPTS x 1 x 256 x 256. - 54 Whether to use the logits masks from previous segmentation. - 55 - 56 Returns: - 57 The predicted segmentation masks. - 58 """ - 59 if multimasking and (segmentation_ids is not None) and (not return_instance_segmentation): - 60 raise NotImplementedError - 61 - 62 if (points is None) != (point_labels is None): - 63 raise ValueError( - 64 "If you have point prompts both `points` and `point_labels` have to be passed, " - 65 "but you passed only one of them." - 66 ) - 67 - 68 have_points = points is not None - 69 have_boxes = boxes is not None - 70 have_logits = logits_masks is not None - 71 if (not have_points) and (not have_boxes): - 72 raise ValueError("Point and/or box prompts have to be passed, you passed neither.") - 73 - 74 if have_points and (len(point_labels) != len(points)): - 75 raise ValueError( - 76 "The number of point coordinates and labels does not match: " - 77 f"{len(point_labels)} != {len(points)}" - 78 ) - 79 - 80 if (have_points and have_boxes) and (len(points) != len(boxes)): - 81 raise ValueError( - 82 "The number of point and box prompts does not match: " - 83 f"{len(points)} != {len(boxes)}" - 84 ) - 85 - 86 if have_logits: - 87 if have_points and (len(logits_masks) != len(point_labels)): - 88 raise ValueError( - 89 "The number of point and logits does not match: " - 90 f"{len(points) != len(logits_masks)}" - 91 ) - 92 elif have_boxes and (len(logits_masks) != len(boxes)): - 93 raise ValueError( - 94 "The number of boxes and logits does not match: " - 95 f"{len(boxes)} != {len(logits_masks)}" - 96 ) - 97 - 98 n_prompts = boxes.shape[0] if have_boxes else points.shape[0] + 30 logits_masks: Optional[torch.Tensor] = None, + 31 verbose_embeddings: bool = True, + 32): + 33 """Run batched inference for input prompts. + 34 + 35 Args: + 36 predictor: The segment anything predictor. + 37 image: The input image. + 38 batch_size: The batch size to use for inference. + 39 boxes: The box prompts. Array of shape N_PROMPTS x 4. + 40 The bounding boxes are represented by [MIN_X, MIN_Y, MAX_X, MAX_Y]. + 41 points: The point prompt coordinates. Array of shape N_PROMPTS x 1 x 2. 
+ 42 The points are represented by their coordinates [X, Y], which are given + 43 in the last dimension. + 44 point_labels: The point prompt labels. Array of shape N_PROMPTS x 1. + 45 The labels are either 0 (negative prompt) or 1 (positive prompt). + 46 multimasking: Whether to predict with 3 or 1 mask. + 47 embedding_path: Cache path for the image embeddings. + 48 return_instance_segmentation: Whether to return a instance segmentation + 49 or the individual mask data. + 50 segmentation_ids: Fixed segmentation ids to assign to the masks + 51 derived from the prompts. + 52 reduce_multimasking: Whether to choose the most likely masks with + 53 highest ious from multimasking + 54 logits_masks: The logits masks. Array of shape N_PROMPTS x 1 x 256 x 256. + 55 Whether to use the logits masks from previous segmentation. + 56 verbose_embeddings: Whether to show progress outputs of computing image embeddings. + 57 + 58 Returns: + 59 The predicted segmentation masks. + 60 """ + 61 if multimasking and (segmentation_ids is not None) and (not return_instance_segmentation): + 62 raise NotImplementedError + 63 + 64 if (points is None) != (point_labels is None): + 65 raise ValueError( + 66 "If you have point prompts both `points` and `point_labels` have to be passed, " + 67 "but you passed only one of them." + 68 ) + 69 + 70 have_points = points is not None + 71 have_boxes = boxes is not None + 72 have_logits = logits_masks is not None + 73 if (not have_points) and (not have_boxes): + 74 raise ValueError("Point and/or box prompts have to be passed, you passed neither.") + 75 + 76 if have_points and (len(point_labels) != len(points)): + 77 raise ValueError( + 78 "The number of point coordinates and labels does not match: " + 79 f"{len(point_labels)} != {len(points)}" + 80 ) + 81 + 82 if (have_points and have_boxes) and (len(points) != len(boxes)): + 83 raise ValueError( + 84 "The number of point and box prompts does not match: " + 85 f"{len(points)} != {len(boxes)}" + 86 ) + 87 + 88 if have_logits: + 89 if have_points and (len(logits_masks) != len(point_labels)): + 90 raise ValueError( + 91 "The number of point and logits does not match: " + 92 f"{len(points) != len(logits_masks)}" + 93 ) + 94 elif have_boxes and (len(logits_masks) != len(boxes)): + 95 raise ValueError( + 96 "The number of boxes and logits does not match: " + 97 f"{len(boxes)} != {len(logits_masks)}" + 98 ) 99 -100 if (segmentation_ids is not None) and (len(segmentation_ids) != n_prompts): -101 raise ValueError( -102 "The number of segmentation ids and prompts does not match: " -103 f"{len(segmentation_ids)} != {n_prompts}" -104 ) -105 -106 # Compute the image embeddings. -107 image_embeddings = util.precompute_image_embeddings(predictor, image, embedding_path, ndim=2) -108 util.set_precomputed(predictor, image_embeddings) -109 -110 # Determine the number of batches. -111 n_batches = int(np.ceil(float(n_prompts) / batch_size)) -112 -113 # Preprocess the prompts. 
-114 device = predictor.device -115 transform_function = ResizeLongestSide(1024) -116 image_shape = predictor.original_size -117 if have_boxes: -118 boxes = transform_function.apply_boxes(boxes, image_shape) -119 boxes = torch.tensor(boxes, dtype=torch.float32).to(device) -120 if have_points: -121 points = transform_function.apply_coords(points, image_shape) -122 points = torch.tensor(points, dtype=torch.float32).to(device) -123 point_labels = torch.tensor(point_labels, dtype=torch.float32).to(device) -124 -125 masks = amg_utils.MaskData() -126 for batch_idx in range(n_batches): -127 batch_start = batch_idx * batch_size -128 batch_stop = min((batch_idx + 1) * batch_size, n_prompts) -129 -130 batch_boxes = boxes[batch_start:batch_stop] if have_boxes else None -131 batch_points = points[batch_start:batch_stop] if have_points else None -132 batch_labels = point_labels[batch_start:batch_stop] if have_points else None -133 batch_logits = logits_masks[batch_start:batch_stop] if have_logits else None -134 -135 batch_masks, batch_ious, batch_logits = predictor.predict_torch( -136 point_coords=batch_points, -137 point_labels=batch_labels, -138 boxes=batch_boxes, -139 mask_input=batch_logits, -140 multimask_output=multimasking -141 ) -142 -143 # If we expect to reduce the masks from multimasking and use multi-masking, -144 # then we need to select the most likely mask (according to the predicted IOU) here. -145 if reduce_multimasking and multimasking: -146 _, max_index = batch_ious.max(axis=1) -147 batch_masks = torch.cat([batch_masks[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) -148 batch_ious = torch.cat([batch_ious[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) -149 batch_logits = torch.cat([batch_logits[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) -150 -151 batch_data = amg_utils.MaskData(masks=batch_masks.flatten(0, 1), iou_preds=batch_ious.flatten(0, 1)) -152 batch_data["masks"] = (batch_data["masks"] > predictor.model.mask_threshold).type(torch.bool) -153 batch_data["boxes"] = batched_mask_to_box(batch_data["masks"]) -154 batch_data["logits"] = batch_logits -155 -156 masks.cat(batch_data) -157 -158 # Mask data to records. -159 masks = [ -160 { -161 "segmentation": masks["masks"][idx], -162 "area": masks["masks"][idx].sum(), -163 "bbox": amg_utils.box_xyxy_to_xywh(masks["boxes"][idx]).tolist(), -164 "predicted_iou": masks["iou_preds"][idx].item(), -165 "seg_id": idx + 1 if segmentation_ids is None else int(segmentation_ids[idx]), -166 "logits": masks["logits"][idx] -167 } -168 for idx in range(len(masks["masks"])) -169 ] -170 -171 if return_instance_segmentation: -172 masks = mask_data_to_segmentation(masks, with_background=False, min_object_size=0) -173 -174 return masks +100 n_prompts = boxes.shape[0] if have_boxes else points.shape[0] +101 +102 if (segmentation_ids is not None) and (len(segmentation_ids) != n_prompts): +103 raise ValueError( +104 "The number of segmentation ids and prompts does not match: " +105 f"{len(segmentation_ids)} != {n_prompts}" +106 ) +107 +108 # Compute the image embeddings. +109 image_embeddings = util.precompute_image_embeddings( +110 predictor, image, embedding_path, ndim=2, verbose=verbose_embeddings +111 ) +112 util.set_precomputed(predictor, image_embeddings) +113 +114 # Determine the number of batches. +115 n_batches = int(np.ceil(float(n_prompts) / batch_size)) +116 +117 # Preprocess the prompts. 
+118 device = predictor.device +119 transform_function = ResizeLongestSide(1024) +120 image_shape = predictor.original_size +121 if have_boxes: +122 boxes = transform_function.apply_boxes(boxes, image_shape) +123 boxes = torch.tensor(boxes, dtype=torch.float32).to(device) +124 if have_points: +125 points = transform_function.apply_coords(points, image_shape) +126 points = torch.tensor(points, dtype=torch.float32).to(device) +127 point_labels = torch.tensor(point_labels, dtype=torch.float32).to(device) +128 +129 masks = amg_utils.MaskData() +130 for batch_idx in range(n_batches): +131 batch_start = batch_idx * batch_size +132 batch_stop = min((batch_idx + 1) * batch_size, n_prompts) +133 +134 batch_boxes = boxes[batch_start:batch_stop] if have_boxes else None +135 batch_points = points[batch_start:batch_stop] if have_points else None +136 batch_labels = point_labels[batch_start:batch_stop] if have_points else None +137 batch_logits = logits_masks[batch_start:batch_stop] if have_logits else None +138 +139 batch_masks, batch_ious, batch_logits = predictor.predict_torch( +140 point_coords=batch_points, +141 point_labels=batch_labels, +142 boxes=batch_boxes, +143 mask_input=batch_logits, +144 multimask_output=multimasking +145 ) +146 +147 # If we expect to reduce the masks from multimasking and use multi-masking, +148 # then we need to select the most likely mask (according to the predicted IOU) here. +149 if reduce_multimasking and multimasking: +150 _, max_index = batch_ious.max(axis=1) +151 batch_masks = torch.cat([batch_masks[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) +152 batch_ious = torch.cat([batch_ious[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) +153 batch_logits = torch.cat([batch_logits[i, max_id][None] for i, max_id in enumerate(max_index)]).unsqueeze(1) +154 +155 batch_data = amg_utils.MaskData(masks=batch_masks.flatten(0, 1), iou_preds=batch_ious.flatten(0, 1)) +156 batch_data["masks"] = (batch_data["masks"] > predictor.model.mask_threshold).type(torch.bool) +157 batch_data["boxes"] = batched_mask_to_box(batch_data["masks"]) +158 batch_data["logits"] = batch_logits +159 +160 masks.cat(batch_data) +161 +162 # Mask data to records. +163 masks = [ +164 { +165 "segmentation": masks["masks"][idx], +166 "area": masks["masks"][idx].sum(), +167 "bbox": amg_utils.box_xyxy_to_xywh(masks["boxes"][idx]).tolist(), +168 "predicted_iou": masks["iou_preds"][idx].item(), +169 "seg_id": idx + 1 if segmentation_ids is None else int(segmentation_ids[idx]), +170 "logits": masks["logits"][idx] +171 } +172 for idx in range(len(masks["masks"])) +173 ] +174 +175 if return_instance_segmentation: +176 masks = mask_data_to_segmentation(masks, with_background=False, min_object_size=0) +177 +178 return masks @@ -430,6 +438,7 @@

    Arguments:
    highest ious from multimasking
  • logits_masks: The logits masks. Array of shape N_PROMPTS x 1 x 256 x 256. Whether to use the logits masks from previous segmentation.
  • +
  • verbose_embeddings: Whether to show progress outputs of computing image embeddings.
  • Returns:
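For orientation, here is a minimal usage sketch for the batched_inference function documented above. The signature is taken from this diff; get_sam_model from micro_sam.util as well as the placeholder image and box values are assumptions for illustration.

import numpy as np
from micro_sam.util import get_sam_model        # assumed helper for loading a SAM predictor
from micro_sam.inference import batched_inference

# Load a Segment Anything predictor (any supported model_type can be used here).
predictor = get_sam_model(model_type="vit_b")

# Placeholder image and two box prompts in [MIN_X, MIN_Y, MAX_X, MAX_Y] format (see the docstring above).
image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")
boxes = np.array([[10, 10, 120, 120], [200, 50, 320, 180]], dtype="float64")

# Run batched prompt-based inference; an instance segmentation is returned by default.
segmentation = batched_inference(
    predictor, image, batch_size=8, boxes=boxes, verbose_embeddings=False,
)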
    diff --git a/micro_sam/training/training.html b/micro_sam/training/training.html index 5c0befb0..3b10930b 100644 --- a/micro_sam/training/training.html +++ b/micro_sam/training/training.html @@ -82,7 +82,7 @@

    10 11try: 12 from qtpy.QtCore import QObject - 13except ModuleNotFoundError: + 13except Exception: 14 QObject = Any 15 16from torch.optim.lr_scheduler import _LRScheduler diff --git a/search.js b/search.js index 5f99fe11..f7396032 100644 --- a/search.js +++ b/search.js @@ -1,6 +1,6 @@ window.pdocSearch = (function(){ /** elasticlunr - http://weixsong.github.io * Copyright (C) 2017 Oliver Nightingale * Copyright (C) 2017 Wei Song * MIT Licensed */!function(){function e(e){if(null===e||"object"!=typeof e)return e;var t=e.constructor();for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n]);return t}var t=function(e){var n=new t.Index;return n.pipeline.add(t.trimmer,t.stopWordFilter,t.stemmer),e&&e.call(n,n),n};t.version="0.9.5",lunr=t,t.utils={},t.utils.warn=function(e){return function(t){e.console&&console.warn&&console.warn(t)}}(this),t.utils.toString=function(e){return void 0===e||null===e?"":e.toString()},t.EventEmitter=function(){this.events={}},t.EventEmitter.prototype.addListener=function(){var e=Array.prototype.slice.call(arguments),t=e.pop(),n=e;if("function"!=typeof t)throw new TypeError("last argument must be a function");n.forEach(function(e){this.hasHandler(e)||(this.events[e]=[]),this.events[e].push(t)},this)},t.EventEmitter.prototype.removeListener=function(e,t){if(this.hasHandler(e)){var n=this.events[e].indexOf(t);-1!==n&&(this.events[e].splice(n,1),0==this.events[e].length&&delete this.events[e])}},t.EventEmitter.prototype.emit=function(e){if(this.hasHandler(e)){var t=Array.prototype.slice.call(arguments,1);this.events[e].forEach(function(e){e.apply(void 0,t)},this)}},t.EventEmitter.prototype.hasHandler=function(e){return e in this.events},t.tokenizer=function(e){if(!arguments.length||null===e||void 0===e)return[];if(Array.isArray(e)){var n=e.filter(function(e){return null===e||void 0===e?!1:!0});n=n.map(function(e){return t.utils.toString(e).toLowerCase()});var i=[];return n.forEach(function(e){var n=e.split(t.tokenizer.seperator);i=i.concat(n)},this),i}return e.toString().trim().toLowerCase().split(t.tokenizer.seperator)},t.tokenizer.defaultSeperator=/[\s\-]+/,t.tokenizer.seperator=t.tokenizer.defaultSeperator,t.tokenizer.setSeperator=function(e){null!==e&&void 0!==e&&"object"==typeof e&&(t.tokenizer.seperator=e)},t.tokenizer.resetSeperator=function(){t.tokenizer.seperator=t.tokenizer.defaultSeperator},t.tokenizer.getSeperator=function(){return t.tokenizer.seperator},t.Pipeline=function(){this._queue=[]},t.Pipeline.registeredFunctions={},t.Pipeline.registerFunction=function(e,n){n in t.Pipeline.registeredFunctions&&t.utils.warn("Overwriting existing registered function: "+n),e.label=n,t.Pipeline.registeredFunctions[n]=e},t.Pipeline.getRegisteredFunction=function(e){return e in t.Pipeline.registeredFunctions!=!0?null:t.Pipeline.registeredFunctions[e]},t.Pipeline.warnIfFunctionNotRegistered=function(e){var n=e.label&&e.label in this.registeredFunctions;n||t.utils.warn("Function is not registered with pipeline. 
This may cause problems when serialising the index.\n",e)},t.Pipeline.load=function(e){var n=new t.Pipeline;return e.forEach(function(e){var i=t.Pipeline.getRegisteredFunction(e);if(!i)throw new Error("Cannot load un-registered function: "+e);n.add(i)}),n},t.Pipeline.prototype.add=function(){var e=Array.prototype.slice.call(arguments);e.forEach(function(e){t.Pipeline.warnIfFunctionNotRegistered(e),this._queue.push(e)},this)},t.Pipeline.prototype.after=function(e,n){t.Pipeline.warnIfFunctionNotRegistered(n);var i=this._queue.indexOf(e);if(-1===i)throw new Error("Cannot find existingFn");this._queue.splice(i+1,0,n)},t.Pipeline.prototype.before=function(e,n){t.Pipeline.warnIfFunctionNotRegistered(n);var i=this._queue.indexOf(e);if(-1===i)throw new Error("Cannot find existingFn");this._queue.splice(i,0,n)},t.Pipeline.prototype.remove=function(e){var t=this._queue.indexOf(e);-1!==t&&this._queue.splice(t,1)},t.Pipeline.prototype.run=function(e){for(var t=[],n=e.length,i=this._queue.length,o=0;n>o;o++){for(var r=e[o],s=0;i>s&&(r=this._queue[s](r,o,e),void 0!==r&&null!==r);s++);void 0!==r&&null!==r&&t.push(r)}return t},t.Pipeline.prototype.reset=function(){this._queue=[]},t.Pipeline.prototype.get=function(){return this._queue},t.Pipeline.prototype.toJSON=function(){return this._queue.map(function(e){return t.Pipeline.warnIfFunctionNotRegistered(e),e.label})},t.Index=function(){this._fields=[],this._ref="id",this.pipeline=new t.Pipeline,this.documentStore=new t.DocumentStore,this.index={},this.eventEmitter=new t.EventEmitter,this._idfCache={},this.on("add","remove","update",function(){this._idfCache={}}.bind(this))},t.Index.prototype.on=function(){var e=Array.prototype.slice.call(arguments);return this.eventEmitter.addListener.apply(this.eventEmitter,e)},t.Index.prototype.off=function(e,t){return this.eventEmitter.removeListener(e,t)},t.Index.load=function(e){e.version!==t.version&&t.utils.warn("version mismatch: current "+t.version+" importing "+e.version);var n=new this;n._fields=e.fields,n._ref=e.ref,n.documentStore=t.DocumentStore.load(e.documentStore),n.pipeline=t.Pipeline.load(e.pipeline),n.index={};for(var i in e.index)n.index[i]=t.InvertedIndex.load(e.index[i]);return n},t.Index.prototype.addField=function(e){return this._fields.push(e),this.index[e]=new t.InvertedIndex,this},t.Index.prototype.setRef=function(e){return this._ref=e,this},t.Index.prototype.saveDocument=function(e){return this.documentStore=new t.DocumentStore(e),this},t.Index.prototype.addDoc=function(e,n){if(e){var n=void 0===n?!0:n,i=e[this._ref];this.documentStore.addDoc(i,e),this._fields.forEach(function(n){var o=this.pipeline.run(t.tokenizer(e[n]));this.documentStore.addFieldLength(i,n,o.length);var r={};o.forEach(function(e){e in r?r[e]+=1:r[e]=1},this);for(var s in r){var u=r[s];u=Math.sqrt(u),this.index[n].addToken(s,{ref:i,tf:u})}},this),n&&this.eventEmitter.emit("add",e,this)}},t.Index.prototype.removeDocByRef=function(e){if(e&&this.documentStore.isDocStored()!==!1&&this.documentStore.hasDoc(e)){var t=this.documentStore.getDoc(e);this.removeDoc(t,!1)}},t.Index.prototype.removeDoc=function(e,n){if(e){var n=void 0===n?!0:n,i=e[this._ref];this.documentStore.hasDoc(i)&&(this.documentStore.removeDoc(i),this._fields.forEach(function(n){var o=this.pipeline.run(t.tokenizer(e[n]));o.forEach(function(e){this.index[n].removeToken(e,i)},this)},this),n&&this.eventEmitter.emit("remove",e,this))}},t.Index.prototype.updateDoc=function(e,t){var t=void 
0===t?!0:t;this.removeDocByRef(e[this._ref],!1),this.addDoc(e,!1),t&&this.eventEmitter.emit("update",e,this)},t.Index.prototype.idf=function(e,t){var n="@"+t+"/"+e;if(Object.prototype.hasOwnProperty.call(this._idfCache,n))return this._idfCache[n];var i=this.index[t].getDocFreq(e),o=1+Math.log(this.documentStore.length/(i+1));return this._idfCache[n]=o,o},t.Index.prototype.getFields=function(){return this._fields.slice()},t.Index.prototype.search=function(e,n){if(!e)return[];e="string"==typeof e?{any:e}:JSON.parse(JSON.stringify(e));var i=null;null!=n&&(i=JSON.stringify(n));for(var o=new t.Configuration(i,this.getFields()).get(),r={},s=Object.keys(e),u=0;u0&&t.push(e);for(var i in n)"docs"!==i&&"df"!==i&&this.expandToken(e+i,t,n[i]);return t},t.InvertedIndex.prototype.toJSON=function(){return{root:this.root}},t.Configuration=function(e,n){var e=e||"";if(void 0==n||null==n)throw new Error("fields should not be null");this.config={};var i;try{i=JSON.parse(e),this.buildUserConfig(i,n)}catch(o){t.utils.warn("user configuration parse failed, will use default configuration"),this.buildDefaultConfig(n)}},t.Configuration.prototype.buildDefaultConfig=function(e){this.reset(),e.forEach(function(e){this.config[e]={boost:1,bool:"OR",expand:!1}},this)},t.Configuration.prototype.buildUserConfig=function(e,n){var i="OR",o=!1;if(this.reset(),"bool"in e&&(i=e.bool||i),"expand"in e&&(o=e.expand||o),"fields"in e)for(var r in e.fields)if(n.indexOf(r)>-1){var s=e.fields[r],u=o;void 0!=s.expand&&(u=s.expand),this.config[r]={boost:s.boost||0===s.boost?s.boost:1,bool:s.bool||i,expand:u}}else t.utils.warn("field name in user configuration not found in index instance fields");else this.addAllFields2UserConfig(i,o,n)},t.Configuration.prototype.addAllFields2UserConfig=function(e,t,n){n.forEach(function(n){this.config[n]={boost:1,bool:e,expand:t}},this)},t.Configuration.prototype.get=function(){return this.config},t.Configuration.prototype.reset=function(){this.config={}},lunr.SortedSet=function(){this.length=0,this.elements=[]},lunr.SortedSet.load=function(e){var t=new this;return t.elements=e,t.length=e.length,t},lunr.SortedSet.prototype.add=function(){var e,t;for(e=0;e1;){if(r===e)return o;e>r&&(t=o),r>e&&(n=o),i=n-t,o=t+Math.floor(i/2),r=this.elements[o]}return r===e?o:-1},lunr.SortedSet.prototype.locationFor=function(e){for(var t=0,n=this.elements.length,i=n-t,o=t+Math.floor(i/2),r=this.elements[o];i>1;)e>r&&(t=o),r>e&&(n=o),i=n-t,o=t+Math.floor(i/2),r=this.elements[o];return r>e?o:e>r?o+1:void 0},lunr.SortedSet.prototype.intersect=function(e){for(var t=new lunr.SortedSet,n=0,i=0,o=this.length,r=e.length,s=this.elements,u=e.elements;;){if(n>o-1||i>r-1)break;s[n]!==u[i]?s[n]u[i]&&i++:(t.add(s[n]),n++,i++)}return t},lunr.SortedSet.prototype.clone=function(){var e=new lunr.SortedSet;return e.elements=this.toArray(),e.length=e.elements.length,e},lunr.SortedSet.prototype.union=function(e){var t,n,i;this.length>=e.length?(t=this,n=e):(t=e,n=this),i=t.clone();for(var o=0,r=n.toArray();oSegment Anything for Microscopy

    \n\n

    Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of Segment Anything by Meta AI and specializes it for microscopy and other bio-imaging data.\nIts core components are:

    \n\n
      \n
    • The micro_sam tools for interactive data annotation, built as napari plugin.
    • \n
    • The micro_sam library to apply Segment Anything to 2d and 3d data or fine-tune it on your data.
    • \n
    • The micro_sam models that are fine-tuned on publicly available microscopy data and that are available on BioImage.IO.
    • \n
    \n\n

    Based on these components micro_sam enables fast interactive and automatic annotation for microscopy data, like interactive cell segmentation from bounding boxes:

    \n\n

    \"box-prompts\"

    \n\n

    micro_sam is now available as stable version 1.0 and we will not change its user interface significantly in the foreseeable future.\nWe are still working on improving and extending its functionality. The current roadmap includes:

    \n\n
      \n
    • Releasing more and better finetuned models.
    • \n
    • Integrating parameter efficient training and compressed models for faster fine-tuning.
    • \n
    • Improving the 3D segmentation and tracking functionality.
    • \n
    \n\n

    If you run into any problems or have questions please open an issue or reach out via image.sc using the tag micro-sam.

    \n\n

    Quickstart

    \n\n

    You can install micro_sam via mamba:

    \n\n
    $ mamba install -c conda-forge micro_sam\n
    \n\n

    We also provide installers for Windows and Linux. For more details on the available installation options check out the installation section.

    \n\n

    After installing micro_sam you can start napari and select the annotation tool you want to use from Plugins->Segment Anything for Microscopy. Check out the quickstart tutorial video for a short introduction and the annotation tool section for details.

    \n\n

    The micro_sam python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    It is explained in more detail here.

    \n\n

    We provide different finetuned models for microscopy that can be used within our tools or any other tool that supports Segment Anything. See finetuned models for details on the available models.\nYou can also train models on your own data, see here for details.

    \n\n

    Citation

    \n\n

    If you are using micro_sam in your research please cite

    \n\n\n\n

    Installation

    \n\n

    There are three ways to install micro_sam:

    \n\n
      \n
    • From mamba is the recommended way if you want to use all functionality.
    • \n
    • From source for setting up a development environment to use the development version and to change and contribute to our software.
    • \n
    • From installer to install it without having to use mamba (supported platforms: Windows and Linux, only for CPU users).
    • \n
    \n\n

    You can find more information on the installation and how to troubleshoot it in the FAQ section.

    \n\n

    From mamba

    \n\n

    mamba is a drop-in replacement for conda, but much faster.\nWhile the steps below may also work with conda, we highly recommend using mamba.\nYou can follow the instructions here to install mamba.

    \n\n

    IMPORTANT: Make sure to avoid installing anything in the base environment.

    \n\n

    micro_sam can be installed in an existing environment via:

    \n\n
    $ mamba install -c conda-forge micro_sam\n
    \n\n

    or you can create a new environment (here called micro-sam) with it via:

    \n\n
    $ mamba create -c conda-forge -n micro-sam micro_sam\n
    \n\n

    if you want to use the GPU you need to install PyTorch from the pytorch channel instead of conda-forge. For example:

    \n\n
    $ mamba create -c pytorch -c nvidia -c conda-forge micro_sam pytorch pytorch-cuda=12.1\n
    \n\n

    You may need to change this command to install the correct CUDA version for your system, see https://pytorch.org/ for details.

    \n\n

    From source

    \n\n

    To install micro_sam from source, we recommend to first set up an environment with the necessary requirements:

    \n\n\n\n

    To create one of these environments and install micro_sam into it follow these steps

    \n\n
      \n
    1. Clone the repository:
    2. \n
    \n\n
    $ git clone https://github.com/computational-cell-analytics/micro-sam\n
    \n\n
      \n
    1. Enter it:
    2. \n
    \n\n
    $ cd micro-sam\n
    \n\n
      \n
    1. Create the GPU or CPU environment:
    2. \n
    \n\n
    $ mamba env create -f <ENV_FILE>.yaml\n
    \n\n
      \n
    1. Activate the environment:
    2. \n
    \n\n
    $ mamba activate sam\n
    \n\n
      \n
    1. Install micro_sam:
    2. \n
    \n\n
    $ pip install -e .\n
    \n\n

    From installer

    \n\n

    We also provide installers for Linux and Windows:

    \n\n\n\n

    The installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam via mamba instead. They will also not enable using the python library.

    \n\n

    Linux Installer:

    \n\n

    To use the installer:

    \n\n
      \n
    • Unpack the zip file you have downloaded.
    • \n
    • Make the installer executable: $ chmod +x micro_sam-0.2.0post1-Linux-x86_64.sh
    • \n
• Run the installer: $ ./micro_sam-0.2.0post1-Linux-x86_64.sh \n
        \n
      • You can select where to install micro_sam during the installation. By default it will be installed in $HOME/micro_sam.
      • \n
      • The installer will unpack all micro_sam files to the installation directory.
      • \n
    • \n
    • After the installation you can start the annotator with the command .../micro_sam/bin/micro_sam.annotator.\n
        \n
      • To make it easier to run the annotation tool you can add .../micro_sam/bin to your PATH or set a softlink to .../micro_sam/bin/micro_sam.annotator.
      • \n
    • \n
    \n\n

    Windows Installer:

    \n\n
      \n
    • Unpack the zip file you have downloaded.
    • \n
    • Run the installer by double clicking on it.
    • \n
• Choose installation type: Just Me (recommended) or All Users (requires admin privileges).
    • \n
    • Choose installation path. By default it will be installed in C:\\Users\\<Username>\\micro_sam for Just Me installation or in C:\\ProgramData\\micro_sam for All Users.\n
        \n
      • The installer will unpack all micro_sam files to the installation directory.
      • \n
    • \n
    • After the installation you can start the annotator by double clicking on .\\micro_sam\\Scripts\\micro_sam.annotator.exe or with the command .\\micro_sam\\Scripts\\micro_sam.annotator.exe from the Command Prompt.
    • \n
    \n\n

    \n\n

    Annotation Tools

    \n\n

    micro_sam provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.\nSee an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation\nof mitochondria in volume EM (middle) and interactive tracking of cells (right).

    \n\n

    \n\n

    \n\n

The annotation tools can be started from the napari plugin menu, the command line or from python scripts.\nThey are built as a napari plugin and make use of existing napari functionality wherever possible. If you are not familiar with napari yet, start here.\nThe micro_sam tools mainly use the point layer, shape layer and label layer.

    \n\n

    The annotation tools are explained in detail below. We also provide video tutorials.

    \n\n

    The annotation tools can be started from the napari plugin menu:\n

    \n\n

    You can find additional information on the annotation tools in the FAQ section.

    \n\n

    Annotator 2D

    \n\n

    The 2d annotator can be started by

    \n\n
      \n
    • clicking Annotator 2d in the plugin menu.
    • \n
    • running $ micro_sam.annotator_2d in the command line.
    • \n
• calling micro_sam.sam_annotator.annotator_2d in a python script (see the sketch below this list). Check out examples/annotator_2d.py for details.
    • \n
    \n\n
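For the scripting route, a minimal sketch is shown below. It assumes, based on the example script mentioned above, that annotator_2d takes the image as its first argument plus model_type and embedding_path keyword arguments; the file paths are placeholders.

import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

# Load the 2d image to annotate (placeholder path).
image = imageio.imread("my_image.tif")

# Start the 2d annotation tool; the embeddings are cached at the (assumed) embedding_path.
annotator_2d(image, model_type="vit_b_lm", embedding_path="embeddings/my_image.zarr")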

    The user interface of the 2d annotator looks like this:

    \n\n

    \n\n

    It contains the following elements:

    \n\n
      \n
    1. The napari layers for the segmentations and prompts:\n
        \n
      • prompts: shape layer that is used to provide box prompts to SegmentAnything. Annotations can be given as rectangle (box prompt in the image), ellipse or polygon.
      • \n
      • point_prompts: point layer that is used to provide point prompts to SegmentAnything. Positive prompts (green points) for marking the object you want to segment, negative prompts (red points) for marking the outside of the object.
      • \n
      • committed_objects: label layer with the objects that have already been segmented.
      • \n
      • auto_segmentation: label layer with the results from automatic instance segmentation.
      • \n
      • current_object: label layer for the object(s) you're currently segmenting.
      • \n
    2. \n
    3. The embedding menu. For selecting the image to process, the Segment Anything model that is used and computing the image embeddings with the model. The Embedding Settings contain advanced settings for loading cached embeddings from file or using tiled embeddings.
    4. \n
    5. The prompt menu for changing whether the currently selected point is a positive or a negative prompt. This can also be done by pressing T.
    6. \n
    7. The menu for interactive segmentation. Clicking Segment Object (or pressing S) will run segmentation for the current prompts. The result is displayed in current_object. Activating batched enables segmentation of multiple objects with point prompts. In this case an object will be segmented per positive prompt.
    8. \n
9. The menu for automatic segmentation. Clicking Automatic Segmentation will segment all objects in the image. The results will be displayed in the auto_segmentation layer. We support two different methods for automatic segmentation: automatic mask generation (supported for all models) and instance segmentation with an additional decoder (only supported for our models).\nChanging the parameters under Automatic Segmentation Settings controls the segmentation results; check the tooltips for details.
    10. \n
11. The menu for committing the segmentation. When clicking Commit (or pressing C) the result from the selected layer (either current_object or auto_segmentation) will be transferred from the respective layer to committed_objects.\nWhen commit_path is given the results will automatically be saved there.
    12. \n
    13. The menu for clearing the current annotations. Clicking Clear Annotations (or pressing Shift + C) will clear the current annotations and the current segmentation.
    14. \n
    \n\n

    Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time, unless the batched mode is activated. With box prompts you can segment several objects at once, both in the normal and batched mode.

    \n\n

    Check out this video for a tutorial for this tool.

    \n\n

    Annotator 3D

    \n\n

    The 3d annotator can be started by

    \n\n
      \n
    • clicking Annotator 3d in the plugin menu.
    • \n
    • running $ micro_sam.annotator_3d in the command line.
    • \n
• calling micro_sam.sam_annotator.annotator_3d in a python script (see the sketch below this list). Check out examples/annotator_3d.py for details.
    • \n
    \n\n
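A corresponding sketch for the scripting route, under the same assumptions about the parameter names as for the 2d annotator; note that the volume is passed in the python axis convention ZYX (see the FAQ section below).

import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_3d

# Load a 3d volume (placeholder path); the axes are expected as ZYX.
volume = imageio.imread("my_volume.tif")

# Start the 3d annotation tool with cached embeddings (assumed parameter names).
annotator_3d(volume, model_type="vit_b_em_organelles", embedding_path="embeddings/my_volume.zarr")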

    The user interface of the 3d annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the segmentations and prompts.
    2. \n
    3. The embedding menu.
    4. \n
    5. The prompt menu.
    6. \n
    7. The menu for interactive segmentation.
    8. \n
    9. The menu for interactive 3d segmentation. Clicking Segment All Slices (or Shift + S) will extend the segmentation for the current object across the volume by projecting prompts across slices. The parameters for prompt projection can be set in Segmentation Settings, please refer to the tooltips for details.
    10. \n
    11. The menu for automatic segmentation. The overall functionality is the same as for the 2d annotator. To segment the full volume Apply to Volume needs to be checked, otherwise only the current slice will be segmented. Note that 3D segmentation can take quite long without a GPU.
    12. \n
    13. The menu for committing the current object.
    14. \n
    15. The menu for clearing the current annotations. If all slices is set all annotations will be cleared, otherwise they are only cleared for the current slice.
    16. \n
    \n\n

    Note that you can only segment one object at a time using the interactive segmentation functionality with this tool.

    \n\n

    Check out this video for a tutorial for the 3d annotation tool.

    \n\n

    Annotator Tracking

    \n\n

    The tracking annotator can be started by

    \n\n
      \n
    • clicking Annotator Tracking in the plugin menu.
    • \n
    • running $ micro_sam.annotator_tracking in the command line.
    • \n
    • calling micro_sam.sam_annotator.annotator_tracking in a python script. Check out examples/annotator_tracking.py for details.
    • \n
    \n\n

    The user interface of the tracking annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the segmentations and prompts. Same as for the 2d segmentation app but without the auto_segmentation layer.
    2. \n
    3. The embedding menu.
    4. \n
    5. The prompt menu.
    6. \n
    7. The menu with tracking settings: track_state is used to indicate that the object you are tracking is dividing in the current frame. track_id is used to select which of the tracks after division you are following.
    8. \n
    9. The menu for interactive segmentation.
    10. \n
11. The menu for interactive tracking. Click Track Object (or press Shift + S) to segment the current object across time.
    12. \n
    13. The menu for committing the current tracking result.
    14. \n
    15. The menu for clearing the current annotations.
    16. \n
    \n\n

    Note that the tracking annotator only supports 2d image data, volumetric data is not supported. We also do not support automatic tracking yet.

    \n\n

    Check out this video for a tutorial for how to use the tracking annotation tool.

    \n\n

    Image Series Annotator

    \n\n

The image series annotation tool enables running the 2d annotator or 3d annotator for multiple images that are saved within a folder. This makes it convenient to annotate many images without having to close the tool. It can be started by

    \n\n
      \n
    • clicking Image Series Annotator in the plugin menu.
    • \n
    • running $ micro_sam.image_series_annotator in the command line.
    • \n
    • calling micro_sam.sam_annotator.image_series_annotator in a python script. Check out examples/image_series_annotator.py for details.
    • \n
    \n\n

    When starting this tool via the plugin menu the following interface opens:

    \n\n

    \n\n

    You can select the folder where your image data is saved with Input Folder. The annotation results will be saved in Output Folder.\nYou can specify a rule for loading only a subset of images via pattern, for example *.tif to only load tif images. Set is_volumetric if the data you want to annotate is 3d. The rest of the options are settings for the image embedding computation and are the same as for the embedding menu (see above).\nOnce you click Annotate Images the images from the folder you have specified will be loaded and the annotation tool is started for them.

    \n\n

    This menu will not open if you start the image series annotator from the command line or via python. In this case the input folder and other settings are passed as parameters instead.

    \n\n
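For the python route, a hedged sketch could look as follows; the exact arguments of image_series_annotator (how the images and the output folder are passed) are assumptions based on the description above.

from glob import glob
import imageio.v3 as imageio
from micro_sam.sam_annotator import image_series_annotator

# Collect and load the images to annotate (folder and pattern are placeholders).
images = [imageio.imread(path) for path in sorted(glob("data/images/*.tif"))]

# Annotate the images one after the other; results are saved to the (assumed) output folder.
image_series_annotator(images, output_folder="annotations", model_type="vit_b_lm")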

    Check out this video for a tutorial for how to use the image series annotator.

    \n\n

    Finetuning UI

    \n\n

    We also provide a graphical tool for finetuning models on your own data. It can be started by clicking Finetuning in the plugin menu.

    \n\n

Note: if you know a bit of python programming we recommend using a script for model finetuning instead. This will give you more options to configure the training. See these instructions for details.

    \n\n

    When starting this tool via the plugin menu the following interface opens:

    \n\n

    \n\n

You can select the image data via Path to images. You can either load images from a folder or select a single file for training. By providing Image data key you can either provide a pattern for selecting files from a folder or provide an internal filepath for hdf5, zarr or similar file formats.

    \n\n

You can select the label data via Path to labels and Label data key, following the same logic as for the image data. We expect label masks of the same size as the image data for training. You can for example use annotations created with one of the micro_sam annotation tools for this; they are stored in the correct format!

    \n\n

    The Configuration option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Please refer to the tooltips for the other parameters.

    \n\n

    Using the Python Library

    \n\n

    The python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    This library extends the Segment Anything library and

    \n\n
      \n
    • implements functions to apply Segment Anything to 2d and 3d data in micro_sam.prompt_based_segmentation.
    • \n
    • provides improved automatic instance segmentation functionality in micro_sam.instance_segmentation.
    • \n
    • implements training functionality that can be used for finetuning on your own data in micro_sam.training.
    • \n
    • provides functionality for quantitative and qualitative evaluation of Segment Anything models in micro_sam.evaluation.
    • \n
    \n\n

    You can import these sub-modules via

    \n\n
    \n
    import micro_sam.prompt_based_segmentation\nimport micro_sam.instance_segmentation\n# etc.\n
    \n
    \n\n

    This functionality is used to implement the interactive annotation tools in micro_sam.sam_annotator and can be used as a standalone python library.\nWe provide jupyter notebooks that demonstrate how to use it here. You can find the full library documentation by scrolling to the end of this page.

    \n\n

    Training your own model

    \n\n

We reimplement the training logic described in the Segment Anything publication to enable finetuning on custom data.\nWe use this functionality to provide the finetuned microscopy models and it can also be used to train models on your own data.\nIn fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.\nSo a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data.\n

    \n\n

    The training logic is implemented in micro_sam.training and is based on torch-em. Check out the finetuning notebook to see how to use it.

    \n\n
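The snippet below sketches what such a finetuning script could look like. The helpers train_sam and default_sam_loader and their argument names are assumptions based on the finetuning notebook; please refer to the notebook for the exact API.

import micro_sam.training as sam_training

# Build training and validation loaders from image / label-mask folders (placeholder paths;
# helper and argument names are assumptions, see the finetuning notebook for the exact API).
train_loader = sam_training.default_sam_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1, with_segmentation_decoder=True,
)
val_loader = sam_training.default_sam_loader(
    raw_paths="data/val/images", raw_key="*.tif",
    label_paths="data/val/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1, with_segmentation_decoder=True,
)

# Finetune a light microscopy model, including the extra decoder for automatic instance segmentation.
sam_training.train_sam(
    name="sam_finetuned_on_my_data", model_type="vit_b_lm",
    train_loader=train_loader, val_loader=val_loader,
    n_epochs=25, with_segmentation_decoder=True,
)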

We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster.\nThe notebook explains how to train it together with the rest of SAM and how to use it afterwards.

    \n\n

    More advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in finetuning, which contains the code for training and evaluating our models. You can find further information on model training in the FAQ section.

    \n\n

    Finetuned models

    \n\n

    In addition to the original Segment Anything models, we provide models that are finetuned on microscopy data.\nThe additional models are available in the bioimage.io modelzoo and are also hosted on zenodo.

    \n\n

    We currently offer the following models:

    \n\n
      \n
    • vit_h: Default Segment Anything model with vit-h backbone.
    • \n
    • vit_l: Default Segment Anything model with vit-l backbone.
    • \n
    • vit_b: Default Segment Anything model with vit-b backbone.
    • \n
    • vit_t: Segment Anything model with vit-tiny backbone. From the Mobile SAM publication.
    • \n
    • vit_l_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-l backbone. (zenodo, bioimage.io)
    • \n
    • vit_b_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone. (zenodo, diplomatic-bug on bioimage.io)
    • \n
    • vit_t_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-t backbone. (zenodo, bioimage.io)
    • \n
• vit_l_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-l backbone. (zenodo, bioimage.io)
• \n
• vit_b_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-b backbone. (zenodo, bioimage.io)
• \n
• vit_t_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-t backbone. (zenodo, bioimage.io)
    • \n
    \n\n

    See the two figures below of the improvements through the finetuned model for LM and EM data.

    \n\n

    \n\n

    \n\n

    You can select which model to use for annotation by selecting the corresponding name in the embedding menu:

    \n\n

    \n\n

    To use a specific model in the python library you need to pass the corresponding name as value to the model_type parameter exposed by all relevant functions.\nSee for example the 2d annotator example.

    \n\n
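For example, in a python script this could look like the following minimal sketch (assuming the get_sam_model helper from micro_sam.util; the model is then downloaded automatically on first use):

from micro_sam.util import get_sam_model

# Load the finetuned light microscopy model; any name from the list above can be passed.
predictor = get_sam_model(model_type="vit_b_lm")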

    Choosing a Model

    \n\n

    As a rule of thumb:

    \n\n
      \n
    • Use the vit_l_lm or vit_b_lm model for segmenting cells or nuclei in light microscopy. The larger model (vit_l_lm) yields a bit better segmentation quality, especially for automatic segmentation, but needs more computational resources.
    • \n
    • Use the vit_l_em_organelles or vit_b_em_organelles models for segmenting mitochondria, nuclei or other roundish organelles in electron microscopy.
    • \n
    • For other use-cases use one of the default models.
    • \n
• The vit_t_... models run much faster than other models, but yield inferior quality for many applications. It can still make sense to try them for your use-case if you're working on a laptop and want to annotate many images or volumetric data.
    • \n
    \n\n

    See also the figures above for examples where the finetuned models work better than the default models.\nWe are working on further improving these models and adding new models for other biomedical imaging domains.

    \n\n

    Older Models

    \n\n

    Previous versions of our models are available on zenodo:

    \n\n
      \n
    • vit_b_em_boundaries: for segmenting compartments delineated by boundaries such as cells or neurites in EM.
    • \n
    • vit_b_em_organelles: for segmenting mitochondria, nuclei or other organelles in EM.
    • \n
    • vit_b_lm: for segmenting cells and nuclei in LM.
    • \n
    • vit_h_em: for general EM segmentation.
    • \n
    • vit_h_lm: for general LM segmentation.
    • \n
    \n\n

We do not recommend using these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.

    \n\n

    FAQ

    \n\n

    Here we provide frequently asked questions and common issues.\nIf you encounter a problem or question not addressed here feel free to open an issue or to ask your question on image.sc with the tag micro-sam.

    \n\n

    Installation questions

    \n\n

    1. How to install micro_sam?

    \n\n

    The installation for micro_sam is supported in three ways: from mamba (recommended), from source and from installers. Check out our tutorial video to get started with micro_sam, briefly walking you through the installation process and how to start the tool.

    \n\n

    2. I cannot install micro_sam using the installer, I am getting some errors.

    \n\n

    The installer should work out-of-the-box on Windows and Linux platforms. Please open an issue to report the error you encounter.

    \n\n
    \n

    NOTE: The installers enable using micro_sam without mamba or conda. However, we recommend the installation from mamba / from source to use all its features seamlessly. Specifically, the installers currently only support the CPU and won't enable you to use the GPU (if you have one).

    \n
    \n\n

    3. What is the minimum system requirement for micro_sam?

    \n\n

From our experience, the micro_sam annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.\nYou might encounter some slowness for ≤ 8GB RAM. The resources micro_sam's annotation tools have been tested on are:

    \n\n
      \n
    • Windows:\n
        \n
      • Windows 10 Pro, Intel i5 7th Gen, 8GB RAM
      • \n
    • \n
    • Linux:\n
        \n
      • Ubuntu 22.04, Intel i7 12th Gen, 32GB RAM
      • \n
    • \n
    • Mac:\n
        \n
      • macOS Sonoma 14.4.1\n
          \n
        • M1 Chip, 8GB RAM
        • \n
        • M3 Max Chip, 36GB RAM
        • \n
      • \n
    • \n
    \n\n

    Having a GPU will significantly speed up the annotation tools and especially the model finetuning.

    \n\n\n\n

    micro_sam has been tested mostly with CUDA 12.1 and PyTorch [2.1.1, 2.2.0]. However, the tool and the library is not constrained to a specific PyTorch or CUDA version. So it should work fine with the standard PyTorch installation for your system.

    \n\n

5. I am missing a few packages (e.g. ModuleNotFoundError: No module named 'elf.io'). What should I do?

    \n\n

    With the latest release 1.0.0, the installation from mamba and source should take care of this and install all the relevant packages for you.\nSo please reinstall micro_sam.

    \n\n

    6. Can I install micro_sam using pip?

    \n\n

    The installation is not supported via pip.

    \n\n

7. I get the following error: ImportError: cannot import name 'UNETR' from 'torch_em.model'.

    \n\n

It's possible that you have an older version of torch-em installed. Similar errors are often raised by other libraries, the reasons being: a) outdated packages installed, or b) some non-existent module being called. If the source of such an error is micro_sam, then a) is most likely the reason. We recommend installing the latest version following the installation instructions.

    \n\n

    Usage questions

    \n\n

    \n\n

1. I have some microscopy images. Can I use the annotator tool for segmenting them?

    \n\n

    Yes, you can use the annotator tool for:

    \n\n
      \n
    • Segmenting objects in 2d images (using automatic and/or interactive segmentation).
    • \n
    • Segmenting objects in 3d volumes (using automatic and/or interactive segmentation for the entire object(s)).
    • \n
    • Tracking objects over time in time-series data.
    • \n
    • Segmenting objects in a series of 2d / 3d images.
    • \n
• (OPTIONAL) You can finetune the Segment Anything / micro_sam models on your own microscopy data, in case the provided models do not suffice for your needs. One caveat: You need to annotate a few objects beforehand (micro_sam has the potential of improving interactive segmentation with only a few annotated objects) to proceed with the supervised finetuning procedure.
    • \n
    \n\n

    2. Which model should I use for my data?

    \n\n

We currently provide three different kinds of models: the default models vit_h, vit_l, vit_b and vit_t; the models for light microscopy vit_l_lm, vit_b_lm and vit_t_lm; the models for electron microscopy vit_l_em_organelles, vit_b_em_organelles and vit_t_em_organelles.\nYou should first try the model that best fits the segmentation task you're interested in, an lm model for cell or nucleus segmentation in light microscopy or an em_organelles model for segmenting nuclei, mitochondria or other roundish organelles in electron microscopy.\nIf your segmentation problem does not meet these descriptions, or if these models don't work well, you should try one of the default models instead.\nThe letter after vit denotes the size of the image encoder in SAM, h (huge) being the largest and t (tiny) the smallest. The smaller models are faster but may yield worse results. We recommend using either a vit_l or vit_b model; they offer the best trade-off between speed and segmentation quality.\nYou can find more information on model choice here.

    \n\n

    3. I have high-resolution microscopy images, 'micro_sam' does not seem to work.

    \n\n

The Segment Anything model expects inputs of shape 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, or sometimes not work at all. To address this, micro_sam implements tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles. You can activate tiling with the tile_shape parameter, which determines the size of the inner tile, and the halo parameter, which determines the size of the additional overlap.

    \n\n
      \n
    • If you are using the micro_sam annotation tools, you can specify the values for the tile_shape and halo via the tile_x, tile_y, halo_x and halo_y parameters in the Embedding Settings drop-down menu.
    • \n
• If you are using the micro_sam library in a python script, you can pass them as tuples, e.g. tile_shape=(1024, 1024), halo=(256, 256) (see the sketch below). See also the wholeslide annotator example.
    • \n
    • If you are using the command line functionality, you can pass them via the options --tile_shape 1024 1024 --halo 256 256.
    • \n
    \n\n
    \n

    NOTE: It's recommended to choose the halo so that it is larger than half of the maximal radius of the objects you want to segment.

    \n
    \n\n
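As a sketch of the library option, the two tuples can for example be passed to the embedding precomputation (get_sam_model and the positional save path are assumptions; tile_shape and halo are the parameter names given above):

import imageio.v3 as imageio
from micro_sam import util

predictor = util.get_sam_model(model_type="vit_b_lm")
image = imageio.imread("large_image.tif")  # placeholder for an image much larger than 1024 x 1024

# Compute tiled embeddings: 1024 x 1024 tiles with a halo (overlap) of 256 pixels per side.
embeddings = util.precompute_image_embeddings(
    predictor, image, "embeddings/large_image.zarr", ndim=2,
    tile_shape=(1024, 1024), halo=(256, 256),
)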

    4. The computation of image embeddings takes very long in napari.

    \n\n

micro_sam pre-computes the image embeddings produced by the vision transformer backbone in Segment Anything, and (optionally) stores them on disc. If you are using a CPU, this step can take a while for 3d data or time-series (you will see a progress bar in the command-line interface / on the bottom right of napari). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them over to your laptop / local machine to speed this up (a small example is sketched after the following list).

    \n\n
      \n
    • You can use the command micro_sam.precompute_embeddings for this (it is installed with the rest of the software). You can specify the location of the precomputed embeddings via the embedding_path argument.
    • \n
    • You can cache the computed embedding in the napari tool (to avoid recomputing the embeddings again) by passing the path to store the embeddings in the embeddings_save_path option in the Embedding Settings drop-down. You can later load the precomputed image embeddings by entering the path to the stored embeddings there as well.
    • \n
    \n\n
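A short sketch of how such a precomputation script could look (paths are placeholders; precompute_image_embeddings and its positional save path are used as in the library code shown earlier in this diff, and ndim=3 is assumed for volumetric data):

import imageio.v3 as imageio
from micro_sam import util

predictor = util.get_sam_model(model_type="vit_b_lm")
volume = imageio.imread("my_volume.tif")  # 3d data or a time-series (placeholder path)

# Compute the embeddings once, e.g. on a GPU machine, and cache them at the given path,
# so the annotation tool can later load them instead of recomputing.
util.precompute_image_embeddings(predictor, volume, "embeddings/my_volume.zarr", ndim=3)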

    5. Can I use micro_sam on a CPU?

    \n\n

While most other processing steps are very fast even on a CPU, the automatic segmentation step for the default Segment Anything models (typically called the \"Segment Anything\" feature or AMG - Automatic Mask Generation) takes several minutes without a GPU (depending on the image size). For large volumes and time-series, segmenting an object interactively in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).

    \n\n
    \n

    HINT: All the tutorial videos have been created on CPU resources.

    \n
    \n\n

    6. I generated some segmentations from another tool, can I use it as a starting point in micro_sam?

    \n\n

You can save and load the results from the committed_objects layer to correct segmentations you obtained from another tool (e.g. CellPose) or save intermediate annotation results. The results can be saved via File -> Save Selected Layers (s) ... in the napari menu-bar on top (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result parameter in the CLI or python script (2d and 3d segmentation).\nIf you are using an annotation tool you can load the segmentation you want to edit as a segmentation layer and rename it to committed_objects.

    \n\n
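    As an illustration, here is a minimal sketch of passing an existing segmentation to the 2d annotator from python; it assumes the segmentation_result parameter mentioned above and uses placeholder file paths.

    import imageio.v3 as imageio
    from micro_sam.sam_annotator import annotator_2d

    image = imageio.imread("image.tif")
    # Segmentation exported from another tool or saved earlier from the committed_objects layer.
    segmentation = imageio.imread("segmentation.tif")

    # Start the 2d annotator with the existing segmentation loaded for correction.
    annotator_2d(image, segmentation_result=segmentation)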

    7. I am using micro_sam for segmenting objects. I would like to report the steps for reproducibility. How can this be done?

    \n\n

    The annotation steps and segmentation results can be saved to a zarr file by providing the commit_path in the commit widget. This file will contain all relevant information to reproduce the segmentation.

    \n\n
    \n

    NOTE: This feature is still under development and we have not implemented rerunning the segmentation from this file yet. See this issue for details.

    \n
    \n\n

    8. I want to segment complex objects. Both the default Segment Anything models and the micro_sam generalist models do not work for my data. What should I do?

    \n\n

    micro_sam supports interactive annotation using positive and negative point prompts, box prompts and polygon drawing. You can combine multiple types of prompts to improve the segmentation quality. In case the aforementioned suggestions do not work as desired, micro_sam also supports finetuning a model on your data (see the next section). We recommend the following: a) Check which of the provided models performs relatively well on your data, and b) Choose the best model as the starting point to train your own specialist model for the desired segmentation task.

    \n\n

    9. I am using the annotation tool and napari outputs the following error: While emitting signal ... an error occurred in callback ... This is not a bug in psygnal. See ... above for details.

    \n\n

    These messages occur when an internal error happens in micro_sam. In most cases this is due to inconsistent annotations and you can fix them by clearing the annotations.\nWe want to remove these errors, so we would be very grateful if you can open an issue and describe the steps you did when encountering it.

    \n\n

    10. The objects are not segmented in my 3d data using the interactive annotation tool.

    \n\n

    The first thing to check is: a) make sure you are using the latest version of micro_sam (pull the latest commit from master if your installation is from source, or update the installation from conda / mamba using mamba update micro_sam), and b) try out the steps from the 3d annotator tutorial video to check whether you see the same behaviour (or the same errors) there. For 3d images, it's important to pass the inputs in the python axis convention, ZYX.\nc) try using a different model and change the projection mode for 3d segmentation. This is also explained in the video.

    \n\n

    11. I have very small or fine-grained structures in my high-resolution microscopic images. Can I use micro_sam to annotate them?

    \n\n

    Segment Anything does not work well for very small or fine-grained objects (e.g. filaments). In these cases, you could try to use tiling to improve results (see Point 3 above for details).

    \n\n

    12. napari seems to be very slow for large images.

    \n\n

    Editing (drawing / erasing) very large 2d images or 3d volumes is known to be slow at the moment, as the objects in the layers are stored in-memory. See the related issue.

    \n\n

    13. While computing the embeddings (and / or automatic segmentation), a window stating: \"napari\" is not responding. pops up.

    \n\n

    This can happen for long running computations. You just need to wait a bit longer and the computation will finish.

    \n\n

    Fine-tuning questions

    \n\n

    1. I have a microscopy dataset I would like to fine-tune Segment Anything for. Is it possible using 'micro_sam'?

    \n\n

    Yes, you can fine-tune Segment Anything on your own dataset. Here's how you can do it:

    \n\n
      \n
    • Check out the tutorial notebook on how to fine-tune Segment Anything with our micro_sam.training library.
    • \n
    • Or check the examples for additional scripts that demonstrate finetuning.
    • \n
    • If you are not familiar with coding in python at all then you can also use the graphical interface for finetuning. But we recommend using a script for more flexibility and reproducibility.
    • \n
    \n\n
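    For orientation, here is a minimal training sketch. It assumes that you have already created train_loader and val_loader (see the dataloader question below) and that the parameters of micro_sam.training.train_sam are as used in the tutorial notebook; the name and parameter values are placeholders.

    import micro_sam.training as sam_training

    # train_loader and val_loader are PyTorch dataloaders returning image / label patches,
    # e.g. created with torch-em as sketched in question 7 below.
    sam_training.train_sam(
        name="sam_finetuned_for_my_data",  # freely chosen checkpoint name (placeholder)
        model_type="vit_b",                # start from the SAM ViT-B weights
        train_loader=train_loader,
        val_loader=val_loader,
        n_epochs=50,
        n_objects_per_batch=25,            # number of objects sampled per image for the loss
        with_segmentation_decoder=True,    # also train the decoder for automatic instance segmentation (AIS)
    )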

    2. I would like to fine-tune Segment Anything on open-source cloud services (e.g. Kaggle Notebooks), is it possible?

    \n\n

    Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and BAND). Check out our tutorial notebook for this.

    \n\n

    3. What kind of annotations do I need to finetune Segment Anything?

    \n\n

    You need annotations in the form of instance segmentation labels, i.e. label images in which each object is marked by a unique id. You can create such annotations with the micro_sam annotation tools themselves and then use them for finetuning. For training with the additional instance segmentation decoder (AIS) the annotations have to be dense, i.e. all objects in an image are labeled; for training without the decoder, sparse annotations, where only some of the objects are labeled, are sufficient as well.

    \n\n

    4. I have finetuned Segment Anything on my microscopy data. How can I use it for annotating new images?

    \n\n

    You can load your finetuned model by entering the path to its checkpoint in the custom_weights_path field in the Embedding Settings drop-down menu.\nIf you are using the python library or CLI you can specify this path with the checkpoint_path parameter.

    \n\n
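    For the python library, a small sketch (the checkpoint path is a placeholder for wherever your finetuned weights are stored; the relevant parameter is checkpoint_path, as mentioned above):

    from micro_sam import util

    # Load the finetuned weights instead of one of the pretrained micro_sam models.
    predictor = util.get_sam_model(
        model_type="vit_b",  # must match the model type you finetuned
        checkpoint_path="checkpoints/sam_finetuned_for_my_data/best.pt",  # placeholder path
    )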

    5. What is the background of the new AIS (Automatic Instance Segmentation) feature in micro_sam?

    \n\n

    micro_sam adds a new segmentation decoder to the Segment Anything backbone to enable faster and more accurate automatic instance segmentation. The decoder predicts the distances to the object centers and boundaries as well as a foreground probability, and seeded watershed-based postprocessing is applied to these predictions to obtain the instances.\n

    \n\n
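    The sketch below illustrates how AIS could be used from python. It assumes the get_predictor_and_decoder helper and the InstanceSegmentationWithDecoder class from micro_sam.instance_segmentation (see the API reference) and a checkpoint that was trained together with the segmentation decoder; please check the API reference for the exact arguments of the generate method and for the helpers to convert its output into a label image (e.g. mask_data_to_segmentation).

    import imageio.v3 as imageio
    from micro_sam.instance_segmentation import (
        InstanceSegmentationWithDecoder, get_predictor_and_decoder,
    )

    image = imageio.imread("image.tif")  # placeholder path

    # Load a model that was trained together with the extra segmentation decoder (placeholder path).
    predictor, decoder = get_predictor_and_decoder(
        model_type="vit_b", checkpoint_path="checkpoints/sam_finetuned_for_my_data/best.pt",
    )

    # Predict the distances and foreground, then derive the instances via seeded watershed.
    ais = InstanceSegmentationWithDecoder(predictor, decoder)
    ais.initialize(image)
    masks = ais.generate()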

    6. I have a NVIDIA RTX 4090Ti GPU with 24GB VRAM. Can I finetune Segment Anything?

    \n\n

    Finetuning Segment Anything is possible on most consumer-grade GPU and CPU resources (but training is a lot slower on a CPU). For the mentioned resource, it should be possible to finetune a ViT Base (also abbreviated as vit_b) by reducing the number of objects per image to 15.\nThis parameter has the biggest impact on the VRAM consumption and the quality of the finetuned model.\nYou can find an overview of the resources we have tested for finetuning here.\nWe also provide the convenience function micro_sam.training.train_sam_for_configuration that selects the best training settings for these configurations. This function is also used by the finetuning UI.

    \n\n

    7. I want to create a dataloader for my data, for finetuning Segment Anything.

    \n\n

    Thanks to torch-em, a) creating PyTorch datasets and dataloaders using the python library is convenient and supported for various data formats and data structures.\nSee the tutorial notebook on how to create dataloaders using torch-em and the documentation for details on creating your own datasets and dataloaders; and b) finetuning using the napari tool eases the aforementioned process, by allowing you to enter the input parameters (path to the directory for inputs and labels etc.) directly in the tool.

    \n\n
    \n

    NOTE: If you have images with large input shapes and only sparse instance segmentations, we recommend using a sampler to choose patches that contain valid segmentations for finetuning (see the example for the PlantSeg (Root) specialist model in micro_sam).

    \n
    \n\n
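    Here is a sketch of creating a training dataloader with torch-em, including the sampler mentioned in the note above. The folder layout, keys and patch shape are placeholders; depending on whether you train with or without the decoder you may additionally need the transforms shown in the tutorial notebook.

    import torch_em
    from torch_em.data.sampler import MinInstanceSampler

    # Create a dataloader from folders of tif images and corresponding instance label images.
    # The keys are glob patterns here; for hdf5 / zarr / n5 data they would be internal dataset paths.
    train_loader = torch_em.default_segmentation_loader(
        raw_paths="data/train/images", raw_key="*.tif",
        label_paths="data/train/labels", label_key="*.tif",
        patch_shape=(512, 512), batch_size=1, ndim=2,
        # Only sample patches that contain at least two annotated objects; this avoids
        # training on empty patches for large images with sparse annotations.
        sampler=MinInstanceSampler(min_num_instances=2),
    )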

    8. How can I evaluate a model I have finetuned?

    \n\n

    For now, please refer to https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/bioimageio/validation.md, which describes how to evaluate (validate) models. The functionality for this is provided by the micro_sam.evaluation library (see the API reference below).

    \n\n
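    A minimal sketch of quantitative evaluation with micro_sam.evaluation.evaluation.run_evaluation (documented in the API reference below); it assumes that the predictions of your model and the ground-truth labels are saved as tif files, with placeholder folder names.

    from glob import glob
    from micro_sam.evaluation.evaluation import run_evaluation

    # Ground-truth label images and the corresponding predicted segmentations (placeholder folders).
    gt_paths = sorted(glob("data/test/labels/*.tif"))
    prediction_paths = sorted(glob("predictions/*.tif"))

    # Computes instance segmentation metrics and returns them as a pandas DataFrame.
    results = run_evaluation(gt_paths, prediction_paths, save_path="results.csv")
    print(results)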

    Contribution Guide

    \n\n\n\n

    Discuss your ideas

    \n\n

    We welcome new contributions!

    \n\n

    First, discuss your idea by opening a new issue in micro-sam.

    \n\n

    This allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Clone the repository

    \n\n

    We use git for version control.

    \n\n

    Clone the repository, and checkout the development branch:

    \n\n
    git clone https://github.com/computational-cell-analytics/micro-sam.git\ncd micro-sam\ngit checkout dev\n
    \n\n

    Create your development environment

    \n\n

    We use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.

    \n\n

    Now you can create the environment, install user and developer dependencies, and micro-sam as an editable installation:

    \n\n
    conda env create -f environment-gpu.yml\nconda activate sam\npython -m pip install -r requirements-dev.txt\npython -m pip install -e .\n
    \n\n

    Make your changes

    \n\n

    Now it's time to make your code changes.

    \n\n

    Typically, changes are made branching off from the development branch. Checkout dev and then create a new branch to work on your changes.

    \n\n
    git checkout dev\ngit checkout -b my-new-feature\n
    \n\n

    We use google style python docstrings to create documentation for all new code.

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Testing

    \n\n

    Run the tests

    \n\n

    The tests for micro-sam are run with pytest.

    \n\n

    To run the tests:

    \n\n
    pytest\n
    \n\n

    Writing your own tests

    \n\n

    If you have written new code, you will need to write tests to go with it.

    \n\n

    Unit tests

    \n\n

    Unit tests are the preferred style of tests for user contributions. Unit tests check small, isolated parts of the code for correctness. If your code is too complicated to write unit tests easily, you may need to consider breaking it up into smaller functions that are easier to test.

    \n\n

    Tests involving napari

    \n\n

    In cases where tests must use the napari viewer, these tips might be helpful (in particular, the make_napari_viewer_proxy fixture).

    \n\n

    These kinds of tests should be used only in limited circumstances. Developers are advised to prefer smaller unit tests, and avoid integration tests wherever possible.

    \n\n

    Code coverage

    \n\n

    Pytest uses the pytest-cov plugin to automatically determine which lines of code are covered by tests.

    \n\n

    A short summary report is printed to the terminal output whenever you run pytest. The full results are also automatically written to a file named coverage.xml.

    \n\n

    The Coverage Gutters VSCode extension is useful for visualizing which parts of the code need better test coverage. PyCharm professional has a similar feature, and you may be able to find similar tools for your preferred editor.

    \n\n

    We also use codecov.io to display the code coverage results from our Github Actions continuous integration.

    \n\n

    Open a pull request

    \n\n

    Once you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.

    \n\n

    Remember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev branch like this.

    \n\n

    Optional: Build the documentation

    \n\n

    We use pdoc to build the documentation.

    \n\n

    To build the documentation locally, run this command:

    \n\n
    python build_doc.py\n
    \n\n

    This will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).

    \n\n

    If you want to save the HTML files, append --out to the command, like this:

    \n\n
    python build_doc.py --out\n
    \n\n

    This will save the HTML files into a new directory named tmp.

    \n\n

    You can add content to the documentation in two ways:

    \n\n
      \n
    1. By adding or updating google style python docstrings in the micro-sam code.\n
        \n
      • pdoc will automatically find and include docstrings in the documentation.
      • \n
    \n
    2. By adding or editing markdown files in the micro-sam doc directory.\n
        \n
      • If you add a new markdown file to the documentation, you must tell pdoc that it exists by adding a line to the micro_sam/__init__.py module docstring (eg: .. include:: ../doc/my_amazing_new_docs_page.md). Otherwise it will not be included in the final documentation build!
      • \n
    \n
    \n\n

    Optional: Benchmark performance

    \n\n

    There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.

    \n\n\n\n

    Run the benchmark script

    \n\n

    There is a performance benchmark script available in the micro-sam repository at development/benchmark.py.

    \n\n

    To run the benchmark script:

    \n\n
    python development/benchmark.py --model_type vit_t --device cpu\n
    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    python development/benchmark.py --help\n
    \n\n

    Line profiling

    \n\n

    For more detailed line by line performance results, we can use line-profiler.

    \n\n
    \n

    line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

    \n
    \n\n

    To do line-by-line profiling:

    \n\n
      \n
    1. Ensure you have line profiler installed: python -m pip install line_profiler
    \n
    2. Add @profile decorator to any function in the call stack
    \n
    3. Run kernprof -lv benchmark.py --model_type vit_t --device cpu
    \n
    \n\n

    For more details about how to use line-profiler and kernprof, see the documentation.

    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    python development/benchmark.py --help\n
    \n\n

    Snakeviz visualization

    \n\n

    For more detailed visualizations of profiling results, we use snakeviz.

    \n\n
    \n

    SnakeViz is a browser based graphical viewer for the output of Python's cProfile module

    \n
    \n\n
      \n
    1. Ensure you have snakeviz installed: python -m pip install snakeviz
    \n
    2. Generate profile file: python -m cProfile -o program.prof benchmark.py --model_type vit_h --device cpu
    \n
    3. Visualize profile file: snakeviz program.prof
    \n
    \n\n

    For more details about how to use snakeviz, see the documentation.

    \n\n

    Memory profiling with memray

    \n\n

    If you need to investigate memory use specifically, we use memray.

    \n\n
    \n

    Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.

    \n
    \n\n

    For more details about how to use memray, see the documentation.

    \n\n

    Using micro_sam on BAND

    \n\n

    BAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam is installed there.\nIn order to use BAND and start micro_sam on it follow these steps:

    \n\n

    Start BAND

    \n\n
      \n
    • Go to https://band.embl.de/ and click Login. If you have not used BAND before you will need to register for BAND. Currently you can only sign up via a google account.
    • \n
    • Launch a BAND desktop with sufficient resources. It's particularly important to select a GPU. The settings from the image below are a good choice.
    • \n
    • Go to the desktop by clicking GO TO DESKTOP in the Running Desktops menu. See also the screenshot below.
    • \n
    \n\n

    \"image\"

    \n\n

    Start micro_sam in BAND

    \n\n
      \n
    • Select Applications->Image Analysis->uSAM (see screenshot)\n\"image\"
    • \n
    • This will open the micro_sam menu, where you can select the tool you want to use (see screenshot). Note: this may take a few minutes.\n\"image\"
    • \n
    • For testing if the tool works, it's best to use the 2d annotator first.\n
        \n
      • You can find an example image to use here: /scratch/cajal-connectomics/hela-2d-image.png. Select it via Select image. (see screenshot)\n\"image\"
      • \n
    • \n
    • Then press 2d annotator and the tool will start.
    • \n
    \n\n

    Transferring data to BAND

    \n\n

    To copy data to and from BAND you can use any cloud storage, e.g. ownCloud, dropbox or google drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit differently in BAND:

    \n\n
      \n
    • To copy text into BAND you first need to copy it on your computer (e.g. via selecting it + ctrl + c).
    • \n
    • Then go to the browser window with BAND and press ctrl + shift + alt. This will open a side window where you can paste your text via ctrl + v.
    • \n
    • Then select the text in this window and copy it via ctrl + c.
    • \n
    • Now you can close the side window via ctrl + shift + alt and paste the text in BAND via ctrl + v.
    • \n
    \n\n

    The video below shows how to copy over a link from owncloud and then download the data on BAND using copy and paste:

    \n\n

    https://github.com/computational-cell-analytics/micro-sam/assets/4263537/825bf86e-017e-41fc-9e42-995d21203287

    \n"}, {"fullname": "micro_sam.bioimageio", "modulename": "micro_sam.bioimageio", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export", "modulename": "micro_sam.bioimageio.model_export", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export.DEFAULTS", "modulename": "micro_sam.bioimageio.model_export", "qualname": "DEFAULTS", "kind": "variable", "doc": "

    \n", "default_value": "{'authors': [Author(affiliation='University Goettingen', email=None, orcid=None, name='Anwai Archit', github_user='anwai98'), Author(affiliation='University Goettingen', email=None, orcid=None, name='Constantin Pape', github_user='constantinpape')], 'description': 'Finetuned Segment Anything Model for Microscopy', 'cite': [CiteEntry(text='Archit et al. Segment Anything for Microscopy', doi='10.1101/2023.08.21.554208', url=None)], 'tags': ['segment-anything', 'instance-segmentation']}"}, {"fullname": "micro_sam.bioimageio.model_export.export_sam_model", "modulename": "micro_sam.bioimageio.model_export", "qualname": "export_sam_model", "kind": "function", "doc": "

    Export SAM model to BioImage.IO model format.

    \n\n

    The exported model can be uploaded to bioimage.io and\nbe used in tools that support the BioImage.IO model format.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The image for generating test data.
    • \n
    • label_image: The segmentation corresponding to image.\nIt is used to derive prompt inputs for the model.
    • \n
    • model_type: The type of the SAM model.
    • \n
    • name: The name of the exported model.
    • \n
    • output_path: Where the exported model is saved.
    • \n
    • checkpoint_path: Optional checkpoint for loading the SAM model.
    • \n
    \n", "signature": "(\timage: numpy.ndarray,\tlabel_image: numpy.ndarray,\tmodel_type: str,\tname: str,\toutput_path: Union[str, os.PathLike],\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor", "kind": "class", "doc": "

    Wrapper around the SamPredictor.

    \n\n

    This model supports the same functionality as SamPredictor and can provide mask segmentations\nfrom box, point or mask input prompts.

    \n\n
    Arguments:
    \n\n
      \n
    • model_type: The type of the model for the image encoder.\nCan be one of 'vit_b', 'vit_l', 'vit_h' or 'vit_t'.\nFor 'vit_t' support the 'mobile_sam' package has to be installed.
    • \n
    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.__init__", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(model_type: str)"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.sam", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.load_state_dict", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.load_state_dict", "kind": "function", "doc": "

    Copies parameters and buffers from state_dict into\nthis module and its descendants. If strict is True, then\nthe keys of state_dict must exactly match the keys returned\nby this module's ~torch.nn.Module.state_dict() function.

    \n\n
    \n\n

    If assign is True the optimizer must be created after\nthe call to load_state_dict.

    \n\n
    \n\n
    Arguments:
    \n\n
      \n
    • state_dict (dict): a dict containing parameters and\npersistent buffers.
    • \n
    • strict (bool, optional): whether to strictly enforce that the keys\nin state_dict match the keys returned by this module's\n~torch.nn.Module.state_dict() function. Default: True
    • \n
    • assign (bool, optional): whether to assign items in the state\ndictionary to their corresponding keys in the module instead\nof copying them inplace into the module's current parameters and buffers.\nWhen False, the properties of the tensors in the current\nmodule are preserved while when True, the properties of the\nTensors in the state dict are preserved.\nDefault: False
    • \n
    \n\n
    Returns:
    \n\n
    \n

    NamedTuple with missing_keys and unexpected_keys fields:\n * missing_keys is a list of str containing the missing keys\n * unexpected_keys is a list of str containing the unexpected keys

    \n
    \n\n
    Note:
    \n\n
    \n

    If a parameter or buffer is registered as None and its corresponding key\n exists in state_dict, load_state_dict() will raise a\n RuntimeError.

    \n
    \n", "signature": "(self, state):", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.forward", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.forward", "kind": "function", "doc": "
    Arguments:
    \n\n
      \n
    • image: torch inputs of dimensions B x C x H x W
    • \n
    • box_prompts: box coordinates of dimensions B x OBJECTS x 4
    • \n
    • point_prompts: point coordinates of dimension B x OBJECTS x POINTS x 2
    • \n
    • point_labels: point labels of dimension B x OBJECTS x POINTS
    • \n
    • mask_prompts: mask prompts of dimension B x OBJECTS x 256 x 256
    • \n
    • embeddings: precomputed image embeddings B x 256 x 64 x 64
    • \n
    \n\n

    Returns:

    \n", "signature": "(\tself,\timage: torch.Tensor,\tbox_prompts: Optional[torch.Tensor] = None,\tpoint_prompts: Optional[torch.Tensor] = None,\tpoint_labels: Optional[torch.Tensor] = None,\tmask_prompts: Optional[torch.Tensor] = None,\tembeddings: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation", "modulename": "micro_sam.evaluation", "kind": "module", "doc": "

    Functionality for evaluating Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation", "modulename": "micro_sam.evaluation.evaluation", "kind": "module", "doc": "

    Evaluation functionality for segmentation predictions from micro_sam.evaluation.automatic_mask_generation\nand micro_sam.evaluation.inference.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation", "kind": "function", "doc": "

    Run evaluation for instance segmentation predictions.

    \n\n
    Arguments:
    \n\n
      \n
    • gt_paths: The list of paths to ground-truth images.
    • \n
    • prediction_paths: The list of paths with the instance segmentations to evaluate.
    • \n
    • save_path: Optional path for saving the results.
    • \n
    • verbose: Whether to print the progress.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_paths: List[Union[str, os.PathLike]],\tsave_path: Union[str, os.PathLike, NoneType] = None,\tverbose: bool = True) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation_for_iterative_prompting", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation_for_iterative_prompting", "kind": "function", "doc": "

    Run evaluation for iterative prompt-based segmentation predictions.

    \n\n
    Arguments:
    \n\n
      \n
    • gt_paths: The list of paths to ground-truth images.
    • \n
    • prediction_root: The folder with the iterative prompt-based instance segmentations to evaluate.
    • \n
    • experiment_folder: The folder where all the experiment results are stored.
    • \n
    • start_with_box_prompt: Whether to evaluate on experiments with iterative prompting starting with box.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_root: Union[os.PathLike, str],\texperiment_folder: Union[os.PathLike, str],\tstart_with_box_prompt: bool = False,\toverwrite_results: bool = False) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments", "modulename": "micro_sam.evaluation.experiments", "kind": "module", "doc": "

    Predefined experiment settings for experiments with different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.experiments.ExperimentSetting", "modulename": "micro_sam.evaluation.experiments", "qualname": "ExperimentSetting", "kind": "variable", "doc": "

    \n", "default_value": "typing.Dict"}, {"fullname": "micro_sam.evaluation.experiments.full_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "full_experiment_settings", "kind": "function", "doc": "

    The full experiment settings.

    \n\n
    Arguments:
    \n\n
      \n
    • use_boxes: Whether to run the experiments with or without boxes.
    • \n
    • positive_range: The different number of positive points that will be used.\nBy default the values are set to [1, 2, 4, 8, 16].
    • \n
    • negative_range: The different number of negative points that will be used.\nBy default the values are set to [0, 1, 2, 4, 8, 16].
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "(\tuse_boxes: bool = False,\tpositive_range: Optional[List[int]] = None,\tnegative_range: Optional[List[int]] = None) -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.default_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "default_experiment_settings", "kind": "function", "doc": "

    The three default experiment settings.

    \n\n

    For the default experiments we use a single positive prompt,\ntwo positive and four negative prompts and box prompts.

    \n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "() -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.get_experiment_setting_name", "modulename": "micro_sam.evaluation.experiments", "qualname": "get_experiment_setting_name", "kind": "function", "doc": "

    Get the name for the given experiment setting.

    \n\n
    Arguments:
    \n\n
      \n
    • setting: The experiment setting.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The name for this experiment setting.

    \n
    \n", "signature": "(setting: Dict) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference", "modulename": "micro_sam.evaluation.inference", "kind": "module", "doc": "

    Inference with Segment Anything models and different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_embeddings", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_embeddings", "kind": "function", "doc": "

    Precompute all image embeddings.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • image_paths: The image file paths.
    • \n
    • embedding_dir: The directory where the embeddings will be saved.
    • \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_prompts", "kind": "function", "doc": "

    Precompute all point prompts.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n
      \n
    • gt_paths: The file paths to the ground-truth segmentations.
    • \n
    • prompt_save_dir: The directory where the prompt files will be saved.
    • \n
    • prompt_settings: The settings for which the prompts will be computed.
    • \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprompt_save_dir: Union[str, os.PathLike],\tprompt_settings: List[Dict[str, Any]]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_prompts", "kind": "function", "doc": "

    Run segment anything inference for multiple images using prompts derived from groundtruth.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • image_paths: The image file paths.
    • \n
    • gt_paths: The ground-truth segmentation file paths.
    • \n
    • embedding_dir: The directory where the image embeddings will be saved or are already saved.
    • \n
    • use_points: Whether to use point prompts.
    • \n
    • use_boxes: Whether to use box prompts
    • \n
    • n_positives: The number of positive point prompts that will be sampled.
    • \n
    • n_negatives: The number of negative point prompts that will be sampled.
    • \n
    • dilation: The dilation factor for the radius around the ground-truth object\naround which points will not be sampled.
    • \n
    • prompt_save_dir: The directory where point prompts will be saved or are already saved.\nThis enables running multiple experiments in a reproducible manner.
    • \n
    • batch_size: The batch size used for batched prediction.
    • \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: int,\tn_negatives: int,\tdilation: int = 5,\tprompt_save_dir: Union[str, os.PathLike, NoneType] = None,\tbatch_size: int = 512) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_iterative_prompting", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_iterative_prompting", "kind": "function", "doc": "

    Run segment anything inference for multiple images using prompts iteratively\n derived from model outputs and groundtruth

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • image_paths: The image file paths.
    • \n
    • gt_paths: The ground-truth segmentation file paths.
    • \n
    • embedding_dir: The directory where the image embeddings will be saved or are already saved.
    • \n
    • prediction_dir: The directory where the predictions from SegmentAnything will be saved per iteration.
    • \n
    • start_with_box_prompt: Whether to use the first prompt as bounding box or a single point
    • \n
    • dilation: The dilation factor for the radius around the ground-truth object\naround which points will not be sampled.
    • \n
    • batch_size: The batch size used for batched predictions.
    • \n
    • n_iterations: The number of iterations for iterative prompting.
    • \n
    • use_masks: Whether to make use of logits from previous prompt-based segmentation
    • \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tstart_with_box_prompt: bool,\tdilation: int = 5,\tbatch_size: int = 32,\tn_iterations: int = 8,\tuse_masks: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_amg", "modulename": "micro_sam.evaluation.inference", "qualname": "run_amg", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.inference", "qualname": "run_instance_segmentation_with_decoder", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation", "modulename": "micro_sam.evaluation.instance_segmentation", "kind": "module", "doc": "

    Inference and evaluation for the automatic instance segmentation functionality.

    \n"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_amg", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_amg", "kind": "function", "doc": "

    Default grid-search parameter for AMG-based instance segmentation.

    \n\n

    Return grid search values for the two most important parameters:

    \n\n
      \n
    • pred_iou_thresh, the threshold for keeping objects according to the IoU predicted by the model.
    • \n
    • stability_score_thresh, the threshold for keeping objects according to their stability.
    • \n
    \n\n
    Arguments:
    \n\n
      \n
    • iou_thresh_values: The values for pred_iou_thresh used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.
    • \n
    • stability_score_values: The values for stability_score_thresh used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Default grid-search parameter for decoder-based instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • center_distance_threshold_values: The values for center_distance_threshold used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.
    • \n
    • boundary_distance_threshold_values: The values for boundary_distance_threshold used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.
    • \n
    • distance_smoothing_values: The values for distance_smoothing used in the gridsearch.\nBy default values in the range from 1.0 to 2.0 with a stepsize of 0.1 will be used.
    • \n
    • min_size_values: The values for min_size used in the gridsearch.\nBy default the values 50, 100 and 200 are used.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for automatic mask generation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\n'grid_search_values' argument. For example, to run a grid search over the parameters 'pred_iou_thresh'\nand 'stability_score_thresh', you can pass the following:

    \n\n
    grid_search_values = {\n    \"pred_iou_thresh\": [0.6, 0.7, 0.8, 0.9],\n    \"stability_score_thresh\": [0.6, 0.7, 0.8, 0.9],\n}\n
    \n\n

    All combinations of the parameters will be checked.

    \n\n

    You can use the functions default_grid_search_values_instance_segmentation_with_decoder\nor default_grid_search_values_amg to get the default grid search parameters for the two\nrespective instance segmentation methods.

    \n\n
    Arguments:
    \n\n
      \n
    • segmenter: The class implementing the instance segmentation functionality.
    • \n
    • grid_search_values: The grid search values for parameters of the generate function.
    • \n
    • image_paths: The input images for the grid search.
    • \n
    • gt_paths: The ground-truth segmentation for the grid search.
    • \n
    • result_dir: Folder to cache the evaluation results per image.
    • \n
    • embedding_dir: Folder to cache the image embeddings.
    • \n
    • fixed_generate_kwargs: Fixed keyword arguments for the generate method of the segmenter.
    • \n
    • verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    • \n
    • image_key: Key for loading the image data from a more complex file format like HDF5.\nIf not given a simple image format like tif is assumed.
    • \n
    • gt_key: Key for loading the ground-truth data from a more complex file format like HDF5.\nIf not given a simple image format like tif is assumed.
    • \n
    • rois: Regions of interest to restrict the evaluation to.
    • \n
    \n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tresult_dir: Union[str, os.PathLike],\tembedding_dir: Union[str, os.PathLike, NoneType],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = False,\timage_key: Optional[str] = None,\tgt_key: Optional[str] = None,\trois: Optional[Tuple[slice, ...]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_inference", "kind": "function", "doc": "

    Run inference for automatic mask generation.

    \n\n
    Arguments:
    \n\n
      \n
    • segmenter: The class implementing the instance segmentation functionality.
    • \n
    • image_paths: The input images.
    • \n
    • embedding_dir: Folder to cache the image embeddings.
    • \n
    • prediction_dir: Folder to save the predictions.
    • \n
    • generate_kwargs: The keyword arguments for the generate method of the segmenter.
    • \n
    \n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tgenerate_kwargs: Optional[Dict[str, Any]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.evaluate_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "evaluate_instance_segmentation_grid_search", "kind": "function", "doc": "

    Evaluate gridsearch results.

    \n\n
    Arguments:
    \n\n
      \n
    • result_dir: The folder with the gridsearch results.
    • \n
    • grid_search_parameters: The names for the gridsearch parameters.
    • \n
    • criterion: The metric to use for determining the best parameters.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The best parameter setting.\n The evaluation score for the best setting.

    \n
    \n", "signature": "(\tresult_dir: Union[str, os.PathLike],\tgrid_search_parameters: List[str],\tcriterion: str = 'mSA') -> Tuple[Dict[str, Any], float]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.save_grid_search_best_params", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "save_grid_search_best_params", "kind": "function", "doc": "

    \n", "signature": "(best_kwargs, best_msa, grid_search_result_dir=None):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search_and_inference", "kind": "function", "doc": "

    Run grid search and inference for automatic mask generation.

    \n\n

    Please refer to the documentation of run_instance_segmentation_grid_search\nfor details on how to specify the grid search parameters.

    \n\n
    Arguments:
    \n\n
      \n
    • segmenter: The class implementing the instance segmentation functionality.
    • \n
    • grid_search_values: The grid search values for parameters of the generate function.
    • \n
    • val_image_paths: The input images for the grid search.
    • \n
    • val_gt_paths: The ground-truth segmentation for the grid search.
    • \n
    • test_image_paths: The input images for inference.
    • \n
    • embedding_dir: Folder to cache the image embeddings.
    • \n
    • prediction_dir: Folder to save the predictions.
    • \n
    • result_dir: Folder to cache the evaluation results per image.
    • \n
    • fixed_generate_kwargs: Fixed keyword arguments for the generate method of the segmenter.
    • \n
    • verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    • \n
    \n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell", "modulename": "micro_sam.evaluation.livecell", "kind": "module", "doc": "

    Inference and evaluation for the LIVECell dataset and\nthe different cell lines contained in it.

    \n"}, {"fullname": "micro_sam.evaluation.livecell.CELL_TYPES", "modulename": "micro_sam.evaluation.livecell", "qualname": "CELL_TYPES", "kind": "variable", "doc": "

    \n", "default_value": "['A172', 'BT474', 'BV2', 'Huh7', 'MCF7', 'SHSY5Y', 'SkBr3', 'SKOV3']"}, {"fullname": "micro_sam.evaluation.livecell.livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "livecell_inference", "kind": "function", "doc": "

    Run inference for livecell with a fixed prompt setting.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • use_points: Whether to use point prompts.
    • \n
    • use_boxes: Whether to use box prompts.
    • \n
    • n_positives: The number of positive point prompts.
    • \n
    • n_negatives: The number of negative point prompts.
    • \n
    • prompt_folder: The folder where the prompts should be saved.
    • \n
    • predictor: The segment anything predictor.
    • \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: Optional[int] = None,\tn_negatives: Optional[int] = None,\tprompt_folder: Union[os.PathLike, str, NoneType] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_precompute_embeddings", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_precompute_embeddings", "kind": "function", "doc": "

    Run precomputation of val and test image embeddings for livecell.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • n_val_per_cell_type: The number of validation images per cell type.
    • \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tn_val_per_cell_type: int = 25) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_iterative_prompting", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_iterative_prompting", "kind": "function", "doc": "

    Run inference on livecell with iterative prompting setting.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • start_with_box_prompt: Whether to use the first prompt as bounding box or a single point.
    • \n
    • use_masks: Whether to make use of logits from previous prompt-based segmentation.
    • \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tstart_with_box: bool = False,\tuse_masks: bool = False) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_amg", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_amg", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • iou_thresh_values: The values for pred_iou_thresh used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.
    • \n
    • stability_score_values: The values for stability_score_thresh used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.
    • \n
    • verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    • \n
    • n_val_per_cell_type: The number of validation images per cell type.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Run instance segmentation with decoder grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • center_distance_threshold_values: The values for center_distance_threshold used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.
    • \n
    • boundary_distance_threshold_values: The values for boundary_distance_threshold used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.
    • \n
    • distance_smoothing_values: The values for distance_smoothing used in the gridsearch.\nBy default values in the range from 1.0 to 2.0 with a stepsize of 0.1 will be used.
    • \n
    • min_size_values: The values for min_size used in the gridsearch.\nBy default the values 50, 100 and 200 are used.
    • \n
    • verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    • \n
    • n_val_per_cell_type: The number of validation images per cell type.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_inference", "kind": "function", "doc": "

    Run LIVECell inference with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_evaluation", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_evaluation", "kind": "function", "doc": "

    Run LIVECell evaluation with the command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "kind": "module", "doc": "

    Functionality for qualitative comparison of Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.model_comparison.generate_data_for_model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "generate_data_for_model_comparison", "kind": "function", "doc": "

    Generate samples for qualitative model comparison.

    \n\n

    This precomputes the input for model_comparison and model_comparison_with_napari.

    \n\n
    Arguments:
    \n\n
      \n
    • loader: The torch dataloader from which samples are drawn.
    • \n
    • output_folder: The folder where the samples will be saved.
    • \n
    • model_type1: The first model to use for comparison.\nThe value needs to be a valid model_type for micro_sam.util.get_sam_model.
    • \n
    • model_type2: The second model to use for comparison.\nThe value needs to be a valid model_type for micro_sam.util.get_sam_model.
    • \n
    • n_samples: The number of samples to draw from the dataloader.
    • \n
    • checkpoint1: Optional checkpoint for the first model.
    • \n
    • checkpoint2: Optional checkpoint for the second model.
    • \n
    \n", "signature": "(\tloader: torch.utils.data.dataloader.DataLoader,\toutput_folder: Union[str, os.PathLike],\tmodel_type1: str,\tmodel_type2: str,\tn_samples: int,\tmodel_type3: Optional[str] = None,\tcheckpoint1: Union[str, os.PathLike, NoneType] = None,\tcheckpoint2: Union[str, os.PathLike, NoneType] = None,\tcheckpoint3: Union[str, os.PathLike, NoneType] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison", "kind": "function", "doc": "

    Create images for a qualitative model comparison.

    \n\n
    Arguments:
    \n\n
      \n
    • output_folder: The folder with the data precomputed by generate_data_for_model_comparison.
    • \n
    • n_images_per_sample: The number of images to generate per precomputed sample.
    • \n
    • min_size: The min size of ground-truth objects to take into account.
    • \n
    • plot_folder: The folder where to save the plots. If not given the plots will be displayed.
    • \n
    • point_radius: The radius of the point overlay.
    • \n
    • outline_dilation: The dilation factor of the outline overlay.
    • \n
    \n", "signature": "(\toutput_folder: Union[str, os.PathLike],\tn_images_per_sample: int,\tmin_size: int,\tplot_folder: Union[str, os.PathLike, NoneType] = None,\tpoint_radius: int = 4,\toutline_dilation: int = 0,\thave_model3=False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison_with_napari", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison_with_napari", "kind": "function", "doc": "

    Use napari to display the qualitative comparison results for two models.

    \n\n
    Arguments:
    \n\n
      \n
    • output_folder: The folder with the data precomputed by generate_data_for_model_comparison.
    • \n
    • show_points: Whether to show the results for point or for box prompts.
    • \n
    \n", "signature": "(output_folder: Union[str, os.PathLike], show_points: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.default_grid_search_values_multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "default_grid_search_values_multi_dimensional_segmentation", "kind": "function", "doc": "

    Default grid-search parameters for multi-dimensional prompt-based instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • iou_threshold_values: The values for iou_threshold used in the grid-search.\nBy default values in the range from 0.5 to 0.9 with a stepsize of 0.1 will be used.
    • \n
    • projection_method_values: The values for projection method used in the grid-search.\nBy default the values mask, bounding_box and points are used.
    • \n
    • box_extension_values: The values for box_extension used in the grid-search.\nBy default values in the range from 0 to 0.25 with a stepsize of 0.025 will be used.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
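    \n\n

    Example (a minimal sketch): the returned dictionary can be passed directly as grid_search_values to run_multi_dimensional_segmentation_grid_search:

    \n\n
    \n
    from micro_sam.evaluation.multi_dimensional_segmentation import default_grid_search_values_multi_dimensional_segmentation\n\n# Maps the parameter names (iou_threshold, projection, box_extension) to their value lists.\ngrid_search_values = default_grid_search_values_multi_dimensional_segmentation()\n
    \n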
    \n", "signature": "(\tiou_threshold_values: Optional[List[float]] = None,\tprojection_method_values: Union[str, dict, NoneType] = None,\tbox_extension_values: Union[float, int, NoneType] = None) -> Dict[str, List]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.segment_slices_from_ground_truth", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "segment_slices_from_ground_truth", "kind": "function", "doc": "

    Segment all objects in a volume by prompt-based segmentation in one slice per object.

    \n\n

    This function first segments each object in the specified slice using the interactive\n(prompt-based) segmentation functionality. It then segments the same object in the\nremaining slices of the volume.

    \n\n
    Arguments:
    \n\n
      \n
    • volume: The input volume.
    • \n
    • ground_truth: The label volume with instance segmentations.
    • \n
    • model_type: Choice of segment anything model.
    • \n
    • checkpoint_path: Path to the model checkpoint.
    • \n
    • embedding_path: Path to cache the computed embeddings.
    • \n
    • iou_threshold: The IoU threshold used to decide whether to link objects across consecutive slices.
    • \n
    • projection: The projection (prompting) method to generate prompts for consecutive slices.
    • \n
    • box_extension: Extension factor for increasing the box size after projection.
    • \n
    • device: The selected device for computation.
    • \n
    • interactive_seg_mode: Method for guiding prompt-based instance segmentation.
    • \n
    • verbose: Whether to get the trace for projected segmentations.
    • \n
    • return_segmentation: Whether to return the segmented volume.
    • \n
    • min_size: The minimal size for evaluating an object in the ground-truth.\nThe size is measured within the central slice.
    • \n
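    \n\n

    Example (a minimal sketch): 'volume' and 'ground_truth' are assumed to be numpy arrays of matching shape; the checkpoint and embedding paths are placeholders:

    \n\n
    \n
    from micro_sam.evaluation.multi_dimensional_segmentation import segment_slices_from_ground_truth\n\n# With return_segmentation=False (the default) only the evaluation score is returned.\nscore = segment_slices_from_ground_truth(\n    volume, ground_truth, model_type='vit_b',\n    checkpoint_path='finetuned_model.pt', embedding_path='embeddings.zarr',\n)\n
    \n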
    \n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tembedding_path: Union[str, os.PathLike],\tiou_threshold: float = 0.8,\tprojection: Union[str, dict] = 'mask',\tbox_extension: Union[float, int] = 0.025,\tdevice: Union[str, torch.device] = None,\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\treturn_segmentation: bool = False,\tmin_size: int = 0) -> Union[float, Tuple[numpy.ndarray, float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.run_multi_dimensional_segmentation_grid_search", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "run_multi_dimensional_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for prompt-based multi-dimensional instance segmentation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\ngrid_search_values argument. For example, to run a grid search over the parameters iou_threshold,\nprojection and box_extension, you can pass the following:

    \n\n
    grid_search_values = {\n    \"iou_threshold\": [0.5, 0.6, 0.7, 0.8, 0.9],\n    \"projection\": [\"mask\", \"bounding_box\", \"points\"],\n    \"box_extension\": [0, 0.1, 0.2, 0.3, 0.4, 0.5],\n}\n
    \n\n

    All combinations of the parameters will be checked.\nIf grid_search_values is None, the function default_grid_search_values_multi_dimensional_segmentation is used\nto get the default grid search parameters for the instance segmentation method.

    \n\n
    Arguments:
    \n\n
      \n
    • volume: The input volume.
    • \n
    • ground_truth: The label volume with instance segmentations.
    • \n
    • model_type: Choice of segment anything model.
    • \n
    • checkpoint_path: Path to the model checkpoint.
    • \n
    • embedding_path: Path to cache the computed embeddings.
    • \n
    • result_dir: The directory where the grid search results will be saved.
    • \n
    • interactive_seg_mode: Method for guiding prompt-based instance segmentation.
    • \n
    • verbose: Whether to get the trace for projected segmentations.
    • \n
    • grid_search_values: The grid search values for parameters of the segment_slices_from_ground_truth function.
    • \n
    • min_size: The minimal size for evaluating an object in the ground-truth.\nThe size is measured within the central slice.
    • \n
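    \n\n

    Example (a minimal sketch): the input arrays and paths are placeholders and the default grid search values are used:

    \n\n
    \n
    from micro_sam.evaluation.multi_dimensional_segmentation import run_multi_dimensional_segmentation_grid_search\n\n# Passing grid_search_values=None uses the default parameter ranges.\nrun_multi_dimensional_segmentation_grid_search(\n    volume, ground_truth, model_type='vit_b',\n    checkpoint_path='finetuned_model.pt', embedding_path='embeddings.zarr',\n    result_dir='grid_search_results',\n)\n
    \n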
    \n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tembedding_path: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\tgrid_search_values: Optional[Dict[str, List]] = None,\tmin_size: int = 0):", "funcdef": "def"}, {"fullname": "micro_sam.inference", "modulename": "micro_sam.inference", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.inference.batched_inference", "modulename": "micro_sam.inference", "qualname": "batched_inference", "kind": "function", "doc": "

    Run batched inference for input prompts.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • image: The input image.
    • \n
    • batch_size: The batch size to use for inference.
    • \n
    • boxes: The box prompts. Array of shape N_PROMPTS x 4.\nThe bounding boxes are represented by [MIN_X, MIN_Y, MAX_X, MAX_Y].
    • \n
    • points: The point prompt coordinates. Array of shape N_PROMPTS x 1 x 2.\nThe points are represented by their coordinates [X, Y], which are given\nin the last dimension.
    • \n
    • point_labels: The point prompt labels. Array of shape N_PROMPTS x 1.\nThe labels are either 0 (negative prompt) or 1 (positive prompt).
    • \n
    • multimasking: Whether to predict with 3 or 1 mask.
    • \n
    • embedding_path: Cache path for the image embeddings.
    • \n
    • return_instance_segmentation: Whether to return an instance segmentation\nor the individual mask data.
    • \n
    • segmentation_ids: Fixed segmentation ids to assign to the masks\nderived from the prompts.
    • \n
    • reduce_multimasking: Whether to choose the most likely mask (with the highest IoU score)\nfrom the multimasking output.
    • \n
    • logits_masks: The logits masks from a previous segmentation. Array of shape N_PROMPTS x 1 x 256 x 256.\nIf given, they are used as additional prompts for the segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks.

    \n
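    \n\n

    Example (a minimal sketch for box prompts): 'image' is assumed to be a numpy array and the box coordinates are placeholders:

    \n\n
    \n
    import numpy as np\nfrom micro_sam.util import get_sam_model\nfrom micro_sam.inference import batched_inference\n\npredictor = get_sam_model(model_type='vit_b')\nboxes = np.array([[10, 10, 80, 90], [120, 40, 200, 160]])  # [MIN_X, MIN_Y, MAX_X, MAX_Y]\nsegmentation = batched_inference(predictor, image, batch_size=2, boxes=boxes)\n
    \n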
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage: numpy.ndarray,\tbatch_size: int,\tboxes: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tpoint_labels: Optional[numpy.ndarray] = None,\tmultimasking: bool = False,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\treturn_instance_segmentation: bool = True,\tsegmentation_ids: Optional[list] = None,\treduce_multimasking: bool = True,\tlogits_masks: Optional[torch.Tensor] = None):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation", "modulename": "micro_sam.instance_segmentation", "kind": "module", "doc": "

    Automated instance segmentation functionality.\nThe classes implemented here extend the automatic instance segmentation from Segment Anything:\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html

    \n"}, {"fullname": "micro_sam.instance_segmentation.mask_data_to_segmentation", "modulename": "micro_sam.instance_segmentation", "qualname": "mask_data_to_segmentation", "kind": "function", "doc": "

    Convert the output of the automatic mask generation to an instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • masks: The outputs generated by AutomaticMaskGenerator or EmbeddingMaskGenerator.\nOnly supports output_mode=binary_mask.
    • \n
    • with_background: Whether the segmentation has background. If yes, this function ensures that the largest\nobject in the output will be mapped to zero (the background value).
    • \n
    • min_object_size: The minimal size of an object in pixels.
    • \n
    • max_object_size: The maximal size of an object in pixels.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The instance segmentation.

    \n
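    \n\n

    Example (a minimal sketch): convert the output of AutomaticMaskGenerator into an instance segmentation; 'predictor' and 'image' are assumed to be available:

    \n\n
    \n
    from micro_sam.instance_segmentation import AutomaticMaskGenerator, mask_data_to_segmentation\n\namg = AutomaticMaskGenerator(predictor)\namg.initialize(image)\nmasks = amg.generate(pred_iou_thresh=0.88)\nsegmentation = mask_data_to_segmentation(masks, with_background=True, min_object_size=50)\n
    \n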
    \n", "signature": "(\tmasks: List[Dict[str, Any]],\twith_background: bool,\tmin_object_size: int = 0,\tmax_object_size: Optional[int] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase", "kind": "class", "doc": "

    Base class for the automatic mask generators.

    \n", "bases": "abc.ABC"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_list", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_list", "kind": "variable", "doc": "

    The list of mask data after initialization.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_boxes", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_boxes", "kind": "variable", "doc": "

    The list of crop boxes.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.original_size", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.original_size", "kind": "variable", "doc": "

    The original image size.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.get_state", "kind": "function", "doc": "

    Get the initialized state of the mask generator.

    \n\n
    Returns:
    \n\n
    \n

    State of the mask generator.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.set_state", "kind": "function", "doc": "

    Set the state of the mask generator.

    \n\n
    Arguments:
    \n\n
      \n
    • state: The state of the mask generator, e.g. from serialized state.
    • \n
    \n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.clear_state", "kind": "function", "doc": "

    Clear the state of the mask generator.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    This class implements the same logic as\nhttps://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py\nIt decouples the computationally expensive steps of generating masks from the cheap post-processing operations\nthat filter these masks, which enables grid search and interactively changing the post-processing.

    \n\n

    Use this class as follows:

    \n\n
    \n
    amg = AutomaticMaskGenerator(predictor)\namg.initialize(image)  # Initialize the masks, this takes care of all expensive computations.\nmasks = amg.generate(pred_iou_thresh=0.8)  # Generate the masks. This is fast and enables testing parameters\n
    \n
    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • points_per_side: The number of points to be sampled along one side of the image.\nIf None, point_grids must provide explicit point sampling.
    • \n
    • points_per_batch: The number of points run simultaneously by the model.\nHigher numbers may be faster but use more GPU memory.
    • \n
    • crop_n_layers: If >0, the mask prediction will be run again on crops of the image.
    • \n
    • crop_overlap_ratio: Sets the degree to which crops overlap.
    • \n
    • crop_n_points_downscale_factor: How the number of points is downsampled when predicting with crops.
    • \n
    • point_grids: A list of explicit grids of points used for sampling masks.\nNormalized to [0, 1] with respect to the image coordinate system.
    • \n
    • stability_score_offset: The amount to shift the cutoff when calculating the stability score.
    • \n
    \n", "bases": "AMGBase"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: Optional[int] = None,\tcrop_n_layers: int = 0,\tcrop_overlap_ratio: float = 0.3413333333333333,\tcrop_n_points_downscale_factor: int = 1,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The input image, volume or timeseries.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nSee util.precompute_image_embeddings for details.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • verbose: Whether to print computation progress.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread,\nwhich enables using this function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n
      \n
    • pred_iou_thresh: Filter threshold in [0, 1], using the mask quality predicted by the model.
    • \n
    • stability_score_thresh: Filter threshold in [0, 1], using the stability of the mask\nunder changes to the cutoff used to binarize the model prediction.
    • \n
    • box_nms_thresh: The IoU threshold used by nonmax suppression to filter duplicate masks.
    • \n
    • crop_nms_thresh: The IoU threshold used by nonmax suppression to filter duplicate masks between crops.
    • \n
    • min_mask_region_area: Minimal size for the predicted masks.
    • \n
    • output_mode: The form masks are returned in.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tbox_nms_thresh: float = 0.7,\tcrop_nms_thresh: float = 0.7,\tmin_mask_region_area: int = 0,\toutput_mode: str = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    Implements the same functionality as AutomaticMaskGenerator but for tiled embeddings.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • points_per_side: The number of points to be sampled along one side of the image.\nIf None, point_grids must provide explicit point sampling.
    • \n
    • points_per_batch: The number of points run simultaneously by the model.\nHigher numbers may be faster but use more GPU memory.
    • \n
    • point_grids: A list of explicit grids of points used for sampling masks.\nNormalized to [0, 1] with respect to the image coordinate system.
    • \n
    • stability_score_offset: The amount to shift the cutoff when calculating the stability score.
    • \n
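    \n\n

    Example (a minimal sketch): 'predictor' and the large input 'image' are assumed to be available; the tile shape and halo values are placeholders:

    \n\n
    \n
    from micro_sam.instance_segmentation import TiledAutomaticMaskGenerator\n\namg = TiledAutomaticMaskGenerator(predictor)\namg.initialize(image, tile_shape=(1024, 1024), halo=(256, 256))\nmasks = amg.generate(pred_iou_thresh=0.88)\n
    \n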
    \n", "bases": "AutomaticMaskGenerator"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: int = 64,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The input image, volume or timeseries.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nSee util.precompute_image_embeddings for details.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • tile_shape: The tile shape for embedding prediction.
    • \n
    • halo: The overlap between tiles.
    • \n
    • verbose: Whether to print computation progress.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread,\nwhich enables using this function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter", "kind": "class", "doc": "

    Adapter to contain the UNETR decoder in a single module.

    \n\n

    This is used to apply the decoder on top of pre-computed embeddings for\nthe segmentation functionality.\nSee also: https://github.com/constantinpape/torch-em/blob/main/torch_em/model/unetr.py

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(unetr)"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.base", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.base", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.out_conv", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.out_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv_out", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv_out", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder_head", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder_head", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.final_activation", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.final_activation", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.postprocess_masks", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.postprocess_masks", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv1", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv1", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv2", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv3", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv3", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv4", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv4", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.forward", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.forward", "kind": "function", "doc": "

    Defines the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, input_, input_shape, original_shape):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_unetr", "modulename": "micro_sam.instance_segmentation", "qualname": "get_unetr", "kind": "function", "doc": "

    Get UNETR model for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • image_encoder: The image encoder of the SAM model.\nThis is used as encoder by the UNETR too.
    • \n
    • decoder_state: Optional decoder state to initialize the weights\nof the UNETR decoder.
    • \n
    • device: The device.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The UNETR model.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: Optional[collections.OrderedDict[str, torch.Tensor]] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> torch.nn.modules.module.Module:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_decoder", "kind": "function", "doc": "

    Get the decoder to predict outputs for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • image_encoder: The image encoder of the SAM model.
    • \n
    • decoder_state: State to initialize the weights of the UNETR decoder.
    • \n
    • device: The device.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The decoder for instance segmentation.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: collections.OrderedDict[str, torch.Tensor],\tdevice: Union[str, torch.device, NoneType] = None) -> micro_sam.instance_segmentation.DecoderAdapter:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_predictor_and_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_predictor_and_decoder", "kind": "function", "doc": "

    Load the SAM model (predictor) and instance segmentation decoder.

    \n\n

    This requires a checkpoint that contains the state for both predictor\nand decoder.

    \n\n
    Arguments:
    \n\n
      \n
    • model_type: The type of the image encoder used in the SAM model.
    • \n
    • checkpoint_path: Path to the checkpoint from which to load the data.
    • \n
    • device: The device.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The SAM predictor.\n The decoder for instance segmentation.

    \n
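    \n\n

    Example (a minimal sketch): the checkpoint path is a placeholder and has to contain both the SAM and the decoder state:

    \n\n
    \n
    from micro_sam.instance_segmentation import InstanceSegmentationWithDecoder, get_predictor_and_decoder\n\npredictor, decoder = get_predictor_and_decoder(\n    model_type='vit_b', checkpoint_path='finetuned_model_with_decoder.pt',\n)\nsegmenter = InstanceSegmentationWithDecoder(predictor, decoder)\n
    \n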
    \n", "signature": "(\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tdevice: Union[str, torch.device, NoneType] = None) -> Tuple[segment_anything.predictor.SamPredictor, micro_sam.instance_segmentation.DecoderAdapter]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a decoder.

    \n\n

    Implements the same interface as AutomaticMaskGenerator.

    \n\n

    Use this class as follows:

    \n\n
    \n
    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)\nsegmenter.initialize(image)   # Predict the image embeddings and decoder outputs.\nmasks = segmenter.generate(center_distance_threshold=0.75)  # Generate the instance segmentation.\n
    \n
    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • decoder: The decoder to predict intermediate representations\nfor instance segmentation.
    • \n
    \n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The input image, volume or timeseries.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nSee util.precompute_image_embeddings for details.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • verbose: Whether to be verbose.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread,\nwhich enables using this function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n
      \n
    • center_distance_threshold: Center distance predictions below this value will be\nused to find seeds (intersected with thresholded boundary distance predictions).
    • \n
    • boundary_distance_threshold: Boundary distance predictions below this value will be\nused to find seeds (intersected with thresholded center distance predictions).
    • \n
    • foreground_smoothing: Sigma value for smoothing the foreground predictions, to avoid\ncheckerboard artifacts in the prediction.
    • \n
    • foreground_threshold: Foreground predictions above this value will be used as foreground mask.
    • \n
    • distance_smoothing: Sigma value for smoothing the distance predictions.
    • \n
    • min_size: Minimal object size in the segmentation result.
    • \n
    • output_mode: The form masks are returned in. Pass None to directly return the instance segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tcenter_distance_threshold: float = 0.5,\tboundary_distance_threshold: float = 0.5,\tforeground_threshold: float = 0.5,\tforeground_smoothing: float = 1.0,\tdistance_smoothing: float = 1.6,\tmin_size: int = 0,\toutput_mode: Optional[str] = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.get_state", "kind": "function", "doc": "

    Get the initialized state of the instance segmenter.

    \n\n
    Returns:
    \n\n
    \n

    Instance segmentation state.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.set_state", "kind": "function", "doc": "

    Set the state of the instance segmenter.

    \n\n
    Arguments:
    \n\n
      \n
    • state: The instance segmentation state
    • \n
    \n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.clear_state", "kind": "function", "doc": "

    Clear the state of the instance segmenter.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Same as InstanceSegmentationWithDecoder but for tiled image embeddings.

    \n", "bases": "InstanceSegmentationWithDecoder"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The input image, volume or timeseries.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nSee util.precompute_image_embeddings for details.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • tile_shape: The tile shape for embedding prediction.
    • \n
    • halo: The overlap between tiles.
    • \n
    • verbose: Dummy input to be compatible with other function signatures.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread,\nwhich enables using this function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_amg", "modulename": "micro_sam.instance_segmentation", "qualname": "get_amg", "kind": "function", "doc": "

    Get the automatic mask generator class.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • is_tiled: Whether tiled embeddings are used.
    • \n
    • decoder: The decoder to predict the instance segmentation.
    • \n
    • kwargs: The keyword arguments for the amg class.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The automatic mask generator.

    \n
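    \n\n

    Example (a minimal sketch): 'predictor', the optional 'decoder' and 'image' are assumed to be available:

    \n\n
    \n
    from micro_sam.instance_segmentation import get_amg\n\n# Without a decoder this returns an AutomaticMaskGenerator,\n# with a decoder an InstanceSegmentationWithDecoder (tiled variants if is_tiled=True).\namg = get_amg(predictor, is_tiled=False, decoder=decoder)\namg.initialize(image)\nmasks = amg.generate()\n
    \n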
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tis_tiled: bool,\tdecoder: Optional[torch.nn.modules.module.Module] = None,\t**kwargs) -> Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "kind": "module", "doc": "

    Multi-dimensional segmentation with segment anything.

    \n"}, {"fullname": "micro_sam.multi_dimensional_segmentation.PROJECTION_MODES", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "PROJECTION_MODES", "kind": "variable", "doc": "

    \n", "default_value": "('box', 'mask', 'points', 'points_and_mask', 'single_point')"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_mask_in_volume", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_mask_in_volume", "kind": "function", "doc": "

    Segment an object mask in volumetric data.

    \n\n
    Arguments:
    \n\n
      \n
    • segmentation: The initial segmentation for the object.
    • \n
    • predictor: The segment anything predictor.
    • \n
    • image_embeddings: The precomputed image embeddings for the volume.
    • \n
    • segmented_slices: List of slices for which this object has already been segmented.
    • \n
    • stop_lower: Whether to stop at the lowest segmented slice.
    • \n
    • stop_upper: Whether to stop at the topmost segmented slice.
    • \n
    • iou_threshold: The IOU threshold for continuing segmentation across 3d.
    • \n
    • projection: The projection method to use. One of 'box', 'mask', 'points', 'points_and_mask' or 'single_point'.\nPass a dictionary to choose the exact combination of projection modes.
    • \n
    • update_progress: Callback to update an external progress bar.
    • \n
    • box_extension: Extension factor for increasing the box size after projection.
    • \n
    • verbose: Whether to print details about the segmentation steps.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    Array with the volumetric segmentation.\n Tuple with the first and last segmented slice.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\tsegmented_slices: numpy.ndarray,\tstop_lower: bool,\tstop_upper: bool,\tiou_threshold: float,\tprojection: Union[str, dict],\tupdate_progress: Optional[<built-in function callable>] = None,\tbox_extension: float = 0.0,\tverbose: bool = False) -> Tuple[numpy.ndarray, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.merge_instance_segmentation_3d", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "merge_instance_segmentation_3d", "kind": "function", "doc": "

    Merge stacked 2d instance segmentations into a consistent 3d segmentation.

    \n\n

    Solves a multicut problem based on the overlap of objects to merge across z.

    \n\n
    Arguments:
    \n\n
      \n
    • slice_segmentation: The stacked segmentation across the slices.\nWe assume that the segmentation is labeled consecutively across z.
    • \n
    • beta: The bias term for the multicut. Higher values lead to a larger\ndegree of over-segmentation and vice versa.
    • \n
    • with_background: Whether this is a segmentation problem with background.\nIn that case all edges connecting to the background are set to be repulsive.
    • \n
    • gap_closing: If given, gaps in the segmentation are closed with a binary closing\noperation. The value is used to determine the number of iterations for the closing.
    • \n
    • min_z_extent: Require a minimal extent in z for the segmented objects.\nThis can help to prevent segmentation artifacts.
    • \n
    • verbose: Verbosity flag.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread,\nwhich enables using this function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The merged segmentation.

    \n
    \n", "signature": "(\tslice_segmentation: numpy.ndarray,\tbeta: float = 0.5,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.automatic_3d_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "automatic_3d_segmentation", "kind": "function", "doc": "

    Segment a volume in 3d.

    \n\n

    First segments the slices individually in 2d and then merges them across 3d,\nbased on the overlap of objects between slices.

    \n\n
    Arguments:
    \n\n
      \n
    • volume: The input volume.
    • \n
    • predictor: The SAM model.
    • \n
    • segmentor: The instance segmentation class.
    • \n
    • embedding_path: The path to save pre-computed embeddings.
    • \n
    • with_background: Whether the segmentation has background.
    • \n
    • gap_closing: If given, gaps in the segmentation are closed with a binary closing\noperation. The value is used to determine the number of iterations for the closing.
    • \n
    • min_z_extent: Require a minimal extent in z for the segmented objects.\nThis can help to prevent segmentation artifacts.
    • \n
    • verbose: Verbosity flag.
    • \n
    • kwargs: Keyword arguments for the 'generate' method of the 'segmentor'.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The segmentation.

    \n
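    \n\n

    Example (a minimal sketch with a decoder-based segmentor): 'volume' as well as the checkpoint and embedding paths are placeholders:

    \n\n
    \n
    from micro_sam.instance_segmentation import get_amg, get_predictor_and_decoder\nfrom micro_sam.multi_dimensional_segmentation import automatic_3d_segmentation\n\npredictor, decoder = get_predictor_and_decoder('vit_b', 'finetuned_model_with_decoder.pt')\nsegmentor = get_amg(predictor, is_tiled=False, decoder=decoder)\nsegmentation = automatic_3d_segmentation(\n    volume, predictor, segmentor, embedding_path='embeddings.zarr',\n)\n
    \n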
    \n", "signature": "(\tvolume: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\tsegmentor: micro_sam.instance_segmentation.AMGBase,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\t**kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state", "modulename": "micro_sam.precompute_state", "kind": "module", "doc": "

    Precompute image embeddings and automatic mask generator state for image data.

    \n"}, {"fullname": "micro_sam.precompute_state.cache_amg_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_amg_state", "kind": "function", "doc": "

    Compute and cache or load the state for the automatic mask generator.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • raw: The image data.
    • \n
    • image_embeddings: The image embeddings.
    • \n
    • save_path: The embedding save path. The AMG state will be stored in 'save_path/amg_state.pickle'.
    • \n
    • verbose: Whether to run the computation verbosely.
    • \n
    • i: The index for which to cache the state.
    • \n
    • kwargs: The keyword arguments for the amg class.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The automatic mask generator class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.cache_is_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_is_state", "kind": "function", "doc": "

    Compute and cache or load the state for the decoder-based automatic instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • decoder: The instance segmentation decoder.
    • \n
    • raw: The image data.
    • \n
    • image_embeddings: The image embeddings.
    • \n
    • save_path: The embedding save path. The AMG state will be stored in 'save_path/amg_state.pickle'.
    • \n
    • verbose: Whether to run the computation verbosely.
    • \n
    • i: The index for which to cache the state.
    • \n
    • skip_load: Skip loading the state if it is precomputed.
    • \n
    • kwargs: The keyword arguments for the amg class.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The instance segmentation class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\tskip_load: bool = False,\t**kwargs) -> Optional[micro_sam.instance_segmentation.AMGBase]:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.precompute_state", "modulename": "micro_sam.precompute_state", "qualname": "precompute_state", "kind": "function", "doc": "

    Precompute the image embeddings and other optional state for the input image(s).

    \n\n
    Arguments:
    \n\n
      \n
    • input_path: The input image file(s). Can either be a single image file (e.g. tif or png),\na container file (e.g. hdf5 or zarr) or a folder with image files.\nIn case of a container file the argument key must be given. In case of a folder\na glob pattern can be given to subselect files from the folder.
    • \n
    • output_path: The output path where the embeddings and other state will be saved.
    • \n
    • pattern: Glob pattern to select files in a folder. The embeddings will be computed\nfor each of these files. To select all files in a folder pass \"*\".
    • \n
    • model_type: The Segment Anything model to use. Will use the standard vit_l model by default.
    • \n
    • checkpoint_path: Path to a checkpoint for a custom model.
    • \n
    • key: The key to the input file. This is needed for container files (e.g. hdf5 or zarr)\nor to load several images as a 3d volume. Provide a glob pattern, e.g. \"*.tif\", for this case.
    • \n
    • ndim: The dimensionality of the data.
    • \n
    • tile_shape: Shape of tiles for tiled prediction. By default prediction is run without tiling.
    • \n
    • halo: Overlap of the tiles for tiled prediction.
    • \n
    • precompute_amg_state: Whether to precompute the state for automatic instance segmentation\nin addition to the image embeddings.
    • \n
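    \n\n

    Example (a minimal sketch): precompute embeddings and AMG state for all tif files in a folder; the paths are placeholders:

    \n\n
    \n
    from micro_sam.precompute_state import precompute_state\n\nprecompute_state(\n    'data/images', 'data/embeddings', pattern='*.tif',\n    model_type='vit_b', ndim=2, precompute_amg_state=True,\n)\n
    \n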
    \n", "signature": "(\tinput_path: Union[os.PathLike, str],\toutput_path: Union[os.PathLike, str],\tpattern: Optional[str] = None,\tmodel_type: str = 'vit_l',\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tkey: Optional[str] = None,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tprecompute_amg_state: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation", "modulename": "micro_sam.prompt_based_segmentation", "kind": "module", "doc": "

    Functions for prompt-based segmentation with Segment Anything.

    \n"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_points", "kind": "function", "doc": "

    Segmentation from point prompts.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • points: The point prompts given in the image coordinate system.
    • \n
    • labels: The labels (positive or negative) associated with the points.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nHas to be passed if the predictor is not yet initialized.
    • \n
    • i: Index for the image data. Required if the input data has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • multimask_output: Whether to return multiple or just a single mask.
    • \n
    • return_all: Whether to return the score and logits in addition to the mask.
    • \n
    • use_best_multimask: Whether to use multimask output and then choose the best mask.\nBy default this is used for a single positive point and not otherwise.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
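    \n\n

    Example (a minimal sketch): 'image' is assumed to be a numpy array and the point coordinates are placeholders:

    \n\n
    \n
    import numpy as np\nfrom micro_sam.util import get_sam_model, precompute_image_embeddings\nfrom micro_sam.prompt_based_segmentation import segment_from_points\n\npredictor = get_sam_model(model_type='vit_b')\nimage_embeddings = precompute_image_embeddings(predictor, image)\npoints = np.array([[120, 140], [40, 50]])  # One positive and one negative point prompt.\nlabels = np.array([1, 0])\nmask = segment_from_points(predictor, points, labels, image_embeddings=image_embeddings)\n
    \n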
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tuse_best_multimask: Optional[bool] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_mask", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_mask", "kind": "function", "doc": "

    Segmentation from a mask prompt.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • mask: The mask used to derive prompts.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nHas to be passed if the predictor is not yet initialized.
    • \n
    • i: Index for the image data. Required if the input data has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • use_box: Whether to derive the bounding box prompt from the mask.
    • \n
    • use_mask: Whether to use the mask itself as prompt.
    • \n
    • use_points: Whether to derive point prompts from the mask.
    • \n
    • original_size: Full image shape. Use this if the mask that is being passed is\ndownsampled compared to the original image.
    • \n
    • multimask_output: Whether to return multiple or just a single mask.
    • \n
    • return_all: Whether to return the score and logits in addition to the mask.
    • \n
    • box_extension: Relative factor used to enlarge the bounding box prompt.
    • \n
    • box: Precomputed bounding box.
    • \n
    • points: Precomputed point prompts.
    • \n
    • labels: Positive/negative labels corresponding to the point prompts.
    • \n
    • use_single_point: Whether to derive just a single point from the mask.\nOnly used if use_points is true.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tmask: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tuse_box: bool = True,\tuse_mask: bool = True,\tuse_points: bool = False,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\treturn_logits: bool = False,\tbox_extension: float = 0.0,\tbox: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tlabels: Optional[numpy.ndarray] = None,\tuse_single_point: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box", "kind": "function", "doc": "

    Segmentation from a box prompt.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • box: The box prompt.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nHas to be passed if the predictor is not yet initialized.
    • \n
    • i: Index for the image data. Required if the input data has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • multimask_output: Whether to return multiple or just a single mask.
    • \n
    • return_all: Whether to return the score and logits in addition to the mask.
    • \n
    • box_extension: Relative factor used to enlarge the bounding box prompt.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
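    \n\n

    Example (a minimal sketch): 'predictor' and 'image_embeddings' are assumed to be set up as for segment_from_points, and the box coordinates are placeholders:

    \n\n
    \n
    import numpy as np\nfrom micro_sam.prompt_based_segmentation import segment_from_box\n\nbox = np.array([10, 10, 80, 90])  # Placeholder box prompt.\nmask = segment_from_box(predictor, box, image_embeddings=image_embeddings)\n
    \n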
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tbox_extension: float = 0.0):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box_and_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box_and_points", "kind": "function", "doc": "

    Segmentation from a box prompt and point prompts.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • box: The box prompt.
    • \n
    • points: The point prompts, given in the image coordinates system.
    • \n
    • labels: The point labels, either positive or negative.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nHas to be passed if the predictor is not yet initialized.
    • \n
    • i: Index for the image data. Required if the input data has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • multimask_output: Whether to return multiple or just a single mask.
    • \n
    • return_all: Whether to return the score and logits in addition to the mask.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_generators", "modulename": "micro_sam.prompt_generators", "kind": "module", "doc": "

    Classes for generating prompts from ground-truth segmentation masks.\nFor training or evaluation of prompt-based segmentation.

    \n"}, {"fullname": "micro_sam.prompt_generators.PromptGeneratorBase", "modulename": "micro_sam.prompt_generators", "qualname": "PromptGeneratorBase", "kind": "class", "doc": "

    PromptGeneratorBase is an interface to implement specific prompt generators.

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator", "kind": "class", "doc": "

    Generate point and/or box prompts from an instance segmentation.

    \n\n

    You can use this class to derive prompts from an instance segmentation, either for\nevaluation purposes or for training Segment Anything on custom data.\nIn order to use this generator you need to precompute the bounding boxes and center\ncoordinates of the instance segmentation, using e.g. util.get_centers_and_bounding_boxes.

    \n\n

    Here's an example for how to use this class:

    \n\n
    \n
    # Initialize generator for 1 positive and 4 negative point prompts.\nprompt_generator = PointAndBoxPromptGenerator(1, 4, dilation_strength=8)\n\n# Precompute the bounding boxes for the given segmentation\nbounding_boxes, _ = util.get_centers_and_bounding_boxes(segmentation)\n\n# generate point prompts for the objects with ids 1, 2 and 3\nseg_ids = (1, 2, 3)\nobject_mask = np.stack([segmentation == seg_id for seg_id in seg_ids])[:, None]\nthis_bounding_boxes = [bounding_boxes[seg_id] for seg_id in seg_ids]\npoint_coords, point_labels, _, _ = prompt_generator(object_mask, this_bounding_boxes)\n
    \n
    \n\n
    Arguments:
    \n\n
      \n
    • n_positive_points: The number of positive point prompts to generate per mask.
    • \n
    • n_negative_points: The number of negative point prompts to generate per mask.
    • \n
    • dilation_strength: The factor by which the mask is dilated before generating prompts.
    • \n
    • get_point_prompts: Whether to generate point prompts.
    • \n
    • get_box_prompts: Whether to generate box prompts.
    • \n
    \n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.__init__", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tn_positive_points: int,\tn_negative_points: int,\tdilation_strength: int,\tget_point_prompts: bool = True,\tget_box_prompts: bool = False)"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_positive_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_positive_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_negative_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_negative_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.dilation_strength", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_box_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_box_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_point_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_point_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.IterativePromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "IterativePromptGenerator", "kind": "class", "doc": "

    Generate point prompts from an instance segmentation iteratively.

    \n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.sam_annotator", "modulename": "micro_sam.sam_annotator", "kind": "module", "doc": "

    The interactive annotation tools.

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d.__init__", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n
      \n
    • viewer: The napari viewer.
    • \n
• ndim: The number of spatial dimensions of the image data (2 or 3).
    • \n
    \n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "annotator_2d", "kind": "function", "doc": "

    Start the 2d annotation tool for a given image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The image data.
    • \n
    • embedding_path: Filepath where to save the embeddings.
    • \n
    • segmentation_result: An initial segmentation to load.\nThis can be used to correct segmentations with Segment Anything or to save and load progress.\nThe segmentation will be loaded as the 'committed_objects' layer.
    • \n
    • model_type: The Segment Anything model to use. For details on the available models check out\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models.
    • \n
    • tile_shape: Shape of tiles for tiled embedding prediction.\nIf None then the whole image is passed to Segment Anything.
    • \n
• halo: Shape of the overlap between tiles, which is needed to segment objects on tile borders.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • precompute_amg_state: Whether to precompute the state for automatic mask generation.\nThis will take more time when precomputing embeddings, but will then make\nautomatic mask generation much faster.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the SAM model.
    • \n
    • device: The computational device to use for the SAM model.
    • \n
    • prefer_decoder: Whether to use decoder based instance segmentation if\nthe model used has an additional decoder for instance segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d.__init__", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n
      \n
    • viewer: The napari viewer.
    • \n
• ndim: The number of spatial dimensions of the image data (2 or 3).
    • \n
    \n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "annotator_3d", "kind": "function", "doc": "

    Start the 3d annotation tool for a given image volume.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The volumetric image data.
    • \n
    • embedding_path: Filepath for saving the precomputed embeddings.
    • \n
    • segmentation_result: An initial segmentation to load.\nThis can be used to correct segmentations with Segment Anything or to save and load progress.\nThe segmentation will be loaded as the 'committed_objects' layer.
    • \n
    • model_type: The Segment Anything model to use. For details on the available models check out\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models.
    • \n
    • tile_shape: Shape of tiles for tiled embedding prediction.\nIf None then the whole image is passed to Segment Anything.
    • \n
• halo: Shape of the overlap between tiles, which is needed to segment objects on tile borders.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • precompute_amg_state: Whether to precompute the state for automatic mask generation.\nThis will take more time when precomputing embeddings, but will then make\nautomatic mask generation much faster.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the SAM model.
    • \n
    • device: The computational device to use for the SAM model.
    • \n
    • prefer_decoder: Whether to use decoder based instance segmentation if\nthe model used has an additional decoder for instance segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking.__init__", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n
      \n
    • viewer: The napari viewer.
    • \n
• ndim: The number of spatial dimensions of the image data (2 or 3).
    • \n
    \n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "annotator_tracking", "kind": "function", "doc": "

Start the tracking annotation tool for a given timeseries.

    \n\n
    Arguments:
    \n\n
      \n
• image: The image data of the timeseries.
    • \n
    • embedding_path: Filepath for saving the precomputed embeddings.
    • \n
    • model_type: The Segment Anything model to use. For details on the available models check out\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models.
    • \n
    • tile_shape: Shape of tiles for tiled embedding prediction.\nIf None then the whole image is passed to Segment Anything.
    • \n
• halo: Shape of the overlap between tiles, which is needed to segment objects on tile borders.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the SAM model.
    • \n
    • device: The computational device to use for the SAM model.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_series_annotator", "kind": "function", "doc": "

    Run the annotation tool for a series of images (supported for both 2d and 3d images).

    \n\n
    Arguments:
    \n\n
      \n
    • images: List of the file paths or list of (set of) slices for the images to be annotated.
    • \n
    • output_folder: The folder where the segmentation results are saved.
    • \n
    • model_type: The Segment Anything model to use. For details on the available models check out\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models.
    • \n
    • embedding_path: Filepath where to save the embeddings.
    • \n
    • tile_shape: Shape of tiles for tiled embedding prediction.\nIf None then the whole image is passed to Segment Anything.
    • \n
• halo: Shape of the overlap between tiles, which is needed to segment objects on tile borders.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • precompute_amg_state: Whether to precompute the state for automatic mask generation.\nThis will take more time when precomputing embeddings, but will then make\nautomatic mask generation much faster.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the SAM model.
    • \n
    • is_volumetric: Whether to use the 3d annotator.
    • \n
    • prefer_decoder: Whether to use decoder based instance segmentation if\nthe model used has an additional decoder for instance segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timages: Union[List[Union[os.PathLike, str]], List[numpy.ndarray]],\toutput_folder: str,\tmodel_type: str = 'vit_l',\tembedding_path: Optional[str] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tis_volumetric: bool = False,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_folder_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_folder_annotator", "kind": "function", "doc": "

    Run the 2d annotation tool for a series of images in a folder.

    \n\n
    Arguments:
    \n\n
      \n
    • input_folder: The folder with the images to be annotated.
    • \n
    • output_folder: The folder where the segmentation results are saved.
    • \n
• pattern: The glob pattern for loading files from input_folder.\nBy default all files will be loaded.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • kwargs: The keyword arguments for micro_sam.sam_annotator.image_series_annotator.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\tinput_folder: str,\toutput_folder: str,\tpattern: str = '*',\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\t**kwargs) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator", "kind": "class", "doc": "

    QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.__init__", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.__init__", "kind": "function", "doc": "

    \n", "signature": "(viewer: napari.viewer.Viewer, parent=None)"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.run_button", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui", "modulename": "micro_sam.sam_annotator.training_ui", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget", "kind": "class", "doc": "

    QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.__init__", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.__init__", "kind": "function", "doc": "

    \n", "signature": "(parent=None)"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.run_button", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util", "modulename": "micro_sam.sam_annotator.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util.point_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "point_layer_to_prompts", "kind": "function", "doc": "

    Extract point prompts for SAM from a napari point layer.

    \n\n
    Arguments:
    \n\n
      \n
    • layer: The point layer from which to extract the prompts.
    • \n
    • i: Index for the data (required for 3d or timeseries data).
    • \n
    • track_id: Id of the current track (required for tracking data).
    • \n
    • with_stop_annotation: Whether a single negative point will be interpreted\nas stop annotation or just returned as normal prompt.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The point coordinates for the prompts.\n The labels (positive or negative / 1 or 0) for the prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.points.points.Points,\ti=None,\ttrack_id=None,\twith_stop_annotation=True) -> Optional[Tuple[numpy.ndarray, numpy.ndarray]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.shape_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "shape_layer_to_prompts", "kind": "function", "doc": "

    Extract prompts for SAM from a napari shape layer.

    \n\n

    Extracts the bounding box for 'rectangle' shapes and the bounding box and corresponding mask\nfor 'ellipse' and 'polygon' shapes.

    \n\n
    Arguments:
    \n\n
      \n
    • prompt_layer: The napari shape layer.
    • \n
    • shape: The image shape.
    • \n
    • i: Index for the data (required for 3d or timeseries data).
    • \n
    • track_id: Id of the current track (required for tracking data).
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The box prompts.\n The mask prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.shapes.shapes.Shapes,\tshape: Tuple[int, int],\ti=None,\ttrack_id=None) -> Tuple[List[numpy.ndarray], List[Optional[numpy.ndarray]]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layer_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layer_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n
      \n
    • prompt_layer: The napari layer.
    • \n
    • i: Timeframe of the data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(prompt_layer: napari.layers.points.points.Points, i: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layers_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layers_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer and shape layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n
      \n
    • point_layer: The napari point layer.
    • \n
    • box_layer: The napari box layer.
    • \n
    • i: Timeframe of the data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(\tpoint_layer: napari.layers.points.points.Points,\tbox_layer: napari.layers.shapes.shapes.Shapes,\ti: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data", "modulename": "micro_sam.sample_data", "kind": "module", "doc": "

    Sample microscopy data.

    \n\n

    You can change the download location for sample data and model weights\nby setting the environment variable: MICROSAM_CACHEDIR

    \n\n

By default sample data is downloaded to a folder named 'micro_sam/sample_data'\ninside your default cache directory, e.g.:\n * Mac: ~/Library/Caches/\n * Unix: ~/.cache/ or the value of the XDG_CACHE_HOME environment variable, if defined.\n * Windows: C:\Users\<user>\AppData\Local\<AppAuthor>\<AppName>\Cache

    \n"}, {"fullname": "micro_sam.sample_data.fetch_image_series_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_image_series_example_data", "kind": "function", "doc": "

    Download the sample images for the image series annotator.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_image_series", "modulename": "micro_sam.sample_data", "qualname": "sample_data_image_series", "kind": "function", "doc": "

    Provides image series example image to napari.

    \n\n

    Opens as three separate image layers in napari (one per image in series).\nThe third image in the series has a different size and modality.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_wholeslide_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_wholeslide_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads part of a whole-slide image from the NeurIPS Cell Segmentation Challenge.\nSee https://neurips22-cellseg.grand-challenge.org/ for details on the data.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_wholeslide", "modulename": "micro_sam.sample_data", "qualname": "sample_data_wholeslide", "kind": "function", "doc": "

    Provides wholeslide 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_livecell_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_livecell_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the LiveCELL dataset.\nSee https://doi.org/10.1038/s41592-021-01249-6 for details on the data.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_livecell", "modulename": "micro_sam.sample_data", "qualname": "sample_data_livecell", "kind": "function", "doc": "

    Provides livecell 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_hela_2d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_hela_2d_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the HeLa CTC dataset.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_hela_2d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_hela_2d", "kind": "function", "doc": "

    Provides HeLa 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_3d_example_data", "kind": "function", "doc": "

    Download the sample data for the 3d annotator.

    \n\n

This downloads the Lucchi++ dataset from https://casser.io/connectomics/.\nIt is a dataset for mitochondria segmentation in EM.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_3d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_3d", "kind": "function", "doc": "

    Provides Lucchi++ 3d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_example_data", "kind": "function", "doc": "

    Download the sample data for the tracking annotator.

    \n\n

    This data is the cell tracking challenge dataset DIC-C2DH-HeLa.\nCell tracking challenge webpage: http://data.celltrackingchallenge.net\nHeLa cells on a flat glass\nDr. G. van Cappellen. Erasmus Medical Center, Rotterdam, The Netherlands\nTraining dataset: http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip (37 MB)\nChallenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/DIC-C2DH-HeLa.zip (41 MB)

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_tracking", "modulename": "micro_sam.sample_data", "qualname": "sample_data_tracking", "kind": "function", "doc": "

    Provides tracking example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_segmentation_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_segmentation_data", "kind": "function", "doc": "

    Download groundtruth segmentation for the tracking example data.

    \n\n

    This downloads the groundtruth segmentation for the image data from fetch_tracking_example_data.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_segmentation", "modulename": "micro_sam.sample_data", "qualname": "sample_data_segmentation", "kind": "function", "doc": "

    Provides segmentation example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.synthetic_data", "modulename": "micro_sam.sample_data", "qualname": "synthetic_data", "kind": "function", "doc": "

    Create synthetic image data and segmentation for training.

    \n", "signature": "(shape, seed=None):", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_nucleus_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_nucleus_3d_example_data", "kind": "function", "doc": "

    Download the sample data for 3d segmentation of nuclei.

    \n\n

    This data contains a small crop from a volume from the publication\n\"Efficient automatic 3D segmentation of cell nuclei for high-content screening\"\nhttps://doi.org/10.1186/s12859-022-04737-4

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.training", "modulename": "micro_sam.training", "kind": "module", "doc": "

    Functionality for training Segment Anything.

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer", "modulename": "micro_sam.training.joint_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n
      \n
    • convert_inputs: The class that converts outputs of the dataloader to the expected input format of SAM.\nThe class micro_sam.training.util.ConvertToSamInputs can be used here.
    • \n
    • n_sub_iteration: The number of iteration steps for which the masks predicted for one object are updated.\nIn each sub-iteration new point prompts are sampled where the model was wrong.
    • \n
    • n_objects_per_batch: If not given, we compute the loss for all objects in a sample.\nOtherwise the loss computation is limited to n_objects_per_batch, and the objects are randomly sampled.
    • \n
    • mse_loss: The regression loss to compare the IoU predicted by the model with the true IoU.
    • \n
• prompt_generator: The iterative prompt generator which takes care of the iterative prompting logic for training.
• \n
• mask_prob: The probability of using the mask inputs in the iterative prompting (per n_sub_iteration).
    • \n
    • **kwargs: The keyword arguments of the DefaultTrainer super class.
    • \n
    \n", "bases": "micro_sam.training.sam_trainer.SamTrainer"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.__init__", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tunetr: torch.nn.modules.module.Module,\tinstance_loss: torch.nn.modules.module.Module,\tinstance_metric: torch.nn.modules.module.Module,\t**kwargs)"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.unetr", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.unetr", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_loss", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_metric", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_metric", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.save_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.save_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, name, current_metric, best_metric, **extra_save_dict):", "funcdef": "def"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.load_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.load_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, checkpoint='best'):", "funcdef": "def"}, {"fullname": "micro_sam.training.sam_trainer", "modulename": "micro_sam.training.sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n
      \n
    • convert_inputs: The class that converts outputs of the dataloader to the expected input format of SAM.\nThe class micro_sam.training.util.ConvertToSamInputs can be used here.
    • \n
    • n_sub_iteration: The number of iteration steps for which the masks predicted for one object are updated.\nIn each sub-iteration new point prompts are sampled where the model was wrong.
    • \n
    • n_objects_per_batch: If not given, we compute the loss for all objects in a sample.\nOtherwise the loss computation is limited to n_objects_per_batch, and the objects are randomly sampled.
    • \n
    • mse_loss: The regression loss to compare the IoU predicted by the model with the true IoU.
    • \n
• prompt_generator: The iterative prompt generator which takes care of the iterative prompting logic for training.
• \n
• mask_prob: The probability of using the mask inputs in the iterative prompting (per n_sub_iteration).
    • \n
    • **kwargs: The keyword arguments of the DefaultTrainer super class.
    • \n
    \n", "bases": "torch_em.trainer.default_trainer.DefaultTrainer"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.__init__", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tconvert_inputs,\tn_sub_iteration: int,\tn_objects_per_batch: Optional[int] = None,\tmse_loss: torch.nn.modules.module.Module = MSELoss(),\tprompt_generator: micro_sam.prompt_generators.PromptGeneratorBase = <micro_sam.prompt_generators.IterativePromptGenerator object>,\tmask_prob: float = 0.5,\t**kwargs)"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.convert_inputs", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.convert_inputs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mse_loss", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mse_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_objects_per_batch", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_objects_per_batch", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_sub_iteration", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_sub_iteration", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.prompt_generator", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.prompt_generator", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mask_prob", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mask_prob", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam", "modulename": "micro_sam.training.trainable_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM", "kind": "class", "doc": "

    Wrapper to make the SegmentAnything model trainable.

    \n\n
    Arguments:
    \n\n
      \n
    • sam: The SegmentAnything Model.
    • \n
    • device: The device for training.
    • \n
    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.__init__", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tsam: segment_anything.modeling.sam.Sam,\tdevice: Union[str, torch.device])"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.sam", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.device", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.device", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.transform", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.preprocess", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.preprocess", "kind": "function", "doc": "

    Resize, normalize pixel values and pad to a square input.

    \n\n
    Arguments:
    \n\n
      \n
    • x: The input tensor.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The resized, normalized and padded tensor.\n The shape of the image after resizing.

    \n
    \n", "signature": "(self, x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.image_embeddings_oft", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.image_embeddings_oft", "kind": "function", "doc": "

    \n", "signature": "(self, batched_inputs):", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.forward", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.forward", "kind": "function", "doc": "

    Forward pass.

    \n\n
    Arguments:
    \n\n
      \n
    • batched_inputs: The batched input images and prompts.
    • \n
• image_embeddings: The precomputed image embeddings. If not passed then they will be computed.
    • \n
• multimask_output: Whether to predict multiple or just a single mask.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks and iou values.

    \n
    \n", "signature": "(\tself,\tbatched_inputs: List[Dict[str, Any]],\timage_embeddings: torch.Tensor,\tmultimask_output: bool = False) -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.training", "modulename": "micro_sam.training.training", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.training.FilePath", "modulename": "micro_sam.training.training", "qualname": "FilePath", "kind": "variable", "doc": "

    \n", "default_value": "typing.Union[str, os.PathLike]"}, {"fullname": "micro_sam.training.training.train_sam", "modulename": "micro_sam.training.training", "qualname": "train_sam", "kind": "function", "doc": "

    Run training for a SAM model.

    \n\n
    Arguments:
    \n\n
      \n
• name: The name of the model to be trained.\nThe checkpoint and logs will have this name.
    • \n
    • model_type: The type of the SAM model.
    • \n
    • train_loader: The dataloader for training.
    • \n
    • val_loader: The dataloader for validation.
    • \n
    • n_epochs: The number of epochs to train for.
    • \n
    • early_stopping: Enable early stopping after this number of epochs\nwithout improvement.
    • \n
• n_objects_per_batch: The number of objects per batch used to compute\nthe loss for interactive segmentation. If None all objects will be used,\nif given objects will be randomly sub-sampled.
    • \n
    • checkpoint_path: Path to checkpoint for initializing the SAM model.
    • \n
    • with_segmentation_decoder: Whether to train additional UNETR decoder\nfor automatic instance segmentation.
    • \n
• freeze: Specify parts of the model that should be frozen, namely:\nimage_encoder, prompt_encoder and mask_decoder.\nBy default nothing is frozen and the full model is updated.
    • \n
    • device: The device to use for training.
    • \n
    • lr: The learning rate.
    • \n
    • n_sub_iteration: The number of iterative prompts per training iteration.
    • \n
    • save_root: Optional root directory for saving the checkpoints and logs.\nIf not given the current working directory is used.
    • \n
    • mask_prob: The probability for using a mask as input in a given training sub-iteration.
    • \n
• n_iterations: The number of iterations to use for training. This will override n_epochs if given.
    • \n
    • scheduler_class: The learning rate scheduler to update the learning rate.\nBy default, ReduceLROnPlateau is used.
    • \n
    • scheduler_kwargs: The learning rate scheduler parameters.\nIf passed None, the chosen default parameters are used in ReduceLROnPlateau.
    • \n
    • save_every_kth_epoch: Save checkpoints after every kth epoch separately.
    • \n
    • pbar_signals: Controls for napari progress bar.
    • \n
    \n", "signature": "(\tname: str,\tmodel_type: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tn_epochs: int = 100,\tearly_stopping: Optional[int] = 10,\tn_objects_per_batch: Optional[int] = 25,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\tfreeze: Optional[List[str]] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tlr: float = 1e-05,\tn_sub_iteration: int = 8,\tsave_root: Union[os.PathLike, str, NoneType] = None,\tmask_prob: float = 0.5,\tn_iterations: Optional[int] = None,\tscheduler_class: Optional[torch.optim.lr_scheduler._LRScheduler] = <class 'torch.optim.lr_scheduler.ReduceLROnPlateau'>,\tscheduler_kwargs: Optional[Dict[str, Any]] = None,\tsave_every_kth_epoch: Optional[int] = None,\tpbar_signals: Optional[PyQt5.QtCore.QObject] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_dataset", "modulename": "micro_sam.training.training", "qualname": "default_sam_dataset", "kind": "function", "doc": "

    Create a PyTorch Dataset for training a SAM model.

    \n\n
    Arguments:
    \n\n
      \n
    • raw_paths: The path(s) to the image data used for training.\nCan either be multiple 2D images or volumetric data.
    • \n
    • raw_key: The key for accessing the image data. Internal filepath for hdf5-like input\nor a glob pattern for selecting multiple files.
    • \n
    • label_paths: The path(s) to the label data used for training.\nCan either be multiple 2D images or volumetric data.
    • \n
    • label_key: The key for accessing the label data. Internal filepath for hdf5-like input\nor a glob pattern for selecting multiple files.
    • \n
    • patch_shape: The shape for training patches.
    • \n
    • with_segmentation_decoder: Whether to train with additional segmentation decoder.
    • \n
    • with_channels: Whether the image data has RGB channels.
    • \n
    • sampler: A sampler to reject batches according to a given criterion.
    • \n
    • n_samples: The number of samples for this dataset.
    • \n
    • is_train: Whether this dataset is used for training or validation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The dataset.

    \n
    \n", "signature": "(\traw_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\traw_key: Optional[str],\tlabel_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\tlabel_key: Optional[str],\tpatch_shape: Tuple[int],\twith_segmentation_decoder: bool,\twith_channels: bool = False,\tsampler=None,\tn_samples: Optional[int] = None,\tis_train: bool = True,\t**kwargs) -> torch.utils.data.dataset.Dataset:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_loader", "modulename": "micro_sam.training.training", "qualname": "default_sam_loader", "kind": "function", "doc": "

    \n", "signature": "(**kwargs) -> torch.utils.data.dataloader.DataLoader:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.CONFIGURATIONS", "modulename": "micro_sam.training.training", "qualname": "CONFIGURATIONS", "kind": "variable", "doc": "

    Best training configurations for given hardware resources.

    \n", "default_value": "{'Minimal': {'model_type': 'vit_t', 'n_objects_per_batch': 4, 'n_sub_iteration': 4}, 'CPU': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'gtx1080': {'model_type': 'vit_t', 'n_objects_per_batch': 5}, 'rtx5000': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'V100': {'model_type': 'vit_b'}, 'A100': {'model_type': 'vit_h'}}"}, {"fullname": "micro_sam.training.training.train_sam_for_configuration", "modulename": "micro_sam.training.training", "qualname": "train_sam_for_configuration", "kind": "function", "doc": "

    Run training for a SAM model with the configuration for a given hardware resource.

    \n\n

    Selects the best training settings for the given configuration.\nThe available configurations are listed in CONFIGURATIONS.

    \n\n
    Arguments:
    \n\n
      \n
• name: The name of the model to be trained.\nThe checkpoint and logs will have this name.
    • \n
    • configuration: The configuration (= name of hardware resource).
    • \n
    • train_loader: The dataloader for training.
    • \n
    • val_loader: The dataloader for validation.
    • \n
    • checkpoint_path: Path to checkpoint for initializing the SAM model.
    • \n
    • with_segmentation_decoder: Whether to train additional UNETR decoder\nfor automatic instance segmentation.
    • \n
• kwargs: Additional keyword parameters that will be passed to train_sam.
    • \n
    \n", "signature": "(\tname: str,\tconfiguration: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.util", "modulename": "micro_sam.training.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.identity", "modulename": "micro_sam.training.util", "qualname": "identity", "kind": "function", "doc": "

    Identity transformation.

    \n\n

    This is a helper function to skip data normalization when finetuning SAM.\nData normalization is performed within the model and should thus be skipped as\na preprocessing step in training.

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.require_8bit", "modulename": "micro_sam.training.util", "qualname": "require_8bit", "kind": "function", "doc": "

    Transformation to require 8bit input data range (0-255).

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.get_trainable_sam_model", "modulename": "micro_sam.training.util", "qualname": "get_trainable_sam_model", "kind": "function", "doc": "

    Get the trainable sam model.

    \n\n
    Arguments:
    \n\n
      \n
    • model_type: The segment anything model that should be finetuned.\nThe weights of this model will be used for initialization, unless a\ncustom weight file is passed via checkpoint_path.
    • \n
    • device: The device to use for training.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the model weights.
    • \n
• freeze: Specify parts of the model that should be frozen, namely: image_encoder, prompt_encoder and mask_decoder.\nBy default nothing is frozen and the full model is updated.
    • \n
    • return_state: Whether to return the full checkpoint state.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The trainable segment anything model.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tfreeze: Optional[List[str]] = None,\treturn_state: bool = False) -> micro_sam.training.trainable_sam.TrainableSAM:", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs", "kind": "class", "doc": "

    Convert outputs of data loader to the expected batched inputs of the SegmentAnything model.

    \n\n
    Arguments:
    \n\n
      \n
    • transform: The transformation to resize the prompts. Should be the same transform used in the\nmodel to resize the inputs. If None the prompts will not be resized.
    • \n
    • dilation_strength: The dilation factor.\nIt determines a \"safety\" border from which prompts are not sampled to avoid ambiguous prompts\ndue to imprecise groundtruth masks.
    • \n
    • box_distortion_factor: Factor for distorting the box annotations derived from the groundtruth masks.
    • \n
    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.__init__", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.__init__", "kind": "function", "doc": "

    \n", "signature": "(\ttransform: Optional[segment_anything.utils.transforms.ResizeLongestSide],\tdilation_strength: int = 10,\tbox_distortion_factor: Optional[float] = None)"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.dilation_strength", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.transform", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.box_distortion_factor", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.box_distortion_factor", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, do_rescaling=True, padding='constant')"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.do_rescaling", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.do_rescaling", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, padding='constant', min_size=0)"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.min_size", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.min_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.util", "modulename": "micro_sam.util", "kind": "module", "doc": "

    Helper functions for downloading Segment Anything models and predicting image embeddings.

    \n"}, {"fullname": "micro_sam.util.get_cache_directory", "modulename": "micro_sam.util", "qualname": "get_cache_directory", "kind": "function", "doc": "

    Get micro-sam cache directory location.

    \n\n

    Users can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.microsam_cachedir", "modulename": "micro_sam.util", "qualname": "microsam_cachedir", "kind": "function", "doc": "

    Return the micro-sam cache directory.

    \n\n

    Returns the top level cache directory for micro-sam models and sample data.

    \n\n

    Every time this function is called, we check for any user updates made to\nthe MICROSAM_CACHEDIR os environment variable since the last time.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.models", "modulename": "micro_sam.util", "qualname": "models", "kind": "function", "doc": "

    Return the segmentation models registry.

    \n\n

    We recreate the model registry every time this function is called,\nso any user changes to the default micro-sam cache directory location\nare respected.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.util.get_device", "modulename": "micro_sam.util", "qualname": "get_device", "kind": "function", "doc": "

    Get the torch device.

    \n\n

If no device is passed the default device for your system is used.\nOtherwise it is checked whether the passed device is supported.

    \n\n
    Arguments:
    \n\n
      \n
    • device: The input device.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The device.

    \n
    \n", "signature": "(\tdevice: Union[str, torch.device, NoneType] = None) -> Union[str, torch.device]:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_sam_model", "modulename": "micro_sam.util", "qualname": "get_sam_model", "kind": "function", "doc": "

    Get the SegmentAnything Predictor.

    \n\n

This function will download the required model or load it from the cached weight file.\nThe location of the cache can be changed by setting the environment variable: MICROSAM_CACHEDIR.\nThe name of the requested model can be set via model_type.\nSee https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models\nfor an overview of the available models.

    \n\n

    Alternatively this function can also load a model from weights stored in a local filepath.\nThe corresponding file path is given via checkpoint_path. In this case model_type\nmust be given as the matching encoder architecture, e.g. \"vit_b\" if the weights are for\na SAM model with vit_b encoder.

    \n\n

By default the models are downloaded to a folder named 'micro_sam/models'\ninside your default cache directory, e.g.:\n * Mac: ~/Library/Caches/\n * Unix: ~/.cache/ or the value of the XDG_CACHE_HOME environment variable, if defined.\n * Windows: C:\Users\<user>\AppData\Local\<AppAuthor>\<AppName>\Cache

    \n\n\n\n
    Arguments:
    \n\n
      \n
• model_type: The SegmentAnything model to use. Will use the standard vit_l model by default.\nTo get a list of all available model names you can call get_model_names.
    • \n
    • device: The device for the model. If none is given will use GPU if available.
    • \n
    • checkpoint_path: The path to a file with weights that should be used instead of using the\nweights corresponding to model_type. If given, model_type must match the architecture\ncorresponding to the weight file. E.g. if you use weights for SAM with vit_b encoder\nthen model_type must be given as \"vit_b\".
    • \n
    • return_sam: Return the sam model object as well as the predictor.
    • \n
    • return_state: Return the unpickled checkpoint state.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The segment anything predictor.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\treturn_sam: bool = False,\treturn_state: bool = False) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.export_custom_sam_model", "modulename": "micro_sam.util", "qualname": "export_custom_sam_model", "kind": "function", "doc": "

    Export a finetuned segment anything model to the standard model format.

    \n\n

    The exported model can be used by the interactive annotation tools in micro_sam.annotator.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint_path: The path to the corresponding checkpoint if not in the default model folder.
    • \n
    • model_type: The SegmentAnything model type corresponding to the checkpoint (vit_h, vit_b, vit_l or vit_t).
    • \n
    • save_path: Where to save the exported model.
    • \n
    \n", "signature": "(\tcheckpoint_path: Union[str, os.PathLike],\tmodel_type: str,\tsave_path: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_model_names", "modulename": "micro_sam.util", "qualname": "get_model_names", "kind": "function", "doc": "

    \n", "signature": "() -> Iterable:", "funcdef": "def"}, {"fullname": "micro_sam.util.precompute_image_embeddings", "modulename": "micro_sam.util", "qualname": "precompute_image_embeddings", "kind": "function", "doc": "

    Compute the image embeddings (output of the encoder) for the input.

    \n\n

    If 'save_path' is given the embeddings will be loaded/saved in a zarr container.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • input_: The input data. Can be 2 or 3 dimensional, corresponding to an image, volume or timeseries.
    • \n
    • save_path: Path to save the embeddings in a zarr container.
    • \n
    • lazy_loading: Whether to load all embeddings into memory or return an\nobject to load them on demand when required. This only has an effect if 'save_path' is given\nand if the input is 3 dimensional.
    • \n
    • ndim: The dimensionality of the data. If not given will be deduced from the input data.
    • \n
    • tile_shape: Shape of tiles for tiled prediction. By default prediction is run without tiling.
    • \n
    • halo: Overlap of the tiles for tiled prediction.
    • \n
    • verbose: Whether to be verbose in the computation.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread,\nwhich enables using this function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The image embeddings.

    \n
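    \n\n

    Example (a minimal sketch; the image and save path are placeholders):

    \n\n
    \n
    import imageio.v3 as imageio\nfrom micro_sam.util import get_sam_model, precompute_image_embeddings\n\nimage = imageio.imread('my_image.tif')  # placeholder path\npredictor = get_sam_model(model_type='vit_b_lm')\n# Compute the embeddings and cache them in a zarr container for later re-use.\nimage_embeddings = precompute_image_embeddings(predictor, image, save_path='embeddings.zarr')\n
    \n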
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tinput_: numpy.ndarray,\tsave_path: Union[str, os.PathLike, NoneType] = None,\tlazy_loading: bool = False,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.util.set_precomputed", "modulename": "micro_sam.util", "qualname": "set_precomputed", "kind": "function", "doc": "

    Set the precomputed image embeddings for a predictor.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • image_embeddings: The precomputed image embeddings computed by precompute_image_embeddings.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • tile_id: Index for the tile. This is required if the embeddings are tiled.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The predictor with set features.

    \n
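    \n\n

    Example (a minimal sketch with random placeholder data):

    \n\n
    \n
    import numpy as np\nfrom micro_sam.util import get_sam_model, precompute_image_embeddings, set_precomputed\n\nvolume = np.random.randint(0, 255, (16, 512, 512), dtype='uint8')  # placeholder 3d data\npredictor = get_sam_model(model_type='vit_b')\nimage_embeddings = precompute_image_embeddings(predictor, volume, ndim=3)\n# Set the embeddings for slice 8 before running prompt-based segmentation on that slice.\npredictor = set_precomputed(predictor, image_embeddings, i=8)\n
    \n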
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\ti: Optional[int] = None,\ttile_id: Optional[int] = None) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.compute_iou", "modulename": "micro_sam.util", "qualname": "compute_iou", "kind": "function", "doc": "

    Compute the intersection over union of two masks.

    \n\n
    Arguments:
    \n\n
      \n
    • mask1: The first mask.
    • \n
    • mask2: The second mask.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The intersection over union of the two masks.

    \n
    \n", "signature": "(mask1: numpy.ndarray, mask2: numpy.ndarray) -> float:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_centers_and_bounding_boxes", "modulename": "micro_sam.util", "qualname": "get_centers_and_bounding_boxes", "kind": "function", "doc": "

    Returns the center coordinates and bounding boxes of the foreground instances in the ground-truth.

    \n\n
    Arguments:
    \n\n
      \n
    • segmentation: The segmentation.
    • \n
    • mode: Determines the functionality used for computing the centers.
    • \n
    • If 'v', the object's eccentricity centers computed by vigra are used.
    • \n
    • If 'p' the object's centroids computed by skimage are used.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    A dictionary that maps object ids to the corresponding centroid.\n A dictionary that maps object_ids to the corresponding bounding box.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tmode: str = 'v') -> Tuple[Dict[int, numpy.ndarray], Dict[int, tuple]]:", "funcdef": "def"}, {"fullname": "micro_sam.util.load_image_data", "modulename": "micro_sam.util", "qualname": "load_image_data", "kind": "function", "doc": "

    Helper function to load image data from file.

    \n\n
    Arguments:
    \n\n
      \n
    • path: The filepath to the image data.
    • \n
    • key: The internal filepath for complex data formats like hdf5.
    • \n
    • lazy_loading: Whether to lazily load the data. Only supported for n5 and zarr data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The image data.

    \n
    \n", "signature": "(\tpath: str,\tkey: Optional[str] = None,\tlazy_loading: bool = False) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.util.segmentation_to_one_hot", "modulename": "micro_sam.util", "qualname": "segmentation_to_one_hot", "kind": "function", "doc": "

    Convert the segmentation to one-hot encoded masks.

    \n\n
    Arguments:
    \n\n
      \n
    • segmentation: The segmentation.
    • \n
    • segmentation_ids: Optional subset of ids that will be used to subsample the masks.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The one-hot encoded masks.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tsegmentation_ids: Optional[numpy.ndarray] = None) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_block_shape", "modulename": "micro_sam.util", "qualname": "get_block_shape", "kind": "function", "doc": "

    Get a suitable block shape for chunking a given shape.

    \n\n

    The primary use for this is determining chunk sizes for\nzarr arrays or block shapes for parallelization.

    \n\n
    Arguments:
    \n\n
      \n
    • shape: The image or volume shape.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The block shape.

    \n
    \n", "signature": "(shape: Tuple[int]) -> Tuple[int]:", "funcdef": "def"}, {"fullname": "micro_sam.visualization", "modulename": "micro_sam.visualization", "kind": "module", "doc": "

    Functionality for visualizing image embeddings.

    \n"}, {"fullname": "micro_sam.visualization.compute_pca", "modulename": "micro_sam.visualization", "qualname": "compute_pca", "kind": "function", "doc": "

    Compute the pca projection of the embeddings to visualize them as RGB image.

    \n\n
    Arguments:
    \n\n
      \n
    • embeddings: The embeddings. For example predicted by the SAM image encoder.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    PCA of the embeddings, mapped to the pixels.

    \n
    \n", "signature": "(embeddings: numpy.ndarray) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.visualization.project_embeddings_for_visualization", "modulename": "micro_sam.visualization", "qualname": "project_embeddings_for_visualization", "kind": "function", "doc": "

    Project image embeddings to pixel-wise PCA.

    \n\n
    Arguments:
    \n\n
      \n
    • image_embeddings: The image embeddings.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The PCA of the embeddings.\n The scale factor for resizing to the original image size.

    \n
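    \n\n

    Example (a minimal sketch with random placeholder data):

    \n\n
    \n
    import numpy as np\nfrom micro_sam.util import get_sam_model, precompute_image_embeddings\nfrom micro_sam.visualization import project_embeddings_for_visualization\n\nimage = np.random.randint(0, 255, (512, 512), dtype='uint8')  # placeholder 2d image\npredictor = get_sam_model(model_type='vit_b')\nimage_embeddings = precompute_image_embeddings(predictor, image)\nembedding_pca, scale_factor = project_embeddings_for_visualization(image_embeddings)\n
    \n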
    \n", "signature": "(\timage_embeddings: Dict[str, Any]) -> Tuple[numpy.ndarray, Tuple[float, ...]]:", "funcdef": "def"}]; + /** pdoc search index */const docs = [{"fullname": "micro_sam", "modulename": "micro_sam", "kind": "module", "doc": "

    Segment Anything for Microscopy

    \n\n

    Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of Segment Anything by Meta AI and specializes it for microscopy and other biomedical imaging data.\nIts core components are:

    \n\n
      \n
    • The micro_sam tools for interactive data annotation, built as napari plugin.
    • \n
    • The micro_sam library to apply Segment Anything to 2d and 3d data or fine-tune it on your data.
    • \n
    • The micro_sam models that are fine-tuned on publicly available microscopy data and that are available on BioImage.IO.
    • \n
    \n\n

    Based on these components micro_sam enables fast interactive and automatic annotation for microscopy data, like interactive cell segmentation from bounding boxes:

    \n\n

    \"box-prompts\"

    \n\n

    micro_sam is now available as stable version 1.0 and we will not change its user interface significantly in the foreseeable future.\nWe are still working on improving and extending its functionality. The current roadmap includes:

    \n\n
      \n
    • Releasing more and better finetuned models for the biomedical imaging domain.
    • \n
    • Integrating parameter efficient training and compressed models for efficient fine-tuning and faster inference.
    • \n
    • Improving the 3D segmentation and tracking functionality.
    • \n
    \n\n

    If you run into any problems or have questions please open an issue or reach out via image.sc using the tag micro-sam.

    \n\n

    Quickstart

    \n\n

    You can install micro_sam via mamba:

    \n\n
    $ mamba install -c conda-forge micro_sam\n
    \n\n

    We also provide installers for Windows and Linux. For more details on the available installation options, check out the installation section.

    \n\n

    After installing micro_sam you can start napari and select the annotation tool you want to use from Plugins -> SegmentAnything for Microscopy. Check out the quickstart tutorial video for a short introduction and the annotation tool section for details.

    \n\n

    The micro_sam python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    It is explained in more detail here.

    \n\n

    We provide different finetuned models for microscopy that can be used within our tools or any other tool that supports Segment Anything. See finetuned models for details on the available models.\nYou can also train models on your own data, see here for details.

    \n\n

    Citation

    \n\n

    If you are using micro_sam in your research please cite

    \n\n\n\n

    Installation

    \n\n

    There are three ways to install micro_sam:

    \n\n
      \n
    • From mamba is the recommended way if you want to use all functionality.
    • \n
    • From source for setting up a development environment to use the latest version and to change and contribute to our software.
    • \n
    • From installer to install it without having to use mamba (supported platforms: Windows and Linux, supports only CPU).
    • \n
    \n\n

    You can find more information on the installation and how to troubleshoot it in the FAQ section.

    \n\n

    From mamba

    \n\n

    mamba is a drop-in replacement for conda, but much faster.\nWhile the steps below may also work with conda, we highly recommend using mamba.\nYou can follow the instructions here to install mamba.

    \n\n

    IMPORTANT: Make sure to avoid installing anything in the base environment.

    \n\n

    micro_sam can be installed in an existing environment via:

    \n\n
    \n
    $ mamba install -c conda-forge micro_sam\n
    \n
    \n\n

    or you can create a new environment (here called micro-sam) via:

    \n\n
    \n
    $ mamba create -c conda-forge -n micro-sam micro_sam\n
    \n
    \n\n

    if you want to use the GPU you need to install PyTorch from the pytorch channel instead of conda-forge. For example:

    \n\n
    \n
    $ mamba create -c pytorch -c nvidia -c conda-forge micro_sam pytorch pytorch-cuda=12.1\n
    \n
    \n\n

    You may need to change this command to install the correct CUDA version for your system, see https://pytorch.org/ for details.

    \n\n

    From source

    \n\n

    To install micro_sam from source, we recommend to first set up an environment with the necessary requirements:

    \n\n\n\n

    To create one of these environments and install micro_sam into it follow these steps

    \n\n
      \n
    1. Clone the repository:
    2. \n
    \n\n
    \n
    $ git clone https://github.com/computational-cell-analytics/micro-sam\n
    \n
    \n\n
      \n
    1. Enter it:
    2. \n
    \n\n
    \n
    $ cd micro-sam\n
    \n
    \n\n
      \n
    1. Create the GPU or CPU environment:
    2. \n
    \n\n
    \n
    $ mamba env create -f <ENV_FILE>.yaml\n
    \n
    \n\n
      \n
    1. Activate the environment:
    2. \n
    \n\n
    \n
    $ mamba activate sam\n
    \n
    \n\n
      \n
    1. Install micro_sam:
    2. \n
    \n\n
    \n
    $ pip install -e .\n
    \n
    \n\n

    From installer

    \n\n

    We also provide installers for Linux and Windows:

    \n\n\n\n

    The installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam via mamba instead. They will also not enable using the python library.

    \n\n

    Linux Installer:

    \n\n

    To use the installer:

    \n\n
      \n
    • Unpack the zip file you have downloaded.
    • \n
    • Make the installer executable: $ chmod +x micro_sam-1.0.0post0-Linux-x86_64.sh
    • \n
    • Run the installer: ./micro_sam-1.0.0post0-Linux-x86_64.sh \n
        \n
      • You can select where to install micro_sam during the installation. By default it will be installed in $HOME/micro_sam.
      • \n
      • The installer will unpack all micro_sam files to the installation directory.
      • \n
    • \n
    • After the installation you can start the annotator with the command .../micro_sam/bin/napari.\n
        \n
      • Proceed with the steps described in Annotation Tools
      • \n
      • To make it easier to run the annotation tool you can add .../micro_sam/bin to your PATH or set a softlink to .../micro_sam/bin/napari.
      • \n
    • \n
    \n\n

    Windows Installer:

    \n\n
      \n
    • Unpack the zip file you have downloaded.
    • \n
    • Run the installer by double clicking on it.
    • \n
    • Choose installation type: Just Me(recommended) or All Users(requires admin privileges).
    • \n
    • Choose installation path. By default it will be installed in C:\\Users\\<Username>\\micro_sam for Just Me installation or in C:\\ProgramData\\micro_sam for All Users.\n
        \n
      • The installer will unpack all micro_sam files to the installation directory.
      • \n
    • \n
    • After the installation you can start the annotator by double clicking on .\\micro_sam\\Scripts\\micro_sam.annotator.exe or with the command .\\micro_sam\\Scripts\\napari.exe from the Command Prompt.
    • \n
    • Proceed with the steps described in Annotation Tools
    • \n
    \n\n

    \n\n

    Annotation Tools

    \n\n

    micro_sam provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.\nSee an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation\nof mitochondria in volume EM (middle) and interactive tracking of cells (right).

    \n\n

    \n\n

    \n\n

    The annotation tools can be started from the napari plugin menu, the command line or from python scripts.\nThey are built as a napari plugin and make use of existing napari functionality wherever possible. If you are not familiar with napari, we recommend starting here.\nThe micro_sam tools mainly use the point layer, shape layer and label layer.

    \n\n

    The annotation tools are explained in detail below. We also provide video tutorials.

    \n\n

    The annotation tools can be started from the napari plugin menu:\n

    \n\n

    You can find additional information on the annotation tools in the FAQ section.

    \n\n

    Annotator 2D

    \n\n

    The 2d annotator can be started by

    \n\n
      \n
    • clicking Annotator 2d in the plugin menu.
    • \n
    • running $ micro_sam.annotator_2d in the command line.
    • \n
    • calling micro_sam.sam_annotator.annotator_2d in a python script. Check out examples/annotator_2d.py for details and the sketch after this list.
    • \n
    \n\n
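
    The sketch below shows how this could look in a python script (the image path is a placeholder and we assume the function accepts the image as its first argument; see examples/annotator_2d.py for the authoritative usage):

    \n\n
    \n
    import imageio.v3 as imageio\nfrom micro_sam.sam_annotator import annotator_2d\n\nimage = imageio.imread('my_image.tif')  # placeholder path\nannotator_2d(image, model_type='vit_b_lm')  # opens napari with the 2d annotator\n
    \n
    \n\n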

    The user interface of the 2d annotator looks like this:

    \n\n

    \n\n

    It contains the following elements:

    \n\n
      \n
    1. The napari layers for the segmentations and prompts:\n
        \n
      • prompts: shape layer that is used to provide box prompts to Segment Anything. Prompts can be given as rectangle (marked as box prompt in the image), ellipse or polygon.
      • \n
      • point_prompts: point layer that is used to provide point prompts to Segment Anything. Positive prompts (green points) for marking the object you want to segment, negative prompts (red points) for marking the outside of the object.
      • \n
      • committed_objects: label layer with the objects that have already been segmented.
      • \n
      • auto_segmentation: label layer with the results from automatic instance segmentation.
      • \n
      • current_object: label layer for the object(s) you're currently segmenting.
      • \n
    2. \n
    3. The embedding menu. For selecting the image to process, the Segment Anything model that is used and computing its image embeddings. The Embedding Settings contain advanced settings for loading cached embeddings from file or for using tiled embeddings.
    4. \n
    5. The prompt menu for changing whether the currently selected point is a positive or a negative prompt. This can also be done by pressing T.
    6. \n
    7. The menu for interactive segmentation. Clicking Segment Object (or pressing S) will run segmentation for the current prompts. The result is displayed in current_object. Activating batched enables segmentation of multiple objects with point prompts. In this case one object will be segmented per positive prompt.
    8. \n
    9. The menu for automatic segmentation. Clicking Automatic Segmentation will segment all objects in the image. The results will be displayed in the auto_segmentation layer. We support two different methods for automatic segmentation: automatic mask generation (supported for all models) and instance segmentation with an additional decoder (only supported for our models).\nChanging the parameters under Automatic Segmentation Settings controls the segmentation results; check the tooltips for details.
    10. \n
    11. The menu for committing the segmentation. When clicking Commit (or pressing C) the result from the selected layer (either current_object or auto_segmentation) will be transferred from the respective layer to committed_objects.\nWhen commit_path is given the results will automatically be saved there.
    12. \n
    13. The menu for clearing the current annotations. Clicking Clear Annotations (or pressing Shift + C) will clear the current annotations and the current segmentation.
    14. \n
    \n\n

    Point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time, unless the batched mode is activated. With box prompts you can segment several objects at once, both in the normal and batched mode.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use this tool.

    \n\n

    Annotator 3D

    \n\n

    The 3d annotator can be started by

    \n\n
      \n
    • clicking Annotator 3d in the plugin menu.
    • \n
    • running $ micro_sam.annotator_3d in the command line.
    • \n
    • calling micro_sam.sam_annotator.annotator_3d in a python script. Check out examples/annotator_3d.py for details.
    • \n
    \n\n

    The user interface of the 3d annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the segmentations and prompts.
    2. \n
    3. The embedding menu.
    4. \n
    5. The prompt menu.
    6. \n
    7. The menu for interactive segmentation in the current slice.
    8. \n
    9. The menu for interactive 3d segmentation. Clicking Segment All Slices (or pressing Shift + S) will extend the segmentation of the current object across the volume by projecting prompts across slices. The parameters for prompt projection can be set in Segmentation Settings, please refer to the tooltips for details.
    10. \n
    11. The menu for automatic segmentation. The overall functionality is the same as for the 2d annotator. To segment the full volume Apply to Volume needs to be checked, otherwise only the current slice will be segmented. Note that 3D segmentation can take quite long without a GPU.
    12. \n
    13. The menu for committing the current object.
    14. \n
    15. The menu for clearing the current annotations. If all slices is set all annotations will be cleared, otherwise they are only cleared for the current slice.
    16. \n
    \n\n

    You can only segment one object at a time using the interactive segmentation functionality with this tool.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use this tool.

    \n\n

    Annotator Tracking

    \n\n

    The tracking annotator can be started by

    \n\n
      \n
    • clicking Annotator Tracking in the plugin menu.
    • \n
    • running $ micro_sam.annotator_tracking in the command line.
    • \n
    • calling micro_sam.sam_annotator.annotator_tracking in a python script. Check out examples/annotator_tracking.py for details.
    • \n
    \n\n

    The user interface of the tracking annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the segmentations and prompts. Same as for the 2d segmentation application but without the auto_segmentation layer.
    2. \n
    3. The embedding menu.
    4. \n
    5. The prompt menu.
    6. \n
    7. The menu with tracking settings: track_state is used to indicate that the object you are tracking is dividing in the current frame. track_id is used to select which of the tracks after division you are following.
    8. \n
    9. The menu for interactive segmentation in the current frame.
    10. \n
    11. The menu for interactive tracking. Click Track Object (or press Shift + S) to segment the current object across time.
    12. \n
    13. The menu for committing the current tracking result.
    14. \n
    15. The menu for clearing the current annotations.
    16. \n
    \n\n

    The tracking annotator only supports 2d image data; volumetric data is not supported. We also do not support automatic tracking yet.

    \n\n

    Check out the video tutorial (coming soon!) for an in-depth explanation on how to use this tool.

    \n\n

    Image Series Annotator

    \n\n

    The image series annotation tool enables running the 2d annotator or 3d annotator for multiple images that are saved in a folder. This makes it convenient to annotate many images without having to restart the tool for every image. It can be started by

    \n\n
      \n
    • clicking Image Series Annotator in the plugin menu.
    • \n
    • running $ micro_sam.image_series_annotator in the command line.
    • \n
    • calling micro_sam.sam_annotator.image_series_annotator in a python script. Check out examples/image_series_annotator.py for details.
    • \n
    \n\n

    When starting this tool via the plugin menu the following interface opens:

    \n\n

    \n\n

    You can select the folder where your images are saved with Input Folder. The annotation results will be saved in Output Folder.\nYou can specify a rule for loading only a subset of images via pattern, for example *.tif to only load tif images. Set is_volumetric if the data you want to annotate is 3d. The rest of the options are settings for the image embedding computation and are the same as for the embedding menu (see above).\nOnce you click Annotate Images the images from the folder you have specified will be loaded and the annotation tool is started for them.

    \n\n

    This menu will not open if you start the image series annotator from the command line or via python. In this case the input folder and other settings are passed as parameters instead.

    \n\n

    Check out the video tutorial (coming soon!) for an in-depth explanation on how to use the image series annotator.

    \n\n

    Finetuning UI

    \n\n

    We also provide a graphical interface for fine-tuning models on your own data. It can be started by clicking Finetuning in the plugin menu.

    \n\n

    Note: if you know a bit of python programming we recommend to use a script for model finetuning instead. This will give you more options to configure the training. See these instructions for details.

    \n\n

    When starting this tool via the plugin menu the following interface opens:

    \n\n

    \n\n

    You can select the image data via Path to images. You can either load images from a folder or select a single image file. By providing Image data key you can either provide a pattern for selecting files from the folder or provide an internal filepath for HDF5, Zarr or similar fileformats.

    \n\n

    You can select the label data via Path to labels and Label data key, following the same logic as for the image data. The label masks are expected to have the same size as the image data. You can for example use annotations created with one of the micro_sam annotation tools for this, they are stored in the correct format. See the FAQ for more details on the expected label data.

    \n\n

    The Configuration option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Details on the configurations can be found here.

    \n\n

    Using the Python Library

    \n\n

    The python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    This library extends the Segment Anything library and

    \n\n
      \n
    • implements functions to apply Segment Anything to 2d and 3d data in micro_sam.prompt_based_segmentation.
    • \n
    • provides improved automatic instance segmentation functionality in micro_sam.instance_segmentation.
    • \n
    • implements training functionality that can be used for finetuning Segment Anything on your own data in micro_sam.training.
    • \n
    • provides functionality for quantitative and qualitative evaluation of Segment Anything models in micro_sam.evaluation.
    • \n
    \n\n

    You can import these sub-modules via

    \n\n
    \n
    import micro_sam.prompt_based_segmentation\nimport micro_sam.instance_segmentation\n# etc.\n
    \n
    \n\n

    This functionality is used to implement the interactive annotation tools in micro_sam.sam_annotator and can be used as a standalone python library.\nWe provide jupyter notebooks that demonstrate how to use it here. You can find the full library documentation by scrolling to the end of this page.
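
    \n\n

    The following is a minimal end-to-end sketch of the library usage, combining the utility functions documented further below (the image path is a placeholder):

    \n\n
    \n
    import imageio.v3 as imageio\nfrom micro_sam.util import get_sam_model, precompute_image_embeddings, set_precomputed\n\nimage = imageio.imread('my_image.tif')  # placeholder path\npredictor = get_sam_model(model_type='vit_b_lm')\nimage_embeddings = precompute_image_embeddings(predictor, image, save_path='embeddings.zarr')\npredictor = set_precomputed(predictor, image_embeddings)\n# The predictor can now be used for segmentation,\n# e.g. via micro_sam.prompt_based_segmentation or micro_sam.instance_segmentation.\n
    \n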

    \n\n

    Training your Own Model

    \n\n

    We reimplement the training logic described in the Segment Anything publication to enable finetuning on custom data.\nWe use this functionality to provide the finetuned microscopy models and it can also be used to train models on your own data.\nIn fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.\nSo a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune it on the annotated data.\nWe recommend checking out our latest preprint for details on how much data is required for finetuning Segment Anything.

    \n\n

    The training logic is implemented in micro_sam.training and is based on torch-em. Check out the finetuning notebook to see how to use it.\nWe also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster.\nThe notebook explains how to train it together with the rest of SAM and how to then use it.
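
    \n\n

    The following is a rough training sketch. It assumes that micro_sam.training.train_sam accepts roughly these arguments; please refer to the finetuning notebook and the micro_sam.training documentation for the authoritative usage:

    \n\n
    \n
    import micro_sam.training as sam_training\n\n# train_loader and val_loader are PyTorch dataloaders that return image patches and instance labels,\n# e.g. created with torch-em (see the FAQ on dataloaders below).\nsam_training.train_sam(\n    name='sam_finetuned',            # name of the checkpoint (placeholder)\n    model_type='vit_b',              # which SAM backbone to finetune\n    train_loader=train_loader,\n    val_loader=val_loader,\n    n_epochs=25,                     # assumption: number of training epochs\n    with_segmentation_decoder=True,  # assumption: also train the additional instance segmentation decoder\n)\n
    \n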

    \n\n

    More advanced examples, including quantitative and qualitative evaluation, can be found in the finetuning directory, which contains the code for training and evaluating our models. You can find further information on model training in the FAQ section.

    \n\n

    TODO put table with resources here

    \n\n

    Finetuned Models

    \n\n

    In addition to the original Segment Anything models, we provide models that are finetuned on microscopy data.\nThe additional models are available in the BioImage.IO Model Zoo and are also hosted on Zenodo.

    \n\n

    We currently offer the following models:

    \n\n
      \n
    • vit_h: Default Segment Anything model with ViT Huge backbone.
    • \n
    • vit_l: Default Segment Anything model with ViT Large backbone.
    • \n
    • vit_b: Default Segment Anything model with ViT Base backbone.
    • \n
    • vit_t: Segment Anything model with ViT Tiny backbone. From the Mobile SAM publication.
    • \n
    • vit_l_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Large backbone. (Zenodo) (idealistic-rat on BioImage.IO)
    • \n
    • vit_b_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Base backbone. (Zenodo) (diplomatic-bug on BioImage.IO)
    • \n
    • vit_t_lm: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Tiny backbone. (Zenodo) (faithful-chicken on BioImage.IO)
    • \n
    • vit_l_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Large backbone. (Zenodo) (humorous-crab on BioImage.IO)
    • \n
    • vit_b_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Base backbone. (Zenodo) (noisy-ox on BioImage.IO)
    • \n
    • vit_t_em_organelles: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Tiny backbone. (Zenodo) (greedy-whale on BioImage.IO)
    • \n
    \n\n

    The two figures below show the improvements achieved by the finetuned models for LM and EM data.

    \n\n

    \n\n

    \n\n

    You can select which model to use in the annotation tools by selecting the corresponding name in the Model: drop-down menu in the embedding menu:

    \n\n

    \n\n

    To use a specific model in the python library you need to pass the corresponding name as value to the model_type parameter exposed by all relevant functions.\nSee for example the 2d annotator example.
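
    \n\n

    For example, with get_sam_model from micro_sam.util:

    \n\n
    \n
    from micro_sam.util import get_sam_model\n\n# Use the finetuned light microscopy model with ViT Base backbone.\npredictor = get_sam_model(model_type='vit_b_lm')\n
    \n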

    \n\n

    Choosing a Model

    \n\n

    As a rule of thumb:

    \n\n
      \n
    • Use the vit_l_lm or vit_b_lm model for segmenting cells or nuclei in light microscopy. The larger model (vit_l_lm) yields a bit better segmentation quality, especially for automatic segmentation, but needs more computational resources.
    • \n
    • Use the vit_l_em_organelles or vit_b_em_organelles models for segmenting mitochondria, nuclei or other roundish organelles in electron microscopy.
    • \n
    • For other use-cases use one of the default models.
    • \n
    • The vit_t_... models run much faster than the other models, but yield inferior quality for many applications. It can still make sense to try them for your use-case if you're working on a laptop and want to annotate many images or volumetric data.
    • \n
    \n\n

    See also the figures above for examples where the finetuned models work better than the default models.\nWe are working on further improving these models and adding new models for other biomedical imaging domains.

    \n\n

    Other Models

    \n\n

    Previous versions of our models are available on Zenodo:

    \n\n
      \n
    • vit_b_em_boundaries: for segmenting compartments delineated by boundaries such as cells or neurites in EM.
    • \n
    • vit_b_em_organelles: for segmenting mitochondria, nuclei or other organelles in EM.
    • \n
    • vit_b_lm: for segmenting cells and nuclei in LM.
    • \n
    • vit_h_em: for general EM segmentation.
    • \n
    • vit_h_lm: for general LM segmentation.
    • \n
    \n\n

    We do not recommend using these models, since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.

    \n\n

    We also provide additional models that were used for experiments in our publication on zenodo:

    \n\n\n\n

    FAQ

    \n\n

    Here we provide frequently asked questions and common issues.\nIf you encounter a problem or question not addressed here feel free to open an issue or to ask your question on image.sc with the tag micro-sam.

    \n\n

    Installation questions

    \n\n

    1. How to install micro_sam?

    \n\n

    The installation for micro_sam is supported in three ways: from mamba (recommended), from source and from installers. Check out our tutorial video to get started with micro_sam, briefly walking you through the installation process and how to start the tool.

    \n\n

    2. I cannot install micro_sam using the installer, I am getting some errors.

    \n\n

    The installer should work out-of-the-box on Windows and Linux platforms. Please open an issue to report the error you encounter.

    \n\n
    \n

    NOTE: The installers enable using micro_sam without mamba or conda. However, we recommend the installation from mamba / from source to use all its features seamlessly. Specifically, the installers currently only support the CPU and won't enable you to use the GPU (if you have one).

    \n
    \n\n

    3. What is the minimum system requirement for micro_sam?

    \n\n

    From our experience, the micro_sam annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.\nYou might encounter some slowness for $\\leq$ 8GB RAM. The resources micro_sam's annotation tools have been tested on are:

    \n\n
      \n
    • Windows:\n
        \n
      • Windows 10 Pro, Intel i5 7th Gen, 8GB RAM
      • \n
      • Windows 10 Enterprise LTSC, Intel i7 13th Gen, 32GB RAM
      • \n
      • Windows 10 Pro for Workstations, Intel Xeon W-2295, 128GB RAM
      • \n
    • \n
    \n\n
      \n
    • Linux:

      \n\n
        \n
      • Ubuntu 20.04, Intel i7 11th Gen, 32GB RAM
      • \n
      • Ubuntu 22.04, Intel i7 12th Gen, 32GB RAM
      • \n
    • \n
    • Mac:

      \n\n
        \n
      • macOS Sonoma 14.4.1\n
          \n
        • M1 Chip, 8GB RAM
        • \n
        • M3 Max Chip, 36GB RAM
        • \n
      • \n
    • \n
    \n\n

    Having a GPU will significantly speed up the annotation tools and especially the model finetuning.

    \n\n

    4. What is the recommended PyTorch version?

    \n\n

    micro_sam has been tested mostly with CUDA 12.1 and PyTorch [2.1.1, 2.2.0]. However, the tool and the library is not constrained to a specific PyTorch or CUDA version. So it should work fine with the standard PyTorch installation for your system.

    \n\n

    5. I am missing a few packages (eg. ModuleNotFoundError: No module named 'elf.io). What should I do?

    \n\n

    With the latest release 1.0.0, the installation from mamba and source should take care of this and install all the relevant packages for you.\nSo please reinstall micro_sam.

    \n\n

    6. Can I install micro_sam using pip?

    \n\n

    The installation is not supported via pip.

    \n\n

    7. I get the following error: importError: cannot import name 'UNETR' from 'torch_em.model'.

    \n\n

    It's possible that you have an older version of torch-em installed. Similar errors are often raised by other libraries as well, the reasons being: a) outdated packages are installed, or b) a non-existent module is being called. If the error originates from micro_sam, then a) is the most likely reason. We recommend installing the latest version following the installation instructions.

    \n\n

    Usage questions

    \n\n

    \n\n

    1. I have some microscopy images. Can I use the annotator tool for segmenting them?

    \n\n

    Yes, you can use the annotator tool for:

    \n\n
      \n
    • Segmenting objects in 2d images (using automatic and/or interactive segmentation).
    • \n
    • Segmenting objects in 3d volumes (using automatic and/or interactive segmentation for the entire object(s)).
    • \n
    • Tracking objects over time in time-series data.
    • \n
    • Segmenting objects in a series of 2d / 3d images.
    • \n
    • In addition, you can finetune the Segment Anything / micro_sam models on your own microscopy data, in case the provided models do not meet your needs. One caveat: you need to annotate a few objects beforehand (micro_sam can already improve interactive segmentation with only a few annotated objects) to proceed with the supervised finetuning procedure.
    • \n
    \n\n

    2. Which model should I use for my data?

    \n\n

    We currently provide three different kinds of models: the default models vit_h, vit_l, vit_b and vit_t; the models for light microscopy vit_l_lm, vit_b_lm and vit_t_lm; the models for electron microscopy vit_l_em_organelles, vit_b_em_organelles and vit_t_em_organelles.\nYou should first try the model that best fits the segmentation task you're interested in, the lm model for cell or nucleus segmentation in light microscopy or the em_organelles model for segmenting nuclei, mitochondria or other roundish organelles in electron microscopy.\nIf your segmentation problem does not meet these descriptions, or if these models don't work well, you should try one of the default models instead.\nThe letter after vit denotes the size of the image encoder in SAM, h (huge) being the largest and t (tiny) the smallest. The smaller models are faster but may yield worse results. We recommend using either a vit_l or vit_b model, as they offer the best trade-off between speed and segmentation quality.\nYou can find more information on model choice here.

    \n\n

    3. I have high-resolution microscopy images, micro_sam does not seem to work.

    \n\n

    The Segment Anything model expects inputs of shape 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, or sometimes not work at all. To address this, micro_sam implements tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles. You can activate tiling with the tile_shape parameter, which determines the size of the inner tile and halo, which determines the size of the additional overlap.

    \n\n
      \n
    • If you are using the micro_sam annotation tools, you can specify the values for the tile_shape and halo via the tile_x, tile_y, halo_x and halo_y parameters in the Embedding Settings drop-down menu.
    • \n
    • If you are using the micro_sam library in a python script, you can pass them as tuples, e.g. tile_shape=(1024, 1024), halo=(256, 256). See also the wholeslide annotator example.
    • \n
    • If you are using the command line functionality, you can pass them via the options --tile_shape 1024 1024 --halo 256 256.
    • \n
    \n\n
    \n

    NOTE: It's recommended to choose the halo so that it is larger than half of the maximal radius of the objects you want to segment.

    \n
    \n\n
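
    As a concrete illustration of the tiling parameters in a python script (a sketch using precompute_image_embeddings from micro_sam.util; the image path is a placeholder):

    \n\n
    \n
    import imageio.v3 as imageio\nfrom micro_sam.util import get_sam_model, precompute_image_embeddings\n\nimage = imageio.imread('large_image.tif')  # placeholder for a high-resolution image\npredictor = get_sam_model(model_type='vit_b_lm')\n# Run the embedding computation tile by tile, with an overlap (halo) between the tiles.\nimage_embeddings = precompute_image_embeddings(\n    predictor, image, save_path='embeddings.zarr', tile_shape=(1024, 1024), halo=(256, 256)\n)\n
    \n
    \n\n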

    4. The computation of image embeddings takes very long in napari.

    \n\n

    micro_sam pre-computes the image embeddings produced by the vision transformer backbone in Segment Anything, and (optionally) stores them on disc. If you are using a CPU, this step can take a while for 3d data or time-series (you will see a progress bar in the command-line interface / on the bottom right of napari). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them over to your laptop / local machine to speed this up.

    \n\n
      \n
    • You can use the command micro_sam.precompute_embeddings for this (it is installed with the rest of the software). You can specify the location of the pre-computed embeddings via the embedding_path argument.
    • \n
    • You can cache the computed embedding in the napari tool (to avoid recomputing the embeddings again) by passing the path to store the embeddings in the embeddings_save_path option in the Embedding Settings drop-down. You can later load the pre-computed image embeddings by entering the path to the stored embeddings there as well.
    • \n
    \n\n

    5. Can I use micro_sam on a CPU?

    \n\n

    Most processing steps are very fast even on a CPU. However, the automatic segmentation step for the default Segment Anything models (typically called the \"Segment Anything\" feature or AMG - Automatic Mask Generation) takes several minutes without a GPU (depending on the image size). For large volumes and time-series, segmenting an object interactively in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).

    \n\n
    \n

    HINT: All the tutorial videos have been created on CPU resources.

    \n
    \n\n

    6. I generated some segmentations from another tool, can I use it as a starting point in micro_sam?

    \n\n

    You can save and load the results from the committed_objects layer to correct segmentations you obtained from another tool (e.g. CellPose) or save intermediate annotation results. The results can be saved via File -> Save Selected Layers (s) ... in the napari menu-bar on top (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result parameter in the CLI or python script (2d and 3d segmentation).\nIf you are using an annotation tool you can load the segmentation you want to edit as segmentation layer and rename it to committed_objects.

    \n\n

    7. I am using micro_sam for segmenting objects. I would like to report the steps for reproducibility. How can this be done?

    \n\n

    The annotation steps and segmentation results can be saved to a Zarr file by providing the commit_path in the commit widget. This file will contain all relevant information to reproduce the segmentation.

    \n\n
    \n

    NOTE: This feature is still under development and we have not implemented rerunning the segmentation from this file yet. See this issue for details.

    \n
    \n\n

    8. I want to segment objects with complex structures. Both the default Segment Anything models and the micro_sam generalist models do not work for my data. What should I do?

    \n\n

    micro_sam supports interactive annotation using positive and negative point prompts, box prompts and polygon drawing. You can combine multiple types of prompts to improve the segmentation quality. In case the aforementioned suggestions do not work as desired, micro_sam also supports finetuning a model on your data (see the next section on finetuning). We recommend the following: a) check which of the provided models performs relatively well on your data, then b) choose the best model as the starting point to train your own specialist model for the desired segmentation task.

    \n\n

    9. I am using the annotation tool and napari outputs the following error: While emitting signal ... an error occurred in callback ... This is not a bug in psygnal. See ... above for details.

    \n\n

    These messages occur when an internal error happens in micro_sam. In most cases this is due to inconsistent annotations and you can fix them by clearing the annotations.\nWe want to remove these errors, so we would be very grateful if you can open an issue and describe the steps you did when encountering it.

    \n\n

    10. The objects are not segmented in my 3d data using the interactive annotation tool.

    \n\n

    The first thing to check is: a) make sure you are using the latest version of micro_sam (pull the latest commit from master if your installation is from source, or update the installation from conda / mamba using mamba update micro_sam), and b) try out the steps from the 3d annotator tutorial video to verify if this shows the same behaviour (or the same errors) as you faced. For 3d images, it's important to pass the inputs in the python axis convention, ZYX.\nc) try using a different model and change the projection mode for 3d segmentation. This is also explained in the video.

    \n\n

    11. I have very small or fine-grained structures in my high-resolution microscopic images. Can I use micro_sam to annotate them?

    \n\n

    Segment Anything does not work well for very small or fine-grained objects (e.g. filaments). In these cases, you could try to use tiling to improve results (see Point 3 above for details).

    \n\n

    12. napari seems to be very slow for large images.

    \n\n

    Editing (drawing / erasing) very large 2d images or 3d volumes is known to be slow at the moment, as the objects in the layers are stored in-memory. See the related issue.

    \n\n

    13. While computing the embeddings (and / or automatic segmentation), a window stating: \"napari\" is not responding pops up.

    \n\n

    This can happen for long running computations. You just need to wait a bit longer and the computation will finish.

    \n\n

    Fine-tuning questions

    \n\n

    1. I have a microscopy dataset I would like to fine-tune Segment Anything for. Is it possible using 'micro_sam'?

    \n\n

    Yes, you can fine-tune Segment Anything on your own dataset. Here's how you can do it:

    \n\n
      \n
    • Check out the tutorial notebook on how to fine-tune Segment Anything with our micro_sam.training library.
    • \n
    • Or check the examples for additional scripts that demonstrate finetuning.
    • \n
    • If you are not familiar with coding in python at all then you can also use the graphical interface for finetuning. But we recommend using a script for more flexibility and reproducibility.
    • \n
    \n\n

    2. I would like to fine-tune Segment Anything on open-source cloud services (e.g. Kaggle Notebooks), is it possible?

    \n\n

    Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and BAND). Check out our tutorial notebook for this.

    \n\n

    3. What kind of annotations do I need to finetune Segment Anything?

    \n\n

    TODO: explain instance segmentation labels, that you can get them by annotation with micro_sam, and dense vs. sparse annotation (for training without / with decoder)

    \n\n

    4. I have finetuned Segment Anything on my microscopy data. How can I use it for annotating new images?

    \n\n

    You can load your finetuned model by entering the path to its checkpoint in the custom_weights_path field in the Embedding Settings drop-down menu.\nIf you are using the python library or CLI you can specify this path with the checkpoint_path parameter.

    \n\n

    5. What is the background of the new AIS (Automatic Instance Segmentation) feature in micro_sam?

    \n\n

    micro_sam introduces a new segmentation decoder to the Segment Anything backbone to enable faster and more accurate automatic instance segmentation. It predicts the distances to the object center and boundary as well as the foreground, and performs seeded watershed-based postprocessing on these predictions to obtain the instances.\n
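
    \n\n

    The following is only a conceptual sketch of such a distance-based watershed postprocessing, with random stand-ins for the predicted maps; it is not micro_sam's actual implementation:

    \n\n
    \n
    import numpy as np\nfrom scipy import ndimage\nfrom skimage.segmentation import watershed\n\n# Dummy stand-ins for the maps predicted by the decoder:\n# foreground probability, normalized distance to the object center, distance to the object boundary.\nforeground = np.random.rand(256, 256)\ncenter_distances = np.random.rand(256, 256)\nboundary_distances = np.random.rand(256, 256)\n\nforeground_mask = foreground > 0.5\n# Seeds where the predicted distance to the object center is small.\nseeds, _ = ndimage.label((center_distances < 0.3) & foreground_mask)\n# Grow the seeds towards the object boundaries, restricted to the foreground.\ninstances = watershed(boundary_distances, markers=seeds, mask=foreground_mask)\n
    \n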

    \n\n

    6. I have a NVIDIA RTX 4090Ti GPU with 24GB VRAM. Can I finetune Segment Anything?

    \n\n

    Finetuning Segment Anything is possible on most consumer-grade GPU and CPU resources (but training is a lot slower on the CPU). For the mentioned resource, it should be possible to finetune a ViT Base (also abbreviated as vit_b) by reducing the number of objects per image to 15.\nThis parameter has the biggest impact on the VRAM consumption and the quality of the finetuned model.\nYou can find an overview of the resources we have tested for finetuning here.\nWe also provide the convenience function micro_sam.training.train_sam_for_configuration that selects the best training settings for a given configuration. This function is also used by the finetuning UI.

    \n\n

    7. I want to create a dataloader for my data, for finetuning Segment Anything.

    \n\n

    Thanks to torch-em, a) creating PyTorch datasets and dataloaders using the python library is convenient and supported for various data formats and data structures.\nSee the tutorial notebook on how to create dataloaders using torch-em and the documentation for details on creating your own datasets and dataloaders; and b) finetuning using the napari tool eases the aforementioned process, by allowing you to enter the input parameters (path to the directory for inputs and labels etc.) directly in the tool. A minimal dataloader sketch is shown below the note.

    \n\n
    \n

    NOTE: If you have images with large input shapes and a sparse density of instance segmentations, we recommend using a sampler to choose patches with valid segmentations for finetuning (see the example for the PlantSeg (Root) specialist model in micro_sam).

    \n
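    \n\n

    As a minimal sketch of creating such a dataloader with torch-em (paths, keys and patch shape are placeholders; please check the torch-em documentation for the exact options):

    \n\n
    \n
    import torch_em\nfrom torch_em.data import MinInstanceSampler\n\ntrain_loader = torch_em.default_segmentation_loader(\n    raw_paths='data/images',       # placeholder folder with training images\n    raw_key='*.tif',               # glob pattern (or internal key for container formats)\n    label_paths='data/labels',     # placeholder folder with instance segmentation labels\n    label_key='*.tif',\n    patch_shape=(512, 512),        # size of the patches used for training\n    batch_size=1,\n    is_seg_dataset=False,          # assumption: treat the inputs as individual image files\n    sampler=MinInstanceSampler(),  # only sample patches that contain labeled instances\n)\n
    \n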
    \n\n

    8. How can I evaluate a model I have finetuned?

    \n\n

    TODO: move the content of https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/bioimageio/validation.md here.

    \n\n

    Contribution Guide

    \n\n\n\n

    Discuss your ideas

    \n\n

    We welcome new contributions! First, discuss your idea by opening a new issue in micro-sam.\nThis allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

    \n\n

    Clone the repository

    \n\n

    We use git for version control.

    \n\n

    Clone the repository, and checkout the development branch:

    \n\n
    \n
    $ git clone https://github.com/computational-cell-analytics/micro-sam.git\n$ cd micro-sam\n$ git checkout dev\n
    \n
    \n\n

    Create your development environment

    \n\n

    We use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.

    \n\n

    Now you can create the environment, install user and developer dependencies, and micro-sam as an editable installation:

    \n\n
    \n
    $ mamba env create -f environment_gpu.yaml\n$ mamba activate sam\n$ python -m pip install -r requirements-dev.txt\n$ python -m pip install -e .\n
    \n
    \n\n

    Make your changes

    \n\n

    Now it's time to make your code changes.

    \n\n

    Typically, changes are made branching off from the development branch. Checkout dev and then create a new branch to work on your changes.

    \n\n
    $ git checkout dev\n$ git checkout -b my-new-feature\n
    \n\n

    We use google style python docstrings to create documentation for all new code.
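
    \n\n

    For example, a short google style docstring for a hypothetical helper function looks like this:

    \n\n
    \n
    def compute_overlap(mask_a, mask_b):\n    \"\"\"Compute the overlap between two binary masks.\n\n    Args:\n        mask_a: The first mask.\n        mask_b: The second mask.\n\n    Returns:\n        The number of overlapping pixels.\n    \"\"\"\n    return int((mask_a & mask_b).sum())\n
    \n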

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Testing

    \n\n

    Run the tests

    \n\n

    The tests for micro-sam are run with pytest

    \n\n

    To run the tests:

    \n\n
    \n
    $ pytest\n
    \n
    \n\n

    Writing your own tests

    \n\n

    If you have written new code, you will need to write tests to go with it.

    \n\n

    Unit tests

    \n\n

    Unit tests are the preferred style of tests for user contributions. Unit tests check small, isolated parts of the code for correctness. If your code is too complicated to write unit tests easily, you may need to consider breaking it up into smaller functions that are easier to test.

    \n\n

    Tests involving napari

    \n\n

    In cases where tests must use the napari viewer, these tips might be helpful (in particular, the make_napari_viewer_proxy fixture).

    \n\n

    These kinds of tests should be used only in limited circumstances. Developers are advised to prefer smaller unit tests, and avoid integration tests wherever possible.

    \n\n

    Code coverage

    \n\n

    Pytest uses the pytest-cov plugin to automatically determine which lines of code are covered by tests.

    \n\n

    A short summary report is printed to the terminal output whenever you run pytest. The full results are also automatically written to a file named coverage.xml.

    \n\n

    The Coverage Gutters VSCode extension is useful for visualizing which parts of the code need better test coverage. PyCharm professional has a similar feature, and you may be able to find similar tools for your preferred editor.

    \n\n

    We also use codecov.io to display the code coverage results from our Github Actions continuous integration.

    \n\n

    Open a pull request

    \n\n

    Once you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.

    \n\n

    Remember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev branch like this.

    \n\n

    Optional: Build the documentation

    \n\n

    We use pdoc to build the documentation.

    \n\n

    To build the documentation locally, run this command:

    \n\n
    \n
    $ python build_doc.py\n
    \n
    \n\n

    This will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).

    \n\n

    If you want to save the HTML files, append --out to the command, like this:

    \n\n
    \n
    $ python build_doc.py --out\n
    \n
    \n\n

    This will save the HTML files into a new directory named tmp.

    \n\n

    You can add content to the documentation in two ways:

    \n\n
      \n
    1. By adding or updating google style python docstrings in the micro-sam code.\n
        \n
      • pdoc will automatically find and include docstrings in the documentation.
      • \n
    2. \n
    3. By adding or editing markdown files in the micro-sam doc directory.\n
        \n
      • If you add a new markdown file to the documentation, you must tell pdoc that it exists by adding a line to the micro_sam/__init__.py module docstring (eg: .. include:: ../doc/my_amazing_new_docs_page.md). Otherwise it will not be included in the final documentation build!
      • \n
    4. \n
    \n\n

    Optional: Benchmark performance

    \n\n

    There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.

    \n\n\n\n

    Run the benchmark script

    \n\n

    There is a performance benchmark script available in the micro-sam repository at development/benchmark.py.

    \n\n

    To run the benchmark script:

    \n\n
    \n
    $ python development/benchmark.py --model_type vit_t --device cpu\n
    \n
    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    \n
    $ python development/benchmark.py --help\n
    \n
    \n\n

    Line profiling

    \n\n

    For more detailed line by line performance results, we can use line-profiler.

    \n\n
    \n

    line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

    \n
    \n\n

    To do line-by-line profiling:

    \n\n
      \n
    1. Ensure you have line profiler installed: python -m pip install line_profiler
    2. \n
    3. Add @profile decorator to any function in the call stack
    4. \n
    5. Run kernprof -lv benchmark.py --model_type vit_t --device cpu
    6. \n
    \n\n

    For more details about how to use line-profiler and kernprof, see the documentation.

    \n\n


    Snakeviz visualization

    \n\n

    For more detailed visualizations of profiling results, we use snakeviz.

    \n\n
    \n

    SnakeViz is a browser based graphical viewer for the output of Python\u2019s cProfile module.

    \n
    \n\n
      \n
    1. Ensure you have snakeviz installed: python -m pip install snakeviz
    2. \n
    3. Generate profile file: python -m cProfile -o program.prof benchmark.py --model_type vit_h --device cpu
    4. \n
    5. Visualize profile file: snakeviz program.prof
    6. \n
    \n\n

    For more details about how to use snakeviz, see the documentation.

    \n\n

    Memory profiling with memray

    \n\n

    If you need to investigate memory use specifically, we use memray.

    \n\n
    \n

    Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.

    \n
    \n\n

    For more details about how to use memray, see the documentation.

    \n\n

    Creating a new release

    \n\n

    To create a new release you have to edit the version number in micro_sam/__version__.py in a PR. After merging this PR the release will automatically be done by the CI.

    \n\n

    Using micro_sam on BAND

    \n\n

    BAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam is installed there.\nIn order to use BAND and start micro_sam on it, follow these steps:

    \n\n

    Start BAND

    \n\n
      \n
    • Go to https://band.embl.de/ and click Login. If you have not used BAND before, you will need to register for BAND. Currently, you can only sign up via a Google account.
    • \n
    • Launch a BAND desktop with sufficient resources. It's particularly important to select a GPU. The settings from the image below are a good choice.
    • \n
    • Go to the desktop by clicking GO TO DESKTOP in the Running Desktops menu. See also the screenshot below.
    • \n
    \n\n

    \"image\"

    \n\n

    Start micro_sam in BAND

    \n\n
      \n
    • Select Applications -> Image Analysis -> uSAM (see screenshot)\n\"image\"
    • \n
    • This will open the micro_sam menu, where you can select the tool you want to use (see screenshot). Note: this may take a few minutes.\n\"image\"
    • \n
    • For testing if the tool works, it's best to use the 2d annotator first.\n
        \n
      • You can find an example image to use here: /scratch/cajal-connectomics/hela-2d-image.png. Select it via Select image. (see screenshot)\n\"image\"
      • \n
    • \n
    • Then press 2d annotator and the tool will start.
    • \n
    \n\n

    Transferring data to BAND

    \n\n

    To copy data to and from BAND you can use any cloud storage, e.g. ownCloud, Dropbox or Google Drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit differently in BAND:

    \n\n
      \n
    • To copy text into BAND you first need to copy it on your computer (e.g. via selecting it + Ctrl + C).
    • \n
    • Then go to the browser window with BAND and press Ctrl + Shift + Alt. This will open a side window where you can paste your text via Ctrl + V.
    • \n
    • Then select the text in this window and copy it via Ctrl + C.
    • \n
    • Now you can close the side window via Ctrl + Shift + Alt and paste the text in BAND via Ctrl + V.
    • \n
    \n\n

    The video below shows how to copy over a link from ownCloud and then download the data on BAND using copy and paste:

    \n\n

    https://github.com/computational-cell-analytics/micro-sam/assets/4263537/825bf86e-017e-41fc-9e42-995d21203287

    \n"}, {"fullname": "micro_sam.bioimageio", "modulename": "micro_sam.bioimageio", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export", "modulename": "micro_sam.bioimageio.model_export", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export.DEFAULTS", "modulename": "micro_sam.bioimageio.model_export", "qualname": "DEFAULTS", "kind": "variable", "doc": "

    \n", "default_value": "{'authors': [Author(affiliation='University Goettingen', email=None, orcid=None, name='Anwai Archit', github_user='anwai98'), Author(affiliation='University Goettingen', email=None, orcid=None, name='Constantin Pape', github_user='constantinpape')], 'description': 'Finetuned Segment Anything Model for Microscopy', 'cite': [CiteEntry(text='Archit et al. Segment Anything for Microscopy', doi='10.1101/2023.08.21.554208', url=None)], 'tags': ['segment-anything', 'instance-segmentation']}"}, {"fullname": "micro_sam.bioimageio.model_export.export_sam_model", "modulename": "micro_sam.bioimageio.model_export", "qualname": "export_sam_model", "kind": "function", "doc": "

    Export SAM model to BioImage.IO model format.

    \n\n

    The exported model can be uploaded to bioimage.io and\nbe used in tools that support the BioImage.IO model format.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The image for generating test data.
    • \n
    • label_image: The segmentation corresponding to image.\nIt is used to derive prompt inputs for the model.
    • \n
    • model_type: The type of the SAM model.
    • \n
    • name: The name of the exported model.
    • \n
    • output_path: Where the exported model is saved.
    • \n
    • checkpoint_path: Optional checkpoint for loading the SAM model.
    • \n
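    A minimal usage sketch; the file names below are hypothetical placeholders, and the model name and output path are arbitrary choices:

    import imageio.v3 as imageio
    from micro_sam.bioimageio.model_export import export_sam_model

    # Hypothetical example data: an image and its instance segmentation.
    image = imageio.imread("example_image.tif")
    label_image = imageio.imread("example_labels.tif")

    export_sam_model(
        image=image,
        label_image=label_image,
        model_type="vit_b",
        name="sam-microscopy-export",
        output_path="./exported_model.zip",
    )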
    \n", "signature": "(\timage: numpy.ndarray,\tlabel_image: numpy.ndarray,\tmodel_type: str,\tname: str,\toutput_path: Union[str, os.PathLike],\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor", "kind": "class", "doc": "

    Wrapper around the SamPredictor.

    \n\n

    This model supports the same functionality as SamPredictor and can provide mask segmentations\nfrom box, point or mask input prompts.

    \n\n
    Arguments:
    \n\n
      \n
    • model_type: The type of the model for the image encoder.\nCan be one of 'vit_b', 'vit_l', 'vit_h' or 'vit_t'.\nFor 'vit_t' support the 'mobile_sam' package has to be installed.
    • \n
    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.__init__", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(model_type: str)"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.sam", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.load_state_dict", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.load_state_dict", "kind": "function", "doc": "

    Copies parameters and buffers from state_dict into\nthis module and its descendants. If strict is True, then\nthe keys of state_dict must exactly match the keys returned\nby this module's ~torch.nn.Module.state_dict() function.

    \n\n
    \n\n

    If assign is True the optimizer must be created after\nthe call to load_state_dict.

    \n\n
    \n\n
    Arguments:
    \n\n
      \n
    • state_dict (dict): a dict containing parameters and\npersistent buffers.
    • \n
    • strict (bool, optional): whether to strictly enforce that the keys\nin state_dict match the keys returned by this module's\n~torch.nn.Module.state_dict() function. Default: True
    • \n
    • assign (bool, optional): whether to assign items in the state\ndictionary to their corresponding keys in the module instead\nof copying them inplace into the module's current parameters and buffers.\nWhen False, the properties of the tensors in the current\nmodule are preserved while when True, the properties of the\nTensors in the state dict are preserved.\nDefault: False
    • \n
    \n\n
    Returns:
    \n\n
    \n

    NamedTuple with missing_keys and unexpected_keys fields:\n * missing_keys is a list of str containing the missing keys\n * unexpected_keys is a list of str containing the unexpected keys

    \n
    \n\n
    Note:
    \n\n
    \n

    If a parameter or buffer is registered as None and its corresponding key\n exists in state_dict, load_state_dict() will raise a\n RuntimeError.

    \n
    \n", "signature": "(self, state):", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.forward", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.forward", "kind": "function", "doc": "
    Arguments:
    \n\n
      \n
    • image: torch inputs of dimensions B x C x H x W
    • \n
    • box_prompts: box coordinates of dimensions B x OBJECTS x 4
    • \n
    • point_prompts: point coordinates of dimension B x OBJECTS x POINTS x 2
    • \n
    • point_labels: point labels of dimension B x OBJECTS x POINTS
    • \n
    • mask_prompts: mask prompts of dimension B x OBJECTS x 256 x 256
    • \n
    • embeddings: precomputed image embeddings B x 256 x 64 x 64
    • \n
    \n\n

    Returns:

    \n", "signature": "(\tself,\timage: torch.Tensor,\tbox_prompts: Optional[torch.Tensor] = None,\tpoint_prompts: Optional[torch.Tensor] = None,\tpoint_labels: Optional[torch.Tensor] = None,\tmask_prompts: Optional[torch.Tensor] = None,\tembeddings: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation", "modulename": "micro_sam.evaluation", "kind": "module", "doc": "

    Functionality for evaluating Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation", "modulename": "micro_sam.evaluation.evaluation", "kind": "module", "doc": "

    Evaluation functionality for segmentation predictions from micro_sam.evaluation.automatic_mask_generation\nand micro_sam.evaluation.inference.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation", "kind": "function", "doc": "

    Run evaluation for instance segmentation predictions.

    \n\n
    Arguments:
    \n\n
      \n
    • gt_paths: The list of paths to ground-truth images.
    • \n
    • prediction_paths: The list of paths with the instance segmentations to evaluate.
    • \n
    • save_path: Optional path for saving the results.
    • \n
    • verbose: Whether to print the progress.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
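    A minimal usage sketch, under the assumption that predictions and ground-truth images are matched by their position in the lists; the file paths are hypothetical placeholders:

    from micro_sam.evaluation.evaluation import run_evaluation

    gt_paths = ["ground_truth/image_00.tif", "ground_truth/image_01.tif"]
    prediction_paths = ["predictions/image_00.tif", "predictions/image_01.tif"]

    results = run_evaluation(gt_paths, prediction_paths, save_path="evaluation_results.csv")
    print(results)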
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_paths: List[Union[str, os.PathLike]],\tsave_path: Union[str, os.PathLike, NoneType] = None,\tverbose: bool = True) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation_for_iterative_prompting", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation_for_iterative_prompting", "kind": "function", "doc": "

    Run evaluation for iterative prompt-based segmentation predictions.

    \n\n
    Arguments:
    \n\n
      \n
    • gt_paths: The list of paths to ground-truth images.
    • \n
    • prediction_root: The folder with the iterative prompt-based instance segmentations to evaluate.
    • \n
    • experiment_folder: The folder where all the experiment results are stored.
    • \n
    • start_with_box_prompt: Whether to evaluate on experiments with iterative prompting starting with box.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_root: Union[os.PathLike, str],\texperiment_folder: Union[os.PathLike, str],\tstart_with_box_prompt: bool = False,\toverwrite_results: bool = False) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments", "modulename": "micro_sam.evaluation.experiments", "kind": "module", "doc": "

    Predefined experiment settings for experiments with different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.experiments.ExperimentSetting", "modulename": "micro_sam.evaluation.experiments", "qualname": "ExperimentSetting", "kind": "variable", "doc": "

    \n", "default_value": "typing.Dict"}, {"fullname": "micro_sam.evaluation.experiments.full_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "full_experiment_settings", "kind": "function", "doc": "

    The full experiment settings.

    \n\n
    Arguments:
    \n\n
      \n
    • use_boxes: Whether to run the experiments with or without boxes.
    • \n
    • positive_range: The different numbers of positive points that will be used.\nBy default the values are set to [1, 2, 4, 8, 16].
    • \n
    • negative_range: The different numbers of negative points that will be used.\nBy default the values are set to [0, 1, 2, 4, 8, 16].
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "(\tuse_boxes: bool = False,\tpositive_range: Optional[List[int]] = None,\tnegative_range: Optional[List[int]] = None) -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.default_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "default_experiment_settings", "kind": "function", "doc": "

    The three default experiment settings.

    \n\n

    For the default experiments we use a single positive prompt,\ntwo positive and four negative prompts and box prompts.

    \n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "() -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.get_experiment_setting_name", "modulename": "micro_sam.evaluation.experiments", "qualname": "get_experiment_setting_name", "kind": "function", "doc": "

    Get the name for the given experiment setting.

    \n\n
    Arguments:
    \n\n
      \n
    • setting: The experiment setting.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The name for this experiment setting.

    \n
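    For example, the following sketch prints the setting name for each of the three default experiment settings; it needs no model or data:

    from micro_sam.evaluation.experiments import default_experiment_settings, get_experiment_setting_name

    for setting in default_experiment_settings():
        print(get_experiment_setting_name(setting))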
    \n", "signature": "(setting: Dict) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference", "modulename": "micro_sam.evaluation.inference", "kind": "module", "doc": "

    Inference with Segment Anything models and different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_embeddings", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_embeddings", "kind": "function", "doc": "

    Precompute all image embeddings.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • image_paths: The image file paths.
    • \n
    • embedding_dir: The directory where the embeddings will be saved.
    • \n
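    A minimal usage sketch; the image paths are hypothetical placeholders and get_sam_model is assumed to download or load the chosen model:

    from micro_sam.util import get_sam_model
    from micro_sam.evaluation.inference import precompute_all_embeddings

    predictor = get_sam_model(model_type="vit_b")
    image_paths = ["images/image_00.tif", "images/image_01.tif"]
    precompute_all_embeddings(predictor, image_paths, embedding_dir="./embeddings")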
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_prompts", "kind": "function", "doc": "

    Precompute all point prompts.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n
      \n
    • gt_paths: The file paths to the ground-truth segmentations.
    • \n
    • prompt_save_dir: The directory where the prompt files will be saved.
    • \n
    • prompt_settings: The settings for which the prompts will be computed.
    • \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprompt_save_dir: Union[str, os.PathLike],\tprompt_settings: List[Dict[str, Any]]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_prompts", "kind": "function", "doc": "

    Run segment anything inference for multiple images using prompts derived from ground-truth.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • image_paths: The image file paths.
    • \n
    • gt_paths: The ground-truth segmentation file paths.
    • \n
    • embedding_dir: The directory where the image embeddings will be saved or are already saved.
    • \n
    • use_points: Whether to use point prompts.
    • \n
    • use_boxes: Whether to use box prompts.
    • \n
    • n_positives: The number of positive point prompts that will be sampled.
    • \n
    • n_negatives: The number of negative point prompts that will be sampled.
    • \n
    • dilation: The dilation factor for the radius around the ground-truth object\naround which points will not be sampled.
    • \n
    • prompt_save_dir: The directory where point prompts will be saved or are already saved.\nThis enables running multiple experiments in a reproducible manner.
    • \n
    • batch_size: The batch size used for batched prediction.
    • \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: int,\tn_negatives: int,\tdilation: int = 5,\tprompt_save_dir: Union[str, os.PathLike, NoneType] = None,\tbatch_size: int = 512) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_iterative_prompting", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_iterative_prompting", "kind": "function", "doc": "

    Run segment anything inference for multiple images using prompts iteratively\n derived from model outputs and ground-truth.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • image_paths: The image file paths.
    • \n
    • gt_paths: The ground-truth segmentation file paths.
    • \n
    • embedding_dir: The directory where the image embeddings will be saved or are already saved.
    • \n
    • prediction_dir: The directory where the predictions from SegmentAnything will be saved per iteration.
    • \n
    • start_with_box_prompt: Whether to use the first prompt as bounding box or a single point.
    • \n
    • dilation: The dilation factor for the radius around the ground-truth object\naround which points will not be sampled.
    • \n
    • batch_size: The batch size used for batched predictions.
    • \n
    • n_iterations: The number of iterations for iterative prompting.
    • \n
    • use_masks: Whether to make use of logits from previous prompt-based segmentation.
    • \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tstart_with_box_prompt: bool,\tdilation: int = 5,\tbatch_size: int = 32,\tn_iterations: int = 8,\tuse_masks: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_amg", "modulename": "micro_sam.evaluation.inference", "qualname": "run_amg", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.inference", "qualname": "run_instance_segmentation_with_decoder", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation", "modulename": "micro_sam.evaluation.instance_segmentation", "kind": "module", "doc": "

    Inference and evaluation for the automatic instance segmentation functionality.

    \n"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_amg", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_amg", "kind": "function", "doc": "

    Default grid-search parameter for AMG-based instance segmentation.

    \n\n

    Return grid search values for the two most important parameters:

    \n\n
      \n
    • pred_iou_thresh, the threshold for keeping objects according to the IoU predicted by the model.
    • \n
    • stability_score_thresh, the threshold for keeping objects according to their stability.
    • \n
    \n\n
    Arguments:
    \n\n
      \n
    • iou_thresh_values: The values for pred_iou_thresh used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.
    • \n
    • stability_score_values: The values for stability_score_thresh used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Default grid-search parameter for decoder-based instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • center_distance_threshold_values: The values for center_distance_threshold used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.
    • \n
    • boundary_distance_threshold_values: The values for boundary_distance_threshold used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.
    • \n
    • distance_smoothing_values: The values for distance_smoothing used in the gridsearch.\nBy default values in the range from 1.0 to 2.0 with a stepsize of 0.1 will be used.
    • \n
    • min_size_values: The values for min_size used in the gridsearch.\nBy default the values 50, 100 and 200 are used.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for automatic mask generation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\n'grid_search_values' argument. For example, to run a grid search over the parameters 'pred_iou_thresh'\nand 'stability_score_thresh', you can pass the following:

    \n\n
    grid_search_values = {\n    \"pred_iou_thresh\": [0.6, 0.7, 0.8, 0.9],\n    \"stability_score_thresh\": [0.6, 0.7, 0.8, 0.9],\n}\n
    \n\n

    All combinations of the parameters will be checked.

    \n\n

    You can use the functions default_grid_search_values_instance_segmentation_with_decoder\nor default_grid_search_values_amg to get the default grid search parameters for the two\nrespective instance segmentation methods.

    \n\n
    Arguments:
    \n\n
      \n
    • segmenter: The class implementing the instance segmentation functionality.
    • \n
    • grid_search_values: The grid search values for parameters of the generate function.
    • \n
    • image_paths: The input images for the grid search.
    • \n
    • gt_paths: The ground-truth segmentation for the grid search.
    • \n
    • result_dir: Folder to cache the evaluation results per image.
    • \n
    • embedding_dir: Folder to cache the image embeddings.
    • \n
    • fixed_generate_kwargs: Fixed keyword arguments for the generate method of the segmenter.
    • \n
    • verbose_gs: Whether to run the grid-search for individual images in a verbose mode.
    • \n
    • image_key: Key for loading the image data from a more complex file format like HDF5.\nIf not given a simple image format like tif is assumed.
    • \n
    • gt_key: Key for loading the ground-truth data from a more complex file format like HDF5.\nIf not given a simple image format like tif is assumed.
    • \n
    • rois: Regions of interest to restrict the evaluation to.
    • \n
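    A sketch of combining this function with the default AMG grid-search values; the AMG instance, validation paths and output folders are hypothetical placeholders, and get_sam_model is assumed to download or load the chosen model:

    from micro_sam.util import get_sam_model
    from micro_sam.instance_segmentation import AutomaticMaskGenerator
    from micro_sam.evaluation.instance_segmentation import (
        default_grid_search_values_amg,
        run_instance_segmentation_grid_search,
    )

    predictor = get_sam_model(model_type="vit_b")
    amg = AutomaticMaskGenerator(predictor)

    # Hypothetical validation data and output folders.
    val_image_paths = ["val/images/im_00.tif", "val/images/im_01.tif"]
    val_gt_paths = ["val/labels/im_00.tif", "val/labels/im_01.tif"]

    run_instance_segmentation_grid_search(
        segmenter=amg,
        grid_search_values=default_grid_search_values_amg(),
        image_paths=val_image_paths,
        gt_paths=val_gt_paths,
        result_dir="./grid_search_results",
        embedding_dir="./embeddings",
    )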
    \n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tresult_dir: Union[str, os.PathLike],\tembedding_dir: Union[str, os.PathLike, NoneType],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = False,\timage_key: Optional[str] = None,\tgt_key: Optional[str] = None,\trois: Optional[Tuple[slice, ...]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_inference", "kind": "function", "doc": "

    Run inference for automatic mask generation.

    \n\n
    Arguments:
    \n\n
      \n
    • segmenter: The class implementing the instance segmentation functionality.
    • \n
    • image_paths: The input images.
    • \n
    • embedding_dir: Folder to cache the image embeddings.
    • \n
    • prediction_dir: Folder to save the predictions.
    • \n
    • generate_kwargs: The keyword arguments for the generate method of the segmenter.
    • \n
    \n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tgenerate_kwargs: Optional[Dict[str, Any]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.evaluate_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "evaluate_instance_segmentation_grid_search", "kind": "function", "doc": "

    Evaluate gridsearch results.

    \n\n
    Arguments:
    \n\n
      \n
    • result_dir: The folder with the gridsearch results.
    • \n
    • grid_search_parameters: The names for the gridsearch parameters.
    • \n
    • criterion: The metric to use for determining the best parameters.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The best parameter setting.\n The evaluation score for the best setting.

    \n
    \n", "signature": "(\tresult_dir: Union[str, os.PathLike],\tgrid_search_parameters: List[str],\tcriterion: str = 'mSA') -> Tuple[Dict[str, Any], float]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.save_grid_search_best_params", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "save_grid_search_best_params", "kind": "function", "doc": "

    \n", "signature": "(best_kwargs, best_msa, grid_search_result_dir=None):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search_and_inference", "kind": "function", "doc": "

    Run grid search and inference for automatic mask generation.

    \n\n

    Please refer to the documentation of run_instance_segmentation_grid_search\nfor details on how to specify the grid search parameters.

    \n\n
    Arguments:
    \n\n
      \n
    • segmenter: The class implementing the instance segmentation functionality.
    • \n
    • grid_search_values: The grid search values for parameters of the generate function.
    • \n
    • val_image_paths: The input images for the grid search.
    • \n
    • val_gt_paths: The ground-truth segmentation for the grid search.
    • \n
    • test_image_paths: The input images for inference.
    • \n
    • embedding_dir: Folder to cache the image embeddings.
    • \n
    • prediction_dir: Folder to save the predictions.
    • \n
    • result_dir: Folder to cache the evaluation results per image.
    • \n
    • fixed_generate_kwargs: Fixed keyword arguments for the generate method of the segmenter.
    • \n
    • verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    • \n
    \n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell", "modulename": "micro_sam.evaluation.livecell", "kind": "module", "doc": "

    Inference and evaluation for the LIVECell dataset and\nthe different cell lines contained in it.

    \n"}, {"fullname": "micro_sam.evaluation.livecell.CELL_TYPES", "modulename": "micro_sam.evaluation.livecell", "qualname": "CELL_TYPES", "kind": "variable", "doc": "

    \n", "default_value": "['A172', 'BT474', 'BV2', 'Huh7', 'MCF7', 'SHSY5Y', 'SkBr3', 'SKOV3']"}, {"fullname": "micro_sam.evaluation.livecell.livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "livecell_inference", "kind": "function", "doc": "

    Run inference for livecell with a fixed prompt setting.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • use_points: Whether to use point prompts.
    • \n
    • use_boxes: Whether to use box prompts.
    • \n
    • n_positives: The number of positive point prompts.
    • \n
    • n_negatives: The number of negative point prompts.
    • \n
    • prompt_folder: The folder where the prompts should be saved.
    • \n
    • predictor: The segment anything predictor.
    • \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: Optional[int] = None,\tn_negatives: Optional[int] = None,\tprompt_folder: Union[os.PathLike, str, NoneType] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_precompute_embeddings", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_precompute_embeddings", "kind": "function", "doc": "

    Run precomputation of val and test image embeddings for livecell.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • n_val_per_cell_type: The number of validation images per cell type.
    • \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tn_val_per_cell_type: int = 25) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_iterative_prompting", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_iterative_prompting", "kind": "function", "doc": "

    Run inference on livecell with iterative prompting setting.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • start_with_box_prompt: Whether to use the first prompt as bounding box or a single point.
    • \n
    • use_masks: Whether to make use of logits from previous prompt-based segmentation.
    • \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tstart_with_box: bool = False,\tuse_masks: bool = False) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_amg", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_amg", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • iou_thresh_values: The values for pred_iou_thresh used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.
    • \n
    • stability_score_values: The values for stability_score_thresh used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.
    • \n
    • verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    • \n
    • n_val_per_cell_type: The number of validation images per cell type.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint: The segment anything model checkpoint.
    • \n
    • input_folder: The folder with the livecell data.
    • \n
    • model_type: The type of the segment anything model.
    • \n
    • experiment_folder: The folder where to save all data associated with the experiment.
    • \n
    • center_distance_threshold_values: The values for center_distance_threshold used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.
    • \n
    • boundary_distance_threshold_values: The values for boundary_distance_threshold used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.
    • \n
    • distance_smoothing_values: The values for distance_smoothing used in the gridsearch.\nBy default values in the range from 1.0 to 2.0 with a stepsize of 0.1 will be used.
    • \n
    • min_size_values: The values for min_size used in the gridsearch.\nBy default the values 50, 100 and 200 are used.
    • \n
    • verbose_gs: Whether to run the gridsearch for individual images in a verbose mode.
    • \n
    • n_val_per_cell_type: The number of validation images per cell type.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_inference", "kind": "function", "doc": "

    Run LIVECell inference with the command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_evaluation", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_evaluation", "kind": "function", "doc": "

    Run LIVECell evaluation with the command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "kind": "module", "doc": "

    Functionality for qualitative comparison of Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.model_comparison.generate_data_for_model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "generate_data_for_model_comparison", "kind": "function", "doc": "

    Generate samples for qualitative model comparison.

    \n\n

    This precomputes the input for model_comparison and model_comparison_with_napari.

    \n\n
    Arguments:
    \n\n
      \n
    • loader: The torch dataloader from which samples are drawn.
    • \n
    • output_folder: The folder where the samples will be saved.
    • \n
    • model_type1: The first model to use for comparison.\nThe value needs to be a valid model_type for micro_sam.util.get_sam_model.
    • \n
    • model_type2: The second model to use for comparison.\nThe value needs to be a valid model_type for micro_sam.util.get_sam_model.
    • \n
    • n_samples: The number of samples to draw from the dataloader.
    • \n
    • checkpoint1: Optional checkpoint for the first model.
    • \n
    • checkpoint2: Optional checkpoint for the second model.
    • \n
    \n", "signature": "(\tloader: torch.utils.data.dataloader.DataLoader,\toutput_folder: Union[str, os.PathLike],\tmodel_type1: str,\tmodel_type2: str,\tn_samples: int,\tmodel_type3: Optional[str] = None,\tcheckpoint1: Union[str, os.PathLike, NoneType] = None,\tcheckpoint2: Union[str, os.PathLike, NoneType] = None,\tcheckpoint3: Union[str, os.PathLike, NoneType] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison", "kind": "function", "doc": "

    Create images for a qualitative model comparison.

    \n\n
    Arguments:
    \n\n
      \n
    • output_folder: The folder with the data precomputed by generate_data_for_model_comparison.
    • \n
    • n_images_per_sample: The number of images to generate per precomputed sample.
    • \n
    • min_size: The min size of ground-truth objects to take into account.
    • \n
    • plot_folder: The folder where to save the plots. If not given the plots will be displayed.
    • \n
    • point_radius: The radius of the point overlay.
    • \n
    • outline_dilation: The dilation factor of the outline overlay.
    • \n
    \n", "signature": "(\toutput_folder: Union[str, os.PathLike],\tn_images_per_sample: int,\tmin_size: int,\tplot_folder: Union[str, os.PathLike, NoneType] = None,\tpoint_radius: int = 4,\toutline_dilation: int = 0,\thave_model3=False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison_with_napari", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison_with_napari", "kind": "function", "doc": "

    Use napari to display the qualitative comparison results for two models.

    \n\n
    Arguments:
    \n\n
      \n
    • output_folder: The folder with the data precomputed by generate_data_for_model_comparison.
    • \n
    • show_points: Whether to show the results for point or for box prompts.
    • \n
    \n", "signature": "(output_folder: Union[str, os.PathLike], show_points: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.default_grid_search_values_multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "default_grid_search_values_multi_dimensional_segmentation", "kind": "function", "doc": "

    Default grid-search parameters for multi-dimensional prompt-based instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • iou_threshold_values: The values for iou_threshold used in the grid-search.\nBy default values in the range from 0.5 to 0.9 with a stepsize of 0.1 will be used.
    • \n
    • projection_method_values: The values for projection method used in the grid-search.\nBy default the values mask, bounding_box and points are used.
    • \n
    • box_extension_values: The values for box_extension used in the grid-search.\nBy default values in the range from 0 to 0.25 with a stepsize of 0.025 will be used.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_threshold_values: Optional[List[float]] = None,\tprojection_method_values: Union[str, dict, NoneType] = None,\tbox_extension_values: Union[float, int, NoneType] = None) -> Dict[str, List]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.segment_slices_from_ground_truth", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "segment_slices_from_ground_truth", "kind": "function", "doc": "

    Segment all objects in a volume by prompt-based segmentation in one slice per object.

    \n\n

    This function first segments each object in the respective specified slice using interactive\n(prompt-based) segmentation functionality. Then it segments the particular object in the\nremaining slices in the volume.

    \n\n
    Arguments:
    \n\n
      \n
    • volume: The input volume.
    • \n
    • ground_truth: The label volume with instance segmentations.
    • \n
    • model_type: Choice of segment anything model.
    • \n
    • checkpoint_path: Path to the model checkpoint.
    • \n
    • embedding_path: Path to cache the computed embeddings.
    • \n
    • iou_threshold: The criterion to decide whether to link the objects in the consecutive slice's segmentation.
    • \n
    • projection: The projection (prompting) method to generate prompts for consecutive slices.
    • \n
    • box_extension: Extension factor for increasing the box size after projection.
    • \n
    • device: The selected device for computation.
    • \n
    • interactive_seg_mode: Method for guiding prompt-based instance segmentation.
    • \n
    • verbose: Whether to get the trace for projected segmentations.
    • \n
    • return_segmentation: Whether to return the segmented volume.
    • \n
    • min_size: The minimal size for evaluating an object in the ground-truth.\nThe size is measured within the central slice.
    • \n
    \n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tembedding_path: Union[str, os.PathLike],\tiou_threshold: float = 0.8,\tprojection: Union[str, dict] = 'mask',\tbox_extension: Union[float, int] = 0.025,\tdevice: Union[str, torch.device] = None,\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\treturn_segmentation: bool = False,\tmin_size: int = 0) -> Union[float, Tuple[numpy.ndarray, float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.run_multi_dimensional_segmentation_grid_search", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "run_multi_dimensional_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for prompt-based multi-dimensional instance segmentation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\ngrid_search_values argument. For example, to run a grid search over the parameters iou_threshold,\nprojection and box_extension, you can pass the following:

    \n\n
    grid_search_values = {\n    \"iou_threshold\": [0.5, 0.6, 0.7, 0.8, 0.9],\n    \"projection\": [\"mask\", \"bounding_box\", \"points\"],\n    \"box_extension\": [0, 0.1, 0.2, 0.3, 0.4, 0.5],\n}\n
    \n\n

    All combinations of the parameters will be checked.\nIf passed None, the function default_grid_search_values_multi_dimensional_segmentation is used\nto get the default grid search parameters for the instance segmentation method.

    \n\n
    Arguments:
    \n\n
      \n
    • volume: The input volume.
    • \n
    • ground_truth: The label volume with instance segmentations.
    • \n
    • model_type: Choice of segment anything model.
    • \n
    • checkpoint_path: Path to the model checkpoint.
    • \n
    • embedding_path: Path to cache the computed embeddings.
    • \n
    • result_path: Path to save the grid search results.
    • \n
    • interactive_seg_mode: Method for guiding prompt-based instance segmentation.
    • \n
    • verbose: Whether to get the trace for projected segmentations.
    • \n
    • grid_search_values: The grid search values for parameters of the segment_slices_from_ground_truth function.
    • \n
    • min_size: The minimal size for evaluating an object in the ground-truth.\nThe size is measured within the central slice.
    • \n
    \n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tembedding_path: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\tgrid_search_values: Optional[Dict[str, List]] = None,\tmin_size: int = 0):", "funcdef": "def"}, {"fullname": "micro_sam.inference", "modulename": "micro_sam.inference", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.inference.batched_inference", "modulename": "micro_sam.inference", "qualname": "batched_inference", "kind": "function", "doc": "

    Run batched inference for input prompts.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • image: The input image.
    • \n
    • batch_size: The batch size to use for inference.
    • \n
    • boxes: The box prompts. Array of shape N_PROMPTS x 4.\nThe bounding boxes are represented by [MIN_X, MIN_Y, MAX_X, MAX_Y].
    • \n
    • points: The point prompt coordinates. Array of shape N_PROMPTS x 1 x 2.\nThe points are represented by their coordinates [X, Y], which are given\nin the last dimension.
    • \n
    • point_labels: The point prompt labels. Array of shape N_PROMPTS x 1.\nThe labels are either 0 (negative prompt) or 1 (positive prompt).
    • \n
    • multimasking: Whether to predict with 3 or 1 mask.
    • \n
    • embedding_path: Cache path for the image embeddings.
    • \n
    • return_instance_segmentation: Whether to return an instance segmentation\nor the individual mask data.
    • \n
    • segmentation_ids: Fixed segmentation ids to assign to the masks\nderived from the prompts.
    • \n
    • reduce_multimasking: Whether to choose the most likely masks with\nthe highest IoUs from multimasking.
    • \n
    • logits_masks: The logits masks. Array of shape N_PROMPTS x 1 x 256 x 256.\nWhether to use the logits masks from previous segmentation.
    • \n
    • verbose_embeddings: Whether to show progress outputs of computing image embeddings.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks.

    \n
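    A minimal sketch of prompting with two boxes; the random image is only a stand-in to keep the example self-contained, and get_sam_model is assumed to download or load the chosen model:

    import numpy as np
    from micro_sam.util import get_sam_model
    from micro_sam.inference import batched_inference

    predictor = get_sam_model(model_type="vit_b")
    image = np.random.randint(0, 255, (512, 512), dtype="uint8")  # stand-in for a real microscopy image

    # Two box prompts in [MIN_X, MIN_Y, MAX_X, MAX_Y] format, as described above.
    boxes = np.array([[10, 10, 100, 100], [200, 150, 320, 260]])

    segmentation = batched_inference(predictor, image, batch_size=2, boxes=boxes)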
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage: numpy.ndarray,\tbatch_size: int,\tboxes: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tpoint_labels: Optional[numpy.ndarray] = None,\tmultimasking: bool = False,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\treturn_instance_segmentation: bool = True,\tsegmentation_ids: Optional[list] = None,\treduce_multimasking: bool = True,\tlogits_masks: Optional[torch.Tensor] = None,\tverbose_embeddings: bool = True):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation", "modulename": "micro_sam.instance_segmentation", "kind": "module", "doc": "

    Automated instance segmentation functionality.\nThe classes implemented here extend the automatic instance segmentation from Segment Anything:\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html

    \n"}, {"fullname": "micro_sam.instance_segmentation.mask_data_to_segmentation", "modulename": "micro_sam.instance_segmentation", "qualname": "mask_data_to_segmentation", "kind": "function", "doc": "

    Convert the output of the automatic mask generation to an instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • masks: The outputs generated by AutomaticMaskGenerator or EmbeddingMaskGenerator.\nOnly supports output_mode=binary_mask.
    • \n
    • with_background: Whether the segmentation has background. If yes, this function ensures that the largest\nobject in the output will be mapped to zero (the background value).
    • \n
    • min_object_size: The minimal size of an object in pixels.
    • \n
    • max_object_size: The maximal size of an object in pixels.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The instance segmentation.

    \n
    \n", "signature": "(\tmasks: List[Dict[str, Any]],\twith_background: bool,\tmin_object_size: int = 0,\tmax_object_size: Optional[int] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase", "kind": "class", "doc": "

    Base class for the automatic mask generators.

    \n", "bases": "abc.ABC"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_list", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_list", "kind": "variable", "doc": "

    The list of mask data after initialization.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_boxes", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_boxes", "kind": "variable", "doc": "

    The list of crop boxes.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.original_size", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.original_size", "kind": "variable", "doc": "

    The original image size.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.get_state", "kind": "function", "doc": "

    Get the initialized state of the mask generator.

    \n\n
    Returns:
    \n\n
    \n

    State of the mask generator.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.set_state", "kind": "function", "doc": "

    Set the state of the mask generator.

    \n\n
    Arguments:
    \n\n
      \n
    • state: The state of the mask generator, e.g. from serialized state.
    • \n
    \n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.clear_state", "kind": "function", "doc": "

    Clear the state of the mask generator.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    This class implements the same logic as\nhttps://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py\nIt decouples the computationally expensive steps of generating masks from the cheap post-processing operations\nthat filter these masks, which enables grid search and interactively changing the post-processing.

    \n\n

    Use this class as follows:

    \n\n
    \n
    amg = AutomaticMaskGenerator(predictor)\namg.initialize(image)  # Initialize the masks, this takes care of all expensive computations.\nmasks = amg.generate(pred_iou_thresh=0.8)  # Generate the masks. This is fast and enables testing parameters\n
    \n
    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • points_per_side: The number of points to be sampled along one side of the image.\nIf None, point_grids must provide explicit point sampling.
    • \n
    • points_per_batch: The number of points run simultaneously by the model.\nHigher numbers may be faster but use more GPU memory.
    • \n
    • crop_n_layers: If >0, the mask prediction will be run again on crops of the image.
    • \n
    • crop_overlap_ratio: Sets the degree to which crops overlap.
    • \n
    • crop_n_points_downscale_factor: How the number of points is downsampled when predicting with crops.
    • \n
    • point_grids: A list of explicit grids of points used for sampling masks.\nNormalized to [0, 1] with respect to the image coordinate system.
    • \n
    • stability_score_offset: The amount to shift the cutoff when calculating the stability score.
    • \n
    \n", "bases": "AMGBase"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: Optional[int] = None,\tcrop_n_layers: int = 0,\tcrop_overlap_ratio: float = 0.3413333333333333,\tcrop_n_points_downscale_factor: int = 1,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The input image, volume or timeseries.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nSee util.precompute_image_embeddings for details.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • verbose: Whether to print computation progress.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread.\nThis enables using this function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n
      \n
    • pred_iou_thresh: Filter threshold in [0, 1], using the mask quality predicted by the model.
    • \n
    • stability_score_thresh: Filter threshold in [0, 1], using the stability of the mask\nunder changes to the cutoff used to binarize the model prediction.
    • \n
    • box_nms_thresh: The IoU threshold used by nonmax suppression to filter duplicate masks.
    • \n
    • crop_nms_thresh: The IoU threshold used by nonmax suppression to filter duplicate masks between crops.
    • \n
    • min_mask_region_area: Minimal size for the predicted masks.
    • \n
    • output_mode: The form masks are returned in.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
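    Because initialize caches the expensive computations, generate can be re-run cheaply to compare different post-processing settings. A minimal sketch, assuming a predictor loaded via micro_sam.util.get_sam_model and a hypothetical input file:

    \n
    import imageio.v3 as imageio
    from micro_sam.instance_segmentation import AutomaticMaskGenerator
    from micro_sam.util import get_sam_model  # assumed helper for loading a SamPredictor

    predictor = get_sam_model(model_type="vit_b")
    image = imageio.imread("example_image.tif")  # hypothetical 2d input image

    amg = AutomaticMaskGenerator(predictor)
    amg.initialize(image)  # Expensive step, run once.

    # Cheap post-processing: compare different quality thresholds on the cached state.
    for thresh in (0.75, 0.88, 0.95):
        masks = amg.generate(pred_iou_thresh=thresh)
        print(thresh, len(masks))
    \n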
    \n", "signature": "(\tself,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tbox_nms_thresh: float = 0.7,\tcrop_nms_thresh: float = 0.7,\tmin_mask_region_area: int = 0,\toutput_mode: str = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    Implements the same functionality as AutomaticMaskGenerator but for tiled embeddings.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • points_per_side: The number of points to be sampled along one side of the image.\nIf None, point_grids must provide explicit point sampling.
    • \n
    • points_per_batch: The number of points run simultaneously by the model.\nHigher numbers may be faster but use more GPU memory.
    • \n
    • point_grids: A list of explicit grids of points used for sampling masks.\nNormalized to [0, 1] with respect to the image coordinate system.
    • \n
    • stability_score_offset: The amount to shift the cutoff when calculating the stability score.
    • \n
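    A minimal usage sketch for large images, assuming a predictor loaded via micro_sam.util.get_sam_model and a hypothetical input file; the tile_shape and halo values are chosen only for illustration:

    \n
    import imageio.v3 as imageio
    from micro_sam.instance_segmentation import TiledAutomaticMaskGenerator
    from micro_sam.util import get_sam_model  # assumed helper for loading a SamPredictor

    predictor = get_sam_model(model_type="vit_b")
    image = imageio.imread("large_image.tif")  # hypothetical large 2d input image

    amg = TiledAutomaticMaskGenerator(predictor)
    # Compute the embeddings tile by tile instead of for the full image at once.
    amg.initialize(image, tile_shape=(1024, 1024), halo=(256, 256))
    masks = amg.generate(pred_iou_thresh=0.88)
    \n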
    \n", "bases": "AutomaticMaskGenerator"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: int = 64,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The input image, volume or timeseries.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nSee util.precompute_image_embeddings for details.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • tile_shape: The tile shape for embedding prediction.
    • \n
    • halo: The overlap between tiles.
    • \n
    • verbose: Whether to print computation progress.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread.\nThis enables using the function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter", "kind": "class", "doc": "

    Adapter to contain the UNETR decoder in a single module.

    \n\n

    This is used to apply the decoder on top of pre-computed embeddings for\nthe automatic segmentation functionality.\nSee also: https://github.com/constantinpape/torch-em/blob/main/torch_em/model/unetr.py

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(unetr)"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.base", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.base", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.out_conv", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.out_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv_out", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv_out", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder_head", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder_head", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.final_activation", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.final_activation", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.postprocess_masks", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.postprocess_masks", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv1", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv1", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv2", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv3", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv3", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv4", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv4", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.forward", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.forward", "kind": "function", "doc": "

    Defines the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, input_, input_shape, original_shape):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_unetr", "modulename": "micro_sam.instance_segmentation", "qualname": "get_unetr", "kind": "function", "doc": "

    Get UNETR model for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • image_encoder: The image encoder of the SAM model.\nThis is used as encoder by the UNETR too.
    • \n
    • decoder_state: Optional decoder state to initialize the weights\nof the UNETR decoder.
    • \n
    • device: The device.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The UNETR model.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: Optional[collections.OrderedDict[str, torch.Tensor]] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> torch.nn.modules.module.Module:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_decoder", "kind": "function", "doc": "

    Get the decoder to predict outputs for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • image_encoder: The image encoder of the SAM model.
    • \n
    • decoder_state: State to initialize the weights of the UNETR decoder.
    • \n
    • device: The device.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The decoder for instance segmentation.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: collections.OrderedDict[str, torch.Tensor],\tdevice: Union[str, torch.device, NoneType] = None) -> micro_sam.instance_segmentation.DecoderAdapter:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_predictor_and_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_predictor_and_decoder", "kind": "function", "doc": "

    Load the SAM model (predictor) and instance segmentation decoder.

    \n\n

    This requires a checkpoint that contains the state for both predictor\nand decoder.

    \n\n
    Arguments:
    \n\n
      \n
    • model_type: The type of the image encoder used in the SAM model.
    • \n
    • checkpoint_path: Path to the checkpoint from which to load the data.
    • \n
    • device: The device.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The SAM predictor.\n The decoder for instance segmentation.

    \n
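    A minimal sketch of combining the returned predictor and decoder for automatic instance segmentation; the checkpoint path is a placeholder:

    \n
    from micro_sam.instance_segmentation import (
        InstanceSegmentationWithDecoder, get_predictor_and_decoder
    )

    # The checkpoint must contain the state for both the predictor and the decoder.
    predictor, decoder = get_predictor_and_decoder(
        model_type="vit_b", checkpoint_path="finetuned_model.pt"  # hypothetical checkpoint
    )
    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)
    \n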
    \n", "signature": "(\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tdevice: Union[str, torch.device, NoneType] = None) -> Tuple[segment_anything.predictor.SamPredictor, micro_sam.instance_segmentation.DecoderAdapter]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a decoder.

    \n\n

    Implements the same interface as AutomaticMaskGenerator.

    \n\n

    Use this class as follows:

    \n\n
    \n
    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)\nsegmenter.initialize(image)   # Predict the image embeddings and decoder outputs.\nmasks = segmenter.generate(center_distance_threshold=0.75)  # Generate the instance segmentation.\n
    \n
    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • decoder: The decoder to predict intermediate representations\nfor instance segmentation.
    • \n
    \n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The input image, volume or timeseries.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nSee util.precompute_image_embeddings for details.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • verbose: Whether to be verbose.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread.\nThis enables using the function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n
      \n
    • center_distance_threshold: Center distance predictions below this value will be\nused to find seeds (intersected with thresholded boundary distance predictions).
    • \n
    • boundary_distance_threshold: Boundary distance predictions below this value will be\nused to find seeds (intersected with thresholded center distance predictions).
    • \n
    • foreground_smoothing: Sigma value for smoothing the foreground predictions, to avoid\ncheckerboard artifacts in the prediction.
    • \n
    • foreground_threshold: Foreground predictions above this value will be used as foreground mask.
    • \n
    • distance_smoothing: Sigma value for smoothing the distance predictions.
    • \n
    • min_size: Minimal object size in the segmentation result.
    • \n
    • output_mode: The form masks are returned in. Pass None to directly return the instance segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
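    Since initialize caches the image embeddings and decoder predictions, different post-processing settings can be tried without recomputation. A short sketch, assuming segmenter was initialized as in the class example above:

    \n
    # Return the label image directly instead of a list of binary masks.
    instances = segmenter.generate(
        center_distance_threshold=0.5,
        boundary_distance_threshold=0.5,
        min_size=50,       # filter out small objects
        output_mode=None,  # None returns the instance segmentation directly
    )
    \n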
    \n", "signature": "(\tself,\tcenter_distance_threshold: float = 0.5,\tboundary_distance_threshold: float = 0.5,\tforeground_threshold: float = 0.5,\tforeground_smoothing: float = 1.0,\tdistance_smoothing: float = 1.6,\tmin_size: int = 0,\toutput_mode: Optional[str] = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.get_state", "kind": "function", "doc": "

    Get the initialized state of the instance segmenter.

    \n\n
    Returns:
    \n\n
    \n

    Instance segmentation state.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.set_state", "kind": "function", "doc": "

    Set the state of the instance segmenter.

    \n\n
    Arguments:
    \n\n
      \n
    • state: The instance segmentation state.
    • \n
    \n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.clear_state", "kind": "function", "doc": "

    Clear the state of the instance segmenter.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Same as InstanceSegmentationWithDecoder but for tiled image embeddings.

    \n", "bases": "InstanceSegmentationWithDecoder"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The input image, volume or timeseries.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nSee util.precompute_image_embeddings for details.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • verbose: Dummy input to be compatible with other function signatures.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread.\nThis enables using the function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_amg", "modulename": "micro_sam.instance_segmentation", "qualname": "get_amg", "kind": "function", "doc": "

    Get the automatic mask generator class.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • is_tiled: Whether tiled embeddings are used.
    • \n
    • decoder: The decoder to predict instance segmentation.
    • \n
    • kwargs: The keyword arguments for the amg class.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The automatic mask generator.

    \n
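    A minimal sketch of how the returned class depends on the arguments; the helper get_sam_model from micro_sam.util and the checkpoint path are assumptions:

    \n
    from micro_sam.instance_segmentation import get_amg, get_predictor_and_decoder
    from micro_sam.util import get_sam_model  # assumed helper for loading a SamPredictor

    # Without a decoder this returns an AutomaticMaskGenerator
    # (or TiledAutomaticMaskGenerator if is_tiled=True).
    predictor = get_sam_model(model_type="vit_b")
    amg = get_amg(predictor, is_tiled=False)

    # With a decoder it returns an InstanceSegmentationWithDecoder.
    predictor, decoder = get_predictor_and_decoder("vit_b", "finetuned_model.pt")  # hypothetical checkpoint
    segmenter = get_amg(predictor, is_tiled=False, decoder=decoder)
    \n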
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tis_tiled: bool,\tdecoder: Optional[torch.nn.modules.module.Module] = None,\t**kwargs) -> Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "kind": "module", "doc": "

    Multi-dimensional segmentation with segment anything.

    \n"}, {"fullname": "micro_sam.multi_dimensional_segmentation.PROJECTION_MODES", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "PROJECTION_MODES", "kind": "variable", "doc": "

    \n", "default_value": "('box', 'mask', 'points', 'points_and_mask', 'single_point')"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_mask_in_volume", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_mask_in_volume", "kind": "function", "doc": "

    Segment an object mask in volumetric data.

    \n\n
    Arguments:
    \n\n
      \n
    • segmentation: The initial segmentation for the object.
    • \n
    • predictor: The segment anything predictor.
    • \n
    • image_embeddings: The precomputed image embeddings for the volume.
    • \n
    • segmented_slices: List of slices for which this object has already been segmented.
    • \n
    • stop_lower: Whether to stop at the lowest segmented slice.
    • \n
    • stop_upper: Whether to stop at the topmost segmented slice.
    • \n
    • iou_threshold: The IOU threshold for continuing segmentation across 3d.
    • \n
    • projection: The projection method to use. One of 'box', 'mask', 'points', 'points_and_mask' or 'single_point'.\nPass a dictionary to choose the exact combination of projection modes.
    • \n
    • update_progress: Callback to update an external progress bar.
    • \n
    • box_extension: Extension factor for increasing the box size after projection.
    • \n
    • verbose: Whether to print details about the segmentation steps.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    Array with the volumetric segmentation.\n Tuple with the first and last segmented slice.

    \n
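    A hedged sketch of projecting an annotation from a single slice through the volume; the helpers from micro_sam.util and the input file are assumptions, and initial_seg must already contain the object mask in the annotated slice:

    \n
    import numpy as np
    import imageio.v3 as imageio
    from micro_sam.multi_dimensional_segmentation import segment_mask_in_volume
    from micro_sam.util import get_sam_model, precompute_image_embeddings  # assumed helpers

    predictor = get_sam_model(model_type="vit_b")
    volume = imageio.imread("example_volume.tif")  # hypothetical 3d input volume
    embeddings = precompute_image_embeddings(predictor, volume, ndim=3)

    z0 = volume.shape[0] // 2  # the slice that was annotated
    initial_seg = np.zeros(volume.shape, dtype="uint32")
    # initial_seg[z0] must hold the object mask for slice z0,
    # e.g. obtained with one of the prompt-based segmentation functions.

    full_seg, (z_start, z_stop) = segment_mask_in_volume(
        segmentation=initial_seg,
        predictor=predictor,
        image_embeddings=embeddings,
        segmented_slices=np.array([z0]),
        stop_lower=False, stop_upper=False,
        iou_threshold=0.8,
        projection="mask",
    )
    \n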
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\tsegmented_slices: numpy.ndarray,\tstop_lower: bool,\tstop_upper: bool,\tiou_threshold: float,\tprojection: Union[str, dict],\tupdate_progress: Optional[<built-in function callable>] = None,\tbox_extension: float = 0.0,\tverbose: bool = False) -> Tuple[numpy.ndarray, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.merge_instance_segmentation_3d", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "merge_instance_segmentation_3d", "kind": "function", "doc": "

    Merge stacked 2d instance segmentations into a consistent 3d segmentation.

    \n\n

    Solves a multicut problem based on the overlap of objects to merge across z.

    \n\n
    Arguments:
    \n\n
      \n
    • slice_segmentation: The stacked segmentation across the slices.\nWe assume that the segmentation is labeled consecutively across z.
    • \n
    • beta: The bias term for the multicut. Higher values lead to a larger\ndegree of over-segmentation and vice versa.
    • \n
    • with_background: Whether this is a segmentation problem with background.\nIn that case all edges connecting to the background are set to be repulsive.
    • \n
    • gap_closing: If given, gaps in the segmentation are closed with a binary closing\noperation. The value is used to determine the number of iterations for the closing.
    • \n
    • min_z_extent: Require a minimal extent in z for the segmented objects.\nThis can help to prevent segmentation artifacts.
    • \n
    • verbose: Verbosity flag.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread.\nThis enables using the function within a threadworker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The merged segmentation.

    \n
    \n", "signature": "(\tslice_segmentation: numpy.ndarray,\tbeta: float = 0.5,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.automatic_3d_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "automatic_3d_segmentation", "kind": "function", "doc": "

    Segment volume in 3d.

    \n\n

    First segments slices individually in 2d and then merges them across 3d\nbased on overlap of objects between slices.

    \n\n
    Arguments:
    \n\n
      \n
    • volume: The input volume.
    • \n
    • predictor: The SAM model.
    • \n
    • segmentor: The instance segmentation class.
    • \n
    • embedding_path: The path to save pre-computed embeddings.
    • \n
    • with_background: Whether the segmentation has background.
    • \n
    • gap_closing: If given, gaps in the segmentation are closed with a binary closing\noperation. The value is used to determine the number of iterations for the closing.
    • \n
    • min_z_extent: Require a minimal extent in z for the segmented objects.\nThis can help to prevent segmentation artifacts.
    • \n
    • verbose: Verbosity flag.
    • \n
    • kwargs: Keyword arguments for the 'generate' method of the 'segmentor'.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The segmentation.

    \n
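    A minimal sketch for running the full volumetric pipeline; model loading via micro_sam.util.get_sam_model and the file paths are assumptions:

    \n
    import imageio.v3 as imageio
    from micro_sam.instance_segmentation import get_amg
    from micro_sam.multi_dimensional_segmentation import automatic_3d_segmentation
    from micro_sam.util import get_sam_model  # assumed helper for loading a SamPredictor

    predictor = get_sam_model(model_type="vit_b")
    volume = imageio.imread("example_volume.tif")  # hypothetical 3d input volume

    segmentor = get_amg(predictor, is_tiled=False)
    segmentation = automatic_3d_segmentation(
        volume, predictor, segmentor,
        embedding_path="embeddings.zarr",  # hypothetical path for caching the embeddings
        gap_closing=2, min_z_extent=2,     # optional post-processing of the 3d merge
    )
    \n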
    \n", "signature": "(\tvolume: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\tsegmentor: micro_sam.instance_segmentation.AMGBase,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\t**kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state", "modulename": "micro_sam.precompute_state", "kind": "module", "doc": "

    Precompute image embeddings and automatic mask generator state for image data.

    \n"}, {"fullname": "micro_sam.precompute_state.cache_amg_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_amg_state", "kind": "function", "doc": "

    Compute and cache or load the state for the automatic mask generator.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • raw: The image data.
    • \n
    • image_embeddings: The image embeddings.
    • \n
    • save_path: The embedding save path. The AMG state will be stored in 'save_path/amg_state.pickle'.
    • \n
    • verbose: Whether to run the computation verbosely.
    • \n
    • i: The index for which to cache the state.
    • \n
    • kwargs: The keyword arguments for the amg class.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The automatic mask generator class with the cached state.

    \n
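    A hedged sketch of caching the AMG state next to precomputed embeddings so that a later annotation session can reuse it; the util helpers and the paths are assumptions:

    \n
    import imageio.v3 as imageio
    from micro_sam.precompute_state import cache_amg_state
    from micro_sam.util import get_sam_model, precompute_image_embeddings  # assumed helpers

    predictor = get_sam_model(model_type="vit_b")
    image = imageio.imread("example_image.tif")  # hypothetical 2d input image

    save_path = "embeddings.zarr"  # the AMG state is stored in save_path/amg_state.pickle
    embeddings = precompute_image_embeddings(predictor, image, save_path=save_path)
    amg = cache_amg_state(predictor, image, embeddings, save_path)
    masks = amg.generate(pred_iou_thresh=0.88)
    \n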
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.cache_is_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_is_state", "kind": "function", "doc": "

    Compute and cache or load the state for the decoder-based instance segmentation.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • decoder: The instance segmentation decoder.
    • \n
    • raw: The image data.
    • \n
    • image_embeddings: The image embeddings.
    • \n
    • save_path: The embedding save path. The AMG state will be stored in 'save_path/amg_state.pickle'.
    • \n
    • verbose: Whether to run the computation verbosely.
    • \n
    • i: The index for which to cache the state.
    • \n
    • skip_load: Skip loading the state if it is precomputed.
    • \n
    • kwargs: The keyword arguments for the amg class.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The instance segmentation class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\tskip_load: bool = False,\t**kwargs) -> Optional[micro_sam.instance_segmentation.AMGBase]:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.precompute_state", "modulename": "micro_sam.precompute_state", "qualname": "precompute_state", "kind": "function", "doc": "

    Precompute the image embeddings and other optional state for the input image(s).

    \n\n
    Arguments:
    \n\n
      \n
    • input_path: The input image file(s). Can either be a single image file (e.g. tif or png),\na container file (e.g. hdf5 or zarr) or a folder with image files.\nIn case of a container file the argument key must be given. In case of a folder\nthe key can be given as a glob pattern to subselect files from the folder.
    • \n
    • output_path: The output path where the embeddings and other state will be saved.
    • \n
    • pattern: Glob pattern to select files in a folder. The embeddings will be computed\nfor each of these files. To select all files in a folder pass \"*\".
    • \n
    • model_type: The Segment Anything model to use. Will use the vit_l model by default.
    • \n
    • checkpoint_path: Path to a checkpoint for a custom model.
    • \n
    • key: The key to the input file. This is needed for container files (e.g. hdf5 or zarr)\nor to load several images as a 3d volume. Provide a glob pattern, e.g. \"*.tif\", for this case.
    • \n
    • ndim: The dimensionality of the data.
    • \n
    • tile_shape: Shape of tiles for tiled prediction. By default prediction is run without tiling.
    • \n
    • halo: Overlap of the tiles for tiled prediction.
    • \n
    • precompute_amg_state: Whether to precompute the state for automatic instance segmentation\nin addition to the image embeddings.
    • \n
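    A minimal sketch of precomputing the embeddings (and optionally the AMG state) for all tif files in a folder, so that the annotation tools start without the embedding computation; the paths are placeholders:

    \n
    from micro_sam.precompute_state import precompute_state

    precompute_state(
        input_path="data/images",       # hypothetical folder with the input images
        output_path="data/embeddings",  # where the embeddings and AMG state are saved
        pattern="*.tif",                # select all tif files in the folder
        model_type="vit_b",
        ndim=2,
        precompute_amg_state=True,
    )
    \n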
    \n", "signature": "(\tinput_path: Union[os.PathLike, str],\toutput_path: Union[os.PathLike, str],\tpattern: Optional[str] = None,\tmodel_type: str = 'vit_l',\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tkey: Optional[str] = None,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tprecompute_amg_state: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation", "modulename": "micro_sam.prompt_based_segmentation", "kind": "module", "doc": "

    Functions for prompt-based segmentation with Segment Anything.

    \n"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_points", "kind": "function", "doc": "

    Segmentation from point prompts.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • points: The point prompts given in the image coordinate system.
    • \n
    • labels: The labels (positive or negative) associated with the points.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nHas to be passed if the predictor is not yet initialized.
    • \n
    • i: Index for the image data. Required if the input data has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • multimask_output: Whether to return multiple or just a single mask.
    • \n
    • return_all: Whether to return the score and logits in addition to the mask.
    • \n
    • use_best_multimask: Whether to use multimask output and then choose the best mask.\nBy default this is used for a single positive point and not otherwise.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
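    A hedged sketch of segmenting an object from one positive and one negative point prompt; the util helpers, the input file and the point coordinates are assumptions:

    \n
    import numpy as np
    import imageio.v3 as imageio
    from micro_sam.prompt_based_segmentation import segment_from_points
    from micro_sam.util import get_sam_model, precompute_image_embeddings  # assumed helpers

    predictor = get_sam_model(model_type="vit_b")
    image = imageio.imread("example_image.tif")  # hypothetical 2d input image
    embeddings = precompute_image_embeddings(predictor, image)

    points = np.array([[128, 96], [40, 200]])  # prompts in the image coordinate system
    labels = np.array([1, 0])                  # 1 = positive, 0 = negative
    mask = segment_from_points(predictor, points, labels, image_embeddings=embeddings)
    \n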
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tuse_best_multimask: Optional[bool] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_mask", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_mask", "kind": "function", "doc": "

    Segmentation from a mask prompt.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • mask: The mask used to derive prompts.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nHas to be passed if the predictor is not yet initialized.
    • \n
    • i: Index for the image data. Required if the input data has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • use_box: Whether to derive the bounding box prompt from the mask.
    • \n
    • use_mask: Whether to use the mask itself as prompt.
    • \n
    • use_points: Whether to derive point prompts from the mask.
    • \n
    • original_size: Full image shape. Use this if the mask that is being passed\nis downsampled compared to the original image.
    • \n
    • multimask_output: Whether to return multiple or just a single mask.
    • \n
    • return_all: Whether to return the score and logits in addition to the mask.
    • \n
    • box_extension: Relative factor used to enlarge the bounding box prompt.
    • \n
    • box: Precomputed bounding box.
    • \n
    • points: Precomputed point prompts.
    • \n
    • labels: Positive/negative labels corresponding to the point prompts.
    • \n
    • use_single_point: Whether to derive just a single point from the mask,\nin case use_points is true.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tmask: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tuse_box: bool = True,\tuse_mask: bool = True,\tuse_points: bool = False,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\treturn_logits: bool = False,\tbox_extension: float = 0.0,\tbox: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tlabels: Optional[numpy.ndarray] = None,\tuse_single_point: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box", "kind": "function", "doc": "

    Segmentation from a box prompt.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • box: The box prompt.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nHas to be passed if the predictor is not yet initialized.
    • \n
    • i: Index for the image data. Required if the input data has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • multimask_output: Whether to return multiple or just a single mask.
    • \n
    • return_all: Whether to return the score and logits in addition to the mask.
    • \n
    • box_extension: Relative factor used to enlarge the bounding box prompt.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tbox_extension: float = 0.0):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box_and_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box_and_points", "kind": "function", "doc": "

    Segmentation from a box prompt and point prompts.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The segment anything predictor.
    • \n
    • box: The box prompt.
    • \n
    • points: The point prompts, given in the image coordinates system.
    • \n
    • labels: The point labels, either positive or negative.
    • \n
    • image_embeddings: Optional precomputed image embeddings.\nHas to be passed if the predictor is not yet initialized.
    • \n
    • i: Index for the image data. Required if the input data has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • multimask_output: Whether to return multiple or just a single mask.
    • \n
    • return_all: Whether to return the score and logits in addition to the mask.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_generators", "modulename": "micro_sam.prompt_generators", "kind": "module", "doc": "

    Classes for generating prompts from ground-truth segmentation masks.\nFor training or evaluation of prompt-based segmentation.

    \n"}, {"fullname": "micro_sam.prompt_generators.PromptGeneratorBase", "modulename": "micro_sam.prompt_generators", "qualname": "PromptGeneratorBase", "kind": "class", "doc": "

    PromptGeneratorBase is an interface to implement specific prompt generators.

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator", "kind": "class", "doc": "

    Generate point and/or box prompts from an instance segmentation.

    \n\n

    You can use this class to derive prompts from an instance segmentation, either for\nevaluation purposes or for training Segment Anything on custom data.\nIn order to use this generator you need to precompute the bounding boxes and center\ncoordinates of the instance segmentation, using e.g. util.get_centers_and_bounding_boxes.

    \n\n

    Here's an example for how to use this class:

    \n\n
    \n
    # Initialize generator for 1 positive and 4 negative point prompts.\nprompt_generator = PointAndBoxPromptGenerator(1, 4, dilation_strength=8)\n\n# Precompute the bounding boxes for the given segmentation\nbounding_boxes, _ = util.get_centers_and_bounding_boxes(segmentation)\n\n# generate point prompts for the objects with ids 1, 2 and 3\nseg_ids = (1, 2, 3)\nobject_mask = np.stack([segmentation == seg_id for seg_id in seg_ids])[:, None]\nthis_bounding_boxes = [bounding_boxes[seg_id] for seg_id in seg_ids]\npoint_coords, point_labels, _, _ = prompt_generator(object_mask, this_bounding_boxes)\n
    \n
    \n\n
    Arguments:
    \n\n
      \n
    • n_positive_points: The number of positive point prompts to generate per mask.
    • \n
    • n_negative_points: The number of negative point prompts to generate per mask.
    • \n
    • dilation_strength: The factor by which the mask is dilated before generating prompts.
    • \n
    • get_point_prompts: Whether to generate point prompts.
    • \n
    • get_box_prompts: Whether to generate box prompts.
    • \n
    \n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.__init__", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tn_positive_points: int,\tn_negative_points: int,\tdilation_strength: int,\tget_point_prompts: bool = True,\tget_box_prompts: bool = False)"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_positive_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_positive_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_negative_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_negative_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.dilation_strength", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_box_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_box_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_point_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_point_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.IterativePromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "IterativePromptGenerator", "kind": "class", "doc": "

    Generate point prompts from an instance segmentation iteratively.

    \n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.sam_annotator", "modulename": "micro_sam.sam_annotator", "kind": "module", "doc": "

    The interactive annotation tools.

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d.__init__", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n
      \n
    • viewer: The napari viewer.
    • \n
    • ndim: The number of spatial dimensions of the image data (2 or 3).
    • \n
    \n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "annotator_2d", "kind": "function", "doc": "

    Start the 2d annotation tool for a given image.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The image data.
    • \n
    • embedding_path: Filepath where to save the embeddings.
    • \n
    • segmentation_result: An initial segmentation to load.\nThis can be used to correct segmentations with Segment Anything or to save and load progress.\nThe segmentation will be loaded as the 'committed_objects' layer.
    • \n
    • model_type: The Segment Anything model to use. For details on the available models check out\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models.
    • \n
    • tile_shape: Shape of tiles for tiled embedding prediction.\nIf None then the whole image is passed to Segment Anything.
    • \n
    • halo: Shape of the overlap between tiles, which is needed to segment objects on tile borders.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • precompute_amg_state: Whether to precompute the state for automatic mask generation.\nThis will take more time when precomputing embeddings, but will then make\nautomatic mask generation much faster.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the SAM model.
    • \n
    • device: The computational device to use for the SAM model.
    • \n
    • prefer_decoder: Whether to use decoder based instance segmentation if\nthe model used has an additional decoder for instance segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
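    A minimal sketch of starting the 2d annotation tool from python; the input file and embedding path are placeholders, and with return_viewer=True the napari event loop presumably still has to be started explicitly:

    \n
    import imageio.v3 as imageio
    import napari
    from micro_sam.sam_annotator.annotator_2d import annotator_2d

    image = imageio.imread("example_image.tif")  # hypothetical 2d input image

    viewer = annotator_2d(
        image,
        embedding_path="embeddings.zarr",  # hypothetical path for caching the embeddings
        model_type="vit_b",
        return_viewer=True,  # return the viewer to modify it before starting the tool
    )
    napari.run()
    \n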
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d.__init__", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n
      \n
    • viewer: The napari viewer.
    • \n
    • ndim: The number of spatial dimensions of the image data (2 or 3).
    • \n
    \n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "annotator_3d", "kind": "function", "doc": "

    Start the 3d annotation tool for a given image volume.

    \n\n
    Arguments:
    \n\n
      \n
    • image: The volumetric image data.
    • \n
    • embedding_path: Filepath for saving the precomputed embeddings.
    • \n
    • segmentation_result: An initial segmentation to load.\nThis can be used to correct segmentations with Segment Anything or to save and load progress.\nThe segmentation will be loaded as the 'committed_objects' layer.
    • \n
    • model_type: The Segment Anything model to use. For details on the available models check out\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models.
    • \n
    • tile_shape: Shape of tiles for tiled embedding prediction.\nIf None then the whole image is passed to Segment Anything.
    • \n
    • halo: Shape of the overlap between tiles, which is needed to segment objects on tile borders.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • precompute_amg_state: Whether to precompute the state for automatic mask generation.\nThis will take more time when precomputing embeddings, but will then make\nautomatic mask generation much faster.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the SAM model.
    • \n
    • device: The computational device to use for the SAM model.
    • \n
    • prefer_decoder: Whether to use decoder based instance segmentation if\nthe model used has an additional decoder for instance segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking.__init__", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n
      \n
    • viewer: The napari viewer.
    • \n
    • ndim: The number of spatial dimensions of the image data (2 or 3).
    • \n
    \n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "annotator_tracking", "kind": "function", "doc": "

    Start the tracking annotation tool for a given timeseries.

    \n\n
    Arguments:
    \n\n
      \n
    • raw: The image data.
    • \n
    • embedding_path: Filepath for saving the precomputed embeddings.
    • \n
    • model_type: The Segment Anything model to use. For details on the available models check out\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models.
    • \n
    • tile_shape: Shape of tiles for tiled embedding prediction.\nIf None then the whole image is passed to Segment Anything.
    • \n
    • halo: Shape of the overlap between tiles, which is needed to segment objects on tile borders.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the SAM model.
    • \n
    • device: The computational device to use for the SAM model.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_series_annotator", "kind": "function", "doc": "

    Run the annotation tool for a series of images (supported for both 2d and 3d images).

    \n\n
    Arguments:
    \n\n
      \n
    • images: List of the file paths or list of (sets of) slices for the images to be annotated.
    • \n
    • output_folder: The folder where the segmentation results are saved.
    • \n
    • model_type: The Segment Anything model to use. For details on the available models check out\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models.
    • \n
    • embedding_path: Filepath where to save the embeddings.
    • \n
    • tile_shape: Shape of tiles for tiled embedding prediction.\nIf None then the whole image is passed to Segment Anything.
    • \n
    • halo: Shape of the overlap between tiles, which is needed to segment objects on tile borders.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • precompute_amg_state: Whether to precompute the state for automatic mask generation.\nThis will take more time when precomputing embeddings, but will then make\nautomatic mask generation much faster.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the SAM model.
    • \n
    • is_volumetric: Whether to use the 3d annotator.
    • \n
    • prefer_decoder: Whether to use decoder based instance segmentation if\nthe model used has an additional decoder for instance segmentation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timages: Union[List[Union[os.PathLike, str]], List[numpy.ndarray]],\toutput_folder: str,\tmodel_type: str = 'vit_l',\tembedding_path: Optional[str] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tis_volumetric: bool = False,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_folder_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_folder_annotator", "kind": "function", "doc": "

    Run the 2d annotation tool for a series of images in a folder.

    \n\n
    Arguments:
    \n\n
      \n
    • input_folder: The folder with the images to be annotated.
    • \n
    • output_folder: The folder where the segmentation results are saved.
    • \n
    • pattern: The glob pattern for loading files from input_folder.\nBy default all files will be loaded.
    • \n
    • viewer: The viewer to which the SegmentAnything functionality should be added.\nThis enables using a pre-initialized viewer.
    • \n
    • return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
    • \n
    • kwargs: The keyword arguments for micro_sam.sam_annotator.image_series_annotator.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
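    A minimal sketch of annotating all tif images in a folder; the paths are placeholders and the extra keyword arguments are forwarded to image_series_annotator:

    \n
    from micro_sam.sam_annotator.image_series_annotator import image_folder_annotator

    image_folder_annotator(
        input_folder="data/images",        # hypothetical folder with the images to annotate
        output_folder="data/annotations",  # where the segmentation results are saved
        pattern="*.tif",
        model_type="vit_b",                # forwarded to image_series_annotator
        precompute_amg_state=True,         # forwarded to image_series_annotator
    )
    \n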
    \n", "signature": "(\tinput_folder: str,\toutput_folder: str,\tpattern: str = '*',\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\t**kwargs) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator", "kind": "class", "doc": "

    QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.__init__", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.__init__", "kind": "function", "doc": "

    \n", "signature": "(viewer: napari.viewer.Viewer, parent=None)"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.run_button", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui", "modulename": "micro_sam.sam_annotator.training_ui", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget", "kind": "class", "doc": "

    QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.__init__", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.__init__", "kind": "function", "doc": "

    \n", "signature": "(parent=None)"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.run_button", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util", "modulename": "micro_sam.sam_annotator.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util.point_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "point_layer_to_prompts", "kind": "function", "doc": "

    Extract point prompts for SAM from a napari point layer.

    \n\n
    Arguments:
    \n\n
      \n
    • layer: The point layer from which to extract the prompts.
    • \n
    • i: Index for the data (required for 3d or timeseries data).
    • \n
    • track_id: Id of the current track (required for tracking data).
    • \n
    • with_stop_annotation: Whether a single negative point will be interpreted\nas a stop annotation or just returned as a normal prompt.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The point coordinates for the prompts.\n The labels (positive or negative / 1 or 0) for the prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.points.points.Points,\ti=None,\ttrack_id=None,\twith_stop_annotation=True) -> Optional[Tuple[numpy.ndarray, numpy.ndarray]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.shape_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "shape_layer_to_prompts", "kind": "function", "doc": "

    Extract prompts for SAM from a napari shape layer.

    \n\n

    Extracts the bounding box for 'rectangle' shapes and the bounding box and corresponding mask\nfor 'ellipse' and 'polygon' shapes.

    \n\n
    Arguments:
    \n\n
      \n
    • prompt_layer: The napari shape layer.
    • \n
    • shape: The image shape.
    • \n
    • i: Index for the data (required for 3d or timeseries data).
    • \n
    • track_id: Id of the current track (required for tracking data).
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The box prompts.\n The mask prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.shapes.shapes.Shapes,\tshape: Tuple[int, int],\ti=None,\ttrack_id=None) -> Tuple[List[numpy.ndarray], List[Optional[numpy.ndarray]]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layer_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layer_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n
      \n
    • prompt_layer: The napari layer.
    • \n
    • i: Timeframe of the data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(prompt_layer: napari.layers.points.points.Points, i: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layers_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layers_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer and shape layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n
      \n
    • point_layer: The napari point layer.
    • \n
    • box_layer: The napari box layer.
    • \n
    • i: Timeframe of the data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(\tpoint_layer: napari.layers.points.points.Points,\tbox_layer: napari.layers.shapes.shapes.Shapes,\ti: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data", "modulename": "micro_sam.sample_data", "kind": "module", "doc": "

    Sample microscopy data.

    \n\n

    You can change the download location for sample data and model weights\nby setting the environment variable: MICROSAM_CACHEDIR

    \n\n

    By default sample data is downloaded to a folder named 'micro_sam/sample_data'\ninside your default cache directory, e.g.:\n * Mac: ~/Library/Caches/\n * Unix: ~/.cache/ or the value of the XDG_CACHE_HOME environment variable, if defined.\n * Windows: C:\Users\<user>\AppData\Local\<AppAuthor>\<AppName>\Cache

    \n"}, {"fullname": "micro_sam.sample_data.fetch_image_series_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_image_series_example_data", "kind": "function", "doc": "

    Download the sample images for the image series annotator.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_image_series", "modulename": "micro_sam.sample_data", "qualname": "sample_data_image_series", "kind": "function", "doc": "

    Provides the image series example images to napari.

    \n\n

    Opens as three separate image layers in napari (one per image in series).\nThe third image in the series has a different size and modality.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_wholeslide_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_wholeslide_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads part of a whole-slide image from the NeurIPS Cell Segmentation Challenge.\nSee https://neurips22-cellseg.grand-challenge.org/ for details on the data.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_wholeslide", "modulename": "micro_sam.sample_data", "qualname": "sample_data_wholeslide", "kind": "function", "doc": "

    Provides wholeslide 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_livecell_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_livecell_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the LiveCELL dataset.\nSee https://doi.org/10.1038/s41592-021-01249-6 for details on the data.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_livecell", "modulename": "micro_sam.sample_data", "qualname": "sample_data_livecell", "kind": "function", "doc": "

    Provides livecell 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_hela_2d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_hela_2d_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the HeLa CTC dataset.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_hela_2d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_hela_2d", "kind": "function", "doc": "

    Provides HeLa 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_3d_example_data", "kind": "function", "doc": "

    Download the sample data for the 3d annotator.

    \n\n

    This downloads the Lucchi++ dataset from https://casser.io/connectomics/.\nIt is a dataset for mitochondria segmentation in EM.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_3d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_3d", "kind": "function", "doc": "

    Provides Lucchi++ 3d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_example_data", "kind": "function", "doc": "

    Download the sample data for the tracking annotator.

    \n\n

    This data is the cell tracking challenge dataset DIC-C2DH-HeLa.\nCell tracking challenge webpage: http://data.celltrackingchallenge.net\nHeLa cells on a flat glass\nDr. G. van Cappellen. Erasmus Medical Center, Rotterdam, The Netherlands\nTraining dataset: http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip (37 MB)\nChallenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/DIC-C2DH-HeLa.zip (41 MB)

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_tracking", "modulename": "micro_sam.sample_data", "qualname": "sample_data_tracking", "kind": "function", "doc": "

    Provides tracking example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_segmentation_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_segmentation_data", "kind": "function", "doc": "

    Download groundtruth segmentation for the tracking example data.

    \n\n

    This downloads the groundtruth segmentation for the image data from fetch_tracking_example_data.

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_segmentation", "modulename": "micro_sam.sample_data", "qualname": "sample_data_segmentation", "kind": "function", "doc": "

    Provides segmentation example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.synthetic_data", "modulename": "micro_sam.sample_data", "qualname": "synthetic_data", "kind": "function", "doc": "

    Create synthetic image data and segmentation for training.

    \n", "signature": "(shape, seed=None):", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_nucleus_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_nucleus_3d_example_data", "kind": "function", "doc": "

    Download the sample data for 3d segmentation of nuclei.

    \n\n

    This data contains a small crop from a volume from the publication\n\"Efficient automatic 3D segmentation of cell nuclei for high-content screening\"\nhttps://doi.org/10.1186/s12859-022-04737-4

    \n\n
    Arguments:
    \n\n
      \n
    • save_directory: Root folder to save the downloaded data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.training", "modulename": "micro_sam.training", "kind": "module", "doc": "

    Functionality for training Segment Anything.

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer", "modulename": "micro_sam.training.joint_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n
      \n
    • convert_inputs: The class that converts outputs of the dataloader to the expected input format of SAM.\nThe class micro_sam.training.util.ConvertToSamInputs can be used here.
    • \n
    • n_sub_iteration: The number of iteration steps for which the masks predicted for one object are updated.\nIn each sub-iteration new point prompts are sampled where the model was wrong.
    • \n
    • n_objects_per_batch: If not given, we compute the loss for all objects in a sample.\nOtherwise the loss computation is limited to n_objects_per_batch, and the objects are randomly sampled.
    • \n
    • mse_loss: The regression loss to compare the IoU predicted by the model with the true IoU.
    • \n
    • prompt_generator: The iterative prompt generator which takes care of the iterative prompting logic for training.
    • \n
    • mask_prob: The probability of using the mask inputs in the iterative prompting (per n_sub_iteration).
    • \n
    • **kwargs: The keyword arguments of the DefaultTrainer super class.
    • \n
    \n", "bases": "micro_sam.training.sam_trainer.SamTrainer"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.__init__", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tunetr: torch.nn.modules.module.Module,\tinstance_loss: torch.nn.modules.module.Module,\tinstance_metric: torch.nn.modules.module.Module,\t**kwargs)"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.unetr", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.unetr", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_loss", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_metric", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_metric", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.save_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.save_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, name, current_metric, best_metric, **extra_save_dict):", "funcdef": "def"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.load_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.load_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, checkpoint='best'):", "funcdef": "def"}, {"fullname": "micro_sam.training.sam_trainer", "modulename": "micro_sam.training.sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n
      \n
    • convert_inputs: The class that converts outputs of the dataloader to the expected input format of SAM.\nThe class micro_sam.training.util.ConvertToSamInputs can be used here.
    • \n
    • n_sub_iteration: The number of iteration steps for which the masks predicted for one object are updated.\nIn each sub-iteration new point prompts are sampled where the model was wrong.
    • \n
    • n_objects_per_batch: If not given, we compute the loss for all objects in a sample.\nOtherwise the loss computation is limited to n_objects_per_batch, and the objects are randomly sampled.
    • \n
    • mse_loss: The regression loss to compare the IoU predicted by the model with the true IoU.
    • \n
    • prompt_generator: The iterative prompt generator which takes care of the iterative prompting logic for training.
    • \n
    • mask_prob: The probability of using the mask inputs in the iterative prompting (per n_sub_iteration).
    • \n
    • **kwargs: The keyword arguments of the DefaultTrainer super class.
    • \n
    \n", "bases": "torch_em.trainer.default_trainer.DefaultTrainer"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.__init__", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tconvert_inputs,\tn_sub_iteration: int,\tn_objects_per_batch: Optional[int] = None,\tmse_loss: torch.nn.modules.module.Module = MSELoss(),\tprompt_generator: micro_sam.prompt_generators.PromptGeneratorBase = <micro_sam.prompt_generators.IterativePromptGenerator object>,\tmask_prob: float = 0.5,\t**kwargs)"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.convert_inputs", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.convert_inputs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mse_loss", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mse_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_objects_per_batch", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_objects_per_batch", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_sub_iteration", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_sub_iteration", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.prompt_generator", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.prompt_generator", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mask_prob", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mask_prob", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam", "modulename": "micro_sam.training.trainable_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM", "kind": "class", "doc": "

    Wrapper to make the SegmentAnything model trainable.

    \n\n
    Arguments:
    \n\n
      \n
    • sam: The SegmentAnything Model.
    • \n
    • device: The device for training.
    • \n
    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.__init__", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tsam: segment_anything.modeling.sam.Sam,\tdevice: Union[str, torch.device])"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.sam", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.device", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.device", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.transform", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.preprocess", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.preprocess", "kind": "function", "doc": "

    Resize, normalize pixel values and pad to a square input.

    \n\n
    Arguments:
    \n\n
      \n
    • x: The input tensor.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The resized, normalized and padded tensor.\n The shape of the image after resizing.

    \n
    \n", "signature": "(self, x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.image_embeddings_oft", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.image_embeddings_oft", "kind": "function", "doc": "

    \n", "signature": "(self, batched_inputs):", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.forward", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.forward", "kind": "function", "doc": "

    Forward pass.

    \n\n
    Arguments:
    \n\n
      \n
    • batched_inputs: The batched input images and prompts.
    • \n
    • image_embeddings: The precomputed image embeddings. If not passed, they will be computed.
    • \n
    • multimask_output: Whether to predict multiple masks or just a single mask.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks and iou values.

    \n
    \n", "signature": "(\tself,\tbatched_inputs: List[Dict[str, Any]],\timage_embeddings: torch.Tensor,\tmultimask_output: bool = False) -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.training", "modulename": "micro_sam.training.training", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.training.FilePath", "modulename": "micro_sam.training.training", "qualname": "FilePath", "kind": "variable", "doc": "

    \n", "default_value": "typing.Union[str, os.PathLike]"}, {"fullname": "micro_sam.training.training.train_sam", "modulename": "micro_sam.training.training", "qualname": "train_sam", "kind": "function", "doc": "

    Run training for a SAM model.

    \n\n
    Arguments:
    \n\n
      \n
    • name: The name of the model to be trained.\nThe checkpoint and logs will have this name.
    • \n
    • model_type: The type of the SAM model.
    • \n
    • train_loader: The dataloader for training.
    • \n
    • val_loader: The dataloader for validation.
    • \n
    • n_epochs: The number of epochs to train for.
    • \n
    • early_stopping: Enable early stopping after this number of epochs\nwithout improvement.
    • \n
    • n_objects_per_batch: The number of objects per batch used to compute\nthe loss for interactive segmentation. If None, all objects will be used;\nif given, objects will be randomly sub-sampled.
    • \n
    • checkpoint_path: Path to checkpoint for initializing the SAM model.
    • \n
    • with_segmentation_decoder: Whether to train additional UNETR decoder\nfor automatic instance segmentation.
    • \n
    • freeze: Specify parts of the model that should be frozen, namely:\nimage_encoder, prompt_encoder and mask_decoder.\nBy default nothing is frozen and the full model is updated.
    • \n
    • device: The device to use for training.
    • \n
    • lr: The learning rate.
    • \n
    • n_sub_iteration: The number of iterative prompts per training iteration.
    • \n
    • save_root: Optional root directory for saving the checkpoints and logs.\nIf not given the current working directory is used.
    • \n
    • mask_prob: The probability for using a mask as input in a given training sub-iteration.
    • \n
    • n_iterations: The number of iterations to use for training. This will override n_epochs if given.
    • \n
    • scheduler_class: The learning rate scheduler to update the learning rate.\nBy default, ReduceLROnPlateau is used.
    • \n
    • scheduler_kwargs: The learning rate scheduler parameters.\nIf None is passed, the default parameters of ReduceLROnPlateau are used.
    • \n
    • save_every_kth_epoch: Save checkpoints after every kth epoch separately.
    • \n
    • pbar_signals: Controls for napari progress bar.
    • \n
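    Example (a minimal sketch; train_loader and val_loader are assumed to be PyTorch dataloaders\nprepared beforehand, e.g. with default_sam_loader, and the name is a placeholder value):

        from micro_sam.training.training import train_sam

        train_sam(
            name="sam_example",            # checkpoint and logs will use this name
            model_type="vit_b",
            train_loader=train_loader,     # assumed to exist
            val_loader=val_loader,         # assumed to exist
            n_epochs=10,
            n_objects_per_batch=10,
            with_segmentation_decoder=True,
        )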
    \n", "signature": "(\tname: str,\tmodel_type: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tn_epochs: int = 100,\tearly_stopping: Optional[int] = 10,\tn_objects_per_batch: Optional[int] = 25,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\tfreeze: Optional[List[str]] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tlr: float = 1e-05,\tn_sub_iteration: int = 8,\tsave_root: Union[os.PathLike, str, NoneType] = None,\tmask_prob: float = 0.5,\tn_iterations: Optional[int] = None,\tscheduler_class: Optional[torch.optim.lr_scheduler._LRScheduler] = <class 'torch.optim.lr_scheduler.ReduceLROnPlateau'>,\tscheduler_kwargs: Optional[Dict[str, Any]] = None,\tsave_every_kth_epoch: Optional[int] = None,\tpbar_signals: Optional[PyQt5.QtCore.QObject] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_dataset", "modulename": "micro_sam.training.training", "qualname": "default_sam_dataset", "kind": "function", "doc": "

    Create a PyTorch Dataset for training a SAM model.

    \n\n
    Arguments:
    \n\n
      \n
    • raw_paths: The path(s) to the image data used for training.\nCan either be multiple 2D images or volumetric data.
    • \n
    • raw_key: The key for accessing the image data. Internal filepath for hdf5-like input\nor a glob pattern for selecting multiple files.
    • \n
    • label_paths: The path(s) to the label data used for training.\nCan either be multiple 2D images or volumetric data.
    • \n
    • label_key: The key for accessing the label data. Internal filepath for hdf5-like input\nor a glob pattern for selecting multiple files.
    • \n
    • patch_shape: The shape for training patches.
    • \n
    • with_segmentation_decoder: Whether to train with additional segmentation decoder.
    • \n
    • with_channels: Whether the image data has RGB channels.
    • \n
    • sampler: A sampler to reject batches according to a given criterion.
    • \n
    • n_samples: The number of samples for this dataset.
    • \n
    • is_train: Whether this dataset is used for training or validation.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The dataset.

    \n
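    Example (a sketch for a folder of 2d tif images with matching label images; the paths,\nglob patterns and patch shape are placeholder values):

        from micro_sam.training.training import default_sam_dataset, default_sam_loader

        dataset = default_sam_dataset(
            raw_paths="data/images", raw_key="*.tif",
            label_paths="data/labels", label_key="*.tif",
            patch_shape=(512, 512),
            with_segmentation_decoder=True,
        )

        # default_sam_loader wraps the same arguments in a PyTorch DataLoader;
        # passing batch_size here is an assumption about the forwarded DataLoader settings.
        train_loader = default_sam_loader(
            raw_paths="data/images", raw_key="*.tif",
            label_paths="data/labels", label_key="*.tif",
            patch_shape=(512, 512), with_segmentation_decoder=True,
            batch_size=1,
        )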
    \n", "signature": "(\traw_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\traw_key: Optional[str],\tlabel_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\tlabel_key: Optional[str],\tpatch_shape: Tuple[int],\twith_segmentation_decoder: bool,\twith_channels: bool = False,\tsampler=None,\tn_samples: Optional[int] = None,\tis_train: bool = True,\t**kwargs) -> torch.utils.data.dataset.Dataset:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_loader", "modulename": "micro_sam.training.training", "qualname": "default_sam_loader", "kind": "function", "doc": "

    \n", "signature": "(**kwargs) -> torch.utils.data.dataloader.DataLoader:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.CONFIGURATIONS", "modulename": "micro_sam.training.training", "qualname": "CONFIGURATIONS", "kind": "variable", "doc": "

    Best training configurations for given hardware resources.

    \n", "default_value": "{'Minimal': {'model_type': 'vit_t', 'n_objects_per_batch': 4, 'n_sub_iteration': 4}, 'CPU': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'gtx1080': {'model_type': 'vit_t', 'n_objects_per_batch': 5}, 'rtx5000': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'V100': {'model_type': 'vit_b'}, 'A100': {'model_type': 'vit_h'}}"}, {"fullname": "micro_sam.training.training.train_sam_for_configuration", "modulename": "micro_sam.training.training", "qualname": "train_sam_for_configuration", "kind": "function", "doc": "

    Run training for a SAM model with the configuration for a given hardware resource.

    \n\n

    Selects the best training settings for the given configuration.\nThe available configurations are listed in CONFIGURATIONS.

    \n\n
    Arguments:
    \n\n
      \n
    • name: The name of the model to be trained.\nThe checkpoint and logs will have this name.
    • \n
    • configuration: The configuration (= name of hardware resource).
    • \n
    • train_loader: The dataloader for training.
    • \n
    • val_loader: The dataloader for validation.
    • \n
    • checkpoint_path: Path to checkpoint for initializing the SAM model.
    • \n
    • with_segmentation_decoder: Whether to train additional UNETR decoder\nfor automatic instance segmentation.
    • \n
    • kwargs: Additional keyword parameters that will be passed to train_sam.
    • \n
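    Example (a sketch; "CPU" is one of the keys of CONFIGURATIONS and the dataloaders are\nassumed to be prepared as for train_sam):

        from micro_sam.training.training import train_sam_for_configuration

        train_sam_for_configuration(
            name="sam_cpu_example",
            configuration="CPU",
            train_loader=train_loader,  # assumed to exist
            val_loader=val_loader,      # assumed to exist
            with_segmentation_decoder=True,
        )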
    \n", "signature": "(\tname: str,\tconfiguration: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.util", "modulename": "micro_sam.training.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.identity", "modulename": "micro_sam.training.util", "qualname": "identity", "kind": "function", "doc": "

    Identity transformation.

    \n\n

    This is a helper function to skip data normalization when finetuning SAM.\nData normalization is performed within the model and should thus be skipped as\na preprocessing step in training.

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.require_8bit", "modulename": "micro_sam.training.util", "qualname": "require_8bit", "kind": "function", "doc": "

    Transformation to require an 8-bit input data range (0-255).

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.get_trainable_sam_model", "modulename": "micro_sam.training.util", "qualname": "get_trainable_sam_model", "kind": "function", "doc": "

    Get the trainable sam model.

    \n\n
    Arguments:
    \n\n
      \n
    • model_type: The segment anything model that should be finetuned.\nThe weights of this model will be used for initialization, unless a\ncustom weight file is passed via checkpoint_path.
    • \n
    • device: The device to use for training.
    • \n
    • checkpoint_path: Path to a custom checkpoint from which to load the model weights.
    • \n
    • freeze: Specify parts of the model that should be frozen, namely: image_encoder, prompt_encoder and mask_decoder.\nBy default nothing is frozen and the full model is updated.
    • \n
    • return_state: Whether to return the full checkpoint state.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The trainable segment anything model.

    \n
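    Example (a minimal sketch):

        from micro_sam.training.util import get_trainable_sam_model

        # Load a trainable vit_b model and freeze the image encoder, so that only
        # the prompt encoder and mask decoder are updated during finetuning.
        model = get_trainable_sam_model(model_type="vit_b", freeze=["image_encoder"])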
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tfreeze: Optional[List[str]] = None,\treturn_state: bool = False) -> micro_sam.training.trainable_sam.TrainableSAM:", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs", "kind": "class", "doc": "

    Convert outputs of data loader to the expected batched inputs of the SegmentAnything model.

    \n\n
    Arguments:
    \n\n
      \n
    • transform: The transformation to resize the prompts. Should be the same transform used in the\nmodel to resize the inputs. If None the prompts will not be resized.
    • \n
    • dilation_strength: The dilation factor.\nIt determines a \"safety\" border from which prompts are not sampled to avoid ambiguous prompts\ndue to imprecise groundtruth masks.
    • \n
    • box_distortion_factor: Factor for distorting the box annotations derived from the groundtruth masks.
    • \n
    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.__init__", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.__init__", "kind": "function", "doc": "

    \n", "signature": "(\ttransform: Optional[segment_anything.utils.transforms.ResizeLongestSide],\tdilation_strength: int = 10,\tbox_distortion_factor: Optional[float] = None)"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.dilation_strength", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.transform", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.box_distortion_factor", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.box_distortion_factor", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, do_rescaling=True, padding='constant')"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.do_rescaling", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.do_rescaling", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, padding='constant', min_size=0)"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.min_size", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.min_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.util", "modulename": "micro_sam.util", "kind": "module", "doc": "

    Helper functions for downloading Segment Anything models and predicting image embeddings.

    \n"}, {"fullname": "micro_sam.util.get_cache_directory", "modulename": "micro_sam.util", "qualname": "get_cache_directory", "kind": "function", "doc": "

    Get micro-sam cache directory location.

    \n\n

    Users can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.microsam_cachedir", "modulename": "micro_sam.util", "qualname": "microsam_cachedir", "kind": "function", "doc": "

    Return the micro-sam cache directory.

    \n\n

    Returns the top level cache directory for micro-sam models and sample data.

    \n\n

    Every time this function is called, we check for any user updates made to\nthe MICROSAM_CACHEDIR environment variable since the last call.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.models", "modulename": "micro_sam.util", "qualname": "models", "kind": "function", "doc": "

    Return the segmentation models registry.

    \n\n

    We recreate the model registry every time this function is called,\nso any user changes to the default micro-sam cache directory location\nare respected.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.util.get_device", "modulename": "micro_sam.util", "qualname": "get_device", "kind": "function", "doc": "

    Get the torch device.

    \n\n

    If no device is passed, the default device for your system is used.\nOtherwise it is checked whether the passed device is supported.

    \n\n
    Arguments:
    \n\n
      \n
    • device: The input device.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The device.

    \n
    \n", "signature": "(\tdevice: Union[str, torch.device, NoneType] = None) -> Union[str, torch.device]:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_sam_model", "modulename": "micro_sam.util", "qualname": "get_sam_model", "kind": "function", "doc": "

    Get the SegmentAnything Predictor.

    \n\n

    This function will download the required model or load it from the cached weight file.\nThe location of the cache can be changed by setting the environment variable: MICROSAM_CACHEDIR.\nThe name of the requested model can be set via model_type.\nSee https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models\nfor an overview of the available models.

    \n\n

    Alternatively this function can also load a model from weights stored in a local filepath.\nThe corresponding file path is given via checkpoint_path. In this case model_type\nmust be given as the matching encoder architecture, e.g. \"vit_b\" if the weights are for\na SAM model with vit_b encoder.

    \n\n

    By default the models are downloaded to a folder named 'micro_sam/models'\ninside your default cache directory, e.g.:

    \n\n\n\n
    Arguments:
    \n\n
      \n
    • model_type: The SegmentAnything model to use. Will use the standard vit_h model by default.\nTo get a list of all available model names you can call get_model_names.
    • \n
    • device: The device for the model. If none is given will use GPU if available.
    • \n
    • checkpoint_path: The path to a file with weights that should be used instead of using the\nweights corresponding to model_type. If given, model_type must match the architecture\ncorresponding to the weight file. E.g. if you use weights for SAM with vit_b encoder\nthen model_type must be given as \"vit_b\".
    • \n
    • return_sam: Return the sam model object as well as the predictor.
    • \n
    • return_state: Return the unpickled checkpoint state.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The segment anything predictor.

    \n
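    Example (a minimal sketch; the checkpoint path is a placeholder value):

        from micro_sam.util import get_sam_model

        # Download (or load from the cache) the vit_b model and return its predictor.
        predictor = get_sam_model(model_type="vit_b")

        # Load custom weights instead; model_type must match the encoder of the weights.
        predictor = get_sam_model(model_type="vit_b", checkpoint_path="finetuned_vit_b.pt")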
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\treturn_sam: bool = False,\treturn_state: bool = False) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.export_custom_sam_model", "modulename": "micro_sam.util", "qualname": "export_custom_sam_model", "kind": "function", "doc": "

    Export a finetuned segment anything model to the standard model format.

    \n\n

    The exported model can be used by the interactive annotation tools in micro_sam.annotator.

    \n\n
    Arguments:
    \n\n
      \n
    • checkpoint_path: The path to the corresponding checkpoint if not in the default model folder.
    • \n
    • model_type: The SegmentAnything model type corresponding to the checkpoint (vit_h, vit_b, vit_l or vit_t).
    • \n
    • save_path: Where to save the exported model.
    • \n
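    Example (a sketch; the file paths are placeholder values):

        from micro_sam.util import export_custom_sam_model

        # Convert a checkpoint written during finetuning into the standard SAM format,
        # so that it can be used by the annotation tools.
        export_custom_sam_model(
            checkpoint_path="checkpoints/sam_example/best.pt",
            model_type="vit_b",
            save_path="finetuned_vit_b.pt",
        )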
    \n", "signature": "(\tcheckpoint_path: Union[str, os.PathLike],\tmodel_type: str,\tsave_path: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_model_names", "modulename": "micro_sam.util", "qualname": "get_model_names", "kind": "function", "doc": "

    \n", "signature": "() -> Iterable:", "funcdef": "def"}, {"fullname": "micro_sam.util.precompute_image_embeddings", "modulename": "micro_sam.util", "qualname": "precompute_image_embeddings", "kind": "function", "doc": "

    Compute the image embeddings (output of the encoder) for the input.

    \n\n

    If 'save_path' is given the embeddings will be loaded/saved in a zarr container.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • input_: The input data. Can be 2 or 3 dimensional, corresponding to an image, volume or timeseries.
    • \n
    • save_path: Path to save the embeddings in a zarr container.
    • \n
    • lazy_loading: Whether to load all embeddings into memory or return an\nobject to load them on demand when required. This only has an effect if 'save_path' is given\nand if the input is 3 dimensional.
    • \n
    • ndim: The dimensionality of the data. If not given will be deduced from the input data.
    • \n
    • tile_shape: Shape of tiles for tiled prediction. By default prediction is run without tiling.
    • \n
    • halo: Overlap of the tiles for tiled prediction.
    • \n
    • verbose: Whether to be verbose in the computation.
    • \n
    • pbar_init: Callback to initialize an external progress bar. Must accept the number of steps and a description.\nCan be used together with pbar_update to handle the napari progress bar in another thread.\nThis enables using this function within a thread worker.
    • \n
    • pbar_update: Callback to update an external progress bar.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The image embeddings.

    \n
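    Example (a minimal sketch using a random placeholder image):

        import numpy as np
        from micro_sam.util import get_sam_model, precompute_image_embeddings

        predictor = get_sam_model(model_type="vit_b")
        image = np.random.randint(0, 255, (1024, 1024), dtype="uint8")  # placeholder 2d image

        # Cache the embeddings in a zarr container so they are only computed once.
        embeddings = precompute_image_embeddings(predictor, image, save_path="embeddings.zarr")
        # For large images, tile_shape=(512, 512) and halo=(64, 64) enable tiled computation.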
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tinput_: numpy.ndarray,\tsave_path: Union[str, os.PathLike, NoneType] = None,\tlazy_loading: bool = False,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.util.set_precomputed", "modulename": "micro_sam.util", "qualname": "set_precomputed", "kind": "function", "doc": "

    Set the precomputed image embeddings for a predictor.

    \n\n
    Arguments:
    \n\n
      \n
    • predictor: The SegmentAnything predictor.
    • \n
    • image_embeddings: The precomputed image embeddings computed by precompute_image_embeddings.
    • \n
    • i: Index for the image data. Required if image has three spatial dimensions\nor a time dimension and two spatial dimensions.
    • \n
    • tile_id: Index for the tile. This is required if the embeddings are tiled.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The predictor with set features.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\ti: Optional[int] = None,\ttile_id: Optional[int] = None) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.compute_iou", "modulename": "micro_sam.util", "qualname": "compute_iou", "kind": "function", "doc": "

    Compute the intersection over union of two masks.

    \n\n
    Arguments:
    \n\n
      \n
    • mask1: The first mask.
    • \n
    • mask2: The second mask.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The intersection over union of the two masks.

    \n
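    Example:

        import numpy as np
        from micro_sam.util import compute_iou

        mask1 = np.zeros((64, 64), dtype=bool)
        mask1[10:30, 10:30] = True
        mask2 = np.zeros((64, 64), dtype=bool)
        mask2[20:40, 20:40] = True

        # Intersection of 100 pixels over a union of 700 pixels -> ~0.143.
        print(compute_iou(mask1, mask2))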
    \n", "signature": "(mask1: numpy.ndarray, mask2: numpy.ndarray) -> float:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_centers_and_bounding_boxes", "modulename": "micro_sam.util", "qualname": "get_centers_and_bounding_boxes", "kind": "function", "doc": "

    Returns the center coordinates and bounding boxes of the foreground instances in the ground-truth.

    \n\n
    Arguments:
    \n\n
      \n
    • segmentation: The segmentation.
    • \n
    • mode: Determines the functionality used for computing the centers.
    • \n
    • If 'v', the object's eccentricity centers computed by vigra are used.
    • \n
    • If 'p', the object's centroids computed by skimage are used.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    A dictionary that maps object ids to the corresponding centroid.\n A dictionary that maps object_ids to the corresponding bounding box.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tmode: str = 'v') -> Tuple[Dict[int, numpy.ndarray], Dict[int, tuple]]:", "funcdef": "def"}, {"fullname": "micro_sam.util.load_image_data", "modulename": "micro_sam.util", "qualname": "load_image_data", "kind": "function", "doc": "

    Helper function to load image data from file.

    \n\n
    Arguments:
    \n\n
      \n
    • path: The filepath to the image data.
    • \n
    • key: The internal filepath for complex data formats like hdf5.
    • \n
    • lazy_loading: Whether to lazily load data. Only supported for n5 and zarr data.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The image data.

    \n
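    Example (a sketch; the file paths and the internal key are placeholder values):

        from micro_sam.util import load_image_data

        # Read a tif image directly.
        image = load_image_data("data/image.tif")

        # For container formats like hdf5 the internal dataset path is given via key.
        volume = load_image_data("data/volume.h5", key="raw")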
    \n", "signature": "(\tpath: str,\tkey: Optional[str] = None,\tlazy_loading: bool = False) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.util.segmentation_to_one_hot", "modulename": "micro_sam.util", "qualname": "segmentation_to_one_hot", "kind": "function", "doc": "

    Convert the segmentation to one-hot encoded masks.

    \n\n
    Arguments:
    \n\n
      \n
    • segmentation: The segmentation.
    • \n
    • segmentation_ids: Optional subset of ids that will be used to subsample the masks.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The one-hot encoded masks.

    \n
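    Example:

        import numpy as np
        from micro_sam.util import segmentation_to_one_hot

        segmentation = np.zeros((64, 64), dtype="uint32")
        segmentation[5:20, 5:20] = 1
        segmentation[30:50, 30:50] = 2

        masks = segmentation_to_one_hot(segmentation)  # one-hot mask per object
        masks_subset = segmentation_to_one_hot(segmentation, segmentation_ids=np.array([2]))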
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tsegmentation_ids: Optional[numpy.ndarray] = None) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_block_shape", "modulename": "micro_sam.util", "qualname": "get_block_shape", "kind": "function", "doc": "

    Get a suitable block shape for chunking a given shape.

    \n\n

    The primary use for this is determining chunk sizes for\nzarr arrays or block shapes for parallelization.

    \n\n
    Arguments:
    \n\n
      \n
    • shape: The image or volume shape.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The block shape.

    \n
    \n", "signature": "(shape: Tuple[int]) -> Tuple[int]:", "funcdef": "def"}, {"fullname": "micro_sam.visualization", "modulename": "micro_sam.visualization", "kind": "module", "doc": "

    Functionality for visualizing image embeddings.

    \n"}, {"fullname": "micro_sam.visualization.compute_pca", "modulename": "micro_sam.visualization", "qualname": "compute_pca", "kind": "function", "doc": "

    Compute the PCA projection of the embeddings to visualize them as an RGB image.

    \n\n
    Arguments:
    \n\n
      \n
    • embeddings: The embeddings. For example predicted by the SAM image encoder.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    PCA of the embeddings, mapped to the pixels.

    \n
    \n", "signature": "(embeddings: numpy.ndarray) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.visualization.project_embeddings_for_visualization", "modulename": "micro_sam.visualization", "qualname": "project_embeddings_for_visualization", "kind": "function", "doc": "

    Project image embeddings to pixel-wise PCA.

    \n\n
    Arguments:
    \n\n
      \n
    • image_embeddings: The image embeddings.
    • \n
    \n\n
    Returns:
    \n\n
    \n

    The PCA of the embeddings.\n The scale factor for resizing to the original image size.

    \n
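    Example (a minimal sketch using a random placeholder image):

        import numpy as np
        from micro_sam.util import get_sam_model, precompute_image_embeddings
        from micro_sam.visualization import project_embeddings_for_visualization

        predictor = get_sam_model(model_type="vit_b")
        image = np.random.randint(0, 255, (512, 512), dtype="uint8")  # placeholder image
        image_embeddings = precompute_image_embeddings(predictor, image)

        # Pixel-wise PCA of the embeddings and the scale factor to the original image size.
        embedding_pca, scale = project_embeddings_for_visualization(image_embeddings)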
    \n", "signature": "(\timage_embeddings: Dict[str, Any]) -> Tuple[numpy.ndarray, Tuple[float, ...]]:", "funcdef": "def"}]; // mirrored in build-search-index.js (part 1) // Also split on html tags. this is a cheap heuristic, but good enough.