From 8414d2c4bf5c7c62f5e09553c17382c7f23a5d49 Mon Sep 17 00:00:00 2001
From: Constantin Pape
Date: Sat, 21 Dec 2024 20:28:20 +0100
Subject: [PATCH] Update documentation

---
 micro_sam.html                                |  179 +-
 micro_sam/__version__.html                    |    2 +-
 micro_sam/_vendored.html                      |   10 +-
 micro_sam/automatic_segmentation.html         |  495 +-
 micro_sam/bioimageio.html                     |    1 +
 micro_sam/evaluation/evaluation.html          |  179 +-
 micro_sam/evaluation/inference.html           |  278 +-
 .../evaluation/instance_segmentation.html     |  667 ++-
 .../multi_dimensional_segmentation.html       |  975 +--
 micro_sam/instance_segmentation.html          | 5255 ++++++++---------
 micro_sam/models/peft_sam.html                | 2056 +++++--
 micro_sam/multi_dimensional_segmentation.html | 1028 ++--
 micro_sam/sam_annotator/_annotator.html       |   20 +-
 micro_sam/sam_annotator/_state.html           |  968 +--
 micro_sam/sam_annotator/_tooltips.html        |    4 +-
 micro_sam/sam_annotator/_widgets.html         | 3236 +++++-----
 micro_sam/sam_annotator/training_ui.html      |   76 +-
 micro_sam/sam_annotator/util.html             |    6 +-
 micro_sam/training/training.html              | 1057 ++--
 search.js                                     |    2 +-
 20 files changed, 8871 insertions(+), 7623 deletions(-)

diff --git a/micro_sam.html b/micro_sam.html
index d64d38e7..f794bc7f 100644
--- a/micro_sam.html
+++ b/micro_sam.html
@@ -31,7 +31,7 @@

Contents

  • Installation
  • @@ -43,6 +43,7 @@

    Contents

  • Image Series Annotator
  • Finetuning UI
  • +
  • Using the Command Line Interface (CLI)
  • Using the Python Library -

    The installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam via mamba instead. They will also not enable using the python library.

    +

    The installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam via conda instead. They will also not enable using the python library.

    Linux Installer:

    @@ -367,8 +388,8 @@

    Annotation Tools

    HINT: If you would like to start napari to use micro-sam from the plugin menu, you must start it by activating the environment where micro-sam has been installed using:

    -
    $ mamba activate <ENVIRONMENT_NAME>
    -$ napari
    +
    conda activate <ENVIRONMENT_NAME>
    +napari
     
    @@ -511,6 +532,41 @@

    Finetuning UI

    The Configuration option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Details on the configurations can be found here.

    +

    Using the Command Line Interface (CLI)

    + +

    micro-sam provides access to a range of functionalities through command line interface (CLI) scripts that can be run from the terminal.

    + +

    The supported CLIs can be used by

    + +
      +
    • Running $ micro_sam.precompute_embeddings for precomputing and caching the image embeddings.
    • +
    • Running $ micro_sam.annotator_2d for starting the 2d annotator.
    • +
    • Running $ micro_sam.annotator_3d for starting the 3d annotator.
    • +
    • Running $ micro_sam.annotator_tracking for starting the tracking annotator.
    • +
    • Running $ micro_sam.image_series_annotator for starting the image series annotator.
    • +
    • Running $ micro_sam.automatic_segmentation for automatic instance segmentation. +
        +
      • We support all post-processing parameters for automatic instance segmentation (for both AMG and AIS). +
          +
        • The automatic segmentation mode can be controlled by --mode <MODE_NAME>, where the available choices for MODE_NAME are amg / ais.
        • +
        • AMG is supported by both default Segment Anything models and micro-sam models / finetuned models.
        • +
        • AIS is supported by micro-sam models (or finetuned models, provided they are trained with the additional instance segmentation decoder).
        • +
      • +
      • If these parameters are not provided by the user, micro-sam makes use of the best post-processing parameters (depending on the choice of model).
      • +
      • The post-processing parameters can be changed by passing them via the CLI using --<PARAMETER_NAME> <VALUE>. For example, one can update the parameter values (e.g. pred_iou_thresh, stability_iou_thresh, etc. - supported by AMG) using
        +
        $ micro_sam.automatic_segmentation ... --pred_iou_thresh 0.6 --stability_iou_thresh 0.6 ...
        +
        +
      • +
    • +
    + +
    - Remember to specify the automatic segmentation mode using `--mode <MODE_NAME>` when using additional post-processing parameters.
    - You can check details for supported parameters and their respective default values at `micro_sam/instance_segmentation.py` under the `generate` method of the `AutomaticMaskGenerator` and `InstanceSegmentationWithDecoder` classes.
    +
    + +

    NOTE: For all CLIs above, you can find more details by adding the argument -h to the CLI script (eg. $ micro_sam.annotator_2d -h).

    +

    Using the Python Library

    The python library can be imported via

    @@ -733,14 +789,14 @@

    Installation questions

    1. How to install micro_sam?

    -

    The installation for micro_sam is supported in three ways: from mamba (recommended), from source and from installers. Check out our tutorial video to get started with micro_sam, briefly walking you through the installation process and how to start the tool.

    +

    The installation for micro_sam is supported in three ways: from conda (recommended), from source and from installers. Check out our tutorial video to get started with micro_sam, briefly walking you through the installation process and how to start the tool.

    2. I cannot install micro_sam using the installer, I am getting some errors.

    The installer should work out-of-the-box on Windows and Linux platforms. Please open an issue to report the error you encounter.

    -

    NOTE: The installers enable using micro_sam without mamba or conda. However, we recommend the installation from mamba / from source to use all its features seamlessly. Specifically, the installers currently only support the CPU and won't enable you to use the GPU (if you have one).

    +

    NOTE: The installers enable using micro_sam without conda. However, we recommend the installation from conda or from source to use all its features seamlessly. Specifically, the installers currently only support the CPU and won't enable you to use the GPU (if you have one).

    3. What is the minimum system requirement for micro_sam?

    @@ -783,7 +839,7 @@

    5. I am missing a few packages (eg. ModuleNotFoundError: No module named 'elf.io). What should I do?

    -

    With the latest release 1.0.0, the installation from mamba and source should take care of this and install all the relevant packages for you. +

    With the latest release 1.0.0, the installation from conda and source should take care of this and install all the relevant packages for you. So please reinstall micro_sam, following the installation guide.

    6. Can I install micro_sam using pip?

    @@ -798,6 +854,26 @@

    micro_sam, then a) is most likely the reason. We recommend installing the latest version following the installation instructions.

    +

    8. My system does not have an internet connection. Where should I put the model checkpoints for the micro-sam models?

    + +

    We recommend transferring the model checkpoints to the system-level cache directory (you can find yours by running the following in the terminal: python -c "from micro_sam import util; print(util.microsam_cachedir())"). Once you have identified the cache directory, you need to create an additional models directory inside the micro-sam cache directory (if not present already) and move the model checkpoints there. Finally, you must rename the transferred checkpoints as per the respective key values in the url dictionaries located in the micro_sam.util.models function (an example for Linux users is shown below).

    + +
    +
    # Download and transfer the model checkpoints for 'vit_b_lm' and `vit_b_lm_decoder`.
    +# Next, verify the cache directory.
    +> python -c "from micro_sam import util; print(util.microsam_cachedir())"
    +/home/anwai/.cache/micro_sam
    +
    +# Create 'models' folder in the cache directory
    +> mkdir /home/anwai/.cache/micro_sam/models
    +
    +# Move the checkpoints to the models directory and rename them
    +# The following steps transfer and rename the checkpoints to the desired filenames.
    +> mv vit_b.pt /home/anwai/.cache/micro_sam/models/vit_b_lm
    +> mv vit_b_decoder.pt /home/anwai/.cache/micro_sam/models/vit_b_lm_decoder
    +
    +
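To verify that the renamed checkpoints are found, you can load the model through the python library. This is a minimal sketch, assuming that micro_sam.util.get_sam_model resolves the model name via the cache directory shown above:

```python
from micro_sam.util import get_sam_model

# Load the finetuned light microscopy model. micro_sam looks up 'vit_b_lm' in its
# url/key dictionaries and expects the checkpoint in the 'models' folder of the
# cache directory, i.e. the file that was renamed above.
predictor = get_sam_model(model_type="vit_b_lm")
print(type(predictor))  # with the checkpoint in place, nothing needs to be downloaded
```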
    +

    Usage questions

  • \n\n\n

    The installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam via mamba instead. They will also not enable using the python library.

    \n\n

    Linux Installer:

    \n\n

    To use the installer:

    \n\n\n\n

    Windows Installer:

    \n\n\n\n

    \n\n

    Easybuild installation

    \n\n

    There is also an easy-build recipe for micro_sam under development. You can find more information here.

    \n\n

    Annotation Tools

    \n\n

    micro_sam provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.\nSee an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation\nof mitochondria in volume EM (middle) and interactive tracking of cells (right).

    \n\n

    \n\n

    \n\n

    The annotation tools can be started from the napari plugin menu, the command line or from python scripts.\nThey are built as napari plugin and make use of existing napari functionality wherever possible. If you are not familiar with napari, we recommend to start here.\nThe micro_sam tools mainly use the point layer, shape layer and label layer.

    \n\n

    The annotation tools are explained in detail below. We also provide video tutorials.

    \n\n

    The annotation tools can be started from the napari plugin menu:\n

    \n\n

    You can find additional information on the annotation tools in the FAQ section.

    \n\n

    HINT: If you would like to start napari to use micro-sam from the plugin menu, you must start it by activating the environment where micro-sam has been installed using:

    \n\n
    \n
    $ mamba activate <ENVIRONMENT_NAME>\n$ napari\n
    \n
    \n\n

    Annotator 2D

    \n\n

    The 2d annotator can be started by

    \n\n\n\n

    The user interface of the 2d annotator looks like this:

    \n\n

    \n\n

    It contains the following elements:

    \n\n
      \n
    1. The napari layers for the segmentations and prompts:\n
        \n
      • prompts: shape layer that is used to provide box prompts to Segment Anything. Prompts can be given as rectangle (marked as box prompt in the image), ellipse or polygon.
      • \n
      • point_prompts: point layer that is used to provide point prompts to Segment Anything. Positive prompts (green points) for marking the object you want to segment, negative prompts (red points) for marking the outside of the object.
      • \n
      • committed_objects: label layer with the objects that have already been segmented.
      • \n
      • auto_segmentation: label layer with the results from automatic instance segmentation.
      • \n
      • current_object: label layer for the object(s) you're currently segmenting.
      • \n
    2. \n
    3. The embedding menu. For selecting the image to process, the Segment Anything model that is used and computing its image embeddings. The Embedding Settings contain advanced settings for loading cached embeddings from file or for using tiled embeddings.
    4. \n
    5. The prompt menu for changing whether the currently selected point is a positive or a negative prompt. This can also be done by pressing T.
    6. \n
    7. The menu for interactive segmentation. Clicking Segment Object (or pressing S) will run segmentation for the current prompts. The result is displayed in current_object. Activating batched enables segmentation of multiple objects with point prompts. In this case one object will be segmented per positive prompt.
    8. \n
    9. The menu for automatic segmentation. Clicking Automatic Segmentation will segment all objects in the image. The results will be displayed in the auto_segmentation layer. We support two different methods for automatic segmentation: automatic mask generation (supported for all models) and instance segmentation with an additional decoder (only supported for our models). Changing the parameters under Automatic Segmentation Settings controls the segmentation results, check the tooltips for details.
    10. \n
    11. The menu for committing the segmentation. When clicking Commit (or pressing C) the result from the selected layer (either current_object or auto_segmentation) will be transferred from the respective layer to committed_objects. When commit_path is given the results will automatically be saved there.
    12. \n
    13. The menu for clearing the current annotations. Clicking Clear Annotations (or pressing Shift + C) will clear the current annotations and the current segmentation.
    14. \n
    \n\n

    Point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time, unless the batched mode is activated. With box prompts you can segment several objects at once, both in the normal and batched mode.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use this tool.

    \n\n

    Annotator 3D

    \n\n

    The 3d annotator can be started by

    \n\n\n\n

    The user interface of the 3d annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the segmentations and prompts.
    2. \n
    3. The embedding menu.
    4. \n
    5. The prompt menu.
    6. \n
    7. The menu for interactive segmentation in the current slice.
    8. \n
    9. The menu for interactive 3d segmentation. Clicking Segment All Slices (or pressing Shift + S) will extend the segmentation of the current object across the volume by projecting prompts across slices. The parameters for prompt projection can be set in Segmentation Settings, please refer to the tooltips for details.
    10. \n
    11. The menu for automatic segmentation. The overall functionality is the same as for the 2d annotator. To segment the full volume Apply to Volume needs to be checked, otherwise only the current slice will be segmented. Note that 3D segmentation can take quite long without a GPU.
    12. \n
    13. The menu for committing the current object.
    14. \n
    15. The menu for clearing the current annotations. If all slices is set all annotations will be cleared, otherwise they are only cleared for the current slice.
    16. \n
    \n\n

    You can only segment one object at a time using the interactive segmentation functionality with this tool.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use this tool.

    \n\n

    Annotator Tracking

    \n\n

    The tracking annotator can be started by

    \n\n\n\n

    The user interface of the tracking annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the segmentations and prompts. Same as for the 2d segmentation application but without the auto_segmentation layer.
    2. \n
    3. The embedding menu.
    4. \n
    5. The prompt menu.
    6. \n
    7. The menu with tracking settings: track_state is used to indicate that the object you are tracking is dividing in the current frame. track_id is used to select which of the tracks after division you are following.
    8. \n
    9. The menu for interactive segmentation in the current frame.
    10. \n
    11. The menu for interactive tracking. Click Track Object (or press Shift + S) to segment the current object across time.
    12. \n
    13. The menu for committing the current tracking result.
    14. \n
    15. The menu for clearing the current annotations.
    16. \n
    \n\n

    The tracking annotator only supports 2d image data with a time dimension, volumetric data + time is not supported. We also do not support automatic tracking yet.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use this tool.

    \n\n

    Image Series Annotator

    \n\n

    The image series annotation tool enables running the 2d annotator or 3d annotator for multiple images that are saved in a folder. This makes it convenient to annotate many images without having to restart the tool for every image. It can be started by

    \n\n\n\n

    When starting this tool via the plugin menu the following interface opens:

    \n\n

    \n\n

    You can select the folder where your images are saved with Input Folder. The annotation results will be saved in Output Folder.\nYou can specify a rule for loading only a subset of images via pattern, for example *.tif to only load tif images. Set is_volumetric if the data you want to annotate is 3d. The rest of the options are settings for the image embedding computation and are the same as for the embedding menu (see above).\nOnce you click Annotate Images the images from the folder you have specified will be loaded and the annotation tool is started for them.

    \n\n

    This menu will not open if you start the image series annotator from the command line or via python. In this case the input folder and other settings are passed as parameters instead.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use the image series annotator.

    \n\n

    Finetuning UI

    \n\n

    We also provide a graphical interface for fine-tuning models on your own data. It can be started by clicking Finetuning in the plugin menu after starting napari.

    \n\n

    Note: if you know a bit of python programming we recommend using a script for model finetuning instead. This will give you more options to configure the training. See these instructions for details.

    \n\n

    When starting this tool via the plugin menu the following interface opens:

    \n\n

    \n\n

    You can select the image data via Path to images. You can either load images from a folder or select a single image file. By providing Image data key you can either provide a pattern for selecting files from the folder or provide an internal filepath for HDF5, Zarr or similar fileformats.

    \n\n

    You can select the label data via Path to labels and Label data key, following the same logic as for the image data. The label masks are expected to have the same size as the image data. You can for example use annotations created with one of the micro_sam annotation tools for this, they are stored in the correct format. See the FAQ for more details on the expected label data.

    \n\n

    The Configuration option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Details on the configurations can be found here.

    \n\n

    Using the Python Library

    \n\n

    The python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    This library extends the Segment Anything library and

    \n\n\n\n

    You can import these sub-modules via

    \n\n
    \n
    import micro_sam.prompt_based_segmentation\nimport micro_sam.instance_segmentation\n# etc.\n
    \n
    \n\n

    This functionality is used to implement the interactive annotation tools in micro_sam.sam_annotator and can be used as a standalone python library.\nWe provide jupyter notebooks that demonstrate how to use it here. You can find the full library documentation by scrolling to the end of this page.
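As a concrete example, automatic instance segmentation can be run from python in a few lines. This is a minimal sketch based on the signatures of micro_sam.automatic_segmentation.get_predictor_and_segmenter and automatic_instance_segmentation documented further below; the image path is a placeholder and imageio is only used here to load the example data:

```python
import imageio.v3 as imageio
from micro_sam.automatic_segmentation import get_predictor_and_segmenter, automatic_instance_segmentation

# Load a 2d image (placeholder path, replace with your data).
image = imageio.imread("example_image.tif")

# Get the Segment Anything model and the matching segmentation class.
# For a micro_sam model like 'vit_b_lm' the instance segmentation decoder (AIS)
# is selected automatically; pass amg=True to force automatic mask generation.
predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm")

# Run automatic instance segmentation; the result is a numpy array with instance labels.
instances = automatic_instance_segmentation(
    predictor=predictor, segmenter=segmenter, input_path=image, ndim=2
)
print(instances.shape, "max label id:", int(instances.max()))
```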

    \n\n

    Training your Own Model

    \n\n

    We reimplement the training logic described in the Segment Anything publication to enable finetuning on custom data. We use this functionality to provide the finetuned microscopy models and it can also be used to train models on your own data. In fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance. So a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data. We recommend checking out our latest preprint for details on the results on how much data is required for finetuning Segment Anything.

    \n\n

    The training logic is implemented in micro_sam.training and is based on torch-em. Check out the finetuning notebook to see how to use it.\nWe also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster.\nThe notebook explains how to train it together with the rest of SAM and how to then use it.
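To give a sense of the overall shape of a finetuning script, here is a compact sketch. It is not the definitive API: the parameter names of default_sam_loader and train_sam are assumptions based on the finetuning notebook and may differ in your version, and all paths and values are placeholders.

```python
from micro_sam.training import default_sam_loader, train_sam

# Dataloaders for training and validation data (placeholder paths; images and
# labels are tif files here and the labels must be instance segmentations).
def get_loader(image_dir, label_dir):
    return default_sam_loader(
        raw_paths=image_dir, raw_key="*.tif",
        label_paths=label_dir, label_key="*.tif",
        patch_shape=(512, 512), batch_size=1,
        with_segmentation_decoder=True,  # prepare targets for the extra AIS decoder
    )

train_loader = get_loader("data/train/images", "data/train/labels")
val_loader = get_loader("data/val/images", "data/val/labels")

train_sam(
    name="sam_finetuned",            # name of the checkpoint that will be saved
    model_type="vit_b_lm",           # model to start the finetuning from
    train_loader=train_loader, val_loader=val_loader,
    n_epochs=50,                     # adjust to your data
    n_objects_per_batch=25,          # main knob for VRAM consumption during training
    with_segmentation_decoder=True,  # also train the additional decoder for AIS
)
```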

    \n\n

    More advanced examples, including quantitative and qualitative evaluation, can be found in the finetuning directory, which contains the code for training and evaluating our models. You can find further information on model training in the FAQ section.

    \n\n

    Here is a list of resources, together with their recommended training settings, for which we have tested model finetuning:

| Resource Name | Capacity | Model Type | Batch Size | Finetuned Parts | Number of Objects |
|---|---|---|---|---|---|
| CPU | 32GB | ViT Base | 1 | all | 10 |
| CPU | 64GB | ViT Base | 1 | all | 15 |
| GPU (NVIDIA GTX 1080Ti) | 8GB | ViT Base | 1 | Mask Decoder, Prompt Encoder | 10 |
| GPU (NVIDIA Quadro RTX5000) | 16GB | ViT Base | 1 | all | 10 |
| GPU (Tesla V100) | 32GB | ViT Base | 1 | all | 10 |
| GPU (NVIDIA A100) | 80GB | ViT Tiny | 2 | all | 50 |
| GPU (NVIDIA A100) | 80GB | ViT Base | 2 | all | 40 |
| GPU (NVIDIA A100) | 80GB | ViT Large | 2 | all | 30 |
| GPU (NVIDIA A100) | 80GB | ViT Huge | 2 | all | 25 |
    \n\n
    \n

    NOTE: If you use the finetuning UI or micro_sam.training.training.train_sam_for_configuration you can specify the hardware configuration and the best settings for it will be set automatically. If your hardware is not in the settings we have tested choose the closest match. You can set the training parameters yourself when using micro_sam.training.training.train_sam. Be aware that the choice for the number of objects per image, the batch size, and the type of model have a strong impact on the VRAM needed for training and the duration of training. See the finetuning notebook for an overview of these parameters.

    \n
    \n\n

    Finetuned Models

    \n\n

    In addition to the original Segment Anything models, we provide models that are finetuned on microscopy data.\nThey are available in the BioImage.IO Model Zoo and are also hosted on Zenodo.

    \n\n

    We currently offer the following models:

    \n\n\n\n

    See the two figures below of the improvements through the finetuned model for LM and EM data.

    \n\n

    \n\n

    \n\n

    You can select which model to use in the annotation tools by selecting the corresponding name in the Model: drop-down menu in the embedding menu:

    \n\n

    \n\n

    To use a specific model in the python library you need to pass the corresponding name as value to the model_type parameter exposed by all relevant functions.\nSee for example the 2d annotator example.

    \n\n

    Choosing a Model

    \n\n

    As a rule of thumb:

    \n\n\n\n

    See also the figures above for examples where the finetuned models work better than the default models.\nWe are working on further improving these models and adding new models for other biomedical imaging domains.

    \n\n

    Other Models

    \n\n

    Previous versions of our models are available on Zenodo:

    \n\n\n\n

    We do not recommend to use these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.

    \n\n

    We provide additional models that were used for experiments in our publication on Zenodo:

    \n\n\n\n

    FAQ

    \n\n

    Here we provide frequently asked questions and common issues.\nIf you encounter a problem or question not addressed here feel free to open an issue or to ask your question on image.sc with the tag micro-sam.

    \n\n

    Installation questions

    \n\n

    1. How to install micro_sam?

    \n\n

    The installation for micro_sam is supported in three ways: from mamba (recommended), from source and from installers. Check out our tutorial video to get started with micro_sam, briefly walking you through the installation process and how to start the tool.

    \n\n

    2. I cannot install micro_sam using the installer, I am getting some errors.

    \n\n

    The installer should work out-of-the-box on Windows and Linux platforms. Please open an issue to report the error you encounter.

    \n\n
    \n

    NOTE: The installers enable using micro_sam without mamba or conda. However, we recommend the installation from mamba / from source to use all its features seamlessly. Specifically, the installers currently only support the CPU and won't enable you to use the GPU (if you have one).

    \n
    \n\n

    3. What is the minimum system requirement for micro_sam?

    \n\n

    From our experience, the micro_sam annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.\nYou might encounter some slowness for $\\leq$ 8GB RAM. The resources micro_sam's annotation tools have been tested on are:

    \n\n\n\n\n\n

    Having a GPU will significantly speed up the annotation tools and especially the model finetuning.

    \n\n\n\n

    micro_sam has been tested mostly with CUDA 12.1 and PyTorch [2.1.1, 2.2.0]. However, the tool and the library are not constrained to a specific PyTorch or CUDA version. So it should work fine with the standard PyTorch installation for your system.

    \n\n

    5. I am missing a few packages (eg. ModuleNotFoundError: No module named 'elf.io). What should I do?

    \n\n

    With the latest release 1.0.0, the installation from mamba and source should take care of this and install all the relevant packages for you.\nSo please reinstall micro_sam, following the installation guide.

    \n\n

    6. Can I install micro_sam using pip?

    \n\n

    We do not recommend installing micro-sam with pip. It has several dependencies that are only available from conda-forge, which will not install correctly via pip.

    \n\n

    Please see the installation guide for the recommended way to install micro-sam.

    \n\n

    The PyPI page for micro-sam exists only so that the napari-hub can find it.

    \n\n

    7. I get the following error: ImportError: cannot import name 'UNETR' from 'torch_em.model'.

    \n\n

    It's possible that you have an older version of torch-em installed. Similar errors could often be raised from other libraries, the reasons being: a) Outdated packages installed, or b) Some non-existent module being called. If the source of such error is from micro_sam, then a) is most likely the reason. We recommend installing the latest version following the installation instructions.

    \n\n

    Usage questions

    \n\n

    \n\n

    1. I have some microscopy images. Can I use the annotator tool for segmenting them?

    \n\n

    Yes, you can use the annotator tool for:

    \n\n\n\n

    2. Which model should I use for my data?

    \n\n

    We currently provide three different kinds of models: the default models vit_h, vit_l, vit_b and vit_t; the models for light microscopy vit_l_lm, vit_b_lm and vit_t_lm; the models for electron microscopy vit_l_em_organelles, vit_b_em_organelles and vit_t_em_organelles. You should first try the model that best fits the segmentation task you're interested in, the lm model for cell or nucleus segmentation in light microscopy or the em_organelles model for segmenting nuclei, mitochondria or other roundish organelles in electron microscopy. If your segmentation problem does not meet these descriptions, or if these models don't work well, you should try one of the default models instead. The letter after vit denotes the size of the image encoder in SAM, h (huge) being the largest and t (tiny) the smallest. The smaller models are faster but may yield worse results. We recommend using either a vit_l or vit_b model; they offer the best trade-off between speed and segmentation quality. You can find more information on model choice here.

    \n\n

    3. I have high-resolution microscopy images, micro_sam does not seem to work.

    \n\n

    The Segment Anything model expects inputs of shape 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, or sometimes not work at all. To address this, micro_sam implements tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles. You can activate tiling with the tile_shape parameter, which determines the size of the inner tile and halo, which determines the size of the additional overlap.

    \n\n\n\n
    \n

    NOTE: It's recommended to choose the halo so that it is larger than half of the maximal radius of the objects you want to segment.
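In the python library, tiling is activated as in the following sketch, based on the get_predictor_and_segmenter and automatic_instance_segmentation signatures documented below (the tile sizes and the image path are placeholders):

```python
from micro_sam.automatic_segmentation import get_predictor_and_segmenter, automatic_instance_segmentation

# is_tiled=True selects the tiled implementation of the segmentation class.
predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm", is_tiled=True)

# tile_shape is the size of the inner tile, halo the size of the additional overlap.
instances = automatic_instance_segmentation(
    predictor=predictor, segmenter=segmenter, input_path="large_image.tif",
    tile_shape=(1024, 1024), halo=(256, 256),
)
```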

    \n
    \n\n

    4. The computation of image embeddings takes very long in napari.

    \n\n

    micro_sam pre-computes the image embeddings produced by the vision transformer backbone in Segment Anything, and (optionally) stores them on disk. If you are using a CPU, this step can take a while for 3d data or time-series (you will see a progress bar in the command-line interface / on the bottom right of napari). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them over to your laptop / local machine to speed this up.
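A sketch of how this can be done with the python library, assuming micro_sam.util.precompute_image_embeddings with a save_path argument for caching (check the library documentation for the exact signature); the resulting zarr folder can then be copied to your local machine and selected as the embeddings save path in the annotation tool:

```python
import imageio.v3 as imageio
from micro_sam.util import get_sam_model, precompute_image_embeddings

# Placeholder path, e.g. a 3d volume or time-series.
image = imageio.imread("volume.tif")

predictor = get_sam_model(model_type="vit_b_lm")

# Compute the embeddings on the GPU machine and cache them in a zarr folder,
# which can be re-used by the annotation tools on another machine.
embeddings = precompute_image_embeddings(predictor, image, save_path="embeddings.zarr", ndim=3)
```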

    \n\n\n\n

    5. Can I use micro_sam on a CPU?

    \n\n

    Most other processing steps are very fast even on a CPU. However, the automatic segmentation step for the default Segment Anything models (typically called the "Segment Anything" feature or AMG - Automatic Mask Generation) takes several minutes without a GPU (depending on the image size). For large volumes and time-series, segmenting an object interactively in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).

    \n\n
    \n

    HINT: All the tutorial videos have been created on CPU resources.

    \n
    \n\n

    6. I generated some segmentations from another tool, can I use it as a starting point in micro_sam?

    \n\n

    You can save and load the results from the committed_objects layer to correct segmentations you obtained from another tool (e.g. CellPose) or save intermediate annotation results. The results can be saved via File -> Save Selected Layers (s) ... in the napari menu-bar on top (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result parameter in the CLI or python script (2d and 3d segmentation).\nIf you are using an annotation tool you can load the segmentation you want to edit as segmentation layer and rename it to committed_objects.

    \n\n

    7. I am using micro_sam for segmenting objects. I would like to report the steps for reproducibility. How can this be done?

    \n\n

    The annotation steps and segmentation results can be saved to a Zarr file by providing the commit_path in the commit widget. This file will contain all relevant information to reproduce the segmentation.

    \n\n
    \n

    NOTE: This feature is still under development and we have not implemented rerunning the segmentation from this file yet. See this issue for details.

    \n
    \n\n

    8. I want to segment objects with complex structures. Both the default Segment Anything models and the micro_sam generalist models do not work for my data. What should I do?

    \n\n

    micro_sam supports interactive annotation using positive and negative point prompts, box prompts and polygon drawing. You can combine multiple types of prompts to improve the segmentation quality. In case the aforementioned suggestions do not work as desired, micro_sam also supports finetuning a model on your data (see the next section on finetuning). We recommend the following: a) Check which of the provided models performs relatively good on your data, b) Choose the best model as the starting point to train your own specialist model for the desired segmentation task.

    \n\n

    9. I am using the annotation tool and napari outputs the following error: While emitting signal ... an error occurred in callback ... This is not a bug in psygnal. See ... above for details.

    \n\n

    These messages occur when an internal error happens in micro_sam. In most cases this is due to inconsistent annotations and you can fix them by clearing the annotations.\nWe want to remove these errors, so we would be very grateful if you can open an issue and describe the steps you did when encountering it.

    \n\n

    10. The objects are not segmented in my 3d data using the interactive annotation tool.

    \n\n

    The first thing to check is: a) make sure you are using the latest version of micro_sam (pull the latest commit from master if your installation is from source, or update the installation from conda / mamba using mamba update micro_sam), and b) try out the steps from the 3d annotation tutorial video to verify if this shows the same behaviour (or the same errors) as you faced. For 3d images, it's important to pass the inputs in the python axis convention, ZYX.\nc) try using a different model and change the projection mode for 3d segmentation. This is also explained in the video.

    \n\n

    11. I have very small or fine-grained structures in my high-resolution microscopic images. Can I use micro_sam to annotate them?

    \n\n

    Segment Anything does not work well for very small or fine-grained objects (e.g. filaments). In these cases, you could try to use tiling to improve results (see Point 3 above for details).

    \n\n

    12. napari seems to be very slow for large images.

    \n\n

    Editing (drawing / erasing) very large 2d images or 3d volumes is known to be slow at the moment, as the objects in the layers are stored in-memory. See the related issue.

    \n\n

    13. While computing the embeddings (and / or automatic segmentation), a window stating: "napari" is not responding pops up.

    \n\n

    This can happen for long running computations. You just need to wait a bit longer and the computation will finish.

    \n\n

    14. I have 3D RGB microscopy volumes. How does micro_sam handle these images?

    \n\n

    micro_sam performs automatic segmentation in 3D volumes by first segmenting slices individually in 2D and then merging the segmentations across 3D based on the overlap of objects between slices. The expected shape of your 3D RGB volume is (Z * Y * X * 3) (reason: Segment Anything is designed for 3-channel inputs, so if you provide micro-sam with 1-channel inputs we triplicate the channel to fit this requirement, while 3-channel inputs are used as RGB arrays as they are).

    \n\n

    15. I want to use a model stored in a different directory than the micro_sam cache. How can I do this?

    \n\n

    The micro-sam CLIs for precomputation of image embeddings and annotators (Annotator 2d, Annotator 3d, Annotator Tracking, Image Series Annotator) accept the argument -c / --checkpoint to pass model checkpoints. If you start a micro-sam annotator from the napari plugin menu, you can provide the path to model checkpoints in the annotator widget (on right) under Embedding Settings drop-down in the custom weights path option.

    \n\n

    NOTE: It is important to choose the correct model type when you opt for the above recommendation, using the -m / --model_type argument or selecting it from the Model dropdown in Embedding Settings respectively. Otherwise you will face parameter mismatch issues.

    \n\n

    Fine-tuning questions

    \n\n

    1. I have a microscopy dataset I would like to fine-tune Segment Anything for. Is it possible using micro_sam?

    \n\n

    Yes, you can fine-tune Segment Anything on your own dataset. Here's how you can do it:

    \n\n\n\n

    2. I would like to fine-tune Segment Anything on open-source cloud services (e.g. Kaggle Notebooks), is it possible?

    \n\n

    Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and BAND). Check out our tutorial notebook for this.

    \n\n

    \n\n

    3. What kind of annotations do I need to finetune Segment Anything?

    \n\n

    Annotations refer to the instance segmentation labels, i.e. each object of interest in your microscopy images has an individual id that uniquely identifies the segmented objects. You can obtain them with micro_sam's annotation tools. In micro_sam, you are expected to provide dense segmentations (i.e. all objects per image are annotated) for finetuning Segment Anything with the additional decoder; however, it's okay to use sparse segmentations (i.e. only a few objects per image are annotated) for just finetuning Segment Anything (without the additional decoder).

    \n\n

    4. I have finetuned Segment Anything on my microscopy data. How can I use it for annotating new images?

    \n\n

    You can load your finetuned model by entering the path to its checkpoint in the custom_weights_path field in the Embedding Settings drop-down menu.\nIf you are using the python library or CLI you can specify this path with the checkpoint_path parameter.
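In the python library this looks roughly as follows, assuming micro_sam.util.get_sam_model, which exposes the checkpoint_path parameter (the checkpoint path is a placeholder):

```python
from micro_sam.util import get_sam_model

# Load a finetuned model from a custom checkpoint. The model_type must match the
# image encoder that was used for finetuning, otherwise the weights will not fit.
predictor = get_sam_model(
    model_type="vit_b",
    checkpoint_path="checkpoints/sam_finetuned/best.pt",
)
```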

    \n\n

    5. What is the background of the new AIS (Automatic Instance Segmentation) feature in micro_sam?

    \n\n

    micro_sam introduces a new segmentation decoder to the Segment Anything backbone to enable faster and more accurate automatic instance segmentation. It predicts the distances to the object center and boundary as well as the foreground, and performs seeded watershed-based postprocessing to obtain the instances.

    \n\n

    6. I want to finetune only the Segment Anything model without the additional instance decoder.

    \n\n

    The instance segmentation decoder is optional. So you can finetune either SAM alone, or SAM together with the additional decoder. Finetuning with the decoder will increase training times, but will enable you to use AIS. See this example for finetuning with both objectives.

    \n\n
    \n

    NOTE: To try out the other way round (i.e. the automatic instance segmentation framework without the interactive capability, i.e. a UNETR: a vision transformer encoder and a convolutional decoder), you can take inspiration from this example on LIVECell.

    \n
    \n\n

    7. I have a NVIDIA RTX 4090Ti GPU with 24GB VRAM. Can I finetune Segment Anything?

    \n\n

    Finetuning Segment Anything is possible on most consumer-grade GPU and CPU resources (though training is a lot slower on the CPU). For the mentioned resource, it should be possible to finetune a ViT Base (also abbreviated as vit_b) by reducing the number of objects per image to 15. This parameter has the biggest impact on the VRAM consumption and quality of the finetuned model. You can find an overview of the resources we have tested for finetuning here. We also provide the convenience function micro_sam.training.train_sam_for_configuration that selects the best training settings for these configurations. This function is also used by the finetuning UI.

    \n\n

    8. I want to create a dataloader for my data, to finetune Segment Anything.

    \n\n

    Thanks to torch-em, a) Creating PyTorch datasets and dataloaders using the python library is convenient and supported for various data formats and data structures.\nSee the tutorial notebook on how to create dataloaders using torch-em and the documentation for details on creating your own datasets and dataloaders; and b) finetuning using the napari tool eases the aforementioned process, by allowing you to add the input parameters (path to the directory for inputs and labels etc.) directly in the tool.

    \n\n
    \n

    NOTE: If you have images with large input shapes and a sparse density of instance segmentations, we recommend using a sampler to choose patches with valid segmentations for finetuning (see the example for the PlantSeg (Root) specialist model in micro_sam).
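As a sketch of what this can look like: it assumes that default_sam_loader forwards a sampler argument to the underlying torch-em loader, and that the MinInstanceSampler class and its argument name are as in torch-em (treat the exact names as assumptions; paths and values are placeholders):

```python
from torch_em.data.sampler import MinInstanceSampler
from micro_sam.training import default_sam_loader

# Only accept patches that contain at least two segmented instances, so that
# sparsely annotated regions of large images are skipped during finetuning.
train_loader = default_sam_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1,
    with_segmentation_decoder=True,
    sampler=MinInstanceSampler(min_num_instances=2),
)
```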

    \n
    \n\n

    9. How can I evaluate a model I have finetuned?

    \n\n

    To validate a Segment Anything model for your data, you have different options, depending on the task you want to solve and whether you have segmentation annotations for your data.

    \n\n\n\n

    We provide an example notebook that shows how to use this evaluation functionality.
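For the quantitative comparison against annotations, a rough sketch with micro_sam.evaluation.evaluation.run_evaluation (listed in the library documentation below) could look like this; the exact argument names are assumptions and the folders are placeholders:

```python
from glob import glob
from micro_sam.evaluation.evaluation import run_evaluation

# Paths to ground-truth label images and to the corresponding predicted
# segmentations (the two lists must be in matching order).
gt_paths = sorted(glob("data/val/labels/*.tif"))
prediction_paths = sorted(glob("predictions/*.tif"))

# Computes segmentation accuracy metrics and saves them to a csv file.
results = run_evaluation(gt_paths, prediction_paths, save_path="evaluation.csv")
print(results)
```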

    \n\n

    Contribution Guide

    \n\n\n\n

    Discuss your ideas

    \n\n

    We welcome new contributions! First, discuss your idea by opening a new issue in micro-sam.\nThis allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

    \n\n

    Clone the repository

    \n\n

    We use git for version control.

    \n\n

    Clone the repository, and checkout the development branch:

    \n\n
    \n
    $ git clone https://github.com/computational-cell-analytics/micro-sam.git
    $ cd micro-sam
    $ git checkout dev
    \n
    \n\n

    Create your development environment

    \n\n

    We use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.

    \n\n

    Now you can create the environment, install user and developer dependencies, and micro-sam as an editable installation:

    \n\n
    \n
    $ mamba env create -f environment_gpu.yaml
    $ mamba activate sam
    $ python -m pip install -r requirements-dev.txt
    $ python -m pip install -e .
    \n
    \n\n

    Make your changes

    \n\n

    Now it's time to make your code changes.

    \n\n

    Typically, changes are made branching off from the development branch. Checkout dev and then create a new branch to work on your changes.

    \n\n
    $ git checkout dev
    $ git checkout -b my-new-feature
    \n\n

    We use google style python docstrings to create documentation for all new code.

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Testing

    \n\n

    Run the tests

    \n\n

    The tests for micro-sam are run with pytest

    \n\n

    To run the tests:

    \n\n
    \n
    $ pytest\n
    \n
    \n\n

    Writing your own tests

    \n\n

    If you have written new code, you will need to write tests to go with it.

    \n\n

    Unit tests

    \n\n

    Unit tests are the preferred style of tests for user contributions. Unit tests check small, isolated parts of the code for correctness. If your code is too complicated to write unit tests easily, you may need to consider breaking it up into smaller functions that are easier to test.

    \n\n

    Tests involving napari

    \n\n

    In cases where tests must use the napari viewer, these tips might be helpful (in particular, the make_napari_viewer_proxy fixture).

    \n\n

    These kinds of tests should be used only in limited circumstances. Developers are advised to prefer smaller unit tests, and avoid integration tests wherever possible.

    \n\n

    Code coverage

    \n\n

    Pytest uses the pytest-cov plugin to automatically determine which lines of code are covered by tests.

    \n\n

    A short summary report is printed to the terminal output whenever you run pytest. The full results are also automatically written to a file named coverage.xml.

    \n\n

    The Coverage Gutters VSCode extension is useful for visualizing which parts of the code need better test coverage. PyCharm professional has a similar feature, and you may be able to find similar tools for your preferred editor.

    \n\n

    We also use codecov.io to display the code coverage results from our Github Actions continuous integration.

    \n\n

    Open a pull request

    \n\n

    Once you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.

    \n\n

    Remember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev branch like this.

    \n\n

    Optional: Build the documentation

    \n\n

    We use pdoc to build the documentation.

    \n\n

    To build the documentation locally, run this command:

    \n\n
    \n
    $ python build_doc.py\n
    \n
    \n\n

    This will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).

    \n\n

    If you want to save the HTML files, append --out to the command, like this:

    \n\n
    \n
    $ python build_doc.py --out\n
    \n
    \n\n

    This will save the HTML files into a new directory named tmp.

    \n\n

    You can add content to the documentation in two ways:

    \n\n
      \n
    1. By adding or updating google style python docstrings in the micro-sam code.\n
        \n
      • pdoc will automatically find and include docstrings in the documentation.
      • \n
    2. \n
    3. By adding or editing markdown files in the micro-sam doc directory.\n
        \n
      • If you add a new markdown file to the documentation, you must tell pdoc that it exists by adding a line to the micro_sam/__init__.py module docstring (eg: .. include:: ../doc/my_amazing_new_docs_page.md). Otherwise it will not be included in the final documentation build!
      • \n
    4. \n
    \n\n

    Optional: Benchmark performance

    \n\n

    There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.

    \n\n\n\n

    Run the benchmark script

    \n\n

    There is a performance benchmark script available in the micro-sam repository at development/benchmark.py.

    \n\n

    To run the benchmark script:

    \n\n
    \n
    $ python development/benchmark.py --model_type vit_t --device cpu
    \n
    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    \n
    $ python development/benchmark.py --help\n
    \n
    \n\n

    Line profiling

    \n\n

    For more detailed line by line performance results, we can use line-profiler.

    \n\n
    \n

    line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

    \n
    \n\n

    To do line-by-line profiling:

    \n\n
      \n
    1. Ensure you have line profiler installed: python -m pip install line_profiler
    2. \n
    3. Add @profile decorator to any function in the call stack
    4. \n
    5. Run kernprof -lv benchmark.py --model_type vit_t --device cpu
    6. \n
    \n\n

    For more details about how to use line-profiler and kernprof, see the documentation.

    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    \n
    $ python development/benchmark.py --help\n
    \n
    \n\n

    Snakeviz visualization

    \n\n

    For more detailed visualizations of profiling results, we use snakeviz.

    \n\n
    \n

    SnakeViz is a browser based graphical viewer for the output of Python's cProfile module.

    \n
    \n\n
      \n
    1. Ensure you have snakeviz installed: python -m pip install snakeviz
    2. \n
    3. Generate profile file: python -m cProfile -o program.prof benchmark.py --model_type vit_h --device cpu
    4. \n
    5. Visualize profile file: snakeviz program.prof
    6. \n
    \n\n

    For more details about how to use snakeviz, see the documentation.

    \n\n

    Memory profiling with memray

    \n\n

    If you need to investigate memory use specifically, we use memray.

    \n\n
    \n

    Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.

    \n
    \n\n

    For more details about how to use memray, see the documentation.

    \n\n

    Creating a new release

    \n\n

    To create a new release you have to edit the version number in micro_sam/__version__.py in a PR. After merging this PR the release will automatically be done by the CI.

    \n\n

    Using micro_sam on BAND

    \n\n

    BAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam is installed there.\nIn order to use BAND and start micro_sam on it follow these steps:

    \n\n

    Start BAND

    \n\n\n\n

    \"image\"

    \n\n

    Start micro_sam in BAND

    \n\n\n\n

    Transfering data to BAND

    \n\n

    To copy data to and from BAND you can use any cloud storage, e.g. ownCloud, dropbox or google drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit differently in BAND:

    \n\n\n\n

    The video below shows how to copy over a link from owncloud and then download the data on BAND using copy and paste:

    \n\n

    https://github.com/computational-cell-analytics/micro-sam/assets/4263537/825bf86e-017e-41fc-9e42-995d21203287

    \n"}, {"fullname": "micro_sam.automatic_segmentation", "modulename": "micro_sam.automatic_segmentation", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.automatic_segmentation.get_predictor_and_segmenter", "modulename": "micro_sam.automatic_segmentation", "qualname": "get_predictor_and_segmenter", "kind": "function", "doc": "

    Get the Segment Anything model and class for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The Segment Anything model.\n The automatic instance segmentation class.

    \n
    \n", "signature": "(\tmodel_type: str,\tcheckpoint: Union[os.PathLike, str, NoneType] = None,\tdevice: str = None,\tamg: Optional[bool] = None,\tis_tiled: bool = False,\t**kwargs) -> Tuple[mobile_sam.predictor.SamPredictor, Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]]:", "funcdef": "def"}, {"fullname": "micro_sam.automatic_segmentation.automatic_instance_segmentation", "modulename": "micro_sam.automatic_segmentation", "qualname": "automatic_instance_segmentation", "kind": "function", "doc": "

    Run automatic segmentation for the input image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segmentation result.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tinput_path: Union[os.PathLike, str, numpy.ndarray],\toutput_path: Union[os.PathLike, str, NoneType] = None,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\tkey: Optional[str] = None,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\t**generate_kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio", "modulename": "micro_sam.bioimageio", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export", "modulename": "micro_sam.bioimageio.model_export", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export.DEFAULTS", "modulename": "micro_sam.bioimageio.model_export", "qualname": "DEFAULTS", "kind": "variable", "doc": "

    \n", "default_value": "{'authors': [Author(affiliation='University Goettingen', email=None, orcid=None, name='Anwai Archit', github_user='anwai98'), Author(affiliation='University Goettingen', email=None, orcid=None, name='Constantin Pape', github_user='constantinpape')], 'description': 'Finetuned Segment Anything Model for Microscopy', 'cite': [CiteEntry(text='Archit et al. Segment Anything for Microscopy', doi='10.1101/2023.08.21.554208', url=None)], 'tags': ['segment-anything', 'instance-segmentation']}"}, {"fullname": "micro_sam.bioimageio.model_export.export_sam_model", "modulename": "micro_sam.bioimageio.model_export", "qualname": "export_sam_model", "kind": "function", "doc": "

    Export SAM model to BioImage.IO model format.

    \n\n

    The exported model can be uploaded to bioimage.io and\nbe used in tools that support the BioImage.IO model format.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\timage: numpy.ndarray,\tlabel_image: numpy.ndarray,\tmodel_type: str,\tname: str,\toutput_path: Union[str, os.PathLike],\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor", "kind": "class", "doc": "

    Wrapper around the SamPredictor.

    \n\n

    This model supports the same functionality as SamPredictor and can provide mask segmentations\nfrom box, point or mask input prompts.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.__init__", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(model_type: str)"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.sam", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.load_state_dict", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.load_state_dict", "kind": "function", "doc": "

    Copy parameters and buffers from state_dict into this module and its descendants.

    \n\n

    If strict is True, then\nthe keys of state_dict must exactly match the keys returned\nby this module's ~torch.nn.Module.state_dict() function.

    \n\n
    \n\n

    If assign is True the optimizer must be created after\nthe call to load_state_dict unless\n~torch.__future__.get_swap_module_params_on_conversion() is True.

    \n\n
    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    NamedTuple with missing_keys and unexpected_keys fields:\n * missing_keys is a list of str containing any keys that are expected\n by this module but missing from the provided state_dict.\n * unexpected_keys is a list of str containing the keys that are not\n expected by this module but present in the provided state_dict.

    \n
    \n\n
    Note:
    \n\n
    \n

    If a parameter or buffer is registered as None and its corresponding key\n exists in state_dict, load_state_dict() will raise a\n RuntimeError.

    \n
    \n", "signature": "(self, state):", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.forward", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.forward", "kind": "function", "doc": "
    Arguments:
    \n\n\n\n

    Returns:

    \n", "signature": "(\tself,\timage: torch.Tensor,\tbox_prompts: Optional[torch.Tensor] = None,\tpoint_prompts: Optional[torch.Tensor] = None,\tpoint_labels: Optional[torch.Tensor] = None,\tmask_prompts: Optional[torch.Tensor] = None,\tembeddings: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation", "modulename": "micro_sam.evaluation", "kind": "module", "doc": "

    Functionality for evaluating Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.benchmark_datasets", "modulename": "micro_sam.evaluation.benchmark_datasets", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.LM_2D_DATASETS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "LM_2D_DATASETS", "kind": "variable", "doc": "

    \n", "default_value": "['livecell', 'deepbacs', 'tissuenet', 'neurips_cellseg', 'dynamicnuclearnet', 'hpa', 'covid_if', 'pannuke', 'lizard', 'orgasegment', 'omnipose', 'dic_hepg2']"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.LM_3D_DATASETS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "LM_3D_DATASETS", "kind": "variable", "doc": "

    \n", "default_value": "['plantseg_root', 'plantseg_ovules', 'gonuclear', 'mouse_embryo', 'embegseg', 'cellseg3d']"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.EM_2D_DATASETS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "EM_2D_DATASETS", "kind": "variable", "doc": "

    \n", "default_value": "['mitolab_tem']"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.EM_3D_DATASETS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "EM_3D_DATASETS", "kind": "variable", "doc": "

    \n", "default_value": "['mitoem_rat', 'mitoem_human', 'platynereis_nuclei', 'lucchi', 'mitolab', 'nuc_mm_mouse', 'num_mm_zebrafish', 'uro_cell', 'sponge_em', 'platynereis_cilia', 'vnc', 'asem_mito']"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.DATASET_RETURNS_FOLDER", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "DATASET_RETURNS_FOLDER", "kind": "variable", "doc": "

    \n", "default_value": "{'deepbacs': '*.tif'}"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.DATASET_CONTAINER_KEYS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "DATASET_CONTAINER_KEYS", "kind": "variable", "doc": "

    \n", "default_value": "{'lucchi': ['raw', 'labels']}"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.run_benchmark_evaluations", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "run_benchmark_evaluations", "kind": "function", "doc": "

    Run evaluation for benchmarking Segment Anything models on microscopy datasets.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tinput_folder: Union[os.PathLike, str],\tdataset_choice: str,\tmodel_type: str = 'vit_l',\toutput_folder: Union[os.PathLike, str, NoneType] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\trun_amg: bool = False,\tretain: Optional[List[str]] = None,\tignore_warnings: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation", "modulename": "micro_sam.evaluation.evaluation", "kind": "module", "doc": "

    Evaluation functionality for segmentation predictions from micro_sam.evaluation.automatic_mask_generation\nand micro_sam.evaluation.inference.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation", "kind": "function", "doc": "

    Run evaluation for instance segmentation predictions.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_paths: List[Union[str, os.PathLike]],\tsave_path: Union[str, os.PathLike, NoneType] = None,\tverbose: bool = True) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation_for_iterative_prompting", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation_for_iterative_prompting", "kind": "function", "doc": "

    Run evaluation for iterative prompt-based segmentation predictions.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_root: Union[os.PathLike, str],\texperiment_folder: Union[os.PathLike, str],\tstart_with_box_prompt: bool = False,\toverwrite_results: bool = False) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments", "modulename": "micro_sam.evaluation.experiments", "kind": "module", "doc": "

    Predefined experiment settings for experiments with different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.experiments.ExperimentSetting", "modulename": "micro_sam.evaluation.experiments", "qualname": "ExperimentSetting", "kind": "variable", "doc": "

    \n", "default_value": "typing.Dict"}, {"fullname": "micro_sam.evaluation.experiments.full_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "full_experiment_settings", "kind": "function", "doc": "

    The full experiment settings.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "(\tuse_boxes: bool = False,\tpositive_range: Optional[List[int]] = None,\tnegative_range: Optional[List[int]] = None) -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.default_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "default_experiment_settings", "kind": "function", "doc": "

    The three default experiment settings.

    \n\n

    For the default experiments we use a single positive prompt,\ntwo positive and four negative prompts and box prompts.

    \n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "() -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.get_experiment_setting_name", "modulename": "micro_sam.evaluation.experiments", "qualname": "get_experiment_setting_name", "kind": "function", "doc": "

    Get the name for the given experiment setting.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The name for this experiment setting.

    \n
    \n", "signature": "(setting: Dict) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference", "modulename": "micro_sam.evaluation.inference", "kind": "module", "doc": "

    Inference with Segment Anything models and different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_embeddings", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_embeddings", "kind": "function", "doc": "

    Precompute all image embeddings.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_prompts", "kind": "function", "doc": "

    Precompute all point prompts.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprompt_save_dir: Union[str, os.PathLike],\tprompt_settings: List[Dict[str, Any]]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_prompts", "kind": "function", "doc": "

    Run Segment Anything inference for multiple images using prompts derived from ground-truth.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: int,\tn_negatives: int,\tdilation: int = 5,\tprompt_save_dir: Union[str, os.PathLike, NoneType] = None,\tbatch_size: int = 512) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_iterative_prompting", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_iterative_prompting", "kind": "function", "doc": "

    Run Segment Anything inference for multiple images using prompts iteratively\nderived from model outputs and ground-truth.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tstart_with_box_prompt: bool = True,\tdilation: int = 5,\tbatch_size: int = 32,\tn_iterations: int = 8,\tuse_masks: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_amg", "modulename": "micro_sam.evaluation.inference", "qualname": "run_amg", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tpeft_kwargs: Optional[Dict] = None) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.inference", "qualname": "run_instance_segmentation_with_decoder", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tpeft_kwargs: Optional[Dict] = None) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation", "modulename": "micro_sam.evaluation.instance_segmentation", "kind": "module", "doc": "

    Inference and evaluation for the automatic instance segmentation functionality.

    \n"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_amg", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_amg", "kind": "function", "doc": "

    Default grid-search parameter for AMG-based instance segmentation.

    \n\n

    Return grid search values for the two most important parameters:

    \n\n\n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Default grid-search parameter for decoder-based instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for automatic mask generation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\n'grid_search_values' argument. For example, to run a grid search over the parameters 'pred_iou_thresh'\nand 'stability_score_thresh', you can pass the following:

    \n\n
    grid_search_values = {\n    \"pred_iou_thresh\": [0.6, 0.7, 0.8, 0.9],\n    \"stability_score_thresh\": [0.6, 0.7, 0.8, 0.9],\n}\n
    \n\n

    All combinations of the parameters will be checked.

    \n\n

    You can use the functions default_grid_search_values_instance_segmentation_with_decoder\nor default_grid_search_values_amg to get the default grid search parameters for the two\nrespective instance segmentation methods.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tresult_dir: Union[str, os.PathLike],\tembedding_dir: Union[str, os.PathLike, NoneType],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = False,\timage_key: Optional[str] = None,\tgt_key: Optional[str] = None,\trois: Optional[Tuple[slice, ...]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_inference", "kind": "function", "doc": "

    Run inference for automatic mask generation.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tgenerate_kwargs: Optional[Dict[str, Any]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.evaluate_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "evaluate_instance_segmentation_grid_search", "kind": "function", "doc": "

    Evaluate grid-search results.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The best parameter setting.\n The evaluation score for the best setting.

    \n
    \n", "signature": "(\tresult_dir: Union[str, os.PathLike],\tgrid_search_parameters: List[str],\tcriterion: str = 'mSA') -> Tuple[Dict[str, Any], float]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.save_grid_search_best_params", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "save_grid_search_best_params", "kind": "function", "doc": "

    \n", "signature": "(best_kwargs, best_msa, grid_search_result_dir=None):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search_and_inference", "kind": "function", "doc": "

    Run grid search and inference for automatic mask generation.

    \n\n

    Please refer to the documentation of run_instance_segmentation_grid_search\nfor details on how to specify the grid search parameters.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell", "modulename": "micro_sam.evaluation.livecell", "kind": "module", "doc": "

    Inference and evaluation for the LIVECell dataset and\nthe different cell lines contained in it.

    \n"}, {"fullname": "micro_sam.evaluation.livecell.CELL_TYPES", "modulename": "micro_sam.evaluation.livecell", "qualname": "CELL_TYPES", "kind": "variable", "doc": "

    \n", "default_value": "['A172', 'BT474', 'BV2', 'Huh7', 'MCF7', 'SHSY5Y', 'SkBr3', 'SKOV3']"}, {"fullname": "micro_sam.evaluation.livecell.livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "livecell_inference", "kind": "function", "doc": "

    Run inference for livecell with a fixed prompt setting.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: Optional[int] = None,\tn_negatives: Optional[int] = None,\tprompt_folder: Union[os.PathLike, str, NoneType] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_precompute_embeddings", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_precompute_embeddings", "kind": "function", "doc": "

    Run precomputation of val and test image embeddings for livecell.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tn_val_per_cell_type: int = 25) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_iterative_prompting", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_iterative_prompting", "kind": "function", "doc": "

    Run inference on livecell with iterative prompting setting.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tstart_with_box: bool = False,\tuse_masks: bool = False) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_amg", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_amg", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Run instance segmentation with decoder grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_inference", "kind": "function", "doc": "

    Run LIVECell inference with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_evaluation", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_evaluation", "kind": "function", "doc": "

    Run LIVECell evaluation with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "kind": "module", "doc": "

    Functionality for qualitative comparison of Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.model_comparison.generate_data_for_model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "generate_data_for_model_comparison", "kind": "function", "doc": "

    Generate samples for qualitative model comparison.

    \n\n

    This precomputes the input for model_comparison and model_comparison_with_napari.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tloader: torch.utils.data.dataloader.DataLoader,\toutput_folder: Union[str, os.PathLike],\tmodel_type1: str,\tmodel_type2: str,\tn_samples: int,\tmodel_type3: Optional[str] = None,\tcheckpoint1: Union[str, os.PathLike, NoneType] = None,\tcheckpoint2: Union[str, os.PathLike, NoneType] = None,\tcheckpoint3: Union[str, os.PathLike, NoneType] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison", "kind": "function", "doc": "

    Create images for a qualitative model comparison.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\toutput_folder: Union[str, os.PathLike],\tn_images_per_sample: int,\tmin_size: int,\tplot_folder: Union[str, os.PathLike, NoneType] = None,\tpoint_radius: int = 4,\toutline_dilation: int = 0,\thave_model3=False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison_with_napari", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison_with_napari", "kind": "function", "doc": "

    Use napari to display the qualitative comparison results for two models.

    \n\n
    Arguments:
    \n\n\n", "signature": "(output_folder: Union[str, os.PathLike], show_points: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.default_grid_search_values_multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "default_grid_search_values_multi_dimensional_segmentation", "kind": "function", "doc": "

    Default grid-search parameters for multi-dimensional prompt-based instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_threshold_values: Optional[List[float]] = None,\tprojection_method_values: Union[str, dict, NoneType] = None,\tbox_extension_values: Union[int, float, NoneType] = None) -> Dict[str, List]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.segment_slices_from_ground_truth", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "segment_slices_from_ground_truth", "kind": "function", "doc": "

    Segment all objects in a volume by prompt-based segmentation in one slice per object.

    \n\n

    This function first segments each object in its specified slice using the interactive\n(prompt-based) segmentation functionality. Then it segments that object in the\nremaining slices of the volume.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\tsave_path: Union[os.PathLike, str, NoneType] = None,\tiou_threshold: float = 0.8,\tprojection: Union[str, dict] = 'mask',\tbox_extension: Union[float, int] = 0.025,\tdevice: Union[str, torch.device] = None,\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\treturn_segmentation: bool = False,\tmin_size: int = 0) -> Union[float, Tuple[numpy.ndarray, float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.run_multi_dimensional_segmentation_grid_search", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "run_multi_dimensional_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for prompt-based multi-dimensional instance segmentation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\ngrid_search_values argument. For example, to run a grid search over the parameters iou_threshold,\nprojection and box_extension, you can pass the following:

    \n\n
    grid_search_values = {\n    \"iou_threshold\": [0.5, 0.6, 0.7, 0.8, 0.9],\n    \"projection\": [\"mask\", \"bounding_box\", \"points\"],\n    \"box_extension\": [0, 0.1, 0.2, 0.3, 0.4, 0.5],\n}\n
    \n\n

    All combinations of the parameters will be checked.\nIf passed None, the function default_grid_search_values_multi_dimensional_segmentation is used\nto get the default grid search parameters for the instance segmentation method.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tembedding_path: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\tgrid_search_values: Optional[Dict[str, List]] = None,\tmin_size: int = 0):", "funcdef": "def"}, {"fullname": "micro_sam.inference", "modulename": "micro_sam.inference", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.inference.batched_inference", "modulename": "micro_sam.inference", "qualname": "batched_inference", "kind": "function", "doc": "

    Run batched inference for input prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage: numpy.ndarray,\tbatch_size: int,\tboxes: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tpoint_labels: Optional[numpy.ndarray] = None,\tmultimasking: bool = False,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\treturn_instance_segmentation: bool = True,\tsegmentation_ids: Optional[list] = None,\treduce_multimasking: bool = True,\tlogits_masks: Optional[torch.Tensor] = None,\tverbose_embeddings: bool = True):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation", "modulename": "micro_sam.instance_segmentation", "kind": "module", "doc": "

    Automated instance segmentation functionality.\nThe classes implemented here extend the automatic instance segmentation from Segment Anything:\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html

    \n"}, {"fullname": "micro_sam.instance_segmentation.mask_data_to_segmentation", "modulename": "micro_sam.instance_segmentation", "qualname": "mask_data_to_segmentation", "kind": "function", "doc": "

    Convert the output of the automatic mask generation to an instance segmentation.

    \n\n
    Args:\n    masks: The outputs generated by AutomaticMaskGenerator or EmbeddingMaskGenerator.\n        Only supports output_mode=binary_mask.\n    with_background: Whether the segmentation has background. If yes this function assures that the largest\n        object in the output will be mapped to zero (the background value).\n    min_object_size: The minimal size of an object in pixels.\n    max_object_size: The maximal size of an object in pixels.\n    label_masks: Whether to apply connected components to the result before removing small objects.\n
    \n\n

    Returns:\n        The instance segmentation.\n

    \n
    \n
    \n", "signature": "(\tmasks: List[Dict[str, Any]],\twith_background: bool,\tmin_object_size: int = 0,\tmax_object_size: Optional[int] = None,\tlabel_masks: bool = True) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase", "kind": "class", "doc": "

    Base class for the automatic mask generators.

    \n", "bases": "abc.ABC"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_list", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_list", "kind": "variable", "doc": "

    The list of mask data after initialization.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_boxes", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_boxes", "kind": "variable", "doc": "

    The list of crop boxes.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.original_size", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.original_size", "kind": "variable", "doc": "

    The original image size.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.get_state", "kind": "function", "doc": "

    Get the initialized state of the mask generator.

    \n\n
    Returns:
    \n\n
    \n

    State of the mask generator.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.set_state", "kind": "function", "doc": "

    Set the state of the mask generator.

    \n\n
    Arguments:
    \n\n\n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.clear_state", "kind": "function", "doc": "

    Clear the state of the mask generator.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    This class implements the same logic as\nhttps://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py\nIt decouples the computationally expensive mask generation from the cheap post-processing that filters\nthe masks, which enables grid search and interactively changing the post-processing.

    \n\n

    Use this class as follows:

    \n\n
    \n
    amg = AutomaticMaskGenerator(predictor)\namg.initialize(image)  # Initialize the masks; this takes care of all expensive computations.\nmasks = amg.generate(pred_iou_thresh=0.8)  # Generate the masks. This is fast and enables testing parameters.\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "AMGBase"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: Optional[int] = None,\tcrop_n_layers: int = 0,\tcrop_overlap_ratio: float = 0.3413333333333333,\tcrop_n_points_downscale_factor: int = 1,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tbox_nms_thresh: float = 0.7,\tcrop_nms_thresh: float = 0.7,\tmin_mask_region_area: int = 0,\toutput_mode: str = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    Implements the same functionality as AutomaticMaskGenerator but for tiled embeddings.

    \n\n
    Arguments:
    \n\n\n", "bases": "AutomaticMaskGenerator"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: int = 64,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter", "kind": "class", "doc": "

    Adapter to contain the UNETR decoder in a single module.

    \n\n

    To apply the decoder on top of pre-computed embeddings for\nthe segmentation functionality.\nSee also: https://github.com/constantinpape/torch-em/blob/main/torch_em/model/unetr.py

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(unetr)"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.base", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.base", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.out_conv", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.out_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv_out", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv_out", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder_head", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder_head", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.final_activation", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.final_activation", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.postprocess_masks", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.postprocess_masks", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv1", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv1", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv2", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv3", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv3", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv4", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv4", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.forward", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, input_, input_shape, original_shape):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_unetr", "modulename": "micro_sam.instance_segmentation", "qualname": "get_unetr", "kind": "function", "doc": "

    Get UNETR model for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The UNETR model.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: Optional[collections.OrderedDict[str, torch.Tensor]] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> torch.nn.modules.module.Module:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_decoder", "kind": "function", "doc": "

    Get the decoder to predict outputs for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The decoder for instance segmentation.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: collections.OrderedDict[str, torch.Tensor],\tdevice: Union[str, torch.device, NoneType] = None) -> micro_sam.instance_segmentation.DecoderAdapter:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_predictor_and_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_predictor_and_decoder", "kind": "function", "doc": "

    Load the SAM model (predictor) and instance segmentation decoder.

    \n\n

    This requires a checkpoint that contains the state for both predictor\nand decoder.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The SAM predictor.\n The decoder for instance segmentation.

    \n
    \n", "signature": "(\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tdevice: Union[str, torch.device, NoneType] = None,\tpeft_kwargs: Optional[Dict] = None) -> Tuple[segment_anything.predictor.SamPredictor, micro_sam.instance_segmentation.DecoderAdapter]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a decoder.

    \n\n

    Implements the same interface as AutomaticMaskGenerator.

    \n\n

    Use this class as follows:

    \n\n
    \n
    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)\nsegmenter.initialize(image)   # Predict the image embeddings and decoder outputs.\nmasks = segmenter.generate(center_distance_threshold=0.75)  # Generate the instance segmentation.\n
    \n
    \n\n
    Arguments:
    \n\n\n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tcenter_distance_threshold: float = 0.5,\tboundary_distance_threshold: float = 0.5,\tforeground_threshold: float = 0.5,\tforeground_smoothing: float = 1.0,\tdistance_smoothing: float = 1.6,\tmin_size: int = 0,\toutput_mode: Optional[str] = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.get_state", "kind": "function", "doc": "

    Get the initialized state of the instance segmenter.

    \n\n
    Returns:
    \n\n
    \n

    Instance segmentation state.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.set_state", "kind": "function", "doc": "

    Set the state of the instance segmenter.

    \n\n
    Arguments:
    \n\n\n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.clear_state", "kind": "function", "doc": "

    Clear the state of the instance segmenter.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Same as InstanceSegmentationWithDecoder but for tiled image embeddings.

    \n", "bases": "InstanceSegmentationWithDecoder"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_amg", "modulename": "micro_sam.instance_segmentation", "qualname": "get_amg", "kind": "function", "doc": "

    Get the automatic mask generator class.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The automatic mask generator.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tis_tiled: bool,\tdecoder: Optional[torch.nn.modules.module.Module] = None,\t**kwargs) -> Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]:", "funcdef": "def"}, {"fullname": "micro_sam.models", "modulename": "micro_sam.models", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.build_sam", "modulename": "micro_sam.models.build_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.build_sam.build_sam_vit_h", "modulename": "micro_sam.models.build_sam", "qualname": "build_sam_vit_h", "kind": "function", "doc": "

    \n", "signature": "(checkpoint=None, num_multimask_outputs=3, image_size=1024):", "funcdef": "def"}, {"fullname": "micro_sam.models.build_sam.build_sam", "modulename": "micro_sam.models.build_sam", "qualname": "build_sam", "kind": "function", "doc": "

    \n", "signature": "(checkpoint=None, num_multimask_outputs=3, image_size=1024):", "funcdef": "def"}, {"fullname": "micro_sam.models.build_sam.build_sam_vit_l", "modulename": "micro_sam.models.build_sam", "qualname": "build_sam_vit_l", "kind": "function", "doc": "

    \n", "signature": "(checkpoint=None, num_multimask_outputs=3, image_size=1024):", "funcdef": "def"}, {"fullname": "micro_sam.models.build_sam.build_sam_vit_b", "modulename": "micro_sam.models.build_sam", "qualname": "build_sam_vit_b", "kind": "function", "doc": "

    \n", "signature": "(checkpoint=None, num_multimask_outputs=3, image_size=1024):", "funcdef": "def"}, {"fullname": "micro_sam.models.build_sam.sam_model_registry", "modulename": "micro_sam.models.build_sam", "qualname": "sam_model_registry", "kind": "variable", "doc": "

    \n", "default_value": "{'default': <function build_sam_vit_h>, 'vit_h': <function build_sam_vit_h>, 'vit_l': <function build_sam_vit_l>, 'vit_b': <function build_sam_vit_b>}"}, {"fullname": "micro_sam.models.peft_sam", "modulename": "micro_sam.models.peft_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery", "kind": "class", "doc": "

    Operates on the attention layers for performing low-rank adaptation.

    \n\n

    (Inspired from: https://github.com/JamesQFreeman/Sam_LoRA/)

    \n\n

    In SAM, it is implemented as:

    \n\n
    \n
    self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)\nB, N, C = x.shape\nqkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\nq, k, v = qkv.unbind(0)\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(rank: int, block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.qkv_proj", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.qkv_proj", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.dim", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.dim", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.w_a_linear_q", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.w_a_linear_q", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.w_b_linear_q", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.w_b_linear_q", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.w_a_linear_v", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.w_a_linear_v", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.w_b_linear_v", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.w_b_linear_v", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.reset_parameters", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.reset_parameters", "kind": "function", "doc": "

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery", "kind": "class", "doc": "

    Operates on the attention layers for performing factorized attention.

    \n\n

    (Inspired from: https://github.com/cchen-cc/MA-SAM/blob/main/MA-SAM/sam_fact_tt_image_encoder.py)

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\trank: int,\tblock: torch.nn.modules.module.Module,\tdropout: Optional[float] = 0.1)"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.qkv_proj", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.qkv_proj", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.dim", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.dim", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.q_FacTs", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.q_FacTs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.v_FacTs", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.v_FacTs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.dropout", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.dropout", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.FacTu", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.FacTu", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.FacTv", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.FacTv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery", "kind": "class", "doc": "

    Base class for selectively allowing gradient updates for certain parameters.

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery.block", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery.block", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery.allow_gradient_update_for_parameters", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery.allow_gradient_update_for_parameters", "kind": "function", "doc": "

    This function decides the parameter attributes to match for allowing gradient updates.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\tprefix: Optional[List[str]] = None,\tsuffix: Optional[List[str]] = None,\tinfix: Optional[List[str]] = None):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.AttentionSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "AttentionSurgery", "kind": "class", "doc": "

    Child class for allowing gradient updates for parameters in attention layers.

    \n", "bases": "SelectiveSurgery"}, {"fullname": "micro_sam.models.peft_sam.AttentionSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "AttentionSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.BiasSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "BiasSurgery", "kind": "class", "doc": "

    Child class for allowing gradient updates for bias parameters.

    \n", "bases": "SelectiveSurgery"}, {"fullname": "micro_sam.models.peft_sam.BiasSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "BiasSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.LayerNormSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "LayerNormSurgery", "kind": "class", "doc": "

    Child class for allowing gradient updates in normalization layers.

    \n", "bases": "SelectiveSurgery"}, {"fullname": "micro_sam.models.peft_sam.LayerNormSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "LayerNormSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam", "kind": "class", "doc": "

    Wraps the Segment Anything model's image encoder with different parameter-efficient finetuning methods.

    \n\n
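    For illustration, a minimal sketch of applying the wrapper (the checkpoint path is a placeholder, the underlying Sam model is loaded via the upstream segment_anything registry, and LoRASurgery is the default peft_module of this class):

    from segment_anything import sam_model_registry
    from micro_sam.models.peft_sam import PEFT_Sam, LoRASurgery

    # Load a plain SAM model through the upstream registry (placeholder checkpoint path).
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

    # Wrap the image encoder with rank-4 LoRA updates (LoRASurgery is the default peft_module).
    peft_sam = PEFT_Sam(model=sam, rank=4, peft_module=LoRASurgery)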
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tmodel: segment_anything.modeling.sam.Sam,\trank: int,\tpeft_module: torch.nn.modules.module.Module = <class 'micro_sam.models.peft_sam.LoRASurgery'>,\tattention_layers_to_update: List[int] = None,\t**module_kwargs)"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.peft_module", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.peft_module", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.peft_blocks", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.peft_blocks", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.sam", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, batched_input, multimask_output):", "funcdef": "def"}, {"fullname": "micro_sam.models.sam_3d_wrapper", "modulename": "micro_sam.models.sam_3d_wrapper", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.get_sam_3d_model", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "get_sam_3d_model", "kind": "function", "doc": "

    \n", "signature": "(\tdevice,\tn_classes,\timage_size,\tlora_rank=None,\tfreeze_encoder=False,\tmodel_type='vit_b',\tcheckpoint_path=None):", "funcdef": "def"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper.__init__", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper.__init__", "kind": "function", "doc": "

    Initializes the Sam3DWrapper object.

    \n\n
    Arguments:
    \n\n\n", "signature": "(sam_model: segment_anything.modeling.sam.Sam, freeze_encoder: bool)"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper.sam_model", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper.sam_model", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper.freeze_encoder", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper.freeze_encoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper.forward", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper.forward", "kind": "function", "doc": "

    Predict 3D masks for the current inputs.

    \n\n

    Unlike the original SAM, this model only supports automatic segmentation and does not support prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A list over input images, where each element is a dictionary with the following keys:\n 'masks': Mask prediction for this object.\n 'iou_predictions': IOU score prediction for this object.\n 'low_res_masks': Low resolution mask prediction for this object.

    \n
    \n", "signature": "(\tself,\tbatched_input: List[Dict[str, Any]],\tmultimask_output: bool) -> List[Dict[str, torch.Tensor]]:", "funcdef": "def"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper.__init__", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tnum_heads: int = 12,\tembed_dim: int = 768)"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper.image_encoder", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper.image_encoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper.img_size", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper.img_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper.forward", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x: torch.Tensor, d_size: int) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.__init__", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tblock: torch.nn.modules.module.Module,\tdim: int,\tnum_heads: int,\tnorm_layer: Type[torch.nn.modules.module.Module] = <class 'torch.nn.modules.normalization.LayerNorm'>,\tadapter_channels: int = 384)"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.block", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.block", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_channels", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_channels", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_linear_down", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_linear_down", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_linear_up", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_linear_up", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_conv", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_act", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_act", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_norm", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_norm", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_linear_down_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_linear_down_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_linear_up_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_linear_up_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_conv_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_conv_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_act_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_act_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_norm_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_norm_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.forward", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x: torch.Tensor, d_size) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.get_simple_sam_3d_model", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "get_simple_sam_3d_model", "kind": "function", "doc": "

    \n", "signature": "(\tdevice,\tn_classes,\timage_size,\tlora_rank=None,\tfreeze_encoder=False,\tmodel_type='vit_b',\tcheckpoint_path=None):", "funcdef": "def"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.__init__", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tin_channels,\tout_channels,\tkernel_size=(3, 3, 3),\tstride=(1, 1, 1),\tpadding=(1, 1, 1),\tbias=True,\tmode='nearest')"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.conv1", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.conv1", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.conv2", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.conv2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.downsample", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.downsample", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.leakyrelu", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.leakyrelu", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.up", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.up", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.forward", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead", "kind": "class", "doc": "

    A sequential container.

    \n\n

    Modules will be added to it in the order they are passed in the\nconstructor. Alternatively, an OrderedDict of modules can be\npassed in. The forward() method of Sequential accepts any\ninput and forwards it to the first module it contains. It then\n\"chains\" outputs to inputs sequentially for each subsequent module,\nfinally returning the output of the last module.

    \n\n

    The value a Sequential provides over manually calling a sequence\nof modules is that it allows treating the whole container as a\nsingle module, such that performing a transformation on the\nSequential applies to each of the modules it stores (which are\neach a registered submodule of the Sequential).

    \n\n

    What's the difference between a Sequential and a\ntorch.nn.ModuleList? A ModuleList is exactly what it\nsounds like--a list for storing Modules! On the other hand,\nthe layers in a Sequential are connected in a cascading way.

    \n\n

    Example::

    \n\n
    # Using Sequential to create a small model. When `model` is run,\n# input will first be passed to `Conv2d(1,20,5)`. The output of\n# `Conv2d(1,20,5)` will be used as the input to the first\n# `ReLU`; the output of the first `ReLU` will become the input\n# for `Conv2d(20,64,5)`. Finally, the output of\n# `Conv2d(20,64,5)` will be used as input to the second `ReLU`\nmodel = nn.Sequential(\n          nn.Conv2d(1,20,5),\n          nn.ReLU(),\n          nn.Conv2d(20,64,5),\n          nn.ReLU()\n        )\n\n# Using Sequential with OrderedDict. This is functionally the\n# same as the above code\nmodel = nn.Sequential(OrderedDict([\n          ('conv1', nn.Conv2d(1,20,5)),\n          ('relu1', nn.ReLU()),\n          ('conv2', nn.Conv2d(20,64,5)),\n          ('relu2', nn.ReLU())\n        ]))\n
    \n", "bases": "torch.nn.modules.container.Sequential"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead.__init__", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tin_channels,\tout_channels,\tkernel_size=(3, 3, 3),\tstride=(1, 1, 1),\tpadding=(1, 1, 1),\tbias=True)"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead.conv_pred", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead.conv_pred", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead.segmentation_head", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead.segmentation_head", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead.forward", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.__init__", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(sam, num_classes, freeze_encoder)"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.sam", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.freeze_encoder", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.freeze_encoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.decoders", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.decoders", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.out_conv", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.out_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.forward", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.forward", "kind": "function", "doc": "

    Predict 3D masks for the current inputs.

    \n\n

    Unlike the original SAM, this model only supports automatic segmentation and does not support prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A list over input images, where each element is a dictionary with the following keys:\n 'masks': Mask prediction for this object.

    \n
    \n", "signature": "(\tself,\tbatched_input: List[Dict[str, Any]],\tmultimask_output: bool) -> List[Dict[str, torch.Tensor]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "kind": "module", "doc": "

    Multi-dimensional segmentation with segment anything.

    \n"}, {"fullname": "micro_sam.multi_dimensional_segmentation.PROJECTION_MODES", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "PROJECTION_MODES", "kind": "variable", "doc": "

    \n", "default_value": "('box', 'mask', 'points', 'points_and_mask', 'single_point')"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_mask_in_volume", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_mask_in_volume", "kind": "function", "doc": "

    Segment an object mask in volumetric data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    Array with the volumetric segmentation.\n Tuple with the first and last segmented slice.

    \n
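    A hedged usage sketch: 'predictor', 'image_embeddings', 'volume_shape' and the initial 2d object mask are assumed to exist already (they are not constructed here), and the parameter values are only illustrative.

    import numpy as np
    from micro_sam.multi_dimensional_segmentation import segment_mask_in_volume

    # Assumed inputs: a SamPredictor ('predictor'), precomputed 'image_embeddings' for the
    # volume, and 'object_mask_2d', a binary 2d mask of the object in slice 5.
    segmentation = np.zeros(volume_shape, dtype="uint32")
    segmentation[5] = object_mask_2d

    volume_seg, (z_start, z_stop) = segment_mask_in_volume(
        segmentation=segmentation,
        predictor=predictor,
        image_embeddings=image_embeddings,
        segmented_slices=np.array([5]),  # slices that already contain the object mask
        stop_lower=False,
        stop_upper=False,
        iou_threshold=0.8,
        projection="mask",  # one of PROJECTION_MODES
    )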
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\tsegmented_slices: numpy.ndarray,\tstop_lower: bool,\tstop_upper: bool,\tiou_threshold: float,\tprojection: Union[str, dict],\tupdate_progress: Optional[<built-in function callable>] = None,\tbox_extension: float = 0.0,\tverbose: bool = False) -> Tuple[numpy.ndarray, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.merge_instance_segmentation_3d", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "merge_instance_segmentation_3d", "kind": "function", "doc": "

    Merge stacked 2d instance segmentations into a consistent 3d segmentation.

    \n\n

    Solves a multicut problem based on the overlap of objects to merge across z.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The merged segmentation.

    \n
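    For instance, a self-contained toy example (the tiny stacked segmentation below is only a stand-in for real per-slice 2d instance segmentations):

    import numpy as np
    from micro_sam.multi_dimensional_segmentation import merge_instance_segmentation_3d

    # Three 2d instance segmentations stacked along z; the same physical object has
    # different 2d ids in different slices (0 is background).
    slice_segmentation = np.zeros((3, 64, 64), dtype="uint32")
    slice_segmentation[0, 10:30, 10:30] = 1
    slice_segmentation[1, 12:32, 12:32] = 1
    slice_segmentation[2, 14:34, 14:34] = 2

    # Merge the overlapping 2d objects into one consistent 3d instance.
    merged = merge_instance_segmentation_3d(slice_segmentation, beta=0.5, with_background=True)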
    \n", "signature": "(\tslice_segmentation: numpy.ndarray,\tbeta: float = 0.5,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.automatic_3d_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "automatic_3d_segmentation", "kind": "function", "doc": "

    Segment volume in 3d.

    \n\n

    First segments slices individually in 2d and then merges them across 3d\nbased on overlap of objects between slices.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segmentation.

    \n
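    A rough end-to-end sketch; it assumes the helpers micro_sam.util.get_sam_model and micro_sam.instance_segmentation.AutomaticMaskGenerator (not documented in this entry) and uses random data as a stand-in for a real volume.

    import numpy as np
    from micro_sam import util, instance_segmentation
    from micro_sam.multi_dimensional_segmentation import automatic_3d_segmentation

    volume = np.random.rand(8, 256, 256).astype("float32")  # stand-in for real image data

    predictor = util.get_sam_model(model_type="vit_b")  # assumed helper for loading SAM
    segmentor = instance_segmentation.AutomaticMaskGenerator(predictor)  # assumed AMG class

    instances = automatic_3d_segmentation(
        volume=volume,
        predictor=predictor,
        segmentor=segmentor,
        embedding_path=None,  # optionally pass a path to cache the embeddings on disk
        gap_closing=2,
        min_z_extent=2,
    )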
    \n", "signature": "(\tvolume: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\tsegmentor: micro_sam.instance_segmentation.AMGBase,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\t**kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state", "modulename": "micro_sam.precompute_state", "kind": "module", "doc": "

    Precompute image embeddings and automatic mask generator state for image data.

    \n"}, {"fullname": "micro_sam.precompute_state.cache_amg_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_amg_state", "kind": "function", "doc": "

    Compute and cache or load the state for the automatic mask generator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The automatic mask generator class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.cache_is_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_is_state", "kind": "function", "doc": "

    Compute and cache or load the state for the automatic mask generator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\tskip_load: bool = False,\t**kwargs) -> Optional[micro_sam.instance_segmentation.AMGBase]:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.precompute_state", "modulename": "micro_sam.precompute_state", "qualname": "precompute_state", "kind": "function", "doc": "

    Precompute the image embeddings and other optional state for the input image(s).

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tinput_path: Union[os.PathLike, str],\toutput_path: Union[os.PathLike, str],\tpattern: Optional[str] = None,\tmodel_type: str = 'vit_l',\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\tkey: Optional[str] = None,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tprecompute_amg_state: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation", "modulename": "micro_sam.prompt_based_segmentation", "kind": "module", "doc": "

    Functions for prompt-based segmentation with Segment Anything.

    \n"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_points", "kind": "function", "doc": "

    Segmentation from point prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
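    A small sketch of a call; 'predictor' and 'image_embeddings' are assumed to have been prepared beforehand and the coordinates are purely illustrative.

    import numpy as np
    from micro_sam.prompt_based_segmentation import segment_from_points

    # Two positive point prompts on the object and one negative prompt on background
    # (label 1 = positive, 0 = negative).
    points = np.array([[120, 140], [130, 150], [40, 60]])
    labels = np.array([1, 1, 0])

    mask = segment_from_points(predictor, points, labels, image_embeddings=image_embeddings)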
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tuse_best_multimask: Optional[bool] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_mask", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_mask", "kind": "function", "doc": "

    Segmentation from a mask prompt.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tmask: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tuse_box: bool = True,\tuse_mask: bool = True,\tuse_points: bool = False,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\treturn_logits: bool = False,\tbox_extension: float = 0.0,\tbox: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tlabels: Optional[numpy.ndarray] = None,\tuse_single_point: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box", "kind": "function", "doc": "

    Segmentation from a box prompt.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
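    Analogous to the point-based example above, a hedged sketch with an illustrative box prompt ('predictor' and 'image_embeddings' are again assumed to exist):

    import numpy as np
    from micro_sam.prompt_based_segmentation import segment_from_box

    box = np.array([32, 32, 96, 96])  # hypothetical bounding box around the object
    mask = segment_from_box(predictor, box, image_embeddings=image_embeddings, box_extension=0.05)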
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tbox_extension: float = 0.0):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box_and_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box_and_points", "kind": "function", "doc": "

    Segmentation from a box prompt and point prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_generators", "modulename": "micro_sam.prompt_generators", "kind": "module", "doc": "

    Classes for generating prompts from ground-truth segmentation masks.\nFor training or evaluation of prompt-based segmentation.

    \n"}, {"fullname": "micro_sam.prompt_generators.PromptGeneratorBase", "modulename": "micro_sam.prompt_generators", "qualname": "PromptGeneratorBase", "kind": "class", "doc": "

    PromptGeneratorBase is an interface for implementing specific prompt generators.

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator", "kind": "class", "doc": "

    Generate point and/or box prompts from an instance segmentation.

    \n\n

    You can use this class to derive prompts from an instance segmentation, either for\nevaluation purposes or for training Segment Anything on custom data.\nIn order to use this generator you need to precompute the bounding boxes and center\ncoordinates of the instance segmentation, using e.g. util.get_centers_and_bounding_boxes.

    \n\n

    Here's an example for how to use this class:

    \n\n
    \n
    # Initialize generator for 1 positive and 4 negative point prompts.\nprompt_generator = PointAndBoxPromptGenerator(1, 4, dilation_strength=8)\n\n# Precompute the bounding boxes for the given segmentation\nbounding_boxes, _ = util.get_centers_and_bounding_boxes(segmentation)\n\n# generate point prompts for the objects with ids 1, 2 and 3\nseg_ids = (1, 2, 3)\nobject_mask = np.stack([segmentation == seg_id for seg_id in seg_ids])[:, None]\nthis_bounding_boxes = [bounding_boxes[seg_id] for seg_id in seg_ids]\npoint_coords, point_labels, _, _ = prompt_generator(object_mask, this_bounding_boxes)\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.__init__", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tn_positive_points: int,\tn_negative_points: int,\tdilation_strength: int,\tget_point_prompts: bool = True,\tget_box_prompts: bool = False)"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_positive_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_positive_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_negative_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_negative_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.dilation_strength", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_box_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_box_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_point_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_point_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.IterativePromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "IterativePromptGenerator", "kind": "class", "doc": "

    Generate point prompts from an instance segmentation iteratively.

    \n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.sam_annotator", "modulename": "micro_sam.sam_annotator", "kind": "module", "doc": "

    The interactive annotation tools.

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d.__init__", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "annotator_2d", "kind": "function", "doc": "

    Start the 2d annotation tool for a given image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
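    For example, the tool could be started from Python roughly as follows; the image file name and embedding path are placeholders for your own data.

    import imageio.v3 as imageio
    from micro_sam.sam_annotator import annotator_2d

    image = imageio.imread("my_image.tif")  # placeholder input image
    # Starts the napari-based 2d annotation tool; embeddings are cached at the given path.
    annotator_2d(image, embedding_path="embeddings.zarr", model_type="vit_b")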
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d.__init__", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "annotator_3d", "kind": "function", "doc": "

    Start the 3d annotation tool for a given image volume.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking.__init__", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "annotator_tracking", "kind": "function", "doc": "

    Start the tracking annotation tool for a given timeseries.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_series_annotator", "kind": "function", "doc": "

    Run the annotation tool for a series of images (supported for both 2d and 3d images).

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timages: Union[List[Union[str, os.PathLike]], List[numpy.ndarray]],\toutput_folder: str,\tmodel_type: str = 'vit_l',\tembedding_path: Optional[str] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tis_volumetric: bool = False,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True,\tskip_segmented: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_folder_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_folder_annotator", "kind": "function", "doc": "

    Run the 2d annotation tool for a series of images in a folder.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\tinput_folder: str,\toutput_folder: str,\tpattern: str = '*',\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\t**kwargs) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator", "kind": "class", "doc": "

    QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.__init__", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.__init__", "kind": "function", "doc": "

    \n", "signature": "(viewer: napari.viewer.Viewer, parent=None)"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.run_button", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui", "modulename": "micro_sam.sam_annotator.training_ui", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget", "kind": "class", "doc": "

    QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.__init__", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.__init__", "kind": "function", "doc": "

    \n", "signature": "(parent=None)"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.run_button", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util", "modulename": "micro_sam.sam_annotator.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util.point_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "point_layer_to_prompts", "kind": "function", "doc": "

    Extract point prompts for SAM from a napari point layer.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The point coordinates for the prompts.\n The labels (positive or negative / 1 or 0) for the prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.points.points.Points,\ti=None,\ttrack_id=None,\twith_stop_annotation=True) -> Optional[Tuple[numpy.ndarray, numpy.ndarray]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.shape_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "shape_layer_to_prompts", "kind": "function", "doc": "

    Extract prompts for SAM from a napari shape layer.

    \n\n

    Extracts the bounding box for 'rectangle' shapes and the bounding box and corresponding mask\nfor 'ellipse' and 'polygon' shapes.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The box prompts.\n The mask prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.shapes.shapes.Shapes,\tshape: Tuple[int, int],\ti=None,\ttrack_id=None) -> Tuple[List[numpy.ndarray], List[Optional[numpy.ndarray]]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layer_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layer_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(prompt_layer: napari.layers.points.points.Points, i: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layers_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layers_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer and shape layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(\tpoint_layer: napari.layers.points.points.Points,\tbox_layer: napari.layers.shapes.shapes.Shapes,\ti: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data", "modulename": "micro_sam.sample_data", "kind": "module", "doc": "

    Sample microscopy data.

    \n\n

    You can change the download location for sample data and model weights\nby setting the environment variable: MICROSAM_CACHEDIR

    \n\n

    By default sample data is downloaded to a folder named 'micro_sam/sample_data'\ninside your default cache directory, e.g.:\n * Mac: ~/Library/Caches/\n * Unix: ~/.cache/ or the value of the XDG_CACHE_HOME environment variable, if defined.\n * Windows: C:\Users\<user>\AppData\Local\<AppAuthor>\<AppName>\Cache

    \n"}, {"fullname": "micro_sam.sample_data.fetch_image_series_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_image_series_example_data", "kind": "function", "doc": "

    Download the sample images for the image series annotator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_image_series", "modulename": "micro_sam.sample_data", "qualname": "sample_data_image_series", "kind": "function", "doc": "

    Provides image series example image to napari.

    \n\n

    Opens as three separate image layers in napari (one per image in series).\nThe third image in the series has a different size and modality.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_wholeslide_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_wholeslide_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads part of a whole-slide image from the NeurIPS Cell Segmentation Challenge.\nSee https://neurips22-cellseg.grand-challenge.org/ for details on the data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_wholeslide", "modulename": "micro_sam.sample_data", "qualname": "sample_data_wholeslide", "kind": "function", "doc": "

    Provides wholeslide 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_livecell_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_livecell_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the LiveCELL dataset.\nSee https://doi.org/10.1038/s41592-021-01249-6 for details on the data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_livecell", "modulename": "micro_sam.sample_data", "qualname": "sample_data_livecell", "kind": "function", "doc": "

    Provides livecell 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_hela_2d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_hela_2d_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the HeLa CTC dataset.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_hela_2d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_hela_2d", "kind": "function", "doc": "

    Provides HeLa 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_3d_example_data", "kind": "function", "doc": "

    Download the sample data for the 3d annotator.

    \n\n

    This downloads the Lucchi++ dataset from https://casser.io/connectomics/.\nIt is a dataset for mitochondria segmentation in EM.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_3d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_3d", "kind": "function", "doc": "

    Provides Lucchi++ 3d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_example_data", "kind": "function", "doc": "

    Download the sample data for the tracking annotator.

    \n\n

    This data is the cell tracking challenge dataset DIC-C2DH-HeLa.\nCell tracking challenge webpage: http://data.celltrackingchallenge.net\nHeLa cells on a flat glass\nDr. G. van Cappellen. Erasmus Medical Center, Rotterdam, The Netherlands\nTraining dataset: http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip (37 MB)\nChallenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/DIC-C2DH-HeLa.zip (41 MB)

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_tracking", "modulename": "micro_sam.sample_data", "qualname": "sample_data_tracking", "kind": "function", "doc": "

    Provides tracking example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_segmentation_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_segmentation_data", "kind": "function", "doc": "

    Download groundtruth segmentation for the tracking example data.

    \n\n

    This downloads the groundtruth segmentation for the image data from fetch_tracking_example_data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_segmentation", "modulename": "micro_sam.sample_data", "qualname": "sample_data_segmentation", "kind": "function", "doc": "

    Provides segmentation example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.synthetic_data", "modulename": "micro_sam.sample_data", "qualname": "synthetic_data", "kind": "function", "doc": "

    Create synthetic image data and segmentation for training.
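    A minimal sketch (assuming, as the description suggests, that the image data and the matching segmentation are returned together):

    from micro_sam.sample_data import synthetic_data

    # Hypothetical call: create a 512 x 512 synthetic image with a matching segmentation.
    image, segmentation = synthetic_data(shape=(512, 512), seed=42)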

    \n", "signature": "(shape, seed=None):", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_nucleus_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_nucleus_3d_example_data", "kind": "function", "doc": "

    Download the sample data for 3d segmentation of nuclei.

    \n\n

    This data contains a small crop from a volume from the publication\n\"Efficient automatic 3D segmentation of cell nuclei for high-content screening\"\nhttps://doi.org/10.1186/s12859-022-04737-4

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.training", "modulename": "micro_sam.training", "kind": "module", "doc": "

    Functionality for training Segment Anything.

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer", "modulename": "micro_sam.training.joint_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer", "kind": "class", "doc": "

    Trainer class for jointly training the Segment Anything model with an additional convolutional decoder.

    \n\n

    This class is inherited from SamTrainer.\nCheck out https://github.com/computational-cell-analytics/micro-sam/blob/master/micro_sam/training/sam_trainer.py\nfor details on its implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "micro_sam.training.sam_trainer.SamTrainer"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.__init__", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tunetr: torch.nn.modules.module.Module,\tinstance_loss: torch.nn.modules.module.Module,\tinstance_metric: torch.nn.modules.module.Module,\t**kwargs)"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.unetr", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.unetr", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_loss", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_metric", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_metric", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.save_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.save_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, name, current_metric, best_metric, **extra_save_dict):", "funcdef": "def"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.load_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.load_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, checkpoint='best'):", "funcdef": "def"}, {"fullname": "micro_sam.training.sam_trainer", "modulename": "micro_sam.training.sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch_em.trainer.default_trainer.DefaultTrainer"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.__init__", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tconvert_inputs,\tn_sub_iteration: int,\tn_objects_per_batch: Optional[int] = None,\tmse_loss: torch.nn.modules.module.Module = MSELoss(),\tprompt_generator: micro_sam.prompt_generators.PromptGeneratorBase = <micro_sam.prompt_generators.IterativePromptGenerator object>,\tmask_prob: float = 0.5,\tmask_loss: Optional[torch.nn.modules.module.Module] = None,\t**kwargs)"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.convert_inputs", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.convert_inputs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mse_loss", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mse_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_objects_per_batch", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_objects_per_batch", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_sub_iteration", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_sub_iteration", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.prompt_generator", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.prompt_generator", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mask_prob", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mask_prob", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer", "modulename": "micro_sam.training.semantic_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss", "kind": "class", "doc": "

    Loss for computing dice over one-hot labels.

    \n\n

    Expects prediction and target with num_classes channels: the number of classes for semantic segmentation.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss.__init__", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(num_classes: int, softmax: bool = True)"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss.num_classes", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss.num_classes", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss.dice_loss", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss.dice_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss.softmax", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss.softmax", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model for semantic segmentation.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch_em.trainer.default_trainer.DefaultTrainer"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.__init__", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tconvert_inputs,\tnum_classes: int,\tdice_weight: Optional[float] = None,\t**kwargs)"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.convert_inputs", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.convert_inputs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.num_classes", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.num_classes", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.compute_ce_loss", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.compute_ce_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.dice_weight", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.dice_weight", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticMapsSamTrainer", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticMapsSamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model for semantic segmentation.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "SemanticSamTrainer"}, {"fullname": "micro_sam.training.simple_sam_trainer", "modulename": "micro_sam.training.simple_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.simple_sam_trainer.SimpleSamTrainer", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "SimpleSamTrainer", "kind": "class", "doc": "

    Trainer class for creating a simple SAM trainer for limited prompt-based segmentation.

    \n\n

    This class is inherited from SamTrainer.\nCheck out https://github.com/computational-cell-analytics/micro-sam/blob/master/micro_sam/training/sam_trainer.py\nfor details on its implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "micro_sam.training.sam_trainer.SamTrainer"}, {"fullname": "micro_sam.training.simple_sam_trainer.SimpleSamTrainer.__init__", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "SimpleSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(use_points: bool = True, use_box: bool = True, **kwargs)"}, {"fullname": "micro_sam.training.simple_sam_trainer.SimpleSamTrainer.use_points", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "SimpleSamTrainer.use_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.simple_sam_trainer.SimpleSamTrainer.use_box", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "SimpleSamTrainer.use_box", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.simple_sam_trainer.MedSAMTrainer", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "MedSAMTrainer", "kind": "class", "doc": "

    Trainer class for replicating the trainer of MedSAM (https://arxiv.org/abs/2304.12306).

    \n\n

    This class is inherited from SimpleSamTrainer.\nCheck out\nhttps://github.com/computational-cell-analytics/micro-sam/blob/master/micro_sam/training/simple_sam_trainer.py\nfor details on its implementation.

    \n", "bases": "SimpleSamTrainer"}, {"fullname": "micro_sam.training.simple_sam_trainer.MedSAMTrainer.__init__", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "MedSAMTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(**kwargs)"}, {"fullname": "micro_sam.training.trainable_sam", "modulename": "micro_sam.training.trainable_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM", "kind": "class", "doc": "

    Wrapper to make the SegmentAnything model trainable.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.__init__", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(sam: segment_anything.modeling.sam.Sam)"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.sam", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.transform", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.preprocess", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.preprocess", "kind": "function", "doc": "

    Resize, normalize pixel values and pad to a square input.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The resized, normalized and padded tensor.\n The shape of the image after resizing.

    \n
    \n", "signature": "(self, x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.image_embeddings_oft", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.image_embeddings_oft", "kind": "function", "doc": "

    \n", "signature": "(self, batched_inputs):", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.forward", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.forward", "kind": "function", "doc": "

    Forward pass.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks and iou values.

    \n
    \n", "signature": "(\tself,\tbatched_inputs: List[Dict[str, Any]],\timage_embeddings: torch.Tensor,\tmultimask_output: bool = False) -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.training", "modulename": "micro_sam.training.training", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.training.FilePath", "modulename": "micro_sam.training.training", "qualname": "FilePath", "kind": "variable", "doc": "

    \n", "default_value": "typing.Union[str, os.PathLike]"}, {"fullname": "micro_sam.training.training.train_sam", "modulename": "micro_sam.training.training", "qualname": "train_sam", "kind": "function", "doc": "

    Run training for a SAM model.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tname: str,\tmodel_type: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tn_epochs: int = 100,\tearly_stopping: Optional[int] = 10,\tn_objects_per_batch: Optional[int] = 25,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\tfreeze: Optional[List[str]] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tlr: float = 1e-05,\tn_sub_iteration: int = 8,\tsave_root: Union[os.PathLike, str, NoneType] = None,\tmask_prob: float = 0.5,\tn_iterations: Optional[int] = None,\tscheduler_class: Optional[torch.optim.lr_scheduler._LRScheduler] = <class 'torch.optim.lr_scheduler.ReduceLROnPlateau'>,\tscheduler_kwargs: Optional[Dict[str, Any]] = None,\tsave_every_kth_epoch: Optional[int] = None,\tpbar_signals: Optional[PyQt5.QtCore.QObject] = None,\toptimizer_class: Optional[torch.optim.optimizer.Optimizer] = <class 'torch.optim.adamw.AdamW'>,\tpeft_kwargs: Optional[Dict] = None,\tignore_warnings: bool = True,\tverify_n_labels_in_loader: Optional[int] = 50,\t**model_kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_dataset", "modulename": "micro_sam.training.training", "qualname": "default_sam_dataset", "kind": "function", "doc": "

    Create a PyTorch Dataset for training a SAM model.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The dataset.

    \n
    \n", "signature": "(\traw_paths: Union[List[Union[str, os.PathLike]], str, os.PathLike],\traw_key: Optional[str],\tlabel_paths: Union[List[Union[str, os.PathLike]], str, os.PathLike],\tlabel_key: Optional[str],\tpatch_shape: Tuple[int],\twith_segmentation_decoder: bool,\twith_channels: bool = False,\tsampler: Optional[Callable] = None,\traw_transform: Optional[Callable] = None,\tn_samples: Optional[int] = None,\tis_train: bool = True,\tmin_size: int = 25,\tmax_sampling_attempts: Optional[int] = None,\tis_seg_dataset: Optional[bool] = None,\t**kwargs) -> torch.utils.data.dataset.Dataset:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_loader", "modulename": "micro_sam.training.training", "qualname": "default_sam_loader", "kind": "function", "doc": "

    \n", "signature": "(**kwargs) -> torch.utils.data.dataloader.DataLoader:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.CONFIGURATIONS", "modulename": "micro_sam.training.training", "qualname": "CONFIGURATIONS", "kind": "variable", "doc": "

    Best training configurations for given hardware resources.

    \n", "default_value": "{'Minimal': {'model_type': 'vit_t', 'n_objects_per_batch': 4, 'n_sub_iteration': 4}, 'CPU': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'gtx1080': {'model_type': 'vit_t', 'n_objects_per_batch': 5}, 'rtx5000': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'V100': {'model_type': 'vit_b'}, 'A100': {'model_type': 'vit_h'}}"}, {"fullname": "micro_sam.training.training.train_sam_for_configuration", "modulename": "micro_sam.training.training", "qualname": "train_sam_for_configuration", "kind": "function", "doc": "

    Run training for a SAM model with the configuration for a given hardware resource.

    \n\n

    Selects the best training settings for the given configuration.\nThe available configurations are listed in CONFIGURATIONS.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tname: str,\tconfiguration: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\tmodel_type: Optional[str] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.util", "modulename": "micro_sam.training.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.identity", "modulename": "micro_sam.training.util", "qualname": "identity", "kind": "function", "doc": "

    Identity transformation.

    \n\n

    This is a helper function to skip data normalization when finetuning SAM.\nData normalization is performed within the model and should thus be skipped as\na preprocessing step in training.

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.require_8bit", "modulename": "micro_sam.training.util", "qualname": "require_8bit", "kind": "function", "doc": "

    Transformation to require 8bit input data range (0-255).

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.get_trainable_sam_model", "modulename": "micro_sam.training.util", "qualname": "get_trainable_sam_model", "kind": "function", "doc": "

    Get the trainable sam model.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The trainable segment anything model.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\tfreeze: Optional[List[str]] = None,\treturn_state: bool = False,\tpeft_kwargs: Optional[Dict] = None,\tflexible_load_checkpoint: bool = False,\t**model_kwargs) -> micro_sam.training.trainable_sam.TrainableSAM:", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs", "kind": "class", "doc": "

    Convert outputs of data loader to the expected batched inputs of the SegmentAnything model.

    \n\n
    Arguments:
    \n\n\n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.__init__", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.__init__", "kind": "function", "doc": "

    \n", "signature": "(\ttransform: Optional[segment_anything.utils.transforms.ResizeLongestSide],\tdilation_strength: int = 10,\tbox_distortion_factor: Optional[float] = None)"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.dilation_strength", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.transform", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.box_distortion_factor", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.box_distortion_factor", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSemanticSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSemanticSamInputs", "kind": "class", "doc": "

    Convert outputs of data loader to the expected batched inputs of the SegmentAnything model\nfor semantic segmentation.

    \n"}, {"fullname": "micro_sam.training.util.normalize_to_8bit", "modulename": "micro_sam.training.util", "qualname": "normalize_to_8bit", "kind": "function", "doc": "

    \n", "signature": "(raw):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, do_rescaling=False, padding='constant')"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.do_rescaling", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.do_rescaling", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, padding='constant', min_size=0)"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.min_size", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.min_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.util", "modulename": "micro_sam.util", "kind": "module", "doc": "

    Helper functions for downloading Segment Anything models and predicting image embeddings.

    \n"}, {"fullname": "micro_sam.util.get_cache_directory", "modulename": "micro_sam.util", "qualname": "get_cache_directory", "kind": "function", "doc": "

    Get micro-sam cache directory location.

    \n\n

    Users can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.microsam_cachedir", "modulename": "micro_sam.util", "qualname": "microsam_cachedir", "kind": "function", "doc": "

    Return the micro-sam cache directory.

    \n\n

    Returns the top level cache directory for micro-sam models and sample data.

    \n\n

    Every time this function is called, we check for any user updates made to\nthe MICROSAM_CACHEDIR os environment variable since the last time.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.models", "modulename": "micro_sam.util", "qualname": "models", "kind": "function", "doc": "

    Return the segmentation models registry.

    \n\n

    We recreate the model registry every time this function is called,\nso any user changes to the default micro-sam cache directory location\nare respected.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.util.get_device", "modulename": "micro_sam.util", "qualname": "get_device", "kind": "function", "doc": "

    Get the torch device.

    \n\n

    If no device is passed the default device for your system is used.\nElse it will be checked if the device you have passed is supported.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The device.

    \n
    \n", "signature": "(\tdevice: Union[str, torch.device, NoneType] = None) -> Union[str, torch.device]:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_sam_model", "modulename": "micro_sam.util", "qualname": "get_sam_model", "kind": "function", "doc": "

    Get the SegmentAnything Predictor.

    \n\n

    This function will download the required model or load it from the cached weight file.\nThis location of the cache can be changed by setting the environment variable: MICROSAM_CACHEDIR.\nThe name of the requested model can be set via model_type.\nSee https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models\nfor an overview of the available models

    \n\n

    Alternatively this function can also load a model from weights stored in a local filepath.\nThe corresponding file path is given via checkpoint_path. In this case model_type\nmust be given as the matching encoder architecture, e.g. \"vit_b\" if the weights are for\na SAM model with vit_b encoder.

    \n\n

    By default the models are downloaded to a folder named 'micro_sam/models'\ninside your default cache directory, eg:

    \n\n\n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segment anything predictor.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\treturn_sam: bool = False,\treturn_state: bool = False,\tpeft_kwargs: Optional[Dict] = None,\tflexible_load_checkpoint: bool = False,\t**model_kwargs) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.export_custom_sam_model", "modulename": "micro_sam.util", "qualname": "export_custom_sam_model", "kind": "function", "doc": "

    Export a finetuned segment anything model to the standard model format.

    \n\n

    The exported model can be used by the interactive annotation tools in micro_sam.annotator.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint_path: Union[str, os.PathLike],\tmodel_type: str,\tsave_path: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_model_names", "modulename": "micro_sam.util", "qualname": "get_model_names", "kind": "function", "doc": "

    \n", "signature": "() -> Iterable:", "funcdef": "def"}, {"fullname": "micro_sam.util.precompute_image_embeddings", "modulename": "micro_sam.util", "qualname": "precompute_image_embeddings", "kind": "function", "doc": "

    Compute the image embeddings (output of the encoder) for the input.

    \n\n

    If 'save_path' is given the embeddings will be loaded/saved in a zarr container.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The image embeddings.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tinput_: numpy.ndarray,\tsave_path: Union[str, os.PathLike, NoneType] = None,\tlazy_loading: bool = False,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.util.set_precomputed", "modulename": "micro_sam.util", "qualname": "set_precomputed", "kind": "function", "doc": "

    Set the precomputed image embeddings for a predictor.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predictor with set features.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\ti: Optional[int] = None,\ttile_id: Optional[int] = None) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.compute_iou", "modulename": "micro_sam.util", "qualname": "compute_iou", "kind": "function", "doc": "

    Compute the intersection over union of two masks.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The intersection over union of the two masks.

    \n
    \n", "signature": "(mask1: numpy.ndarray, mask2: numpy.ndarray) -> float:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_centers_and_bounding_boxes", "modulename": "micro_sam.util", "qualname": "get_centers_and_bounding_boxes", "kind": "function", "doc": "

    Returns the center coordinates of the foreground instances in the ground-truth.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A dictionary that maps object ids to the corresponding centroid.\n A dictionary that maps object_ids to the corresponding bounding box.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tmode: str = 'v') -> Tuple[Dict[int, numpy.ndarray], Dict[int, tuple]]:", "funcdef": "def"}, {"fullname": "micro_sam.util.load_image_data", "modulename": "micro_sam.util", "qualname": "load_image_data", "kind": "function", "doc": "

    Helper function to load image data from file.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The image data.

    \n
    \n", "signature": "(\tpath: str,\tkey: Optional[str] = None,\tlazy_loading: bool = False) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.util.segmentation_to_one_hot", "modulename": "micro_sam.util", "qualname": "segmentation_to_one_hot", "kind": "function", "doc": "

    Convert the segmentation to one-hot encoded masks.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The one-hot encoded masks.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tsegmentation_ids: Optional[numpy.ndarray] = None) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_block_shape", "modulename": "micro_sam.util", "qualname": "get_block_shape", "kind": "function", "doc": "

    Get a suitable block shape for chunking a given shape.

    \n\n

    The primary use for this is determining chunk sizes for\nzarr arrays or block shapes for parallelization.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The block shape.

    \n
    \n", "signature": "(shape: Tuple[int]) -> Tuple[int]:", "funcdef": "def"}, {"fullname": "micro_sam.visualization", "modulename": "micro_sam.visualization", "kind": "module", "doc": "

    Functionality for visualizing image embeddings.

    \n"}, {"fullname": "micro_sam.visualization.compute_pca", "modulename": "micro_sam.visualization", "qualname": "compute_pca", "kind": "function", "doc": "

    Compute the pca projection of the embeddings to visualize them as RGB image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    PCA of the embeddings, mapped to the pixels.

    \n
    \n", "signature": "(embeddings: numpy.ndarray) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.visualization.project_embeddings_for_visualization", "modulename": "micro_sam.visualization", "qualname": "project_embeddings_for_visualization", "kind": "function", "doc": "

    Project image embeddings to pixel-wise PCA.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The PCA of the embeddings.\n The scale factor for resizing to the original image size.

    \n
    \n", "signature": "(\timage_embeddings: Dict[str, Any]) -> Tuple[numpy.ndarray, Tuple[float, ...]]:", "funcdef": "def"}]; + /** pdoc search index */const docs = [{"fullname": "micro_sam", "modulename": "micro_sam", "kind": "module", "doc": "

    Segment Anything for Microscopy

    \n\n

    Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of Segment Anything by Meta AI and specializes it for microscopy and other biomedical imaging data.\nIts core components are:

    \n\n\n\n

    Based on these components micro_sam enables fast interactive and automatic annotation for microscopy data, like interactive cell segmentation from bounding boxes:

    \n\n

    \"box-prompts\"

    \n\n

    micro_sam is now available as stable version 1.0 and we will not change its user interface significantly in the foreseeable future.\nWe are still working on improving and extending its functionality. The current roadmap includes:

    \n\n\n\n

    If you run into any problems or have questions please open an issue or reach out via image.sc using the tag micro-sam.

    \n\n

    Quickstart

    \n\n

    You can install micro_sam via conda:

    \n\n
    \n
    conda install -c conda-forge micro_sam\n
    \n
    \n\n

    We also provide installers for Windows and Linux. For more details on the available installation options, check out the installation section.

    \n\n

    After installing micro_sam, you can start napari from within your environment using

    \n\n
    \n
    $ napari\n
    \n
    \n\n

    After starting napari, you can select the annotation tool you want to use from Plugins -> SegmentAnything for Microscopy. Check out the quickstart tutorial video for a short introduction, the video of our virtual I2K tutorial for an in-depth explanation and the annotation tool section for details.

    \n\n

    The micro_sam python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    It is explained in more detail here.

    \n\n

    We provide different finetuned models for microscopy that can be used within our tools or any other tool that supports Segment Anything. See finetuned models for details on the available models.\nYou can also train models on your own data, see here for details.

    \n\n

    Citation

    \n\n

    If you are using micro_sam in your research please cite

    \n\n\n\n

    Installation

    \n\n

    There are three ways to install micro_sam:

    \n\n\n\n

    You can find more information on the installation and how to troubleshoot it in the FAQ section.

    \n\n

    We do not support installing micro_sam with pip.

    \n\n

    From conda

    \n\n

    conda is a python package manager. If you don't have it installed yet you can follow the instructions here to set it up on your system.\nPlease make sure that you are using an up-to-date version of conda to install micro_sam.\nYou can also use mamba, which is a drop-in replacement for conda, to install it. In this case, just replace the conda command below with mamba.

    \n\n

    IMPORTANT: Do not install micro_sam in the base conda environment.

    \n\n

    Installation on Linux and Mac OS:

    \n\n

    micro_sam can be installed in an existing environment via:

    \n\n
    \n
    conda install -c conda-forge micro_sam\n
    \n
    \n\n

    or you can create a new environment with it (here called micro-sam) via:

    \n\n
    \n
    conda create -c conda-forge -n micro-sam micro_sam\n
    \n
    \n\n

    and then activate it via

    \n\n
    \n
    conda activate micro-sam\n
    \n
    \n\n

    This will also install pytorch from the conda-forge channel. If you have a recent enough operating system, it will automatically install the most suitable pytorch version for your system.\nThis means it will install the CPU version if you don't have an NVIDIA GPU, and a GPU version if you do.\nHowever, if you have an older operating system, or a CUDA version older than 12, it may not install the correct version. In this case you will have to specify your CUDA version, for example for CUDA 11, like this:

    \n\n
    \n
    conda install -c conda-forge micro_sam "libtorch=*=cuda11*"\n
    \n
    \n\n

    Installation on Windows:

    \n\n

    pytorch is currently not available on conda-forge for Windows. Thus, you have to install it from the pytorch conda channel. In addition, you have to specify two specific dependencies to avoid incompatibilities.\nThis can be done with the following commands:

    \n\n
    \n
    conda install -c pytorch -c conda-forge micro_sam "nifty=1.2.1=*_4" "protobuf<5"\n
    \n
    \n\n

    to install micro_sam in an existing environment and

    \n\n
    \n
    conda create -c pytorch -c conda-forge -n micro-sam micro_sam "nifty=1.2.1=*_4" "protobuf<5"\n
    \n
    \n\n

    From source

    \n\n

    To install micro_sam from source, we recommend to first set up an environment with the necessary requirements:

    \n\n\n\n

    To create one of these environments and install micro_sam into it follow these steps

    \n\n
      \n
    1. Clone the repository:
    \n\n
    \n
    git clone https://github.com/computational-cell-analytics/micro-sam\n
    \n
    \n\n
      \n
    2. Enter it:
    \n\n
    \n
    cd micro-sam\n
    \n
    \n\n
      \n
    3. Create the respective environment:
    \n\n
    \n
    conda env create -f <ENV_FILE>.yaml\n
    \n
    \n\n
      \n
    4. Activate the environment:
    \n\n
    \n
    conda activate sam\n
    \n
    \n\n
      \n
    5. Install micro_sam:
    \n\n
    \n
    $ pip install -e .\n
    \n
    \n\n

    From installer

    \n\n

    We also provide installers for Linux and Windows:

    \n\n\n\n

    The installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam via conda instead. They will also not enable using the python library.

    \n\n

    Linux Installer:

    \n\n

    To use the installer:

    \n\n\n\n

    Windows Installer:

    \n\n\n\n

    \n\n

    Easybuild installation

    \n\n

    There is also an easy-build recipe for micro_sam under development. You can find more information here.

    \n\n

    Annotation Tools

    \n\n

    micro_sam provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.\nSee an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation\nof mitochondria in volume EM (middle) and interactive tracking of cells (right).

    \n\n

    \n\n

    \n\n

    The annotation tools can be started from the napari plugin menu, the command line or from python scripts.\nThey are built as a napari plugin and make use of existing napari functionality wherever possible. If you are not familiar with napari, we recommend to start here.\nThe micro_sam tools mainly use the point layer, shape layer and label layer.

    \n\n

    The annotation tools are explained in detail below. We also provide video tutorials.

    \n\n

    The annotation tools can be started from the napari plugin menu:\n

    \n\n

    You can find additional information on the annotation tools in the FAQ section.

    \n\n

    HINT: If you would like to start napari to use micro-sam from the plugin menu, you must start it by activating the environment where micro-sam has been installed using:

    \n\n
    \n
    conda activate <ENVIRONMENT_NAME>\nnapari\n
    \n
    \n\n

    Annotator 2D

    \n\n

    The 2d annotator can be started by

    \n\n\n\n
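
    As a complement to the options above, here is a minimal sketch of starting the 2d annotator from python. It assumes that annotator_2d accepts the image as its first argument and the model name via model_type (please check the function documentation for the exact signature); the file path is a placeholder.

    \n\n
    \n
    # Minimal sketch: open the 2d annotator from python.\n# Assumption: annotator_2d takes the image as its first argument and a model_type keyword.\nimport imageio.v3 as imageio\nfrom micro_sam.sam_annotator import annotator_2d\n\nimage = imageio.imread("path/to/image.tif")  # placeholder path\nannotator_2d(image, model_type="vit_b_lm")\n
    \n
    \n\n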

    The user interface of the 2d annotator looks like this:

    \n\n

    \n\n

    It contains the following elements:

    \n\n
      \n
    1. The napari layers for the segmentations and prompts:\n
        \n
      • prompts: shape layer that is used to provide box prompts to Segment Anything. Prompts can be given as rectangle (marked as box prompt in the image), ellipse or polygon.
      • point_prompts: point layer that is used to provide point prompts to Segment Anything. Positive prompts (green points) for marking the object you want to segment, negative prompts (red points) for marking the outside of the object.
      • committed_objects: label layer with the objects that have already been segmented.
      • auto_segmentation: label layer with the results from automatic instance segmentation.
      • current_object: label layer for the object(s) you're currently segmenting.
    2. The embedding menu. For selecting the image to process, the Segment Anything model that is used and computing its image embeddings. The Embedding Settings contain advanced settings for loading cached embeddings from file or for using tiled embeddings.
    3. The prompt menu for changing whether the currently selected point is a positive or a negative prompt. This can also be done by pressing T.
    4. The menu for interactive segmentation. Clicking Segment Object (or pressing S) will run segmentation for the current prompts. The result is displayed in current_object. Activating batched enables segmentation of multiple objects with point prompts. In this case one object will be segmented per positive prompt.
    5. The menu for automatic segmentation. Clicking Automatic Segmentation will segment all objects in the image. The results will be displayed in the auto_segmentation layer. We support two different methods for automatic segmentation: automatic mask generation (supported for all models) and instance segmentation with an additional decoder (only supported for our models).\nChanging the parameters under Automatic Segmentation Settings controls the segmentation results, check the tooltips for details.
    6. The menu for committing the segmentation. When clicking Commit (or pressing C) the result from the selected layer (either current_object or auto_segmentation) will be transferred from the respective layer to committed_objects.\nWhen commit_path is given the results will automatically be saved there.
    7. The menu for clearing the current annotations. Clicking Clear Annotations (or pressing Shift + C) will clear the current annotations and the current segmentation.
    \n\n

    Point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time, unless the batched mode is activated. With box prompts you can segment several objects at once, both in the normal and batched mode.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use this tool.

    \n\n

    Annotator 3D

    \n\n

    The 3d annotator can be started by

    \n\n\n\n

    The user interface of the 3d annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the segmentations and prompts.
    2. The embedding menu.
    3. The prompt menu.
    4. The menu for interactive segmentation in the current slice.
    5. The menu for interactive 3d segmentation. Clicking Segment All Slices (or pressing Shift + S) will extend the segmentation of the current object across the volume by projecting prompts across slices. The parameters for prompt projection can be set in Segmentation Settings, please refer to the tooltips for details.
    6. The menu for automatic segmentation. The overall functionality is the same as for the 2d annotator. To segment the full volume Apply to Volume needs to be checked, otherwise only the current slice will be segmented. Note that 3D segmentation can take quite long without a GPU.
    7. The menu for committing the current object.
    8. The menu for clearing the current annotations. If all slices is set all annotations will be cleared, otherwise they are only cleared for the current slice.
    \n\n

    You can only segment one object at a time using the interactive segmentation functionality with this tool.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use this tool.

    \n\n

    Annotator Tracking

    \n\n

    The tracking annotator can be started by

    \n\n\n\n

    The user interface of the tracking annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the segmentations and prompts. Same as for the 2d segmentation application but without the auto_segmentation layer.
    2. The embedding menu.
    3. The prompt menu.
    4. The menu with tracking settings: track_state is used to indicate that the object you are tracking is dividing in the current frame. track_id is used to select which of the tracks after division you are following.
    5. The menu for interactive segmentation in the current frame.
    6. The menu for interactive tracking. Click Track Object (or press Shift + S) to segment the current object across time.
    7. The menu for committing the current tracking result.
    8. The menu for clearing the current annotations.
    \n\n

    The tracking annotator only supports 2d image data with a time dimension; volumetric data + time is not supported. We also do not support automatic tracking yet.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use this tool.

    \n\n

    Image Series Annotator

    \n\n

    The image series annotation tool enables running the 2d annotator or 3d annotator for multiple images that are saved in a folder. This makes it convenient to annotate many images without having to restart the tool for every image. It can be started by

    \n\n\n\n

    When starting this tool via the plugin menu the following interface opens:

    \n\n

    \n\n

    You can select the folder where your images are saved with Input Folder. The annotation results will be saved in Output Folder.\nYou can specify a rule for loading only a subset of images via pattern, for example *.tif to only load tif images. Set is_volumetric if the data you want to annotate is 3d. The rest of the options are settings for the image embedding computation and are the same as for the embedding menu (see above).\nOnce you click Annotate Images the images from the folder you have specified will be loaded and the annotation tool is started for them.

    \n\n

    This menu will not open if you start the image series annotator from the command line or via python. In this case the input folder and other settings are passed as parameters instead.

    \n\n

    Check out the video tutorial for an in-depth explanation on how to use the image series annotator.

    \n\n

    Finetuning UI

    \n\n

    We also provide a graphical interface for finetuning models on your own data. It can be started by clicking Finetuning in the plugin menu after starting napari.

    \n\n

    Note: if you know a bit of python programming we recommend to use a script for model finetuning instead. This will give you more options to configure the training. See these instructions for details.

    \n\n

    When starting this tool via the plugin menu the following interface opens:

    \n\n

    \n\n

    You can select the image data via Path to images. You can either load images from a folder or select a single image file. By providing Image data key you can either provide a pattern for selecting files from the folder or provide an internal filepath for HDF5, Zarr or similar fileformats.

    \n\n

    You can select the label data via Path to labels and Label data key, following the same logic as for the image data. The label masks are expected to have the same size as the image data. You can for example use annotations created with one of the micro_sam annotation tools for this, they are stored in the correct format. See the FAQ for more details on the expected label data.

    \n\n

    The Configuration option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Details on the configurations can be found here.

    \n\n

    Using the Command Line Interface (CLI)

    \n\n

    micro_sam also provides access to a number of its functionalities via command line interface (CLI) scripts that can be run from the terminal.

    \n\n

    The supported CLIs can be used by

    \n\n\n\n
    - Remember to specify the automatic segmentation mode using `--mode <MODE_NAME>` when using additional post-processing parameters.\n- You can check details for supported parameters and their respective default values at `micro_sam/instance_segmentation.py` under the `generate` method for the `AutomaticMaskGenerator` and `InstanceSegmentationWithDecoder` classes.\n
    \n\n

    NOTE: For all CLIs above, you can find more details by adding the argument -h to the CLI script (e.g. $ micro_sam.annotator_2d -h).

    \n\n
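
    For illustration, an automatic instance segmentation run from the terminal could look like the sketch below. The flag names for the input image, output path and model are assumptions for illustration only; please verify the exact argument names via micro_sam.automatic_segmentation -h.

    \n\n
    \n
    # Illustrative sketch; check `micro_sam.automatic_segmentation -h` for the exact flag names.\nmicro_sam.automatic_segmentation -i path/to/image.tif -o path/to/segmentation.tif -m vit_b_lm --mode ais\n
    \n
    \n\n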

    Using the Python Library

    \n\n

    The python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    This library extends the Segment Anything library and

    \n\n\n\n

    You can import these sub-modules via

    \n\n
    \n
    import micro_sam.prompt_based_segmentation\nimport micro_sam.instance_segmentation\n# etc.\n
    \n
    \n\n

    This functionality is used to implement the interactive annotation tools in micro_sam.sam_annotator and can be used as a standalone python library.\nWe provide jupyter notebooks that demonstrate how to use it here. You can find the full library documentation by scrolling to the end of this page.

    \n\n
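
    As a small example of using the library directly, the sketch below loads a model and precomputes image embeddings. The function signatures are taken from micro_sam.util as documented below; the file paths and the chosen model name are placeholders.

    \n\n
    \n
    # Minimal sketch: load a model and precompute image embeddings (paths are placeholders).\nimport imageio.v3 as imageio\nfrom micro_sam import util\n\nimage = imageio.imread("path/to/image.tif")\npredictor = util.get_sam_model(model_type="vit_b_lm")\n\n# Compute (and cache) the embeddings and set them on the predictor for prompt-based segmentation.\nimage_embeddings = util.precompute_image_embeddings(predictor, image, save_path="embeddings.zarr", ndim=2)\npredictor = util.set_precomputed(predictor, image_embeddings)\n
    \n
    \n\n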

    Training your Own Model

    \n\n

    We reimplement the training logic described in the Segment Anything publication to enable finetuning on custom data.\nWe use this functionality to provide the finetuned microscopy models and it can also be used to train models on your own data.\nIn fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.\nSo a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data.\nWe recommend checking out our latest preprint for details on how much data is required for finetuning Segment Anything.

    \n\n

    The training logic is implemented in micro_sam.training and is based on torch-em. Check out the finetuning notebook to see how to use it.\nWe also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster.\nThe notebook explains how to train it together with the rest of SAM and how to then use it.

    \n\n
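
    The sketch below shows how finetuning could be set up with these functions. It is only a sketch, not a full recipe: the paths, keys and patch shape are placeholders, and it assumes that default_sam_loader forwards the dataset arguments of default_sam_dataset together with the usual DataLoader arguments such as batch_size.

    \n\n
    \n
    # Minimal finetuning sketch with placeholder paths; see the finetuning notebook for a complete example.\nimport micro_sam.training as sam_training\n\ntrain_loader = sam_training.default_sam_loader(\n    raw_paths="path/to/train_images", raw_key="*.tif",\n    label_paths="path/to/train_labels", label_key="*.tif",\n    patch_shape=(512, 512), with_segmentation_decoder=True,\n    batch_size=1, shuffle=True,\n)\nval_loader = sam_training.default_sam_loader(\n    raw_paths="path/to/val_images", raw_key="*.tif",\n    label_paths="path/to/val_labels", label_key="*.tif",\n    patch_shape=(512, 512), with_segmentation_decoder=True,\n    batch_size=1, shuffle=True,\n)\n\n# Finetune vit_b together with the additional segmentation decoder.\nsam_training.train_sam(\n    name="sam_finetuned", model_type="vit_b",\n    train_loader=train_loader, val_loader=val_loader,\n    n_epochs=50, with_segmentation_decoder=True,\n)\n
    \n
    \n\n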

    More advanced examples, including quantitative and qualitative evaluation, can be found in the finetuning directory, which contains the code for training and evaluating our models. You can find further information on model training in the FAQ section.

    \n\n

    Here is a list of resources, together with their recommended training settings, for which we have tested model finetuning:

    \n\n
    Resource Name | Capacity | Model Type | Batch Size | Finetuned Parts | Number of Objects
    CPU | 32GB | ViT Base | 1 | all | 10
    CPU | 64GB | ViT Base | 1 | all | 15
    GPU (NVIDIA GTX 1080Ti) | 8GB | ViT Base | 1 | Mask Decoder, Prompt Encoder | 10
    GPU (NVIDIA Quadro RTX5000) | 16GB | ViT Base | 1 | all | 10
    GPU (Tesla V100) | 32GB | ViT Base | 1 | all | 10
    GPU (NVIDIA A100) | 80GB | ViT Tiny | 2 | all | 50
    GPU (NVIDIA A100) | 80GB | ViT Base | 2 | all | 40
    GPU (NVIDIA A100) | 80GB | ViT Large | 2 | all | 30
    GPU (NVIDIA A100) | 80GB | ViT Huge | 2 | all | 25
    \n\n
    \n

    NOTE: If you use the finetuning UI or micro_sam.training.training.train_sam_for_configuration you can specify the hardware configuration and the best settings for it will be set automatically. If your hardware is not in the settings we have tested, choose the closest match. You can set the training parameters yourself when using micro_sam.training.training.train_sam. Be aware that the choice of the number of objects per image, the batch size, and the type of model has a strong impact on the VRAM needed for training and the duration of training. See the finetuning notebook for an overview of these parameters.

    \n
    \n\n
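
    For example, a short sketch of the configuration-based training call (assuming train and validation loaders as described above; the configuration name must be one of the keys of micro_sam.training.training.CONFIGURATIONS, e.g. "CPU" or "rtx5000"):

    \n\n
    \n
    import micro_sam.training as sam_training\n\n# Sketch: pick the configuration that matches your hardware; the loaders are assumed to exist already.\nsam_training.train_sam_for_configuration(\n    name="sam_finetuned", configuration="rtx5000",\n    train_loader=train_loader, val_loader=val_loader,\n    with_segmentation_decoder=True,\n)\n
    \n
    \n\n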

    Finetuned Models

    \n\n

    In addition to the original Segment Anything models, we provide models that are finetuned on microscopy data.\nThey are available in the BioImage.IO Model Zoo and are also hosted on Zenodo.

    \n\n

    We currently offer the following models:

    \n\n\n\n

    See the two figures below for the improvements achieved by the finetuned models on LM and EM data.

    \n\n

    \n\n

    \n\n

    You can select which model to use in the annotation tools by selecting the corresponding name in the Model: drop-down menu in the embedding menu:

    \n\n

    \n\n

    To use a specific model in the python library you need to pass the corresponding name as value to the model_type parameter exposed by all relevant functions.\nSee for example the 2d annotator example.

    \n\n
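
    For example, a short sketch of passing a finetuned model name via model_type (using get_sam_model from micro_sam.util):

    \n\n
    \n
    from micro_sam.util import get_sam_model\n\n# Use the light microscopy model instead of the default Segment Anything model.\npredictor = get_sam_model(model_type="vit_b_lm")\n
    \n
    \n\n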

    Choosing a Model

    \n\n

    As a rule of thumb:

    \n\n\n\n

    See also the figures above for examples where the finetuned models work better than the default models.\nWe are working on further improving these models and adding new models for other biomedical imaging domains.

    \n\n

    Other Models

    \n\n

    Previous versions of our models are available on Zenodo:

    \n\n\n\n

    We do not recommend to use these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.

    \n\n

    We provide additional models that were used for experiments in our publication on Zenodo:

    \n\n\n\n

    FAQ

    \n\n

    Here we provide frequently asked questions and common issues.\nIf you encounter a problem or question not addressed here feel free to open an issue or to ask your question on image.sc with the tag micro-sam.

    \n\n

    Installation questions

    \n\n

    1. How to install micro_sam?

    \n\n

    The installation for micro_sam is supported in three ways: from conda (recommended), from source and from installers. Check out our tutorial video to get started with micro_sam, briefly walking you through the installation process and how to start the tool.

    \n\n

    2. I cannot install micro_sam using the installer, I am getting some errors.

    \n\n

    The installer should work out-of-the-box on Windows and Linux platforms. Please open an issue to report the error you encounter.

    \n\n
    \n

    NOTE: The installers enable using micro_sam without conda. However, we recommend the installation from conda or from source to use all its features seamlessly. Specifically, the installers currently only support the CPU and won't enable you to use the GPU (if you have one).

    \n
    \n\n

    3. What is the minimum system requirement for micro_sam?

    \n\n

    From our experience, the micro_sam annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.\nYou might encounter some slowness for $\\leq$ 8GB RAM. The resources micro_sam's annotation tools have been tested on are:

    \n\n\n\n\n\n

    Having a GPU will significantly speed up the annotation tools and especially the model finetuning.

    \n\n\n\n

    micro_sam has been tested mostly with CUDA 12.1 and PyTorch [2.1.1, 2.2.0]. However, the tool and the library is not constrained to a specific PyTorch or CUDA version. So it should work fine with the standard PyTorch installation for your system.

    \n\n

    5. I am missing a few packages (e.g. ModuleNotFoundError: No module named 'elf.io'). What should I do?

    \n\n

    With the latest release 1.0.0, the installation from conda and source should take care of this and install all the relevant packages for you.\nSo please reinstall micro_sam, following the installation guide.

    \n\n

    6. Can I install micro_sam using pip?

    \n\n

    We do not recommend installing micro-sam with pip. It has several dependencies that are only available from conda-forge, which will not install correctly via pip.

    \n\n

    Please see the installation guide for the recommended way to install micro-sam.

    \n\n

    The PyPI page for micro-sam exists only so that the napari-hub can find it.

    \n\n

    7. I get the following error: ImportError: cannot import name 'UNETR' from 'torch_em.model'.

    \n\n

    It's possible that you have an older version of torch-em installed. Similar errors are often raised by other libraries, the reasons being: a) outdated packages are installed, or b) a non-existent module is being called. If the source of the error is micro_sam, then a) is most likely the reason. We recommend installing the latest version following the installation instructions.

    \n\n

    8. My system does not support internet connection. Where should I put the model checkpoints for the micro-sam models?

    \n\n

    We recommend transferring the model checkpoints to the system-level cache directory (you can find yours by running the following in the terminal: python -c \"from micro_sam import util; print(util.microsam_cachedir())\"). Once you have identified the cache directory, you need to create an additional models directory inside the micro-sam cache directory (if not present already) and move the model checkpoints there. Finally, you must rename the transferred checkpoints as per the respective key values in the url dictionaries located in the micro_sam.util.models function (an example for Linux users is shown below).

    \n\n
    \n
    # Download and transfer the model checkpoints for 'vit_b_lm' and 'vit_b_lm_decoder'.
    # Next, verify the cache directory.
    > python -c "from micro_sam import util; print(util.microsam_cachedir())"
    /home/anwai/.cache/micro_sam

    # Create the 'models' folder in the cache directory
    > mkdir /home/anwai/.cache/micro_sam/models

    # Move the checkpoints to the models directory and rename them to the desired filenames.
    > mv vit_b.pt /home/anwai/.cache/micro_sam/models/vit_b_lm
    > mv vit_b_decoder.pt /home/anwai/.cache/micro_sam/models/vit_b_lm_decoder
    \n
    \n\n

    Usage questions

    \n\n

    \n\n

    1. I have some microscopy images. Can I use the annotator tool for segmenting them?

    \n\n

    Yes, you can use the annotator tool for:

    \n\n\n\n

    2. Which model should I use for my data?

    \n\n

    We currently provide three different kinds of models: the default models vit_h, vit_l, vit_b and vit_t; the models for light microscopy vit_l_lm, vit_b_lm and vit_t_lm; and the models for electron microscopy vit_l_em_organelles, vit_b_em_organelles and vit_t_em_organelles.\nYou should first try the model that best fits the segmentation task you're interested in: the lm models for cell or nucleus segmentation in light microscopy, or the em_organelles models for segmenting nuclei, mitochondria or other roundish organelles in electron microscopy.\nIf your segmentation problem does not meet these descriptions, or if these models don't work well, you should try one of the default models instead.\nThe letter after vit denotes the size of the image encoder in SAM, h (huge) being the largest and t (tiny) the smallest. The smaller models are faster but may yield worse results. We recommend using either a vit_l or vit_b model, as they offer the best trade-off between speed and segmentation quality.\nYou can find more information on model choice here.

    \n\n

    3. I have high-resolution microscopy images, micro_sam does not seem to work.

    \n\n

    The Segment Anything model expects inputs of shape 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, or sometimes not work at all. To address this, micro_sam implements tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles. You can activate tiling with the tile_shape parameter, which determines the size of the inner tile, and the halo parameter, which determines the size of the additional overlap.
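
    For example, a rough sketch of tiled automatic instance segmentation with the python library (the input path and the tile sizes are placeholders that you should adapt to your data):

    from micro_sam.automatic_segmentation import get_predictor_and_segmenter, automatic_instance_segmentation

    # is_tiled=True selects the tiled implementation of the segmenter.
    predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm", is_tiled=True)
    segmentation = automatic_instance_segmentation(
        predictor=predictor,
        segmenter=segmenter,
        input_path="large_image.tif",  # placeholder path to a large 2d image
        ndim=2,
        tile_shape=(1024, 1024),  # size of the inner tile
        halo=(256, 256),          # additional overlap between tiles
    )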

    \n\n\n\n
    \n

    NOTE: It's recommended to choose the halo so that it is larger than half of the maximal radius of the objects you want to segment.

    \n
    \n\n

    4. The computation of image embeddings takes very long in napari.

    \n\n

    micro_sam pre-computes the image embeddings produced by the vision transformer backbone in Segment Anything, and (optionally) stores them on disk. If you are using a CPU, this step can take a while for 3d data or time-series (you will see a progress bar in the command-line interface / on the bottom right of napari). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them over to your laptop / local machine to speed this up.
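
    For example, a minimal sketch for precomputing and caching the embeddings with the python library on a GPU machine (all paths are placeholders); the resulting zarr file can then be copied to your local machine and reused by the annotation tools:

    import imageio.v3 as imageio
    from micro_sam.util import get_sam_model, precompute_image_embeddings

    volume = imageio.imread("volume.tif")  # placeholder path to a 3d volume
    predictor = get_sam_model(model_type="vit_b_lm")
    # The embeddings are cached in the zarr file passed via save_path.
    precompute_image_embeddings(predictor, volume, save_path="embeddings.zarr", ndim=3)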

    \n\n\n\n

    5. Can I use micro_sam on a CPU?

    \n\n

    Most processing steps are very fast even on a CPU. However, the automatic segmentation step for the default Segment Anything models (typically called the \"Segment Anything\" feature or AMG - Automatic Mask Generation) takes several minutes without a GPU (depending on the image size). For large volumes and time-series, segmenting an object interactively in 3d / tracking across time can take a couple of seconds on a CPU (it is very fast with a GPU).

    \n\n
    \n

    HINT: All the tutorial videos have been created on CPU resources.

    \n
    \n\n

    6. I generated some segmentations from another tool, can I use it as a starting point in micro_sam?

    \n\n

    You can save and load the results from the committed_objects layer to correct segmentations you obtained from another tool (e.g. CellPose) or save intermediate annotation results. The results can be saved via File -> Save Selected Layers (s) ... in the napari menu-bar on top (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result parameter in the CLI or python script (2d and 3d segmentation).\nIf you are using an annotation tool you can load the segmentation you want to edit as a segmentation layer and rename it to committed_objects.
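
    For example, a minimal sketch of starting the 2d annotator with an existing segmentation as starting point (the file names are placeholders):

    import imageio.v3 as imageio
    from micro_sam.sam_annotator import annotator_2d

    image = imageio.imread("image.tif")                     # placeholder input image
    segmentation = imageio.imread("other_tool_result.tif")  # e.g. a result exported from another tool
    annotator_2d(image, segmentation_result=segmentation)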

    \n\n

    7. I am using micro_sam for segmenting objects. I would like to report the steps for reproducibility. How can this be done?

    \n\n

    The annotation steps and segmentation results can be saved to a Zarr file by providing the commit_path in the commit widget. This file will contain all relevant information to reproduce the segmentation.

    \n\n
    \n

    NOTE: This feature is still under development and we have not implemented rerunning the segmentation from this file yet. See this issue for details.

    \n
    \n\n

    8. I want to segment objects with complex structures. Both the default Segment Anything models and the micro_sam generalist models do not work for my data. What should I do?

    \n\n

    micro_sam supports interactive annotation using positive and negative point prompts, box prompts and polygon drawing. You can combine multiple types of prompts to improve the segmentation quality. In case the aforementioned suggestions do not work as desired, micro_sam also supports finetuning a model on your data (see the next section on finetuning). We recommend the following: a) check which of the provided models performs relatively well on your data, and b) choose the best model as the starting point to train your own specialist model for the desired segmentation task.

    \n\n

    9. I am using the annotation tool and napari outputs the following error: While emitting signal ... an error occurred in callback ... This is not a bug in psygnal. See ... above for details.

    \n\n

    These messages occur when an internal error happens in micro_sam. In most cases this is due to inconsistent annotations and you can fix them by clearing the annotations.\nWe want to remove these errors, so we would be very grateful if you can open an issue and describe the steps you did when encountering it.

    \n\n

    10. The objects are not segmented in my 3d data using the interactive annotation tool.

    \n\n

    The first thing to check is: a) make sure you are using the latest version of micro_sam (pull the latest commit from master if your installation is from source, or update the installation from conda using conda update micro_sam), and b) try out the steps from the 3d annotation tutorial video to check whether you see the same behaviour (or the same errors). For 3d images, it's important to pass the inputs in the python axis convention, ZYX.\nc) Try using a different model and change the projection mode for 3d segmentation. This is also explained in the video.

    \n\n

    11. I have very small or fine-grained structures in my high-resolution microscopic images. Can I use micro_sam to annotate them?

    \n\n

    Segment Anything does not work well for very small or fine-grained objects (e.g. filaments). In these cases, you could try to use tiling to improve results (see Point 3 above for details).

    \n\n

    12. napari seems to be very slow for large images.

    \n\n

    Editing (drawing / erasing) very large 2d images or 3d volumes is known to be slow at the moment, as the objects in the layers are stored in-memory. See the related issue.

    \n\n

    13. While computing the embeddings (and / or automatic segmentation), a window stating: \"napari\" is not responding pops up.

    \n\n

    This can happen for long-running computations. You just need to wait a bit longer and the computation will finish.

    \n\n

    14. I have 3D RGB microscopy volumes. How does micro_sam handle these images?

    \n\n

    micro_sam performs automatic segmentation in 3D volumes by first segmenting slices individually in 2D and then merging the segmentations across 3D based on the overlap of objects between slices. The expected shape of your 3D RGB volume is (Z * Y * X * 3). The reason: Segment Anything is designed for 3-channel inputs, so if you provide single-channel images, micro-sam replicates the channel to fit this requirement, while 3-channel inputs are used as-is in the expected RGB array structure.

    \n\n

    15. I want to use a model stored in a different directory than the micro_sam cache. How can I do this?

    \n\n

    The micro-sam CLIs for precomputation of image embeddings and annotators (Annotator 2d, Annotator 3d, Annotator Tracking, Image Series Annotator) accept the argument -c / --checkpoint to pass model checkpoints. If you start a micro-sam annotator from the napari plugin menu, you can provide the path to model checkpoints in the annotator widget (on right) under Embedding Settings drop-down in the custom weights path option.

    \n\n

    NOTE: It is important to choose the correct model type when you opt for the above recommendation, using the -m / --model_type argument or selecting it from the Model dropdown in Embedding Settings respectively. Otherwise you will face parameter mismatch issues.

    \n\n

    16. Some parameters in the annotator / finetuning widget are unclear to me.

    \n\n

    micro-sam has tooltips for menu options across all widgets (i.e. an information window will appear if you hover over the name of the menu option), which briefly describe the utility of the specific menu option.

    \n\n

    Fine-tuning questions

    \n\n

    1. I have a microscopy dataset I would like to fine-tune Segment Anything for. Is it possible using micro_sam?

    \n\n

    Yes, you can fine-tune Segment Anything on your own dataset. Here's how you can do it:

    \n\n\n\n

    2. I would like to fine-tune Segment Anything on open-source cloud services (e.g. Kaggle Notebooks), is it possible?

    \n\n

    Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and BAND). Check out our tutorial notebook for this.

    \n\n

    \n\n

    3. What kind of annotations do I need to finetune Segment Anything?

    \n\n

    Annotations refer to the instance segmentation labels, i.e. each object of interest in your microscopy images has an individual id to uniquely identify all segmented objects. You can obtain them with micro_sam's annotation tools. For finetuning Segment Anything with the additional decoder, dense segmentations are expected (i.e. all objects per image are annotated), whereas sparse segmentations (i.e. only a few objects per image are annotated) are fine for finetuning Segment Anything without the additional decoder.

    \n\n

    4. I have finetuned Segment Anything on my microscopy data. How can I use it for annotating new images?

    \n\n

    You can load your finetuned model by entering the path to its checkpoint in the custom_weights_path field in the Embedding Settings drop-down menu.\nIf you are using the python library or CLI you can specify this path with the checkpoint_path parameter.
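
    For example, a minimal sketch of loading a finetuned checkpoint with the python library (the checkpoint path is a placeholder, and the model_type must match the model you finetuned):

    from micro_sam.util import get_sam_model

    predictor = get_sam_model(
        model_type="vit_b",  # must match the model type you finetuned
        checkpoint_path="/path/to/checkpoints/sam_vit_b_finetuned/best.pt",  # placeholder path
    )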

    \n\n

    5. What is the background of the new AIS (Automatic Instance Segmentation) feature in micro_sam?

    \n\n

    micro_sam introduces a new segmentation decoder to the Segment Anything backbone to enable faster and more accurate automatic instance segmentation. It predicts the distances to the object center and boundary as well as the foreground, and performs seeded watershed-based postprocessing to obtain the instances.

    \n\n

    6. I want to finetune only the Segment Anything model without the additional instance decoder.

    \n\n

    The instance segmentation decoder is optional. So you can finetune either only SAM, or SAM together with the additional decoder. Finetuning with the decoder increases training times, but enables you to use AIS. See this example for finetuning with both objectives.

    \n\n
    \n

    NOTE: To try the reverse setup, i.e. the automatic instance segmentation framework without the interactive capability (a UNETR: a vision transformer encoder with a convolutional decoder), you can take inspiration from this example on LIVECell.

    \n
    \n\n

    7. I have an NVIDIA RTX 4090Ti GPU with 24GB VRAM. Can I finetune Segment Anything?

    \n\n

    Finetuning Segment Anything is possible on most consumer-grade GPU and CPU resources (though training is a lot slower on the CPU). For the mentioned resource, it should be possible to finetune a ViT Base (also abbreviated as vit_b) by reducing the number of objects per image to 15.\nThis parameter has the biggest impact on the VRAM consumption and quality of the finetuned model.\nYou can find an overview of the resources we have tested for finetuning here.\nWe also provide the convenience function micro_sam.training.train_sam_for_configuration that selects the best training settings for these configurations. This function is also used by the finetuning UI.
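
    As a rough sketch of what this looks like with micro_sam.training.train_sam (the run name is a placeholder, and train_loader / val_loader stand for your PyTorch dataloaders, e.g. created as described in question 8 below):

    import micro_sam.training as sam_training

    sam_training.train_sam(
        name="sam_vit_b_finetuned",      # placeholder name for the saved checkpoint
        model_type="vit_b",
        train_loader=train_loader,       # placeholder: your training dataloader
        val_loader=val_loader,           # placeholder: your validation dataloader
        n_objects_per_batch=15,          # the main knob for VRAM consumption
        with_segmentation_decoder=True,  # set to False to train without the extra decoder
    )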

    \n\n

    8. I want to create a dataloader for my data, to finetune Segment Anything.

    \n\n

    Thanks to torch-em: a) creating PyTorch datasets and dataloaders using the python library is convenient and supported for various data formats and data structures.\nSee the tutorial notebook on how to create dataloaders using torch-em and the documentation for details on creating your own datasets and dataloaders; and b) finetuning using the napari tool eases this process by allowing you to set the input parameters (paths to the directories with inputs and labels etc.) directly in the tool.
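
    For example, a rough sketch of creating a training dataloader from folders of tif images and corresponding instance labels (all paths, keys and the patch shape are placeholders that depend on your data layout):

    import micro_sam.training as sam_training

    train_loader = sam_training.default_sam_loader(
        raw_paths="data/train/images", raw_key="*.tif",      # placeholder image folder and glob pattern
        label_paths="data/train/labels", label_key="*.tif",  # placeholder label folder and glob pattern
        patch_shape=(512, 512),          # placeholder patch shape
        batch_size=1,
        with_segmentation_decoder=True,  # prepare the labels for training with the extra decoder
        shuffle=True,
    )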

    \n\n
    \n

    NOTE: If you have images with large input shapes and a sparse density of instance segmentations, we recommend using a sampler to choose the patches with valid segmentation for finetuning (see the example for the PlantSeg (Root) specialist model in micro_sam).

    \n
    \n\n

    9. How can I evaluate a model I have finetuned?

    \n\n

    To validate a Segment Anything model for your data, you have different options, depending on the task you want to solve and whether you have segmentation annotations for your data.

    \n\n\n\n

    We provide an example notebook that shows how to use this evaluation functionality.
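
    For example, if your ground-truth labels and predictions are stored as tif files, a minimal sketch using micro_sam.evaluation.evaluation.run_evaluation could look like this (all paths are placeholders):

    from glob import glob
    from micro_sam.evaluation.evaluation import run_evaluation

    gt_paths = sorted(glob("data/ground_truth/*.tif"))            # placeholder label paths
    prediction_paths = sorted(glob("results/predictions/*.tif"))  # placeholder prediction paths
    results = run_evaluation(gt_paths, prediction_paths, save_path="results/evaluation.csv")
    print(results)  # a DataFrame with the evaluation results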

    \n\n

    Contribution Guide

    \n\n\n\n

    Discuss your ideas

    \n\n

    We welcome new contributions! First, discuss your idea by opening a new issue in micro-sam.\nThis allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

    \n\n

    Clone the repository

    \n\n

    We use git for version control.

    \n\n

    Clone the repository, and checkout the development branch:

    \n\n
    \n
    $ git clone https://github.com/computational-cell-analytics/micro-sam.git
    $ cd micro-sam
    $ git checkout dev
    \n
    \n\n

    Create your development environment

    \n\n

    We use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.

    \n\n

    Now you can create the environment, install user and developer dependencies, and micro-sam as an editable installation:

    \n\n
    \n
    conda env create -f environment.yaml
    conda activate sam
    python -m pip install -r requirements-dev.txt
    python -m pip install -e .
    \n
    \n\n

    Make your changes

    \n\n

    Now it's time to make your code changes.

    \n\n

    Typically, changes are made branching off from the development branch. Checkout dev and then create a new branch to work on your changes.

    \n\n
    $ git checkout dev
    $ git checkout -b my-new-feature
    \n\n

    We use google style python docstrings to create documentation for all new code.

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Testing

    \n\n

    Run the tests

    \n\n

    The tests for micro-sam are run with pytest.

    \n\n

    To run the tests:

    \n\n
    \n
    $ pytest\n
    \n
    \n\n

    Writing your own tests

    \n\n

    If you have written new code, you will need to write tests to go with it.

    \n\n

    Unit tests

    \n\n

    Unit tests are the preferred style of tests for user contributions. Unit tests check small, isolated parts of the code for correctness. If your code is too complicated to write unit tests easily, you may need to consider breaking it up into smaller functions that are easier to test.
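
    For example, a small pytest-style unit test for a (hypothetical) helper function could look like this:

    import numpy as np


    def relabel_sequential_ids(segmentation):
        """Hypothetical helper that relabels instance ids (with background 0) to consecutive ids 1..N."""
        relabeled = np.zeros_like(segmentation)
        for new_id, old_id in enumerate(np.unique(segmentation)[1:], start=1):
            relabeled[segmentation == old_id] = new_id
        return relabeled


    def test_relabel_sequential_ids():
        segmentation = np.array([[0, 2, 2], [0, 5, 5], [9, 9, 0]])
        relabeled = relabel_sequential_ids(segmentation)
        # The background stays 0 and the object ids become consecutive.
        assert set(np.unique(relabeled)) == {0, 1, 2, 3}
        assert (relabeled > 0).sum() == (segmentation > 0).sum()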

    \n\n

    Tests involving napari

    \n\n

    In cases where tests must use the napari viewer, these tips might be helpful (in particular, the make_napari_viewer_proxy fixture).

    \n\n

    These kinds of tests should be used only in limited circumstances. Developers are advised to prefer smaller unit tests, and avoid integration tests wherever possible.

    \n\n

    Code coverage

    \n\n

    Pytest uses the pytest-cov plugin to automatically determine which lines of code are covered by tests.

    \n\n

    A short summary report is printed to the terminal output whenever you run pytest. The full results are also automatically written to a file named coverage.xml.

    \n\n

    The Coverage Gutters VSCode extension is useful for visualizing which parts of the code need better test coverage. PyCharm professional has a similar feature, and you may be able to find similar tools for your preferred editor.

    \n\n

    We also use codecov.io to display the code coverage results from our Github Actions continuous integration.

    \n\n

    Open a pull request

    \n\n

    Once you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.

    \n\n

    Remember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev branch like this.

    \n\n

    Optional: Build the documentation

    \n\n

    We use pdoc to build the documentation.

    \n\n

    To build the documentation locally, run this command:

    \n\n
    \n
    $ python build_doc.py\n
    \n
    \n\n

    This will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).

    \n\n

    If you want to save the HTML files, append --out to the command, like this:

    \n\n
    \n
    $ python build_doc.py --out\n
    \n
    \n\n

    This will save the HTML files into a new directory named tmp.

    \n\n

    You can add content to the documentation in two ways:

    \n\n
      \n
    1. By adding or updating google style python docstrings in the micro-sam code.
       • pdoc will automatically find and include docstrings in the documentation.
    2. By adding or editing markdown files in the micro-sam doc directory.
       • If you add a new markdown file to the documentation, you must tell pdoc that it exists by adding a line to the micro_sam/__init__.py module docstring (eg: .. include:: ../doc/my_amazing_new_docs_page.md). Otherwise it will not be included in the final documentation build!
    \n\n

    Optional: Benchmark performance

    \n\n

    There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.

    \n\n\n\n

    Run the benchmark script

    \n\n

    There is a performance benchmark script available in the micro-sam repository at development/benchmark.py.

    \n\n

    To run the benchmark script:

    \n\n
    \n
    $ python development/benchmark.py --model_type vit_t --device cpu
    \n
    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    \n
    $ python development/benchmark.py --help\n
    \n
    \n\n

    Line profiling

    \n\n

    For more detailed line by line performance results, we can use line-profiler.

    \n\n
    \n

    line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

    \n
    \n\n

    To do line-by-line profiling:

    \n\n
      \n
    1. Ensure you have line_profiler installed: python -m pip install line_profiler
    2. Add the @profile decorator to any function in the call stack
    3. Run kernprof -lv benchmark.py --model_type vit_t --device cpu
    \n\n

    For more details about how to use line-profiler and kernprof, see the documentation.

    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    \n
    $ python development/benchmark.py --help\n
    \n
    \n\n

    Snakeviz visualization

    \n\n

    For more detailed visualizations of profiling results, we use snakeviz.

    \n\n
    \n

    SnakeViz is a browser based graphical viewer for the output of Python's cProfile module.

    \n
    \n\n
      \n
    1. Ensure you have snakeviz installed: python -m pip install snakeviz
    2. Generate profile file: python -m cProfile -o program.prof benchmark.py --model_type vit_h --device cpu
    3. Visualize profile file: snakeviz program.prof
    \n\n

    For more details about how to use snakeviz, see the documentation.

    \n\n

    Memory profiling with memray

    \n\n

    If you need to investigate memory use specifically, we use memray.

    \n\n
    \n

    Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.

    \n
    \n\n

    For more details about how to use memray, see the documentation.

    \n\n

    Creating a new release

    \n\n

    To create a new release you have to edit the version number in micro_sam/__version__.py in a PR. After merging this PR the release will automatically be done by the CI.

    \n\n

    Using micro_sam on BAND

    \n\n

    BAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam is installed there.\nIn order to use BAND and start micro_sam on it follow these steps:

    \n\n

    Start BAND

    \n\n\n\n


    \n\n

    Start micro_sam in BAND

    \n\n\n\n

    Transfering data to BAND

    \n\n

    To copy data to and from BAND you can use any cloud storage, e.g. ownCloud, Dropbox or Google Drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit differently in BAND:

    \n\n\n\n

    The video below shows how to copy over a link from owncloud and then download the data on BAND using copy and paste:

    \n\n

    https://github.com/computational-cell-analytics/micro-sam/assets/4263537/825bf86e-017e-41fc-9e42-995d21203287

    \n"}, {"fullname": "micro_sam.automatic_segmentation", "modulename": "micro_sam.automatic_segmentation", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.automatic_segmentation.get_predictor_and_segmenter", "modulename": "micro_sam.automatic_segmentation", "qualname": "get_predictor_and_segmenter", "kind": "function", "doc": "

    Get the Segment Anything model and class for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The Segment Anything model.\n The automatic instance segmentation class.

    \n
    \n", "signature": "(\tmodel_type: str,\tcheckpoint: Union[os.PathLike, str, NoneType] = None,\tdevice: str = None,\tamg: Optional[bool] = None,\tis_tiled: bool = False,\t**kwargs) -> Tuple[mobile_sam.predictor.SamPredictor, Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]]:", "funcdef": "def"}, {"fullname": "micro_sam.automatic_segmentation.automatic_instance_segmentation", "modulename": "micro_sam.automatic_segmentation", "qualname": "automatic_instance_segmentation", "kind": "function", "doc": "

    Run automatic segmentation for the input image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segmentation result.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tinput_path: Union[os.PathLike, str, numpy.ndarray],\toutput_path: Union[os.PathLike, str, NoneType] = None,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\tkey: Optional[str] = None,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\t**generate_kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio", "modulename": "micro_sam.bioimageio", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.bioengine_export", "modulename": "micro_sam.bioimageio.bioengine_export", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.bioengine_export.ENCODER_CONFIG", "modulename": "micro_sam.bioimageio.bioengine_export", "qualname": "ENCODER_CONFIG", "kind": "variable", "doc": "

    \n", "default_value": "'name: "%s"\\nbackend: "pytorch"\\nplatform: "pytorch_libtorch"\\n\\nmax_batch_size : 1\\ninput [\\n {\\n name: "input0__0"\\n data_type: TYPE_FP32\\n dims: [3, -1, -1]\\n }\\n]\\noutput [\\n {\\n name: "output0__0"\\n data_type: TYPE_FP32\\n dims: [256, 64, 64]\\n }\\n]\\n\\nparameters: {\\n key: "INFERENCE_MODE"\\n value: {\\n string_value: "true"\\n }\\n}'"}, {"fullname": "micro_sam.bioimageio.bioengine_export.DECODER_CONFIG", "modulename": "micro_sam.bioimageio.bioengine_export", "qualname": "DECODER_CONFIG", "kind": "variable", "doc": "

    \n", "default_value": "'name: "%s"\\nbackend: "onnxruntime"\\nplatform: "onnxruntime_onnx"\\n\\nparameters: {\\n key: "INFERENCE_MODE"\\n value: {\\n string_value: "true"\\n }\\n}\\n\\ninstance_group {\\n count: 1\\n kind: KIND_CPU\\n}'"}, {"fullname": "micro_sam.bioimageio.bioengine_export.export_image_encoder", "modulename": "micro_sam.bioimageio.bioengine_export", "qualname": "export_image_encoder", "kind": "function", "doc": "

    Export SAM image encoder to torchscript.

    \n\n

    The torchscript image encoder can be used for predicting image embeddings\nwith a backend, e.g. with the bioengine.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tmodel_type: str,\toutput_root: Union[str, os.PathLike],\texport_name: Optional[str] = None,\tcheckpoint_path: Optional[str] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.bioengine_export.export_onnx_model", "modulename": "micro_sam.bioimageio.bioengine_export", "qualname": "export_onnx_model", "kind": "function", "doc": "

    Export SAM prompt encoder and mask decoder to onnx.

    \n\n

    The onnx encoder and decoder can be used for interactive segmentation in the browser.\nThis code is adapted from\nhttps://github.com/facebookresearch/segment-anything/blob/main/scripts/export_onnx_model.py

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tmodel_type,\toutput_root,\topset: int,\texport_name: Optional[str] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\treturn_single_mask: bool = True,\tgelu_approximate: bool = False,\tuse_stability_score: bool = False,\treturn_extra_metrics: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.bioengine_export.export_bioengine_model", "modulename": "micro_sam.bioimageio.bioengine_export", "qualname": "export_bioengine_model", "kind": "function", "doc": "

    Export SAM model to a format compatible with the BioEngine.

    \n\n

    The bioengine enables running the\nimage encoder on an online backend, so that SAM can be used in an online tool, or to predict\nthe image embeddings via the online backend rather than on CPU.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tmodel_type,\toutput_root,\topset: int,\texport_name: Optional[str] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\treturn_single_mask: bool = True,\tgelu_approximate: bool = False,\tuse_stability_score: bool = False,\treturn_extra_metrics: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.model_export", "modulename": "micro_sam.bioimageio.model_export", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export.DEFAULTS", "modulename": "micro_sam.bioimageio.model_export", "qualname": "DEFAULTS", "kind": "variable", "doc": "

    \n", "default_value": "{'authors': [Author(affiliation='University Goettingen', email=None, orcid=None, name='Anwai Archit', github_user='anwai98'), Author(affiliation='University Goettingen', email=None, orcid=None, name='Constantin Pape', github_user='constantinpape')], 'description': 'Finetuned Segment Anything Model for Microscopy', 'cite': [CiteEntry(text='Archit et al. Segment Anything for Microscopy', doi='10.1101/2023.08.21.554208', url=None)], 'tags': ['segment-anything', 'instance-segmentation']}"}, {"fullname": "micro_sam.bioimageio.model_export.export_sam_model", "modulename": "micro_sam.bioimageio.model_export", "qualname": "export_sam_model", "kind": "function", "doc": "

    Export SAM model to BioImage.IO model format.

    \n\n

    The exported model can be uploaded to bioimage.io and\nbe used in tools that support the BioImage.IO model format.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\timage: numpy.ndarray,\tlabel_image: numpy.ndarray,\tmodel_type: str,\tname: str,\toutput_path: Union[str, os.PathLike],\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor", "kind": "class", "doc": "

    Wrapper around the SamPredictor.

    \n\n

    This model supports the same functionality as SamPredictor and can provide mask segmentations\nfrom box, point or mask input prompts.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.__init__", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(model_type: str)"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.sam", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.load_state_dict", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.load_state_dict", "kind": "function", "doc": "

    Copy parameters and buffers from state_dict into this module and its descendants.

    \n\n

    If strict is True, then\nthe keys of state_dict must exactly match the keys returned\nby this module's ~torch.nn.Module.state_dict() function.

    \n\n
    \n\n

    If assign is True the optimizer must be created after\nthe call to load_state_dict unless\n~torch.__future__.get_swap_module_params_on_conversion() is True.

    \n\n
    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    NamedTuple with missing_keys and unexpected_keys fields:\n * missing_keys is a list of str containing any keys that are expected\n by this module but missing from the provided state_dict.\n * unexpected_keys is a list of str containing the keys that are not\n expected by this module but present in the provided state_dict.

    \n
    \n\n
    Note:
    \n\n
    \n

    If a parameter or buffer is registered as None and its corresponding key\n exists in state_dict, load_state_dict() will raise a\n RuntimeError.

    \n
    \n", "signature": "(self, state):", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.forward", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.forward", "kind": "function", "doc": "
    Arguments:
    \n\n\n\n

    Returns:

    \n", "signature": "(\tself,\timage: torch.Tensor,\tbox_prompts: Optional[torch.Tensor] = None,\tpoint_prompts: Optional[torch.Tensor] = None,\tpoint_labels: Optional[torch.Tensor] = None,\tmask_prompts: Optional[torch.Tensor] = None,\tembeddings: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation", "modulename": "micro_sam.evaluation", "kind": "module", "doc": "

    Functionality for evaluating Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.benchmark_datasets", "modulename": "micro_sam.evaluation.benchmark_datasets", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.LM_2D_DATASETS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "LM_2D_DATASETS", "kind": "variable", "doc": "

    \n", "default_value": "['livecell', 'deepbacs', 'tissuenet', 'neurips_cellseg', 'dynamicnuclearnet', 'hpa', 'covid_if', 'pannuke', 'lizard', 'orgasegment', 'omnipose', 'dic_hepg2']"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.LM_3D_DATASETS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "LM_3D_DATASETS", "kind": "variable", "doc": "

    \n", "default_value": "['plantseg_root', 'plantseg_ovules', 'gonuclear', 'mouse_embryo', 'embegseg', 'cellseg3d']"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.EM_2D_DATASETS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "EM_2D_DATASETS", "kind": "variable", "doc": "

    \n", "default_value": "['mitolab_tem']"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.EM_3D_DATASETS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "EM_3D_DATASETS", "kind": "variable", "doc": "

    \n", "default_value": "['mitoem_rat', 'mitoem_human', 'platynereis_nuclei', 'lucchi', 'mitolab', 'nuc_mm_mouse', 'num_mm_zebrafish', 'uro_cell', 'sponge_em', 'platynereis_cilia', 'vnc', 'asem_mito']"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.DATASET_RETURNS_FOLDER", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "DATASET_RETURNS_FOLDER", "kind": "variable", "doc": "

    \n", "default_value": "{'deepbacs': '*.tif'}"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.DATASET_CONTAINER_KEYS", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "DATASET_CONTAINER_KEYS", "kind": "variable", "doc": "

    \n", "default_value": "{'lucchi': ['raw', 'labels']}"}, {"fullname": "micro_sam.evaluation.benchmark_datasets.run_benchmark_evaluations", "modulename": "micro_sam.evaluation.benchmark_datasets", "qualname": "run_benchmark_evaluations", "kind": "function", "doc": "

    Run evaluation for benchmarking Segment Anything models on microscopy datasets.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tinput_folder: Union[os.PathLike, str],\tdataset_choice: str,\tmodel_type: str = 'vit_l',\toutput_folder: Union[os.PathLike, str, NoneType] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\trun_amg: bool = False,\tretain: Optional[List[str]] = None,\tignore_warnings: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation", "modulename": "micro_sam.evaluation.evaluation", "kind": "module", "doc": "

    Evaluation functionality for segmentation predictions from micro_sam.evaluation.automatic_mask_generation\nand micro_sam.evaluation.inference.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation", "kind": "function", "doc": "

    Run evaluation for instance segmentation predictions.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_paths: List[Union[str, os.PathLike]],\tsave_path: Union[str, os.PathLike, NoneType] = None,\tverbose: bool = True) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation_for_iterative_prompting", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation_for_iterative_prompting", "kind": "function", "doc": "

    Run evaluation for iterative prompt-based segmentation predictions.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_root: Union[os.PathLike, str],\texperiment_folder: Union[os.PathLike, str],\tstart_with_box_prompt: bool = False,\toverwrite_results: bool = False,\tuse_masks: bool = False) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments", "modulename": "micro_sam.evaluation.experiments", "kind": "module", "doc": "

    Predefined experiment settings for experiments with different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.experiments.ExperimentSetting", "modulename": "micro_sam.evaluation.experiments", "qualname": "ExperimentSetting", "kind": "variable", "doc": "

    \n", "default_value": "typing.Dict"}, {"fullname": "micro_sam.evaluation.experiments.full_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "full_experiment_settings", "kind": "function", "doc": "

    The full experiment settings.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "(\tuse_boxes: bool = False,\tpositive_range: Optional[List[int]] = None,\tnegative_range: Optional[List[int]] = None) -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.default_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "default_experiment_settings", "kind": "function", "doc": "

    The three default experiment settings.

    \n\n

    For the default experiments we use a single positive prompt,\ntwo positive and four negative prompts and box prompts.

    \n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "() -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.get_experiment_setting_name", "modulename": "micro_sam.evaluation.experiments", "qualname": "get_experiment_setting_name", "kind": "function", "doc": "

    Get the name for the given experiment setting.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The name for this experiment setting.

    \n
    \n", "signature": "(setting: Dict) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference", "modulename": "micro_sam.evaluation.inference", "kind": "module", "doc": "

    Inference with Segment Anything models and different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_embeddings", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_embeddings", "kind": "function", "doc": "

    Precompute all image embeddings.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_prompts", "kind": "function", "doc": "

    Precompute all point prompts.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprompt_save_dir: Union[str, os.PathLike],\tprompt_settings: List[Dict[str, Any]]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_prompts", "kind": "function", "doc": "

    Run segment anything inference for multiple images using prompts derived from groundtruth.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: int,\tn_negatives: int,\tdilation: int = 5,\tprompt_save_dir: Union[str, os.PathLike, NoneType] = None,\tbatch_size: int = 512) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_iterative_prompting", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_iterative_prompting", "kind": "function", "doc": "

    Run Segment Anything inference for multiple images using prompts iteratively\nderived from model outputs and ground-truth.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tstart_with_box_prompt: bool = True,\tdilation: int = 5,\tbatch_size: int = 32,\tn_iterations: int = 8,\tuse_masks: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_amg", "modulename": "micro_sam.evaluation.inference", "qualname": "run_amg", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tpeft_kwargs: Optional[Dict] = None,\tcache_embeddings: bool = False) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.inference", "qualname": "run_instance_segmentation_with_decoder", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tpeft_kwargs: Optional[Dict] = None,\tcache_embeddings: bool = False) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation", "modulename": "micro_sam.evaluation.instance_segmentation", "kind": "module", "doc": "

    Inference and evaluation for the automatic instance segmentation functionality.

    \n"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_amg", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_amg", "kind": "function", "doc": "

    Default grid-search parameter for AMG-based instance segmentation.

    \n\n

    Return grid search values for the two most important parameters:

    \n\n\n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Default grid-search parameter for decoder-based instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for automatic mask generation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\n'grid_search_values' argument. For example, to run a grid search over the parameters 'pred_iou_thresh'\nand 'stability_score_thresh', you can pass the following:

    \n\n
    grid_search_values = {\n    \"pred_iou_thresh\": [0.6, 0.7, 0.8, 0.9],\n    \"stability_score_thresh\": [0.6, 0.7, 0.8, 0.9],\n}\n
    \n\n

    All combinations of the parameters will be checked.

    \n\n

    You can use the functions default_grid_search_values_instance_segmentation_with_decoder\nor default_grid_search_values_amg to get the default grid search parameters for the two\nrespective instance segmentation methods.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tresult_dir: Union[str, os.PathLike],\tembedding_dir: Union[str, os.PathLike, NoneType],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = False,\timage_key: Optional[str] = None,\tgt_key: Optional[str] = None,\trois: Optional[Tuple[slice, ...]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_inference", "kind": "function", "doc": "

    Run inference for automatic mask generation.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike, NoneType],\tprediction_dir: Union[str, os.PathLike],\tgenerate_kwargs: Optional[Dict[str, Any]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.evaluate_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "evaluate_instance_segmentation_grid_search", "kind": "function", "doc": "

    Evaluate gridsearch results.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The best parameter setting.\n The evaluation score for the best setting.

    \n
    \n", "signature": "(\tresult_dir: Union[str, os.PathLike],\tgrid_search_parameters: List[str],\tcriterion: str = 'mSA') -> Tuple[Dict[str, Any], float]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.save_grid_search_best_params", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "save_grid_search_best_params", "kind": "function", "doc": "

    \n", "signature": "(best_kwargs, best_msa, grid_search_result_dir=None):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search_and_inference", "kind": "function", "doc": "

    Run grid search and inference for automatic mask generation.

    \n\n

    Please refer to the documentation of run_instance_segmentation_grid_search\nfor details on how to specify the grid search parameters.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike, NoneType],\tprediction_dir: Union[str, os.PathLike],\texperiment_folder: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell", "modulename": "micro_sam.evaluation.livecell", "kind": "module", "doc": "

    Inference and evaluation for the LIVECell dataset and\nthe different cell lines contained in it.

    \n"}, {"fullname": "micro_sam.evaluation.livecell.CELL_TYPES", "modulename": "micro_sam.evaluation.livecell", "qualname": "CELL_TYPES", "kind": "variable", "doc": "

    \n", "default_value": "['A172', 'BT474', 'BV2', 'Huh7', 'MCF7', 'SHSY5Y', 'SkBr3', 'SKOV3']"}, {"fullname": "micro_sam.evaluation.livecell.livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "livecell_inference", "kind": "function", "doc": "

    Run inference for livecell with a fixed prompt setting.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: Optional[int] = None,\tn_negatives: Optional[int] = None,\tprompt_folder: Union[os.PathLike, str, NoneType] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_precompute_embeddings", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_precompute_embeddings", "kind": "function", "doc": "

    Run precomputation of val and test image embeddings for livecell.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tn_val_per_cell_type: int = 25) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_iterative_prompting", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_iterative_prompting", "kind": "function", "doc": "

    Run inference on livecell with iterative prompting setting.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tstart_with_box: bool = False,\tuse_masks: bool = False) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_amg", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_amg", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_inference", "kind": "function", "doc": "

    Run LIVECell inference with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_evaluation", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_evaluation", "kind": "function", "doc": "

    Run LIVECell evaluation with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "kind": "module", "doc": "

    Functionality for qualitative comparison of Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.model_comparison.generate_data_for_model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "generate_data_for_model_comparison", "kind": "function", "doc": "

    Generate samples for qualitative model comparison.

    \n\n

    This precomputes the input for model_comparison and model_comparison_with_napari.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tloader: torch.utils.data.dataloader.DataLoader,\toutput_folder: Union[str, os.PathLike],\tmodel_type1: str,\tmodel_type2: str,\tn_samples: int,\tmodel_type3: Optional[str] = None,\tcheckpoint1: Union[str, os.PathLike, NoneType] = None,\tcheckpoint2: Union[str, os.PathLike, NoneType] = None,\tcheckpoint3: Union[str, os.PathLike, NoneType] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison", "kind": "function", "doc": "

    Create images for a qualitative model comparison.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\toutput_folder: Union[str, os.PathLike],\tn_images_per_sample: int,\tmin_size: int,\tplot_folder: Union[str, os.PathLike, NoneType] = None,\tpoint_radius: int = 4,\toutline_dilation: int = 0,\thave_model3=False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison_with_napari", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison_with_napari", "kind": "function", "doc": "

    Use napari to display the qualitative comparison results for two models.

    \n\n
    Arguments:
    \n\n\n", "signature": "(output_folder: Union[str, os.PathLike], show_points: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.default_grid_search_values_multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "default_grid_search_values_multi_dimensional_segmentation", "kind": "function", "doc": "

    Default grid-search parameters for multi-dimensional prompt-based instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_threshold_values: Optional[List[float]] = None,\tprojection_method_values: Union[str, dict, NoneType] = None,\tbox_extension_values: Union[float, int, NoneType] = None) -> Dict[str, List]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.segment_slices_from_ground_truth", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "segment_slices_from_ground_truth", "kind": "function", "doc": "

    Segment all objects in a volume by prompt-based segmentation in one slice per object.

    \n\n

    This function first segments each object in its specified slice using interactive\n(prompt-based) segmentation functionality. Then it segments the object in the\nremaining slices of the volume.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\tsave_path: Union[os.PathLike, str, NoneType] = None,\tiou_threshold: float = 0.8,\tprojection: Union[str, dict] = 'mask',\tbox_extension: Union[float, int] = 0.025,\tdevice: Union[str, torch.device] = None,\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\treturn_segmentation: bool = False,\tmin_size: int = 0,\tevaluation_metric: Literal['sa', 'dice'] = 'sa') -> Union[float, Tuple[numpy.ndarray, float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.run_multi_dimensional_segmentation_grid_search", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "run_multi_dimensional_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for prompt-based multi-dimensional instance segmentation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\ngrid_search_values argument. For example, to run a grid search over the parameters iou_threshold,\nprojection and box_extension, you can pass the following:

    \n\n
    grid_search_values = {\n    \"iou_threshold\": [0.5, 0.6, 0.7, 0.8, 0.9],\n    \"projection\": [\"mask\", \"box\", \"points\"],\n    \"box_extension\": [0, 0.1, 0.2, 0.3, 0.4, 0.5],\n}\n
    \n\n

    All combinations of the parameters will be checked.\nIf None is passed, the function default_grid_search_values_multi_dimensional_segmentation is used\nto get the default grid search parameters for the instance segmentation method.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tembedding_path: Union[os.PathLike, str, NoneType],\tresult_dir: Union[str, os.PathLike],\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\tgrid_search_values: Optional[Dict[str, List]] = None,\tmin_size: int = 0,\tevaluation_metric: Literal['sa', 'dice'] = 'sa'):", "funcdef": "def"}, {"fullname": "micro_sam.inference", "modulename": "micro_sam.inference", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.inference.batched_inference", "modulename": "micro_sam.inference", "qualname": "batched_inference", "kind": "function", "doc": "

    Run batched inference for input prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage: numpy.ndarray,\tbatch_size: int,\tboxes: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tpoint_labels: Optional[numpy.ndarray] = None,\tmultimasking: bool = False,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\treturn_instance_segmentation: bool = True,\tsegmentation_ids: Optional[list] = None,\treduce_multimasking: bool = True,\tlogits_masks: Optional[torch.Tensor] = None,\tverbose_embeddings: bool = True):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation", "modulename": "micro_sam.instance_segmentation", "kind": "module", "doc": "

    Automated instance segmentation functionality.\nThe classes implemented here extend the automatic instance segmentation from Segment Anything:\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html

    \n"}, {"fullname": "micro_sam.instance_segmentation.mask_data_to_segmentation", "modulename": "micro_sam.instance_segmentation", "qualname": "mask_data_to_segmentation", "kind": "function", "doc": "

    Convert the output of the automatic mask generation to an instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation.

    \n
    \n", "signature": "(\tmasks: List[Dict[str, Any]],\twith_background: bool,\tmin_object_size: int = 0,\tmax_object_size: Optional[int] = None,\tlabel_masks: bool = True) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase", "kind": "class", "doc": "

    Base class for the automatic mask generators.

    \n", "bases": "abc.ABC"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_list", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_list", "kind": "variable", "doc": "

    The list of mask data after initialization.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_boxes", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_boxes", "kind": "variable", "doc": "

    The list of crop boxes.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.original_size", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.original_size", "kind": "variable", "doc": "

    The original image size.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.get_state", "kind": "function", "doc": "

    Get the initialized state of the mask generator.

    \n\n
    Returns:
    \n\n
    \n

    State of the mask generator.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.set_state", "kind": "function", "doc": "

    Set the state of the mask generator.

    \n\n
    Arguments:
    \n\n\n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.clear_state", "kind": "function", "doc": "

    Clear the state of the mask generator.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    This class implements the same logic as\nhttps://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py\nIt decouples the computationally expensive steps of generating masks from the cheap post-processing\nthat filters these masks, to enable grid search and interactively changing the post-processing.

    \n\n

    Use this class as follows:

    \n\n
    \n
    amg = AutomaticMaskGenerator(predictor)\namg.initialize(image)  # Initialize the masks, this takes care of all expensive computations.\nmasks = amg.generate(pred_iou_thresh=0.8)  # Generate the masks. This is fast and enables testing parameters\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "AMGBase"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: Optional[int] = None,\tcrop_n_layers: int = 0,\tcrop_overlap_ratio: float = 0.3413333333333333,\tcrop_n_points_downscale_factor: int = 1,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tbox_nms_thresh: float = 0.7,\tcrop_nms_thresh: float = 0.7,\tmin_mask_region_area: int = 0,\toutput_mode: str = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    Implements the same functionality as AutomaticMaskGenerator but for tiled embeddings.

    \n\n
    Arguments:
    \n\n\n", "bases": "AutomaticMaskGenerator"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: int = 64,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter", "kind": "class", "doc": "

    Adapter to contain the UNETR decoder in a single module.

    \n\n

    It applies the decoder on top of pre-computed embeddings for\nthe segmentation functionality.\nSee also: https://github.com/constantinpape/torch-em/blob/main/torch_em/model/unetr.py

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(unetr)"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.base", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.base", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.out_conv", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.out_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv_out", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv_out", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder_head", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder_head", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.final_activation", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.final_activation", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.postprocess_masks", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.postprocess_masks", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv1", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv1", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv2", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv3", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv3", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv4", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv4", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.forward", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, input_, input_shape, original_shape):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_unetr", "modulename": "micro_sam.instance_segmentation", "qualname": "get_unetr", "kind": "function", "doc": "

    Get the UNETR model for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The UNETR model.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: Optional[collections.OrderedDict[str, torch.Tensor]] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> torch.nn.modules.module.Module:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_decoder", "kind": "function", "doc": "

    Get the decoder to predict outputs for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The decoder for instance segmentation.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: collections.OrderedDict[str, torch.Tensor],\tdevice: Union[str, torch.device, NoneType] = None) -> micro_sam.instance_segmentation.DecoderAdapter:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_predictor_and_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_predictor_and_decoder", "kind": "function", "doc": "

    Load the SAM model (predictor) and instance segmentation decoder.

    \n\n

    This requires a checkpoint that contains the state for both predictor\nand decoder.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The SAM predictor.\n The decoder for instance segmentation.

    \n
    \n", "signature": "(\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tdevice: Union[str, torch.device, NoneType] = None,\tpeft_kwargs: Optional[Dict] = None) -> Tuple[segment_anything.predictor.SamPredictor, micro_sam.instance_segmentation.DecoderAdapter]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a decoder.

    \n\n

    Implements the same interface as AutomaticMaskGenerator.

    \n\n

    Use this class as follows:

    \n\n
    \n
    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)\nsegmenter.initialize(image)   # Predict the image embeddings and decoder outputs.\nmasks = segmenter.generate(center_distance_threshold=0.75)  # Generate the instance segmentation.\n
    \n
    \n\n
    Arguments:
    \n\n\n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tcenter_distance_threshold: float = 0.5,\tboundary_distance_threshold: float = 0.5,\tforeground_threshold: float = 0.5,\tforeground_smoothing: float = 1.0,\tdistance_smoothing: float = 1.6,\tmin_size: int = 0,\toutput_mode: Optional[str] = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.get_state", "kind": "function", "doc": "

    Get the initialized state of the instance segmenter.

    \n\n
    Returns:
    \n\n
    \n

    Instance segmentation state.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.set_state", "kind": "function", "doc": "

    Set the state of the instance segmenter.

    \n\n
    Arguments:
    \n\n\n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.clear_state", "kind": "function", "doc": "

    Clear the state of the instance segmenter.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Same as InstanceSegmentationWithDecoder but for tiled image embeddings.

    \n", "bases": "InstanceSegmentationWithDecoder"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_amg", "modulename": "micro_sam.instance_segmentation", "qualname": "get_amg", "kind": "function", "doc": "

    Get the automatic mask generator class.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The automatic mask generator.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tis_tiled: bool,\tdecoder: Optional[torch.nn.modules.module.Module] = None,\t**kwargs) -> Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]:", "funcdef": "def"}, {"fullname": "micro_sam.models", "modulename": "micro_sam.models", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.build_sam", "modulename": "micro_sam.models.build_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.build_sam.build_sam_vit_h", "modulename": "micro_sam.models.build_sam", "qualname": "build_sam_vit_h", "kind": "function", "doc": "

    \n", "signature": "(checkpoint=None, num_multimask_outputs=3, image_size=1024):", "funcdef": "def"}, {"fullname": "micro_sam.models.build_sam.build_sam", "modulename": "micro_sam.models.build_sam", "qualname": "build_sam", "kind": "function", "doc": "

    \n", "signature": "(checkpoint=None, num_multimask_outputs=3, image_size=1024):", "funcdef": "def"}, {"fullname": "micro_sam.models.build_sam.build_sam_vit_l", "modulename": "micro_sam.models.build_sam", "qualname": "build_sam_vit_l", "kind": "function", "doc": "

    \n", "signature": "(checkpoint=None, num_multimask_outputs=3, image_size=1024):", "funcdef": "def"}, {"fullname": "micro_sam.models.build_sam.build_sam_vit_b", "modulename": "micro_sam.models.build_sam", "qualname": "build_sam_vit_b", "kind": "function", "doc": "

    \n", "signature": "(checkpoint=None, num_multimask_outputs=3, image_size=1024):", "funcdef": "def"}, {"fullname": "micro_sam.models.build_sam.sam_model_registry", "modulename": "micro_sam.models.build_sam", "qualname": "sam_model_registry", "kind": "variable", "doc": "

    \n", "default_value": "{'default': <function build_sam_vit_h>, 'vit_h': <function build_sam_vit_h>, 'vit_l': <function build_sam_vit_l>, 'vit_b': <function build_sam_vit_b>}"}, {"fullname": "micro_sam.models.peft_sam", "modulename": "micro_sam.models.peft_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery", "kind": "class", "doc": "

    Operates on the attention layers for performing low-rank adaptation.

    \n\n

    (Inspired by: https://github.com/JamesQFreeman/Sam_LoRA/)

    \n\n

    In SAM, it is implemented as:

    \n\n
    \n
    self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)\nB, N, C = x.shape\nqkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)\nq, k, v = qkv.unbind(0)\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(rank: int, block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.qkv_proj", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.qkv_proj", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.dim", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.dim", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.alpha", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.alpha", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.rank", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.rank", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.w_a_linear_q", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.w_a_linear_q", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.w_b_linear_q", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.w_b_linear_q", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.w_a_linear_v", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.w_a_linear_v", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.w_b_linear_v", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.w_b_linear_v", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.reset_parameters", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.reset_parameters", "kind": "function", "doc": "

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.LoRASurgery.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "LoRASurgery.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery", "kind": "class", "doc": "

    Operates on the attention layers for performing factorized attention.

    \n\n

    (Inspired by: https://github.com/cchen-cc/MA-SAM/blob/main/MA-SAM/sam_fact_tt_image_encoder.py)

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\trank: int,\tblock: torch.nn.modules.module.Module,\tdropout: Optional[float] = 0.1)"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.qkv_proj", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.qkv_proj", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.dim", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.dim", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.q_FacTs", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.q_FacTs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.v_FacTs", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.v_FacTs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.dropout", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.dropout", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.FacTu", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.FacTu", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.FacTv", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.FacTv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.FacTSurgery.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "FacTSurgery.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.ScaleShiftLayer", "modulename": "micro_sam.models.peft_sam", "qualname": "ScaleShiftLayer", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.ScaleShiftLayer.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "ScaleShiftLayer.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(layer, dim)"}, {"fullname": "micro_sam.models.peft_sam.ScaleShiftLayer.layer", "modulename": "micro_sam.models.peft_sam", "qualname": "ScaleShiftLayer.layer", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.ScaleShiftLayer.scale", "modulename": "micro_sam.models.peft_sam", "qualname": "ScaleShiftLayer.scale", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.ScaleShiftLayer.shift", "modulename": "micro_sam.models.peft_sam", "qualname": "ScaleShiftLayer.shift", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.ScaleShiftLayer.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "ScaleShiftLayer.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.SSFSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "SSFSurgery", "kind": "class", "doc": "

    Operates on all layers in the transformer block for adding learnable scale and shift parameters.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.SSFSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "SSFSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(rank: int, block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.SSFSurgery.block", "modulename": "micro_sam.models.peft_sam", "qualname": "SSFSurgery.block", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.SSFSurgery.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "SSFSurgery.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery", "kind": "class", "doc": "

    Base class for selectively allowing gradient updates for certain parameters.

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery.block", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery.block", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery.allow_gradient_update_for_parameters", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery.allow_gradient_update_for_parameters", "kind": "function", "doc": "

    This function decides which parameter attributes to match for allowing gradient updates.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\tprefix: Optional[List[str]] = None,\tsuffix: Optional[List[str]] = None,\tinfix: Optional[List[str]] = None):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.SelectiveSurgery.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "SelectiveSurgery.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer", "kind": "class", "doc": "

    Adds the AdaptFormer module in place of the MLP layers.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\trank: int,\tblock: torch.nn.modules.module.Module,\talpha: Union[str, float, NoneType] = 'learnable_scalar',\tdropout: Optional[float] = None,\tprojection_size: int = 64)"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.mlp_proj", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.mlp_proj", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.n_embd", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.n_embd", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.projection_size", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.projection_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.dropout", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.dropout", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.down_proj", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.down_proj", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.non_linear_func", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.non_linear_func", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.up_proj", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.up_proj", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.AdaptFormer.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "AdaptFormer.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.peft_sam.AttentionSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "AttentionSurgery", "kind": "class", "doc": "

    Child class for allowing gradient updates for parameters in attention layers.

    \n", "bases": "SelectiveSurgery"}, {"fullname": "micro_sam.models.peft_sam.AttentionSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "AttentionSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.BiasSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "BiasSurgery", "kind": "class", "doc": "

    Child class for allowing gradient updates for bias parameters.

    \n", "bases": "SelectiveSurgery"}, {"fullname": "micro_sam.models.peft_sam.BiasSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "BiasSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.LayerNormSurgery", "modulename": "micro_sam.models.peft_sam", "qualname": "LayerNormSurgery", "kind": "class", "doc": "

    Child class for allowing gradient updates in normalization layers.

    \n", "bases": "SelectiveSurgery"}, {"fullname": "micro_sam.models.peft_sam.LayerNormSurgery.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "LayerNormSurgery.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(block: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam", "kind": "class", "doc": "

    Wraps the Segment Anything model's image encoder with different parameter-efficient finetuning methods.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.__init__", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tmodel: segment_anything.modeling.sam.Sam,\trank: int,\tpeft_module: torch.nn.modules.module.Module = <class 'micro_sam.models.peft_sam.LoRASurgery'>,\tattention_layers_to_update: List[int] = None,\t**module_kwargs)"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.peft_module", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.peft_module", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.peft_blocks", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.peft_blocks", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.sam", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.peft_sam.PEFT_Sam.forward", "modulename": "micro_sam.models.peft_sam", "qualname": "PEFT_Sam.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, batched_input, multimask_output):", "funcdef": "def"}, {"fullname": "micro_sam.models.sam_3d_wrapper", "modulename": "micro_sam.models.sam_3d_wrapper", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.get_sam_3d_model", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "get_sam_3d_model", "kind": "function", "doc": "

    \n", "signature": "(\tdevice,\tn_classes,\timage_size,\tlora_rank=None,\tfreeze_encoder=False,\tmodel_type='vit_b',\tcheckpoint_path=None):", "funcdef": "def"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper.__init__", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper.__init__", "kind": "function", "doc": "

    Initializes the Sam3DWrapper object.

    \n\n
    Arguments:
    \n\n\n", "signature": "(sam_model: segment_anything.modeling.sam.Sam, freeze_encoder: bool)"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper.sam_model", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper.sam_model", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper.freeze_encoder", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper.freeze_encoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.Sam3DWrapper.forward", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "Sam3DWrapper.forward", "kind": "function", "doc": "

    Predict 3D masks for the current inputs.

    \n\n

    Unlike the original SAM, this model only supports automatic segmentation and does not support prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A list over input images, where each element is a dictionary with the following keys:\n 'masks': Mask prediction for this object.\n 'iou_predictions': IOU score prediction for this object.\n 'low_res_masks': Low resolution mask prediction for this object.

    \n
    \n", "signature": "(\tself,\tbatched_input: List[Dict[str, Any]],\tmultimask_output: bool) -> List[Dict[str, torch.Tensor]]:", "funcdef": "def"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper.__init__", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tnum_heads: int = 12,\tembed_dim: int = 768)"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper.image_encoder", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper.image_encoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper.img_size", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper.img_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.ImageEncoderViT3DWrapper.forward", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "ImageEncoderViT3DWrapper.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x: torch.Tensor, d_size: int) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.__init__", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tblock: torch.nn.modules.module.Module,\tdim: int,\tnum_heads: int,\tnorm_layer: Type[torch.nn.modules.module.Module] = <class 'torch.nn.modules.normalization.LayerNorm'>,\tadapter_channels: int = 384)"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.block", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.block", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_channels", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_channels", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_linear_down", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_linear_down", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_linear_up", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_linear_up", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_conv", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_act", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_act", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_norm", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_norm", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_linear_down_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_linear_down_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_linear_up_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_linear_up_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_conv_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_conv_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_act_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_act_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.adapter_norm_2", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.adapter_norm_2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.sam_3d_wrapper.NDBlockWrapper.forward", "modulename": "micro_sam.models.sam_3d_wrapper", "qualname": "NDBlockWrapper.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x: torch.Tensor, d_size) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.get_simple_sam_3d_model", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "get_simple_sam_3d_model", "kind": "function", "doc": "

    \n", "signature": "(\tdevice,\tn_classes,\timage_size,\tlora_rank=None,\tfreeze_encoder=False,\tmodel_type='vit_b',\tcheckpoint_path=None):", "funcdef": "def"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.__init__", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tin_channels,\tout_channels,\tkernel_size=(3, 3, 3),\tstride=(1, 1, 1),\tpadding=(1, 1, 1),\tbias=True,\tmode='nearest')"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.conv1", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.conv1", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.conv2", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.conv2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.downsample", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.downsample", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.leakyrelu", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.leakyrelu", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.up", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.up", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.BasicBlock.forward", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "BasicBlock.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead", "kind": "class", "doc": "

    A sequential container.

    \n\n

    Modules will be added to it in the order they are passed in the\nconstructor. Alternatively, an OrderedDict of modules can be\npassed in. The forward() method of Sequential accepts any\ninput and forwards it to the first module it contains. It then\n\"chains\" outputs to inputs sequentially for each subsequent module,\nfinally returning the output of the last module.

    \n\n

    The value a Sequential provides over manually calling a sequence\nof modules is that it allows treating the whole container as a\nsingle module, such that performing a transformation on the\nSequential applies to each of the modules it stores (which are\neach a registered submodule of the Sequential).

    \n\n

    What's the difference between a Sequential and a\ntorch.nn.ModuleList? A ModuleList is exactly what it\nsounds like--a list for storing Module s! On the other hand,\nthe layers in a Sequential are connected in a cascading way.

    \n\n

    Example::

    \n\n
    # Using Sequential to create a small model. When `model` is run,\n# input will first be passed to `Conv2d(1,20,5)`. The output of\n# `Conv2d(1,20,5)` will be used as the input to the first\n# `ReLU`; the output of the first `ReLU` will become the input\n# for `Conv2d(20,64,5)`. Finally, the output of\n# `Conv2d(20,64,5)` will be used as input to the second `ReLU`\nmodel = nn.Sequential(\n          nn.Conv2d(1,20,5),\n          nn.ReLU(),\n          nn.Conv2d(20,64,5),\n          nn.ReLU()\n        )\n\n# Using Sequential with OrderedDict. This is functionally the\n# same as the above code\nmodel = nn.Sequential(OrderedDict([\n          ('conv1', nn.Conv2d(1,20,5)),\n          ('relu1', nn.ReLU()),\n          ('conv2', nn.Conv2d(20,64,5)),\n          ('relu2', nn.ReLU())\n        ]))\n
    \n", "bases": "torch.nn.modules.container.Sequential"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead.__init__", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tin_channels,\tout_channels,\tkernel_size=(3, 3, 3),\tstride=(1, 1, 1),\tpadding=(1, 1, 1),\tbias=True)"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead.conv_pred", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead.conv_pred", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead.segmentation_head", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead.segmentation_head", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SegmentationHead.forward", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SegmentationHead.forward", "kind": "function", "doc": "

    Define the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, x):", "funcdef": "def"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper", "kind": "class", "doc": "

    Base class for all neural network modules.

    \n\n

    Your models should also subclass this class.

    \n\n

    Modules can also contain other Modules, allowing to nest them in\na tree structure. You can assign the submodules as regular attributes::

    \n\n
    import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv1 = nn.Conv2d(1, 20, 5)\n        self.conv2 = nn.Conv2d(20, 20, 5)\n\n    def forward(self, x):\n        x = F.relu(self.conv1(x))\n        return F.relu(self.conv2(x))\n
    \n\n

    Submodules assigned in this way will be registered, and will have their\nparameters converted too when you call to(), etc.

    \n\n
    \n\n

    As per the example above, an __init__() call to the parent class\nmust be made before assignment on the child.

    \n\n
    \n\n

    :ivar training: Boolean represents whether this module is in training or\n evaluation mode.\n:vartype training: bool

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.__init__", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(sam, num_classes, freeze_encoder)"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.sam", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.freeze_encoder", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.freeze_encoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.decoders", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.decoders", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.out_conv", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.out_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.models.simple_sam_3d_wrapper.SimpleSam3DWrapper.forward", "modulename": "micro_sam.models.simple_sam_3d_wrapper", "qualname": "SimpleSam3DWrapper.forward", "kind": "function", "doc": "

    Predict 3D masks for the current inputs.

    \n\n

    Unlike original SAM this model only supports automatic segmentation and does not support prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A list over input images, where each element is a dictionary with the following keys:\n 'masks': Mask prediction for this object.

    \n
    \n", "signature": "(\tself,\tbatched_input: List[Dict[str, Any]],\tmultimask_output: bool) -> List[Dict[str, torch.Tensor]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "kind": "module", "doc": "

    Multi-dimensional segmentation with segment anything.

    \n"}, {"fullname": "micro_sam.multi_dimensional_segmentation.PROJECTION_MODES", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "PROJECTION_MODES", "kind": "variable", "doc": "

    \n", "default_value": "('box', 'mask', 'points', 'points_and_mask', 'single_point')"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_mask_in_volume", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_mask_in_volume", "kind": "function", "doc": "

    Segment an object mask in volumetric data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    Array with the volumetric segmentation.\n Tuple with the first and last segmented slice.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\tsegmented_slices: numpy.ndarray,\tstop_lower: bool,\tstop_upper: bool,\tiou_threshold: float,\tprojection: Union[str, dict],\tupdate_progress: Optional[<built-in function callable>] = None,\tbox_extension: float = 0.0,\tverbose: bool = False) -> Tuple[numpy.ndarray, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.merge_instance_segmentation_3d", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "merge_instance_segmentation_3d", "kind": "function", "doc": "

    Merge stacked 2d instance segmentations into a consistent 3d segmentation.

    \n\n

    Solves a multicut problem based on the overlap of objects to merge across z.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The merged segmentation.

    \n
    \n", "signature": "(\tslice_segmentation: numpy.ndarray,\tbeta: float = 0.5,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.automatic_3d_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "automatic_3d_segmentation", "kind": "function", "doc": "

    Segment volume in 3d.

    \n\n

    First segments slices individually in 2d and then merges them across 3d\nbased on overlap of objects between slices.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segmentation.

    \n
    \n", "signature": "(\tvolume: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\tsegmentor: micro_sam.instance_segmentation.AMGBase,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\t**kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state", "modulename": "micro_sam.precompute_state", "kind": "module", "doc": "

    Precompute image embeddings and automatic mask generator state for image data.

    \n"}, {"fullname": "micro_sam.precompute_state.cache_amg_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_amg_state", "kind": "function", "doc": "

    Compute and cache or load the state for the automatic mask generator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The automatic mask generator class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.cache_is_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_is_state", "kind": "function", "doc": "

    Compute and cache or load the state for the automatic mask generator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\tskip_load: bool = False,\t**kwargs) -> Optional[micro_sam.instance_segmentation.AMGBase]:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.precompute_state", "modulename": "micro_sam.precompute_state", "qualname": "precompute_state", "kind": "function", "doc": "

    Precompute the image embeddings and other optional state for the input image(s).

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tinput_path: Union[os.PathLike, str],\toutput_path: Union[os.PathLike, str],\tpattern: Optional[str] = None,\tmodel_type: str = 'vit_l',\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\tkey: Optional[str] = None,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tprecompute_amg_state: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation", "modulename": "micro_sam.prompt_based_segmentation", "kind": "module", "doc": "

    Functions for prompt-based segmentation with Segment Anything.

    \n"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_points", "kind": "function", "doc": "

    Segmentation from point prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tuse_best_multimask: Optional[bool] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_mask", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_mask", "kind": "function", "doc": "

    Segmentation from a mask prompt.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tmask: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tuse_box: bool = True,\tuse_mask: bool = True,\tuse_points: bool = False,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\treturn_logits: bool = False,\tbox_extension: float = 0.0,\tbox: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tlabels: Optional[numpy.ndarray] = None,\tuse_single_point: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box", "kind": "function", "doc": "

    Segmentation from a box prompt.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tbox_extension: float = 0.0):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box_and_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box_and_points", "kind": "function", "doc": "

    Segmentation from a box prompt and point prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_generators", "modulename": "micro_sam.prompt_generators", "kind": "module", "doc": "

    Classes for generating prompts from ground-truth segmentation masks.\nFor training or evaluation of prompt-based segmentation.

    \n"}, {"fullname": "micro_sam.prompt_generators.PromptGeneratorBase", "modulename": "micro_sam.prompt_generators", "qualname": "PromptGeneratorBase", "kind": "class", "doc": "

    PromptGeneratorBase is an interface to implement specific prompt generators.

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator", "kind": "class", "doc": "

    Generate point and/or box prompts from an instance segmentation.

    \n\n

    You can use this class to derive prompts from an instance segmentation, either for\nevaluation purposes or for training Segment Anything on custom data.\nIn order to use this generator you need to precompute the bounding boxes and center\ncoordinates of the instance segmentation, using e.g. util.get_centers_and_bounding_boxes.

    \n\n

    Here's an example for how to use this class:

    \n\n
    \n
    # Initialize generator for 1 positive and 4 negative point prompts.\nprompt_generator = PointAndBoxPromptGenerator(1, 4, dilation_strength=8)\n\n# Precompute the bounding boxes for the given segmentation\nbounding_boxes, _ = util.get_centers_and_bounding_boxes(segmentation)\n\n# generate point prompts for the objects with ids 1, 2 and 3\nseg_ids = (1, 2, 3)\nobject_mask = np.stack([segmentation == seg_id for seg_id in seg_ids])[:, None]\nthis_bounding_boxes = [bounding_boxes[seg_id] for seg_id in seg_ids]\npoint_coords, point_labels, _, _ = prompt_generator(object_mask, this_bounding_boxes)\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.__init__", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tn_positive_points: int,\tn_negative_points: int,\tdilation_strength: int,\tget_point_prompts: bool = True,\tget_box_prompts: bool = False)"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_positive_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_positive_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_negative_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_negative_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.dilation_strength", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_box_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_box_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_point_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_point_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.IterativePromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "IterativePromptGenerator", "kind": "class", "doc": "

    Generate point prompts from an instance segmentation iteratively.

    \n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.sam_annotator", "modulename": "micro_sam.sam_annotator", "kind": "module", "doc": "

    The interactive annotation tools.

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d.__init__", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "annotator_2d", "kind": "function", "doc": "

    Start the 2d annotation tool for a given image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d.__init__", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "annotator_3d", "kind": "function", "doc": "

    Start the 3d annotation tool for a given image volume.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking.__init__", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "annotator_tracking", "kind": "function", "doc": "

    Start the tracking annotation tool for a given timeseries.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_series_annotator", "kind": "function", "doc": "

    Run the annotation tool for a series of images (supported for both 2d and 3d images).

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timages: Union[List[Union[str, os.PathLike]], List[numpy.ndarray]],\toutput_folder: str,\tmodel_type: str = 'vit_l',\tembedding_path: Optional[str] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tis_volumetric: bool = False,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True,\tskip_segmented: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_folder_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_folder_annotator", "kind": "function", "doc": "

    Run the 2d annotation tool for a series of images in a folder.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\tinput_folder: str,\toutput_folder: str,\tpattern: str = '*',\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\t**kwargs) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator", "kind": "class", "doc": "

    QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.__init__", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.__init__", "kind": "function", "doc": "

    \n", "signature": "(viewer: napari.viewer.Viewer, parent=None)"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.run_button", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui", "modulename": "micro_sam.sam_annotator.training_ui", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget", "kind": "class", "doc": "

    QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.__init__", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.__init__", "kind": "function", "doc": "

    \n", "signature": "(parent=None)"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.run_button", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util", "modulename": "micro_sam.sam_annotator.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util.point_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "point_layer_to_prompts", "kind": "function", "doc": "

    Extract point prompts for SAM from a napari point layer.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The point coordinates for the prompts.\n The labels (positive or negative / 1 or 0) for the prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.points.points.Points,\ti=None,\ttrack_id=None,\twith_stop_annotation=True) -> Optional[Tuple[numpy.ndarray, numpy.ndarray]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.shape_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "shape_layer_to_prompts", "kind": "function", "doc": "

    Extract prompts for SAM from a napari shape layer.

    \n\n

    Extracts the bounding box for 'rectangle' shapes and the bounding box and corresponding mask\nfor 'ellipse' and 'polygon' shapes.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The box prompts.\n The mask prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.shapes.shapes.Shapes,\tshape: Tuple[int, int],\ti=None,\ttrack_id=None) -> Tuple[List[numpy.ndarray], List[Optional[numpy.ndarray]]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layer_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layer_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(prompt_layer: napari.layers.points.points.Points, i: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layers_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layers_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer and shape layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(\tpoint_layer: napari.layers.points.points.Points,\tbox_layer: napari.layers.shapes.shapes.Shapes,\ti: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data", "modulename": "micro_sam.sample_data", "kind": "module", "doc": "

    Sample microscopy data.

    \n\n

    You can change the download location for sample data and model weights\nby setting the environment variable: MICROSAM_CACHEDIR

    \n\n

    By default sample data is downloaded to a folder named 'micro_sam/sample_data'\ninside your default cache directory, e.g.:\n * Mac: ~/Library/Caches/\n * Unix: ~/.cache/ or the value of the XDG_CACHE_HOME environment variable, if defined.\n * Windows: C:\Users\<user>\AppData\Local\<AppAuthor>\<AppName>\Cache

    \n"}, {"fullname": "micro_sam.sample_data.fetch_image_series_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_image_series_example_data", "kind": "function", "doc": "

    Download the sample images for the image series annotator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_image_series", "modulename": "micro_sam.sample_data", "qualname": "sample_data_image_series", "kind": "function", "doc": "

    Provides image series example image to napari.

    \n\n

    Opens as three separate image layers in napari (one per image in series).\nThe third image in the series has a different size and modality.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_wholeslide_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_wholeslide_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads part of a whole-slide image from the NeurIPS Cell Segmentation Challenge.\nSee https://neurips22-cellseg.grand-challenge.org/ for details on the data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_wholeslide", "modulename": "micro_sam.sample_data", "qualname": "sample_data_wholeslide", "kind": "function", "doc": "

    Provides wholeslide 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_livecell_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_livecell_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the LiveCELL dataset.\nSee https://doi.org/10.1038/s41592-021-01249-6 for details on the data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_livecell", "modulename": "micro_sam.sample_data", "qualname": "sample_data_livecell", "kind": "function", "doc": "

    Provides livecell 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_hela_2d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_hela_2d_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the HeLa CTC dataset.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_hela_2d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_hela_2d", "kind": "function", "doc": "

    Provides HeLa 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_3d_example_data", "kind": "function", "doc": "

    Download the sample data for the 3d annotator.

    \n\n

    This downloads the Lucchi++ datasets from https://casser.io/connectomics/.\nIt is a dataset for mitochondria segmentation in EM.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_3d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_3d", "kind": "function", "doc": "

    Provides Lucchi++ 3d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_example_data", "kind": "function", "doc": "

    Download the sample data for the tracking annotator.

    \n\n

    This data is the cell tracking challenge dataset DIC-C2DH-HeLa.\nCell tracking challenge webpage: http://data.celltrackingchallenge.net\nHeLa cells on a flat glass\nDr. G. van Cappellen. Erasmus Medical Center, Rotterdam, The Netherlands\nTraining dataset: http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip (37 MB)\nChallenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/DIC-C2DH-HeLa.zip (41 MB)

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_tracking", "modulename": "micro_sam.sample_data", "qualname": "sample_data_tracking", "kind": "function", "doc": "

    Provides tracking example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_segmentation_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_segmentation_data", "kind": "function", "doc": "

    Download groundtruth segmentation for the tracking example data.

    \n\n

    This downloads the groundtruth segmentation for the image data from fetch_tracking_example_data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_segmentation", "modulename": "micro_sam.sample_data", "qualname": "sample_data_segmentation", "kind": "function", "doc": "

    Provides segmentation example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.synthetic_data", "modulename": "micro_sam.sample_data", "qualname": "synthetic_data", "kind": "function", "doc": "

    Create synthetic image data and segmentation for training.

    \n", "signature": "(shape, seed=None):", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_nucleus_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_nucleus_3d_example_data", "kind": "function", "doc": "

    Download the sample data for 3d segmentation of nuclei.

    \n\n

    This data contains a small crop from a volume from the publication\n\"Efficient automatic 3D segmentation of cell nuclei for high-content screening\"\nhttps://doi.org/10.1186/s12859-022-04737-4

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.training", "modulename": "micro_sam.training", "kind": "module", "doc": "

    Functionality for training Segment Anything.

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer", "modulename": "micro_sam.training.joint_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer", "kind": "class", "doc": "

    Trainer class for jointly training the Segment Anything model with an additional convolutional decoder.

    \n\n

    This class is inherited from SamTrainer.\nCheck out https://github.com/computational-cell-analytics/micro-sam/blob/master/micro_sam/training/sam_trainer.py\nfor details on its implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "micro_sam.training.sam_trainer.SamTrainer"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.__init__", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tunetr: torch.nn.modules.module.Module,\tinstance_loss: torch.nn.modules.module.Module,\tinstance_metric: torch.nn.modules.module.Module,\t**kwargs)"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.unetr", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.unetr", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_loss", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_metric", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_metric", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.save_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.save_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, name, current_metric, best_metric, **extra_save_dict):", "funcdef": "def"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.load_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.load_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, checkpoint='best'):", "funcdef": "def"}, {"fullname": "micro_sam.training.sam_trainer", "modulename": "micro_sam.training.sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch_em.trainer.default_trainer.DefaultTrainer"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.__init__", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tconvert_inputs,\tn_sub_iteration: int,\tn_objects_per_batch: Optional[int] = None,\tmse_loss: torch.nn.modules.module.Module = MSELoss(),\tprompt_generator: micro_sam.prompt_generators.PromptGeneratorBase = <micro_sam.prompt_generators.IterativePromptGenerator object>,\tmask_prob: float = 0.5,\tmask_loss: Optional[torch.nn.modules.module.Module] = None,\t**kwargs)"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.convert_inputs", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.convert_inputs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mse_loss", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mse_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_objects_per_batch", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_objects_per_batch", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_sub_iteration", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_sub_iteration", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.prompt_generator", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.prompt_generator", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mask_prob", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mask_prob", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer", "modulename": "micro_sam.training.semantic_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss", "kind": "class", "doc": "

    Loss for computing dice over one-hot labels.

    \n\n

    Expects prediction and target with num_classes channels: the number of classes for semantic segmentation.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss.__init__", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(num_classes: int, softmax: bool = True)"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss.num_classes", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss.num_classes", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss.dice_loss", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss.dice_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.CustomDiceLoss.softmax", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "CustomDiceLoss.softmax", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model for semantic segmentation.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch_em.trainer.default_trainer.DefaultTrainer"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.__init__", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tconvert_inputs,\tnum_classes: int,\tdice_weight: Optional[float] = None,\t**kwargs)"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.convert_inputs", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.convert_inputs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.num_classes", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.num_classes", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.compute_ce_loss", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.compute_ce_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticSamTrainer.dice_weight", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticSamTrainer.dice_weight", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.semantic_sam_trainer.SemanticMapsSamTrainer", "modulename": "micro_sam.training.semantic_sam_trainer", "qualname": "SemanticMapsSamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model for semantic segmentation.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "SemanticSamTrainer"}, {"fullname": "micro_sam.training.simple_sam_trainer", "modulename": "micro_sam.training.simple_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.simple_sam_trainer.SimpleSamTrainer", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "SimpleSamTrainer", "kind": "class", "doc": "

    Trainer class for creating a simple SAM trainer for limited prompt-based segmentation.

    \n\n

    This class is inherited from SamTrainer.\nCheck out https://github.com/computational-cell-analytics/micro-sam/blob/master/micro_sam/training/sam_trainer.py\nfor details on its implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "micro_sam.training.sam_trainer.SamTrainer"}, {"fullname": "micro_sam.training.simple_sam_trainer.SimpleSamTrainer.__init__", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "SimpleSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(use_points: bool = True, use_box: bool = True, **kwargs)"}, {"fullname": "micro_sam.training.simple_sam_trainer.SimpleSamTrainer.use_points", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "SimpleSamTrainer.use_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.simple_sam_trainer.SimpleSamTrainer.use_box", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "SimpleSamTrainer.use_box", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.simple_sam_trainer.MedSAMTrainer", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "MedSAMTrainer", "kind": "class", "doc": "

    Trainer class for replicating the trainer of MedSAM (https://arxiv.org/abs/2304.12306).

    \n\n

    This class is inherited from SimpleSamTrainer.\nCheck out\nhttps://github.com/computational-cell-analytics/micro-sam/blob/master/micro_sam/training/simple_sam_trainer.py\nfor details on its implementation.

    \n", "bases": "SimpleSamTrainer"}, {"fullname": "micro_sam.training.simple_sam_trainer.MedSAMTrainer.__init__", "modulename": "micro_sam.training.simple_sam_trainer", "qualname": "MedSAMTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(**kwargs)"}, {"fullname": "micro_sam.training.trainable_sam", "modulename": "micro_sam.training.trainable_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM", "kind": "class", "doc": "

    Wrapper to make the SegmentAnything model trainable.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.__init__", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.__init__", "kind": "function", "doc": "

    Initialize internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(sam: segment_anything.modeling.sam.Sam)"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.sam", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.transform", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.preprocess", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.preprocess", "kind": "function", "doc": "

    Resize, normalize pixel values and pad to a square input.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The resized, normalized and padded tensor.\n The shape of the image after resizing.

    \n
    \n", "signature": "(self, x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.image_embeddings_oft", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.image_embeddings_oft", "kind": "function", "doc": "

    \n", "signature": "(self, batched_inputs):", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.forward", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.forward", "kind": "function", "doc": "

    Forward pass.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks and iou values.

    \n
    \n", "signature": "(\tself,\tbatched_inputs: List[Dict[str, Any]],\timage_embeddings: torch.Tensor,\tmultimask_output: bool = False) -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.training", "modulename": "micro_sam.training.training", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.training.FilePath", "modulename": "micro_sam.training.training", "qualname": "FilePath", "kind": "variable", "doc": "

    \n", "default_value": "typing.Union[str, os.PathLike]"}, {"fullname": "micro_sam.training.training.train_sam", "modulename": "micro_sam.training.training", "qualname": "train_sam", "kind": "function", "doc": "

    Run training for a SAM model.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tname: str,\tmodel_type: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tn_epochs: int = 100,\tearly_stopping: Optional[int] = 10,\tn_objects_per_batch: Optional[int] = 25,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\tfreeze: Optional[List[str]] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tlr: float = 1e-05,\tn_sub_iteration: int = 8,\tsave_root: Union[os.PathLike, str, NoneType] = None,\tmask_prob: float = 0.5,\tn_iterations: Optional[int] = None,\tscheduler_class: Optional[torch.optim.lr_scheduler._LRScheduler] = <class 'torch.optim.lr_scheduler.ReduceLROnPlateau'>,\tscheduler_kwargs: Optional[Dict[str, Any]] = None,\tsave_every_kth_epoch: Optional[int] = None,\tpbar_signals: Optional[PyQt5.QtCore.QObject] = None,\toptimizer_class: Optional[torch.optim.optimizer.Optimizer] = <class 'torch.optim.adamw.AdamW'>,\tpeft_kwargs: Optional[Dict] = None,\tignore_warnings: bool = True,\tverify_n_labels_in_loader: Optional[int] = 50,\t**model_kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_dataset", "modulename": "micro_sam.training.training", "qualname": "default_sam_dataset", "kind": "function", "doc": "

    Create a PyTorch Dataset for training a SAM model.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segmentation dataset.

    \n
    \n", "signature": "(\traw_paths: Union[List[Union[str, os.PathLike]], str, os.PathLike],\traw_key: Optional[str],\tlabel_paths: Union[List[Union[str, os.PathLike]], str, os.PathLike],\tlabel_key: Optional[str],\tpatch_shape: Tuple[int],\twith_segmentation_decoder: bool,\twith_channels: bool = False,\tsampler: Optional[Callable] = None,\traw_transform: Optional[Callable] = None,\tn_samples: Optional[int] = None,\tis_train: bool = True,\tmin_size: int = 25,\tmax_sampling_attempts: Optional[int] = None,\t**kwargs) -> torch.utils.data.dataset.Dataset:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_loader", "modulename": "micro_sam.training.training", "qualname": "default_sam_loader", "kind": "function", "doc": "

    Create a PyTorch DataLoader for training a SAM model.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The DataLoader.

    \n
    \n", "signature": "(**kwargs) -> torch.utils.data.dataloader.DataLoader:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.CONFIGURATIONS", "modulename": "micro_sam.training.training", "qualname": "CONFIGURATIONS", "kind": "variable", "doc": "

    Best training configurations for given hardware resources.

    \n", "default_value": "{'Minimal': {'model_type': 'vit_t', 'n_objects_per_batch': 4, 'n_sub_iteration': 4}, 'CPU': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'gtx1080': {'model_type': 'vit_t', 'n_objects_per_batch': 5}, 'rtx5000': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'V100': {'model_type': 'vit_b'}, 'A100': {'model_type': 'vit_h'}}"}, {"fullname": "micro_sam.training.training.train_sam_for_configuration", "modulename": "micro_sam.training.training", "qualname": "train_sam_for_configuration", "kind": "function", "doc": "

    Run training for a SAM model with the configuration for a given hardware resource.

    \n\n

    Selects the best training settings for the given configuration.\nThe available configurations are listed in CONFIGURATIONS.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tname: str,\tconfiguration: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\tmodel_type: Optional[str] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.util", "modulename": "micro_sam.training.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.identity", "modulename": "micro_sam.training.util", "qualname": "identity", "kind": "function", "doc": "

    Identity transformation.

    \n\n

    This is a helper function to skip data normalization when finetuning SAM.\nData normalization is performed within the model and should thus be skipped as\na preprocessing step in training.

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.require_8bit", "modulename": "micro_sam.training.util", "qualname": "require_8bit", "kind": "function", "doc": "

    Transformation to require an 8-bit input data range (0-255).

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.get_trainable_sam_model", "modulename": "micro_sam.training.util", "qualname": "get_trainable_sam_model", "kind": "function", "doc": "

    Get the trainable SAM model.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The trainable Segment Anything model.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\tfreeze: Optional[List[str]] = None,\treturn_state: bool = False,\tpeft_kwargs: Optional[Dict] = None,\tflexible_load_checkpoint: bool = False,\t**model_kwargs) -> micro_sam.training.trainable_sam.TrainableSAM:", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs", "kind": "class", "doc": "

    Convert the outputs of the data loader to the expected batched inputs of the SegmentAnything model.

    \n\n
    Arguments:
    \n\n\n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.__init__", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.__init__", "kind": "function", "doc": "

    \n", "signature": "(\ttransform: Optional[segment_anything.utils.transforms.ResizeLongestSide],\tdilation_strength: int = 10,\tbox_distortion_factor: Optional[float] = None)"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.dilation_strength", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.transform", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.box_distortion_factor", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.box_distortion_factor", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSemanticSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSemanticSamInputs", "kind": "class", "doc": "

    Convert the outputs of the data loader to the expected batched inputs of the SegmentAnything model\nfor semantic segmentation.

    \n"}, {"fullname": "micro_sam.training.util.normalize_to_8bit", "modulename": "micro_sam.training.util", "qualname": "normalize_to_8bit", "kind": "function", "doc": "

    \n", "signature": "(raw):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, do_rescaling=False, padding='constant')"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.do_rescaling", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.do_rescaling", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, padding='constant', min_size=0)"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.min_size", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.min_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.util", "modulename": "micro_sam.util", "kind": "module", "doc": "

    Helper functions for downloading Segment Anything models and predicting image embeddings.

    \n"}, {"fullname": "micro_sam.util.get_cache_directory", "modulename": "micro_sam.util", "qualname": "get_cache_directory", "kind": "function", "doc": "

    Get the micro-sam cache directory location.

    \n\n

    Users can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.microsam_cachedir", "modulename": "micro_sam.util", "qualname": "microsam_cachedir", "kind": "function", "doc": "

    Return the micro-sam cache directory.

    \n\n

    Returns the top-level cache directory for micro-sam models and sample data.

    \n\n

    Every time this function is called, we check for any user updates made to\nthe MICROSAM_CACHEDIR OS environment variable since the last call.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.models", "modulename": "micro_sam.util", "qualname": "models", "kind": "function", "doc": "

    Return the segmentation models registry.

    \n\n

    We recreate the model registry every time this function is called,\nso any user changes to the default micro-sam cache directory location\nare respected.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.util.get_device", "modulename": "micro_sam.util", "qualname": "get_device", "kind": "function", "doc": "

    Get the torch device.

    \n\n

    If no device is passed, the default device for your system is used.\nOtherwise, it is checked whether the device you have passed is supported.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The device.

    \n
    \n", "signature": "(\tdevice: Union[str, torch.device, NoneType] = None) -> Union[str, torch.device]:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_sam_model", "modulename": "micro_sam.util", "qualname": "get_sam_model", "kind": "function", "doc": "

    Get the SegmentAnything Predictor.

    \n\n

    This function will download the required model or load it from the cached weight file.\nThe location of the cache can be changed by setting the environment variable MICROSAM_CACHEDIR.\nThe name of the requested model can be set via model_type.\nSee https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models\nfor an overview of the available models.

    \n\n

    Alternatively, this function can also load a model from weights stored in a local filepath.\nThe corresponding file path is given via checkpoint_path. In this case, model_type\nmust be given as the matching encoder architecture, e.g. \"vit_b\" if the weights are for\na SAM model with a vit_b encoder.

    \n\n

    By default, the models are downloaded to a folder named 'micro_sam/models'\ninside your default cache directory, e.g.:

    \n\n\n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The Segment Anything predictor.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\treturn_sam: bool = False,\treturn_state: bool = False,\tpeft_kwargs: Optional[Dict] = None,\tflexible_load_checkpoint: bool = False,\t**model_kwargs) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.export_custom_sam_model", "modulename": "micro_sam.util", "qualname": "export_custom_sam_model", "kind": "function", "doc": "

    Export a finetuned segment anything model to the standard model format.

    \n\n

    The exported model can be used by the interactive annotation tools in micro_sam.annotator.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint_path: Union[str, os.PathLike],\tmodel_type: str,\tsave_path: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_model_names", "modulename": "micro_sam.util", "qualname": "get_model_names", "kind": "function", "doc": "

    \n", "signature": "() -> Iterable:", "funcdef": "def"}, {"fullname": "micro_sam.util.precompute_image_embeddings", "modulename": "micro_sam.util", "qualname": "precompute_image_embeddings", "kind": "function", "doc": "

    Compute the image embeddings (output of the encoder) for the input.

    \n\n

    If 'save_path' is given, the embeddings will be loaded/saved in a zarr container.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The image embeddings.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tinput_: numpy.ndarray,\tsave_path: Union[str, os.PathLike, NoneType] = None,\tlazy_loading: bool = False,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.util.set_precomputed", "modulename": "micro_sam.util", "qualname": "set_precomputed", "kind": "function", "doc": "

    Set the precomputed image embeddings for a predictor.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predictor with set features.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\ti: Optional[int] = None,\ttile_id: Optional[int] = None) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.compute_iou", "modulename": "micro_sam.util", "qualname": "compute_iou", "kind": "function", "doc": "

    Compute the intersection over union of two masks.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The intersection over union of the two masks.

    \n
    \n", "signature": "(mask1: numpy.ndarray, mask2: numpy.ndarray) -> float:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_centers_and_bounding_boxes", "modulename": "micro_sam.util", "qualname": "get_centers_and_bounding_boxes", "kind": "function", "doc": "

    Returns the center coordinates and bounding boxes of the foreground instances in the ground-truth.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A dictionary that maps object ids to the corresponding centroid.\n A dictionary that maps object ids to the corresponding bounding box.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tmode: str = 'v') -> Tuple[Dict[int, numpy.ndarray], Dict[int, tuple]]:", "funcdef": "def"}, {"fullname": "micro_sam.util.load_image_data", "modulename": "micro_sam.util", "qualname": "load_image_data", "kind": "function", "doc": "

    Helper function to load image data from file.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The image data.

    \n
    \n", "signature": "(\tpath: str,\tkey: Optional[str] = None,\tlazy_loading: bool = False) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.util.segmentation_to_one_hot", "modulename": "micro_sam.util", "qualname": "segmentation_to_one_hot", "kind": "function", "doc": "

    Convert the segmentation to one-hot encoded masks.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The one-hot encoded masks.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tsegmentation_ids: Optional[numpy.ndarray] = None) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_block_shape", "modulename": "micro_sam.util", "qualname": "get_block_shape", "kind": "function", "doc": "

    Get a suitable block shape for chunking a given shape.

    \n\n

    The primary use for this is determining chunk sizes for\nzarr arrays or block shapes for parallelization.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The block shape.

    \n
    \n", "signature": "(shape: Tuple[int]) -> Tuple[int]:", "funcdef": "def"}, {"fullname": "micro_sam.visualization", "modulename": "micro_sam.visualization", "kind": "module", "doc": "

    Functionality for visualizing image embeddings.

    \n"}, {"fullname": "micro_sam.visualization.compute_pca", "modulename": "micro_sam.visualization", "qualname": "compute_pca", "kind": "function", "doc": "

    Compute the PCA projection of the embeddings to visualize them as an RGB image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    PCA of the embeddings, mapped to the pixels.

    \n
    \n", "signature": "(embeddings: numpy.ndarray) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.visualization.project_embeddings_for_visualization", "modulename": "micro_sam.visualization", "qualname": "project_embeddings_for_visualization", "kind": "function", "doc": "

    Project image embeddings to pixel-wise PCA.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The PCA of the embeddings.\n The scale factor for resizing to the original image size.

    \n
    \n", "signature": "(\timage_embeddings: Dict[str, Any]) -> Tuple[numpy.ndarray, Tuple[float, ...]]:", "funcdef": "def"}]; // mirrored in build-search-index.js (part 1) // Also split on html tags. this is a cheap heuristic, but good enough.