Doc updates v10 (#588)
Documentation updates
constantinpape authored May 5, 2024
1 parent 0faccd8 commit d89ef32
Showing 11 changed files with 181 additions and 130 deletions.
12 changes: 8 additions & 4 deletions README.md
@@ -22,7 +22,7 @@ If you run into any problems or have questions regarding our tool please open an

## Installation and Usage

-Please check [the documentation](https://computational-cell-analytics.github.io/micro-sam/) for details on how to install and use `micro_sam`. You can also find a quickstart guide in [this video](TODO) and find all video tutorials [here](TODO).
+Please check [the documentation](https://computational-cell-analytics.github.io/micro-sam/) for details on how to install and use `micro_sam`. You can also watch [the quickstart video](https://youtu.be/HauT-D2BHKc) or [all video tutorials](https://youtube.com/playlist?list=PLwYZXQJ3f36GQPpKCrSbHjGiH39X4XjSO&si=qNbB8IFXqAX33r_Z).


## Contributing
@@ -35,7 +35,7 @@ If you are interested in contributing to micro-sam, please see the [contributing
## Citation

If you are using this repository in your research please cite
-- Our [preprint](https://doi.org/10.1101/2023.08.21.554208)
+- our [preprint](https://doi.org/10.1101/2023.08.21.554208)
- and the original [Segment Anything publication](https://arxiv.org/abs/2304.02643).
- If you use a `vit-tiny` model please also cite [Mobile SAM](https://arxiv.org/abs/2306.14289).

@@ -55,11 +55,15 @@ Compared to these we support more applications (2d, 3d and tracking), and provid

**New in version 1.0.0**

-- TODO
+This release mainly fixes issues with the previous release and marks the napari user interface as stable.

**New in version 0.5.0**

-- TODO
+This version includes a lot of new functionality and improvements. The most important changes are:
+- Re-implementation of the annotation tools. The tools are now implemented as a napari plugin.
+- Use of our improved automatic instance segmentation functionality in the annotation tools, including automatic segmentation for 3D data.
+- New widgets to use the finetuning and image series annotation functionality from napari.
+- Improved finetuned models for light microscopy and electron microscopy data that are available via bioimage.io.

**New in version 0.4.1**

56 changes: 28 additions & 28 deletions doc/annotation_tools.md

Large diffs are not rendered by default.

26 changes: 15 additions & 11 deletions doc/contributing.md
@@ -15,12 +15,12 @@
* [Snakeviz visualization](#snakeviz-visualization)
* [Memory profiling with memray](#memory-profiling-with-memray)

-## Discuss your ideas
+### Discuss your ideas

We welcome new contributions! First, discuss your idea by opening a [new issue](https://github.com/computational-cell-analytics/micro-sam/issues/new) in micro-sam.
This allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

-## Clone the repository
+### Clone the repository

We use [git](https://git-scm.com/) for version control.

@@ -31,7 +31,7 @@ $ cd micro-sam
$ git checkout dev
```

-## Create your development environment
+### Create your development environment

We use [conda](https://docs.conda.io/en/latest/) to [manage our environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html). If you don't have this already, install [miniconda](https://docs.conda.io/projects/miniconda/en/latest/) or [mamba](https://mamba.readthedocs.io/en/latest/) to get started.

@@ -43,7 +43,7 @@ $ python -m pip install -r requirements-dev.txt
$ python -m pip install -e .
```

-## Make your changes
+### Make your changes

Now it's time to make your code changes.

Expand Down Expand Up @@ -92,13 +92,13 @@ The [Coverage Gutters VSCode extension](https://marketplace.visualstudio.com/ite

We also use [codecov.io](https://app.codecov.io/gh/computational-cell-analytics/micro-sam) to display the code coverage results from our Github Actions continuous integration.

-## Open a pull request
+### Open a pull request

Once you've made changes to the code and written some tests to go with it, you are ready to [open a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests). You can [mark your pull request as a draft](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests#draft-pull-requests) if you are still working on it, and still get the benefit of discussing the best approach with maintainers.

Remember that changes to micro-sam are typically made by branching off from the development branch. So you will need to open your pull request to merge back into the `dev` branch [like this](https://github.com/computational-cell-analytics/micro-sam/compare/dev...dev).

-## Optional: Build the documentation
+### Optional: Build the documentation

We use [pdoc](https://pdoc.dev/docs/pdoc.html) to build the documentation.

@@ -122,7 +122,7 @@ You can add content to the documentation in two ways:
2. By adding or editing markdown files in the micro-sam `doc` directory.
* If you add a new markdown file to the documentation, you must tell [pdoc](https://pdoc.dev/docs/pdoc.html) that it exists by adding a line to the `micro_sam/__init__.py` module docstring (eg: `.. include:: ../doc/my_amazing_new_docs_page.md`). Otherwise it will not be included in the final documentation build!
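For instance, to include a hypothetical new page `doc/my_amazing_new_docs_page.md`, the module docstring at the top of `micro_sam/__init__.py` gains one include line; a sketch (the neighboring include is illustrative):

```python
"""
.. include:: ../doc/start_page.md
.. include:: ../doc/my_amazing_new_docs_page.md
"""
```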

-## Optional: Benchmark performance
+### Optional: Benchmark performance

There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.

@@ -131,7 +131,7 @@ There are a number of options you can use to benchmark performance, and identify
* [Snakeviz visualization](#snakeviz-visualization)
* [Memory profiling with memray](#memory-profiling-with-memray)

-### Run the benchmark script
+#### Run the benchmark script

There is a performance benchmark script available in the micro-sam repository at `development/benchmark.py`.

@@ -145,7 +145,7 @@ For more details about the user input arguments for the micro-sam benchmark scri
$ python development/benchmark.py --help
```

-### Line profiling
+#### Line profiling

For more detailed line-by-line performance results, we can use [line-profiler](https://github.com/pyutils/line_profiler).

@@ -163,7 +163,7 @@ For more details about the user input arguments for the micro-sam benchmark scri
$ python development/benchmark.py --help
```
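
As a minimal illustration of what line-profiler measures, it can also be driven directly from Python; the `compute` function below is a hypothetical stand-in for the code you want to profile:

```python
from line_profiler import LineProfiler

def compute(n):
    # Hypothetical workload standing in for a micro_sam function.
    squares = [i * i for i in range(n)]
    return sum(squares)

profiler = LineProfiler()
profiled_compute = profiler(compute)  # wrapping the function records per-line timings
profiled_compute(1_000_000)
profiler.print_stats()  # prints the time spent on each line of compute
```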
-### Snakeviz visualization
+#### Snakeviz visualization
For more detailed visualizations of profiling results, we use [snakeviz](https://jiffyclub.github.io/snakeviz/).
@@ -175,10 +175,14 @@ For more detailed visualizations of profiling results, we use [snakeviz](https:/
For more details about how to use snakeviz, see [the documentation](https://jiffyclub.github.io/snakeviz/).
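
To sketch the workflow: snakeviz visualizes profile files produced by cProfile, so you first write a profile from Python and then open it in the browser (the function and file names here are hypothetical):

```python
import cProfile

def main():
    # Hypothetical entry point standing in for the code you want to profile.
    return sum(i * i for i in range(1_000_000))

# Write the profiling results to a file that snakeviz can read,
# then visualize them with: snakeviz program.prof
cProfile.run("main()", "program.prof")
```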
-### Memory profiling with memray
+#### Memory profiling with memray
If you need to investigate memory use specifically, we use [memray](https://github.com/bloomberg/memray).
> Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.
For more details about how to use memray, see [the documentation](https://bloomberg.github.io/memray/getting_started.html).
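
For orientation, a minimal sketch of using memray as a library (the output file name and the workload are arbitrary):

```python
from memray import Tracker

def allocate():
    # Hypothetical allocation-heavy workload.
    return [bytearray(1024) for _ in range(10_000)]

# Record every allocation made inside the with-block to memray-output.bin;
# inspect the result afterwards, e.g. with: memray flamegraph memray-output.bin
with Tracker("memray-output.bin"):
    data = allocate()
```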
+## Creating a new release
+To create a new release you have to edit the version number in [micro_sam/__version__.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/micro_sam/__version__.py) in a PR. After merging this PR, the release will automatically be created by the CI.
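
For illustration, the release PR only needs to bump the version string; assuming the file holds a plain assignment, the edit looks like:

```python
# micro_sam/__version__.py
__version__ = "1.0.0"
```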
24 changes: 17 additions & 7 deletions doc/finetuned_models.md
@@ -9,20 +9,20 @@ We currently offer the following models:
- `vit_l`: Default Segment Anything model with ViT Large backbone.
- `vit_b`: Default Segment Anything model with ViT Base backbone.
- `vit_t`: Segment Anything model with ViT Tiny backbone. From the [Mobile SAM publication](https://arxiv.org/abs/2306.14289).
-- `vit_l_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Large backbone. ([Zenodo](TODO)) ([idealistic-rat on BioImage.IO](TODO))
+- `vit_l_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Large backbone. ([Zenodo](https://doi.org/10.5281/zenodo.11111176)) ([idealistic-rat on BioImage.IO](TODO))
- `vit_b_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Base backbone. ([Zenodo](https://zenodo.org/doi/10.5281/zenodo.11103797)) ([diplomatic-bug on BioImage.IO](TODO))
-- `vit_t_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Tiny backbone. ([Zenodo](TODO)) ([faithful-chicken BioImage.IO](TODO))
-- `vit_l_em_organelles`: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with ViT Large backbone. ([Zenodo](TODO)) ([humorous-crab on BioImage.IO](TODO))
-- `vit_b_em_organelles`: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with ViT Base backbone. ([Zenodo](TODO)) ([noisy-ox on BioImage.IO](TODO))
-- `vit_t_em_organelles`: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with ViT Tiny backbone. ([Zenodo](TODO)) ([greedy-whale on BioImage.IO](https://doi.org/10.5281/zenodo.11110950))
+- `vit_t_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Tiny backbone. ([Zenodo](https://doi.org/10.5281/zenodo.11111328)) ([faithful-chicken on BioImage.IO](TODO))
+- `vit_l_em_organelles`: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Large backbone. ([Zenodo](https://doi.org/10.5281/zenodo.11111054)) ([humorous-crab on BioImage.IO](TODO))
+- `vit_b_em_organelles`: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Base backbone. ([Zenodo](https://doi.org/10.5281/zenodo.11111293)) ([noisy-ox on BioImage.IO](TODO))
+- `vit_t_em_organelles`: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Tiny backbone. ([Zenodo](https://doi.org/10.5281/zenodo.11110950)) ([greedy-whale on BioImage.IO](TODO))

The two figures below show the improvements achieved by the finetuned models for LM and EM data.

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/lm_comparison.png" width="768">

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/em_comparison.png" width="768">

-You can select which model to use for annotation by selecting the corresponding name in the `Model:` drop-down menu in the embedding widget console:
+You can select which model to use in the [annotation tools](#annotation-tools) by selecting the corresponding name in the `Model:` drop-down menu in the embedding menu:

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/model-type-selector.png" width="256">
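
The same model names can also be used from the python library; a minimal sketch, assuming `micro_sam.util.get_sam_model` accepts the names listed above (check the library documentation for the exact signature):

```python
from micro_sam.util import get_sam_model

# Download (if necessary) and load the light microscopy generalist model.
predictor = get_sam_model(model_type="vit_b_lm")
```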

@@ -42,7 +42,7 @@ See also the figures above for examples where the finetuned models work better t
We are working on further improving these models and adding new models for other biomedical imaging domains.


-## Older Models
+## Other Models

Previous versions of our models are available on Zenodo:
- [vit_b_em_boundaries](https://zenodo.org/records/10524894): for segmenting compartments delineated by boundaries such as cells or neurites in EM.
@@ -52,3 +52,13 @@ Previous versions of our models are available on Zenodo:
- [vit_h_lm](https://zenodo.org/records/8250299): for general LM segmentation.

We do not recommend using these models, since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.

+We also provide additional models that were used for experiments in our publication on Zenodo:
+- [LIVECell Specialist Models](https://zenodo.org/records/11115426)
+- [TissueNet Specialist Models](TODO)
+- [NEURIPS Cell Seg Specialist Models](TODO)
+- [DeepBacs Specialist Models](TODO)
+- [PlantSeg (Ovules) Specialist Models](TODO)
+- [CREMI Specialist Models](TODO)
+- [ASEM ER Specialist Models](TODO)
+- [VIT-H Generalist Models and User Study Models](TODO)
8 changes: 4 additions & 4 deletions doc/installation.md
@@ -2,10 +2,10 @@

There are three ways to install `micro_sam`:
- [From mamba](#from-mamba) is the recommended way if you want to use all functionality.
-- [From source](#from-source) for setting up a development environment to use the development version and to change and contribute to our software.
-- [From installer](#from-installer) to install it without having to use mamba (supported platforms: Windows and Linux, only for CPU users).
+- [From source](#from-source) for setting up a development environment to use the latest version and to change and contribute to our software.
+- [From installer](#from-installer) to install it without having to use mamba (supported platforms: Windows and Linux, supports only CPU).

-You can find more information on the installation and how to troubleshoot it in [the FAQ section](installation-questions).
+You can find more information on the installation and how to troubleshoot it in [the FAQ section](#installation-questions).

## From mamba

@@ -19,7 +19,7 @@ You can follow the instructions [here](https://mamba.readthedocs.io/en/latest/in
```bash
$ mamba install -c conda-forge micro_sam
```
-or you can create a new environment (here called `micro-sam`) with it via:
+or you can create a new environment (here called `micro-sam`) via:
```bash
$ mamba create -c conda-forge -n micro-sam micro_sam
```
7 changes: 4 additions & 3 deletions doc/python_library.md
@@ -33,8 +33,9 @@ So a good strategy is to annotate a few images with one of the provided models u
We recommend checking out our latest [preprint](https://doi.org/10.1101/2023.08.21.554208) for details on how much data is required for finetuning Segment Anything.

The training logic is implemented in `micro_sam.training` and is based on [torch-em](https://github.com/constantinpape/torch-em). Check out [the finetuning notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb) to see how to use it.

We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster.
-The notebook explains how to activate training it together with the rest of SAM and how to then use it.
+The notebook explains how to train it together with the rest of SAM and how to then use it.

-More advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in [finetuning](https://github.com/computational-cell-analytics/micro-sam/tree/master/finetuning), which contains the code for training and evaluating [our models](finetuned-models). You can find further information on model training in the [FAQ section](fine-tuning-questions).
-TODO put table with resources here
+More advanced examples, including quantitative and qualitative evaluation, can be found in [the finetuning directory](https://github.com/computational-cell-analytics/micro-sam/tree/master/finetuning), which contains the code for training and evaluating [our models](finetuned-models). You can find further information on model training in the [FAQ section](fine-tuning-questions).
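
To give a flavor of the training API described above, here is a hedged sketch; the loader helper, file paths, and keyword arguments are assumptions based on this description, and the finetuning notebook remains the authoritative example:

```python
import micro_sam.training as sam_training

# Hypothetical paths: tif stacks with raw images and instance labels.
patch_shape = (1, 512, 512)
train_loader = sam_training.default_sam_loader(
    raw_paths="data/train_images.tif", raw_key=None,
    label_paths="data/train_labels.tif", label_key=None,
    patch_shape=patch_shape, batch_size=1,
    with_segmentation_decoder=True,  # also prepare targets for the extra decoder
)
val_loader = sam_training.default_sam_loader(
    raw_paths="data/val_images.tif", raw_key=None,
    label_paths="data/val_labels.tif", label_key=None,
    patch_shape=patch_shape, batch_size=1,
    with_segmentation_decoder=True,
)

# Finetune SAM together with the decoder for automatic instance segmentation.
sam_training.train_sam(
    name="sam_finetuned_for_my_data",
    model_type="vit_b",
    train_loader=train_loader,
    val_loader=val_loader,
    n_epochs=10,
    with_segmentation_decoder=True,
)
```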
6 changes: 3 additions & 3 deletions doc/start_page.md
@@ -1,6 +1,6 @@
# Segment Anything for Microscopy

-Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of [Segment Anything](https://segment-anything.com/) by Meta AI and specializes it for microscopy and other bio-imaging data.
+Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of [Segment Anything](https://segment-anything.com/) by Meta AI and specializes it for microscopy and other biomedical imaging data.
Its core components are:
- The `micro_sam` tools for interactive data annotation, built as [napari](https://napari.org/stable/) plugin.
- The `micro_sam` library to apply Segment Anything to 2d and 3d data or fine-tune it on your data.
@@ -27,7 +27,7 @@ $ mamba install -c conda-forge micro_sam
```
We also provide installers for Windows and Linux. For more details on the available installation options, check out [the installation section](#installation).

-After installing `micro_sam` you can start napari and select the annotation tool you want to use from `Plugins -> SegmentAnything for Microscopy`. Check out the [quickstart tutorial video](TODO) for a short introduction and [the annotation tool section](#annotation-tools) for details.
+After installing `micro_sam` you can start napari and select the annotation tool you want to use from `Plugins -> SegmentAnything for Microscopy`. Check out the [quickstart tutorial video](https://youtu.be/HauT-D2BHKc) for a short introduction and [the annotation tool section](#annotation-tools) for details.

The `micro_sam` python library can be imported via

@@ -43,6 +43,6 @@ You can also train models on your own data, see [here for details](#training-you
## Citation

If you are using `micro_sam` in your research please cite
-- Our [preprint](https://doi.org/10.1101/2023.08.21.554208).
+- our [preprint](https://doi.org/10.1101/2023.08.21.554208)
- and the original [Segment Anything publication](https://arxiv.org/abs/2304.02643).
- If you use a `vit-tiny` model, please also cite [Mobile SAM](https://arxiv.org/abs/2306.14289).
7 changes: 6 additions & 1 deletion notebooks/README.md
@@ -1,3 +1,8 @@
# Example Notebooks

-TODO
+We provide three example notebooks that demonstrate how the `micro_sam` python library can be used for:
+- Running automatic instance segmentation in `automatic_segmentation.ipynb`.
+- Applying a Segment Anything Model (SAM) on many images and evaluating the segmentation quality against ground-truth segmentations in `inference_and_evaluation.ipynb`.
+- Fine-tuning SAM on your custom data to obtain a better model for your segmentation tasks in `sam_finetuning.ipynb`.
+
+If you have an environment with `micro_sam` you can directly run these notebooks. You can also run them on [Kaggle Notebooks](https://www.kaggle.com/code); we install the correct dependencies for it in each notebook.
