A napari plugin for image annotation that combines the feature space of vision transformers with a Random Forest classifier.
We developed a napari plugin that trains a Random Forest model on features extracted by vision foundation models, using only a few scribble labels provided by the user as input. This approach segments the desired objects almost as accurately as manual segmentation, but in far less time and with much less manual effort.
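To illustrate the idea, here is a minimal, self-contained sketch (not the plugin's own code): the feature array stands in for patch embeddings from a vision foundation model, and all names and shapes are illustrative.

```python
# Sketch of the approach: per-pixel features + a few scribble labels
# -> Random Forest -> full segmentation mask.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in for features from a vision foundation model (e.g. a ViT);
# shape: (height, width, feature_dim).
h, w, d = 64, 64, 256
features = np.random.rand(h, w, d).astype(np.float32)

# Sparse "scribble" labels: 0 = unlabeled, 1 = background, 2 = object.
scribbles = np.zeros((h, w), dtype=np.uint8)
scribbles[5:10, 5:10] = 1      # a background scribble
scribbles[30:35, 30:35] = 2    # an object scribble

# Train only on the labeled pixels.
labeled = scribbles > 0
rf = RandomForestClassifier(n_estimators=100)
rf.fit(features[labeled], scribbles[labeled])

# Predict a class for every pixel to get the full segmentation.
segmentation = rf.predict(features.reshape(-1, d)).reshape(h, w)
```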
The plugin documentation is here.
To install this plugin, use conda or mamba to create an environment and install the requirements. Use the commands below to create the environment and install the plugin:
# for GPU
conda env create -f ./env_gpu.yml
# if you don't have a GPU
conda env create -f ./env_cpu.yml
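Once the environment is created, activate it before launching napari. The environment name below is an assumption; check the name field inside the yaml file you used and substitute the actual name:

```bash
conda activate featureforest
```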
Note: You need to install sam-2, which can be installed easily using conda. To install sam-2 using pip, please refer to the official sam-2 repository.
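For the conda route, a hedged example is shown below; it assumes the package is published on conda-forge under the name sam-2, so check the official sam-2 repository if the channel or package name differs:

```bash
conda install -c conda-forge sam-2
```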
Requirements:
python >= 3.10
numpy==1.24.4
opencv-python
scikit-learn
scikit-image
matplotlib
pyqt
magicgui
qtpy
napari
h5py
pytorch=2.3.1
torchvision=0.18.1
timm=1.0.9
pynrrd
segment-anything
sam-2
If you want to install the plugin manually with GPU support, please follow the PyTorch installation instructions here.
For detailed napari installation instructions, see here.
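As a hedged example of a manual GPU install, the command below pins the versions from the requirements list and assumes a CUDA 12.1 build; consult the PyTorch website to pick the index URL matching your CUDA version:

```bash
pip install torch==2.3.1 torchvision==0.18.1 --index-url https://download.pytorch.org/whl/cu121
```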
If you use the provided conda environment yaml files, the plugin will be installed automatically. But if you already have the environment set up, you can install just the plugin. First, clone the repository:
git clone https://github.com/juglab/featureforest
Then run the following commands:
cd ./featureforest
pip install .
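Alternatively, pip can install directly from the repository URL given above without a manual clone; this is standard pip behavior, shown here as an optional shortcut:

```bash
pip install git+https://github.com/juglab/featureforest.git
```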
Distributed under the terms of the BSD-3 license, "featureforest" is free and open source software.
If you encounter any problems, please [file an issue] along with a detailed description.