
Update readme #43

Merged 2 commits on Oct 2, 2024
26 changes: 21 additions & 5 deletions vista3d/README.md
@@ -78,8 +78,21 @@ Download the [model checkpoint](https://drive.google.com/file/d/1eLIxQwnxGsjggxi

### Inference
The [NIM Demo (VISTA3D NVIDIA Inference Microservices)](https://build.nvidia.com/nvidia/vista-3d) does not support medical data upload due to legal concerns.
We provide scripts for running inference locally. The automatic segmentation label definitions can be found in [label_dict](./data/jsons/label_dict.json). For the exact number of supported automatic segmentation classes and the reasoning behind it, please refer to [issue #41](https://github.com/Project-MONAI/VISTA/issues/41).
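
Since `label_dict.json` is a plain name-to-index map, the supported classes can be inspected directly; a minimal sketch, assuming it is run from the `vista3d/` directory:

```
import json

# label_dict.json maps class names to the integer indices used in predictions.
with open("data/jsons/label_dict.json") as f:
    label_dict = json.load(f)

print(len(label_dict), "classes; e.g. 'liver' ->", label_dict["liver"])
```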

#### MONAI Bundle

For automatic segmentation and batch processing, we highly recommend using the MONAI model zoo. The [MONAI bundle](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d) wraps VISTA3D and provides a unified API for inference, and the [NIM Demo](https://build.nvidia.com/nvidia/vista-3d) deploys the bundle with an interactive front-end. Although the NIM Demo itself cannot run locally, the bundle can. The following commands download the standalone vista3d bundle; the documentation inside the bundle contains a detailed explanation of finetuning and inference.

```
pip install "monai[fire]"
python -m monai.bundle download "vista3d" --bundle_dir "bundles/"
```
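
The same steps can also be scripted through MONAI's Python API. This is a minimal sketch: the `configs/inference.json` path and the `input_dict` override follow the standard MONAI bundle layout and are assumptions here, so verify them against the documentation shipped in the bundle.

```
from monai.bundle import download, run

# Fetch the standalone vista3d bundle (same effect as the CLI command above).
download(name="vista3d", bundle_dir="bundles/")

# Run the bundle's inference workflow. The config path assumes the standard
# MONAI bundle layout, and "input_dict" is an assumed override key; check
# both against the docs inside bundles/vista3d before relying on them.
run(
    config_file="bundles/vista3d/configs/inference.json",
    input_dict={"image": "example-1.nii.gz"},
)
```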

#### Debugger

We provide the `infer.py` script and its lightweight front-end, `debugger.py`. Users can directly launch a local interface for both automatic and interactive segmentation.

```
python -m scripts.debugger run
```
@@ -91,12 +104,11 @@ To segment everything, run
```
export CUDA_VISIBLE_DEVICES=0; python -m scripts.infer --config_file 'configs/infer.yaml' - infer_everything --image_file 'example-1.nii.gz'
```
The output path and other configs can be changed in `configs/infer.yaml`.
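
Once inference finishes, the saved mask can be cross-checked against the label definitions; a minimal sketch, assuming `nibabel` is installed and using a placeholder output filename (use whatever path `configs/infer.yaml` is set to write):

```
import json

import nibabel as nib
import numpy as np

# Voxel values in the output mask are the class indices from label_dict.json.
# The filename below is a placeholder; use the path set in configs/infer.yaml.
seg = np.asarray(nib.load("outputs/example-1_seg.nii.gz").dataobj).astype(int)
with open("data/jsons/label_dict.json") as f:
    index_to_name = {v: k for k, v in json.load(f).items()}

for idx in np.unique(seg):
    if idx != 0:  # 0 is background
        print(idx, index_to_name.get(idx, "unknown"))
```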

```
NOTE: `infer.py` does not support `lung`, `kidney`, and `bone` class segmentation, while the MONAI bundle supports those classes. The MONAI bundle also uses better memory management and will not easily face OOM issues.
```


@@ -134,6 +146,10 @@ For finetuning, users need to change `label_set` and `mapped_label_set` in the json config (a hypothetical sketch of these two fields follows the command below).
```
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7;torchrun --nnodes=1 --nproc_per_node=8 -m scripts.train_finetune run --config_file "['configs/finetune/train_finetune_word.yaml']"
```
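
For illustration, a hypothetical sketch of how the two fields relate; the dataset-side indices below are placeholders rather than WORD's real configuration, while the mapped indices come from `label_dict.json`:

```
# Hypothetical example of the finetuning label remapping. label_set lists the
# indices used in the new dataset's ground truth, and mapped_label_set lists
# the corresponding VISTA3D global indices from data/jsons/label_dict.json.
# The dataset-side values are placeholders, not WORD's real configuration.
label_set = [1, 2, 3]          # e.g. liver, spleen, pancreas in the new dataset
mapped_label_set = [1, 3, 4]   # the same organs' indices in label_dict.json
```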

```
Note: The MONAI bundle also provides a unified API for finetuning, but the results in the table and the paper are from this research repository.
```

### NEW! [SAM2 Benchmark Tech Report](https://arxiv.org/abs/2408.11210)
We provide scripts to run the SAM2 evaluation. Modify the SAM2 source code to support background removal: add `z_slice` to `sam2_video_predictor.py`. This requires the SAM2 package ([installation](https://github.com/facebookresearch/segment-anything-2)).
8 changes: 0 additions & 8 deletions vista3d/data/jsons/label_dict.json
@@ -1,6 +1,5 @@
{
"liver": 1,
"kidney": 2,
"spleen": 3,
"pancreas": 4,
"right kidney": 5,
@@ -14,12 +13,8 @@
"duodenum": 13,
"left kidney": 14,
"bladder": 15,
"prostate or uterus (deprecated)": 16,
"portal vein and splenic vein": 17,
"rectum (deprecated)": 18,
"small bowel": 19,
"lung": 20,
"bone": 21,
"brain": 22,
"lung tumor": 23,
"pancreatic tumor": 24,
@@ -127,8 +122,5 @@
"thyroid gland": 126,
"vertebrae S1": 127,
"bone lesion": 128,
"kidney mass (deprecated)": 129,
"liver tumor (deprecated)": 130,
"vertebrae L6 (deprecated)": 131,
"airway": 132
}