From b98259d953be6fda4c80a86511b33aa2d53927ea Mon Sep 17 00:00:00 2001
From: Yufan He <59374597+heyufan1995@users.noreply.github.com>
Date: Wed, 2 Oct 2024 11:53:02 -0500
Subject: [PATCH] Update readme (#43)

Fixes # .

### Description
A few sentences describing the changes proposed in this pull request.

### Types of changes
- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [ ] New tests added to cover the changes.
- [ ] In-line docstrings updated.

---------

Signed-off-by: heyufan1995
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
 vista3d/README.md                  | 26 +++++++++++++++++++++-----
 vista3d/data/jsons/label_dict.json |  8 --------
 2 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/vista3d/README.md b/vista3d/README.md
index ebcbbce..23df329 100644
--- a/vista3d/README.md
+++ b/vista3d/README.md
@@ -78,8 +78,21 @@ Download the [model checkpoint](https://drive.google.com/file/d/1eLIxQwnxGsjggxi
 ### Inference
 The [NIM Demo (VISTA3D NVIDIA Inference Microservices)](https://build.nvidia.com/nvidia/vista-3d) does not support medical data upload due to legal concerns.
-We provide scripts for inference locally. The automatic segmentation label definition can be found at [label_dict](./data/jsons/label_dict.json).
-1. We provide the `infer.py` script and its light-weight front-end `debugger.py`. User can directly lauch a local interface for both automatic and interactive segmentation.
+We provide scripts for running inference locally. The automatic segmentation label definitions can be found in [label_dict](./data/jsons/label_dict.json). For the exact number of supported automatic segmentation classes and the reasoning behind it, please refer to [this issue](https://github.com/Project-MONAI/VISTA/issues/41).
+
+#### MONAI Bundle
+
+For automatic segmentation and batch processing, we highly recommend using the MONAI model zoo. The [MONAI bundle](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d) wraps VISTA3D and provides a unified API for inference, and the [NIM Demo](https://build.nvidia.com/nvidia/vista-3d) deploys the bundle with an interactive front-end. Although the NIM Demo cannot run locally, the bundle itself can. The following commands download the standalone vista3d bundle; the documentation inside the bundle explains finetuning and inference in detail.
+
+```
+pip install "monai[fire]"
+python -m monai.bundle download "vista3d" --bundle_dir "bundles/"
+```
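+
+As a minimal sketch of running the downloaded bundle (assuming the download landed in `bundles/vista3d` and that it ships the usual `configs/inference.json`; the `input_dict` override is an assumption, so check the bundle documentation for the exact parameter name):
+
+```
+cd bundles/vista3d
+python -m monai.bundle run --config_file configs/inference.json --input_dict "{'image': 'example-1.nii.gz'}"
+```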
+
+#### Debugger
+
+We provide the `infer.py` script and its lightweight front-end `debugger.py`. Users can directly launch a local interface for both automatic and interactive segmentation.
+
 ```
 python -m scripts.debugger run
 ```
@@ -91,12 +104,11 @@ To segment everything, run
 ```
 export CUDA_VISIBLE_DEVICES=0; python -m scripts.infer --config_file 'configs/infer.yaml' - infer_everything --image_file 'example-1.nii.gz'
 ```
+The output path and other configs can be changed in `configs/infer.yaml`.
 
-The output path and other configs can be changed in the `configs/infer.yaml`
-2. The [MONAI bundle](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d) wraps VISTA3D and provides a unified API for inference, and the [NIM Demo](https://build.nvidia.com/nvidia/vista-3d) deploys the bundle with an interactive front-end. Although NIM Demo cannot run locally, the bundle is available and can run locally.
-The running enviroment requires a monai docker. The MONAI bundle is more suitable for automatic segmentattion in batch.
 ```
-docker pull projectmonai/monai:1.3.2
+NOTE: `infer.py` does not support the `lung`, `kidney`, and `bone` classes, while the MONAI bundle does. The MONAI bundle also uses better memory management and is much less likely to run into OOM issues.
 ```
@@ -134,6 +146,10 @@ For finetuning, user need to change `label_set` and `mapped_label_set` in the js
 export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7;torchrun --nnodes=1 --nproc_per_node=8 -m scripts.train_finetune run --config_file "['configs/finetune/train_finetune_word.yaml']"
 ```
+
+```
+Note: The MONAI bundle also provides a unified API for finetuning, but the results in the table and the paper are from this research repository.
+```
+
 ### NEW! [SAM2 Benchmark Tech Report](https://arxiv.org/abs/2408.11210)
 We provide scripts to run SAM2 evaluation. Modify SAM2 source code to support background remove: Add `z_slice` to `sam2_video_predictor.py`. Require SAM2 package [installation](https://github.com/facebookresearch/segment-anything-2)
 ```
diff --git a/vista3d/data/jsons/label_dict.json b/vista3d/data/jsons/label_dict.json
index 92a6414..ef1819f 100644
--- a/vista3d/data/jsons/label_dict.json
+++ b/vista3d/data/jsons/label_dict.json
@@ -1,6 +1,5 @@
 {
     "liver": 1,
-    "kidney": 2,
     "spleen": 3,
     "pancreas": 4,
     "right kidney": 5,
@@ -14,12 +13,8 @@
     "duodenum": 13,
     "left kidney": 14,
     "bladder": 15,
-    "prostate or uterus (deprecated)": 16,
     "portal vein and splenic vein": 17,
-    "rectum (deprecated)": 18,
     "small bowel": 19,
-    "lung": 20,
-    "bone": 21,
     "brain": 22,
     "lung tumor": 23,
     "pancreatic tumor": 24,
@@ -127,8 +122,5 @@
     "thyroid gland": 126,
     "vertebrae S1": 127,
     "bone lesion": 128,
-    "kidney mass (deprecated)": 129,
-    "liver tumor (deprecated)": 130,
-    "vertebrae L6 (deprecated)": 131,
     "airway": 132
 }
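
As a quick illustration of how the trimmed `label_dict.json` can be consumed, a minimal sketch (the file path matches the diff above; the variable names are illustrative, not an API of the repository):

```
import json

# Load the class-name -> index mapping shipped with VISTA3D.
with open("vista3d/data/jsons/label_dict.json") as f:
    label_dict = json.load(f)

# After this patch, the deprecated entries and the "kidney", "lung",
# and "bone" super-classes (unsupported by infer.py) are gone, so every
# remaining index is a valid automatic-segmentation target.
everything_labels = sorted(label_dict.values())
print(f"{len(label_dict)} classes, e.g. liver -> {label_dict['liver']}")
```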