
Segment Template #514

Merged (14 commits) on Oct 12, 2023
8 changes: 8 additions & 0 deletions README.md
@@ -20,6 +20,14 @@ python -m monai.bundle download "wholeBody_ct_segmentation" --bundle_dir "bundle

To get started with the models, please see [the example use cases](https://github.com/Project-MONAI/tutorials/tree/main/model_zoo).

## Template Bundles

We aim to provide a number of template bundles in the zoo for you to copy and adapt to your own needs.
This should reduce the effort of developing your own bundles while demonstrating what we consider good practice and design.
We currently have the following:

* [Segmentation Template](./models/segmentation_template)

## License

Bundles released on the MONAI Model Zoo require a license for the software itself, comprising the configuration files and model weights. You are required to adhere to the license conditions included with each bundle, as well as any license conditions stated for data the bundle may include or use (check the file `docs/data_license.txt` if it exists within the bundle directory).
21 changes: 21 additions & 0 deletions models/segmentation_template/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 MONAI Consortium

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
113 changes: 113 additions & 0 deletions models/segmentation_template/configs/inference.yaml
@@ -0,0 +1,113 @@
# This implements the workflow for applying the network to a directory of images and saving the predicted segmentations.

imports:
- $import os
- $import torch
- $import glob

# pull out some constants from MONAI
image: $monai.utils.CommonKeys.IMAGE
pred: $monai.utils.CommonKeys.PRED

# hyperparameters for you to modify on the command line
batch_size: 1 # number of images per batch
num_workers: 0 # number of workers to generate batches with
num_classes: 4 # number of classes in training data which network should predict
device: $torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# define various paths
bundle_root: . # root directory of the bundle
ckpt_path: $@bundle_root + '/models/model.pt' # checkpoint to load before starting
dataset_dir: $@bundle_root + '/test_data' # where data is coming from
output_dir: './outputs' # directory to store images to

# network definition, this could be parameterised by pre-defined values or on the command line
network_def:
_target_: UNet
spatial_dims: 3
in_channels: 1
out_channels: '@num_classes'
channels: [8, 16, 32, 64]
strides: [2, 2, 2]
num_res_units: 2
network: $@network_def.to(@device)

# list all niftis in the input directory
file_pattern: '*.nii*'
data_list: '$list(sorted(glob.glob(os.path.join(@dataset_dir, @file_pattern))))'
# collect data dictionaries for all files
data_dicts: '$[{@image:i} for i in @data_list]'

# these transforms are used for inference to load and regularise inputs
transforms:
- _target_: LoadImaged
keys: '@image'
image_only: true
- _target_: EnsureChannelFirstd
keys: '@image'
- _target_: ScaleIntensityd
keys: '@image'

preprocessing:
_target_: Compose
transforms: $@transforms

dataset:
_target_: Dataset
data: '@data_dicts'
transform: '@preprocessing'

dataloader:
_target_: ThreadDataLoader # generate data asynchronously from inference
dataset: '@dataset'
batch_size: '@batch_size'
num_workers: '@num_workers'

# should be replaced with other inferer types if the inference process is different for your network
inferer:
_target_: SimpleInferer

# transforms applied to the network output to produce discrete label maps and save them
postprocessing:
_target_: Compose
transforms:
- _target_: Activationsd
keys: '@pred'
softmax: true
- _target_: AsDiscreted
keys: '@pred'
argmax: true
- _target_: SaveImaged
keys: '@pred'
meta_keys: pred_meta_dict
data_root_dir: '@dataset_dir'
output_dir: '@output_dir'
dtype: $None
output_dtype: $None
output_postfix: ''
resample: false
separate_folder: true

# inference handlers to load checkpoint, gather statistics
handlers:
- _target_: CheckpointLoader
_disabled_: $not os.path.exists(@ckpt_path)
load_path: '@ckpt_path'
load_dict:
model: '@network'
- _target_: StatsHandler
name: null # use engine.logger as the Logger object to log to
output_transform: '$lambda x: None'

# engine for running inference, ties together objects defined above and has metric definitions
evaluator:
_target_: SupervisedEvaluator
device: '@device'
val_data_loader: '@dataloader'
network: '@network'
inferer: '@inferer'
postprocessing: '@postprocessing'
val_handlers: '@handlers'

run:
- $@evaluator.run()
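The `data_list` and `data_dicts` expressions in this config evaluate to ordinary Python; as a rough illustration of what the inference workflow iterates over (a stdlib-only sketch, with `./test_data` standing in for `@dataset_dir`):

```python
import glob
import os

dataset_dir = "./test_data"   # stands in for @dataset_dir
file_pattern = "*.nii*"       # matches both .nii and .nii.gz files

# equivalent of the config's data_list expression: a sorted list of paths
data_list = sorted(glob.glob(os.path.join(dataset_dir, file_pattern)))

# equivalent of data_dicts: one {"image": path} dict per file, which is
# the shape dictionary-based transforms such as LoadImaged expect
data_dicts = [{"image": path} for path in data_list]
```

Each dictionary in `data_dicts` becomes one item of the `Dataset`, so the preprocessing transforms all key off `@image`.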
21 changes: 21 additions & 0 deletions models/segmentation_template/configs/logging.conf
@@ -0,0 +1,21 @@
[loggers]
keys=root

[handlers]
keys=consoleHandler

[formatters]
keys=fullFormatter

[logger_root]
level=INFO
handlers=consoleHandler

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=fullFormatter
args=(sys.stdout,)

[formatter_fullFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
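This file is a standard `logging.config.fileConfig` configuration; the `fullFormatter` pattern yields lines of the form `time - logger name - level - message`. A minimal stdlib sketch of what a formatted record looks like (the logger name here is hypothetical):

```python
import logging

# same pattern as fullFormatter in logging.conf
fmt = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

# build a record by hand rather than configuring a logger, just to show the layout
record = logging.LogRecord(
    name="SupervisedEvaluator",  # hypothetical logger name
    level=logging.INFO,
    pathname=__file__,
    lineno=1,
    msg="Engine run starting",
    args=None,
    exc_info=None,
)
print(fmt.format(record))
```

In the bundle, this config is passed via `--logging_file` so that engine and handler output is formatted consistently on stdout.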
64 changes: 64 additions & 0 deletions models/segmentation_template/configs/metadata.json
@@ -0,0 +1,64 @@
{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
"version": "0.0.1",
"changelog": {
"0.0.1": "Initial version"
},
"monai_version": "1.2.0",
"pytorch_version": "2.0.1",
"numpy_version": "1.24.4",
"optional_packages_version": {
"nibabel": "5.1.0",
"pytorch-ignite": "0.4.12"
},
"name": "Segmentation Template",
"task": "Segmentation of randomly generated spheres in 3D images",
"description": "This is a template bundle for 3D segmentation; take it as a basis for your own bundles.",
"authors": "Eric Kerfoot",
"copyright": "Copyright (c) 2023 MONAI Consortium",
"network_data_format": {
"inputs": {
"image": {
"type": "image",
"format": "magnitude",
"modality": "none",
"num_channels": 1,
"spatial_shape": [
128,
128,
128
],
"dtype": "float32",
"value_range": [],
"is_patch_data": false,
"channel_def": {
"0": "image"
}
}
},
"outputs": {
"pred": {
"type": "image",
"format": "segmentation",
"num_channels": 4,
"spatial_shape": [
128,
128,
128
],
"dtype": "float32",
"value_range": [
0,
3
],
"is_patch_data": false,
"channel_def": {
"0": "background",
"1": "category 1",
"2": "category 2",
"3": "category 3"
}
}
}
}
}
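The output metadata must be internally consistent: four channels after softmax, so the argmax labels span 0 to 3 and `channel_def` names each index. A small sketch of the kind of sanity check a bundle author might run over this excerpt:

```python
import json

# excerpt of the "pred" entry from metadata.json above
metadata_excerpt = """
{
  "pred": {
    "num_channels": 4,
    "value_range": [0, 3],
    "channel_def": {"0": "background", "1": "category 1",
                    "2": "category 2", "3": "category 3"}
  }
}
"""

pred = json.loads(metadata_excerpt)["pred"]

# one channel_def entry per channel, and argmax output spans 0..num_channels-1
assert len(pred["channel_def"]) == pred["num_channels"]
assert pred["value_range"] == [0, pred["num_channels"] - 1]
```

When adapting the template, `num_classes` in the configs, `out_channels` in the network, and these three metadata fields should all change together.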
37 changes: 37 additions & 0 deletions models/segmentation_template/configs/multi_gpu_train.yaml
@@ -0,0 +1,37 @@
# This file contains the changes to implement DDP training with the train.yaml config.

is_dist: '$dist.is_initialized()'
rank: '$dist.get_rank() if @is_dist else 0'
device: '$torch.device(f"cuda:{@rank}" if torch.cuda.is_available() else "cpu")' # assumes GPU # matches rank #

# wrap the network in a DistributedDataParallel instance, moving it to the chosen device for this process
network:
_target_: torch.nn.parallel.DistributedDataParallel
module: $@network_def.to(@device)
device_ids: ['@device']
find_unused_parameters: true

train_sampler:
_target_: DistributedSampler
dataset: '@train_dataset'
even_divisible: true
shuffle: true

train_dataloader#sampler: '@train_sampler'
train_dataloader#shuffle: false

val_sampler:
_target_: DistributedSampler
dataset: '@val_dataset'
even_divisible: false
shuffle: false

val_dataloader#sampler: '@val_sampler'

run:
- $import torch.distributed as dist
- $dist.init_process_group(backend='nccl')
- $torch.cuda.set_device(@device)
- $monai.utils.set_determinism(seed=123) # may want to choose a different seed or not do this here
- $@trainer.run()
- $dist.destroy_process_group()
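The effect of `even_divisible: true` on the training sampler, padding the dataset by wrapping around so that every rank draws the same number of samples, can be sketched without torch (an illustration of the idea, not MONAI's actual implementation):

```python
import math

def shard_indices(num_samples, world_size, rank, even_divisible=True):
    """Return the sample indices one rank would draw, round-robin style."""
    indices = list(range(num_samples))
    if even_divisible:
        # pad by wrapping around so len(indices) divides evenly by world_size
        total = math.ceil(num_samples / world_size) * world_size
        indices += indices[: total - num_samples]
    return indices[rank::world_size]

# 10 samples across 4 ranks: every rank gets 3, the first two samples repeat
shards = [shard_indices(10, 4, r) for r in range(4)]
```

This is why the validation sampler sets `even_divisible: false`: repeating samples would bias validation metrics, whereas for training the duplication is harmless. In practice this config is layered over `train.yaml` and typically launched with one process per GPU, for example via `torchrun`.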