Comparison between 2D and 3D models #6

Open
naga-karthik opened this issue Jan 31, 2023 · 1 comment
@naga-karthik (Member)

I ran a few experiments with both 2D and 3D models and the purpose of this issue is to document the results and draw comparisons.

Config file used for 2D model
{
    "command": "train",
    "gpu_ids": [1],
    "path_output": "outputs/2d-model_ccrop=64-64_N=full",
    "model_name": "modified_unet",
    "debugging": true,
    "object_detection_params": {
        "object_detection_path": null,
        "safety_factor": [1.0, 1.0, 1.0]
    },
    "wandb": {
        "wandb_api_key": "bf043c64b6c6b4abcc1ee7d8501c300b19945028",
        "project_name": "centerline-detection",
        "group_name": "ivadomed-2d-unet",
        "run_name": "ccrop=64-64_N=full",
        "log_grads_every": 5000
    },
    "loader_parameters": {
        "path_data": ["/home/GRAMES.POLYMTL.CA/u114716/duke/temp/janvalosek/canproco_T2w_centerline_2023-01-23/data_processed"],
        "target_suffix": ["_seg_centerline"],
        "extensions": [".nii.gz"],
        "roi_params": {
            "suffix": null,
            "slice_filter_roi": null
        },
        "contrast_params": {
            "training_validation": ["T2w"],
            "testing": ["T2w"],
            "balance": {}
        },
        "slice_filter_params": {
            "filter_empty_mask": false,
            "filter_empty_input": true
        },
        "slice_axis": "axial",
        "multichannel": false,
        "soft_gt": false,
        "bids_validate": false
    },
    "split_dataset": {
        "fname_split": null,
        "random_seed": 100,
        "split_method" : "participant_id",
        "data_testing": {"data_type": null, "data_value":[]},
        "balance": null,
        "train_fraction": 0.6,
        "test_fraction": 0.2
    },
    "training_parameters": {
        "batch_size": 64,
        "loss": {
            "name": "DiceLoss"
        },
        "training_time": {
            "num_epochs": 200,
            "early_stopping_patience": 75,
            "early_stopping_epsilon": 0.001
        },
        "scheduler": {
            "initial_lr": 1e-4,
            "lr_scheduler": {
                "name": "CosineAnnealingLR",
                "base_lr": 1e-5,
                "max_lr": 1e-3
            }
        },
        "balance_samples": {"applied": false, "type": "gt"},
        "mixup_alpha": null,
        "transfer_learning": {
            "retrain_model": null,
            "retrain_fraction": 1.0,
            "reset": true
        }
    },
    "default_model": {
        "name": "Unet",
        "dropout_rate": 0.25,
        "bn_momentum": 0.9,
        "is_2d": true,
        "final_activation": "relu",
        "depth": 4
    },
    "uncertainty": {
        "epistemic": false,
        "aleatoric": false,
        "n_it": 0
    },
    "postprocessing": {
        "remove_noise": {"thr": -1},
        "keep_largest": {},
        "binarize_prediction": {"thr": 0.5},
        "uncertainty": {"thr": -1, "suffix": "_unc-vox.nii.gz"},
        "fill_holes": {},
        "remove_small": {"unit": "vox", "thr": 3}
    },
    "evaluation_parameters": {},
    "Modified3DUNet": {
        "applied": false,
        "length_3D": [320, 320, 56],
        "stride_3D": [320, 320, 28],
        "attention": false,
        "n_filters": 8
    },
    "transformation": {
        "Resample": {
            "hspace": 0.8,
            "wspace": 0.8,
            "dspace": 0.8
        },
        "CenterCrop": {
            "size": [64, 64]
        },
        "RandomReverse": {
            "applied_to": ["im", "gt"],
            "dataset_type": ["training"]
        },
        "RandomAffine": {
            "degrees": 5,
            "scale": [0.1, 0.1],
            "translate": [0.03, 0.03],
            "applied_to": ["im", "gt"],
            "dataset_type": ["training"]
        },
        "ElasticTransform": {
			"alpha_range": [28.0, 30.0],
			"sigma_range":  [3.5, 4.5],
			"p": 0.1,
            "applied_to": ["im", "gt"],
            "dataset_type": ["training"]
        },
        "NormalizeInstance": {"applied_to": ["im"]}
    }
}
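
For context on what the 2D network actually sees: with Resample at 0.8 mm and a CenterCrop of [64, 64], each training sample is roughly a 51.2 mm × 51.2 mm axial patch around the image center, so the cord has to stay within that crop for the target to be visible. A minimal sketch to pull these numbers out of the config (file name hypothetical; it also assumes hspace/wspace are the in-plane spacings and that the crop is applied on the resampled grid):

import json

# Hypothetical file name; point this at the 2D config shown above.
with open("config_2d_centerline.json") as f:
    cfg = json.load(f)

spacing = (cfg["transformation"]["Resample"]["hspace"],
           cfg["transformation"]["Resample"]["wspace"])   # (0.8, 0.8) mm in-plane
crop = cfg["transformation"]["CenterCrop"]["size"]        # [64, 64] voxels

fov_mm = [n * s for n, s in zip(crop, spacing)]
print(f"Axial field of view: {fov_mm[0]:.1f} mm x {fov_mm[1]:.1f} mm")  # 51.2 x 51.2 mm
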
Config file used for 3D model
{
    "command": "train",
    "gpu_ids": [2],
    "path_output": "outputs/3d-model_ccrop=320-256-64_len=320-256-64_str=160-128-32_N=full",
    "model_name": "modified_unet",
    "debugging": true,
    "object_detection_params": {
        "object_detection_path": null,
        "safety_factor": [1.0, 1.0, 1.0]
    },
    "wandb": {
        "wandb_api_key": "bf043c64b6c6b4abcc1ee7d8501c300b19945028",
        "project_name": "centerline-detection",
        "group_name": "ivadomed-3d-unet",
        "run_name": "ccrop=320-256-64_len=320-256-64_str=160-128-32_N=full",
        "log_grads_every": 4000
    },
    "loader_parameters": {
        "path_data": ["/home/GRAMES.POLYMTL.CA/u114716/duke/temp/janvalosek/canproco_T2w_centerline_2023-01-23/data_processed"],
        "target_suffix": ["_seg_centerline"],
        "extensions": [".nii.gz"],
        "roi_params": {
            "suffix": null,
            "slice_filter_roi": null
        },
        "contrast_params": {
            "training_validation": ["T2w"],
            "testing": ["T2w"],
            "balance": {}
        },
        "slice_filter_params": {
            "filter_empty_mask": false,
            "filter_empty_input": false
        },
        "slice_axis": "sagittal",
        "multichannel": false,
        "soft_gt": false,
        "bids_validate": false
    },
    "split_dataset": {
        "fname_split": null,
        "random_seed": 7,
        "split_method" : "participant_id",
        "data_testing": {"data_type": null, "data_value":[]},
        "balance": null,
        "train_fraction": 0.6,
        "test_fraction": 0.2
    },
    "training_parameters": {
        "batch_size": 2,
        "loss": {
            "name": "DiceLoss"
        },
        "training_time": {
            "num_epochs": 400,
            "early_stopping_patience": 50,
            "early_stopping_epsilon": 0.001
        },
        "scheduler": {
            "initial_lr": 1e-4,
            "lr_scheduler": {
                "name": "CosineAnnealingLR",
                "base_lr": 1e-5,
                "max_lr": 1e-3
            }
        },
        "balance_samples": {"applied": false, "type": "gt"},
        "transfer_learning": {
            "retrain_model": null,
            "retrain_fraction": 1.0,
            "reset": true
        }
    },
    "default_model": {
        "name": "Unet",
        "dropout_rate": 0.25,
        "bn_momentum": 0.1,
        "is_2d": false,
        "final_activation": "relu"
    },
    "uncertainty": {
        "epistemic": false,
        "aleatoric": false,
        "n_it": 0
    },
    "postprocessing": {
        "remove_noise": {"thr": -1},
        "keep_largest": {},
        "binarize_prediction": {"thr": 0.5},
        "uncertainty": {"thr": -1, "suffix": "_unc-vox.nii.gz"},
        "fill_holes": {},
        "remove_small": {"unit": "vox", "thr": 3}
    },
    "evaluation_parameters": {},
    "Modified3DUNet": {
        "applied": true,
        "length_3D": [320, 256, 64],
        "stride_3D": [160, 128, 32],
        "attention": false,
        "n_filters": 8
    },
    "transformation": {
        "Resample": {
            "hspace": 0.8,
            "wspace": 0.8,
            "dspace": 0.8
        },
        "CenterCrop": {
            "size": [320, 256, 64],
            "dataset_type": ["training", "validation"]
        },
        "RandomAffine": {
            "degrees": 5,
            "scale": [0.15, 0.15, 0.15],
            "translate": [0.1, 0.1, 0.1],
            "applied_to": ["im", "gt"],
            "dataset_type": ["training"]
        },
        "RandomBiasField": {
            "coefficients": 0.5,
            "order": 3,
            "p": 0.25,
            "applied_to": ["im"],
            "dataset_type": ["training"]
        },
        "RandomReverse": {
            "applied_to": ["im", "gt"],
            "dataset_type": ["training"]
        },    
        "HistogramClipping": {
            "min_percentile": 3,
            "max_percentile": 97,
            "applied_to": ["im"]
        },
        "NormalizeInstance": {"applied_to": ["im"]}
    }
}
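
The main practical difference between the two setups: the 2D model trains on 64×64 axial crops, while the 3D model takes 320×256×64 sub-volumes with a stride of 160×128×32, i.e. 50% overlap along each axis. Since CenterCrop here matches length_3D exactly, each cropped training volume presumably yields a single patch, and the stride matters mainly for volumes larger than the crop. A rough sketch of the length_3D/stride_3D arithmetic (an illustration only, not ivadomed's actual patch-sampling code):

# Illustration only: enumerate patch start indices along each axis for a given
# volume shape, using the length_3D/stride_3D values from the 3D config above.
volume_shape = (320, 256, 64)   # shape after Resample + CenterCrop
length_3d = (320, 256, 64)
stride_3d = (160, 128, 32)

def patch_starts(dim, length, stride):
    """Start indices of patches along one axis, with the last patch flush to the edge."""
    starts, pos = [], 0
    while pos + length < dim:
        starts.append(pos)
        pos += stride
    starts.append(dim - length)
    return sorted(set(starts))

grid = [patch_starts(d, l, s) for d, l, s in zip(volume_shape, length_3d, stride_3d)]
print(grid)                                        # [[0], [0], [0]] -> one start per axis
print(len(grid[0]) * len(grid[1]) * len(grid[2]))  # 1 patch for a 320x256x64 volume
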

The average test Dice from the 2D model was 0.302, while that of the 3D model was 0.493. The GIFs below show what the predictions look like:

Comparison 1

The GT centerline (from sct_get_centerline) is shown in white, the prediction from the 2D model in green, and the prediction from the 3D model in red.

(animated GIF)

Comparison 2

The GT centerline (from sct_get_centerline) is shown in white, the prediction from the 2D model in green, and the prediction from the 3D model in red.

(animated GIF)

Observations:

  1. In both cases, the model does not predict the centerline in the thoracic region. This issue still persists and I am trying to think of ways to overcome it.
  2. On the positive side, the models also predict the centerline in regions where it is not defined in the GT. This suggests there is scope for improving the models' predictions so that they cover the centerline better than OptiC, which sct_get_centerline uses. (Note that these extra predictions also count against the Dice score; see the sketch below.)
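
One thing to keep in mind when reading the Dice numbers above: the GT centerline is a thin, roughly one-voxel-thick structure, so any predicted voxels that fall outside it (including the anatomically plausible ones from observation 2) count against the score. A minimal sketch of the standard Dice overlap (not necessarily ivadomed's exact DiceLoss implementation), assuming numpy arrays:

import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice = 2 * |pred ∩ gt| / (|pred| + |gt|), computed on binarized masks."""
    pred = (pred > 0.5).astype(np.float32)
    gt = (gt > 0.5).astype(np.float32)
    intersection = (pred * gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy example: a prediction that covers the whole GT segment plus 20 extra voxels.
gt = np.zeros(100);   gt[30:70] = 1     # 40-voxel GT centerline segment
pred = np.zeros(100); pred[30:90] = 1   # same 40 voxels plus 20 beyond the GT
print(dice_score(pred, gt))             # 0.8: the extra (undefined-in-GT) voxels are penalized
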
@jcohenadad (Member)

This is very cool @naga-karthik! However, I don't trust the sagittal view for QCing, because of the reason mentioned in neuropoly/idea-projects#15 (comment): it is possible that the centerline is partly on another sagittal slice, which makes the comparison meaningless. My suggestion:
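
To illustrate the point about the sagittal view (an illustration of the problem only, not necessarily the suggestion referred to above): because the cord can drift across sagittal slices, a single sagittal slice may intersect only part of the centerline. One more forgiving way to look at it is a maximum-intensity projection of the centerline masks across the right-left axis, e.g. (hypothetical file names; assumes the R-L axis is axis 0, which depends on the image orientation):

import numpy as np
import nibabel as nib

# Hypothetical file names; replace with the GT and predicted centerline masks being QCed.
gt = nib.load("sub-XXX_T2w_seg_centerline.nii.gz").get_fdata()
pred = nib.load("sub-XXX_T2w_pred_centerline.nii.gz").get_fdata()

# Project across the R-L axis (axis 0 here) so that centerline voxels lying on
# different sagittal slices all end up in the same 2D image, instead of being
# missed by a single-slice sagittal view.
gt_proj = gt.max(axis=0)
pred_proj = pred.max(axis=0)

overlap = np.logical_and(gt_proj > 0, pred_proj > 0).sum()
print(f"Projected GT/prediction overlap: {overlap} pixels")
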
