strange and bad results from training 3d segmentation model #1448
Hi, I followed the tutorial spleen_segmentation_3d.ipynb and tried to train the model with my own data. In the binary label map, 1 = brain and 0 = non-brain areas/background. The images and label maps have only one channel, with dimensions 256x256x16. In some slices, the brain area is relatively small compared to the whole image. I got very strange results and am trying to figure out why, so I have a couple of questions. If anyone can help or give me some hints, it would be highly appreciated!
The rest of the code is pretty much the same as in the tutorial, so I only attached the parts where I made changes. I set
The tutorial also models a binary segmentation, so why can they use
Hi @esther1262, Thanks for opening this discussion.
That's correct. That is specified by the arguments you've used in the transforms:
I'd recommend keeping the number of output channels at 2: one channel for the background and the other for the foreground.
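As an aside, here is a small numpy sketch (with made-up logit values, purely for illustration) of why a 2-channel output still produces a binary mask: taking the argmax over the channel axis collapses the background/foreground channels back into the same {0, 1} values as the label map.

```python
import numpy as np

# Toy 2-channel network output for a single 2x2 slice:
# channel 0 = background score, channel 1 = foreground (brain) score.
logits = np.array([
    [[5.0, -1.0],
     [0.5,  3.0]],   # background channel
    [[-2.0, 4.0],
     [0.6,  1.0]],   # foreground channel
])  # shape (channels, H, W)

# Argmax over the channel axis picks the higher-scoring class per
# voxel, yielding a binary mask comparable to the {0, 1} label map.
pred = logits.argmax(axis=0)
# pred == [[0, 1], [1, 0]]
```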
The model may need more training; however, it is difficult to assess without seeing the results/images. How many images are you using for training? Are you using MR or CT images for this task? If you are using MR images, you may want to consider a different type of intensity normalization. Hope that helps,
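For MR, one common choice is a per-volume z-score over the nonzero (foreground) voxels, since MR intensities are not calibrated across scanners the way CT Hounsfield units are. A minimal numpy sketch of that idea (the helper name is my own; MONAI's `NormalizeIntensityd(nonzero=True)` performs the equivalent operation on dictionary data):

```python
import numpy as np

def nonzero_zscore(img):
    """Z-score an MR volume using only its nonzero voxels.

    A fixed intensity window (as often used for CT) is a poor fit
    for uncalibrated MR intensities; per-volume standardization is
    a common alternative. (Hypothetical helper for illustration.)
    """
    out = img.astype(np.float64).copy()
    mask = out != 0
    mean = out[mask].mean()
    std = out[mask].std()
    out[mask] = (out[mask] - mean) / std
    return out

vol = np.array([[0.0, 120.0],
                [80.0, 160.0]])
norm = nonzero_zscore(vol)
# Background voxels stay 0; foreground voxels now have mean ~0.
```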
Hi @esther1262,
Thanks for the details.
You are trying to scale the intensity of both the image and the label (the arguments keys=["image", "label"] in the RandScaleIntensityd transform). The label only has values of zeros and ones, which is why you get the error. When scaling intensities, you don't usually apply those transforms to the label.
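To make the fix concrete, here is a simplified stand-in for a MONAI dictionary transform (plain Python, not MONAI's actual implementation): with keys=("image",), only the image values are scaled, while the binary label passes through untouched.

```python
import random

def rand_scale_intensity_d(data, keys=("image",), factors=0.1):
    # Simplified sketch of a dictionary-style intensity transform:
    # multiply the values under the listed keys by (1 + f), where
    # f ~ Uniform(-factors, factors). Keys not listed are copied as-is.
    f = random.uniform(-factors, factors)
    out = dict(data)
    for k in keys:
        out[k] = [v * (1.0 + f) for v in data[k]]
    return out

sample = {"image": [100.0, 250.0], "label": [0.0, 1.0]}
aug = rand_scale_intensity_d(sample, keys=("image",))
# aug["label"] is still exactly [0.0, 1.0]; only the image changed.
```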
Hope it makes sense,