Replies: 2 comments 1 reply
-
Hi @songeeu, thanks for your experiments and feedback. This is related to a larger ongoing topic about updating metadata, and we are working on it. Thanks for your patience.
-
Could you give more details about how you visualized them? Did you display the MONAI-generated segmentation together with the original image? Different visualization tools may use the qform/sform differently.
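One way to compare them, assuming both the image and the saved prediction are NIfTI files, is to dump the qform/sform codes and matrices with nibabel. A minimal sketch (the file paths are placeholders):

```python
import nibabel as nib

# Placeholder paths: original image and the segmentation saved by SaveImaged.
img = nib.load("image.nii.gz")
seg = nib.load("out/image_seg.nii.gz")

for name, vol in [("image", img), ("segmentation", seg)]:
    qform, qcode = vol.header.get_qform(coded=True)  # qform matrix and its code
    sform, scode = vol.header.get_sform(coded=True)  # sform matrix and its code
    print(name)
    print("  affine:\n", vol.affine)
    print("  qform code:", qcode, "\n", qform)
    print("  sform code:", scode, "\n", sform)
```

If the two files report different qform/sform codes or matrices, that would explain why one viewer shows them misaligned while another does not.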
-
Hi! I have been following the 3D U-Net inference tutorial to create an inference pipeline. While the inference result looks fine, the mask is not aligned with the image. It seems like the post transforms are not copying over the entire meta_data of the image.
(Screenshots: meta_data of the image vs. meta_data of the label.)
Since the original affine is there, Invertd seems to be working at least partially (the affine was copied over). Is there a way to copy over the entire meta_data (including qoffset, which might be what is causing the misalignment)? Thanks!
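For reference, the kind of workaround I have in mind would look roughly like this (a rough, untested sketch; `data` is assumed to be one decollated dictionary sample from the pipeline):

```python
# Untested sketch: before saving, fill in any header fields that are missing
# from the prediction's meta dict using the original image's meta dict.
for key, value in data["img_meta_dict"].items():
    data["pred_meta_dict"].setdefault(key, value)
```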
Here are the post transforms I used:
```python
from monai.transforms import (
    Activationsd,
    AsDiscreted,
    Compose,
    EnsureTyped,
    Invertd,
    SaveImaged,
)

post_transforms = Compose([
    EnsureTyped(keys="pred"),
    Activationsd(keys="pred", softmax=True),
    Invertd(
        keys="pred",                     # invert the `pred` data field (multiple fields are supported)
        transform=test_transforms,
        orig_keys="img",                 # get the previously applied pre_transforms info from the `img` field,
                                         # then invert `pred` based on that info; the same info can be reused
                                         # for multiple fields, and different orig_keys per field are supported
        meta_keys="pred_meta_dict",      # key field to save the inverted meta data, one item per entry in `keys`
        orig_meta_keys="img_meta_dict",  # get the meta data from the `img_meta_dict` field when inverting,
                                         # e.g. the `affine` may be needed to invert the `Spacingd` transform;
                                         # multiple fields can share the same meta data for inverting
        meta_key_postfix="meta_dict",    # if `meta_keys=None`, use "{keys}_{meta_key_postfix}" as the meta key;
                                         # if `orig_meta_keys=None`, use "{orig_keys}_{meta_key_postfix}";
                                         # otherwise this arg is not needed during inverting
        nearest_interp=False,            # don't switch the interpolation mode to "nearest" when inverting,
                                         # to keep a smooth output before `AsDiscreted` is applied
        to_tensor=True,                  # convert to a PyTorch Tensor after inverting
    ),
    AsDiscreted(keys="pred", argmax=True),
    SaveImaged(keys="pred", meta_keys="pred_meta_dict", output_dir="./out", output_postfix="seg", resample=False),
])
```
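For completeness, the post transforms are applied per sample after inference, roughly as in the tutorial (a sketch only; `model`, `test_loader`, `test_transforms`, and the device setup come from the rest of the pipeline):

```python
import torch
from monai.data import decollate_batch
from monai.inferers import sliding_window_inference

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
roi_size = (96, 96, 96)  # example patch size; use whatever the pre-transforms expect

model.eval()
with torch.no_grad():
    for batch in test_loader:
        images = batch["img"].to(device)
        # sliding-window inference writes the raw prediction into the `pred` field
        batch["pred"] = sliding_window_inference(images, roi_size, 4, model)
        # decollate into per-sample dicts, then invert / threshold / save each one
        outputs = [post_transforms(sample) for sample in decollate_batch(batch)]
```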