-
I want to load a nii.gz file from S3 and apply transforms to that image. The code below works fine when I run it on SageMaker, as I don't need to download the image from S3.
The code below is the same as above, with extra lines of code to download the object from S3.
Please suggest different ways to load the images and apply the transforms.
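One way that avoids writing anything to disk (a minimal sketch of my own, not an official MONAI recipe; the bucket and key names are placeholders): read the object into memory with boto3, decompress it, and build the nibabel image directly from the bytes.

import gzip
import io

import boto3
import nibabel as nib

# Fetch the object into memory (bucket/key are hypothetical placeholders)
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="path/to/image.nii.gz")

# Decompress the .nii.gz payload and wrap it in a file-like object
raw = gzip.decompress(obj["Body"].read())
fh = nib.FileHolder(fileobj=io.BytesIO(raw))

# Build a Nifti1Image directly from the in-memory bytes
img = nib.Nifti1Image.from_file_map({"header": fh, "image": fh})
print(img.get_fdata().shape)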
-
Hi @j-sieger, Thanks for your interest and feedback! Thanks.
-
@Nic-Ma I have tried Nibabel, but when I then apply a transform I get the error below.
So what is the expected format of the data input to a transform when the data is loaded with nibabel?
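For reference, a sketch of my own (not from the reply above): MONAI's dictionary-style transforms expect a dict mapping each key to an array, so a nibabel image can be passed in via get_fdata(); the channel dimension then has to be added explicitly, e.g. with AddChanneld.

import nibabel as nib
from monai.transforms import AddChanneld

# Hypothetical local file; an S3 object loaded in memory works the same way
img = nib.load("image.nii.gz")

# Dictionary transforms take {key: array}; get_fdata() returns (H, W, D)
data = {"image": img.get_fdata()}
data = AddChanneld(keys=["image"])(data)  # now channel-first: (1, H, W, D)
print(data["image"].shape)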
-
Hi @j-sieger, For … Thanks.
-
Hi @Nic-Ma, but the real problem is loading the S3 object and applying the transformations. I will look into ways to do this; please also suggest one if you know of any. Without this I won't be able to deploy my MONAI-trained model on AWS.
-
duplicate of Project-MONAI/MONAI#1656?
-
@Nic-Ma, @wyli
ERROR:
-
could you ensure that …
-
@wyli
print(img.get_fdata().shape) --> (512, 512, 31)  >> looks like the 3rd dimension is the channels
print(val_data['image'].shape) --> torch.Size([1, 21, 342, 208])  >> here the 2nd dimension represents the channel, as I am using …
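To make the two shapes concrete, here is a small sketch of my own: AsChannelFirstd treats the last axis as channels and moves it to the front, while AddChanneld prepends a new singleton channel axis, which is why the two printouts disagree about where the channel dimension sits.

import numpy as np
from monai.transforms import AddChanneld, AsChannelFirstd

data = {"image": np.zeros((512, 512, 31))}

# AsChannelFirstd: interprets the last axis (31) as channels, moves it first
print(AsChannelFirstd(keys=["image"])(data)["image"].shape)  # (31, 512, 512)

# AddChanneld: prepends a new singleton channel axis
print(AddChanneld(keys=["image"])(data)["image"].shape)  # (1, 512, 512, 31)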
-
I see, assuming this is about 3D image processing... given that you have print(img.get_fdata().shape) --> (512, 512, 31), the script would be:

from monai.inferers import sliding_window_inference
from monai.transforms import (
    AddChanneld, Compose, CropForegroundd, EnsureTyped,
    Orientationd, ScaleIntensityRanged, Spacingd,
)

val_transforms = Compose(
[
# AsChannelFirstd(keys=["image"]),
AddChanneld(keys=["image"]),
Spacingd(keys=["image"], pixdim=(
1.5, 1.5, 2.0), mode=("bilinear")),
Orientationd(keys=["image"], axcodes="RAS"),
ScaleIntensityRanged(
keys=["image"], a_min=-57, a_max=164,
b_min=0.0, b_max=1.0, clip=True,
),
CropForegroundd(keys=["image"], source_key="image"),
EnsureTyped(keys=["image"]),
]
)
val_data = val_transforms({"image": img.get_fdata()})
roi_size = (160, 160, 160)
sw_batch_size = 4
# sliding_window_inference expects a batched (N, C, H, W, D) input,
# so a batch dimension is added before inference
val_outputs = sliding_window_inference(
    val_data["image"].unsqueeze(0).to(device), roi_size, sw_batch_size, model
)
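Putting the pieces together (a minimal sketch of my own, reusing the hypothetical bucket/key from the earlier sketch and assuming val_transforms, model, device, roi_size, and sw_batch_size as defined above): the in-memory S3 loading can feed this pipeline directly, so the image never touches local disk.

import gzip
import io

import boto3
import nibabel as nib
import torch

from monai.inferers import sliding_window_inference

# Same in-memory loading pattern as above (placeholder bucket/key)
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="path/to/image.nii.gz")
fh = nib.FileHolder(fileobj=io.BytesIO(gzip.decompress(obj["Body"].read())))
img = nib.Nifti1Image.from_file_map({"header": fh, "image": fh})

# Apply the transform chain and run sliding-window inference as above
val_data = val_transforms({"image": img.get_fdata()})
with torch.no_grad():
    val_outputs = sliding_window_inference(
        val_data["image"].unsqueeze(0).to(device), roi_size, sw_batch_size, model
    )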