Fuzzy segmentation issue #45

Open
thatsvenyouknow1 opened this issue Oct 31, 2024 · 2 comments

Comments

@thatsvenyouknow1

Hey,

I have played around with the Vista3D model as I want to pseudolabel a few CT images from the LIDC dataset for MAISI. Unfortunately, I am encountering a problem where the segmentations look quite fuzzy.

For reproduction:
I am running inference as explained in the README via
export CUDA_VISIBLE_DEVICES=0; python -m scripts.infer --config_file 'configs/infer.yaml' - infer_everything --image_file '<filename>.nii.gz'

And this is the minimal code I use to get the LIDC sample:

import os
import pylidc as pl
import nibabel as nib
import numpy as np

os.environ['PYLIDC_PATH'] = '/path/to/LIDC-IDRI'
patient_id = 'LIDC-IDRI-0001'

#Query for the scan
scan = pl.query(pl.Scan).filter(pl.Scan.patient_id == patient_id).first()

if not scan:
    print(f"Scan with patient ID {patient_id} not found.")
    exit()

#Load scan volume
volume = scan.to_volume() #Gives volume in Hounsfield units
print(f"Volume shape: {volume.shape}") #(512, 512, 133)

#Clip volume (align with a_max and a_min in infer.yaml)
a_max=1053.678477684517
a_min=-963.8247715525971
volume = np.clip(volume, a_min=a_min, a_max=a_max)

#Get voxel spacing
voxel_dims = scan.spacings

#Create affine transformation matrix
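# (note: a purely diagonal affine encodes voxel spacing only; it carries no orientation/origin info)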
affine = np.diag(np.append(voxel_dims, 1))

#Create and save the NIfTI image
nifti_img = nib.Nifti1Image(volume, affine)
output_filename = f"/tmp/{patient_id}.nii.gz"
nib.save(nifti_img, output_filename)
print(f"NIfTI file saved to {output_filename}")

I have also tried varying the transforms and patch size in infer.yaml without being able to improve the result by much. I would appreciate any hint as to what might be the problem (e.g. data input format, config settings, ...).

Thanks in advance!
[Image: ct_scan_and_seg]

@heyufan1995
Member

Hi @thatsvenyouknow1
Thanks for testing vista3d out! I appreciate the feedback. I believe the issue is due to how you save the LIDC data to the NIfTI file. You should not perform that intensity clip yourself, because that's not how MONAI's ScaleIntensityRange transform (clip=True) works. I'm not familiar with pylidc, so I'm not sure whether the header part is correct.
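A minimal sketch of what clip=True does (the a_min/a_max values are the ones from infer.yaml quoted in this thread; the HU array is made up for illustration):

import numpy as np
from monai.transforms import ScaleIntensityRange

# out-of-range HU values are clipped by the transform itself when clip=True,
# so no pre-clipping of the saved volume is needed
hu = np.array([-2000.0, -963.82, 0.0, 1053.68, 3000.0])
scaler = ScaleIntensityRange(
    a_min=-963.8247715525971, a_max=1053.678477684517,
    b_min=0.0, b_max=1.0, clip=True,
)
print(scaler(hu))  # everything lands in [0, 1]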
I downloaded 'LIDC-IDRI-0001' from https://www.cancerimagingarchive.net/collection/lidc-idri/ and used the following command to convert it to NIfTI files (change the path to your downloaded DICOM folder):

dcm2niix -z y -o folder_to_dcm_files .

Then I used the MONAI bundle, which wraps this vista3d repo and is much more memory-efficient:

pip install monai==1.4.0;
python -m monai.bundle download "vista3d" --bundle_dir "bundles/";
cd bundles/vista3d

Then change the input_dict in configs/inference.json to "input_dict": "${'image': 'lidc.nii.gz'}". I ran the following command on an old 12 GB GPU and got the results below:

python -m monai.bundle run --config_file configs/inference.json

[Images: segmentation results from the bundle inference]

@thatsvenyouknow1
Author

thatsvenyouknow1 commented Nov 5, 2024

Hi @heyufan1995,
Thanks a lot for the quick reply. I applied the "pre-clipping" because the documentation for the ScaleIntensityRange transform specified:

  • a_min: intensity original range min (which is -963.82... in the inference json)
  • a_max: intensity original range max (which is 1053.67...)

With pre-clipping, the pylidc data is normalized to values between 0 and 1, whereas without clipping, the data ends up between -0.54 and 1.99. I thought the goal was probably to have something in the range [0, 1]?

However, thanks to your example, I have figured out that the actual problem was the affine, which appears to be used in the Orientation transform.
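For anyone hitting something similar, a quick sanity check is to compare the anatomical axis codes nibabel derives from each file's affine (the paths below are placeholders for the pylidc-saved file and the dcm2niix output):

import nibabel as nib

# print the axis codes the Orientation transform will infer from each affine
for path in ["/tmp/LIDC-IDRI-0001.nii.gz", "lidc.nii.gz"]:
    img = nib.load(path)
    print(path, nib.aff2axcodes(img.affine))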

Thanks again!
