
Image normalization #11

Closed
ieee8023 opened this issue Mar 13, 2024 · 8 comments

@ieee8023

I'm working on running the models and I'm not sure about the image normalization. I have a notebook here that loads the MaskedAutoencoderCNN with the densenet121_CXR_0.3M_mae.pth weights and reconstructs an image, but the image does not appear to be reconstructed correctly. I've tried many normalizations, but the area outside the lungs appears blurry while the lungs themselves are reconstructed OK. Any idea what is wrong?

https://github.com/mlmed/torchxrayvision/blob/c44435011d97288e4af7d458dab95dc7ed6e6790/scripts/medical_mae_example.ipynb

Here is the PR to integrate it into torchxrayvision:
mlmed/torchxrayvision#150

@lambert-x
Owner

The image normalization settings are here:

mean_dict = {'chexpert':      [0.485, 0.456, 0.406],
             'chestxray_nih': [0.5056, 0.5056, 0.5056],
             'mimic_cxr':     [0.485, 0.456, 0.406]}

std_dict = {'chexpert':      [0.229, 0.224, 0.225],
            'chestxray_nih': [0.252, 0.252, 0.252],
            'mimic_cxr':     [0.229, 0.224, 0.225]}

Could you please try them?

@ieee8023
Author

In that notebook I used

torchvision.transforms.Normalize([0.5056, 0.5056, 0.5056], [0.252, 0.252, 0.252])

@lambert-x
Owner

For plotting, I think you need to denormalize the output with the mean and std that were used.

@ieee8023
Author

matplotlib does its own linear normalization, so the images should look the same under any mean and std adjustment.

Do you think you can modify that notebook to get the normalization correct?
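The autoscaling claim can be checked directly with `matplotlib.colors.Normalize`, which is the default norm `imshow` applies to float input; a minimal sketch:

```python
import numpy as np
from matplotlib.colors import Normalize

img = np.random.rand(8, 8)
shifted = img * 0.252 + 0.5056  # the same image after an affine remap

# With no vmin/vmax given, Normalize autoscales to the data range,
# which is what imshow does by default for float arrays.
n1 = Normalize()(img)      # maps img.min() -> 0, img.max() -> 1
n2 = Normalize()(shifted)  # identical result after autoscaling
```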

@lambert-x
Owner

Please refer to this function (I used it for visualization before):

import numpy as np
import torch
import matplotlib.pyplot as plt

chestxray_mean = np.array([0.5056, 0.5056, 0.5056])
chestxray_std = np.array([0.252, 0.252, 0.252])

def show_image(image, title='', mean=chestxray_mean, std=chestxray_std):
    # image is [H, W, 3]
    assert image.shape[2] == 3
    # Denormalize, rescale to [0, 255], and clip before plotting.
    plt.imshow(torch.clip((image * std + mean) * 255, 0, 255).int())
    plt.title(title, fontsize=16)
    plt.axis('off')
    print(title, 'min:', image.min().item(), 'max:', image.max().item())
    print(title, 'mean:', image.mean().item(), 'std:', image.std().item())

@ieee8023
Author

What does the rest of your code look like? The output of the densenet121_CXR_0.3M_mae.pth model was NCHW, so I had to swap the axes. Are you using a different model?
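For comparison, converting an NCHW model output to the HWC layout that the plotting function above expects is a single permute; a minimal sketch with a dummy tensor (shapes assumed for illustration):

```python
import torch

# A dummy model output in NCHW layout (batch, channels, height, width).
out = torch.rand(1, 3, 224, 224)

# Drop the batch dim and permute to HWC for plt.imshow-style plotting.
img_hwc = out[0].permute(1, 2, 0)
```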

[Screenshot attached: 2024-03-13 at 17:10:36]

@ieee8023
Author

Also, with the ViT model, using unpatchify I'm getting patchy artifacts and blurry reconstructions.

https://github.com/mlmed/torchxrayvision/blob/4fa907a53b9eb3ce722f833f1409f6099c38a63f/scripts/medical_mae_vit_example.ipynb
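For reference, the standard MAE unpatchify step reshapes the (N, L, p²·C) token sequence back into an image; here is a minimal sketch following the reference MAE implementation (patch size and channel count are assumptions here). Patchy seams can also appear if a model trained with norm_pix_loss is visualized without undoing the per-patch normalization, which may be worth checking.

```python
import torch

def unpatchify(x, patch_size=16, channels=3):
    # x: (N, L, patch_size**2 * channels) -> (N, channels, H, W),
    # assuming a square grid of L = (H/p) * (W/p) patches.
    p = patch_size
    h = w = int(x.shape[1] ** 0.5)
    assert h * w == x.shape[1]
    x = x.reshape(x.shape[0], h, w, p, p, channels)
    x = torch.einsum('nhwpqc->nchpwq', x)
    return x.reshape(x.shape[0], channels, h * p, w * p)
```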

[Image attached: ViT reconstruction showing patch artifacts]

@lambert-x
Owner

Just uploaded a notebook for visualization. Please check https://github.com/lambert-x/medical_mae/blob/main/visualization.ipynb
