
calculate evaluation metrics such as Dice, IOU, and HD #16

Open
hczyni opened this issue Sep 24, 2024 · 5 comments

Comments

@hczyni

hczyni commented Sep 24, 2024

Dear AI Engineer, how can I calculate evaluation metrics such as Dice, IoU, and HD? Could you provide me with some code? I found some code online on this topic, but I ran into issues when running it. I would greatly appreciate it if you could share your code for these metrics! Thank you very much!

@xiongxyowo
Collaborator

For Dice and IoU, we provide a demo code below:

import os
import torch
from PIL import Image
from torchvision import transforms

def evaluate(pred, gt):
    # Some models return a list/tuple of multi-scale outputs; use the first one.
    if isinstance(pred, (list, tuple)):
        pred = pred[0]

    # Binarize prediction and ground truth at 0.5.
    pred_binary = (pred >= 0.5).float()
    pred_binary_inverse = (pred_binary == 0).float()

    gt_binary = (gt >= 0.5).float()
    gt_binary_inverse = (gt_binary == 0).float()

    # Pixel-wise confusion-matrix counts.
    TP = pred_binary.mul(gt_binary).sum()
    FP = pred_binary.mul(gt_binary_inverse).sum()
    TN = pred_binary_inverse.mul(gt_binary_inverse).sum()
    FN = pred_binary_inverse.mul(gt_binary).sum()

    # Avoid division by zero when prediction and ground truth do not overlap.
    if TP.item() == 0:
        TP = torch.tensor([1.0])

    Dice = 2 * TP / (2 * TP + FP + FN)
    IoU = TP / (TP + FP + FN)
    Sen = TP / (TP + FN)          # sensitivity (recall)
    Spe = TN / (TN + FP)          # specificity
    Acc = (TP + TN) / (TP + FP + TN + FN)
    return Dice, IoU, Sen, Spe, Acc


class Metrics(object):
    def __init__(self, metrics_list):
        self.metrics = {metric: 0 for metric in metrics_list}

    def update(self, **kwargs):
        for k, v in kwargs.items():
            assert k in self.metrics, "The key {} is not in metrics".format(k)
            if isinstance(v, torch.Tensor):
                v = v.item()
            self.metrics[k] += v

    def mean(self, total):
        return {k: v / total for k, v in self.metrics.items()}


class EvalDataset:
    def __init__(self, pred_root, gt_root):
        # os.path.join works whether or not the roots end with a slash.
        self.preds = sorted(os.path.join(pred_root, f) for f in os.listdir(pred_root)
                            if f.endswith(('.png', '.jpg')))
        self.gts = sorted(os.path.join(gt_root, f) for f in os.listdir(gt_root)
                          if f.endswith('.png'))
        self.size = len(self.preds)
        self.transform = transforms.ToTensor()
        self.index = 0

    def load_data(self):
        pred = self.transform(self.binary_loader(self.preds[self.index]))
        gt = self.transform(self.binary_loader(self.gts[self.index]))
        self.index += 1
        return pred, gt

    def binary_loader(self, path):
        # Load as a single-channel grayscale image.
        with open(path, 'rb') as f:
            img = Image.open(f)
            return img.convert('L')


pred_root = ""
gt_root = ""
eval_loader = EvalDataset(pred_root, gt_root)
metrics = Metrics(['Dice', 'IoU', 'Sen', 'Spe', 'Acc'])

for i in range(eval_loader.size):
    with torch.no_grad():
        pred, gt = eval_loader.load_data()
        _Dice, _IoU, _Sen, _Spe, _Acc = evaluate(pred, gt)
        metrics.update(Dice=_Dice, IoU=_IoU, Sen=_Sen,
                       Spe=_Spe, Acc=_Acc)

metrics_result = metrics.mean(eval_loader.size)
print("Test Result:")
print('Dice: %.4f\nIoU: %.4f\nSen: %.4f\nSpe: %.4f\nAcc: %.4f'
      % (metrics_result['Dice'], metrics_result['IoU'], metrics_result['Sen'],
         metrics_result['Spe'], metrics_result['Acc']))

For HD, please refer to py-hausdorff.
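If py-hausdorff is not an option, here is a minimal sketch of the symmetric Hausdorff distance between two binary masks using scipy's `directed_hausdorff` instead. The function name `hausdorff` and the mask arguments are illustrative, not part of this repository; the masks are assumed to be thresholded the same way as in the Dice/IoU code above.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(pred_mask, gt_mask):
    # Coordinates of foreground pixels, shape (N, 2).
    pred_pts = np.argwhere(pred_mask > 0.5)
    gt_pts = np.argwhere(gt_mask > 0.5)
    if len(pred_pts) == 0 or len(gt_pts) == 0:
        return float('inf')  # undefined when either mask is empty
    # Symmetric HD is the max of the two directed distances.
    return max(directed_hausdorff(pred_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, pred_pts)[0])
```

For the commonly reported HD95, you would instead take the 95th percentile of the pointwise surface distances, which `directed_hausdorff` does not expose; libraries such as MedPy provide that variant.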

@hczyni
Author

hczyni commented Sep 28, 2024


Running this project was successful!!! The pancreatic tumor Dice is 93%!!! I love this project and I love you!

@Asagami-Fujino

I'm trying to use SAM2-UNet for prostate tumor segmentation, but the results are bad, so I'm asking for some information about your data. Is it MR? Is the tumor large and easy to segment? Have you changed anything, such as the loss, in your project?

@hczyni
Author

hczyni commented Oct 14, 2024

I'm trying to use SAM2-UNet for prostate tumor segmentation, but the results are bad, so I'm asking for some information about your data. Is it MR? Is the tumor large and easy to segment? Have you changed anything, such as the loss, in your project?

It's the ROI of the pancreas! I did not change the code!

@hczyni
Author

hczyni commented Oct 14, 2024


My dataset's modality is CT. The pancreatic tumor occupies a very small fraction of the whole CT image, so I cropped before feeding it into the network for training: I took the largest connected component of the pancreas-plus-tumor region in the full CT image and cropped around it. I also tried replacing the structure loss in the source code with BCE loss, but it raised an error; only after applying a sigmoid to pre0, pre1, and pre2 before feeding them into the BCE loss did it run correctly.
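A minimal sketch of the fix described above: `F.binary_cross_entropy` expects probabilities in [0, 1], so raw logits (here a stand-in tensor named `logits`, not a variable from this repository) must pass through a sigmoid first. Equivalently, `binary_cross_entropy_with_logits` fuses the sigmoid and is the numerically safer choice.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 1, 8, 8)                        # stand-in for a raw model output
target = torch.randint(0, 2, (2, 1, 8, 8)).float()      # binary ground-truth mask

# BCE on probabilities: sigmoid must be applied first, otherwise values
# outside [0, 1] make binary_cross_entropy raise an error.
loss_a = F.binary_cross_entropy(torch.sigmoid(logits), target)

# Fused variant on raw logits; same value, better numerical stability.
loss_b = F.binary_cross_entropy_with_logits(logits, target)
```

In a multi-output setup like the one described, the same sigmoid (or the fused variant) would be applied to each of the three predictions before summing the losses.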
