calculate evaluation metrics such as Dice, IOU, and HD #16
For Dice and IoU, we provide a demo code below:

```python
import os
import torch
from PIL import Image
from torchvision import transforms

def evaluate(pred, gt):
    # Some models return a list/tuple of multi-scale outputs; use the first.
    if isinstance(pred, (list, tuple)):
        pred = pred[0]
    pred_binary = (pred >= 0.5).float()
    pred_binary_inverse = (pred_binary == 0).float()
    gt_binary = (gt >= 0.5).float()
    gt_binary_inverse = (gt_binary == 0).float()
    TP = pred_binary.mul(gt_binary).sum()
    FP = pred_binary.mul(gt_binary_inverse).sum()
    TN = pred_binary_inverse.mul(gt_binary_inverse).sum()
    FN = pred_binary_inverse.mul(gt_binary).sum()
    if TP.item() == 0:
        TP = torch.Tensor([1])  # avoid division by zero when there is no overlap
    Dice = 2 * TP / (2 * TP + FP + FN)
    IoU = TP / (TP + FP + FN)
    Sen = TP / (TP + FN)   # sensitivity / recall
    Spe = TN / (TN + FP)   # specificity
    Acc = (TP + TN) / (TP + FP + TN + FN)
    return Dice, IoU, Sen, Spe, Acc

class Metrics(object):
    def __init__(self, metrics_list):
        self.metrics = {metric: 0 for metric in metrics_list}

    def update(self, **kwargs):
        for k, v in kwargs.items():
            assert k in self.metrics, "The key {} is not in metrics".format(k)
            if isinstance(v, torch.Tensor):
                v = v.item()
            self.metrics[k] += v

    def mean(self, total):
        return {k: v / total for k, v in self.metrics.items()}

class EvalDataset:
    def __init__(self, pred_root, gt_root):
        self.preds = sorted(os.path.join(pred_root, f) for f in os.listdir(pred_root)
                            if f.endswith(('.png', '.jpg')))
        self.gts = sorted(os.path.join(gt_root, f) for f in os.listdir(gt_root)
                          if f.endswith('.png'))
        self.size = len(self.preds)
        self.transform = transforms.ToTensor()
        self.index = 0

    def load_data(self):
        pred = self.transform(self.binary_loader(self.preds[self.index]))
        gt = self.transform(self.binary_loader(self.gts[self.index]))
        self.index += 1
        return pred, gt

    def binary_loader(self, path):
        with open(path, 'rb') as f:
            img = Image.open(f)
            return img.convert('L')

pred_root = ""  # directory containing the predicted masks
gt_root = ""    # directory containing the ground-truth masks
eval_loader = EvalDataset(pred_root, gt_root)
metrics = Metrics(['Dice', 'IoU', 'Sen', 'Spe', 'Acc'])
for i in range(eval_loader.size):
    print(i)
    with torch.no_grad():
        pred, gt = eval_loader.load_data()
        _Dice, _IoU, _Sen, _Spe, _Acc = evaluate(pred, gt)
        metrics.update(Dice=_Dice, IoU=_IoU, Sen=_Sen, Spe=_Spe, Acc=_Acc)
metrics_result = metrics.mean(eval_loader.size)
print("Test Result:")
print('Dice: %.4f\nIoU: %.4f\nSen: %.4f\nSpe: %.4f\nAcc: %.4f'
      % (metrics_result['Dice'], metrics_result['IoU'], metrics_result['Sen'],
         metrics_result['Spe'], metrics_result['Acc']))
```

For HD, please refer to py-hausdorff.
This project runs successfully! The pancreatic tumor Dice is 93%! I love this project, and I love you!
I'm trying to use SAM2-UNet for prostate tumor segmentation, but the results are poor, so I'd like some information about your data. Is it MR? Is the tumor large and easy to segment? Did you change anything in this project, such as the loss?
I cropped to the ROI of the pancreas, and I did not change this code!
The modality of my dataset is CT. The pancreatic tumor occupies only a very small fraction of the whole CT image, so before feeding the images into the network I cropped them: I took the largest connected component of the pancreas-plus-tumor region in the full CT image and cropped around it. I also tried replacing the structure loss in the source code with BCE loss, but it raised an error; it only ran correctly after applying a sigmoid to pre0, pre1, and pre2 before passing them into the BCE loss.
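Regarding the BCE error mentioned above: PyTorch's `F.binary_cross_entropy` expects probabilities in [0, 1], so raw logits must pass through a sigmoid first, while `F.binary_cross_entropy_with_logits` fuses the sigmoid internally and is numerically more stable. A minimal sketch (the tensor shapes here are illustrative, standing in for one of the model's multi-scale outputs):

```python
import torch
import torch.nn.functional as F

# Dummy logits standing in for a model output (e.g. pre0) and a binary GT mask.
logits = torch.randn(2, 1, 32, 32)
gt = torch.randint(0, 2, (2, 1, 32, 32)).float()

# Option 1: apply sigmoid first, then plain BCE (what the comment above describes).
loss_a = F.binary_cross_entropy(torch.sigmoid(logits), gt)

# Option 2: the fused form that takes raw logits directly.
loss_b = F.binary_cross_entropy_with_logits(logits, gt)

# Both compute the same quantity, up to floating-point error.
assert torch.allclose(loss_a, loss_b, atol=1e-5)
```

Without the sigmoid, option 1 receives values outside [0, 1] and raises an error, which matches the behavior described above.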
Dear AI Engineer, how can I calculate evaluation metrics such as Dice, IoU, and HD? Could you provide me with some code? I found some code online on this topic, but I encountered issues when running it. I would greatly appreciate it if you could share your code for these metrics! Thank you very much!