How to find recall for each class using MeanAveragePrecision #2821

Open
shanalikhan opened this issue Nov 4, 2024 · 4 comments
Assignees: SkafteNicki
Labels: question (Further information is requested)

Comments


shanalikhan commented Nov 4, 2024

🚀 Feature

How can I find the recall of class 0 and class 1 with the code below? Sorry, the documentation is not clear to me. I can set average="micro", but how do I get the overall precision and recall per class?

Motivation

import lightning as L
from torchvision import models
from torchmetrics.detection import MeanAveragePrecision


class CocoDNN(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
        self.metric = MeanAveragePrecision(
            iou_type="bbox",
            average="macro",
            class_metrics=True,
            iou_thresholds=[0.5, 0.75],
            extended_summary=True,
        )

    def training_step(self, batch, batch_idx):
        # ... training code omitted ...
        ...

    def validation_step(self, batch, batch_idx):
        imgs, annot = batch
        targets, preds = [], []
        for img_b, annot_b in zip(imgs, annot):
            if len(img_b) == 0:
                continue
            if len(annot_b) > 1:
                targets.extend(annot_b)
            else:
                targets.append(annot_b[0])

            # in eval mode the torchvision detection model returns predictions,
            # not a loss dict, despite the variable name
            loss_dict = self.model(img_b, annot_b)
            if len(loss_dict) > 1:
                preds.extend(loss_dict)
            else:
                preds.append(loss_dict[0])

        self.metric.update(preds, targets)
        map_results = self.metric.compute()
        print("RECALL")
        print(map_results["recall"])
        self.log("map_50", map_results["map_50"].float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
        self.log("map_75", map_results["map_75"].float().item(), on_step=True, on_epoch=True, prog_bar=True, logger=True)
        return map_results["map_75"]


shanalikhan added the enhancement (New feature or request) label on Nov 4, 2024

github-actions bot commented Nov 4, 2024

Hi! thanks for your contribution!, great first issue!

shanalikhan (Author) commented

I have used the following code

self.log('precision', map_results['precision'].mean().float().item(),on_step=True, on_epoch=True, prog_bar=True, logger=True)
self.log('recall', map_results['recall'].mean().float().item(),on_step=True, on_epoch=True, prog_bar=True, logger=True)

That is the overall score, and it comes out negative; how can it be negative? Also, since I am fine-tuning the model for two classes, I think a global mean is not suitable here and I should instead take the mean over the two classes.
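
A hedged sketch of what per-class slicing could look like, assuming the extended_summary tensors follow the pycocotools layout (recall shaped as IoU thresholds x classes x area ranges x max detections, and precision with an extra recall-threshold axis) and that -1 marks entries that could not be evaluated. The layout is an assumption to verify against the torchmetrics docs, and the snippet is meant to run inside the validation code above:

# Sketch only: the tensor layout is an assumption based on the pycocotools
# convention, not something confirmed in this thread.
map_results = self.metric.compute()      # metric from the LightningModule above
recall = map_results["recall"]           # assumed shape (T, K, A, M)
precision = map_results["precision"]     # assumed shape (T, R, K, A, M)

for class_idx in (0, 1):                 # the two classes in this binary setup
    r = recall[:, class_idx, 0, -1]      # area range "all", max detections = 100
    p = precision[:, :, class_idx, 0, -1]
    r = r[r > -1]                        # drop -1 sentinels (not evaluable)
    p = p[p > -1]
    print(f"class {class_idx}: recall={r.mean().item():.3f}, precision={p.mean().item():.3f}")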

SkafteNicki (Member) commented

@shanalikhan is it the mean average recall you are looking for, i.e. the MAR value per class?
I assume so, because that is one of the more commonly used metrics within detection tasks. If so, you just need to set class_metrics=True and then look at map_results["mar_100_per_class"], which is the mean average recall at 100 detections per image (the maximum number of detections under the default settings), reported per class. Assuming your classes are simply 0 and 1, then

map_results["mar_100_per_class"][0]  # mar value for class 0
map_results["mar_100_per_class"][1]  # mar value for class 1
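
For reference, a minimal self-contained sketch of this usage (the boxes, scores and labels are made-up toy values); the classes entry of the output tells which class id each index of the per-class tensors refers to:

from torch import tensor
from torchmetrics.detection import MeanAveragePrecision

metric = MeanAveragePrecision(iou_type="bbox", class_metrics=True)

# toy example: one image with one predicted box per class
preds = [{
    "boxes": tensor([[10.0, 10.0, 50.0, 50.0], [60.0, 60.0, 90.0, 90.0]]),
    "scores": tensor([0.9, 0.8]),
    "labels": tensor([0, 1]),
}]
targets = [{
    "boxes": tensor([[12.0, 12.0, 48.0, 48.0], [61.0, 59.0, 88.0, 92.0]]),
    "labels": tensor([0, 1]),
}]

metric.update(preds, targets)
results = metric.compute()

print(results["classes"])            # class ids in the order used by the per-class tensors
print(results["mar_100_per_class"])  # mean average recall @ 100 detections, per class
print(results["map_per_class"])      # mean average precision, per class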

SkafteNicki added the question (Further information is requested) label and removed the enhancement (New feature or request) label on Nov 11, 2024
SkafteNicki self-assigned this on Nov 11, 2024
shanalikhan (Author) commented

@SkafteNicki Thanks for sharing the details. One quick question:
Why are the map_* values sometimes negative? Is it really possible to have a negative mAP? For example:

[screenshot showing negative map_* values]
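
A possible explanation, offered only as an assumption: the COCO-style evaluation that torchmetrics wraps uses -1 as a sentinel for classes or settings that could not be evaluated (for example, a class with no ground-truth boxes), so averaging the raw tensors from compute() can drag the result below zero. A small sketch for excluding those sentinels before averaging, reusing map_results from the code above:

def safe_mean(values):
    # exclude -1 sentinel entries before averaging (assumed convention)
    valid = values[values > -1]
    return valid.mean() if valid.numel() > 0 else values.new_tensor(float("nan"))

print(safe_mean(map_results["mar_100_per_class"]))
print(safe_mean(map_results["map_per_class"]))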
