Average Precision Calculation does not calculate area under curve #46
In the metric code, the AP value seems to be calculated incorrectly.
https://github.com/valeoai/RADIal/blob/main/FFTRadNet/utils/metrics.py#L199
AP should be calculated as the area under the precision-recall curve instead of just taking the mean over precisions:
https://en.wikipedia.org/w/index.php?title=Information_retrieval&oldid=793358396#Average_precision

Comments
Hello. The common practice in object detection (PASCAL VOC, COCO, and so on) is to define average precision either over a set of IoU thresholds or over a single one, following the same equation as explained on the Wikipedia page. In our case, we considered only a single IoU threshold (0.5) and computed the AP as the average of the precisions of the kept predictions. There is no error; this is a basic metric in the PASCAL VOC benchmark. Feel free to propose your own evaluation metric if you prefer. For further details, please have a look at this link.
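Roughly, the quantity being computed is of the following form (a sketch in my own notation, not taken from the code; T stands for the set of confidence thresholds the evaluation loops over):

$$\mathrm{AP}_{\text{implemented}} \approx \frac{1}{|T|} \sum_{t \in T} \mathrm{Precision}_{\mathrm{IoU} \ge 0.5}(t)$$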
Hi, if you want to take the mean, you first need to calculate interpolated precisions over equally spaced recall levels; I think in your code the interpolated precision is not calculated (the second equation on the Wikipedia page). For an example, check out lines 135-138 in the nuScenes devkit, which interpolate the precision at equally spaced recall levels. I don't want to suggest an additional evaluation metric, only to correct the existing one. Best regards!
Edit: To make it clearer: you are currently only "integrating" over a limited recall range.
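For illustration, here is a minimal sketch of that kind of interpolation (my own code and naming, not the repo's or the nuScenes devkit's; it assumes recall/precision pairs sampled along the PR curve with recall in increasing order):

```python
import numpy as np

def ap_with_recall_interpolation(recalls, precisions, n_levels=101):
    """Average precision from precision interpolated at equally
    spaced recall levels (in the spirit of the nuScenes devkit)."""
    recall_levels = np.linspace(0.0, 1.0, n_levels)
    # Precision is taken as 0 beyond the highest achieved recall.
    interpolated = np.interp(recall_levels, recalls, precisions, right=0.0)
    return float(interpolated.mean())
```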
Ok, I see. (RADIal/FFTRadNet/utils/metrics.py, line 113 at commit ce58c8d)
The current thresholds are [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]. The loop should be for threshold in np.arange(0, 1.1, 0.1): so that the thresholds 0.0 and 1.0 are also included. I did not implement this code, so I don't know why this choice was made, but it looks like we're missing two threshold values. @jrebut, any idea? Thanks for pointing this out.
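Just to spell out what the proposed range produces (a quick check assuming NumPy; the linspace variant is an alternative I'm adding, not something from the repo):

```python
import numpy as np

# Proposed grid: includes the endpoints 0.0 and 1.0 (11 thresholds in total)
thresholds = np.arange(0.0, 1.1, 0.1)
print(np.round(thresholds, 1))  # 0.0, 0.1, ..., 1.0

# A float-step-safe way to obtain the same 11 thresholds
thresholds = np.linspace(0.0, 1.0, 11)
print(thresholds)
```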
I expect that only adding more confidence thresholds will still not sample the recall-precision curve correctly. The loop should be changed so that the predictions are sorted by confidence, and the area under the resulting precision-recall curve should be computed instead of the plain mean over precisions, roughly along the following lines.
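As a concrete illustration, here is a minimal, self-contained sketch (my own code and naming, not the repo's; it assumes per-detection confidence scores, a boolean true-positive flag per detection at a fixed IoU threshold of 0.5, and the total number of ground-truth objects):

```python
import numpy as np

def ap_area_under_curve(confidences, is_true_positive, num_ground_truth):
    """AP as the area under the precision-recall curve.

    confidences:      per-detection confidence scores
    is_true_positive: per-detection flag, True if matched at IoU >= 0.5
    num_ground_truth: total number of ground-truth objects
    """
    order = np.argsort(-np.asarray(confidences, dtype=float))  # sort by confidence, descending
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp

    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(fp)
    recall = cum_tp / max(num_ground_truth, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-12)

    # Prepend the (recall=0, precision=1) start point, enforce a
    # monotonically non-increasing precision envelope, then integrate
    # precision over recall.
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([1.0], precision))
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    return float(np.sum(np.diff(recall) * precision[1:]))
```

The monotone-envelope step mirrors the usual VOC/COCO convention of making precision non-increasing before integrating; without it the integral still works but is noisier.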
Thanks for your valuable comment; it is appreciated.
You are welcome. I think the issue is currently set to closed, so it might be good to reopen it.