Replies: 1 comment
Hi @KennyTC, the calculations are tied together as follows (line references are to commit 5093f69):

- All visibility metrics (vis): lines 549 to 582
- The distances (dist.dists): lines 472 to 492
- The distance errors (dist.avg, dist.pXX): lines 495 to 520
- All Percentage of Correct Keypoints metrics (pck): lines 523 to 546
- Mean Object Keypoint Similarity (oks.mOKS): lines 632 to 635, where positive_pairs is calculated in lines 340 to 386
- All Visual Object Classes metrics (_voc): lines 389 to 469

The model evaluation notebook provides further info on which metrics might be helpful to look at. Hope this helps! Thanks!
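To make the groupings above concrete, here is a rough sketch of how the visibility, distance, and PCK metrics are typically computed. This is illustrative only, not the repository's actual implementation; the function names and the default PCK thresholds are assumptions:

```python
import numpy as np

def visibility_metrics(gt_visible, pred_visible):
    """Confusion-matrix style visibility metrics (vis.tp, vis.fp, ...)."""
    gt = np.asarray(gt_visible, dtype=bool)
    pred = np.asarray(pred_visible, dtype=bool)
    tp = np.sum(gt & pred)    # visible and predicted visible
    fp = np.sum(~gt & pred)   # not visible but predicted visible
    tn = np.sum(~gt & ~pred)  # not visible and predicted not visible
    fn = np.sum(gt & ~pred)   # visible but predicted not visible
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return dict(tp=tp, fp=fp, tn=tn, fn=fn, precision=precision, recall=recall)

def distance_metrics(gt_points, pred_points):
    """Per-keypoint Euclidean errors (dist.dists) and summaries (dist.avg, dist.pXX)."""
    dists = np.linalg.norm(np.asarray(pred_points, dtype=float)
                           - np.asarray(gt_points, dtype=float), axis=-1)
    return {
        "dists": dists,
        "avg": np.nanmean(dists),
        **{f"p{q}": np.nanpercentile(dists, q) for q in (50, 75, 90, 95, 99)},
    }

def pck(dists, thresholds=np.linspace(1, 10, 10)):
    """Fraction of keypoints within each pixel threshold; mPCK averages over thresholds."""
    dists = np.asarray(dists, dtype=float)
    pcks = np.array([np.nanmean(dists <= t) for t in thresholds])
    return {"thresholds": thresholds, "pcks": pcks, "mPCK": pcks.mean()}
```

The key design point: dist.* metrics summarize raw pixel errors, while pck.* turns those same errors into hit rates under distance thresholds, which is easier to compare across models.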
After training models, I see a lot of evaluation metrics, for example:
```
vis.tp
vis.fp
vis.tn
vis.fn
vis.precision
vis.recall
dist.dists
dist.avg
dist.p50
dist.p75
dist.p90
dist.p95
dist.p99
pck.thresholds
pck.pcks
pck.mPCK_parts
pck.mPCK
oks.mOKS
oks_voc.match_score_thresholds
oks_voc.recall_thresholds
oks_voc.match_scores
oks_voc.precisions
oks_voc.recalls
oks_voc.AP
oks_voc.AR
oks_voc.mAP
oks_voc.mAR
pck_voc.match_score_thresholds
pck_voc.recall_thresholds
pck_voc.match_scores
pck_voc.precisions
pck_voc.recalls
pck_voc.AP
pck_voc.AR
pck_voc.mAP
```
For the distance-related metrics (dist.dists, dist.avg, dist.p50, dist.p75, ...), I understand how they are calculated. But for the other metrics, I could not find how they are computed. How can I get an overview of how these metrics are calculated?
Also, which metrics would you recommend focusing on when comparing these models?
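For background on one of the less obvious metrics: oks.mOKS is based on the COCO-style Object Keypoint Similarity. A minimal sketch, assuming a single uniform sigma rather than the per-keypoint sigmas real implementations use, and with `scale` standing in for the object's area:

```python
import numpy as np

def oks(gt_points, pred_points, scale, sigma=0.025):
    """COCO-style Object Keypoint Similarity (illustrative sketch).

    gt_points, pred_points: (n_keypoints, 2) coordinate arrays.
    scale: object area proxy used to normalize errors.
    sigma: assumed uniform per-keypoint falloff constant.
    """
    d2 = np.sum((np.asarray(pred_points, dtype=float)
                 - np.asarray(gt_points, dtype=float)) ** 2, axis=-1)
    # Gaussian falloff of the squared error, normalized by object scale,
    # following the COCO convention k_i = 2 * sigma_i.
    ks = np.exp(-d2 / (2 * scale * (2 * sigma) ** 2))
    return float(np.nanmean(ks))
```

An OKS of 1.0 means perfect keypoint localization; values decay toward 0 as errors grow relative to the object's size, so mOKS (the mean over instances) is a scale-aware counterpart to the raw pixel distances.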