Investigate value differences between our ConfusionMatrix
and YOLOv8 implementation
#227
Replies: 10 comments
-
@SkalskiP I found that
-
Hi, @hardikdava 👋🏻! Yeah, I know the result matrix is transposed, but that's not the issue. :) But even if you consider that, there are still some differences. No worries, we will figure it out. 👍🏻
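For readers following along, here is a minimal sketch of what the transposition discussed above looks like. The counts are made up, and the axis convention is an assumption: supervision is taken to store rows as true classes and columns as predicted classes, with YOLOv8's matrix being the transpose of that.

```python
import numpy as np

# Hypothetical 3x3 matrices (2 classes + background) with made-up counts.
# Assumption: supervision stores rows as true classes and columns as
# predicted classes; the YOLOv8 matrix is the transpose of that.
sv_matrix = np.array([
    [5, 1, 0],  # true class 0
    [2, 7, 1],  # true class 1
    [0, 1, 0],  # background row (false positives per predicted class)
])

yolo_matrix = sv_matrix.T  # same counts, axes swapped

# Transposing one of the matrices makes them directly comparable; any
# difference that remains after that points at a real computation bug.
print(np.array_equal(sv_matrix, yolo_matrix.T))  # True
```

The point of the comment above is exactly this: after accounting for the transpose, the matrices should match, and they still don't.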
-
@SkalskiP I think I have found the issue. There are two issues.
-
I'm not sure I understand this part. :)
-
@SkalskiP I am investigating the issue with
-
@hardikdava Oh, interesting. So if there are no detections, we are dropping the image from the dataset instead of loading
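A small illustration of why dropping such images skews the matrix (hypothetical counts, not the actual supervision implementation): an image that has ground-truth boxes but zero detections should still contribute false negatives, i.e. counts in the background column for each missed true class.

```python
import numpy as np

# Hypothetical counts for a single-class detector (2x2 matrix: class 0 +
# background; rows = true classes, columns = predicted classes).
def build_matrix(include_image_without_detections: bool) -> np.ndarray:
    matrix = np.zeros((2, 2), dtype=int)
    matrix[0, 0] = 3  # matched detections from other images
    if include_image_without_detections:
        # An image with 2 ground-truth boxes and no detections:
        # both boxes are missed, so they land in the background column.
        matrix[0, 1] += 2
    return matrix

kept = build_matrix(True)
dropped = build_matrix(False)
print(kept[0, 1] - dropped[0, 1])  # 2 false negatives silently lost
```

So skipping images with empty detections undercounts false negatives, which alone is enough to make the two implementations disagree.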
-
Yeah, the dataset also has issues. I found that one image is present but its annotation file is missing, and one annotation file is present but its image is missing. 😆
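A quick way to surface mismatches like these is to diff the file stems on both sides. This is a sketch, assuming a YOLO-style layout where `images/*.jpg` pairs with `labels/*.txt` by stem; the directory names and extensions are placeholders, not taken from the actual dataset.

```python
from pathlib import Path

def find_mismatches(images_dir: str, labels_dir: str):
    """Return (images missing a label file, label files missing an image)."""
    image_stems = {p.stem for p in Path(images_dir).glob("*.jpg")}
    label_stems = {p.stem for p in Path(labels_dir).glob("*.txt")}
    return sorted(image_stems - label_stems), sorted(label_stems - image_stems)

missing_labels, missing_images = find_mismatches("images", "labels")
print(f"images without labels: {missing_labels}")
print(f"labels without images: {missing_images}")
```

Running a check like this before benchmarking rules out dataset problems as a source of the metric differences.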
-
@SkalskiP Summary of my experiments:
I do not see any other difference in the implementation.
-
Excellent work, @hardikdava! Thanks a lot for the time you spent investigating the issue. So the takeaways are:
-
@SkalskiP Please convert this issue as
-
Search before asking
Bug
We recently added an object detection ConfusionMatrix to supervision. However, we noticed that our ConfusionMatrix values tend to differ from the ones you get from YOLOv8. We would like to understand why that is happening and fix any potential bugs in our codebase. Read more here.