Using the same fine-tuned YOLOv8 model, I am seeing a significant drop in mAP50 when evaluating it on the same evaluation dataset with two different methods: Ultralytics' own evaluation and "sahi coco evaluate" (with the no_sliced_prediction parameter set to True). The mAP50 score drops from 0.70 to 0.49, and I would like to understand the reason for this reduction.
Configuration
Here is my current configuration:
```bash
sahi coco evaluate --dataset_json_path <dataset_json_path> --result_json_path "$file_path"
```
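For reference, here is a minimal sketch of the two evaluation paths I am comparing. The model checkpoint (best.pt), data config (data.yaml), and image directory are placeholders, and the SAHI flags are taken from its predict CLI; "$file_path" should point at the result.json that sahi predict exports:

```bash
# Path 1: Ultralytics' built-in evaluation (reports mAP50 among other
# metrics). "best.pt" and "data.yaml" are placeholders for the
# fine-tuned checkpoint and dataset config.
yolo val model=best.pt data=data.yaml

# Path 2, step 1: generate COCO-format predictions with SAHI, with
# slicing disabled (no_sliced_prediction=True). <image_dir> is a
# placeholder for the evaluation images.
sahi predict \
  --model_type yolov8 \
  --model_path best.pt \
  --source <image_dir> \
  --dataset_json_path <dataset_json_path> \
  --no_sliced_prediction

# Path 2, step 2: score the exported predictions against the same
# ground-truth annotations. "$file_path" is the result.json written
# into the sahi predict run directory.
sahi coco evaluate \
  --dataset_json_path <dataset_json_path> \
  --result_json_path "$file_path"
```

Both paths run the same checkpoint on the same images, so I would expect the mAP50 scores to be much closer than 0.70 vs. 0.49.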