-
I'm working with satellite imagery from the DOTA dataset and I want to train a YOLOv8 model on it. The objects are very small, which is exactly what SAHI is designed for. From what I've gathered, slicing-aided fine-tuning provides a higher increase in mAP than sliced inference alone. A lot of people have asked how to train this way, but I couldn't find a clear-cut answer or code. I've also gone through the tutorial mentioned by the authors in the list of tutorials (https://blog.ml6.eu/how-to-detect-small-objects-in-very-large-images-70234bab0f98). However, I want to know the official way of fine-tuning to achieve the highest possible mAP. For example, the blog shows one approach; should I also use random crops while training, or is there a better approach?
-
Hello @yoloyash. You can find the full training scripts and configs for slicing-aided fine-tuning here: https://github.com/fcakyon/small-object-detection-benchmark
A sample config is also available for the xView dataset (satellite imagery).
For YOLOv8, first slice your COCO-formatted dataset using the slicing utilities described here: https://github.com/obss/sahi/blob/main/docs/slicing.md (a minimal sketch below).
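For concreteness, here's a rough (unofficial) sketch of the slicing step using sahi's `slice_coco`; the paths and slice sizes are placeholders you'd adapt to your DOTA layout:

```python
from sahi.slicing import slice_coco

# Slice the train split into 640x640 tiles with 20% overlap.
# All paths below are illustrative; slice size should suit your object scale.
coco_dict, coco_path = slice_coco(
    coco_annotation_file_path="dota_train_coco.json",
    image_dir="dota/train/images/",
    output_coco_annotation_file_name="dota_train_sliced",
    output_dir="dota/train_sliced/",
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
    min_area_ratio=0.1,  # drop boxes mostly cut off by a slice boundary
)
print(coco_path)  # sliced COCO annotation file, used in the next step
```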
Then train your YOLOv8 model on the slices.
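The sliced annotations are COCO JSON, so they need converting to YOLO format before training. A minimal sketch, assuming sahi's `Coco` / `export_coco_as_yolov5` utilities and the ultralytics API (paths and hyperparameters are illustrative):

```python
from sahi.utils.coco import Coco, export_coco_as_yolov5
from ultralytics import YOLO

# Load the sliced COCO annotations produced in the previous step
# (run slice_coco separately for the train and val splits).
train_coco = Coco.from_coco_dict_or_path(
    "dota/train_sliced/dota_train_sliced_coco.json", image_dir="dota/train_sliced/"
)
val_coco = Coco.from_coco_dict_or_path(
    "dota/val_sliced/dota_val_sliced_coco.json", image_dir="dota/val_sliced/"
)

# Export a YOLO-format dataset plus a data yaml that ultralytics can consume.
data_yml_path = export_coco_as_yolov5(
    output_dir="dota_sliced_yolo/",
    train_coco=train_coco,
    val_coco=val_coco,
)

# Fine-tune a pretrained YOLOv8 model on the slices.
model = YOLO("yolov8s.pt")
model.train(data=data_yml_path, imgsz=640, epochs=100, batch=16)
```

Keeping `imgsz` equal to the slice size is deliberate: the model then trains at the same scale it will see during sliced inference.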
Finally, perform sliced inference with SAHI + YOLOv8, as in https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_yolov5.ipynb (the notebook targets YOLOv5, but the workflow is the same).
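Something like this, assuming a sahi version with YOLOv8 support in `AutoDetectionModel` (weights path and thresholds are placeholders):

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap the fine-tuned weights; depending on your sahi version the
# model_type string may be "yolov8" or "ultralytics".
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="runs/detect/train/weights/best.pt",
    confidence_threshold=0.25,
    device="cuda:0",
)

# Slice the large image at inference time with the same slice size
# and overlap used for training, then merge the per-slice detections.
result = get_sliced_prediction(
    "demo_satellite_image.jpg",
    detection_model,
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
result.export_visuals(export_dir="demo_output/")
```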