Are there any recommendations for setting the min and max sizes of priorboxes and their aspect ratios? Is there any logic for choosing these values effectively?
We run object detection on thermal images from a camera with a resolution of 160x120, and the detection needs to be fast on an embedded device, so we cannot upscale the images to 300x300.
When we use priorboxes scaled from 0.2 to 0.9 relative to 160 pixels, detection is very poor - lots of missed detections. However, if we use the same 0.2 to 0.9 scales relative to 300 pixels (while the input is still 160x120), detection works very well, even though the largest priorboxes then exceed the input size, which does not make sense.
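For reference, this is roughly how the per-layer pixel sizes fall out of those scales (a minimal sketch following the linear scale scheme used in the stock SSD training scripts; the 6 feature maps and the extra 10%/20% first layer are assumptions taken from the default SSD300 setup, not from our config):

```python
import math

def prior_sizes(ref_dim, min_ratio=20, max_ratio=90, num_layers=6):
    """Per-layer (min_size, max_size) in pixels for SSD priorboxes,
    spreading scales min_ratio..max_ratio (in %) linearly over the
    feature maps. ref_dim is the dimension the scales are applied to
    (160 in our case, 300 in the stock config)."""
    step = int(math.floor((max_ratio - min_ratio) / (num_layers - 2)))
    min_sizes, max_sizes = [], []
    for ratio in range(min_ratio, max_ratio + 1, step):
        min_sizes.append(ref_dim * ratio / 100.0)
        max_sizes.append(ref_dim * (ratio + step) / 100.0)
    # extra, smaller priors on the first feature map (assumed 10%/20%)
    min_sizes = [ref_dim * 10 / 100.0] + min_sizes
    max_sizes = [ref_dim * 20 / 100.0] + max_sizes
    return list(zip(min_sizes, max_sizes))

print(prior_sizes(160))  # scales applied to 160 px (poor results)
print(prior_sizes(300))  # same scales applied to 300 px (works, but last priors exceed 160x120)
```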
We tried using a k-means algorithm to compute the best sizes, but this does not make sense for SSD models. So we tried setting some random values, and they have a very large impact on training loss and test accuracy. Although the chosen values don't make any sense, the trained model performs better (lower loss, higher accuracy) than with the default values. We would like to find some logic rather than picking random values "that work".
The most common aspect ratios of our objects are 0.7, 1.0, 1.3, 4, and 5. The wide 4:1 and 5:1 objects have sizes around 40x10; the other objects are mostly larger (30x30, 50x70, 68x100, etc.).
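To see which aspect_ratio entries would actually cover these shapes, here is how a prior's width and height follow from its size and aspect ratio in SSD (a sketch; the 20 px size below is just an illustrative value, chosen so that ar=4 reproduces our 40x10 objects):

```python
import math

def prior_wh(size, aspect_ratio):
    """SSD prior dimensions: w = s * sqrt(ar), h = s / sqrt(ar),
    so the box keeps area s*s while its shape follows the aspect ratio."""
    w = size * math.sqrt(aspect_ratio)
    h = size / math.sqrt(aspect_ratio)
    return w, h

# our common aspect ratios against a hypothetical 20 px prior
for ar in (0.7, 1.0, 1.3, 4.0, 5.0):
    w, h = prior_wh(20, ar)
    print(f"ar={ar}: {w:.0f}x{h:.0f} px")  # ar=4 gives 40x10, matching the wide objects
```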