Output layers of the YOLOv8 ONNX model are currently named `output1_yolov6r2`, `output2_yolov6r2`, and `output3_yolov6r2`: https://github.com/luxonis/tools/blob/master/yolo/export_yolov8.py#L60

Not sure if this is on purpose for inference to work on the OAK. Correct naming of the output layers (`output1_yolov8`, etc.) might make things clearer for manual conversion afterwards.
Hey, good catch! This is indeed intentional. Since both use the same processing, we accelerated support for YoloV8 by naming its outputs like the YoloV6 layers. This will be fixed once direct YoloV8 support is added on the FW side (in our DepthAI) with the new release.
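For anyone who wants YoloV8-style names in the meantime, here is a minimal sketch of how the outputs could be renamed on the exported ONNX file, assuming the standard `onnx` Python package; the file paths and the exact name mapping are assumptions based on the layer names quoted above:

```python
import onnx

# Assumed mapping from the YoloV6-style names to YoloV8-style ones.
RENAME = {
    "output1_yolov6r2": "output1_yolov8",
    "output2_yolov6r2": "output2_yolov8",
    "output3_yolov6r2": "output3_yolov8",
}

model = onnx.load("yolov8n.onnx")  # hypothetical input path

# Rename the graph-level outputs.
for output in model.graph.output:
    if output.name in RENAME:
        output.name = RENAME[output.name]

# Keep the node-level tensor names consistent with the new output names.
for node in model.graph.node:
    node.output[:] = [RENAME.get(name, name) for name in node.output]
    node.input[:] = [RENAME.get(name, name) for name in node.input]

onnx.checker.check_model(model)
onnx.save(model, "yolov8n_renamed.onnx")  # hypothetical output path
```

Note that this only changes the tensor names in the ONNX graph; if the DepthAI firmware parses the outputs by name (as the `yolov6r2` suffix suggests), the renamed model may no longer decode on-device until direct YoloV8 support lands.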