I used Ultralytics YOLOv8 to label some data with the predict function on a video, with the confidence threshold set to 0.01. I am only tracking a single class, so I set scores = [majority_class_score, (1 - majority_class_score)/5, (1 - majority_class_score)/5, (1 - majority_class_score)/5, (1 - majority_class_score)/5], essentially simulating the main class plus 4 dummy classes with lower probabilities.
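For reference, this is roughly how I build that score vector. A minimal sketch of my own code: the helper name build_scores is mine, not part of REPP or Ultralytics.

```python
import numpy as np

def build_scores(majority_class_score, n_dummy=4):
    """Per-detection class-score vector I pass to REPP.

    The real YOLOv8 confidence goes to the single class I care about, and
    each of the n_dummy placeholder classes gets an equal slice of the
    leftover mass (split into n_dummy + 1 parts, as described above).
    """
    dummy_score = (1.0 - majority_class_score) / (n_dummy + 1)
    return np.array([majority_class_score] + [dummy_score] * n_dummy,
                    dtype=np.float32)

# Example: a detection with confidence 0.9
scores = build_scores(0.9)  # -> [0.9, 0.02, 0.02, 0.02, 0.02]
```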
Unfortunately, when I run REPP on these detections, I actually get worse results: more flickering. REPP flickers or drops objects even in frames where the plain predict call (with no other tracking or post-processing) produces a detection with a confidence of 0.8 to 0.95.
I tried a range of hyperparameters, including clf_thr: 0.00001 and different re-coordinate values, but perhaps I am doing something wrong. Why does it degrade results compared to even the base predictor? Please help; I would be eternally grateful, as this is very much needed for our project. We could even pay for advice on getting this set up, if it works well and you are willing to help.
I also tried different ways of defining the scores, such as including only the confidence of the main class, or using the main class plus 19 dummy classes with low probabilities (sketched below).
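Concretely, those two variants look roughly like this. Another sketch: how the leftover mass is split among the 19 dummy classes is my own choice, since only the main-class confidence really matters here.

```python
import numpy as np

conf = 0.9  # example YOLOv8 confidence for the single real class

# Variant A: only the main-class confidence, no dummy classes
scores_main_only = np.array([conf], dtype=np.float32)

# Variant B: main class plus 19 low-probability dummy classes,
# each dummy getting an equal slice of the leftover mass
scores_20_classes = np.array([conf] + [(1.0 - conf) / 20] * 19,
                             dtype=np.float32)
```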
I believe your work is excellent and that I am the one doing something wrong in using it. Please let me know if you can point out a fundamental flaw in my approach.
Please help with this. Thank you.
I'm also trying to use REPP with a YOLOv8 model. Can you give me some suggestions?