Frigate+ 2025.0 Base Model Update #16132
Replies: 14 comments · 67 replies
-
Yeah, I was just wondering if moving off my Coral to an Nvidia card for 640x640 is worth it, given the bigger resolution and increased sensitivity for smaller objects. What's the current recommended way?
-
Is the config label for waste bin waste_bin?
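For what it's worth, here is a minimal config sketch under the assumption that Frigate+ label names use underscores, so waste bin would be waste_bin; the spelling is a guess and should be checked against the Frigate+ label documentation:

```yaml
# Sketch only: track the assumed waste_bin label alongside existing objects.
# The waste_bin spelling is an assumption; confirm it in the Frigate+ docs.
objects:
  track:
    - person
    - car
    - waste_bin
```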
-
I've got a pair of Corals and a P2200 for Frigate/Plex; might be time to transition away from the Coral with this update.
-
Very nice, awesome to see the constant progression. Very happy so far. The 640x640 from the 2024.3 model has improved my detection accuracy by quite a bit, although I'm not sure how an Nvidia card with the 640x640 compares to a Coral device. My old Coral system handles about 10 cameras without problems; when using my Nvidia RTX 4000 with ONNX, it hit 100% CPU use on the detector after just 5 cameras (4K each).
-
Thanks Blake! Do I need to regenerate my yolov7x-320 or yolov7x-640 to take advantage of these? I will say I did move from a Coral TPU to an RTX 3060 and the difference in detection and accuracy is pretty crazy. Inference speed isn't much faster, but it's more consistent and accurate.
-
Blake, I assume suggestions are still by request? What is the timeline to have these implemented either on demand in the web UI or automatically on upload?
-
Thanks for your hard work! I've been using the Coral mobiledet model; is there a way to try yolonas 640x640 with the same model generation request, without having to make two requests?
-
A couple of quick questions. Reading the above comments, it seems the larger resolution isn't a good idea for a Coral? Is the model trained per camera or collectively? I have a camera I plan to rotate 30-40 degrees because a tree growing in its FOV is constantly detected as a waste bin; does this undo the work so far, and should I delete all images? Is there any way to delete an old camera name with no images from Frigate+? Is the Australia Post logo on the list for the future?
-
Will these new labels be auto-detected on the Frigate+ website now, or in the future? I love this new feature and think it's pretty cool. I'm hopeful that even if I'm not detecting these for my own needs, the community can benefit from image submissions that "accidentally" include them as well.
-
I guess I haven't even considered my detect resolution since setting everything up initially. I'm considering testing OpenVINO, but was wondering if I should also bump my sub stream resolutions from 720p to 1080p? CPU/GPU usage is almost nothing and the Coral does just fine with 720p currently. I don't tend to notice missed detections, but I guess I wouldn't know unless I watched consistently.
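If it helps frame the question: raising the detect resolution in Frigate means both pointing the detect role at a higher-resolution substream and updating the detect dimensions to match. A minimal sketch, with a placeholder camera name and stream URL:

```yaml
# Sketch only: hypothetical camera "front" using a 1080p substream for detection.
cameras:
  front:
    ffmpeg:
      inputs:
        - path: rtsp://camera-ip/substream   # placeholder URL for a 1920x1080 substream
          roles:
            - detect
    detect:
      width: 1920    # must match the substream resolution
      height: 1080
      fps: 5
```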
-
Just curious what the Frigate+ criteria are for including infrared night shots (mostly for people and cars)? Are you particular about motion blur, graininess, or overexposure? Should we only be including really clean shots?
-
My front camera has a 180-degree FOV. Do you tag all the cars in the distance in order not to confuse the model?
-
The new 2025.0 model offers a higher resolution for yolonas, but not mobiledet? Why not? I'm not understanding the explanation provided above. For example, I have 11 cameras (detection at 10fps, 1280x720) and my Coral only ever ticks over at a few percent of utilization with 6ms inference speed, so it seems like it should be able to handle significantly more load, no? Having the option of higher accuracy for "smaller objects" has strong appeal, as we have a lot of normal-sized but "far field" objects and would like to see fewer false negatives (and positives). The further away the object (or the smaller it is), the more it seems like a 640x640 model would enable more accurate identification. Otherwise, if 640x640 doesn't offer an improvement for some use cases, then why offer it for yolonas at all? Please explain.
-
Running Frigate on an i5-12500 (UHD 770 iGPU): with the 640x640 model, OpenVINO inference time went from 20-30ms to 50-60ms and CPU load went from 20% to 60%. I rolled back to 320x320 and will try again when I add an Nvidia GPU to my Frigate server.
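For context, a minimal sketch of the kind of OpenVINO detector setup described here, on the assumption that the Frigate+ model is referenced by its model ID (the ID below is a placeholder):

```yaml
# Sketch only: OpenVINO detector on an Intel iGPU with a Frigate+ model.
detectors:
  ov:
    type: openvino
    device: GPU                    # run inference on the integrated GPU
model:
  path: plus://<your_model_id>     # placeholder Frigate+ model ID
```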
-
The base model in Frigate+ has now been updated to 2025.0. All model requests going forward will use this version. In order to get a new model, you will need to submit a model request.
This update was focused on incorporating additional training data for the new labels added in 2024.3 (motorcycle, bicycle, boat, usps, dhl, an post, purolator, postnl, nzpost, postnord, gls, dpd, horse, bird, raccoon, fox, bear, cow, squirrel, goat, rabbit, waste bin, bbq grill, robot lawnmower, umbrella). Thousands of additional examples of reported false positives were also incorporated.
Along with this update, you will see that yolonas model requests now generate two different sizes: 640x640 and 320x320. If you have sufficiently powerful hardware, the 640x640 model offers higher accuracy, especially for smaller objects. When using the larger model, you can expect detector GPU/CPU usage to roughly double and inference times to be about 2x slower. However, the number of detections needed per second should also decrease, because each inference can scan a larger portion of the frame at once. I would love to hear your feedback on the larger size, as the training data isn't always representative of real-world Frigate usage.
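For anyone trying the new sizes, here is a minimal sketch of pointing Frigate at a Frigate+ model via the ONNX detector (e.g. on an Nvidia GPU). The model ID is a placeholder, and it is an assumption on my part that the 320x320 and 640x640 variants each get their own model ID on the Frigate+ site:

```yaml
# Sketch only: ONNX detector (e.g. Nvidia GPU) with a Frigate+ model.
detectors:
  onnx:
    type: onnx
model:
  path: plus://<your_model_id>   # placeholder; copy the ID of the 320x320 or 640x640 model
```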