[Config Support]: High memory usage and ffmpeg keeps crashing #11326
Replies: 7 comments 8 replies
-
We will need information from the container about what is using memory. This can be retrieved using the system page in the UI as well as
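One way to gather that from the command line (a sketch; `frigate` is an assumed container name) is:

```shell
# Container-level memory usage (run on the Docker host);
# "frigate" is an assumed container name.
docker stats --no-stream frigate

# Per-process breakdown inside the container, sorted by resident memory.
docker exec frigate ps -eo pid,rss,vsz,comm --sort=-rss | head -n 15
```

The second command shows which process (ffmpeg, go2rtc, the detector, etc.) is actually holding the memory.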
-
Here is an image from the system page. Note that the PID climbs from 300 all the way to over 100k in around 2 days.
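A rapidly climbing PID like that usually means processes are being spawned and exiting in a loop. One way to check (a sketch; run it inside the container, repeating every minute or so) is to sample the ffmpeg process count against the highest PID in use:

```shell
# If the highest PID rises steadily while the ffmpeg count stays flat,
# something is respawning processes (e.g. ffmpeg crashing and restarting).
echo "ffmpeg count: $(pgrep -c ffmpeg || true)"
echo "highest pid:  $(ps -eo pid --sort=-pid | awk 'NR==2 {print $1}')"
```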
-
I'm also facing a similar issue. My config:

```yaml
mqtt:
  host: home-assistant-mq
  user: hass
  password: 'xxx'
detectors:
  ov:
    type: openvino
    device: GPU
model:
  path: /models/yolo_nas_s7.onnx
  labelmap_path: /labelmap/coco-80.txt
  width: 640
  height: 640
  input_tensor: nchw
  input_pixel_format: bgr
  model_type: yolonas
detect:
  width: 1088
  height: 640
go2rtc:
  ffmpeg:
    cam: -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -use_wallclock_as_timestamps 1 -rtsp_transport tcp -i {input}
  streams:
    cam1:
      - ffmpeg:rtsp://[email protected]/stream1#input=cam#video=h264#hardware#audio=aac
    cam2:
      - ffmpeg:rtsp://[email protected]/stream1#input=cam#video=h264#hardware#audio=aac
ffmpeg:
  hwaccel_args: preset-intel-qsv-h264
  output_args:
    record: preset-record-generic-audio-copy
  input_args: preset-rtsp-restream
objects:
  filters:
    person:
      min_score: 0.7
      threshold: 0.7
    cat:
      min_score: 0.5
      threshold: 0.7
  track:
    - person
    - cat
record:
  enabled: true
  expire_interval: 5
  retain:
    days: 7
    mode: all
  events:
    retain:
      default: 7
      mode: motion
snapshots:
  enabled: true
  timestamp: false
  bounding_box: true
  retain:
    default: 7
motion:
  threshold: 30
  contour_area: 10
  improve_contrast: false
cameras:
  cam1:
    onvif:
      host: 192.168.20.21
      port: 2020
      user: camera
      password: xxx
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/cam1
          roles:
            - record
            - detect
    detect:
      fps: 5
      min_initialized: 2
    motion:
      mask: 0,0.071,0.328,0.068,0.328,0,0,0
      threshold: 30
      contour_area: 10
      improve_contrast: 'true'
    zones:
      inside:
        coordinates: 0,0.523,0.336,0.341,0.348,0.476,0.656,0.269,0.661,0,1,0,1,1,0,1
        inertia: 1
      outside:
        coordinates: 0.002,0,0,0.52,0.337,0.337,0.349,0.472,0.654,0.266,0.658,0
        inertia: 1
    review:
      detections:
        required_zones:
          - inside
          - outside
      alerts:
        required_zones: inside
  cam2:
    onvif:
      host: 192.168.20.22
      port: 2020
      user: camera
      password: xxx
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/cam2
          roles:
            - record
            - detect
    detect:
      fps: 5
      min_initialized: 2
    motion:
      mask: 0,0,0.336,0,0.336,0.07,0,0.072
    objects:
      mask: 0.519,0.374,0.615,0.355,0.63,0.673,0.534,0.663
    zones:
      inside:
        coordinates: 0,0,1,0,1,1,0,1
        inertia: 1
    review:
      alerts:
        required_zones: inside
      detections:
        required_zones: inside
version: 0.14
```

Logs:
I limit the Docker memory usage to 2GB, and it is currently at 1.8GB of usage. The processor is an i5-6500T.
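For reference, that kind of cap can be set in Docker Compose roughly like this (a sketch; the service name and limit value are assumptions, and note the container is OOM-killed if it exceeds a hard `mem_limit`):

```yaml
services:
  frigate:
    # Hard memory cap for the container.
    mem_limit: 2g
```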
-
OK, I've also encountered this. Here are the go2rtc logs:
Here's the output of the main frigate log:
This is my config:
This is on an RPi4; the cameras run Thingino firmware, RTSP H.264 1920x1080 at 24fps. I understand that there might have been a timeout on reading the camera's output (probably a bad WiFi signal), but why would it hog the RAM? Also, here's the ffmpeg command for one camera (the other one is identical):
-
Also having this issue, and it's bringing down my entire system regularly.
-
I still have this issue with Frigate. I have upgraded to the latest commit, 0.14.1-e01b6ee. It basically keeps using more and more RAM, then freezes or crashes my RPi. A temporary solution that I wouldn't recommend is creating a cron job to restart the Frigate Docker container every 12 or 24 hours, but this obviously creates safety issues if you are using it for surveillance.
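For what it's worth, that workaround is a one-line crontab entry on the host (a sketch; `frigate` as the container name and the docker binary path are assumptions):

```shell
# crontab -e on the host: restart the Frigate container every 12 hours.
0 */12 * * * /usr/bin/docker restart frigate
```

This only masks the leak, and the camera feeds are down for the duration of each restart.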
-
Describe the problem you are having
I run Frigate version 0.13.2-6476f8a on an RPi 4B with 8GB RAM, installed via Docker without HASS. The Frigate process starts at 1.9GB of RAM and uses an extra 1GB each day until it runs out of memory and my RPi stops being responsive. I'm not using a Coral; detection currently runs on the CPU.
Version
0.13.2-6476f8a
Frigate config file
Relevant log output
Frigate stats
Operating system
Debian
Install method
Docker Compose
Coral version
CPU (no coral)
Any other information that may be helpful
No response