Error submitting a packet to the muxer: Error number -10054 occurred #203
> Rolled back the mistserver version to 3.0 (was 3.4), the error went away.
Hello,
Interesting, if it works in 3.0 but not in 3.4, that would mean there was some kind of regression. We did change quite a lot in the RTMP input in between, so it is possible.
There are a few recommendations I can make, but I'd also be interested in seeing if we can simply "fix" whatever the problem is.
*Recommendations*
- So, in theory MistServer does run into a slowdown in the interface if you
have 100+ streams set up, but wildcard streams are not counted towards this
number. Even then it's a slowdown of the interface, not the server itself.
- Every stream will be its own separate process, so thread-wise you should also be fine. We have several users running several thousand streams on a single server, so I don't think you should be hitting a limit unless the hardware is somehow lacking.
- Hardware-wise, I would recommend checking the server in use by running `htop` or `top` to see the CPU and RAM stats. It should give you a hint if one or the other is especially busy.
- One thing to keep in mind is that by default Linux makes 50% of RAM available for shared memory. We would recommend increasing this to 80-90% of RAM if the server is reserved for streaming. You can verify the amount through `df -h` and looking for the line that mentions `/dev/shm`. (A concrete example follows right after this list.)
  - To increase it temporarily: `sudo mount -o remount,size=#G /dev/shm/` where # should be the number in GB you want to target. For example, for 32 GB of RAM you'd go for 30: `sudo mount -o remount,size=30G /dev/shm/`
  - To make this permanent you'd most likely have to edit `/etc/fstab` with a line like `tmpfs /dev/shm tmpfs size=#G 0 0`. I would recommend looking this up to be sure, however!
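As a concrete sketch for a machine with 32 GB of RAM (the 30G figure is just the example from above; adjust it to your own hardware), first check the current size, then remount:
df -h /dev/shm
sudo mount -o remount,size=30G /dev/shm
The matching `/etc/fstab` line to make it permanent would be something like `tmpfs /dev/shm tmpfs defaults,size=30G 0 0`, but please double-check this against your distribution's documentation.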
Running out of RAM would usually show as all streams being dropped at nearly the same time. Then a few might recover, but in general you'd see a full outage with everything dropping at the same moment. (Though capping shared memory below 100% of RAM does make the recovery better and could mean only "some" streams drop.)
Running out of CPU would show as some streams getting dropped, which then frees up the "space" for the rest to stabilize. If processes automatically restart, this could very well repeat for random streams.
If something always closes at exactly time X or Y no matter the server load, you're usually looking at a connection timeout or a process that checks & stops something on a timer.
*The error itself*
The -10054 error is an FFmpeg error; in my experience it usually comes up when the other side closes the connection, either by time-out or by an authorization mismatch. (10054 is the Windows socket code for "connection reset by peer", which fits that picture.) There should be more context around the error providing the reason, so if you could share the lines above/below that error, that should give more context about what specifically triggers it.
It could be that you need to run the ffmpeg application at a higher log level to see the context you're missing right now.
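For example, assuming your push looks roughly like the command below (the input file, `-c copy`, and the log file name are placeholders, not your actual settings), you could raise the log level and capture the output to a file:
ffmpeg -loglevel debug -re -i input.mp4 -c copy -f flv "rtmp://mist.myserver.com/remote/cp+1stream" 2> ffmpeg_cp1.log
Alternatively, adding `-report` makes FFmpeg write a full debug log to a file on its own.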
The 5 minute timeout with error -10054 very much sounds/feels like an automatic connection drop because one of the sides isn't getting confirmation that the connection should still be open... It is weird that the time changes if you switch stream names, though.
*Debug information from MistServer*
So here I would recommend setting up a separate stream like the `cptest` you've done before and setting its debug level to 4. That will add additional debug information that should help in figuring out what is causing the ingest to stop. Then simply push, and afterwards collect the logs for that stream through the journal:
journalctl --since=-30m -u mistserver | grep cptest
(change `-30m` to a longer window if the crash was more than 30 minutes ago; change `cptest` to the stream name if you went for a different one)
You can save the logs by writing them to a log/txt file:
journalctl --since=-30m -u mistserver | grep cptest > mistcptest.log
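If you want to watch the logs live while reproducing the problem, you can also follow the journal:
journalctl -u mistserver -f | grep cptest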
The logs might at least give a hint as to what causes the source to stop; MistServer should report whether it closed down by request or disappeared unexpectedly.
If you're repushing the stream towards another local stream, I would recommend setting that one to a higher debug level too and collecting those logs as well.
You could also raise the debug level of the RTMP process itself in the protocol list. However, with the amount of pushes you're receiving I would not recommend this, as it gets spammy fast. Once you're done, please don't forget to reset the debug levels again to reduce the spam in the logs.
*Setup*
So if I understand correctly, you're receiving several hundred streams using the wildcard setup, assigning them a name dynamically as they come in. You then repush them internally to another stream name.
- After 5 minutes the connection drops; I assume the original stream drops, as the error given is an FFmpeg error
- or, when you use a different stream name to receive the stream, it drops after 20 minutes
Could you perhaps share the encoding profile/settings you're using in the FFmpeg application? That would allow me to try and reproduce the setup and verify/fix the error. If this is something you feel is best left out of the GitHub issue, feel free to mail it to me directly using ***@***.*** or ***@***.***
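For example, the full command line used for the push would already help a lot, something along these lines (the input and encoder settings here are just placeholders for illustration):
ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -b:v 3000k -c:a aac -f flv "rtmp://mist.myserver.com/remote/cp+1stream"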
Or if you can share more information about how the setup works, that would be appreciated as well; and again, if that's something you'd rather send in private, I would recommend mailing directly.
With kind regards,
Balder Viëtor
Head of Testing
MistServer
Hello.
Thank you for the quality software; it's stable and easy to use. Great work!
I would like to ask your advice.
I created a push stream on my server with the name cp, for which receivers are created dynamically. For example:
rtmp://mist.myserver.com/remote/cp+1stream
rtmp://mist.myserver.com/remote/cp+2stream
rtmp://mist.myserver.com/remote/cp+3stream
and so on.
The video stream is transmitted by the ffmpeg utility (H.264 codec, FLV container). At the moment I have more than 200 such streams. The broadcast of each of them periodically, roughly every 5 minutes, crashes with the error:
Error submitting a packet to the muxer: Error number -10054 occurred
If I start a new push stream on the same server, say with the name cptest, and attach the broadcast to it:
rtmp://mist.myserver.com/remote/cptest+1stream
then the stream behaves much more stably, crashing with error -10054 once every 20 minutes.
Perhaps you can recommend the right approach for my case. For example, should I make sure that the number of dynamically created streams (via +) does not exceed, say, 64? Or do I need to add RAM to the server? Or is there something else related to fine-tuning the stream properties?