PLI dilution #3422
Conversation
@natikaltura you should change the target branch to 0.x
@natikaltura thanks for the PRs! Answering here even though some considerations will apply to #3423 as well (where there are more changes due to the multiple streams).
I think the patches can be simplified, since you're adding a new volatile that's probably not needed. The main problem is that the fence we currently have doesn't seem to be working well with the multistream nature of Janus (the same function is invoked by multiple threads), and mostly because we only update
I don't like the change of the threshold from 100ms to 1 second. It will make responsiveness for new users (and simulcast changes) awful in some cases. While we can agree that 100ms may be too short (and probably the cause of your ~10 PLIs per second to the publisher), I don't see 1s as the answer, especially with the fence fix from the paragraph above.
Coming to multistream, I don't see the point of adding a new function. I'd just add a new property to the existing function to dictate a potentially different behaviour (e.g., -1 does what it did before, a positive integer only pokes the specific stream). Not even sure why we need two? If we're starting to do drill-down PLIs, why not only do that?
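For reference, a minimal sketch of the kind of atomic fence being discussed, assuming GLib atomics; the struct, field, and macro names (stream_ctx, sending_pli, pli_latest, PLI_MIN_INTERVAL_US) are illustrative stand-ins, not the actual Janus identifiers:

```c
#include <glib.h>

#define PLI_MIN_INTERVAL_US (100*1000)	/* throttling window, e.g. the current 100ms */

typedef struct stream_ctx {
	gint sending_pli;		/* fence: non-zero while some thread is sending a PLI */
	gint64 pli_latest;		/* monotonic time (us) of the last PLI we sent */
} stream_ctx;

static void maybe_send_pli(stream_ctx *s) {
	/* Only one thread at a time gets past this fence, even if the
	 * function is invoked concurrently for the same publisher */
	if(!g_atomic_int_compare_and_exchange(&s->sending_pli, 0, 1))
		return;
	gint64 now = g_get_monotonic_time();
	if(now - s->pli_latest >= PLI_MIN_INTERVAL_US) {
		/* Update the timestamp before the (comparatively slow) send,
		 * so other threads see it as soon as they pass the fence */
		s->pli_latest = now;
		/* ... craft and send the RTCP PLI here ... */
	}
	g_atomic_int_set(&s->sending_pli, 0);
}
```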
As you noticed, that's because the simulcast code defaults to 250ms, but that's a (reasonable) default. You can override that with the
@lminiero Thanks a lot for the comments! I'll fix both PRs after getting your input on the following:
Thanks! I'll remove the sending_pli volatile from the 0.x stream since it seems unnecessary with the other two PR changes:
I originally copied the sending_pli volatile from the multistream implementation.
Just to emphasize the need for this fix, I have logs showing 200 and more PLIs per second (for each layer), especially with screen sharing on challenging networks :)
Sure, I will unify it into one function.
Thanks! I realize I missed the full picture...
Hi @lminiero, have you had a chance to review @natikaltura's questions? Thanks!
@DenisSicunKaltura thanks for the heads up and sorry for missing this. I remember reading the mail notification but being unable to answer right away, and then I forgot about that. Answering the relevant points below.
IIRC we added it to multistream because there you can have multiple video m-lines, all of them potentially asking for a PLI at the same time, but I may be wrong. It's not a bad idea to keep it in
I should really remember my own code better 🤣
I don't have a specific suggestion on how long that should be, just reasoning on response times and what would work best with many new subscribers coming in, or many subscribers to the same sources switching feeds. If you can run some experiments with different thresholds and evaluate what looks like a good trade-off in your environments (since you have big deployments to test this with), that could indeed help. We could even make it configurable, I guess, though that would be a separate effort, if at all, since it's not clear what the API would need to be.
While it makes sense as an enhancement, I'd rather keep the "default simulcast threshold for publishers" effort for a completely different PR. It's best not to cram too much stuff into the same PR, especially when the changes touch different things and address different requirements. This PR is good to address the "too many PLIs" issue, but for simulcast switching we already have an API, so a separate effort may provide a different approach to the existing one.
Thanks for the quick answer @lminiero !
As I mentioned, we deployed it to our production. Over the last few weeks, 1 second has worked really well, even for large sessions with hundreds of simultaneous subscribers (joining/switching feeds). I think it's good to keep it at a fairly high value for users coming from challenging networks or over long geographical distances. After those clarifications, I think @natikaltura will address the rest of the PR comments soon.
Thanks for the feedback! Waiting for the updates then ✌️
Fix: overflow of PLI requests to Publishers (especially with simulcast)
This is a PR for the v0.14.3 branch (see PR# for the multistream version).
We observed a significant issue while using the Janus Media Server on both the v0.14.3 branch and the v1.2.3 multistream branch. When VP8 simulcast is enabled with a group of viewers for a Publisher, the Publisher sometimes receives dozens of Picture Loss Indication (PLI) requests per second. Although these PLI requests don't directly lead to full frame retransmissions or additional outgoing networking overhead from the Publisher, they do trigger a lot of processing, resulting in CPU spikes on the Publisher.
Additional Context
While this PR addresses the overflow of PLI requests, a related issue remains unresolved. Specifically, when Chrome is used for desktop sharing, the generated video streams often run at a very low frame rate (0.2–0.5 FPS). This low frame rate causes Janus to generate excessive PLIs. The issue arises primarily from the janus_rtp_simulcasting_context_process_rtp() function in rtp.c, which flags that the Publisher requires a PLI (->need_pli) if no packets have been received on a substream for more than 250,000 microseconds.
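As a rough illustration (not the actual rtp.c code), the check behaves roughly like the sketch below; the struct and field names are simplified stand-ins for the real simulcast context:

```c
#include <glib.h>

#define SIMULCAST_FALLBACK_US 250000	/* the 250ms default mentioned above */

typedef struct simulcast_ctx {
	gint64 last_relayed_us;	/* when a packet was last relayed for this substream */
	gboolean need_pli;		/* set to ask the publisher for a keyframe */
} simulcast_ctx;

static void check_substream(simulcast_ctx *ctx, gint64 now_us) {
	/* A 0.2–0.5 FPS screen share easily exceeds 250ms between packets,
	 * so this condition keeps firing and PLIs pile up */
	if(now_us - ctx->last_relayed_us > SIMULCAST_FALLBACK_US)
		ctx->need_pli = TRUE;
}
```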
As a temporary fix, we applied a patch that identifies desktop sharing by checking for the "_desktop" string constant we add to the Publisher's display property (see the sketch below). However, for a correct solution it might be beneficial to add a lowFps property for Publishers. This would allow for more precise logic to handle such cases effectively.
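For clarity, this is roughly what that temporary workaround looks like; the "_desktop" marker and the helper name are specific to our deployment and purely illustrative:

```c
#include <string.h>
#include <glib.h>

/* Illustrative helper: treat publishers whose display name we tagged with
 * "_desktop" as screen shares, and skip the low-FPS PLI logic for them */
static gboolean is_screen_share(const char *display) {
	return display != NULL && strstr(display, "_desktop") != NULL;
}
```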
Thank you in advance for your input.
Nati Baratz