obs-webrtc: Adjust RtcpNackResponder from bitrate #10518
Conversation
Thanks for the review @RytoEX, I made the changes.
@tt2468 @Bleuzen Do you think it would be better to set a 'sensible' higher default then? In libdatachannel the default is 512 packets. At 8 Mbit/s that is ~1200 packets a second; I was hoping to give users ~5 seconds of cache to support people with mixed workloads.
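For concreteness, a back-of-envelope sketch of those figures. The ~833-byte average RTP packet size is an assumption inferred from the "8 Mbit/s is ~1200 packets a second" estimate above, not something stated in the thread:

```cpp
#include <cstdint>

// Assumed average RTP packet size, inferred from the estimate above.
constexpr std::uint64_t kAvgPacketBytes = 833;
constexpr std::uint64_t kBitrateBps = 8'000'000; // 8 Mbit/s
constexpr std::uint64_t kPacketsPerSec = kBitrateBps / 8 / kAvgPacketBytes; // ~1200
constexpr std::uint64_t kCachePackets = kPacketsPerSec * 5; // ~6000 for 5 s of history
static_assert(kCachePackets > 512, "libdatachannel's default cache is far smaller");
```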
Not sure how this translates to resource usage (memory/bandwidth?) or whether everyone would be happy with it, so I don't have a strong opinion on that currently. However, I wonder if it would be better to make this user-configurable.
The extra memory usage will be the 5 seconds of video. It shouldn't impact bandwidth, since NACKs are only requested on loss. I don't have a strong opinion either. I just realized while debugging lossy networks that our current NACK setup isn't useful.
If we're just trying to determine a buffer size which has to be available but not always filled, then yeah, we can probably go pretty high, like 8 MB, without too much worry. If the behavior of the output changes depending on whether the buffer is the required size versus oversized, then we might have to be more careful.
@tt2468 Yep, as you described: available but not always filled. It would be really nice to bump this, especially for users with higher bitrates and packet loss; the current value only gives them ~0.5 seconds of buffer.
(Force-pushed from 89b9da6 to 95e1a63.)
Wouldn't it make more sense to adjust the buffer size based on the (configured) output bitrate? Currently OBS outputs are guaranteed to target CBR, so you could query the bitrate and set it appropriately. See the RTMP Windows send loop for example: https://github.com/obsproject/obs-studio/blob/master/plugins/obs-outputs/rtmp-stream.c#L1046-L1067
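A minimal sketch of that idea, assuming the standard "bitrate" encoder settings key (in kbit/s) used by OBS's CBR outputs; the 5-second window, the ~1000-byte average packet size, and the nack_cache_packets helper are illustrative, not an existing API:

```cpp
#include <obs-module.h>

static size_t nack_cache_packets(obs_output_t *output)
{
	obs_encoder_t *venc = obs_output_get_video_encoder(output);
	obs_data_t *settings = obs_encoder_get_settings(venc);
	uint64_t kbps = (uint64_t)obs_data_get_int(settings, "bitrate");
	obs_data_release(settings);

	const uint64_t bytes_per_sec = kbps * 1000 / 8;
	const uint64_t avg_packet_bytes = 1000; /* assumed average RTP packet size */
	return (size_t)(bytes_per_sec / avg_packet_bytes * 5); /* ~5 s of history */
}
```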
@derrod Either works for me! The original worked that way; @tt2468 and @Bleuzen left comments around the following, so it felt safer to have a high default value.
@derrod I found that for WebRTC, using VBR works better anyway, because most encoders introduce big spikes in the data rate on I-frames, especially in low-motion scenes. As an idea, we could use the maximum bitrate (if configured, depending on the encoder) for the calculation in VBR mode.
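A hypothetical extension of the sketch above for that VBR case; the "rate_control" and "max_bitrate" settings keys vary between encoders and are assumptions here, not a confirmed OBS contract:

```cpp
#include <string.h>
#include <obs-module.h>

static uint64_t effective_kbps(obs_data_t *settings)
{
	const char *rc = obs_data_get_string(settings, "rate_control");
	if (rc && strcmp(rc, "VBR") == 0) {
		uint64_t max_kbps = (uint64_t)obs_data_get_int(settings, "max_bitrate");
		if (max_kbps > 0) /* fall back to the target bitrate if unset */
			return max_kbps;
	}
	return (uint64_t)obs_data_get_int(settings, "bitrate");
}
```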
I think this could be fixed in two stages. The first just unblocks general usage of this feature with an appropriately sized buffer (say 4000 packets, or 6 megabytes); a sketch of that interim step follows. This is a negligible amount of memory and could be optimized later with dynamic behavior based on bitrate or configuration.
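A minimal sketch of that interim fix, assuming libdatachannel's RtcpNackResponder(size_t) constructor (which defaults to 512 packets) and attaching the handler via Track::setMediaHandler; the 4000-packet constant and the helper name are illustrative:

```cpp
#include <memory>
#include <rtc/rtc.hpp>

static void attach_nack_responder(const std::shared_ptr<rtc::Track> &track)
{
	/* ~4000 packets at a ~1500-byte MTU is roughly 6 MB of history */
	constexpr size_t kNackCachedPackets = 4000;
	track->setMediaHandler(
		std::make_shared<rtc::RtcpNackResponder>(kNackCachedPackets));
}
```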
To give people confidence that I have fixed this properly, I wrote https://github.com/sean-der/whip-nack-test, a WHIP server that simulates packet loss. Without this change, OBS stops responding to NACKed packets at a certain point; with my change, you will now see it go past that point, giving us 6 megabytes of history and a much better experience on broken networks.
Previously we used the default value set by libdatachannel. At higher bitrates the cache would be undersized, so we wouldn't have enough NACK history to repair the stream.
👍
Just making the buffer bigger seems like a good compromise for now to improve the end-user experience, especially since it's only about 8 MB of memory.
Seems fine at a glance.
Description
Adjust RtcpNackResponder from bitrate
Motivation and Context
At higher bitrates the cache would be undersized, so we wouldn't have enough NACK history to repair the stream.
How Has This Been Tested?
Against Broadcast Box with 1% packet loss.
Types of changes
Checklist: