
obs-webrtc: Adjust RtcpNackResponder from bitrate #10518

Merged: 1 commit into obsproject:master on Jun 4, 2024

Conversation

@Sean-Der (Contributor) commented Apr 11, 2024

Description

Adjust RtcpNackResponder from bitrate

Motivation and Context

At higher bitrates the cache would be undersized, so we wouldn't have enough NACK history to repair the stream.

How Has This Been Tested?

Against Broadcast Box with a packet loss percentage of 1%

Types of changes

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code has been run through clang-format.
  • I have read the contributing document.
  • My code is not on the master branch.
  • The code has been tested.
  • All commit messages are properly formatted and commits squashed where appropriate.
  • I have included updates to all appropriate documentation.

@Sean-Der (Contributor, Author) commented:

Thanks for the review @RytoEX, I've made the changes.

@Sean-Der (Contributor, Author) commented:

@tt2468 @Bleuzen Do you think it would be better to set a 'sensible' higher default then?

In libdatachannel the default is 512 packets. At 8 Mbit/s that works out to ~1200 packets a second, so I was hoping to give users ~5 seconds of cache to support people with mixed workloads (see the sketch after this list):

  • People viewing in real time, who experience some packet loss
  • A server that is also saving the video and will aggressively NACK old packets to make sure the saved video is perfect
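
A minimal sketch of that sizing arithmetic (the ~850-byte average RTP packet size is my assumption; it is what makes 8 Mbit/s come out to roughly 1200 packets a second):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: derive a NACK cache size from bitrate. The 850-byte average
// RTP packet size is an assumption, not a measured value.
static size_t nack_cache_packets(uint64_t bitrate_bps, unsigned seconds)
{
	uint64_t bytes_per_sec = bitrate_bps / 8;
	uint64_t packets_per_sec = bytes_per_sec / 850;
	return (size_t)(packets_per_sec * seconds);
}

// nack_cache_packets(8'000'000, 5) ≈ 1176 * 5 ≈ 5880 packets,
// versus libdatachannel's default of 512.
```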

@Sean-Der Sean-Der requested review from tt2468 and Bleuzen April 12, 2024 15:02
@Bleuzen (Contributor) commented Apr 12, 2024

> Do you think it would be better to set a 'sensible' higher default then?

I'm not sure how this translates to resource usage (memory? bandwidth?) or whether everyone would be happy with it, so I don't have a strong opinion on that currently.

However, I wonder if it would be better to make this user-configurable.

@Sean-Der (Contributor, Author) commented:

The extra memory usage will be the 5 seconds of video. It shouldn't impact bandwidth; NACKs are only sent when packets are lost.

I don't have a strong opinion either. I only realized while debugging lossy networks that our current NACK setup isn't effective.

@tt2468 (Member) commented Apr 12, 2024

If we're just trying to determine a buffer size that has to be available but not always filled, then yeah, we can probably go pretty high, like 8 MB, without too much worry. If the behavior of the output changes depending on whether the buffer is exactly the required size versus oversized, then we might have to be more careful.

@WizardCM WizardCM added the Bug Fix Non-breaking change which fixes an issue label Apr 13, 2024
@Sean-Der (Contributor, Author) commented:

@tt2468 Yep, as you described: available but not always filled.

It would be really nice to bump this, especially for users with higher bitrates and packet loss; the current value only gives them ~0.5 seconds of buffer.

@Sean-Der Sean-Der force-pushed the nack-size branch 3 times, most recently from 89b9da6 to 95e1a63 Compare May 16, 2024 03:56
@Sean-Der Sean-Der requested a review from RytoEX May 16, 2024 15:34
@derrod (Member) commented May 18, 2024

Wouldn't it make more sense to adjust the buffer size based on the (configured) output bitrate? Currently OBS outputs are guaranteed to target CBR, so you could query the bitrate and set it appropriately. See the RTMP Windows send loop for example: https://github.com/obsproject/obs-studio/blob/master/plugins/obs-outputs/rtmp-stream.c#L1046-L1067
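
A rough sketch of what that could look like as a fragment in the WHIP output's setup path (the `output` and `video_track` variables are hypothetical stand-ins; "bitrate" is exposed in kbps by the common OBS encoders, and rtc::RtcpNackResponder takes a maximum stored-packet count in current libdatachannel):

```cpp
// Sketch: size the NACK responder from the configured encoder bitrate
// instead of a fixed constant. Error handling omitted.
obs_encoder_t *venc = obs_output_get_video_encoder(output);
obs_data_t *settings = obs_encoder_get_settings(venc);
uint64_t kbps = (uint64_t)obs_data_get_int(settings, "bitrate");
obs_data_release(settings);

// ~5 seconds of history, assuming ~850-byte average RTP packets
size_t cache_size = kbps * 1000 / 8 / 850 * 5;

video_track->setMediaHandler(
	std::make_shared<rtc::RtcpNackResponder>(cache_size));
```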

@Sean-Der (Contributor, Author) commented May 19, 2024

@derrod either works for me! The original version worked that way; @tt2468 and @Bleuzen left comments along these lines:

> This only works for CBR and similar modes. Is there any consideration made for if a user uses an encoder in CQP or similar modes? Additionally, I could see it being an issue if an encoder has a "bitrate" value applied to its settings object, but the encoder is not using a mode which actually references the "bitrate" value.

So it felt safer to have a high default value.

@Bleuzen (Contributor) commented May 19, 2024

> Currently OBS outputs are guaranteed to target CBR

@derrod How exactly does this work? The UI does not enforce CBR.

I found that for WebRTC, VBR actually works better, because most encoders introduce big spikes in the data rate on I-frames, especially in low-motion scenes, and these spikes very often cause freezes. VBR lowers the bitrate in low-motion scenes, which also makes the spikes smaller, so VBR streams freeze less often and are more stable than CBR streams. I have been using VBR for WebRTC streaming over the last few months mainly for this reason (as a nice bonus, it also makes mobile viewers with limited data plans happier).


As an idea, we could use the configured maximum bitrate (if set; this depends on the encoder) for the calculation in VBR mode, or make the value user-configurable so it also works when someone uses another rate-control mode or simply has more memory to spare. A sketch of that fallback follows below.
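
A minimal sketch of that idea, assuming the encoder exposes a "max_bitrate" settings key (the key name varies by encoder and is an assumption here), with "bitrate" as the fallback:

```cpp
#include <obs-module.h>

// Sketch: prefer a configured maximum bitrate for VBR-like modes and
// fall back to "bitrate". Both key names are assumptions and differ
// between encoders.
static uint64_t effective_bitrate_kbps(obs_encoder_t *encoder)
{
	obs_data_t *settings = obs_encoder_get_settings(encoder);
	uint64_t kbps = (uint64_t)obs_data_get_int(settings, "max_bitrate");
	if (kbps == 0)
		kbps = (uint64_t)obs_data_get_int(settings, "bitrate");
	obs_data_release(settings);
	return kbps;
}
```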

@kc5nra (Contributor) commented May 19, 2024

I think this could be fixed in two stages. The first just unblocks general usage of this feature with an appropriately sized buffer (say 4000 packets, or about 6 megabytes). That is a negligible amount of memory, and it could be optimized later with dynamic behavior based on bitrate or configuration.
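
As a sanity check on those numbers (a sketch; it uses the Ethernet MTU as the worst-case packet size):

```cpp
#include <cstddef>

// 4000 packets at a worst-case RTP packet size of 1500 bytes (Ethernet
// MTU) caps the cache at 6 MB; typical packets are smaller, so real
// usage sits below this.
constexpr size_t kFixedCachePackets = 4000;
constexpr size_t kMaxPacketBytes = 1500;
static_assert(kFixedCachePackets * kMaxPacketBytes == 6'000'000,
	      "4000 packets * 1500 bytes = 6 MB worst case");
```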

@Sean-Der (Contributor, Author) commented:

To give people confidence that I have fixed this properly, I wrote https://github.com/sean-der/whip-nack-test. This is a WHIP server that simulates packet loss. You can start it with `go run .`, then set your WHIP server to http://localhost:8080/ and use any stream key.

With the default cache, OBS stops responding to NACKs for packets more than 550 sequence numbers in the past. This means that at most ~750 kilobytes of history is available for video.

Sending NACK for Packet (15726) which is (550) in the past

With my change, you will see it go past that point:

Pkt with SequenceNumber(35500) correctly RTXed
Sending NACK for Packet (36358) which is (900) in the past

That gives us 6 megabytes of history and a much better experience on broken networks.

Commit message:

Before, we used the default value set by libdatachannel. At higher bitrates the cache would be undersized, so we wouldn't have enough NACK history to fix the stream.
@DDRBoxman (Member) left a comment:

👍

Just making the buffer bigger seems like a good compromise for now to improve the end-user experience, especially since it's only about 8 MB of memory.

@RytoEX (Member) left a comment:

Seems fine at a glance.

@RytoEX RytoEX added this to the OBS Studio (Next Version) milestone Jun 4, 2024
@RytoEX RytoEX merged commit b9de99a into obsproject:master Jun 4, 2024
15 checks passed