Use of remote:prot=tcp on resource-constrained devices #92
So I've now tried this (remote device: an OrangePi Zero LTS using a FUNcube Dongle Pro+ with the latest SoapyFCDPP driver; local device: my Lenovo E590 laptop, connected over WiFi to introduce some network jitter). Result: stable for a few seconds, then it begins emitting many XRUN recoveries. I suspect this is due to the still quite low transfer size / period selected (1006 frames), whereby any network jitter destabilises the flow control. My own solution has no flow control and uses a larger transfer size / period (default 24000 frames), traded for latency of course, but it does not have these challenges. For comparison, omitting …
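For rough scale, assuming the FUNcube Dongle Pro+'s fixed 192 kHz sample rate, those two period sizes correspond to:

```
 1006 frames / 192000 Hz ≈   5.2 ms per period
24000 frames / 192000 Hz =  125   ms per period
```

So the larger default buys roughly 24x more jitter headroom at the cost of about 120 ms of extra latency.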
@phlash Something has to be very broken with the flow control. I pushed a branch to disable flow control, if that's worth trying.
What transfer size are you using? This is where the transfer size is defined: https://github.com/pothosware/SoapyRemote/blob/master/common/SoapyRemoteDefs.hpp#L91

It's currently 4096 because some platforms would bomb out on larger sizes, I think Apple and/or Windows. I think, though, that it could easily be increased on Linux.

The flow-control window comes from the remote:window setting (https://github.com/pothosware/SoapyRemote/wiki#remotewindow), which supposedly resizes the socket buffer on the receive side so the kernel guarantees that much space. The window, in transfers, is just that size divided by the transfer size. It's currently set to 42 MiB by default, which should allow something like 10K of these transfers before needing a response from flow control.
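A back-of-envelope check of the window maths described above (a sketch, not the actual SoapyRemote source; the constants are the figures quoted in this thread):

```cpp
#include <cstddef>
#include <cstdio>

int main(void)
{
    const std::size_t windowBytes   = 42ULL * 1024 * 1024; // remote:window default (42 MiB)
    const std::size_t transferBytes = 4096;                // transfer size from SoapyRemoteDefs.hpp

    // ~10752 transfers can be in flight before a flow-control response
    // is needed, i.e. the "like 10K" figure quoted above.
    std::printf("transfers per window: %zu\n", windowBytes / transferBytes);
    return 0;
}
```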
@guruofquality I'll give that a try tomorrow. I'm only guessing that it's the flow control going wrong somewhere, as that seems to be the major difference between the approach taken here and my dumb 'let the kernel sort it out' approach (which is only possible when using TCP).
@guruofquality Flow control is exonerated: your test build has similar behaviour to unmodified SoapyRemote, as does my own code when a small period is specified for the ALSA buffer in SoapyFCDPP. It looks like any overrun/overflow is down to the ability of my small test CPU to keep up when there is more task switching in general, e.g. if I enable TRACE logging with small ALSA periods, then I see overflows continuously, especially if that logging also goes over the network to the client. I have made a couple of changes to my own solution that seem to help, and they may be worth considering for SoapyRemote:
Originally posted by @guruofquality in pothosware/SoapyFCDPP#13 (comment)
How does it (your remoting solution) compare when the protocol is set to tcp for SoapyRemote?
https://github.com/pothosware/SoapyRemote/wiki#remoteprot
SoapyRemote is trying to have headers with metadata and some kind of flow control, but if plain TCP is useful, I don't see why that couldn't be a mode in SoapyStreamEndpoint.cpp.
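For reference, a minimal client-side sketch of selecting the TCP stream transport via the remote:prot argument from the wiki page above. The host address and port are placeholders, and passing remote:prot in the stream args is assumed per that wiki:

```cpp
#include <SoapySDR/Device.hpp>
#include <SoapySDR/Formats.hpp>
#include <SoapySDR/Constants.h>

int main(void)
{
    // Connect to a SoapyRemote server (example address, adjust to your setup).
    auto *dev = SoapySDR::Device::make("driver=remote,remote=tcp://192.168.1.50:55132");

    // Request TCP instead of the default UDP for the stream payload.
    SoapySDR::Kwargs streamArgs;
    streamArgs["remote:prot"] = "tcp";

    auto *stream = dev->setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32, {0}, streamArgs);
    dev->activateStream(stream);

    // ... readStream() loop goes here ...

    dev->deactivateStream(stream);
    dev->closeStream(stream);
    SoapySDR::Device::unmake(dev);
    return 0;
}
```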