fix: rpc client concurrently waits for requests and new connections #62
Conversation
e6830d5 to 5dbf20b
Oh boy, that must have taken a while to find out...
Additionally, it fixes quic-rpc hanging when the server is not connected. This was caused by obtaining a connection to the server and reading the request channel sequentially. To solve this, we instead use tokio::select! to concurrently wait for requests and new connections.
Would it have been an option to just use futures::future::select? I know it's a lot of boilerplate to map to the same type, but somehow I find such code nicer than tokio::select!. Maybe it was too ugly, and in any case it does not solve the issue of the future not being cancel safe, right?
Did you raise this with the flume team?
I added a link in the code to a flume issue. They are aware of their futures not being cancel safe, but it seems they are fine with it. Linking here for completeness. I have not checked how different the code would be with a |
Replaces #55
Fixes handling a stale connection with an active request, which previously behaved as quic-rpc not realizing the server had left, producing the somewhat obscure error Open(LocallyClosed).

Additionally, it fixes quic-rpc hanging when the server is not connected. This was caused by obtaining a connection to the server and reading the request channel sequentially. To solve this, we instead use tokio::select! to concurrently wait for requests and new connections. Note that part of the code complexity introduced is due to flume's receiving future not being cancel safe.
The fix includes a test, and I have verified in iroh that the problematic behaviour is no longer there.