API for streaming HTTP client bodies. #242

Open · jeffutter opened this issue Oct 3, 2024 · 3 comments

@jeffutter

Hi there,

Very interested in what you are doing here with wtx.

I was wondering if there is support for streaming HTTP response bodies?

I see ClientStream.recv_res, but its documentation says:

> Higher operation that awaits for the data necessary to build a response and then closes the stream.

which is not what I want.

It does indicate "Higher operation", but I can't really seem to find any lower-level client APIs to create a request and stream the response body.

I'm wondering if I'm overlooking something here. Or, if it doesn't exist, are there plans to add such APIs?

@c410-f3r (Owner) commented Oct 4, 2024

Hello @jeffutter

> streaming HTTP response bodies?
> I see ClientStream.recv_res

For servers, wtx doesn't support HTTP/2 Server Push because Google removed it from Chrome (https://developer.chrome.com/blog/removing-push?hl=fr); as such, it is not possible to send an arbitrary number of responses to clients.

From https://c410-f3r.github.io/wtx/http2/index.html:

> Passes the hpack-test-case and the h2spec test suites. Due to official deprecation, server push and prioritization are not supported.

For clients, wtx only supports sending requests to servers, and I am not aware of any other projects that allow clients to send responses.

> create a request and stream the response body.

Unlike HTTP/1, an HTTP/2 connection can live indefinitely, so once a stream is opened, both parties can theoretically transfer as much data as needed. One caveat is that this interaction is a "one-shot", semi-duplex-like (https://en.wikipedia.org/wiki/Duplex_(telecommunications)) scenario.

I have a feeling that you are looking for full-duplex communication between the client and the server. If that is the case, then WebSockets over HTTP/2 streams should be enough (https://datatracker.ietf.org/doc/html/rfc8441).

Unfortunately wtx doesn't support such a feature at the current time and I am not sure when an implementation will be available.

Another thing worth mentioning is the fact that wtx is still experimental and shouldn't be used in production environments.

Cheers

@jeffutter (Author)

Thanks for the quick reply. I should have provided some examples of what I'm asking about for clarity as some of the terms (like "stream") are pretty overloaded.

I'm looking for unidirectional communication where the client can read bytes of the response before the full response has been delivered.

A common example might be 'streaming' a video file. You have one request to the server and one response, but you don't have to wait until the entire file is downloaded before you can do anything with it.

ClientStream.recv_res returns a ReqResBuffer, which I think has the entire body buffered up in data: Vector<u8>, and then closes the HTTP/2 stream.

What I'm looking for is more like hyper's SendRequest::send_request, which returns a Result<Response<IncomingBody>> on which you can call poll_frame() (or frame() from http-body-util's BodyExt). This gets you a chunk of the body that you can use before the entire response has arrived.
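
Roughly this kind of consumption loop, a minimal sketch assuming hyper 1.x with hyper-util's legacy client and http-body-util (the URL and chunk handling are just placeholders):

```rust
// Sketch of streaming a response body chunk by chunk with hyper 1.x.
// Assumes the hyper, hyper-util ("client-legacy" + "tokio" features),
// http-body-util, and tokio crates; the URL is a placeholder.
use http_body_util::{BodyExt, Empty};
use hyper::{body::Bytes, Request};
use hyper_util::{client::legacy::Client, rt::TokioExecutor};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder(TokioExecutor::new()).build_http();
    let req = Request::builder()
        .uri("http://example.com/video")
        .body(Empty::<Bytes>::new())?;

    let mut res = client.request(req).await?;

    // Frames arrive as the server sends them, so each chunk can be
    // processed before the full body has been downloaded.
    while let Some(frame) = res.frame().await {
        if let Some(chunk) = frame?.data_ref() {
            println!("got {} bytes", chunk.len());
        }
    }
    Ok(())
}
```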

FWIW, my actual use case is to implement a client for Apollo GraphQL's multipart HTTP subscriptions, since they don't support subscriptions over WebSockets.

Hope this clears up the ask.

Thanks!

@c410-f3r (Owner) commented Oct 4, 2024

Thank you for the clarification.

> ClientStream.recv_res returns a ReqResBuffer, which I think has the entire body buffered up in data: Vector<u8>, and then closes the HTTP/2 stream.

Yeah, that is correct. All data must be available in advance.

The underlying machinery automatically splits the data according to https://datatracker.ietf.org/doc/html/rfc9113#SETTINGS_MAX_FRAME_SIZE and sends each block concurrently while respecting flow-control parameters. This approach is pragmatic but lacks flexibility, as you noticed.
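
Just to illustrate the idea (this is not wtx's code, only a toy sketch of the chunking concept):

```rust
fn main() {
    // Toy illustration only, not wtx's implementation: split a body into
    // chunks no larger than SETTINGS_MAX_FRAME_SIZE (RFC 9113; the protocol
    // default is 16_384 octets). Each chunk would become one DATA frame's
    // payload, sent subject to flow-control windows.
    let body = vec![0u8; 40_000];
    let max_frame_size = 16_384;
    for (i, payload) in body.chunks(max_frame_size).enumerate() {
        println!("DATA frame {i}: {} bytes", payload.len());
    }
}
```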

> What I'm looking for is more like hyper's SendRequest::send_request, which returns a Result<Response<IncomingBody>> on which you can call poll_frame() (or frame() from http-body-util's BodyExt). This gets you a chunk of the body that you can use before the entire response has arrived.

Oh, I see. Fine-grained or "low-level" stream operations weren’t added because no one asked until now :)

#243

I plan to add this new functionality in the next version of wtx, which will probably be released in approximately 20 days. Nevertheless, you should preferably use hyper or h2 to implement the Apollo GraphQL client.
