Add TestBackpressure test #273
Conversation
Force-pushed from 0cb4954 to 9d8aa24
Hey @adam-fowler, I'm about to do a full review but I wanted to let you know early that you'll need to run the test generation script to get the CI working.
Sorry Cory, forgot about that. Sorted now.
Cool, broadly looks good! A few notes in the diff.
@@ -2533,4 +2533,59 @@ class HTTPClientTests: XCTestCase {
        XCTAssertEqual(info.connectionNumber, 1)
        XCTAssertEqual(info.requestNumber, 1)
    }

    func testBackpressue() {
Nit:
- func testBackpressue() {
+ func testBackpressure() {
oh dear
            count += 1
        }
        // wait one second before returning a successful future
        return task.eventLoop.scheduleTask(in: .milliseconds(1000)) {
Waiting a second does force this test to take quite a long time. Is there any reason we think this test wouldn't work equally well with a delay of, say, 200ms instead of the full second?
Reduced to 200ms
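For illustration, a minimal sketch of what the agreed change amounts to, hedged since the merged code may differ; `delayedSuccess` is a hypothetical helper name:

```swift
import NIO

// Hypothetical helper: the same scheduled-task pattern as the diff above,
// but with the 200 ms delay agreed in this thread. The futureResult of the
// empty scheduled task completes once the delay has elapsed.
func delayedSuccess(on eventLoop: EventLoop) -> EventLoopFuture<Void> {
    return eventLoop.scheduleTask(in: .milliseconds(200)) {}.futureResult
}
```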
No need to apologise! Easily done. 😁
Nice, looks exactly right. Test failures seem to back up #274 as well, so this is a good jumping-off point for providing a fix.
Looks good, couple of questions.
    func didFinishRequest(task: HTTPClient.Task<Response>) throws {}
}

let elg = MultiThreadedEventLoopGroup(numberOfThreads: 5)
Can't we use the self.defaultHTTPClient?
Then we don't need a group or a client.
Then we don't need a group or a client
I found that a client using an ELG with more than one thread is more likely to fail.
I think we should get #177 in for that tbh. But your call.
Until #177 is merged I'll leave this using the local client.
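For context, a sketch of the local-client setup under discussion, assuming the usual AsyncHTTPClient pattern of shutting the client down before the event loop group it borrows:

```swift
import AsyncHTTPClient
import NIO
import XCTest

// inside the test method; `elg` matches the diff above
let elg = MultiThreadedEventLoopGroup(numberOfThreads: 5)
let httpClient = HTTPClient(eventLoopGroupProvider: .shared(elg))
defer {
    // shut down the client first, then the group it was sharing
    XCTAssertNoThrow(try httpClient.syncShutdown())
    XCTAssertNoThrow(try elg.syncShutdownGracefully())
}
```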
@@ -2533,4 +2533,59 @@ class HTTPClientTests: XCTestCase {
        XCTAssertEqual(info.connectionNumber, 1)
        XCTAssertEqual(info.requestNumber, 1)
    }

    func testBackpressure() {
I think we should maybe name it testDownloadBackpressure? That'd match the existing test.
Ok
    func didReceiveBodyPart(task: HTTPClient.Task<Response>, _ part: ByteBuffer) -> EventLoopFuture<Void> {
        self.lock.withLock {
            // if processingBodyPart is true then previous body part is still being processed
            // XCTAssertEqual doesn't work here so store result to test later
Out of interest: what about XCTAssertEqual doesn't work here?
XCTAssertEqual doesn't return an error, while assert does. Not totally sure why.
@adam-fowler yes, it continues the execution but it will still fail the test run once it's complete. So I think you can just use XCTAssert...
In this situation it doesn't work.
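A sketch of the store-now, assert-later pattern being described, assuming Lock from NIOConcurrencyHelpers; `ViolationRecorder` is a hypothetical name, the PR does this inline in its delegate:

```swift
import NIOConcurrencyHelpers

final class ViolationRecorder {
    private let lock = Lock()
    private var violated = false

    // called from the delegate on the event loop
    func recordEntry(previousPartStillProcessing: Bool) {
        self.lock.withLockVoid {
            // remember the failure rather than asserting on the event loop
            if previousPartStillProcessing {
                self.violated = true
            }
        }
    }

    // read from the test thread once the request has completed
    var didViolate: Bool {
        return self.lock.withLock { self.violated }
    }
}
```

The test body can then run XCTAssertFalse(recorder.didViolate) after the request completes, where a failure reliably fails the run.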
        return task.eventLoop.makeSucceededFuture(())
    }

    func didReceiveBodyPart(task: HTTPClient.Task<Response>, _ part: ByteBuffer) -> EventLoopFuture<Void> {
How are we making sure that didReceiveBodyPart is actually invoked multiple times? Because if it's only invoked once, then we don't know if download streaming works or not, I think?
We aren’t at the moment. I can add a test to see if it did. Is there a way we can force it?
@adam-fowler From the NIO level: yes, you could set ChannelOptions.maxMessagesPerRead to 1 and set the allocator to FixedSizeRecvAllocator(1), which means that NIO will read everything byte by byte. That'll cause AHC to call out multiple times.
The issue here is that through AHC's API you can't easily mess with the underlying Channel... You could make it an internal test and force it that way.
The other (probably easier) option is:
- Download say 1 MB of data. NIO's default allocator will never send you more than 64k. Given that AHC doesn't change it, you can't get 1 MB in one big chunk.
- Add XCTAssertGreaterOrEqual(numberOfCalls, 2) or so.
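For reference, the raw NIO setup sketched above would look roughly like this on a Channel you control (not reachable through AHC's public API, as noted; the actual NIO spelling is FixedSizeRecvByteBufferAllocator):

```swift
import NIO

// `channel` is a hypothetical raw NIO Channel; force byte-by-byte reads
let configured = channel.setOption(ChannelOptions.maxMessagesPerRead, value: 1)
    .flatMap {
        channel.setOption(ChannelOptions.recvAllocator,
                          value: FixedSizeRecvByteBufferAllocator(capacity: 1))
    }
```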
Added HTTPBin path "/zeros/100000" so we can guarantee at least 3 body parts.
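A hedged sketch of what the resulting assertion could look like; httpBin, httpClient, and the delegate's bodyPartCount counter are assumptions following the test file, and XCTAssertGreaterThanOrEqual is XCTest's spelling of the assertion suggested above:

```swift
import XCTest

// stream a response large enough that NIO must split it into several reads,
// then assert on the number of didReceiveBodyPart calls
let request = try HTTPClient.Request(url: "http://localhost:\(httpBin.port)/zeros/100000")
let delegate = BackpressureTestDelegate()
_ = try httpClient.execute(request: request, delegate: delegate).wait()
XCTAssertGreaterThanOrEqual(delegate.bodyPartCount, 2)
```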
Just as a heads up, the main development branch has been changed to main. This PR has been re-targeted to main and should just work. However, when performing rebases etc. please keep this in mind -- you may want to fetch the main branch and rebase onto it.
All for this, but the default branch is still the old one.
Sure thing.
Add response delegate backpressure test. Creates a delegate that returns the futureResult of a scheduleTask that finishes after one second has passed. The processingBodyPart flag is set at the beginning of didReceiveBodyPart and cleared when the scheduled task runs. Adds a check for whether this flag is already set when entering didReceiveBodyPart; if so, backpressure wasn't applied and the previous didReceiveBodyPart task is still running. Also adds a count which increments on entry to didReceiveBodyPart and decrements when the scheduled task runs.
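Putting the description together, a self-contained sketch of the mechanism, assuming Lock from NIOConcurrencyHelpers and the 200 ms delay settled on in review; this is an illustration, not the exact code merged:

```swift
import AsyncHTTPClient
import NIO
import NIOConcurrencyHelpers
import XCTest

final class BackpressureTestDelegate: HTTPClientResponseDelegate {
    typealias Response = Void

    let lock = Lock()
    var processingBodyPart = false   // set on entry, cleared by the scheduled task
    var backpressureViolated = false // stored here; asserted later on the test thread
    var count = 0         // incremented on entry, decremented when the scheduled task runs
    var bodyPartCount = 0 // monotone counter for the "at least N parts" check sketched earlier

    func didReceiveBodyPart(task: HTTPClient.Task<Response>, _ part: ByteBuffer) -> EventLoopFuture<Void> {
        self.lock.withLockVoid {
            // if the flag is still set, the previous part's future hasn't
            // completed yet, so backpressure wasn't applied
            if self.processingBodyPart {
                self.backpressureViolated = true
            }
            self.processingBodyPart = true
            self.count += 1
            self.bodyPartCount += 1
        }
        // delay completion by 200 ms to simulate slow processing; returning
        // this future is what asks AHC to hold off delivering the next part
        return task.eventLoop.scheduleTask(in: .milliseconds(200)) {
            self.lock.withLockVoid {
                self.processingBodyPart = false
                self.count -= 1
            }
        }.futureResult
    }

    func didFinishRequest(task: HTTPClient.Task<Response>) throws {}
}
```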