[PECO-729] Improve retry behavior #230
Conversation
…algorithm
…of Thrift operations)
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #230      +/-   ##
==========================================
+ Coverage   93.09%   93.19%   +0.09%
==========================================
  Files          62       63       +1
  Lines        1478     1513      +35
  Branches      256      262       +6
==========================================
+ Hits         1376     1410      +34
- Misses         40       41       +1
  Partials       62       62
==========================================
expect(result.retryAfter).to.equal(200);
});

it('should use backoff when `Retry-After` header is missing', async () => {
@benc-db, was there an issue in serverless where the Retry-After was too short and we ended up running out of retries while waiting for the cluster to start up?
Yeah, the issue we were hitting was that Retry-After is always either 1s for serverless or 5s for classic. For pysql, we moved back to always using this Retry-After as a floor, and otherwise using exponential backoff, because 2.5 minutes (or 30s for serverless!) is often not enough time for compute to become available.
@andrefurlan-db @benc-db I updated the code so that Retry-After is now used as a lower bound for the backoff algorithm (not instead of backoff) - eac7d77. Please take one more look. Thank you!
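The approach discussed in this thread (exponential backoff with the server's `Retry-After` acting as a floor rather than a replacement) can be sketched roughly as follows. This is an illustrative sketch only, not the PR's actual implementation; the function and parameter names (`getRetryDelay`, `retryDelayMinMs`, `retryDelayMaxMs`) are assumptions.

```typescript
// Sketch: exponential backoff where a server-provided Retry-After
// header is a lower bound on the delay, not a substitute for backoff.
// All names here are illustrative, not from the PR.
function getRetryDelay(
  attempt: number,
  retryAfterSeconds: number | undefined,
  retryDelayMinMs: number = 1000,
  retryDelayMaxMs: number = 60000,
): number {
  // Exponential backoff: min * 2^attempt, capped at the configured max
  const backoff = Math.min(retryDelayMinMs * 2 ** attempt, retryDelayMaxMs);
  // Retry-After (seconds, per HTTP spec) converted to ms; acts as a floor
  const floorMs = (retryAfterSeconds ?? 0) * 1000;
  return Math.max(backoff, floorMs);
}

// Early attempts: a Retry-After of 5s dominates the small backoff
console.log(getRetryDelay(0, 5)); // 5000
// Later attempts: backoff has grown past the floor and takes over
console.log(getRetryDelay(4, 5)); // 16000
```

This preserves the server's hint for the first few attempts (avoiding hammering a starting cluster) while still growing the wait long enough for compute to become available, which addresses the "1s serverless / 5s classic" problem described above.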
…ks/databricks-sql-nodejs into PECO-729-improve-retry-strategy
PECO-729: Retry-After header and update backoff algorithm. Should be easier (hopefully) to review commit by commit.

Note: this PR doesn't include logic for retry on network errors. That part will be covered in a follow-up.