add retry limiter to backoff function #1478
base: master
Conversation
Signed-off-by: artem_danilov <[email protected]>
@Tema It's OK to introduce a rate-limiting mechanism for the kv-client. Finding the optimal balance between error handling, retry success, and avoiding overloading PD could be challenging, as could selecting a suitable default value that works for most scenarios. Another approach is similar to the TiKV health control feedback (as discussed in tikv/tikv#16297), where processing-capacity information is carried in PD responses and fed back to the kv-client. Based on this feedback, the kv-client can then decide its concurrency-control and rate-limiting strategy accordingly.
Thanks cfzjywxk for the comment. I think it is not always possible for PD to provide this information to TiDB when it is completely overloaded. tikv/pd#8678 proposes a more sophisticated solution that covers that case as well. Maybe it is worth seeing whether it could be incorporated into tikv/tikv#16297, which you mentioned.
```go
@@ -96,6 +98,50 @@ func NewConfig(name string, metric *prometheus.Observer, backoffFnCfg *BackoffFn
	}
}

type RetryRateLimiter struct {
```
Exported functions and type definitions need doc comments, e.g. `// RetryRateLimiter is used to limit retry times for PD requests.`, or the lint check would fail.
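For context, the convention the linter enforces is a doc comment immediately above each exported identifier, starting with that identifier's name. A minimal sketch (the package name and simplified signature are illustrative, not the PR's exact code):

```go
package retry // hypothetical package name for illustration

// RetryRateLimiter is used to limit retry times for PD requests.
type RetryRateLimiter struct {
	cap int32 // maximum number of retry tokens
}

// NewRetryRateLimiter returns a RetryRateLimiter with the given token cap.
// (Signature simplified for illustration.)
func NewRetryRateLimiter(cap int32) *RetryRateLimiter {
	return &RetryRateLimiter{cap: cap}
}
```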
```go
	cap int32
}

func NewRetryRateLimiter(cap int32, ratio float32) *RetryRateLimiter {
```
Ditto for the comments, and better to explain the meaning of the input parameters. Besides, would it be less expensive to use an `int` or `uint` type for `ratio` instead of float values?
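To illustrate that suggestion, the constructor could take an integer "successes per retry" count instead of a float ratio and document both parameters. A sketch under those assumptions (the field and parameter names are hypothetical, not the PR's actual code):

```go
// RetryRateLimiter caps retries relative to successful calls
// (hypothetical integer-based variant of the PR's float-ratio API).
type RetryRateLimiter struct {
	tokenCount        int32 // remaining retry budget
	cap               int32 // maximum retry budget
	successesPerRetry int32 // one retry token is granted per this many successes
}

// NewRetryRateLimiter creates a limiter that starts with a full budget
// of `cap` retry tokens and accrues one token per `successesPerRetry`
// successful calls (e.g. 10 instead of a 0.1 float ratio).
func NewRetryRateLimiter(cap, successesPerRetry int32) *RetryRateLimiter {
	return &RetryRateLimiter{tokenCount: cap, cap: cap, successesPerRetry: successesPerRetry}
}
```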
```go
	}
}

// add a token to the rate limiter bucket according to configured retry to success ratio and cap
```
The comment format should be like `// addRetryToken is a ...`; it needs to start with the function name.
```go
// add a token to the rate limiter bucket according to configured retry to success ratio and cap
func (r *RetryRateLimiter) addRetryToken() {
	if rand.Float32() < r.allowedRetryToSuccessRatio {
```
As mentioned above, would it be less expensive to use integer random values?
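Continuing the integer-based sketch above, the probabilistic token accrual could replace the float comparison with an integer draw: `rand.Int31n(n) == 0` fires with probability 1/n, which matches a 0.1 ratio when n is 10. Again a sketch, not the PR's code:

```go
import (
	"math/rand"
	"sync/atomic"
)

// addRetryToken adds a retry token with probability 1/successesPerRetry,
// never growing the budget beyond cap. It uses an integer draw instead
// of rand.Float32, per the review suggestion.
func (r *RetryRateLimiter) addRetryToken() {
	if rand.Int31n(r.successesPerRetry) == 0 {
		for {
			cur := atomic.LoadInt32(&r.tokenCount)
			if cur >= r.cap {
				return // bucket already full
			}
			if atomic.CompareAndSwapInt32(&r.tokenCount, cur, cur+1) {
				return
			}
		}
	}
}
```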
```go
	return false
}

func NewConfigWithRetryLimit(name string, metric *prometheus.Observer, backoffFnCfg *BackoffFnCfg, retryRateLimiter *RetryRateLimiter, err error) *Config {
```
Ditto for the comments on the exported function.
```go
	BoTiFlashRPC       = NewConfig("tiflashRPC", &metrics.BackoffHistogramRPC, NewBackoffFnCfg(100, 2000, EqualJitter), tikverr.ErrTiFlashServerTimeout)
	BoTxnLock          = NewConfig("txnLock", &metrics.BackoffHistogramLock, NewBackoffFnCfg(100, 3000, EqualJitter), tikverr.ErrResolveLockTimeout)
	BoPDRPC            = NewConfig("pdRPC", &metrics.BackoffHistogramPD, NewBackoffFnCfg(500, 3000, EqualJitter), tikverr.NewErrPDServerTimeout(""))
	BoPDRegionMetadata = NewConfigWithRetryLimit("pdRegionMetadata", &metrics.BackoffHistogramPD, NewBackoffFnCfg(500, 3000, EqualJitter), NewRetryRateLimiter(10, 0.1), tikverr.NewErrPDServerTimeout(""))
```
As discussed in the previous comments, it would be challenging to choose a default value for all kinds of scenarios. Do we have some tests for the choice of `10, 0.1`?
The current backoff policy does little to limit TiDB retries when it fetches Region metadata from PD and PD has issues serving that metadata.

This PR adds the ability to configure a global retry limiter for the Backoff function on a per-config basis. It also creates a new Backoff config dedicated to PD Region metadata calls, which will be used in TiDB in a separate PR.

The config above allows a single retry per 10 previous successful calls (ratio 0.1), but limits the overall retry budget to 10 tokens. The limiter always starts with a full budget of retries.
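To make those semantics concrete, here is a minimal, self-contained sketch of such a token-bucket limiter (illustrative only; names and internals may differ from the PR's actual implementation):

```go
package main

import (
	"fmt"
	"math/rand"
	"sync/atomic"
)

// RetryRateLimiter is a token bucket for retries: it starts with a
// full budget of `cap` tokens, consumes one token per permitted retry,
// and refills one token per successful call with probability `ratio`.
type RetryRateLimiter struct {
	tokens int32
	cap    int32
	ratio  float32
}

// NewRetryRateLimiter returns a limiter with a full initial budget.
func NewRetryRateLimiter(cap int32, ratio float32) *RetryRateLimiter {
	return &RetryRateLimiter{tokens: cap, cap: cap, ratio: ratio}
}

// addRetryToken is called after a successful request; with ratio 0.1 it
// refills roughly one retry token per 10 successes, never above cap.
func (r *RetryRateLimiter) addRetryToken() {
	if rand.Float32() < r.ratio {
		for {
			cur := atomic.LoadInt32(&r.tokens)
			if cur >= r.cap {
				return
			}
			if atomic.CompareAndSwapInt32(&r.tokens, cur, cur+1) {
				return
			}
		}
	}
}

// takeRetryToken reports whether a retry is allowed, consuming a token.
func (r *RetryRateLimiter) takeRetryToken() bool {
	for {
		cur := atomic.LoadInt32(&r.tokens)
		if cur <= 0 {
			return false // budget exhausted; give up instead of hammering PD
		}
		if atomic.CompareAndSwapInt32(&r.tokens, cur, cur-1) {
			return true
		}
	}
}

func main() {
	limiter := NewRetryRateLimiter(10, 0.1) // the pdRegionMetadata defaults above
	fmt.Println(limiter.takeRetryToken())   // true: the budget starts full
}
```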