gep: add GEP-3388 HTTP Retry Budget #3488

Merged · 13 commits · Jan 28, 2025
4 changes: 3 additions & 1 deletion geps/gep-1731/metadata.yaml
@@ -17,7 +17,9 @@ relationships:
# or additional implementation. The extended GEP MUST have its extendedBy
# field set back to this GEP.
extends: {}
extendedBy: {}
extendedBy:
- number: 3388
name: Retry Budgets
# seeAlso indicates other GEPs that are relevant in some way without being
# covered by an existing relationship.
seeAlso:
113 changes: 113 additions & 0 deletions geps/gep-3388/index.md
@@ -0,0 +1,113 @@
# GEP-3388: Retry Budgets

* Issue: [#3388](https://github.com/kubernetes-sigs/gateway-api/issues/3388)
* Status: Provisional

(See status definitions [here](/geps/overview/#gep-states).)

## TLDR

This GEP proposes allowing configuration of a "retry budget" across all endpoints of a destination service, preventing additional client-side retries once the percentage of the active request load consisting of retries reaches a configured threshold.

## Goals

* To allow specification of a retry ["budget"](https://finagle.github.io/blog/2016/02/08/retry-budgets/) to determine whether a request should be retried, and to define how this interacts with any static retry limit configured within HTTPRoute.
* To allow specification of a percentage of active requests, or recently active requests, that should be able to be retried concurrently.
* To allow specification of a *minimum* number of retries that should be allowed per second or concurrently, such that the budget for retries never goes below this minimum value.
* To define a standard for retry budgets that reconciles the known differences in current retry budget functionality between Gateway API data plane implementations.

## Non-Goals

* To allow specifying a default retry budget policy across a namespace or attached to a specific gateway.
* To allow configuration of a back-off strategy or timeout window within the retry budget spec.
* To allow specifying inclusion of specific HTTP status codes and responses within the retry budget spec.
* To allow specification of more than one retry budget for a given service, or for specific subsets of its traffic.

## Introduction

Multiple data plane proxies offer optional configuration for budgeted retries, in order to create a dynamic limit on the share of a service's active request load that is made up of retries from across its clients. In the case of Linkerd, retry budgets are the default retry policy configuration for HTTP retries within the [ServiceProfile CRD](https://linkerd.io/2.12/reference/service-profiles/), with static max retries being a [fairly recent addition](https://linkerd.io/2024/08/13/announcing-linkerd-2.16/).

Configuring a limit for client retries is an important factor in building a resilient system, allowing requests to be successfully retried during periods of intermittent failure. But too many client-side retries can also exacerbate consistent failures and slow down recovery, quickly overwhelming a failing system and leading to cascading failures such as retry storms. Configuring a sane limit for max client-side retries is often challenging in complex systems. Allowing an application developer (Ana) to configure a dynamic "retry budget" reduces the risk of a high number of retries across clients. It allows a service to perform as expected under both high and low request load, and during both intermittent and sustained failures.
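The core mechanism can be sketched as a small, illustrative model (not any implementation's actual code): retries are permitted only while the number of recent retries stays under `max(ratio × recent requests, minimum floor)`, evaluated over a sliding window.

```python
import time
from collections import deque

class RetryBudget:
    """Illustrative sliding-window retry budget: a retry is allowed only
    while retries over the last `ttl` seconds stay within `retry_ratio`
    of recent original requests, with a floor of `min_retries_per_second`."""

    def __init__(self, retry_ratio=0.2, min_retries_per_second=10, ttl=10.0):
        self.retry_ratio = retry_ratio
        self.min_retries_per_second = min_retries_per_second
        self.ttl = ttl
        self._requests = deque()  # timestamps of original requests
        self._retries = deque()   # timestamps of granted retries

    def _expire(self, now):
        # Drop events that have aged out of the window.
        for q in (self._requests, self._retries):
            while q and now - q[0] > self.ttl:
                q.popleft()

    def record_request(self, now=None):
        now = time.monotonic() if now is None else now
        self._expire(now)
        self._requests.append(now)

    def try_retry(self, now=None):
        """Return True (and record the retry) if the budget allows one."""
        now = time.monotonic() if now is None else now
        self._expire(now)
        budget = max(len(self._requests) * self.retry_ratio,
                     self.min_retries_per_second * self.ttl)
        if len(self._retries) < budget:
            self._retries.append(now)
            return True
        return False
```

With 100 recent requests and a 20% ratio, the budget admits roughly 20 retries before rejecting further attempts, however many clients are retrying.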

While retry budget configuration has been a frequently discussed feature within the community, differences in semantics between data plane implementations create a challenge for reaching consensus on the correct location for the configuration. This proposal aims to determine where retry budgets should be defined within the Gateway API, and whether data plane proxies may need to be altered to accommodate the specification.

### Background on implementations

#### Envoy

Envoy offers retry budgets as a configurable circuit breaker threshold for concurrent retries to an upstream cluster, as an alternative to a static max retry threshold. In Istio, Envoy circuit breaker thresholds are typically configured [within the DestinationRule CRD](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-HTTPSettings), which applies rules to clients of a service after routing has already occurred.

The optional [RetryBudget](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/circuit_breaker.proto#envoy-v3-api-msg-config-cluster-v3-circuitbreakers-thresholds-retrybudget) CircuitBreaker threshold can be configured with the following parameters:

* `budget_percent` Specifies the limit on concurrent retries as a percentage of the sum of active requests and active pending requests. For example, if there are 100 active requests and the budget_percent is set to 25, there may be 25 active retries. This parameter is optional. Defaults to 20%.

* `min_retry_concurrency` Specifies the minimum retry concurrency allowed for the retry budget. The limit on the number of active retries may never go below this number. This parameter is optional. Defaults to 3.

By default, Envoy uses a static threshold for retries. But when configured, Envoy's retry budget threshold overrides any other retry circuit breaker that has been configured.
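As a concrete reference, these thresholds live under `circuit_breakers` in an Envoy cluster definition. The snippet below is a minimal sketch; the cluster name and specific values are illustrative:

```yaml
clusters:
- name: backend_service        # illustrative cluster name
  circuit_breakers:
    thresholds:
    - priority: DEFAULT
      retry_budget:
        budget_percent:
          value: 25.0              # up to 25% of active + pending requests may be retries
        min_retry_concurrency: 3   # the retry limit never drops below 3 concurrent retries
```

Note that setting `retry_budget` at all is what switches Envoy from its static `max_retries` threshold to budgeted behavior.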

#### linkerd2-proxy

The Linkerd implementation of retry budgets is configured alongside service route configuration, within the [ServiceProfile CRD](https://linkerd.io/2.12/reference/service-profiles/), limiting the number of total retries for a service as a percentage of the number of recent requests. In practice, this functions similarly to Envoy's retry budget implementation, as it is configured in a single location and measures the ratio of retry requests to original requests across all traffic destined for the service.

Linkerd uses [budgeted retries](https://linkerd.io/2.15/features/retries-and-timeouts/) as the default configuration to specify retries to a service, but - as of [edge-24.7.5](https://github.com/linkerd/linkerd2/releases/tag/edge-24.7.5) - supports counted retries. In all cases, retries are implemented by the `linkerd2-proxy` making the request on behalf of an application workload.

Linkerd's budgeted retries allow retrying an indefinite number of times, as long as the fraction of retries remains within the budget. Budgeted retries are supported only using Linkerd's native ServiceProfile CRD, which allows enabling retries, setting the retry budget (by default, 20% plus 10 "extra" retries per second), and configuring the window over which the fraction of retries to non-retries is calculated. The `retryBudget` field of the ServiceProfile spec can be configured with the following optional parameters:

* `retryRatio` Specifies a ratio of retry requests to original requests that is allowed. The default is 0.2, meaning that retries may add up to 20% to the request load.

* `minRetriesPerSecond` Specifies the minimum rate of retries per second that is allowed, so that retries are not prevented when the request load is very low. The default is 10.

* `ttl` A duration specifying how long requests are considered for when calculating the retry threshold. The default is 10s.
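Put together, a ServiceProfile spelling out these defaults explicitly might look like the following sketch (the service name and namespace are illustrative):

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: webapp.default.svc.cluster.local  # illustrative: FQDN of the target service
  namespace: default
spec:
  retryBudget:
    retryRatio: 0.2           # retries may add up to 20% to the request load
    minRetriesPerSecond: 10   # floor so retries are not starved at low load
    ttl: 10s                  # window over which the retry ratio is calculated
```

Unlike Envoy's concurrency-based accounting, this budget is computed over requests observed during the `ttl` window.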

### Proposed Design

#### Retry Budget Policy Attachment

While current retry behavior is defined at the routing rule level within HTTPRoute, exposing retry budget configuration as a policy attachment offers some advantages:
> **Member:** I know you're not proposing a specific policy to include this in yet, but I'd argue this is exactly the kind of thing we had in mind for BackendLBPolicy (cc @gcs278)

> **Contributor:** I have to respectfully disagree with @robscott here: the connection between retries and the "backend load balancer" is pretty tenuous (even in Linkerd, where the component that decides which backend gets a given request is called the load balancer 😉).
>
> That does not mean that I think we should have a retry policy and a circuit breaking policy and a timeout policy etc. etc., though. It means that:
>
> a. I remain generally opposed to policy attachment for table-stakes features, and
> b. If we have a catchall policy for configuring the way we interact with the backends, let's not call it BackendLBPolicy.

> **@mikemorris (Contributor), Jan 23, 2025:** I can draft two proposed implementations (one for a new policy resource, another adding to BackendLBPolicy) in a followup PR to avoid blocking this provisional GEP on bikeshedding this now.
>
> I do have some concerns about messaging/supportability if we start glomming several discrete optional features onto a single \*Policy CRD - I suppose it's not much worse than what we already have with the core resources, but it's perhaps simpler for implementations to message "SpecificPolicy with X optional fields is supported" than "for BroadPolicy, X feature is supported with Y optional fields, Z feature is supported with Q optional fields, etc" and I think it gets more difficult/important to promote subfields to standard channel independently rather than potentially advancing the entire resource at once.

> **Contributor:** Do you think it would help sidestep bikeshedding to describe the API simply as a stanza with relevant configuration, then have a separate discussion about where that stanza would be included?

> **Member:** My concern here is that the sheer number of resources involved in using Gateway API is overwhelming for many users (especially new ones). If we keep on with a pattern of creating a unique policy for each topic, this problem is only going to get worse. Some of the most successful Kubernetes APIs are the ones that shoved a ton of concepts into a single resource (Service, Pod, etc). Although these APIs are overloaded, they continue to be remarkably popular.
>
> > If we have a catchall policy for configuring the way we interact with the backends, let's not call it BackendLBPolicy.
>
> I think we'll need at least two backend policies - one for TLS config, and one for everything else. If you have any ideas for the name for the "everything else" one, I'd be open to them. I personally think BackendLBPolicy is ok, but can be convinced that better names exist.
>
> Longer term, I really like the idea @ptrivedi has in #3539 that would add a new backend-focused resource to the API that could replace Service for many Gateway API users. In that proposal, it's called EndpointSelector, but the general idea would be to disconnect the "frontend" bits of a Service and instead have a resource exclusively focused on the backend bits. In that world, we could replace backend policies with inline fields. Not saying we should start with that for this specific GEP, but trying to provide a vision for a future that doesn't require all these backend policies.

> **@mikemorris (Contributor), Jan 28, 2025:** Opened #3573 to continue API design discussion in a followup (still intending to resolve that by the January 30th deadline), hoping we can get this merged as provisional as-is.


* Users could define a single policy, targeting a service, that would dynamically configure a retry threshold based on the percentage of active requests across *all routes* destined for that service's backends.
> **Contributor:** Related to the above: set aside, for the moment, the idea that policy attachment is the only way to extend Service (maybe we go with endpoint Gateways, maybe we wave our magic wand and have a Service extension point, I dunno, just let's set that aside for the moment). What would you want the budgeted-retry configuration to look like in that world? What are the user stories driving that design?

> **@mikemorris (Contributor), Jan 27, 2025:** In a magic world where we have extensible "mix-ins" or similar for core resources, I would envision a retry budget may be configured directly per-Service (or per-Gateway with #3539), but because one of the benefits of budgets is their adaptability as compared against a static count retry config, a user may still want a common policy for an entire namespace or all backends in a cluster (which is not currently in scope for this GEP but could be a future extensibility pattern).


* In both Envoy and Linkerd data plane implementations, a retry budget is configured once to match all endpoints of a service, regardless of the routing rule that the request matches on. A policy attachment will allow for a single configuration for a service's retry budget, as opposed to configuring the retry budget across multiple HTTPRoute objects (see [Alternatives](#httproute-retry-budget)).

* A dynamic retry threshold could be configured at the service level alongside a static max number of retries at the route level, giving application developers more granular control over which requests should be retried. For example, an application developer may not want to perform retries on a specific route whose requests are not idempotent, and can disable retries for that route; with a retry budget policy configured, retries from other routes will still benefit from the budgeted retries.

Configuring a retry budget through a Policy Attachment may produce some confusion from a UX perspective, as users will be able to configure retries in two different places (HTTPRoute for static retries, versus a policy attachment for a dynamic retry threshold), though this is likely a fair trade-off.

Discrepancies in the semantics of retry budget behavior and configuration options between Envoy and Linkerd may require a change in either implementation to accommodate the Gateway API specification. While Envoy's `min_retry_concurrency` setting may behave similarly in practice to Linkerd's `minRetriesPerSecond`, they are not directly equivalent.

The implementation of a version of Linkerd's `ttl` parameter within Envoy might be a path towards reconciling the behavior of these implementations, as it could allow Envoy to express a `budget_percent` and minimum number of permissible retries over a period of time rather than by tracking active and pending connections. It is not currently clear which of these models is preferable, but being able to specify a budget as requests over a window of time seems like it might offer more predictable behavior.
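Although this GEP intentionally leaves the API unspecified (see the TODOs below), a Direct Policy Attachment along the lines discussed above might look roughly like the following. Everything here is hypothetical - the kind name `RetryBudgetPolicy` and every `spec.budget` field are invented for illustration only, not a proposed API:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: RetryBudgetPolicy        # hypothetical kind; the GEP has not chosen a resource name
metadata:
  name: checkout-retry-budget
  namespace: store
spec:
  targetRefs:                  # standard policy-attachment targeting, here a Service
  - group: ""
    kind: Service
    name: checkout
  budget:
    percent: 20                # hypothetical field: retries as a % of active request load
    minRetriesPerSecond: 10    # hypothetical field: floor when request load is low
    interval: 10s              # hypothetical field: measurement window, akin to Linkerd's ttl
```

A single object of this shape would cover all routes, across all HTTPRoutes, whose backends resolve to the targeted Service.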

## API

### Go

TODO

### YAML

TODO

## Conformance Details

TODO

## Alternatives

### HTTPRoute Retry Budget

* The desired UX for retry budgets is to apply the policy at the service level, rather than individually across each route targeting the service. Placing the retry budget configuration within HTTPRoute would violate this requirement, as separate HTTPRoute objects could each have routing rules targeting the same destination service, and a single HTTPRoute object can target multiple destinations. To apply a retry budget to all routes targeting a service, a user would need to duplicate the configuration across multiple routing rules.

* If we wanted retry budgets to be configured on a per-route basis (as opposed to at the service level), it would require a change to Envoy's route-level configuration, and more than likely similar changes would need to be made in Linkerd.
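For contrast, GEP-1731's per-route static retries attach to an individual routing rule, so expressing a budget this way would mean repeating configuration on every rule targeting the service. A minimal sketch (route and backend names are illustrative, and the `retry` stanza was experimental at the time of writing):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-route        # illustrative
spec:
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /checkout
    retry:                    # GEP-1731 static, per-rule retry configuration
      codes: [500, 502, 503]  # response codes eligible for retry
      attempts: 2             # fixed retry count - no awareness of aggregate load
      backoff: 100ms
    backendRefs:
    - name: checkout          # illustrative backend Service
      port: 8080
```

Every other HTTPRoute rule sending traffic to the same Service would need its own copy of this stanza, which is exactly the duplication the policy-attachment approach avoids.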

## Other considerations

* As there isn't anything inherently specific to HTTP requests in either known implementation, a retry budget policy on a target Service could likely be applicable to GRPCRoute as well as HTTPRoute requests.
* While retry budgets are commonly associated with service mesh use cases to handle many distributed clients, a retry budget policy may also be desirable for north/south implementations of Gateway API to prioritize new inbound requests and minimize tail latency during periods of service instability.

## References

* <https://gateway-api.sigs.k8s.io/geps/gep-1731/>
* <https://finagle.github.io/blog/2016/02/08/retry-budgets/>
* <https://linkerd.io/2019/02/22/how-we-designed-retries-in-linkerd-2-2/>
* <https://linkerd.io/2.11/tasks/configuring-retries/>
* <https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/circuit_breaker.proto#config-cluster-v3-circuitbreakers-thresholds-retrybudget>
35 changes: 35 additions & 0 deletions geps/gep-3388/metadata.yaml
@@ -0,0 +1,35 @@
apiVersion: internal.gateway.networking.k8s.io/v1alpha1
kind: GEPDetails
number: 3388
name: Retry Budgets
status: Provisional
# Any authors who contribute to the GEP in any way should be listed here using
# their Github handle.
authors:
- ericdbishop
- mikemorris
relationships:
# obsoletes indicates that a GEP makes the linked GEP obsolete, and completely
# replaces that GEP. The obsoleted GEP MUST have its obsoletedBy field
# set back to this GEP, and MUST be moved to Declined.
obsoletes: {}
obsoletedBy: {}
# extends indicates that a GEP extends the linked GEP, adding more detail
# or additional implementation. The extended GEP MUST have its extendedBy
# field set back to this GEP.
extends:
- number: 1731
name: HTTPRoute Retries
extendedBy: {}
# seeAlso indicates other GEPs that are relevant in some way without being
# covered by an existing relationship.
seeAlso: {}
# references is a list of hyperlinks to relevant external references.
# It's intended to be used for storing Github discussions, Google docs, etc.
references: {}
# featureNames is a list of the feature names introduced by the GEP, if there
# are any. This will allow us to track which feature was introduced by which GEP.
featureNames: {}
# changelog is a list of hyperlinks to PRs that make changes to the GEP, in
# ascending date order.
changelog: {}
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -127,6 +127,7 @@ nav:
- geps/gep-1867/index.md
- geps/gep-2648/index.md
- geps/gep-2649/index.md
- geps/gep-3388/index.md
- Implementable:
- geps/gep-3155/index.md
- Experimental: