Immediate retries might not be the best approach when the root cause is an overloaded or throttled resource, as they may exacerbate the problem.
Describe the requested feature
Similar to the powerful declarative syntax for Unrecoverable Exceptions described in #7093, allow declaration of exception criteria that should not be immediately retried.
If some component of the system is overloaded, then automatic rate limiting (endpointConfiguration.Recoverability().OnConsecutiveFailures(...)) can slow things down to let the component recover. Automatic rate limiting could be thought of as the chainsaw approach, where ImmediateRetriesNotNeededFor<T>() is like a scalpel. One is good for ice sculptures, but the other may be more appropriate when the plastic surgeon approaches your reconstructive surgery.
If there are multiple handlers in a service, for example with a saga, consider the case where only one handler is accessing a limited resource that may be throwing a specific exception. OnConsecutiveFailures(...) would slow message processing for all handlers in the service, but ImmediateRetriesNotNeededFor<T>(...) would limit the impact of the slowdown.
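The requested syntax might look something like the following sketch (hypothetical API: ImmediateRetriesNotNeededFor<T>() is only the proposed name and does not exist in NServiceBus today; the endpoint name and predicate are illustrative):

```csharp
using System;
using NServiceBus;

// Hypothetical configuration sketch, mirroring the declarative style of
// the unrecoverable-exceptions API from #7093. None of this exists yet.
var endpointConfiguration = new EndpointConfiguration("Sales");
var recoverability = endpointConfiguration.Recoverability();

// Proposed: exceptions of this type would skip immediate retries and go
// straight to delayed retries.
recoverability.ImmediateRetriesNotNeededFor<TimeoutException>();

// Proposed overload: a predicate to further refine the criteria.
recoverability.ImmediateRetriesNotNeededFor<InvalidOperationException>(
    ex => ex.Message.Contains("overloaded"));
```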
In theory, you could build something similar yourself by defining exception types that you look for in the recoverability policy and having your devs throw them when appropriate.
// in handler code
if (ex.Message.Contains("bla"))
{
    throw new ImmediateRetriesNotNeededException();
}

// in a custom recoverability policy
if (context.Exception is ImmediateRetriesNotNeededException)
{
    // skip immediate retries, go straight to delayed retries
}
Not sure if I like this myself, though, since it bleeds into exception names, which will get logged, etc.
Thanks, Andreas. Yes, that is similar to what we are doing right now. What I was thinking about more is this: if we have a certain SQL exception type that says the DB is overloaded, we want to skip immediate retries and only do delayed retries. Is there a more fluent way to define this behavior based on properties of the exception, instead of dropping into the imperative way we define a custom recoverability policy?
There is nothing inherently wrong with how custom recoverability policies work in NSB; it is just that they can have bugs, doing things like infinite immediate retries that destroy our system's performance. For some reason we have trouble catching these issues before they bring down production. If there were a way to configure this behavior, along the lines of "skip immediate retries for exceptions matching this Func<Exception, bool>", while still using the default recoverability policy, that would be wonderful.
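For reference, something close to this can be approximated today by wrapping the default policy in a custom one. The following is a sketch, not a vetted implementation: CustomPolicy and DefaultRecoverabilityPolicy.Invoke are the existing NServiceBus extension points, while the shouldSkipImmediateRetries predicate, its criterion, and the 10-second delay are illustrative assumptions:

```csharp
using System;
using NServiceBus;

// Hypothetical user-supplied predicate for "don't bother with
// immediate retries for these exceptions".
Func<Exception, bool> shouldSkipImmediateRetries =
    ex => ex is TimeoutException; // illustrative criterion

var endpointConfiguration = new EndpointConfiguration("Sales");
var recoverability = endpointConfiguration.Recoverability();

recoverability.CustomPolicy((config, context) =>
{
    if (shouldSkipImmediateRetries(context.Exception))
    {
        // Go straight to a delayed retry instead of immediate retries.
        // A production version should also check
        // context.DelayedDeliveriesPerformed against the configured
        // maximum to avoid retrying forever.
        return RecoverabilityAction.DelayedRetry(TimeSpan.FromSeconds(10));
    }

    // Everything else falls through to the default policy.
    return DefaultRecoverabilityPolicy.Invoke(config, context);
});
```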
From: Andreas Ohlund
Sent: Tuesday, April 2, 2024 5:30 AM
To: Ben Brandt
Subject: Re: Recoverability policies
Hi Ben!
You do have access to the full exception so you should be able to write:
if (context.Exception.Message.Contains("SQL")) { return RecoverabilityAction.Discard("Blah."); }
Would that work?
/Andreas
On Thu, Mar 28, 2024 at 10:23 PM Ben Brandt wrote:
Thanks, Andreas!
What's the best way to describe more advanced exception filtering like SqlExceptions that have a SqlErrors[...].Number of a certain value?
Is there anything like this MassTransit feature, where a filter function can be specified to further refine the exception filter?
and you should be able to filter on the exception type and the message type. Note that the message type would be a string in the headers called `NServiceBus.EnclosedMessageTypes`
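Combining both filters could be sketched like this inside a custom recoverability policy (assumptions: the deadlock error number 1205 is only an illustrative SqlException criterion, and "MyMessages.PlaceOrder" is a hypothetical message type; CustomPolicy, the ErrorContext members, and the header name are the real NServiceBus pieces):

```csharp
using System;
using System.Linq;
using Microsoft.Data.SqlClient;
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("Sales");
var recoverability = endpointConfiguration.Recoverability();

recoverability.CustomPolicy((config, context) =>
{
    // Refine by SqlException error numbers (1205 = deadlock, illustrative).
    var isContentionError =
        context.Exception is SqlException sqlEx &&
        sqlEx.Errors.Cast<SqlError>().Any(e => e.Number == 1205);

    // Refine by the logical message type carried in the headers.
    context.Message.Headers.TryGetValue(
        "NServiceBus.EnclosedMessageTypes", out var messageTypes);

    if (isContentionError &&
        messageTypes?.Contains("MyMessages.PlaceOrder") == true)
    {
        return RecoverabilityAction.DelayedRetry(TimeSpan.FromSeconds(30));
    }

    return DefaultRecoverabilityPolicy.Invoke(config, context);
});
```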
Describe the feature.
Is your feature related to a problem? Please describe.
Some exceptions may be transient, but in a high throughput environment, immediate retries of specific types of exceptions may not be beneficial.
https://docs.particular.net/architecture/recoverability#transient-errors
Additional Context
Related to:
Sent: Wednesday, April 3, 2024 1:42 AM
To: Ben Brandt
Subject: Re: Recoverability policies