Implement IdleDuration for RedisTokenBucketRateLimiter #157
Conversation
@cristipufu Can you help me with that SonarCloud build issue? I'm not sure if I'm doing something wrong or if it is something with the configuration.
Hi @dlxeon, thanks for your contribution. I need a few days for research and to think a bit more about this.
```csharp
private int _activeRequestsCount;
private long _lastActiveTimestamp = Stopwatch.GetTimestamp();

public override TimeSpan? IdleDuration => Interlocked.CompareExchange(ref _activeRequestsCount, 0, 0) > 0
```
Quick q: why do we need to track the number of active requests? Can't we just rely on the `_idleSince` timestamp?
This is to ensure that during `AcquireAsyncCoreInternal` execution we never report being idle when we are not.

Theoretically, if usage of this rate limiter is infrequent and for some reason the Redis connection is slow and a request takes 10+ seconds, then we would report 10+ seconds to .NET, and there is a chance .NET would decide that our rate limiter instance is idle and needs to be disposed, even though we are still waiting for `AcquireAsyncCoreInternal` to complete.

That's probably overkill, and in reality Redis shouldn't be that slow, but I didn't want to make such assumptions for a common library.
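A minimal sketch of how that counter could guard the acquire path; the structure below and the exact signature of `AcquireAsyncCoreInternal` are assumptions for illustration, not the PR's actual code:

```csharp
protected override async ValueTask<RateLimitLease> AcquireAsyncCore(int permitCount, CancellationToken cancellationToken)
{
    // Mark the limiter as busy so IdleDuration reports TimeSpan.Zero
    // while the (possibly slow) Redis round-trip is in flight.
    Interlocked.Increment(ref _activeRequestsCount);
    try
    {
        return await AcquireAsyncCoreInternal(permitCount, cancellationToken).ConfigureAwait(false);
    }
    finally
    {
        // Record the completion time before decrementing, so there is
        // no window in which the limiter can look idle mid-request.
        _lastActiveTimestamp = Stopwatch.GetTimestamp();
        Interlocked.Decrement(ref _activeRequestsCount);
    }
}
```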
```csharp
public override TimeSpan? IdleDuration => Interlocked.CompareExchange(ref _activeRequestsCount, 0, 0) > 0
    ? TimeSpan.Zero
    : TimeSpan.FromTicks(Stopwatch.GetTimestamp() - _lastActiveTimestamp);
```
Can we please change this to something similar to the runtime's in-memory implementation?
- Rename `_lastActiveTimestamp` to `_idleSince`
- Return `null` instead of `TimeSpan.Zero`
- Explicitly convert `Stopwatch` ticks into `TimeSpan` ticks using `TickFrequency` (a standalone illustration of why follows after this list):

  ```csharp
  private static readonly double TickFrequency = (double)TimeSpan.TicksPerSecond / Stopwatch.Frequency;

  public override TimeSpan? IdleDuration => _idleSince is null ? null : new TimeSpan((long)((Stopwatch.GetTimestamp() - _idleSince) * TickFrequency));
  ```

- Implement the same for the other rate limiters (fixed/sliding/concurrency)
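For context on the tick-conversion bullet: `Stopwatch.GetTimestamp()` counts ticks at `Stopwatch.Frequency`, which on many platforms is not the 100 ns resolution that `TimeSpan` ticks use, so passing the raw difference straight to `TimeSpan.FromTicks` can mis-report the idle time. A tiny standalone illustration (not part of the PR):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// TimeSpan ticks are always 100 ns; Stopwatch ticks run at Stopwatch.Frequency,
// which is platform dependent, so the raw difference must be rescaled.
double tickFrequency = (double)TimeSpan.TicksPerSecond / Stopwatch.Frequency;

long start = Stopwatch.GetTimestamp();
Thread.Sleep(50); // stand-in for real work
long elapsedStopwatchTicks = Stopwatch.GetTimestamp() - start;

TimeSpan elapsed = new TimeSpan((long)(elapsedStopwatchTicks * tickFrequency));
Console.WriteLine(elapsed); // roughly 00:00:00.0500000
```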
```csharp
}

protected override RateLimitLease AttemptAcquireCore(int permitCount)
{
    _lastActiveTimestamp = Stopwatch.GetTimestamp();
```
This is not needed
Hi @dlxeon, I want to merge this, can you please rebase?
… implementation of IdleDuration, similar behavior
Hi @cristipufu, I've addressed your suggestions for
Hi @dlxeon, thanks a lot for your contribution. Can you please rebase your PR on top of the main branch? (there are some extra commits that shouldn't be here)
… RedisFixedWindowRateLimiter,RedisSlidingWindowRateLimiter
Force-pushed from 0bf5fc9 to 24c8000
@cristipufu I've rebased and force-pushed the branch. Please check if everything is fine now.
Thanks @dlxeon!
Fix issue in RedisTokenBucketRateLimiter similar to #61
The current implementation always returns `TimeSpan.Zero` for `RateLimiter.IdleDuration`. When using a partitioned rate limiter, this means the rate limiters created for the partitions never get cleaned up and accumulate.

Behavior before the fix: memory grows up to the pod limit, causing pod restarts (100k unique IPs/day results in 100k+ rate limiters held in memory), and CPU utilization grows because .NET keeps scanning for rate limiters to clean up.
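For context, this is roughly the usage pattern where the leak shows up: the partitioned limiter only disposes a partition's limiter once its `IdleDuration` exceeds the idle threshold, so a limiter that always reports `TimeSpan.Zero` is never reclaimed. The `CreateRedisTokenBucketLimiter` factory below is a hypothetical placeholder for however the Redis-backed limiter is constructed in your setup:

```csharp
using System.Threading.RateLimiting;

// One partition (and therefore one RateLimiter instance) per client IP.
// With ~100k unique IPs a day and an IdleDuration that never signals idleness,
// none of these instances become eligible for cleanup, so they accumulate.
PartitionedRateLimiter<string> limiter = PartitionedRateLimiter.Create<string, string>(
    clientIp => RateLimitPartition.Get(
        clientIp,
        ip => CreateRedisTokenBucketLimiter(ip))); // hypothetical factory

// Every distinct key lazily creates (and caches) a limiter for that partition.
using RateLimitLease lease = limiter.AttemptAcquire("203.0.113.7");
```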

Implementation notes: