Too many resources may be created when using lazy-warmup #173
Conversation
My feedback is in the comments. I think I understand the justification for the need to not allocate more than was demanded, but I'm afraid the provided solution will not prevent excessive warmup in case of high concurrency. It would work when concurrency is limited I believe. And perhaps in a highly concurrent scenario we'd actually want to allocate more. That's why I'm on the fence and just commenting instead of approving/requesting changes.
@@ -99,6 +99,9 @@ public class SimpleDequePool<POOLABLE> extends AbstractPool<POOLABLE> {

 	Disposable evictionTask;

+	// Flag used to avoid creating resources while warmup is in progress
+	volatile boolean warmupInProgress = false;
Would it make sense to prevent drainLoop() from being called concurrently instead? Let's say that more than one concurrent acquire happens. Both increment the demand and add themselves to the Borrower queue. However, just one of them enters the drainLoop and delivers all necessary resources. Before exiting, the loop can check whether more demand has been added, and repeat the procedure so that every acquire is satisfied in the end. That would guarantee no simultaneous warmup happens.
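The exit-check idea described above is the classic "missed work" drain loop. Here is a minimal, self-contained sketch of that pattern (plain Java; the String borrowers and the printed "deliver" lines are illustrative stand-ins, not the actual reactor-pool types):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class DrainSketch {
    static final AtomicInteger WIP = new AtomicInteger();
    static final Queue<String> borrowers = new ConcurrentLinkedQueue<>();

    // Only the caller that moves WIP from 0 to 1 enters drainLoop();
    // every other concurrent caller just bumps the counter.
    static void drain() {
        if (WIP.getAndIncrement() == 0) {
            drainLoop();
        }
    }

    static void drainLoop() {
        int missed = 1;
        for (;;) {
            String b;
            while ((b = borrowers.poll()) != null) {
                System.out.println("deliver to " + b);
            }
            // Re-check whether more demand arrived while draining; exit
            // only once every concurrent drain() call has been accounted for.
            missed = WIP.addAndGet(-missed);
            if (missed == 0) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        borrowers.add("borrower-1");
        borrowers.add("borrower-2");
        drain();
        System.out.println("WIP after drain: " + WIP.get());
    }
}
```

Concurrent drain() callers only increment WIP; the single thread inside drainLoop keeps looping until addAndGet(-missed) reaches zero, so demand added mid-drain is never lost.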
Just to clarify why drainLoop can be executed concurrently: both evictInBackground and pendingOffer methods have a check like this:
if (WIP.decrementAndGet(this) > 0) {
drainLoop();
}
And it doesn't exclude other threads from entering.
If I'm correct, drainLoop can't be run concurrently; it is protected by the WIP design pattern.
However, I will need to give your comment more thought, thanks.
This is not the typical WIP pattern in use here. The condition above should match the one in the drain method:
void drain() {
if (WIP.getAndIncrement(this) == 0) {
drainLoop();
}
}
but that is not the case for evictInBackground and pendingOffer.
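To make the concern concrete, here is a single-threaded replay of one problematic interleaving, assuming checks of the shape quoted above (schematic only, not the actual reactor-pool code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Replays the interleaving where thread A enters drainLoop() via drain()
// while thread B's decrementAndGet(...) > 0 check also lets it in.
public class UnsafeWipCheck {
    public static void main(String[] args) {
        AtomicInteger wip = new AtomicInteger();

        // Thread A: drain() — WIP goes 0 -> 1, so A enters drainLoop()
        boolean aEntersDrainLoop = wip.getAndIncrement() == 0;

        // Thread B: a pendingOffer-style path increments WIP (1 -> 2),
        // then runs the flawed exit check while A is still in drainLoop()
        wip.incrementAndGet();
        boolean bEntersDrainLoop = wip.decrementAndGet() > 0; // 2 -> 1, true!

        // Both conditions hold, so drainLoop() would run on two threads at once
        System.out.println("A enters: " + aEntersDrainLoop);
        System.out.println("B enters: " + bEntersDrainLoop);
    }
}
```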
	// flatMap will eagerly subscribe to the allocator from the current thread, but the concurrency
	// can be controlled from configuration
	final int mergeConcurrency = Math.min(poolConfig.allocationStrategy().warmupParallelism(), toWarmup + 1);
	warmupInProgress = true;
This flag does not guarantee exclusivity, IMO. A stress test could be added to validate my suspicion, and I worry it would hold true :(
I also think so
For the moment I do not see the concurrency issue, but I will investigate this and create a stress test.
The flag is only set from the drainLoop, and it is reset once the subscription to the Flux.range completes. If drainLoop misses the update of warmupInProgress, then drain() will cause drainLoop to be called again.
However, since both of you are seeing an issue here, I'll carefully revisit all this.
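For reference, one common way to get the exclusivity being debated is a compare-and-set guard rather than a plain volatile check-then-write; a minimal sketch with AtomicBoolean (illustrative, not the PR's code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A check-then-set on a plain volatile boolean is two separate operations,
// so two threads can both observe "false" and both start a warmup.
// compareAndSet closes that window: exactly one caller wins.
public class WarmupGuardSketch {
    public static void main(String[] args) {
        AtomicBoolean warmupInProgress = new AtomicBoolean(false);

        boolean firstCallerStartsWarmup = warmupInProgress.compareAndSet(false, true);
        boolean secondCallerStartsWarmup = warmupInProgress.compareAndSet(false, true);

        System.out.println("first starts warmup: " + firstCallerStartsWarmup);
        System.out.println("second starts warmup: " + secondCallerStartsWarmup);

        // once the warmup completes (or errors!), the winner must reset the flag
        warmupInProgress.set(false);
    }
}
```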
	}, alreadyPropagatedOrLogged -> drain(), () -> {
		warmupInProgress = false;
		drain();
	});
On error, warmupInProgress will continue to stay true. Is that intentional?
Good catch; #175 partially addresses this problem.
warmupInProgress should indeed be reset even if there is an error. Will think about this.
Closing this PR, see #172.
When the reactor-pool is configured with the sizeBetween method using min/max values, more resources than expected may be created. For example, the code from the issue report creates 4 resources instead of the expected 3. This is not a bug per se, but we can arrange to create only the minimal necessary resources (3 in the above case).
The extra resource is created because, when the second acquire takes place, the allocation of the two extra resources lazily created after the first acquire is still in progress. The allocator is subscribed asynchronously using .subscribeOn(Schedulers.single()), so the in-flight warmup allocations are not yet accounted for when the second acquire arrives. As a result, we end up with four created resources, while we would expect only three.
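The interleaving can be illustrated with a count-only model (schematic plain Java, not the real pool code; min = 3 matches the issue's expected total):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Models only the resource counting of the reported scenario: one acquire,
// an asynchronous lazy warmup up to min, then a second acquire mid-warmup.
public class LazyWarmupOvershoot {
    public static void main(String[] args) {
        final int min = 3;
        AtomicInteger created = new AtomicInteger();

        created.incrementAndGet();                // acquire #1: allocates 1 resource
        int warmupInFlight = min - created.get(); // lazy warmup starts 2 more, asynchronously

        // acquire #2 arrives before the warmup completes; the pool sees no
        // idle resource and the in-flight allocations are not yet counted,
        // so it allocates yet another resource
        created.incrementAndGet();

        created.addAndGet(warmupInFlight);        // warmup finishes its 2 allocations

        System.out.println("created: " + created.get()); // 4, one more than min
    }
}
```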
Fixes #172