Ignore stale gossip packets that arrive out of order #1777
base: unstable
Conversation
There is a failure in the daily:

```
=== ASSERTION FAILED ===
==> cluster_legacy.c:6588 'primary->replicaof == ((void *)0)' is not true
```

These are the logs (the `-` lines are my annotations):

```
- I am fd4318562665b4490ccc86e7f7988017cf960371, and myself becomes a replica;
- 763c0167232dae95cdcc0a1568cd5368ac3b99f5 is the new primary.
27867:M 24 Feb 2025 00:19:11.011 * Failover auth granted to 763c0167232dae95cdcc0a1568cd5368ac3b99f5 () for epoch 9
27867:M 24 Feb 2025 00:19:11.039 * Configuration change detected. Reconfiguring myself as a replica of node 763c0167232dae95cdcc0a1568cd5368ac3b99f5 () in shard c5f6b2a9c74cabd4d1e54d1130dc9cb9419bf76f
27867:S 24 Feb 2025 00:19:11.039 * Before turning into a replica, using my own primary parameters to synthesize a cached primary: I may be able to synchronize with the new primary with just a partial transfer.
27867:S 24 Feb 2025 00:19:11.039 * Connecting to PRIMARY 127.0.0.1:23654
27867:S 24 Feb 2025 00:19:11.039 * PRIMARY <-> REPLICA sync started
- Here myself receives a stale message, but we still process the packet, which causes this issue.
27867:S 24 Feb 2025 00:19:11.040 * Ignore stale message from 763c0167232dae95cdcc0a1568cd5368ac3b99f5 () in shard c5f6b2a9c74cabd4d1e54d1130dc9cb9419bf76f; gossip config epoch: 8, current config epoch: 9
27867:S 24 Feb 2025 00:19:11.040 * Node 763c0167232dae95cdcc0a1568cd5368ac3b99f5 () is now a replica of node fd4318562665b4490ccc86e7f7988017cf960371 () in shard c5f6b2a9c74cabd4d1e54d1130dc9cb9419bf76f
```

We can see that myself received a stale message, but we still processed it, changed the role, and caused a primary-replica chain loop.

The reason is the following (this text is copied from valkey-io#651). In some rare cases, slot config updates (via either PING/PONG or UPDATE) can be delivered out of order, as illustrated below:

1. To keep the discussion simple, let's assume we have 2 shards, shard a and shard b. Let's also assume there are two slots in total, with shard a owning slot 1 and shard b owning slot 2.
2. Shard a has two nodes: primary A and replica A*; shard b has primary B and replica B*.
3. A manual failover was initiated on A* and A* just wins the election.
4. A* announces to the world that it now owns slot 1 using PING messages. These PING messages are queued in the outgoing buffer to every other node in the cluster, namely A, B, and B*.
5. Keep in mind that there is no ordering in the delivery of these PING messages. For the stale PING message to appear, we need the following events in the exact order they are laid out:
   a. An old PING message, created before A* became the new primary, is still queued in A*'s outgoing buffer to A. This later becomes the stale message, which says A* is a replica of A. It is followed by A*'s election-winning announcement PING message.
   b. B or B* processes A*'s election-winning announcement PING message and sets slots[1]=A*.
   c. A sends a PING message to B (or B*). Since A hasn't learnt that A* won the election, it claims that it owns slot 1, but with a lower epoch than B has on slot 1. This leads to B sending an UPDATE to A directly, saying A* is the new owner of slot 1 with a higher epoch.
   d. A receives the UPDATE from B and executes clusterUpdateSlotsConfigWith. A now realizes that it is a replica of A*, hence setting myself->replicaof to A*.
   e. Finally, the pre-failover PING message queued up in A*'s outgoing buffer to A is delivered and processed by A, out of order.
   f. This stale PING message creates the replication loop.

Signed-off-by: Binbin <[email protected]>
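For readers following the logs, here is a minimal, self-contained sketch of the staleness rule that the "Ignore stale message" line is applying. This is not the actual cluster_legacy.c code; the struct and function names are illustrative. The rule is simply that a gossip claim carrying a config epoch lower than the one we already recorded for that node should be dropped rather than applied.

```c
/* Minimal sketch (NOT the actual cluster_legacy.c code) of the staleness rule
 * discussed above: a gossip claim whose config epoch is lower than the one we
 * already recorded for that node should be ignored, not applied. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    char name[41];
    uint64_t config_epoch; /* last config epoch we accepted for this node */
} node_view;

/* Return true when the incoming claim is older than what we already know. */
static bool gossip_is_stale(const node_view *known, uint64_t claimed_epoch) {
    return claimed_epoch < known->config_epoch;
}

int main(void) {
    /* Mirrors the log above: we already saw epoch 9 for the new primary,
     * then an old packet arrives still claiming epoch 8. */
    node_view new_primary = {"763c0167232dae95cdcc0a1568cd5368ac3b99f5", 9};
    uint64_t claimed = 8;

    if (gossip_is_stale(&new_primary, claimed)) {
        printf("Ignore stale message from %.40s; gossip epoch %llu < current %llu\n",
               new_primary.name, (unsigned long long)claimed,
               (unsigned long long)new_primary.config_epoch);
    } else {
        printf("Process message normally\n");
    }
    return 0;
}
```

The logs above show the problematic part: even though the stale message was detected and logged, the role change it carried was still applied, which is what produces the loop this PR wants to prevent.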
The test failures seem concerning to me; I spent some time on Friday trying to understand what might be causing them to fail but didn't figure it out. I wonder if we should still process the packet, but check for and skip setting it as our primary if that would cause a loop instead?
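As a rough illustration of that alternative (the structures below are assumed, not Valkey's actual ones), a loop check could walk the replicaof chain of the candidate primary before adopting it and refuse if the chain leads back to ourselves:

```c
/* Sketch of the alternative suggested above (assumed shapes, not Valkey's
 * actual API): before adopting `candidate` as our primary, walk its replicaof
 * chain and refuse if the chain leads back to ourselves. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct cluster_node {
    struct cluster_node *replicaof; /* NULL when this node acts as a primary */
} cluster_node;

/* Return true if adopting `candidate` as our primary would close a cycle
 * through ourselves, i.e. create a replication loop. */
static bool would_create_replication_loop(const cluster_node *myself,
                                          const cluster_node *candidate) {
    int hops = 0;
    /* Bounded walk so an already-corrupted chain cannot loop forever. */
    for (const cluster_node *n = candidate; n != NULL && hops < 128;
         n = n->replicaof, hops++) {
        if (n == myself) return true;
    }
    return false;
}

int main(void) {
    cluster_node a = {NULL}, a_star = {NULL};
    a_star.replicaof = &a; /* stale view: A* still claims to replicate A */
    /* A is asked to become a replica of A*: this would loop, so refuse. */
    printf("loop: %s\n", would_create_replication_loop(&a, &a_star) ? "yes" : "no");
    return 0;
}
```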
The test does these things:
And the reason seems to be:
```c
if (sender_claimed_primary && areInSameShard(sender_claimed_primary, sender)) {
    /* `sender` was a primary and was in the same shard as its new primary */
    if (sender->configEpoch > sender_claimed_config_epoch) {
        serverLog(LL_NOTICE,
                  "Ignore stale message from %.40s (%s) in shard %.40s;"
                  " gossip config epoch: %llu, current config epoch: %llu",
                  sender->name, sender->human_nodename, sender->shard_id,
                  (unsigned long long)sender_claimed_config_epoch,
                  (unsigned long long)sender->configEpoch);
    } else {
```

So I think in this case, we should check the sender's primary's config epoch?
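To make the suggestion concrete, here is one possible reading of it, sketched with illustrative types rather than the actual patch: besides the sender's own recorded epoch, also compare the epoch recorded for the primary the sender claims to replicate, and treat the packet as stale if either has already advanced past the claimed epoch.

```c
/* Hedged sketch of one reading of the suggestion above (illustrative names,
 * not the actual patch): also consult the epoch recorded for the sender's
 * claimed primary when judging staleness. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t configEpoch; /* highest config epoch we have recorded for the node */
} node;

/* Treat the packet as stale if either the sender itself or the primary it
 * claims to replicate has already advanced past the epoch in the packet. */
static bool claim_is_stale(const node *sender, const node *claimed_primary,
                           uint64_t claimed_config_epoch) {
    if (sender->configEpoch > claimed_config_epoch) return true;
    if (claimed_primary && claimed_primary->configEpoch > claimed_config_epoch) return true;
    return false;
}

int main(void) {
    /* Example: the sender's own recorded epoch does not flag the packet, but
     * the epoch recorded for its claimed primary is already higher. */
    node sender = {8}, claimed_primary = {9};
    printf("stale: %s\n", claim_is_stale(&sender, &claimed_primary, 8) ? "yes" : "no");
    return 0;
}
```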
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files

```
@@             Coverage Diff              @@
##           unstable    #1777      +/-   ##
============================================
- Coverage     70.97%   70.87%    -0.10%
============================================
  Files           123      123
  Lines         65651    65657        +6
============================================
- Hits          46593    46533       -60
- Misses        19058    19124       +66
```
This is somewhat related to valkey-io#974 and valkey-io#1777. When the epoch changes, we should save the configuration file and broadcast a PONG as promptly as possible. For example, if a primary goes down right after bumping the epoch, its replicas may initiate a failover, but the other primaries may refuse to vote because the replica's epoch has not been updated yet. Or, if for some reason we bump the epoch and the new epoch does not propagate through the cluster in time, it may affect the judgment of message staleness.

Signed-off-by: Binbin <[email protected]>
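A hedged sketch of that idea, using stand-in helpers rather than the real Valkey functions: whenever the local epoch is bumped, persist the cluster configuration and broadcast a PONG right away, so the new epoch is durable and propagates before other nodes (or a restart) make decisions based on the old one.

```c
/* Illustrative sketch only: the helpers below stand in for the real
 * persistence and broadcast paths in Valkey, they are not its actual API. */
#include <stdint.h>
#include <stdio.h>

/* Assumed helpers standing in for the real persistence/broadcast code. */
static void save_cluster_config(void) { puts("persist nodes.conf"); }
static void broadcast_pong(void)      { puts("broadcast PONG to all known nodes"); }

static uint64_t current_epoch = 9;

/* Bump the epoch and immediately make the change durable and visible. */
static void bump_epoch_and_publish(void) {
    current_epoch++;
    save_cluster_config(); /* so a crash/restart does not roll the epoch back  */
    broadcast_pong();      /* so other nodes do not judge our packets as stale */
}

int main(void) {
    bump_epoch_and_publish();
    printf("epoch is now %llu\n", (unsigned long long)current_epoch);
    return 0;
}
```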
Closes #1015.