PBM-1043 PITR node priority
docs/features/point-in-time-recovery.md
nastena1606 committed Jul 15, 2024
1 parent 48f3a70 commit e195e77
Showing 2 changed files with 71 additions and 1 deletion.
56 changes: 55 additions & 1 deletion docs/features/point-in-time-recovery.md
@@ -25,7 +25,9 @@ Set the `pitr.enabled` configuration option to `true`.
enabled: true
```
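
You can also enable it from the command line. A minimal sketch, assuming `pbm` is already configured to connect to your cluster:

```sh
# Enable point-in-time recovery oplog slicing
pbm config --set pitr.enabled=true
```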

The `pbm-agent` starts [saving consecutive slices of the oplog](#oplog-slicing) periodically. A method similar to the way replica set nodes elect a new primary is used to select the `pbm-agent` that saves the oplog slices. (Find more information in [pbm-agent](../details/pbm-agent.md).)

You can, however, influence the `pbm-agent` election by assigning a priority to the `mongod` nodes. See [Adjust node priority for oplog slices](#adjust-node-priority-for-oplog-slices) for details.


[Restore to a point-in-time](../usage/pitr-tutorial.md){ .md-button .md-button }
@@ -79,6 +81,58 @@ If you set the new duration when the `pbm-agent` is making an oplog slice, the s

If the new duration is shorter, this triggers the `pbm-agent` to make a new slice with the updated span immediately. If the new duration is longer, the `pbm-agent` makes the next slice with the updated span at its scheduled time.
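
For example, a sketch of updating the span from the command line; the 20-minute value is illustrative:

```sh
# Set the oplog slice span to 20 minutes
pbm config --set pitr.oplogSpanMin=20
```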

### Adjust node priority for oplog slices

!!! admonition "Version added: [2.6.0](../release-notes/2.6.0.md)"

By default, the `pbm-agent` that saves oplog slices is selected randomly from among the secondary replica set members. The primary node is selected last, only if none of the secondaries is responding.

Starting with version 2.6.0, you can control which node saves the oplog slices by assigning a priority to the desired nodes via the configuration file. For example, you can ensure that both backups and oplog slices are taken from the nodes in a specific data center, as defined in your organization's regulations. Or, you can reduce network latency by making backups and/or oplog slices from the nodes in geographically closest locations.

Node priority for oplog slices is handled similarly to the [node priority for making backups](../usage/start-backup.md#adjust-node-priority-for-backups), yet it is independent of it. Thus, you can assign different priorities for backups and oplog slices to the same node, or adjust only the priority for oplog slices, leaving the default one for backups.

PBM then handles both processes according to their respective priorities.

The default node priority for oplog slices is the same as for making backups:

* hidden nodes - priority 2.0
* secondary nodes - priority 1.0
* primary node - priority 0.5

To redefine it, specify the new priority for the [`pitr.priority`](../reference/pitr-options.md#pitrpriority) option in the configuration file:

```yaml
pitr:
  enabled: true
  priority:
    - "rs1:27017": 1
    - "rs2:27018": 2
    - "rs3:27019": 1
```

The format of the priority array is `"<hostname:port>": <priority>`.
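
To apply the updated priorities, one option is to upload the configuration file. A sketch, assuming it is saved as `pbm_config.yaml` (the file name is illustrative):

```sh
# Upload the updated configuration to PBM
pbm config --file pbm_config.yaml
```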

!!! important

    As soon as you adjust node priorities in the configuration file, it is assumed that you take manual control over them. The default rule to prefer secondary nodes over the primary stops working.

To define the priority in a sharded cluster, you can either list all nodes or specify the priority for one node in each shard and in the config server replica set. The `hostname` and `port` uniquely identify a node, so Percona Backup for MongoDB recognizes which replica set it belongs to and grants the priority accordingly.

Note that if you list only specific nodes, the remaining nodes are automatically assigned priority `1.0`. For example, say you assigned priority `2.5` to only one secondary node in every shard and the config server replica set of the sharded cluster:

```yaml
pitr:
  priority:
    "localhost:27027": 2.5 # config server replica set
    "localhost:27018": 2.5 # shard 1
    "localhost:28018": 2.5 # shard 2
```

The remaining secondaries and the primary nodes in the cluster receive priority `1.0`.

PBM saves oplog slices from the node with the highest priority. If this node is not responding, it selects the next priority node. If there are several nodes with the same priority, one of them is randomly elected for saving oplog slices.
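
To verify the resulting setup, you can check the overall status. A minimal sketch (output omitted):

```sh
# Shows cluster members, whether PITR oplog slicing is running, and stored backups
pbm status
```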


### Compressed oplog slices

!!! admonition "Version added: [1.7.0](../release-notes/1.7.0.md)"
16 changes: 16 additions & 0 deletions docs/reference/pitr-options.md
@@ -7,6 +7,10 @@ pitr:
  compression: <string>
  compressionLevel: <int>
  oplogOnly: <boolean>
  priority:
    - "rs1:27017": 1
    - "rs2:27018": 2
    - "rs3:27019": 1
```

### pitr.enabled
@@ -61,3 +65,15 @@ Note that the greater value you specify, the more time and computing resources i
Controls whether the base backup is required to start [Point-in-Time recovery oplog slicing](../features/point-in-time-recovery.md#oplog-slicing). When set to `true`, Percona Backup for MongoDB saves oplog chunks without the base backup snapshot.

Available in Percona Backup for MongoDB starting with version 1.8.0. To learn more about the usage, see [Point-in-Time Recovery oplog replay](../usage/oplog-replay.md).
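
For example, a sketch of enabling this from the command line (assuming `pbm config --set` accepts this boolean option the same way it does other `pitr` options):

```sh
# Save oplog chunks without requiring a base backup snapshot
pbm config --set pitr.oplogOnly=true
```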

### pitr.priority

*Type*: array of strings

The list of `mongod` nodes and their priority for saving oplog slices. The node with the highest priority is elected for saving oplog slices. If several nodes have the same priority, one of them is randomly elected.

If not set, the replica set nodes have the default priority as follows:

* hidden nodes - 2.0
* secondary nodes - 1.0
* primary node - 0.5
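
For instance, a minimal sketch that raises a single node's priority above the defaults; the hostname and port are illustrative:

```yaml
pitr:
  priority:
    - "rs1:27020": 3
```

The remaining nodes are then automatically assigned priority `1.0`, as described in [Adjust node priority for oplog slices](../features/point-in-time-recovery.md#adjust-node-priority-for-oplog-slices).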
