diff --git a/innovation-release/installing.html b/innovation-release/installing.html index efb3e801..77ab3f6b 100644 --- a/innovation-release/installing.html +++ b/innovation-release/installing.html @@ -1124,12 +1124,12 @@
To enable the desired repository, we recommend using the enable
subcommand of percona-release
.
-$ sudo percona-release enable pdps-pdps-8x-innovation
+$ sudo percona-release enable pdps-8x-innovation
Tip
To enable the minor version repository, use the following command:
-$ sudo percona-release enable pdps-pdps-8.1.0
+$ sudo percona-release enable pdps-8.1.0
Install Percona Distribution for MySQL packages¶
@@ -1172,12 +1172,12 @@ Install Percona Distrib
Run the following commands as the root user or via sudo
.
Enable Percona repository¶
To enable the desired repository, we recommend using the enable
subcommand of percona-release
.
-$ sudo percona-release enable pdps-pdps-8x-innovation
+$ sudo percona-release enable pdps-8x-innovation
Tip
To enable the minor version repository, use the following command:
-$ sudo percona-release enable pdps-pdps-8.1.0
+$ sudo percona-release enable pdps-8.1.0
Install Percona Distribution for MySQL packages¶
@@ -1239,7 +1239,7 @@ Get expert help
Last update:
- 2023-11-27
+ 2023-11-28
diff --git a/innovation-release/search/search_index.json b/innovation-release/search/search_index.json
index 0ddfd3cf..2e32f2d3 100644
--- a/innovation-release/search/search_index.json
+++ b/innovation-release/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Percona Distribution for MySQL 8.1 Documentation","text":"Percona Distribution for MySQL is a single solution with the best and most critical enterprise components from the MySQL open source community, designed and tested to work together. With Percona Server for MySQL as the base server, the distribution brings you the enterprise-grade features for free. The set of carefully selected components helps you operate your MySQL database to meet your application and business needs.
"},{"location":"index.html#features","title":"Features","text":" -
Increased stability and availability - a set of high-availability and backup options help you ensure your data is saved and available for your business applications.
-
Improved performance and efficiency - integrated tools help DBAs maintain, manage, and monitor database performance and respond promptly to changing demands.
-
Reduced costs - save on software licensing costs by using the distribution, an open-source enterprise-grade solution.
-
Easy-to-integrate with PMM - benefit from all the features of PMM for monitoring and managing the health of your database.
"},{"location":"index.html#get-started","title":"Get started","text":"Follow the installation instructions to get started with Percona Distribution for MySQL.
Read more about solutions you can deploy with Percona Distribution for MySQL in High availability solution with Group Replication.
Learn more about what\u2019s new in Percona Distribution for MySQL in the release notes.
"},{"location":"index.html#read-more","title":"Read more","text":" - Deployment variants
- Percona Distribution for MySQL components
"},{"location":"index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"404.html","title":"404 - Not Found","text":"We can\u2019t find the page you are looking for. Try using the Search or return to the homepage.
"},{"location":"404.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"architecture-components.html","title":"Architecture and components","text":"The following is the architecture layout for Percona Server for MySQL based deployment variant of Percona Distribution for MySQL with Group Replication.
"},{"location":"architecture-components.html#architecture-layout","title":"Architecture layout","text":""},{"location":"architecture-components.html#components","title":"Components","text":"The architecture is composed of two main layers:
-
Connection and distribution layer
-
Relational Database Management System (RDBMS) layer
"},{"location":"architecture-components.html#connection-and-distribution-layer","title":"Connection and distribution layer","text":"The connection and distribution layer consists of the following:
-
Application to proxy redirection mechanism. This mechanism can be anything from a Virtual IP managed by Keepalived local service to a DNS resolution service like Amazon Route 53. The mechanism\u2019s function is to redirect the traffic to the active Proxy node.
-
Proxy connection distribution. The distribution consists of two or more nodes and its role is to redirect the traffic to the active nodes of the Group Replication cluster. In cases like ProxySQL, where the proxy is a Layer 7 proxy and can perform a read / write split, this layer is also in charge of redirecting writes to the Primary node and reads to the replicas, and of providing high availability to prevent a single point of failure.
"},{"location":"architecture-components.html#rdbms-layer","title":"RDBMS layer","text":"The data layer consists of the following:
-
Primary (or source) node serving write requests. This is the node that accepts writes and DDL modifications. Data will be processed following the ACID (atomicity, consistency, isolation, durability) model and replicated to all other nodes.
-
Replica nodes serving read requests. Some replica nodes can be elected Primary in case of the Primary node\u2019s failure. A replica node should be able to leave and rejoin a healthy cluster without impacting the service.
-
Replication mechanism distributing changes across nodes. In this solution, it is done with Group Replication. Group Replication is a tightly coupled solution, which means that the database cluster is based on a Datacentric approach (single state of the data, distributed commit). In this case, the data is consistent in time across nodes, though this type of replication requires a highly performant link. Given that, the main Group Replication mechanism does not implicitly support Disaster Recovery (DR), and geographic distribution is not permitted.
The node characteristics such as CPU/RAM/storage are not relevant to the solution design. They must reflect the estimated workload that the solution will have to cover, and this is a case-by-case identification.
However, it is important that all nodes that are part of the cluster have the same characteristics. Otherwise, the cluster is imbalanced and services will be affected.
As a generic indication, we recommend using nodes with at least 8 cores and 16 GB of RAM in production.
"},{"location":"architecture-components.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"components.html","title":"Components","text":"Percona Distribution for MySQL consists of the following components:
-
Percona Server for MySQL is a drop-in replacement for MySQL Community Edition with the enterprise-grade features embedded by Percona.
-
Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn\u2019t lock your database during the backup.
-
Orchestrator is the replication topology manager for Percona Server for MySQL.
-
ProxySQL is a high performance, high-availability, protocol-aware proxy for MySQL.
-
Percona Toolkit is a set of scripts that simplify and optimize database operations.
-
MySQL Shell is an advanced client and code editor for MySQL Server.
-
MySQL Router is lightweight middleware that provides transparent routing between your application and back-end MySQL servers.
"},{"location":"components.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"copyright.html","title":"Copyright and licensing information","text":""},{"location":"copyright.html#documentation-licensing","title":"Documentation licensing","text":"Percona Distribution for MySQL documentation is (C)2009-2023 Percona LLC and/or its affiliates and is distributed under the Creative Commons Attribution 4.0 International License.
"},{"location":"copyright.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"deploy-pdps-group-replication.html","title":"Deploying high availability solution with Group Replication","text":"This document provides step-by-step instructions on how to deploy high availability solution with Group Replication.
"},{"location":"deploy-pdps-group-replication.html#preconditions","title":"Preconditions","text":"We will use the following elements:
-
1 Virtual IP for ProxySQL failover - 192.168.4.194
-
2 ProxySQL nodes
- Proxy1 192.168.4.191
- Proxy2 192.168.4.192
-
4 MySQL nodes in Single Primary mode
- Gr1 192.168.4.81 - Initial Primary
- Gr2 192.168.4.82 - Replica / failover
- Gr3 192.168.4.83 - Replica / failover
- Gr4 192.168.4.84 - Replica / Backup
-
All of the following ports must be open if a firewall is in place, or if any other restriction such as AppArmor or SELinux applies.
-
ProxySQL:
- 6033
- 6032
- 3306
-
MySQL - Group Replication:
- 3306
- 33060
- 33061
"},{"location":"deploy-pdps-group-replication.html#nodes-configuration","title":"Nodes configuration","text":""},{"location":"deploy-pdps-group-replication.html#preparation","title":"Preparation","text":" -
Install the Percona Server-based variant of Percona Distribution for MySQL on each MySQL node (Gr1-Gr4).
-
Make sure that all the nodes use the same time zone and time:
$ date\nTue Aug 18 08:22:12 EDT 2020\n
-
Also check that ntpd
service is present and enabled.
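A quick check, assuming ntpd (rather than chronyd) is the NTP daemon on your system:
$ sudo systemctl is-enabled ntpd\n$ sudo systemctl status ntpd\n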
-
Make sure that each node resolves the other nodes by name
for i in 1 2 3 4 ; do ping -c 1 gr$i > /dev/null;echo $?; done\n
If nodes aren\u2019t able to resolve, add the entries in the /etc/hosts
file.
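For example, with the node IPs used in this setup, the /etc/hosts entries on every node would look like this:
192.168.4.81 gr1\n192.168.4.82 gr2\n192.168.4.83 gr3\n192.168.4.84 gr4\n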
-
After instances are up and running, check Percona Server for MySQL version on each node:
mysql>\\s\n--------------\n/opt/mysql_templates/PS-8P/bin/mysql Ver 8.1.0-1 for Linux on x86_64 (Percona Server (GPL), Release 11, Revision 159f0eb)\n
"},{"location":"deploy-pdps-group-replication.html#step-1-create-an-administration-user","title":"Step 1 Create an administration user","text":" -
Create a user for administration. We will use the user dba
in our setup:
CREATE user dba@localhost identified by 'dbapw';\nCREATE user dba@'192.168.%' identified by 'dbapw';\n\nGRANT ALL on *.* to dba@localhost with grant option;\nGRANT ALL on *.* to dba@'192.168.%' with grant option;\n
Log out from the client as the root user and log in as the dba
user.
-
Make sure to have a good and unique SERVER_ID value:
mysql> show global variables like 'server_id';\n+---------------+-------+\n| Variable_name | Value |\n+---------------+-------+\n| server_id | 1 |\n+---------------+-------+\n1 row in set (0.01 sec)\n
The server_id
value must be unique on each node
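One minimal way to set a distinct value on each node without editing my.cnf is SET PERSIST; the values below are illustrative (one convenient convention is the last octet of the node IP):
mysql> SET PERSIST server_id = 81; # on Gr1; use 82, 83, 84 on the other nodes\n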
"},{"location":"deploy-pdps-group-replication.html#step-2-add-group-replication-settings","title":"Step 2. Add Group Replication settings","text":" -
Stop all the nodes
$ service mysql stop\n
-
In the my.cnf
configuration file, add the following:
#####################\n#Replication + binlog settings\n#####################\nauto-increment-increment =1\nauto-increment-offset =1\n\nlog-bin =<path_to_logs>/binlog\nlog-bin-index =binlog.index\nbinlog-checksum =NONE\nbinlog-format =ROW\nbinlog-row-image =FULL\nlog-slave-updates =1\nbinlog-transaction-dependency-tracking =WRITESET_SESSION\n\nenforce-gtid-consistency =TRUE\ngtid-mode =ON\n\nmaster-info-file =master.info\nmaster-info-repository =TABLE\nrelay_log_info_repository =TABLE\nrelay-log =<path_to_logs>/relay\n\nsync-binlog =1\n\n### SLAVE SECTION\nskip-slave-start\nslave-parallel-type = LOGICAL_CLOCK\nslave-parallel-workers = 4\nslave-preserve-commit-order = 1\n\n\n######################################\n#Group Replication\n######################################\nplugin_load_add ='group_replication.so'\nplugin-load-add ='mysql_clone.so'\ngroup_replication_group_name =\"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa\" # <-- not good; use something that helps you identify the GR transactions and where they come from, e.g. \"dc1euz1-aaaa-aaaa-aaaa-aaaaaaaaaaaa\"\ngroup_replication_start_on_boot =off\ngroup_replication_local_address = \"192.168.4.81:33061\" # <-- change this to match each node's local IP (.81/.82/.83/.84)\ngroup_replication_group_seeds = \"192.168.4.81:33061,192.168.4.82:33061,192.168.4.83:33061,192.168.4.84:33061\"\ngroup_replication_bootstrap_group = off\ntransaction-write-set-extraction = XXHASH64\n
-
Restart all nodes:
$ service mysql start\n
-
Connect to the nodes
"},{"location":"deploy-pdps-group-replication.html#step-3-create-a-replication-user","title":"Step 3. Create a replication user","text":" -
On every node, create a user for replication
SET SQL_LOG_BIN=0;\n CREATE USER replica@'192.168.4.%' IDENTIFIED BY 'replicapw'; #<--- Please note the filter by IP is more restrictive\n GRANT REPLICATION SLAVE ON *.* TO replica@'192.168.4.%';\n FLUSH PRIVILEGES;\n SET SQL_LOG_BIN=1;\n
-
Link the nodes with the replication channel.
CHANGE MASTER TO MASTER_USER='replica', MASTER_PASSWORD='replicapw' FOR CHANNEL 'group_replication_recovery';\n
Run this command on all nodes.
-
Check the current status:
(dba@node1) [(none)]>\\u performance_schema\n (dba@node1) [performance_schema]>show tables like '%repl%';\n +-------------------------------------------+\n | Tables_in_performance_schema (%repl%) |\n +-------------------------------------------+\n | replication_applier_configuration |\n | replication_applier_filters |\n | replication_applier_global_filters |\n | replication_applier_status |\n | replication_applier_status_by_coordinator |\n | replication_applier_status_by_worker |\n | replication_connection_configuration |\n | replication_connection_status |\n | replication_group_member_stats |\n | replication_group_members | <------------------------\n +-------------------------------------------+\n\n (dba@node1) [performance_schema]>select * from replication_group_members\\G\nCHANNEL_NAME: group_replication_applier\n MEMBER_ID:\n MEMBER_HOST:\n MEMBER_PORT:\n MEMBER_STATE:\n MEMBER_ROLE: OFFLINE\nMEMBER_VERSION:\n1 row in set (0.00 sec)\n
At this stage, you should be able to start the first (Primary) cluster node.
-
Start the Primary node (Gr1) and enable Group Replication:
(dba@node1)[none]> SET GLOBAL group_replication_bootstrap_group=ON;\n(dba@node1)[none]> START GROUP_REPLICATION;\n(dba@node1)[none]> SET GLOBAL group_replication_bootstrap_group=OFF;\n
-
Check if the node registered correctly:
(dba@node1) [none]>select * from performance_schema.replication_group_members\\G\n CHANNEL_NAME: group_replication_applier\n MEMBER_ID: 90a353b8-e6dc-11ea-98fa-08002734ed50\n MEMBER_HOST: gr1\n MEMBER_PORT: 3306\n MEMBER_STATE: ONLINE\n MEMBER_ROLE: PRIMARY\nMEMBER_VERSION: 8.1.0\n
-
Once the Primary node is running, connect to the secondary node (Gr2 node) and enable Group Replication:
(dba@node2) [none]>START GROUP_REPLICATION;\nQuery OK, 0 rows affected (4.60 sec)\n
-
Check if the secondary node registered correctly:
(dba@node2) [performance_schema]>select * from replication_group_members\\G\n*************************** 1. row ***************************\n CHANNEL_NAME: group_replication_applier\n MEMBER_ID: 58ffd118-e6dc-11ea-8af8-08002734ed50\n MEMBER_HOST: gr2\n MEMBER_PORT: 3306\n MEMBER_STATE: ONLINE\n MEMBER_ROLE: SECONDARY\nMEMBER_VERSION: 8.1.0\n*************************** 2. row ***************************\n CHANNEL_NAME: group_replication_applier\n MEMBER_ID: 90a353b8-e6dc-11ea-98fa-08002734ed50\n MEMBER_HOST: gr1\n MEMBER_PORT: 3306\n MEMBER_STATE: ONLINE\n MEMBER_ROLE: PRIMARY\nMEMBER_VERSION: 8.1.0\n
-
Test the replication:
- On the Primary node, run the following command:
(dba@node1) [performance_schema]>create schema test;\nQuery OK, 1 row affected (0.76 sec)\n\n(dba@node1) [performance_schema]>\\u test\nDatabase changed\n\n(dba@node1) [test]>create table test1 (`id` int auto_increment primary key);\nQuery OK, 0 rows affected (0.32 sec)\n\n(dba@node1) [test]>insert into test1 values(null);\nQuery OK, 1 row affected (0.34 sec)\n
- On the secondary node:
(dba@node2) [performance_schema]>\\u test\n Database changed\n (dba@node2) [test]>select * from test1;\n +----+\n | id |\n +----+\n | 1 |\n +----+\n 1 row in set (0.00 sec)\n
-
Start Group Replication on the remaining nodes
(dba@node3) [performance_schema]>START GROUP_REPLICATION;\n(dba@node4) [performance_schema]>START GROUP_REPLICATION;\n
"},{"location":"deploy-pdps-group-replication.html#proxy-setup","title":"Proxy setup","text":""},{"location":"deploy-pdps-group-replication.html#step-1-installation","title":"Step 1. Installation","text":" -
Install ProxySQL. In our example, we install ProxySQL on Proxy1 192.168.4.191 and Proxy2 192.168.4.192 nodes.
-
Create the monitoring user on MySQL Group Replication nodes:
create user monitor@'192.168.4.%' identified by 'monitor';\ngrant usage on *.* to 'monitor'@'192.168.4.%';\ngrant select on sys.* to 'monitor'@'192.168.4.%';\n
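The configuration statements in the next steps are run through the ProxySQL admin interface; a typical way to connect to it, assuming the default admin credentials and admin port 6032:
$ mysql -h 127.0.0.1 -P 6032 -u admin -padmin --prompt='ProxySQL Admin> '\n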
-
Define basic variables:
update global_variables set Variable_Value='admin:admin;cluster1:clusterpass' where Variable_name='admin-admin_credentials';\nupdate global_variables set variable_value='cluster1' where variable_name='admin-cluster_username';\nupdate global_variables set variable_value='clusterpass' where variable_name='admin-cluster_password';\nupdate global_variables set Variable_Value=0 where Variable_name='mysql-hostgroup_manager_verbose';\nupdate global_variables set Variable_Value='true' where Variable_name='mysql-query_digests_normalize_digest_text';\nupdate global_variables set Variable_Value='8.1.0' where Variable_name='mysql-server_version';\nupdate global_variables set Variable_Value='utf8' where Variable_name='mysql-default_charset';\nupdate global_variables set Variable_Value=300 where Variable_name='mysql-tcp_keepalive_time';\nupdate global_variables set Variable_Value='true' where Variable_name='mysql-use_tcp_keepalive';\nupdate global_variables set Variable_Value='true' where Variable_name='mysql-verbose_query_error';\nupdate global_variables set Variable_Value='true' where Variable_name='mysql-show_processlist_extended';\nupdate global_variables set Variable_Value=50000 where Variable_name='mysql-max_stmts_cache';\nupdate global_variables set Variable_Value='false' where Variable_name='admin-web_enabled';\nupdate global_variables set Variable_Value='0' where Variable_name='mysql-set_query_lock_on_hostgroup';\n\nload admin variables to run;save admin variables to disk;\nload mysql variables to run;save mysql variables to disk;\n
Note
The user name and password need to reflect your standards. The ones used above are just an example.
-
Set up the nodes as a cluster:
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES('192.168.4.191',6032,100,'PRIMARY');\nINSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES('192.168.4.192',6032,100,'SECONDARY');\nload proxysql servers to run;save proxysql servers to disk;\n
"},{"location":"deploy-pdps-group-replication.html#step-2-define-users-servers-and-query-rules-for-read-write-split","title":"Step 2. Define users, servers and query rules for read / write split","text":" -
Create one or more valid users. For example, if you have a user named app_gr
with the password test
that has access to your Group Replication cluster, the command to define the user is the following:
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent,comment) values ('app_gr','test',1,400,'mysql',1,'application test user GR');\nLOAD MYSQL USERS TO RUNTIME;SAVE MYSQL USERS TO DISK;\n
-
Define servers:
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.81',400,3306,10000,2000,'GR1');\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.81',401,3306,100,2000,'GR1');\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.82',401,3306,10000,2000,'GR2');\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.83',401,3306,10000,2000,'GR3');\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.84',401,3306,1,2000,'GR4');\nLOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;\n
-
Define query rules to get read / write split:
INSERT INTO mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) values(4040,6033,'app_gr',400,1,3,'^SELECT.*FOR UPDATE',1);\nINSERT INTO mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,multiplex,apply) values(4042,6033,'app_gr',401,1,3,'^SELECT.*$',1,1);\nLOAD MYSQL QUERY RULES TO RUN;SAVE MYSQL QUERY RULES TO DISK;\n
"},{"location":"deploy-pdps-group-replication.html#step-3-create-a-view-in-sys-schema","title":"Step 3. Create a view in SYS schema","text":"Once all the configuration is ready, we need to have a special view in the SYS schema in Percona server nodes. Find the view working for the server version 8 and above here.
Run that sql on the Primary node of the Group Replication cluster.
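One way to apply it, assuming the view definition has been saved locally as gr_sys_view.sql (a hypothetical file name):
$ mysql -h 192.168.4.81 -u dba -p < gr_sys_view.sql\n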
"},{"location":"deploy-pdps-group-replication.html#step-4-activate-support-for-group-replication-in-proxysql","title":"Step 4. Activate support for Group Replication in ProxySQL","text":"To activate the native support for Group Replication in ProxySQL, we will use the following group definition:
Writer HG-> 400\nReader HG-> 401\nBackupW HG-> 402\nOffline HG-> 9401\n
INSERT INTO mysql_group_replication_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup, offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind)\nvalues (400,402,401,9401,1,1,1,100);\nLOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;\n
"},{"location":"deploy-pdps-group-replication.html#comments-about-parameters","title":"Comments about parameters","text":"To obtain the most reliable results, we recommend setting the number of writers always to 1, and writer_is_also_reader
to 1 as well.
max_writers: 1\nwriter_is_also_reader: 1\n
The max_transactions_behind
is a subjective parameter that you should calculate on the basis of your needs. If, for instance, you cannot tolerate a stale read, it is safe to set this value to a low number (for example, 50) and to set the following on all Group Replication nodes:
set global group_replication_consistency=AFTER;\n
If, instead, you have no strict requirements about stale reads, you can relax the parameter and ignore the group_replication_consistency
setting. Our recommended setting is group_replication_consistency=AFTER
and max_transactions_behind: 100
.
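If you later need to adjust max_transactions_behind, it can be changed in place through the ProxySQL admin interface; a sketch using the hostgroups defined above:
UPDATE mysql_group_replication_hostgroups SET max_transactions_behind=50 WHERE writer_hostgroup=400;\nLOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;\n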
See also
ProxySQL Documentation: mysql_group_replication_hostgroups
"},{"location":"deploy-pdps-group-replication.html#step-5-enable-high-availability-for-proxysql","title":"Step 5. Enable high availability for ProxySQL","text":"keepalived
will be used to enable High Availability for ProxySQL.
-
Install keepalived
on each ProxySQL node using the package manager of your operating system:
On Debian/Ubuntu: $ sudo apt install -y keepalived\n
On RHEL/derivatives: $ sudo yum install -y keepalived\n
-
Modify the /etc/keepalived/keepalived.conf
file according to your setup. In our case:
-
Proxy1 192.168.4.0/24 dev enp0s9 proto kernel scope link src 192.168.4.191
-
Proxy2 192.168.4.0/24 dev enp0s9 proto kernel scope link src 192.168.4.192
-
VIP 192.168.4.194
Let\u2019s say Proxy1 is the primary node while Proxy2 is the secondary node.
Given that, the config file looks as follows:
global_defs {\n # Keepalived process identifier\n router_id proxy_HA\n}\n# Script used to check if Proxy is running\nvrrp_script check_proxy {\n script \"killall -0 proxysql\"\n interval 2\n weight 2\n}\n# Virtual interface\n# The priority specifies the order in which the assigned interface takes over in a failover\nvrrp_instance VI_01 {\n state MASTER\n interface enp0s9\n virtual_router_id 51\n priority 100 # <-- must be different on each ProxySQL node, e.g. 100/99\n\n # The virtual IP address shared between the two load balancers\n virtual_ipaddress {\n 192.168.4.194 dev enp0s9\n }\n track_script {\n check_proxy\n }\n}\n
-
Start the keepalived
service, as shown below. From now on, the VIP will be associated with Proxy1 unless the service is down.
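A sketch of starting the service and enabling it at boot with systemd:
$ sudo systemctl enable --now keepalived\n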
"},{"location":"deploy-pdps-group-replication.html#disaster-recovery-implementation","title":"Disaster recovery implementation","text":"The implementation of a DR (Disaster Recovery) site will follow the same direction provided for the main site. There are only some generic rules to follow:
-
A DR site should be located in a different geographic location than the main site (several hundred kilometers/miles away).
-
The connection link between the main site and the DR site can only be established using asynchronous replication (standard MySQL replication setup).
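A minimal sketch of what such an asynchronous link could look like, run on the DR replica against the main site\u2019s Primary (the host, user, and password below reuse values from earlier in this document; adjust them to your environment):
CHANGE REPLICATION SOURCE TO SOURCE_HOST='192.168.4.81', SOURCE_USER='replica', SOURCE_PASSWORD='replicapw', SOURCE_AUTO_POSITION=1;\nSTART REPLICA;\n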
"},{"location":"deploy-pdps-group-replication.html#monitoring","title":"Monitoring","text":""},{"location":"deploy-pdps-group-replication.html#using-percona-management-and-monitoring-pmm","title":"Using Percona Management and Monitoring (PMM)","text":" -
Use this quickstart to install Percona Monitoring and Management (PMM).
-
Specify the replication_set
flag when registering the Percona Server for MySQL node or the MySQL node in PMM:
pmm-admin add mysql --username=pmm --password=pmm --query-source=perfschema --replication-set=gr_test_lab group_rep4 127.0.0.1:3306\n
Then you can use the Group Replication Dashboard to monitor your cluster in detail.
The dashboard sections are the following:
-
Overview:
-
Replication delay details
-
Transactions
-
Conflicts
"},{"location":"deploy-pdps-group-replication.html#using-command-line","title":"Using command line","text":"From the command line, you need to manually query the tables in Performance schema:
+----------------------------------------------+\n| replication_applier_configuration |\n| replication_applier_filters |\n| replication_applier_global_filters |\n| replication_applier_status |\n| replication_applier_status_by_coordinator |\n| replication_applier_status_by_worker |\n| replication_connection_configuration |\n| replication_connection_status |\n| replication_group_member_stats |\n| replication_group_members |\n+----------------------------------------------+\n
For example, use this command to get the lag in number of transactions on a node:
select @last_exec:=SUBSTRING_INDEX(SUBSTRING_INDEX( @@global.GTID_EXECUTED,':',-1),'-',-1) last_executed;select @last_rec:=SUBSTRING_INDEX(SUBSTRING_INDEX(Received_transaction_set,':',-1),'-',-1) last_received FROM performance_schema.replication_connection_status WHERE Channel_name = 'group_replication_applier'; select (@last_rec - @last_exec) as real_lag;\n+---------------+\n| last_executed |\n+---------------+\n| 125624 |\n+---------------+\n1 row in set, 1 warning (0.03 sec)\n\n+---------------+\n| last_received |\n+---------------+\n| 125624 |\n+---------------+\n1 row in set, 1 warning (0.00 sec)\n\n+----------+\n| real_lag |\n+----------+\n| 0 |\n+----------+\n1 row in set (0.00 sec)\n
You can use a more composite query to get information about each applier:
SELECT\n conn_status.channel_name as channel_name,\n conn_status.service_state as IO_thread,\n applier_status.service_state as SQL_thread,\n conn_status.LAST_QUEUED_TRANSACTION as last_queued_transaction,\n applier_status.LAST_APPLIED_TRANSACTION as last_applied_transaction,\n LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP -\n LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP 'rep delay (sec)',\n LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP -\n LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP 'transport time',\n LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP -\n LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP 'time RL',\n LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP -\n LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP 'apply time',\n if(GTID_SUBTRACT(LAST_QUEUED_TRANSACTION, LAST_APPLIED_TRANSACTION) = \"\",\"0\" , abs(time_to_sec(if(time_to_sec(APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP)=0,0,timediff(APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP,now()))))) `lag_in_sec`\nFROM\n performance_schema.replication_connection_status AS conn_status\nJOIN performance_schema.replication_applier_status_by_worker AS applier_status\n ON applier_status.channel_name = conn_status.channel_name\nORDER BY lag_in_sec, lag_in_sec desc\\G\n
Expected output *************************** 1. row ***************************\nchannel_name: group_replication_applier\nIO_thread: ON\nSQL_thread: ON\nlast_queued_transaction: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:125624\nlast_applied_transaction: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:125621\nrep delay (sec): 3.153038\ntransport time: 0.061327\ntime RL: 0.001005\napply time: 0.388680\nlag_in_sec: 0\n
Based on the material from Percona Database Performance Blog
This document is based on the blog post Percona Distribution for MySQL: High Availability with Group Replication Solution by Marco Tusa
"},{"location":"deploy-pdps-group-replication.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"deployment-variants.html","title":"Deployment variants","text":"Percona Distribution for MySQL provides two deployment variants: one is Percona Server for MySQL-based with asynchronous replication and another one is Percona Server for MySQL-based with group replication. The table below lists what components are available with Percona Server for MySQL:
Components Percona Server for MySQL Orchestrator YES HAProxy NO ProxySQL YES Percona XtraBackup YES Percona Toolkit YES MySQL Shell YES MySQL Router YES"},{"location":"deployment-variants.html#what-deployment-variant-to-choose","title":"What deployment variant to choose?","text":"The Percona Server-based deployment variant with asynchronous replication utilizes the primary / secondary replication model. It enables you to create geographically distributed infrastructures with the support for disaster recovery. However, this deployment variant does not guarantee data consistency on all nodes at the given moment and provides high availability of up to 4 nines.
The Percona Server-based deployment variant with Group Replication enables you to create fault-tolerant systems with redundancy by replicating the system state to a set of servers. Percona Server for MySQL-based deployment with Group Replication offers a high grade of high availability (4-5 nines) and an almost instant failover when associated with a proxy.
"},{"location":"deployment-variants.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"downgrade.html","title":"Downgrade Percona Distribution for MySQL","text":"Following the MySQL downgrade policy, the downgrade to a previous version of Percona Distribution of MySQL is not supported.
"},{"location":"downgrade.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"glossary.html","title":"Glossary","text":""},{"location":"glossary.html#acid","title":"ACID","text":"Set of properties that guarantee database transactions are processed reliably. Stands for Atomicity
, Consistency
, Isolation
, Durability
.
"},{"location":"glossary.html#asynchronous-replication","title":"Asynchronous replication","text":"Asynchronous replication is a technique where data is first written to the primary node. After the primary acknowledges the write, the data is written to secondary nodes.
"},{"location":"glossary.html#atomicity","title":"Atomicity","text":"Atomicity means that database operations are applied following an \u201call or nothing\u201d rule. A transaction is either fully applied or not at all.
"},{"location":"glossary.html#consistency","title":"Consistency","text":"In the context of backup and restore, consistency means that the data restored will be consistent in a given point in time. Partial or incomplete writes to disk of atomic operations (for example, to table and index data structures separately) won\u2019t be served to the client after the restore. The same applies to multi-document transactions, that started but didn\u2019t complete by the time the backup was finished.
"},{"location":"glossary.html#disaster-recovery","title":"Disaster recovery","text":"Disaster recovery are means to regain access and functionality of a database infrastructure after unplanned events that caused its failure.
"},{"location":"glossary.html#downtime","title":"Downtime","text":"Downtime is the period when a database infrastructure is unavailable due to expected (maintenance) or unexpected (outage, lost connectivity, hardware failure, etc.) reasons.
"},{"location":"glossary.html#durability","title":"Durability","text":"Once a transaction is committed, it will remain so.
"},{"location":"glossary.html#failover","title":"Failover","text":"Failover is switching automatically and seamlessly to a reliable backup system.
"},{"location":"glossary.html#general-availability-ga","title":"General availability (GA)","text":"A finalized version of the product which is made available to the general public. It is the final stage in the software release cycle.
"},{"location":"glossary.html#gtid","title":"GTID","text":"A global transaction identifier (GTID) is a unique identifier created and associated with each transaction committed on the server of the source. This identifier is unique across all servers in a given replication topology.
"},{"location":"glossary.html#high-availability","title":"High availability","text":"A high availability is the ability of a system to operate continuously without failure for a long time.
"},{"location":"glossary.html#isolation","title":"Isolation","text":"The Isolation requirement means that no transaction can interfere with another.
"},{"location":"glossary.html#loosely-coupled-cluster","title":"Loosely-coupled cluster","text":"A loosely-coupled cluster is the deployment where cluster nodes are independent in processing / applying transactions. Data state may not always be consistent in time on all nodes; however, a single node state does not affect the cluster. Loosely-coupled clusters use asynchronous replication and can be geographically distributed and/or serve as the disaster recovery site.
"},{"location":"glossary.html#multi-source-replication","title":"Multi-source replication","text":"A multi-source replication topology requires at least one replica synchronized with at least two sources. The transactions can be received in parallel because the replica creates a separate replication channel for each source.
Multi-source replication allows a single server to back up or consolidate data from multiple servers. This type of replication also lets you merge table shards.
"},{"location":"glossary.html#nines-of-availability","title":"Nines of availability","text":"Nines of availability refer to system availability as a percentage of total system time.
"},{"location":"glossary.html#semi-synchronous-replication","title":"Semi-synchronous replication","text":"A semi-synchronous replication is a technique where the primary node wait for at least one of the secondaries to acknowledge the transaction before processing further transactions.
"},{"location":"glossary.html#synchronous-replication","title":"Synchronous replication","text":"A synchronous replication is a technique when data is written to the primary and secondary nodes simultaneously. Thus, both primary and secondaries are in sync and failover from the primary to one of the secondaries is possible any time.
"},{"location":"glossary.html#tech-preview","title":"Tech preview","text":"A tech preview item can be a feature, a variable, or a value within a variable. The term designates that the item is not yet ready for production use and is not included in support by SLA. A tech preview item is included in a release so that users can provide feedback. The item is either updated and released as general availability(GA) or removed if not useful. The item\u2019s functionality can change from tech preview to GA.
"},{"location":"glossary.html#tightly-coupled-cluster","title":"Tightly-coupled cluster","text":"A tightly-coupled cluster is the deployment in which transactions and information is synchronously distributed, consistent and available on all cluster nodes at any time.
"},{"location":"glossary.html#uptime","title":"Uptime","text":"Uptime is the time when the system is continuously available.
"},{"location":"glossary.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"installing.html","title":"Install Percona Distribution for MySQL","text":"We recommend to install Percona Distribution for MySQL from Percona repositories using the package manager of your operating system:
apt
- for Debian and Ubuntu Linux yum
- for Red Hat Enterprise Linux and compatible Linux derivatives
Find the full list of supported platforms on the Percona Software and Platform Lifecycle page.
Repository overview: Major and Minor repositories Percona provides two repositories for every deployment variant of Percona Distribution for MySQL.
The Major Release repository includes the latest version packages (for example, pdps-8x-innovation
). Whenever a package is updated, the package manager of your operating system detects that and prompts you to update. As long as you update all Distribution packages at the same time, you can ensure that the packages you\u2019re using have been tested and verified by Percona. Installing Percona Distribution for MySQL from the Major Release Repository is the recommended method.
The Minor Release repository includes a particular minor release of the database and all of the packages that were tested and verified to work with that minor release (for example, pdps-8.1.0
). You may choose to install Percona Distribution for MySQL from the Minor Release repository if you have decided to standardize on a particular release which has passed rigorous testing procedures and which has been verified to work with your applications. This allows you to deploy to a new host and ensure that you\u2019ll be using the same version of all the Distribution packages, even if newer releases exist in other repositories.
The disadvantage of using a Minor Release repository is that you are locked into this particular release. When potentially critical fixes are released in a later minor version of the database, you will not be prompted for an upgrade by the package manager of your operating system. You would need to change the configured repository in order to install the upgrade, as shown below.
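A sketch of switching repositories with percona-release, assuming your version of the tool supports the disable subcommand (the repository names are the ones used elsewhere in this document):
$ sudo percona-release disable pdps-8.1.0\n$ sudo percona-release enable pdps-8x-innovation\n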
"},{"location":"installing.html#prerequisites","title":"Prerequisites","text":"To install Percona software, you need to configure the required repository. To simplify this process, use the percona-release
repository management tool.
-
Install GnuPG and curl
$ sudo apt install gnupg2 curl\n
-
Install percona-release. If you have it installed, update percona-release to the latest version.
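A sketch of a typical percona-release installation on Debian/Ubuntu, assuming the package URL Percona currently publishes (verify it for your platform):
$ wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb\n$ sudo dpkg -i percona-release_latest.generic_all.deb\n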
"},{"location":"installing.html#procedure","title":"Procedure","text":"On Debian and Ubuntu LinuxOn Red Hat Enterprise Linux and derivatives Important
Run the following commands as the root user or via sudo
.
Platform specific notes
On CentOS 7, install the epel-release
package. It includes the dependencies required to install Orchestrator. Use the following command:
$ sudo yum -y install epel-release\n
Run the following commands as the root user or via sudo
.
"},{"location":"installing.html#enable-percona-repository","title":"Enable Percona repository","text":"To enable the desired repository, we recommend to use the enable
subcommand of percona-release
.
$ sudo percona-release enable pdps-pdps-8x-innovation\n
Tip
To enable the minor version repository, use the following command:
$ sudo percona-release enable pdps-pdps-8.1.0\n
"},{"location":"installing.html#install-percona-distribution-for-mysql-packages","title":"Install Percona Distribution for MySQL packages","text":" -
Install Percona Server for MySQL:
$ sudo apt install percona-server-server\n
-
Install the components. Use the commands below to install the required components:
Install Percona XtraBackup:
$ sudo apt install percona-xtrabackup-81\n
Install Percona Toolkit:
$ sudo apt install percona-toolkit\n
Install Orchestrator:
$ sudo apt install percona-orchestrator percona-orchestrator-cli percona-orchestrator-client\n
Install MySQL Shell:
$ sudo apt install percona-mysql-shell\n
Install ProxySQL:
$ sudo apt install proxysql2\n
Install MySQL Router:
$ sudo apt install percona-mysql-router\n
"},{"location":"installing.html#enable-percona-repository_1","title":"Enable Percona repository","text":"To enable the desired repository, we recommend to use the enable
subcommand of percona-release
.
$ sudo percona-release enable pdps-pdps-8x-innovation\n
Tip
To enable the minor version repository, use the following command:
$ sudo percona-release enable pdps-pdps-8.1.0\n
"},{"location":"installing.html#install-percona-distribution-for-mysql-packages_1","title":"Install Percona Distribution for MySQL packages","text":" -
Install Percona Server for MySQL:
$ sudo yum install percona-server-server\n
-
Install the components. Use the commands below to install the required components:
Install Percona XtraBackup
$ sudo yum install percona-xtrabackup-81\n
Install Orchestrator
$ sudo yum install percona-orchestrator percona-orchestrator-cli percona-orchestrator-client\n
Install Percona Toolkit
$ sudo yum install percona-toolkit\n
Install MySQL Shell:
$ sudo yum install percona-mysql-shell\n
Install ProxySQL:
$ sudo yum install proxysql2\n
Install MySQL Router:
$ sudo yum install percona-mysql-router\n
"},{"location":"installing.html#run-percona-distribution-for-mysql","title":"Run Percona Distribution for MySQL","text":"Percona Distribution for MySQL is not started automatically on Red Hat Enterprise Linux and CentOS after the installation is complete.
Start it manually using the following command:
$ sudo systemctl start mysql\n
Confirm that the service is running:
$ sudo systemctl status mysql\n
Stop the service:
$ sudo systemctl stop mysql\n
"},{"location":"installing.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"minor-upgrade.html","title":"Upgrade Percona Distribution for MySQL","text":"Minor releases include bug fixes and feature enhancements. We recommend to have Percona Distribution for MySQL updated to the latest version.
Though minor releases don\u2019t change the behavior, even a minor upgrade is a risky process. We recommend to back up your data before upgrading.
"},{"location":"minor-upgrade.html#preconditions","title":"Preconditions","text":"To upgrade Percona Distribution for MySQL, install the percona-release
repository management tool or update it to the latest version.
"},{"location":"minor-upgrade.html#procedure","title":"Procedure","text":"Important
Run the following commands as the root user or via sudo
.
-
Enable Percona repository
The Major Release repository automatically includes new version packages of Percona Distribution for MySQL. If you installed Percona Distribution for MySQL from a Minor Release repository, enable the new version repository:
$ sudo percona-release setup pdps-XXX \n
where XXX
is the required version.
Read more about Major and Minor release repositories in Repository overview.
-
Stop mysql
service
$ sudo systemctl stop mysql\n
-
Install new version packages using the package manager of your operating system.
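For example, a sketch for Debian/Ubuntu and RHEL/derivatives respectively, using the server package name from the installation instructions above:
$ sudo apt install --only-upgrade percona-server-server\n$ sudo yum update percona-server-server\n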
-
Restart mysql
service:
$ sudo systemctl start mysql\n
To upgrade the components, refer to Installing Percona Distribution for MySQL for installation instructions relevant to your operating system.
"},{"location":"minor-upgrade.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"monitoring.html","title":"Measurement and monitoring","text":"To ensure that database infrastructure is performing as intended or at its best, specific metrics need to be measured and alerts are to be raised when some of these metrics are not in line with expectations. A periodic review of these measurements is also encouraged to promote stability and understand potential risks associated with the database infrastructure.
The following are the 3 aspects of database performance measurement and monitoring:
-
Measurement - to understand how a database infrastructure is performing, multiple aspects of the infrastructure need to be measured. With measurement it\u2019s important to understand the impact of the sample sizes, sample timing, and sample types.
-
Metrics - metrics refer to the actual parts of the database infrastructure being measured. When we discuss metrics, more isn\u2019t always better as it could introduce unintentional noise or make troubleshooting overly burdensome.
-
Alerting - when one or many metrics of the database infrastructure is not within a normal or acceptable range, an alert should be generated so that the team responsible for the appropriate portion of the database infrastructure can investigate and remedy it.
Monitoring and measurement for this solution are covered by Percona Monitoring and Management. It has a specific dashboard to monitor the Group Replication state and cluster status as a whole. For more information, read Percona Monitoring and Management Documentation.
"},{"location":"monitoring.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"pdps-group-replication.html","title":"High availability solution with Group Replication","text":"Every architecture and deployment depends on customer requirements and application demands for high availability and the estimated level of usage. For example, using a high read or a high write application, or both with 99.999% availability.
This guide gives architecture and deployment recommendations along with a technical overview for a solution that provides a high level of high availability and assumes the usage of high read / write applications (20K or more queries per second). It also provides step-by-step deployment guidelines.
This solution assumes the use of Percona Server for MySQL based deployment variant of Percona Distribution for MySQL with Group Replication.
"},{"location":"pdps-group-replication.html#high-availability-overview","title":"High availability overview","text":"How to measure availability and at what point does it become \u201chigh\u201d availability?
Generally speaking, the measurement of availability is done by establishing a measurement time frame and dividing it by the time that it was available. This ratio will rarely be 1, which is equal to 100% availability. A solution is considered to be highly available if it is at least 99% or \u201ctwo nines\u201d available.
The following table provides downtime calculations per high availability level:
Availability, % Downtime per year Downtime per month Downtime per week Downtime per day 99% (\u201ctwo nines\u201d) 3.65 days 7.31 hours 1.68 hours 14.40 minutes 99.5% (\u201ctwo nines five\u201d) 1.83 days 3.65 hours 50.40 minutes 7.20 minutes 99.9% (\u201cthree nines\u201d) 8.77 hours 43.83 minutes 10.08 minutes 1.44 minutes 99.95% (\u201cthree nines five\u201d) 4.38 hours 21.92 minutes 5.04 minutes 43.20 seconds 99.99% (\u201cfour nines\u201d) 52.60 minutes 4.38 minutes 1.01 minutes 8.64 seconds 99.995% (\u201cfour nines five\u201d) 26.30 minutes 2.19 minutes 30.24 seconds 4.32 seconds 99.999% (\u201cfive nines\u201d) 5.26 minutes 26.30 seconds 6.05 seconds 864.00 milliseconds"},{"location":"pdps-group-replication.html#how-is-high-availability-achieved","title":"How is high availability achieved?","text":"There are three key components to achieve high availability:
-
Infrastructure - this is the physical or virtual hardware that database systems rely on to run. Without enough infrastructure (VM\u2019s, networking, etc.), there cannot be high availability. The easiest example is: there is no way to make a single server highly available
.
-
Topology management - this is the software management related specifically to the database and managing its ability to stay consistent in the event of a failure. Many clustering or synchronous replication solutions offer this capability out of the box. However, asynchronous replication is handled by additional software.
-
Connection management - this is the software management related specifically to the networking and connectivity aspect of the database. Clustering solutions typically bundle with a connection manager. However, in asynchronous clusters, deploying a connection manager is mandatory for high availability.
This solution is based on a tightly coupled database cluster. It offers a high availability level of 99.995% when coupled with the Group Replication setting group_replication_consistency=AFTER
.
"},{"location":"pdps-group-replication.html#failovers","title":"Failovers","text":"A database failure or configuration change that requires a restart should not affect the stability of the database infrastructure, if it is properly planned and architected. Failovers are an integral part of a stability strategy and aligning the business requirements for availability and uptime with failover methodologies is critical.
The following are the three main types of failovers that can occur in database environments:
-
Planned failover. This is a failover that has been scheduled in advance or occurs at a regular interval. There can be many reasons for planned failovers including patching, large data operations, retiring existing infrastructure, or simply to test the failover strategy.
-
Unplanned failover. This is what occurs when a database has unexpectedly become unresponsive or experiences instability. An unplanned failover could also include emergency changes that do not fall under the planned failover cadence or scheduling parameters. Unplanned failovers are generally considered higher risk operations due to the high stress and high potential for data corruption or data fragmentation.
-
Regional or disaster recovery (DR) failover. Unplanned failovers still work with the assumption that additional database infrastructure is immediately available and in a usable state. However, in a regional or DR failover, it is assumed that there is a large scale infrastructure outage which requires the business to move its operations away from its current availability zone.
"},{"location":"pdps-group-replication.html#maintenance-windows","title":"Maintenance windows","text":""},{"location":"pdps-group-replication.html#major-vs-minor-maintenance","title":"Major vs Minor maintenance","text":"Although it may not be obvious at first, not all maintenance activities are created equal and do not have the same dependencies. It is good to separate maintenance that demands downtime or failover from maintenance that can be done without impacting those important stability metrics. When defining these maintenance dependencies, there can be a change in the actual maintenance process that allows for a different cadence.
"},{"location":"pdps-group-replication.html#maintenance-without-service-interruption","title":"Maintenance without service interruption","text":"It is possible to cover both major and minor maintenance without service interruption with rolling restart and using proper version upgrade.
"},{"location":"pdps-group-replication.html#uptime","title":"Uptime","text":"When referring to database stability, uptime is likely the largest indicator of stability and often is the most obvious symptom of an unstable database environment. Uptime is composed of three key components and, contrary to common perception, is based on what happens when the database software cannot take incoming requests rather than maintain the ability to take requests with errors.
The uptime components are:
- Recovery Time Objective (RTO)
RTO can be characterized by a simple question \u201cHow long can the business sustain a database outage?\u201d Once the business is aligned with a minimum viable recovery time objective, it is much more straightforward to plan and invest in the infrastructure required to meet that requirement. It is important to acknowledge that while everyone desires 100% uptime, there need to be realistic expectations that align with the business needs and not a technical desire.
- Recovery Point Objective (RPO)
There is a big distinction between the Recovery Point and the Recovery Time for a database infrastructure. The database can be available, but not to the exact state that it was when it became unavailable. That is where Recovery Point comes in. The question to ask here is \u201cHow much data can the business lose during a database outage?\u201d All businesses have their own requirements here yet it is always the goal to never sustain any data loss. But this is framed in the worst case scenario, how much data could be lost and the business maintains the ability to continue.
- Disaster recovery
RTO and RPO are great for unplanned outages or small scale hiccups to the infrastructure. Disaster recovery is a major large scale outage not strictly for the database infrastructure. How capable is the business of restarting operations with the assumption that all resources are completely unavailable in the main availability zone? The assumption here is that there is no viable restoration point or time that aligns with the business requirements. While each disaster recovery scenario is unique based on available infrastructure, backup strategy and technology stack, there are some common threads for any scenario.
The described solution helps improve uptime. It will help you to significantly reduce both RPO and RTO. Given the tightly coupled cluster solution approach, the failure of a single node will not result in service interruption.
Increasing the number of nodes will also improve the cluster resilience by the formula:
F = (N -1) / 2\n
where:
-
F
is the number of admissible failures
-
N
is the number of nodes in the cluster.
"},{"location":"pdps-group-replication.html#example","title":"Example","text":" -
In a cluster of 5 nodes, F = (5 - 1)/2 = 2. The cluster can support up to 2 failures.
-
In a cluster of 4 nodes, F = (4 - 1)/2 = 1. The cluster can support up to 1 failure.
This solution also allows for a more restrictive backup policy, dedicating a node to the backup cycle, which will help in keeping RPO low.
As previously mentioned, this solution does not cover disaster recovery by default. It requires an additional replication setup and controller.
Based on the material from Percona Database Performance Blog
This document is based on the blog post Percona Distribution for MySQL: High Availability with Group Replication Solution by Marco Tusa
"},{"location":"pdps-group-replication.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes-ps-8.1.html","title":"Percona Distribution for MySQL 8.1.0 using Percona Server for MySQL (2023-11-27)","text":"Percona Distribution for MySQL is the most stable, scalable, and secure open source MySQL distribution based on Percona Server for MySQL. Install Percona Distribution for MySQL.
This release is based on Percona Server for MySQL 8.1.0-1.
"},{"location":"release-notes-ps-8.1.html#release-highlights","title":"Release highlights","text":"Percona Server for MySQL implements telemetry that fills in the gaps in our understanding of how you use Percona Server for MySQL to improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer not to share this information. Find more information in the Telemetry on Percona Server fo MySQL document.
The following user-defined function (UDF) shared objects (so) are converted to components:
- The
data_masking
plugin is converted into the component_masking_functions
component - The
binlogs_utils_udf
UDF shared object (.so) is converted to the component_binlog_utils
component - The
percona-udf
UDF shared object (.so) is converted to the component_percona-udf
component
A user does not need to execute a separate CREATE FUNCTION ... SONAME ...
statement for each function. Installing the components with the INSTALL COMPONENT 'file://component_xxx'
statement performs the auto-registration operations.
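For example, a minimal sketch for one of the components listed above (the exact component file name is an assumption based on the component names given in this list):
mysql> INSTALL COMPONENT 'file://component_binlog_utils';\n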
The keyring_vault
plugin is converted into the component_keyring_vault
component. This conversion aligns the keyring_vault with the KMIP and KMS keyrings and supports \u201cALTER INSTANCE RELOAD KEYRING\u201d to update the configuration automatically.
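A minimal sketch of the statement quoted above, which re-reads the keyring configuration without a server restart:
mysql> ALTER INSTANCE RELOAD KEYRING;\n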
The audit_log_filter
plugin is converted to the component_audit_log_filter
component. The following changes are also available:
- Adds the
mysql_event_tracking_parse
audit log event - Reworked, optimized, and reorganized the audit event data members
- Data deduplication within the audit event data members
The current version of percona-release
does not support the setup
subcommand with the pdps-8x-innovation
and pdps-8.1.0
repositories. Use percona-release enable
instead. The support of the pdps-8x-innovation
and pdps-8.1.0
repositories for the setup
subcommand will be added in the next release of percona-release
.
The PS 8.1.0 MTR suites are reorganized. The existing Percona-specific MTR test cases are regrouped and put into separate test suites:
- component_encryption_udf
- percona
- percona_innodb
Improvements and bug fixes introduced by Oracle for MySQL 8.1 and included in Percona Server for MySQL are the following:
-
The EXPLAIN FORMAT=JSON
statement can output its data to a user variable (see the sketch after this list).
-
New messages written to the MySQL error log during shutdown:
-
Startup and shutdown log messages, including when the server was started with --initialize
-
Start and end of shutdown phases for plugins and components
-
Start-of-phase and end-of-phase messages for connection closing phases
-
The number and ID of threads still alive after being forcibly disconnected and potentially causing a wait
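A minimal sketch of the EXPLAIN FORMAT=JSON improvement mentioned in the first item of this list (the table name t1 and the variable name @plan are placeholders):
mysql> EXPLAIN FORMAT=JSON INTO @plan SELECT * FROM t1;\nmysql> SELECT @plan;\n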
Find the full list of bug fixes and changes in the MySQL 8.1 Release Notes.
"},{"location":"release-notes-ps-8.1.html#deprecation-or-removal","title":"Deprecation or removal","text":" - The
mysql_native_password
authentication plugin is deprecated and subject to removal in a future version. - The TokuDB storage engine is removed. The following items are also removed:
- Percona-TokuBackup submodule
- PerconaFT submodule
- TokuDB storage engine code
- TokuDB MTR test suites
- plugin/tokudb-backup-plugin
- The MyRocks ZenFS is removed. The following items are also removed:
- zenfs submodule
- libzdb submodule
- RocksDB MTR changes are reverted
- Travis CI integration
- Support for
readline
as an alternative to the editline library is removed. - The
audit_log
(audit version 1) plugin is removed - The \u201cinclude/ext\u201d pre-C++17 compatibility headers are removed.
- The
keyring_vault
plugin is removed. - The
data_masking
UDF shared object (.so) is removed. - The
binlog_utils_udf
UDF shared object (.so) is removed. - The
percona_udf
UDF shared object (.so) is removed.
"},{"location":"release-notes-ps-8.1.html#platform-support","title":"Platform support","text":" - Percona Server for MySQL 8.1.0-1 is not supported on Ubuntu 18.04.
"},{"location":"release-notes-ps-8.1.html#supplied-components","title":"Supplied components","text":"Review each component\u2019s release notes for What\u2019s new, improvements, or bug fixes. The following is a list of the components supplied with the Percona Server for MySQL-based variation of the Percona Distribution for MySQL:
Component Version Description
Orchestrator 3.2.6-11 The replication topology manager for Percona Server for MySQL
ProxySQL 2.5.5 A high performance, high-availability, protocol-aware proxy for MySQL
Percona XtraBackup 8.1.0 An open-source hot backup utility for MySQL-based servers
Percona Toolkit 3.5.5 The set of scripts to simplify and optimize database operation
MySQL Shell 8.1.0 An advanced client and code editor for MySQL Server
MySQL Router 8.1.0 Lightweight middleware that provides transparent routing between your application and back-end MySQL servers
"},{"location":"release-notes-ps-8.1.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes.html","title":"Percona Distribution for MySQL 8.1 release notes index","text":" - Percona Distribution for MySQL using Percona Server for MySQL 8.1.0 (2023-11-27)
"},{"location":"release-notes.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"trademark-policy.html","title":"Trademark policy","text":"This Trademark Policy is to ensure that users of Percona-branded products or services know that what they receive has really been developed, approved, tested and maintained by Percona. Trademarks help to prevent confusion in the marketplace, by distinguishing one company\u2019s or person\u2019s products and services from another\u2019s.
Percona owns a number of marks, including but not limited to Percona, XtraDB, Percona XtraDB, XtraBackup, Percona XtraBackup, Percona Server, and Percona Live, plus the distinctive visual icons and logos associated with these marks. Both the unregistered and registered marks of Percona are protected.
Use of any Percona trademark in the name, URL, or other identifying characteristic of any product, service, website, or other use is not permitted without Percona\u2019s written permission, with the following three limited exceptions.
First, you may use the appropriate Percona mark when making a nominative fair use reference to a bona fide Percona product.
Second, when Percona has released a product under a version of the GNU General Public License (\u201cGPL\u201d), you may use the appropriate Percona mark when distributing a verbatim copy of that product in accordance with the terms and conditions of the GPL.
Third, you may use the appropriate Percona mark to refer to a distribution of GPL-released Percona software that has been modified with minor changes for the sole purpose of allowing the software to operate on an operating system or hardware platform for which Percona has not yet released the software, provided that those third party changes do not affect the behavior, functionality, features, design or performance of the software. Users who acquire this Percona-branded software receive substantially exact implementations of the Percona software.
Percona reserves the right to revoke this authorization at any time in its sole discretion. For example, if Percona believes that your modification is beyond the scope of the limited license granted in this Policy or that your use of the Percona mark is detrimental to Percona, Percona will revoke this authorization. Upon revocation, you must immediately cease using the applicable Percona mark. If you do not immediately cease using the Percona mark upon revocation, Percona may take action to protect its rights and interests in the Percona mark. Percona does not grant any license to use any Percona mark for any other modified versions of Percona software; such use will require our prior written permission.
Neither trademark law nor any of the exceptions set forth in this Trademark Policy permit you to truncate, modify or otherwise use any Percona mark as part of your own brand. For example, if XYZ creates a modified version of the Percona Server, XYZ may not brand that modification as \u201cXYZ Percona Server\u201d or \u201cPercona XYZ Server\u201d, even if that modification otherwise complies with the third exception noted above.
In all cases, you must comply with applicable law, the underlying license, and this Trademark Policy, as amended from time to time. For instance, any mention of Percona trademarks should include the full trademarked name, with proper spelling and capitalization, along with attribution of ownership to Percona Inc. For example, the full proper name for XtraBackup is Percona XtraBackup. However, it is acceptable to omit the word \u201cPercona\u201d for brevity on the second and subsequent uses, where such omission does not cause confusion.
In the event of doubt as to any of the conditions or exceptions outlined in this Trademark Policy, please contact trademarks@percona.com for assistance and we will do our very best to be helpful.
"},{"location":"trademark-policy.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"uninstalling.html","title":"Uninstalling Percona Distribution for MySQL","text":"To uninstall Percona Distribution for MySQL, stop the mysql
service and remove all the installed packages using the package manager of your operating system. Optionally, disable the Percona repository (see the sketch at the end of this procedure).
Note
Should you need the data files later, back up your data before uninstalling Percona Distribution for MySQL.
Important
Run all commands as the root user or via sudo.
On Debian / UbuntuOn Red Hat Enterprise Linux / derivatives -
Stop the mysql
service.
$ sudo systemctl stop mysql\n
-
Remove Percona Server for MySQL.
$ sudo apt remove percona-server*\n
-
Remove the components. Use the following commands to remove the required components.
- Remove Percona XtraBackup
$ sudo apt remove percona-xtrabackup-81\n
- Remove Percona Toolkit
$ sudo apt remove percona-toolkit\n
- Remove Orchestrator
$ sudo apt remove percona-orchestrator*\n
- Remove MySQL Shell
$ sudo apt remove percona-mysql-shell\n
- Remove ProxySQL
$ sudo apt remove proxysql2\n
- Remove MySQL Router
$ sudo apt remove percona-mysql-router\n
-
Stop the mysql
service.
$ sudo systemctl stop mysql\n
-
Remove Percona Server for MySQL.
$ sudo yum remove percona-server*\n
-
Remove the components. Use the commands below to remove the required components.
- Remove Percona XtraBackup
$ sudo yum remove percona-xtrabackup-81\n
- Remove Percona Toolkit
$ sudo yum remove percona-toolkit\n
- Remove Orchestrator
$ sudo yum remove percona-orchestrator*\n
- Remove MySQL Shell
$ sudo yum remove percona-mysql-shell\n
- Remove ProxySQL
$ sudo yum remove proxysql2\n
- Remove MySQL Router
$ sudo yum remove percona-mysql-router\n
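Optionally, disable the repository, as mentioned at the beginning of this page (a sketch; adjust the repository name to the one you enabled):
$ sudo percona-release disable pdps-8x-innovation\n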
"},{"location":"uninstalling.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Percona Distribution for MySQL 8.1 Documentation","text":"Percona Distribution for MySQL is a single solution with the best and most critical enterprise components from the MySQL open source community, designed and tested to work together. With Percona Server for MySQL as the base server, the distribution brings you the enterprise-grade features for free. The set of carefully selected components helps you operate your MySQL database to meet your application and business needs.
"},{"location":"index.html#features","title":"Features","text":" -
Increased stability and availability - a set of high-availability and backup options help you ensure your data is saved and available for your business applications.
-
Improved performance and efficiency - integrated tools help DBAs maintain, manage and monitor the database performance and timely respond to changing demands.
-
Reduced costs - save on purchasing software licensing by using the distribution - the open-source enterprise-grade solution.
-
Easy-to-integrate with PMM - benefit from all the features of PMM for monitoring and managing the health of your database.
"},{"location":"index.html#get-started","title":"Get started","text":"Follow the installation instructions to get started with Percona Distribution for MySQL.
Read more about solutions you can deploy with Percona Distribution for MySQL in High availability solution with Group Replication.
Learn more about what\u2019s new in Percona Distribution for MySQL in the release notes.
"},{"location":"index.html#read-more","title":"Read more","text":" - Deployment variants
- Percona Distribution for MySQL components
"},{"location":"index.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"404.html","title":"404 - Not Found","text":"We can\u2019t find the page you are looking for. Try using the Search or return to the homepage.
"},{"location":"404.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"architecture-components.html","title":"Architecture and components","text":"The following is the architecture layout for Percona Server for MySQL based deployment variant of Percona Distribution for MySQL with Group Replication.
"},{"location":"architecture-components.html#architecture-layout","title":"Architecture layout","text":""},{"location":"architecture-components.html#components","title":"Components","text":"The architecture is composed of two main layers:
-
Connection and distribution layer
-
Relational Database Management System (RDBMS) layer
"},{"location":"architecture-components.html#connection-and-distribution-layer","title":"Connection and distribution layer","text":"The connection and distribution layer consists of the following:
-
Application to proxy redirection mechanism. This mechanism can be anything from a Virtual IP managed by a local Keepalived service to a DNS resolution service like Amazon Route 53. The mechanism\u2019s function is to redirect the traffic to the active Proxy node.
-
Proxy connection distribution. The distribution consists of two or more nodes, and its role is to redirect the traffic to the active nodes of the Group Replication cluster. In cases like ProxySQL, where the proxy is a Layer 7 proxy and can perform a read / write split, this layer is also in charge of redirecting writes to the Primary node and reads to the replicas, and of providing high availability to prevent a single point of failure.
"},{"location":"architecture-components.html#rdbms-layer","title":"RDBMS layer","text":"The data layer consists of the following:
-
Primary (or source) node serving write requests. This is the node that accepts writes and DDL modifications. Data will be processed following the ACID (atomicity, consistency, isolation, durability) model and replicated to all other nodes.
-
Replica nodes serving read requests. Some replica nodes can be elected Primary in case of the Primary node\u2019s failure. A replica node should be able to leave and rejoin a healthy cluster without impacting the service.
-
Replication mechanism distributing changes across nodes. In this solution, it is done with Group Replication. Group Replication is a tightly coupled solution, which means that the database cluster is based on a data-centric approach (single state of the data, distributed commit). In this case, the data is consistent in time across nodes, though this type of replication requires a high-performance link. Given that, the main Group Replication mechanism does not implicitly support Disaster Recovery (DR), and geographic distribution is not permitted.
The node characteristics such as CPU/RAM/Storage are not relevant to the solution design. They must reflect the estimated workload that the solution will have to cover, and this is a case by case identification.
However, all nodes that are part of the cluster must have the same characteristics. Otherwise, the cluster is imbalanced and services will be affected.
As a generic indication, we recommend using nodes with at least 8 cores and 16GB of RAM when in production.
"},{"location":"architecture-components.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"components.html","title":"Components","text":"Percona Distribution for MySQL consists of the following components:
-
Percona Server for MySQL is a drop-in replacement for MySQL Community Edition with the enterprise-grade features embedded by Percona.
-
Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn\u2019t lock your database during the backup.
-
Orchestrator is the replication topology manager for Percona Server for MySQL.
-
ProxySQL is a high performance, high-availability, protocol-aware proxy for MySQL.
-
Percona Toolkit is the set of scripts to simplify and optimize database operation.
-
MySQL Shell is an advanced client and code editor for MySQL Server.
-
MySQL Router is lightweight middleware that provides transparent routing between your application and back-end MySQL servers.
"},{"location":"components.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"copyright.html","title":"Copyright and licensing information","text":""},{"location":"copyright.html#documentation-licensing","title":"Documentation licensing","text":"Percona Distribution for MySQL documentation is (C)2009-2023 Percona LLC and/or its affiliates and is distributed under the Creative Commons Attribution 4.0 International License.
"},{"location":"copyright.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"deploy-pdps-group-replication.html","title":"Deploying high availability solution with Group Replication","text":"This document provides step-by-step instructions on how to deploy high availability solution with Group Replication.
"},{"location":"deploy-pdps-group-replication.html#preconditions","title":"Preconditions","text":"We will use the following elements:
-
1 Virtual IP for ProxySQL failover - 192.168.4.194
-
2 ProxySQL nodes
- Proxy1 192.168.4.191
- Proxy2 192.168.4.192
-
4 MySQL nodes in Single Primary mode
- Gr1 192.168.4.81 - Initial Primary
- Gr2 192.168.4.82 - Replica / failover
- Gr3 192.168.4.83 - Replica / failover
- Gr4 192.168.4.84 - Replica / Backup
-
All of the following ports must be open if a firewall or any other restriction, such as AppArmor or SELinux, is in place.
-
ProxySQL:
- 6033
- 6032
- 3306
-
MySQL - Group Replication:
- 3306
- 33060
- 33061
"},{"location":"deploy-pdps-group-replication.html#nodes-configuration","title":"Nodes configuration","text":""},{"location":"deploy-pdps-group-replication.html#preparation","title":"Preparation","text":" -
Install Percona Server-based variant of Percona Distribution for MySQL on each MySQL node (Gr1-Gr4).
-
Make sure that all the nodes use the same time zone and time:
$ date\nTue Aug 18 08:22:12 EDT 2020\n
-
Also check that the ntpd
service is present and enabled.
-
Make sure that each node resolves the other nodes by name
for i in 1 2 3 4 ; do ping -c 1 gr$i > /dev/null;echo $?; done\n
If the nodes cannot resolve each other by name, add the corresponding entries in the /etc/hosts
file.
-
After instances are up and running, check Percona Server for MySQL version on each node:
mysql>\\s\n--------------\n/opt/mysql_templates/PS-8P/bin/mysql Ver 8.1.0-1 for Linux on x86_64 (Percona Server (GPL), Release 11, Revision 159f0eb)\n
"},{"location":"deploy-pdps-group-replication.html#step-1-create-an-administration-user","title":"Step 1 Create an administration user","text":" -
Create a user for administration. We will use the user dba
in our setup:
CREATE user dba@localhost identified by 'dbapw';\nCREATE user dba@'192.168.%' identified by 'dbapw';\n\nGRANT ALL on *.* to dba@localhost with grant option;\nGRANT ALL on *.* to dba@'192.168.%' with grant option;\n
Log out from the client as the root user and log in as the dba
user.
-
Make sure each node has a unique server_id value:
mysql> show global variables like 'server_id';\n+---------------+-------+\n| Variable_name | Value |\n+---------------+-------+\n| server_id | 1 |\n+---------------+-------+\n1 row in set (0.01 sec)\n
The server_id
value must be unique on each node.
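A minimal sketch of the corresponding my.cnf entry (the value shown is an example; assign a different number on every node):
[mysqld]\nserver_id = 1\n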
"},{"location":"deploy-pdps-group-replication.html#step-2-add-group-replication-settings","title":"Step 2. Add Group Replication settings","text":" -
Stop all the nodes
$ service mysql stop\n
-
In the my.cnf
configuration file, add the following:
#####################\n#Replication + binlog settings\n#####################\nauto-increment-increment =1\nauto-increment-offset =1\n\nlog-bin =<path_to_logs>/binlog\nlog-bin-index =binlog.index\nbinlog-checksum =NONE\nbinlog-format =ROW\nbinlog-row-image =FULL\nlog-slave-updates =1\nbinlog-transaction-dependency-tracking =WRITESET_SESSION\n\nenforce-gtid-consistency =TRUE\ngtid-mode =ON\n\nmaster-info-file =master.info\nmaster-info-repository =TABLE\nrelay_log_info_repository =TABLE\nrelay-log =<path_to_logs>/relay\n\nsync-binlog =1\n\n### SLAVE SECTION\nskip-slave-start\nslave-parallel-type = LOGICAL_CLOCK\nslave-parallel-workers = 4\nslave-preserve-commit-order = 1\n\n\n######################################\n#Group Replication\n######################################\nplugin_load_add ='group_replication.so'\nplugin-load-add ='mysql_clone.so'\ngroup_replication_group_name =\"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa\" #<-- Not good. Use something that will help you identify the GR transactions and where they come from, e.g. \"dc1euz1-aaaa-aaaa-aaaa-aaaaaaaaaaaa\"\ngroup_replication_start_on_boot =off\ngroup_replication_local_address = \"192.168.4.81/2/3/4:33061\" #<---- CHANGE THIS TO MATCH EACH NODE'S LOCAL IP\ngroup_replication_group_seeds = \"192.168.4.81:33061,192.168.4.82:33061,192.168.4.83:33061,192.168.4.84:33061\"\ngroup_replication_bootstrap_group = off\ntransaction-write-set-extraction = XXHASH64\n
-
Restart all nodes:
$ service mysql start\n
-
Connect to the nodes
"},{"location":"deploy-pdps-group-replication.html#step-3-create-a-replication-user","title":"Step 3. Create a replication user","text":" -
On every node, create a user for replication
SET SQL_LOG_BIN=0;\n CREATE USER replica@'192.168.4.%' IDENTIFIED BY 'replicapw'; #<--- Please note the filter by IP is more restrictive\n GRANT REPLICATION SLAVE ON *.* TO replica@'192.168.4.%';\n FLUSH PRIVILEGES;\n SET SQL_LOG_BIN=1;\n
-
Link the nodes with the replication channel.
CHANGE MASTER TO MASTER_USER='replica', MASTER_PASSWORD='replicapw' FOR CHANNEL 'group_replication_recovery';\n
Run this command on all nodes.
-
Check the current status:
(dba@node1) [(none)]>\\u performance_schema\n (dba@node1) [performance_schema]>show tables like '%repl%';\n +-------------------------------------------+\n | Tables_in_performance_schema (%repl%) |\n +-------------------------------------------+\n | replication_applier_configuration |\n | replication_applier_filters |\n | replication_applier_global_filters |\n | replication_applier_status |\n | replication_applier_status_by_coordinator |\n | replication_applier_status_by_worker |\n | replication_connection_configuration |\n | replication_connection_status |\n | replication_group_member_stats |\n | replication_group_members | <------------------------\n +-------------------------------------------+\n\n (dba@node1) [performance_schema]>select * from replication_group_members\\G\nCHANNEL_NAME: group_replication_applier\n MEMBER_ID:\n MEMBER_HOST:\n MEMBER_PORT:\n MEMBER_STATE:\n MEMBER_ROLE: OFFLINE\nMEMBER_VERSION:\n1 row in set (0.00 sec)\n
At this stage, you should be able to start the first (Primary) cluster node.
-
Start the Primary node (Gr1) and enable Group Replication:
(dba@node1)[none]> SET GLOBAL group_replication_bootstrap_group=ON;\n(dba@node1)[none]> START GROUP_REPLICATION;\n(dba@node1)[none]> SET GLOBAL group_replication_bootstrap_group=OFF;\n
-
Check if the node registered correctly:
(dba@node1) [none]>select * from performance_schema.replication_group_members\\G\n CHANNEL_NAME: group_replication_applier\n MEMBER_ID: 90a353b8-e6dc-11ea-98fa-08002734ed50\n MEMBER_HOST: gr1\n MEMBER_PORT: 3306\n MEMBER_STATE: ONLINE\n MEMBER_ROLE: PRIMARY\nMEMBER_VERSION: 8.1.0\n
-
Once the Primary node is running, connect to the secondary node (Gr2 node) and enable Group Replication:
(dba@node2) [none]>START GROUP_REPLICATION;\nQuery OK, 0 rows affected (4.60 sec)\n
-
Check if the secondary node registered correctly:
(dba@node2) [performance_schema]>select * from replication_group_members\\G\n*************************** 1. row ***************************\n CHANNEL_NAME: group_replication_applier\n MEMBER_ID: 58ffd118-e6dc-11ea-8af8-08002734ed50\n MEMBER_HOST: gr2\n MEMBER_PORT: 3306\n MEMBER_STATE: ONLINE\n MEMBER_ROLE: SECONDARY\nMEMBER_VERSION: 8.1.0\n*************************** 2. row ***************************\n CHANNEL_NAME: group_replication_applier\n MEMBER_ID: 90a353b8-e6dc-11ea-98fa-08002734ed50\n MEMBER_HOST: gr1\n MEMBER_PORT: 3306\n MEMBER_STATE: ONLINE\n MEMBER_ROLE: PRIMARY\nMEMBER_VERSION: 8.1.0\n
-
Test the replication:
- On the Primary node, run the following command:
(dba@node1) [performance_schema]>create schema test;\nQuery OK, 1 row affected (0.76 sec)\n\n(dba@node1) [performance_schema]>\\u test\nDatabase changed\n\n(dba@node1) [test]>create table test1 (`id` int auto_increment primary key);\nQuery OK, 0 rows affected (0.32 sec)\n\n(dba@node1) [test]>insert into test1 values(null);\nQuery OK, 1 row affected (0.34 sec)\n
- On the secondary node:
(dba@node2) [performance_schema]>\\u test\n Database changed\n (dba@node2) [test]>select * from test1;\n +----+\n | id |\n +----+\n | 1 |\n +----+\n 1 row in set (0.00 sec)\n
-
Start Group Replication on the remaining nodes
(dba@node3) [performance_schema]>START GROUP_REPLICATION;\n(dba@node4) [performance_schema]>START GROUP_REPLICATION;\n
"},{"location":"deploy-pdps-group-replication.html#proxy-setup","title":"Proxy setup","text":""},{"location":"deploy-pdps-group-replication.html#step-1-installation","title":"Step 1. Installation","text":" -
Install ProxySQL. In our example, we install ProxySQL on Proxy1 192.168.4.191 and Proxy2 192.168.4.192 nodes.
-
Create the monitoring user on MySQL Group Replication nodes:
create user monitor@'192.168.4.%' identified by 'monitor';\ngrant usage on *.* to 'monitor'@'192.168.4.%';\ngrant select on sys.* to 'monitor'@'192.168.4.%';\n
-
Define basic variables:
update global_variables set Variable_Value='admin:admin;cluster1:clusterpass' where Variable_name='admin-admin_credentials';\nupdate global_variables set variable_value='cluster1' where variable_name='admin-cluster_username';\nupdate global_variables set variable_value='clusterpass' where variable_name='admin-cluster_password';\nupdate global_variables set Variable_Value=0 where Variable_name='mysql-hostgroup_manager_verbose';\nupdate global_variables set Variable_Value='true' where Variable_name='mysql-query_digests_normalize_digest_text';\nupdate global_variables set Variable_Value='8.1.0' where Variable_name='mysql-server_version';\nupdate global_variables set Variable_Value='utf8' where Variable_name='mysql-default_charset';\nupdate global_variables set Variable_Value=300 where Variable_name='mysql-tcp_keepalive_time';\nupdate global_variables set Variable_Value='true' where Variable_name='mysql-use_tcp_keepalive';\nupdate global_variables set Variable_Value='true' where Variable_name='mysql-verbose_query_error';\nupdate global_variables set Variable_Value='true' where Variable_name='mysql-show_processlist_extended';\nupdate global_variables set Variable_Value=50000 where Variable_name='mysql-max_stmts_cache';\nupdate global_variables set Variable_Value='false' where Variable_name='admin-web_enabled';\nupdate global_variables set Variable_Value='0' where Variable_name='mysql-set_query_lock_on_hostgroup';\n\nload admin variables to run;save admin variables to disk;\nload mysql variables to run;save mysql variables to disk;\n
Note
The user name and password need to reflect your standards. The ones used above are just an example.
-
Set up the nodes as a cluster:
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES('192.168.4.191',6032,100,'PRIMARY');\nINSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES('192.168.4.192',6032,100,'SECONDARY');\nload proxysql servers to run;save proxysql servers to disk;\n
"},{"location":"deploy-pdps-group-replication.html#step-2-define-users-servers-and-query-rules-for-read-write-split","title":"Step 2. Define users, servers and query rules for read / write split","text":" -
Create one or more valid users. For example, if you have a user named app_gr
with the password test
that has access to your Group Replication cluster, the command to define the user is the following:
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent,comment) values ('app_gr','test',1,400,'mysql',1,'application test user GR');\nLOAD MYSQL USERS TO RUNTIME;SAVE MYSQL USERS TO DISK;\n
-
Define servers:
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.81',400,3306,10000,2000,'GR1');\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.81',401,3306,100,2000,'GR1');\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.82',401,3306,10000,2000,'GR2');\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.83',401,3306,10000,2000,'GR3');\nINSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.84',401,3306,1,2000,'GR4');\nLOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;\n
-
Define query rules to get read / write split:
INSERT INTO mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) values(4040,6033,'app_gr',400,1,3,'^SELECT.*FOR UPDATE',1);\nINSERT INTO mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,multiplex,apply) values(4042,6033,'app_gr',401,1,3,'^SELECT.*$',1,1);\nLOAD MYSQL QUERY RULES TO RUN;SAVE MYSQL QUERY RULES TO DISK;\n
"},{"location":"deploy-pdps-group-replication.html#step-3-create-a-view-in-sys-schema","title":"Step 3. Create a view in SYS schema","text":"Once all the configuration is ready, we need to have a special view in the SYS schema in Percona server nodes. Find the view working for the server version 8 and above here.
Run that sql on the Primary node of the Group Replication cluster.
"},{"location":"deploy-pdps-group-replication.html#step-4-activate-support-for-group-replication-in-proxysql","title":"Step 4. Activate support for Group Replication in ProxySQL","text":"To activate the native support for Group Replication in ProxySQL, we will use the following group definition:
Writer HG-> 400\nReader HG-> 401\nBackupW HG-> 402\nOffline HG-> 9401\n
INSERT INTO mysql_group_replication_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup, offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind)\nvalues (400,402,401,9401,1,1,1,100);\nLOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;\n
"},{"location":"deploy-pdps-group-replication.html#comments-about-parameters","title":"Comments about parameters","text":"To obtain the most reliable results, we recommend setting the number of writers always to 1, and writer_is_also_reader
to 1 as well.
max_writers: 1\nwriter_is_also_reader: 1\n
The max_transactions_behind
is a subjective parameter that you should calculate based on your needs. If, for instance, you cannot tolerate a stale read, it is safe to set this value to a low number (for example, 50) and to set the following on all Group Replication nodes:
set global group_replication_consistency=AFTER;\n
If instead you have no strict requirements about stale reads, you can relax the parameter and ignore the group_replication_consistency
setting. Our recommended setting is group_replication_consistency=AFTER
and max_transactions_behind: 100
.
See also
ProxySQL Documentation: mysql_group_replication_hostgroups
"},{"location":"deploy-pdps-group-replication.html#step-5-enable-high-availability-for-proxysql","title":"Step 5. Enable high availability for ProxySQL","text":"keepalived
will be used to enable High Availability for ProxySQL.
-
Install keepalived
on each ProxySQL node using the package manager of your operating system:
on Debian/UbuntuOn RHEL/derivatives $ sudo apt install -y keepalived\n
$ sudo yum install -y keepalived\n
-
Modify the /etc/keepalived/keepalived.conf
file according to your setup. In our case:
-
Proxy1 192.168.4.0/24 dev enp0s9 proto kernel scope link src 192.168.4.191
-
Proxy2 192.168.4.0/24 dev enp0s9 proto kernel scope link src 192.168.4.192
-
VIP 192.168.4.194
Let\u2019s say Proxy1 is the primary node while Proxy2 is the secondary node.
Given that, the config file looks as follows:
global_defs {\n # Keepalived process identifier\n router_id proxy_HA\n}\n# Script used to check if Proxy is running\nvrrp_script check_proxy {\n script \"killall -0 proxysql\"\n interval 2\n weight 2\n}\n# Virtual interface\n# The priority specifies the order in which the assigned interface takes over in a failover\nvrrp_instance VI_01 {\n state MASTER\n interface enp0s9\n virtual_router_id 51\n priority 100 # <----- This needs to be different for each ProxySQL node, for example 100/99\n\n # The virtual ip address shared between the two load balancers\n virtual_ipaddress {\n 192.168.4.194 dev enp0s9\n }\n track_script {\n check_proxy\n }\n}\n
-
Start the keepalived
service. From now on, the VIP will be associated with Proxy1 unless the service is down.
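A sketch of starting the service on a systemd-based distribution:
$ sudo systemctl start keepalived\n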
"},{"location":"deploy-pdps-group-replication.html#disaster-recovery-implementation","title":"Disaster recovery implementation","text":"The implementation of a DR (Disaster Recovery) site will follow the same direction provided for the main site. There are only some generic rules to follow:
-
A DR site should be located in a different geographic location than the main site (several hundred kilometers/miles away).
-
The connection link between the main site and the DR site can only be established using asynchronous replication (standard MySQL replication setup).
"},{"location":"deploy-pdps-group-replication.html#monitoring","title":"Monitoring","text":""},{"location":"deploy-pdps-group-replication.html#using-percona-management-and-monitoring-pmm","title":"Using Percona Management and Monitoring (PMM)","text":" -
Use this quickstart to install Percona Monitoring and Management (PMM).
-
Specify the replication_set
flag when registering the Percona Server for MySQL node or the MySQL node in PMM:
pmm-admin add mysql --username=pmm --password=pmm --query-source=perfschema --replication-set=gr_test_lab group_rep4 127.0.0.1:3306\n
Then you can use the Group Replication Dashboard and monitor your cluster in detail.
The dashboard sections are the following:
-
Overview:
-
Replication delay details
-
Transactions
-
Conflicts
"},{"location":"deploy-pdps-group-replication.html#using-command-line","title":"Using command line","text":"From the command line, you need to manually query the tables in Performance schema:
+----------------------------------------------+\n| replication_applier_configuration |\n| replication_applier_filters |\n| replication_applier_global_filters |\n| replication_applier_status |\n| replication_applier_status_by_coordinator |\n| replication_applier_status_by_worker |\n| replication_connection_configuration |\n| replication_connection_status |\n| replication_group_member_stats |\n| replication_group_members |\n+----------------------------------------------+\n
For example, use this command to get the lag in number of transactions on a node:
select @last_exec:=SUBSTRING_INDEX(SUBSTRING_INDEX( @@global.GTID_EXECUTED,':',-1),'-',-1) last_executed;select @last_rec:=SUBSTRING_INDEX(SUBSTRING_INDEX(Received_transaction_set,':',-1),'-',-1) last_received FROM performance_schema.replication_connection_status WHERE Channel_name = 'group_replication_applier'; select (@last_rec - @last_exec) as real_lag;\n+---------------+\n| last_executed |\n+---------------+\n| 125624 |\n+---------------+\n1 row in set, 1 warning (0.03 sec)\n\n+---------------+\n| last_received |\n+---------------+\n| 125624 |\n+---------------+\n1 row in set, 1 warning (0.00 sec)\n\n+----------+\n| real_lag |\n+----------+\n| 0 |\n+----------+\n1 row in set (0.00 sec)\n
You can use a more composite query to get information about each applier:
SELECT\n conn_status.channel_name as channel_name,\n conn_status.service_state as IO_thread,\n applier_status.service_state as SQL_thread,\n conn_status.LAST_QUEUED_TRANSACTION as last_queued_transaction,\n applier_status.LAST_APPLIED_TRANSACTION as last_applied_transaction,\n LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP -\n LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP 'rep delay (sec)',\n LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP -\n LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP 'transport time',\n LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP -\n LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP 'time RL',\n LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP -\n LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP 'apply time',\n if(GTID_SUBTRACT(LAST_QUEUED_TRANSACTION, LAST_APPLIED_TRANSACTION) = \"\",\"0\" , abs(time_to_sec(if(time_to_sec(APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP)=0,0,timediff(APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP,now()))))) `lag_in_sec`\nFROM\n performance_schema.replication_connection_status AS conn_status\nJOIN performance_schema.replication_applier_status_by_worker AS applier_status\n ON applier_status.channel_name = conn_status.channel_name\nORDER BY lag_in_sec, lag_in_sec desc\\G\n
Expected output *************************** 1. row ***************************\nchannel_name: group_replication_applier\nIO_thread: ON\nSQL_thread: ON\nlast_queued_transaction: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:125624\nlast_applied_transaction: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:125621\nrep delay (sec): 3.153038\ntransport time: 0.061327\ntime RL: 0.001005\napply time: 0.388680\nlag_in_sec: 0\n
Based on the material from Percona Database Performance Blog
This document is based on the blog post Percona Distribution for MySQL: High Availability with Group Replication Solution by Marco Tusa
"},{"location":"deploy-pdps-group-replication.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"deployment-variants.html","title":"Deployment variants","text":"Percona Distribution for MySQL provides two deployment variants: one is Percona Server for MySQL-based with asynchronous replication and another one is Percona Server for MySQL-based with group replication. The table below lists what components are available with Percona Server for MySQL:
Components Percona Server for MySQL
Orchestrator YES
HAProxy NO
ProxySQL YES
Percona XtraBackup YES
Percona Toolkit YES
MySQL Shell YES
MySQL Router YES
"},{"location":"deployment-variants.html#what-deployment-variant-to-choose","title":"What deployment variant to choose?","text":"The Percona Server-based deployment variant with asynchronous replication utilizes the primary / secondary replication model. It enables you to create geographically distributed infrastructures with support for disaster recovery. However, this deployment variant does not guarantee data consistency on all nodes at any given moment and provides high availability of up to 4 nines.
The Percona Server-based deployment variant with Group Replication enables you to create fault-tolerant systems with redundancy by replicating the system state to a set of servers. Percona Server for MySQL-based deployment with Group Replication offers a high grade of high availability (4-5 nines) and almost instant failover when associated with a proxy.
"},{"location":"deployment-variants.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"downgrade.html","title":"Downgrade Percona Distribution for MySQL","text":"Following the MySQL downgrade policy, the downgrade to a previous version of Percona Distribution of MySQL is not supported.
"},{"location":"downgrade.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"glossary.html","title":"Glossary","text":""},{"location":"glossary.html#acid","title":"ACID","text":"Set of properties that guarantee database transactions are processed reliably. Stands for Atomicity
, Consistency
, Isolation
, Durability
.
"},{"location":"glossary.html#asynchronous-replication","title":"Asynchronous replication","text":"Asynchronous replication is a technique where data is first written to the primary node. After the primary acknowledges the write, the data is written to secondary nodes.
"},{"location":"glossary.html#atomicity","title":"Atomicity","text":"Atomicity means that database operations are applied following an \u201call or nothing\u201d rule. A transaction is either fully applied or not at all.
"},{"location":"glossary.html#consistency","title":"Consistency","text":"In the context of backup and restore, consistency means that the data restored will be consistent in a given point in time. Partial or incomplete writes to disk of atomic operations (for example, to table and index data structures separately) won\u2019t be served to the client after the restore. The same applies to multi-document transactions, that started but didn\u2019t complete by the time the backup was finished.
"},{"location":"glossary.html#disaster-recovery","title":"Disaster recovery","text":"Disaster recovery are means to regain access and functionality of a database infrastructure after unplanned events that caused its failure.
"},{"location":"glossary.html#downtime","title":"Downtime","text":"Downtime is the period when a database infrastructure is unavailable due to expected (maintenance) or unexpected (outage, lost connectivity, hardware failure, etc.) reasons.
"},{"location":"glossary.html#durability","title":"Durability","text":"Once a transaction is committed, it will remain so.
"},{"location":"glossary.html#failover","title":"Failover","text":"Failover is switching automatically and seamlessly to a reliable backup system.
"},{"location":"glossary.html#general-availability-ga","title":"General availability (GA)","text":"A finalized version of the product which is made available to the general public. It is the final stage in the software release cycle.
"},{"location":"glossary.html#gtid","title":"GTID","text":"A global transaction identifier (GTID) is a unique identifier created and associated with each transaction committed on the server of the source. This identifier is unique across all servers in a given replication topology.
"},{"location":"glossary.html#high-availability","title":"High availability","text":"A high availability is the ability of a system to operate continuously without failure for a long time.
"},{"location":"glossary.html#isolation","title":"Isolation","text":"The Isolation requirement means that no transaction can interfere with another.
"},{"location":"glossary.html#loosely-coupled-cluster","title":"Loosely-coupled cluster","text":"A loosely-coupled cluster is the deployment where cluster nodes are independent in processing / applying transactions. Data state may not always be consistent in time on all nodes; however, a single node state does not affect the cluster. Loosely-coupled clusters use asynchronous replication and can be geographically distributed and/or serve as the disaster recovery site.
"},{"location":"glossary.html#multi-source-replication","title":"Multi-source replication","text":"A multi-source replication topology requires at least one replica synchronized with at least two sources. The transactions can be received in parallel because the replica creates a separate replication channel for each source.
Multi-source replication allows a single server to back up or consolidate data from multiple servers. This type of replication also lets you merge table shards.
"},{"location":"glossary.html#nines-of-availability","title":"Nines of availability","text":"Nines of availability refer to system availability as a percentage of total system time.
"},{"location":"glossary.html#semi-synchronous-replication","title":"Semi-synchronous replication","text":"A semi-synchronous replication is a technique where the primary node wait for at least one of the secondaries to acknowledge the transaction before processing further transactions.
"},{"location":"glossary.html#synchronous-replication","title":"Synchronous replication","text":"A synchronous replication is a technique when data is written to the primary and secondary nodes simultaneously. Thus, both primary and secondaries are in sync and failover from the primary to one of the secondaries is possible any time.
"},{"location":"glossary.html#tech-preview","title":"Tech preview","text":"A tech preview item can be a feature, a variable, or a value within a variable. The term designates that the item is not yet ready for production use and is not included in support by SLA. A tech preview item is included in a release so that users can provide feedback. The item is either updated and released as general availability(GA) or removed if not useful. The item\u2019s functionality can change from tech preview to GA.
"},{"location":"glossary.html#tightly-coupled-cluster","title":"Tightly-coupled cluster","text":"A tightly-coupled cluster is the deployment in which transactions and information is synchronously distributed, consistent and available on all cluster nodes at any time.
"},{"location":"glossary.html#uptime","title":"Uptime","text":"Uptime is the time when the system is continuously available.
"},{"location":"glossary.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"installing.html","title":"Install Percona Distribution for MySQL","text":"We recommend to install Percona Distribution for MySQL from Percona repositories using the package manager of your operating system:
apt
- for Debian and Ubuntu Linux yum
- for Red Hat Enterprise Linux and compatible Linux derivatives
Find the full list of supported platforms on the Percona Software and Platform Lifecycle page.
Repository overview: Major and Minor repositories Percona provides two repositories for every deployment variant of Percona Distribution for MySQL.
The Major Release repository includes the latest version packages (for example, pdps-8x-innovation
). Whenever a package is updated, the package manager of your operating system detects that and prompts you to update. As long as you update all Distribution packages at the same time, you can ensure that the packages you\u2019re using have been tested and verified by Percona. Installing Percona Distribution for MySQL from the Major Release Repository is the recommended method.
The Minor Release repository includes a particular minor release of the database and all of the packages that were tested and verified to work with that minor release (for example, pdps-8.1.0
). You may choose to install Percona Distribution for MySQL from the Minor Release repository if you have decided to standardize on a particular release which has passed rigorous testing procedures and which has been verified to work with your applications. This allows you to deploy to a new host and ensure that you\u2019ll be using the same version of all the Distribution packages, even if newer releases exist in other repositories.
The disadvantage of using a Minor Release repository is that you are locked in this particular release. When potentially critical fixes are released in a later minor version of the database, you will not be prompted for an upgrade by the package manager of your operating system. You would need to change the configured repository in order to install the upgrade.
"},{"location":"installing.html#prerequisites","title":"Prerequisites","text":"To install Percona software, you need to configure the required repository. To simplify this process, use the percona-release
repository management tool.
-
Install GnuPG and curl
$ sudo apt install gnupg2 curl\n
-
Install percona-release. If you have it installed, update percona-release to the latest version.
"},{"location":"installing.html#procedure","title":"Procedure","text":"On Debian and Ubuntu LinuxOn Red Hat Enterprise Linux and derivatives Important
Run the following commands as the root user or via sudo
.
Platform specific notes
On CentOS 7, install the epel-release
package. It includes the dependencies required to install Orchestrator. Use the following command:
$ sudo yum -y install epel-release\n
Run the following commands as the root user or via sudo
.
"},{"location":"installing.html#enable-percona-repository","title":"Enable Percona repository","text":"To enable the desired repository, we recommend to use the enable
subcommand of percona-release
.
$ sudo percona-release enable pdps-8x-innovation\n
Tip
To enable the minor version repository, use the following command:
$ sudo percona-release enable pdps-8.1.0\n
"},{"location":"installing.html#install-percona-distribution-for-mysql-packages","title":"Install Percona Distribution for MySQL packages","text":" -
Install Percona Server for MySQL:
$ sudo apt install percona-server-server\n
-
Install the components. Use the commands below to install the required components:
Install Percona XtraBackup:
$ sudo apt install percona-xtrabackup-81\n
Install Percona Toolkit:
$ sudo apt install percona-toolkit\n
Install Orchestrator:
$ sudo apt install percona-orchestrator percona-orchestrator-cli percona-orchestrator-client\n
Install MySQL Shell:
$ sudo apt install percona-mysql-shell\n
Install ProxySQL:
$ sudo apt install proxysql2\n
Install MySQL Router:
$ sudo apt install percona-mysql-router\n
"},{"location":"installing.html#enable-percona-repository_1","title":"Enable Percona repository","text":"To enable the desired repository, we recommend to use the enable
subcommand of percona-release
.
$ sudo percona-release enable pdps-8x-innovation\n
Tip
To enable the minor version repository, use the following command:
$ sudo percona-release enable pdps-8.1.0\n
"},{"location":"installing.html#install-percona-distribution-for-mysql-packages_1","title":"Install Percona Distribution for MySQL packages","text":" -
Install Percona Server for MySQL:
$ sudo yum install percona-server-server\n
-
Install the components. Use the commands below to install the required components:
Install Percona XtraBackup
$ sudo yum install percona-xtrabackup-81\n
Install Orchestrator
$ sudo yum install percona-orchestrator percona-orchestrator-cli percona-orchestrator-client\n
Install Percona Toolkit
$ sudo yum install percona-toolkit\n
Install MySQL Shell:
$ sudo yum install percona-mysql-shell\n
Install ProxySQL:
$ sudo yum install proxysql2\n
Install MySQL Router:
$ sudo yum install percona-mysql-router\n
"},{"location":"installing.html#run-percona-distribution-for-mysql","title":"Run Percona Distribution for MySQL","text":"Percona Distribution for MySQL is not started automatically on Red Hat Enterprise Linux and CentOS after the installation is complete.
Start it manually using the following command:
$ sudo systemctl start mysql\n
Confirm that the service is running:
$ sudo systemctl status mysql\n
Stop the service:
$ sudo systemctl stop mysql\n
"},{"location":"installing.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"minor-upgrade.html","title":"Upgrade Percona Distribution for MySQL","text":"Minor releases include bug fixes and feature enhancements. We recommend to have Percona Distribution for MySQL updated to the latest version.
Though minor releases don\u2019t change the behavior, even a minor upgrade is a risky process. We recommend to back up your data before upgrading.
"},{"location":"minor-upgrade.html#preconditions","title":"Preconditions","text":"To upgrade Percona Distribution for MySQL, install the percona-release
repository management tool or update it to the latest version.
"},{"location":"minor-upgrade.html#procedure","title":"Procedure","text":"Important
Run the following commands as the root user or via sudo
.
-
Enable Percona repository
The Major Release repository automatically includes new version packages of Percona Distribution for MySQL. If you installed Percona Distribution for MySQL from a Minor Release repository, enable the new version repository:
$ sudo percona-release setup pdps-XXX \n
where XXX
is the required version.
Read more about major and Minor release repositories in Repository overview.
-
Stop mysql
service
$ sudo systemctl stop mysql\n
-
Install new version packages using the package manager of your operating system.
-
Restart mysql
service:
$ sudo systemctl start mysql\n
To upgrade the components, refer to Installing Percona Distribution for MySQL for installation instructions relevant to your operating system.
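For example, on a Debian-based system the complete minor-upgrade flow sketched above might look like this (a sketch, assuming the Major Release repository and the percona-server-server package; adjust for your repository and operating system):
$ sudo percona-release enable pdps-8x-innovation\n$ sudo systemctl stop mysql\n$ sudo apt update\n$ sudo apt install --only-upgrade percona-server-server\n$ sudo systemctl start mysql\n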
"},{"location":"minor-upgrade.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"monitoring.html","title":"Measurement and monitoring","text":"To ensure that database infrastructure is performing as intended or at its best, specific metrics need to be measured and alerts are to be raised when some of these metrics are not in line with expectations. A periodic review of these measurements is also encouraged to promote stability and understand potential risks associated with the database infrastructure.
The following are the 3 aspects of database performance measurement and monitoring:
-
Measurement - to understand how a database infrastructure is performing, multiple aspects of the infrastructure need to be measured. With measurement it\u2019s important to understand the impact of the sample sizes, sample timing, and sample types.
-
Metrics - metrics refer to the actual parts of the database infrastructure being measured. When we discuss metrics, more isn\u2019t always better as it could introduce unintentional noise or make troubleshooting overly burdensome.
-
Alerting - when one or more metrics of the database infrastructure are not within a normal or acceptable range, an alert should be generated so that the team responsible for the appropriate portion of the database infrastructure can investigate and remedy it.
Monitoring and measurement for this solution are covered by Percona Monitoring and Management. It has a specific dashboard to monitor the Group Replication state and cluster status as a whole. For more information, read Percona Monitoring and Management Documentation.
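For example, registering a node with an existing PMM server so that it appears on the Group Replication dashboard might look like this (a sketch: the server URL, credentials, and service name are placeholders, and flags can vary between pmm-admin versions):
$ sudo pmm-admin config --server-url=https://admin:admin@pmm.example.com:443\n$ sudo pmm-admin add mysql --username=pmm --password=secret mysql-node-1 127.0.0.1:3306\n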
"},{"location":"monitoring.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"pdps-group-replication.html","title":"High availability solution with Group Replication","text":"Every architecture and deployment depends on customer requirements and application demands for high availability and the estimated level of usage. For example, using a high read or a high write application, or both with 99.999% availability.
This guide gives architecture and deployment recommendations, along with a technical overview, for a solution that provides a high level of availability and assumes the usage of high read/write applications (20K or more queries per second). It also provides step-by-step deployment guidelines.
This solution assumes the use of the Percona Server for MySQL-based deployment variant of Percona Distribution for MySQL with Group Replication.
"},{"location":"pdps-group-replication.html#high-availability-overview","title":"High availability overview","text":"How to measure availability and at what point does it become \u201chigh\u201d availability?
Generally speaking, availability is measured by establishing a measurement time frame and dividing the time that the service was available by that time frame. This ratio will rarely be 1, which is equal to 100% availability. A solution is considered highly available if it is at least 99% or \u201ctwo nines\u201d available.
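As a worked example consistent with the table below: over a 30-day month (43,200 minutes), a service that was down for a total of 43.83 minutes has an availability of (43,200 - 43.83) / 43,200 \u2248 0.999, that is, 99.9% or \u201cthree nines\u201d.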
The following table provides downtime calculations per high availability level:
| Availability, % | Downtime per year | Downtime per month | Downtime per week | Downtime per day |
| --- | --- | --- | --- | --- |
| 99% (\u201ctwo nines\u201d) | 3.65 days | 7.31 hours | 1.68 hours | 14.40 minutes |
| 99.5% (\u201ctwo nines five\u201d) | 1.83 days | 3.65 hours | 50.40 minutes | 7.20 minutes |
| 99.9% (\u201cthree nines\u201d) | 8.77 hours | 43.83 minutes | 10.08 minutes | 1.44 minutes |
| 99.95% (\u201cthree nines five\u201d) | 4.38 hours | 21.92 minutes | 5.04 minutes | 43.20 seconds |
| 99.99% (\u201cfour nines\u201d) | 52.60 minutes | 4.38 minutes | 1.01 minutes | 8.64 seconds |
| 99.995% (\u201cfour nines five\u201d) | 26.30 minutes | 2.19 minutes | 30.24 seconds | 4.32 seconds |
| 99.999% (\u201cfive nines\u201d) | 5.26 minutes | 26.30 seconds | 6.05 seconds | 864.00 milliseconds |
"},{"location":"pdps-group-replication.html#how-is-high-availability-achieved","title":"How is high availability achieved?","text":"There are three key components to achieve high availability:
-
Infrastructure - this is the physical or virtual hardware that database systems rely on to run. Without enough infrastructure (VMs, networking, and so on), there cannot be high availability. The simplest example: there is no way to make a single server highly available.
-
Topology management - this is the software management related specifically to the database and managing its ability to stay consistent in the event of a failure. Many clustering or synchronous replication solutions offer this capability out of the box. However, asynchronous replication is handled by additional software.
-
Connection management - this is the software management related specifically to the networking and connectivity aspect of the database. Clustering solutions typically bundle with a connection manager. However, in asynchronous clusters, deploying a connection manager is mandatory for high availability.
This solution is based on a tightly coupled database cluster. It offers a high availability level of 99.995% when coupled with the Group Replication setting group_replication_consistency=AFTER
.
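For reference, a minimal sketch of applying this setting at runtime (group_replication_consistency is a standard Group Replication system variable; it can also be set in the server configuration file):
mysql> SET GLOBAL group_replication_consistency = 'AFTER';\n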
"},{"location":"pdps-group-replication.html#failovers","title":"Failovers","text":"A database failure or configuration change that requires a restart should not affect the stability of the database infrastructure, if it is properly planned and architected. Failovers are an integral part of a stability strategy and aligning the business requirements for availability and uptime with failover methodologies is critical.
The following are the three main types of failovers that can occur in database environments:
-
Planned failover. This is a failover that has been scheduled in advance or occurs at a regular interval. There can be many reasons for planned failovers including patching, large data operations, retiring existing infrastructure, or simply to test the failover strategy.
-
Unplanned failover. This is what occurs when a database has unexpectedly become unresponsive or experiences instability. An unplanned failover could also include emergency changes that do not fall under the planned failover cadence or scheduling parameters. Unplanned failovers are generally considered higher risk operations due to the high stress and high potential for data corruption or data fragmentation.
-
Regional or disaster recovery (DR) failover. Unplanned failovers still work with the assumption that additional database infrastructure is immediately available and in a usable state. However, in a regional or DR failover, it is assumed that there is a large scale infrastructure outage which requires the business to move its operations away from its current availability zone.
"},{"location":"pdps-group-replication.html#maintenance-windows","title":"Maintenance windows","text":""},{"location":"pdps-group-replication.html#major-vs-minor-maintenance","title":"Major vs Minor maintenance","text":"Although it may not be obvious at first, not all maintenance activities are created equal and do not have the same dependencies. It is good to separate maintenance that demands downtime or failover from maintenance that can be done without impacting those important stability metrics. When defining these maintenance dependencies, there can be a change in the actual maintenance process that allows for a different cadence.
"},{"location":"pdps-group-replication.html#maintenance-without-service-interruption","title":"Maintenance without service interruption","text":"It is possible to cover both major and minor maintenance without service interruption with rolling restart and using proper version upgrade.
"},{"location":"pdps-group-replication.html#uptime","title":"Uptime","text":"When referring to database stability, uptime is likely the largest indicator of stability and often is the most obvious symptom of an unstable database environment. Uptime is composed of three key components and, contrary to common perception, is based on what happens when the database software cannot take incoming requests rather than maintain the ability to take requests with errors.
The uptime components are:
- Recovery Time Objective (RTO)
RTO can be characterized by a simple question \u201cHow long can the business sustain a database outage?\u201d Once the business is aligned with a minimum viable recovery time objective, it is much more straightforward to plan and invest in the infrastructure required to meet that requirement. It is important to acknowledge that while everyone desires 100% uptime, there need to be realistic expectations that align with the business needs and not a technical desire.
- Recovery Point Objective (RPO)
There is a big distinction between the Recovery Point and the Recovery Time for a database infrastructure. The database can be available, but not to the exact state that it was when it became unavailable. That is where Recovery Point comes in. The question to ask here is \u201cHow much data can the business lose during a database outage?\u201d All businesses have their own requirements here yet it is always the goal to never sustain any data loss. But this is framed in the worst case scenario, how much data could be lost and the business maintains the ability to continue.
- Disaster recovery
RTO and RPO are great for unplanned outages or small scale hiccups to the infrastructure. Disaster recovery is a major large scale outage not strictly for the database infrastructure. How capable is the business of restarting operations with the assumption that all resources are completely unavailable in the main availability zone? The assumption here is that there is no viable restoration point or time that aligns with the business requirements. While each disaster recovery scenario is unique based on available infrastructure, backup strategy and technology stack, there are some common threads for any scenario.
The described solution helps improve uptime. It will help you to significantly reduce both RPO and RTO. Given the tightly coupled cluster solution approach, the failure of a single node will not result in service interruption.
Increasing the number of nodes also improves the cluster resilience, according to the formula:
F = (N - 1) / 2\n
where:
-
F
is the number of admissible failures, rounded down to a whole number
-
N
is the number of nodes in the cluster.
"},{"location":"pdps-group-replication.html#example","title":"Example","text":" -
In a cluster of 5 nodes, F = (5 - 1)/2 = 2. The cluster can support up to 2 failures.
-
In a cluster of 4 nodes, F = (4 - 1)/2 = 1.5, rounded down to 1. The cluster can support up to 1 failure.
This solution also allows for a more restrictive backup policy, dedicating a node to the backup cycle, which will help in keeping RPO low.
As previously mentioned, disaster recovery is not covered by default by this solution. It will require an additional replication setup and controller.
Based on the material from the Percona Database Performance Blog
This document is based on the blog post Percona Distribution for MySQL: High Availability with Group Replication Solution by Marco Tusa
"},{"location":"pdps-group-replication.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes-ps-8.1.html","title":"Percona Distribution for MySQL 8.1.0 using Percona Server for MySQL (2023-11-27)","text":"Percona Distribution for MySQL is the most stable, scalable, and secure open source MySQL distribution based on Percona Server for MySQL. Install Percona Distribution for MySQL.
This release is based on Percona Server for MySQL 8.1.0-1.
"},{"location":"release-notes-ps-8.1.html#release-highlights","title":"Release highlights","text":"Percona Server for MySQL implements telemetry that fills in the gaps in our understanding of how you use Percona Server for MySQL to improve our products. Participation in the anonymous program is optional. You can opt-out if you prefer not to share this information. Find more information in the Telemetry on Percona Server fo MySQL document.
The following user-defined function (UDF) shared objects (.so) are converted to components:
- The data_masking plugin is converted into the component_masking_functions component
- The binlogs_utils_udf UDF shared object (.so) is converted to the component_binlog_utils component
- The percona-udf UDF shared object (.so) is converted to the component_percona-udf component
A user does not need to execute a separate CREATE FUNCTION ... SONAME ...
statement for each function. Installing the components with the INSTALL COMPONENT 'file://component_xxx' statement performs the auto-registration operations.
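For example, one of the components listed above can be installed and then verified against the registry table (a sketch using the component_binlog_utils component mentioned above):
mysql> INSTALL COMPONENT 'file://component_binlog_utils';\nmysql> SELECT * FROM mysql.component;\n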
The keyring_vault
plugin is converted into the component_keyring_vault
component. This conversion aligns the keyring_vault with the KMIP and KMS keyrings and supports \u201cALTER INSTANCE RELOAD KEYRING\u201d to update the configuration automatically.
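For example, after changing the keyring component configuration file, the running server can reload it without a restart:
mysql> ALTER INSTANCE RELOAD KEYRING;\n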
The audit_log_filter
plugin is converted to the component_audit_log_filter
component. The following changes are also available:
- Adds the
mysql_event_tracking_parse
audit log event - Reworks, optimizes, and reorganizes the audit event data members
- Deduplicates data within the audit event data members
The current version of percona-release
does not support the setup
subcommand with the pdps-8x-innovation
and pdps-8.1.0
repositories. Use percona-release enable
instead. The support of the pdps-8x-innovation
and pdps-8.1.0
repositories for the setup
subcommand will be added in the next release of percona-release
.
The PS 8.1.0 MTR suites are reorganized. The existing Percona-specific MTR test cases are regrouped and put into separate test suites:
- component_encryption_udf
- percona
- percona_innodb
Improvements and bug fixes introduced by Oracle for MySQL 8.1 and included in Percona Server for MySQL are the following:
-
The EXPLAIN FORMAT=JSON statement can output the data to a user variable (see the example after this list).
-
New messages written to the MySQL error log during shutdown:
-
Startup and shutdown log messages, including when the server was started with --initialize
-
Start and end of shutdown phases for plugins and components
-
Start-of-phase and end-of-phase messages for connection closing phases
-
The number and ID of threads still alive after being forcibly disconnected and potentially causing a wait
Find the full list of bug fixes and changes in the MySQL 8.1 Release Notes.
"},{"location":"release-notes-ps-8.1.html#deprecation-or-removal","title":"Deprecation or removal","text":" - The
mysql_native_password
authentication plugin is deprecated and subject to removal in a future version. - The TokuDB is removed. The following items are also removed:
- Percona-TokuBackup submodule
- PerconaFT submodule
- TokuDB storage engine code
- TokuDB MTR test suites
- plugin/tokudb-backup-plugin
- The MyRocks ZenFS is removed. The following items are also removed:
- zenfs submodule
- libzdb submodule
- RocksDB MTR changes are reverted
- Travis CI integration
- Supporting
readline
as a alternative to editline library is removed. - The
audit_log
(audit version 1) plugin is removed - The \u201cinclude/ext\u201d pre-C++17 compatibility headers are removed.
- The
keyring_vault
plugin is removed. - The
data_masking
UDF shared object (.so) is removed. - The
binlog_utils_udf
UDF shared object (.so) is removed. - The
percona_udf
UDF shared object (.so) is removed.
"},{"location":"release-notes-ps-8.1.html#platform-support","title":"Platform support","text":" - Percona Server for MySQL 8.1.0-1 is not supported on Ubuntu 18.04.
"},{"location":"release-notes-ps-8.1.html#supplied-components","title":"Supplied components","text":"Review each component\u2019s release notes for What\u2019s new, improvements, or bug fixes. The following is a list of the components supplied with the Percona Server for MySQL-based variation of the Percona Distribution for MySQL:
| Component | Version | Description |
| --- | --- | --- |
| Orchestrator | 3.2.6-11 | The replication topology manager for Percona Server for MySQL |
| ProxySQL | 2.5.5 | A high performance, high-availability, protocol-aware proxy for MySQL |
| Percona XtraBackup | 8.1.0 | An open-source hot backup utility for MySQL-based servers |
| Percona Toolkit | 3.5.5 | The set of scripts to simplify and optimize database operation |
| MySQL Shell | 8.1.0 | An advanced client and code editor for MySQL Server |
| MySQL Router | 8.1.0 | Lightweight middleware that provides transparent routing between your application and back-end MySQL servers |
"},{"location":"release-notes-ps-8.1.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"release-notes.html","title":"Percona Distribution for MySQL 8.1 release notes index","text":" - Percona Distribution for MySQL using Percona Server for MySQL 8.1.0 (2023-11-27)
"},{"location":"release-notes.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"trademark-policy.html","title":"Trademark policy","text":"This Trademark Policy is to ensure that users of Percona-branded products or services know that what they receive has really been developed, approved, tested and maintained by Percona. Trademarks help to prevent confusion in the marketplace, by distinguishing one company\u2019s or person\u2019s products and services from another\u2019s.
Percona owns a number of marks, including but not limited to Percona, XtraDB, Percona XtraDB, XtraBackup, Percona XtraBackup, Percona Server, and Percona Live, plus the distinctive visual icons and logos associated with these marks. Both the unregistered and registered marks of Percona are protected.
Use of any Percona trademark in the name, URL, or other identifying characteristic of any product, service, website, or other use is not permitted without Percona\u2019s written permission with the following three limited exceptions.
First, you may use the appropriate Percona mark when making a nominative fair use reference to a bona fide Percona product.
Second, when Percona has released a product under a version of the GNU General Public License (\u201cGPL\u201d), you may use the appropriate Percona mark when distributing a verbatim copy of that product in accordance with the terms and conditions of the GPL.
Third, you may use the appropriate Percona mark to refer to a distribution of GPL-released Percona software that has been modified with minor changes for the sole purpose of allowing the software to operate on an operating system or hardware platform for which Percona has not yet released the software, provided that those third party changes do not affect the behavior, functionality, features, design or performance of the software. Users who acquire this Percona-branded software receive substantially exact implementations of the Percona software.
Percona reserves the right to revoke this authorization at any time in its sole discretion. For example, if Percona believes that your modification is beyond the scope of the limited license granted in this Policy or that your use of the Percona mark is detrimental to Percona, Percona will revoke this authorization. Upon revocation, you must immediately cease using the applicable Percona mark. If you do not immediately cease using the Percona mark upon revocation, Percona may take action to protect its rights and interests in the Percona mark. Percona does not grant any license to use any Percona mark for any other modified versions of Percona software; such use will require our prior written permission.
Neither trademark law nor any of the exceptions set forth in this Trademark Policy permit you to truncate, modify or otherwise use any Percona mark as part of your own brand. For example, if XYZ creates a modified version of the Percona Server, XYZ may not brand that modification as \u201cXYZ Percona Server\u201d or \u201cPercona XYZ Server\u201d, even if that modification otherwise complies with the third exception noted above.
In all cases, you must comply with applicable law, the underlying license, and this Trademark Policy, as amended from time to time. For instance, any mention of Percona trademarks should include the full trademarked name, with proper spelling and capitalization, along with attribution of ownership to Percona Inc. For example, the full proper name for XtraBackup is Percona XtraBackup. However, it is acceptable to omit the word \u201cPercona\u201d for brevity on the second and subsequent uses, where such omission does not cause confusion.
In the event of doubt as to any of the conditions or exceptions outlined in this Trademark Policy, please contact trademarks@percona.com for assistance and we will do our very best to be helpful.
"},{"location":"trademark-policy.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"},{"location":"uninstalling.html","title":"Uninstalling Percona Distribution for MySQL","text":"To uninstall Percona Distribution for MySQL, stop the mysql
service and remove all the installed packages using the package manager of your operating system. Optionally, disable the Percona repository (see the example after the removal steps).
Note
Should you need the data files later, back up your data before uninstalling Percona Distribution for MySQL.
Important
Run all commands as the root user or via sudo
On Debian / Ubuntu On Red Hat Enterprise Linux / derivatives -
Stop the mysql
service.
$ sudo systemctl stop mysql\n
-
Remove Percona Server for MySQL.
$ sudo apt remove percona-server*\n
-
Remove the components. Use the following commands to remove the required components.
- Remove Percona XtraBackup
$ sudo apt remove percona-xtrabackup-81\n
- Remove Percona Toolkit
$ sudo apt remove percona-toolkit\n
- Remove Orchestrator
$ sudo apt remove percona-orchestrator*\n
- Remove MySQL Shell
$ sudo apt remove percona-mysql-shell\n
- Remove ProxySQL
$ sudo apt remove proxysql2\n
- Remove MySQL Router
$ sudo apt remove percona-mysql-router\n
-
Stop the mysql
service.
$ sudo systemctl stop mysql\n
-
Remove Percona Server for MySQL.
$ sudo yum remove percona-server*\n
-
Remove the components. Use the commands below to remove the required components.
- Remove Percona XtraBackup
$ sudo yum remove percona-xtrabackup-81\n
- Remove Percona Toolkit
$ sudo yum remove percona-toolkit\n
- Remove Orchestrator
$ sudo yum remove percona-orchestrator*\n
- Remove MySQL Shell
$ sudo yum remove percona-mysql-shell\n
- Remove ProxySQL
$ sudo yum remove proxysql2\n
- Remove MySQL Router
$ sudo yum remove percona-mysql-router\n
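Optionally, disable the repository afterwards (the disable subcommand mirrors enable; the repository name below assumes pdps-8x-innovation was enabled):
$ sudo percona-release disable pdps-8x-innovation\n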
"},{"location":"uninstalling.html#get-expert-help","title":"Get expert help","text":"If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
Community Forum Get a Percona Expert
"}]}
\ No newline at end of file
diff --git a/innovation-release/sitemap.xml.gz b/innovation-release/sitemap.xml.gz
index bb5eed45..3ec387d6 100644
Binary files a/innovation-release/sitemap.xml.gz and b/innovation-release/sitemap.xml.gz differ