diff --git a/8.0/404.html b/8.0/404.html index 28578729745..e7d0078e640 100644 --- a/8.0/404.html +++ b/8.0/404.html @@ -4015,11 +4015,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/adaptive-network-buffers.html b/8.0/adaptive-network-buffers.html index a3da479c2e5..9cd8f0f0e81 100644 --- a/8.0/adaptive-network-buffers.html +++ b/8.0/adaptive-network-buffers.html @@ -4081,11 +4081,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/added-features.html b/8.0/added-features.html index 32eff078db6..9f08d9c59b1 100644 --- a/8.0/added-features.html +++ b/8.0/added-features.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/advisors.html b/8.0/advisors.html index 310d13a4dd2..86e71a2fa68 100644 --- a/8.0/advisors.html +++ b/8.0/advisors.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/ai-docs.html b/8.0/ai-docs.html index 1746998ba0b..80dd882973e 100644 --- a/8.0/ai-docs.html +++ b/8.0/ai-docs.html @@ -4026,11 +4026,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/aio-page-requests.html b/8.0/aio-page-requests.html index 60cdc8a2aa7..8a6a634260e 100644 --- a/8.0/aio-page-requests.html +++ b/8.0/aio-page-requests.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/apparmor.html b/8.0/apparmor.html index f74a0f8a20b..bc1ab06e758 100644 --- a/8.0/apparmor.html +++ b/8.0/apparmor.html @@ -4189,11 +4189,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/apt-download-deb.html b/8.0/apt-download-deb.html index 81b06d9b6d0..8c6e68a53f4 100644 --- a/8.0/apt-download-deb.html +++ b/8.0/apt-download-deb.html @@ -4042,11 +4042,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/apt-files.html b/8.0/apt-files.html index a6b11d67929..3657cda9f80 100644 --- a/8.0/apt-files.html +++ b/8.0/apt-files.html @@ -4042,11 +4042,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/apt-pinning.html b/8.0/apt-pinning.html index e049dc50f65..597460f9b1b 100644 --- a/8.0/apt-pinning.html +++ b/8.0/apt-pinning.html @@ -4042,11 +4042,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/apt-repo.html b/8.0/apt-repo.html index b80e720d743..a7dabb65a8a 100644 --- a/8.0/apt-repo.html +++ b/8.0/apt-repo.html @@ -4102,11 +4102,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/apt-run.html b/8.0/apt-run.html index 017a273b4fb..6d29cc14672 100644 --- a/8.0/apt-run.html +++ b/8.0/apt-run.html @@ -4084,11 +4084,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/apt-uninstall-server.html b/8.0/apt-uninstall-server.html index 0b7cce75106..9210573cc22 100644 --- a/8.0/apt-uninstall-server.html +++ b/8.0/apt-uninstall-server.html @@ -4044,11 +4044,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-compression-encryption.html b/8.0/audit-log-filter-compression-encryption.html index f7c8b83e6ce..34672369bb2 100644 --- a/8.0/audit-log-filter-compression-encryption.html +++ b/8.0/audit-log-filter-compression-encryption.html @@ -4101,11 +4101,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-formats.html b/8.0/audit-log-filter-formats.html index d5fd54ed09a..c3e9a722427 100644 --- a/8.0/audit-log-filter-formats.html +++ b/8.0/audit-log-filter-formats.html @@ -4044,11 +4044,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-json.html b/8.0/audit-log-filter-json.html index 0dd7715140a..03d781a8b19 100644 --- a/8.0/audit-log-filter-json.html +++ b/8.0/audit-log-filter-json.html @@ -4044,11 +4044,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-naming.html b/8.0/audit-log-filter-naming.html index 4bddcd61b16..e913aa0ccd2 100644 --- a/8.0/audit-log-filter-naming.html +++ b/8.0/audit-log-filter-naming.html @@ -4107,11 +4107,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-new.html b/8.0/audit-log-filter-new.html index 87f1f8a7a41..0f2ca3ca076 100644 --- a/8.0/audit-log-filter-new.html +++ b/8.0/audit-log-filter-new.html @@ -4045,11 +4045,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-old.html b/8.0/audit-log-filter-old.html index 5c7b3aed614..f200445a8b0 100644 --- a/8.0/audit-log-filter-old.html +++ b/8.0/audit-log-filter-old.html @@ -4044,11 +4044,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-overview.html b/8.0/audit-log-filter-overview.html index 436b65c06ef..106c193b251 100644 --- a/8.0/audit-log-filter-overview.html +++ b/8.0/audit-log-filter-overview.html @@ -4116,11 +4116,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-restrictions.html b/8.0/audit-log-filter-restrictions.html index 432b7a52a4a..6290851ea1c 100644 --- a/8.0/audit-log-filter-restrictions.html +++ b/8.0/audit-log-filter-restrictions.html @@ -4083,11 +4083,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-security.html b/8.0/audit-log-filter-security.html index db668e6aedc..d539e63ae9a 100644 --- a/8.0/audit-log-filter-security.html +++ b/8.0/audit-log-filter-security.html @@ -4043,11 +4043,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-filter-variables.html b/8.0/audit-log-filter-variables.html index d877be95f72..79b4ea27764 100644 --- a/8.0/audit-log-filter-variables.html +++ b/8.0/audit-log-filter-variables.html @@ -4392,11 +4392,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/audit-log-plugin.html b/8.0/audit-log-plugin.html index 255ea068d6c..18a8b2758c2 100644 --- a/8.0/audit-log-plugin.html +++ b/8.0/audit-log-plugin.html @@ -4031,11 +4031,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/backup-locks.html b/8.0/backup-locks.html index 4c53b6c78b7..5c13f75107f 100644 --- a/8.0/backup-locks.html +++ b/8.0/backup-locks.html @@ -4200,11 +4200,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/backup-restore-overview.html b/8.0/backup-restore-overview.html index f30c5f153f8..43ffccf06b6 100644 --- a/8.0/backup-restore-overview.html +++ b/8.0/backup-restore-overview.html @@ -4124,11 +4124,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/binary-tarball-install.html b/8.0/binary-tarball-install.html index 3fd9b81b606..f75e4531c53 100644 --- a/8.0/binary-tarball-install.html +++ b/8.0/binary-tarball-install.html @@ -4083,11 +4083,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/binary-tarball-names.html b/8.0/binary-tarball-names.html index 765ab1c89a0..b2590ea26c1 100644 --- a/8.0/binary-tarball-names.html +++ b/8.0/binary-tarball-names.html @@ -4083,11 +4083,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/binlog-space.html b/8.0/binlog-space.html index 0943d99a80d..1118f982580 100644 --- a/8.0/binlog-space.html +++ b/8.0/binlog-space.html @@ -4090,11 +4090,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/binlogging-replication-improvements.html b/8.0/binlogging-replication-improvements.html index ace5eacfafd..d6f972a97f2 100644 --- a/8.0/binlogging-replication-improvements.html +++ b/8.0/binlogging-replication-improvements.html @@ -4229,11 +4229,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/build-apt-packages.html b/8.0/build-apt-packages.html index 23412bacd31..ae5af08873d 100644 --- a/8.0/build-apt-packages.html +++ b/8.0/build-apt-packages.html @@ -4044,11 +4044,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/changed-page-tracking.html b/8.0/changed-page-tracking.html index 70615625124..cced785c2dc 100644 --- a/8.0/changed-page-tracking.html +++ b/8.0/changed-page-tracking.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/compile-percona-server.html b/8.0/compile-percona-server.html index 4596b5492ec..d706c82dbb2 100644 --- a/8.0/compile-percona-server.html +++ b/8.0/compile-percona-server.html @@ -4092,11 +4092,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/components-keyrings-comparison.html b/8.0/components-keyrings-comparison.html index ad3c074555f..ad566273040 100644 --- a/8.0/components-keyrings-comparison.html +++ b/8.0/components-keyrings-comparison.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/compressed-columns.html b/8.0/compressed-columns.html index e07e31f788e..ecc4fc119bf 100644 --- a/8.0/compressed-columns.html +++ b/8.0/compressed-columns.html @@ -4033,11 +4033,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/copyright-and-licensing-information.html b/8.0/copyright-and-licensing-information.html index a37d2d991f2..fcc842d5bcf 100644 --- a/8.0/copyright-and-licensing-information.html +++ b/8.0/copyright-and-licensing-information.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/data-at-rest-encryption.html b/8.0/data-at-rest-encryption.html index 2bfedeafdc9..28daf9bc8b7 100644 --- a/8.0/data-at-rest-encryption.html +++ b/8.0/data-at-rest-encryption.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/data-loading.html b/8.0/data-loading.html index bde535d10cb..4e95c8f98e6 100644 --- a/8.0/data-loading.html +++ b/8.0/data-loading.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/data-masking-comparison.html b/8.0/data-masking-comparison.html index 1a5fe592a3d..452bdbcf638 100644 --- a/8.0/data-masking-comparison.html +++ b/8.0/data-masking-comparison.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/data-masking-function-list.html b/8.0/data-masking-function-list.html index c86731b6ebb..702ca159ebb 100644 --- a/8.0/data-masking-function-list.html +++ b/8.0/data-masking-function-list.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/data-masking-overview.html b/8.0/data-masking-overview.html index f3e4a6eca25..48fabedb86f 100644 --- a/8.0/data-masking-overview.html +++ b/8.0/data-masking-overview.html @@ -17,7 +17,7 @@ - + @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/data-masking-plugin-functions.html b/8.0/data-masking-plugin-functions.html index d58c8564672..e923f397b2a 100644 --- a/8.0/data-masking-plugin-functions.html +++ b/8.0/data-masking-plugin-functions.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/development.html b/8.0/development.html index dbfe500ff0f..0fa39262938 100644 --- a/8.0/development.html +++ b/8.0/development.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/differences.html b/8.0/differences.html index 0c30a35a29d..4cabb4e9506 100644 --- a/8.0/differences.html +++ b/8.0/differences.html @@ -4031,11 +4031,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/disable-audit-log-filter.html b/8.0/disable-audit-log-filter.html index cdd93767c15..731bad4b671 100644 --- a/8.0/disable-audit-log-filter.html +++ b/8.0/disable-audit-log-filter.html @@ -4083,11 +4083,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/docker-config.html b/8.0/docker-config.html index c6f0cf4a56e..ea5ad86af10 100644 --- a/8.0/docker-config.html +++ b/8.0/docker-config.html @@ -4044,11 +4044,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/docker.html b/8.0/docker.html index c4c4c9c7a4c..e74d9af4811 100644 --- a/8.0/docker.html +++ b/8.0/docker.html @@ -4197,11 +4197,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/downgrade-from-pro.html b/8.0/downgrade-from-pro.html index 6747ce1c149..00ac2def677 100644 --- a/8.0/downgrade-from-pro.html +++ b/8.0/downgrade-from-pro.html @@ -4040,11 +4040,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/downgrade.html b/8.0/downgrade.html index f671d7c648d..153db634125 100644 --- a/8.0/downgrade.html +++ b/8.0/downgrade.html @@ -4040,11 +4040,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/download-instructions.html b/8.0/download-instructions.html index 553c8aae58d..f8cd17cbfae 100644 --- a/8.0/download-instructions.html +++ b/8.0/download-instructions.html @@ -4101,11 +4101,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-binlogs.html b/8.0/encrypting-binlogs.html index a7a1b18fb1b..b81eece5b79 100644 --- a/8.0/encrypting-binlogs.html +++ b/8.0/encrypting-binlogs.html @@ -4031,11 +4031,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-doublewrite-buffers.html b/8.0/encrypting-doublewrite-buffers.html index 59f290fbd71..51aaaf2ef45 100644 --- a/8.0/encrypting-doublewrite-buffers.html +++ b/8.0/encrypting-doublewrite-buffers.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-redo-log.html b/8.0/encrypting-redo-log.html index b2b65756675..398e36755c1 100644 --- a/8.0/encrypting-redo-log.html +++ b/8.0/encrypting-redo-log.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-system-tablespace.html b/8.0/encrypting-system-tablespace.html index d2177d41262..d5e7f411942 100644 --- a/8.0/encrypting-system-tablespace.html +++ b/8.0/encrypting-system-tablespace.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-tables.html b/8.0/encrypting-tables.html index 4333c8d4f48..9d9edcf2527 100644 --- a/8.0/encrypting-tables.html +++ b/8.0/encrypting-tables.html @@ -4032,11 +4032,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-tablespaces.html b/8.0/encrypting-tablespaces.html index 8dc11e3af02..89285f65567 100644 --- a/8.0/encrypting-tablespaces.html +++ b/8.0/encrypting-tablespaces.html @@ -4031,11 +4031,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-temporary-files.html b/8.0/encrypting-temporary-files.html index 2e02fd9df16..4eedd364c42 100644 --- a/8.0/encrypting-temporary-files.html +++ b/8.0/encrypting-temporary-files.html @@ -4031,11 +4031,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-threads.html b/8.0/encrypting-threads.html index 87b003a6bf3..25494a59ac0 100644 --- a/8.0/encrypting-threads.html +++ b/8.0/encrypting-threads.html @@ -4031,11 +4031,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encrypting-undo-tablespace.html b/8.0/encrypting-undo-tablespace.html index 91129c1fa1c..25d004cb1ab 100644 --- a/8.0/encrypting-undo-tablespace.html +++ b/8.0/encrypting-undo-tablespace.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/encryption-functions.html b/8.0/encryption-functions.html index 2800b9f397d..f73d8d87f2d 100644 --- a/8.0/encryption-functions.html +++ b/8.0/encryption-functions.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/enforce-engine.html b/8.0/enforce-engine.html index cf2645518a9..dd04e990412 100644 --- a/8.0/enforce-engine.html +++ b/8.0/enforce-engine.html @@ -4029,11 +4029,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/extended-mysqlbinlog.html b/8.0/extended-mysqlbinlog.html index 95880c58ac8..601facbb477 100644 --- a/8.0/extended-mysqlbinlog.html +++ b/8.0/extended-mysqlbinlog.html @@ -4084,11 +4084,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/extended-mysqldump.html b/8.0/extended-mysqldump.html index adfc80b0dad..3024ea291e0 100644 --- a/8.0/extended-mysqldump.html +++ b/8.0/extended-mysqldump.html @@ -4117,11 +4117,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/extended-select-into-outfile.html b/8.0/extended-select-into-outfile.html index 39d6251dbe6..7d3f54b8eb0 100644 --- a/8.0/extended-select-into-outfile.html +++ b/8.0/extended-select-into-outfile.html @@ -4081,11 +4081,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/extended-show-grants.html b/8.0/extended-show-grants.html index bd79429de1e..405c2e1b4c7 100644 --- a/8.0/extended-show-grants.html +++ b/8.0/extended-show-grants.html @@ -4105,11 +4105,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/faq.html b/8.0/faq.html index e8328c54fd1..6432d3302b8 100644 --- a/8.0/faq.html +++ b/8.0/faq.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/fast-updates.html b/8.0/fast-updates.html index b54e0b4755a..e47f626a6c4 100644 --- a/8.0/fast-updates.html +++ b/8.0/fast-updates.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/feature-comparison.html b/8.0/feature-comparison.html index 3c003431b1a..061f26bca12 100644 --- a/8.0/feature-comparison.html +++ b/8.0/feature-comparison.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/fido-authentication-plugin.html b/8.0/fido-authentication-plugin.html index a305e183d6b..9f6aa8394a7 100644 --- a/8.0/fido-authentication-plugin.html +++ b/8.0/fido-authentication-plugin.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/filter-audit-log-filter-files.html b/8.0/filter-audit-log-filter-files.html index 51c1ad76fd2..a6713d761db 100644 --- a/8.0/filter-audit-log-filter-files.html +++ b/8.0/filter-audit-log-filter-files.html @@ -4101,11 +4101,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/fips.html b/8.0/fips.html index 3e2ba89bc8c..e3d813b1860 100644 --- a/8.0/fips.html +++ b/8.0/fips.html @@ -4123,11 +4123,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/gap-locks-detection.html b/8.0/gap-locks-detection.html index 2715a922351..6fd1e58f45f 100644 --- a/8.0/gap-locks-detection.html +++ b/8.0/gap-locks-detection.html @@ -4029,11 +4029,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/get-help.html b/8.0/get-help.html index eabf1b6d4ad..854dd9d5804 100644 --- a/8.0/get-help.html +++ b/8.0/get-help.html @@ -4088,11 +4088,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/glossary.html b/8.0/glossary.html index 339b71b66bc..ea273eea237 100644 --- a/8.0/glossary.html +++ b/8.0/glossary.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/group-replication-flow-control.html b/8.0/group-replication-flow-control.html index 108d0908ee1..1794536391c 100644 --- a/8.0/group-replication-flow-control.html +++ b/8.0/group-replication-flow-control.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/group-replication-system-variables.html b/8.0/group-replication-system-variables.html index c6526a28be4..d5aa9a99e4c 100644 --- a/8.0/group-replication-system-variables.html +++ b/8.0/group-replication-system-variables.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/improved-memory-engine.html b/8.0/improved-memory-engine.html index 89488ca59c5..cd9b65aec88 100644 --- a/8.0/improved-memory-engine.html +++ b/8.0/improved-memory-engine.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/improved-slow-query-log.html b/8.0/improved-slow-query-log.html index 8344651bda6..36097d25628 100644 --- a/8.0/improved-slow-query-log.html +++ b/8.0/improved-slow-query-log.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/in-place-upgrade-guide.html b/8.0/in-place-upgrade-guide.html index f09340ab58f..8b1ce6e6d86 100644 --- a/8.0/in-place-upgrade-guide.html +++ b/8.0/in-place-upgrade-guide.html @@ -4040,11 +4040,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/index-info-schema-tables.html b/8.0/index-info-schema-tables.html index 4e3466b9067..a3f39bcea81 100644 --- a/8.0/index-info-schema-tables.html +++ b/8.0/index-info-schema-tables.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/index.html b/8.0/index.html index 91a8af19b93..d57ae7a70db 100644 --- a/8.0/index.html +++ b/8.0/index.html @@ -4075,11 +4075,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/information-schema-tables.html b/8.0/information-schema-tables.html index fb7458fa9c2..89e36e6948a 100644 --- a/8.0/information-schema-tables.html +++ b/8.0/information-schema-tables.html @@ -4029,11 +4029,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/innodb-corrupt-table-action.html b/8.0/innodb-corrupt-table-action.html index 5e63fc69adc..2393f39af16 100644 --- a/8.0/innodb-corrupt-table-action.html +++ b/8.0/innodb-corrupt-table-action.html @@ -4033,11 +4033,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/innodb-expanded-fast-index-creation.html b/8.0/innodb-expanded-fast-index-creation.html index 56342af61a0..edf15399516 100644 --- a/8.0/innodb-expanded-fast-index-creation.html +++ b/8.0/innodb-expanded-fast-index-creation.html @@ -4134,11 +4134,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/innodb-fragmentation-count.html b/8.0/innodb-fragmentation-count.html index 08b3594aba5..fa527133f16 100644 --- a/8.0/innodb-fragmentation-count.html +++ b/8.0/innodb-fragmentation-count.html @@ -4032,11 +4032,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/innodb-fts-improvements.html b/8.0/innodb-fts-improvements.html index 6f997d9b64e..1d8ad96aaf2 100644 --- a/8.0/innodb-fts-improvements.html +++ b/8.0/innodb-fts-improvements.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/innodb-io.html b/8.0/innodb-io.html index cebad6ef360..23baf374b18 100644 --- a/8.0/innodb-io.html +++ b/8.0/innodb-io.html @@ -4031,11 +4031,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/innodb-show-status.html b/8.0/innodb-show-status.html index 2a9d50da059..cae22061552 100644 --- a/8.0/innodb-show-status.html +++ b/8.0/innodb-show-status.html @@ -4031,11 +4031,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/install-audit-log-filter.html b/8.0/install-audit-log-filter.html index abd6749ca65..794147c0459 100644 --- a/8.0/install-audit-log-filter.html +++ b/8.0/install-audit-log-filter.html @@ -4042,11 +4042,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/install-data-masking-component.html b/8.0/install-data-masking-component.html index 9b9f9947f52..5c2f5694bba 100644 --- a/8.0/install-data-masking-component.html +++ b/8.0/install-data-masking-component.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/install-data-masking-plugin.html b/8.0/install-data-masking-plugin.html index 6d1f80a2581..62bc560e21f 100644 --- a/8.0/install-data-masking-plugin.html +++ b/8.0/install-data-masking-plugin.html @@ -4030,11 +4030,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/install-myrocks.html b/8.0/install-myrocks.html index bc0ad6d22ba..c3527564d15 100644 --- a/8.0/install-myrocks.html +++ b/8.0/install-myrocks.html @@ -4029,11 +4029,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/install-pro.html b/8.0/install-pro.html index b775235e7b3..777a1f0f6bb 100644 --- a/8.0/install-pro.html +++ b/8.0/install-pro.html @@ -4092,11 +4092,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/installation.html b/8.0/installation.html index a82c6671d37..38559a11a67 100644 --- a/8.0/installation.html +++ b/8.0/installation.html @@ -4090,11 +4090,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/jemalloc-profiling.html b/8.0/jemalloc-profiling.html index f090be2d1df..45523a1e28b 100644 --- a/8.0/jemalloc-profiling.html +++ b/8.0/jemalloc-profiling.html @@ -4028,11 +4028,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/kill-idle-trx.html b/8.0/kill-idle-trx.html index 1ed2bb9098c..0a75371b3e2 100644 --- a/8.0/kill-idle-trx.html +++ b/8.0/kill-idle-trx.html @@ -4097,11 +4097,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables diff --git a/8.0/ldap-authentication.html b/8.0/ldap-authentication.html index 8fc45bb259b..1cfbc18982f 100644 --- a/8.0/ldap-authentication.html +++ b/8.0/ldap-authentication.html @@ -22,7 +22,7 @@ - + @@ -4130,18 +4130,9 @@
  • - + - Create a user using simple LDAP authentication - - - -
  • - -
  • - - - Create a user using SASL-based LDAP authentication + Create a user @@ -4187,11 +4178,32 @@
  • - + - LDAP authentication plugin system variables + LDAP SASL system variables + + + + +
  • + + + + + + + + + + +
  • + + + + + LDAP Simple system variables @@ -7123,18 +7135,9 @@
  • - + - Create a user using simple LDAP authentication - - - -
  • - -
  • - - - Create a user using SASL-based LDAP authentication + Create a user @@ -7218,10 +7221,6 @@

    Using LDAP authentication plugins

Version-specific information

Percona Server for MySQL 8.0.30-22 implements a SASL-based LDAP authentication plugin. This plugin supports only the SCRAM-SHA-1 SASL mechanism.

    -
    -

    Important

    -

This feature is a tech preview. Before using this feature in production, we recommend that you test it in your environment first.

    -

Percona Server for MySQL 8.0.19-10 implements simple LDAP authentication. The Percona simple LDAP authentication plugin is a free and open source implementation of the MySQL Enterprise Simple LDAP authentication plugin.

    Plugin names and file names

The following tables show the plugin names and file names for simple LDAP authentication and SASL-based LDAP authentication.

    @@ -7350,10 +7349,6 @@

    Load the plugins at runtime
    mysql> INSTALL PLUGIN authentication_ldap_simple SONAME 'authentication_ldap_simple.so';
     
    -

    To set and persist values at runtime, use the following statements:

    -
    mysql> SET PERSIST authentication_ldap_simple_server_host='127.0.0.1';
    -mysql> SET PERSIST authentication_ldap_simple_bind_base_dn='dc=percona, dc=com';
    -
    mysql> INSTALL PLUGIN authentication_ldap_sasl SONAME 'authentication_ldap_sasl.so';
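To load the plugins at server startup instead, a minimal my.cnf sketch could look like the following; the plugin-load-add option is standard MySQL syntax, and the variable values shown are placeholders taken from the SET PERSIST example above:

[mysqld]
plugin-load-add=authentication_ldap_simple.so
plugin-load-add=authentication_ldap_sasl.so
authentication_ldap_simple_server_host=127.0.0.1
authentication_ldap_simple_bind_base_dn="dc=percona,dc=com"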
    @@ -7365,9 +7360,13 @@ 

Load the plugins at runtime

Create a user using simple LDAP authentication

    +

    Create a user

    +

There are several methods to add or modify a user.

    -
    +

For simple LDAP authentication, specify the authentication_ldap_simple plugin in the IDENTIFIED WITH clause of the CREATE USER statement or the ALTER USER statement:

    @@ -7379,32 +7378,33 @@

    Create a user using simp

If you provide the optional authentication string (‘cn,ou,dc,dc’ in the example), the string is stored along with the password.

    mysql> CREATE USER ... IDENTIFIED WITH authentication_ldap_simple BY 'cn=[user name],ou=[organization unit],dc=[domain component],dc=com'
     
    -
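For instance, a sketch of a concrete statement, assuming the example directory layout used later on this page (the user name ldapuser is illustrative):

mysql> CREATE USER 'ldapuser'@'%' IDENTIFIED WITH authentication_ldap_simple BY 'cn=ldapuser,ou=testusers,dc=percona,dc=com';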

Unless the authentication_ldap_simple_group_role_mapping variable is used, creating a user with an authentication string does not use the following system variables:

Creating the user with IDENTIFIED WITH authentication_ldap_simple uses the variables.

    -

Creating the user with the authentication_ldap_simple_group_role_mapping variable also adds the authentication_ldap_simple_bind_root_dn and authentication_ldap_simple_bind_root_pwd variables.

    +

    -

    Create a user using SASL-based LDAP authentication

    +

    There are several methods to add or modify a user.

    -
    +

For SASL-based LDAP authentication, specify the authentication_ldap_sasl plugin in the IDENTIFIED WITH clause of the CREATE USER statement or the ALTER USER statement:

    @@ -7415,26 +7415,26 @@

    Create a user using

If you provide the optional authentication string (‘cn,ou,dc,dc’ in the example), the string is stored along with the password.

    mysql> CREATE USER ... IDENTIFIED WITH authentication_ldap_sasl BY 'cn=[user name],ou=[organization unit],dc=[domain component],dc=com'
     
    -
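Again as an illustrative sketch (ldapuser is a placeholder):

mysql> CREATE USER 'ldapuser'@'%' IDENTIFIED WITH authentication_ldap_sasl BY 'cn=ldapuser,ou=testusers,dc=percona,dc=com';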

Unless the authentication_ldap_sasl_group_role_mapping variable is used, creating a user with an authentication string does not use the following system variables:

Creating the user with IDENTIFIED WITH authentication_ldap_sasl uses the variables.

    -

Creating the user with the authentication_ldap_sasl_group_role_mapping variable also adds the authentication_ldap_sasl_bind_root_dn and authentication_ldap_sasl_bind_root_pwd variables.

    @@ -7443,7 +7443,7 @@

    Examples
    uid=ldapuser,ou=testusers,dc=percona,dc=com
     

    -
    +

    The following example configures an LDAP user and connects to the database server.

    @@ -7472,15 +7472,17 @@

Examples

Uninstall the plugins

    If you installed either plugin at server startup, remove those options from the my.cnf file, remove any startup options that set LDAP system variables, and restart the server.

    -
    +

    If you installed the plugins at runtime, run the following statements:

    mysql> UNINSTALL PLUGIN authentication_ldap_simple;
     
    -

If you used SET PERSIST, use RESET PERSIST to remove the settings.
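For example, assuming the SET PERSIST statements shown earlier:

mysql> RESET PERSIST authentication_ldap_simple_server_host;
mysql> RESET PERSIST authentication_ldap_simple_bind_base_dn;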


    @@ -7495,7 +7497,7 @@

Uninstall the plugins

@@ -28,7 +28,7 @@
-LDAP authentication plugin system variables - Percona Server for MySQL
+LDAP SASL system variables - Percona Server for MySQL

@@ -189,7 +189,7 @@
-LDAP authentication plugin system variables
+LDAP SASL system variables

@@ -4045,18 +4045,18 @@
-LDAP authentication plugin system variables
+LDAP SASL system variables
-LDAP authentication plugin system variables
+LDAP SASL system variables

@@ -4242,168 +4242,36 @@

-• authentication_ldap_simple_bind_base_dn
-• authentication_ldap_simple_bind_root_dn
-• authentication_ldap_simple_bind_root_pwd
-• authentication_ldap_simple_ca_path
-• authentication_ldap_simple_fallback_server_host
-• authentication_ldap_simple_fallback_server_port
-• authentication_ldap_simple_group_role_mapping
-• authentication_ldap_simple_group_search_attr
-• authentication_ldap_simple_group_search_filter
-• authentication_ldap_simple_init_pool_size
-• authentication_ldap_simple_log_status
-• authentication_ldap_simple_max_pool_size
-• authentication_ldap_simple_server_host
-• authentication_ldap_simple_server_port
-• authentication_ldap_simple_ssl
-• authentication_ldap_simple_tls
-• authentication_ldap_simple_user_search_attr
+• LDAP Simple system variables

@@ -7422,159 +7290,6 @@
-• authentication_ldap_simple_bind_base_dn
-• authentication_ldap_simple_bind_root_dn
-• authentication_ldap_simple_bind_root_pwd
-• authentication_ldap_simple_ca_path
-• authentication_ldap_simple_fallback_server_host
-• authentication_ldap_simple_fallback_server_port
-• authentication_ldap_simple_group_role_mapping
-• authentication_ldap_simple_group_search_attr
-• authentication_ldap_simple_group_search_filter
-• authentication_ldap_simple_init_pool_size
-• authentication_ldap_simple_log_status
-• authentication_ldap_simple_max_pool_size
-• authentication_ldap_simple_server_host
-• authentication_ldap_simple_server_port
-• authentication_ldap_simple_ssl
-• authentication_ldap_simple_tls
-• authentication_ldap_simple_user_search_attr
  • @@ -7614,7 +7329,7 @@ - + @@ -7623,20 +7338,16 @@ - + -

    LDAP authentication plugin system variables

    +

    LDAP SASL system variables

    Authentication system variables

Percona Server for MySQL 8.0.30-22 adds the SASL-based LDAP variables and the fallback server variables for both simple LDAP and SASL-based LDAP authentication.

    -
    -

    Important

    -

This feature is a tech preview. Before using this feature in production, we recommend that you test it in your environment first.

    -

    The installation adds the following variables:

    @@ -7714,74 +7425,6 @@

Authentication system variables

authentication_ldap_sasl_user_search_attr

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Name of the attribute that specifies user names in the LDAP directory entries
    authentication_ldap_simple_bind_base_dnBase distinguished name
    authentication_ldap_simple_bind_root_dnRoot distinguished name
    authentication_ldap_simple_bind_root_dn_pwdPassword for the root distinguished name
    authentication_ldap_simple_ca_pathAbsolute path of the certificate authority
    authentication_ldap_simple_fallback_server_hostIf the primary server is unavailable, the authentication plugin attempts to connect to the fallback server
    authentication_ldap_simple_fallback_server_portThe port number for the fallback server
    authentication_ldap_simple_group_role_mappingA list of LDAP group names - MySQL role pairs
    authentication_ldap_simple_group_search_attrName of the attribute that specifies the group names in the LDAP directory entries
    authentication_ldap_simple_group_search_filterCustom group search filter
    authentication_ldap_simple_init_pool_sizeInitial size of the connection pool to the LDAP server
    authentication_ldap_simple_log_statuslogging level
    authentication_ldap_simple_max_pool_sizeMaximum size of the pool of connections to the LDAP server
    authentication_ldap_simple_server_hostLDAP server host
    authentication_ldap_simple_server_portLDAP server TCP/IP port number
    authentication_ldap_simple_sslIf plugin connections to the LDAP server use the SSL protocol (ldaps://)
    authentication_ldap_simple_tlsIf plugin connections to the LDAP server are secured with STARTTLS (ldap://)
    authentication_ldap_simple_user_search_attrName of the attribute that specifies user names in the LDAP directory entries

    The following variables are described in detail:

    @@ -8376,603 +8019,13 @@

    authentication_ldap_sas

    The attribute name that specifies the user names in LDAP directory entries in SASL-based LDAP authentication.

    -

    authentication_ldap_simple_bind_base_dn

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-bind-base-dn=value
    ScopeGlobal
    DynamicYes
    Data typeString
    DefaultNULL
    -

    The base distinguished name (DN) for simple LDAP authentication. You can limit the search scope by using the variable as the base of the search.

    -

    authentication_ldap_simple_bind_root_dn

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-bind-root-dn=value
    ScopeGlobal
    DynamicYes
    Data typeString
    DefaultNULL
    -

    The root distinguished name (DN) used to authenticate simple LDAP. When performing a search, this variable is used with -authentication_ldap_simple_bind_root_pwd as the authenticating credentials to the LDAP server.

    -

    authentication_ldap_simple_bind_root_pwd

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-bind-root-pwd=value
    ScopeGlobal
    DynamicYes
    Data typeString
    DefaultNULL
    -

    The root password used to authenticate against simple LDAP server. This variable is used with -authentication_ldap_simple_bind_root_dn.

    -

    authentication_ldap_simple_ca_path

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-ca_path=value
    ScopeGlobal
    DynamicYes
    Data typeString
    DefaultNULL
    -

    The certificate authority’s absolute path used to verify the LDAP certificate.

    -

    authentication_ldap_simple_fallback_server_host

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-fallback-server-host
    ScopeGlobal
    DynamicYes
    TypeSting
    DefaultNULL
    -

    Use with authentication_ldap_simple_fallback_server_port.

    -

    If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server and authenticate using that server.

    -

    authentication_ldap_simple_fallback_server_port

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-fallback-server-port
    ScopeGlobal
    DynamicYes
    TypeInteger
    DefaultNULL
    -

    Use with authentication_ldap_simple_fallback_server_host.

    -

    If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server and authenticate using that server.

    -

    If the fallback server host has a value, and the fallback port is 0, users can specify multiple fallback servers.

    -

    Use this format to specify multiple fallback servers: authentication_ldap_simple_fallback_server_host="ldap(s)://host:port,ldap(s)://host2:port2, for example.

    -

    authentication_ldap_simple_group_role_mapping

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-group-role-mapping=value
    ScopeGlobal
    DynamicYes
    Data typeString
    DefaultNull
    -

    When an LDAP user logs in, the server checks if the LDAP user is a member of the specified group. If the user is, then the server automatically grants the database server roles to the user.

    -

    The variable has this format: <ldap_group>=<mysql_role>,<ldap_group2>=<mysql_role2>,.

    -

    authentication_ldap_simple_group_search_attr

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-group-search-attr=value
    ScopeGlobal
    DynamicYes
    Data typeString
    Defaultcn
    -

    The attribute name that specifies group names in the LDAP directory entries for simple LDAP authentication.

    -

    authentication_ldap_simple_group_search_filter

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-group-search-filter=value
    ScopeGlobal
    DynamicYes
    Data typeString
    Default(|(&(objectClass=posixGroup)(memberUid=%s))(&(objectClass=group)(member=%s)))
    -

    The custom group search filter for simple LDAP authentication.

    -

    authentication_ldap_simple_init_pool_size

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-init-pool-size=value
    ScopeGlobal
    DynamicYes
    Data typeInteger
    Default10
    Minimum value0
    Maximum value32767
    Unitconnections
    -

    The initial size of the connection pool to the LDAP server for simple LDAP authentication.

    -

    authentication_ldap_simple_log_status

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-log-status=value
    ScopeGlobal
    DynamicYes
    Data typeInteger
    Default1
    Minimum value1
    Maximum value6
    -

    The logging level for messages written to the error log for simple LDAP authentication.

    -

    authentication_ldap_simple_max_pool_size

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-max-pool-size=value
    ScopeGlobal
    DynamicYes
    Data typeInteger
    Default1000
    Minimum value0
    Maximum value32767
    Unitconnections
    -

    The maximum connection pool size to the LDAP server in simple LDAP authentication. The variable is used with authentication_ldap_simple_init_pool_size.

    -

    authentication_ldap_simple_server_host

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-server-host=value
    ScopeGlobal
    DynamicYes
    Data typeString
    DefaultNULL
    -

    The LDAP server host used for simple LDAP authentication. The LDAP server host can be an IP address or a host name.

    -

    authentication_ldap_simple_server_port

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-server-port=value
    ScopeGlobal
    DynamicYes
    Data typeInteger
    Default389
    Minimum value1
    Maximum value32376
    -

    The LDAP server TCP/IP port number used for simple LDAP authentication.

    -

    authentication_ldap_simple_ssl

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-ssl=value
    ScopeGlobal
    DynamicYes
    Data typeBoolean
    DefaultOFF
    -

    If this variable is enabled, the plugin connects to the server with SSL.

    -

    authentication_ldap_simple_tls

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-tls=value
    ScopeGlobal
    DynamicYes
    Data typeBoolean
    DefaultOFF
    -

    If this variable is enabled, the plugin connects to the server with TLS.

    -

    authentication_ldap_simple_user_search_attr

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OptionDescription
    Command-line–authentication-ldap-simple-user-search-attr=value
    ScopeGlobal
    DynamicYes
    Data typeString
    Defaultuid
    -

    The attribute name that specifies the user names in LDAP directory entries in simple LDAP authentication.

    +

    For more details, see the LDAP Authentication documentation.


    Last update: - 2023-09-27 + 2025-01-13
diff --git a/8.0/ldap-simple-variables.html b/8.0/ldap-simple-variables.html
new file mode 100644
index 00000000000..6d8c0479725
--- /dev/null
+++ b/8.0/ldap-simple-variables.html
@@ -0,0 +1,8105 @@

LDAP Simple system variables - Percona Server for MySQL

    LDAP Simple system variables

    +

The following variables configure simple LDAP authentication. Static variables can be modified only by restarting the server with a new value set in the configuration file (for example, my.cnf or my.ini) or passed as a command-line option when starting the server; variables marked as dynamic in the tables below can also be changed at runtime. A quick way to inspect current values follows the table.

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Name
    authentication_ldap_simple_bind_base_dn
    authentication_ldap_simple_bind_root_dn
    authentication_ldap_simple_bind_root_pwd
    authentication_ldap_simple_ca_path
    authentication_ldap_simple_fallback_server_host
    authentication_ldap_simple_fallback_server_port
    authentication_ldap_simple_group_role_mapping
    authentication_ldap_simple_group_search_attr
    authentication_ldap_simple_group_search_filter
    authentication_ldap_simple_init_pool_size
    authentication_ldap_simple_log_status
    authentication_ldap_simple_max_pool_size
    authentication_ldap_simple_server_host
    authentication_ldap_simple_server_port
    authentication_ldap_simple_ssl
    authentication_ldap_simple_tls
    authentication_ldap_simple_user_search_attr
    +
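To inspect the current values of all of these variables, standard MySQL syntax applies:

mysql> SHOW VARIABLES LIKE 'authentication_ldap_simple%';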

    authentication_ldap_simple_bind_base_dn

    Option          Description
    Command-line    --authentication-ldap-simple-bind-base-dn=value
    Scope           Global
    Dynamic         Yes
    Data type       String
    Default         NULL

    This variable sets the base Distinguished Name (DN) that simple LDAP authentication uses as the starting point when locating user entries in the LDAP directory.

    Set this value correctly: an incorrect base DN causes authentication failures and can create security risks.

    authentication_ldap_simple_bind_root_dn

    Option          Description
    Command-line    --authentication-ldap-simple-bind-root-dn=value
    Scope           Global
    Dynamic         No
    Data type       String
    Default         NULL

    Percona Server for MySQL uses a root Distinguished Name (DN) to connect to the LDAP server for simple LDAP authentication. This variable is used with authentication_ldap_simple_bind_root_pwd. This root DN, along with the root password, is used to authenticate with the LDAP server and obtain a connection.

    • If the MySQL account does not specify an LDAP user DN:

      • MySQL first authenticates to the LDAP server using the provided root DN and password.

      • Then, it searches the LDAP directory for the user DN corresponding to the MySQL user's name.

      • Finally, MySQL attempts to authenticate using the found user DN and the password provided by the MySQL user.

    • If the MySQL account specifies an LDAP user DN:

      • MySQL directly authenticates to the LDAP server using the provided user DN and the password supplied by the MySQL user.

      • This method is faster because it avoids the initial authentication step with the root DN.

    A minimal configuration sketch follows this list.
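    The following my.cnf sketch shows how the bind settings fit together. The host name, DNs, and password are illustrative placeholders, and the plugin-load-add library name is an assumption; adjust all values to your environment:

    [mysqld]
    # Load the simple LDAP authentication plugin (library name is an assumption)
    plugin-load-add = authentication_ldap_simple.so
    # LDAP server to contact (placeholder host, default port)
    authentication_ldap_simple_server_host = ldap.example.com
    authentication_ldap_simple_server_port = 389
    # Root DN and password used to bind and search for user DNs (placeholders)
    authentication_ldap_simple_bind_root_dn = "cn=admin,dc=example,dc=com"
    authentication_ldap_simple_bind_root_pwd = "admin-password"
    # Base DN under which user entries are located (placeholder)
    authentication_ldap_simple_bind_base_dn = "ou=people,dc=example,dc=com"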

    authentication_ldap_simple_bind_root_pwd

    Option          Description
    Command-line    --authentication-ldap-simple-bind-root-pwd=value
    Scope           Global
    Dynamic         No
    Data type       String
    Default         NULL

    The root password used to authenticate against the LDAP server. This variable is used with authentication_ldap_simple_bind_root_dn.

    authentication_ldap_simple_ca_path

    Option          Description
    Command-line    --authentication-ldap-simple-ca-path=value
    Scope           Global
    Dynamic         No
    Data type       String
    Default         NULL

    This variable specifies the absolute path to the Certificate Authority (CA) file for simple LDAP authentication. This file allows the authentication plugin to verify the LDAP server certificate, enhancing security.

    authentication_ldap_simple_fallback_server_host

    Option          Description
    Command-line    --authentication-ldap-simple-fallback-server-host=value
    Scope           Global
    Dynamic         Yes
    Data type       String
    Default         NULL

    Use with authentication_ldap_simple_fallback_server_port.

    If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server and authenticate using that server.

    authentication_ldap_simple_fallback_server_port

    Option          Description
    Command-line    --authentication-ldap-simple-fallback-server-port=value
    Scope           Global
    Dynamic         Yes
    Data type       Integer
    Default         NULL

    Use with authentication_ldap_simple_fallback_server_host.

    If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server and authenticate using that server.

    If the fallback server host has a value, and the fallback port is 0, users can specify multiple fallback servers.

    Use this format to specify multiple fallback servers: authentication_ldap_simple_fallback_server_host="ldap(s)://host:port,ldap(s)://host2:port2". A configuration sketch follows.
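    The following my.cnf sketch applies the multiple-fallback format described above; the host names and ports are placeholders:

    [mysqld]
    # With the port set to 0, the host value may list several ldap:// or
    # ldaps:// URLs, separated by commas (placeholder hosts)
    authentication_ldap_simple_fallback_server_host = "ldap://ldap2.example.com:389,ldaps://ldap3.example.com:636"
    authentication_ldap_simple_fallback_server_port = 0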

    authentication_ldap_simple_group_role_mapping

    Option          Description
    Command-line    --authentication-ldap-simple-group-role-mapping=value
    Scope           Global
    Dynamic         Yes
    Data type       String
    Default         NULL

    When an LDAP user logs in, the server checks whether the LDAP user is a member of one of the specified groups. If the user is a member, the server automatically grants the mapped MySQL roles to the user.

    The variable has this format: <ldap_group>=<mysql_role>,<ldap_group2>=<mysql_role2>,...
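    For example, assuming an LDAP group named dba, another named dev, and existing MySQL roles role_db_admin and role_db_dev (all placeholder names), the mapping could look like this in my.cnf:

    [mysqld]
    # <ldap_group>=<mysql_role> pairs, separated by commas (placeholder names)
    authentication_ldap_simple_group_role_mapping = "dba=role_db_admin,dev=role_db_dev"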

    authentication_ldap_simple_group_search_attr

    Option          Description
    Command-line    --authentication-ldap-simple-group-search-attr=value
    Scope           Global
    Dynamic         Yes
    Data type       String
    Default         cn

    The attribute name that specifies group names in the LDAP directory entries for simple LDAP authentication.

    authentication_ldap_simple_group_search_filter

    Option          Description
    Command-line    --authentication-ldap-simple-group-search-filter=value
    Scope           Global
    Dynamic         Yes
    Data type       String
    Default         (|(&(objectClass=posixGroup)(memberUid=%s))(&(objectClass=group)(member=%s)))

    The custom group search filter for simple LDAP authentication.
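    As a sketch, a deployment that only uses POSIX groups might narrow the search to the first branch of the default filter shown above; this value is illustrative, not a recommendation:

    [mysqld]
    # Match only posixGroup entries whose memberUid equals the login name
    authentication_ldap_simple_group_search_filter = "(&(objectClass=posixGroup)(memberUid=%s))"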

    authentication_ldap_simple_init_pool_size

    Option          Description
    Command-line    --authentication-ldap-simple-init-pool-size=value
    Scope           Global
    Dynamic         Yes
    Data type       Integer
    Default         10
    Minimum value   0
    Maximum value   32767
    Unit            connections

    The initial size of the connection pool to the LDAP server for simple LDAP authentication.

    authentication_ldap_simple_log_status

    Option          Description
    Command-line    --authentication-ldap-simple-log-status=value
    Scope           Global
    Dynamic         Yes
    Data type       Integer
    Default         1
    Minimum value   1
    Maximum value   6

    The logging level for messages written to the error log for simple LDAP authentication.

    authentication_ldap_simple_max_pool_size

    Option          Description
    Command-line    --authentication-ldap-simple-max-pool-size=value
    Scope           Global
    Dynamic         Yes
    Data type       Integer
    Default         1000
    Minimum value   0
    Maximum value   32767
    Unit            connections

    The maximum connection pool size to the LDAP server in simple LDAP authentication. The variable is used with authentication_ldap_simple_init_pool_size.
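    A my.cnf sketch that tunes both pool variables together; the values are illustrative:

    [mysqld]
    # Open 20 pooled LDAP connections at startup, allow growth up to 200
    authentication_ldap_simple_init_pool_size = 20
    authentication_ldap_simple_max_pool_size = 200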

    authentication_ldap_simple_server_host

    Option          Description
    Command-line    --authentication-ldap-simple-server-host=value
    Scope           Global
    Dynamic         No
    Data type       String
    Default         NULL

    The LDAP server host used for simple LDAP authentication.

    authentication_ldap_simple_server_port

    Option          Description
    Command-line    --authentication-ldap-simple-server-port=value
    Scope           Global
    Dynamic         No
    Data type       Integer
    Default         389

    The LDAP server TCP/IP port number used for simple LDAP authentication.

    authentication_ldap_simple_ssl

    Option          Description
    Command-line    --authentication-ldap-simple-ssl=value
    Scope           Global
    Dynamic         No
    Data type       Boolean
    Default         OFF

    If this variable is enabled, the plugin connects to the server with SSL.

    authentication_ldap_simple_tls

    Option          Description
    Command-line    --authentication-ldap-simple-tls=value
    Scope           Global
    Dynamic         No
    Data type       Boolean
    Default         OFF

    If this variable is enabled, the plugin connects to the server with TLS.
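    A sketch of enabling TLS together with server-certificate verification through authentication_ldap_simple_ca_path; the CA file path is a placeholder:

    [mysqld]
    # Connect to the LDAP server over TLS and verify its certificate
    authentication_ldap_simple_tls = ON
    authentication_ldap_simple_ca_path = /etc/ssl/certs/ldap-ca.pem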

    authentication_ldap_simple_user_search_attr

    Option          Description
    Command-line    --authentication-ldap-simple-user-search-attr=value
    Scope           Global
    Dynamic         Yes
    Data type       String
    Default         uid

    The attribute name that specifies the user names in LDAP directory entries in simple LDAP authentication.
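    For example, a directory that stores login names in the sAMAccountName attribute (common in Active Directory; an assumption, adjust to your schema) could override the default uid:

    [mysqld]
    # Attribute that holds user names in the directory (placeholder value)
    authentication_ldap_simple_user_search_attr = sAMAccountName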

    For more details, see the LDAP Authentication documentation.

    Last update: 2025-01-13
    diff --git a/8.0/release-notes/8.0.30-22.html b/8.0/release-notes/8.0.30-22.html

    Release highlights

    • SASL-based LDAP plugin

    • SASL-based LDAP variables

    • Fallback server variables for simple LDAP and SASL-based LDAP

    • FIDO authentication plugin

    New features

    • PS-6002: Added the global variable --replica-enable-event to maintain the create/alter event state on replicas.

    • PS-7980: Added the global system variables --authentication_ldap_simple_group_role_mapping and --authentication_ldap_sasl_group_role_mapping. When a user logs in with LDAP authentication, the server checks if the LDAP user is a member of any group specified in this variable. If the check is successful, the matching MySQL roles are automatically granted to the user.

    • PS-8275: Implements the ability to eject a cluster node when the node exceeds the flow control threshold by adding a MAJORITY mode to group_replication_flow_control_mode and a global system variable group_replication_auto_evict_timeout.

    Improvements

    • PS-8155: Implements support for multiple LDAP servers for the simple LDAP authentication plugin and the SASL-based LDAP authentication plugin. If the appropriate system variable is set, and if the primary server is unavailable, the authentication plugin attempts to connect and authenticate using a fallback server. A user can also specify multiple fallback servers. The following global system variables were added:

      • authentication_ldap_simple_fallback_server_host

      • authentication_ldap_simple_fallback_server_port

      • authentication_ldap_sasl_fallback_server_host

      • authentication_ldap_sasl_fallback_server_port

    Bug fixes

    diff --git a/8.0/search/search_index.json b/8.0/search/search_index.json

    Percona Server for MySQL 8.0 - Documentation

    This documentation is for the latest release: Percona Server for MySQL 8.0.40-31 (Release Notes).

    Percona Server for MySQL is a freely available, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior and optimized performance, greater scalability, and availability, enhanced backups, increased visibility and instrumentation.

    Percona Server for MySQL is trusted by thousands of enterprises to provide better performance and concurrency for their most demanding workloads.

    Enjoy all the benefits of Percona Server for MySQL Pro.

    You can use the Quickstart Guide to start using Percona Server for MySQL.

    "},{"location":"index.html#for-monitoring-and-management","title":"For Monitoring and Management","text":"

    Percona Monitoring and Management (PMM) monitors and provides actionable performance data for MySQL variants, including Percona Server for MySQL, Percona XtraDB Cluster, Oracle MySQL Community Edition, Oracle MySQL Enterprise Edition, and MariaDB. PMM captures metrics and data for the InnoDB, XtraDB, and MyRocks storage engines, and has specialized dashboards for specific engine details.

    Install PMM and connect your MySQL instances to it.

    "},{"location":"adaptive-network-buffers.html","title":"Adaptive network buffers","text":"

    To find the buffer size of the current connection, use the network_buffer_length status variable. Add SHOW GLOBAL to review the cumulative buffer sizes for all connections. This variable can help to estimate the maximum size of the network buffer's overhead.

    Network buffers grow towards the max_allowed_packet size and do not shrink until the connection is terminated. For example, if the connections are selected at random from the pool, an occasional big query eventually increases the buffers of all connections. The combination of max_allowed_packet set to a value between 64MB and 128MB and a connection count between 256 and 1024 can create a large memory overhead.

    Percona Server for MySQL version 8.0.23-14 introduces the net_buffer_shrink_interval variable to solve this issue. The default value is 0 (zero). If you set the value higher than 0, Percona Server records the network buffer's maximum used size for the number of seconds set by net_buffer_shrink_interval. When the next interval starts, the network buffer is reset to the recorded size. This action removes spikes in the buffer size.

    You can achieve similar results by disconnecting and reconnecting the TCP connections, which also resets their buffers, but tearing connections down and re-establishing them is a much heavier process.

    "},{"location":"adaptive-network-buffers.html#net_buffer_shrink_interval","title":"net_buffer_shrink_interval","text":"Option Description Command-line: \u2013net-buffer-shrink-interval=# Scope: Global Dynamic: Yes Data type: integer Default value: 0

    The interval is measured in seconds. The default value is 0, which disables the functionality. The minimum value is 0, and the maximum value is 31536000.
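    For example, the following my.cnf sketch records the buffers' peak usage over 60-second intervals and shrinks them at each interval boundary; the value is illustrative:

    [mysqld]
    # Shrink per-connection network buffers to their recorded peak every 60 seconds
    net_buffer_shrink_interval = 60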

    "},{"location":"added-features.html","title":"Updated supported features","text":"

    The following is a list of the latest supported features:

    • Percona Server for MySQL 8.0.27-18 adds support for SELECT FOR UPDATE SKIP LOCKED/NOWAIT. The transaction isolation level must be READ COMMITTED.

    • Percona Server for MySQL 8.0.27-18 adds the ability to cancel ongoing manual compactions. The cancel methods are the following:

      • Using either Control+C (from a session) or KILL (from another session) for client sessions running manual compactions by SET GLOBAL rocksdb_compact_cf (variable).

      • Using a global variable rocksdb_cancel_manual_compactions to cancel all ongoing manual compactions.

    • Percona Server for MySQL 8.0.23-14 adds support for generated columns and for indexes on them. Generated columns are not supported in versions earlier than 8.0.23-14.

    • Percona Server for MySQL 8.0.23-14 adds support for explicit DEFAULT value expressions. From version 8.0.13-3 to version 8.0.22-13, MyRocks did not support these expressions.

    "},{"location":"advisors.html","title":"Use Percona Monitoring and Management (PMM) Advisors","text":"

    Percona Monitoring and Management (PMM) provides several categories of Advisors. Each Advisor contains a set of automated checks. These checks investigate your database settings for a specific range of possible issues.

    The Percona Platform hosts the Advisors.

    The PMM Server automatically downloads the Advisors if the Advisors and Telemetry options are enabled in Configuration > Settings > Advanced Settings. Both options are enabled by default.

    See also

    PMM documentation - Advisor checks

    "},{"location":"ai-docs.html","title":"How we use artificial intelligence","text":"

    The technical writer oversees the integration of AI-driven tools and platforms into the documentation workflow, ensuring that AI-generated text meets the standards for clarity, coherence, and accuracy. While AI assists in tasks such as content generation, language enhancement, and formatting optimization, the technical writer is responsible for validating and refining the output to ensure its suitability for the intended audience.

    Throughout the documentation process, the technical writer reviews the quality and relevance of AI-generated content in detail and with critical judgment. By leveraging their expertise in language, communication, and subject matter knowledge, the technical writer collaborates with AI systems to refine and tailor the documentation to meet the specific needs and preferences of the audience.

    While AI accelerates the documentation process and enhances productivity, the technical writer verifies the information's accuracy and maintains consistency in terminology, style, and tone. The technical writer ensures that the final document reflects the company's commitment to excellence.

    "},{"location":"aio-page-requests.html","title":"Multiple page asynchronous I/O requests","text":"

    The I/O unit size in InnoDB is only one page, even when the server is doing read-ahead. A 16KB I/O unit size is too small for sequential reads and less efficient than a larger I/O unit size. InnoDB uses Linux asynchronous I/O (aio) by default. By submitting multiple consecutive 16KB read requests at the same time, Linux internally merges the requests and reads more efficiently.

    This feature is able to submit multiple page I/O requests and works in the background. You can manage the feature with the linear read-ahead technique. This technique adds pages to the buffer pool based on the buffer pool pages being accessed sequentially. The innodb_read_ahead_threshold configuration parameter controls this operation.

    In an HDD RAID 1+0 environment, more than 1000MB/s of disk reads can be achieved by submitting 64 consecutive page requests at once, while submitting single-page requests achieves only about 160MB/s.

    "},{"location":"aio-page-requests.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1 - The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"aio-page-requests.html#status-variables","title":"Status variables","text":""},{"location":"aio-page-requests.html#innodb_buffered_aio_submitted","title":"Innodb_buffered_aio_submitted","text":"Option Description Scope: Global Data type: Numeric

    This variable shows the number of submitted buffered asynchronous I/O requests.

    "},{"location":"aio-page-requests.html#other-reading","title":"Other reading","text":"
    • Making full table scan 10x faster in InnoDB

    • Bug #68659 InnoDB Linux native aio should submit more i/o requests at once

    "},{"location":"apparmor.html","title":"Working with AppArmor","text":"

    The operating system has a Discretionary Access Controls (DAC) system. AppArmor supplements the DAC with a Mandatory Access Control (MAC) system. AppArmor is the default security module for Ubuntu or Debian systems and uses profiles to define how programs access resources.

    AppArmor is path-based and restricts processes by using profiles. Each profile contains a set of policy rules. Some applications may install their profile along with the application. If an installation does not also install a profile, then that application is not part of the AppArmor subsystem. You can also create profiles since they are simple text files stored in the /etc/apparmor.d directory.

    A profile is in one of the following modes:

    • Enforce - the default setting; applications are prevented from taking actions restricted by the profile rules.

    • Complain - applications are allowed to take restricted actions, and the actions are logged.

    • Disabled - applications are allowed to take restricted actions, and the actions are not logged.

    You can mix enforce profiles and complain profiles in your server.

    "},{"location":"apparmor.html#install-the-utilities-used-to-control-apparmor","title":"Install the utilities used to control AppArmor","text":"

    Install the apparmor-utils package to work with profiles. Use these utilities to create, update, enforce, switch to complain mode, and disable profiles, as needed:

    $ sudo apt install apparmor-utils\n
    Expected output
    Reading package lists... Done\nBuilding dependency tree\n...\nThe following additional packages will be installed:\n    python3-apparmor python3-libapparmor\n...\n
    "},{"location":"apparmor.html#check-the-current-status","title":"Check the current status","text":"

    As root or using sudo, you can check the AppArmor status:

    $ sudo aa-status\n
    Expected output
    apparmor module is loaded.\n34 profiles are loaded.\n32 profiles in enforce mode.\n...\n    /usr/sbin/mysqld\n...\n2 profiles in complain mode.\n...\n3 profiles have profiles defined.\n...\n0 processes are in complain mode.\n0 processes are unconfined but have a profile defined.\n
    "},{"location":"apparmor.html#switch-a-profile-to-complain-mode","title":"Switch a profile to complain mode","text":"

    Switch a profile to complain mode when the program is in your path with this command:

    $ sudo aa-complain <program>\n

    If needed, specify the program\u2019s path in the command:

    $ sudo aa-complain /sbin/<program>\n

    If the profile is not stored in /etc/apparmor.d/, use the following command:

    $ sudo aa-complain /path/to/profiles/<program>\n
    "},{"location":"apparmor.html#switch-a-profile-to-enforce-mode","title":"Switch a profile to enforce mode","text":"

    Switch a profile to the enforce mode when the program is in your path with this command:

    $ sudo aa-enforce <program>\n

    If needed, specify the program\u2019s path in the command:

    $ sudo aa-enforce /sbin/<program>\n

    If the profile is not stored in /etc/apparmor.d/, use the following command:

    $ sudo aa-enforce /path/to/profile\n
    "},{"location":"apparmor.html#disable-one-profile","title":"Disable one profile","text":"

    You can disable a profile but it is recommended to Switch a Profile to Complain mode.

    Use either of the following methods to disable a profile:

    $ sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/\n$ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld\n

    or

    $ aa-disable /etc/apparmor.d/usr.sbin.mysqld\n
    "},{"location":"apparmor.html#reload-all-profiles","title":"Reload all profiles","text":"

    Run either of the following commands to reload all profiles:

    $ sudo service apparmor reload\n

    or

    $ sudo systemctl reload apparmor.service\n
    "},{"location":"apparmor.html#reload-one-profile","title":"Reload one profile","text":"

    To reload one profile, run the following:

    $ sudo apparmor_parser -r /etc/apparmor.d/<profile>\n

    For some changes to take effect, you may need to restart the program.

    "},{"location":"apparmor.html#disable-apparmor","title":"Disable AppArmor","text":"

    AppArmor provides security and disabling the system is not recommened. If AppArmor must be disabled, run the following commands:

    1. Check the status.

      $ sudo apparmor_status\n
    2. Stop and disable AppArmor.

      $ sudo systemctl stop apparmor\n$ sudo systemctl disable apparmor\n
    "},{"location":"apparmor.html#add-the-mysqld-profile","title":"Add the mysqld profile","text":"

    Add the mysqld profile with the following procedure:

    1. Download the current version of the AppArmor:

      $ wget https://raw.githubusercontent.com/percona/percona-server/8.0/build-ps/debian/extra/apparmor.d/usr.sbin.mysqld.in\n

      The expected output:

      ...\nSaving to 'apparamor-profile`\n...\n
    2. Move the file to /etc/apparmor.d/usr.sbin.mysqld

      $ sudo mv apparmor-profile /etc/apparmor.d/usr.sbin.mysqld\n
    3. Create an empty file for editing:

      $ sudo touch /etc/apparmor.d/local/usr.sbin.mysqld\n
    4. Load the profile:

      $ sudo apparmor_parser -r -T -W /etc/apparmor.d/usr.sbin.mysqld\n
    5. Restart Percona Server for MySQL:

      $ sudo systemctl restart mysql\n
    6. Verify the profile status:

      $ sudo aa-status\n
      Expected output
      ...\nprocesses are in enforce mode\n...\n/usr/sbin/mysqld (100840)\n...\n
    "},{"location":"apparmor.html#edit-the-mysqld-profile","title":"Edit the mysqld profile","text":"

    Only edit /etc/apparmor.d/local/usr.sbin.mysql. We recommend that you switch a profile to Complain mode before editing the file. Edit the file in any text editor. When your work is done, Reload one profile and Switch a Profile to Enforce mode.

    "},{"location":"apparmor.html#configure-a-custom-data-directory-location","title":"Configure a custom data directory location","text":"

    You can change the data directory to a non-default location, like /var/lib/mysqlcustom. You should enable audit mode, to capture all of the actions, and edit the profile to allow access for the custom location.

    $ cat /etc/mysql/mysql.conf.d/mysqld.cnf\n
    Expected output
    The Percona Server 8.0 configuration file.\n\nFor explanations see\nhttps://dev.mysql.com/doc/mysql/en/server-system-variables.html\n\n[mysqld]\npid-file    = /var/run/mysqld/mysqld.pid\nsocket        = /var/run/mysqld/mysqld.sock\n*datadir    = /var/lib/mysqlcustom*\nlog-error    = /var/log/mysql/error.log\n

    Enable audit mode for mysqld. In this mode, the security policy is enforced and all access is logged.

    $ aa-audit mysqld\n

    Restart Percona Server for MySQL.

    $ sudo systemctl mysql restart\n

    The restart fails because AppArmor has blocked access to the custom data directory location. To diagnose the issue, check the logs for the following:

    • ALLOWED - A log event when the profile is in complain mode and the action violates a policy.

    • DENIED - A log event when the profile is in enforce mode and the action is blocked.

    For example, the following log entries show DENIED:

    Expected output
    ...
    Dec 07 12:17:08 ubuntu-s-4vcpu-8gb-nyc1-01-aa-ps audit[16013]: AVC apparmor="DENIED" operation="mknod" profile="/usr/sbin/mysqld" name="/var/lib/mysqlcustom/binlog.index" pid=16013 comm="mysqld" requested_mask="c" denied_mask="c" fsuid=111 ouid=111
    Dec 07 12:17:08 ubuntu-s-4vcpu-8gb-nyc1-01-aa-ps kernel: audit: type=1400 audit(1607343428.022:36): apparmor="DENIED" operation="mknod" profile="/usr/sbin/mysqld" name="/var/lib/mysqlcustom/mysqld_tmp_file_case_insensitive_test.lower-test" pid=16013 comm="mysqld" requested_mask="c" denied_mask="c" fsuid=111 ouid=111
    ...

    Open /etc/apparmor.d/local/usr.sbin.mysqld in a text editor and edit the following entries in the "Allow data dir access" section.

    # Allow data dir access
    /var/lib/mysqlcustom/ r,
    /var/lib/mysqlcustom/** rwk,

    In /etc/apparmor.d/local/usr.sbin.mysqld, comment out, using the # symbol, the current entries in the "Allow data dir access" section. This step is optional. If you skip this step, mysqld continues to access the default data directory location.

    Note

    Edit the local version of the file instead of the main profile. Separating the changes makes maintenance easier.

    Reload the profile:

    $ apparmor_parser -r -T /etc/apparmor.d/usr.sbin.mysqld

    Restart mysql:

    $ systemctl restart mysqld
    "},{"location":"apparmor.html#set-up-a-custom-log-location","title":"Set up a custom log location","text":"

    To move your logs to a custom location, you must edit the my.cnf configuration file and then edit the local profile to allow access:

    cat /etc/mysql/mysql.conf.d/mysqld.cnf\n
    Expected output
    The Percona Server 8.0 configuration file.\n\nFor explanations see\nhttps://dev.mysql.com/doc/mysql/en/server-system-variables.html\n\n[mysqld]\npid-file    = /var/run/mysqld/mysqld.pid\nsocket        = /var/run/mysqld/mysqld.sock\ndatadir    = /var/lib/mysql\nlog-error    = /*custom-log-dir*/mysql/error.log\n

    Verify the custom directory exists.

    $ ls -la /custom-log-dir/\n
    Expected output
    total 12\ndrwxrwxrwx  3 root root 4096 Dec  7 13:09 .\ndrwxr-xr-x 24 root root 4096 Dec  7 13:07 ..\ndrwxrwxrwx  2 root root 4096 Dec  7 13:09 mysql\n
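If the directory does not exist yet, one way to create it and give the mysql user access is the following; the ownership settings are an assumption about your setup:

$ sudo mkdir -p /custom-log-dir/mysql\n$ sudo chown mysql:mysql /custom-log-dir/mysql\n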

Start Percona Server for MySQL.

    $ service mysql start\n
    Expected output
    Job for mysql.service failed because the control process exited with error code.\nSee \"systemctl status mysql.service\" and \"journalctl -xe\" for details.\n
    $ journalctl -xe\n
    Expected output
    ...\nAVC apparmor=\"DENIED\" operation=\"mknod\" profile=\"/usr/sbin/mysqld\" name=\"/custom-log-dir/mysql/error.log\"\n...\n

    The access has been denied by AppArmor. Edit the local profile in the Allow log file access section to allow access to the custom log location.

    $ cat /etc/apparmor.d/local/usr.sbin.mysqld\n
    Expected output
# Site-specific additions and overrides for usr.sbin.mysqld.\n# For more details, please see /etc/apparmor.d/local/README.\n\n# Allow log file access\n/custom-log-dir/mysql/ r,\n/custom-log-dir/mysql/** rw,\n

    Reload the profile:

    $ apparmor_parser -r -T /etc/apparmor.d/usr.sbin.mysqld\n

    Restart Percona Server:

$ sudo systemctl restart mysql\n
    "},{"location":"apparmor.html#set-secure_file_priv-directory-location","title":"Set secure_file_priv directory location","text":"

    By default, secure_file_priv points to the following location:

mysql> SHOW VARIABLES LIKE 'secure_file_priv';\n
    Expected output
    +------------------+-----------------------+\n| Variable_name    | Value                 |\n+------------------+-----------------------+\n| secure_file_priv | /var/lib/mysql-files/ |\n+------------------+-----------------------+\n
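Because secure_file_priv is read-only at runtime, set a new location in the configuration file and restart the server; the directory below is only an example:

[mysqld]\nsecure_file_priv=/var/lib/mysqlcustom/\n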

To allow access to another location, open the local profile in a text editor and review the settings in the Allow data dir access section:

# Allow data dir access\n/var/lib/mysql/ r,\n/var/lib/mysql/** rwk,\n

    Edit the local profile in a text editor to allow access to the custom location.

    $ cat /etc/apparmor.d/local/usr.sbin.mysqld\n
    Expected output
# Site-specific additions and overrides for usr.sbin.mysqld.\n# For more details, please see /etc/apparmor.d/local/README.\n\n# Allow data dir access\n/var/lib/mysqlcustom/ r,\n/var/lib/mysqlcustom/** rwk,\n

    Reload the profile:

    $ apparmor_parser -r -T /etc/apparmor.d/usr.sbin.mysqld\n

    Restart Percona Server for MySQL:

$ sudo systemctl restart mysql\n
    "},{"location":"apt-download-deb.html","title":"Install Percona Server for MySQL 8.0 using downloaded DEB packages","text":"

When installing packages manually, you must resolve all the dependencies and install missing packages yourself. Install the following packages before manually installing Percona Server (an example installation command follows the list):

    • mysql-common

    • libjemalloc1

    • libaio1

• libmecab2
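One possible way to install these dependencies from the distribution repositories; the package names can vary between releases:

$ sudo apt install mysql-common libjemalloc1 libaio1 libmecab2\n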

Download the packages from Percona Product Downloads. If needed, instructions for using Percona Product Downloads are available.

    1. The following example uses Wget to download the Percona Server 8.0 bundle from the specified URL. The bundle is a tar archive containing the Percona Server 8.0.31-23 binary for Debian Buster (x86_64 architecture):

      $ wget https://downloads.percona.com/downloads/Percona-Server-8.0/Percona-Server-8.0.31-23/binary/debian/buster/x86_64/Percona-Server-8.0.31-23-r71449379-buster-x86_64-bundle.tar\n
2. This command line instruction uses the tar command to extract files from a tarball (an archive file, often compressed). The tar command is a Unix utility that stores files in, and extracts files from, an archive file known as a tarfile.

      The xvf option combines three separate options: x, v, and f.

  • x - extracts the files from the archive

  • v - verbose; prints the file names as they are extracted

  • f - file; uses the next argument as the name of the archive file

      The following command extracts the contents of the Percona-Server-8.0.31-23-r71449379-buster-x86_64-bundle.tar file and prints the file names.

      $ tar xvf Percona-Server-8.0.31-23-r71449379-buster-x86_64-bundle.tar\n
      Expected output
      libperconaserverclient21_8.0.31-23-1.buster_amd64.deb\nlibperconaserverclient21-dev_8.0.31-23-1.buster_amd64.deb\npercona-mysql-router_8.0.31-23-1.buster_amd64.deb\npercona-server-client_8.0.31-23-1.buster_amd64.deb\npercona-server-common_8.0.31-23-1.buster_amd64.deb\npercona-server-dbg_8.0.31-23-1.buster_amd64.deb\npercona-server-rocksdb_8.0.31-23-1.buster_amd64.deb\npercona-server-server_8.0.31-23-1.buster_amd64.deb\npercona-server-source_8.0.31-23-1.buster_amd64.deb\npercona-server-test_8.0.31-23-1.buster_amd64.deb\n
    3. Install Percona Server for MySQL using the dpkg utility to install Debian (.deb) packages. The installation requires either root or the sudo command. sudo allows you to run programs with the security privileges of another user, usually as the superuser.

      dpkg is a package manager for Debian-based systems and can install, remove, and provide information about .deb packages.

      The -i option tells dpkg to install the package.

The *.deb pattern is a wildcard that matches any file in the current directory that ends with the .deb extension.

      $ sudo dpkg -i *.deb\n
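If dpkg reports unresolved dependencies, you can usually let APT fetch the missing packages and finish the configuration, and then re-run the installation:

$ sudo apt-get install -f\n$ sudo dpkg -i *.deb\n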

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB changes by Percona Server for MySQL version.

    "},{"location":"apt-files.html","title":"Files in the DEB package built for Percona Server for MySQL 8.0","text":"Package Contains percona-server-server The database server itself, the mysqld binary and associated files. percona-server-common The files common to the server and client. percona-server-client The command line client. percona-server-dbg Debug symbols for the server. percona-server-test The database test suite. percona-server-source The server source. percona-server-rocksdb The files for rocksdb installation. percona-mysql-router The mysql router. libperconaserverclient21-dev Header files needed to compile software to use the client library. libperconaserverclient21 The client-shared library. The version is incremented when there is an ABI change that requires software using the client library to be recompiled or its source code modified."},{"location":"apt-pinning.html","title":"Apt pinning the Percona Server for MySQL 8.0 packages","text":"

    Apt pinning is a feature in Debian and its derivatives like Ubuntu, allowing you to prioritize package versions from different repositories. When you pin the Percona Server for MySQL 8.0 packages, you tell your system\u2019s package manager to prefer this specific version over others available in different repositories. This ability is beneficial when you want to ensure that you\u2019re running a specific version of Percona Server for MySQL due to compatibility or stability reasons, despite newer versions available elsewhere.

To apt pin Percona Server for MySQL 8.0, create a new file for the Percona Server package in the /etc/apt/preferences.d/ directory. In this file, you specify the package name followed by a pinning priority. A higher priority ensures that the version of Percona Server you wish to install is preferred over other versions.

Identify the packages to pin in the 'Package' field.

Then, assign a priority in the 'Pin-Priority' field; a common practice is to set it above 1000 so the pinned version is preferred over other packages.

Create a new file, /etc/apt/preferences.d/00percona.pref, and add the following to it:

Package: *\nPin: release o=Percona Development Team\nPin-Priority: 1001\n

    Save this file and update your package lists with sudo apt update. Finally, install Percona Server for MySQL 8.0 with sudo apt install percona-server-server-8.0, and apt will adhere to your pinning preferences.
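You can verify that the pin is in effect with apt-cache policy; the pinned repository entry should show the higher priority:

$ apt-cache policy percona-server-server\n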

For more information, see the Debian wiki page on AptConfiguration.

    "},{"location":"apt-repo.html","title":"Use an APT repository to install Percona Server for MySQL 8.0","text":"

    Ready-to-use packages are available from the Percona Server for MySQL software repositories and the Percona downloads page.

    Specific information on the supported platforms, products, and versions is described in Percona Software and Platform Lifecycle.

    We gather Telemetry data in the Percona packages and Docker images.

    Review Get more help for ways that we can work with you.

    "},{"location":"apt-repo.html#version-changes","title":"Version changes","text":"

Starting with Percona Server 8.0.37-29, the following operating systems on the Percona Software Downloads page include ARM64 packages with the arm64.deb extension:

    • Debian GNU/Linux 12.0

    • Debian GNU/Linux 11.0

    • Ubuntu 24.04

    • Ubuntu 22.04

    • Ubuntu 20.04

    "},{"location":"apt-repo.html#install-percona-server-for-mysql-using-apt","title":"Install Percona Server for MySQL using APT","text":"
    1. This command line instruction uses the apt command to update the package lists for upgrades and new package installations.

      • sudo is a command that allows you to run programs with the security privileges of another user, by default, as the superuser. Updating the package lists typically requires superuser or \u2018root\u2019 privileges.

      • apt is a command-line interface that handles package management in Debian and its derivatives.

  • The update option resynchronizes the package index files from the sources specified in the system\u2019s sources.list file. Run this command regularly to get the latest package updates.

      $ sudo apt update\n
    2. This command line instruction uses superuser privileges to install the curl package using the apt package manager. curl is a command-line tool used to transfer data using various network protocols.

      $ sudo apt install curl\n
    3. This command line instruction uses curl to download the percona-release_latest.generic_all.deb file from the https://repo.percona.com/apt location.

  The -O option saves the downloaded file with the same name used in the URL.

      $ curl -O https://repo.percona.com/apt/percona-release_latest.generic_all.deb\n
    4. The following command uses the apt command to install multiple packages. gnupg2 is the GNU Privacy Guard that provides cryptographic privacy and authentication. lsb-release is a Linux utility that provides certain Linux Standard Base (LSB) and distribution-specific information. ./percona-release_latest.generic_all.deb is a Debian package in the current directory.

      $ sudo apt install gnupg2 lsb-release ./percona-release_latest.generic_all.deb\n
    5. The following command uses superuser privileges to update the package lists from the repositories so that the system knows about the latest versions of packages and their dependencies.

      $ sudo apt update\n
6. This command line instruction uses the percona-release command, a tool provided by Percona, to set up the repository for a specific Percona Server version.

      $ sudo percona-release setup ps80\n
    7. You can check the repository setup for the Percona original release list in /etc/apt/sources.list.d/percona-original-release.list. The APT system uses this file to know where to find updates and new packages for Percona software.
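  For example, display the repository definition with the following command:

  $ cat /etc/apt/sources.list.d/percona-original-release.list\n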

    8. This command uses the apt command to install the percona-server-server package.

      $ sudo apt install percona-server-server\n

    See Configuring Percona repositories with percona-release for more information.

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    Percona Server for MySQL contains user-defined functions from the Percona Toolkit. These user-defined functions provide faster checksums. For more details on the user-defined functions, see Percona Toolkit UDF functions.

    After the installation completes, run the following commands to create these functions:

    mysql -e \"CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'\"\nmysql -e \"CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'\"\nmysql -e \"CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'\"\n
    "},{"location":"apt-repo.html#install-the-percona-testing-repository-using-apt","title":"Install the Percona Testing repository using APT","text":"

    Percona offers pre-release builds from the testing repository. As a superuser, run percona-release with the testing argument to enable it.

    $ sudo percona-release enable ps80 testing\n

    Do not run testing repository builds in production. The build may not contain all the features available in the final release and may change without notice.

    "},{"location":"apt-run.html","title":"Run Percona Server for MySQL 8.0 after APT repository installation","text":"

    Percona Server for MySQL stores the data files in /var/lib/mysql/ by default. You can find the configuration file that is used to manage Percona Server for MySQL in /etc/mysql/my.cnf.

    Note

Unlike previous Percona Server for MySQL versions, Debian and Ubuntu installations do not automatically create a special debian-sys-maint user, which the control scripts can use to manage the Percona Server for MySQL mysqld and mysqld_safe services. If you still require this user, you\u2019ll need to create it manually.

Run the following commands as root or by using the sudo command.

1. Start the service

  Percona Server for MySQL starts automatically after installation unless the installation encounters errors. You can also start it manually by running: service mysql start

2. Confirm that the service is running

  You can check the service status by running: service mysql status

3. Stop the service

  You can stop the service by running: service mysql stop

4. Restart the service

  You can restart the service by running: service mysql restart

    Note

    Debian 9.0 (stretch) and Ubuntu 18.04 LTS (bionic) come with systemd as the default system and service manager. You can invoke all the above commands with systemctl instead of service. Currently, both are supported.

    "},{"location":"apt-run.html#working-with-apparmor","title":"Working with AppArmor","text":"

    For information on AppArmor, see Working with AppArmor.

    "},{"location":"apt-uninstall-server.html","title":"Uninstall Percona Server for MySQL 8.0 using the APT package manager","text":"

To uninstall Percona Server for MySQL, you must remove all the installed packages. Removing packages with apt remove does not remove the configuration and data files. Removing the packages with apt purge removes the packages along with the configuration files and data files (all the databases). Choose the command that best suits your needs.

    1. To uninstall Percona Server for MySQL, you must stop the Percona Server for MySQL service:

      $ sudo systemctl stop mysql\n
    2. Remove the Percona Server for MySQL packages. You can use either command.

  a. The apt remove command removes only the packages. This operation does not remove the data files (databases, tables, logs, configuration, and other files).

  $ sudo apt remove percona-server*\n

  If you do not need these remaining files, remove them manually.

      b. This command removes all the packages and any associated configuration files. This action ensures the complete removal of the packages from the system.

  $ sudo apt purge percona-server*\n
    3. To ensure associated packages are removed, run the following command:

      $ sudo apt autoremove\n
    4. You can manually remove the data directory by executing the following command. Back up any of the necessary data before deleting the files.

      $ sudo rm -rf /var/lib/mysql/\n
    "},{"location":"audit-log-filter-compression-encryption.html","title":"Audit Log Filter compression and encryption","text":""},{"location":"audit-log-filter-compression-encryption.html#compression","title":"Compression","text":"

    You can enable compression for any format by setting the audit_log_filter_compression system variable when the server starts.

    The audit_log_filter_compression variable can be either of the following:

    • NONE (no compression) - the default value
    • GZIP - uses the GNU Zip compression

    If compression and encryption are enabled, the plugin applies compression before encryption. If you must manually recover a file with both settings, first decrypt the file and then uncompress the file.

    "},{"location":"audit-log-filter-compression-encryption.html#encryption","title":"Encryption","text":"

    You can encrypt any audit log filter file in any format. The audit log filter plugin generates the initial password, but you can use user-defined passwords after that. The plugin stores the passwords in the keyring, so that feature must be enabled.

Set the audit_log_filter_encryption system variable when the server starts. The allowed values are the following:

    • NONE - no encryption, the default value
    • AES - AES-256-CBC (Cipher Block Chaining) encryption

AES uses a 256-bit key size.

    The following audit log filter functions are used with encryption:

• audit_log_encryption_password_set() - Stores the password in the keyring. If encryption is enabled, the function also rotates the log file by renaming the current log file and creating a log file encrypted with the password.
• audit_log_encryption_password_get() - Invoking this function without an argument returns the current encryption password. An argument that specifies the keyring ID of an archived password or the current password returns that password by ID.

The audit_log_filter_password_history_keep_days variable is used with encryption. If the variable is not zero (0), invoking audit_log_encryption_password_set() causes the expiration of archived audit log passwords.

    When the plugin starts with encryption enabled, the plugin checks if the keyring has an audit log filter encryption password. If no password is found, the plugin generates a random password and stores this password in the keyring. Use audit_log_encryption_password_get() to review this password.

    If compression and encryption are enabled, the plugin applies compression before encryption. If you must manually recover a file with both settings, first decrypt the file and then uncompress the file.
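The following is a minimal configuration sketch that enables both compression and encryption at server startup:

[mysqld]\naudit_log_filter_compression=GZIP\naudit_log_filter_encryption=AES\n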

    "},{"location":"audit-log-filter-compression-encryption.html#manually-uncompressing-and-decrypting-audit-log-filter-files","title":"Manually uncompressing and decrypting audit log filter files","text":"

    To decrypt an encrypted log file, use the openssl command. For example:

$ openssl enc -d -aes-256-cbc -pass pass:password \\\n    -iter iterations -md sha256 \\\n    -in audit.timestamp.log.pwd_id.enc \\\n    -out audit.timestamp.log\n

To execute that command, you must obtain the password and the iterations count. To do this, use audit_log_encryption_password_get().

This function gets the encryption password and the iterations count, and returns this data as a JSON-encoded string. For example, if the audit log file name is audit.20190415T151322.log.20190414T223342-2.enc, the password ID is 20190414T223342-2 and the keyring ID is audit-log-20190414T223342-2.

    Get the keyring password:

    mysql> SELECT audit_log_encryption_password_get('audit-log-20190414T223342-2');\n

    The return value of this function may look like the following:

    Expected output
    {\"password\":\"{randomly-generated-alphanumeric-string}\",\"iterations\":568977}\n
    "},{"location":"audit-log-filter-formats.html","title":"Audit Log Filter file format overview","text":"

    When an auditable event occurs, the plugin writes a record to the log file.

    After the plugin starts, the first record lists the description of the server and the options at startup. After the first record, the auditable events are connections, disconnections, SQL statements executed, and so on. Statements within stored procedures or triggers are not logged, only the top-level statements.

    If files are referenced by LOAD_DATA, the contents are not logged.

Set the format with the audit_log_filter_format system variable at startup. The available format types are the following:

• XML (new style) - audit_log_filter_format=NEW - the default format
• XML (old style) - audit_log_filter_format=OLD - the original version of the XML format
• JSON - audit_log_filter_format=JSON - files written as a JSON array

    By default, the file contents in the new-style XML format are not compressed or encrypted.

When you change audit_log_filter_format, you should also change the audit_log_filter_file name. For example, when changing audit_log_filter_format to JSON, change audit_log_filter_file to audit.json. If you don\u2019t change the audit_log_filter_file name, all audit log filter files have the same base name, and you won\u2019t be able to easily find where the format changed.
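For example, both variables could be set together at server startup:

[mysqld]\naudit_log_filter_format=JSON\naudit_log_filter_file=audit.json\n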

    "},{"location":"audit-log-filter-json.html","title":"Audit Log Filter format - JSON","text":"

The JSON format has one top-level JSON array, which contains JSON objects with key-value pairs. Each object represents an event in the audit. Some pairs are listed in every audit record. The audit record type determines if other key-value pairs are listed. The order of the pairs within an audit record is not guaranteed. The value description may be truncated.

    Certain statistics, such as query time and size, are only available in the JSON format and help detect activity outliers when analyzed.

    [\n  {\n    \"timestamp\": \"2023-03-29 11:17:03\",\n    \"id\": 0,\n    \"class\": \"audit\",\n    \"server_id\": 1\n  },\n  {\n    \"timestamp\": \"2023-03-29 11:17:05\",\n    \"id\": 1,\n    \"class\": \"command\",\n    \"event\": \"command_start\",\n    \"connection_id\": 1,\n    \"command_data\": {\n      \"name\": \"command_start\",\n      \"status\": 0,\n      \"command\": \"query\"}\n  },\n  {\n    \"timestamp\": \"2023-03-29 11:17:05\",\n    \"id\": 2,\n    \"class\": \"general\",\n    \"event\": \"log\",\n    \"connection_id\": 11,\n    \"account\": { \"user\": \"root[root] @ localhost []\", \"host\": \"localhost\" },\n    \"login\": { \"user\": \"root[root] @ localhost []\", \"os\": \"\", \"ip\": \"\", \"proxy\": \"\" },\n    \"general_data\": {\n      \"command\": \"Query\",\n      \"sql_command\": \"create_table\",\n      \"query\": \"CREATE TABLE t1 (c1 INT)\",\n      \"status\": 0}\n  },\n  {\n    \"timestamp\": \"2023-03-29 11:17:05\",\n    \"id\": 3,\n    \"class\": \"query\",\n    \"event\": \"query_start\",\n    \"connection_id\": 11,\n    \"query_data\": {\n      \"query\": \"CREATE TABLE t1 (c1 INT)\",\n      \"status\": 0,\n      \"sql_command\": \"create_table\"}\n  },\n  {\n    \"timestamp\": \"2023-03-29 11:17:05\",\n    \"id\": 4,\n    \"class\": \"query\",\n    \"event\": \"query_status_end\",\n    \"connection_id\": 11,\n    \"query_data\": {\n      \"query\": \"CREATE TABLE t1 (c1 INT)\",\n      \"status\": 0,\n      \"sql_command\": \"create_table\"}\n  },\n  {\n    \"timestamp\": \"2023-03-29 11:17:05\",\n    \"id\": 5,\n    \"class\": \"general\",\n    \"event\": \"status\",\n    \"connection_id\": 11,\n    \"account\": { \"user\": \"root[root] @ localhost []\", \"host\": \"localhost\" },\n    \"login\": { \"user\": \"root[root] @ localhost []\", \"os\": \"\", \"ip\": \"\", \"proxy\": \"\" },\n    \"general_data\": {\n      \"command\": \"Query\",\n      \"sql_command\": \"create_table\",\n      \"query\": \"CREATE TABLE t1 (c1 INT)\",\n      \"status\": 0}\n  },\n  {\n    \"timestamp\": \"2023-03-29 11:17:05\",\n    \"id\": 6,\n    \"class\": \"command\",\n    \"event\": \"command_end\",\n    \"connection_id\": 1,\n    \"command_data\": {\n      \"name\": \"command_end\",\n      \"status\": 0,\n      \"command\": \"query\"}\n  }\n]\n
    The order of the attributes within the JSON object can vary. Certain attributes are in every element. Other attributes are optional and depend on the type of event and the filter settings or plugin settings.

    The following fields are contained in each object:

    • timestamp
    • id
    • class
    • event

    The possible attributes in a JSON object are the following:

• class - Defines the type of event
• account - Defines the MySQL account associated with the event
• connection_data - Defines the client connection
• connection_id - Defines the client connection identifier
• event - Defines a subclass of the event class
• general_data - Defines the executed statement or command when the audit record has a class value of general
• id - Defines the event ID
• login - Defines how the client connected to the server
• query_statistics - Defines optional query statistics and is used for outlier detection
• shutdown_data - Defines the audit log filter termination
• startup_data - Defines the initialization of the audit log filter plugin
• table_access_data - Defines access to a table
• time - Defines an integer that represents a UNIX timestamp
• timestamp - Defines a UTC value in the YYYY-MM-DD hh:mm:ss format

"},{"location":"audit-log-filter-naming.html","title":"Audit Log Filter file naming conventions","text":""},{"location":"audit-log-filter-naming.html#name-qualities","title":"Name qualities","text":"

    The audit log filter file name has the following qualities:

    • Optional directory name
    • Base name
    • Optional suffix

    Using either compression or encryption adds the following suffixes:

    • Compression adds the .gz suffix
    • Encryption adds the pwd_id.enc suffix

    The pwd_id represents the password used for encrypting the log files. The audit log filter plugin stores passwords in the keyring.

    You can combine compression and encryption, which adds both suffixes to the audit_filter.log name.

    The following table displays the possible ways a file can be named:

• audit.log - No compression or encryption
• audit.log.gz - Compression
• audit.log.pwd_id.enc - Encryption
• audit.log.gz.pwd_id.enc - Compression and encryption

"},{"location":"audit-log-filter-naming.html#encryption-id-format","title":"Encryption ID format","text":"

    The format for pwd_id is the following:

    • A UTC value in YYYYMMDDThhmmss format that represents when the password was created
    • A sequence number that starts at 1 and increases if passwords have the same timestamp value

    The following are examples of pwd_id values:

    20230417T082215-1\n20230301T061400-1\n20230301T061400-2\n

    The following example is a list of the audit log filter files with the pwd_id:

    audit_filter.log.20230417T082215-1.enc\naudit_filter.log.20230301T061400-1.enc\naudit_filter.log.20230301T061400-2.enc\n

    The current password has the largest sequence number.

    "},{"location":"audit-log-filter-naming.html#renaming-operations","title":"Renaming operations","text":"

During initialization, the plugin checks if a file with that name exists. If it does, the plugin renames the file and then writes to a new, empty file.

    During termination, the plugin renames the file.

    "},{"location":"audit-log-filter-new.html","title":"Audit Log Filter format - XML (new style)","text":"

    The filter writes the audit log filter file in XML. The XML file uses UTF-8.

The <AUDIT> element is the root element, and it contains <AUDIT_RECORD> elements. Each <AUDIT_RECORD> element contains specific information about an audited event.

For each new file, the Audit Log Filter plugin writes the XML declaration and the opening root element tag. The plugin writes the closing root element tag when closing the file. While the file is open, the closing element is not present.

    <?xml version=\"1.0\" encoding=\"utf-8\"?>\n<AUDIT>\n    <AUDIT_RECORD>\n        <NAME>Audit</NAME>\n        <RECORD_ID>0_2023-03-29T11:11:43</RECORD_ID>\n        <TIMESTAMP>2023-03-29T11:11:43</TIMESTAMP>\n        <SERVER_ID>1</SERVER_ID>\n    </AUDIT_RECORD>\n    <AUDIT_RECORD>\n        <NAME>Command Start</NAME>\n        <RECORD_ID>1_2023-03-29T11:11:45</RECORD_ID>\n        <TIMESTAMP>2023-03-29T11:11:45</TIMESTAMP>\n        <STATUS>0</STATUS>\n        <CONNECTION_ID>1</CONNECTION_ID>\n        <COMMAND_CLASS>query</COMMAND_CLASS>\n    </AUDIT_RECORD>\n    <AUDIT_RECORD>\n        <NAME>Query</NAME>\n        <RECORD_ID>2_2023-03-29T11:11:45</RECORD_ID>\n        <TIMESTAMP>2023-03-29T11:11:45</TIMESTAMP>\n        <COMMAND_CLASS>create_table</COMMAND_CLASS>\n        <CONNECTION_ID>11</CONNECTION_ID>\n        <HOST>localhost</HOST>\n        <IP></IP>\n        <USER>root[root] @ localhost []</USER>\n        <OS_LOGIN></OS_LOGIN>\n        <SQLTEXT>CREATE TABLE t1 (c1 INT)</SQLTEXT>\n        <STATUS>0</STATUS>\n    </AUDIT_RECORD>\n    <AUDIT_RECORD>\n        <NAME>Query Start</NAME>\n        <RECORD_ID>3_2023-03-29T11:11:45</RECORD_ID>\n        <TIMESTAMP>2023-03-29T11:11:45</TIMESTAMP>\n        <STATUS>0</STATUS>\n        <CONNECTION_ID>11</CONNECTION_ID>\n        <COMMAND_CLASS>create_table</COMMAND_CLASS>\n        <SQLTEXT>CREATE TABLE t1 (c1 INT)</SQLTEXT>\n    </AUDIT_RECORD>\n    <AUDIT_RECORD>\n        <NAME>Query</NAME>\n        <RECORD_ID>4_2023-03-29T11:11:45</RECORD_ID>\n        <TIMESTAMP>2023-03-29T11:11:45</TIMESTAMP>\n        <COMMAND_CLASS>create_table</COMMAND_CLASS>\n        <CONNECTION_ID>11</CONNECTION_ID>\n        <HOST>localhost</HOST>\n        <IP></IP>\n        <USER>root[root] @ localhost []</USER>\n        <OS_LOGIN></OS_LOGIN>\n        <SQLTEXT>CREATE TABLE t1 (c1 INT)</SQLTEXT>\n        <STATUS>0</STATUS>\n    </AUDIT_RECORD>\n    <AUDIT_RECORD>\n        <NAME>Command End</NAME>\n        <RECORD_ID>5_2023-03-29T11:11:45</RECORD_ID>\n        <TIMESTAMP>2023-03-29T11:11:45</TIMESTAMP>\n        <STATUS>0</STATUS>\n        <CONNECTION_ID>1</CONNECTION_ID>\n        <COMMAND_CLASS>query</COMMAND_CLASS>\n    </AUDIT_RECORD>\n</AUDIT>\n

The order of the attributes within an <AUDIT_RECORD> element can vary. Certain attributes are in every element. Other attributes are optional and depend on the type of audit record.

    The attributes in every element are the following:

• <NAME> - The action that generated the audit record
• <RECORD_ID> - Consists of a sequence number and a timestamp value. The sequence number is initialized when the plugin opens the audit log filter file
• <TIMESTAMP> - Displays the date and time when the audit event happened

    The optional attributes are the following:

• <COMMAND_CLASS> - Contains the type of performed action
• <CONNECTION_ID> - Contains the client connection identifier
• <CONNECTION_ATTRIBUTES> - Contains the client connection attributes. Each attribute has a <NAME> and <VALUE> pair
• <CONNECTION_TYPE> - Contains the type of connection security
• <DB> - Contains the database name
• <HOST> - Contains the client\u2019s hostname
• <IP> - Contains the client\u2019s IP address
• <MYSQL_VERSION> - Contains the MySQL server version
• <OS_LOGIN> - Contains the user name used during an external authentication, for example, if the user is authenticated through an LDAP plugin. If the authentication plugin does not set a value or the user is authenticated using MySQL authentication, this value is empty
• <OS_VERSION> - Contains the server\u2019s operating system
• <PRIV_USER> - Contains the user name used by the server when checking privileges. This name may be different than <USER>
• <PROXY_USER> - Contains the proxy user. If a proxy is not used, the value is empty
• <SERVER_ID> - Contains the server ID
• <SQLTEXT> - Contains the text of the SQL statement
• <STARTUP_OPTIONS> - Contains the startup options. These options may be provided by the command line or files
• <STATUS> - Contains the status of a command. A 0 (zero) is a success. A nonzero value is an error
• <STATUS_CODE> - Contains the status of a command, which either succeeds (0) or an error occurred (1)
• <TABLE> - Contains the table name
• <USER> - Contains the user name from the client. This name may be different than <PRIV_USER>
• <VERSION> - Contains the audit log filter format

"},{"location":"audit-log-filter-old.html","title":"Audit Log Filter format - XML (old style)","text":"

The old-style XML format uses the <AUDIT> tag as the root element and adds the closing </AUDIT> tag when the file closes. Each audited event is contained in an <AUDIT_RECORD> element.

The order of the attributes within an <AUDIT_RECORD> element can vary. Certain attributes are in every element. Other attributes are optional and depend on the type of audit record.

    <?xml version=\"1.0\" encoding=\"utf-8\"?>\n<AUDIT>\n  <AUDIT_RECORD\n    NAME=\"Audit\"\n    RECORD_ID=\"0_2023-03-29T11:15:52\"\n    TIMESTAMP=\"2023-03-29T11:15:52\"\n    SERVER_ID=\"1\"/>\n  <AUDIT_RECORD\n    NAME=\"Command Start\"\n    RECORD_ID=\"1_2023-03-29T11:15:53\"\n    TIMESTAMP=\"2023-03-29T11:15:53\"\n    STATUS=\"0\"\n    CONNECTION_ID=\"1\"\n    COMMAND_CLASS=\"query\"/>\n  <AUDIT_RECORD\n    NAME=\"Query\"\n    RECORD_ID=\"2_2023-03-29T11:15:53\"\n    TIMESTAMP=\"2023-03-29T11:15:53\"\n    COMMAND_CLASS=\"create_table\"\n    CONNECTION_ID=\"11\"\n    HOST=\"localhost\"\n    IP=\"\"\n    USER=\"root[root] @ localhost []\"\n    OS_LOGIN=\"\"\n    SQLTEXT=\"CREATE TABLE t1 (c1 INT)\"\n    STATUS=\"0\"/>\n  <AUDIT_RECORD\n    NAME=\"Query Start\"\n    RECORD_ID=\"3_2023-03-29T11:15:53\"\n    TIMESTAMP=\"2023-03-29T11:15:53\"\n    STATUS=\"0\"\n    CONNECTION_ID=\"11\"\n    COMMAND_CLASS=\"create_table\"\n    SQLTEXT=\"CREATE TABLE t1 (c1 INT)\"/>\n  <AUDIT_RECORD\n    NAME=\"Query Status End\"\n    RECORD_ID=\"4_2023-03-29T11:15:53\"\n    TIMESTAMP=\"2023-03-29T11:15:53\"\n    STATUS=\"0\"\n    CONNECTION_ID=\"11\"\n    COMMAND_CLASS=\"create_table\"\n    SQLTEXT=\"CREATE TABLE t1 (c1 INT)\"/>\n  <AUDIT_RECORD\n    NAME=\"Query\"\n    RECORD_ID=\"5_2023-03-29T11:15:53\"\n    TIMESTAMP=\"2023-03-29T11:15:53\"\n    COMMAND_CLASS=\"create_table\"\n    CONNECTION_ID=\"11\"\n    HOST=\"localhost\"\n    IP=\"\"\n    USER=\"root[root] @ localhost []\"\n    OS_LOGIN=\"\"\n    SQLTEXT=\"CREATE TABLE t1 (c1 INT)\"\n    STATUS=\"0\"/>\n  <AUDIT_RECORD\n    NAME=\"Command End\"\n    RECORD_ID=\"6_2023-03-29T11:15:53\"\n    TIMESTAMP=\"2023-03-29T11:15:53\"\n    STATUS=\"0\"\n    CONNECTION_ID=\"1\"\n    COMMAND_CLASS=\"query\"/>\n</AUDIT>\n

    The required attributes are the following:

• NAME - The action that generated the audit record
• RECORD_ID - Consists of a sequence number and a timestamp value. The sequence number is initialized when the plugin opens the audit log filter file
• TIMESTAMP - Displays the date and time when the audit event happened

    The optional attributes are the following:

• COMMAND_CLASS - Type of action performed
• CONNECTION_ID - Client connection identifier
• CONNECTION_TYPE - Connection security type
• DB - Database name
• HOST - Client's hostname
• IP - Client's IP address
• MYSQL_VERSION - Server version
• OS_LOGIN - The user name used during an external authentication, for example, if the user is authenticated through an LDAP plugin. If the authentication plugin does not set a value or the user is authenticated using MySQL authentication, this value is empty
• OS_VERSION - Server's operating system
• PRIV_USER - The user name used by the server when checking privileges. This name may be different than USER
• PROXY_USER - The proxy user. If a proxy is not used, the value is empty
• SERVER_ID - Server identifier
• SQLTEXT - SQL statement text
• STARTUP_OPTIONS - Server startup options, either command line or config files
• STATUS - Command's status; a 0 (zero) is a success, a non-zero value is an error
• STATUS_CODE - A 0 (zero) is a success, a non-zero value is an error
• TABLE - Table name
• USER - Client's user name; this name may be different than PRIV_USER
• VERSION - Format of the audit log filter

"},{"location":"audit-log-filter-overview.html","title":"Audit Log Filter overview","text":"

    The Audit Log Filter plugin allows you to monitor, log, and block a connection or query actively executed on the selected server.

    Enabling the plugin produces a log file that contains a record of server activity. The log file has information on connections and databases accessed by that connection.

    The plugin uses the mysql system database to store filter and user account data. Set the audit_log_filter_database variable at server startup to select a different database.

    The AUDIT_ADMIN privilege is required to enable users to manage the Audit Log Filter plugin.

    "},{"location":"audit-log-filter-overview.html#privileges","title":"Privileges","text":"

The privileges are defined at server startup. The associated Audit Log Filter privileges can be unavailable if the plugin is not enabled.

    "},{"location":"audit-log-filter-overview.html#audit_admin","title":"AUDIT_ADMIN","text":"

    This privilege is defined by the server and enables the user to configure the plugin.
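For example, to grant this privilege to an account (the account name below is illustrative):

mysql> GRANT AUDIT_ADMIN ON *.* TO 'audit_admin'@'localhost';\n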

    "},{"location":"audit-log-filter-overview.html#audit_abort_exempt","title":"AUDIT_ABORT_EXEMPT","text":"

    This privilege allows queries from a user account to always be executed. An abort item does not block them. This ability lets the user account regain access to a system if an audit is misconfigured. The query is logged due to the privilege. User accounts with the SYSTEM_USER privilege have the AUDIT_ABORT_EXEMPT privilege.

    "},{"location":"audit-log-filter-overview.html#audit-log-filter-tables","title":"Audit Log Filter tables","text":"

    The Audit Log Filter plugin uses mysql system database tables in the InnoDB storage engine. These tables store user account data and filter data. When you start the server, change the plugin\u2019s database with the audit_log_filter_database variable.

    The audit_log_filter table stores the definitions of the filters and has the following column definitions:

• NAME - Name of the filter
• FILTER - Definition of the filter linked to the name as a JSON value

    The audit_log_user table stores account data and has the following column definitions:

• USER - The account name of the user
• HOST - The account name of the host
• FILTERNAME - The account filter name

"},{"location":"audit-log-filter-restrictions.html","title":"Audit Log Filter restrictions","text":""},{"location":"audit-log-filter-restrictions.html#general-restrictions","title":"General restrictions","text":"

    The Audit Log Filter has the following general restrictions:

    • Logs only SQL statements. Statements made by NoSQL APIs, such as the Memcached API, are not logged.

    • Logs only the top-level statement. Statements within a stored procedure or a trigger are not logged. Does not log the file contents for statements like LOAD_DATA.

    • If used with a cluster, the plugin must be installed on each server used to execute SQL on the cluster.

    • If used with a cluster, the application or user is responsible for aggregating all the data of each server used in the cluster.

    "},{"location":"audit-log-filter-security.html","title":"Audit Log Filter security","text":"

    The Audit Log Filter plugin generates audit log filter files. The directory that contains these files should be accessible only to the following:

    • Users who must be able to view the log

• The server, which must be able to write to the directory

    The files are not encrypted by default and may contain sensitive information.

    The default name for the file in the data directory is audit_filter.log. If needed, use the audit_log_filter_file system variable at server startup to change the location. Due to the log rotation, multiple audit log files may exist.

    "},{"location":"audit-log-filter-variables.html","title":"Audit log filter functions, options and variables","text":"

    The following sections describe the functions, options, and variables available in the audit log filter plugin.

    "},{"location":"audit-log-filter-variables.html#audit-log-filter-functions","title":"Audit log filter functions","text":"

    The following audit log filter functions are available.

• audit_log_encryption_password_get(keyring_id)
• audit_log_encryption_password_set(new_password)
• audit_log_filter_flush()
• audit_log_read()
• audit_log_read_bookmark()
• audit_log_filter_remove_filter(filter_name)
• audit_log_filter_remove_user(user_name)
• audit_log_rotate()
• audit_log_filter_set_filter(filter_name, definition)
• audit_log_filter_set_user(user_name, filter_name)

"},{"location":"audit-log-filter-variables.html#audit_log_encryption_password_getkeyring_id","title":"audit_log_encryption_password_get(keyring_id)","text":"

    This function returns the encryption password. Any keyring plugin or keyring component can be used, but the plugin or component must be enabled. If the plugin or component is not enabled, an error occurs.

    "},{"location":"audit-log-filter-variables.html#parameters","title":"Parameters","text":"

    keyring_id - If the function does not contain a keyring_id, the function returns the current encryption password. You can also request a specific encryption password with the keyring ID of either the current password or an archived password.

    "},{"location":"audit-log-filter-variables.html#returns","title":"Returns","text":"

This function returns a JSON object containing the password and the iterations count used by the password.

    "},{"location":"audit-log-filter-variables.html#example","title":"Example","text":"
    mysql> SELECT audit_log_encryption_password_get();\n
    Expected output
    +---------------------------------------------+\n| audit_log_encryption_password_get()         |\n+---------------------------------------------+\n| {\"password\":\"passw0rd\",\"iterations\":5689}   |\n+---------------------------------------------+\n
    "},{"location":"audit-log-filter-variables.html#audit_log_encryption_password_setnew_password","title":"audit_log_encryption_password_set(new_password)","text":"

    This function sets the encryption password and stores the new password in the keyring.

    "},{"location":"audit-log-filter-variables.html#parameters_1","title":"Parameters","text":"

new_password - the password as a string. The maximum length is 766 bytes.

    "},{"location":"audit-log-filter-variables.html#returns_1","title":"Returns","text":"

    This function returns a string. An OK indicates a success. ERROR indicates a failure.

    "},{"location":"audit-log-filter-variables.html#example_1","title":"Example","text":"
mysql> SELECT audit_log_encryption_password_set('passw0rd');\n
    Expected output
+------------------------------------------------------+\n| audit_log_encryption_password_set('passw0rd')       |\n+------------------------------------------------------+\n| OK                                                   |\n+------------------------------------------------------+\n
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_flush","title":"audit_log_filter_flush()","text":"

    This function updates the audit log filter tables and makes any changes operational.

    Modifying the audit log filter tables directly with INSERT, UPDATE, or DELETE does not implement the modifications immediately. The tables must be flushed to have those changes take effect.

    This function forces reloading all filters and should only be used if someone has modified the tables directly.

    Important

    Avoid using this function. This function performs an operation that is similar to uninstalling and reinstalling the plugin. Filters are detached from all current sessions. To restart logging, the current sessions must either disconnect and reconnect or do a change-user operation.

    "},{"location":"audit-log-filter-variables.html#parameters_2","title":"Parameters","text":"

    None.

    "},{"location":"audit-log-filter-variables.html#returns_2","title":"Returns","text":"

    This function returns either an OK for success or an error message for failure.

    "},{"location":"audit-log-filter-variables.html#example_2","title":"Example","text":"
    mysql> SELECT audit_log_filter_flush();\n
    Expected output
    +--------------------------+\n| audit_log_filter_flush() |\n+--------------------------+\n| OK                       |\n+--------------------------+\n
    "},{"location":"audit-log-filter-variables.html#audit_log_read","title":"audit_log_read()","text":"

    If the audit log filter format is JSON, this function reads the audit log and returns an array of the audit events as a JSON string. Generates an error if the format is not JSON.

    "},{"location":"audit-log-filter-variables.html#parameters_3","title":"Parameters","text":"

None are required. If a start position is not provided, the read continues from the current position.

Optional: You can specify a starting position for the read with start, a bookmark consisting of a timestamp and an id that together identify an event. You must include both the timestamp and the id, or an error is generated. If the timestamp does not include a time part, the function assumes the time is 00:00.

You can also provide a max_array_length to limit the number of log events read.

Call audit_log_read_bookmark() to return the most recently written event.

    "},{"location":"audit-log-filter-variables.html#returns_3","title":"Returns","text":"

    This function returns a string of a JSON array of the audit events, or a JSON NULL value. Returns NULL and generates an error if the call fails.

    "},{"location":"audit-log-filter-variables.html#example_3","title":"Example","text":"
    mysql> SELECT audit_log_read(audit_log_read_bookmark());\n
    Expected output
    +------------------------------------------------------------------------------+\n| audit_log_read(audit_log_read_bookmark())                                   |\n+------------------------------------------------------------------------------+\n| [{\"timestamp\" : \"2023-06-02 09:43:25\", \"id\": 10,\"class\":\"connection\",]       |\n+------------------------------------------------------------------------------+\n
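A starting position can also be passed explicitly as a JSON string; the timestamp, id, and max_array_length values below are illustrative:

mysql> SELECT audit_log_read('{ \"start\": { \"timestamp\": \"2023-06-02 09:43:25\", \"id\": 10 }, \"max_array_length\": 100 }');\n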
    "},{"location":"audit-log-filter-variables.html#audit_log_read_bookmark","title":"audit_log_read_bookmark()","text":"

    This function provides a bookmark for the most recently written audit log event as a JSON string. Generates an error if the format is not JSON.

If this function is used with audit_log_read(), the audit_log_read() function starts reading at that position.

    "},{"location":"audit-log-filter-variables.html#parameters_4","title":"Parameters","text":"

    None.

    "},{"location":"audit-log-filter-variables.html#returns_4","title":"Returns","text":"

    This function returns a JSON string containing a bookmark for success or NULL and an error for failure.

    "},{"location":"audit-log-filter-variables.html#example_4","title":"Example","text":"
    mysql> SELECT audit_log_read_bookmark();\n
    Expected output
    +----------------------------------------------------+\n| audit_log_read_bookmark()                          |\n+----------------------------------------------------+\n| {\"timestamp\" : \"2023-06-02 09:43:25\", \"id\": 10 }   |\n+----------------------------------------------------+\n
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_remove_filterfilter_name","title":"audit_log_filter_remove_filter(filter_name)","text":"

    This function removes the selected filter from the current set of filters.

If user accounts are assigned the selected filter, the user accounts are no longer filtered. The user accounts are removed from the audit_log_user table. If the user accounts are in a current session, they are detached from the selected filter and no longer logged.

    "},{"location":"audit-log-filter-variables.html#parameters_5","title":"Parameters","text":"

    filter_name - a selected filter name as a string.

    "},{"location":"audit-log-filter-variables.html#returns_5","title":"Returns","text":"

    This function returns either an OK for success or an error message for failure.

    If the filter name does not exist, no error is generated.

    "},{"location":"audit-log-filter-variables.html#example_5","title":"Example","text":"
    mysql> SELECT audit_log_filter_remove_filter('filter-name');\n
    Expected output
    +------------------------------------------------+\n| audit_log_filter_remove_filter('filter-name')  |\n+------------------------------------------------+\n| OK                                             |\n+------------------------------------------------+\n
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_remove_useruser_name","title":"audit_log_filter_remove_user(user_name)","text":"

    This function removes the assignment of a filter from the selected user account.

    If the user account is in a current session, they are not affected. New sessions for this user account use the default account filter or are not logged.

If the user_name is %, the default account filter is removed.

    "},{"location":"audit-log-filter-variables.html#parameters_6","title":"Parameters","text":"

    user_name - a selected user name in either the user_name@host_name format or %.

    "},{"location":"audit-log-filter-variables.html#returns_6","title":"Returns","text":"

    This function returns either an OK for success or an error message for failure.

    If the user_name has no filter assigned, no error is generated.

    "},{"location":"audit-log-filter-variables.html#example_6","title":"Example","text":"
    mysql> SELECT audit_log_filter_remove_user('user-name@localhost');\n
    Expected output
    +------------------------------------------------------+\n| audit_log_filter_remove_user('user-name@localhost')  |\n+------------------------------------------------------+\n| OK                                                   |\n+------------------------------------------------------+\n
    "},{"location":"audit-log-filter-variables.html#audit_log_rotate","title":"audit_log_rotate()","text":""},{"location":"audit-log-filter-variables.html#parameters_7","title":"Parameters","text":"

    None.

    "},{"location":"audit-log-filter-variables.html#returns_7","title":"Returns","text":"

This function returns the renamed file name.

    "},{"location":"audit-log-filter-variables.html#example_7","title":"Example","text":"
    mysql> SELECT audit_log_rotate();\n
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_set_filterfilter_name-definition","title":"audit_log_filter_set_filter(filter_name, definition)","text":"

    This function, when provided with a filter name and definition, adds the filter.

The new filter has a new filter ID. An error is generated if the filter name already exists.

    "},{"location":"audit-log-filter-variables.html#parameters_8","title":"Parameters","text":"
    • filter_name - a selected filter name as a string.

• definition - the filter definition as a JSON value.

    "},{"location":"audit-log-filter-variables.html#returns_8","title":"Returns","text":"

    This function returns either an OK for success or an error message for failure.

    "},{"location":"audit-log-filter-variables.html#example_8","title":"Example","text":"
mysql> SET @filter = '{ \"filter_name\": { \"log\": true }}';\nmysql> SELECT audit_log_filter_set_filter('filter-name', @filter);\n
    Expected output
    +-------------------------------------------------------------+\n| audit_log_filter_set_filter('filter-name', @filter)  |\n+-------------------------------------------------------------+\n| OK                                                          |\n+-------------------------------------------------------------+\n
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_set_useruser_name-filter_name","title":"audit_log_filter_set_user(user_name, filter_name)","text":"

    This function assigns the filter to the selected user account.

    A user account can only have one filter. If the user account already has a filter, this function replaces the current filter. If the user account is in a current session, nothing happens. When the user account connects again the new filter is used.

    The user name, %, is the default account. The filter assigned to % is used by any user account without a defined filter.

    "},{"location":"audit-log-filter-variables.html#parameters_9","title":"Parameters","text":"
    • user_name - a selected user name in either the user_name@host_name format or %.

    • filter_name - a selected filter name as a string.

    "},{"location":"audit-log-filter-variables.html#returns_9","title":"Returns","text":"

    This function returns either an OK for success or an error message for failure.

    "},{"location":"audit-log-filter-variables.html#example_9","title":"Example","text":"
    mysql> SELECT audit_log_filter_set_user('user-name@localhost', 'filter-name');\n
    Expected output
    +-------------------------------------------------------------------+\n| audit_log_filter_set_user('user-name@localhost', 'filter-name')  |\n+-------------------------------------------------------------------+\n| OK                                                                |\n+-------------------------------------------------------------------+\n
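For example, to assign a default filter to any account without its own filter, use % as the user name; the filter name below is illustrative:

mysql> SELECT audit_log_filter_set_user('%', 'log_all');\n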
    "},{"location":"audit-log-filter-variables.html#audit-log-filter-options-and-variables","title":"Audit log filter options and variables","text":"Name audit-log-filter audit_log_filter_buffer_size audit_log_filter_compression audit_log_filter_database audit_log_filter_disable audit_log_filter_encryption audit_log_filter_file audit_log_filter_filter_id audit_log_filter_format audit_log_filter_format_unix_timestamp audit_log_filter_handler audit_log_filter_key_derivation_iterations_count_mean audit_log_filter_max_size audit_log_filter_password_history_keep_days audit_log_filter_prune_seconds audit_log_filter_read_buffer_size audit_log_filter_rotate_on_size audit_log_filter_strategy audit_log_filter_syslog_tag audit_log_filter_syslog_priority"},{"location":"audit-log-filter-variables.html#audit-log-filter","title":"audit-log-filter","text":"Option Description Command-line \u2013audit-log-filter[=value] Dynamic No Scope Data type Enumeration Default ON

    This option determines how, at startup, the server loads the audit_log_filter plugin. The plugin must be registered.

    The valid values are the following:

    • ON
    • OFF
    • FORCE
    • FORCE_PLUS_PERMANENT
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_buffer_size","title":"audit_log_filter_buffer_size","text":"Option name Description Command-line \u2013audit-log-filter-buffer-size Dynamic No Scope Global Data type Integer Default 1048576 Minimum value 4096 Maximum value 18446744073709547520 Units byes Block size 4096

This variable defines the buffer size, in multiples of 4096, when logging is asynchronous. Event contents are stored in the buffer until they are written.

    The plugin initializes a single buffer and removes the buffer when the plugin terminates.

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_compression","title":"audit_log_filter_compression","text":"Option name Description Command-line \u2013audit-log-filter-compression Dynamic Yes Scope Global Data type Enumeration Default NONE Valid values NONE or GZIP

This variable defines the compression type for the audit log filter file. The value can be either NONE (the default, no compression) or GZIP.

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_database","title":"audit_log_filter_database","text":"Option name Description Command-line \u2013audit-log-filter-database Dynamic No Scope Global Data type String Default mysql

This variable defines the database used by the audit_log_filter plugin. This read-only variable names the database that stores the necessary tables. Set this option at system startup. The database name cannot exceed 64 characters or be NULL.

An invalid database name prevents the use of the audit log filter plugin.
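The following sketch selects a different database at server startup; the database name is illustrative:

[mysqld]\naudit_log_filter_database=audit_db\n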

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_disable","title":"audit_log_filter_disable","text":"Option name Description Command-line \u2013audit-log-filter-disable Dynamic Yes Scope Global Data type Boolean Default OFF

    This variable disables plugin logging for all connections and sessions.

    This variable requires the user account to have the SYSTEM_VARIABLES_ADMIN and AUDIT_ADMIN privileges.
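
    For example, to pause all audit logging at runtime (a sketch, assuming the account holds both privileges):

    mysql> SET GLOBAL audit_log_filter_disable = ON;\n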

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_encryption","title":"audit_log_filter_encryption","text":"Option name Description Command-line \u2013audit-log-filter-encryption Dynamic No Scope Global Data type Enumeration Default NONE Valid values NONE or AES

    This variable defines the encryption type for the audit log filter file. The values can be either of the following:

    • NONE - the default value, no encryption
    • AES
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_file","title":"audit_log_filter_file","text":"Option name Description Command-line \u2013audit-log-filter-file Dynamic No Scope Global Data type String Default audit_filter.log

    This variable defines the name and suffix of the audit log filter file. The plugin writes events to this file.

    The file name and suffix can be either of the following:

    • a relative path name - the plugin looks for this file in the data directory
    • a full path name - the plugin uses the given value

    If you use a full path name, ensure that the directory is accessible only to the server and to users who need to view the log.

    For more information, see Naming conventions.
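
    For example, a my.cnf sketch that uses a full path (the directory shown is illustrative):

    [mysqld]\naudit_log_filter_file = /var/log/mysql/audit_filter.log\n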

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_filter_id","title":"audit_log_filter_filter_id","text":"Option name Description Command-line \u2013audit-log-filter-file-id Dynamic No Scope Session Data type Integer Default 0 Minimum value 0 Maximum value 4292967295

    This variable defines the internal ID of the audit log filter in the current session.

    The default value is 0 (zero) - the session has no assigned filter.
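
    For example, to check which filter, if any, is assigned to the current session:

    mysql> SELECT @@audit_log_filter_filter_id;\n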

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_format","title":"audit_log_filter_format","text":"Option name Description Command-line \u2013audit-log-filter-format Dynamic No Scope Global Data type Enumeration Default NEW Available values OLD, NEW, JSON

    This variable defines the audit log filter file format.

    The available values are the following:

    • OLD (old-style XML)
    • NEW (new-style XML)
    • JSON
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_format_unix_timestamp","title":"audit_log_filter_format_unix_timestamp","text":"Option name Description Command-line \u2013audit-log-filter-format-unix-timestamp Dynamic Yes Scope Global Data type Boolean Default OFF

    This option is only supported for JSON-format files.

    Enabling this option adds a time field to JSON-format files. The integer represents the UNIX timestamp value and indicates the date and time when the audit event was generated. Changing the value causes a file rotation because all records in a file must either all include or all omit the time field. This option requires the AUDIT_ADMIN and SYSTEM_VARIABLES_ADMIN privileges.

    This option does nothing when used with other format types.

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_handler","title":"audit_log_filter_handler","text":"Option name Description Command-line \u2013audit-log-filter-handler Dynamic No Scope Global Data type String Default FILE

    Defines where the plugin writes the audit log filter file. The following values are available:

    • FILE - the plugin writes the log to the location specified in audit_log_filter_file
    • SYSLOG - the plugin writes the log to the syslog
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_key_derivation_iterations_count_mean","title":"audit_log_filter_key_derivation_iterations_count_mean","text":"Option name Description Command-line \u2013audit-log-filter-key-derivation-iterations-count-mean Dynamic Yes Scope Global Data type Integer Default 60000 Minimum value 1000 Maximum value 1000000

    Defines the mean number of iterations used by the password-based derivation routine when calculating the encryption key and iv values. The actual iteration count is a random number that deviates no more than 10% from this value.

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_max_size","title":"audit_log_filter_max_size","text":"Option name Description Command-line \u2013audit-log-filter-max-size Dynamic Yes Scope Global Data type Integer Default 1GB Minimum value 0 Maximum value 18446744073709551615 Unit bytes Block size 4096

    Defines pruning based on the combined size of the log files.

    The default value is 1GB.

    A value of 0 (zero) disables pruning based on size.

    A value greater than 0 (zero) enables pruning based on size and defines the combined size limit. When the files exceed this limit, they can be pruned.

    The value must be a multiple of the block size (4096); a value is truncated down to the nearest multiple. If the value is less than 4096, the value is treated as 0 (zero).

    If the values for audit_log_filter_rotate_on_size and audit_log_filter_max_size are both greater than 0, we recommend that the audit_log_filter_max_size value be at least seven times the audit_log_filter_rotate_on_size value. A my.cnf sketch combining these settings follows the list below.

    Pruning requires the following options:

    • audit_log_filter_max_size
    • audit_log_filter_rotate_on_size
    • audit_log_filter_prune_seconds
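
    For example, a my.cnf sketch that rotates the log at 100 MB and caps the combined size at seven times that value (both values are illustrative):

    [mysqld]\naudit_log_filter_rotate_on_size = 104857600\naudit_log_filter_max_size = 734003200\n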
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_password_history_keep_days","title":"audit_log_filter_password_history_keep_days","text":"Option name Description Command-line \u2013audit-log-filter-password-history-keep-days Dynamic Yes Scope Global Data type Integer Default 0

    Defines, in days, how long passwords are kept before they may be removed.

    Encrypted log files have their passwords stored in the keyring. The plugin also stores a password history. A password that is older than this value does not expire while it may still be needed to read rotated audit logs. Creating a new password also archives the previous password.

    The default value is 0 (zero). This value disables the expiration of passwords. Passwords are retained forever.

    If the plugin starts and encryption is enabled, the plugin checks for an audit log filter encryption password. If a password is not found, the plugin generates a random password.

    Call audit_log_filter_encryption_set() to set a specific password.

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_prune_seconds","title":"audit_log_filter_prune_seconds","text":"Option name Description Command-line \u2013audit-log-filter-prune-seconds Dynamic Yes Scope Global Data type Integer Default 0 Minimum value 0 Maximum value 1844674073709551615 Unit seconds

    Defines when the audit log filter file can be pruned, based on the age of the file. The value is measured in seconds.

    A value of 0 (zero) is the default and disables pruning. The maximum value is 18446744073709551615.

    A value greater than 0 enables age-based pruning. An audit log filter file can be pruned once its age exceeds this value.

    To enable log pruning, you must set one of the following (a runtime example follows the list):

    • Enable log rotation by setting audit_log_filter_rotate_on_size
    • Set a value greater than 0 (zero) for either audit_log_filter_max_size or audit_log_filter_prune_seconds
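
    For example, a runtime sketch that rotates the log at 1 GB and prunes files older than one week (604800 seconds; both values are illustrative):

    mysql> SET GLOBAL audit_log_filter_rotate_on_size = 1073741824;\nmysql> SET GLOBAL audit_log_filter_prune_seconds = 604800;\n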
    "},{"location":"audit-log-filter-variables.html#audit_log_filter_read_buffer_size","title":"audit_log_filter_read_buffer_size","text":"Option name Description Command-line \u2013audit-log-filter-read-buffer-size Dynamic Yes Scope Global Data type Integer Unit Bytes Default 32768

    This option is only supported for JSON-format files.

    Defines the size of the buffer used for reading from the audit log filter file. The audit_log_filter_read() function reads only from this buffer.

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_rotate_on_size","title":"audit_log_filter_rotate_on_size","text":"Option name Description Command-line \u2013audit-log-filter-rotate-on-size Dynamic Yes Scope Global Data type Integer Default 1GB

    Enables automatic log file rotation based on size. The default value is 1GB. If the value is greater than 0, then when the log file size exceeds this value, the plugin renames the current file and opens a new log file using the original name.

    If you set the value to less than 4096, the plugin does not automatically rotate the log files. You can rotate the log files manually using audit_log_rotate(). If the value is not a multiple of 4096, the plugin truncates the value to the nearest multiple.
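
    For example, assuming the audit_log_rotate() function mentioned above is invoked like the other administrative functions in this document, a manual rotation sketch:

    mysql> SELECT audit_log_rotate();\n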

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_strategy","title":"audit_log_filter_strategy","text":"Option name Description Command-line \u2013audit-log-filter-strategy Dynamic No Scope Global Data type Enumeration Default ASYNCHRONOUS

    Defines the Audit Log filter plugin\u2019s logging method. The valid values are the following:

    Values Description ASYNCHRONOUS Waits until there is outer buffer space PERFORMANCE If the outer buffer does not have enough space, drops requests SEMISYNCHRONOUS Operating system permits caching SYNCHRONOUS Each request calls sync()"},{"location":"audit-log-filter-variables.html#audit_log_filter_syslog_tag","title":"audit_log_filter_syslog_tag","text":"Option Description Command-line \u2013audit-log-filter-syslog-tag= Dynamic No Scope Global Data type String Default audit-filter"},{"location":"audit-log-filter-variables.html#audit_log_filter_syslog_facility","title":"audit_log_filter_syslog_facility","text":"Option name Description Command-line \u2013audit-log-filter-syslog-facility Dynamic No Scope Global Data type String Default LOG_USER

    Specifies the syslog facility value. The option has the same meaning as the appropriate parameter described in the syslog(3) manual.

    "},{"location":"audit-log-filter-variables.html#audit_log_filter_syslog_priority","title":"audit_log_filter_syslog_priority","text":"Option name Description Command-line \u2013audit-log-filter-syslog-priority Dynamic No Scope Global Data type String Default LOG_INFO

    Defines the priority value for the syslog. The option has the same meaning as the appropriate parameter described in the syslog(3) manual.

    "},{"location":"audit-log-filter-variables.html#audit-log-filter-status-variables","title":"Audit log filter status variables","text":"

    The audit log filter plugin exposes status variables. These variables provide information about plugin operations.

    Name Description audit_log_filter_current_size The current size of the audit log filter file. If the log is rotated, the size is reset to 0. audit_log_filter_direct_writes Identifies when the log_strategy_type = ASYNCHRONOUS and messages bypass the write buffer and are written directly to the log file audit_log_filter_max_drop_size In the performance logging mode, the size of the largest dropped event. audit_log_filter_events The number of audit log filter events audit_log_filter_events_filtered The number of filtered audit log filter plugin events audit_log_filter_events_lost If the event is larger than the available audit log filter buffer space, the event is lost audit_log_filter_events_written The number of audit log filter events written audit_log_filter_total_size The total size of the events written to all audit log filter files. The number increases even when a log is rotated audit_log_filter_write_waits In the asynchronous logging mode, the number of times an event waited for space in the audit log filter buffer"},{"location":"audit-log-plugin.html","title":"Audit log plugin","text":"

    The Percona Audit Log Plugin provides monitoring and logging of connection and query activity performed on a specific server. Information about the activity is stored in a log file. This implementation is an alternative to the MySQL Enterprise Audit Log Plugin.

    "},{"location":"audit-log-plugin.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.

    • Percona Server for MySQL 8.0.15-6: The Audit_log_buffer_size_overflow variable was implemented.

    "},{"location":"audit-log-plugin.html#install-the-plugin","title":"Install the plugin","text":"

    The Audit Log plugin is installed by default but is not enabled when you install Percona Server for MySQL. To check whether the plugin is enabled, run the following commands:

    mysql> SELECT * FROM information_schema.PLUGINS WHERE PLUGIN_NAME LIKE '%audit%';\n
    Expected output
    Empty set (0.00 sec)\n
    mysql> SHOW variables LIKE 'audit%';\n
    Expected output
    Empty set (0.01 sec)\n
    mysql> SHOW variables LIKE 'plugin%';\n
    Expected output
    +---------------+------------------------+\n| Variable_name | Value                  |\n+---------------+------------------------+\n| plugin_dir    | /usr/lib/mysql/plugin/ |\n+---------------+------------------------+\n1 row in set (0.00 sec)\n

    Note

    The location of the MySQL plugin directory depends on the operating system and may be different on your system.

    The following command enables the plugin:

    mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';\n

    Run the following command to verify that the plugin was installed correctly:

    mysql> SELECT * FROM information_schema.PLUGINS WHERE PLUGIN_NAME LIKE '%audit%'\\G\n
    Expected output
    *************************** 1. row ***************************\n        PLUGIN_NAME: audit_log\n        PLUGIN_VERSION: 0.2\n        PLUGIN_STATUS: ACTIVE\n        PLUGIN_TYPE: AUDIT\nPLUGIN_TYPE_VERSION: 4.1\n        PLUGIN_LIBRARY: audit_log.so\nPLUGIN_LIBRARY_VERSION: 1.7\n        PLUGIN_AUTHOR: Percona LLC and/or its affiliates.\n    PLUGIN_DESCRIPTION: Audit log\n        PLUGIN_LICENSE: GPL\n        LOAD_OPTION: ON\n1 row in set (0.00 sec)\n

    You can review the audit log variables with the following command:

    mysql> SHOW variables LIKE 'audit%';\n
    Expected output
    +-----------------------------+---------------+\n| Variable_name               | Value         |\n+-----------------------------+---------------+\n| audit_log_buffer_size       | 1048576       |\n| audit_log_exclude_accounts  |               |\n| audit_log_exclude_commands  |               |\n| audit_log_exclude_databases |               |\n| audit_log_file              | audit.log     |\n| audit_log_flush             | OFF           |\n| audit_log_format            | OLD           |\n| audit_log_handler           | FILE          |\n| audit_log_include_accounts  |               |\n| audit_log_include_commands  |               |\n| audit_log_include_databases |               |\n| audit_log_policy            | ALL           |\n| audit_log_rotate_on_size    | 0             |\n| audit_log_rotations         | 0             |\n| audit_log_strategy          | ASYNCHRONOUS  |\n| audit_log_syslog_facility   | LOG_USER      |\n| audit_log_syslog_ident      | percona-audit |\n| audit_log_syslog_priority   | LOG_INFO      |\n+-----------------------------+---------------+\n18 rows in set (0.00 sec)\n
    "},{"location":"audit-log-plugin.html#log-format","title":"Log format","text":"

    The plugin supports the following log formats: OLD, NEW, JSON, and CSV. The OLD format and the NEW format are based on XML. The OLD format defines each log record with XML attributes. The NEW format defines each log record with XML tags. The information logged is the same for all four formats. The audit_log_format variable controls the log format choice.

    "},{"location":"audit-log-plugin.html#format-examples","title":"Format examples","text":"

    The following formats are available:

    The following examples show, in order, the old log format, the new log format, the JSON format, and the CSV format.
    <AUDIT_RECORD\nNAME=\"Query\"\nRECORD=\"3_2021-06-30T11:56:53\"\nTIMESTAMP=\"2021-06-30T11:57:14 UTC\"\nCOMMAND_CLASS=\"select\"\nCONNECTION_ID=\"3\"\nSTATUS=\"0\"\nSQLTEXT=\"select * from information_schema.PLUGINS where PLUGIN_NAME like '%audit%'\"\nUSER=\"root[root] @ localhost []\"\nHOST=\"localhost\"\nOS_USER=\"\"\nIP=\"\"\nDB=\"\"\n/>\n
    <AUDIT_RECORD>\n<NAME>Query</NAME>\n<RECORD>16684_2021-06-30T16:07:41</RECORD>\n<TIMESTAMP>2021-06-30T16:08:06 UTC</TIMESTAMP>\n<COMMAND_CLASS>select</COMMAND_CLASS>\n<CONNECTION_ID>2</CONNECTION_ID>\n<STATUS>0</STATUS>\n<SQLTEXT>select id, holder from one</SQLTEXT>\n<USER>root[root] @ localhost []</USER>\n<HOST>localhost</HOST>\n<OS_USER></OS_USER>\n<IP></IP>\n<DB></DB>\n
    {\"audit_record\":{\"name\":\"Query\",\"record\":\"13149_2021-06-30T15:03:11\",\"timestamp\":\"2021-06-30T15:07:58 UTC\",\"command_class\":\"show_databases\",\"connection_id\":\"2\",\"status\":0,\"sqltext\":\"show databases\",\"user\":\"root[root] @ localhost []\",\"host\":\"localhost\",\"os_user\":\"\",\"ip\":\"\",\"db\":\"\"}}\n
    \"Query\",\"22567_2021-06-30T16:10:09\",\"2021-06-30T16:19:00 UTC\",\"select\",\"2\",0,\"select count(*) from one\",\"root[root] @ localhost []\",\"localhost\",\"\",\"\",\"\"\n
    "},{"location":"audit-log-plugin.html#audit-log-events","title":"Audit log events","text":"

    The Audit Log plugin generates records for the following events.

    The examples below cover, in order, the Audit event, the Connect or Disconnect event, and the Query event.

    The Audit event indicates that audit logging started or finished. The NAME field is Audit when logging starts and NoAudit when logging finishes. The record also includes the server version and command-line arguments.

    ??? example \"Audit event\"\n\n    ```text\n    <AUDIT_RECORD\n    NAME=\"Audit\"\n    RECORD=\"1_2021-06-30T11:56:53\"\n    TIMESTAMP=\"2021-06-30T11:56:53 UTC\"\n    MYSQL_VERSION=\"5.7.34-37\"\n    STARTUP_OPTIONS=\"--daemonize --pid-file=/var/run/mysqld/mysqld.pid\"\n    OS_VERSION=\"x86_64-debian-linux-gnu\"\n    />\n    ```\n

    A Connect record has the NAME field set to Connect when a user logs in or a login fails, or to Quit when the connection is closed.

    The additional fields for this event are the following:

    • CONNECTION_ID
    • STATUS
    • USER
    • PRIV_USER
    • OS_LOGIN
    • PROXY_USER
    • HOST
    • IP

    The value for STATUS is 0 for successful logins and non-zero for failed logins.

    Disconnect event
    <AUDIT_RECORD\nNAME=\"Quit\"\nRECORD=\"5_2021-06-29T19:33:03\"\nTIMESTAMP=\"2021-06-29T19:34:38Z\"\nCONNECTION_ID=\"14\"\nSTATUS=\"0\"\nUSER=\"root\"\nPRIV_USER=\"root\"\nOS_LOGIN=\"\"\nPROXY_USER=\"\"\nHOST=\"localhost\"\nIP=\"\"\nDB=\"\"\n/>\n

    Additional fields for this event are the following: COMMAND_CLASS (values come from the com_status_vars array in the sql/mysqld.cc file in a MySQL source distribution; examples are select, alter_table, create_table, and so on), CONNECTION_ID, STATUS (indicates an error when the value is non-zero), SQLTEXT (the text of the SQL statement), USER, HOST, OS_USER, and IP.

    The possible values for the NAME field for this event are Query, Prepare, Execute, Change user, and so on.

    Query event
    <AUDIT_RECORD\nNAME=\"Query\"\nRECORD=\"4_2021-06-29T19:33:03\"\nTIMESTAMP=\"2021-06-29T19:33:34Z\"\nCOMMAND_CLASS=\"show_variables\"\nCONNECTION_ID=\"14\"\nSTATUS=\"0\"\nSQLTEXT=\"show variables like 'audit%'\"\nUSER=\"root[root] @ localhost []\"\nHOST=\"localhost\"\nOS_USER=\"\"\nIP=\"\"\nDB=\"\"\n/>\n
    "},{"location":"audit-log-plugin.html#stream-the-audit-log-to-syslog","title":"Stream the audit log to syslog","text":"

    To stream the audit log to syslog, set the audit_log_handler variable to SYSLOG. To control the syslog handler, use the following variables: audit_log_syslog_ident, audit_log_syslog_facility, and audit_log_syslog_priority. These variables have the same meaning as the corresponding parameters described in the syslog(3) manual.

    Note

    The actions for the variables audit_log_strategy, audit_log_buffer_size, audit_log_rotate_on_size, and audit_log_rotations are captured only with the FILE handler.
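
    For example, a my.cnf sketch that streams the audit log to syslog (the three handler variables are shown with their documented defaults):

    [mysqld]\naudit_log_handler = SYSLOG\naudit_log_syslog_ident = percona-audit\naudit_log_syslog_facility = LOG_USER\naudit_log_syslog_priority = LOG_INFO\n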

    "},{"location":"audit-log-plugin.html#filter-methods","title":"Filter methods","text":"

    You can filter the results by the following methods.

    The following methods are described in order: filter by user, filter by SQL command type, and filter by database.

    The filtering by user feature adds two new global variables: audit_log_include_accounts and audit_log_exclude_accounts to specify which user accounts should be included or excluded from audit logging.

    Only one of these variables can contain a list of users to be either included or excluded; the other must be NULL. If one of the variables is set to a non-NULL value (contains a list of users), an attempt to set the other one fails. An empty string means an empty list.

    Changes of audit_log_include_accounts and audit_log_exclude_accounts do not apply to existing server connections.

    The filtering by SQL command type adds two new global variables: audit_log_include_commands and audit_log_exclude_commands to specify which command types should be included or excluded from audit logging.

    Only one of these variables can contain a list of command types to be either included or excluded; the other must be NULL. If one of the variables is set to a non-NULL value (contains a list of command types), an attempt to set the other one fails. An empty string means an empty list.

    If both the audit_log_exclude_commands variable and the audit_log_include_commands variable are NULL, all commands are logged.

    The filtering by an SQL database is implemented by two global variables: audit_log_include_databases and audit_log_exclude_databases to specify which databases should be included or excluded from audit logging.

    Only one of these variables can contain a list of databases to be either included or excluded; the other must be NULL. If one of the variables is set to a non-NULL value (contains a list of databases), an attempt to set the other one fails. An empty string means an empty list.

    If a query accesses any database listed in audit_log_include_databases, the query is logged. If a query accesses only databases listed in audit_log_exclude_databases, the query is not logged. CREATE TABLE statements are logged unconditionally.

    Changes of audit_log_include_databases and audit_log_exclude_databases do not apply to existing server connections.

    "},{"location":"audit-log-plugin.html#filter-examples","title":"Filter examples","text":"

    The following are examples of the different filters.

    The examples below cover, in order, filtering by user, by SQL command type, and by database.

    The following example adds users who will be monitored:

    mysql> SET GLOBAL audit_log_include_accounts = 'user1@localhost,root@localhost';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n

    If you try to add users to both the include list and the exclude list, the server returns the following error:

    mysql> SET GLOBAL audit_log_exclude_accounts = 'user1@localhost,root@localhost';\n
    Expected output
    ERROR 1231 (42000): Variable 'audit_log_exclude_accounts' can't be set to the value of 'user1@localhost,root@localhost'\n

    To switch from filtering by included user list to the excluded user list or back, first set the currently active filtering variable to NULL:

    mysql> SET GLOBAL audit_log_include_accounts = NULL;\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> SET GLOBAL audit_log_exclude_accounts = 'user1@localhost,root@localhost';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> SET GLOBAL audit_log_exclude_accounts = \"'user'@'host'\";\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> SET GLOBAL audit_log_exclude_accounts = '''user''@''host''';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> SET GLOBAL audit_log_exclude_accounts = '\\'user\\'@\\'host\\'';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n

    To see which user accounts have been added to the exclude list, run the following command:

    mysql> SELECT @@audit_log_exclude_accounts;\n
    Expected output
    +------------------------------+\n| @@audit_log_exclude_accounts |\n+------------------------------+\n| 'user'@'host'                |\n+------------------------------+\n1 row in set (0.00 sec)\n

    Account names from the mysql.user table are logged in the audit log. For example, when you create a user:

    mysql> CREATE USER 'user1'@'%' IDENTIFIED BY '111';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n

    When user1 connects from localhost, the user is listed:

    <AUDIT_RECORD\nNAME=\"Connect\"\nRECORD=\"2_2021-06-30T11:56:53\"\nTIMESTAMP=\"2021-06-30T11:56:53 UTC\"\nCONNECTION_ID=\"6\"\nSTATUS=\"0\"\nUSER=\"user1\" ;; this is a 'user' part of account in 8.0\nPRIV_USER=\"user1\"\nOS_LOGIN=\"\"\nPROXY_USER=\"\"\nHOST=\"localhost\" ;; this is a 'host' part of account in 8.0\nIP=\"\"\nDB=\"\"\n/>\n

    To exclude user1 from logging in Percona Server for MySQL 8.0, set:

    SET GLOBAL audit_log_exclude_accounts = 'user1@%';\n

    The value can be NULL or a comma-separated list of accounts in the form user@host or 'user'@'host' (if the user or host contains a comma).

    The available command types can be listed by running:

    mysql> SELECT name FROM performance_schema.setup_instruments WHERE name LIKE \"statement/sql/%\" ORDER BY name;\n
    Expected output
    +------------------------------------------+\n| name                                     |\n+------------------------------------------+\n| statement/sql/alter_db                   |\n| statement/sql/alter_db_upgrade           |\n| statement/sql/alter_event                |\n| statement/sql/alter_function             |\n| statement/sql/alter_procedure            |\n| statement/sql/alter_server               |\n| statement/sql/alter_table                |\n| statement/sql/alter_tablespace           |\n| statement/sql/alter_user                 |\n| statement/sql/analyze                    |\n| statement/sql/assign_to_keycache         |\n| statement/sql/begin                      |\n| statement/sql/binlog                     |\n| statement/sql/call_procedure             |\n| statement/sql/change_db                  |\n| statement/sql/change_master              |\n...\n| statement/sql/xa_rollback                |\n| statement/sql/xa_start                   |\n+------------------------------------------+\n145 rows in set (0.00 sec)\n

    You can add commands to the include filter by running:

    mysql> SET GLOBAL audit_log_include_commands= 'set_option,create_db';\n

    Create a database with the following command:

    mysql> CREATE DATABASE sample;\n
    Expected output
    <AUDIT_RECORD>\n<NAME>Query</NAME>\n<RECORD>24320_2021-06-30T17:44:46</RECORD>\n<TIMESTAMP>2021-06-30T17:45:16 UTC</TIMESTAMP>\n<COMMAND_CLASS>create_db</COMMAND_CLASS>\n<CONNECTION_ID>2</CONNECTION_ID>\n<STATUS>0</STATUS>\n<SQLTEXT>CREATE DATABASE sample</SQLTEXT>\n<USER>root[root] @ localhost []</USER>\n<HOST>localhost</HOST>\n<OS_USER></OS_USER>\n<IP></IP>\n<DB></DB>\n</AUDIT_RECORD>\n

    To switch command type filtering from the include list to the exclude list or back, first reset the currently active list to NULL:

    mysql> SET GLOBAL audit_log_include_commands = NULL;\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> SET GLOBAL audit_log_exclude_commands= 'set_option,create_db';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n

    A stored procedure has the call_procedure command type. All the statements executed within the procedure have the same type call_procedure as well.

    To add databases to be monitored, run:

    mysql> SET GLOBAL audit_log_include_databases = 'test,mysql,db1';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> SET GLOBAL audit_log_include_databases = 'db1,db3';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n

    If you try to add databases to both the include and exclude lists, the server returns the following error:

    mysql> SET GLOBAL audit_log_exclude_databases = 'test,mysql,db1';\n
    Error message
    ERROR 1231 (42000): Variable 'audit_log_exclude_databases' can't be set to the value of 'test,mysql,db1'\n

    To switch from filtering by included database list to the excluded one or back, first set the currently active filtering variable to NULL:

    mysql> SET GLOBAL audit_log_include_databases = NULL;\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> SET GLOBAL audit_log_exclude_databases = 'test,mysql,db1';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    "},{"location":"audit-log-plugin.html#system-variables","title":"System variables","text":""},{"location":"audit-log-plugin.html#audit_log_strategy","title":"audit_log_strategy","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type String Default value ASYNCHRONOUS Allowed values ASYNCHRONOUS, PERFORMANCE, SEMISYNCHRONOUS, SYNCHRONOUS

    This variable is used to specify the audit log strategy. The possible values are:

    • ASYNCHRONOUS - (default) log using memory buffer, do not drop messages if buffer is full

    • PERFORMANCE - log using memory buffer, drop messages if buffer is full

    • SEMISYNCHRONOUS - log directly to file, do not flush and sync every event

    • SYNCHRONOUS - log directly to file, flush and sync every event

    This variable has an effect only when audit_log_handler is set to FILE.

    "},{"location":"audit-log-plugin.html#audit_log_file","title":"audit_log_file","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type String Default value audit.log

    This variable is used to specify the file that stores the audit log. The value can contain a path relative to the datadir or an absolute path.

    "},{"location":"audit-log-plugin.html#audit_log_flush","title":"audit_log_flush","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type String Default value OFF

    When this variable is set to ON, the log file is closed and reopened. This can be used for manual log rotation.
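
    For example, a manual rotation sketch (the renamed file name is illustrative; run the mv command in the directory that holds the log):

    $ mv audit.log audit.log.1\nmysql> SET GLOBAL audit_log_flush = ON;\n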

    "},{"location":"audit-log-plugin.html#audit_log_buffer_size","title":"audit_log_buffer_size","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type Numeric Default value 1 Mb

    This variable specifies the size of the memory buffer used for logging when the audit_log_strategy variable is set to ASYNCHRONOUS or PERFORMANCE. This variable has an effect only when audit_log_handler is set to FILE.

    "},{"location":"audit-log-plugin.html#audit_log_exclude_accounts","title":"audit_log_exclude_accounts","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type String

    This variable is used to specify the list of users for which Filtering by user is applied. The value can be NULL or a comma-separated list of accounts in the form user@host or 'user'@'host' (if the user or host contains a comma). If this variable is set, then audit_log_include_accounts must be unset, and vice versa.

    "},{"location":"audit-log-plugin.html#audit_log_exclude_commands","title":"audit_log_exclude_commands","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type String

    This variable is used to specify the list of commands for which Filtering by SQL command type is applied. The value can be NULL or a comma-separated list of commands. If this variable is set, then audit_log_include_commands must be unset, and vice versa.

    "},{"location":"audit-log-plugin.html#audit_log_exclude_databases","title":"audit_log_exclude_databases","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type String

    Use this variable to specify the databases to be filtered. The value can be NULL or a comma-separated list of databases. If you set this variable, unset audit_log_include_databases, and vice versa.

    "},{"location":"audit-log-plugin.html#audit_log_format","title":"audit_log_format","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type String Default value OLD Allowed values OLD, NEW, CSV, JSON

    This variable is used to specify the audit log format. The audit log plugin supports four log formats: OLD, NEW, JSON, and CSV. OLD and NEW formats are based on XML, where the former outputs log record properties as XML attributes and the latter as XML tags. Information logged is the same in all four formats.

    "},{"location":"audit-log-plugin.html#audit_log_include_accounts","title":"audit_log_include_accounts","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type String

    This variable is used to specify the list of users for which Filtering by user is applied. The value can be NULL or a comma-separated list of accounts in the form user@host or 'user'@'host' (if the user or host contains a comma). If this variable is set, then audit_log_exclude_accounts must be unset, and vice versa.

    "},{"location":"audit-log-plugin.html#audit_log_include_commands","title":"audit_log_include_commands","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type String

    This variable is used to specify the list of commands for which Filtering by SQL command type is applied. The value can be NULL or a comma-separated list of commands. If this variable is set, then audit_log_exclude_commands must be unset, and vice versa.

    "},{"location":"audit-log-plugin.html#audit_log_include_databases","title":"audit_log_include_databases","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type String

    This variable defines the list of databases to be filtered. You can set the value to NULL or a comma-separated list of databases. If you set this variable, you must unset audit_log_exclude_databases, and vice versa.

    "},{"location":"audit-log-plugin.html#audit_log_policy","title":"audit_log_policy","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type String Default ALL Allowed values ALL, LOGINS, QUERIES, NONE

    This variable is used to specify which events should be logged. Possible values are:

    • ALL - all events will be logged

    • LOGINS - only logins will be logged

    • QUERIES - only queries will be logged

    • NONE - no events will be logged

    "},{"location":"audit-log-plugin.html#audit_log_rotate_on_size","title":"audit_log_rotate_on_size","text":"Option Description Command Line: Yes Scope: Global Dynamic: Yes Data type Numeric Default value 0

    This variable is measured in bytes and specifies the maximum size of the audit log file. Upon reaching this size, the audit log will be rotated. The rotated log files are present in the same directory as the current log file. The sequence number is appended to the log file name upon rotation.

    If the value is set to 0 (the default), the audit log files won\u2019t rotate.

    Set the audit_log_handler to FILE to enable this variable.

    "},{"location":"audit-log-plugin.html#audit_log_rotations","title":"audit_log_rotations","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type Numeric Default value 0

    This variable specifies how many log files are kept when the audit_log_rotate_on_size variable is set to a non-zero value. It has an effect only when audit_log_handler is set to FILE.
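
    For example, a my.cnf sketch that keeps five rotated files of up to 100 MB each (the values are illustrative):

    [mysqld]\naudit_log_handler = FILE\naudit_log_rotate_on_size = 104857600\naudit_log_rotations = 5\n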

    "},{"location":"audit-log-plugin.html#audit_log_handler","title":"audit_log_handler","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type String Default value FILE Allowed values FILE, SYSLOG

    This variable is used to configure where the audit log is written. If it is set to FILE, the log is written to the file specified by the audit_log_file variable. If it is set to SYSLOG, the audit log is written to syslog.

    "},{"location":"audit-log-plugin.html#audit_log_syslog_ident","title":"audit_log_syslog_ident","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type String Default value percona-audit

    This variable is used to specify the ident value for syslog. This variable has the same meaning as the appropriate parameter described in the syslog(3) manual.

    "},{"location":"audit-log-plugin.html#audit_log_syslog_facility","title":"audit_log_syslog_facility","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type String Default value LOG_USER

    This variable is used to specify the facility value for syslog. This variable has the same meaning as the appropriate parameter described in the syslog(3) manual.

    "},{"location":"audit-log-plugin.html#audit_log_syslog_priority","title":"audit_log_syslog_priority","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type String Default value LOG_INFO

    This variable is used to specify the priority value for syslog. This variable has the same meaning as the appropriate parameter described in the syslog(3) manual.

    "},{"location":"audit-log-plugin.html#status-variables","title":"Status Variables","text":""},{"location":"audit-log-plugin.html#audit_log_buffer_size_overflow","title":"Audit_log_buffer_size_overflow","text":"Option Description Scope: Global Data type Numeric

    The number of times an audit log entry was either dropped or written directly to the file because its size was bigger than the audit_log_buffer_size value.

    "},{"location":"backup-locks.html","title":"Backup locks","text":"

    Percona Server for MySQL offers the LOCK TABLES FOR BACKUP statement as a lightweight alternative to FLUSH TABLES WITH READ LOCK for both physical and logical backups.

    Note

    As of Percona Server for MySQL 8.0.13-4, LOCK TABLES FOR BACKUP requires the BACKUP_ADMIN privilege.

    "},{"location":"backup-locks.html#lock-tables-for-backup","title":"LOCK TABLES FOR BACKUP","text":"

    LOCK TABLES FOR BACKUP uses a new MDL lock type to block updates to non-transactional tables and DDL statements for all tables. If there is an active LOCK TABLES FOR BACKUP lock, then all DDL statements and all updates to MyISAM, CSV, MEMORY, ARCHIVE, TokuDB, and MyRocks tables are blocked in the Waiting for backup lock status, visible in PERFORMANCE_SCHEMA or PROCESSLIST.

    LOCK TABLES FOR BACKUP has no effect on SELECT queries for all mentioned storage engines. For InnoDB, MyRocks, Blackhole, and Federated tables, LOCK TABLES FOR BACKUP does not block INSERT, REPLACE, UPDATE, or DELETE statements: Blackhole tables obviously have no relevance to backups, and Federated tables are ignored by both logical and physical backup tools.

    Unlike FLUSH TABLES WITH READ LOCK, LOCK TABLES FOR BACKUP does not flush tables, i.e. storage engines are not forced to close tables and tables are not expelled from the table cache. As a result, LOCK TABLES FOR BACKUP only waits for conflicting statements to complete (i.e. DDL and updates to non-transactional tables). It never waits for SELECTs, or UPDATEs to InnoDB or MyRocks tables to complete, for example.

    If an \u201cunsafe\u201d statement is executed in the same connection that is holding a LOCK TABLES FOR BACKUP lock, it fails with the following error:

    Expected output
    ERROR 1880 (HY000): Can't execute the query because you have a conflicting backup lock\n

    UNLOCK TABLES releases the lock acquired by LOCK TABLES FOR BACKUP.

    The intended use case for Percona XtraBackup is:

    LOCK TABLES FOR BACKUP\n... copy .frm, MyISAM, CSV, etc. ...\nUNLOCK TABLES\n... get binlog coordinates ...\n... wait for redo log copying to finish ...\n
    "},{"location":"backup-locks.html#privileges","title":"Privileges","text":"

    The LOCK TABLES FOR BACKUP requires the BACKUP_ADMIN privilege.
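
    For example, granting the privilege to a hypothetical backup account:

    mysql> GRANT BACKUP_ADMIN ON *.* TO 'backup_user'@'localhost';\n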

    "},{"location":"backup-locks.html#interaction-with-other-global-locks","title":"Interaction with other global locks","text":"

    The LOCK TABLES FOR BACKUP has no effect if the current connection already owns a FLUSH TABLES WITH READ LOCK lock, as it is a more restrictive lock. If FLUSH TABLES WITH READ LOCK is executed in a connection that has acquired LOCK TABLES FOR BACKUP, FLUSH TABLES WITH READ LOCK fails with an error.

    If the server is operating in the read-only mode (i.e. read_only set to 1), statements that are unsafe for backups will be either blocked or fail with an error, depending on whether they are executed in the same connection that owns LOCK TABLES FOR BACKUP lock, or other connections.

    "},{"location":"backup-locks.html#myisam-index-and-data-buffering","title":"MyISAM index and data buffering","text":"

    MyISAM key buffering is normally write-through, i.e. by the time each update to a MyISAM table is completed, all index updates are written to disk. The only exception is the delayed key writing feature, which is disabled by default.

    When the global system variable delay_key_write is set to ALL, key buffers for all MyISAM tables are not flushed between updates, so a physical backup of those tables may result in broken MyISAM indexes. To prevent this, LOCK TABLES FOR BACKUP will fail with an error if delay_key_write is set to ALL. An attempt to set delay_key_write to ALL when there\u2019s an active backup lock will also fail with an error.

    Another way to enable delayed key writing is to create MyISAM tables with the DELAY_KEY_WRITE option while the delay_key_write variable is set to ON (the default). In this case, LOCK TABLES FOR BACKUP cannot prevent stale index files from appearing in the backup. Users are encouraged to set delay_key_write to OFF in the configuration file, my.cnf, or to repair MyISAM indexes after restoring from a physical backup created with backup locks.
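
    For example, a my.cnf sketch that disables delayed key writing entirely:

    [mysqld]\ndelay_key_write = OFF\n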

    MyISAM may also cache data for bulk inserts, e.g. when executing multi-row INSERTs or LOAD DATA statements. Those caches, however, are flushed between statements, so have no effect on physical backups as long as all statements updating MyISAM tables are blocked.

    "},{"location":"backup-locks.html#the-mysqldump-command","title":"The mysqldump Command","text":"

    mysqldump has also been extended with a new option, lock-for-backup (disabled by default). When used together with the --single-transaction option, the option makes mysqldump issue LOCK TABLES FOR BACKUP before starting the dump operation to prevent unsafe statements that would normally result in an inconsistent backup.
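
    For example, a consistent dump sketch (the output file name is illustrative):

    $ mysqldump --single-transaction --lock-for-backup --all-databases > backup.sql\n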

    When used without the --single-transaction option, lock-for-backup is automatically converted to lock-all-tables.

    The option lock-for-backup is mutually exclusive with lock-all-tables, i.e. specifying both on the command line will lead to an error.

    If the backup locks feature is not supported by the target server, but lock-for-backup is specified on the command line, mysqldump aborts with an error.

    "},{"location":"backup-locks.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"backup-locks.html#system-variables","title":"System Variables","text":""},{"location":"backup-locks.html#have_backup_locks","title":"have_backup_locks","text":"Option Description Command Line: Yes Config file No Scope: Global Dynamic: No Data type Boolean Default value YES

    This is a server variable implemented to help other utilities decide what locking strategy can be implemented for a server. When available, the backup locks feature is supported by the server and the variable value is always YES.

    "},{"location":"backup-locks.html#status-variables","title":"Status variables","text":""},{"location":"backup-locks.html#com_lock_tables_for_backup","title":"Com_lock_tables_for_backup","text":"Option Description Scope: Global/Session Data type Numeric

    This status variable indicates the number of times the corresponding statements have been executed.

    "},{"location":"backup-locks.html#client-command-line-parameter","title":"Client command line parameter","text":""},{"location":"backup-locks.html#lock-for-backup","title":"lock-for-backup","text":"Option Description Command Line: Yes Scope: Global Dynamic: No Data type String Default value Off

    When used together with the --single-transaction option, the option makes mysqldump issue LOCK TABLES FOR BACKUP before starting the dump operation to prevent unsafe statements that would normally result in an inconsistent backup.

    "},{"location":"backup-restore-overview.html","title":"Backup and restore overview","text":"

    Backups are data snapshots that are taken at a specific time and are stored in a common location in a common format. A backup is only useful for a defined time.

    The following scenarios require a backup to recover:

    Reason Description Hardware or host failure Issues with disks, such as stalls or broken disks. With cloud services, the instance can be inaccessible or broken. Corrupted data This issue can be caused by power outages, where the database fails to write correctly and close the file. User mistake Deleting data or an update overwriting good data with bad data Natural disaster or data center failure Power outage, flooding, or internet issues Compliance Required to comply with regulations and standards"},{"location":"backup-restore-overview.html#strategies","title":"Strategies","text":"

    Define a backup and restore strategy for each of your databases. The strategies should have the following practices:

    Practice Description Retention How long to keep the backups. This decision should be based on the organization\u2019s data governance policies and the expense of storing the backups. The schedule for backups should match the retention schedule. Document Document the strategy and any related policies. The documents should include information about the process and any tools used during backup or restore. Encrypt Encrypt the backup and secure the storage locations. Test Test the backups regularly.

    The backup strategy defines type and the backup frequency, the hardware required, how the backups are verified, and storing the backups, which also includes the backup security. The strategy uses the following metrics:

    Metric Description Recovery Time Objective (RTO) How long can the system be down? Recovery Point Objective (RPO) How much data can the organization lose?

    The restore strategy defines which user account is responsible for restores, as well as how and how frequently the restore process is tested.

    These strategies require planning, implementation, and rigorous testing. You must test your restore process with each type of backup used to validate the backup and measure the recovery time. Automate this testing as much as possible. You should also document the process. In case of disaster, you can follow the procedures in the document without wasting time.

    If you are using replication, consider using a dedicated replica for backups because the operation can cause a high CPU load.

    "},{"location":"backup-restore-overview.html#physical-backup-or-logical-backup","title":"Physical backup or logical backup","text":"

    A backup can be either a physical backup or a logical backup.

    "},{"location":"backup-restore-overview.html#physical-backups","title":"Physical backups","text":"

    A physical backup copies the files needed to store and recover the database, such as data files, configuration files, logs, and other types of files. The physical backup can be stored in the cloud, in offline storage, on disk, or on tape.

    Percona XtraBackup takes a physical backup. You can also use RDS/LVM Snapshots or the MySQL Enterprise Backup.

    If the server is stopped or down, you can copy the datadir with the cp command or the rsync command.
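
    For example, with the server stopped, a copy sketch (both paths are illustrative):

    $ rsync -av /var/lib/mysql/ /backups/mysql-datadir/\n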

    "},{"location":"backup-restore-overview.html#logical-backups","title":"Logical backups","text":"

    A logical backup contains the structural details. This type of backup contains tables, views, procedures, and functions.

    Tools like mysqldump, mydumper, mysqlpump, and MySQL Shell take a logical backup.

    "},{"location":"backup-restore-overview.html#comparison","title":"Comparison","text":"Comparison Physical backup Logical backup Content The physical database files The tables, users, procedures, and functions Restore speed Restore can be quick Restore can be slower and does not include file information. Storage Can take more space Based on what is selected, the backup can be smaller"},{"location":"binary-tarball-install.html","title":"Install Percona Server for MySQL 8.0 from a binary tarball","text":"

    A binary tarball contains a group of files, including the source code, bundled together into one file using the tar command and compressed using gzip.

    See the list of available binary tarballs based on the Percona Server for MySQL version to select the right tarball for your environment.

    You can download the binary tarballs from the Linux - Generic section on the download page.

    Fetch and extract the correct binary tarball. For example, for Debian 10:

    $ wget https://downloads.percona.com/downloads/Percona-Server-8.0/Percona-Server-8.0.26-16/binary/tarball/Percona-Server-8.0.26-16-Linux.x86_64.glibc2.12.tar.gz\n
    "},{"location":"binary-tarball-install.html#install-percona-server-for-mysql-pro-from-a-binary-tarball","title":"Install Percona Server for MySQL Pro from a binary tarball","text":"

    You can download the required binary tarball for Percona Server for MySQL Pro using your CLIENTID and TOKEN from the following link: https://repo.percona.com/private/[CLIENTID]-[TOKEN]/ps-80-pro/tarballs/.

    Fetch and extract the correct binary tarball using your CLIENTID and TOKEN. For example, for Oracle Linux 9:

    $ wget https://repo.percona.com/private/[CLIENTID]-[TOKEN]/ps-80-pro/tarballs/Percona-Server-8.0.40-31/Percona-Server-Pro-8.0.40-31-Linux.x86_64.glibc2.34-debug.tar.gz\n
    "},{"location":"binary-tarball-names.html","title":"Binary tarball file names available based on the Percona Server for MySQL version","text":"

    For later versions of Percona Server for MySQL, the tar files are organized by glibc2 version. You can find this version on your operating system with the following command:

    $ ldd --version\n
    Expected output
    ldd (Ubuntu GLIBC 2.35-0ubuntu3.1) 2.35\nCopyright (C) 2022 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\nWritten by Roland McGrath and Ulrich Drepper.\n

    If the glibc2 version from your operating system is not listed, then this Percona Server for MySQL version does not support that operating system.

    "},{"location":"binary-tarball-names.html#binary-tarball-file-name-organization","title":"Binary tarball file name organization","text":"8.0.26-16 and laterPro buildsZenfs8.0.20-11 to 8.0.25-178.0.19-10 and earlier

    The following lists the platform and the associated full binary file name used by Percona Server for MySQL tar files from version 8.0.26-16 and later.

    Platform Percona Server for MySQL tarball name glibc2 version Ubuntu 22.04 Percona-Server-8.0.30-22-Linux.x86_64.glibc2.35-zenfs.tar.gz glibc2.35 Ubuntu 20.04 Percona-Server-8.0.30-22-Linux.x86_64.glibc2.31.tar.gz glibc2.31 Ubuntu 18.04 Percona-Server-8.0.30-22-Linux.x86_64.glibc2.27.tar.gz glibc2.27 Red Hat Enterprise 9 Percona-Server-8.0.30-22-Linux.x86_64.glibc2.34.tar.gz glibc2.34 Red Hat Enterprise 8 Percona-Server-8.0.30-22-Linux.x86_64.glibc2.28.tar.gz glibc2.28 Red Hat Enterprise 7 Percona-Server-8.0.30-22-Linux.x86_64.glibc2.17.tar.gz glibc2.17 Red Hat Enterprise 6 Percona-Server-8.0.20-11-Linux.x86_64.glibc2.12.tar.gz glibc2.12

    The types of files are as follows:

    Type Name Description Full Percona-Server-<version-number>-Linux.x86_64.<glibc2-version>.tar.gz Contains all files available Minimal Percona-Server-<version-number>-Linux.x86_64.<glibc2-version>-minimal.tar.gz Contains binaries and libraries Debug Percona-Server-<version-number>-Linux.x86_64.<glibc2-version>-debug.tar.gz Contains the minimal build files and test files, and debug symbols Zenfs Percona-Server-<version-number>-Linux.x86_64.<glibc2-version>-zenfs.tar.gz Contains the zenfs files and can be either a full or minimal installation

    The following binary tarballs are available for Percona Server for MySQL Pro builds from version 8.0.35-27 and later.

    Platform Percona Server for MySQL Pro tarball name glibc2 version Ubuntu 22.04 Percona-Server-Pro-8.0.40-31-Linux.x86_64.glibc2.35.tar.gz glibc2.35 Ubuntu 22.04 Percona-Server-Pro-8.0.40-31-Linux.x86_64.glibc2.35-minimal.tar.gz glibc2.35 Red Hat Enterprise 9 Percona-Server-Pro-8.0.40-31-Linux.x86_64.glibc2.34.tar.gz glibc2.34 Red Hat Enterprise 9 Percona-Server-Pro-8.0.40-31-Linux.x86_64.glibc2.34-minimal.tar.gz glibc2.34

    The types of files are the following:

    Type Name Description Full Percona-Server-Pro-<version-number>-Linux.x86_64.<glibc2-version>.tar.gz Contains all files available Minimal Percona-Server-Pro-<version-number>-Linux.x86_64.<glibc2-version>-minimal.tar.gz Contains binaries and libraries

    Implemented in Percona Server for MySQL 8.0.26-16, the following binary tarballs are available for the MyRocks ZenFS installation. See Installing and configuring Percona Server for MySQL with ZenFS support for more information and the installation procedure.

    Type Name Description Full Percona-Server-<version number>-Linux.x86_64.<glibc-version>-zenfs.tar.gz Contains the binaries, libraries, test files, and debug symbols Minimal Percona-Server-<version number>-Linux.x86_64.<glibc-version>-zenfs-minimal.tar.gz Contains the binaries and libraries but does not include test files or debug symbols

    At this time, you can enable the ZenFS plugin in the following distributions:

    Distribution Name Notes Debian 11.1 Able to run the ZenFS plugin Ubuntu 20.04.3 Requires the 5.11 HWE kernel patched with the allow blk-zoned ioctls without CAP_SYS_ADMIN patch

    If you do not enable the ZenFS functionality on Ubuntu 20.04, the binaries with ZenFS support can run on the standard 5.4 kernel. Other Linux distributions are adding support for ZenFS, but Percona does not provide installation packages for those distributions.

    The multiple binary tarballs from earlier versions are replaced with the following:

    Type Name Operating systems Description Full Percona-Server-<version number>-Linux.x86_64.glibc2.12.tar.gz Built for CentOS 6 Contains binaries, libraries, test files, and debug symbols Minimal Percona-Server-<version number>-Linux.x86_64.glibc2.12-minimal.tar.gz Built for CentOS 6 Contains binaries and libraries but does not include test files, or debug symbols Full Percona-Server-<version number>-Linux.x86_64.glibc2.17.tar.gz Compatible with any supported operating system except for CentOS 6 Contains binaries, libraries, test files, and debug symbols Minimal Percona-Server-<version number>-Linux.x86_64.glibc2.17-minimal.tar.gz Compatible with any supported operating system except for CentOS 6 Contains binaries and libraries but does not include test files or debug symbols

    The tarball file has the following characteristics:

    Type Name Description Full Percona-Server-<version-number>-Linux.x86_64.<glibc2-version>.tar.gz Contains all files available Minimal Percona-Server-<version-number>-Linux.x86_64.<glibc2-version>.minimal.tar.gz Contains binaries and libraries Debug Percona-Server-<version-number>-Linux.x86_64.<glibc2-version>.debug.tar.gz Contains the minimal build files and test files, and debug symbols

    For Percona Server for MySQL 8.0.19-10 and earlier, multiple tarballs are provided based on the OpenSSL library available in the distribution:

    • ssl100 - for Debian prior to 9 and Ubuntu prior to 14.04 versions (libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0);

    • ssl102 - for Debian 9 and Ubuntu versions starting from 14.04 (libssl.so.1.1 => /usr/lib/libssl.so.1.1)

    • ssl101 - for CentOS 6 and CentOS 7 (libssl.so.10 => /usr/lib64/libssl.so.10);

    • ssl102 - for CentOS 8 and RedHat 8 (libssl.so.1.1 => /usr/lib/libssl.so.1.1.1b);

    "},{"location":"binlog-space.html","title":"Limiting the disk space used by binary log files","text":"

    It is a challenge to control how much disk space is used by the binary logs. The size of a binary log can vary because a single transaction must be written to a single binary log and cannot be split between multiple binary log files.

    "},{"location":"binlog-space.html#binlog_space_limit","title":"binlog_space_limit","text":"Attribute Description Uses the command line Yes Uses the configuration file Yes Scope Global Dynamic No Variable type ULONG_MAX Default value 0 (unlimited) Maximum value - 64-bit platform 18446744073709547520

    This variable places an upper limit on the total size in bytes of all binary logs. When the limit is reached, the oldest binary logs are purged until the total size is under the limit or only the active log remains.

    The default value of 0 disables the feature. No limit is set on the log space. The binary logs accumulate indefinitely until the disk space is full.

    "},{"location":"binlog-space.html#example","title":"Example","text":"

    Set the binlog_space_limit to 50 GB in the my.cnf file:

    [mysqld]\n...\nbinlog_space_limit = 50G\n...\n
    "},{"location":"binlogging-replication-improvements.html","title":"Binary logs and replication improvements","text":"

    Due to continuous development, Percona Server for MySQL has incorporated a number of improvements related to replication and binary log handling. As a result, its replication behavior differs from MySQL's in a few specifics.

    "},{"location":"binlogging-replication-improvements.html#safety-of-statements-with-a-limit-clause","title":"Safety of statements with a LIMIT clause","text":""},{"location":"binlogging-replication-improvements.html#summary-of-the-fix","title":"Summary of the fix","text":"

    MySQL considers all UPDATE/DELETE/INSERT ... SELECT statements with a LIMIT clause unsafe, no matter whether they actually produce a non-deterministic result, and switches from statement-based logging to row-based logging. Percona Server for MySQL is more accurate: it acknowledges such statements as safe when they include an ORDER BY PK or WHERE condition. This fix has been ported from the upstream bug report #42415 (#44).

    "},{"location":"binlogging-replication-improvements.html#performance-improvement-on-relay-log-position-update","title":"Performance improvement on relay log position update","text":""},{"location":"binlogging-replication-improvements.html#relay-log-position-fix","title":"Relay log position fix","text":"

    MySQL always updated the relay log position in multi-source replication setups, regardless of whether the committed transaction had already been executed. Percona Server for MySQL omits relay log position updates for already logged GTIDs.

    "},{"location":"binlogging-replication-improvements.html#relay-log-position-details","title":"Relay log position details","text":"

    In particular, such unconditional relay log position updates caused additional fsync operations in the case of relay-log-info-repository=TABLE; with a higher number of channels transmitting such duplicate (already executed) transactions, the situation became proportionally worse. Bug fixed #1786 (upstream #85141).

    "},{"location":"binlogging-replication-improvements.html#performance-improvement-on-source-and-connection-status-updates","title":"Performance improvement on source and connection status updates","text":""},{"location":"binlogging-replication-improvements.html#source-and-connection-status-update-fix","title":"Source and connection status update fix","text":"

    Replica nodes configured to update the source status and connection information only on log file rotation did not experience the expected reduction in load. MySQL was additionally updating this information in multi-source replication whenever the replica had to skip an already executed GTID event.

    "},{"location":"binlogging-replication-improvements.html#source-and-connection-status-details","title":"Source and connection status details","text":"

    The configuration with master_info_repository=TABLE and sync_master_info=0 makes the replica update the source status and connection information in this table on log file rotation, and not after each sync_master_info event; however, it didn't work in multi-source replication setups. Heartbeats sent to the replica to skip GTID events which it had already executed were evaluated as relay log rotation events and triggered a sync of the mysql.slave_master_info table. This inaccuracy could produce a huge (up to five times on some setups) increase in write load on the replica before this problem was fixed in Percona Server for MySQL. Bug fixed #1812 (upstream #85158).

    "},{"location":"binlogging-replication-improvements.html#write-flush-commands-to-the-binary-log","title":"Write FLUSH commands to the binary log","text":"

    FLUSH commands, such as FLUSH SLOW LOGS, are not written to the binary log if the system variable binlog_skip_flush_commands is set to ON.

    In addition, the following changes were implemented in the behavior of read_only and super_read_only modes:

    • When read_only is set to ON, any FLUSH ... command executed by a normal user (without the SUPER privilege) is not written to the binary log, regardless of the value of the binlog_skip_flush_commands variable.

    • When super_read_only is set to ON, any FLUSH ... command executed by any user (even those with the SUPER privilege) is not written to the binary log, regardless of the value of the binlog_skip_flush_commands variable.

    An attempt to run a FLUSH command without either SUPER or RELOAD privileges results in the ER_SPECIFIC_ACCESS_DENIED_ERROR exception regardless of the value of the binlog_skip_flush_commands variable.

    "},{"location":"binlogging-replication-improvements.html#binlog_skip_flush_commands","title":"binlog_skip_flush_commands","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Default OFF

    This variable was introduced in Percona Server for MySQL 8.0.15-5.

    When binlog_skip_flush_commands is set to ON, FLUSH ... commands are not written to the binary log. See Writing FLUSH Commands to the Binary Log for more information about what else affects the writing of FLUSH commands to the binary log.
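    For example, a minimal sketch of suppressing FLUSH statement logging at runtime (the variable is global and dynamic):

    mysql> SET GLOBAL binlog_skip_flush_commands = ON;
    mysql> FLUSH SLOW LOGS;  -- not written to the binary log while the variable is ON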

    Note

    FLUSH LOGS, FLUSH BINARY LOGS, FLUSH TABLES WITH READ LOCK, and FLUSH TABLES ... FOR EXPORT are not written to the binary log no matter what value the binlog_skip_flush_commands variable contains. The FLUSH command is not recorded to the binary log and the value of binlog_skip_flush_commands is ignored if the FLUSH command is run with the NO_WRITE_TO_BINLOG keyword (or its alias LOCAL).

    "},{"location":"binlogging-replication-improvements.html#maintaining-comments-with-drop-table","title":"Maintaining comments with DROP TABLE","text":"

    When you issue a DROP TABLE command, the binary log stores the command but removes comments and encloses the table name in quotation marks. If you require the binary log to maintain the comments and not add quotation marks, enable binlog_ddl_skip_rewrite.

    "},{"location":"binlogging-replication-improvements.html#binlog_ddl_skip_rewrite","title":"binlog_ddl_skip_rewrite","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Default OFF

    This variable was introduced in Percona Server for MySQL 8.0.26-16.

    If the variable is enabled, single table DROP TABLE DDL statements are logged in the binary log with comments. Multi-table DROP TABLE DDL statements are not supported and return an error.

    SET binlog_ddl_skip_rewrite = ON;\n/*comment at start*/DROP TABLE t /*comment at end*/;\n
    "},{"location":"binlogging-replication-improvements.html#binary-log-user-defined-functions","title":"Binary log user defined functions","text":"

    To implement Point-in-Time recovery, we have added the binlog_utils_udf plugin. The following user-defined functions are included:

    | Name | Returns | Description |
    | --- | --- | --- |
    | get_binlog_by_gtid() | Binlog file name as STRING | Returns the binlog file name that contains the specified GTID |
    | get_last_gtid_from_binlog() | GTID as STRING | Returns the last GTID found in the specified binlog |
    | get_gtid_set_by_binlog() | GTID set as STRING | Returns all GTIDs found in the specified binlog |
    | get_binlog_by_gtid_set() | Binlog file name as STRING | Returns the file name of the binlog which contains at least one GTID from the specified set |
    | get_first_record_timestamp_by_binlog() | Timestamp as INTEGER | Returns the timestamp of the first event in the specified binlog |
    | get_last_record_timestamp_by_binlog() | Timestamp as INTEGER | Returns the timestamp of the last event in the specified binlog |

    Note

    All functions returning timestamps return their values as microsecond precision UNIX time. In other words, they represent the number of microseconds since 1-JAN-1970.

    All functions accepting a binlog name as the parameter accept only short names, without a path component. If the path separator ('/') is found in the input, an error is returned. This serves the purpose of restricting the locations from which binlogs can be read. They are always read from the current binlog directory (the @@log_bin_basename system variable).

    All functions returning binlog file names return the name in short form, without a path component.

    The basic syntax for get_binlog_by_gtid() is the following:

    get_binlog_by_gtid(string) [AS] alias\n

    Usage: SELECT get_binlog_by_gtid(string) [AS] alias

    Example:

    CREATE FUNCTION get_binlog_by_gtid RETURNS STRING SONAME 'binlog_utils_udf.so';\nSELECT get_binlog_by_gtid(\"F6F54186-8495-47B3-8D9F-011DDB1B65B3:1\") AS result;\n
    Expected output
    +--------------+\n| result       |\n+==============+\n| binlog.00001 |\n+--------------+\n
    DROP FUNCTION get_binlog_by_gtid;\n

    The basic syntax for get_last_gtid_from_binlog() is the following:

    get_last_gtid_from_binlog(string) [AS] alias\n

    Usage: SELECT get_last_gtid_from_binlog(string) [AS] alias

    For example:

    CREATE FUNCTION get_last_gtid_from_binlog RETURNS STRING SONAME 'binlog_utils_udf.so';\nSELECT get_last_gtid_from_binlog(\"binlog.00001\") AS result;\n
    Expected output
    +-----------------------------------------+\n| result                                  |\n+=========================================+\n| F6F54186-8495-47B3-8D9F-011DDB1B65B3:10 |\n+-----------------------------------------+\n
    DROP FUNCTION get_last_gtid_from_binlog;\n

    The basic syntax for get_gtid_set_by_binlog() is the following:

    get_gtid_set_by_binlog(string) [AS] alias\n

    Usage: SELECT get_gtid_set_by_binlog(string) [AS] alias

    For example:

    CREATE FUNCTION get_gtid_set_by_binlog RETURNS STRING SONAME 'binlog_utils_udf.so';\nSELECT get_gtid_set_by_binlog(\"binlog.00001\") AS result;\n
    Expected output
    +-------------------------+\n| result                  |\n+=========================+\n| 11ea-b9a7:7,11ea-b9a7:8 |\n+-------------------------+\n
    DROP FUNCTION get_gtid_set_by_binlog;\n

    The basic syntax for get_binlog_by_gtid_set() is the following:

    get_binlog_by_gtid_set(string) [AS] alias\n

    Usage: SELECT get_binlog_by_gtid_set(string) [AS] alias

    Example:

    CREATE FUNCTION get_binlog_by_gtid_set RETURNS STRING SONAME 'binlog_utils_udf.so';\nSELECT get_binlog_by_gtid_set(\"11ea-b9a7:7,11ea-b9a7:8\") AS result;\n
    Expected output
    +---------------------------------------------------------------+\n| result                                                        |\n+===============================================================+\n| bin.000003                                                    |\n+---------------------------------------------------------------+\n
    DROP FUNCTION get_binlog_by_gtid_set;\n

    The basic syntax for get_first_record_timestamp_by_binlog() is the following:

    get_first_record_timestamp_by_binlog(string) [AS] alias\n

    Usage: SELECT get_first_record_timestamp_by_binlog(string) [AS] alias

    For example:

    CREATE FUNCTION get_first_record_timestamp_by_binlog RETURNS INTEGER SONAME 'binlog_utils_udf.so';\nSELECT FROM_UNIXTIME(get_first_record_timestamp_by_binlog(\"bin.00003\") DIV 1000000) AS result;\n
    Expected output
    +---------------------+\n| result              |\n+=====================+\n| 2020-12-03 09:10:40 |\n+---------------------+\n
    DROP FUNCTION get_first_record_timestamp_by_binlog;\n

    The basic syntax for get_last_record_timestamp_by_binlog() is the following:

    get_last_record_timestamp_by_binlog(string) [AS] alias\n

    Usage: SELECT get_last_record_timestamp_by_binlog(string) [AS] alias

    For example:

    CREATE FUNCTION get_last_record_timestamp_by_binlog RETURNS INTEGER SONAME 'binlog_utils_udf.so';\nSELECT FROM_UNIXTIME(get_last_record_timestamp_by_binlog(\"bin.00003\") DIV 1000000) AS result;\n
    Expected output
    +---------------------+\n| result              |\n+=====================+\n| 2020-12-04 04:18:56 |\n+---------------------+\n
    DROP FUNCTION get_last_record_timestamp_by_binlog;\n
    "},{"location":"binlogging-replication-improvements.html#limitations","title":"Limitations","text":"

    For the following variables, do not define values with one or more dot (.) characters:

    • log_bin

    • log_bin_index

    A value defined with these characters is handled differently in MySQL and Percona XtraBackup and can cause unpredictable behavior.
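    For example, a my.cnf sketch that respects this restriction (the file names are illustrative):

    [mysqld]
    # safe: the values contain no dot (.) characters
    log_bin = mysql-bin
    log_bin_index = mysql-bin-index
    # avoid values such as log_bin = mysql.bin.log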

    "},{"location":"build-apt-packages.html","title":"Build APT packages","text":"

    If you wish to build your own Debian/Ubuntu (dpkg) packages of Percona Server for MySQL, you first need to start with a source tarball, either from the Percona website or by generating your own by following the instructions above (Installing Percona Server for MySQL from the Git Source Tree).

    Extract the source tarball:

    $ tar xfz Percona-Server-8.0.13-3-Linux.x86_64.ssl102.tar.gz\n$ cd Percona-Server-8.0.13-3\n

    Copy the Debian packaging in the directory that Debian expects it to be in:

    $ cp -ap build-ps/debian debian\n

    Update the changelog for your distribution (here we update for the unstable distribution - sid), setting the version number appropriately. The trailing 1 in the version number is the revision of the Debian packaging.

    $ dch -D unstable --force-distribution -v \"8.0.13-3-1\" \"Update to 8.0.13-3\"\n

    Build the Debian source package:

    $ dpkg-buildpackage -S\n

    Use sbuild to build the binary package in a chroot:

    $ sbuild -d sid percona-server-8.0_8.0.13-3-1.dsc\n

    You can give different distribution options to dch and sbuild to build binary packages for all Debian and Ubuntu releases.

    Note

    The PAM authentication plugin is not built with the server by default. To build Percona Server for MySQL with the PAM plugin, use the additional option -DWITH_PAM=ON, as in the sketch below.
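    For example, extending the configure line shown earlier (a sketch):

    $ cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community -DWITH_PAM=ON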

    "},{"location":"changed-page-tracking.html","title":"XtraDB changed page tracking","text":"

    Important

    Starting with Percona Server for MySQL 8.0.30, the changed page tracking feature is removed and no longer supported.

    Starting with Percona Server for MySQL 8.0.27, the page tracking feature is deprecated and may be removed in future versions.

    We recommend using the MySQL page tracking feature. For more information, see MySQL InnoDB Clone and page tracking.

    XtraDB now tracks the pages that have changes written to them according to the redo log. This information is written out in special changed page bitmap files. This information can be used to speed up incremental backups using Percona XtraBackup by removing the need to scan whole data files to find the changed pages. Changed page tracking is done by a new XtraDB worker thread that reads and parses log records between checkpoints. The tracking is controlled by a new read-only server variable innodb_track_changed_pages.

    The bitmap file name format used for changed page tracking is ib_modified_log_<seq>_<startlsn>.xdb. The first number is the sequence number of the bitmap log file, and the startlsn number is the starting LSN of the data tracked in that file. The bitmap log files look like this:

    Expected output
    ib_modified_log_1_0.xdb\nib_modified_log_2_1603391.xdb\n

    The sequence number can be used to easily check whether all the required bitmap files are present. The start LSN number is used by XtraBackup and INFORMATION_SCHEMA queries to determine which files have to be opened and read for the required LSN interval data. The bitmap file is rotated on each server restart and whenever the current file size reaches the predefined maximum. This maximum is controlled by a new innodb_max_bitmap_file_size variable.

    Old bitmap files may be safely removed after a corresponding incremental backup is taken. For that, the server provides user statements for handling the XtraDB changed page bitmaps. Removing the bitmap files from the filesystem directly is also safe, as long as care is taken not to delete data for an LSN range that has not yet been backed up.

    This feature will be used for implementing faster incremental backups that use this information to avoid full data scans in Percona XtraBackup.

    "},{"location":"changed-page-tracking.html#user-statements-for-handling-the-xtradb-changed-page-bitmaps","title":"User statements for handling the XtraDB changed page bitmaps","text":"

    New statements have been introduced for handling the changed page bitmap tracking. All of these statements require the SUPER privilege.

    • FLUSH CHANGED_PAGE_BITMAPS - this statement can be used for a synchronous bitmap write for immediate catch-up with the log checkpoint. It is used by innobackupex to make sure that XtraBackup has all the required data.

    • RESET CHANGED_PAGE_BITMAPS - this statement will delete all the bitmap log files and restart the bitmap log file sequence.

    • PURGE CHANGED_PAGE_BITMAPS BEFORE <lsn> - this statement will delete all the changed page bitmap files up to the specified log sequence number, as in the sketch after this list.
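    For example, a sketch of purging bitmap data that is no longer needed once an incremental backup has been taken (the LSN value is illustrative):

    mysql> FLUSH CHANGED_PAGE_BITMAPS;
    mysql> PURGE CHANGED_PAGE_BITMAPS BEFORE 1603391;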

    "},{"location":"changed-page-tracking.html#additional-information-in-show-engine-innodb-status","title":"Additional information in SHOW ENGINE INNODB STATUS","text":"

    When log tracking is enabled, the following additional fields are displayed in the LOG section of the SHOW ENGINE INNODB STATUS output:

    • "Log tracked up to:" displays the LSN up to which all the changes have been parsed and stored as a bitmap on disk by the log tracking thread.

    • "Max tracked LSN age:" displays the maximum limit on how far behind the log tracking thread may be.

    Note

    Implemented in Percona Server for MySQL 8.0.13-4, a new InnoDB monitor, log_writer_on_tracker_waits, records log writer waits due to changed page tracking lag. This monitor works in parallel with the other log_writer_on_[*]_waits monitors.

    "},{"location":"changed-page-tracking.html#information_schema-tables","title":"INFORMATION_SCHEMA tables","text":"

    This table contains a list of modified pages from the bitmap file data. As these files are generated by the log tracking thread, which parses the log whenever a checkpoint is made, the data is not real-time.

    "},{"location":"changed-page-tracking.html#information_schemainnodb_changed_pages","title":"INFORMATION_SCHEMA.INNODB_CHANGED_PAGES","text":"Column Name Description \u2018INT(11) space_id\u2019 \u2018space id of modified page\u2019 \u2018INT(11) page_id\u2019 \u2018id of modified page\u2019 \u2018BIGINT(21) start_lsn\u2019 \u2018start of the interval\u2019 \u2018BIGINT(21) end_lsn\u2019 \u2018end of the interval \u2018

    The start_lsn and the end_lsn columns denote between which two checkpoints this page was changed at least once. They are also equal to checkpoint LSNs.

    The number of records in this table can be limited by using the innodb_max_changed_pages variable.
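    For example, a query for pages modified within a given LSN interval might look like the following (a sketch; the LSN values are illustrative):

    mysql> SELECT space_id, page_id, start_lsn, end_lsn
        -> FROM INFORMATION_SCHEMA.INNODB_CHANGED_PAGES
        -> WHERE start_lsn >= 0 AND end_lsn <= 1603391;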

    "},{"location":"changed-page-tracking.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"changed-page-tracking.html#system-variables","title":"System variables","text":""},{"location":"changed-page-tracking.html#innodb_max_changed_pages","title":"innodb_max_changed_pages","text":"Option Description Command Line: Yes Config file Yes Scope: Global Dynamic: Yes Data type Numeric Default value 1000000 Range 1 - 0 (unlimited)

    This variable is used to limit the result row count for queries from the INFORMATION_SCHEMA.INNODB_CHANGED_PAGES table.

    "},{"location":"changed-page-tracking.html#innodb_track_changed_pages","title":"innodb_track_changed_pages","text":"Option Description Command Line: Yes Config file Yes Scope: Global Dynamic: No Data type Boolean Default value 0 - False Range 0-1

    This variable is used to enable or disable the XtraDB changed page tracking feature.

    "},{"location":"changed-page-tracking.html#innodb_max_bitmap_file_size","title":"innodb_max_bitmap_file_size","text":"Option Description Command Line: Yes Config file Yes Scope: Global Dynamic: Yes Data type Numeric Default value 104857600 (100 MB) Range 4096 (4KB) - 18446744073709551615 (16EB)

    This variable is used to control the maximum bitmap file size; when the current file reaches this size, it is rotated.

    "},{"location":"compile-percona-server.html","title":"Compile Percona Server for MySQL 8.0 from source","text":"

    The following instructions install Percona Server for MySQL 8.0.

    "},{"location":"compile-percona-server.html#install-percona-server-for-mysql-from-the-git-source-tree","title":"Install Percona Server for MySQL from the Git Source Tree","text":"

    Percona uses Git for development, with the code hosted on GitHub. To build the latest Percona Server for MySQL from the source tree, you will need git installed on your system.

    You can now fetch the latest Percona Server for MySQL 8.0 sources.

    $ git clone https://github.com/percona/percona-server.git\n$ cd percona-server\n$ git checkout 8.0\n$ git submodule init\n$ git submodule update\n

    If you are going to make changes to Percona Server for MySQL 8.0 and want to distribute the resulting work, you can generate a new source tarball (exactly the same way as we do for release):

    $ cmake .\n$ make dist\n

    After either fetching the source repository or extracting a source tarball (from Percona or one you generated yourself), you will now need to configure and build Percona Server for MySQL.

    First, run CMake to configure the build. Here you can specify all the normal build options as you do for a normal MySQL build. Depending on what options you wish to compile Percona Server for MySQL with, you may need other libraries installed on your system. Here is an example using a configure line similar to the options that Percona uses to produce binaries:

    $ cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community\n
    "},{"location":"compile-percona-server.html#compile-from-source","title":"Compile from source","text":"

    Now, compile using make:

    $ make\n

    Install:

    $ make install\n

    Percona Server for MySQL 8.0 is installed on your system.

    "},{"location":"components-keyrings-comparison.html","title":"Compare keyring components and keyring plugins","text":"

    If you want to store encryption keys in Percona Server for MySQL securely, you have two options: keyring components and plugins. They are similar in functionality, but they have some differences that you should consider before choosing.

    Keyring components are newer and more flexible than keyring plugins. You can load them using a manifest file, so you don't need the --early-plugin-load option. You can also configure the keyring components using their configuration files instead of system variables. Keyring components have fewer restrictions on the types and lengths of keys they can handle. For example, keyring components can support RSA keys, while keyring plugins cannot.

    However, keyring components also have some disadvantages. They may be incompatible with certain features such as InnoDB encryption threads. They also require more steps to install and uninstall than keyring plugins. You must create and edit each component's manifest file, configuration file, and component directory.
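    For illustration, loading a keyring component such as component_keyring_file typically involves a manifest file (mysqld.my, next to the server binary) and the component's configuration file; a minimal sketch, with the keyring path given as an assumption:

    # mysqld.my (manifest file)
    { "components": "file://component_keyring_file" }

    # component_keyring_file.cnf (component configuration file)
    { "path": "/var/lib/mysql-keyring/component_keyring_file", "read_only": false }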

    Keyring plugins are older than keyring components. You can load them using the --early-plugin-load option, which is more straightforward than a manifest file. You can also configure them using system variables, which are easier to manage than configuration files. Keyring plugins are compatible with certain features that use encryption keys.

    However, keyring plugins also have some limitations. They have more restrictions on the types and lengths of keys they can support. For example, they cannot handle RSA keys, while keyring components can. They also require you to specify the plugin name and library file for each plugin you load.

    To summarize, keyring components and plugins are ways to store encryption keys in Percona Server for MySQL, but they have different advantages and disadvantages. You should choose the one that suits your needs and preferences.

    "},{"location":"compressed-columns.html","title":"Compressed columns with dictionaries","text":"

    The per-column compression feature is a data type modifier, independent from user-level SQL and InnoDB data compression, that causes the data stored in the column to be compressed on writing to storage and decompressed on reading. For all other purposes, the data type is identical to the one without the modifier, i.e. no new data types are created. Compression is done by using the zlib library.

    Additionally, it is possible to pre-define a set of strings for each compressed column to achieve a better compression ratio on relatively small individual data items.

    This feature provides:

    • a better compression ratio for text data which consists of a large number of predefined words (e.g. JSON or XML) using compression methods with static dictionaries

    • a way to select columns in the table to compress (in contrast to the InnoDB row compression method). This feature is based on a patch provided by Weixiang Zhai.

    "},{"location":"compressed-columns.html#specifications","title":"Specifications","text":"

    The feature is limited to the InnoDB/XtraDB storage engine and to columns of the following data types:

    • BLOB (including TINYBLOB, MEDIUMBLOB, LONGBLOB)

    • TEXT (including TINYTEXT, MEDIUMTEXT, LONGTEXT)

    • VARCHAR (including NATIONAL VARCHAR)

    • VARBINARY

    • JSON

    A compressed column is declared by using the syntax that extends the existing COLUMN_FORMAT modifier: COLUMN_FORMAT COMPRESSED. If this modifier is applied to an unsupported column type or storage engine, an error is returned.

    The compression can be specified:

    • when creating a table: CREATE TABLE ... (..., foo BLOB COLUMN_FORMAT COMPRESSED, ...);

    • when altering a table and modifying a column to the compressed format: ALTER TABLE ... MODIFY [COLUMN] ... COLUMN_FORMAT COMPRESSED, or ALTER TABLE ... CHANGE [COLUMN] ... COLUMN_FORMAT COMPRESSED.

    Unlike Oracle MySQL, compression is applicable to generated stored columns. Use this syntax extension as follows:

    mysql> CREATE TABLE t1(\n       id INT,\n       a BLOB,\n       b JSON COLUMN_FORMAT COMPRESSED,\n       g BLOB GENERATED ALWAYS AS (a) STORED COLUMN_FORMAT COMPRESSED WITH COMPRESSION_DICTIONARY numbers\n     ) ENGINE=InnoDB;\n

    To decompress a column, specify a COLUMN_FORMAT value other than COMPRESSED: FIXED, DYNAMIC, or DEFAULT. If there is a column compression/decompression request in an ALTER TABLE, it is forced to the COPY algorithm.
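    For example, a sketch of decompressing a single column (the table and column names are illustrative):

    mysql> ALTER TABLE t1 MODIFY b BLOB COLUMN_FORMAT DEFAULT;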

    Two new variables, innodb_compressed_columns_zip_level and innodb_compressed_columns_threshold, have been implemented.

    "},{"location":"compressed-columns.html#compression-dictionary-support","title":"Compression dictionary support","text":"

    To achieve a better compression ratio on relatively small individual data items, it is possible to predefine a compression dictionary, which is a set of strings for each compressed column.

    Compression dictionaries can be represented as a list of words in the form of a string (a comma or any other character can be used as a delimiter, although none is required). In other words, 'a, bb, ccc', 'a bb ccc', and 'abbccc' will have the same effect. However, the latter is more compact. Quoting is handled by the regular SQL quoting rules. The maximum supported dictionary length is 32506 bytes (a zlib limitation).

    The compression dictionary is stored in a new system InnoDB table. As this table is of the data dictionary kind, concurrent reads are allowed, but writes are serialized, and reads are blocked by writes. Table reads through old read views are not supported, similar to InnoDB internal DDL transactions.

    "},{"location":"compressed-columns.html#interaction-with-innodb_force_recovery-variable","title":"Interaction with innodb_force_recovery variable","text":"

    Compression dictionary operations are treated like DDL operations, with innodb_force_recovery as the dividing line: with values less than 3, compression dictionary operations are allowed, and with values >= 3, they are forbidden.

    Note

    Prior to Percona Server for MySQL 8.0.15-6, using compression dictionary operations with the innodb_force_recovery variable set to a value greater than 0 resulted in an error.

    "},{"location":"compressed-columns.html#example","title":"Example","text":"

    In order to use the compression dictionary, you need to create it. This can be done by running:

    mysql> SET @dictionary_data = 'one' 'two' 'three' 'four';\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n
    mysql> CREATE COMPRESSION_DICTIONARY numbers (@dictionary_data);\n
    Expected output
    Query OK, 0 rows affected (0.00 sec)\n

    To create a table that has both compression and compression dictionary support, run:

    mysql> CREATE TABLE t1(\n        id INT,\n        a BLOB COLUMN_FORMAT COMPRESSED,\n        b BLOB COLUMN_FORMAT COMPRESSED WITH COMPRESSION_DICTIONARY numbers\n      ) ENGINE=InnoDB;\n

    The following example shows how to insert a sample of JSON data into the table:

    SET @json_value =\n'[\\n'\n' {\\n'\n' \"one\" = 0,\\n'\n' \"two\" = 0,\\n'\n' \"three\" = 0,\\n'\n' \"four\" = 0\\n'\n' },\\n'\n' {\\n'\n' \"one\" = 0,\\n'\n' \"two\" = 0,\\n'\n' \"three\" = 0,\\n'\n' \"four\" = 0\\n'\n' },\\n'\n' {\\n'\n' \"one\" = 0,\\n'\n' \"two\" = 0,\\n'\n' \"three\" = 0,\\n'\n' \"four\" = 0\\n'\n' },\\n'\n' {\\n'\n' \"one\" = 0,\\n'\n' \"two\" = 0,\\n'\n' \"three\" = 0,\\n'\n' \"four\" = 0\\n'\n' }\\n'\n']\\n'\n;\n
    mysql> INSERT INTO t1 VALUES(0, @json_value, @json_value);\nQuery OK, 1 row affected (0.01 sec)\n
    "},{"location":"compressed-columns.html#information_schema-tables","title":"INFORMATION_SCHEMA Tables","text":"

    This feature implements two new INFORMATION_SCHEMA tables.

    "},{"location":"compressed-columns.html#information_schemacompression_dictionary","title":"INFORMATION_SCHEMA.COMPRESSION_DICTIONARY","text":"Column Name Description \u2018BIGINT(21)_UNSIGNED dict_version\u2019 \u2018dictionary version\u2019 \u2018VARCHAR(64) dict_name\u2019 \u2018dictionary name\u2019 \u2018BLOB dict_data\u2019 \u2018compression dictionary string\u2019

    This table provides a view of the internal compression dictionary. The SUPER privilege is required to query it.
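    For example, a user with the SUPER privilege can list the defined dictionaries (a sketch):

    mysql> SELECT dict_version, dict_name, dict_data FROM INFORMATION_SCHEMA.COMPRESSION_DICTIONARY;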

    "},{"location":"compressed-columns.html#information_schemacompression_dictionary_tables","title":"INFORMATION_SCHEMA.COMPRESSION_DICTIONARY_TABLES","text":"Column Name Description \u2018BIGINT(21)_UNSIGNED table_schema\u2019 \u2018table schema\u2019 \u2018BIGINT(21)_UNSIGNED table_name\u2019 \u2018table ID from INFORMATION_SCHEMA.INNODB_SYS_TABLES\u2019 \u2018BIGINT(21)_UNSIGNED column_name\u2019 \u2018column position (starts from 0 as in INFORMATION_SCHEMA.INNODB_SYS_COLUMNS)\u2019 \u2018BIGINT(21)_UNSIGNED dict_name\u2019 \u2018dictionary ID\u2019

    This table provides a view over the internal table that stores the mapping between the compression dictionaries and the columns using them. The SUPER privilege is required to query it.

    "},{"location":"compressed-columns.html#limitations","title":"Limitations","text":"

    Compressed columns cannot be used in indices (neither on their own nor as parts of composite keys).

    Note

    CREATE TABLE t2 AS SELECT * FROM t1 will create a new table with a compressed column, whereas CREATE TABLE t2 AS SELECT CONCAT(a,'') AS a FROM t1 will not create compressed columns.

    At the same time, after executing the CREATE TABLE t2 LIKE t1 statement, t2.a will have the COMPRESSED attribute.

    ALTER TABLE ... DISCARD/IMPORT TABLESPACE is not supported for tables with compressed columns. To export and import tablespaces with compressed columns, you uncompress them first with: ALTER TABLE ... MODIFY ... COLUMN_FORMAT DEFAULT.

    "},{"location":"compressed-columns.html#mysqldump-command-line-parameters","title":"mysqldump command line parameters","text":"

    By default, with no additional options, mysqldump generates MySQL-compatible SQL output.

    All /*!50633 COLUMN_FORMAT COMPRESSED */ and /*!50633 COLUMN_FORMAT COMPRESSED WITH COMPRESSION_DICTIONARY <dictionary> */ fragments are omitted from the dump.

    When the new enable-compressed-columns option is specified, all /*!50633 COLUMN_FORMAT COMPRESSED */ fragments are left intact and all /*!50633 COLUMN_FORMAT COMPRESSED WITH COMPRESSION_DICTIONARY <dictionary> */ fragments are transformed into /*!50633 COLUMN_FORMAT COMPRESSED */. In this mode, the dump contains the SQL statements needed to create compressed columns, but without dictionaries.

    When the new enable-compressed-columns-with-dictionaries option is specified, the dump contains all compressed column attributes and the compression dictionaries.

    Moreover, the following dictionary creation fragments will be added before CREATE TABLE statements which are going to use these dictionaries for the first time.

    /*!50633 DROP COMPRESSION_DICTIONARY IF EXISTS <dictionary>; */\n/*!50633 CREATE COMPRESSION_DICTIONARY <dictionary>(...); */\n

    Two new options, add-drop-compression-dictionary and skip-add-drop-compression-dictionary, control whether the /*!50633 DROP COMPRESSION_DICTIONARY IF EXISTS <dictionary> */ fragment from the previous paragraph is emitted. By default, the add-drop-compression-dictionary mode is used.

    When both enable-compressed-columns-with-dictionaries and --tab=<dir> (separate file for each table) options are specified, necessary compression dictionaries will be created in each output file using the following fragment (regardless of the values of add-drop-compression-dictionary and skip-add-drop-compression-dictionary options).

    /*!50633 CREATE COMPRESSION_DICTIONARY IF NOT EXISTS <dictionary>(...); */\n
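    Putting these options together, a dump that preserves compressed column attributes along with their dictionaries might be produced as follows (a sketch; the database name is illustrative):

    $ mysqldump --enable-compressed-columns-with-dictionaries mydb > mydb-dump.sql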
    "},{"location":"compressed-columns.html#version-specific-information","title":"Version specific information","text":"
    • Percona Server for MySQL 8.0.13-3: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"compressed-columns.html#system-variables","title":"System variables","text":""},{"location":"compressed-columns.html#innodb_compressed_columns_zip_level","title":"innodb_compressed_columns_zip_level","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 6 Range 0-9

    This variable is used to specify the compression level used for compressed columns. Specifying 0 will use no compression, 1 the fastest, and 9 the best compression. The default value is 6.

    "},{"location":"compressed-columns.html#innodb_compressed_columns_threshold","title":"innodb_compressed_columns_threshold","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 96 Range 1 - 2^64-1 (or 2^32-1 for 32-bit release)

    By default, a value being inserted will be compressed if its length exceeds innodb_compressed_columns_threshold bytes. Otherwise, it will be stored in the raw (uncompressed) form.

    Please also note that because of the nature of some data, the compressed representation can be longer than the original value. In this case, it does not make sense to store such values in compressed form as Percona Server for MySQL would have to waste both memory space and CPU resources for unnecessary decompression. Therefore, even if the length of such non-compressible values exceeds innodb_compressed_columns_threshold, they will be stored in an uncompressed form (however, an attempt to compress them will still be made).

    This parameter can be tuned to skip unnecessary data compression attempts for values that the user knows in advance to have a bad compression ratio in their first N bytes.
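    Both variables are global and dynamic, so they can be adjusted at runtime; for example (the values are illustrative):

    mysql> SET GLOBAL innodb_compressed_columns_zip_level = 9;
    mysql> SET GLOBAL innodb_compressed_columns_threshold = 256;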

    "},{"location":"copyright-and-licensing-information.html","title":"Copyright and licensing information","text":""},{"location":"copyright-and-licensing-information.html#documentation-licensing","title":"Documentation licensing","text":"

    Percona Server for MySQL documentation is (C)2009-2024 Percona LLC and/or its affiliates and is distributed under the Creative Commons Attribution 4.0 International License.

    "},{"location":"copyright-and-licensing-information.html#software-license","title":"Software license","text":"

    Percona Server for MySQL is built upon MySQL from Oracle. Along with making our own modifications, we merge in changes from other sources such as community contributions and changes from MariaDB.

    The original SHOW USER/TABLE/INDEX statistics code came from Google.

    Percona does not require copyright assignment.

    See the COPYING files accompanying the software distribution.

    "},{"location":"data-at-rest-encryption.html","title":"Data at Rest Encryption","text":"

    The following system variables, status variables, and options have been removed in Percona Server for MySQL 8.0.31-23.

    • innodb_encryption_rotation_pages_read_from_cache
    • innodb_encryption_rotation_pages_read_from_disk
    • innodb_encryption_rotation_pages_modified
    • innodb_encryption_rotation_pages_flushed
    • innodb_encryption_rotation_estimated_iops
    • innodb_encryption_rotation_list_length
    • innodb_num_pages_encrypted
    • innodb_num_pages_decrypted
    • innodb_encryption_threads
    • innodb_encryption_rotate_key_age
    • innodb_encryption_rotation_loops
    • innodb_default_encryption_key_id
    • rotate_system_key and any dependencies

    The following system variable options have been changed in Percona Server for MySQL 8.0.31-23.

    | Variable name | Changed |
    | --- | --- |
    | default_table_encryption | Changed to two options: ON or OFF |
    | innodb_sys_tablespace_encrypt | Changed to Boolean |

    Data security is a concern for institutions and organizations. Transparent Data Encryption (TDE) or Data at Rest Encryption encrypts data files. Data at rest is any data that is not accessed or changed frequently, stored on different types of storage devices. Encryption ensures that if an unauthorized user accesses the data files from the file system, the user cannot read the contents.

    If the user uses master key encryption, the MySQL keyring plugin stores the InnoDB master key, used for the master key encryption implemented by MySQL. The master key is also used to encrypt redo logs and undo logs, along with the tablespaces.

    The InnoDB tablespace encryption has the following components:

    • The database instance has a master key for tablespaces and a master key for binary log encryption.

    • Each tablespace has a tablespace key. The key is used to encrypt the tablespace data pages. Encrypted tablespace keys are written to the tablespace header. In the master key implementation, the tablespace key cannot be changed unless you rebuild the table.

    Two separate keys allow the master key to be rotated in a minimal operation. When the master key is rotated, each tablespace key is decrypted and re-encrypted with the new master key. The key rotation only reads and writes to the first page of each tablespace file (.ibd).
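    Master key rotation is triggered with the standard statement:

    mysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;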

    An InnoDB tablespace file is comprised of multiple logical and physical pages. Page 0 is the tablespace header page and keeps the metadata for the tablespace. The encryption information is stored on page 0 and the tablespace key is encrypted.

    An encrypted page is decrypted at the I/O layer, added to the buffer pool, and used to access the data. A buffer pool page is not encrypted. The page is encrypted by the I/O layer before the page is flushed to disk.

    "},{"location":"data-at-rest-encryption.html#percona-xtrabackup-support","title":"Percona XtraBackup support","text":"

    Percona XtraBackup version 8 supports the backup of encrypted general tablespaces.

    Percona XtraBackup only supports features that are Generally Available (GA) in Percona Server for MySQL. Due to time constraints, GA features may be supported in a later Percona XtraBackup release. Review the Percona XtraBackup release notes for more information.

    "},{"location":"data-loading.html","title":"MyRocks data loading","text":"

    By default, MyRocks configurations are optimized for short transactions, and not for data loading. MyRocks has a couple of special session variables to speed up data loading dramatically.

    "},{"location":"data-loading.html#sorted-bulk-loading","title":"Sorted bulk loading","text":"

    If your data is guaranteed to be loaded in primary key order, then this method is recommended. This method works by dropping any secondary keys first, loading data into your table in primary key order, and then restoring the secondary keys via Fast Secondary Index Creation.

    "},{"location":"data-loading.html#creating-secondary-indexes","title":"Creating secondary indexes","text":"

    When loading data into empty tables, it is highly recommended to drop all secondary indexes first, load the data, and then add all secondary indexes back. MyRocks has a feature called Fast Secondary Index Creation, which is automatically used when executing CREATE INDEX or ALTER TABLE ... ADD INDEX. With Fast Secondary Index Creation, the secondary index entries are written directly to the bottommost RocksDB level, bypassing compaction. This significantly reduces the total write volume and the CPU time spent decompressing and compressing data on higher levels.
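    A sketch of this drop-load-recreate flow (the table and index names are illustrative):

    mysql> ALTER TABLE t1 DROP INDEX idx_col1;
    -- load the data here, for example with LOAD DATA or bulk INSERT
    mysql> ALTER TABLE t1 ADD INDEX idx_col1 (col1);  -- uses Fast Secondary Index Creation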

    "},{"location":"data-loading.html#loading-data","title":"Loading data","text":"

    As described above, it is highly recommended to load data into tables that have only a primary key (no secondary keys) and to add all secondary indexes after loading the data.

    When loading data into MyRocks tables, there are two recommended session variables:

    SET session sql_log_bin=0;\nSET session rocksdb_bulk_load=1;\n

    When converting large MyISAM/InnoDB tables, either by using ALTER TABLE or INSERT INTO ... SELECT statements, it is recommended that you create MyRocks tables as below (otherwise, if the table is sufficiently big, the server will consume all the memory and then be terminated by the OOM killer):

    SET session sql_log_bin=0;\nSET session rocksdb_bulk_load=1;\nALTER TABLE large_myisam_table ENGINE=RocksDB;\nSET session rocksdb_bulk_load=0;\n

    Using sql_log_bin=0 avoids writing to binary logs.

    With rocksdb_bulk_load set to 1, MyRocks enters a special mode that writes all inserts into the bottommost RocksDB level, skipping the MemTable and subsequent compactions. This is a very efficient way to load data.

    The rocksdb_bulk_load mode operates with a few conditions:

    • None of the data being bulk loaded can overlap with existing data in the table. The easiest way to ensure this is to always bulk load into an empty table, but the mode will allow loading some data into the table, doing other operations, and then returning and bulk loading additional data if there is no overlap between what is being loaded and what already exists.

    • The data may not be visible until bulk load mode is ended (i.e. rocksdb_bulk_load is set to zero again). The method used builds up SST files which will later be added as-is to the database. Until a particular SST file has been added, the data will not be visible to the rest of the system; thus, issuing a SELECT on the table currently being bulk loaded will only show older data and will likely not show the most recently added rows. Ending the bulk load mode will cause the most recent SST file to be added. When bulk loading multiple tables, starting a new table will trigger the code to add the most recent SST file to the system; as a result, it is inadvisable to interleave INSERT statements to two or more tables during bulk load mode.

    By default, the rocksdb_bulk_load mode expects all data be inserted in primary key order (or reversed order). If the data is in the reverse order (i.e. the data is descending on a normally ordered primary key or is ascending on a reverse ordered primary key), the rows are cached in chunks to switch the order to match the expected order.

    Inserting one or more rows out of order will result in an error and may result in some of the data being inserted in the table and some not. To resolve the problem, either fix the order of the data being inserted, or truncate the table and restart the load.

    "},{"location":"data-loading.html#unsorted-bulk-loading","title":"Unsorted bulk loading","text":"

    If your data is not ordered in primary key order, then this method is recommended. With this method, secondary keys do not need to be dropped and restored. However, writes to the primary key no longer go directly to SST files; they are written to temporary files to be sorted first, so there is an extra cost to this method.

    To allow for loading unsorted data:

    SET session sql_log_bin=0;\nSET session rocksdb_bulk_load_allow_unsorted=1;\nSET session rocksdb_bulk_load=1;\n...\nSET session rocksdb_bulk_load=0;\nSET session rocksdb_bulk_load_allow_unsorted=0;\n

    Note that rocksdb_bulk_load_allow_unsorted can only be changed when rocksdb_bulk_load is disabled (set to 0). In this case, all input data will go through an intermediate step that writes the rows to temporary SST files, sorts the rows in primary key order, and then writes the final SST files in the correct order.

    "},{"location":"data-loading.html#other-approaches","title":"Other approaches","text":"

    If rocksdb_commit_in_the_middle is enabled, MyRocks implicitly commits every rocksdb_bulk_load_size records (the default is 1,000) in the middle of your transaction. If your data loading fails in the middle of the statement (LOAD DATA or bulk INSERT), the rows are not entirely rolled back; some of the rows remain stored in the table. To restart data loading, you'll need to truncate the table and load the data again.
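    A sketch of such a load (the file name, table, and chunk size are illustrative):

    mysql> SET session rocksdb_commit_in_the_middle = 1;
    mysql> SET session rocksdb_bulk_load_size = 10000;
    mysql> LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1;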

    Warning

    If you are loading large data without enabling rocksdb_bulk_load or rocksdb_commit_in_the_middle, please make sure transaction size is small enough. All modifications of the ongoing transactions are kept in memory.

    "},{"location":"data-loading.html#other-reading","title":"Other reading","text":"
    • Data Loading - this document has been used as a source for writing this documentation

    • ALTER TABLE ... ENGINE=ROCKSDB uses too much memory

    "},{"location":"data-masking-comparison.html","title":"Compare the data masking component to the data masking plugin","text":"

    The Data Masking component feature is in tech preview.

    Percona Server for MySQL 8.0.34 introduces a data masking component that operates like a plugin but features a different architecture, enhancing the server's functionality. Below are the main differences between the component and the plugin:

    | Scenario | Description |
    | --- | --- |
    | Character set support | The component allows multi-byte character sets for general-purpose masking functions, while the plugin does not. |
    | Masking capabilities | The component can mask PAN, SSN, IBAN, UUID, Canada SIN, and UK NIN. In contrast, the plugin only handles PAN and SSN. |
    | Data generation | The component generates random email, US phone, PAN, SSN, IBAN, UUID, Canada SIN, and UK NIN data, while the plugin generates fewer types: email, US phone, PAN, and SSN. |
    | Dictionary storage | The component stores substitution dictionaries in the database, as opposed to the plugin, which keeps these dictionaries in a file. |
    | Privilege management | The component uses the MASKING_DICTIONARIES_ADMIN privilege for dictionary management, while the plugin requires the FILE privilege. |
    | Function handling | The component automatically registers or unregisters loadable functions during installation or uninstallation, while the plugin does not offer this automatic process. |
    "},{"location":"data-masking-comparison.html#additional-resources","title":"Additional resources","text":"

    Install the data masking component

    Data masking component functions

    "},{"location":"data-masking-function-list.html","title":"Data masking component functions","text":"

    The feature is in tech preview.

    | Name | Usage |
    | --- | --- |
    | gen_blocklist(str, from_dictionary_name, to_dictionary_name) | Replace a term from a dictionary |
    | gen_dictionary(dictionary_name) | Returns a random term from a dictionary |
    | gen_range(lower, upper) | Returns a number from a range |
    | gen_rnd_canada_sin() | Generates a Canadian Social Insurance number |
    | gen_rnd_email([name_size, surname_size, domain]) | Generates an email address |
    | gen_rnd_iban([country, size]) | Generates an International Bank Account number |
    | gen_rnd_pan() | Generates a Primary account number for a payment card |
    | gen_rnd_ssn() | Generates a US Social Security number |
    | gen_rnd_uk_nin() | Generates a United Kingdom National Insurance number |
    | gen_rnd_us_phone() | Generates a US phone number |
    | gen_rnd_uuid() | Generates a Universally Unique Identifier |
    | mask_canada_sin(str [,mask_char]) | Masks the Canadian Social Insurance number |
    | mask_iban(str [,mask_char]) | Masks the International Bank Account number |
    | mask_inner(str, margin1, margin2 [,mask_char]) | Masks the inner part of a string |
    | mask_outer(str, margin1, margin2 [,mask_char]) | Masks the outer part of the string |
    | mask_pan(str [,mask_char]) | Masks the Primary Account number for a payment card |
    | mask_pan_relaxed(str [,mask_char]) | Partially masks the Primary Account number for a payment card |
    | mask_ssn(str [,mask_char]) | Masks the US Social Security number |
    | mask_uk_nin(str [,mask_char]) | Masks the United Kingdom National Insurance number |
    | mask_uuid(str [,mask_char]) | Masks the Universally Unique Identifier |
    | masking_dictionary_remove(dictionary_name) | Removes the dictionary |
    | masking_dictionary_term_add(dictionary_name, term_name) | Adds a term to the masking dictionary |
    | masking_dictionary_term_remove(dictionary_name, term_name) | Removes a term from the masking dictionary |
    "},{"location":"data-masking-function-list.html#gen_blockliststr-from_dictionary_name-to_dictionary_name","title":"gen_blocklist(str, from_dictionary_name, to_dictionary_name)","text":"

    Replaces a term from one dictionary with a randomly selected term in another dictionary.

    "},{"location":"data-masking-function-list.html#parameters","title":"Parameters","text":"Parameter Optional Description Type term No The term to replace String from_dictionary_name No The dictionary that stores the term. String to_dictionary_name No The dictionary that stores the replacement term String"},{"location":"data-masking-function-list.html#returns","title":"Returns","text":"

    A term, selected at random, from the dictionary listed in to_dictionary_name that replaces the selected term. If the selected term is not listed in the from_dictionary_name or a dictionary is missing, then the term itself is returned. If the to_dictionary_name does not exist, then NULL is returned. The character set of the returned string is the same as the character set of the term parameter.

    Returns NULL if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example","title":"Example","text":"
    mysql> SELECT gen_blocklist('apple', 'fruit', 'nut');\n
    Expected output
    +-----------------------------------------+\n| gen_blocklist('apple', 'fruit', 'nut')  |\n+-----------------------------------------+\n| walnut                                  |\n+-----------------------------------------+\n
    "},{"location":"data-masking-function-list.html#gen_dictionarydictionary_name","title":"gen_dictionary(dictionary_name)","text":"

    Returns a randomly selected term from a dictionary.

    "},{"location":"data-masking-function-list.html#parameters_1","title":"Parameters","text":"Parameter Optional Description Type dictionary_name No Select the random term from this dictionary String"},{"location":"data-masking-function-list.html#returns_1","title":"Returns","text":"

    A random term from the dictionary listed in dictionary_name in the utf8mb4 character set. Returns NULL if the dictionary_name does not exist.

    "},{"location":"data-masking-function-list.html#example_1","title":"Example","text":"
    mysql> SELECT gen_dictionary('trees');\n
    Expected output
    +--------------------------------------------------+\n| gen_dictionary('trees')                          |\n+--------------------------------------------------+\n| Norway spruce                                    |\n+--------------------------------------------------+\n
    "},{"location":"data-masking-function-list.html#gen_rangelower-upper","title":"gen_range(lower, upper)","text":"

    Returns a number from a defined range.

    "},{"location":"data-masking-function-list.html#parameters_2","title":"Parameters","text":"Parameter Optional Description Type lower No The lower boundary of the range Integer upper No The upper boundary of the range Integer

    The upper parameter value must be an integer greater than or equal to the lower parameter value.

    "},{"location":"data-masking-function-list.html#returns_2","title":"Returns","text":"

    An integer, selected at random, from an inclusive range defined by the lower parameter value and the upper parameter value, or NULL if the upper boundary is less than the lower boundary.

    "},{"location":"data-masking-function-list.html#example_2","title":"Example","text":"
    mysql> SELECT gen_range(10, 100);\n
    Expected output
    +--------------------------------------+\n| gen_range(10,100)                    |\n+--------------------------------------+\n| 56                                   |\n+--------------------------------------+\n
    "},{"location":"data-masking-function-list.html#gen_rnd_canada_sin","title":"gen_rnd_canada_sin()","text":"

    Generates a Canada Social Insurance Number (SIN).

    Important

    Only use this function for testing because the result could be a legitimate SIN. Use mask_canada_sin to disguise the result if you must publish the result.

    "},{"location":"data-masking-function-list.html#parameters_3","title":"Parameters","text":"

    None.

    "},{"location":"data-masking-function-list.html#returns_3","title":"Returns","text":"

    Returns a Canada SIN formatted in three groups of three digits (for example, 123-456-789) in the utf8mb4 character set. To ensure the number is consistent, the number is verified with the Luhn algorithm.

    "},{"location":"data-masking-function-list.html#example_3","title":"Example","text":"
    mysql> SELECT gen_rnd_canada_sin();\n
    Expected output
    +-------------------------+\n| gen_rnd_canada_sin()    |\n+-------------------------+\n| 506-948-819             |\n+-------------------------+\n
    "},{"location":"data-masking-function-list.html#gen_rnd_emailname_size-surname_size-domain","title":"gen_rnd_email([name_size, surname_size, domain])","text":"

    Generates a random email address in the name.surname@domain format.

    "},{"location":"data-masking-function-list.html#parameters_4","title":"Parameters","text":"Parameter Optional Description Type name_size Yes Specifies the number of characters in the name part. The default number is five. The minimum number is one. The maximum number is 1024. Integer surname_size Yes Specifies the number of characters in the surname part. The default number is seven. The minimum number is one. The maximum number is 1024. Integer domain Yes Specifies the domain name used. The default value is example.com. Integer"},{"location":"data-masking-function-list.html#returns_4","title":"Returns","text":"

    A generated email address as a string in the same character set as domain. If the domain value is not specified, then the string is in the utf8mb4 character set. The name and surname are random lower-case letters (a - z).

    "},{"location":"data-masking-function-list.html#example_4","title":"Example","text":"
    mysql> SELECT gen_rnd_email(4, 5, 'mydomain.edu');\n
    Expected output
    +-------------------------------------+\n| gen_rnd_email(4, 5, 'mydomain.edu') |\n+-------------------------------------+\n| qwer.asdfg@mydomain.edu             |\n+-------------------------------------+\n
    "},{"location":"data-masking-function-list.html#gen_rnd_ibancountry-size","title":"gen_rnd_iban([country, size])","text":"

Generates an International Bank Account Number (IBAN).

    Important

    Generating an IBAN with a valid country code should only be used for testing. The function does not check if the generated value is a legitimate bank account. If you must publish the result, consider using mask_iban to disguise the result. The function does not perform a checksum on the bank account number.

    "},{"location":"data-masking-function-list.html#parameters_5","title":"Parameters","text":"Parameter Optional Description Type country Yes A two-character country code String size Yes Number of characters Integer

    If the country is not specified, the default value is ZZ. The value must be two upper-case characters (A-Z) or an error is returned.

    The default value for size is 16. The minimum value is 15. The maximum value is 34.

    "},{"location":"data-masking-function-list.html#returns_5","title":"Returns","text":"

The function returns a string whose length equals the size value. The string consists of the country code (two characters) followed by (size - 2) random digits.

    The character set is the same as the country parameter or if that parameter is not specified, the character set is utf8mb4.

    "},{"location":"data-masking-function-list.html#example_5","title":"Example","text":"
    mysql> SELECT gen_rnd_iban();\n
    Expected output
    +-------------------+\n| gen_rnd_iban()    |\n+-------------------+\n|ZZ78959120078536   |\n+-------------------+\n
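You can also pass an explicit country code and size, following the documented signature. This call is a sketch; the digits in the output are random and shown for illustration only:

mysql> SELECT gen_rnd_iban('UA', 20);\n
Expected output
+------------------------+\n| gen_rnd_iban('UA', 20) |\n+------------------------+\n| UA958591200785361234   |\n+------------------------+\n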
    "},{"location":"data-masking-function-list.html#gen_rnd_pan","title":"gen_rnd_pan()","text":"

    Generates a Primary Account Number (PAN) for a payment card that passes basic checksum validation.

    The generated PAN can be one of the following:

    • American Express

    • Visa

    • Mastercard

    • Discover

    Important

    Generating the PAN should only be used for testing. The function does not check if the generated value is a legitimate primary account number. If you must publish the result, consider using mask_pan or mask_pan_relaxed() to disguise the result.

    "},{"location":"data-masking-function-list.html#parameters_6","title":"Parameters","text":"

    None

    "},{"location":"data-masking-function-list.html#returns_6","title":"Returns","text":"

    A random PAN string in utf8mb4 character set.

    "},{"location":"data-masking-function-list.html#example_6","title":"Example","text":"
    mysql> SELECT gen_rnd_pan();\n
    Expected output
    +-------------------+\n| gen_rnd_pan()     |\n+-------------------+\n| 1234567898765432  |\n+-------------------+\n
    "},{"location":"data-masking-function-list.html#gen_rnd_ssn","title":"gen_rnd_ssn()","text":"

    Generates a United States Social Security Account Number (SSN).

    "},{"location":"data-masking-function-list.html#parameters_7","title":"Parameters","text":"

    None

    "},{"location":"data-masking-function-list.html#returns_7","title":"Returns","text":"

An SSN string in the nine-digit \u201cAAA-GG-SSSS\u201d format in the utf8mb4 character set. The number has three parts: the first three digits are the area number, the next two digits are the group number, and the last four digits are the serial number. The generated SSN uses \u2018900\u2019 or greater for the area number. These numbers are not legitimate because they are outside the approved range.

    "},{"location":"data-masking-function-list.html#example_7","title":"Example","text":"
    mysql> SELECT gen_rnd_ssn();\n
    Expected output
+----------------+\n| gen_rnd_ssn()  |\n+----------------+\n| 970-03-0370    |\n+----------------+\n
    "},{"location":"data-masking-function-list.html#gen_rnd_uk_nin","title":"gen_rnd_uk_nin()","text":"

    Generates a United Kingdom National Insurance Number (NIN).

    Important

    This function should only be used for testing. The function does not check if the generated value is a legitimate United Kingdom National Insurance number. If you must publish the result, consider masking the result with mask_uk_nin.

    "},{"location":"data-masking-function-list.html#parameters_8","title":"Parameters","text":"

    None.

    "},{"location":"data-masking-function-list.html#returns_8","title":"Returns","text":"

    A NIN string in the utf8mb4 character set. The string is nine (9) characters in length, always starts with \u2018AA\u2019 and ends with \u2018C\u2019.

    "},{"location":"data-masking-function-list.html#example_8","title":"Example","text":"
    mysql> SELECT gen_rnd_uk_nin();\n
    Expected output
    +----------------------+\n| gen_rnd_uk_nin()     |\n+----------------------+\n| AA123456C            |\n+----------------------+\n
    "},{"location":"data-masking-function-list.html#gen_rnd_us_phone","title":"gen_rnd_us_phone()","text":"

    Generates a United States phone number with the 555 area code. The \u2018555\u2019 area code represents fictional numbers.

    "},{"location":"data-masking-function-list.html#parameters_9","title":"Parameters","text":"

    None

    "},{"location":"data-masking-function-list.html#returns_9","title":"Returns","text":"

    Returns a United States phone number in the utf8mb4 character set.

    "},{"location":"data-masking-function-list.html#example_9","title":"Example","text":"
    mysql> SELECT gen_rnd_us_phone();\n
    Expected output
    +--------------------+\n| gen_rnd_us_phone() |\n+--------------------+\n| 1-555-249-2029     |\n+--------------------+\n
    "},{"location":"data-masking-function-list.html#gen_rnd_uuid","title":"gen_rnd_uuid()","text":"

    Generates a version 4 Universally Unique Identifier (UUID).

    "},{"location":"data-masking-function-list.html#parameters_10","title":"Parameters","text":"

    None.

    "},{"location":"data-masking-function-list.html#returns_10","title":"Returns","text":"

    Returns a UUID as a string in the utf8mb4 character set.

    "},{"location":"data-masking-function-list.html#example_10","title":"Example","text":"
    mysql> SELECT gen_rnd_uuid();\n
    Expected output
    +------------------------------------+\n| gen_rnd_uuid()                     |\n+------------------------------------+\n|9a3b642c-06c6-11ee-be56-0242ac120002|\n+------------------------------------+\n
    "},{"location":"data-masking-function-list.html#mask_canada_sinstr-mask_char","title":"mask_canada_sin(str [,mask_char])","text":"

    Masks a Canada Social Insurance Number (SIN).

    "},{"location":"data-masking-function-list.html#parameters_11","title":"Parameters","text":"Parameter Optional Description Type str No The string to be masked String mask_char Yes The masking character String

    The str accepts an alphanumeric string.

If you do not specify a mask_char, the default character is X. The mask_char value can be a multibyte character in any character set and does not need to be in the same character set as str.

    "},{"location":"data-masking-function-list.html#returns_11","title":"Returns","text":"

    A string with the selected characters masked by a specified mask_char or the default value for that parameter. The function supports multibyte characters in any character set. The character set of the return value is the same as str.

An error is reported if str is an incorrect length.

Returns NULL if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_11","title":"Example","text":"
    mysql> SELECT mask_canada_sin('555-555-555');\n
    Expected output
    +--------------------------------+\n| mask_canada_sin('555-555-555') |\n+--------------------------------+\n| XXX-XXX-XXX                    |\n+--------------------------------+\n
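To use a different masking character, pass it as the second argument. The masking is deterministic, so the output follows directly from the inputs:

mysql> SELECT mask_canada_sin('555-555-555', '#');\n
Expected output
+-------------------------------------+\n| mask_canada_sin('555-555-555', '#') |\n+-------------------------------------+\n| ###-###-###                         |\n+-------------------------------------+\n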
    "},{"location":"data-masking-function-list.html#mask_ibanstr-mask_char","title":"mask_iban(str [,mask_char])","text":"

Masks an International Bank Account Number (IBAN).

    "},{"location":"data-masking-function-list.html#parameters_12","title":"Parameters","text":"Parameter Optional Description Type str No The string to be masked String mask_char Yes Character used for masking String

    The str accepts either of the following:

    • No separator symbol

    • Groups of four characters. These groups can be separated by a space or any separator character.

The default value for mask_char is *. The value can be a multibyte character in any character set and does not need to be in the same character set as str.

    "},{"location":"data-masking-function-list.html#returns_12","title":"Returns","text":"

    Returns the masked string. The character set of the result is the same as the character set of str.

    An error is reported if the str length is incorrect.

    Returns NULL if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_12","title":"Example","text":"
    mysql> SELECT mask_iban('DE27 1002 02003 77495 4156');\n
    Expected output
    +---------------------------------------------+\n| mask_iban('DE27 1002 02003 77495 4156')     |\n+---------------------------------------------+\n| DE** **** **** **** ****                    |\n+---------------------------------------------+\n
    "},{"location":"data-masking-function-list.html#mask_innerstr-margin1-margin2-mask_char","title":"mask_inner(str, margin1, margin2 [,mask_char])","text":"

    Returns the string where a selected inner portion is masked with a substitute character.

    "},{"location":"data-masking-function-list.html#parameters_13","title":"Parameters","text":"Parameter Optional Description Type string No The string to be masked String margin1 No The number of characters on the left end of the string to remain unmasked Integer margin2 No The number of characters on the right end of the string to remain unmasked Integer mask_char Yes The masking character String

    The margin1 value cannot be a negative number. A value of 0 (zero) masks all characters.

    The margin2 value cannot be a negative number. A value of 0 (zero) masks all characters.

    If the sum of margin1 and margin2 is greater than or equal to the string length, no masking occurs.

If the mask_char is not specified, the default is \u2018X\u2019. The mask_char value can be a multibyte character in any character set and does not need to be in the same character set as str.

    "},{"location":"data-masking-function-list.html#returns_13","title":"Returns","text":"

    A string with the selected characters masked by a specified mask_char or that parameter\u2019s default value in the character set of the string parameter.

    Returns NULL if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_13","title":"Example","text":"
    mysql> SELECT mask_inner('123456789', 1, 2);\n
    Expected output
    +-----------------------------------+\n| mask_inner('123456789', 1, 2)     |\n+-----------------------------------+\n| 1XXXXXX89                          |\n+-----------------------------------+\n
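The optional mask_char argument changes the masking character. The output is deterministic for a given input:

mysql> SELECT mask_inner('123456789', 2, 2, '#');\n
Expected output
+------------------------------------+\n| mask_inner('123456789', 2, 2, '#') |\n+------------------------------------+\n| 12#####89                          |\n+------------------------------------+\n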
    "},{"location":"data-masking-function-list.html#mask_outerstr-margin1-margin2-mask_char","title":"mask_outer(str, margin1, margin2 [,mask_char])","text":"

    Returns the string where a selected outer portion is masked with a substitute character.

    "},{"location":"data-masking-function-list.html#parameters_14","title":"Parameters","text":"Parameter Optional Description Type string No The string to be masked String margin1 No On the left end of the string, mask this designated number of characters Integer margin2 No On the right end of the string, mask this designated number of characters Integer mask_char Yes The masking character String

    The margin1 cannot be a negative number. A value of 0 (zero) does not mask any characters.

    The margin2 cannot be a negative number. A value of 0 (zero) does not mask any characters.

If the sum of margin1 and margin2 is greater than or equal to the string length, the entire string is masked.

If the mask_char is not specified, the default is \u2018X\u2019. The mask_char value can be a multibyte character in any character set and does not need to be in the same character set as str.

    "},{"location":"data-masking-function-list.html#returns_14","title":"Returns","text":"

    A string with the selected characters masked by a specified mask_char or that parameter\u2019s default value in the same character set as string.

    Returns NULL if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_14","title":"Example","text":"
mysql> SELECT mask_outer('123456789', 2, 2);\n
    Expected output
+-------------------------------+\n| mask_outer('123456789', 2, 2) |\n+-------------------------------+\n| XX34567XX                     |\n+-------------------------------+\n
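As with mask_inner, you can supply a custom masking character as the optional fourth argument:

mysql> SELECT mask_outer('123456789', 2, 2, '*');\n
Expected output
+------------------------------------+\n| mask_outer('123456789', 2, 2, '*') |\n+------------------------------------+\n| **34567**                          |\n+------------------------------------+\n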
    "},{"location":"data-masking-function-list.html#mask_panstr-mask_char","title":"mask_pan(str [,mask_char])","text":"

Returns a masked payment card Primary Account Number (PAN). The mask replaces the PAN with the specified character except for the last four digits.

    "},{"location":"data-masking-function-list.html#parameters_15","title":"Parameters","text":"Parameter Optional Description Type str No The string to be masked String mask_char Yes The masking character String

The str must contain a minimum of 14 and a maximum of 19 alphanumeric characters.

If the mask_char is not specified, the default value is \u2018X\u2019. The mask_char value can be a multibyte character in any character set and does not need to be in the same character set as str.

    "},{"location":"data-masking-function-list.html#returns_15","title":"Returns","text":"

    A string with the selected characters masked by a specified mask_char or that parameter\u2019s default value. The character set of the result is the same character set as str.

    An error occurs if the str parameter is not the correct length.

    Returns NULL if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_15","title":"Example","text":"
    mysql> SELECT mask_pan (gen_rnd_pan());\n
    Expected output
    +------------------------------------+\n| mask_pan(gen_rnd_pan())            |\n+------------------------------------+\n| XXXXXXXXXXX2345                    |\n+------------------------------------+\n
    "},{"location":"data-masking-function-list.html#mask_pan_relaxedstr-mask_char","title":"mask_pan_relaxed(str [,mask_char])","text":"

Returns a masked payment card Primary Account Number (PAN). The first six and the last four digits remain visible; the rest of the string is masked by the specified character or X.

    "},{"location":"data-masking-function-list.html#parameters_16","title":"Parameters","text":"Parameter Optional Description Type str No The string to be masked String mask_char Yes The specified character for masking String

The str must contain a minimum of 14 and a maximum of 19 alphanumeric characters.

    If the mask_char is not specified, the default value is \u2018X\u2019.

    "},{"location":"data-masking-function-list.html#returns_16","title":"Returns","text":"

A string with the first six and the last four digits visible and the rest of the string masked by a specified mask_char or that parameter\u2019s default value (X). The character set of the result is the same character set as str.

The mask_char value can be a multibyte character in any character set and does not need to be in the same character set as str.

Reports an error if the str parameter is not the correct length.

    Returns NULL if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_16","title":"Example","text":"
    mysql> SELECT mask_pan_relaxed(gen_rnd_pan());\n
    Expected output
    +------------------------------------------+\n| mask_pan_relaxed(gen_rnd_pan())          |\n+------------------------------------------+\n| 520754XXXXXX4848                         |\n+------------------------------------------+\n
    "},{"location":"data-masking-function-list.html#mask_ssnstr-mask_char","title":"mask_ssn(str [,mask_char])","text":"

Returns a masked United States Social Security Number (SSN). The mask replaces the SSN with the specified character except for the last four digits.

    "},{"location":"data-masking-function-list.html#parameters_17","title":"Parameters","text":"Parameter Optional Description Type str No The string to be masked String mask_char Yes The masking character String

    The str accepts either of the following:

• Nine digits with no separator symbol
• Nine digits in the AAA-GG-SSSS pattern. The - (dash symbol) is the separator character.

If the mask_char is not specified, the default value is *. The mask_char value can be a multibyte character in any character set and does not need to be in the same character set as str.

    "},{"location":"data-masking-function-list.html#returns_17","title":"Returns","text":"

A string with the selected characters masked by a specified mask_char or that parameter\u2019s default value in the same character set as str.

    Reports an error if the value of the str is an incorrect length.

    Returns a NULL value if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_17","title":"Example","text":"
    mysql> SELECT mask_ssn('555-55-5555', 'X');\n
    Expected output
    +-----------------------------+\n| mask_ssn('555-55-5555','X') |\n+-----------------------------+\n| XXX-XX-5555                 |\n+-----------------------------+\n
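If you omit mask_char, the default * character is used:

mysql> SELECT mask_ssn('555-55-5555');\n
Expected output
+-------------------------+\n| mask_ssn('555-55-5555') |\n+-------------------------+\n| ***-**-5555             |\n+-------------------------+\n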
    "},{"location":"data-masking-function-list.html#mask_uk_ninstr-mask_char","title":"mask_uk_nin(str [,mask_char])","text":"

Returns a masked United Kingdom National Insurance Number (NIN). The mask replaces the NIN with the specified character except for the first two characters.

    "},{"location":"data-masking-function-list.html#parameters_18","title":"Parameters","text":"Parameter Optional Description Type str No The string to be masked String mask_char Yes The masking character String

The str accepts an alphanumeric string. The function does not check the format, and the str can use any separator character.

If the mask_char is not specified, the default value is *. The mask_char value can be a multibyte character in any character set and does not need to be in the same character set as str.

    "},{"location":"data-masking-function-list.html#returns_18","title":"Returns","text":"

    Returns a string with the selected characters masked by a specified mask_char or that parameter\u2019s default value in the same character set as str.

    An error occurs if the str parameter is not the correct length.

    Returns a NULL value if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_18","title":"Example","text":"
    mysql> SELECT mask_uk_nin ('CT 26 46 83 D');\n
    Expected output
    +------------------------------------+\n| mask_uk_nin('CT 26 46 83 D')       |\n+------------------------------------+\n| CT ** ** ** *                      |\n+------------------------------------+\n
    "},{"location":"data-masking-function-list.html#mask_uuidstr-mask_char","title":"mask_uuid(str [,mask_char])","text":"

    Masks a Universally Unique Identifier (UUID).

    "},{"location":"data-masking-function-list.html#parameters_19","title":"Parameters","text":"Parameter Optional Description Type str No The string to be masked String mask_char Yes The masking character String

    The str format is ********-****-****-****-************.

If the mask_char is not specified, the default value is \u2018*\u2019. The mask_char value can be a multibyte character in any character set and does not need to be in the same character set as str.

    "},{"location":"data-masking-function-list.html#returns_19","title":"Returns","text":"

    A string with the characters masked by a specified mask_char or that parameter\u2019s default value in the same character set as str.

    Returns an error if the length of str is incorrect.

    Returns NULL if you invoke this function with NULL as the primary argument.

    "},{"location":"data-masking-function-list.html#example_19","title":"Example","text":"
    mysql> SELECT mask_uuid('9a3b642c-06c6-11ee-be56-0242ac120002');\n
    Expected output
+-------------------------------------------------------+\n| mask_uuid('9a3b642c-06c6-11ee-be56-0242ac120002')     |\n+-------------------------------------------------------+\n| ********-****-****-****-************                  |\n+-------------------------------------------------------+\n
    "},{"location":"data-masking-function-list.html#masking_dictionary_removedictionary_name","title":"masking_dictionary_remove(dictionary_name)","text":"

    Removes all of the terms and then removes the dictionary.

    Requires the MASKING_DICTIONARIES_ADMIN privilege.

    "},{"location":"data-masking-function-list.html#parameters_20","title":"Parameters","text":"Parameter Optional Description Type dictionary_name No The dictionary to be removed String"},{"location":"data-masking-function-list.html#returns_20","title":"Returns","text":"

    Returns a string value of 1 (one) in the utf8mb4 character set if the operation is successful or NULL if the operation could not find the dictionary_name.

    "},{"location":"data-masking-function-list.html#example_20","title":"Example","text":"
    mysql> SELECT masking_dictionary_remove('trees');\n
    Expected output
    +------------------------------------------+\n| masking_dictionary_remove('trees')       |\n+------------------------------------------+\n|                                        1 |\n+------------------------------------------+\n
    "},{"location":"data-masking-function-list.html#masking_dictionary_term_adddictionary_name-term_name","title":"masking_dictionary_term_add(dictionary_name, term_name)","text":"

Adds a term to the dictionary.

Requires the MASKING_DICTIONARIES_ADMIN privilege.

    "},{"location":"data-masking-function-list.html#parameters_21","title":"Parameters","text":"Parameter Optional Description Type dictionary_name No The dictionary where the term is added String term_name No The term added to the selected dictionary String"},{"location":"data-masking-function-list.html#returns_21","title":"Returns","text":"

    Returns a string value of 1 (one) in the utf8mb4 character set if the operation is successful. If the dictionary_name does not exist, the operation creates the dictionary.

    Returns NULL if the operation fails. An operation can fail if the term_name is already available in the dictionary specified by dictionary_name.

    The operation uses INSERT IGNORE and can have the following outcomes:

• The term_name is truncated if the term_name length is greater than the maximum length of the Term field in the mysql.masking_dictionaries table.

• If a character in the dictionary_name is not supported by the Dictionary field in the mysql.masking_dictionaries table, the character is implicitly converted to \u2018?\u2019.

• If a character in the term_name is not supported by the Term field in the mysql.masking_dictionaries table, the character is implicitly converted to \u2018?\u2019.

    The following command returns the table information:

    mysql> DESCRIBE mysql.masking_dictionaries;\n

    The result returns the table structure.

    Expected output
    +------------+--------------+------+-----+---------+-------+\n| Field      | Type         | Null | Key | Default | Extra |\n+------------+--------------+------+-----+---------+-------+\n| Dictionary | varchar(256) | NO   | PRI | NULL    |       |\n| Term       | varchar(256) | NO   | PRI | NULL    |       |\n+------------+--------------+------+-----+---------+-------+\n2 rows in set (0.02 sec)\n

    Modify the table with an ALTER TABLE statement, if needed.
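For example, the following sketch widens the Term column; the new size is illustrative, so choose a value that fits your terms before running it:

mysql> ALTER TABLE mysql.masking_dictionaries MODIFY Term VARCHAR(512) NOT NULL;\n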

    "},{"location":"data-masking-function-list.html#example_21","title":"Example","text":"
    mysql> SELECT masking_dictionary_term_add('trees','pine');\n
    Expected output
    +-----------------------------------------------+\n| masking_dictionary_term_add('trees', 'pine')  |\n+-----------------------------------------------+\n|                                             1 |\n+-----------------------------------------------+\n
    "},{"location":"data-masking-function-list.html#masking_dictionary_term_removedictionary_name-term_name","title":"masking_dictionary_term_remove(dictionary_name, term_name)","text":"

    Removes the selected term from the dictionary.

    Requires the MASKING_DICTIONARIES_ADMIN privilege.

    "},{"location":"data-masking-function-list.html#parameters_22","title":"Parameters","text":"Parameter Optional Description Type dictionary_name No The dictionary that contains the term_name String term_name No The term to be removed String"},{"location":"data-masking-function-list.html#returns_22","title":"Returns","text":"

    Returns a string value of 1 (one) in the utf8mb4 character set if the operation is successful.

    Returns NULL if the operation fails. An operation can fail if the following occurs:

    • The term_name is not available in the dictionary specified by dictionary_name
• The dictionary_name could not be found
"},{"location":"data-masking-function-list.html#example_22","title":"Example","text":"
    mysql> SELECT masking_dictionary_term_remove('trees','pine');\n
    Expected output
    +-------------------------------------------------------+\n| masking_dictionary_term_remove('trees', 'pine')       |\n+-------------------------------------------------------+\n|                                                     1 |\n+-------------------------------------------------------+\n
    "},{"location":"data-masking-overview.html","title":"Data masking overview","text":"

    Data masking protects sensitive information by blocking unauthorized users from accessing the real data. This process creates altered versions of data for specific uses, like presentations, sales demonstrations, or software testing. The masked data keeps the same format as the original but contains changed values that cannot be reversed to reveal the true information. By making the data worthless to outsiders, masking helps organizations reduce their risk of data breaches or misuse. Companies can safely use masked data in various scenarios without exposing confidential details to unauthorized parties.

    Data masking in Percona Server for MySQL is an essential tool for protecting sensitive information in various scenarios:

    Scenario Description Protecting data in development and testing Developers and testers require realistic data to validate applications. By masking sensitive details, such as credit card numbers, Social Security numbers, and addresses, accurate user information can be safeguarded in non-production environments. Compliance with data privacy regulations Stringent laws like GDPR, HIPAA, and CCPA mandate the protection of personal data. Data masking enables the anonymization of personal information, facilitating its use for analysis and reporting while ensuring compliance with regulations. Securing data when collaborating with external entities Sharing data with third-party vendors demands the masking of sensitive information to prevent access to accurate personal details. Supporting customer service and training Customer support teams and trainers often require access to customer data. Through data masking, they can utilize realistic information without compromising actual customer details. Facilitating data analysis and reporting Analysts rely on access to data for generating reports and uncovering insights. By employing data masking techniques, they can work with realistic data sets without compromising privacy.

    These examples underscore how data masking serves as a crucial safeguard for sensitive information, allowing organizations to leverage their data effectively across diverse functions.


    "},{"location":"data-masking-overview.html#data-masking-techniques","title":"Data masking techniques","text":"

    The common data masking techniques are the following:

    Technique Description Custom string Replaces sensitive data with a specific string, such as a phone number with XXX-XXX-XXXX Data substitution Replaces sensitive data with realistic alternative values, such as city name with another name from a dictionary"},{"location":"data-masking-overview.html#additional-resources","title":"Additional resources","text":"

    Component:

    Install the data masking component

    Data masking component functions

    Plugin:

    Install data masking plugin

Data masking plugin functions

    "},{"location":"data-masking-plugin-functions.html","title":"Data masking plugin functions","text":"

    This feature was implemented in Percona Server for MySQL 8.0.17-8.

    The Percona Data Masking plugin is a free and Open Source implementation of the MySQL\u2019s data masking plugin. Data Masking provides a set of functions to hide sensitive data with modified content.

Data masking has either of the following characteristics:

    • Generation of random data, such as an email address

• De-identification of data by transforming it to hide the content

    The data masking functions have the following categories:

    • General purpose

    • Special purpose

• Generating random data with defined characteristics

• Using dictionaries to generate random data

    "},{"location":"data-masking-plugin-functions.html#general-purpose","title":"General purpose","text":"

    The general purpose data masking functions are the following:

Function Description mask_inner(string, margin1, margin2 [, character]) Returns a result where only the inner part of a string is masked. A different masking character can be specified. mask_outer(string, margin1, margin2 [, character]) Masks the outer part of the string. The inner section is not masked. A different masking character can be specified."},{"location":"data-masking-plugin-functions.html#examples","title":"Examples","text":"

    An example of mask_inner:

    mysql> SELECT mask_inner('123456789', 1, 2);\n
    Expected output
    +-----------------------------------+\n| mask_inner('123456789', 1, 2)     |\n+-----------------------------------+\n|1XXXXXX89                          |\n+-----------------------------------+\n

    An example of mask_outer:

    mysql> SELECT mask_outer('123456789', 2, 2); \n
    Expected output
+-------------------------------+\n| mask_outer('123456789', 2, 2) |\n+-------------------------------+\n| XX34567XX                     |\n+-------------------------------+\n
    "},{"location":"data-masking-plugin-functions.html#special-purpose","title":"Special Purpose","text":"

    The special purpose data masking functions are as follows:

Function Description mask_pan(string) Masks the Primary Account Number (PAN) by replacing the string with an \u201cX\u201d except for the last four characters. The PAN string must be 15 or 16 characters in length. mask_pan_relaxed(string) Returns the first six numbers and the last four numbers. The rest of the string is replaced by \u201cX\u201d. mask_ssn(string) Returns a string with only the last four numbers visible. The rest of the string is replaced by \u201cX\u201d."},{"location":"data-masking-plugin-functions.html#examples_1","title":"Examples","text":"

    An example of mask_pan.

    mysql> SELECT mask_pan (gen_rnd_pan());\n
    Expected output
    +------------------------------------+\n| mask_pan(gen_rnd_pan())            |\n+------------------------------------+\n| XXXXXXXXXXX2345                    |\n+------------------------------------+\n

    An example of mask_pan_relaxed:

    mysql> SELECT mask_pan_relaxed(gen_rnd_pan());\n
    Expected output
    +------------------------------------------+\n| mask_pan_relaxed(gen_rnd_pan())          |\n+------------------------------------------+\n| 520754XXXXXX4848                         |\n+------------------------------------------+\n

    An example of mask_ssn:

    mysql> SELECT mask_ssn('555-55-5555');\n
    Expected output
    +-------------------------+\n| mask_ssn('555-55-5555') |\n+-------------------------+\n| XXX-XX-5555             |\n+-------------------------+\n
    "},{"location":"data-masking-plugin-functions.html#generate-random-data-for-specific-requirements","title":"Generate random data for specific requirements","text":"

    These functions generate random values for specific requirements.

Function Description gen_range(lower, upper) Generates a random number based on a selected range and supports negative numbers. gen_rnd_email() Generates a random email address. The domain is example.com. gen_rnd_pan([size in integer]) Generates a random primary account number. This function should only be used for test purposes. gen_rnd_us_phone() Generates a random U.S. phone number. The generated number adds the 1 dialing code and is in the 555 area code. The 555 area code is not valid for any U.S. phone number. gen_rnd_ssn() Generates a random, non-legitimate US Social Security Number in an AAA-BB-CCCC format. This function should only be used for test purposes."},{"location":"data-masking-plugin-functions.html#examples_2","title":"Examples","text":"

    An example of gen_range(lower, upper):

    mysql> SELECT gen_range(10, 100);\n
    Expected output
    +--------------------------------------+\n| gen_range(10,100)                    |\n+--------------------------------------+\n| 56                                   |\n+--------------------------------------+\n

    An example of gen_range(lower, upper) with negative numbers:

    mysql> SELECT gen_range(-100,-80);\n
    Expected output
    +--------------------------------------+\n| gen_range(-100,-80)                  |\n+--------------------------------------+\n| -91                                  |\n+--------------------------------------+\n

    An example of gen_rnd_email():

    mysql> SELECT gen_rnd_email();\n
    Expected output
    +---------------------------------------+\n| gen_rnd_email()                       |\n+---------------------------------------+\n| sma.jrts@example.com                  |\n+---------------------------------------+\n

    An example of mask_pan(gen_rnd_pan()):

    mysql> SELECT mask_pan(gen_rnd_pan());\n
    Expected output
    +-------------------------------------+\n| mask_pan(gen_rnd_pan())             |\n+-------------------------------------+\n| XXXXXXXXXXXX4444                    |\n+-------------------------------------+\n

    An example of gen_rnd_us_phone():

    mysql> SELECT gen_rnd_us_phone();\n
    Expected output
    +-------------------------------+\n| gen_rnd_us_phone()            |\n+-------------------------------+\n| 1-555-635-5709                |\n+-------------------------------+\n

    An example of gen_rnd_ssn():

mysql> SELECT gen_rnd_ssn();\n
    Expected output
    +-----------------------------+\n| gen_rnd_ssn()               |\n+-----------------------------+\n| 995-33-5656                 |\n+-----------------------------+\n
    "},{"location":"data-masking-plugin-functions.html#use-dictionaries-to-generate-random-terms","title":"Use dictionaries to generate random terms","text":"

    Use a selected dictionary to generate random terms. The dictionary must be loaded from a file with the following characteristics:

    • Plain text

    • One term per line

    • Must contain at least one entry

Copy the dictionary files to a directory accessible to MySQL. Percona Server for MySQL 8.0.21-12 enabled the use of the secure-file-priv option for gen_dictionary_load(). The secure-file-priv option defines the directories where gen_dictionary_load() loads the dictionary files.

    Note

    Percona Server for MySQL 8.0.34 deprecates the gen_blacklist() function. Use gen_blocklist() instead.

Function Description Returns gen_blacklist(str, dictionary_name, replacement_dictionary_name) Replaces a term with a term from a second dictionary. Deprecated in Percona Server for MySQL 8.0.34. A dictionary term gen_blocklist(str, dictionary_name, replacement_dictionary_name) Replaces a term with a term from a second dictionary. A dictionary term gen_dictionary(dictionary_name) Returns a random term from the dictionary. A random term from the selected dictionary gen_dictionary_drop(dictionary_name) Removes the selected dictionary from the dictionary registry. Either success or failure gen_dictionary_load(dictionary path, dictionary name) Loads a file into the dictionary registry and configures the dictionary name. The name can be used with any function. If the dictionary is edited, you must drop and then reload the dictionary to view the changes. Either success or failure"},{"location":"data-masking-plugin-functions.html#example","title":"Example","text":"

    An example of gen_blocklist():

    mysql> SELECT gen_blocklist('apple', 'fruit', 'nut');\n
    Expected output
    +-----------------------------------------+\n| gen_blocklist('apple', 'fruit', 'nut')  |\n+-----------------------------------------+\n| walnut                                  |\n+-----------------------------------------+\n

    An example of gen_dictionary():

    mysql> SELECT gen_dictionary('trees');\n
    Expected output
    +--------------------------------------------------+\n| gen_dictionary('trees')                          |\n+--------------------------------------------------+\n| Norway spruce                                    |\n+--------------------------------------------------+\n

    An example of gen_dictionary_drop():

mysql> SELECT gen_dictionary_drop('mytestdict');\n
    Expected output
    +-------------------------------------+\n| gen_dictionary_drop('mytestdict')   |\n+-------------------------------------+\n| Dictionary removed                  |\n+-------------------------------------+\n

    An example of gen_dictionary_load(path, name):

    mysql> SELECT gen_dictionary_load('/usr/local/mysql/dict-files/testdict', 'testdict');\n
    Expected output
+-------------------------------------------------------------------------+\n| gen_dictionary_load('/usr/local/mysql/dict-files/testdict', 'testdict') |\n+-------------------------------------------------------------------------+\n| Dictionary load successfully                                             |\n+-------------------------------------------------------------------------+\n
    "},{"location":"development.html","title":"Development of Percona Server for MySQL","text":"

    Percona Server for MySQL is an open source project to produce a distribution of the MySQL Server with improved performance, scalability and diagnostics.

    "},{"location":"development.html#submit-changes","title":"Submit changes","text":"

    We keep the trunk in a constant state of stability to allow for a release at any time and to minimize wasted time by developers due to broken code.

    "},{"location":"development.html#overview","title":"Overview","text":"

    At Percona we use Git for source control, GitHub for code hosting, and Jira for release management.

    We change our software to implement new features and/or to fix bugs. Refactoring could be classed either as a new feature or a bug depending on the scope of work.

    New features and bugs are targeted to specific releases. A release is part of a series. For example, 2.4 is a series in Percona XtraBackup and 2.4.15, 2.4.16, and 2.4.17 are releases in this series.

    Code is proposed for merging in the form of pull requests on GitHub.

    For Percona Server for MySQL, we have several Git branches on which development occurs: 5.5, 5.6, 5.7, and 8.0. As Percona Server for MySQL is not a traditional project, instead of being a set of patches against an existing product, these branches are not related. In other words, we do not merge from one release branch to another. To have your changes in several branches, you must propose branches to each release branch.

    "},{"location":"development.html#making-a-change-to-a-project","title":"Making a change to a project","text":"

    In this case, we are going to use percona-xtrabackup as an example. The workflow is similar for Percona Server for MySQL, but the patch will need to be modified in all release branches of Percona Server for MySQL.

    • git branch https://github.com/percona/percona-xtrabackup/featureX (where \u2018featureX\u2019 is a sensible name for the task at hand)

• The developer makes changes in featureX and tests them locally

• The developer pushes to https://github.com/percona/username/percona-xtrabackup/featureX

• The developer submits a pull request to https://github.com/percona/percona-xtrabackup

• The code undergoes a review

• Once the code is accepted, it can be merged

    If the change also applies to a stable release (e.g. 2.4) then changes should be made on a branch of 2.4 and merged to a branch of the trunk. In this case, there should be two branches run through the param build and two merge proposals (one for the stable release and one with the changes merged to the trunk). This prevents somebody else from having to guess how to merge your changes.

    "},{"location":"development.html#percona-server-for-mysql","title":"Percona Server for MySQL","text":"

    The same process for Percona Server for MySQL, but we have several different branches (and merge requests).

    "},{"location":"differences.html","title":"Differences between Percona MyRocks and Facebook MyRocks","text":"

    The original MyRocks was developed by Facebook and works with their implementation of MySQL. Percona MyRocks is a branch of MyRocks for Percona Server for MySQL and includes the following differences from the original implementation:

    • The behavior of the START TRANSACTION WITH CONSISTENT SNAPSHOT statement depends on the transaction isolation level.
    Storage Engine Transaction isolation level READ COMMITTED REPEATABLE READ InnoDB Success Success Facebook MyRocks Fail Success (MyRocks engine only; read-only, as all MyRocks engine snapshots) Percona MyRocks Fail with any DML which would violate the read-only snapshot constraint Success (read-only snapshots independent of the engines in use)
    • Percona MyRocks includes the lz4 and zstd statically linked libraries.
    "},{"location":"disable-audit-log-filter.html","title":"Disable Audit Log Filter logging","text":"

    The audit_log_filter_disable system variable lets you disable or enable logging for all connections.

    You can set the variable in the following ways:

    • Option file
    • Command-line startup string
    • SET statement during runtime
    mysql> SET GLOBAL audit_log_filter_disable = true;\n
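You can also set the variable in an option file. A minimal sketch, using the standard mysqld section:

[mysqld]\naudit_log_filter_disable = true\n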

    Setting audit_log_filter_disable has the following effect:

    Value Actions true Generates a warning. Audit log function calls and changes in variables generate session warnings. Disables the plugin. false Re-enables the plugin and generates a warning. This is the default value."},{"location":"disable-audit-log-filter.html#privileges-required","title":"Privileges required","text":"

    Setting the value of audit_log_filter_disable at runtime requires the following:

    • AUDIT_ADMIN privilege
    • SYSTEM_VARIABLES_ADMIN privilege
    "},{"location":"docker-config.html","title":"Docker environment variables","text":"

When running a Docker container with Percona Server, you can adjust the configuration of the instance by adding one or more environment variables to the docker run command.

These variables have no effect if you start the container with a data directory that already contains a database. Any pre-existing database remains untouched at container startup.

Each variable is optional on its own, but you must specify at least one of the following:

    • MYSQL_DATABASE - the database schema name that is created when the container starts

    • MYSQL_USER - create a user account when the container starts

    • MYSQL_PASSWORD - used with MYSQL_USER to create a password for that user account.

• MYSQL_ALLOW_EMPTY_PASSWORD - creates a root user with an empty password. This option is insecure and should only be used for testing or a proof of concept when the database can be removed afterward. Anyone can connect as root.

    • MYSQL_ROOT_PASSWORD - this password is used for the root user account. This option is not recommended for production.

• MYSQL_RANDOM_ROOT_PASSWORD - set this variable instead of MYSQL_ROOT_PASSWORD when you want Percona Server to generate a password for you. The generated password is available in the container\u2019s logs only during the first start of the container; view it with docker logs. You cannot retrieve the password after the first start.
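For example, the following sketch creates a database and a user account at first start and lets Percona Server generate a random root password. The schema, user, and password values are illustrative:

$ docker run -d \\\n  --name ps \\\n  -e MYSQL_DATABASE=mydb \\\n  -e MYSQL_USER=myuser \\\n  -e MYSQL_PASSWORD=mypassword \\\n  -e MYSQL_RANDOM_ROOT_PASSWORD=1 \\\n  percona/percona-server:8.0\n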

    To further secure your instance, use the MYSQL_ONETIME_PASSWORD variable.

These variables are visible to anyone able to run docker inspect.

    $ docker inspect ps\n
    Expected output
    ...\n \"Env\": [\n                \"MYSQL_ROOT_PASSWORD=root\",\n                \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n                \"PS_VERSION=8.0.29-21.1\",\n                \"OS_VER=el8\",\n                \"FULL_PERCONA_VERSION=8.0.29-21.1.el8\"\n               ]\n...\n

    You should use Docker secrets or volumes instead.

Percona Server for MySQL also allows adding the _FILE suffix to a variable name. This suffix lets you provide the value in a file at a given path so that the value cannot be inspected from outside the container.
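A sketch of the _FILE form, assuming the root password is stored in a file mounted into the container at /run/secrets/mysql-root (both paths are illustrative):

$ docker run -d \\\n  --name ps \\\n  -v /local/secrets:/run/secrets:ro \\\n  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root \\\n  percona/percona-server:8.0\n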

    "},{"location":"docker.html","title":"Run Percona Server for MySQL in a Docker Container","text":"

    Docker lets developers build, deploy, run, update, and manage containers, which isolate applications from the host system. Docker containers are made from Docker images, which are snapshots of the configuration needed to run an application, such as the code and libraries.

    Percona solutions

    Percona provides a range of solutions and services for open source databases. Some of the critical solutions offered are the following:

    • Database Distributions: Percona offers enhanced, enterprise-ready versions of popular open source databases, such as MySQL, MongoDB, and PostgreSQL

    • Backup Solutions: They provide tools for backing up your databases.

    • Clustering Solutions: Percona offers solutions for setting up and managing database clusters.

    • Observability, Monitoring, and Management Solutions: Percona Monitoring and Management (PMM) is an open source platform for managing and monitoring MySQL, MongoDB, and PostgreSQL performance.

• Kubernetes Operators: Percona provides Kubernetes operators for automated provisioning and management of databases in Kubernetes.

    These databases are supported across traditional deployments, cloud-based platforms, and hybrid IT environments. Percona\u2019s solutions are designed for peak performance, security, scalability, and availability. They also offer support and services for these databases, ensuring they run faster and more reliably.

    For this document, container refers to the Docker container and instance refers to the database server in the container.

    Reasons for deploying Percona Server in Docker

Deploying a Percona Server for MySQL container is an efficient way to set up a database quickly without using many resources. This type of deployment works best for small to medium applications.

There are several reasons to deploy a Percona Server for MySQL database in Docker:

    • Portability: Docker containers can run on any platform that supports Docker. This flexibility lets you move your database or installation steps from one platform to another.
    • Isolation: Docker containers are isolated from each other and the host system. This isolation means you can run multiple instances of MySQL on the same machine without interfering with each other or affecting the host\u2019s performance. You can also isolate your database from other applications or services that might pose a security risk or consume too many resources.
    • Scalability: Depending on the load and demand, you can scale docker containers up or down. You can use tools like Docker Compose or Kubernetes to orchestrate multiple containers and manage their configuration, networking, and deployment. You can also use Docker Swarm or Amazon ECS to distribute your containers across multiple nodes and achieve high availability and fault tolerance.
    • Versioning: Docker images and containers contain all the dependencies and configurations needed to run your application. You can use tags to specify different versions of your images and easily switch between them. You can also use Docker Hub or other registries to store and share your images with others.
    • Development: Docker containers can help you create a consistent and reproducible development environment that matches your production environment. You can use tools like Dockerfile or Docker Build to automate the creation of your images and ensure they have the same settings and packages as your production images. You can also use tools like Docker Volumes or Bind Mounts to persist and share your data between containers or the host system.

    Review Get more help for ways that we can work with you.

    Percona Server for MySQL has an official Docker image hosted on Docker Hub. If you want the latest version, use the latest tag. You can reference a specific version using the Docker tag filter for the 8.0 versions.

    We gather Telemetry data in the Percona packages and Docker images.

    Make sure that you are using the latest version of Docker. The apt and yum versions may be outdated and cause errors. Install Docker on your system.

    "},{"location":"docker.html#starting-a-detached-container","title":"Starting a detached container","text":"

You can start a container in the background with the --detach or -d option. In detached mode, the container exits when the root process used to run the container exits.

    Benefits of using Docker run

    The docker run command automatically pulls the image from a registry if that image is not available locally and starts a Docker container. A container is an isolated environment that runs an application on the host operating system. An image is a template that contains the application code and its dependencies. You can use this command to run an application in a container without affecting the host system or other containers.

    The benefits of using the Docker run command are:

    • Allows you to run applications consistently and safely across different platforms and environments.
    • Reduces the overhead and complexity of installing and configuring applications and their dependencies on the host system.
    • Improves the security and isolation of applications by limiting their access to the host resources and other containers.
    • Enables faster development and deployment cycles by allowing you to easily create, update, and destroy containers.

    The following example starts a container named ps with the latest version of Percona Server for MySQL 8.0. This action also creates the root user and uses root as the password. Please note that root is not a secure password.

    $ docker run -d \\\n  --name ps \\\n  -e MYSQL_ROOT_PASSWORD=root \\\n  percona/percona-server:8.0\n
    Expected output
    Unable to find image 'percona/percona-server:8.0' locally\n8.0: Pulling from percona/percona-server\n

    By default, Docker pulls the image from Docker Hub if it is not available locally.

    To view the container\u2019s logs, use the following command:

    $ docker logs ps --follow\n
    Expected output
    Initializing database\n2022-09-07T15:20:03.158128Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.29-21) initializing of server in progress as process 15\n2022-09-07T15:20:03.167764Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.\n2022-09-07T15:20:03.530600Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.\n2022-09-07T15:20:04.367600Z 0 [Warning] [MY-013829] [Server] Missing data directory for ICU regular expressions: /usr/lib64/mysql/private/.\n...\n2022-09-07T15:20:13.706090Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/lib/mysql/mysqlx.sock\n2022-09-07T15:20:13.706136Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.29-21'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona Server (GPL), Release 21, Revision c59f87d2854.\n

    You can access the server when you see the ready for connections information in the log.

    "},{"location":"docker.html#percona-server-for-mysql-arm64","title":"Percona Server for MySQL ARM64","text":"

    Percona Server for MySQL is available in the ARM64 architecture. You can find the version and architecture on Docker Hub Percona/Percona-Server Tags. Docker Hub provides images for multiple OS/ARCH combinations, letting you select the version and architecture that aligns with your specific system. Docker Hub has two elements for identifying and managing container images: tags and OS/ARCH.

    Tags are labels attached to Docker images. The tag identifies the different versions of the same image.

    You use a tag to do the following:

• Pull a specific version of an image (for example, percona/percona-server:8.0.35)

    • Access the latest version (for example, percona/percona-server:latest)

    The OS/ARCH refers to the combination of the operating system (OS) and architecture (ARCH) that an image is designed to run on. Common examples of OS/ARCH combinations include the following:

    • linux/amd64: Runs on Linux systems with 64-bit AMD or Intel processors

• linux/arm64/v8: Runs on ARM-based systems with a 64-bit architecture

Select the desired tag and verify the OS/ARCH (linux/arm64/v8). Add that tag to the docker run command. If you do not add a tag, Docker uses latest as the default tag and assumes the AMD64 architecture.

    For example, to download 8.0.35 in linux/arm64/v8, add the 8.0.35-aarch64 tag. The aarch64 section defines the architecture as ARM64. Run the following command:

    $ docker run -d \\\n  --name ps \\\n  -e MYSQL_ROOT_PASSWORD=root \\\n  percona/percona-server:8.0.35-aarch64\n
    Expected output
    Unable to find image 'percona/percona-server:8.0.35-aarch64' locally\n8.0.35-aarch64: Pulling from percona/percona-server\n0f09b26fb4cb: Pull complete\n...\n121231b07f2a: Pull complete\nDigest: sha256:610e7e3beffd09b8a037e2b172452d1231188a6ba12c414e7ffb306846f63b34 \nStatus: Downloaded newer image for percona/percona-server:8.0.35-aarch64\nc0de02ce3b84281e030a710e184f20a6b8f012133ca15a51ed6a35e75d1d8e22\n

    You can also use 8.0.35-27.1-multi. Docker selects the appropriate architecture.

    "},{"location":"docker.html#using-docker-platform-for-emulation","title":"Using Docker --platform for emulation","text":"

If you must use an AMD64 version on an ARM64 system, add --platform linux/amd64 to the docker run command to specify the target platform. This option lets you run images built for a different architecture using emulation.
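A sketch of an emulated run, reusing the container name and image from the earlier examples:

$ docker run --platform linux/amd64 -d \\\n  --name ps \\\n  -e MYSQL_ROOT_PASSWORD=root \\\n  percona/percona-server:8.0\n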

    Be aware of the following when running an emulation:

    • May be slower than running in the native architecture for compute-heavy tasks

    • Adds complexity to your environment and may require additional configuration

    • May find subtle architectural differences which lead to runtime issues

Be sure to test the image thoroughly before using it in production.

    "},{"location":"docker.html#passing-options","title":"Passing Options","text":"

    You can pass options with the docker run command. For example, the following command uses UTF-8 as the default setting for character set and collation for all databases:

    $ docker run -d \\\n --name ps \\\n -e MYSQL_ROOT_PASSWORD=root \\\n percona/percona-server:8.0 \\\n --character-set-server=utf8 \\\n --collation-server=utf8_general_ci\n
    "},{"location":"docker.html#accessing-the-percona-server-container","title":"Accessing the Percona Server Container","text":"

The docker exec command lets you open a shell inside the container. The -it option forwards your input stream as an interactive TTY.

    An example of accessing the detached container:

    $ docker exec -it ps /bin/bash\n

    If you need to troubleshoot, the error log is found in /var/log/ or /var/log/mysql/. The file name may be error.log or mysqld.log.

    "},{"location":"docker.html#troubleshooting","title":"Troubleshooting","text":"

    You can view the error log with the following command:

    [mysql@ps] $ more /var/log/mysql/error.log\n
    Expected output
    ...\n2017-08-29T04:20:22.190474Z 0 [Warning] 'NO_ZERO_DATE', 'NO_ZERO_IN_DATE' and 'ERROR_FOR_DIVISION_BY_ZERO' sql modes should be used with strict mode. They will be merged with strict mode in a future release.\n2017-08-29T04:20:22.190520Z 0 [Warning] 'NO_AUTO_CREATE_USER' sql mode was not set.\n...\n
    "},{"location":"docker.html#accessing-the-database","title":"Accessing the database","text":"

You can access the database either with docker exec or by using the mysql command-line client in the container's shell.

    An example of using Docker exec to access the database:

    $ docker exec -ti ps mysql -uroot -proot\n
    Expected output
    mysql: [Warning] Using a password on the command line interface can be insecure.\nWelcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 9\n...\n

Stopping the Percona Server process stops the container; exiting the mysql client only ends the docker exec session.

You can also run the MySQL command-line client within the container's shell to access the database:

    [mysql@ps] $ mysql -uroot -proot\n
    Expected output
    mysql: [Warning] Using a password on the command line interface can be insecure.\nWelcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 8\nServer version: 8.0.29-21 Percona Server (GPL), Release 21, Revision c59f87d2854\n\nCopyright (c) 2009-2022 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2022, Oracle and/or its affiliates.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n
    "},{"location":"docker.html#accessing-the-server-from-an-application-in-another-container","title":"Accessing the server from an application in another container","text":"

    The image exposes the standard MySQL port 3306, so container linking makes the Percona Server instance available from other containers. To link a container running your application (in this case, from an image named app/image) with the Percona Server container, run it with the following command:

    $ docker run -d \\\n  --name app \\\n  --link ps \\\n  app/image:latest\n

    This application container will be able to access the Percona Server container via port 3306.
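For example, assuming the application image includes the mysql client, you can verify connectivity from the linked container using the ps hostname:

$ docker exec -it app mysql -h ps -uroot -proot\n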

    "},{"location":"docker.html#storing-data","title":"Storing data","text":"

    There are two ways to store data used by applications that run in Docker containers:

    • Let Docker manage the storage of your data by writing the database files to disk on the host system using its internal volume management.

    • Create a data directory on the host system on high-performance storage and mount it to a directory visible from the container. This method places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The user should ensure that the directory exists, that the user accounts have required permissions, and that any other security mechanisms on the host system are set up correctly.

    For example, if you create a data directory on a suitable volume on your host system named /local/datadir, you run the container with the following command:

    $ docker run -d \\\n  --name ps \\\n  -e MYSQL_ROOT_PASSWORD=root \\\n  -v /local/datadir:/var/lib/mysql \\\n  percona/percona-server:8.0\n

    The -v /local/datadir:/var/lib/mysql option mounts the /local/datadir directory on the host to /var/lib/mysql in the container, which is the default data directory used by Percona Server for MySQL.

    Do not add MYSQL_ROOT_PASSWORD to the docker run command if the data directory contains subdirectories, files, or data.

    Note

    If you have SELinux enabled, assign the relevant policy type to the new data directory so that the container will be allowed to access it:

    $ chcon -Rt svirt_sandbox_file_t /local/datadir\n
    "},{"location":"docker.html#port-forwarding","title":"Port forwarding","text":"

    Docker allows mapping ports on the container to ports on the host system using the -p option. If you run the container with this option, you can connect to the database by connecting your client to a port on the host machine. This ability simplifies consolidating instances to a single host.

    To map the standard MySQL port 3306 to port 6603 on the host:

    $ docker run -d \\\n --name ps \\\n -e MYSQL_ROOT_PASSWORD=root \\\n -p 6603:3306 \\\n percona/percona-server:8.0\n
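Assuming the mysql client is installed on the host, you can then connect through the mapped port:

$ mysql -h 127.0.0.1 -P 6603 -uroot -proot\n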
    "},{"location":"docker.html#exiting-the-container","title":"Exiting the container","text":"

    If you are in the interactive shell, use CTRL-D or exit to exit the session.

    If you have a non-shell process running, interrupt the process with CTRL-C before using either CTRL-D or exit.

    "},{"location":"docker.html#stopping-the-container","title":"Stopping the container","text":"

    The docker stop container command sends a TERM signal, then waits 10 seconds and sends a KILL signal. The following example stops the ps container:

    $ docker stop ps\n

The default length of time before stopping a container is 10 seconds. A very large instance may not be able to flush its data from memory to disk within that time. For this type of instance, add the --time or the -t option to docker stop:

    $ docker stop ps -t 600\n
    "},{"location":"docker.html#removing-the-container","title":"Removing the container","text":"

    To remove a stopped container, use the docker rm command.

    $ docker rm ps\n
    "},{"location":"docker.html#for-more-information","title":"For more information","text":"

    Review the Docker Docs

    "},{"location":"downgrade-from-pro.html","title":"Downgrade from Percona Server for MySQL Pro","text":"

    If you want to downgrade from Percona Server for MySQL Pro to the same version of Percona Server for MySQL, do the following:

    Note

    In Percona Server for MySQL Pro 8.0.35-27, the downgrade from percona-mysql-router-pro to percona-mysql-router is not supported for Ubuntu 22.04.

On Debian and Ubuntu:
    1. Set up the Percona Server for MySQL 8.0 repository

      $ sudo percona-release setup ps80\n
    2. Stop the mysql server.

      $ sudo systemctl stop mysql\n
    3. Install the server package

      $ sudo apt install percona-server-server\n

      Install other required packages. Check files in the DEB package built for Percona Server for MySQL 8.0.

    4. Start the mysql server

      $ sudo systemctl start mysql\n

    Note

    On Debian 12, if you want to remove the Percona Server for MySQL after the downgrade, you must stop the server manually. This behavior will be fixed in future releases.

    $ sudo systemctl stop mysql\n
On RHEL and derivatives:

1. Set up the Percona Server for MySQL 8.0 repository

      $ sudo percona-release setup ps80\n
    2. Stop the mysql server.

      $ sudo systemctl stop mysql\n
    3. Install the server package

      $ sudo yum --allowerasing install percona-server-server\n

      Install other required packages. Check files in the RPM package built for Percona Server for MySQL 8.0.

    4. Start the mysql server

      $ sudo systemctl start mysql\n
    "},{"location":"downgrade.html","title":"Downgrade Percona Server for MySQL","text":"

    A downgrade from Percona Server for MySQL 8.0 to 5.7 is not supported.

    A downgrade from a Percona Server for MySQL 8.0 version to an earlier 8.0 version is not supported.

    Percona does not test a downgrade operation between versions.

    Each release of Percona Server for MySQL 8.0 can contain significant changes that are not backward-compatible. Restoring a backup to an earlier version may fail, for example, if your code uses a feature that does not exist in an earlier version.

    Before you upgrade to the latest release, do the following:

    • Make a full backup of your data and test the backup

    • Thoroughly test in a staging environment

    MySQL 8 Minor Version Upgrades Are ONE-WAY Only

    Review Get more help for ways that we can work with you.

    "},{"location":"download-instructions.html","title":"Percona Product Download Instructions","text":""},{"location":"download-instructions.html#select-the-software","title":"Select the software","text":"

    Do the following steps to select the software:

    1. Open Percona Product Downloads
    2. Locate the Percona Software, for example, Percona Server for MySQL
3. In Select Product, select the product, for example, Percona Server 8.0
    4. In Select Product Version, select the version, for example, PERCONA-SERVER-8.0.31-23
5. In Select Platform, select the operating system, for example, DEBIAN GNU/LINUX 12.0 ("Bookworm").

    The easiest method is to download all packages. The Package Download Options may mix AMD64 and ARM64 packages. Select the correct CPU architecture for your system.

    "},{"location":"download-instructions.html#download-to-a-local-computer","title":"Download to a local computer","text":"

In Package Download Options, select a specific package or select the DOWNLOAD ALL PACKAGES button.

    The selected packages are downloaded to the local computer.

    "},{"location":"download-instructions.html#download-to-another-computer","title":"Download to another computer","text":"

In Package Download Options, hover your cursor over the DOWNLOAD arrow for a specific package or for the DOWNLOAD ALL PACKAGES button, right-click, and select Copy Link from the drop-down menu.

    Paste the link in your terminal to download the selected package.
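For example, with wget (the placeholder stands for the link you copied):

$ wget <paste-the-copied-link>\n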

    "},{"location":"encrypting-binlogs.html","title":"Encrypt Binary Log Files and Relay Log Files","text":"

    Binary log file and relay log file encryption at rest ensures the server-generated binary logs are encrypted in persistent storage.

    "},{"location":"encrypting-binlogs.html#upgrade-from-percona-server-for-mysql-8015-5-or-later","title":"Upgrade from Percona Server for MySQL 8.0.15-5 or later","text":"

    As of 8.0.15-5, Percona Server for MySQL uses the upstream implementation of the binary log file and relay log file encryption.

The encrypt-binlog variable is removed, and the related command-line option --encrypt-binlog is not supported. It is important to remove the encrypt-binlog variable from your configuration file before you attempt to upgrade either from another release in the Percona Server for MySQL 8.0 series or from Percona Server for MySQL 5.7. Otherwise, the server reports an unknown variable error at startup.

The implemented binary log file encryption is compatible with the older format: encrypted binary log files written by earlier MySQL 8.0 or Percona Server for MySQL 8.0 releases remain supported.

    "},{"location":"encrypting-binlogs.html#architecture","title":"Architecture","text":"

Binary log encryption uses the following tiers:

    • File password

    • Binary log file encryption key

    The file password encrypts the content of a single binary file or relay log file. The binary log encryption key encrypts the file password and the key is stored in the keyring.

    "},{"location":"encrypting-binlogs.html#implementation","title":"Implementation","text":"

    After you have enabled the binlog_encryption variable and the keyring is available, you can encrypt the data content for new binary log files and relay log files. Only the data content is encrypted. The server generates a MySQL error if you attempt to encrypt a binary log file or relay log file without a keyring.

    In replication, the source maintains the binary log and the replica maintains a binary log copy called the relay log. The source uses SSL connections to encrypt the stream, and the events are re-executed on the replica. The source and replicas can have separate keyring storage and different keyring plugins.

If binlog_encryption = OFF, the server rotates the binary log and relay log files, and all new log files are unencrypted. Existing encrypted files are not decrypted.

To dump an encrypted binary log, which involves decryption, use mysqlbinlog with the --read-from-remote-server option.
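A minimal sketch, with the connection settings and the log file name (taken from SHOW BINARY LOGS) as illustrative values:

$ mysqlbinlog --read-from-remote-server --host=127.0.0.1 --user=root --password binlog.00013\n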

    Note

The --read-from-remote-server option only applies to the binary logs. Encrypted relay logs cannot be dumped or decrypted with this option.

    "},{"location":"encrypting-binlogs.html#enable-binary-log-encryption","title":"Enable binary log encryption","text":"

    In Percona Server for MySQL 8.0.15-5 and later, set the binlog_encryption variable to ON in a startup configuration file, such as my.cnf. The default is OFF.

    binlog_encryption=ON\n
    "},{"location":"encrypting-binlogs.html#verify-the-encryption","title":"Verify the encryption","text":"

    To verify that the binary log encryption option is enabled, run the following statement:

    mysql> SHOW BINARY LOGS;\n

    The SHOW BINARY LOGS output displays the name, size, and if a binary log file is encrypted or unencrypted.

    Expected output
+-------------------+----------------+---------------+\n| Log_name          | File_size      | Encrypted     |\n+-------------------+----------------+---------------+\n| binlog.00011      | 72367          | No            |\n| binlog.00012      | 71503          | No            |\n| binlog.00013      | 73762          | Yes           |\n+-------------------+----------------+---------------+\n
    "},{"location":"encrypting-binlogs.html#binary-log-file-variables","title":"Binary log file variables","text":""},{"location":"encrypting-binlogs.html#encrypt_binlog","title":"encrypt_binlog","text":"Option Description Command-line \u2013encrypt-binlog Dynamic No Scope Global Data type Boolean Default OFF

    Percona Server for MySQL 8.0.15-5 removes this variable.

    This variable enables or disables the binary log and relay log file encryption.

    "},{"location":"encrypting-doublewrite-buffers.html","title":"Encrypt doublewrite buffers","text":"

    A summary of Doublewrite buffer and Doublewrite buffer encryption changes:

• Percona Server from Percona-Server-8.0.23-14: MySQL 8.0.23 implemented its own version of parallel doublewrite encryption. Pages that belong to encrypted tablespaces are also written into the doublewrite buffer in an encrypted form. The Percona implementation was reverted, and the innodb_parallel_dblwr_encrypt variable is deprecated and may be removed in later releases.

• Percona Server from Percona-Server-8.0.20-11 to Percona-Server-8.0.22-13 inclusive: MySQL 8.0.20 implemented its own parallel doublewrite buffer, which is stored in external files (#ib_16384_xxx.dblwr) and not stored in the system tablespace. Percona's implementation was reverted. As a result, innodb_parallel_doublewrite_path was deprecated. However, MySQL did not implement parallel doublewrite buffer encryption at this time, so Percona reimplemented parallel doublewrite buffer encryption on top of the MySQL parallel doublewrite buffer implementation. Percona preserved the meaning and functionality of the innodb_parallel_dblwr_encrypt variable.

• Percona-Server-8.0.12-1.alpha to Percona-Server-8.0.19-10 inclusive: Percona Server for MySQL had its own implementation of the parallel doublewrite buffer, which was enabled by setting the innodb_parallel_doublewrite_path variable. Enabling innodb_parallel_dblwr_encrypt controlled whether the parallel doublewrite pages were encrypted or not. If the parallel doublewrite buffer was disabled (innodb_parallel_doublewrite_path was set to an empty string), the doublewrite buffer pages were located in the system tablespace (ibdata1). The system tablespace itself could be encrypted by setting innodb_sys_tablespace_encrypt, which also encrypted the doublewrite buffer pages.

For versions below Percona Server for MySQL 8.0.23-14, Percona encrypts the doublewrite buffer using innodb_parallel_dblwr_encrypt.

    "},{"location":"encrypting-doublewrite-buffers.html#innodb_parallel_dblwr_encrypt","title":"innodb_parallel_dblwr_encrypt","text":"Option Description Command-line \u2013innodb-parallel-dblwr-encrypt Scope Global Dynamic Yes Data type Boolean Default OFF

    Percona Server for MySQL 8.0.23-14 has deprecated this variable and the variable has no effect.

    This variable controls whether the parallel doublewrite buffer pages were encrypted or not. The encryption used the key of the tablespace to which the page belongs.

    Starting from Percona Server for MySQL 8.0.23-14 the value of this variable is ignored. Pages from the encrypted tablespaces are always written to the doublewrite buffer as encrypted, and pages from unencrypted tablespaces are always written unencrypted.

The innodb_parallel_dblwr_encrypt variable is accepted but has no effect. An explicit attempt to change the value generates the following warning in the error log file:

    Error message
    **Setting Percona-specific INNODB_PARALLEL_DBLWR_ENCRYPT is deprecated and has no effect.**\n
    "},{"location":"encrypting-redo-log.html","title":"Encrypting the Redo Log data","text":""},{"location":"encrypting-redo-log.html#version-changes","title":"Version changes","text":"

    Percona Server for MySQL 8.0.30-22 changes the data type to Boolean and removes the master_key and keyring_key options.

    Percona Server for MySQL 8.0.16-7 implements the options.

    "},{"location":"encrypting-redo-log.html#overview","title":"Overview","text":"

    MySQL uses the redo log files to apply changes during data recovery.

Writing the redo log data to disk encrypts the data, and reading the redo log data from disk decrypts the data. When the redo log data is in memory, the data is unencrypted. The redo log data uses the tablespace encryption key.

    Warning

After redo log encryption is enabled, a MySQL 8.0.30 regression prevents disabling it.

    After starting the server, an attempt to encrypt the redo log files fails if you have the following conditions:

    • Server started with no keyring specified
    • Server started with a keyring, but you specified a redo log encryption method that is different from the previously used method on the server.

    Starting with MySQL 8.0.30, the most recent checkpoint and the redo log encryption metadata is stored in the redo log file header.

    After enabling redo log encryption, attempting a normal restart without the keyring plugin or keyring component is not possible, since InnoDB scans the redo log pages during startup. Without the keyring plugin or keyring component, this operation is not possible when the redo log pages are encrypted. You can do a forced startup without the redo logs.

    "},{"location":"encrypting-redo-log.html#system-variable","title":"System variable","text":""},{"location":"encrypting-redo-log.html#innodb_redo_log_encrypt","title":"innodb_redo_log_encrypt","text":"Variable Description Command-line \u2013innodb-redo-log-encrypt Dynamic Yes Scope Global Data type Boolean Default OFF Option Description ON This option is a compatibility alias for the master_key. Any existing redo log pages remain unencrypted; new pages are encrypted when written to disk. OFF Any existing encrypted pages remain encrypted; new pages are unencrypted. master_key Removed in Percona Server for MySQL 8.0.30-22 keyring_key Removed in Percona Server for MySQL 8.0.30-22

Determines whether the redo log data is encrypted. The default value is OFF.
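Because the variable is dynamic, you can enable redo log encryption at runtime without a restart; a minimal sketch, assuming a keyring component or plugin is already configured:

mysql> SET GLOBAL innodb_redo_log_encrypt = ON;\n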

    "},{"location":"encrypting-system-tablespace.html","title":"Encrypt system tablespace","text":""},{"location":"encrypting-system-tablespace.html#version-changes","title":"Version changes","text":"

    Percona Server for MySQL 8.0.31-23 removes keyring encryption with advanced encryption key rotation and associated system variables, status variables, and options.

    Keyring encryption is a tech preview feature.

    "},{"location":"encrypting-system-tablespace.html#overview","title":"Overview","text":"

    Percona Server for MySQL supports system tablespace encryption. The InnoDB system tablespace may be encrypted with the master key encryption. The limitation is the following:

    • You cannot convert the system tablespace from the encrypted state to the unencrypted state, or the unencrypted state to the encrypted state. If a conversion is needed, create a new instance with the system tablespace in the required state and transfer the user tables to that instance.

To enable system tablespace encryption, edit my.cnf and set innodb_sys_tablespace_encrypt = ON.

System tablespace encryption can only be enabled when the server is initialized with the --initialize option.
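A minimal sketch of the my.cnf setting; the server must then be initialized (for example, with mysqld --initialize) for the setting to take effect:

[mysqld]\ninnodb_sys_tablespace_encrypt=ON\n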

    You can create an encrypted table as follows:

    mysql> CREATE TABLE table_name TABLESPACE=innodb_system ENCRYPTION='Y';\n
    "},{"location":"encrypting-system-tablespace.html#system-variables","title":"System variables","text":""},{"location":"encrypting-system-tablespace.html#innodb_sys_tablespace_encrypt","title":"innodb_sys_tablespace_encrypt","text":"Option Description Command-line \u2013innodb-sys-tablespace-encrypt Scope Global Dynamic No Data type Boolean Default OFF

    Enables the encryption of the InnoDB system tablespace.

    "},{"location":"encrypting-system-tablespace.html#re-encrypt-the-system-tablespace","title":"Re-encrypt the system tablespace","text":"

You can re-encrypt the system tablespace key with master key rotation. When the master key is rotated, the tablespace key is decrypted and re-encrypted with the new master key. Only the first page of the tablespace (.ibd) file is read and written during the key rotation. The tables in the tablespace are not re-encrypted.

    mysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;\n
    "},{"location":"encrypting-tables.html","title":"Encrypt File-Per-Table Tablespace","text":"

    A file-per-table tablespace stores the table data and the indexes for a single InnoDB table. In this tablespace configuration, each table is stored in a .ibd file.

    The architecture for data at rest encryption for file-per-table tablespace has two tiers:

    • Master key

    • Tablespace keys

The keyring plugin must be installed and enabled. A file-per-table tablespace inherits the schema default encryption setting unless you explicitly define encryption in CREATE TABLE or ALTER TABLE.

    mysql> CREATE TABLE ... ENCRYPTION='Y';\n
    mysql> ALTER TABLE ... ENCRYPTION='Y';\n

    Using ALTER TABLE without the ENCRYPTION option does not change the encryption state. An encrypted table remains encrypted or an unencrypted table remains unencrypted.

    "},{"location":"encrypting-tablespaces.html","title":"Encrypt schema or general tablespace","text":"

Percona Server for MySQL uses the same encryption architecture as MySQL: a two-tier system consisting of a master key and tablespace keys. The master key can be changed, or rotated in the keyring, as needed. Each tablespace key, when decrypted, remains the same.

    The feature requires the keyring plugin.

    "},{"location":"encrypting-tablespaces.html#set-the-default-for-schemas-and-general-tablespace-encryption","title":"Set the default for schemas and general tablespace encryption","text":"

    The tables in a general tablespace are either all encrypted or all unencrypted. A tablespace cannot contain a mixture of encrypted tables and unencrypted tables.

    In versions before Percona Server for MySQL 8.0.16-7, use the variable innodb_encrypt_tables.

    "},{"location":"encrypting-tablespaces.html#innodb_encrypt_tables","title":"innodb_encrypt_tables","text":"Option Description Command-line \u2013innodb-encrypt-tables Scope Global Dynamic Yes Data type Text Default OFF

    The variable is deprecated and removed in Percona Server for MySQL 8.0.16-7.

The default setting is OFF.

    The encryption of a schema or a general tablespace is determined by the default_table_encryption variable unless you specify the ENCRYPTION clause in the CREATE SCHEMA or CREATE TABLESPACE statement. This variable is implemented in Percona Server for MySQL version 8.0.16-7.

    You can set the default_table_encryption variable in an individual connection.

    mysql> SET default_table_encryption=ON;\n
    "},{"location":"encrypting-tablespaces.html#system-variable","title":"System variable","text":""},{"location":"encrypting-tablespaces.html#default_table_encryption","title":"default_table_encryption","text":"

Percona Server for MySQL 8.0.31-23 removes the ONLINE_TO_KEYRING and ONLINE_FROM_KEYRING_TO_UNENCRYPTED options.

Option | Description
Command-line | default-table-encryption
Scope | Session
Dynamic | Yes
Data type | Text
Default | OFF

    Defines the default encryption setting for schemas and general tablespaces. The variable allows you to create or alter schemas or tablespaces without specifying the ENCRYPTION clause. The default encryption setting applies only to schemas and general tablespaces and is not applied to the MySQL system tablespace.

    The variable has the following possible options:

Value | Description
ON | New tables are encrypted. Add ENCRYPTION=\"N\" to the CREATE TABLE or ALTER TABLE statement to create unencrypted tables.
OFF | By default, new tables are unencrypted. Add ENCRYPTION=\"Y\" to the CREATE TABLE or ALTER TABLE statement to create encrypted tables.
ONLINE_TO_KEYRING | This option is technical preview quality. Percona Server for MySQL 8.0.31-23 removes this option. Converts a tablespace encrypted by a Master Key to use Advanced Encryption Key Rotation. You can only apply the keyring encryption when creating tables or altering tables.
ONLINE_FROM_KEYRING_TO_UNENCRYPTED | This option is technical preview quality. Percona Server for MySQL 8.0.31-23 removes this option. Converts a tablespace encrypted by Advanced Encryption Key Rotation to unencrypted.

    Note

    The ALTER TABLE statement changes the current encryption mode only if you use the ENCRYPTION clause.

    See also

    MySQL Documentation: default_table_encryption

    "},{"location":"encrypting-tablespaces.html#merge-sort-encryption","title":"Merge-sort-encryption","text":""},{"location":"encrypting-tablespaces.html#innodb_encrypt_online_alter_logs","title":"innodb_encrypt_online_alter_logs","text":"Option Description Command-line \u2013innodb_encrypt-online-alter-logs Scope Global Dynamic Yes Data type Boolean Default OFF

    This variable simultaneously turns on the encryption of files used by InnoDB for full-text search using parallel sorting, building indexes using merge sort, and online DDL logs created by InnoDB for online DDL. Encryption is available for file merges used in queries and backend processes.
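Because the variable is global and dynamic, a minimal sketch of enabling it at runtime:

mysql> SET GLOBAL innodb_encrypt_online_alter_logs = ON;\n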

    "},{"location":"encrypting-tablespaces.html#use-encryption","title":"Use ENCRYPTION","text":"

    If you do not set the default encryption setting, you can create general tablespaces with the ENCRYPTION setting.

    mysql> CREATE TABLESPACE tablespace_name ENCRYPTION='Y';\n

    All tables contained in the tablespace are either encrypted or not encrypted. You cannot encrypt only some of the tables in a general tablespace. This feature extends the CREATE TABLESPACE statement to accept the ENCRYPTION='Y/N' option.

    Note

    Prior to Percona Server for MySQL 8.0.13, the ENCRYPTION option was specific to the CREATE TABLE or SHOW CREATE TABLE statement.

    As of Percona Server for MySQL 8.0.13, the option is a tablespace attribute and is not allowed with the CREATE TABLE or SHOW CREATE TABLE statement except with file-per-table tablespaces.

    In an encrypted general tablespace, an attempt to create an unencrypted table generates the following error:

    mysql> CREATE TABLE t3 (a INT, b TEXT) TABLESPACE foo ENCRYPTION='N';\n
    Expected output
    ERROR 1478 (HY0000): InnoDB: Tablespace 'foo' can contain only ENCRYPTED tables.\n

    The server diagnoses an attempt to create or move tables, including partitioned ones, to a general tablespace with an incompatible encryption setting and aborts the process.

    If you must move tables between incompatible tablespaces, create tables with the same structure in another tablespace and run INSERT INTO SELECT from each of the source tables into the destination tables.
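A hedged sketch, assuming a source table t1 in an encrypted tablespace and a hypothetical unencrypted destination tablespace foo2:

mysql> CREATE TABLE t1_copy (a INT, b TEXT) TABLESPACE foo2 ENCRYPTION='N';\nmysql> INSERT INTO t1_copy SELECT * FROM t1;\n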

    "},{"location":"encrypting-tablespaces.html#export-an-encrypted-general-tablespace","title":"Export an encrypted general tablespace","text":"

You can only export encrypted file-per-table tablespaces.

    "},{"location":"encrypting-temporary-files.html","title":"Encrypt temporary files","text":"

InnoDB user-created temporary tables are created in a temporary tablespace file; the innodb_temp_tablespace_encrypt variable controls their encryption.

The CREATE TEMPORARY TABLE statement does not support the ENCRYPTION clause, and the TABLESPACE clause cannot be set to innodb_temporary.

    The global temporary tablespace datafile ibtmp1 contains the temporary table undo logs while intrinsic temporary tables and user-created temporary tables are located in the encrypted session temporary tablespace.

To create new temporary tablespaces unencrypted, set the following variables to OFF at runtime (see the sketch after this list):

    • innodb_temp_tablespace_encrypt

    • default_table_encryption
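A minimal sketch of both settings; note that default_table_encryption can also be set per session, as shown earlier:

mysql> SET GLOBAL innodb_temp_tablespace_encrypt = OFF;\nmysql> SET default_table_encryption = OFF;\n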

Any existing encrypted user-created temporary files and intrinsic temporary tables remain in the encrypted session temporary tablespace.

    Temporary tables are only destroyed when the session is disconnected.

    The default_table_encryption setting in my.cnf determines if a temporary table is encrypted.

If innodb_temp_tablespace_encrypt = OFF and default_table_encryption = ON, the user-created temporary tables are encrypted. The temporary tablespace data file ibtmp1, which contains undo logs, is not encrypted.

If innodb_temp_tablespace_encrypt is ON for the system tablespace, InnoDB generates an encryption key and encrypts the system's temporary tablespace. If you reset the encryption to OFF, all subsequent pages are written to an unencrypted tablespace. Any generated keys are not erased, to allow encrypted tables and undo data to be decrypted.

    For each temporary file, an encryption key has the following attributes:

    • Generated locally

    • Maintained in memory for the lifetime of the temporary file

    • Discarded with the temporary file

    "},{"location":"encrypting-temporary-files.html#system-variables","title":"System variables","text":""},{"location":"encrypting-temporary-files.html#encrypt_tmp_files","title":"encrypt_tmp_files","text":"Option Description Command-line \u2013encrypt_tmp_files Scope Global Dynamic No Data type Boolean Default OFF

This variable turns ON the encryption of temporary files created by Percona Server for MySQL. The default value is OFF.

    "},{"location":"encrypting-temporary-files.html#innodb_temp_tablespace_encrypt","title":"innodb_temp_tablespace_encrypt","text":"Option Description Command-line innodb-temp-tablespace-encrypt Scope Global Dynamic Yes Data type Boolean Default OFF

When this variable is set to ON, the server encrypts the global temporary tablespace (the ibtmp1 file) and the session temporary tablespaces (files with the .ibt extension).

    The variable does not enforce the encryption of currently open temporary files and does not rebuild the system\u2019s temporary tablespace to encrypt data that has already been written.

    "},{"location":"encrypting-threads.html","title":"Advanced encryption key rotation","text":"

    Important

    This feature, and associated system variables, status variables, and options have been removed in Percona Server for MySQL 8.0.31-23.

    The Advanced Encryption Key Rotation feature lets you perform specific encryption and decryption tasks in real time.

    The following table explains the benefits of Advanced Encryption Key Rotation:

Advanced Encryption Key Rotation | Master Key Encryption
Encrypts any existing tablespaces in a single operation. Advanced Encryption Key Rotation allows encryption to be applied to all or selected existing tablespaces. You can exclude tablespaces. | Encrypts each existing tablespace as a separate operation.
Encrypts tables with a key from a keyring. | Encrypts tables with a key that is then stored in the encryption header of the tablespace.
Re-encrypts each tablespace page by page when the key is rotated. | Re-encrypts only the tablespace encryption header when the key is rotated.

    If you enable Advanced Encryption Key Rotation with a Master key encrypted tablespace, the tablespace is re-encrypted with the keyring key in a background process. If the Advanced Encryption Key Rotation feature is enabled, you cannot convert a tablespace to use Master key encryption. You must disable the feature before you convert the tablespace.

    This feature is in tech preview.

    You must have the SYSTEM_VARIABLES_ADMIN privilege or the SUPER privilege to set these variables.

    "},{"location":"encrypting-threads.html#innodb_encryption_threads","title":"innodb_encryption_threads","text":"

    This variable is removed in Percona Server for MySQL 8.0.31-23.

Option | Description
Command-line | --innodb-encryption-threads
Scope | Global
Dynamic | Yes
Data type | Numeric
Default | 0

    This variable works in combination with the default_table_encryption variable set to ONLINE_TO_KEYRING. This variable configures the number of threads for background encryption. For the online encryption, the value must be greater than zero.

    "},{"location":"encrypting-threads.html#innodb_online_encryption_rotate_key_age","title":"innodb_online_encryption_rotate_key_age","text":"

    This variable is removed in Percona Server for MySQL 8.0.31-23.

Option | Description
Command-line | --innodb-online-encryption-rotate-key-age
Scope | Global
Dynamic | Yes
Data type | Numeric
Default | 1

Defines the rotation for the re-encryption of a table encrypted using KEYRING. The value of this variable determines how frequently the encrypted tables are re-encrypted.

    For example, the following values would trigger a re-encryption in the following intervals:

    • The value is 1, and the table is re-encrypted on each key rotation.

    • The value is 2, and the table is re-encrypted on every other key rotation.

    • The value is 10, and the table is re-encrypted on every tenth key rotation.

    You should select the value which best fits your operational requirements.

    "},{"location":"encrypting-threads.html#innodb_encryption_rotation_iops","title":"innodb_encryption_rotation_iops","text":"

    This variable is removed in Percona Server for MySQL 8.0.31-23.

Option | Description
Command-line | --innodb-encryption-rotation-iops
Scope | Global
Dynamic | Yes
Data type | Numeric
Default | 100

    Defines the number of input/output operations per second (iops) available for use by a key rotation process.

    "},{"location":"encrypting-threads.html#innodb_default_encryption_key_id","title":"innodb_default_encryption_key_id","text":"

    This variable is removed in Percona Server for MySQL 8.0.31-23.

Option | Description
Command-line | --innodb-default-encryption-key-id
Scope | Session
Dynamic | Yes
Data type | Numeric
Default | 0

    Defines the default encryption ID used to encrypt tablespaces.

    "},{"location":"encrypting-threads.html#use-keyring-encryption","title":"Use Keyring Encryption","text":"

    This feature is removed in Percona Server for MySQL 8.0.31-23.

Keyring management is enabled separately for each file-per-table tablespace when you set the ENCRYPTION clause to KEYRING in a supported SQL statement.

• CREATE TABLE ... ENCRYPTION='KEYRING'

• ALTER TABLE ... ENCRYPTION='KEYRING'

    Note

    Running an ALTER TABLE ... ENCRYPTION='N' on a table created with ENCRYPTION='KEYRING' converts the table to the existing MySQL schema, tablespace, or table encryption state.

    "},{"location":"encrypting-undo-tablespace.html","title":"Encrypt the undo tablespace","text":"

    The undo data may contain sensitive information about the database operations.

    You can encrypt the data in an undo log using the innodb_undo_log_encrypt option. You can change the setting for this variable in the configuration file, as a startup parameter, or during runtime as a global variable. The undo data encryption must be enabled; the feature is disabled by default.

    "},{"location":"encrypting-undo-tablespace.html#innodb_undo_log_encrypt","title":"innodb_undo_log_encrypt","text":"Option Description Command-line \u2013innodb_undo-log_encrypt Scope Global Dynamic Yes Data type Boolean Default OFF

Defines whether the undo log data is encrypted. The default is OFF, which disables the encryption.

    You can create up to 127 undo tablespaces and you can, with the server running, add or reduce the number of undo tablespaces.

    Note

If you disable encryption, any encrypted undo data remains encrypted. To remove this data, truncate the undo tablespace.

    "},{"location":"encrypting-undo-tablespace.html#how-to-enable-encryption-on-an-undo-log","title":"How to enable encryption on an undo log","text":"

    You enable encryption for an undo log by adding the following to the my.cnf file:

    [mysqld]\n...\ninnodb_undo_log_encrypt=ON\n...\n
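Because innodb_undo_log_encrypt is dynamic, you can also enable it at runtime; a minimal sketch:

mysql> SET GLOBAL innodb_undo_log_encrypt = ON;\n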
    "},{"location":"encryption-functions.html","title":"Encryption user-defined functions","text":"

    The encryption user-defined functions (UDF) let you encrypt and decrypt data. You can choose different encryption algorithms and manage the range of data to encrypt.

    "},{"location":"encryption-functions.html#version-updates","title":"Version updates","text":"

    Percona Server for MySQL 8.0.41 adds the following:

    • Support for pkcs1, oaep, or no padding for RSA encrypt and decrypt operations

  pkcs1 padding explanation: The RSAES-PKCS1-v1_5 (https://en.wikipedia.org/wiki/PKCS_1) RSA encryption padding scheme prevents patterns that attackers could exploit by including a random sequence of bytes, which ensures that the ciphertext is different no matter how many times it is encrypted.

  oaep padding explanation: The RSAES-OAEP (Optimal Asymmetric Encryption Padding) RSA encryption padding scheme adds a randomized mask generation function. This function makes it more difficult for attackers to exploit weaknesses in the encryption algorithm or to recover the original message.

  no padding explanation: Using no padding means that the plaintext message is encrypted without adding an extra layer before performing the RSA encryption operation.

    • Support for pkcs1 or pkcs1_pss padding for RSA sign and verify operations

  pkcs1 padding explanation: The RSASSA-PKCS1-v1_5 is a deterministic RSA signature padding scheme that hashes a message, pads the hash with a specific structure, and encrypts it with the signer's private key for signature generation. pkcs1_pss padding explanation: The RSASSA-PSS (Probabilistic Signature Scheme) is an RSA signature padding scheme used to add randomness to a message before signing it with a private key. This randomness helps to increase the security of the signature and make it more resistant to various attacks.

• encryption_udf.legacy_padding_scheme system variable

    • Character set awareness

    Percona Server for MySQL 8.0.28-20 adds encryption functions and variables to manage the encryption range.

    "},{"location":"encryption-functions.html#charset-awareness","title":"Charset Awareness","text":"

    All component_encryption_udf functions now handle character sets intelligently:

• Algorithms, digest names, padding schemes, keys, and parameters in PEM format: automatically converted to the ASCII charset at the MySQL level before passing to the functions.

• Messages, data blocks, and signatures used for digest calculation, encryption, decryption, signing, or verification: automatically converted to the binary charset at the MySQL level before passing to the functions.

• Function return values in PEM format: assigned the ASCII charset.

• Function return values for operations like digest calculation, encryption, decryption, and signing: assigned the binary charset.

    "},{"location":"encryption-functions.html#use-user-defined-functions","title":"Use user-defined functions","text":"

    You can also use the user-defined functions with the PEM format keys generated externally by the OpenSSL utility.

A digest uses plaintext and generates a hash value. This hash value can verify that the plaintext is unmodified. You can also sign or verify digests to ensure that the original plaintext was not modified. You cannot recover the original text from the hash value.

    When choosing key lengths, consider the following:

• Encryption strength increases with the key size, and so does the key generation time.

• If performance is important and the functions are frequently used, use symmetric encryption. Symmetric encryption functions are faster than asymmetric encryption functions. Moreover, asymmetric encryption has restrictions on the maximum length of a message being encrypted. For example, for the RSA algorithm, the maximum message size is the key length in bytes (key length in bits / 8) minus 11.

    The following table and sections describe the functions. For examples, see function examples.

• asymmetric_decrypt(algorithm, crypt_str, key_str)

• asymmetric_derive(pub_key_str, priv_key_str)

• asymmetric_encrypt(algorithm, str, key_str)

• asymmetric_sign(algorithm, digest_str, priv_key_str, digest_type)

• asymmetric_verify(algorithm, digest_str, sig_str, pub_key_str, digest_type)

• create_asymmetric_priv_key(algorithm, (key_len | dh_parameters))

• create_asymmetric_pub_key(algorithm, priv_key_str)

• create_dh_parameters(key_len)

• create_digest(digest_type, str)

    The following table describes the Encryption threshold variables which can be used to set the maximum value for a key length based on the type of encryption.

• encryption_udf.dh_bits_threshold

• encryption_udf.dsa_bits_threshold

• encryption_udf.rsa_bits_threshold

"},{"location":"encryption-functions.html#install-component_encryption_udf","title":"Install component_encryption_udf","text":"

Use the INSTALL COMPONENT statement to add the component_encryption_udf component. The user-defined functions and the encryption threshold variables are registered automatically; there is no requirement to invoke CREATE FUNCTION ... SONAME ....

    The INSERT privilege on the mysql.component system table is required to run the INSTALL COMPONENT statement. To register the component, the operation adds a row to this table.

    The following is an example of the installation command:

    mysql> INSTALL COMPONENT 'file://component_encryption_udf';\n

    Note

    If you are Compiling Percona Server for MySQL from Source, the Encryption UDF component is built by default when Percona Server for MySQL is built. Specify the -DWITH_ENCRYPTION_UDF=OFF cmake option to exclude it.

    "},{"location":"encryption-functions.html#user-defined-functions-described","title":"User-defined functions described","text":""},{"location":"encryption-functions.html#asymmetric_decryptalgorithm-crypt_str-key_str","title":"asymmetric_decrypt(algorithm, crypt_str, key_str)","text":"

    Decrypts an encrypted string using the algorithm and a key string.

    "},{"location":"encryption-functions.html#returns","title":"Returns","text":"

    A plaintext as a string.

    "},{"location":"encryption-functions.html#parameters","title":"Parameters","text":"

    The following are the function\u2019s parameters:

• algorithm - the encryption algorithm; only RSA is supported for decrypting the string.

    • key_str - a string in the PEM format. The key string must have the following attributes:

    • Valid

    • Public or private key string that corresponds with the private or public key string used with the asymmetric_encrypt function.

• crypt_str - an encrypted string produced by the asymmetric_encrypt function. This string is typically stored as a binary or BLOB data type.

    • padding - An optional parameter introduced in Percona Server for MySQL 8.0.41. It is used with the RSA algorithm and supports RSA encryption padding schemes like no, pkcs1, or oaep. If you skip this parameter, the system determines its value based on the encryption_udf.legacy_padding_scheme variable.

    "},{"location":"encryption-functions.html#asymmetric_derivepub_key_str-priv_key_str","title":"asymmetric_derive(pub_key_str, priv_key_str)","text":"

    Derives a symmetric key using a public key generated on one side and a private key generated on another.

    "},{"location":"encryption-functions.html#asymmetric_derive-output","title":"asymmetric_derive output","text":"

    A key as a binary string.

    "},{"location":"encryption-functions.html#asymmetric_derive-parameters","title":"asymmetric_derive parameters","text":"

    The pub_key_str must be a public key in the PEM format and generated using the Diffie-Hellman (DH) algorithm.

    The priv_key_str must be a private key in the PEM format and generated using the Diffie-Hellman (DH) algorithm.
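A hedged sketch of key derivation (the 2048-bit parameter length is illustrative); both sides derive the same symmetric key:

-- Shared DH parameters\nmysql> SET @dhp = create_dh_parameters(2048);\n\n-- Side 1 key pair\nmysql> SET @priv1 = create_asymmetric_priv_key('DH', @dhp);\nmysql> SET @pub1 = create_asymmetric_pub_key('DH', @priv1);\n\n-- Side 2 key pair\nmysql> SET @priv2 = create_asymmetric_priv_key('DH', @dhp);\nmysql> SET @pub2 = create_asymmetric_pub_key('DH', @priv2);\n\n-- Both derivations return the same binary string (result is 1)\nmysql> SELECT asymmetric_derive(@pub2, @priv1) = asymmetric_derive(@pub1, @priv2);\n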

    "},{"location":"encryption-functions.html#asymmetric_encryptalgorithm-str-key_str","title":"asymmetric_encrypt(algorithm, str, key_str)","text":"

    Encrypts a string using the algorithm and a key string.

    "},{"location":"encryption-functions.html#asymmetric_encrypt-output","title":"asymmetric_encrypt output","text":"

    A ciphertext as a binary string.

    "},{"location":"encryption-functions.html#asymmetric_encrypt-parameters","title":"asymmetric_encrypt parameters","text":"

    The parameters are the following:

    • algorithm - the encryption algorithm supports RSA to encrypt the string.

    • str - measured in bytes. The length of the string must not be greater than the key_str modulus length in bytes - 11 (additional bytes used for PKCS1 padding)

    • key_str - a key (either private or public) in the PEM format

    • padding - An optional parameter introduced in Percona Server for MySQL 8.0.41. It is used with the RSA algorithm and supports RSA encryption padding schemes like no, pkcs1, or oaep. If you skip this parameter, the system determines its value based on the encryption_udf.legacy_padding_scheme variable.

    "},{"location":"encryption-functions.html#asymmetric_signalgorithm-digest_str-priv_key_str-digest_type","title":"asymmetric_sign(algorithm, digest_str, priv_key_str, digest_type)","text":"

    Signs a digest string using a private key string.

    "},{"location":"encryption-functions.html#asymmetric_sign-output","title":"asymmetric_sign output","text":"

A signature as a binary string.

    "},{"location":"encryption-functions.html#asymmetric_sign-parameters","title":"asymmetric_sign parameters","text":"

    The parameters are the following:

• algorithm - the encryption algorithm supports either RSA or DSA to sign the string.

    • digest_str - the digest binary string that is signed. Invoking create_digest generates the digest.

    • priv_key_str - the private key used to sign the digest string. The key must be in the PEM format.

• digest_type - the OpenSSL version installed on your system determines the available hash functions, summarized below:

  The digests md5, sha1, sha224, sha384, sha512, and md4 are available in every supported OpenSSL version (1.0.2, 1.1.0, 1.1.1, and 3.0.x). Depending on the OpenSSL version, the additional digests include sha (OpenSSL 1.0.2 only), md5-sha1, ripemd160, whirlpool, sha512-224, sha512-256, blake2b512, blake2s256, sm3, sha3-224, sha3-256, sha3-384, sha3-512, shake128, and shake256.
    • padding - An optional parameter introduced in Percona Server for MySQL 8.0.41. It is used with the RSA algorithm and supports RSA signature padding schemes like pkcs1, or pkcs1_pss. If you skip this parameter, the system determines its value based on the encryption_udf.legacy_padding_scheme variable.

    "},{"location":"encryption-functions.html#asymmetric_verifyalgorithm-digest_str-sig_str-pub_key_str-digest_type","title":"asymmetric_verify(algorithm, digest_str, sig_str, pub_key_str, digest_type)","text":"

    Verifies whether the signature string matches the digest string.

    "},{"location":"encryption-functions.html#asymmetric_verify-output","title":"asymmetric_verify output","text":"

    A 1 (success) or a 0 (failure).

    "},{"location":"encryption-functions.html#asymmetric_verify-parameters","title":"asymmetric_verify parameters","text":"

    The parameters are the following:

• algorithm - supports either 'RSA' or 'DSA'.

    • digest_str - invoking create_digest generates this digest binary string.

    • sig_str - the signature binary string. Invoking asymmetric_sign generates this string.

• pub_key_str - the signer's public key string. This string must correspond to the private key passed to asymmetric_sign to generate the signature string. The string must be in the PEM format.

    • digest_type - the supported values are listed in the digest type table of create_digest

    • padding - An optional parameter introduced in Percona Server for MySQL 8.0.41. It is used with the RSA algorithm and supports RSA signature padding schemes like pkcs1, or pkcs1_pss. If you skip this parameter, the system determines its value based on the encryption_udf.legacy_padding_scheme variable.

    "},{"location":"encryption-functions.html#create_asymmetric_priv_keyalgorithm-key_len-dh_parameters","title":"create_asymmetric_priv_key(algorithm, (key_len | dh_parameters))","text":"

Generates a private key using the given algorithm and key length for RSA or DSA, or Diffie-Hellman parameters for DH. For RSA or DSA, if needed, execute KILL [QUERY|CONNECTION] <id> to terminate a long-lasting key generation. The DH key generation from existing parameters is a quick operation. Therefore, it does not make sense to terminate that operation with KILL.

    "},{"location":"encryption-functions.html#create_asymmetric_priv_key-output","title":"create_asymmetric_priv_key output","text":"

    The key as a string in the PEM format.

    "},{"location":"encryption-functions.html#create_asymmetric_priv_key-parameters","title":"create_asymmetric_priv_key parameters","text":"

    The parameters are the following:

• algorithm - the supported values are 'RSA', 'DSA', or 'DH'.

    • key_len - the supported key length values are the following:

      • RSA - the minimum length is 1,024. The maximum length is 16,384.

      • DSA - the minimum length is 1,024. The maximum length is 9,984.

      Note

      The key length limits are defined by OpenSSL. To change the maximum key length, use either encryption_udf.rsa_bits_threshold or encryption_udf.dsa_bits_threshold.

• dh_parameters - Diffie-Hellman (DH) parameters. Invoking create_dh_parameters creates the DH parameters.

    "},{"location":"encryption-functions.html#create_asymmetric_pub_keyalgorithm-priv_key_str","title":"create_asymmetric_pub_key(algorithm, priv_key_str)","text":"

    Derives a public key from the given private key using the given algorithm.

    "},{"location":"encryption-functions.html#create_asymmetric_pub_key-output","title":"create_asymmetric_pub_key output","text":"

    The key as a string in the PEM format.

    "},{"location":"encryption-functions.html#create_asymmetric_pub_key-parameters","title":"create_asymmetric_pub_key parameters","text":"

    The parameters are the following:

• algorithm - the supported values are 'RSA', 'DSA', or 'DH'.

    • priv_key_str - must be a valid key string in the PEM format.

    "},{"location":"encryption-functions.html#create_dh_parameterskey_len","title":"create_dh_parameters(key_len)","text":"

    Creates parameters for generating a Diffie-Hellman (DH) private/public key pair. If needed, execute KILL [QUERY|CONNECTION] <id> to terminate the generation of long-lasting parameters.

    Generating the DH parameters can take more time than generating the RSA keys or the DSA keys. OpenSSL defines the parameter length limits. To change the maximum parameter length, use encryption_udf.dh_bits_threshold.

    "},{"location":"encryption-functions.html#create_dh_parameters-output","title":"create_dh_parameters output","text":"

A string in the PEM format that can be passed to create_asymmetric_priv_key.

    "},{"location":"encryption-functions.html#create_dh_parameters-parameters","title":"create_dh_parameters parameters","text":"

    The parameters are the following:

• key_len - the range for the key length is from 1,024 to 10,000. The default value is 10,000.
    "},{"location":"encryption-functions.html#create_digestdigest_type-str","title":"create_digest(digest_type, str)","text":"

    Creates a digest from the given string using the given digest type. The digest string can be used with asymmetric_sign and asymmetric_verify.

    "},{"location":"encryption-functions.html#create_digest-output","title":"create_digest output","text":"

The digest of the given string as a binary string.

    "},{"location":"encryption-functions.html#create_digest-parameters","title":"create_digest parameters","text":"

    The parameters are the following:

• digest_type - the OpenSSL version installed on your system determines the available hash functions, summarized below:

  The digests md5, sha1, sha224, sha384, sha512, and md4 are available in every supported OpenSSL version (1.0.2, 1.1.0, 1.1.1, and 3.0.x). Depending on the OpenSSL version, the additional digests include sha (OpenSSL 1.0.2 only), md5-sha1, ripemd160, whirlpool, sha512-224, sha512-256, blake2b512, blake2s256, sm3, sha3-224, sha3-256, sha3-384, sha3-512, shake128, and shake256.
    • str - String used to generate the digest string.

    "},{"location":"encryption-functions.html#encryption-threshold-variables","title":"Encryption threshold variables","text":"

    The maximum key length limits are defined by OpenSSL. Server administrators can limit the maximum key length using the encryption threshold variables.

    The variables are automatically registered when component_encryption_udf is installed.

"},{"location":"encryption-functions.html#encryption_udfdh_bits_threshold","title":"encryption_udf.dh_bits_threshold","text":"

    The variable sets the maximum limit for the create_dh_parameters user-defined function and takes precedence over the OpenSSL maximum length value.

Option | Description
command-line | Yes
scope | Global
data type | unsigned integer
default | 10000

The range for this variable is from 1,024 to 10,000. The default value is 10,000.

    "},{"location":"encryption-functions.html#encryption_udfdsa_bits_threshold","title":"encryption_udf.dsa_bits_threshold","text":"

    The variable sets the threshold limits for create_asymmetric_priv_key user-defined function when the function is invoked with the DSA parameter and takes precedence over the OpenSSL maximum length value.

Option | Description
command-line | Yes
scope | Global
data type | unsigned integer
default | 9984

    The range for this variable is from 1,024 to 9,984. The default value is 9,984.

    "},{"location":"encryption-functions.html#encryption_udflegacy_paddding_scheme","title":"encryption_udf.legacy_paddding_scheme","text":"

    The variable enables or disables the legacy padding scheme for certain encryption operations.

• Command-line: Yes
• Scope: Global
• Data type: Boolean
• Default: OFF

    This system variable is a BOOLEAN type and is set to OFF by default.

This variable controls how the functions asymmetric_encrypt(), asymmetric_decrypt(), asymmetric_sign(), and asymmetric_verify() behave when you don't explicitly set the padding parameter.

• When encryption_udf.legacy_padding_scheme is OFF:

• asymmetric_encrypt() and asymmetric_decrypt() use OAEP encryption padding.

• asymmetric_sign() and asymmetric_verify() use PKCS1_PSS signature padding.

• When encryption_udf.legacy_padding_scheme is ON:

• asymmetric_encrypt() and asymmetric_decrypt() use PKCS1 encryption padding.

• asymmetric_sign() and asymmetric_verify() use PKCS1 signature padding.

The asymmetric_encrypt() and asymmetric_decrypt() functions, when the encryption is RSA, can accept an optional parameter, padding. You can set this parameter to no, pkcs1, or oaep. If you don't specify this parameter, it defaults based on the encryption_udf.legacy_padding_scheme value.

    The padding schemes have the following limitations:

• oaep - The message you encrypt can be as long as the RSA key size in bytes minus 42 bytes.
• no - The message length must exactly match the RSA key size in bytes. For example, if the key is 1024 bits (128 bytes), the message must also be 128 bytes. If it doesn't match, it causes an error.
• pkcs1 - The message can be equal to or smaller than the RSA key size minus 11 bytes. For instance, with a 1024-bit RSA key, the message can't be longer than 117 bytes.

    Similarly, asymmetric_sign() and asymmetric_verify() also have an optional padding parameter, which can be either pkcs1 or pkcs1_pss. If not explicitly set, it follows the default based on encryption_udf.legacy_padding_scheme. You can only use the padding parameter with RSA algorithms.
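A sketch of setting the padding explicitly, assuming the padding value is passed as the trailing argument to these functions (the key length and messages are illustrative):

-- Encrypt and decrypt with explicit OAEP padding
mysql> SET @private_key = create_asymmetric_priv_key('RSA', 3072);
mysql> SET @public_key = create_asymmetric_pub_key('RSA', @private_key);
mysql> SET @ciphertext = asymmetric_encrypt('RSA', 'secret', @public_key, 'oaep');
mysql> SET @plaintext = asymmetric_decrypt('RSA', @ciphertext, @private_key, 'oaep');

-- Sign and verify with explicit PKCS1_PSS padding
mysql> SET @digest = create_digest('SHA256', 'message to sign');
mysql> SET @signature = asymmetric_sign('RSA', @digest, @private_key, 'SHA256', 'pkcs1_pss');
mysql> SET @verified = asymmetric_verify('RSA', @digest, @signature, @public_key, 'SHA256', 'pkcs1_pss');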

    "},{"location":"encryption-functions.html#additional-resources","title":"Additional resources","text":"

    For more information, read Digital Signatures: Another layer of Data Protection in Percona Server for MySQL

    "},{"location":"encryption-functions.html#encryption_udfrsa_bits_threshold","title":"encryption_udf.rsa_bits_threshold","text":"

    The variable sets the threshold limits for the create_asymmetric_priv_key user-defined function when the function is invoked with the RSA parameter and takes precedence over the OpenSSL maximum length value.

• Command-line: Yes
• Scope: Global
• Data type: unsigned integer
• Default: 16384

    The range for this variable is from 1,024 to 16,384. The default value is 16,384.

    "},{"location":"encryption-functions.html#examples","title":"Examples","text":"

    Code examples for the following operations:

    • set the threshold variables

    • create a private key

    • create a public key

    • encrypt data

    • decrypt data

-- Set Global variable
mysql> SET GLOBAL encryption_udf.dh_bits_threshold = 4096;

-- Set Global variable
mysql> SET GLOBAL encryption_udf.rsa_bits_threshold = 4096;
-- Create private key
mysql> SET @private_key = create_asymmetric_priv_key('RSA', 3072);

-- Create public key
mysql> SET @public_key = create_asymmetric_pub_key('RSA', @private_key);

-- Encrypt data using the private key (you can also use the public key)
mysql> SET @ciphertext = asymmetric_encrypt('RSA', 'This text is secret', @private_key);

-- Decrypt data using the public key (you can also use the private key)
-- The decrypted value @plaintext should be identical to the original 'This text is secret'
mysql> SET @plaintext = asymmetric_decrypt('RSA', @ciphertext, @public_key);

    Code examples for the following operations:

    • generate a digest string

    • generate a digest signature

    • verify the signature against the digest

-- Generate a digest string
mysql> SET @digest = create_digest('SHA256', 'This is the text for digest');

-- Generate a digest signature
mysql> SET @signature = asymmetric_sign('RSA', @digest, @private_key, 'SHA256');

-- Verify the signature against the digest
-- The @verify_signature must be equal to 1
mysql> SET @verify_signature = asymmetric_verify('RSA', @digest, @signature, @public_key, 'SHA256');

    Code examples for the following operations:

    • generate a DH parameter

• generate two DH key pairs

• generate a symmetric key using public_1 and private_2

• generate a symmetric key using public_2 and private_1

-- Generate a DH parameter
mysql> SET @dh_parameter = create_dh_parameters(3072);

-- Generate DH key pairs
mysql> SET @private_1 = create_asymmetric_priv_key('DH', @dh_parameter);
mysql> SET @public_1 = create_asymmetric_pub_key('DH', @private_1);
mysql> SET @private_2 = create_asymmetric_priv_key('DH', @dh_parameter);
mysql> SET @public_2 = create_asymmetric_pub_key('DH', @private_2);

-- Generate a symmetric key using public_1 and private_2
-- The @symmetric_1 must be identical to @symmetric_2
mysql> SET @symmetric_1 = asymmetric_derive(@public_1, @private_2);

-- Generate a symmetric key using public_2 and private_1
-- The @symmetric_2 must be identical to @symmetric_1
mysql> SET @symmetric_2 = asymmetric_derive(@public_2, @private_1);

    Code examples for the following operations:

    • create a private key using a SET statement

    • create a private key using a SELECT statement

    • create a private key using an INSERT statement

mysql> SET @private_key1 = create_asymmetric_priv_key('RSA', 3072);
mysql> SELECT create_asymmetric_priv_key('RSA', 3072) INTO @private_key2;
mysql> INSERT INTO key_table VALUES(create_asymmetric_priv_key('RSA', 3072));
    "},{"location":"encryption-functions.html#uninstall-component_encryption_udf","title":"Uninstall component_encryption_udf","text":"

You can deactivate and uninstall the component using the UNINSTALL COMPONENT statement.

mysql> UNINSTALL COMPONENT 'file://component_encryption_udf';
    "},{"location":"enforce-engine.html","title":"Enforcing storage engine","text":"

Percona Server for MySQL implements the enforce_storage_engine variable, which enforces the use of a specific storage engine.

When this variable is set and a user tries to create a table using an explicit storage engine that is not the enforced engine, the result depends on the NO_ENGINE_SUBSTITUTION SQL mode. If NO_ENGINE_SUBSTITUTION is enabled, the user gets an error. If NO_ENGINE_SUBSTITUTION is disabled, the user gets a warning and the table is created with the enforced engine anyway. This behavior is consistent with how MySQL falls back to the default storage engine when the requested engine is unavailable and NO_ENGINE_SUBSTITUTION is not set.

If you enable enforce_storage_engine with an engine that is not available, the server does not start.

    Note

If you're using enforce_storage_engine, you must either disable it before running mysql_upgrade or perform mysql_upgrade with the server started with --skip-grant-tables.

    "},{"location":"enforce-engine.html#version-specific-information","title":"Version specific information","text":"
    • Percona Server for MySQL 8.0.13-4: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"enforce-engine.html#system-variables","title":"System variables","text":""},{"location":"enforce-engine.html#enforce_storage_engine","title":"enforce_storage_engine","text":"Option Description Command Line: Yes Config file Yes Scope: Global Dynamic: No Data type String Default value NULL

The value of this variable is not case-sensitive.

    "},{"location":"enforce-engine.html#example","title":"Example","text":"

Adding the following option to my.cnf starts the server with InnoDB as the enforced storage engine.

enforce_storage_engine=InnoDB
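With InnoDB enforced and NO_ENGINE_SUBSTITUTION disabled, a table that explicitly requests another engine is created with InnoDB instead; a minimal sketch (the table name is illustrative):

mysql> CREATE TABLE t1 (a INT) ENGINE=MyISAM;
mysql> SHOW CREATE TABLE t1\G
-- The output shows ENGINE=InnoDB, not MyISAM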
    "},{"location":"extended-mysqlbinlog.html","title":"Extended mysqlbinlog","text":"

    Note

In Percona Server for MySQL 8.0.18, the --compress option was marked as deprecated and may be removed in a future version of Percona Server for MySQL.

Percona Server for MySQL has implemented compression support for mysqlbinlog. This is similar to the support that both the mysql and mysqldump programs include (the -C, --compress options: "Use compression in server/client protocol"). Using the compressed protocol helps reduce bandwidth use and speeds up transfers.

Percona Server for MySQL has also implemented support for SSL. mysqlbinlog now accepts the same SSL connection options as the other client programs. This feature can be useful with the --read-from-remote-server option.

    "},{"location":"extended-mysqlbinlog.html#version-specific-information","title":"Version specific information","text":"
• 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"extended-mysqldump.html","title":"Extended mysqldump","text":""},{"location":"extended-mysqldump.html#backup-locks-support","title":"Backup locks support","text":"

When used together with the --single-transaction option, the --lock-for-backup option makes mysqldump issue LOCK TABLES FOR BACKUP before starting the dump operation, preventing unsafe statements that would otherwise result in an inconsistent backup.

More information can be found in the Backup Locks feature documentation.
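For example, to take a consistent dump protected by a backup lock (the database name is illustrative):

user@trusty:~$ mysqldump --single-transaction --lock-for-backup mydb > mydb-backup.sql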

    "},{"location":"extended-mysqldump.html#compressed-columns-support","title":"Compressed columns support","text":"

    mysqldump supports the Compressed columns with dictionaries feature. More information about the relevant options can be found on the Compressed columns with dictionaries feature page.

    "},{"location":"extended-mysqldump.html#taking-backup-by-descending-primary-key-order","title":"Taking backup by descending primary key order","text":"

--order-by-primary-desc tells mysqldump to take the backup in descending primary key order (PRIMARY KEY DESC), which can be useful if the storage engine uses a reverse-order column for the primary key.
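For example (the database and table names are illustrative):

user@trusty:~$ mysqldump --order-by-primary-desc mydb mytable > mytable-desc.sql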

    "},{"location":"extended-mysqldump.html#rocksdb-support","title":"RocksDB support","text":"

mysqldump detects when MyRocks is installed and available. If there is a session variable named rocksdb_skip_fill_cache, mysqldump sets it to 1.

mysqldump also automatically enables the session variable rocksdb_bulk_load if the target server supports it.

    "},{"location":"extended-mysqldump.html#version-specific-information","title":"Version specific information","text":"
• 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"extended-select-into-outfile.html","title":"Extended SELECT INTO OUTFILE/DUMPFILE","text":"

    Percona Server for MySQL improves the SELECT INTO ... OUTFILE and SELECT ... INTO DUMPFILE commands by allowing them to work with UNIX sockets and named pipes. In the past, using these types of files would cause an error.

    This feature lets you quickly combine LOAD DATA LOCAL INFILE with SELECT INTO ... OUTFILE to transfer data across the network or between different partitions. It avoids creating an intermediate file, saves disk space, and reduces I/O usage. This ability makes data loading more efficient, especially for large datasets or complex configurations.
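A minimal sketch using a named pipe to move rows between tables without an intermediate file (paths and table names are illustrative; the FILE privilege and secure_file_priv settings must permit the operation):

user@trusty:~$ mkfifo /tmp/mysql.pipe

-- Session 1: write the query result to the pipe
mysql> SELECT * FROM db1.t1 INTO OUTFILE '/tmp/mysql.pipe';

-- Session 2: read the pipe into another table
mysql> LOAD DATA LOCAL INFILE '/tmp/mysql.pipe' INTO TABLE db2.t1;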

    "},{"location":"extended-select-into-outfile.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"extended-show-grants.html","title":"Extended SHOW GRANTS","text":"

    In Oracle MySQL SHOW GRANTS displays only the privileges granted explicitly to the named account. Other privileges might be available to the account, but they are not displayed. For example, if an anonymous account exists, the named account might be able to use its privileges, but SHOW GRANTS will not display them. Percona Server for MySQL offers the SHOW EFFECTIVE GRANTS command to display all the effectively available privileges to the account, including those granted to a different account.

    "},{"location":"extended-show-grants.html#example","title":"Example","text":"

    If we create the following users:

mysql> CREATE USER grantee@localhost IDENTIFIED BY 'grantee1';

Expected output

Query OK, 0 rows affected (0.50 sec)

mysql> CREATE USER grantee IDENTIFIED BY 'grantee2';

Expected output

Query OK, 0 rows affected (0.09 sec)

mysql> CREATE DATABASE db2;

Expected output

Query OK, 1 row affected (0.20 sec)

mysql> GRANT ALL PRIVILEGES ON db2.* TO grantee WITH GRANT OPTION;

Expected output

Query OK, 0 rows affected (0.12 sec)

• SHOW EFFECTIVE GRANTS output before the change:

mysql> SHOW EFFECTIVE GRANTS;

Expected output

+----------------------------------------------------------------------------------------------------------------+
| Grants for grantee@localhost                                                                                   |
+----------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'grantee'@'localhost' IDENTIFIED BY PASSWORD '*9823FF338D44DAF02422CF24DD1F879FB4F6B232' |
+----------------------------------------------------------------------------------------------------------------+
1 row in set (0.04 sec)

Although the grant for the db2 database isn't shown, the grantee user has enough privileges to create a table in that database:

user@trusty:~$ mysql -ugrantee -pgrantee1 -h localhost

mysql> CREATE TABLE db2.t1(a int);

Expected output

Query OK, 0 rows affected (1.21 sec)

• The output of SHOW EFFECTIVE GRANTS after the change shows all the privileges for the grantee user:

mysql> SHOW EFFECTIVE GRANTS;

Expected output

+-------------------------------------------------------------------+
| Grants for grantee@localhost                                      |
+-------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'grantee'@'localhost' IDENTIFIED BY PASSWORD|
| '*9823FF338D44DAF02422CF24DD1F879FB4F6B232'                       |
| GRANT ALL PRIVILEGES ON `db2`.* TO 'grantee'@'%' WITH GRANT OPTION|
+-------------------------------------------------------------------+
2 rows in set (0.00 sec)
    "},{"location":"extended-show-grants.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"extended-show-grants.html#other-reading","title":"Other reading","text":"
    • #53645 - SHOW GRANTS not displaying all the applicable grants
    "},{"location":"faq.html","title":"Frequently asked questions","text":""},{"location":"faq.html#q-will-percona-server-for-mysql-with-xtradb-invalidate-our-mysql-support","title":"Q: Will Percona Server for MySQL with XtraDB invalidate our MySQL support?","text":"

A: We don't know the details of your support contract. You should check with your Oracle representative. We have heard anecdotal stories from MySQL Support team members that they have customers who use Percona Server for MySQL with XtraDB, but you should not base your decision on that.

    "},{"location":"faq.html#q-will-we-have-to-gpl-our-whole-application-if-we-use-percona-server-for-mysql-with-xtradb","title":"Q: Will we have to GPL our whole application if we use Percona Server for MySQL with XtraDB?","text":"

A: This is a common misconception about the GPL. We suggest reading the Free Software Foundation's excellent reference material on the GPL Version 2, which is the license that applies to MySQL and therefore to Percona Server for MySQL with XtraDB. That document contains links to many other documents which should answer your questions. Percona is unable to give legal advice about the GPL.

    "},{"location":"faq.html#q-do-i-need-to-install-percona-client-libraries","title":"Q: Do I need to install Percona client libraries?","text":"

    A: No, you don\u2019t need to change anything on the clients. Percona Server for MySQL is 100% compatible with all existing client libraries and connectors.

    "},{"location":"faq.html#q-when-using-the-percona-xtrabackup-to-set-up-a-replication-replica-on-debian-based-systems-im-getting-error-1045-28000-access-denied-for-user-debian-sys-maintlocalhost-using-password-yes","title":"Q: When using the Percona XtraBackup to set up a replication replica on Debian-based systems I\u2019m getting: \u201cERROR 1045 (28000): Access denied for user \u2018debian-sys-maint\u2019@\u2019localhost\u2019 (using password: YES)\u201d","text":"

A: If you're using the init script on a Debian-based system to start mysqld, make sure that the password for the debian-sys-maint user has been updated and is the same as that user's password on the server that the backup was taken from. The password can be seen and updated in /etc/mysql/debian.cnf. For more information on how to set up a replication replica using Percona XtraBackup, see this how-to.

    "},{"location":"fast-updates.html","title":"Fast updates with TokuDB","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    "},{"location":"fast-updates.html#introduction","title":"Introduction","text":"

Update-intensive applications can have their throughput limited by the random read capacity of the storage system. The cause of the throughput limit is the read-modify-write algorithm that MySQL uses to process update statements (read a row from the storage engine, apply the updates to it, write the new row back to the storage engine).

    To address this throughput limit, TokuDB provides an experimental fast update feature, which uses a different update algorithm. Update expressions of the SQL statement are encoded into tiny programs that are stored in an update Fractal Tree message. This update message is injected into the root of the Fractal Tree index. Eventually, these update messages reach a leaf node, where the update programs are applied to the row. Since messages are moved between Fractal Tree levels in batches, the cost of reading in the leaf node is amortized over many update messages.

This feature is available for UPDATE and INSERT statements and can be toggled separately for each with two variables: tokudb_enable_fast_update toggles fast updates for UPDATE, and tokudb_enable_fast_upsert does the same for INSERT.

    "},{"location":"fast-updates.html#limitations","title":"Limitations","text":"

Fast updates are activated instead of normal MySQL read-modify-write updates if the executed expression meets the following conditions:

• fast updates can be activated for statement-based or mixed replication,

• a primary key must be defined for the involved table,

• both simple and compound primary keys are supported, and int, char or varchar are the allowed data types for them,

• updated fields should have an integer or char data type,

• fields that are part of any key should not be updated,

• clustering keys are not allowed,

• triggers should not be involved,

    • supported update expressions should belong to one of the following types:

      • x = constant

      • x = x + constant

      • x = x - constant

      • x = if (x=0,0,x-1)

      • x = x + values

    "},{"location":"fast-updates.html#usage-specifics-and-examples","title":"Usage specifics and examples","text":"

The following example creates a table that associates event identifiers with their count:

CREATE TABLE t (
  event_id bigint unsigned NOT NULL PRIMARY KEY,
  event_count bigint unsigned NOT NULL
);

Many graph applications that map onto relational tables can use duplicate key inserts and updates to maintain the graph. For example, one can update the metadata associated with a link in the graph using duplicate key insertions. If the affected-rows value is not used by the application, then the insertion or update can be marked and executed as a fast insertion or a fast update.

    "},{"location":"fast-updates.html#insertion-example","title":"Insertion example","text":"

If it is not known whether the event identifier (represented by event_id) already exists in the table, then the INSERT ... ON DUPLICATE KEY UPDATE ... statement can insert it if it does not exist, or increment its event_count otherwise. Here is an example of a duplicate key insertion statement, where %id is some specific event_id value:

INSERT INTO t VALUES (%id, 1)
  ON DUPLICATE KEY UPDATE event_count=event_count+1;
    "},{"location":"fast-updates.html#explanation","title":"Explanation","text":"

If the event ids are random, then the throughput of this application would be limited by the random read capacity of the storage system, since each INSERT statement has to determine whether this event_id exists in the table.

TokuDB replaces the primary key existence check with the insertion of an "upsert" message into the Fractal Tree index. This "upsert" message contains a copy of the row and a program that increments event_count. As the Fractal Tree buffers get filled, this "upsert" message is flushed down the tree. Eventually, the message reaches a leaf node and gets executed there. If the key exists in the leaf node, then the event_count is incremented. Otherwise, the new row is inserted into the leaf node.

    "},{"location":"fast-updates.html#update-example","title":"Update example","text":"

If event_id is known to exist in the table, then the UPDATE statement can be used to increment its event_count (once again, a specific event_id value is written here as %id):

UPDATE t SET event_count=event_count+1 WHERE event_id=%id;
    "},{"location":"fast-updates.html#explanation_1","title":"Explanation","text":"

TokuDB generates an "update" message from the UPDATE statement and its update expression trees, and inserts this message into the Fractal Tree index. When the message eventually reaches the leaf node, the increment program is extracted from the message and executed.

    "},{"location":"feature-comparison.html","title":"Percona Server for MySQL feature comparison","text":"

    Percona Server for MySQL is a free, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior performance, scalability, and instrumentation.

    Percona Server for MySQL is trusted by thousands of enterprises to provide better performance and concurrency for their most demanding workloads. It delivers higher value to MySQL server users with optimized performance, greater performance scalability and availability, enhanced backups, and increased visibility.

    We provide these benefits by significantly enhancing Percona Server for MySQL as compared to the standard MySQL database server:

| Features | Percona Server for MySQL 8.0.30 | MySQL 8.0.30 |
|---|---|---|
| Open Source | Yes | Yes |
| ACID Compliance | Yes | Yes |
| Multi-Version Concurrency Control | Yes | Yes |
| Row-Level Locking | Yes | Yes |
| Automatic Crash Recovery | Yes | Yes |
| Table Partitioning | Yes | Yes |
| Views | Yes | Yes |
| Subqueries | Yes | Yes |
| Triggers | Yes | Yes |
| Stored Procedures | Yes | Yes |
| Foreign Keys | Yes | Yes |
| Window Functions | Yes | Yes |
| Common Table Expressions | Yes | Yes |
| Geospatial Features (GIS, SRS) | Yes | Yes |
| GTID Replication | Yes | Yes |
| Group Replication | Yes | Yes |
| MyRocks Storage Engine | Yes | No |

| Improvements for Developers | Percona Server for MySQL 8.0.30 | MySQL 8.0.30 |
|---|---|---|
| NoSQL Socket-Level Interface | Yes | Yes |
| X API Support | Yes | Yes |
| JSON Functions | Yes | Yes |
| InnoDB Full-Text Search Improvements | Yes | No |
| Extra Hash/Digest Functions | Yes | No |

| Instrumentation and Troubleshooting Features | Percona Server for MySQL 8.0.30 | MySQL 8.0.30 |
|---|---|---|
| INFORMATION_SCHEMA Tables | 95 | 65 |
| Global Performance and Status Counters | 853 | 434 |
| Optimizer Histograms | Yes | Yes |
| Per-Table Performance Counters | Yes | No |
| Per-Index Performance Counters | Yes | No |
| Per-User Performance Counters | Yes | No |
| Per-Client Performance Counters | Yes | No |
| Per-Thread Performance Counters | Yes | No |
| Global Query Response Time Statistics | Yes | No |
| Enhanced SHOW INNODB ENGINE STATUS | Yes | No |
| Undo Segment Information | Yes | No |
| Temporary Tables Information | Yes | No |
| Extended Slow Query Logging | Yes | No |
| User Statistics | Yes | No |

| Performance and Scalability Features | Percona Server for MySQL 8.0.30 | MySQL 8.0.30 |
|---|---|---|
| InnoDB Resource Groups | Yes | Yes |
| Configurable Page Sizes | Yes | Yes |
| Contention-Aware Transaction Scheduling | Yes | Yes |
| Improved Scalability By Splitting Mutexes | Yes | No |
| Improved MEMORY Storage Engine | Yes | No |
| Improved Flushing | Yes | No |
| Parallel Doublewrite Buffer | Yes | Yes |
| Configurable Fast Index Creation | Yes | No |
| Per-Column Compression for VARCHAR/BLOB and JSON | Yes | No |
| Compressed Columns with Dictionaries | Yes | No |

| Security Features | Percona Server for MySQL 8.0.30 | MySQL 8.0.30 |
|---|---|---|
| SQL Roles | Yes | Yes |
| SHA-2 Based Password Hashing | Yes | Yes |
| Password Rotation Policy | Yes | Yes |
| PAM Authentication Plugin | Yes | Enterprise-Only |
| Audit Logging Plugin | Yes | Enterprise-Only |

| Encryption Features | Percona Server for MySQL 8.0.30 | MySQL 8.0.30 |
|---|---|---|
| Storing Keyring in a File | Yes | Yes |
| Storing Keyring in Hashicorp Vault | Yes | Enterprise-Only |
| Encrypt InnoDB Data | Yes | Yes |
| Encrypt InnoDB Logs | Yes | Yes |
| Encrypt Built-In InnoDB Tablespaces (General, System, Undo, Temp) | Yes | Yes |
| Encrypt Binary Logs | Yes | No |
| Encrypt Temporary Files | Yes | No |
| Enforce Encryption | Yes | No |

| Operational Improvements | Percona Server for MySQL 8.0.30 | MySQL 8.0.30 |
|---|---|---|
| Atomic DDL | Yes | Yes |
| Transactional Data Dictionary | Yes | Yes |
| Instant DDL | Yes | Yes |
| SET PERSIST | Yes | Yes |
| Invisible Indexes | Yes | Yes |
| Threadpool | Yes | Enterprise-Only |
| Backup Locks | Yes | No |
| Extended SHOW GRANTS | Yes | No |
| Improved Handling of Corrupted Tables | Yes | No |
| Ability to Kill Idle Transactions | Yes | No |
| Improvements to START TRANSACTION WITH CONSISTENT SNAPSHOT | Yes | No |

| Features for Running Database as a Service (DBaaS) | Percona Server for MySQL 8.0.30 | MySQL 8.0.30 |
|---|---|---|
| Enforce a Specific Storage Engine | Yes | Yes |

"},{"location":"fido-authentication-plugin.html","title":"FIDO authentication plugin","text":"

    Important

    This feature is a tech preview. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

Percona Server for MySQL 8.0.30-22 adds support for the Fast Identity Online (FIDO) authentication method that uses a plugin. FIDO authentication provides a set of standards that reduces the reliance on passwords.

    The server-side fido authentication plugin enables authentication using external devices. If this plugin is the only authentication plugin used by the account, this plugin allows authentication without a password. Multi-factor authentication can use non-FIDO MySQL authentication methods, the FIDO authentication method, or a combination of both.

    All distributions include the client-side authentication_fido_client plugin. This plugin allows clients to connect to accounts that use authentication_fido and authenticate on a server that has that plugin loaded.

    "},{"location":"fido-authentication-plugin.html#plugin-and-library-file-names","title":"Plugin and library file names","text":"

    The plugin and library file names are listed in the following table.

• Server-side plugin: authentication_fido
• Client-side plugin: authentication_fido_client
• Library file: authentication_fido.so

"},{"location":"fido-authentication-plugin.html#install-the-fido-authentication-plugin","title":"Install the FIDO authentication plugin","text":"

    The library file must be stored in the directory named by the plugin_dir variable.

You can load the plugin at server startup with a command-line option, by editing my.cnf and restarting the server, or at runtime.

At server startup, use the --plugin-load-add option with the library name. The option must be added each time the server starts.

[mysqld]
...
plugin-load-add=authentication_fido.so
...
mysql> INSTALL PLUGIN authentication_fido SONAME 'authentication_fido.so';
    "},{"location":"fido-authentication-plugin.html#verify-installation","title":"Verify installation","text":"

    Use the SHOW PLUGINS statement or query the INFORMATION_SCHEMA.PLUGINS table to verify that the plugin was loaded successfully and is active.

    Check the server error log if the plugin is not loaded.

    "},{"location":"fido-authentication-plugin.html#use-fido-authentication","title":"Use FIDO authentication","text":"

    FIDO can be used with non-FIDO authentication. See Use FIDO authentication with non-FIDO authentication. FIDO can be used to create 1FA accounts that do not require passwords. For instructions, see Use FIDO.

    "},{"location":"fido-authentication-plugin.html#use-fido-authentication-with-non-fido-authentication","title":"Use FIDO authentication with non-FIDO authentication","text":"

A FIDO device is associated with the account that uses FIDO authentication. The device must be registered before the account can be used; registration is a one-time process. The device must be available, and the user must perform whatever action the FIDO device requires, such as adding a thumbprint, or the registration fails.

    The registration can only be performed by the user named by the account. An error occurs if a user attempts the registration for another user.

The device registration can be performed with the mysql client or MySQL Shell. Use the --fido-register-factor option with the factor or factors for the device. For example, if you are using FIDO as a second authentication method, which is a common practice, the option is --fido-register-factor=2.
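For example, to register a device used as the second authentication factor (the account and connection options are illustrative):

user@trusty:~$ mysql --user=fido_user --password --fido-register-factor=2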

Any authentication factors that precede the FIDO registration must succeed before the registration continues.

    The server checks the user account information to determine if the FIDO device requires registration. If the device must be registered, the server switches the client session to sandbox mode. The registration must be completed before any other activity. In this mode, only ALTER USER statements are permitted. If the session is started with --fido-register-factor, the client generates the statements required to register. After the registration is complete, the session is switched out of sandbox mode and the client can proceed as normal.

    After the device is registered, the server updates the mysql.user system table for that account with the device registration status and stores the public key and credential ID.

    The user must use the same FIDO device during registration and authentication. If the device is reset or the user attempts to use a different device, the authentication fails. To use a different device, the registered device must be unregistered and you must complete the registration process again.

    "},{"location":"fido-authentication-plugin.html#use-fido-authentication-as-the-only-method","title":"Use FIDO authentication as the only method","text":"

    If FIDO is used as the only method of authentication, the method does not use a password. The authentication uses a method such as a biometric scan or a security key.

Creating such an account requires the PASSWORDLESS_USER_ADMIN privilege and the CREATE USER privilege.

The first element of the authentication_policy value must be an asterisk (*). Do not start the value with the plugin name. For information about configuring the authentication policy value, see Configuring the Multifactor Authentication Policy.

You must include the INITIAL AUTHENTICATION IDENTIFIED BY clause in the CREATE USER statement. The server accepts the statement without the clause, but the account is unusable because the user cannot connect to the server to register the device.

    The CREATE USER syntax is the following:

mysql> CREATE USER <username>@<hostname> IDENTIFIED WITH authentication_fido INITIAL AUTHENTICATION IDENTIFIED BY '<password>';

    During registration, the user must authenticate with the password. After the device is registered, the server deletes the password and modifies the account to make FIDO the only authentication method.

    "},{"location":"fido-authentication-plugin.html#unregister-a-fido-device","title":"Unregister a FIDO device","text":"

    If the FIDO device is replaced or lost, the following actions occur:

• Unregister the previous device - the account owner or any user with the CREATE USER privilege can unregister the device.
• Register the new device - the user planning to use the device must register the new device.

    The statement to unregister a device is as follows:

mysql> ALTER USER `username`@`hostname` {2|3} FACTOR UNREGISTER;
    "},{"location":"filter-audit-log-filter-files.html","title":"Filter the Audit Log Filter logs","text":"

The Audit Log Filter logging is based on rules. A filter rule definition can include or exclude events based on the following attributes:

    • User account
    • Audit event class
    • Audit event subclass
    • Audit event fields (for example, COMMAND_CLASS or STATUS)

    You can define multiple filters and assign any filter to multiple accounts. You can also create a default filter for specific user accounts. The filters are defined using function calls. After the filter is defined, the filter is stored in mysql system tables.

    "},{"location":"filter-audit-log-filter-files.html#audit-log-filter-functions","title":"Audit Log Filter functions","text":"

The Audit Log Filter functions require the AUDIT_ADMIN or SUPER privilege.

    The following functions are used for rule-based filtering:

| Function | Description | Example |
|---|---|---|
| audit_log_filter_flush() | Manually flush the filter tables | SELECT audit_log_filter_flush() |
| audit_log_filter_set_filter() | Defines a filter | SELECT audit_log_filter_set_filter('log_connections', '{"filter": {}}') |
| audit_log_filter_remove_filter() | Removes a filter | |
| audit_log_filter_set_user() | Assigns a filter to a specific user account | |
| audit_log_filter_remove_user() | Removes the filters from a specific user account | |

    Using a SQL interface, you can define, display, or modify audit log filters. The filters are stored in the mysql system database.

    A read-only variable, audit_log_filter_id, signals if a filter is assigned to a specific session.

    Filter definitions are JSON values.

    The function, audit_log_filter_flush(), forces reloading all filters and should only be invoked when modifying the audit tables. This function affects all users. Users in current sessions must either execute change-user or disconnect and reconnect.

    "},{"location":"filter-audit-log-filter-files.html#constraints","title":"Constraints","text":"

    The audit_log_filter plugin must be enabled and the audit tables must exist to use the audit log filter functions. The user account must have the required privileges.

    "},{"location":"filter-audit-log-filter-files.html#using-the-audit-log-filter-functions","title":"Using the audit log filter functions","text":"

    With a new connection, the audit log filter plugin finds the user account name in the filter assignments. If a filter has been assigned, the plugin uses that filter. If no filter has been assigned, but there is a default account filter, the plugin uses that filter. If there is no filter assigned, and there is no default account filter, then the plugin does not process any event.

    The default account is represented by % as the account name.

    You can assign filters to a specific user account or disassociate a user account from a filter. To disassociate a user account, either unassign a filter or assign a different filter. If you remove a filter, that filter is unassigned from all users, including current users in current sessions.
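A minimal sketch of defining a filter that logs all auditable events and assigning it as the default for all accounts (the filter name is illustrative):

mysql> SELECT audit_log_filter_set_filter('log_all', '{"filter": {"log": true}}');
mysql> SELECT audit_log_filter_set_user('%', 'log_all');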

    "},{"location":"fips.html","title":"FIPS compliance","text":"

    Percona Server for MySQL Pro includes the capabilities that are typically requested by large enterprises. Percona Server for MySQL Pro contains packages created and tested by Percona. These packages are supported only for Percona Customers with a subscription.

    Become a Percona Customer

    The Federal Information Processing Standards (FIPS) are a set of U.S. government standards that ensure the security of computer systems for non-military government agencies and contractors. These standards specify how to perform cryptographic operations, such as encryption, hashing, and digital signatures. FIPS mode is a mode of operation that enforces these standards and rejects any non-compliant algorithms or parameters.

    Percona Server for MySQL implements the same level of FIPS support as MySQL. Percona Server for MySQL can run in FIPS mode if a FIPS-enabled OpenSSL library and FIPS Object Module are available at runtime or if compiled using a FIPS-validated version of OpenSSL. You can also receive this functionality by building Percona Server for MySQL from source code.

    "},{"location":"fips.html#prerequisites","title":"Prerequisites","text":"

    To prepare Percona Server for MySQL for FIPS certification, do the following:

• Check that your operating system includes a FIPS pre-approved OpenSSL library, version 3.0.x or higher. The following distributions include a FIPS pre-approved OpenSSL library in version 3.0.x or higher:

      • RedHat Enterprise Linux 9 and derivatives

      • Oracle Linux 9

The following distributions also include an OpenSSL library in version 3.0.x but do not have a FIPS-approved crypto provider installed by default (you can build the crypto provider from source for testing):

      • Debian 12

      • Ubuntu 22.04 Pro (the OpenSSL FIPS 140-3 certification is under implementation)

        Note

        If you enable FIPS on Ubuntu Pro with $ sudo pro enable fips-updates and then disable FIPS with $ sudo pro disable fips-updates, Percona Server for MySQL may stop operating properly. For example, if you disable FIPS on Ubuntu Pro with $ sudo pro disable fips-updates and enable the FIPS mode on Percona Server with ssl-fips-mode=ON, Percona Server may not load the SSL certificate.

    • Deploy Percona Server for MySQL from the Pro build, which is built and tested on operating systems with FIPS pre-approved OpenSSL packages.

    "},{"location":"fips.html#the-fips-mode-variables","title":"The FIPS mode variables","text":"

    Percona Server for MySQL uses the same variables and values as MySQL. Percona Server for MySQL enables control of FIPS mode on the server side and the client side:

    • The ssl_fips_mode system variable shows whether the server operates in FIPS mode. This variable is disabled by default.

      The ssl_fips_mode system variable has these values:

      • 0 - disables FIPS mode
      • 1 - enables FIPS mode. The exact behavior of the enabled FIPS mode depends on the OpenSSL version. The server only specifies the FIPS value to OpenSSL.
      • 2 - enables strict FIPS mode. This value provides more restrictions than the 1 value. The exact behavior of the strict FIPS mode depends on the OpenSSL version. The server only specifies the FIPS value to OpenSSL.
• The --ssl-fips-mode client-side option controls whether a given client operates in FIPS mode. This setting does not change the server setting. This option is disabled by default.

The --ssl-fips-mode client-side option has these values:

      • OFF - disables FIPS mode
      • ON - enables FIPS mode. The exact behavior of the enabled FIPS mode depends on the OpenSSL version. The server only specifies the FIPS value to OpenSSL.
      • STRICT - enables strict FIPS mode. This value provides more restrictions than the ON value. The exact behavior of the strict FIPS mode depends on the OpenSSL version. The server only specifies the FIPS value to OpenSSL.

      The server operation in FIPS mode does not depend on which crypto module (regular or FIPS-approved) is set as the default in the OpenSSL configuration file. The server always respects the value of --ssl-fips-mode server command line option (OFF, ON, or STRICT). The ssl_fips_mode global system variable is read-only and cannot be changed at runtime.

    "},{"location":"fips.html#enable-the-fips-mode","title":"Enable the FIPS mode","text":"

To enable the FIPS mode, pass --ssl-fips-mode=ON or --ssl-fips-mode=STRICT to mysqld as a command-line argument, or add ssl-fips-mode=ON or ssl-fips-mode=STRICT to the configuration file. Ignore the warning that the --ssl-fips-mode option is deprecated.
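For example, in my.cnf:

[mysqld]
ssl-fips-mode=ON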

    "},{"location":"fips.html#check-that-fips-mode-is-enabled","title":"Check that FIPS mode is enabled","text":"

    To ensure that the FIPS mode is enabled, do the following:

    • Pass --log-error-verbosity=3 to mysqld as a command line argument or add log-error-verbosity=3 to the configuration file.

    • Check that the error log contains the following message:

A FIPS-approved version of the OpenSSL cryptographic library has been detected in the operating system with a properly configured FIPS module available for loading. Percona Server for MySQL will load this module and run in FIPS mode.
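You can also check the global system variable at runtime; the expected value is shown in the comment:

mysql> SELECT @@global.ssl_fips_mode;
-- Returns ON (or STRICT) when FIPS mode is enabled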
    "},{"location":"fips.html#next-steps","title":"Next steps","text":"

    Install Percona Server for MySQL Pro

    If you already use Percona Server for MySQL, you can

    Upgrade to Percona Server for MySQL Pro

    "},{"location":"gap-locks-detection.html","title":"Gap locks detection","text":"

Gap lock detection is based on a Facebook MySQL patch.

If a transactional storage engine does not support gap locks (for example, MyRocks) and a gap lock is being attempted while the transaction isolation level is either REPEATABLE READ or SERIALIZABLE, the following SQL error is returned to the client and no actual gap lock is taken on the affected rows.

    Error message
ERROR HY000: Using Gap Lock without full unique key in multi-table or multi-statement transactions is not allowed. You need to either rewrite queries to use all unique key columns in WHERE equal conditions, or rewrite to single-table, single-statement transaction.
    "},{"location":"get-help.html","title":"Get help from Percona","text":"

Our documentation guides are packed with information, but they can't cover everything you need to know about Percona Server for MySQL. They also won't cover every scenario you might come across. Don't be afraid to try things out and ask questions when you get stuck.

    "},{"location":"get-help.html#perconas-community-forum","title":"Percona\u2019s Community Forum","text":"

Be a part of a space where you can tap into a wealth of knowledge from other database enthusiasts and experts who work with Percona's software every day. While our service is entirely free, keep in mind that response times can vary depending on the complexity of the question. You are engaging with people who genuinely love solving database challenges.

We recommend visiting our Community Forum. It's an excellent place for discussions, technical insights, and support around Percona database software. If you're new and feeling a bit unsure, our FAQ and Guide for New Users ease you in.

If you have thoughts, feedback, or ideas, the community team would like to hear from you at Any ideas on how to make the forum better?. We're always excited to connect and improve everyone's experience.

    "},{"location":"get-help.html#percona-experts","title":"Percona experts","text":"

Percona experts bring years of experience in tackling tough database performance issues and design challenges. We understand your challenges when managing complex database environments. That's why we offer various services to help you simplify your operations and achieve your goals.

| Service | Description |
|---|---|
| 24/7 Expert Support | Our dedicated team of database experts is available 24/7 to assist you with any database issues. We provide flexible support plans tailored to your specific needs. |
| Hands-On Database Management | Our managed services team can take over the day-to-day management of your database infrastructure, freeing up your time to focus on other priorities. |
| Expert Consulting | Our experienced consultants provide guidance on database topics like architecture design, migration planning, performance optimization, and security best practices. |
| Comprehensive Training | Our training programs help your team develop skills to manage databases effectively, offering virtual and in-person courses. |

We're here to help you every step of the way. Whether you need a quick fix or a long-term partnership, we're ready to provide the expertise and support you need.

    "},{"location":"glossary.html","title":"Glossary","text":""},{"location":"glossary.html#acid","title":"ACID","text":"

    Set of properties that guarantee database transactions are processed reliably. Stands for Atomicity, Consistency, Isolation, Durability.

    "},{"location":"glossary.html#atomicity","title":"Atomicity","text":"

Atomicity means that database operations are applied following an "all or nothing" rule. A transaction is either fully applied or not at all.

    "},{"location":"glossary.html#consistency","title":"Consistency","text":"

    Consistency means that each transaction that modifies the database takes it from one consistent state to another.

    "},{"location":"glossary.html#durability","title":"Durability","text":"

    Once a transaction is committed, it will remain so.

    "},{"location":"glossary.html#environment-variable","title":"Environment Variable","text":"

    A variable that stores configuration settings for a software program or operating system.

    "},{"location":"glossary.html#foreign-key","title":"Foreign Key","text":"

    A referential constraint between two tables. Example: A purchase order in the purchase_orders table must have been made by a customer that exists in the customers table.

    "},{"location":"glossary.html#general-availability-ga","title":"General Availability (GA)","text":"

    A finalized version of the product which is made available to the general public. It is the final stage in the software release cycle.

    "},{"location":"glossary.html#isolation","title":"Isolation","text":"

    The Isolation requirement means that no transaction can interfere with another.

    "},{"location":"glossary.html#innodb","title":"InnoDB","text":"

A Storage Engine for MySQL and derivatives (Percona Server, MariaDB) originally written by Innobase Oy, since acquired by Oracle. It provides an ACID-compliant storage engine with foreign key support. As of MySQL version 5.5, InnoDB became the default storage engine on all platforms.

    "},{"location":"glossary.html#json-javascript-object-notation","title":"JSON (JavaScript Object Notation)","text":"

    A common file format used to store data in a human-readable and machine-readable way using key-value pairs.

    "},{"location":"glossary.html#jenkins","title":"Jenkins","text":"

    Jenkins is a continuous integration system that we use to help ensure the continued quality of the software we produce. It helps us achieve the aims of:

    • no failed tests in the trunk on any platform

• aid developers in ensuring merge requests build and test on all platforms

    • no known performance regressions (without a damn good explanation).

    "},{"location":"glossary.html#lsn","title":"LSN","text":"

    The Log Sequence Number (LSN) is an 8-byte number. Every data change adds an entry to the redo log and generates an LSN. The server increments the LSN with every change.

    "},{"location":"glossary.html#mandatory-dependency","title":"Mandatory Dependency","text":"

    A software package that another software package absolutely needs to function correctly. Removing a mandatory dependency can cause the main software to malfunction.

    "},{"location":"glossary.html#mariadb","title":"MariaDB","text":"

    A fork of MySQL that is maintained primarily by Monty Program AB. It aims to add features, and fix bugs while maintaining 100% backward compatibility with MySQL.

    "},{"location":"glossary.html#metrics","title":"Metrics","text":"

    Measurable data points collected by telemetry about software usage.

    "},{"location":"glossary.html#mycnf","title":"my.cnf","text":"

    A configuration file used by MySQL databases.

    "},{"location":"glossary.html#myisam","title":"MyISAM","text":"

    A MySQL Storage Engine that was the default until MySQL 5.5.

    "},{"location":"glossary.html#mysql","title":"MySQL","text":"

    An open source database that has spawned several distributions and forks. MySQL AB was the primary maintainer and distributor until bought by Sun Microsystems, which was then acquired by Oracle. As Oracle owns the MySQL trademark, the term MySQL is often used for the Oracle distribution of MySQL as distinct from the drop-in replacements such as MariaDB and Percona Server for MySQL.

    "},{"location":"glossary.html#numa","title":"NUMA","text":"

    Non-Uniform Memory Access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. The whole system may still operate as one unit, and all memory is basically accessible from everywhere but at a potentially higher latency and lower performance.

    "},{"location":"glossary.html#percona-server-for-mysql","title":"Percona Server for MySQL","text":"

    The Percona branch of MySQL with performance and management improvements.

    "},{"location":"glossary.html#storage-engine","title":"Storage Engine","text":"

    A storage engine is a piece of software that implements the details of data storage and retrieval for a database system. This term is primarily used within the MySQL ecosystem due to it being the first widely used relational database to have an abstraction layer around storage. It is analogous to a Virtual File System layer in an Operating System. A VFS layer allows an operating system to read and write multiple file systems (e.g. FAT, NTFS, XFS, ext3) and a Storage Engine layer allows a database server to access tables stored in different engines (for example, MyISAM or InnoDB).

    "},{"location":"glossary.html#tech-preview","title":"Tech Preview","text":"

    A tech preview item can be a feature, a variable, or a value within a variable. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment and also use an alternative backup method for redundancy. A tech preview item is included in a release for users to provide feedback. The item is either updated, released as general availability(GA), or removed if not useful. The functionality can change from tech preview to GA.

    "},{"location":"glossary.html#telemetry","title":"Telemetry","text":"

    Telemetry automatically collects usage data from software to understand how users interact with it.

    "},{"location":"glossary.html#uninstall-component","title":"Uninstall Component","text":"

    A way to remove a specific component or functionality within a software package.

    "},{"location":"glossary.html#universally-unique-identifier-uuid","title":"Universally Unique Identifier (UUID)","text":"

    A unique identifier used to ensure no two entities share the same ID.

    "},{"location":"glossary.html#xtradb","title":"XtraDB","text":"

    The Percona improved version of InnoDB provides performance, features, and reliability above what is shipped by Oracle in InnoDB.

    "},{"location":"group-replication-flow-control.html","title":"Manage group replication flow control","text":"

In replication, flow control prevents one member from falling too far behind the cluster and avoids excessive buffering. A cluster does not require its members to stay in sync for replication to work; pending transactions simply accumulate in the relay log of the lagging replica. Each member sends statistics to the group.

Flow control sets a threshold on the number of transactions waiting in the certification queue and in the applier queue. If a threshold is exceeded, and for as long as it remains exceeded, flow control throttles the writer members to the capacity of the delayed member. This action keeps all members in sync.

Flow control works asynchronously and depends on the following:

    • Monitoring the throughput and queue sizes of each member
    • Throttling members to avoid writing beyond the capacity available

    The following system variables set flow control behavior for Group Replication:

    • group_replication_flow_control_mode
    • group_replication_flow_control_certifier_threshold
    • group_replication_flow_control_applier_threshold

Flow control is enabled or disabled by setting the group_replication_flow_control_mode variable. Flow control can also be enabled at the certifier level, the applier level, or both, and the corresponding variables set the threshold levels.
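For example, to enable quota-based flow control and set both thresholds (the threshold values are illustrative):

mysql> SET GLOBAL group_replication_flow_control_mode = 'QUOTA';
mysql> SET GLOBAL group_replication_flow_control_certifier_threshold = 25000;
mysql> SET GLOBAL group_replication_flow_control_applier_threshold = 25000;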

    "},{"location":"group-replication-system-variables.html","title":"Group replication system variables","text":"variable name group_replication_auto_evict_timeout group_replication_certification_loop_chunk_size group_replication_certification_loop_sleep_time group_replication_flow_control_mode group_replication_xcom_ssl_accept_retries group_replication_xcom_ssl_socket_timeout"},{"location":"group-replication-system-variables.html#group_replication_auto_evict_timeout","title":"group_replication_auto_evict_timeout","text":"

    The variable is in tech preview mode. Before using the variable in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

• Introduced: 8.0.30-22
• Command-line: --group-replication-auto-evict-timeout
• Dynamic: Yes
• Scope: Global
• Type: Integer
• Default value: 0
• Maximum value: 65535
• Unit: seconds

    The value can be changed while Group Replication is running. The change takes effect immediately. Every node in the group can have a different timeout value, but, to avoid unexpected exits, we recommend that all nodes have the same value.

The variable specifies a period of time in seconds before a node is automatically evicted if the node exceeds the flow control threshold. The default value is 0, which disables the eviction. To set the timeout, change the value to a number greater than zero.

    In single-primary mode, the primary server ignores the timeout.
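For example, to evict a member after it exceeds the flow control threshold for 30 seconds (the value is illustrative):

mysql> SET GLOBAL group_replication_auto_evict_timeout = 30;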

    "},{"location":"group-replication-system-variables.html#group_replication_certification_loop_chunk_size","title":"group_replication_certification_loop_chunk_size","text":"Option Description Introduced 8.0.32-24 Command-line \u2013group-replication-certification-loop-chunk-size Dynamic Yes Scope Global Data type ulong Default value 0

    Defines the size of the chunk that must be processed during the certifier garbage collection phase after which the client transactions are allowed to interleave. The default value is 0.

    The minimum value is 0. The maximum value is 4294967295.

    "},{"location":"group-replication-system-variables.html#group_replication_certification_loop_sleep_time","title":"group_replication_certification_loop_sleep_time","text":"Option Description Introduced 8.0.32-24 Command-line \u2013group-replication-certification-loop-sleep-time Dynamic Yes Scope Global Data type ulong Default value 0

Defines the time in microseconds that the certifier garbage collection loop sleeps to allow client transactions to interleave. The default value is 0.

    The minimum value is 0. The maximum value is 1000000.

    "},{"location":"group-replication-system-variables.html#group_replication_flow_control_mode","title":"group_replication_flow_control_mode","text":"Option Description Introduced 8.0.32-24 Command-line \u2013group_replication_flow_control_mode Dynamic Yes Scope Global Data type Enumeration Default value Quota Valid values DISABLED QUOTA MAJORITY

The "MAJORITY" value is in tech preview mode. Before using the variable in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    The variable specifies the mode used for flow control.

    Percona Server for MySQL 8.0.30-22 adds the \u201cMAJORITY\u201d value to the group_replication_flow_control_mode variable. In \u201cMAJORITY\u201d mode, flow control is activated only if the majority of members, that is, more than half, exceed the flow control threshold. The behavior of the other values is unchanged.
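
    A minimal sketch of switching the mode at runtime:

    mysql> SET GLOBAL group_replication_flow_control_mode = 'MAJORITY';\n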

    "},{"location":"group-replication-system-variables.html#group_replication_xcom_ssl_accept_retries","title":"group_replication_xcom_ssl_accept_retries","text":"Option Description Introduced 8.0.34-26 Command-line \u2013group_replication_xcom_ssl_accept_retries Dynamic Yes Scope Global Data type integer Default value 10

    This variable is only effective on START GROUP_REPLICATION, and only when group replication is configured with SSL.

    Defines the number of retries before closing the socket. On each retry, the server thread calls SSL_accept(), with a timeout defined by group_replication_xcom_ssl_socket_timeout. This setting is used by the SSL handshake process after the connection has been accepted by the first accept() call.

    The default value is 10.

    "},{"location":"group-replication-system-variables.html#group_replication_xcom_ssl_socket_timeout","title":"group_replication_xcom_ssl_socket_timeout","text":"Option Description Introduced 8.0.34-26 Command-line \u2013group_replication_xcom_ssl_socket_timeout Dynamic Yes Scope Global Data type integer Default value 0 Measured in seconds

    This variable is only effective on START GROUP_REPLICATION, and only when group replication is configured with SSL.

    Defines a file-descriptor level timeout, measured in seconds, for both accept() and SSL_accept() calls when group replication listens on the xcom port.

    When set to a valid value, for example 5, both accept() and SSL_accept() return after 5 seconds.

    The default value is set to 0 (wait indefinitely) for backward compatibility.
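
    A minimal sketch of setting both SSL-related variables and then starting group replication, which is when the values take effect (the values are illustrative):

    mysql> SET GLOBAL group_replication_xcom_ssl_socket_timeout = 5;\nmysql> SET GLOBAL group_replication_xcom_ssl_accept_retries = 10;\nmysql> START GROUP_REPLICATION;\n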

    "},{"location":"improved-memory-engine.html","title":"Improved MEMORY storage engine","text":"

    As of MySQL 5.5.15, a Fixed Row Format (FRF) is still used in the MEMORY storage engine. The fixed row format imposes restrictions on column types because it assigns a limited amount of memory per row in advance. This effectively turns a VARCHAR field into a CHAR field in practice and makes it impossible to have a TEXT or BLOB field with that engine implementation.

    To overcome this limitation, the Improved MEMORY Storage Engine is introduced in this release to support true VARCHAR, VARBINARY, TEXT, and BLOB fields in MEMORY tables.

    This implementation is based on the Dynamic Row Format (DRF) introduced by the mysql-heap-dynamic-rows patch.

    DRF stores column values in variable-length form, which decreases the memory footprint of those columns and makes BLOB and TEXT fields, as well as true VARCHAR and VARBINARY, possible.

    Unlike the fixed implementation, each column value in DRF uses only as much space as required. Variable-length values can use up to 4 bytes to store the actual value length, and only the necessary number of blocks is used to store the value.

    Rows in DRF are represented internally by multiple memory blocks, which means that a single row can consist of multiple blocks organized into one set. Each row occupies at least one block; multiple rows cannot share a single block. The block size can be configured when creating a table (see below).

    This DRF implementation has two caveats regarding ordering and indexes.

    "},{"location":"improved-memory-engine.html#caveats","title":"Caveats","text":""},{"location":"improved-memory-engine.html#ordering-of-rows","title":"Ordering of rows","text":"

    In the absence of ORDER BY, records may be returned in a different order than with the previous MEMORY implementation.

    This is not a bug. Any application relying on a specific order without an ORDER BY clause may deliver unexpected results. A specific order without ORDER BY is a side effect of the storage engine and query optimizer implementation, which may and will change between minor MySQL releases.

    "},{"location":"improved-memory-engine.html#indexing","title":"Indexing","text":"

    It is currently impossible to use indexes on BLOB columns due to some limitations of the Dynamic Row Format. Trying to create such an index will fail with the following error:

    Expected output
    BLOB column '<name>' can't be used in key specification with the used table type.\n
    "},{"location":"improved-memory-engine.html#restrictions","title":"Restrictions","text":"

    For performance reasons, a mixed solution is implemented: the fixed format is used at the beginning of the row, while the dynamic one is used for the rest of it.

    The size of the fixed-format portion of the record is chosen automatically on CREATE TABLE and cannot be changed later. This, in particular, means that no indexes can be created later with CREATE INDEX or ALTER TABLE when the dynamic row format is used.

    All values for columns used in indexes are stored in fixed format in the first block of the row, while the following columns are handled with DRF.

    This imposes two restrictions on tables:

    * the order of the fields, and therefore\n\n* the minimum size of the block used in the table.\n
    "},{"location":"improved-memory-engine.html#ordering-of-columns","title":"Ordering of columns","text":"

    The columns used in fixed format must be defined before the dynamic ones in the CREATE TABLE statement. If this requirement is not met, the engine will not be able to add blocks to the set for these fields and they will be treated as fixed.

    "},{"location":"improved-memory-engine.html#minimum-block-size","title":"Minimum block size","text":"

    The block size has to be big enough to store all fixed-length information in the first block. If not, the CREATE TABLE or ALTER TABLE statements will fail (see below).

    "},{"location":"improved-memory-engine.html#limitations","title":"Limitations","text":"

    MyISAM tables are still used for query optimizer internal temporary tables where MEMORY tables could now be used instead: for temporary tables containing large VARCHAR, BLOB, and TEXT columns.

    "},{"location":"improved-memory-engine.html#setting-row-format","title":"Setting row format","text":"

    Taking the restrictions into account, the Improved MEMORY Storage Engine chooses DRF over FRF at the moment of creating the table according to the following criteria:

    * There is an implicit request of the user in the column types **OR**\n\n* There is an explicit request of the user **AND** the overhead incurred by `DRF` is beneficial.\n
    "},{"location":"improved-memory-engine.html#implicit-request","title":"Implicit request","text":"

    An implicit request is assumed when there is at least one BLOB or TEXT column in the table definition. If there are no such columns and no relevant option is given, the engine chooses FRF.

    For example, this will yield the use of the dynamic format:

    mysql> CREATE TABLE t1 (f1 VARCHAR(32), f2 TEXT, PRIMARY KEY (f1)) ENGINE=HEAP;\n

    While this will not:

    mysql> CREATE TABLE t1 (f1 VARCHAR(16), f2 VARCHAR(16), PRIMARY KEY (f1)) ENGINE=HEAP;\n
    "},{"location":"improved-memory-engine.html#explicit-request","title":"Explicit request","text":"

    The explicit request is set with one of the following options in the CREATE TABLE statement:

    * `KEY_BLOCK_SIZE = <value>`: requests DRF with the specified block size (in bytes)\n

    Despite its name, the KEY_BLOCK_SIZE option refers to the block size used to store data rather than indexes. The reason for this is that an existing CREATE TABLE option is reused to avoid introducing new ones.

    The Improved MEMORY Engine checks whether the specified block size is large enough to keep all key column values. If it is too small, table creation will abort with an error.
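
    For instance, in a hypothetical sketch, a block size of 8 bytes cannot hold a VARCHAR(32) key column, so a statement along these lines is rejected (the exact error text may vary):

    mysql> CREATE TABLE t2 (f1 VARCHAR(32), f2 TEXT, PRIMARY KEY (f1)) KEY_BLOCK_SIZE=8 ENGINE=HEAP; -- fails: block too small for the key columns\n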

    When DRF is requested explicitly and there are no BLOB or TEXT columns in the table definition, the Improved MEMORY Engine checks whether using the dynamic format provides any space-saving benefits compared to the fixed one:

    * if the fixed row length is less than the dynamic block size (plus the dynamic row overhead, which is platform dependent) **OR**\n\n* there are no variable-length columns in the table, or `VARCHAR` fields are declared with a length of 31 or less,\n

    the engine reverts to the fixed format, as it is more space efficient in such cases. The row format used by the engine can be checked using SHOW TABLE STATUS.

    "},{"location":"improved-memory-engine.html#examples","title":"Examples","text":"

    On a 32-bit platform:

    mysql> CREATE TABLE t1 (f1 VARCHAR(32), f2 VARCHAR(32), f3 VARCHAR(32), f4 VARCHAR(32), PRIMARY KEY (f1)) KEY_BLOCK_SIZE=124 ENGINE=HEAP;\n\nmysql> SHOW TABLE STATUS LIKE 't1'; \n
    Expected output
    Name  Engine  Version    Rows Avg_row_length  Data_length     Max_data_length Index_length    Data_free       Auto_increment  Create_time     Update_time     Check_time      Collation       Checksum        Create_options  Comment\nt1    MEMORY  10         X    0       X       0       0       NULL    NULL    NULL    NULL    latin1_swedish_ci       NULL    row_format=DYNAMIC KEY_BLOCK_SIZE=124\n

    On a 64-bit platform:

    mysql> CREATE TABLE t1 (f1 VARCHAR(32), f2 VARCHAR(32), f3 VARCHAR(32), f4 VARCHAR(32), PRIMARY KEY (f1)) KEY_BLOCK_SIZE=124 ENGINE=HEAP;\n
    mysql> SHOW TABLE STATUS LIKE 't1';\n

    Expected output
    Name  Engine  Version    Rows Avg_row_length  Data_length     Max_data_length Index_length    Data_free       Auto_increment  Create_time     Update_time     Check_time      Collation       Checksum        Create_options  Comment\nt1    MEMORY  10         X    0       X       0       0       NULL    NULL    NULL    NULL    latin1_swedish_ci       NULL    KEY_BLOCK_SIZE=124\n
    "},{"location":"improved-memory-engine.html#implementation-details","title":"Implementation details","text":"

    MySQL MEMORY tables keep data in arrays of fixed-size chunks. These chunks are organized into two groups of HP_BLOCK structures:

    • group1 contains indexes, with one HP_BLOCK per key (part of HP_KEYDEF),

    • group2 contains record data, with a single HP_BLOCK for all records.

    While columns used in indexes are usually small, other columns in the table may need to accommodate larger data. Typically, larger data is placed into VARCHAR or BLOB columns.

    The Improved MEMORY Engine implements the concept of dataspace, HP_DATASPACE, which incorporates the HP_BLOCK structures for the record data, adding more information for managing variable-sized records.

    Variable-size records are stored in multiple \u201cchunks\u201d, which means that a single record of data (a database \u201crow\u201d) can consist of multiple chunks organized into one \u201cset\u201d, contained in HP_BLOCK structures.

    In variable-size format, one record is represented by one or more chunks, depending on the actual data, while in fixed-size mode one record is always represented by one chunk. The index structures always point to the first chunk in the chunkset.

    Variable-size records are necessary only in the presence of variable-size columns. The Improved Memory Engine looks for BLOB or VARCHAR columns with a declared length of 32 or more. If no such columns are found, the table is switched to the fixed-size format. To use the variable-size format, you should always put such columns at the end of the table definition.

    Whenever data is being inserted or updated in the table, the Improved Memory Engine will calculate how many chunks are necessary.

    For INSERT operations, the engine only allocates new chunksets in the recordspace. For UPDATE operations it will modify the length of the existing chunkset if necessary, unlinking unnecessary chunks at the end, or allocating and adding more if a larger length is needed.

    When writing data to chunks or copying data back to a record, fixed-size columns are copied in their full format, while VARCHAR and BLOB columns are copied based on their actual length, skipping any NULL values.

    When allocating a new chunkset of N chunks, the engine tries to allocate the chunks one by one, linking them as they are allocated. For a single chunk, it first attempts to reuse a deleted (freed) chunk. If no free chunks are available, it tries to allocate a new area inside an HP_BLOCK.

    When freeing chunks, the engine will place them at the front of a free list in the dataspace, each one containing a reference to the previously freed chunk.

    The allocation and contents of the actual chunks vary between fixed and variable-size modes:

    • Format of a fixed-size chunk:

      • uchar[]: with sizeof = chunk_dataspace_length, but at least sizeof(uchar*) bytes. Keeps the actual data or a pointer to the next deleted chunk, where chunk_dataspace_length equals the full record length

      • uchar: status field (1 means \u201cin use\u201d, 0 means \u201cdeleted\u201d)

    • Format of a variable-size chunk:

      • uchar[]: with sizeof = chunk_dataspace_length, but at least sizeof(uchar*) bytes. Keeps the actual data or a pointer to the next deleted chunk, where chunk_dataspace_length is set according to the table\u2019s key_block_size

      • uchar\\* * Pointer to the next chunk in this chunkset, or NULL for the last chunk

      • uchar: status field (1 means \u201cfirst\u201d, 0 means \u201cdeleted\u201d, 2 means \u201clinked\u201d)

    The total chunk length is always aligned to the next multiple of sizeof(uchar*).

    See also

    Dynamic row format for MEMORY tables

    "},{"location":"improved-slow-query-log.html","title":"Improved slow query log","text":"

    This feature adds microsecond time resolution and additional statistics to the slow query log output. It lets you turn the slow query log on or off at runtime, adds logging for the replica SQL thread, and adds fine-grained control over what and how much to log into the slow query log.

    You can use the Percona Toolkit pt-query-digest tool to aggregate similar queries together and report on those that consume the most execution time.

    "},{"location":"improved-slow-query-log.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"improved-slow-query-log.html#other-information","title":"Other information","text":""},{"location":"improved-slow-query-log.html#changes-to-the-log-format","title":"Changes to the log format","text":"

    The feature adds more information to the slow log output.

    Expected output
    # Time: 130601  8:01:06.058915\n# User@Host: root[root] @ localhost []  Id:    42\n# Schema: imdb  Last_errno: 0  Killed: 0\n# Query_time: 7.725616  Lock_time: 0.000328  Rows_sent: 4  Rows_examined: 1543720  Rows_affected: 0\n# Bytes_sent: 272  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0\n# Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No\n# Filesort: No  Filesort_on_disk: No  Merge_passes: 0\nSET timestamp=1370073666;\nSELECT id,title,production_year FROM title WHERE title = 'Bambi';\n

    Another example (log_slow_verbosity = profiling):

    Expected output
    # Time: 130601  8:03:20.700441\n# User@Host: root[root] @ localhost []  Id:    43\n# Schema: imdb  Last_errno: 0  Killed: 0\n# Query_time: 7.815071  Lock_time: 0.000261  Rows_sent: 4  Rows_examined: 1543720  Rows_affected: 0\n# Bytes_sent: 272\n# Profile_starting: 0.000125 Profile_starting_cpu: 0.000120\nProfile_checking_permissions: 0.000021 Profile_checking_permissions_cpu: 0.000021\nProfile_Opening_tables: 0.000049 Profile_Opening_tables_cpu: 0.000048 Profile_init: 0.000048\nProfile_init_cpu: 0.000049 Profile_System_lock: 0.000049 Profile_System_lock_cpu: 0.000048\nProfile_optimizing: 0.000024 Profile_optimizing_cpu: 0.000024 Profile_statistics: 0.000036 \nProfile_statistics_cpu: 0.000037 Profile_preparing: 0.000029 Profile_preparing_cpu: 0.000029\nProfile_executing: 0.000012 Profile_executing_cpu: 0.000012 Profile_Sending_data: 7.814583\nProfile_Sending_data_cpu: 7.811634 Profile_end: 0.000013 Profile_end_cpu: 0.000012\nProfile_query_end: 0.000014 Profile_query_end_cpu: 0.000014 Profile_closing_tables: 0.000023\nProfile_closing_tables_cpu: 0.000023 Profile_freeing_items: 0.000051\nProfile_freeing_items_cpu: 0.000050 Profile_logging_slow_query: 0.000006\nProfile_logging_slow_query_cpu: 0.000006\n# Profile_total: 7.815085 Profile_total_cpu: 7.812127\nSET timestamp=1370073800;\nSELECT id,title,production_year FROM title WHERE title = 'Bambi';\n

    Notice that the Killed: keyword is followed by zero when the query completes successfully. It is followed by a number other than zero if the query is unsuccessful:

    Killed Numeric Code Exception 0 NOT_KILLED 1 KILL_BAD_DATA 1053 ER_SERVER_SHUTDOWN (see MySQL Documentation) 1317 ER_QUERY_INTERRUPTED (see MySQL Documentation) 3024 ER_QUERY_TIMEOUT (see MySQL Documentation) Any other number KILLED_NO_VALUE (Catches all other cases)"},{"location":"improved-slow-query-log.html#connection-and-schema-identifier","title":"Connection and Schema Identifier","text":"

    Each slow log entry now contains a connection identifier so you can trace all the queries from a single connection. This identifier is the same value shown in the Id column in SHOW FULL PROCESSLIST or returned from the CONNECTION_ID() function.

    Each entry also contains a schema name to trace all the queries for a particular schema.

    Expected output
    # Id: 43  Schema: imdb\n
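
    To confirm which client a log entry belongs to, compare the identifier with the current connection; a minimal sketch:

    mysql> SELECT CONNECTION_ID(); -- returns the same value as the Id field in the slow log entry\n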
    "},{"location":"improved-slow-query-log.html#microsecond-time-resolution-and-extra-row-information","title":"Microsecond time resolution and extra row information","text":"

    The microsecond time resolution and extra row information are the original functionality of the \u2018microslow\u2019 feature. Query_time and Lock_time are logged with microsecond resolution.

    The feature also adds information about how many rows were examined for SELECT queries and how many were analyzed and affected for UPDATE, DELETE, and INSERT queries.

    Expected output
    # Query_time: 0.962742  Lock_time: 0.000202  Rows_sent: 4  Rows_examined: 1543719  Rows_affected: 0\n

    Values and context:

    • Rows_examined: Number of rows scanned - SELECT

    • Rows_affected: Number of rows changed - UPDATE, DELETE, INSERT

    "},{"location":"improved-slow-query-log.html#memory-footprint","title":"Memory footprint","text":"

    The feature provides information about the number of bytes sent for the result of the query and the number of temporary tables created for its execution, differentiated by whether they were created in memory or on disk, along with the total number of bytes used by them.

    Expected output
    # Bytes_sent: 8053  Tmp_tables: 1  Tmp_disk_tables: 0  Tmp_table_sizes: 950528\n

    Values and context:

    • Bytes_sent: The number of bytes sent for the result of the query

    • Tmp_tables: Number of temporary tables created in memory for the query

    • Tmp_disk_tables: Number of temporary tables created on disk for the query

    • Tmp_table_sizes: Total size in bytes of all temporary tables used in the query

    "},{"location":"improved-slow-query-log.html#query-plan-information","title":"Query plan information","text":"

    The database can execute a query using different methods:

    • Using indexes

    • Scanning the entire table

    • Creating temporary tables

    You can usually see these details by running EXPLAIN on the query. This feature now lets you see the most important facts about the execution directly in the log file.

    Expected output
    # Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No\n# Filesort: No  Filesort_on_disk: No  Merge_passes: 0\n

    The values and their meanings are documented with the log_slow_filter option.

    "},{"location":"improved-slow-query-log.html#innodb-usage-information","title":"InnoDB usage information","text":"

    The final part of the output is the InnoDB usage statistics. MySQL currently shows many per-session statistics for operations with SHOW SESSION STATUS, but that does not include those of InnoDB, which are always global and shared by all threads. This feature lets you see those values for a given query.

    Expected output
    #   InnoDB_IO_r_ops: 6415  InnoDB_IO_r_bytes: 105103360  InnoDB_IO_r_wait: 0.001279\n#   InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000\n#   InnoDB_pages_distinct: 6430\n
    Value Description InnoDB_IO_r_ops Counts the planned page read operations. The actual number may differ due to asynchronous operations. InnoDB_IO_r_bytes Similar to InnoDB_IO_r_ops but measures the operations in bytes instead of counts. InnoDB_IO_r_wait Shows how long (in seconds) InnoDB took to read data from storage. InnoDB_rec_lock_wait Indicates how long (in seconds) the query waited for row locks. InnoDB_queue_wait Measures how long (in seconds) the query waited to enter the InnoDB queue or waited inside the queue for execution. InnoDB_pages_distinct Estimates the number of unique pages the query accessed. It uses a small hash array to represent the entire buffer pool, which may lead to inaccuracy for queries accessing many pages.

    If the query did not use InnoDB tables, that information is written into the log instead of the above statistics.

    "},{"location":"in-place-upgrade-guide.html","title":"Percona Server for MySQL in-place upgrade guide: from 5.7 to 8.0","text":"

    Important

    An in-place upgrade is not recommended. Use a replication upgrade.

    An in-place upgrade involves shutting down the 5.7 server and replacing the server binaries or packages with new ones. The new server version can then be started on the existing data directory. If the new version is earlier than 8.0.16, you should run mysql_upgrade. Note that the server should be configured to perform a slow shutdown by setting innodb_fast_shutdown=0 before shutdown. While an in-place upgrade may not be suitable for all environments, especially environments with many variables to consider, the upgrade should work in most cases.
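
    A minimal sketch of the pre-upgrade steps, run from the mysql client on the 5.7 server (assuming the account has the SHUTDOWN privilege):

    mysql> SET GLOBAL innodb_fast_shutdown = 0; -- configure a slow shutdown\nmysql> SHUTDOWN; -- stop the server cleanly before replacing the binaries\n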

    The benefits are:

    • Less additional infrastructure cost compared to a new environment, but nodes must be tested.
    • An upgrade can be completed over weeks with cool-down periods between reader node upgrades.
    • Requires a failover of production traffic, and for minimal downtime you must have good high-availability tools.

    Before you start the upgrade process, it is recommended to make a full backup of your database. Copy the database configuration file, for example, my.cnf, to another directory to save it.

    Warning

    Do not upgrade from 5.7 to 8.0 on a crashed instance. If the server instance has crashed, run the crash recovery before proceeding with the upgrade.

    The encrypt-binlog variable is removed, and the related command-line option --encrypt-binlog is not supported. It is important to remove the encrypt-binlog variable from your configuration file before you attempt to upgrade, either from another release in the Percona Server for MySQL 8.0 series or from Percona Server for MySQL 5.7. Otherwise, the server fails to boot and reports an unknown variable error.

    The implemented binary log file encryption is compatible with the older format: encrypted binary log files from a previous version of the MySQL 8.0 series or the Percona Server for MySQL series are supported.

    You can select one of the following ways to upgrade Percona Server for MySQL from 5.7 to 8.0:

    • Upgrade with the Percona repositories

    • Upgrade from systems that use the MyRocks or TokuDB Storage Engine and Partitioned Tables

    • Upgrade with standalone packages

    "},{"location":"index-info-schema-tables.html","title":"Index of INFORMATION_SCHEMA tables","text":"

    This is a list of the INFORMATION_SCHEMA tables that exist in Percona Server for MySQL with XtraDB. The entry for each table points to the page in the documentation where the item is described.

    • INFORMATION_SCHEMA.CLIENT_STATISTICS

    • INFORMATION_SCHEMA.GLOBAL_TEMPORARY_TABLES

    • INFORMATION_SCHEMA.INDEX_STATISTICS

    • PROCFS

    • INFORMATION_SCHEMA.QUERY_RESPONSE_TIME

    • INFORMATION_SCHEMA.TABLE_STATISTICS

    • INFORMATION_SCHEMA.TEMPORARY_TABLES

    • THREAD_STATISTICS

    • INFORMATION_SCHEMA.USER_STATISTICS

    • XTRADB_INTERNAL_HASH_TABLES

    • XTRADB_READ_VIEW

    • INFORMATION_SCHEMA.XTRADB_RSEG

    • INFORMATION_SCHEMA.XTRADB_ZIP_DICT

    • INFORMATION_SCHEMA.XTRADB_ZIP_DICT_COLS

    "},{"location":"information-schema-tables.html","title":"MyRocks Information Schema tables","text":"

    When you install the MyRocks plugin for MySQL, the Information Schema is extended to include the following tables:

    "},{"location":"information-schema-tables.html#rocksdb_global_info","title":"ROCKSDB_GLOBAL_INFO","text":""},{"location":"information-schema-tables.html#columns","title":"Columns","text":"Column Name Type TYPE varchar(513) NAME varchar(513) VALUE varchar(513)"},{"location":"information-schema-tables.html#rocksdb_cfstats","title":"ROCKSDB_CFSTATS","text":""},{"location":"information-schema-tables.html#columns_1","title":"Columns","text":"Column Name Type CF_NAME varchar(193) STAT_TYPE varchar(193) VALUE bigint(8)"},{"location":"information-schema-tables.html#rocksdb_trx","title":"ROCKSDB_TRX","text":"

    This table stores mappings of RocksDB transaction identifiers to MySQL client identifiers to enable associating a RocksDB transaction with a MySQL client operation.

    "},{"location":"information-schema-tables.html#columns_2","title":"Columns","text":"Column Name Type TRANSACTION_ID bigint STATE varchar(193) NAME varchar(193) WRITE_COUNT bigint LOCK_COUNT bigint TIMEOUT_SEC int WAITING_KEY varchar(513) WAITING_COLUMN_FAMILY_ID int IS_REPLICATION int SKIP_TRX_API int READ_ONLY int HAS_DEADLOCK_DETECTION int NUM_ONGOING_BULKLOAD int THREAD_ID int QUERY varchar(193)"},{"location":"information-schema-tables.html#rocksdb_cf_options","title":"ROCKSDB_CF_OPTIONS","text":""},{"location":"information-schema-tables.html#columns_3","title":"Columns","text":"Column Name Type CF_NAME varchar(193) OPTION_TYPE varchar(193) VALUE varchar(193)"},{"location":"information-schema-tables.html#rocksdb_active_compaction_stats","title":"ROCKSDB_ACTIVE_COMPACTION_STATS","text":""},{"location":"information-schema-tables.html#columns_4","title":"Columns","text":"Column Name Type THREAD_ID bigint CF_NAME varchar(193) INPUT_FILES varchar(513) OUTPUT_FILES varchar(513) COMPACTION_REASON varchar(513)"},{"location":"information-schema-tables.html#rocksdb_compaction_history","title":"ROCKSDB_COMPACTION_HISTORY","text":""},{"location":"information-schema-tables.html#columns_5","title":"Columns","text":"Column Name Type THREAD_ID bigint CF_NAME varchar(513) INPUT_LEVEL integer OUTPUT_LEVEL integer INPUT_FILES varchar(513) OUTPUT_FILES varchar(513) COMPACTION_REASON varchar(513) START_TIMESTAMP bigint END_TIMESTAMP bigint"},{"location":"information-schema-tables.html#rocksdb_compaction_stats","title":"ROCKSDB_COMPACTION_STATS","text":""},{"location":"information-schema-tables.html#columns_6","title":"Columns","text":"Column Name Type CF_NAME varchar(193) LEVEL varchar(513) TYPE varchar(513) VALUE double"},{"location":"information-schema-tables.html#rocksdb_dbstats","title":"ROCKSDB_DBSTATS","text":""},{"location":"information-schema-tables.html#columns_7","title":"Columns","text":"Column Name Type STAT_TYPE varchar(193) VALUE bigint(8)"},{"location":"information-schema-tables.html#rocksdb_ddl","title":"ROCKSDB_DDL","text":""},{"location":"information-schema-tables.html#columns_8","title":"Columns","text":"Column Name Type TABLE_SCHEMA varchar(193) TABLE_NAME varchar(193) PARTITION_NAME varchar(193) INDEX_NAME varchar(193) COLUMN_FAMILY int(4) INDEX_NUMBER int(4) INDEX_TYPE smallint(2) KV_FORMAT_VERSION smallint(2) TTL_DURATION bigint(8) INDEX_FLAGS bigint(8) CF varchar(193) AUTO_INCREMENT bigint(8) unsigned"},{"location":"information-schema-tables.html#rocksdb_index_file_map","title":"ROCKSDB_INDEX_FILE_MAP","text":""},{"location":"information-schema-tables.html#columns_9","title":"Columns","text":"Column Name Type COLUMN_FAMILY int(4) INDEX_NUMBER int(4) SST_NAME varchar(193) NUM_ROWS bigint(8) DATA_SIZE bigint(8) ENTRY_DELETES bigint(8) ENTRY_SINGLEDELETES bigint(8) ENTRY_MERGES bigint(8) ENTRY_OTHERS bigint(8) DISTINCT_KEYS_PREFIX varchar(400)"},{"location":"information-schema-tables.html#rocksdb_live_files_metadata","title":"ROCKSDB_LIVE_FILES_METADATA","text":"Column Name Type CF_NAME varchar(193) LEVEL varchar(513) NAME varchar(513) DB_PATH varchar(513) FILE_NUMBER bigint FILE_TYPE varchar(193) SIZE bigint RELATIVE_FILENAME varchar(193) DIRECTORY varchar(513) TEMPERATURE varchar(193) FILE_CHECKSUM varchar(513) FILE_CHECKSUM_FUNC_NAME varchar(193) SMALLEST_SEQNO bigint LARGEST_SEQNO bigint SMALLEST_KEY varchar(513) LARGEST_KEY varchar(513) NUM_READS_SAMPLED bigint BEING_COMPACTED tinyint NUM_ENTRIES bigint NUM_DELETIONS bigint OLDEST_BLOB_FILE_NUMBER bigint OLDEST_ANCESTER_TIME 
bigint FILE_CREATION_TIME bigint"},{"location":"information-schema-tables.html#rocksdb_locks","title":"ROCKSDB_LOCKS","text":"

    This table contains the set of locks granted to MyRocks transactions.

    "},{"location":"information-schema-tables.html#columns_10","title":"Columns","text":"Column Name Type COLUMN_FAMILY_ID int(4) TRANSACTION_ID bigint KEY varchar(513) MODE varchar(32)"},{"location":"information-schema-tables.html#rocksdb_perf_context","title":"ROCKSDB_PERF_CONTEXT","text":""},{"location":"information-schema-tables.html#columns_11","title":"Columns","text":"Column Name Type TABLE_SCHEMA varchar(193) TABLE_NAME varchar(193) PARTITION_NAME varchar(193) STAT_TYPE varchar(193) VALUE bigint(8)"},{"location":"information-schema-tables.html#rocksdb_perf_context_global","title":"ROCKSDB_PERF_CONTEXT_GLOBAL","text":""},{"location":"information-schema-tables.html#columns_12","title":"Columns","text":"Column Name Type STAT_TYPE varchar(193) VALUE bigint(8)"},{"location":"information-schema-tables.html#rocksdb_deadlock","title":"ROCKSDB_DEADLOCK","text":"

    This table records information about deadlocks.

    "},{"location":"information-schema-tables.html#columns_13","title":"Columns","text":"Column Name Type DEADLOCK_ID bigint(8) TRANSACTION_ID bigint(8) CF_NAME varchar(193) WAITING_KEY varchar(513) LOCK_TYPE varchar(193) INDEX_NAME varchar(193) TABLE_NAME varchar(193) ROLLED_BACK bigint(8)"},{"location":"innodb-corrupt-table-action.html","title":"Handle corrupted tables","text":"

    When a server subsystem tries to access a corrupted table, the server may crash. If this outcome is not desirable, set the new innodb_corrupt_table_action system variable to a value that allows the ongoing operation to continue without crashing the server.

    The server error log registers attempts to access corrupted table pages.

    "},{"location":"innodb-corrupt-table-action.html#interacting-with-the-innodb_force_recovery-variable","title":"Interacting with the innodb_force_recovery variable","text":"

    The innodb_corrupt_table_action variable may work in conjunction with the innodb_force_recovery variable, which considerably reduces the effect of InnoDB subsystems running in the background.

    If the innodb_force_recovery option is less than 4, corrupted pages are lost, but the server may continue to run when the innodb_corrupt_table_action variable has a non-default value.

    For more information about the innodb_force_recovery variable, see Forcing InnoDB Recovery from the MySQL Reference Manual.

    This feature adds a new system variable.

    "},{"location":"innodb-corrupt-table-action.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"innodb-corrupt-table-action.html#system-variables","title":"System variables","text":""},{"location":"innodb-corrupt-table-action.html#innodb_corrupt_table_action","title":"innodb_corrupt_table_action","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type ULONG Default assert Range assert, warn, salvage
    • Enabling innodb_file_per_table and using the assert value creates an assertion failure which causes XtraDB to intentionally crash the server. This action is expected when detecting corrupted data in a single-table tablespace.

    • Enabling innodb_file_per_table and using the warn value causes XtraDB to report the table as corrupt instead of crashing the server. Detecting the file as corrupt also disables the file I/O for that data file, except for the deletion operation.

    • Enabling innodb_file_per_table and using the salvage value causes XtraDB to allow read access to the corrupted tablespace but ignores any corrupted pages.
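
    Because the variable is dynamic, the behavior can be relaxed at runtime; a minimal sketch:

    mysql> SET GLOBAL innodb_corrupt_table_action = 'warn'; -- report corruption instead of crashing the server\n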

    "},{"location":"innodb-expanded-fast-index-creation.html","title":"Expanded fast index creation","text":"

    Important

    This feature is a tech preview. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    Percona has implemented several changes related to MySQL\u2019s fast index creation feature. Fast index creation was implemented in MySQL as a way to speed up the process of adding or dropping indexes on tables with many rows.

    This feature implements a session variable that enables extended fast index creation. Besides optimizing DDL directly, expand_fast_index_creation may also optimize index access for subsequent DML statements because using it results in much less fragmented indexes.

    "},{"location":"innodb-expanded-fast-index-creation.html#the-mysqldump-command","title":"The mysqldump command","text":"

    A new option, --innodb-optimize-keys, was implemented in mysqldump. It changes the way InnoDB tables are dumped, so that secondary and foreign keys are created after loading the data, thus taking advantage of fast index creation. More specifically:

    • KEY, UNIQUE KEY, and CONSTRAINT clauses are omitted from CREATE TABLE statements corresponding to InnoDB tables.

    • An additional ALTER TABLE is issued after dumping the data, in order to create the previously omitted keys.

    "},{"location":"innodb-expanded-fast-index-creation.html#alter-table","title":"ALTER TABLE","text":"

    When ALTER TABLE requires a table copy, secondary keys are now dropped and recreated later, after copying the data. The following restrictions apply:

    • Only non-unique keys can be involved in this optimization.

    • If the table contains foreign keys, or a foreign key is being added as a part of the current ALTER TABLE statement, the optimization is disabled for all keys.

    "},{"location":"innodb-expanded-fast-index-creation.html#optimize-table","title":"OPTIMIZE TABLE","text":"

    Internally, OPTIMIZE TABLE is mapped to ALTER TABLE ... ENGINE=innodb for InnoDB tables. As a consequence, it now also benefits from fast index creation, with the same restrictions as for ALTER TABLE.

    "},{"location":"innodb-expanded-fast-index-creation.html#caveats","title":"Caveats","text":"

    InnoDB fast index creation uses temporary files in tmpdir for all indexes being created. So make sure you have enough tmpdir space when using expand_fast_index_creation. It is a session variable, so you can temporarily switch it off if you are short on tmpdir space and/or don\u2019t want this optimization to be used for a specific table.
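
    Because it is a session variable, the optimization can be scoped to a single operation; a minimal sketch (the table name t1 is illustrative):

    mysql> SET SESSION expand_fast_index_creation = ON;\nmysql> ALTER TABLE t1 ENGINE=InnoDB; -- the table copy recreates secondary keys after the data is copied\nmysql> SET SESSION expand_fast_index_creation = OFF;\n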

    There\u2019s also a number of cases when this optimization is not applicable:

    • UNIQUE indexes in ALTER TABLE are ignored to enforce uniqueness where necessary when copying the data to a temporary table;

    • ALTER TABLE and OPTIMIZE TABLE always process tables containing foreign keys as if expand_fast_index_creation is OFF to avoid dropping keys that are part of a FOREIGN KEY constraint;

    • mysqldump --innodb-optimize-keys ignores foreign keys because InnoDB requires a full table rebuild on foreign key changes. So adding them back with a separate ALTER TABLE after restoring the data from a dump would actually make the restore slower;

    • mysqldump --innodb-optimize-keys ignores indexes on AUTO_INCREMENT columns, because they must be indexed, so it is impossible to temporarily drop the corresponding index;

    • mysqldump --innodb-optimize-keys ignores the first UNIQUE index on non-nullable columns when the table has no PRIMARY KEY defined, because in this case InnoDB picks such an index as the clustered one.

    "},{"location":"innodb-expanded-fast-index-creation.html#system-variables","title":"System variables","text":""},{"location":"innodb-expanded-fast-index-creation.html#expand_fast_index_creation","title":"expand_fast_index_creation","text":"Option Description Command Line: Yes Config file No Scope: Local/Global Dynamic: Yes Data type Boolean Default value ON/OFF

    See also

    Improved InnoDB fast index creation

    Thinking about running OPTIMIZE on your InnoDB Table? Stop!

    "},{"location":"innodb-fragmentation-count.html","title":"InnoDB page fragmentation counters","text":"

    InnoDB page fragmentation is caused by random insertions into or deletions from a secondary index. This means that the physical ordering of the index pages on disk is not the same as the index ordering of the records on the pages. As a consequence, some pages take considerably more space, and queries that require a full table scan can take a long time to finish.

    To provide more information about the InnoDB page fragmentation Percona Server for MySQL now provides the following counters as status variables: Innodb_scan_pages_contiguous, Innodb_scan_pages_disjointed, Innodb_scan_data_size, Innodb_scan_deleted_recs_size, and Innodb_scan_pages_total_seek_distance.

    "},{"location":"innodb-fragmentation-count.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7
    "},{"location":"innodb-fragmentation-count.html#status-variables","title":"Status variables","text":""},{"location":"innodb-fragmentation-count.html#innodb_scan_pages_contiguous","title":"Innodb_scan_pages_contiguous","text":"Option Description Scope Session Data type Numeric

    This variable shows the number of contiguous page reads inside a query.

    "},{"location":"innodb-fragmentation-count.html#innodb_scan_pages_disjointed","title":"Innodb_scan_pages_disjointed","text":"Option Description Scope Session Data type Numeric

    This variable shows the number of disjointed page reads inside a query.

    "},{"location":"innodb-fragmentation-count.html#innodb_scan_data_size","title":"Innodb_scan_data_size","text":"Option Description Scope Session Data type Numeric

    This variable shows the size of data in all InnoDB pages read inside a query (in bytes) - calculated as the sum of page_get_data_size(page) for every page scanned.

    "},{"location":"innodb-fragmentation-count.html#innodb_scan_deleted_recs_size","title":"Innodb_scan_deleted_recs_size","text":"Option Description Scope Session Data type Numeric

    This variable shows the size of deleted records (marked as deleted in page_delete_rec_list_end()) in all InnoDB pages read inside a query (in bytes) - calculated as the sum of page_header_get_field(page, PAGE_GARBAGE) for every page scanned.

    "},{"location":"innodb-fragmentation-count.html#innodb_scan_pages_total_seek_distance","title":"Innodb_scan_pages_total_seek_distance","text":"Option Description Scope Session Data type Numeric

    This variable shows the total seek distance when moving between pages.
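
    The counters have session scope, so they can be inspected right after running a query; a minimal sketch (the table name t1 is illustrative):

    mysql> SELECT COUNT(*) FROM t1; -- any query that scans InnoDB pages\nmysql> SHOW SESSION STATUS LIKE 'Innodb_scan%';\n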

    "},{"location":"innodb-fragmentation-count.html#related-reading","title":"Related reading","text":"
    • InnoDB: look after fragmentation

    • Defragmenting a Table

    "},{"location":"innodb-fts-improvements.html","title":"InnoDB full-text search improvements","text":""},{"location":"innodb-fts-improvements.html#ignoring-stopword-list","title":"Ignoring stopword list","text":"

    By default, all Full-Text Search indexes check the stopwords list, to see if any indexed elements contain words on that list.

    Using this list for n-gram indexes isn\u2019t always suitable: for example, any item that contains a or i is ignored. Another word that cannot be searched is east; it finds no matches because a is on the FTS stopword list.

    To resolve this issue, Percona Server for MySQL has the innodb_ft_ignore_stopwords variable to control whether InnoDB Full-Text Search should ignore the stopword list.

    Although this variable is introduced to resolve n-gram issues, it affects all Full-Text Search indexes as well.

    Being a stopword doesn\u2019t just mean being one of the predefined words from the list: tokens shorter than innodb_ft_min_token_size or longer than innodb_ft_max_token_size are also considered stopwords. Therefore, when innodb_ft_ignore_stopwords is set to ON, even for non-n-gram FTS, innodb_ft_min_token_size and innodb_ft_max_token_size are ignored, meaning that very short and very long words are also indexed.

    "},{"location":"innodb-fts-improvements.html#system-variables","title":"System variables","text":""},{"location":"innodb-fts-improvements.html#innodb_ft_ignore_stopwords","title":"innodb_ft_ignore_stopwords","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default OFF

    When enabled, this variable instructs the InnoDB Full-Text Search parser to ignore the stopword list when building or updating an FTS index.
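
    A minimal sketch of enabling it globally before building or rebuilding an FTS index:

    mysql> SET GLOBAL innodb_ft_ignore_stopwords = ON;\n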

    "},{"location":"innodb-io.html","title":"Improved InnoDB I/O scalability","text":"

    Because InnoDB is a complex storage engine, it must be configured properly to perform at its best. Some points are not configurable in standard InnoDB. The goal of this feature is to provide a more exhaustive set of options for XtraDB.

    "},{"location":"innodb-io.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.

    Note

    In Percona Server for MySQL 8.0.13-4, max checkpoint age was removed because the information is identical to the log capacity.

    "},{"location":"innodb-io.html#system-variables","title":"System variables","text":""},{"location":"innodb-io.html#innodb_flush_method","title":"innodb_flush_method","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Enumeration Default NULL Allowed values fsync, O_DSYNC, O_DIRECT, O_DIRECT_NO_FSYNC, littlesync, nosync

    The following values are allowed:

    * `fdatasync`: use `fsync()` to flush data, log, and parallel doublewrite files.\n\n* `O_SYNC`: use `O_SYNC` to open and flush the log and parallel doublewrite files; use `fsync()` to flush the data files. Do not use `fsync()` to flush the parallel doublewrite file.\n\n* `O_DIRECT`: use `O_DIRECT` to open the data files and the `fsync()` system call to flush data, log, and parallel doublewrite files.\n\n* `O_DIRECT_NO_FSYNC`: use `O_DIRECT` to open the data files and parallel doublewrite files, but do not use the `fsync()` system call to flush the data files, log files, and parallel doublewrite files. Do not use this option for the *XFS* file system.\n

    Note

    On an ext4 filesystem, set innodb_log_write_ahead_size to match the filesystem\u2019s write-ahead block size. Setting this variable avoids unaligned AIO/DIO warnings.

    Starting from Percona Server for MySQL 8.0.20-11, the innodb_flush_method affects doublewrite buffers exactly the same as in MySQL 8.0.20.

    "},{"location":"innodb-io.html#status-variables","title":"Status variables","text":"

    The following information has been added to SHOW ENGINE INNODB STATUS to confirm the checkpointing activity:

    The max checkpoint age\nThe current checkpoint age target\nThe current age of the oldest page modification which has not been flushed to disk yet.\nThe current age of the last checkpoint\n...\n---\nLOG\n---\nLog sequence number 0 1059494372\nLog flushed up to   0 1059494372\nLast checkpoint at  0 1055251010\nMax checkpoint age  162361775\nCheckpoint age target 104630090\nModified age        4092465\nCheckpoint age      4243362\n0 pending log writes, 0 pending chkp writes\n...\n
    "},{"location":"innodb-show-status.html","title":"Extended show engine InnoDB status","text":"

    This feature reorganizes the output of SHOW ENGINE INNODB STATUS to improve readability and to provide additional information. The innodb_show_locks_held variable controls the number of locks held to print for each InnoDB transaction.

    This feature modified the SHOW ENGINE INNODB STATUS command as follows:

    • Added extended information about InnoDB internal hash table sizes (in bytes) in the BUFFER POOL AND MEMORY section; also added buffer pool size in bytes.

    • Added additional LOG section information.

    "},{"location":"innodb-show-status.html#other-information","title":"Other information","text":"
    • Author / Origin: Baron Schwartz, https://lists.mysql.com/internals/35174
    "},{"location":"innodb-show-status.html#system-variables","title":"System variables","text":""},{"location":"innodb-show-status.html#innodb_show_locks_held","title":"innodb_show_locks_held","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type ULONG Default 10 Range 0 - 1000

    Specifies the number of locks held to print for each InnoDB transaction in SHOW ENGINE INNODB STATUS.
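
    A minimal sketch of raising the limit and inspecting the result (the value 100 is illustrative):

    mysql> SET GLOBAL innodb_show_locks_held = 100;\nmysql> SHOW ENGINE INNODB STATUS\G\n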

    "},{"location":"innodb-show-status.html#innodb_print_lock_wait_timeout_info","title":"innodb_print_lock_wait_timeout_info","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Boolean Default OFF

    Makes InnoDB write information about all lock wait timeout errors into the log file.

    This makes it possible to find out details about the failed transaction and, most importantly, the blocking transaction. The query string can be obtained from the EVENTS_STATEMENTS_CURRENT table, based on the PROCESSLIST_ID field, which corresponds to the thread_id in the log output.

    Taking into account that the blocking transaction is often a multi-statement one, the following query can be used to obtain the blocking thread\u2019s statement history:

    SELECT s.SQL_TEXT FROM performance_schema.events_statements_history s\nINNER JOIN performance_schema.threads t ON t.THREAD_ID = s.THREAD_ID\nWHERE t.PROCESSLIST_ID = %d\nUNION\nSELECT s.SQL_TEXT FROM performance_schema.events_statements_current s\nINNER JOIN performance_schema.threads t ON t.THREAD_ID = s.THREAD_ID\nWHERE t.PROCESSLIST_ID = %d;\n

    The PROCESSLIST_ID in this example is exactly the thread id from the error log output.
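
    For example, if the error log reported thread id 15934 (a hypothetical value), substitute it for %d:

    SELECT s.SQL_TEXT FROM performance_schema.events_statements_history s\nINNER JOIN performance_schema.threads t ON t.THREAD_ID = s.THREAD_ID\nWHERE t.PROCESSLIST_ID = 15934\nUNION\nSELECT s.SQL_TEXT FROM performance_schema.events_statements_current s\nINNER JOIN performance_schema.threads t ON t.THREAD_ID = s.THREAD_ID\nWHERE t.PROCESSLIST_ID = 15934;\n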

    "},{"location":"innodb-show-status.html#status-variables","title":"Status variables","text":"

    The status variables here contain information available in the output of SHOW ENGINE INNODB STATUS, organized by the sections SHOW ENGINE INNODB STATUS displays. If you are familiar with the output of SHOW ENGINE INNODB STATUS, you will probably already recognize the information these variables contain.

    "},{"location":"innodb-show-status.html#background-thread","title":"BACKGROUND THREAD","text":"

    The following variables contain information in the BACKGROUND THREAD section of the output from SHOW ENGINE INNODB STATUS.

    Expected output
    -----------------\nBACKGROUND THREAD\n-----------------\nsrv_master_thread loops: 1 srv_active, 0 srv_shutdown, 11844 srv_idle\nsrv_master_thread log flush and writes: 11844\n

    InnoDB has a source thread that performs background tasks depending on the server state, once per second. If the server is under workload, the source thread performs background table drops, performs change buffer merges adaptively, flushes the redo log to disk, evicts tables from the dictionary cache if needed to satisfy its size limit, and makes a checkpoint. If the server is idle, it performs background table drops, flushes and/or checkpoints the redo log if needed due to the checkpoint age, performs change buffer merges at full I/O capacity, evicts tables from the dictionary cache if needed, and makes a checkpoint.

    "},{"location":"innodb-show-status.html#innodb_master_thread_active_loops","title":"Innodb_master_thread_active_loops","text":"Option Description Scope Global Data type Numeric

    This variable shows the number of times the above one-second loop was executed for active server states.

    "},{"location":"innodb-show-status.html#innodb_master_thread_idle_loops","title":"Innodb_master_thread_idle_loops","text":"Option Description Scope Global Data type Numeric

    This variable shows the number of times the above one-second loop was executed for idle server states.

    "},{"location":"innodb-show-status.html#innodb_background_log_sync","title":"Innodb_background_log_sync","text":"Option Description Scope Global Data type Numeric

    This variable shows the number of times the InnoDB source thread has written and flushed the redo log.

    "},{"location":"innodb-show-status.html#semaphores","title":"SEMAPHORES","text":"

    The following variables contain information in the SEMAPHORES section of the output from SHOW ENGINE INNODB STATUS. An example of that output is:

    Expected output
    ----------\nSEMAPHORES\n----------\nOS WAIT ARRAY INFO: reservation count 9664, signal count 11182\nMutex spin waits 20599, rounds 223821, OS waits 4479\nRW-shared spins 5155, OS waits 1678; RW-excl spins 5632, OS waits 2592\nSpin rounds per wait: 10.87 mutex, 15.01 RW-shared, 27.19 RW-excl\n
    "},{"location":"innodb-show-status.html#insert-buffer-and-adaptive-hash-index","title":"INSERT BUFFER AND ADAPTIVE HASH INDEX","text":"

    The following variables contain information in the INSERT BUFFER AND ADAPTIVE HASH INDEX section of the output from SHOW ENGINE INNODB STATUS. An example of that output is:

    Expected output
    -------------------------------------\nINSERT BUFFER AND ADAPTIVE HASH INDEX\n-------------------------------------\nIbuf: size 1, free list len 6089, seg size 6091,\n44497 inserts, 44497 merged recs, 8734 merges\n0.00 hash searches/s, 0.00 non-hash searches/s\n
    "},{"location":"innodb-show-status.html#innodb_ibuf_free_list","title":"Innodb_ibuf_free_list","text":"Option Description Scope Global Data type Numeric"},{"location":"innodb-show-status.html#innodb_ibuf_segment_size","title":"Innodb_ibuf_segment_size","text":"Option Description Scope Global Data type Numeric"},{"location":"innodb-show-status.html#log","title":"LOG","text":"

    The following variables contain information in the LOG section of the output from SHOW ENGINE INNODB STATUS. An example of that output is:

    Expected output
    LOG\n---\nLog sequence number 10145937666\nLog flushed up to   10145937666\nPages flushed up to 10145937666\nLast checkpoint at  10145937666\nMax checkpoint age    80826164\nCheckpoint age target 78300347\nModified age          0\nCheckpoint age        0\n0 pending log writes, 0 pending chkp writes\n9 log i/o's done, 0.00 log i/o's/second\nLog tracking enabled\nLog tracked up to   10145937666\nMax tracked LSN age 80826164\n
    "},{"location":"innodb-show-status.html#innodb_lsn_current","title":"Innodb_lsn_current","text":"Option Description Scope Global Data type Numeric

    This variable shows the current log sequence number.

    "},{"location":"innodb-show-status.html#innodb_lsn_flushed","title":"Innodb_lsn_flushed","text":"Option Description Scope Global Data type Numeric

    This variable shows the current maximum LSN that has been written and flushed to disk.

    "},{"location":"innodb-show-status.html#innodb_lsn_last_checkpoint","title":"Innodb_lsn_last_checkpoint","text":"Option Description Scope Global Data type Numeric

    This variable shows the LSN of the latest completed checkpoint.

    "},{"location":"innodb-show-status.html#innodb_checkpoint_age","title":"Innodb_checkpoint_age","text":"Option Description Scope Global Data type Numeric

    This variable shows the current InnoDB checkpoint age, i.e., the difference between the current LSN and the LSN of the last completed checkpoint.
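
    The relationship can be verified directly, since Innodb_checkpoint_age equals Innodb_lsn_current minus Innodb_lsn_last_checkpoint; a minimal sketch:

    mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('Innodb_lsn_current', 'Innodb_lsn_last_checkpoint', 'Innodb_checkpoint_age');\n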

    "},{"location":"innodb-show-status.html#innodb_checkpoint_max_age","title":"Innodb_checkpoint_max_age","text":"Option Description Scope Global Data type Numeric

    This variable shows the maximum allowed checkpoint age above which the redo log is close to full and a checkpoint must happen before any further redo log writes.

    Note

    This variable was removed in Percona Server for MySQL 8.0.13-4 due to a change in MySQL. The variable is identical to log capacity.

    "},{"location":"innodb-show-status.html#buffer-pool-and-memory","title":"BUFFER POOL AND MEMORY","text":"

    The following variables contain information in the BUFFER POOL AND MEMORY section of the output from SHOW ENGINE INNODB STATUS. An example of that output is:

    Expected output
    ----------------------\nBUFFER POOL AND MEMORY\n----------------------\nTotal memory allocated 137363456; in additional pool allocated 0\nTotal memory allocated by read views 88\nInternal hash tables (constant factor + variable factor)\n    Adaptive hash index 2266736         (2213368 + 53368)\n    Page hash           139112 (buffer pool 0 only)\n    Dictionary cache    729463  (554768 + 174695)\n    File system         824800  (812272 + 12528)\n    Lock system         333248  (332872 + 376)\n    Recovery system     0       (0 + 0)\nDictionary memory allocated 174695\nBuffer pool size        8191\nBuffer pool size, bytes 134201344\nFree buffers            7481\nDatabase pages          707\nOld database pages      280\nModified db pages       0\nPending reads 0\nPending writes: LRU 0, flush list 0 single page 0\nPages made young 0, not young 0\n0.00 youngs/s, 0.00 non-youngs/s\nPages read 707, created 0, written 1\n0.00 reads/s, 0.00 creates/s, 0.00 writes/s\nNo buffer pool page gets since the last printout\nPages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s\nLRU len: 707, unzip_LRU len: 0\n
    "},{"location":"innodb-show-status.html#innodb_mem_adaptive_hash","title":"Innodb_mem_adaptive_hash","text":"Option Description Scope Global Data type Numeric

    This variable shows the current size, in bytes, of the adaptive hash index.

    "},{"location":"innodb-show-status.html#innodb_mem_dictionary","title":"Innodb_mem_dictionary","text":"Option Description Scope Global Data type Numeric

    This variable shows the current size, in bytes, of the InnoDB in-memory data dictionary info.

    "},{"location":"innodb-show-status.html#innodb_mem_total","title":"Innodb_mem_total","text":"Option Description Scope Global Data type Numeric

    This variable shows the total amount of memory, in bytes, InnoDB has allocated in the process heap memory.

    "},{"location":"innodb-show-status.html#innodb_buffer_pool_pages_lru_flushed","title":"Innodb_buffer_pool_pages_LRU_flushed","text":"Option Description Scope Global Data type Numeric

    This variable shows the total number of buffer pool pages that have been flushed from the LRU list, i.e., pages that were too old and had to be flushed to make room in the buffer pool to read in new data pages.

    "},{"location":"innodb-show-status.html#innodb_buffer_pool_pages_made_not_young","title":"Innodb_buffer_pool_pages_made_not_young","text":"Option Description Scope Global Data type Numeric

    This variable shows the number of times a buffer pool page was not marked as accessed recently in the LRU list because of the innodb_old_blocks_time variable setting.

    "},{"location":"innodb-show-status.html#innodb_buffer_pool_pages_made_young","title":"Innodb_buffer_pool_pages_made_young","text":"Option Description Scope Global Data type Numeric

    This variable shows the number of times a buffer pool page was moved to the young end of the LRU list due to being accessed, to prevent its eviction from the buffer pool.

    "},{"location":"innodb-show-status.html#innodb_buffer_pool_pages_old","title":"Innodb_buffer_pool_pages_old","text":"Option Description Scope Global Data type Numeric

    This variable shows the total number of buffer pool pages which are considered to be old according to the Making the Buffer Pool Scan Resistant manual page.
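These counters are regular status variables, so you can also check them without parsing the full SHOW ENGINE INNODB STATUS output; for example:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';\n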

    "},{"location":"innodb-show-status.html#transactions","title":"TRANSACTIONS","text":"

The following variables contain information in the TRANSACTIONS section of the output from SHOW ENGINE INNODB STATUS. An example of that output is:

    Expected output
    ------------\nTRANSACTIONS\n------------\nTrx id counter F561FD\nPurge done for trx's n:o < F561EB undo n:o < 0\nHistory list length 19\nLIST OF TRANSACTIONS FOR EACH SESSION:\n---TRANSACTION 0, not started, process no 993, OS thread id 140213152634640\nmysql thread id 15933, query id 32109 localhost root\nshow innodb status\n---TRANSACTION F561FC, ACTIVE 29 sec, process no 993, OS thread id 140213152769808 updating or deleting\nmysql tables in use 1, locked 1\n
    "},{"location":"innodb-show-status.html#innodb_max_trx_id","title":"Innodb_max_trx_id","text":"Option Description Scope Global Data type Numeric

This variable shows the next free transaction ID.
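This counter is also exposed as a status variable; for example:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_max_trx_id';\n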

    "},{"location":"innodb-show-status.html#innodb_oldest_view_low_limit_trx_id","title":"Innodb_oldest_view_low_limit_trx_id","text":"Option Description Scope Global Data type Numeric

This variable shows the highest transaction ID, above which the current oldest open read view does not see any transaction changes. The value is zero if there is no open view.

    "},{"location":"innodb-show-status.html#innodb_purge_trx_id","title":"Innodb_purge_trx_id","text":"Option Description Scope Global Data type Numeric

This variable shows the oldest transaction ID whose records have not been purged yet.

    "},{"location":"innodb-show-status.html#innodb_purge_undo_no","title":"Innodb_purge_undo_no","text":"Option Description Scope Global Data type Numeric"},{"location":"innodb-show-status.html#information_schema-tables","title":"INFORMATION_SCHEMA Tables","text":"

    The following table contains information about the oldest active transaction in the system.

    "},{"location":"innodb-show-status.html#information_schemaxtradb_read_view","title":"INFORMATION_SCHEMA.XTRADB_READ_VIEW","text":"Column Name Description \u2018READ_VIEW_LOW_LIMIT_TRX_NUMBER\u2019 This is the highest transactions number at the time the view was created. \u2018READ_VIEW_UPPER_LIMIT_TRX_ID\u2019 This is the highest transactions ID at the time the view was created. This means that it should not see newer transactions with IDs bigger than or equal to that value. \u2018READ_VIEW_LOW_LIMIT_TRX_ID\u2019 This is the latest committed transaction ID at the time the oldest view was created. This means that it should see all transactions with IDs smaller than or equal to that value.

    Note

    Starting with Percona Server for MySQL 8.0.20-11, in INFORMATION_SCHEMA.XTRADB_READ_VIEW, the data type for the following columns is changed from VARCHAR(18) to BIGINT UNSIGNED:

    • READ_VIEW_LOW_LIMIT_TRX_NUMBER

    • READ_VIEW_UPPER_LIMIT_TRX_ID

• READ_VIEW_LOW_LIMIT_TRX_ID

    The columns contain 64-bit integers, which is too large for VARCHAR(18).
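To inspect the oldest open read view directly, query the table; for example:

mysql> SELECT * FROM INFORMATION_SCHEMA.XTRADB_READ_VIEW;\n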

    The following table contains information about the memory usage for InnoDB/XtraDB hash tables.

    "},{"location":"innodb-show-status.html#information_schemaxtradb_internal_hash_tables","title":"INFORMATION_SCHEMA.XTRADB_INTERNAL_HASH_TABLES","text":"Column Name Description \u2018INTERNAL_HASH_TABLE_NAME\u2019 Hash table name \u2018TOTAL_MEMORY\u2019 Total amount of memory \u2018CONSTANT_MEMORY\u2019 Constant memory \u2018VARIABLE_MEMORY\u2019 Variable memory"},{"location":"innodb-show-status.html#other-reading","title":"Other reading","text":"
    • SHOW INNODB STATUS walk through

    • Table locks in SHOW INNODB STATUS

    "},{"location":"install-audit-log-filter.html","title":"Install the Audit Log Filter","text":"

The plugin_dir system variable defines the plugin library location. If needed, set the plugin_dir variable at server startup.

    When upgrading a MySQL installation, plugins are not automatically upgraded. You may need to manually load the plugin after the MySQL upgrade.

In the share directory, locate the audit_log_filter_linux_install.sql script.

As of Percona Server for MySQL 8.0.34, when you run the script, you can select the database used to store the JSON filter tables.

• If the plugin is loaded, the installation script takes the database name from the audit_log_filter_database variable.
• If the plugin is not loaded but you pass -D db_name to the mysql client when running the installation script, the script uses db_name.
• If the plugin is not loaded and the -D option is not provided, the installation script creates the required tables in the default database, mysql.

    You can also designate a different database with the audit_log_filter_database system variable. The database name cannot be NULL or exceed 64 characters. If the database name is invalid, the audit log filter tables are not found.

    With 8.0.34 and higher, use this command:

$ mysql -u <user> -D <database> -p < audit_log_filter_linux_install.sql\n

    To verify the plugin installation, run the following command:

mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'audit%';\n
    Expected output
    +--------------------+---------------+\n| PLUGIN_NAME        | PLUGIN_STATUS |\n+--------------------+---------------+\n| audit_log_filter   | ACTIVE        |\n+--------------------+---------------+\n

After the installation, you can use the --audit_log_filter option when restarting the server. To prevent the server from starting without the plugin, use --audit_log_filter with either the FORCE or the FORCE_PLUS_PERMANENT value.
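For example, a minimal my.cnf fragment that loads the plugin permanently and stores the filter tables in a dedicated database might look like the following sketch; the database name audit_db is only an illustration:

[mysqld]\nplugin-load-add=audit_log_filter.so\naudit_log_filter=FORCE_PLUS_PERMANENT\naudit_log_filter_database=audit_db\n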

    "},{"location":"install-data-masking-component.html","title":"Install the data masking component","text":"

    Percona Server for MySQL 8.0.34 adds the data masking component. Percona Server for MySQL also has the Data Masking plugin.

    Useful link

    Compare the data masking component and the data masking plugin

Before installing the component, you must uninstall the data masking plugin and all of its functions to avoid conflicts.

    The component has the following parts:

    • A database server system table used to store the terms and dictionaries
    • A component_masking_functions component that contains the loadable functions

    The MASKING_DICTIONARIES_ADMIN privilege may be required by some functions.

    "},{"location":"install-data-masking-component.html#install-the-component","title":"Install the component","text":"

    The following steps install the component:

1. Create the mysql.masking_dictionaries table.

      mysql> CREATE TABLE IF NOT EXISTS\nmysql.masking_dictionaries(\n    Dictionary VARCHAR(256) NOT NULL,\n    Term VARCHAR(256) NOT NULL,\n    UNIQUE INDEX dictionary_term_idx (Dictionary, Term)\n) ENGINE = InnoDB DEFAULT CHARSET=utf8mb4;\n
    2. Install the data masking components and the loadable functions.

      mysql> INSTALL COMPONENT 'file://component_masking_functions';\n
3. The MASKING_DICTIONARIES_ADMIN privilege is required to use the following functions (see the example after this list):

      • masking_dictionary_term_add

      • masking_dictionary_term_remove

      • masking_dictionary_remove

        mysql> GRANT MASKING_DICTIONARIES_ADMIN ON *.* TO <user>;\n
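After the grant, the user can manage dictionary terms. The following is a hedged sketch; the dictionary name us_cities and the term Austin are only illustrations:

mysql> SELECT masking_dictionary_term_add('us_cities', 'Austin');\n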
    "},{"location":"install-data-masking-component.html#useful-links","title":"Useful links","text":"

    Uninstall the data masking component

    Data masking component functions

    "},{"location":"install-data-masking-plugin.html","title":"Install and remove the data masking plugin","text":"

This feature was implemented in Percona Server for MySQL 8.0.17-8.

The Percona Data Masking plugin is a free and Open Source implementation of MySQL's data masking plugin. Data masking provides a set of functions to hide sensitive data with modified content.

    "},{"location":"install-data-masking-plugin.html#install-the-plugin","title":"Install the plugin","text":"

    The following command installs the plugin and the functions:

    INSTALL PLUGIN data_masking SONAME 'data_masking.so';\n
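Once the plugin is installed, its masking functions can be called directly. A brief sketch using mask_inner, which masks the interior of a string while keeping the given margins visible; the sample value is illustrative:

mysql> SELECT mask_inner('555-86-7329', 1, 2);\n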
    "},{"location":"install-data-masking-plugin.html#uninstall-the-plugin","title":"Uninstall the plugin","text":"

    Use the UNINSTALL PLUGIN statement and the DROP FUNCTION statement to disable and uninstall the plugin and then remove the functions.

    UNINSTALL PLUGIN data_masking;\n
    "},{"location":"install-myrocks.html","title":"Percona MyRocks installation guide","text":"

    Percona MyRocks is distributed as a separate package that can be enabled as a plugin for Percona Server for MySQL 8.0 and later versions.

    Note

    File formats across different MyRocks variants may not be compatible. Percona Server for MySQL supports only Percona MyRocks. Migrating from one variant to another requires a logical data dump and reload.

    • Installing Percona MyRocks

    • Removing Percona MyRocks

    "},{"location":"install-myrocks.html#install-percona-myrocks","title":"Install Percona MyRocks","text":"

    It is recommended to install Percona software from official repositories:

    1. Configure Percona repositories as described in Percona Software Repositories Documentation.

    2. Install Percona MyRocks using the corresponding package manager:

      • For Debian or Ubuntu:
      $ sudo apt install percona-server-rocksdb\n

      Note

Review the Installing and configuring Percona Server for MySQL with ZenFS support document for the installation and configuration information.

      • For RHEL or CentOS:
      $ sudo yum install percona-server-rocksdb\n

    After installation, you should see the following output:

    Expected output
* This release of Percona Server is distributed with the RocksDB storage engine.\n* Run the following script to enable the RocksDB storage engine in Percona Server:\n
    $ ps-admin --enable-rocksdb -u <mysql_admin_user> -p[mysql_admin_pass] [-S <socket>] [-h <host> -P <port>]\n
    "},{"location":"install-myrocks.html#enable-myrocks-with-ps-admin","title":"Enable MyRocks with ps-admin","text":"

Run the ps-admin script as the system root user or with sudo, and provide the MySQL root user credentials to properly enable the RocksDB (MyRocks) storage engine:

    $ sudo ps-admin --enable-rocksdb -u root -pPassw0rd\n
    Expected output
    Checking if RocksDB plugin is available for installation ...\nINFO: ha_rocksdb.so library for RocksDB found at /usr/lib64/mysql/plugin/ha_rocksdb.so.\n\nChecking RocksDB engine plugin status...\nINFO: RocksDB engine plugin is not installed.\n\nInstalling RocksDB engine...\nINFO: Successfully installed RocksDB engine plugin.\n

    Note

    When you use the ps-admin script to enable Percona MyRocks, it performs the following:

    • Disables Transparent huge pages

    • Installs and enables the RocksDB plugin

    If the script returns no errors, Percona MyRocks should be successfully enabled on the server. You can verify it as follows:

    mysql> SHOW ENGINES;\n
    Expected output
    +---------+---------+----------------------------------------------------------------------------+--------------+------+------------+\n| Engine  | Support | Comment                                                                    | Transactions | XA   | Savepoints |\n+---------+---------+----------------------------------------------------------------------------+--------------+------+------------+\n| ROCKSDB | YES     | RocksDB storage engine                                                     | YES          | YES  | YES        |\n...\n| InnoDB  | DEFAULT | Percona-XtraDB, Supports transactions, row-level locking, and foreign keys | YES          | YES  | YES        |\n+---------+---------+----------------------------------------------------------------------------+--------------+------+------------+\n10 rows in set (0.00 sec)\n

Note that the RocksDB engine is not set as the default; new tables will still be created using the InnoDB (XtraDB) storage engine. To make the RocksDB storage engine the default, set default-storage-engine=rocksdb in the [mysqld] section of my.cnf and restart Percona Server for MySQL.
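A minimal my.cnf fragment for that change might look like this:

[mysqld]\ndefault-storage-engine=rocksdb\n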

Alternatively, you can add ENGINE=ROCKSDB to the CREATE TABLE statement for every table that you create.
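For example, the following creates a single MyRocks table without changing the default engine; the table definition is illustrative:

mysql> CREATE TABLE t1 (id INT PRIMARY KEY, payload VARCHAR(255)) ENGINE=ROCKSDB;\n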

    "},{"location":"install-myrocks.html#install-myrocks-plugins","title":"Install MyRocks plugins","text":"

    You can install MyRocks manually with a series of INSTALL PLUGIN statements. You must have the INSERT privilege for the mysql.plugin system table.

    The following statements install MyRocks:

    INSTALL PLUGIN ROCKSDB SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_CFSTATS SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_DBSTATS SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_PERF_CONTEXT SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_PERF_CONTEXT_GLOBAL SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_CF_OPTIONS SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_GLOBAL_INFO SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_COMPACTION_HISTORY SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_COMPACTION_STATS SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_ACTIVE_COMPACTION_STATS SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_DDL SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_INDEX_FILE_MAP SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_LOCKS SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_TRX SONAME 'ha_rocksdb.so';\nINSTALL PLUGIN ROCKSDB_DEADLOCK SONAME 'ha_rocksdb.so';\n
    "},{"location":"install-myrocks.html#remove-percona-myrocks","title":"Remove Percona MyRocks","text":"

After you remove Percona MyRocks, tables created using the RocksDB engine cannot be accessed with any other storage engine. If you need this data, alter the tables to another storage engine before removal. For example, to alter the City table to InnoDB, run the following:

    mysql> ALTER TABLE City ENGINE=InnoDB;\n

    To disable and uninstall the RocksDB engine plugins, use the ps-admin script as follows:

    $ sudo ps-admin --disable-rocksdb -u root -pPassw0rd\n
    Expected output
    Checking RocksDB engine plugin status...\nINFO: RocksDB engine plugin is installed.\n\nUninstalling RocksDB engine plugin...\nINFO: Successfully uninstalled RocksDB engine plugin.\n

    After the engine plugins have been uninstalled, remove the Percona MyRocks package:

    • For Debian or Ubuntu:

      $ sudo apt remove percona-server-rocksdb-8.0\n
    • For RHEL or CentOS:

      $ sudo yum remove percona-server-rocksdb-80.x86_64\n

    Finally, remove all the MyRocks Server Variables from the configuration file (my.cnf) and restart Percona Server for MySQL.

    "},{"location":"install-myrocks.html#uninstall-myrocks-plugins","title":"Uninstall MyRocks plugins","text":"

    You can uninstall the plugins for MyRocks. You must have the DELETE privilege for the mysql.plugin system table.

    The following statements remove the MyRocks plugins:

    UNINSTALL PLUGIN ROCKSDB;\nUNINSTALL PLUGIN ROCKSDB_CFSTATS;\nUNINSTALL PLUGIN ROCKSDB_DBSTATS;\nUNINSTALL PLUGIN ROCKSDB_PERF_CONTEXT;\nUNINSTALL PLUGIN ROCKSDB_PERF_CONTEXT_GLOBAL;\nUNINSTALL PLUGIN ROCKSDB_CF_OPTIONS;\nUNINSTALL PLUGIN ROCKSDB_GLOBAL_INFO;\nUNINSTALL PLUGIN ROCKSDB_COMPACTION_HISTORY;\nUNINSTALL PLUGIN ROCKSDB_COMPACTION_STATS;\nUNINSTALL PLUGIN ROCKSDB_ACTIVE_COMPACTION_STATS;\nUNINSTALL PLUGIN ROCKSDB_DDL;\nUNINSTALL PLUGIN ROCKSDB_INDEX_FILE_MAP;\nUNINSTALL PLUGIN ROCKSDB_LOCKS;\nUNINSTALL PLUGIN ROCKSDB_TRX;\nUNINSTALL PLUGIN ROCKSDB_DEADLOCK;\n
    "},{"location":"install-pro.html","title":"Install Percona Server for MySQL Pro","text":"

    Percona Server for MySQL Pro includes the capabilities that are typically requested by large enterprises. Percona Server for MySQL Pro contains packages created and tested by Percona. These packages are supported only for Percona Customers with a subscription.

    Become a Percona Customer

    Review Get more help for ways that we can work with you.

This document provides guidelines on how to install the Pro packages of Percona Server for MySQL from Percona repositories. Check the files in packages built for Percona Server for MySQL Pro.

    "},{"location":"install-pro.html#procedure","title":"Procedure","text":"
1. Request access to the Pro repository from Percona Support. You will receive the client ID and the access token, which you use when downloading the packages.

    2. Configure the repository and install Percona Server for MySQL packages

On Debian and Ubuntu:
      1. Download the Percona gpg key:

        $ wget https://github.com/percona/percona-repositories/raw/main/deb/percona-keyring.gpg \n
      2. Add the Percona gpg key to trusted.gpg.d directory:

        $ sudo cp percona-keyring.gpg /etc/apt/trusted.gpg.d/\n
3. Create the /etc/apt/sources.list.d/psmysql-pro.list configuration file with the following contents, substituting your [CLIENTID] and [TOKEN].

        To get the OPERATING_SYSTEM value, run lsb_release -sc.

        /etc/apt/sources.list.d/psmysql-pro.list
        deb http://repo.percona.com/private/[CLIENTID]-[TOKEN]/ps-80-pro/apt/ OPERATING_SYSTEM main\n
      4. Update the local cache

        $ sudo apt update\n
      5. Install Percona Server for MySQL packages

        $ sudo apt install -y percona-server-server-pro\n

        Install other required packages. Check files in the DEB package built for Percona Server for MySQL 8.0.

On RHEL and derivatives:

1. Create the /etc/yum.repos.d/psmysql-pro.repo configuration file with the following contents, substituting your [CLIENTID] and [TOKEN].

        /etc/yum.repos.d/psmysql-pro.repo
        [ps-8.0-pro]\nname=PS_8.0_PRO\nbaseurl=http://repo.percona.com/private/[CLIENTID]-[TOKEN]/ps-80-pro/yum/release/$releasever/RPMS/x86_64\nenabled=1\ngpgkey = https://repo.percona.com/yum/PERCONA-PACKAGING-KEY\n
      2. Install Percona Server for MySQL packages

        $ sudo yum install -y percona-server-server-pro\n

Install other required packages. Check the files in the RPM package built for Percona Server for MySQL 8.0.

    3. Start the server

      $ sudo systemctl start mysql\n
    "},{"location":"install-pro.html#next-step","title":"Next step","text":"

    Enable the FIPS mode

    "},{"location":"installation.html","title":"Install Percona Server for MySQL","text":"

    Before installing, review the Percona Server for MySQL 8.0 Release notes.

    We gather Telemetry data in the Percona packages and Docker images.

    We recommend using the repositories that Percona provides to simplify the installation process. The percona-release tool makes installing and updating your software and its dependencies easy using your operating system package manager. The Percona Software repositories contain YUM (RPM packages for Red Hat Enterprise Linux and derivatives) and APT (DEB packages for Ubuntu and Debian) for Percona software such as Percona Server for MySQL, Percona XtraBackup, and Percona Toolkit.

    For more information, see Percona Software repositories and the percona-release tool.
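As a sketch, enabling the Percona Server for MySQL 8.0 repository with the percona-release tool typically looks like the following; see the linked documentation for the authoritative steps:

$ sudo percona-release setup ps80\n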

    To get started quickly, use the Quickstart guide. You can find instructions for either Docker or installing with a package manager.

    Review Get more help for ways that we can work with you.

    The following guides describe the installation procedures for using the official Percona Software repositories.

    • Install Percona Server for MySQL on Debian and Ubuntu

    • Install Percona Server for MySQL on Red Hat Enterprise Linux and derivatives

    "},{"location":"installation.html#install-percona-server-for-mysql-pro","title":"Install Percona Server for MySQL Pro","text":"

    Install Percona Server for MySQL Pro

    "},{"location":"installation.html#upgrade-to-percona-server-for-mysql-pro","title":"Upgrade to Percona Server for MySQL Pro","text":"

    If you already use Percona Server for MySQL, you can Upgrade to Percona Server for MySQL Pro

    "},{"location":"jemalloc-profiling.html","title":"Jemalloc memory allocation profiling","text":"

Implemented in Percona Server for MySQL 8.0.25-15, this feature lets the server take advantage of the memory-profiling ability of the jemalloc allocator. This ability provides a method to investigate memory-related issues.

    "},{"location":"jemalloc-profiling.html#requirements","title":"Requirements","text":"

Memory profiling requires jemalloc_detected to be true. This read-only variable returns true if jemalloc with the profiling-enabled option is being used by Percona Server for MySQL.

As root, build jemalloc with the following configure flags:

Option Description --enable-stats Enables the statistics-gathering ability --enable-prof Enables heap profiling and the ability to detect leaks.

Using LD_PRELOAD: build the library, set the MALLOC_CONF environment variable to prof:true, and then use LD_PRELOAD to preload the libjemalloc.so library. The libprocess MemoryProfiler class detects the library automatically and enables the profiling support.

    The following is an example of the required commands:

    ./configure --enable-stats --enable-prof && make && make install\nMALLOC_CONF=prof:true\nLD_PRELOAD=/usr/lib/libjemalloc.so\n
    "},{"location":"jemalloc-profiling.html#use-percona-server-for-mysql-with-jemalloc-with-profiling-enabled","title":"Use Percona Server for MySQL with jemalloc with profiling enabled","text":"

    To detect if jemalloc is set, run the following command:

    SELECT @@jemalloc_detected;\n

    To enable jemalloc profiling in a MySQL client, run the following command:

    set global jemalloc_profiling=on;\n

The malloc_stats_totals table returns the statistics, in bytes, of the memory usage. The query takes no parameters and returns the results as a table.

    The following example commands display this result:

    use performance_schema;\n
    SELECT * FROM malloc_stats_totals;\n
    Expected output
    +----+------------+------------+------------+-------------+------------+\n| id | ALLOCATION | MAPPED     | RESIDENT   | RETAINED    | METADATA   |\n+----+------------+------------+------------+-------------+------------+\n|  1 | 390977528  | 405291008  | 520167424  | 436813824   | 9933744    |\n+----+------------+------------+------------+-------------+------------+\n1 row in set (0.00 sec)\n

The malloc_stats table returns the cumulative totals, in bytes, of several statistics per type of arena. The query takes no parameters and returns the results as a table.

    The following example commands display this result:

    use performance_schema;\n
    mysql> SELECT * FROM malloc_stats ORDER BY TYPE DESC LIMIT 3;\n
    Expected output
+--------+-------------+-------------+-------------+-------------+\n| TYPE   | ALLOCATED   | NMALLOC     | NDALLOC     | NREQUESTS   |\n+--------+-------------+-------------+-------------+-------------+\n| small  | 23578872    | 586156      | 0           | 2649417     |\n| large  | 367382528   | 2218        | 0           | 6355        |\n| huge   | 0           | 0           | 0           | |\n+--------+-------------+-------------+-------------+-------------+\n3 rows in set (0.00 sec)\n
    "},{"location":"jemalloc-profiling.html#dumping-the-profile","title":"Dumping the profile","text":"

    The profiling samples the malloc() calls and stores the sampled stack traces in a separate location in memory. These samples can be dumped into the filesystem. A dump returns a detailed view of the state of the memory.

    The process is global; therefore, only a single concurrent run is available and only the most recent runs are stored on disk.

    Use the following command to create a profile dump file:

    flush memory profile;\n

    The generated memory profile dumps are written to the /tmp directory.

You can analyze the dump files with the jeprof program, which must be installed on the host system in the appropriate path. This program is a Perl script that post-processes the dump files in their raw format. The program has no connection to the jemalloc library, and the version numbers are not required to match.

    To verify the dump, run the following command:

    ls /tmp/jeprof_mysqld*\n/tmp/jeprof_mysqld.1.0.170013202213\njeprof --show_bytes /tmp/jeprof_mysqld.1.0.170013202213 jeprof.*.heap\n

You can also use the memory profile to plot a graph of the memory use. This ability requires that jeprof and dot are available in the path. For the graph to display useful information, the binary file must contain symbol information.

    Run the following command:

jeprof --dot /usr/sbin/mysqld /tmp/jeprof_mysqld.1.0.170013202213 > /tmp/jeprof1.dot\ndot -Tpng /tmp/jeprof1.dot > /tmp/jeprof1.png\n

    Note

An example of an allocation graph.

    "},{"location":"jemalloc-profiling.html#performance_schema-tables","title":"PERFORMANCE_SCHEMA tables","text":"

In 8.0.25-15, the following tables were implemented to retrieve memory allocation statistics for a running instance or to return the cumulative number of allocations requested or returned for a running instance.

    More information about the stats that are returned can be found in jemalloc.

    "},{"location":"jemalloc-profiling.html#malloc_stats_totals","title":"malloc_stats_totals","text":"

    The current stats for allocations. All measurements are in bytes.

Column Name Description ALLOCATED The total amount the application allocated ACTIVE The total amount allocated by the application in active pages. This value is a multiple of the page size and is greater than or equal to the stats.allocated value. The sum does not include allocator metadata pages, stats.arenas.<i>.pdirty, or stats.arenas.<i>.pmuzzy. MAPPED The total amount in chunks that are mapped by the allocator in active extents. This value does not include inactive chunks. The value is at least as large as stats.active and is a multiple of the chunk size. RESIDENT The maximum amount the allocator has mapped in physically resident data pages. All allocator metadata pages and unused dirty pages are included in this value. Pages may not be physically resident if they correspond to demand-zeroed virtual memory that has not yet been touched. This value is a maximum rather than a precise value and is a multiple of the page size. The value is greater than stats.active. RETAINED The amount retained by the virtual memory mappings of the operating system. This value does not include any returned mappings. This type of memory is usually de-committed, untouched, or purged. The value is not associated with physical memory and is excluded from mapped memory statistics. METADATA The total amount dedicated to metadata. This value contains the base allocations used for bootstrap-sensitive allocator metadata structures. Transparent huge pages usage is not included."},{"location":"jemalloc-profiling.html#malloc_stats","title":"malloc_stats","text":"

    The cumulative number of allocations requested or allocations returned for a running instance.

Column Name Description TYPE The type of object: small, large, or huge ALLOCATED The number of bytes currently allocated to the application. NMALLOC The cumulative number of times an allocation was requested from the arena's bins. The number includes times when the allocation satisfied an allocation request or filled the relevant tcache if opt.tcache is enabled. NDALLOC The cumulative number of times an allocation was returned to the arena's bins. The number includes times when the allocation was deallocated or flushed from the relevant tcache if opt.tcache is enabled. NREQUESTS The cumulative number of allocation requests satisfied."},{"location":"jemalloc-profiling.html#system-variables","title":"System variables","text":"

    The following variables have been added:

    "},{"location":"jemalloc-profiling.html#jemalloc_detected","title":"jemalloc_detected","text":"

    Description: This read-only variable returns true if jemalloc with profiling enabled is detected. The following options are required:

    • Jemalloc is installed and compiled with profiling enabled

    • Percona Server for MySQL is configured to use jemalloc by using the environment variable LD_PRELOAD.

    • The environment variable MALLOC_CONF is set to prof:true.

The variable attributes are:

    • Scope: Global

    • Variable Type: Boolean

    • Default Value: false

    "},{"location":"jemalloc-profiling.html#jemalloc_profiling","title":"jemalloc_profiling","text":"

Description: Enables jemalloc profiling. The variable requires jemalloc_detected to be true.

• Command Line: --jemalloc_profiling[=(OFF|ON)]

    • Config File: Yes

    • Scope: Global

    • Dynamic: Yes

    • Variable Type: Boolean

    • Default Value: OFF

    "},{"location":"jemalloc-profiling.html#disable-profiling","title":"Disable profiling","text":"

    To disable jemalloc profiling, in a MySQL client, run the following command:

    set global jemalloc_profiling=off;\n
    "},{"location":"kill-idle-trx.html","title":"Kill idle transactions","text":"

This feature limits the age of idle transactions for all transactional storage engines. The server kills any idle transaction when the specified limit is reached, which prevents users from blocking the InnoDB purge by mistake.
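For example, to kill transactions that have been idle for more than five minutes, set the variable described below:

mysql> SET GLOBAL kill_idle_transaction = 300;\n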

    "},{"location":"kill-idle-trx.html#system-variables","title":"System variables","text":""},{"location":"kill-idle-trx.html#kill_idle_transaction","title":"kill_idle_transaction","text":"Option Description Config file Yes Scope: Global Dynamic: Yes Data type Integer Default value 0 (disabled) Units Seconds"},{"location":"ldap-authentication.html","title":"Using LDAP authentication plugins","text":"

    LDAP (Lightweight Directory Access Protocol) provides an alternative method to access existing directory servers, which maintain information about individuals, groups, and organizations.

    "},{"location":"ldap-authentication.html#version-specific-information","title":"Version specific information","text":"

Percona Server for MySQL 8.0.30-22 implements a SASL-based LDAP authentication plugin. This plugin supports only the SCRAM-SHA-1 SASL mechanism.

    Important

This feature is a tech preview. Before using this feature in production, we recommend that you test it thoroughly in your environment.

    Percona Server for MySQL 8.0.19-10 implements the simple LDAP authentication. The Percona simple LDAP authentication plugin is a free and Open Source implementation of the MySQL Enterprise Simple LDAP authentication plugin.

    "},{"location":"ldap-authentication.html#plugin-names-and-file-names","title":"Plugin names and file names","text":"

The following tables show the plugin names and the file names for simple LDAP authentication and SASL-based LDAP authentication.

Simple LDAP authentication plugin names and library name: Plugin or file Plugin name or file name Server-side plugin authentication_ldap_simple Client-side plugin mysql_clear_password Library file authentication_ldap_simple.so

SASL-based LDAP authentication plugin names and library names: Plugin or file Plugin name or file name Server-side plugin authentication_ldap_sasl Client-side plugin authentication_ldap_sasl_client Library files authentication_ldap_sasl.so authentication_ldap_sasl_client.so"},{"location":"ldap-authentication.html#how-does-the-authentication-work","title":"How does the authentication work","text":"

    The server-side LDAP plugins work only with the specific client-side plugin:

    • The authentication_ldap_simple plugin, on the server, performs the simple LDAP authentication. The client, using mysql_clear_password, connects to the server. The client plugin sends the password to the server as cleartext. For this method, use a secure connection between the client and server.

    • The authentication_ldap_sasl plugin, on the server, performs the SASL-based LDAP authentication. The client must use the authentication_ldap_sasl_client plugin. The method does not send the password to the server in cleartext. The server-side and client-side plugins use Simple Authentication and Security Layer (SASL) to send secure messages within the LDAP protocol.

    For either method, the database server rejects the connection if the client user name and the host name do not match a server account.

If a database server LDAP authentication is successful, the LDAP server searches for an entry. The LDAP server matches the user and authenticates using the LDAP password. If the database server account specifies the LDAP user distinguished name (DN) with the IDENTIFIED WITH <plugin-name> BY '<auth-string>' clause, the LDAP server uses that value and the LDAP password provided by the client. This method fails if the DN and password have incorrect values.

    If the LDAP server finds multiple matches or no match, authentication fails.

If the password is correct, and the LDAP server finds a match, then LDAP authentication succeeds. The LDAP server returns the LDAP entry and the authentication plugin determines the authenticated user's name based on the entry. If the LDAP entry has no group attribute, the plugin returns the client user name as the authenticated name. If the LDAP entry has a group attribute, the plugin returns the group value as the authenticated name.

    The database server compares the client user name to the authenticated user name. If these names are the same, the database server uses the client user name to check for privileges. If the name differs, then the database server looks for an account that matches the authenticated name.

    "},{"location":"ldap-authentication.html#prerequisites-for-authentication","title":"Prerequisites for authentication","text":"

The LDAP authentication plugins require the following:

    • An available LDAP server

    • The LDAP server must contain the LDAP user accounts to be authenticated

    • The OpenLDAP client library must be available on the same system as the plugin

    The SASL-based LDAP authentication additionally requires the following:

    • Configure the LDAP server to communicate with a SASL server

• An available SASL client library on the same system as the client plugin

• Services configured to use the supported SCRAM-SHA-1 SASL mechanism

    "},{"location":"ldap-authentication.html#install-the-plugins","title":"Install the plugins","text":"

    You can use either of the following methods to install the plugins.

    The SASL-based LDAP authentication is available on Percona Server for MySQL 8.0.30-22 and later.

    "},{"location":"ldap-authentication.html#load-the-plugins-at-server-start","title":"Load the plugins at server start","text":"

    Use either of the following methods to load the plugin at server start.

You can load either the simple LDAP authentication plugin or the SASL-based LDAP authentication plugin.

    Add the following statements to your my.cnf file to load simple LDAP authentication:

    [mysqld]\nplugin-load-add=authentication_ldap_simple.so\nauthentication_ldap_simple_server_host=127.0.0.1\nauthentication_ldap_simple_bind_base_dn='dc=percona, dc=com'\n

    Restart the server for the changes to take effect.

    Add the following statements to your my.cnf file to load the SASL-based LDAP authentication:

    [mysqld]\nplugin-load-add=authentication_ldap_sasl.so\nauthentication_ldap_sasl_server_host=127.0.0.1\nauthentication_ldap_sasl_bind_base_dn='dc=percona, dc=com'\n
    "},{"location":"ldap-authentication.html#load-the-plugins-at-runtime","title":"Load the plugins at runtime","text":"

    Install the plugin with the following statements.

To load the simple LDAP authentication plugin:
    mysql> INSTALL PLUGIN authentication_ldap_simple SONAME 'authentication_ldap_simple.so';\n

    To set and persist values at runtime, use the following statements:

    mysql> SET PERSIST authentication_ldap_simple_server_host='127.0.0.1';\nmysql> SET PERSIST authentication_ldap_simple_bind_base_dn='dc=percona, dc=com';\n
To load the SASL-based LDAP authentication plugin:

mysql> INSTALL PLUGIN authentication_ldap_sasl SONAME 'authentication_ldap_sasl.so';\n

    To set and persist values at runtime, use the following statements:

    mysql> SET PERSIST authentication_ldap_sasl_server_host='127.0.0.1';\nmysql> SET PERSIST authentication_ldap_sasl_bind_base_dn='dc=percona, dc=com';\n
    "},{"location":"ldap-authentication.html#create-a-user-using-simple-ldap-authentication","title":"Create a user using simple LDAP authentication","text":"

    There are several methods to add or modify a user.

You can use the authentication_ldap_simple plugin or provide an authentication string in simple LDAP.

    In the CREATE USER statement or the ALTER USER statement, for simple LDAP authentication, you can specify the authentication_ldap_simple plugin in the IDENTIFIED WITH clause:

    mysql> CREATE USER ... IDENTIFIED WITH authentication_ldap_simple;\n

    Using the IDENTIFIED WITH clause, the database server assigns the specified plugin.

If you provide the optional authentication string clause, 'cn,ou,dc,dc' in the example, the string is stored along with the password.

    mysql> CREATE USER ... IDENTIFIED WITH authentication_ldap_simple BY 'cn=[user name],ou=[organization unit],dc=[domain component],dc=com'\n

    Unless the authentication_ldap_simple_group_role_mapping variable is used, creating a user with an authentication string does not use the following system variables:

    • authentication_ldap_simple_bind_base_dn

    • authentication_ldap_simple_bind_root_dn

    • authentication_ldap_simple_bind_root_pwd

    • authentication_ldap_simple_user_search_attr

    • authentication_ldap_simple_group_search_attr

Creating the user with IDENTIFIED WITH authentication_ldap_simple uses the variables.

Creating the user with the authentication_ldap_simple_group_role_mapping variable also uses the authentication_ldap_simple_bind_root_dn and authentication_ldap_simple_bind_root_pwd variables.

    "},{"location":"ldap-authentication.html#create-a-user-using-sasl-based-ldap-authentication","title":"Create a user using SASL-based LDAP authentication","text":"

    There are several methods to add or modify a user.

You can use the authentication_ldap_sasl plugin or provide an authentication string in SASL-based LDAP.

    For SASL-based LDAP authentication, in the CREATE USER statement or the ALTER USER statement, you can specify the authentication_ldap_sasl plugin:

    mysql> CREATE USER ... IDENTIFIED WITH authentication_ldap_sasl;\n

If you provide the optional authentication string clause, 'cn,ou,dc,dc' in the example, the string is stored along with the password.

    mysql> CREATE USER ... IDENTIFIED WITH authentication_ldap_sasl BY 'cn=[user name],ou=[organization unit],dc=[domain component],dc=com'\n

    Unless the authentication_ldap_sasl_group_role_mapping variable is used, creating a user with an authentication string does not use the following system variables:

    • authentication_ldap_sasl_bind_base_dn

    • authentication_ldap_sasl_bind_root_dn

    • authentication_ldap_sasl_bind_root_pwd

    • authentication_ldap_sasl_user_search_attr

    • authentication_ldap_sasl_group_search_attr

Creating the user with IDENTIFIED WITH authentication_ldap_sasl uses the variables.

Creating the user with the authentication_ldap_sasl_group_role_mapping variable also uses the authentication_ldap_sasl_bind_root_dn and authentication_ldap_sasl_bind_root_pwd variables.

    "},{"location":"ldap-authentication.html#examples","title":"Examples","text":"

    The following sections are examples of using simple LDAP authentication and SASL-based LDAP authentication.

    For the purposes of this example, we use the following LDAP user:

    uid=ldapuser,ou=testusers,dc=percona,dc=com\n
The examples cover simple LDAP authentication and SASL-based LDAP authentication.

    The following example configures an LDAP user and connects to the database server.

    Create a database server account for ldapuser with the following statement:

    mysql> CREATE USER 'ldapuser'@'localhost' IDENTIFIED WITH authentication_ldap_simple BY 'uid=ldapuser,ou=testusers,dc=percona,dc=com';\n

    The authentication string does not include the LDAP password. This password must be provided by the client user when they connect.

$ mysql --user=ldapuser --password --enable-cleartext-plugin\n

    The user enters the ldapuser password. The client sends the password as cleartext, which is necessary when using a server-side LDAP library without SASL. The following actions may minimize the risk:

    • Require that the database server clients explicitly enable the mysql_clear_password plugin with --enable-cleartext-plugin.
    • Require that the database server clients connect to the database server using an encrypted connection

The following example configures an LDAP user and connects to the database server.

    Create a database server account for ldapuser with the following statement:

mysql> CREATE USER 'ldapuser'@'localhost' IDENTIFIED WITH authentication_ldap_sasl BY 'uid=ldapuser,ou=testusers,dc=percona,dc=com';\n

    The authentication string does not include the LDAP password. This password must be provided by the client user when they connect.

Clients connect to the database server by providing the database server user name and LDAP password:

$ mysql --user=ldapuser --password\n

    The authentication is similar to the authentication method used by simple LDAP authentication, except that the client and the database server SASL LDAP plugins use SASL messages. These messages are secure within the LDAP protocol.

    "},{"location":"ldap-authentication.html#uninstall-the-plugins","title":"Uninstall the plugins","text":"

    If you installed either plugin at server startup, remove those options from the my.cnf file, remove any startup options that set LDAP system variables, and restart the server.

You can uninstall the simple LDAP authentication plugin or the SASL-based LDAP authentication plugin.

    If you installed the plugins at runtime, run the following statements:

    mysql> UNINSTALL PLUGIN authentication_ldap_simple;\n

If you used SET PERSIST, use RESET PERSIST to remove the settings.

    If you installed the plugins at runtime, run the following statements:

    mysql> UNINSTALL PLUGIN authentication_ldap_sasl;\n

If you used SET PERSIST, use RESET PERSIST to remove the settings.

    "},{"location":"ldap-system-variables.html","title":"LDAP authentication plugin system variables","text":""},{"location":"ldap-system-variables.html#authentication-system-variables","title":"Authentication system variables","text":"

Percona Server for MySQL 8.0.30-22 adds the SASL-based LDAP variables and the fallback server variables for simple LDAP and SASL-based LDAP.

    Important

This feature is a tech preview. Before using this feature in production, we recommend that you test it thoroughly in your environment.

    The installation adds the following variables:

Variable name Description authentication_ldap_sasl_bind_base_dn Base distinguished name authentication_ldap_sasl_bind_root_dn Root distinguished name authentication_ldap_sasl_bind_root_pwd Password for the root distinguished name authentication_ldap_sasl_ca_path Absolute path of the certificate authority authentication_ldap_sasl_fallback_server_host If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server authentication_ldap_sasl_fallback_server_port The port number for the fallback server authentication_ldap_sasl_group_role_mapping A list of LDAP group names - MySQL role pairs authentication_ldap_sasl_group_search_attr Name of the attribute that specifies the group names in the LDAP directory entries authentication_ldap_sasl_group_search_filter Custom group search filter authentication_ldap_sasl_init_pool_size Initial size of the connection pool to the LDAP server authentication_ldap_sasl_log_status Logging level authentication_ldap_sasl_max_pool_size Maximum size of the pool of connections to the LDAP server authentication_ldap_sasl_server_host LDAP server host authentication_ldap_sasl_server_port LDAP server TCP/IP port number authentication_ldap_sasl_ssl If plugin connections to the LDAP server use the SSL protocol (ldaps://) authentication_ldap_sasl_tls If plugin connections to the LDAP server are secured with STARTTLS (ldap://) authentication_ldap_sasl_user_search_attr Name of the attribute that specifies user names in the LDAP directory entries authentication_ldap_simple_bind_base_dn Base distinguished name authentication_ldap_simple_bind_root_dn Root distinguished name authentication_ldap_simple_bind_root_pwd Password for the root distinguished name authentication_ldap_simple_ca_path Absolute path of the certificate authority authentication_ldap_simple_fallback_server_host If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server authentication_ldap_simple_fallback_server_port The port number for the fallback server authentication_ldap_simple_group_role_mapping A list of LDAP group names - MySQL role pairs authentication_ldap_simple_group_search_attr Name of the attribute that specifies the group names in the LDAP directory entries authentication_ldap_simple_group_search_filter Custom group search filter authentication_ldap_simple_init_pool_size Initial size of the connection pool to the LDAP server authentication_ldap_simple_log_status Logging level authentication_ldap_simple_max_pool_size Maximum size of the pool of connections to the LDAP server authentication_ldap_simple_server_host LDAP server host authentication_ldap_simple_server_port LDAP server TCP/IP port number authentication_ldap_simple_ssl If plugin connections to the LDAP server use the SSL protocol (ldaps://) authentication_ldap_simple_tls If plugin connections to the LDAP server are secured with STARTTLS (ldap://) authentication_ldap_simple_user_search_attr Name of the attribute that specifies user names in the LDAP directory entries

    The following variables are described in detail:

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_bind_base_dn","title":"authentication_ldap_sasl_bind_base_dn","text":"Option Description Command-line \u2013authentication-ldap-sasl-bind-base-dn=value Scope Global Dynamic Yes Data type String Default NULL

    The base distinguished name (DN) for SASL-based LDAP authentication. You can limit the search scope by using the variable as the base of the search.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_bind_root_dn","title":"authentication_ldap_sasl_bind_root_dn","text":"Option Description Command-line \u2013authentication-ldap-sasl-bind-root-dn=value Scope Global Dynamic Yes Data type String Default NULL

The root distinguished name (DN) used to authenticate SASL-based LDAP. When performing a search, this variable is used with authentication_ldap_sasl_bind_root_pwd as the authenticating credentials to the LDAP server.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_bind_root_pwd","title":"authentication_ldap_sasl_bind_root_pwd","text":"Option Description Command-line \u2013authentication-ldap-sasl-bind-root-pwd=value Scope Global Dynamic Yes Data type String Default NULL

The root password used to authenticate against the SASL-based LDAP server. This variable is used with authentication_ldap_sasl_bind_root_dn.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_ca_path","title":"authentication_ldap_sasl_ca_path","text":"Option Description Command-line \u2013authentication-ldap-sasl-ca_path=value Scope Global Dynamic Yes Data type String Default NULL

The certificate authority's absolute path used to verify the LDAP certificate.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_fallback_server_host","title":"authentication_ldap_sasl_fallback_server_host","text":"Option Description Command-line \u2013authentication-ldap-sasl-fallback-server-host Scope Global Dynamic Yes Type Sting Default NULL

    Use with authentication_ldap_sasl_fallback_server_port.

    If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server and authenticate using that server.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_fallback_server_port","title":"authentication_ldap_sasl_fallback_server_port","text":"Option Description Command-line \u2013authentication-ldap-sasl-fallback-server-port Scope Global Dynamic Yes Type Integer Default NULL

    Use with authentication_ldap_sasl_fallback_server_host.

    If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server and authenticate using that server.

    If the fallback server host has a value, and the fallback port is 0, users can specify multiple fallback servers.

Use this format to specify multiple fallback servers, for example: authentication_ldap_sasl_fallback_server_host="ldap(s)://host:port,ldap(s)://host2:port2".

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_group_role_mapping","title":"authentication_ldap_sasl_group_role_mapping","text":"Option Description Command-line \u2013authentication-ldap-sasl-group-role-mapping=value Scope Global Dynamic Yes Data type String Default Null

    When an LDAP user logs in, the server checks if the LDAP user is a member of the specified group. If the user is, then the server automatically grants the database server roles to the user.

The variable has this format: <ldap_group>=<mysql_role>,<ldap_group2>=<mysql_role2>,...
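A hedged example, assuming an LDAP group dbadmins and an existing MySQL role db_admin (both names are illustrative):

mysql> SET GLOBAL authentication_ldap_sasl_group_role_mapping = 'dbadmins=db_admin';\n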

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_group_search_attr","title":"authentication_ldap_sasl_group_search_attr","text":"Option Description Command-line \u2013authentication-ldap-sasl-group-search-attr=value Scope Global Dynamic Yes Data type String Default cn

    The attribute name that specifies group names in the LDAP directory entries for SASL-based LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_group_search_filter","title":"authentication_ldap_sasl_group_search_filter","text":"Option Description Command-line \u2013authentication-ldap-sasl-group-search-filter=value Scope Global Dynamic Yes Data type String Default (|(&(objectClass=posixGroup)(memberUid=%s))(&(objectClass=group)(member=%s)))

    The custom group search filter for SASL-based LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_init_pool_size","title":"authentication_ldap_sasl_init_pool_size","text":"Option Description Command-line \u2013authentication-ldap-sasl-init-pool-size=value Scope Global Dynamic Yes Data type Integer Default 10 Minimum value 0 Maximum value 32767 Unit connections

    The initial size of the connection pool to the LDAP server for SASL-based LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_log_status","title":"authentication_ldap_sasl_log_status","text":"Option Description Command-line \u2013authentication-ldap-sasl-log-status=value Scope Global Dynamic Yes Data type Integer Default 1 Minimum value 1 Maximum value 6

    The logging level for messages written to the error log for SASL-based LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_max_pool_size","title":"authentication_ldap_sasl_max_pool_size","text":"Option Description Command-line \u2013authentication-ldap-sasl-max-pool-size=value Scope Global Dynamic Yes Data type Integer Default 1000 Minimum value 0 Maximum value 32767 Unit connections

    The maximum connection pool size to the LDAP server in SASL-based LDAP authentication. The variable is used with authentication_ldap_sasl_init_pool_size.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_server_host","title":"authentication_ldap_sasl_server_host","text":"Option Description Command-line \u2013authentication-ldap-sasl-server-host=value Scope Global Dynamic Yes Data type String Default NULL

    The LDAP server host used for SASL-based LDAP authentication. The LDAP server host can be an IP address or a host name.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_server_port","title":"authentication_ldap_sasl_server_port","text":"Option Description Command-line \u2013authentication-ldap-sasl-server-port=value Scope Global Dynamic Yes Data type Integer Default 389 Minimum value 1 Maximum value 32376

    The LDAP server TCP/IP port number used for SASL-based LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_ssl","title":"authentication_ldap_sasl_ssl","text":"Option Description Command-line \u2013authentication-ldap-sasl-ssl=value Scope Global Dynamic Yes Data type Boolean Default OFF

    If this variable is enabled, the plugin connects to the server with SSL.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_tls","title":"authentication_ldap_sasl_tls","text":"Option Description Command-line \u2013authentication-ldap-sasl-tls=value Scope Global Dynamic Yes Data type Boolean Default OFF

    If this variable is enabled, the plugin connects to the server with TLS.

    "},{"location":"ldap-system-variables.html#authentication_ldap_sasl_user_search_attr","title":"authentication_ldap_sasl_user_search_attr","text":"Option Description Command-line \u2013authentication-ldap-sasl-user-search-attr=value Scope Global Dynamic Yes Data type String Default uid

    The attribute name that specifies the user names in LDAP directory entries in SASL-based LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_bind_base_dn","title":"authentication_ldap_simple_bind_base_dn","text":"Option Description Command-line \u2013authentication-ldap-simple-bind-base-dn=value Scope Global Dynamic Yes Data type String Default NULL

    The base distinguished name (DN) for simple LDAP authentication. You can limit the search scope by using the variable as the base of the search.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_bind_root_dn","title":"authentication_ldap_simple_bind_root_dn","text":"Option Description Command-line \u2013authentication-ldap-simple-bind-root-dn=value Scope Global Dynamic Yes Data type String Default NULL

    The root distinguished name (DN) used to authenticate simple LDAP. When performing a search, this variable is used with authentication_ldap_simple_bind_root_pwd as the authenticating credentials to the LDAP server.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_bind_root_pwd","title":"authentication_ldap_simple_bind_root_pwd","text":"Option Description Command-line \u2013authentication-ldap-simple-bind-root-pwd=value Scope Global Dynamic Yes Data type String Default NULL

The root password used to authenticate against the simple LDAP server. This variable is used with authentication_ldap_simple_bind_root_dn.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_ca_path","title":"authentication_ldap_simple_ca_path","text":"Option Description Command-line \u2013authentication-ldap-simple-ca_path=value Scope Global Dynamic Yes Data type String Default NULL

The certificate authority's absolute path used to verify the LDAP certificate.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_fallback_server_host","title":"authentication_ldap_simple_fallback_server_host","text":"Option Description Command-line \u2013authentication-ldap-simple-fallback-server-host Scope Global Dynamic Yes Type Sting Default NULL

    Use with authentication_ldap_simple_fallback_server_port.

    If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server and authenticate using that server.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_fallback_server_port","title":"authentication_ldap_simple_fallback_server_port","text":"Option Description Command-line \u2013authentication-ldap-simple-fallback-server-port Scope Global Dynamic Yes Type Integer Default NULL

    Use with authentication_ldap_simple_fallback_server_host.

    If the primary server is unavailable, the authentication plugin attempts to connect to the fallback server and authenticate using that server.

    If the fallback server host has a value, and the fallback port is 0, users can specify multiple fallback servers.

    For example, use this format to specify multiple fallback servers: authentication_ldap_simple_fallback_server_host=\"ldap(s)://host:port,ldap(s)://host2:port2\".

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_group_role_mapping","title":"authentication_ldap_simple_group_role_mapping","text":"Option Description Command-line \u2013authentication-ldap-simple-group-role-mapping=value Scope Global Dynamic Yes Data type String Default Null

    When an LDAP user logs in, the server checks whether the LDAP user is a member of the specified group. If the user is a member, the server automatically grants the mapped database server roles to the user.

    The variable has this format: <ldap_group>=<mysql_role>,<ldap_group2>=<mysql_role2>,...
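
    For illustration only, a my.cnf sketch that maps two hypothetical LDAP groups to two hypothetical MySQL roles:

    [mysqld]\nauthentication_ldap_simple_group_role_mapping=\"dbadmins=dba_role,developers=dev_role\"\n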

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_group_search_attr","title":"authentication_ldap_simple_group_search_attr","text":"Option Description Command-line \u2013authentication-ldap-simple-group-search-attr=value Scope Global Dynamic Yes Data type String Default cn

    The attribute name that specifies group names in the LDAP directory entries for simple LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_group_search_filter","title":"authentication_ldap_simple_group_search_filter","text":"Option Description Command-line \u2013authentication-ldap-simple-group-search-filter=value Scope Global Dynamic Yes Data type String Default (|(&(objectClass=posixGroup)(memberUid=%s))(&(objectClass=group)(member=%s)))

    The custom group search filter for simple LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_init_pool_size","title":"authentication_ldap_simple_init_pool_size","text":"Option Description Command-line \u2013authentication-ldap-simple-init-pool-size=value Scope Global Dynamic Yes Data type Integer Default 10 Minimum value 0 Maximum value 32767 Unit connections

    The initial size of the connection pool to the LDAP server for simple LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_log_status","title":"authentication_ldap_simple_log_status","text":"Option Description Command-line \u2013authentication-ldap-simple-log-status=value Scope Global Dynamic Yes Data type Integer Default 1 Minimum value 1 Maximum value 6

    The logging level for messages written to the error log for simple LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_max_pool_size","title":"authentication_ldap_simple_max_pool_size","text":"Option Description Command-line \u2013authentication-ldap-simple-max-pool-size=value Scope Global Dynamic Yes Data type Integer Default 1000 Minimum value 0 Maximum value 32767 Unit connections

    The maximum connection pool size to the LDAP server in simple LDAP authentication. The variable is used with authentication_ldap_simple_init_pool_size.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_server_host","title":"authentication_ldap_simple_server_host","text":"Option Description Command-line \u2013authentication-ldap-simple-server-host=value Scope Global Dynamic Yes Data type String Default NULL

    The LDAP server host used for simple LDAP authentication. The LDAP server host can be an IP address or a host name.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_server_port","title":"authentication_ldap_simple_server_port","text":"Option Description Command-line \u2013authentication-ldap-simple-server-port=value Scope Global Dynamic Yes Data type Integer Default 389 Minimum value 1 Maximum value 32376

    The LDAP server TCP/IP port number used for simple LDAP authentication.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_ssl","title":"authentication_ldap_simple_ssl","text":"Option Description Command-line \u2013authentication-ldap-simple-ssl=value Scope Global Dynamic Yes Data type Boolean Default OFF

    If this variable is enabled, the plugin connects to the server with SSL.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_tls","title":"authentication_ldap_simple_tls","text":"Option Description Command-line \u2013authentication-ldap-simple-tls=value Scope Global Dynamic Yes Data type Boolean Default OFF

    If this variable is enabled, the plugin connects to the server with TLS.

    "},{"location":"ldap-system-variables.html#authentication_ldap_simple_user_search_attr","title":"authentication_ldap_simple_user_search_attr","text":"Option Description Command-line \u2013authentication-ldap-simple-user-search-attr=value Scope Global Dynamic Yes Data type String Default uid

    The attribute name that specifies the user names in LDAP directory entries in simple LDAP authentication.

    "},{"location":"libcoredumper.html","title":"Libcoredumper","text":""},{"location":"libcoredumper.html#using-libcoredumper","title":"Using libcoredumper","text":"

    A core dump file records the state of a computer or an application at the moment it exits unexpectedly. Developers examine the dump as one of the tasks when searching for the cause of a failure.

    The libcoredumper is a free and Open Source fork of google-coredumper, enhanced to work on newer Linux versions and with the GCC and Clang compilers.

    "},{"location":"libcoredumper.html#enabling-the-libcoredumper","title":"Enabling the libcoredumper","text":"

    Enable core dumps for troubleshooting purposes.

    To enable the libcoredumper, add the coredumper variable to the mysqld section of my.cnf. This variable is independent of the older core-file variable.

    The variable can have the following possible values:

    Value Description Blank The core dump is saved under MySQL datadir and named core. A path ending with / The core dump is saved under the specified directory and named core. Full path with a filename The core dump is saved under the specified directory and filename.
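
    A minimal my.cnf sketch; the directory path is an assumption for illustration:

    [mysqld]\ncoredumper=/var/lib/mysql/core/core\n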

    Restart the server.

    "},{"location":"libcoredumper.html#verifying-the-libcoredumper-is-active","title":"Verifying the libcoredumper is active","text":"

    MySQL writes a message to the log when generating a core file. When MySQL delegates the core dump operation to the Linux kernel, the log contains:

    Writing a core file\n

    When MySQL uses the libcoredumper to generate the core file, the log contains the following message:

    Writing a core file using lib coredumper\n

    Every core file name includes a crash timestamp instead of a PID for the following reasons:

    • Correlates the core file with the crash. MySQL prints a UTC timestamp on the crash log.
    10:02:09 UTC - mysqld got signal 11;\n
    • Stores multiple core files.

    Note

    For example, processes in operators and containers run as PID 1. If the process ID is used to identify the core file, each container crash generates a core dump that overwrites the previous core file.

    "},{"location":"libcoredumper.html#disabling-the-libcoredumper","title":"Disabling the libcoredumper","text":"

    You can disable the libcoredumper. A core file may contain sensitive data and consumes disk space.

    To disable the libcoredumper, do the following:

    1. In the mysqld section of my.cnf, remove the coredumper variable.

    2. Restart the server.

    "},{"location":"limitations.html","title":"MyRocks limitations","text":"

    The MyRocks storage engine lacks the following features compared to InnoDB:

    • Online DDL is not supported due to the lack of atomic DDL support.

      * There is no `ALTER TABLE ... ALGORITHM=INSTANT` functionality\n\n* A partition management operation only supports the `COPY` algorithm, which rebuilds the partition table and moves the data based on the new `PARTITION ... VALUE` definition. In the case of `DROP PARTITION`, the data not moved to another partition is deleted.\n
    • ALTER TABLE .. EXCHANGE PARTITION.

    • SAVEPOINT

    • Transportable tablespace

    • Foreign keys

    • Spatial indexes

    • Fulltext indexes

    • Gap locks

    • Group Replication

    • Partial Update of LOB in InnoDB

    You should also consider the following:

    • All collations are supported on CHAR and VARCHAR indexed columns. By default, however, MyRocks prevents creating indexes with non-binary collations (including latin1). You can optionally allow them by setting rocksdb_strict_collation_exceptions to t1 (table names in regex format), but non-binary covering indexes other than latin1 (excluding german1) still require a primary key lookup to return the CHAR or VARCHAR column.

    • Either ORDER BY DESC or ORDER BY ASC is slow, because of the \u201cPrefix Key Encoding\u201d feature in RocksDB. See https://www.slideshare.net/matsunobu/myrocks-deep-dive/58 for details. By default, an ascending scan is faster and a descending scan is slower. If the \u201creverse column family\u201d is configured, then a descending scan will be faster and an ascending scan will be slower. Note that InnoDB also imposes a cost when the index is scanned in the opposite order.

    • When converting large MyISAM or InnoDB tables with either the ALTER TABLE or INSERT INTO ... SELECT statements, it\u2019s recommended that you check the Data loading documentation and create MyRocks tables as shown below. If the table is sufficiently big, the conversion will cause the server to consume all the memory and then be terminated by the OOM killer:

     SET session sql_log_bin=0;\n SET session rocksdb_bulk_load=1;\n ALTER TABLE large_myisam_table ENGINE=RocksDB;\n SET session rocksdb_bulk_load=0;\n
    Warning

    If you are loading large data without enabling rocksdb_bulk_load or rocksdb_commit_in_the_middle, make sure the transaction size is small enough. All modifications of the ongoing transactions are kept in memory.
    • With partitioned tables that use the TokuDB or MyRocks storage engine, the upgrade only works with native partitioning.

      See also

      MySQL Documentation: Preparing Your Installation for Upgrade

    • Percona Server for MySQL 8.0 and Unicode 9.0.0 standards have defined a change in the handling of binary collations. These collations are handled as NO PAD; trailing spaces are included in key comparisons. A binary collation comparison may result in two unique rows being inserted and does not generate a `DUP_ENTRY` error. MyRocks key encoding and comparison does not account for this character set attribute.

    "},{"location":"limitations.html#not-supported-on-myrocks","title":"Not supported on MyRocks","text":"

    MyRocks does not support the following:

    • Operating as either a source or a replica in any replication topology that is not exclusively row-based. Statement-based and mixed-format binary logging is not supported. For more information, see Replication Formats.

    • Using multi-valued indexes. As of Percona Server for MySQL 8.0.17, InnoDB supports this feature.

    • Using spatial data types.

    • Using the Clone Plugin and the Clone Plugin API. As of Percona Server for MySQL 8.0.17, InnoDB supports these features.

    • Using encryption in tables. At this time, during an ALTER TABLE operation, MyRocks mistakenly detects all InnoDB tables as encrypted. Therefore, any attempt to ALTER an InnoDB table to MyRocks fails.

      As a workaround, we recommend a manual move of the table; a SQL sketch follows the note below. The following steps are the same as the ALTER TABLE ... ENGINE=... process:

      • Use SHOW CREATE TABLE ... to return the InnoDB table definition.

      • With the table definition as the source, perform a CREATE TABLE ... ENGINE=RocksDB.

      • In the new table, use INSERT INTO <new table> SELECT * FROM <old table>.

      Note

      With MyRocks and with large tables, it is recommended to set the session variable rocksdb_bulk_load=1 during the load to prevent running out of memory. This recommendation is because of the MyRocks large transaction limitation. For more information, see MyRocks Data Loading.
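
      A minimal SQL sketch of this manual move; the table definition and names are hypothetical:

      SHOW CREATE TABLE innodb_t\\G\nCREATE TABLE rocksdb_t (id INT PRIMARY KEY, payload VARCHAR(100)) ENGINE=RocksDB;\nSET SESSION rocksdb_bulk_load=1;\nINSERT INTO rocksdb_t SELECT * FROM innodb_t;\nSET SESSION rocksdb_bulk_load=0;\n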

    "},{"location":"log-connection-error.html","title":"Too many connections warning","text":"

    If the log_error_verbosity system variable is set to 2 or higher, this feature generates the Too many connections warning in the log.

    "},{"location":"log-connection-error.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"manage-audit-log-filter.html","title":"Manage the Audit Log Filter files","text":"

    The Audit Log Filter files can grow large and consume a large amount of disk space.

    You can manage the space by using log file rotation. This operation renames the current log file and then opens a new current log file with the original name. You can rotate the file either manually or automatically.

    If automatic rotation is enabled, you can prune the log file. This pruning operation can be based on either the log file age or combined log file size.

    "},{"location":"manage-audit-log-filter.html#manual-log-rotation","title":"Manual log rotation","text":"

    The default setting for audit_log_filter_rotate_on_size is 1GB. If this option is set to 0, the audit log filter plugin does not do an automatic rotation of the log file. You must do the rotation manually with this setting.

    The SELECT audit_log_rotate() statement renames the current file and creates a new audit log filter file with the original name. You must have the AUDIT_ADMIN privilege.
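
    For example:

    mysql> SELECT audit_log_rotate();\n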

    The files are pruned if either audit_log_filter_max_size or audit_log_filter_prune_seconds has a value greater than 0 (zero) and audit_log_filter_rotate_on_size > 0.

    After the files have been renamed, you must manually remove any archived audit log filter files. The renamed audit log filter files can be read by audit_log_read(). The audit_log_read() function does not find the logs if the name pattern differs from the current pattern.

    "},{"location":"misc-info-schema-tables.html","title":"Misc. INFORMATION_SCHEMA tables","text":"

    This page lists the INFORMATION_SCHEMA tables added to standard MySQL by Percona Server for MySQL that don\u2019t exist elsewhere in the documentation.

    "},{"location":"misc-info-schema-tables.html#temporary-tables","title":"Temporary tables","text":"

    Note

    This feature implementation is considered tech preview quality.

    Only the temporary tables that were explicitly created with CREATE TEMPORARY TABLE or ALTER TABLE are shown, and not the ones created to process complex queries.

    "},{"location":"misc-info-schema-tables.html#information_schemaglobal_temporary_tables","title":"INFORMATION_SCHEMA.GLOBAL_TEMPORARY_TABLES","text":"Column Name Description \u2018SESSION_ID\u2019 \u2018MySQL connection id\u2019 \u2018TABLE_SCHEMA\u2019 \u2018Schema in which the temporary table is created\u2019 \u2018TABLE_NAME\u2019 \u2018Name of the temporary table\u2019 \u2018ENGINE\u2019 \u2018Engine of the temporary table\u2019 \u2018NAME\u2019 \u2018Internal name of the temporary table\u2019 \u2018TABLE_ROWS\u2019 \u2018Number of rows of the temporary table\u2019 \u2018AVG_ROW_LENGTH\u2019 \u2018Average row length of the temporary table\u2019 \u2018DATA_LENGTH\u2019 \u2018Size of the data (Bytes)\u2019 \u2018INDEX_LENGTH\u2019 \u2018Size of the indexes (Bytes)\u2019 \u2018CREATE_TIME\u2019 \u2018Date and time of creation of the temporary table\u2019 \u2018UPDATE_TIME\u2019 \u2018Date and time of the latest update of the temporary table\u2019

    The feature was ported from Percona Server for MySQL 5.7 in 8.0.12-1.

    This table holds information on the temporary tables that exist for all connections. You don\u2019t need the SUPER privilege to query this table.

    "},{"location":"misc-info-schema-tables.html#information_schematemporary_tables","title":"INFORMATION_SCHEMA.TEMPORARY_TABLES","text":"Column Name Description \u2018SESSION_ID\u2019 \u2018MySQL connection id\u2019 \u2018TABLE_SCHEMA\u2019 \u2018Schema in which the temporary table is created\u2019 \u2018TABLE_NAME\u2019 \u2018Name of the temporary table\u2019 \u2018ENGINE\u2019 \u2018Engine of the temporary table\u2019 \u2018NAME\u2019 \u2018Internal name of the temporary table\u2019 \u2018TABLE_ROWS\u2019 \u2018Number of rows of the temporary table\u2019 \u2018AVG_ROW_LENGTH\u2019 \u2018Average row length of the temporary table\u2019 \u2018DATA_LENGTH\u2019 \u2018Size of the data (Bytes)\u2019 \u2018INDEX_LENGTH\u2019 \u2018Size of the indexes (Bytes)\u2019 \u2018CREATE_TIME\u2019 \u2018Date and time of creation of the temporary table\u2019 \u2018UPDATE_TIME\u2019 \u2018Date and time of the latest update of the temporary table\u2019

    The feature was ported from Percona Server for MySQL 5.7 in 8.0.12-1.

    This table holds information on the temporary tables existing for the running connection.

    "},{"location":"myrocks-column-family.html","title":"MyRocks column families","text":"

    MyRocks stores all data in a single server instance as a collection of key-value pairs within the log structured merge tree data structure. This is a flat data structure that requires that keys be unique throughout the whole data structure. MyRocks incorporates table IDs and index IDs into the keys.

    Each key-value pair belongs to a column family. It is a data structure similar in concept to tablespaces. Each column family has distinct attributes, such as block size, compression, sort order, and MemTable. Utilizing these attributes, MyRocks effectively uses column families to store indexes.

    On system initialization, MyRocks creates two column families. The __system__ column family is reserved by MyRocks; no user-created tables or indexes belong to this column family. The default column family is the location for indexes created by the user when a column family is not explicitly specified.

    To apply a custom block size, compression, or sort order, you need to create an index in its own column family using the COMMENT clause.

    The following example demonstrates how to place the PRIMARY KEY into the cf1 column family and the index kb into the cf2 column family.

    CREATE TABLE t1 (a INT, b INT,\nPRIMARY KEY(a) COMMENT 'cfname=cf1',\nKEY kb(b) COMMENT 'cfname=cf2')\nENGINE=ROCKSDB;\n

    The column family name is specified as the value of the cfname attribute at the beginning of the COMMENT clause. The name is case sensitive and may not contain leading or trailing whitespace characters.

    The COMMENT clause may contain other information following the semicolon character (;) after the column family name: \u2018cfname=foo; special column family\u2019. If the column family cannot be created, MyRocks uses the default column family.

    Warning

    The cfname attribute must be all lowercase. Place the equals sign (=) in front of the column family name without any whitespace on either side of it.

    COMMENT 'cfname=Foo; Creating the Foo family name'\n

    See also

    Using COMMENT to Specify Column Family Names with Multiple Table Partitions https://github.com/facebook/mysql-5.6/wiki/Column-Families-on-Partitioned-Tables.

    "},{"location":"myrocks-column-family.html#controlling-the-number-of-column-families-to-reduce-memory-consumption","title":"Controlling the number of column families to reduce memory consumption","text":"

    Each column family has its own MemTable, an in-memory data structure where data is written before being flushed to SST files. Queries also use MemTables first. To reduce the overall memory consumption, the number of active column families should stay low.

    With the rocksdb_no_create_column_family option set to true, the COMMENT clause does not treat cfname as a special token, and it is not possible to create column families using the COMMENT clause.

    "},{"location":"myrocks-column-family.html#column-family-options","title":"Column family options","text":"

    On startup, the server applies the rocksdb_default_cf_options option to all existing column families. You may use the rocksdb_override_cf_options option to override the value of any attribute of a chosen column family.

    Note that the rocksdb_default_cf_options and rocksdb_override_cf_options options are read-only at runtime.

    At runtime, use the rocksdb_update_cf_options option to update some column family attributes.
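
    A hedged sketch of a runtime update, assuming the semicolon-separated per-family option list syntax used by MyRocks; the family names and values are examples:

    SET GLOBAL rocksdb_update_cf_options='cf1={write_buffer_size=8m;target_file_size_base=2m};cf2={write_buffer_size=16m};';\n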

    "},{"location":"myrocks-index.html","title":"Percona MyRocks introduction","text":"

    MyRocks is a storage engine for MySQL based on RocksDB, an embeddable, persistent key-value store. Percona MyRocks is an implementation for Percona Server for MySQL.

    The RocksDB store is based on the log-structured merge-tree (or LSM tree). It is optimized for fast storage and combines outstanding space and write efficiency with acceptable read performance. As a result, MyRocks has the following advantages compared to other storage engines, if your workload uses fast storage, such as SSD:

    • Requires less storage space

    • Provides more storage endurance

    • Ensures better IO capacity

    Percona MyRocks Installation Guide

    MyRocks Limitations

    Differences between Percona MyRocks and Facebook MyRocks

    MyRocks Column Families

    MyRocks Server Variables

    MyRocks Information Schema Tables

    Performance Schema MyRocks changes

    "},{"location":"myrocks-performance-schema-tables.html","title":"Performance Schema MyRocks changes","text":"

    RocksDB WAL file information can be seen in the performance_schema.log_status table in the STORAGE_ENGINES column.

    This feature has been implemented in Percona Server for MySQL 8.0.15-6.

    "},{"location":"myrocks-performance-schema-tables.html#example","title":"Example","text":"
    mysql> select * from performance_schema.log_status\\G\n
    Expected output
    *************************** 1. row ***************************\n\nSERVER_UUID: f593b4f8-6fde-11e9-ad90-080027c2be11\n     LOCAL: {\"gtid_executed\": \"\", \"binary_log_file\": \"binlog.000004\", \"binary_log_position\": 1698222}\nREPLICATION: {\"channels\": []}\nSTORAGE_ENGINES: {\"InnoDB\": {\"LSN\": 36810235, \"LSN_checkpoint\": 36810235}, \"RocksDB\": {\"wal_files\": [{\"path_name\": \"/000026.log\", \"log_number\": 26, \"size_file_bytes\": 371869}]}}\n1 row in set (0.00 sec)\n
    "},{"location":"pam-plugin.html","title":"PAM authentication plugin","text":"

    Percona PAM Authentication Plugin is a free and Open Source implementation of MySQL\u2019s authentication plugin interface. This plugin acts as a mediator between the MySQL server, the MySQL client, and the PAM stack. The server plugin requests authentication from the PAM stack, forwards any requests and messages from the PAM stack over the wire to the client (in cleartext), and reads back any replies for the PAM stack.

    The PAM plugin uses dialog as its client-side plugin. The dialog plugin can be loaded into any client application that uses the libperconaserverclient or libmysqlclient library.

    Here are some of the benefits that Percona dialog plugin offers over the default one:

    • It correctly recognizes whether PAM wants input to be echoed or not, while the default one always echoes the input on the user\u2019s console.

    • It can use the password passed to the MySQL client via the \u201c-p\u201d parameter.

    • Dialog client installation bug has been fixed.

    Percona offers two versions of this plugin:

    • Full PAM plugin called auth_pam. This plugin uses dialog.so. It fully supports the PAM protocol with arbitrary communication between client and server.

    • Oracle-compatible PAM called auth_pam_compat. This plugin uses mysql_clear_password, which is a part of the Oracle MySQL client. It also has some limitations, such as supporting only one password input. You must use the -p option in order to pass the password to auth_pam_compat.

    These two versions of the plugin are physically different. To choose which one is used, specify IDENTIFIED WITH \u2018auth_pam\u2019 for auth_pam, or IDENTIFIED WITH \u2018auth_pam_compat\u2019 for auth_pam_compat.

    "},{"location":"pam-plugin.html#version-specific-information","title":"Version specific information","text":"

    Implemented in Percona Server for MySQL 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.

    A plugin may not be supported in later releases of MySQL or Percona Server for MySQL since version changes may introduce incompatible changes.

    "},{"location":"pam-plugin.html#installation","title":"Installation","text":"

    This plugin requires manual installation because it isn\u2019t installed by default.

    mysql> INSTALL PLUGIN auth_pam SONAME 'auth_pam.so';\n

    After the plugin has been installed, it should be present in the plugins list. To check that the plugin has been correctly installed and is active, run:

    mysql> SHOW PLUGINS;\n
    Expected output
    ...\n| auth_pam                       | ACTIVE   | AUTHENTICATION     | auth_pam.so | GPL     |\n
    "},{"location":"pam-plugin.html#configuration","title":"Configuration","text":"

    To use the plugin, an authentication method must be configured. A simple setup is to use the standard UNIX authentication method (pam_unix).

    Note

    To use pam_unix, the mysql user must be added to the shadow group in order to have enough privileges to read /etc/shadow.
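
    A shell sketch of granting that access, assuming a Debian-style shadow group:

    $ sudo usermod -a -G shadow mysql\n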

    A sample /etc/pam.d/mysqld file:

    auth       required     pam_unix.so\naccount    required     pam_unix.so\n

    For added information in the system log, you can expand it to be:

    auth       required     pam_warn.so\nauth       required     pam_unix.so audit\naccount    required     pam_unix.so audit\n
    "},{"location":"pam-plugin.html#creating-a-user","title":"Creating a user","text":"

    After the PAM plugin has been configured, users can be created with the PAM plugin as authentication method

    mysql> CREATE USER 'newuser'@'localhost' IDENTIFIED WITH auth_pam;\n

    This creates a user newuser who can connect from localhost and is authenticated using the PAM plugin. If the pam_unix method is being used, the user must also exist on the system.

    "},{"location":"pam-plugin.html#supplementary-groups-support","title":"Supplementary groups support","text":"

    Percona Server for MySQL has implemented PAM plugin support for supplementary groups. Supplementary or secondary groups are extra groups a specific user is a member of. For example, user joe might be a member of the group joe (his primary group) and the secondary groups developers and dba. A complete list of groups and the users belonging to them can be checked with the cat /etc/group command.

    This feature enables using secondary groups in the mapping part of the authentication string, like \u201cmysql, developers=joe, dba=mark\u201d. Previously, only primary groups could be specified there. If a user is a member of both developers and dba, the PAM plugin maps the user to joe because developers matches first.
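
    For illustration only, such a mapping string might appear in a proxy-user definition; the anonymous proxy user and the GRANT PROXY statement below are assumptions, not taken from this page:

    mysql> CREATE USER ''@'' IDENTIFIED WITH auth_pam AS 'mysql, developers=joe, dba=mark';\nmysql> GRANT PROXY ON 'joe'@'%' TO ''@'';\n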

    "},{"location":"pam-plugin.html#known-issues","title":"Known issues","text":"

    The default MySQL stack size is not enough to handle the pam_ecryptfs module. The workaround is to increase the MySQL stack size by setting the thread-stack variable to at least 512KB or by increasing the old value by 256KB.
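
    A my.cnf sketch of that workaround:

    [mysqld]\nthread_stack=512K\n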

    PAM authentication can fail with mysqld: pam_unix(mysqld:account): Fork failed: Cannot allocate memory error in the /var/log/secure even when there is enough memory available. Current workaround is to set vm.overcommit_memory to 1:

    echo 1 > /proc/sys/vm/overcommit_memory\n

    and by adding vm.overcommit_memory = 1 to /etc/sysctl.conf to make the change permanent after reboot. Authentication of internal (that is, non-PAM) accounts continues to work fine when mysqld reaches this memory utilization level. NOTE: Setting vm.overcommit_memory to 1 causes the kernel to perform no memory overcommit handling, which could increase the potential for memory overload and invocation of the OOM killer.

    "},{"location":"percona-release.html","title":"Use percona-release","text":"

    A user of Percona Server for MySQL prioritizes efficiency and reliability. Installing the software directly offers a baseline solution, but for enhanced control, convenience, and security, use the Percona Software repositories and the percona-release tool. This method makes installing Percona Server for MySQL simpler, more secure, and more efficient.

    "},{"location":"percona-release.html#effortless-repository-management","title":"Effortless Repository Management","text":"

    With a single command, percona-release configures your system to access the official Percona repositories, eliminating the need to identify and add individual sources manually. When you add a new Percona product, percona-release handles the repository setup seamlessly.
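
    For example, a single command enables the Percona Server for MySQL 8.0 repository:

    $ sudo percona-release setup ps80\n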

    "},{"location":"percona-release.html#targeted-repositories","title":"Targeted Repositories","text":"

    Choose from various repositories, including specific versions, stability channels (stable, testing, experimental), and individual Percona products like XtraBackup or Toolkit. This granular control ensures you obtain the exact software configuration desired.

    "},{"location":"percona-release.html#automatic-updates","title":"Automatic Updates","text":"

    Benefit from automatic updates to repository lists, guaranteeing access to the latest packages as Percona releases them.

    "},{"location":"percona-release.html#streamlined-package-management","title":"Streamlined Package Management","text":"

    Leverage your system\u2019s native package manager (apt for Debian/Ubuntu, yum for Red Hat/CentOS) to install Percona Server for MySQL and related components. This familiar interface simplifies the process, eliminating the need to download and manage individual packages.

    "},{"location":"percona-release.html#dependency-resolution","title":"Dependency Resolution","text":"

    percona-release intelligently handles package dependencies, ensuring all necessary components are automatically installed alongside Percona Server for MySQL. No more wrestling with missing libraries or compatibility issues.

    "},{"location":"percona-release.html#consistent-configuration","title":"Consistent Configuration","text":"

    percona-release guarantees consistent package versions across your systems by maintaining centralized repositories. This configuration simplifies configuration management and reduces the risk of inconsistencies arising from manual installations.

    "},{"location":"percona-release.html#gpg-key-verification","title":"GPG Key Verification","text":"

    percona-release uses GPG keys to cryptographically verify the authenticity and integrity of packages downloaded from Percona repositories. This security measure protects you from malicious software or tampered packages, ensuring the software you install is genuine and secure.

    "},{"location":"percona-release.html#signed-packages","title":"Signed Packages","text":"

    All packages within Percona repositories are digitally signed, further bolstering security and tamper detection. This added layer of protection provides peace of mind, knowing the software you install has not been compromised.

    "},{"location":"percona-release.html#regular-security-updates","title":"Regular Security Updates","text":"

    Percona actively releases security updates for its software. By using percona-release, you gain access to these critical updates when they become available, helping you maintain a secure and stable database environment.

    "},{"location":"percona-sequence-table.html","title":"PERCONA_SEQUENCE_TABLE(n) function","text":"

    Using PERCONA_SEQUENCE_TABLE() function provides the following:

    Benefit Description Generates Sequences Acts as an inline table-valued function that generates a sequence of numbers. Table-Valued Function Unlike traditional scalar functions, PERCONA_SEQUENCE_TABLE() returns a virtual table with a single column named value containing the generated sequence. Simpler Syntax Simplifies queries that need to generate predictable sequences of numbers. Flexibility Allows dynamic definition of sequences within queries, offering more control compared to pre-defined tables for sequences. Predefined Sequence Does not manage sequences like Oracle or PostgreSQL; instead, it allows definition and generation of sequences within a SELECT statement. Customization Enables customization of starting value, increment/decrement amount, and number of values to generate."},{"location":"percona-sequence-table.html#version-update","title":"Version update","text":"

    Percona Server for MySQL 8.0.37 deprecated SEQUENCE_TABLE(), and Percona may remove this function in a future release. We recommend that you use PERCONA_SEQUENCE_TABLE() instead.

    To maintain compatibility with existing third-party software, SEQUENCE_TABLE is no longer a reserved term and can be used as a regular identifier.

    Percona Server for MySQL 8.0.20-11 introduced the SEQUENCE_TABLE() function.

    "},{"location":"percona-sequence-table.html#table-functions","title":"Table functions","text":"

    The function is an inline table-valued function. This function creates a temporary table with multiple rows. You can use this function within a single SELECT statement. Oracle MySQL Server has only the JSON_TABLE table function. Percona Server for MySQL has both the JSON_TABLE and PERCONA_SEQUENCE_TABLE() table functions. A single SELECT statement generates a multi-row result set. In contrast, a scalar function (like EXP(x) or LOWER(str)) always returns a single value of a specific data type.

    "},{"location":"percona-sequence-table.html#syntax","title":"Syntax","text":"

    As with any derived table, a table function requires an alias in the SELECT statement.

    The result set is a single column with the predefined column name value of type BIGINT UNSIGNED. You can reference the value column in SELECT statements. Using n as the number of generated values, the basic syntax is:

    SELECT \u2026 FROM PERCONA_SEQUENCE_TABLE(n) [AS] alias\n\nPERCONA_SEQUENCE_TABLE(n) [AS] alias\n
    The following statements are valid:
    SELECT * FROM PERCONA_SEQUENCE_TABLE(n) AS tt;\nSELECT <expr(value)> FROM PERCONA_SEQUENCE_TABLE(n) AS tt;\n

    The first number in the series, the initial term, is defined as 0, and the series ends with a value less than n.

    "},{"location":"percona-sequence-table.html#basic-sequence-generation","title":"Basic sequence generation","text":"

    In this example, the following statement generates a sequence:

    mysql> SELECT * FROM PERCONA_SEQUENCE_TABLE(3) AS tt;\n
    Expected output
    +-------+\n| value |\n+-------+\n|     0 |\n|     1 |\n|     2 |\n+-------+\n
    "},{"location":"percona-sequence-table.html#start-with-a-specific-value","title":"Start with a specific value","text":"

    You can define the initial value using the WHERE clause. The following example starts the sequence with 4.

    SELECT value AS result FROM PERCONA_SEQUENCE_TABLE(8) AS tt WHERE value >= 4;\n
    Expected output
    +--------+\n| result |\n+--------+\n|      4 |\n|      5 |\n|      6 |\n|      7 |\n+--------+\n
    "},{"location":"percona-sequence-table.html#filter-even-numbers","title":"Filter even numbers","text":"

    Consecutive terms increase or decrease by a common difference. The default common difference value is 1. However, it is possible to filter the results using the WHERE clause to simulate common differences greater than 1.

    The following example prints only even numbers from the 0..7 range:

    SELECT value AS result FROM PERCONA_SEQUENCE_TABLE(8) AS tt WHERE value % 2 = 0;\n
    Expected output
    +--------+\n| result |\n+--------+\n|      0 |\n|      2 |\n|      4 |\n|      6 |\n+--------+\n
    "},{"location":"percona-sequence-table.html#generate-random-numbers","title":"Generate random numbers","text":"

    The following is an example of using the function to populate a table with a set of random numbers:

    mysql> SELECT FLOOR(RAND() * 100) AS result FROM PERCONA_SEQUENCE_TABLE(4) AS tt;\n

    The output could be the following:

    Expected output
    +--------+\n| result |\n+--------+\n|     24 |\n|     56 |\n|     70 |\n|     25 |\n+--------+\n
    "},{"location":"percona-sequence-table.html#generate-random-strings","title":"Generate random strings","text":"

    You can populate a table with a set of pseudo-random strings with the following statement:

    mysql> SELECT MD5(value) AS result FROM PERCONA_SEQUENCE_TABLE(4) AS tt;\n
    Expected output
    +----------------------------------+\n| result                           |\n+----------------------------------+\n| f17d9c990f40f8ac215f2ecdfd7d0451 |\n| 2e5751b7cfd7f053cd29e946fb2649a4 |\n| b026324c6904b2a9cb4b88d6d61c81d1 |\n| 26ab0db90d72e28ad0ba1e22ee510510 |\n+----------------------------------+\n
    "},{"location":"percona-sequence-table.html#add-a-sequence-to-a-table","title":"Add a sequence to a table","text":"

    You can add the sequence as a column to a new table or an existing table, as shown in this example:

    mysql> CREATE TABLE t1 AS SELECT * FROM PERCONA_SEQUENCE_TABLE(4) AS tt;\n\nmysql> SELECT * FROM t1;\n
    Expected output
    +-------+\n| value |\n+-------+\n|     0 |\n|     1 |\n|     2 |\n|     3 |\n+-------+\n

    Sequences are useful for various purposes, such as populating tables and generating test data.

    "},{"location":"percona-xtradb.html","title":"The Percona XtraDB storage engine","text":"

    Percona XtraDB is an enhanced version of the InnoDB storage engine, designed to better scale on modern hardware. It also includes a variety of other features useful in high-performance environments. It is fully backward compatible, and so can be used as a drop-in replacement for standard InnoDB.

    Percona XtraDB includes all of InnoDB \u2018s robust, reliable ACID-compliant design and advanced MVCC architecture, and builds on that solid foundation with more features, more tunability, more metrics, and more scalability. In particular, it is designed to scale better on many cores, use memory more efficiently, and be more convenient and useful. The new features are specially designed to alleviate some of InnoDB\u2019s limitations. We choose features and fixes based on customer requests and on our best judgment of real-world needs as a high-performance consulting company.

    The Percona XtraDB engine will not have further binary releases; it is distributed as part of Percona Server for MySQL.

    "},{"location":"post-installation.html","title":"Post-installation","text":"

    Review Get more help for ways that we can work with you.

    Depending on the type of installation, you may need to do the following tasks:

    "},{"location":"post-installation.html#installed-using-binary-files-or-compiling-from-source","title":"Installed using binary files or compiling from source","text":"Task Initialize the data dictionary Test the server Set service to start at boot time"},{"location":"post-installation.html#initialize-the-data-directory","title":"Initialize the data directory","text":"

    If you install the server using either the source distribution or generic binary distribution files, the data directory is not initialized, and you must run the initialization process after installation.

    Run mysqld with either the --initialize option or the --initialize-insecure option.

    Executing mysqld with either option does the following:

    • Verifies the existence of the data directory

    • Initializes the system tablespace and related structures

    • Creates system tables including grant tables, time zone tables, and server-side help tables

    • Creates root@localhost

    You should run the following steps with the mysql login.

    1. Navigate to the MySQL directory. The example uses the default location.

      $ cd /usr/local/mysql\n
    2. Create a directory for the MySQL files. The secure_file_priv variable uses the directory path as a value.

      $ mkdir mydata\n

      The mysql user account should have the drwxr-x--- permissions. Four sections define the permissions: file or directory, User, Group, and Others. A shell sketch of applying these permissions follows these steps.

      The first character designates if the permissions are for a file or directory. The first character is d for a directory.

      The rest of the sections are specified in three-character sets.

      Permission User Group Other Read Yes Yes No Write Yes No No Execute Yes Yes No
    3. Run the command to initialize the data directory.

      $ bin/mysqld --initialize\n
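
    A shell sketch of applying the drwxr-x--- (750) permissions from step 2, assuming the mysql user and group:

    $ sudo chown mysql:mysql mydata\n$ sudo chmod 750 mydata\n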
    "},{"location":"post-installation.html#test-the-server","title":"Test the server","text":"

    After you have initialized the data directory and started the server, you can run tests on the server.

    This section assumes you have used the default installation settings. If you have modified the installation, navigate to the installation location. You can also add the location by Setting the Environment Variables.

    You can use the mysqladmin client to access the server.

    If you have issues connecting to the server, use the root user and the root account password.

    $ sudo mysqladmin -u root -p version\n
    Expected output
    Enter password:\nmysql Ver 8.0.19-10 for debian-linux-gnu on x86_64 (Percona Server (GPL), Release '10', Revision 'f446c04')\n...\nServer version      8.0.19-10\nProtocol version    10\nConnection          Localhost via UNIX socket\nUNIX socket         /var/run/mysqld/mysqld.sock\nUptime:             4 hours 58 min 10 sec\n\nThreads:    2 Questions:    16 Slow queries: 0 Opens: 139 Flush tables: 3\nOpen tables: 59  Queries per second avg: 0.0000\n

    Use mysqlshow to display database and table information.

    $ sudo mysqlshow -u root -p\n
    Expected output
    Enter password:\n\n+---------------------+\n|      Databases      |\n+=====================+\n| information_schema  |\n+---------------------+\n| mysql               |\n+---------------------+\n| performance_schema  |\n+---------------------+\n| sys                 |\n+---------------------+\n
    "},{"location":"post-installation.html#set-service-to-run-at-boot-time","title":"Set service to run at boot time","text":"

    After a generic binary installation, manually configure systemd support.

    The following commands start, check the status, and stop the server:

    $ sudo systemctl start mysqld\n$ sudo systemctl status mysqld\n$ sudo systemctl stop mysqld\n

    Run the following command to start the service at boot time:

    $ sudo systemctl enable mysqld\n
    Run the following command to prevent a service from starting at boot time:

    $ sudo systemctl disable mysqld\n
    "},{"location":"post-installation.html#all-installations","title":"All installations","text":"Task Update the root password Secure the server Populate the time zone tables"},{"location":"post-installation.html#update-the-root-password","title":"Update the root password","text":"

    During an installation on Debian/Ubuntu, you are prompted to enter a root password. On Red Hat Enterprise Linux and derivatives, you update the root password after installation.

    Restart the server with the --skip-grant-tables option to allow access without a password. This option is insecure and also disables remote connections.

    $ sudo systemctl stop mysqld\n$ sudo systemctl set-environment MYSQLD_OPTS=\"--skip-grant-tables\"\n$ sudo systemctl start mysqld \n$ mysql\n

    Reload the grant tables to be able to run the ALTER USER statement. Enter a password that satisfies the current policy.

    mysql> FLUSH PRIVILEGES;\nmysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPassword_12';\nmysql> exit\n
    If, when adding the password, MySQL returns ERROR 1819 (HY000) Your password does not satisfy the current policy, run the following command to see the policy requirements.

    mysql> SHOW VARIABLES LIKE 'validate_password%';\n
    Choose a new password that satisfies the requirements.

    Stop the server, remove the --skip-grant-tables option, start the server, and log into the server with the updated password.

    $ sudo systemctl stop mysqld \n$ sudo systemctl unset-environment MYSQLD_OPTS \n$ sudo systemctl start mysqld \n$ mysql -u root -p\n
    "},{"location":"post-installation.html#secure-the-server","title":"Secure the server","text":"

    The mysql_secure_installation script improves the security of the instance.

    The script does the following:

    • Changes the root password

    • Disallows remote login for root accounts

    • Removes anonymous users

    • Removes the test database

    • Reloads the privilege tables

    The following statement runs the script:

    $ mysql_secure_installation\n
    "},{"location":"post-installation.html#populate-the-time-zone-tables","title":"Populate the time zone tables","text":"

    The time zone system tables are the following:

    • time_zone

    • time_zone_leap_second

    • time_zone_name

    • time_zone_transition

    • time_zone_transition_type

    If you install the server using either the source distribution or the generic binary distribution files, the installation creates the time zone tables, but the tables are not populated.

    The mysql_tzinfo_to_sql program populates the tables from the zoneinfo directory data available in Linux.

    A common method to populate the tables is to add the zoneinfo directory path to mysql_tzinfo_to_sql and then send the output into the mysql system schema.

    The example assumes you are running the command with the root account. The account must have the privileges for modifying the mysql system schema.

    $ mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p -D mysql\n
    "},{"location":"prefix-index-queries-optimization.html","title":"Prefix index queries optimization","text":"

    Percona Server for MySQL has ported the Prefix Index Queries Optimization feature from the Facebook patch for MySQL.

    Prior to this, InnoDB would always fetch the clustered index record for all prefix columns in an index, even when the value of a particular record was smaller than the prefix length. This implementation optimizes that case to use the record from the secondary index and avoid the extra lookup.
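
    A hypothetical illustration; the table, index, and data are assumptions, not part of the original feature description:

    CREATE TABLE t (id INT PRIMARY KEY, s VARCHAR(100), KEY idx_s (s(10)));\n-- 'short' fits within the 10-character prefix, so the secondary index record is enough:\nSELECT id FROM t WHERE s = 'short';\n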

    "},{"location":"prefix-index-queries-optimization.html#status-variables","title":"Status variables","text":""},{"location":"prefix-index-queries-optimization.html#innodb_secondary_index_triggered_cluster_reads","title":"Innodb_secondary_index_triggered_cluster_reads","text":"Option Description Scope: Global Data type: Numeric

    This variable shows the number of times a secondary index lookup triggered a cluster lookup.

    "},{"location":"prefix-index-queries-optimization.html#innodb_secondary_index_triggered_cluster_reads_avoided","title":"Innodb_secondary_index_triggered_cluster_reads_avoided","text":"Option Description Scope: Global Data type: Numeric

    This variable shows the number of times the prefix optimization avoided triggering a cluster lookup.
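
    You can check both counters at once, for example:

    mysql> SHOW GLOBAL STATUS LIKE 'Innodb_secondary_index_triggered%';\n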

    "},{"location":"prefix-index-queries-optimization.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7
    "},{"location":"pro-files.html","title":"Files in packages built for Percona Server for MySQL Pro","text":"

    Percona Server for MySQL Pro includes the capabilities that are typically requested by large enterprises. Percona Server for MySQL Pro contains packages created and tested by Percona. These packages are supported only for Percona Customers with a subscription.

    Become a Percona Customer

    "},{"location":"pro-files.html#files-in-the-deb-package","title":"Files in the DEB package","text":"Package Contains percona-server-server-pro The database server itself, the mysqld binary and associated files. percona-server-pro-common The files common to the server and client. percona-server-client-pro The command line client. percona-server-test-pro The database test suite. percona-server-pro-source The server source. percona-mysql-router-pro The mysql router. percona-server-rocksdb-pro The files for rocksdb installation. percona-server-pro-dbg The debug symbols."},{"location":"pro-files.html#files-in-the-rpm-package","title":"Files in the RPM package","text":"Package Contains percona-server-server-pro The database server itself, the mysqld binary and associated files. percona-server-client-pro The command line client. percona-server-test-pro The database test suite. percona-server-rocksdb-pro The files for rocksdb installation. percona-mysql-router-pro The mysql router. percona-server-shared-pro Client shared library. percona-server-pro-debuginfo The debug symbols. percona-server-devel-pro Header files needed to compile software using the client library."},{"location":"process-list.html","title":"Process list","text":"

    Note

    MySQL 8.0.22 provides the Performance Schema processlist table which can be directly queried.

    This page describes Percona changes to both the standard MySQL SHOW PROCESSLIST command and the standard MySQL INFORMATION_SCHEMA table PROCESSLIST.

    "},{"location":"process-list.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"process-list.html#information_schema-tables","title":"INFORMATION_SCHEMA Tables","text":"

    INFORMATION_SCHEMA.PROCESSLIST

    This table implements modifications to the standard MySQL INFORMATION_SCHEMA table PROCESSLIST.

    Column Name Description \u2018ID\u2019 \u2018The connection identifier.\u2019 \u2018USER\u2019 \u2018The MySQL user who issued the statement.\u2019 \u2018HOST\u2019 \u2018The host name of the client issuing the statement.\u2019 \u2018DB\u2019 \u2018The default database, if one is selected, otherwise NULL.\u2019 \u2018COMMAND\u2019 \u2018The type of command the thread is executing.\u2019 \u2018TIME\u2019 \u2018The time in seconds that the thread has been in its current state.\u2019 \u2018STATE\u2019 \u2018An action, event, or state that indicates what the thread is doing.\u2019 \u2018INFO\u2019 \u2018The statement that the thread is executing, or NULL if it is not executing any statement.\u2019 \u2018TIME_MS\u2019 \u2018The time in milliseconds that the thread has been in its current state.\u2019 \u2018ROWS_EXAMINED\u2019 \u2018The number of rows examined by the statement being executed (NOTE: This column is not updated for each examined row so it does not necessarily show an up-to-date value while the statement is executing. It only shows a correct value after the statement has completed.).\u2019 \u2018ROWS_SENT\u2019 \u2018The number of rows sent by the statement being executed.\u2019 \u2018TID\u2019 \u2018The Linux Thread ID. For Linux, this corresponds to light-weight process ID (LWP ID) and can be seen in the ps -L output. In case when Thread Pool is enabled, \u201cTID\u201d is not null for only currently executing statements and statements received via \u201cextra\u201d connection.\u2019"},{"location":"process-list.html#example-output","title":"Example output","text":"

    Table PROCESSLIST:

    mysql> SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST;\n
    Expected output
    +----+------+-----------+--------------------+---------+------+-----------+---------------------------+---------+-----------+---------------+\n| ID | USER | HOST      | DB                 | COMMAND | TIME | STATE     | INFO                      | TIME_MS | ROWS_SENT | ROWS_EXAMINED |\n+----+------+-----------+--------------------+---------+------+-----------+---------------------------+---------+-----------+---------------+\n| 12 | root | localhost | information_schema | Query   |    0 | executing | select * from processlist |       0 |         0 |             0 |\n+----+------+-----------+--------------------+---------+------+-----------+---------------------------+---------+-----------+---------------+\n
    "},{"location":"procfs-plugin.html","title":"The ProcFS plugin","text":"

    Important

    This feature is a tech preview. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    Implemented in Percona Server for MySQL 8.0.25-15, the ProcFS plugin provides access to the Linux performance counters by running SQL queries against a Percona Server for MySQL 8.0 instance.

    You may be unable to capture operating system metrics in certain environments, such as Cloud installations or MySQL-as-a-Service installations. These metrics are essential for complete system performance monitoring.

    The plugin does the following:

    • Reads selected files from the /proc file system and the /sys file system.

    • Populates the file names and their content as rows in the INFORMATION_SCHEMA.PROCFS view.

    The system variable procfs_files_spec provides access to the /proc and the /sys files and directories. This variable cannot be changed at run time, preventing a compromised account from giving itself greater access to those file systems.

    "},{"location":"procfs-plugin.html#install-the-plugin-manually","title":"Install the PLUGIN manually","text":"

    We recommend installing the plugin as part of the package. If needed, you can install this plugin manually. Copy the procfs.so file to the mysql plugin installation directory and execute the following command:

    INSTALL PLUGIN procfs SONAME 'procfs.so';\n
    "},{"location":"procfs-plugin.html#access-privileges-required","title":"Access privileges required","text":"

    Only users with the ACCESS_PROCFS dynamic privilege can access the INFORMATION_SCHEMA.PROCFS view. During the plugin startup, this dynamic privilege is registered with the server.

    After the plugin installation, grant a user access to the INFORMATION_SCHEMA.PROCFS view by executing the following command:

    GRANT ACCESS_PROCFS ON *.* TO 'user'@'host';\n

    Important

    An SELinux policy or an AppArmor profile may prevent access to file locations needed by the ProcFS plugin, such as the \u2018/proc/sys/fs/file-nr\u2019 file or any sub-directories or files under \u2018/proc/irq/\u2019. Either edit the policy or profile to ensure that the plugin has the necessary access. If the policy and profile do not allow access, the plugin may have unexpected behavior.

    For more information, see Working with SELinux and Working with AppArmor.

    "},{"location":"procfs-plugin.html#using-the-procfs-plugin","title":"Using the ProcFS plugin","text":"

    Authorized users can obtain information from individual files by specifying the exact file name within a WHERE clause. Files that are not included are ignored and considered not to exist.

    All files that match procfs_files_spec are opened, read, stored in memory, and finally returned to the client. It is critical to add a WHERE clause that returns only specific files to limit the impact of the plugin on the server\u2019s performance. Failing to use a WHERE clause can lead to lengthy query response times, high load, and high memory usage on the server. The WHERE clause can contain an equality operator, the LIKE operator, or the IN operator. The LIKE operator limits file globbing. You can write file access patterns in the glob(7) style, such as /sys/block/sd[a-z]/stat;/proc/version*

    The following example returns the contents of /proc/version:

    SELECT * FROM INFORMATION_SCHEMA.PROCFS WHERE FILE = '/proc/version';\n
    "},{"location":"procfs-plugin.html#tables","title":"Tables","text":""},{"location":"procfs-plugin.html#procfs","title":"PROCFS","text":"

    The schema definition of the INFORMATION_SCHEMA.PROCFS view is:

    CREATE TEMPORARY TABLE `PROCFS` (\n`FILE` varchar(1024) NOT NULL DEFAULT '',\n`CONTENTS` longtext NOT NULL\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n

    Status variables provide the basic metrics:

    Name Description procfs_access_violations The number of attempted queries by users without the ACCESS_PROCFS privilege. procfs_queries The number of queries made against the procfs view. procfs_files_read The number of files read to provide content. procfs_bytes_read The number of bytes read to provide content."},{"location":"procfs-plugin.html#variable","title":"Variable","text":""},{"location":"procfs-plugin.html#procfs_files_spec","title":"procfs_files_spec","text":"Option Description Scope: Global Dynamic: No Read, Write, or Read-Only: Read-Only

    The variable has been implemented in Percona Server for MySQL 8.0.25-14. The default value for procfs_files_spec is: /proc/cpuinfo;/proc/irq/*/*;/proc/loadavg;/proc/net/dev;/proc/net/sockstat;/proc/net/sockstat_rhe4;/proc/net/tcpstat;/proc/self/net/netstat;/proc/self/stat;/proc/self/io;/proc/self/numa_maps;/proc/softirqs;/proc/spl/kstat/zfs/arcstats;/proc/stat;/proc/sys/fs/file-nr;/proc/version;/proc/vmstat

    Enables access to the /proc and /sys directories and files. This variable is global and read-only, and is set either on the mysqld command line or by editing my.cnf.
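
    For example, a minimal my.cnf fragment that restricts the plugin to three files (the file list is illustrative; adjust it to your needs):

    [mysqld]\nprocfs_files_spec=/proc/loadavg;/proc/stat;/proc/vmstat\n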

    "},{"location":"procfs-plugin.html#limitations","title":"Limitations","text":"

    The plugin has the following limitations:

    • Only the first 60k of each /proc/ or /sys/ file is returned

    • The file name length is limited to 1k

    • The plugin cannot read a file if its path does not start with /proc or /sys

    • Complex WHERE conditions may force the plugin to read all configured files.

    "},{"location":"procfs-plugin.html#uninstall-plugin","title":"Uninstall plugin","text":"

    The following statement removes the procfs plugin.

    UNINSTALL PLUGIN procfs;\n
    "},{"location":"proxy-protocol-support.html","title":"Support for PROXY protocol","text":"

    The proxy protocol helps servers see the real client address when a proxy server sits between them. Normally, servers only see the proxy\u2019s address. For example, when HAProxy stands between a MySQL client and server, it can use the proxy protocol to show the client\u2019s true address to the server.

    This protocol is off by default because it can make the server think traffic is coming from somewhere else. You can turn it on for specific hosts or networks where you trust the proxy servers. Once enabled, these addresses can send only proxied connections.

    Remember to set up proper firewall rules when you use this feature.

    The proxy protocol only works with TCP connections over IPv4 and IPv6. It doesn\u2019t work with UNIX socket connections. Also, you can\u2019t use localhost addresses (127.0.0.1 or ::1) as proxied IP addresses, even if they\u2019re in your allowed proxy network list.

    "},{"location":"proxy-protocol-support.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"proxy-protocol-support.html#system-variables","title":"System variables","text":""},{"location":"proxy-protocol-support.html#proxy_protocol_networks","title":"proxy_protocol_networks","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Default (empty string)

    This setting controls which IP addresses can use the proxy protocol. It\u2019s a global setting that you can\u2019t change while the server is running. You can set it to either a star symbol (*) or a list of specific IP addresses and networks.

    For safety, we don\u2019t recommend using the star symbol. If you do, your server will accept the proxy protocol from any computer, which could be risky.

    When listing networks, use CIDR notation. For example, write \u201c192.168.0.0/24\u201d to include all addresses from 192.168.0.0 to 192.168.0.255.

    To keep your server safe from people pretending to be trusted sources, make this list as small as possible. Only include the IP addresses of proxy servers you trust.

    Remember, you can list both IPv4 and IPv6 addresses. Separate each address or network with a comma.
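
    For example, a my.cnf fragment that trusts one IPv4 network and one IPv6 network (the addresses are illustrative):

    [mysqld]\nproxy_protocol_networks=192.168.0.0/24,2001:db8::/32\n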

    "},{"location":"ps-variables.html","title":"List of variables introduced in Percona Server for MySQL 8.0","text":""},{"location":"ps-variables.html#system-variables","title":"System variables","text":"Name Cmd-Line Option File Var Scope Dynamic audit_log_buffer_size Yes Yes Global No audit_log_file Yes Yes Global No audit_log_flush Yes Yes Global Yes audit_log_format Yes Yes Global No audit_log_handler Yes Yes Global No audit_log_policy Yes Yes Global Yes audit_log_rotate_on_size Yes Yes Global No audit_log_rotations Yes Yes Global No audit_log_strategy Yes Yes Global No audit_log_syslog_facility Yes Yes Global No audit_log_syslog_ident Yes Yes Global No audit_log_syslog_priority Yes Yes Global No csv_mode Yes Yes Both Yes enforce_storage_engine Yes Yes Global No expand_fast_index_creation Yes No Both Yes extra_max_connections Yes Yes Global Yes extra_port Yes Yes Global No have_backup_locks Yes No Global No have_backup_safe_binlog_info Yes No Global No have_snapshot_cloning Yes No Global No innodb_cleaner_lsn_age_factor Yes Yes Global Yes innodb_corrupt_table_action Yes Yes Global Yes innodb_empty_free_list_algorithm Yes Yes Global Yes innodb_encrypt_online_alter_logs Yes Yes Global Yes innodb_encrypt_tables Yes Yes Global Yes innodb_kill_idle_transaction Yes Yes Global Yes innodb_max_bitmap_file_size Yes Yes Global Yes innodb_max_changed_pages Yes Yes Global Yes innodb_print_lock_wait_timeout_info Yes Yes Global Yes innodb_show_locks_held Yes Yes Global Yes innodb_temp_tablespace_encrypt Yes Yes Global No innodb_track_changed_pages Yes Yes Global No keyring_vault_config Yes Yes Global Yes keyring_vault_timeout Yes Yes Global Yes log_slow_filter Yes Yes Both Yes log_slow_rate_limit Yes Yes Both Yes log_slow_rate_type Yes Yes Global Yes log_slow_sp_statements Yes Yes Global Yes log_slow_verbosity Yes Yes Both Yes log_warnings_suppress Yes Yes Global Yes proxy_protocol_networks Yes Yes Global No query_response_time_flush Yes No Global No query_response_time_range_base Yes Yes Global Yes query_response_time_stats Yes Yes Global Yes slow_query_log_always_write_time Yes Yes Global Yes slow_query_log_use_global_control Yes Yes Global Yes thread_pool_high_prio_mode Yes Yes Both Yes thread_pool_high_prio_tickets Yes Yes Both Yes thread_pool_idle_timeout Yes Yes Global Yes thread_pool_max_threads Yes Yes Global Yes thread_pool_oversubscribe Yes Yes Global Yes thread_pool_size Yes Yes Global Yes thread_pool_stall_limit Yes Yes Global No thread_statistics Yes Yes Global Yes tokudb_alter_print_error tokudb_analyze_delete_fractionref tokudb_analyze_in_background Yes Yes Both Yes tokudb_analyze_mode Yes Yes Both Yes tokudb_analyze_throttle Yes Yes Both Yes tokudb_analyze_time Yes Yes Both Yes tokudb_auto_analyze Yes Yes Both Yes tokudb_block_size tokudb_bulk_fetch tokudb_cache_size tokudb_cachetable_pool_threads Yes Yes Global No tokudb_cardinality_scale_percent tokudb_check_jemalloc tokudb_checkpoint_lock tokudb_checkpoint_on_flush_logs tokudb_checkpoint_pool_threads Yes Yes Global No tokudb_checkpointing_period tokudb_cleaner_iterations tokudb_cleaner_period tokudb_client_pool_threads Yes Yes Global No tokudb_commit_sync tokudb_compress_buffers_before_eviction Yes Yes Global No tokudb_create_index_online tokudb_data_dir tokudb_debug tokudb_directio tokudb_disable_hot_alter tokudb_disable_prefetching tokudb_disable_slow_alter tokudb_empty_scan tokudb_enable_partial_eviction Yes Yes Global No tokudb_fanout Yes Yes Both Yes tokudb_fs_reserve_percent tokudb_fsync_log_period 
tokudb_hide_default_row_format tokudb_killed_time tokudb_last_lock_timeout tokudb_load_save_space tokudb_loader_memory_size tokudb_lock_timeout tokudb_lock_timeout_debug tokudb_log_dir tokudb_max_lock_memory tokudb_optimize_index_fraction tokudb_optimize_index_name tokudb_optimize_throttle tokudb_pk_insert_mode tokudb_prelock_empty tokudb_read_block_size tokudb_read_buf_size tokudb_read_status_frequency tokudb_row_format tokudb_rpl_check_readonly tokudb_rpl_lookup_rows tokudb_rpl_lookup_rows_delay tokudb_rpl_unique_checks tokudb_rpl_unique_checks_delay tokudb_strip_frm_data Yes Yes Global No tokudb_support_xa tokudb_tmp_dir tokudb_version tokudb_write_status_frequency userstat Yes Yes Global Yes version_comment Yes Yes Global Yes version_suffix Yes Yes Global Yes"},{"location":"ps-variables.html#status-variables","title":"Status variables","text":"Name Var Type Var Scope Binlog_snapshot_file String Global Binlog_snapshot_position Numeric Global Com_lock_binlog_for_backup Numeric Both Com_lock_tables_for_backup Numeric Both Com_show_client_statistics Numeric Both Com_show_index_statistics Numeric Both Com_show_table_statistics Numeric Both Com_show_thread_statistics Numeric Both Com_show_user_statistics Numeric Both Com_unlock_binlog Numeric Both Innodb_background_log_sync Numeric Global Innodb_buffer_pool_pages_LRU_flushed Numeric Global Innodb_buffer_pool_pages_made_not_young Numeric Global Innodb_buffer_pool_pages_made_young Numeric Global Innodb_buffer_pool_pages_old Numeric Global Innodb_checkpoint_age Numeric Global Innodb_checkpoint_max_age Numeric Global Innodb_ibuf_free_list Numeric Global Innodb_ibuf_segment_size Numeric Global Innodb_lsn_current Numeric Global Innodb_lsn_flushed Numeric Global Innodb_lsn_last_checkpoint Numeric Global Innodb_master_thread_active_loops Numeric Global Innodb_master_thread_idle_loops Numeric Global Innodb_max_trx_id Numeric Global Innodb_mem_adaptive_hash Numeric Global Innodb_mem_dictionary Numeric Global Innodb_oldest_view_low_limit_trx_id Numeric Global Innodb_purge_trx_id Numeric Global Innodb_purge_undo_no Numeric Global Open_tables_with_triggers Numeric Global Threadpool_idle_threads Numeric Global Threadpool_threads Numeric Global Tokudb_DB_OPENS Tokudb_DB_CLOSES Tokudb_DB_OPEN_CURRENT Tokudb_DB_OPEN_MAX Tokudb_LEAF_ENTRY_MAX_COMMITTED_XR Tokudb_LEAF_ENTRY_MAX_PROVISIONAL_XR Tokudb_LEAF_ENTRY_EXPANDED Tokudb_LEAF_ENTRY_MAX_MEMSIZE Tokudb_LEAF_ENTRY_APPLY_GC_BYTES_IN Tokudb_LEAF_ENTRY_APPLY_GC_BYTES_OUT Tokudb_LEAF_ENTRY_NORMAL_GC_BYTES_IN Tokudb_LEAF_ENTRY_NORMAL_GC_BYTES_OUT Tokudb_CHECKPOINT_PERIOD Tokudb_CHECKPOINT_FOOTPRINT Tokudb_CHECKPOINT_LAST_BEGAN Tokudb_CHECKPOINT_LAST_COMPLETE_BEGAN Tokudb_CHECKPOINT_LAST_COMPLETE_ENDED Tokudb_CHECKPOINT_DURATION Tokudb_CHECKPOINT_DURATION_LAST Tokudb_CHECKPOINT_LAST_LSN Tokudb_CHECKPOINT_TAKEN Tokudb_CHECKPOINT_FAILED Tokudb_CHECKPOINT_WAITERS_NOW Tokudb_CHECKPOINT_WAITERS_MAX Tokudb_CHECKPOINT_CLIENT_WAIT_ON_MO Tokudb_CHECKPOINT_CLIENT_WAIT_ON_CS Tokudb_CHECKPOINT_BEGIN_TIME Tokudb_CHECKPOINT_LONG_BEGIN_TIME Tokudb_CHECKPOINT_LONG_BEGIN_COUNT Tokudb_CHECKPOINT_END_TIME Tokudb_CHECKPOINT_LONG_END_TIME Tokudb_CHECKPOINT_LONG_END_COUNT Tokudb_CACHETABLE_MISS Tokudb_CACHETABLE_MISS_TIME Tokudb_CACHETABLE_PREFETCHES Tokudb_CACHETABLE_SIZE_CURRENT Tokudb_CACHETABLE_SIZE_LIMIT Tokudb_CACHETABLE_SIZE_WRITING Tokudb_CACHETABLE_SIZE_NONLEAF Tokudb_CACHETABLE_SIZE_LEAF Tokudb_CACHETABLE_SIZE_ROLLBACK Tokudb_CACHETABLE_SIZE_CACHEPRESSURE Tokudb_CACHETABLE_SIZE_CLONED Tokudb_CACHETABLE_EVICTIONS 
Tokudb_CACHETABLE_CLEANER_EXECUTIONS Tokudb_CACHETABLE_CLEANER_PERIOD Tokudb_CACHETABLE_CLEANER_ITERATIONS Tokudb_CACHETABLE_WAIT_PRESSURE_COUNT Tokudb_CACHETABLE_WAIT_PRESSURE_TIME Tokudb_CACHETABLE_LONG_WAIT_PRESSURE_COUNT Tokudb_CACHETABLE_LONG_WAIT_PRESSURE_TIME Tokudb_CACHETABLE_POOL_CLIENT_NUM_THREADS Tokudb_CACHETABLE_POOL_CLIENT_NUM_THREADS_ACTIVE Tokudb_CACHETABLE_POOL_CLIENT_QUEUE_SIZE Tokudb_CACHETABLE_POOL_CLIENT_MAX_QUEUE_SIZE Tokudb_CACHETABLE_POOL_CLIENT_TOTAL_ITEMS_PROCESSED Tokudb_CACHETABLE_POOL_CLIENT_TOTAL_EXECUTION_TIME Tokudb_CACHETABLE_POOL_CACHETABLE_NUM_THREADS Tokudb_CACHETABLE_POOL_CACHETABLE_NUM_THREADS_ACTIVE Tokudb_CACHETABLE_POOL_CACHETABLE_QUEUE_SIZE Tokudb_CACHETABLE_POOL_CACHETABLE_MAX_QUEUE_SIZE Tokudb_CACHETABLE_POOL_CACHETABLE_TOTAL_ITEMS_PROCESSED Tokudb_CACHETABLE_POOL_CACHETABLE_TOTAL_EXECUTION_TIME Tokudb_CACHETABLE_POOL_CHECKPOINT_NUM_THREADS Tokudb_CACHETABLE_POOL_CHECKPOINT_NUM_THREADS_ACTIVE Tokudb_CACHETABLE_POOL_CHECKPOINT_QUEUE_SIZE Tokudb_CACHETABLE_POOL_CHECKPOINT_MAX_QUEUE_SIZE Tokudb_CACHETABLE_POOL_CHECKPOINT_TOTAL_ITEMS_PROCESSED Tokudb_CACHETABLE_POOL_CHECKPOINT_TOTAL_EXECUTION_TIME Tokudb_LOCKTREE_MEMORY_SIZE Tokudb_LOCKTREE_MEMORY_SIZE_LIMIT Tokudb_LOCKTREE_ESCALATION_NUM Tokudb_LOCKTREE_ESCALATION_SECONDS Tokudb_LOCKTREE_LATEST_POST_ESCALATION_MEMORY_SIZE Tokudb_LOCKTREE_OPEN_CURRENT Tokudb_LOCKTREE_PENDING_LOCK_REQUESTS Tokudb_LOCKTREE_STO_ELIGIBLE_NUM Tokudb_LOCKTREE_STO_ENDED_NUM Tokudb_LOCKTREE_STO_ENDED_SECONDS Tokudb_LOCKTREE_WAIT_COUNT Tokudb_LOCKTREE_WAIT_TIME Tokudb_LOCKTREE_LONG_WAIT_COUNT Tokudb_LOCKTREE_LONG_WAIT_TIME Tokudb_LOCKTREE_TIMEOUT_COUNT Tokudb_LOCKTREE_WAIT_ESCALATION_COUNT Tokudb_LOCKTREE_WAIT_ESCALATION_TIME Tokudb_LOCKTREE_LONG_WAIT_ESCALATION_COUNT Tokudb_LOCKTREE_LONG_WAIT_ESCALATION_TIME Tokudb_DICTIONARY_UPDATES Tokudb_DICTIONARY_BROADCAST_UPDATES Tokudb_DESCRIPTOR_SET Tokudb_MESSAGES_IGNORED_BY_LEAF_DUE_TO_MSN Tokudb_TOTAL_SEARCH_RETRIES Tokudb_SEARCH_TRIES_GT_HEIGHT Tokudb_SEARCH_TRIES_GT_HEIGHTPLUS3 Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_BYTES Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_UNCOMPRESSED_BYTES Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_SECONDS Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_BYTES Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_UNCOMPRESSE Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_SECONDS Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_BYTES Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_UNCOMPRESSED_BYTES Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_SECONDS Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_BYTES Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_UNCOMPRESSED_BY Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_SECONDS Tokudb_LEAF_NODE_COMPRESSION_RATIO Tokudb_NONLEAF_NODE_COMPRESSION_RATIO Tokudb_OVERALL_NODE_COMPRESSION_RATIO Tokudb_NONLEAF_NODE_PARTIAL_EVICTIONS Tokudb_NONLEAF_NODE_PARTIAL_EVICTIONS_BYTES Tokudb_LEAF_NODE_PARTIAL_EVICTIONS Tokudb_LEAF_NODE_PARTIAL_EVICTIONS_BYTES Tokudb_LEAF_NODE_FULL_EVICTIONS Tokudb_LEAF_NODE_FULL_EVICTIONS_BYTES Tokudb_NONLEAF_NODE_FULL_EVICTIONS Tokudb_NONLEAF_NODE_FULL_EVICTIONS_BYTES Tokudb_LEAF_NODES_CREATED Tokudb_NONLEAF_NODES_CREATED Tokudb_LEAF_NODES_DESTROYED Tokudb_NONLEAF_NODES_DESTROYED Tokudb_MESSAGES_INJECTED_AT_ROOT_BYTES Tokudb_MESSAGES_FLUSHED_FROM_H1_TO_LEAVES_BYTES Tokudb_MESSAGES_IN_TREES_ESTIMATE_BYTES Tokudb_MESSAGES_INJECTED_AT_ROOT 
Tokudb_BROADCASE_MESSAGES_INJECTED_AT_ROOT Tokudb_BASEMENTS_DECOMPRESSED_TARGET_QUERY Tokudb_BASEMENTS_DECOMPRESSED_PRELOCKED_RANGE Tokudb_BASEMENTS_DECOMPRESSED_PREFETCH Tokudb_BASEMENTS_DECOMPRESSED_FOR_WRITE Tokudb_BUFFERS_DECOMPRESSED_TARGET_QUERY Tokudb_BUFFERS_DECOMPRESSED_PRELOCKED_RANGE Tokudb_BUFFERS_DECOMPRESSED_PREFETCH Tokudb_BUFFERS_DECOMPRESSED_FOR_WRITE Tokudb_PIVOTS_FETCHED_FOR_QUERY Tokudb_PIVOTS_FETCHED_FOR_QUERY_BYTES Tokudb_PIVOTS_FETCHED_FOR_QUERY_SECONDS Tokudb_PIVOTS_FETCHED_FOR_PREFETCH Tokudb_PIVOTS_FETCHED_FOR_PREFETCH_BYTES Tokudb_PIVOTS_FETCHED_FOR_PREFETCH_SECONDS Tokudb_PIVOTS_FETCHED_FOR_WRITE Tokudb_PIVOTS_FETCHED_FOR_WRITE_BYTES Tokudb_PIVOTS_FETCHED_FOR_WRITE_SECONDS Tokudb_BASEMENTS_FETCHED_TARGET_QUERY Tokudb_BASEMENTS_FETCHED_TARGET_QUERY_BYTES Tokudb_BASEMENTS_FETCHED_TARGET_QUERY_SECONDS Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE_BYTES Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE_SECONDS Tokudb_BASEMENTS_FETCHED_PREFETCH Tokudb_BASEMENTS_FETCHED_PREFETCH_BYTES Tokudb_BASEMENTS_FETCHED_PREFETCH_SECONDS Tokudb_BASEMENTS_FETCHED_FOR_WRITE Tokudb_BASEMENTS_FETCHED_FOR_WRITE_BYTES Tokudb_BASEMENTS_FETCHED_FOR_WRITE_SECONDS Tokudb_BUFFERS_FETCHED_TARGET_QUERY Tokudb_BUFFERS_FETCHED_TARGET_QUERY_BYTES Tokudb_BUFFERS_FETCHED_TARGET_QUERY_SECONDS Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE_BYTES Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE_SECONDS Tokudb_BUFFERS_FETCHED_PREFETCH Tokudb_BUFFERS_FETCHED_PREFETCH_BYTES Tokudb_BUFFERS_FETCHED_PREFETCH_SECONDS Tokudb_BUFFERS_FETCHED_FOR_WRITE Tokudb_BUFFERS_FETCHED_FOR_WRITE_BYTES Tokudb_BUFFERS_FETCHED_FOR_WRITE_SECONDS Tokudb_LEAF_COMPRESSION_TO_MEMORY_SECONDS Tokudb_LEAF_SERIALIZATION_TO_MEMORY_SECONDS Tokudb_LEAF_DECOMPRESSION_TO_MEMORY_SECONDS Tokudb_LEAF_DESERIALIZATION_TO_MEMORY_SECONDS Tokudb_NONLEAF_COMPRESSION_TO_MEMORY_SECONDS Tokudb_NONLEAF_SERIALIZATION_TO_MEMORY_SECONDS Tokudb_NONLEAF_DECOMPRESSION_TO_MEMORY_SECONDS Tokudb_NONLEAF_DESERIALIZATION_TO_MEMORY_SECONDS Tokudb_PROMOTION_ROOTS_SPLIT Tokudb_PROMOTION_LEAF_ROOTS_INJECTED_INTO Tokudb_PROMOTION_H1_ROOTS_INJECTED_INTO Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_0 Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_1 Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_2 Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_3 Tokudb_PROMOTION_INJECTIONS_LOWER_THAN_DEPTH_3 Tokudb_PROMOTION_STOPPED_NONEMPTY_BUFFER Tokudb_PROMOTION_STOPPED_AT_HEIGHT_1 Tokudb_PROMOTION_STOPPED_CHILD_LOCKED_OR_NOT_IN_MEMORY Tokudb_PROMOTION_STOPPED_CHILD_NOT_FULLY_IN_MEMORY Tokudb_PROMOTION_STOPPED_AFTER_LOCKING_CHILD Tokudb_BASEMENT_DESERIALIZATION_FIXED_KEY Tokudb_BASEMENT_DESERIALIZATION_VARIABLE_KEY Tokudb_PRO_RIGHTMOST_LEAF_SHORTCUT_SUCCESS Tokudb_PRO_RIGHTMOST_LEAF_SHORTCUT_FAIL_POS Tokudb_RIGHTMOST_LEAF_SHORTCUT_FAIL_REACTIVE Tokudb_CURSOR_SKIP_DELETED_LEAF_ENTRY Tokudb_FLUSHER_CLEANER_TOTAL_NODES Tokudb_FLUSHER_CLEANER_H1_NODES Tokudb_FLUSHER_CLEANER_HGT1_NODES Tokudb_FLUSHER_CLEANER_EMPTY_NODES Tokudb_FLUSHER_CLEANER_NODES_DIRTIED Tokudb_FLUSHER_CLEANER_MAX_BUFFER_SIZE Tokudb_FLUSHER_CLEANER_MIN_BUFFER_SIZE Tokudb_FLUSHER_CLEANER_TOTAL_BUFFER_SIZE Tokudb_FLUSHER_CLEANER_MAX_BUFFER_WORKDONE Tokudb_FLUSHER_CLEANER_MIN_BUFFER_WORKDONE Tokudb_FLUSHER_CLEANER_TOTAL_BUFFER_WORKDONE Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_STARTED Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_RUNNING Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_COMPLETED Tokudb_FLUSHER_CLEANER_NUM_DIRTIED_FOR_LEAF_MERGE Tokudb_FLUSHER_FLUSH_TOTAL Tokudb_FLUSHER_FLUSH_IN_MEMORY Tokudb_FLUSHER_FLUSH_NEEDED_IO 
Tokudb_FLUSHER_FLUSH_CASCADES Tokudb_FLUSHER_FLUSH_CASCADES_1 Tokudb_FLUSHER_FLUSH_CASCADES_2 Tokudb_FLUSHER_FLUSH_CASCADES_3 Tokudb_FLUSHER_FLUSH_CASCADES_4 Tokudb_FLUSHER_FLUSH_CASCADES_5 Tokudb_FLUSHER_FLUSH_CASCADES_GT_5 Tokudb_FLUSHER_SPLIT_LEAF Tokudb_FLUSHER_SPLIT_NONLEAF Tokudb_FLUSHER_MERGE_LEAF Tokudb_FLUSHER_MERGE_NONLEAF Tokudb_FLUSHER_BALANCE_LEAF Tokudb_HOT_NUM_STARTED Tokudb_HOT_NUM_COMPLETED Tokudb_HOT_NUM_ABORTED Tokudb_HOT_MAX_ROOT_FLUSH_COUNT Tokudb_TXN_BEGIN Tokudb_TXN_BEGIN_READ_ONLY Tokudb_TXN_COMMITS Tokudb_TXN_ABORTS Tokudb_LOGGER_NEXT_LSN Tokudb_LOGGER_WRITES Tokudb_LOGGER_WRITES_BYTES Tokudb_LOGGER_WRITES_UNCOMPRESSED_BYTES Tokudb_LOGGER_WRITES_SECONDS Tokudb_LOGGER_WAIT_LONG Tokudb_LOADER_NUM_CREATED Tokudb_LOADER_NUM_CURRENT Tokudb_LOADER_NUM_MAX Tokudb_MEMORY_MALLOC_COUNT Tokudb_MEMORY_FREE_COUNT Tokudb_MEMORY_REALLOC_COUNT Tokudb_MEMORY_MALLOC_FAIL Tokudb_MEMORY_REALLOC_FAIL Tokudb_MEMORY_REQUESTED Tokudb_MEMORY_USED Tokudb_MEMORY_FREED Tokudb_MEMORY_MAX_REQUESTED_SIZE Tokudb_MEMORY_LAST_FAILED_SIZE Tokudb_MEM_ESTIMATED_MAXIMUM_MEMORY_FOOTPRINT Tokudb_MEMORY_MALLOCATOR_VERSION Tokudb_MEMORY_MMAP_THRESHOLD Tokudb_FILESYSTEM_THREADS_BLOCKED_BY_FULL_DISK Tokudb_FILESYSTEM_FSYNC_TIME Tokudb_FILESYSTEM_FSYNC_NUM Tokudb_FILESYSTEM_LONG_FSYNC_TIME Tokudb_FILESYSTEM_LONG_FSYNC_NUM"},{"location":"ps-versions-comparison.html","title":"List of features available in Percona Server for MySQL releases","text":"Percona Server for MySQL 5.7 Percona Server for MySQL 8.0 Improved Buffer Pool Scalability Improved Buffer Pool Scalability Improved InnoDB I/O Scalability Improved InnoDB I/O Scalability Multiple Adaptive Hash Search Partitions Multiple Adaptive Hash Search Partitions Atomic write support for Fusion-io devices Atomic write support for Fusion-io devices Query Cache Enhancements Feature not implemented Improved NUMA support Improved NUMA support Thread Pool Thread Pool Suppress Warning Messages Suppress Warning Messages Ability to change the database for mysqlbinlog Ability to change the database for mysqlbinlog Fixed Size for the Read Ahead Area Fixed Size for the Read Ahead Area Improved MEMORY Storage Engine Improved MEMORY Storage Engine Restricting the number of binlog files Restricting the number of binlog files Ignoring missing tables in mysqldump Ignoring missing tables in mysqldump Too Many Connections Warning Too Many Connections Warning Handle Corrupted Tables Handle Corrupted Tables Lock-Free SHOW SLAVE STATUS Lock-Free SHOW REPLICA STATUS Expanded Fast Index Creation Expanded Fast Index Creation Percona Toolkit UDFs Percona Toolkit UDFs Support for Fake Changes Support for Fake Changes Kill Idle Transactions Kill Idle Transactions XtraDB changed page tracking XtraDB changed page tracking Enforcing Storage Engine Replaced with upstream implementation Utility user Utility user Extending the secure-file-priv server option Extending the secure-file-priv server option Expanded Program Option Modifiers Feature not implemented PAM Authentication Plugin PAM Authentication Plugin Log Archiving for XtraDB Log Archiving for XtraDB User Statistics User Statistics Slow Query Log Slow Query Log Count InnoDB Deadlocks Count InnoDB Deadlocks Log All Client Commands (syslog) Log All Client Commands (syslog) Response Time Distribution Feature not implemented Show Storage Engines Show Storage Engines Show Lock Names Show Lock Names Process List Process List Misc. INFORMATION_SCHEMA Tables Misc. 
INFORMATION_SCHEMA Tables Extended Show Engine InnoDB Status Extended Show Engine InnoDB Status Thread Based Profiling Thread Based Profiling XtraDB Performance Improvements for I/O-Bound Highly-Concurrent Workloads XtraDB Performance Improvements for I/O-Bound Highly-Concurrent Workloads Page cleaner thread tuning Page cleaner thread tuning Statement Timeout Statement Timeout Extended SELECT INTO OUTFILE/DUMPFILE Extended SELECT INTO OUTFILE/DUMPFILE Per-query variable statement Per-query variable statement Extended mysqlbinlog Extended mysqlbinlog Slow Query Log Rotation and Expiration Slow Query Log Rotation and Expiration Metrics for scalability measurement Feature not implemented Audit Log Audit Log Backup Locks Backup Locks CSV engine mode for a standard-compliant quote and comma parsing CSV engine mode for a standard-compliant quote and comma parsing Super read-only Super read-only"},{"location":"ps-versions-comparison.html#other-reading","title":"Other reading","text":"
    • What Is New in MySQL 5.7

    • What Is New in MySQL 8.0

    "},{"location":"psmysql-pro.html","title":"Percona Server for MySQL Pro","text":"

    Percona Server for MySQL Pro includes the capabilities that are typically requested by large enterprises. Percona Server for MySQL Pro contains packages created and tested by Percona. These packages are supported only for Percona Customers with a subscription.

    Become a Percona Customer

    "},{"location":"psmysql-pro.html#capabilities","title":"Capabilities","text":"

    Find the list of capabilities available in Percona Server for MySQL Pro:

    • FIPS compliance (8.0.40-31): The FIPS feature has been tested on Percona Server for MySQL Pro 8.0.40-31. There are no changes to this release.
    • FIPS compliance (8.0.39-30): The FIPS feature has been tested on Percona Server for MySQL Pro 8.0.39-30. There are no changes to this release.
    • FIPS compliance (8.0.37-29): The FIPS feature has been tested on Percona Server for MySQL Pro 8.0.37-29. There are no changes to this release.
    • FIPS compliance (8.0.36-28): The FIPS feature has been tested on Percona Server for MySQL Pro 8.0.36-28. There are no changes to this release.
    • FIPS compliance (8.0.35-27): FIPS compliance enables commercial cloud service providers to sell to and increase their presence with US government entities.
    "},{"location":"psmysql-pro.html#whats-in-it-for-you","title":"What\u2019s in it for you?","text":"
    • Save on deploying and maintaining build infrastructure, as we do the build and testing for you
    • Receive longer support for older versions of operating systems

    Install Percona Server for MySQL Pro

    If you already use Percona Server for MySQL, you can

    Upgrade to Percona Server for MySQL Pro

    Community users can receive all these capabilities by building Percona Server for MySQL from the same source code.

    "},{"location":"query-limit-records.html","title":"Limit the estimation of records in a Query","text":"

    Important

    This feature is a tech preview. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    This page describes an alternative for running queries against a large number of table partitions. When a query runs, InnoDB estimates the records in each partition. This process can result in more pages read and more disk I/O if the buffer pool must fetch the pages from disk, and it increases the query time when there are a large number of partitions.

    Two variables make it possible to override the records_in_range result, which effectively bypasses this process.

    Warning

    The use of these variables may result in improper index selection by the optimizer.

    "},{"location":"query-limit-records.html#innodb_records_in_range","title":"innodb_records_in_range","text":"Option Description Command-line: --innodb-records-in-range Scope: Global Dynamic: Yes Data type: Numeric Default 0

    Important

    This feature is a tech preview. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    The variable provides a method to limit the number of records estimated for a query.

    mysql> SET @@GLOBAL.innodb_records_in_range=100;\n100\n
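
    You can read the value back to confirm the change (an optional check):

    mysql> SELECT @@GLOBAL.innodb_records_in_range;\n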
    "},{"location":"query-limit-records.html#innodb_force_index_records_in_range","title":"innodb_force_index_records_in_range","text":"Option Description Command-line: --innodb-force-index-records-in-range Scope: Global Dynamic: Yes Data type: Numeric Default 0

    Important

    This feature is a tech preview. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    This variable provides a method to override the records_in_range result when a FORCE INDEX is used in a query.

    mysql> SET @@GLOBAL.innodb_force_index_records_in_range=100;\n100\n
    "},{"location":"query-limit-records.html#using-the-favor_range_scan-optimizer-switch","title":"Using the favor_range_scan optimizer switch","text":"

    Important

    This feature is a tech preview. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    In specific scenarios, the optimizer chooses to scan a table instead of using a range scan. This happens under the following conditions:

    • The table has an extremely large number of rows

    • The primary key is compound, made of two or more columns

    • The WHERE clause contains multiple range conditions

    The optimizer_switch controls the optimizer behavior. The favor_range_scan switch arbitrarily lowers the cost of a range scan by a factor of 10.

    The available values are:

    • ON

    • OFF (Default)

    • DEFAULT

    mysql> SET optimizer_switch='favor_range_scan=on';\n
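
    To revert to the default behavior, set the switch back to DEFAULT (one of the available values listed above):

    mysql> SET optimizer_switch='favor_range_scan=default';\n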
    "},{"location":"quickstart-apt.html","title":"Install with APT","text":"

    Install the Percona repositories using APT.

    "},{"location":"quickstart-apt.html#prerequisits","title":"Prerequisits","text":"
    • Either use sudo or run as root

    • Stable Internet access

    "},{"location":"quickstart-apt.html#installation-steps","title":"Installation steps","text":"

    The \u201cexpected output\u201d depends on the operating system. The following examples are based on Ubuntu 22.04.

    1. Update the package index.

      $ sudo apt update\n

      The result depends on the operating system. The following result is based on Ubuntu 22.04:

      Expected output
      Hit:1 http://us.archive.ubuntu.com/ubuntu jammy InRelease\nGet:2 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]\nHit:3 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease\n...\n
    2. Install curl.

      $ sudo apt install -y curl\n

      The result depends on the operating system. The following result is based on Ubuntu 22.04:

      Expected output
      Reading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\n...\n
    3. Download the percona-release repository package:

      $ curl -O https://repo.percona.com/apt/percona-release_latest.generic_all.deb\n

      You should see the following result:

      Expected output
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                Dload  Upload   Total   Spent    Left  Speed\n100 11804  100 11804    0     0  17375      0 --:--:-- --:--:-- --:--:-- 17358\n\n...\n
    4. Install the downloaded package and any dependencies:

      $ sudo apt install -y gnupg2 lsb-release ./percona-release_latest.generic_all.deb\n

      You should see the following result:

      Expected output
      Reading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nNote, selecting 'percona-release' instead of './percona-release_latest.generic_all.deb'\nlsb-release is already the newest version (11.1.0ubuntu4).\nlsb-release set to manually installed.\n...\n
    5. Update the package listing.

      $ sudo apt update\n

      The result depends on the operating system. The following result is based on Ubuntu 22.04:

      Expected output
      Hit:1 http://us.archive.ubuntu.com/ubuntu jammy InRelease\nGet:2 http://repo.percona.com/percona/apt jammy InRelease [15.7 kB]\nHit:3 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease\nHit:4 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease\n...\n
    6. Set up the Percona Server for MySQL 8.0 repository:

      $ sudo percona-release setup ps80\n

      The result depends on the operating system. The following result is based on Ubuntu 22.04:

      Expected output
      * Disabling all Percona Repositories\n* Enabling the Percona Server 8.0 repository\n* Enabling the Percona Tools repository\nHit:1 http://us.archive.ubuntu.com/ubuntu jammy InRelease\nHit:2 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease\nHit:3 http://repo.percona.com/prel/apt jammy InRelease\n...\n
    7. Enable the Percona Server for MySQL release.

      $ sudo percona-release enable ps-80 release\n

      You should see the following result:

      Expected output
      * Enabling the Percona Server 8.0 repository\n<*> All done!\n==> Please run \"apt-get update\" to apply changes\n
    8. Update the package listing.

      $ sudo apt update\n

      The result depends on the operating system. The following result is based on Ubuntu 22.04:

      Expected output
      Hit:1 http://us.archive.ubuntu.com/ubuntu jammy InRelease\nGet:2 http://repo.percona.com/percona/apt jammy InRelease [15.7 kB]\nHit:3 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease\nHit:4 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease\n...\n
    9. Install Percona Server for MySQL 8.0.

      $ sudo apt install -y percona-server-server\n

      The result depends on the operating system. The following result is based on Ubuntu 22.04:

      Expected output
      Reading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nThe following additional packages will be installed:\ndebsums libaio1 libdpkg-perl libfile-fcntllock-perl libfile-fnmatch-perl libmecab2\npercona-server-client percona-server-common\nSuggested packages:\ndebian-keyring gcc | c-compiler binutils git bzr\nThe following NEW packages will be installed:\ndebsums libaio1 libdpkg-perl libfile-fcntllock-perl libfile-fnmatch-perl libmecab2\npercona-server-client percona-server-common percona-server-server\n0 upgraded, 9 newly installed, 0 to remove and 161 not upgraded.\nNeed to get 172 MB of archives.\nAfter this operation, 612 MB of additional disk space will be used.\n...\n
    10. The installation asks you to enter a password. We use \u2018secret\u2019 for these examples, but you can use any password. Remember to use your password for the rest of the Quickstart.

    11. Confirm your password.

    12. Choose the type of authentication, based on the compatibility and the security requirements of your applications.

      The Strong password encryption uses a more secure hashing algorithm to store and verify passwords, which makes it harder for attackers to crack them.

      The Legacy authentication method uses the older and less secure hashing algorithm that was used in previous versions of MySQL.

    13. [Optional] You can increase the security of MySQL by running sudo mysql_secure_installation.

      After installing MySQL, you should run the mysql_secure_installation script to improve the security of your database server. This script helps you perform several important tasks, such as:

      • Set a password for the root user

      • Select a level for the password validation policy

      • Remove anonymous users

      • Disable root login remotely

      • Remove the test database

      • Reload the privilege tables to ensure all changes take effect immediately

      By running this script, you can prevent unauthorized access to your server and protect your data from potential threats.

      $ sudo mysql_secure_installation\n
      Expected output
      Securing the MySQL server deployment.\n\nEnter password for user root:\n\nVALIDATE PASSWORD COMPONENT can be used to test passwords\nand improve security. It checks the strength of password\nand allows the users to set only those passwords which are\nsecure enough. Would you like to setup VALIDATE PASSWORD component?\n\nPress y|Y for Yes, any other key for No:\n\nThere are three levels of password validation policy:\n\nLOW    Length >= 8\nMEDIUM Length >= 8, numeric, mixed case, and special characters\nSTRONG Length >= 8, numeric, mixed case, special characters and dictionary                  file\n\nPlease enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1\nUsing existing password for root.\n\nEstimated strength of the password: 0\nChange the password for root ? ((Press y|Y for Yes, any other key for No) :\n\nNew password:\n\nRe-enter new password:\n\nEstimated strength of the password: 100\nDo you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) :\nBy default, a MySQL installation has an anonymous user,\nallowing anyone to log into MySQL without having to have\na user account created for them. This is intended only for\ntesting, and to make the installation go a bit smoother.\nYou should remove them before moving into a production\nenvironment.\n\nRemove anonymous users? (Press y|Y for Yes, any other key for No) :\nSuccess.\n\nNormally, root should only be allowed to connect from\n'localhost'. This ensures that someone cannot guess at\nthe root password from the network.\n\nDisallow root login remotely? (Press y|Y for Yes, any other key for No) :\nSuccess.\n\nBy default, MySQL comes with a database named 'test' that\nanyone can access. This is also intended only for testing,\nand should be removed before moving into a production\nenvironment.\n\nRemove test database and access to it? (Press y|Y for Yes, any other key for No) :\n- Dropping test database...\nSuccess.\n\n- Removing privileges on test database...\nSuccess.\n\nReloading the privilege tables will ensure that all changes\nmade so far will take effect immediately.\n\nReload privilege tables now? (Press y|Y for Yes, any other key for No) :\nSuccess.\n\nAll done!\n
    14. When the installation is complete, check the service status.

      $ sudo systemctl status mysql\n
      Expected output
      \u25cf mysql.service - Percona Server\n     Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)\n     Active: active (running) since Fri 2024-02-16 10:24:58 UTC; 3min 7s ago\n    Process: 4456 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)\n   Main PID: 4501 (mysqld)\n     Status: \"Server is operational\"\n      Tasks: 39 (limit: 2219)\n     Memory: 371.7M\n        CPU: 11.800s\n     CGroup: /system.slice/mysql.service\n             \u2514\u25004501 /usr/sbin/mysqld\n\nFeb 16 10:24:56 vagrant systemd[1]: Starting Percona Server...\nFeb 16 10:24:58 vagrant systemd[1]: Started Percona Server.\n

      If needed, restart the service:

      $ sudo systemctl restart mysql\n
    15. Log in to the server. Use the password you entered during the installation process, which is secret in these examples or whatever you selected. You do not see the characters in the password as you type.

      $ mysql -uroot -p\nEnter password:\n
      Expected output
      Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 8\nServer version: 8.0.40 Percona Server (GPL), Release '27', Revision '2f8eeab2'$\n\nCopyright (c) 2009-2024 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2024, Oracle and/or its affiliates.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql>\n
    "},{"location":"quickstart-apt.html#create-a-database","title":"Create a database","text":"Benefits and what to watch out for when creating databases and tables

    Creating a database and table has the following benefits:

    • Store and organize your data in a structured and consistent way.
    • Query and manipulate your data using SQL.
    • Enforce data integrity and security using constraints, triggers, views, roles, and permissions.
    • Optimize your data access and performance using indexes, partitions, caching, and other techniques.

    When you create a table, design your database schema carefully; changing it later may be difficult and costly. Plan for concurrency, transactions, locking, isolation, and other issues that may arise when multiple users access the same data. You must also back up and restore your data regularly, as data loss or corruption may occur due to hardware failures, human errors, or malicious attacks.

    To create a database, use the CREATE DATABASE statement. You can optionally specify the character set and collation for the database in the statement. After the database is created, select the database using the USE statement or the -D option in the MySQL client.

    mysql> CREATE DATABASE mydb;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> use mydb;\n
    Expected output
    Database changed\n
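
    As noted above, you can optionally specify the character set and collation when creating a database. The following sketch uses an illustrative database name; utf8mb4_0900_ai_ci is the default collation for utf8mb4 in MySQL 8.0:

    mysql> CREATE DATABASE mydb2 CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;\n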
    "},{"location":"quickstart-apt.html#create-a-table","title":"Create a table","text":"

    Create a table using the CREATE TABLE statement. You can specify data types, constraints, indexes, and other options for each column, and use the DEFAULT keyword for columns that have default values.

    mysql> CREATE TABLE `employees` (\n    `id` mediumint(8) unsigned NOT NULL auto_increment,\n    `name` varchar(255) default NULL,\n    `email` varchar(255) default NULL,\n    `country` varchar(100) default NULL,\n    PRIMARY KEY (`id`)\n) AUTO_INCREMENT=1;\n
    Expected output
    Query OK, 0 rows affected, 1 warning (0.03 sec)\n
    "},{"location":"quickstart-apt.html#insert-data-into-the-table","title":"Insert data into the table","text":"

    Insert data into the table using the INSERT INTO SQL statement. A single statement can add multiple records to a table.

    mysql> INSERT INTO `employees` (`name`,`email`,`country`)\nVALUES\n    (\"Erasmus Richardson\",\"posuere.cubilia.curae@outlook.net\",\"England\"),\n    (\"Jenna French\",\"rhoncus.donec@hotmail.couk\",\"Canada\"),\n    (\"Alfred Dejesus\",\"interdum@aol.org\",\"Austria\"),\n    (\"Hamilton Puckett\",\"dapibus.quam@outlook.com\",\"Canada\"),\n    (\"Michal Brzezinski\",\"magna@icloud.pl\",\"Poland\"),\n    (\"Zofia Lis\",\"zofial00@hotmail.pl\",\"Poland\"),\n    (\"Aisha Yakubu\",\"ayakubu80@outlook.com\",\"Nigeria\"),\n    (\"Miguel Cardenas\",\"euismod@yahoo.com\",\"Peru\"),\n    (\"Luke Jansen\",\"nibh@hotmail.edu\",\"Netherlands\"),\n    (\"Roger Pettersen\",\"nunc@protonmail.no\",\"Norway\");\n
    Expected output
    Query OK, 10 rows affected (0.02 sec)\nRecords: 10  Duplicates: 0  Warnings: 0\n
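
    You can confirm that all ten rows were stored (an optional check; the count should match the ten rows inserted above):

    mysql> SELECT COUNT(*) FROM employees;\n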
    "},{"location":"quickstart-apt.html#run-a-select-query","title":"Run a SELECT query","text":"

    SELECT queries retrieve data from one or more tables based on specified criteria. They are the most common type of query and can be used for various purposes, such as displaying, filtering, sorting, aggregating, or joining data. SELECT queries do not modify the data in the database but can affect performance if they involve large or complex datasets.

    mysql> SELECT id, name, email, country FROM employees WHERE country = 'Poland';\n
    Expected output
    +----+-------------------+---------------------+---------+\n| id | name              | email               | country |\n+----+-------------------+---------------------+---------+\n|  5 | Michal Brzezinski | magna@icloud.pl     | Poland  |\n|  6 | Zofia Lis         | zofial00@hotmail.pl | Poland  |\n+----+-------------------+---------------------+---------+\n2 rows in set (0.00 sec)\n
    "},{"location":"quickstart-apt.html#run-an-update-query","title":"Run an Update query","text":"

    UPDATE queries modify existing data in a table. They are used to change or correct the information stored in the database. UPDATE queries can update one or more columns and rows simultaneously, depending on the specified conditions. They may also fail if they violate any constraints or rules defined on the table.

    The following example runs an UPDATE query and then a SELECT query with a WHERE clause to verify the update.
    mysql> UPDATE employees SET name = 'Zofia Niemec' WHERE id = 6;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\nRows matched: 1  Changed: 1  Warnings: 0\n
    mysql> SELECT name FROM employees WHERE id = 6;\n
    Expected output
    +--------------+\n| name         |\n+--------------+\n| Zofia Niemec |\n+--------------+\n1 row in set (0.00 sec)\n
    "},{"location":"quickstart-apt.html#run-an-insert-query","title":"Run an INSERT query","text":"

    INSERT queries add new data to a table and populate the database with new information. Depending on the syntax, INSERT queries can insert one or more rows at a time. The query may fail if it violates any constraints or rules defined on the table, such as primary keys, foreign keys, unique indexes, or triggers.

    Insert a row into a table and then run a SELECT with a WHERE clause to verify the record was inserted.

    mysql> INSERT INTO `employees` (`name`,`email`,`country`)\nVALUES\n(\"Kenzo Sasaki\",\"KenSasaki@outlook.com\",\"Japan\");\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> SELECT id, name, email, country FROM employees WHERE id = 11;\n
    Expected output
    +----+--------------+-----------------------+---------+\n| id | name         | email                 | country |\n+----+--------------+-----------------------+---------+\n| 11 | Kenzo Sasaki | KenSasaki@outlook.com | Japan   |\n+----+--------------+-----------------------+---------+\n1 row in set (0.00 sec)\n
    "},{"location":"quickstart-apt.html#run-a-delete-query","title":"Run a Delete query","text":"

    DELETE queries remove existing data from a table. They are used to clean up information no longer needed or relevant in the database. Depending on the specified conditions, DELETE queries can delete one or more rows at a time. They may also trigger cascading deletes on related tables if foreign key constraints are enforced.

    Delete a row in the table and run a SELECT with a WHERE clause to verify the deletion.

    mysql> DELETE FROM employees WHERE id >= 11;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> SELECT id, name, email, country FROM employees WHERE id > 10;\n
    Expected output
    Empty set (0.00 sec)\n
    "},{"location":"quickstart-apt.html#troubleshooting","title":"Troubleshooting","text":"
    • Connection Issues: Double-check credentials, ensure service is running, and verify firewall configurations if applicable.

    • Permission Errors: Grant necessary permissions to users using GRANT statements within the MySQL shell; see the example after this list.

    • Package Installation Issues: Refer to Percona documentation and online forums for specific error messages and solutions.
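
    For example, the following sketch creates a user and grants read-only access to the mydb database; the user name and password are illustrative:

    mysql> CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'secret';\nmysql> GRANT SELECT ON mydb.* TO 'appuser'@'localhost';\n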

    "},{"location":"quickstart-apt.html#security-best-practices","title":"Security best practices","text":"
    • Strong Passwords: Utilize complex and unique passwords for all users, especially the root account.

    • Minimize Permissions: Grant users only the privileges necessary for their tasks.

    • Disable Unnecessary Accounts: Remove test accounts and unused accounts.

    • Regular Backups: Implement consistent backup routines to safeguard your data.

    • Keep Software Updated: Maintain Percona Server and related packages updated with security patches.

    • Monitor Server Activity: Employ tools such as Percona Monitoring and Management, along with logs, to monitor server activity for suspicious behavior.

    "},{"location":"quickstart-apt.html#next-step","title":"Next step","text":"

    Choose your next steps

    "},{"location":"quickstart-docker.html","title":"Run Percona Server for MySQL in a Docker container","text":"

    You are welcome to name any items to match your organization\u2019s standards or use your own table structure and data. If you do, your results will differ from the expected output shown here.

    "},{"location":"quickstart-docker.html#prerequisites","title":"Prerequisites","text":"
    • Docker Engine installed and running
    • Stable internet connection
    • Basic understanding of the command-line interface (CLI)

    Always adapt the commands and configurations to your specific environment and security requirements.

    "},{"location":"quickstart-docker.html#start-a-docker-container","title":"Start a Docker container","text":"

    To use the docker run command, specify the name or ID of the image you want to use and, optionally, flags and arguments that modify the container\u2019s behavior. The command has the following options:

    • -d: Runs the container in detached mode, allowing the container to operate in the background.
    • -p 3306:3306: Maps the container\u2019s MySQL port (3306) to the same port on your host, enabling external access.
    • --name psmysql: Provides a meaningful name to the container. If you do not use this option, Docker assigns a random name.
    • -e MYSQL_ROOT_PASSWORD=secret: Adds an environment variable and changes the password from the default password.
    • -v myvol:/var/lib/mysql: Mounts the named volume myvol as the container\u2019s data volume, ensuring persistent storage for the database between container lifecycles.
    • percona/percona-server:8.0.40: The image with the tag (8.0.40) to specify a specific release.

    You must provide at least one environment variable to access the database, such as MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD or the instance refuses to initialize.

    If needed, you can replace the secret password with a stronger password.

    For this document, we add the 8.0.40 tag. In Docker, a tag is a label assigned to an image and is used to maintain different versions of an image. If we did not add a tag, Docker uses latest as the default tag and downloads the latest image from percona/percona-server on the Docker Hub.

    To run the Docker ARM64 version of Percona Server for MySQL, use the 8.0.40-aarch64 tag instead of 8.0.40.

    $ docker run -d -p 3306:3306 --name psmysql \\\n-e MYSQL_ROOT_PASSWORD=secret \\\n-v myvol:/var/lib/mysql \\\npercona/percona-server:8.0.40-aarch64\n
    Expected output
    Unable to find image 'percona/percona-server:8.0.40-aarch64' locally\n8.0.40-aarch64: Pulling from percona/percona-server\nd6f6a69cdebb: Pull complete\n4f8794caafba: Pull complete\nd80629460c71: Pull complete\nf550e519928f: Pull complete\nfb91f65fb039: Pull complete\ne8f7e0c2fbae: Pull complete\nDigest: sha256:4944f9b365e0dc88f41b3b704ff2a02d1459fd07763d7d1a444b263db8498e1f\nStatus: Downloaded newer image for percona/percona-server:8.0.40-aarch64\n01d4f6d188b609ff92158605f8528d640aa28ff5720efa0286b36f51d4bec11c\n
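
    You can confirm that the container is running before you connect (an optional check):

    $ docker ps --filter name=psmysql\n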
    "},{"location":"quickstart-docker.html#connect-to-the-database-instance","title":"Connect to the database instance","text":"

    To connect to a MySQL database in a container, use the docker exec command to run the mysql client inside the container. You must know the name or ID of the container that runs the database server and the database credentials.

    The docker exec command runs a specified command in a running container. The mysql client connects to a MySQL server with the given user name and password.

    For this example, we have the following options:

    • -it: Runs the command interactively and allocates a pseudo-terminal.
    • psmysql: The name of the running container.
    • mysql: Connects to a database instance.
    • -u: Specifies the user account used to connect.
    • -p: Prompts for the password when connecting.

    You must enter the password when the server prompts you.

    Connect to the database instance example

    $ docker exec -it psmysql mysql -uroot -p\n

    You are prompted to enter the password, which is secret. If you have changed the password, use your password. You will not see any characters as you type.

    Enter password:\n

    You should see the following result.

    Expected output
    Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 8\nServer version: 8.0.40 Percona Server (GPL), Release 26, Revision 0fe62c85\n\nCopyright (c) 2009-2024 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2024, Oracle and/or its affiliates.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql>\n
    "},{"location":"quickstart-docker.html#create-a-database","title":"Create a database","text":"Benefits and what to watch out for when creating databases and tables

    Creating a database and table has the following benefits:

    • Store and organize your data in a structured and consistent way.
    • Query and manipulate your data using SQL.
    • Enforce data integrity and security using constraints, triggers, views, roles, and permissions.
    • Optimize your data access and performance using indexes, partitions, caching, and other techniques.

    When you create a table, design your database schema carefully; changing it later may be difficult and costly. Plan for concurrency, transactions, locking, isolation, and other issues that may arise when multiple users access the same data. You must also back up and restore your data regularly, as data loss or corruption may occur due to hardware failures, human errors, or malicious attacks.

    To create a database, use the CREATE DATABASE statement. You can optionally specify the character set and collation for the database in the statement. After the database is created, select the database using the USE statement or the -D option in the MySQL client.

    mysql> CREATE DATABASE mydb;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> use mydb;\n
    Expected output
    Database changed\n
    "},{"location":"quickstart-docker.html#create-a-table","title":"Create a table","text":"

    Create a table using the CREATE TABLE statement. You can specify data types, constraints, indexes, and other options for each column, and use the DEFAULT keyword for columns that have default values.

    mysql> CREATE TABLE `employees` (\n    `id` mediumint(8) unsigned NOT NULL auto_increment,\n    `name` varchar(255) default NULL,\n    `email` varchar(255) default NULL,\n    `country` varchar(100) default NULL,\n    PRIMARY KEY (`id`)\n) AUTO_INCREMENT=1;\n
    Expected output
    Query OK, 0 rows affected, 1 warning (0.03 sec)\n
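
    You can list the tables in the database to confirm that the table was created (an optional check):

    mysql> SHOW TABLES;\n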
    "},{"location":"quickstart-docker.html#insert-data-into-the-table","title":"Insert data into the table","text":"

    Insert data into the table using the INSERT INTO SQL statement. A single statement can add multiple records to a table.

    mysql> INSERT INTO `employees` (`name`,`email`,`country`)\nVALUES\n    (\"Erasmus Richardson\",\"posuere.cubilia.curae@outlook.net\",\"England\"),\n    (\"Jenna French\",\"rhoncus.donec@hotmail.couk\",\"Canada\"),\n    (\"Alfred Dejesus\",\"interdum@aol.org\",\"Austria\"),\n    (\"Hamilton Puckett\",\"dapibus.quam@outlook.com\",\"Canada\"),\n    (\"Michal Brzezinski\",\"magna@icloud.pl\",\"Poland\"),\n    (\"Zofia Lis\",\"zofial00@hotmail.pl\",\"Poland\"),\n    (\"Aisha Yakubu\",\"ayakubu80@outlook.com\",\"Nigeria\"),\n    (\"Miguel Cardenas\",\"euismod@yahoo.com\",\"Peru\"),\n    (\"Luke Jansen\",\"nibh@hotmail.edu\",\"Netherlands\"),\n    (\"Roger Pettersen\",\"nunc@protonmail.no\",\"Norway\");\n
    Expected output
    Query OK, 10 rows affected (0.02 sec)\nRecords: 10  Duplicates: 0  Warnings: 0\n
    "},{"location":"quickstart-docker.html#run-a-select-query","title":"Run a SELECT query","text":"

    SELECT queries retrieve data from one or more tables based on specified criteria. They are the most common type of query and can be used for various purposes, such as displaying, filtering, sorting, aggregating, or joining data. SELECT queries do not modify the data in the database but can affect the performance if the query involves large or complex datasets.

    mysql> SELECT id, name, email, country FROM employees WHERE country = 'Poland';\n
    Expected output
    +----+-------------------+---------------------+---------+\n| id | name              | email               | country |\n+----+-------------------+---------------------+---------+\n|  5 | Michal Brzezinski | magna@icloud.pl     | Poland  |\n|  6 | Zofia Lis         | zofial00@hotmail.pl | Poland  |\n+----+-------------------+---------------------+---------+\n2 rows in set (0.00 sec)\n
    "},{"location":"quickstart-docker.html#run-an-update-query","title":"Run an Update query","text":"

    UPDATE queries modify existing data in a table. They are used to change or correct the information stored in the database. UPDATE queries can update one or more columns and rows simultaneously, depending on the specified conditions. They may also fail if they violate any constraints or rules defined on the table.

    The following example runs an UPDATE query and then a SELECT query with a WHERE clause to verify the update.

    mysql> UPDATE employees SET name = 'Zofia Niemec' WHERE id = 6;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\nRows matched: 1  Changed: 1  Warnings: 0\n
    mysql> SELECT name FROM employees WHERE id = 6;\n
    Expected output
    +--------------+\n| name         |\n+--------------+\n| Zofia Niemec |\n+--------------+\n1 row in set (0.00 sec)\n
    "},{"location":"quickstart-docker.html#run-an-insert-query","title":"Run an INSERT query","text":"

    INSERT queries add new data to a table. They are used to populate the database with new information. INSERT queries can insert one or more rows at a time, depending on the syntax. The query may fail if it violates any constraints or rules defined on the table, such as primary keys, foreign keys, unique indexes, or triggers.

    Insert a row into a table and then run a SELECT with a WHERE clause to verify the record was inserted.

    mysql> INSERT INTO `employees` (`name`,`email`,`country`)\nVALUES\n(\"Kenzo Sasaki\",\"KenSasaki@outlook.com\",\"Japan\");\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> SELECT id, name, email, country FROM employees WHERE id = 11;\n
    Expected output
    +----+--------------+-----------------------+---------+\n| id | name         | email                 | country |\n+----+--------------+-----------------------+---------+\n| 11 | Kenzo Sasaki | KenSasaki@outlook.com | Japan   |\n+----+--------------+-----------------------+---------+\n1 row in set (0.00 sec)\n
    "},{"location":"quickstart-docker.html#run-a-delete-query","title":"Run a Delete query","text":"

    DELETE queries remove existing data from a table. They are used to clean up the information no longer needed or relevant in the database. The DELETE queries can delete one or more rows at a time, depending on the specified conditions. They may also trigger cascading deletes on related tables if foreign key constraints are enforced.

    Delete a row in the table and run a SELECT with a WHERE clause to verify the deletion.

    mysql> DELETE FROM employees WHERE id >= 11;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> SELECT id, name, email, country FROM employees WHERE id > 10;\n
    Expected output
    Empty set (0.00 sec)\n
    "},{"location":"quickstart-docker.html#clean-up","title":"Clean up","text":"

    The clean-up process does the following:

    • Exits the MySQL command client shell and the Docker container.

    • Removes the Docker container and the Docker image.

    • Removes the Docker volume.

    Follow these steps:

    1. To exit the MySQL command client shell, use exit. You can also use the \\q or quit commands. Exiting the shell also closes the connection.

      • An example of exiting the MySQL command client shell and closing the connection.

        mysql> exit\n
        Expected output
        Bye\n
    2. You may want to remove the Docker container and the image if they are no longer needed or to free up disk space. To remove the Docker container, use the docker container rm command followed by psmysql, the container name or ID. To remove the Docker image, use the docker image rmi command followed by percona/percona-server:8.0.40, the image name or ID and the tag. If you are running the ARM64 version of Percona Server, edit the command to use the 8.0.40-aarch64 tag: docker image rmi percona/percona-server:8.0.40-aarch64.

      • An example of removing a Docker container.

        $ docker container rm psmysql -f\n
        Expected output

        psmysql\n
        • An example of removing a Docker image. If running the ARM64 version of Percona Server, edit the command to use the 8.0.40-aarch64 tag: docker image rmi percona/percona-server:8.0.40-aarch64

        $ docker image rmi percona/percona-server:8.0.40\n
        Expected output
        Untagged: percona/percona-server:8.0.40\nUntagged: percona/percona-server@sha256:4944f9b365e0dc88f41b3b704ff2a02d1459fd07763d7d1a444b263db8498e1f\nDeleted: sha256:b2588da614b1f382468fc9f44600863e324067a9cae57c204a30a2105d61d9d9\nDeleted: sha256:1ceaa6dc89e328281b426854a3b00509b5df13826a9618a09e819a830b752ebd\nDeleted: sha256:77471692427a227eb16d06907357956c3bb43f0fdc3ecf6f8937e1acecae24fe\nDeleted: sha256:8db06cc7b0430437edc7f118b139d2195cb65e2e8025f9a4517d16778f615384\nDeleted: sha256:e5a57a2fafec4ab9482240f28927651d56545c19626e344aceb8be3704c3c397\nDeleted: sha256:f86198f39b893674d44d424c958f23183bf919d2ced20e1f519714d0972d75ed\nDeleted: sha256:db9672f7e12e374d5e9016b758a29d5444e8b0fd1246a6f1fc5c2b3c847dddcf\n
    3. Remove the Docker volume if no container uses the volume and you no longer need it.

      • An example of removing a Docker volume.

        $ docker volume rm myvol\n
        Expected output
        myvol\n
    "},{"location":"quickstart-docker.html#troubleshooting","title":"Troubleshooting","text":"
    • Connection Refusal: Ensure Docker is running and the container is active. Verify port 3306 is accessible on the container\u2019s IP address.

    • Incorrect Credentials: Double-check the root password you set during container launch.

    • Data Loss: Always back up your data regularly outside the container volume.

    "},{"location":"quickstart-docker.html#security-measures","title":"Security measures","text":"
    • Strong Passwords: Use complex, unique passwords for the root user and any additional accounts created within the container. A password should contain at least 12 characters and include uppercase and lowercase letters, numbers, and symbols.

    • Network Restrictions: Limit network access to the container by restricting firewall rules to only authorized IP addresses (a brief sketch follows this list).

    • Periodic Updates: Regularly update the Percona Server image and Docker Engine to mitigate known vulnerabilities.

    • Data Encryption: Consider encrypting the data directory within the container volume for an additional layer of security.

    • Monitor Logs: Actively monitor container logs for suspicious activity or errors.
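    If you choose to publish the MySQL port to the host, one way to restrict network access is to bind it to the loopback interface so only local clients can reach the server. This minimal sketch reuses the container name, volume, and image tag from this guide; the password value is a placeholder.

    $ docker run -d -p 127.0.0.1:3306:3306 --name psmysql -e MYSQL_ROOT_PASSWORD=<your password> -v myvol:/var/lib/mysql percona/percona-server:8.0.40\n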

    Remember, responsible container management and robust security practices are crucial for safeguarding your MySQL deployment. By following these guidelines, you can leverage the benefits of Docker and Percona Server while prioritizing the integrity and security of your data.

    "},{"location":"quickstart-docker.html#next-step","title":"Next step","text":"

    Choose your next steps

    "},{"location":"quickstart-next-steps.html","title":"Next steps","text":"

    After creating a database and running queries, you have taken the first steps toward becoming a MySQL developer. However, there is still more to learn and practice to improve your skills and knowledge. Some of the next steps you can take are learning and using the following:

    • Familiarize yourself with the different data types, such as integers, strings, dates, and booleans, and choose the right one for your data.

    • Create and use indexes to optimize the performance of your queries and reduce the load on your database server (a brief sketch follows this list).

    • Combine data from multiple tables and sources using joins, subqueries, and unions.

    • Use functions, procedures, triggers, and views to encapsulate the logic, automate the tasks, and create reusable components.

    • Use transactions, locks, and isolation levels to ensure data integrity and consistency in concurrent operations.

    • Use backup and restore tools to protect your data from loss or corruption.

    • Use security features, such as users, roles, privileges, and encryption, to protect your data from unauthorized access or modification.

    • Use debugging and testing tools like logs, error messages, breakpoints, and assertions to identify and fix errors in your code or queries.

    • Use documentation and commenting tools, such as comments, diagrams, schemas, and manuals, to explain and document your code or queries.
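    For the index item above, here is a short hypothetical sketch using the employees table from this guide. The index name idx_country is made up, and EXPLAIN shows whether the optimizer can use the new index.

    mysql> CREATE INDEX idx_country ON employees (country);\nmysql> EXPLAIN SELECT name FROM employees WHERE country = 'Poland';\n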

    These tasks will expand your knowledge and skills in using Percona Server for MySQL and help you become more confident and proficient in developing database applications.

    Review the Percona Server for MySQL documentation for more information.

    "},{"location":"quickstart-next-steps.html#other-percona-products","title":"Other Percona products","text":""},{"location":"quickstart-next-steps.html#for-backups-and-restores","title":"For backups and restores","text":"

    Percona XtraBackup (PXB) is a 100% open source backup solution for all versions of Percona Server for MySQL and MySQL\u00ae that performs online non-blocking, tightly compressed, highly secure full backups on transactional systems. Maintain fully available applications during planned maintenance windows with Percona XtraBackup.

    Install Percona XtraBackup

    "},{"location":"quickstart-next-steps.html#for-monitoring-and-management","title":"For monitoring and management","text":"

    Percona Monitoring and Management (PMM) monitors and provides actionable performance data for MySQL variants, including Percona Server for MySQL, Percona XtraDB Cluster, Oracle MySQL Community Edition, Oracle MySQL Enterprise Edition, and MariaDB. PMM captures metrics and data for the InnoDB, XtraDB, and MyRocks storage engines, and has specialized dashboards for specific engine details.

    Install PMM and connect your MySQL instances to it.

    "},{"location":"quickstart-next-steps.html#for-high-availability","title":"For high availability","text":"

    Percona XtraDB Cluster (PXC) is a 100% open source, enterprise-grade, highly available clustering solution for MySQL multi-master setups based on Galera. PXC helps enterprises minimize unexpected downtime and data loss, reduce costs, and improve the performance and scalability of their database environments, supporting their critical business applications in the most demanding public, private, and hybrid cloud environments.

    Percona XtraDB Cluster Quick Start guide

    "},{"location":"quickstart-next-steps.html#advanced-command-line-tools","title":"Advanced command-line tools","text":"

    Percona Toolkit is a collection of advanced command-line tools used by the Percona support staff to perform various MySQL, MongoDB, and system tasks that are complex or difficult to perform manually. These tools are ideal alternatives to \u201cone-off\u201d scripts because they are professionally developed, formally tested, and documented. Each tool is self-contained, so installation is quick and easy and does not install libraries.

    Percona Toolkit documentation

    "},{"location":"quickstart-next-steps.html#operators","title":"Operators","text":"

    Percona Operator for MySQL and Percona Operator for MySQL based on Percona XtraDB Cluster are tools designed to simplify the deployment, management, and scaling of MySQL and Percona XtraDB Cluster (PXC) instances in Kubernetes environments. These operators automate various database tasks such as backups, recovery, and updates, ensuring high availability and reliability. They provide robust features like automated failover, self-healing, and seamless scaling, which help maintain optimal database performance and reduce manual intervention. By leveraging Kubernetes\u2019 orchestration capabilities, these operators enhance the efficiency and resilience of MySQL and PXC deployments, making them well-suited for modern cloud-native applications.

    Percona Operator for MySQL Documentation

    Percona Operator for MySQL based on Percona XtraDB Cluster

    "},{"location":"quickstart-next-steps.html#cloud-native-database-services","title":"Cloud-native database services","text":"

    Percona Everest is an open-source cloud-native database platform that helps developers deploy code faster, scale deployments rapidly, and reduce database administration overhead while regaining control over their data, database configuration, and DBaaS costs.

    Percona Everest

    "},{"location":"quickstart-overview.html","title":"Overview","text":"

    Percona Server for MySQL is a freely available, fully compatible, enhanced, and open source drop-in replacement for any MySQL database and provides enterprise-grade features in security, availability, data management, visibility, instrumentation, and performance.

    To start with Percona Server for MySQL quickly, this Quickstart guide focuses on using Docker or installing with APT or YUM.

    You can explore alternative installation options in the Install section of the Percona Server for MySQL documentation.

    Review Get more help for ways that we can work with you.

    "},{"location":"quickstart-overview.html#purpose-of-the-quickstart","title":"Purpose of the Quickstart","text":"

    This document guides you through the initial setup process, including setting a root password with the APT or YUM installation and creating a database.

    You can also do the following:

    • Download and install Percona Server for MySQL packages for your operating system

    • Work with the Quickstart for the Percona Operator for MySQL based on the Percona Server for MySQL using Helm or the Quickstart for the Percona Operator for MySQL based on the Percona Server for MySQL using Minikube to find out more about the Percona Operator.

    "},{"location":"quickstart-overview.html#steps-for-first-time-users","title":"Steps for first-time users","text":"

    The following guides walk you through the setup process and working with a database for a developer. Select the installation method that works best in your environment.

    "},{"location":"quickstart-overview.html#next-steps","title":"Next steps","text":"

    Run Percona Server for MySQL 8.0 in a Docker container

    Install using APT

    Install using YUM

    Choose your next steps

    "},{"location":"quickstart-yum.html","title":"Install with YUM","text":"

    Use the Percona repositories to install using YUM.

    "},{"location":"quickstart-yum.html#prerequisits","title":"Prerequisits","text":"
    • Either use sudo or run as root

    • Stable Internet access

    "},{"location":"quickstart-yum.html#installation-steps","title":"Installation steps","text":"

    The \u201cexpected output\u201d depends on the operating system. The following examples are based on Oracle Linux 9.3.

    1. Use the YUM package manager to install percona-release.

      $ sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n
      Expected output
      Oracle Linux 9 BaseOS Latest (x86_64)                                                        7.0 MB/s |  20 MB     00:02\nOracle Linux 9 Application Stream Packages (x86_64)                                          7.3 MB/s |  28 MB     00:03\nOracle Linux 9 UEK Release 7 (x86_64)                                                        6.7 MB/s |  27 MB     00:04\nLast metadata expiration check: 0:00:04 ago on Fri 16 Feb 2024 12:34:26 PM UTC.\npercona-release-latest.noarch.rpm                                                             37 kB/s |  20 kB     00:00\nDependencies resolved.\n...\n* Enabling the Percona Original repository\n<*> All done!\n* Enabling the Percona Release repository\n<*> All done!\nThe percona-release package now contains a percona-release script that can enable additional repositories for our newer products.\n\nFor example, to enable the Percona Server 8.0 repository use:\n\n    percona-release setup ps80\n\nNote: To avoid conflicts with older product versions, the percona-release setup command may disable our original repository for some products.\n\nFor more information, please visit:\n    https://www.percona.com/doc/percona-repo-config/percona-release.html\n\n\n    Verifying        : percona-release-1.0-27.noarch                                                                       1/1\n\nInstalled:\n    percona-release-1.0-27.noarch\n\nComplete!\n
    2. Use the percona-release tool to set up the repository for Percona Server for MySQL 8.0.

      $ sudo percona-release setup ps-80\n
      Expected output
      * Disabling all Percona Repositories\nOn Red Hat 8 systems it is needed to disable the following DNF module(s): mysql  to install Percona-Server\nDo you want to disable it? [y/N] y\nDisabling dnf module...\nPercona Release release/noarch YUM repository                                             2.7 kB/s | 1.8 kB     00:00\nUnable to resolve argument mysql\nError: Problems in request:\nmissing groups or modules: mysql\nDNF mysql module was disabled\n* Enabling the Percona Server 8.0 repository\n* Enabling the Percona Tools repository\n<*> All done!\n
    3. Enable the ps-80 release repository.

      $ sudo percona-release enable ps-80 release\n
      Expected output
      * Enabling the Percona Server 8.0 repository\n<*> All done!\n
    4. Install the latest version of Percona Server for MySQL 8.0. This installation may take some time.

      $ sudo yum install -y percona-server-server\n
      Expected output
      Percona Server 8.0 release/x86_64 YUM repository                                             1.0 MB/s | 2.3 MB     00:02\nPercona Tools release/x86_64 YUM repository                                                  761 kB/s | 1.1 MB     00:01\nLast metadata expiration check: 0:00:01 ago on Fri 16 Feb 2024 03:07:45 PM UTC.\nDependencies resolved.\n\u2026\nperl-vars-1.05-480.el9.noarch\n\nComplete!\n
    5. Check the status of the mysql service and restart if needed.

      $ sudo systemctl status mysql\n
      Expected output
      \u25cb mysqld.service - MySQL Server\n    Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; preset: disabled)\n    Active: inactive (dead)\n    Docs: man:mysqld(8)\n            http://dev.mysql.com/doc/refman/en/using-systemd.html\n
      $ sudo systemctl restart mysql\n

      This command has no output.

    6. Percona Server for MySQL generates a temporary password during installation. You must have the service running to access the log.

      $ sudo grep 'temporary password' /var/log/mysqld.log\n
      Expected output
      2024-02-12T16:05:03.969449Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: [random-generated-password]\n
    7. Log in to the server. Use the password retrieved by the grep command. You can type the password or copy-and-paste. You do not see the characters in the password as you type.

      $ mysql -uroot -p\nEnter password:\n
      Expected output
      Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 8\nServer version: 8.0.40 Percona Server (GPL), Release '27', Revision '2f8eeab2'$\n\nCopyright (c) 2009-2024 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2024, Oracle and/or its affiliates.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql>\n
    8. The temporary password must be replaced. Run the ALTER USER command to change the password for the root user. Remember or save the new password. You will need it to log in to the server in the next step.

      mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '[your password]';\n
      Expected output
      Query OK, 0 rows affected (0.01 sec)\n
    9. Log out of the server.

      mysql> exit\n
      Expected output
      Bye\n
    10. Log into the server with the new password to verify that the password has changed.

      $ mysql -uroot -p\nEnter password:\n
      Expected output
      Welcome to the MySQL monitor.  Commands end with ; or \\g.\nYour MySQL connection id is 8\nServer version: 8.0.40 Percona Server (GPL), Release '27', Revision '2f8eeab2'$\n\nCopyright (c) 2009-2024 Percona LLC and/or its affiliates\nCopyright (c) 2000, 2024, Oracle and/or its affiliates.\n\nOracle is a registered trademark of Oracle Corporation and/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\n\nmysql>\n
    "},{"location":"quickstart-yum.html#create-a-database","title":"Create a database","text":"Benefits and what to watch out for when creating databases and tables

    Creating a database and table has the following benefits:

    • Store and organize your data in a structured and consistent way.
    • Query and manipulate your data using SQL.
    • Enforce data integrity and security using constraints, triggers, views, roles, and permissions.
    • Optimize your data access and performance using indexes, partitions, caching, and other techniques.

    When you create a table, design your database schema carefully, because changing it later may be difficult and costly. Plan for concurrency, transactions, locking, isolation, and other issues that may arise when multiple users access the same data. You must back up your data regularly, as data loss or corruption may occur due to hardware failures, human errors, or malicious attacks.

    To create a database, use the CREATE DATABASE statement. You can optionally specify the character set and collation for the database in the statement. After the database is created, select the database using the USE statement or the -D option in the MySQL client.

    mysql> CREATE DATABASE mydb;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> use mydb;\n
    Expected output
    Database changed\n
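    A minimal sketch of the optional clauses mentioned above; the database name mydb2, the character set, and the collation are illustrative choices, not requirements. The second command shows selecting the database with the -D option when connecting.

    mysql> CREATE DATABASE mydb2 CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;\n
    $ mysql -uroot -p -D mydb2\n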
    "},{"location":"quickstart-yum.html#create-a-table","title":"Create a table","text":"

    Create a table using the CREATE TABLE statement. In the statement, specify the column names and, for each column, the data type, default value, and constraints. You can also define indexes and other table options.

    mysql> CREATE TABLE `employees` (\n    `id` mediumint(8) unsigned NOT NULL auto_increment,\n    `name` varchar(255) default NULL,\n    `email` varchar(255) default NULL,\n    `country` varchar(100) default NULL,\n    PRIMARY KEY (`id`)\n) AUTO_INCREMENT=1;\n
    Expected output
    Query OK, 0 rows affected, 1 warning (0.03 sec)\n
    "},{"location":"quickstart-yum.html#insert-data-into-the-table","title":"Insert data into the table","text":"

    Insert data into the table using the INSERT INTO SQL statement. A single statement can add multiple records to a table.

    mysql> INSERT INTO `employees` (`name`,`email`,`country`)\nVALUES\n    (\"Erasmus Richardson\",\"posuere.cubilia.curae@outlook.net\",\"England\"),\n    (\"Jenna French\",\"rhoncus.donec@hotmail.couk\",\"Canada\"),\n    (\"Alfred Dejesus\",\"interdum@aol.org\",\"Austria\"),\n    (\"Hamilton Puckett\",\"dapibus.quam@outlook.com\",\"Canada\"),\n    (\"Michal Brzezinski\",\"magna@icloud.pl\",\"Poland\"),\n    (\"Zofia Lis\",\"zofial00@hotmail.pl\",\"Poland\"),\n    (\"Aisha Yakubu\",\"ayakubu80@outlook.com\",\"Nigeria\"),\n    (\"Miguel Cardenas\",\"euismod@yahoo.com\",\"Peru\"),\n    (\"Luke Jansen\",\"nibh@hotmail.edu\",\"Netherlands\"),\n    (\"Roger Pettersen\",\"nunc@protonmail.no\",\"Norway\");\n
    Expected output
    Query OK, 10 rows affected (0.02 sec)\nRecords: 10  Duplicates: 0  Warnings: 0\n
    "},{"location":"quickstart-yum.html#run-a-select-query","title":"Run a SELECT query","text":"

    SELECT queries retrieve data from one or more tables based on specified criteria. They are the most common type of query and can be used for various purposes, such as displaying, filtering, sorting, aggregating, or joining data. SELECT queries do not modify the data in the database but can affect performance if they involve large or complex datasets.

    mysql> SELECT id, name, email, country FROM employees WHERE country = 'Poland';\n
    Expected output
    +----+-------------------+---------------------+---------+\n| id | name              | email               | country |\n+----+-------------------+---------------------+---------+\n|  5 | Michal Brzezinski | magna@icloud.pl     | Poland  |\n|  6 | Zofia Lis         | zofial00@hotmail.pl | Poland  |\n+----+-------------------+---------------------+---------+\n2 rows in set (0.00 sec)\n
    "},{"location":"quickstart-yum.html#run-an-update-query","title":"Run an Update query","text":"

    UPDATE queries modify existing data in a table. They are used to change or correct the information stored in the database. UPDATE queries can update one or more columns and rows simultaneously, depending on the specified conditions. They may also fail if they violate any constraints or rules defined on the table.

    Run an UPDATE query and then run a SELECT query with a WHERE clause to verify the update.
    mysql> UPDATE employees SET name = 'Zofia Niemec' WHERE id = 6;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\nRows matched: 1  Changed: 1  Warnings: 0\n
    mysql> SELECT name FROM employees WHERE id = 6;\n
    Expected output
    +--------------+\n| name         |\n+--------------+\n| Zofia Niemec |\n+--------------+\n1 row in set (0.00 sec)\n
    "},{"location":"quickstart-yum.html#run-an-insert-query","title":"Run an INSERT query","text":"

    INSERT queries add new data to a table and populate the database with new information. Depending on the syntax, INSERT queries can insert one or more rows at a time. The query may fail if it violates any constraints or rules defined on the table, such as primary keys, foreign keys, unique indexes, or triggers.

    Insert a row into a table and then run a SELECT with a WHERE clause to verify the record was inserted.

    mysql> INSERT INTO `employees` (`name`,`email`,`country`)\nVALUES\n(\"Kenzo Sasaki\",\"KenSasaki@outlook.com\",\"Japan\");\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> SELECT id, name, email, country FROM employees WHERE id = 11;\n
    Expected output
    +----+--------------+-----------------------+---------+\n| id | name         | email                 | country |\n+----+--------------+-----------------------+---------+\n| 11 | Kenzo Sasaki | KenSasaki@outlook.com | Japan   |\n+----+--------------+-----------------------+---------+\n1 row in set (0.00 sec)\n
    "},{"location":"quickstart-yum.html#run-a-delete-query","title":"Run a Delete query","text":"

    DELETE queries remove existing data from a table. They are used to clean up information that is no longer needed or relevant in the database. DELETE queries can delete one or more rows at a time, depending on the specified conditions. They may also trigger cascading deletes on related tables if foreign key constraints are enforced.

    Delete a row in the table and run a SELECT with a WHERE clause to verify the deletion.

    mysql> DELETE FROM employees WHERE id >= 11;\n
    Expected output
    Query OK, 1 row affected (0.01 sec)\n
    mysql> SELECT id, name, email, country FROM employees WHERE id > 10;\n
    Expected output
    Empty set (0.00 sec)\n
    "},{"location":"quickstart-yum.html#troubleshooting","title":"Troubleshooting:","text":"

    Installation:

    • Verify the repository is enabled: sudo yum repolist

    • Check for package conflicts: sudo yum deplist percona-server-server

    • Consult package logs: sudo journalctl -u yum

    MySQL startup:

    • Review system logs: sudo journalctl -u mysqld

    • Check configuration files: /etc/my.cnf

    "},{"location":"quickstart-yum.html#security-steps","title":"Security Steps:","text":"
    • Keep software updated: Run sudo yum update regularly.

    • Strong root password: Set a complex, unique password using mysql_secure_installation.

    • Disable unused accounts and databases: Remove unnecessary elements.

    • Monitor Server Activity: Employ tools, like Percona Monitoring and Management, and logs to monitor server activity for suspicious behavior.

    • Backup data regularly: Ensure robust backups for disaster recovery.

    "},{"location":"quickstart-yum.html#secure-the-installation","title":"Secure the installation","text":"

    You can increase the security of MySQL by running sudo mysql_secure_installation.

    After installing MySQL, you should run the mysql_secure_installation script to improve the security of your database server. This script helps you perform several important tasks, such as:

    • Set a password for the root user

    • Select a level for the password validation policy

    • Remove anonymous users

    • Disable root login remotely

    • Remove the test database

    • Reload the privilege tables to ensure all changes take effect immediately

    By running this script, you can prevent unauthorized access to your server and protect your data from potential threats.

    $ sudo mysql_secure_installation\n
    Expected output
    Securing the MySQL server deployment.\n\nEnter password for user root:\n\nVALIDATE PASSWORD COMPONENT can be used to test passwords\nand improve security. It checks the strength of password\nand allows the users to set only those passwords which are\nsecure enough. Would you like to setup VALIDATE PASSWORD component?\n\nPress y|Y for Yes, any other key for No:\n\nThere are three levels of password validation policy:\n\nLOW    Length >= 8\nMEDIUM Length >= 8, numeric, mixed case, and special characters\nSTRONG Length >= 8, numeric, mixed case, special characters and dictionary                  file\n\nPlease enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1\nUsing existing password for root.\n\nEstimated strength of the password: 0\nChange the password for root ? ((Press y|Y for Yes, any other key for No) :\n\nNew password:\n\nRe-enter new password:\n\nEstimated strength of the password: 100\nDo you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) :\nBy default, a MySQL installation has an anonymous user,\nallowing anyone to log into MySQL without having to have\na user account created for them. This is intended only for\ntesting, and to make the installation go a bit smoother.\nYou should remove them before moving into a production\nenvironment.\n\nRemove anonymous users? (Press y|Y for Yes, any other key for No) :\nSuccess.\n\n\nNormally, root should only be allowed to connect from\n'localhost'. This ensures that someone cannot guess at\nthe root password from the network.\n\nDisallow root login remotely? (Press y|Y for Yes, any other key for No) :\nSuccess.\n\nBy default, MySQL comes with a database named 'test' that\nanyone can access. This is also intended only for testing,\nand should be removed before moving into a production\nenvironment.\n\n\nRemove test database and access to it? (Press y|Y for Yes, any other key for No) :\n - Dropping test database...\nSuccess.\n\n - Removing privileges on test database...\nSuccess.\n\nReloading the privilege tables will ensure that all changes\nmade so far will take effect immediately.\n\nReload privilege tables now? (Press y|Y for Yes, any other key for No) :\nSuccess.\n\nAll done!\n
    "},{"location":"quickstart-yum.html#next-step","title":"Next step","text":"

    Choose your next steps

    "},{"location":"reading-audit-log-filter-files.html","title":"Reading Audit Log Filter files","text":"

    The Audit Log Filter functions provide a SQL interface for reading JSON-format audit log files. The functions cannot read log files in other formats. When the plugin is configured for JSON logging, the functions use the directory that contains the current audit log filter file and search that location for readable files. The value of the audit_log_filter_file system variable provides the file location, base name, and suffix; the functions then search for names that match this pattern.

    If the file is renamed and no longer fits the pattern, the file is ignored.

    "},{"location":"reading-audit-log-filter-files.html#functions-used-for-reading-the-files","title":"Functions used for reading the files","text":"

    The following functions read the files in JSON format:

    • audit_log_read - reads audit log filter events
    • audit_log_read_bookmark() - returns a bookmark for the most recently read event. The bookmark can be passed to audit_log_read().

    Initialize a read sequence by using a bookmark or an argument that specifies the start position:

    mysql> SELECT audit_log_read(audit_log_read_bookmark());\n

    The following example continues reading from the current position:

    mysql> SELECT audit_log_read();\n

    A read sequence is closed when the session ends or when audit_log_read() is called with a new start position.
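    As a sketch, a read sequence can also start from an explicit position rather than a bookmark. The timestamp below is a placeholder, and the shape of the JSON argument is assumed from the audit_log_read() conventions rather than confirmed here.

    mysql> SELECT audit_log_read('{ \"start\": { \"timestamp\": \"2024-02-16 00:00:00\" } }');\n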

    "},{"location":"removing-tokudb.html","title":"Migrate and remove the TokuDB storage engine","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    "},{"location":"removing-tokudb.html#migrate-to-myrocks","title":"Migrate to MyRocks","text":"

    To migrate data, use the mysqldump client utility or the tools in MySQL Workbench to dump and restore the database.

    We recommend migrating to the MyRocks storage engine. Follow these steps to migrate the data; a combined sketch follows the list:

    1. Use mysqldump to back up the TokuDB database into a single file.

    2. Create a MyRocks instance with MyRocks tables with no data.

    3. Replace the references to TokuDB with MyRocks.

    4. Enable the following variable: rocksdb_bulk_load. This variable also enables rocksdb_commit_in_the_middle.

    5. Import the data into the MyRocks database.

    6. Go to Remove the plugins.
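    A minimal sketch of steps 1 through 5, assuming a database named mydb whose dump contains ENGINE=TokuDB clauses and a target server with MyRocks enabled; adjust the names, credentials, and dump options for your environment.

    $ mysqldump --single-transaction mydb > mydb.sql\n$ sed -i 's/ENGINE=TokuDB/ENGINE=ROCKSDB/g' mydb.sql\n$ (echo \"SET SESSION rocksdb_bulk_load=1;\"; cat mydb.sql; echo \"SET SESSION rocksdb_bulk_load=0;\") | mysql mydb\n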

    "},{"location":"removing-tokudb.html#migrate-to-innodb","title":"Migrate to InnoDB","text":"

    You can remove the TokuDB storage engine from Percona Server for MySQL without causing any errors.

    If you need the data in the TokuDB tables, you must alter the tables to another supported storage engine. Do not remove the TokuDB storage engine before you\u2019ve changed your tables to a supported storage engine.

    mysql> ALTER TABLE table-name ENGINE=InnoDB;\n

    If you remove the TokuDB storage engine and then find you are missing data, you must reinstall the TokuDB storage engine to access the data.

    "},{"location":"removing-tokudb.html#remove-the-plugins","title":"Remove the plugins","text":"

    To remove the TokuDB storage engine with all installed plugins you can use the ps-admin script:

    $ ps-admin --disable-tokudb -uroot -pPassw0rd\n

    Another option is to remove the TokuDB storage engine with all installed plugins manually:

    mysql> UNINSTALL PLUGIN tokudb;\nmysql> UNINSTALL PLUGIN tokudb_file_map;\nmysql> UNINSTALL PLUGIN tokudb_fractal_tree_info;\nmysql> UNINSTALL PLUGIN tokudb_fractal_tree_block_map;\nmysql> UNINSTALL PLUGIN tokudb_trx;\nmysql> UNINSTALL PLUGIN tokudb_locks;\nmysql> UNINSTALL PLUGIN tokudb_lock_waits;\nmysql> UNINSTALL PLUGIN tokudb_background_job_status;\n

    After the engine and the plugins are uninstalled, you can remove the TokuDB package by using the apt/yum commands:

    $ sudo yum remove Percona-Server-tokudb-80.x86_64\n
    or

    $ sudo apt remove percona-server-tokudb-8.0\n

    Ensure you\u2019ve removed all TokuDB-specific variables from your configuration file before restarting the server. If variables are in your configuration file, the server won\u2019t start and the logs will contain errors or warnings.

    "},{"location":"reserved-words.html","title":"Reserved keywords","text":"

    Percona Server for MySQL uses additional reserved keywords, which define or manipulate a function or feature of the database. Add these words to the MySQL reserved keyword list when using Percona Server for MySQL.

    If you must use a reserved keyword as an identifier, enclose the word in a set of backtick (`) symbols.
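    For example, using the reserved keyword CLUSTERING as a hypothetical column name requires the backticks; without them, the statement fails with a syntax error.

    mysql> CREATE TABLE stats (`clustering` INT);\n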

    The following is a list of Percona-specific reserved keywords:

    • CLIENT_STATISTICS
    • CLUSTERING
    • COMPRESSION_DICTIONARY
    • EFFECTIVE
    • INDEX_STATISTICS
    • SEQUENCE_TABLE
    • TABLE_STATISTICS
    • THREAD_STATISTICS
    • USER_STATISTICS
    "},{"location":"rotating-master-key.html","title":"Rotate the master key","text":"

    Rotate the Master key periodically, and rotate it immediately if you believe the key has been compromised. Master key rotation changes the Master key; the tablespace keys are re-encrypted and updated in the tablespace headers. The operation does not affect tablespace data.

    If the Master key rotation is interrupted, the rotation operation is rolled forward when the server restarts. InnoDB reads the encryption data from the tablespace header; if certain tablespace keys have been encrypted with the prior Master key, InnoDB retrieves that Master key from the keyring to decrypt the tablespace key and then re-encrypts the tablespace key with the new Master key.

    To allow for Master Key rotation, you can encrypt an already encrypted InnoDB system tablespace with a new master key by running the following ALTER INSTANCE statement:

    mysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;\n

    The rotation operation must complete before any tablespace encryption operation can begin.

    Note

    The rotation re-encrypts each tablespace key. The tablespace key is not changed. If you want to change a tablespace key, disable and then re-enable encryption.
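    A sketch of changing a tablespace key this way, using a hypothetical table t1: disabling encryption decrypts the tablespace, and re-enabling it encrypts the tablespace with a new tablespace key.

    mysql> ALTER TABLE t1 ENCRYPTION='N';\nmysql> ALTER TABLE t1 ENCRYPTION='Y';\n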

    "},{"location":"secure-log-path-variable.html","title":"The secure_log_path variable","text":""},{"location":"secure-log-path-variable.html#secure_log_path","title":"secure_log_path","text":"

    This variable is implemented in Percona Server for MySQL 8.0.28-19 (2022-05-12).

    • Command-line: --secure-log-path

    • Dynamic: No

    • Scope: Global

    • Data type: String

    • Default: empty string

    This variable restricts the location of the slow_query_log, general_log, and buffered_error_log files. Also, this variable is applied to the following options:

    • slow_query_log_file - specifies the name of the slow query log file and the directory where to store this file.

    • general_log_file - specifies the name of the general log file and the directory where to store this file.

    • buffered_error_log_filename - specifies the name of the buffered error log file and the directory where to store this file. You can specify the size of the buffer for error logging in bytes with the buffered-error-log-size option.

    The secure_log_path variable is read-only and is set in a configuration file or on the command line.

    The value accepts a directory name as a string. The default value is an empty string. An empty string adds a warning to the error log and the log files are located in the default directory, /var/lib/mysql. If the value is a directory name, the log files are located in that directory. An attempt to move the log files from the specified directory results in an error.

    "},{"location":"secure-log-path-variable.html#the-example-of-the-secure_log_path-variable-usage","title":"The example of the secure_log_path variable usage","text":"

    Run the following commands as root:

    1. Create the directory to store the log files.

      [root@localhost ~]# mkdir /var/lib/mysqld-logs\n
    2. Enable the following options and set them up with the created directory in the /etc/my.cnf configuration file.

      [mysqld]\nsecure_log_path=/var/lib/mysqld-logs\ngeneral-log=ON\ngeneral-log-file=/var/lib/mysqld-logs/general_log\nslow-query-log=ON\nslow-query-log-file=/var/lib/mysqld-logs/slow_log\nbuffered-error-log-size=1000\nbuffered-error-log-filename=/var/lib/mysqld-logs/buffered_log \n
    3. Change the owner and group of the /var/lib/mysqld-logs directory and all its subdirectories and files to mysql.

      [root@localhost ~]# chown -R mysql:mysql /var/lib/mysqld-logs\n
    4. Restart the MySQL server.

      [root@localhost ~]# systemctl restart mysql\n
    5. Check that the slow query log and the general log are enabled for the MySQL server.

      [root@localhost ~]# mysql -e \"select @@slow_query_log, @@general_log, @@secure_log_path\"\n
      Expected output
      +------------------+---------------+-----------------------+\n| @@slow_query_log | @@general_log | @@secure_log_path     |\n+------------------+---------------+-----------------------+\n|                1 |             1 | /var/lib/mysqld-logs/ |\n+------------------+---------------+-----------------------+\n
    6. Check that the slow query log and the general log are stored in the /var/lib/mysqld-logs directory.

      [root@localhost ~]# cd /var/lib/mysqld-logs/\n[root@localhost mysqld-logs]# ls -lrth\n
      Expected output
      -rw-r-----. 1 mysql mysqld-logs 240 Aug 18 11:56 localhost-slow.log\n-rw-r-----. 1 mysql mysqld-logs 565 Aug 18 11:56 localhost.log\n
    "},{"location":"selinux.html","title":"Working with SELinux","text":"

    The Linux kernel, through the Linux Security Module (LSM), supports Security-Enhanced Linux (SELinux). This module provides a way to support mandatory access control policies. SELinux defines how confined processes interact with files, network ports, directories, other processes, and additional server components.

    An SELinux policy defines the set of rules, the types for files, and the domains for processes. Rules determine how a process interacts with another type. SELinux decides whether to allow or deny an action based on the subject\u2019s context: which object initiates the action and which object is the action\u2019s target.

    A label represents the context for administrators and users.

    CentOS 7 and CentOS 8 contain a MySQL SELinux policy. Percona Server for MySQL is a drop-in replacement for MySQL and can use this policy without changes.

    "},{"location":"selinux.html#selinux-context-example","title":"SELinux context example","text":"

    To view the SELinux context, add the -Z switch to many of the utilities. Here is an example of the context for mysqld:

    $ ps -eZ | grep mysqld_t\n
    Expected output
    system_u:system_r:mysqld_t:s0    3356 ?        00:00:01 mysqld\n

    The context has the following properties:

    • User - system_u

    • Role - system_r

    • Type or domain - mysqld_t

    • Sensitivity level - s0 (the 3356 in the ps output is the process ID, not part of the context)

    Most SELinux policy rules are based on the type or domain.

    "},{"location":"selinux.html#list-selinux-types-or-domains-associated-with-files","title":"List SELinux types or domains associated with files","text":"

    The security property that SELinux relies on is the Type security property. The type name often ends with _t. A group of objects with the same type security value belongs to the same domain.

    To view the mysqld_db_t types associated with the MySQL directories and files, run the following command:

    $ ls -laZ /var/lib/ | grep mysql\n
    Expected output
    drwxr-x--x. mysql   mysql   system_u:object_r:mysqld_db_t:s0 mysql\ndrwxr-x---. mysql   mysql   system_u:object_r:mysqld_db_t:s0 mysql-files\ndrwxr-x---. mysql   mysql   system_u:object_r:mysqld_db_t:s0 mysql-keyring\n

    Note

    If a policy type does not define the type property for an object, the default value is unconfined_t.

    "},{"location":"selinux.html#selinux-modes","title":"SELinux modes","text":"

    SELinux has the following modes:

    • Disabled - No SELinux policy modules loaded, which disables policies. Nothing is reported.

    • Permissive - SELinux is active, but policy modules are not enforced. A policy violation is reported but does not stop the action.

    • Enforcing - SELinux is active, and violations are reported and denied. If there is no rule to allow access to a confined resource, SELinux denies the access.

    "},{"location":"selinux.html#policy-types","title":"Policy types","text":"

    SELinux has several policy types:

    • Targeted - Most processes operate without restriction. Specific services are contained in security domains and defined by policies.

    • Strict - All processes are contained in security domains and defined by policies.

    SELinux has confined processes that run in a domain and restricts everything unless explicitly allowed. An unconfined process in an unconfined domain is allowed almost all access.

    MySQL is a confined process, and the policy module defines which files are read, which ports are opened, and so on. SELinux assumes the Percona Server for MySQL installation uses the default file locations and default ports.

    If you change the default, you must also edit the policy. If you do not update the policy, SELinux, in enforcing mode, denies access to all non-default resources.
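    For example, if the server listens on a non-default port, the port must be added to the mysqld_port_t type before enforcing mode allows connections to it. This is a sketch; 3307 is a placeholder for your port.

    $ sudo semanage port -a -t mysqld_port_t -p tcp 3307\n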

    "},{"location":"selinux.html#check-the-selinux-mode","title":"Check the SELinux mode","text":"

    To check the current SELinux mode, use either of the following commands:

    $ sestatus\n
    Expected output
    SELinux status:                 enabled\nSELinuxfs mount:                /sys/fs/selinux\nSELinux root directory:         /etc/selinux\nLoaded policy name:             targeted\nCurrent mode:                   enforcing\nMode from config file:          enforcing\nPolicy MLS status:              enabled\nPolicy deny_unknown status:     allowed\nMemory protection checking:     actual (secure)\nMax kernel policy version:      31\n

    or

    $ grep ^SELINUX= /etc/selinux/config\n
    Expected output
    SELINUX=enforcing\n

    Note

    Add the -b parameter to sestatus to display the Policy booleans. The boolean value for each parameter is shown. An example of using the -b parameter is the following:

    $ sestatus -b | grep mysql\n
    Expected output
    mysql_connect_any                           off\nselinuxuser_mysql_connect_enabled\n

    The /etc/selinux/config file controls if SELinux is disabled or enabled, and if enabled, whether SELinux operates in enforcing mode or permissive mode.

    "},{"location":"selinux.html#disable-selinux","title":"Disable SELinux","text":"

    If you plan to use the enforcing mode at another time, use the permissive mode instead of disabling SELinux. During the time that SELinux is disabled, the system may contain mislabeled objects or objects with no label. If you re-enable SELinux and plan to set SELinux to enforcing, you must follow the steps to Relabel the entire file system.

    To disable SELinux on boot, set the selinux=0 kernel option. The kernel then does not load the SELinux infrastructure. This option has the same effect as changing the SELINUX=disabled instruction in the configuration file and then rebooting the system.

    "},{"location":"selinux.html#additional-selinux-tools","title":"Additional SELinux tools","text":"

    Install the SELinux management tools, such as semanage or sesearch, if needed.

    On RHEL 7 or compatible operating systems, use the following command as root:

    $ yum -y install policycoreutils-python\n

    On RHEL 8 or compatible operating systems, use the following command as root:

    $ yum -y install policycoreutils-python-utils\n

    Note

    You may need root privileges to run SELinux management commands.

    "},{"location":"selinux.html#switch-the-mode-in-the-configuration-file","title":"Switch the mode in the configuration file","text":"

    Switching between modes may help when troubleshooting or when modifying rules.

    To permanently change the mode, edit the /etc/selinux/config file and change the SELINUX= value. You should also verify the change.

    $ cat /etc/selinux/config | grep SELINUX= | grep -v ^#\n
    Expected output
    SELINUX=enforcing\nSELINUX=enforcing\n
    $ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config\n\n$ cat /etc/selinux/config | grep SELINUX= | grep -v ^#\n
    Expected output
    SELINUX=permissive\nSELINUX=permissive\n

    Reboot your system after the change.

    If switching from either disabled mode or permissive mode to enforcing, see Relabel the entire file system.

    "},{"location":"selinux.html#switch-the-mode-until-the-next-reboot","title":"Switch the mode until the next reboot","text":"

    To change the mode until the next reboot, use either of the following commands as root:

    $ setenforce Enforcing\n

    or

    $ setenforce 1\n

    The following setenforce parameters are available:

    • Permissive - 0 is also permitted

    • Enforcing - 1 is also permitted

    You can view the current mode by running either of the following commands:

    $ getenforce\n
    Expected output
    Enforcing\n

    or

    $ sestatus | grep -i mode\n
    Expected output
    Current mode:                   permissive\nMode from config file:          enforcing\n
    "},{"location":"selinux.html#switch-the-mode-for-a-service","title":"Switch the mode for a service","text":"

    You can move one or more services into a permissive domain. The other services remain in enforcing mode.

    To add a service to the permissive domain, run the following as root:

    $ sudo semanage permissive -a mysqld_t\n

    To list the current permissive domains, run the following command:

    $ sudo semanage permissive -l\n
    Expected output
    ...\nCustomized Permissive Types\n\nmysqld_t\n\nBuiltin Permissive Types\n...\n

    To delete a service from the permissive domain, run the following:

    $ sudo semanage permissive -d mysqld_t\n

    The service returns to the system\u2019s SELinux mode. Be sure to follow the steps to Relabel the entire file system.

    "},{"location":"selinux.html#relabel-the-entire-file-system","title":"Relabel the entire file system","text":"

    Switching from disabled or permissive to enforcing requires additional steps. The enforcing mode requires the correct contexts, or labels, to function. The permissive mode allows users and processes to label files and system objects incorrectly. The disabled mode does not load the SELinux infrastructure and does not label resources or processes.

    On RHEL and compatible systems, use the fixfiles application for relabeling. You can relabel the entire file system or the file contexts of an application.

    For one application, run the following command:

    $ fixfiles -R mysqld restore\n

    To relabel the file system without rebooting the system, use the following command:

    $ fixfiles -f -F relabel\n

    Another option relabels the file system during a reboot. You can either add a touch file, read during the reboot operation, or configure a kernel boot parameter. The completion of the relabeling operation automatically removes the touch file.

    Add the touch file as root:

    $ touch /.autorelabel\n

    To configure the kernel, add the autorelabel=1 kernel parameter to the boot parameter list. The parameter forces a system relabel. Reboot in permissive mode to allow the process to complete before changing to enforcing.

    Note

    Relabeling an entire filesystem takes time. When the relabeling is complete, the system reboots again.

    "},{"location":"selinux.html#set-a-custom-data-directory","title":"Set a custom data directory","text":"

    If you do not use the default settings, SELinux, in enforcing mode, prevents access to the system.

    For example, during installation, you have used the following configuration:

    datadir=/var/lib/mysqlcustom\nsocket=/var/lib/mysqlcustom/mysql.sock\n

    Restart the service.

    $ service mysqld restart\n
    Expected output
    Redirecting to /bin/systemctl restart mysqld.service\nJob for mysqld.service failed because the control process exited with error code.\nSee \"systemctl status mysqld.service\" and \"journalctl -xe\" for details.\n

    Check the journal log to see the error code.

    $ journalctl -xe\n
    Expected output
    ...\nSELinux is preventing mysqld from getattr access to the file /var/lib/mysqlcustom/ibdata1.\n...\n

    Check the SELinux types in /var/lib/mysqlcustom.

    $ ls -laZ /var/lib/mysqlcustom\n
    Expected output
      total 164288\n  drwxr-x--x.  6 mysql mysql system_u:object_r:var_lib_t:s0       4096 Dec  2 07:58  .\n  drwxr-xr-x. 38 root  root  system_u:object_r:var_lib_t:s0       4096 Dec  1 14:29  ..\n  ...\n  -rw-r-----.  1 mysql mysql system_u:object_r:var_lib_t:s0   12582912 Dec  1 14:29  ibdata1\n  ...\n

    To solve the issue, use the following methods:

    • Set the proper labels for mysqlcustom files

    • Change the mysqld SELinux policy to allow mysqld access to var_lib_t files.

    The recommended solution is to set the proper labels. The following procedure assumes you have already created and set ownership to the custom data directory location:

    1. To change the SELinux context, use semanage fcontext. In this step, you define how SELinux deals with the custom paths:

      $ semanage fcontext -a -e /var/lib/mysql /var/lib/mysqlcustom\n

      SELinux applies the same labeling schema, defined in the mysqld policy, for the /var/lib/mysql directory to the custom directory. Files created within the custom directory are labeled as if they were in /var/lib/mysql.

    2. The restorecon command applies the change.

      $ restorecon -R -v /var/lib/mysqlcustom\n
    3. Restart the mysqld service:

      $ service mysqld start\n
    "},{"location":"selinux.html#set-a-custom-log-location","title":"Set a custom log location","text":"

    If you do not use the default settings, SELinux, in enforcing mode, prevents access to the location. Change the log location to a custom location in my.cnf:

    log-error=/logs/mysqld.log\n

    Verify the log location with the following command:

    $ ls -laZ /\n
    Expected output
      ...\n  drwxrwxrwx.   2 root root unconfined_u:object_r:default_t:s0    6 Dec  2 09:16 logs\n  ...\n

    Starting MySQL returns the following message:

    $ service mysql start\n
    Expected output
    Redirecting to /bin/systemctl start mysql.service\nJob for mysqld.service failed because the control process exited with error code.\nSee \"systemctl status mysqld.service\" and \"journalctl -xe\" for details.\n\n$ journalctl -xe\n...\nSELinux is preventing mysqld from write access to the directory logs.\n...\n

    The default SELinux policy allows mysqld to write logs into a location tagged with var_log_t, which is the /var/log location. You can solve the issue with either of the following methods:

    • Tag the /logs location properly

    • Edit the SELinux policy to allow mysqld access to all directories.

    Tagging the custom /logs location is the recommended method because it locks down access. Run the following commands to tag the custom location:

    $ semanage fcontext -a -t var_log_t /logs\n$ restorecon -v /logs\n

    You may not be able to change the /logs directory label. For example, other applications, with their own rules, use the same directory.

    To adjust the SELinux policy when a directory is shared, follow these steps:

    1. Create a local policy:

      ausearch -c 'mysqld' --raw | audit2allow -M my-mysqld\n
    2. This command generates the my-mysqld.te and my-mysqld.pp files. The my-mysqld.te file is the type enforcement policy file. The my-mysqld.pp file is the policy module loaded as a binary file into the SELinux subsystem.

      An example of the my-mysqld.te file:

      module my-mysqld 1.0;\n\nrequire {\n    type mysqld_t;\n    type var_lib_t;\n    type default_t;\n    class file getattr;\n    class dir write;\n}\n\n============= mysqld_t ==============\nallow mysqld_t default_t:dir write;\nallow mysqld_t var_lib_t:file getattr;\n

      The policy contains rules for the custom data directory and the custom logs directory. We have set the proper labels for the data directory location, and applying this auto-generated policy would loosen our hardening by allowing mysqld to access var_lib_t tags.

    3. SELinux-generated events are converted to rules. A generated policy may contain rules for recent violations and include unrelated rules. Unrelated rules are generated from actions, such as changing the data directory location, that are not related to the logs directory. Add the --start parameter to use log events after a specific time to filter out the unwanted events. This parameter captures events when the time stamp is equal to the specified time or later. SELinux generates a policy for the current actions.

      $ ausearch --start 10:00:00 -c 'mysqld' --raw | audit2allow -M my-mysqld\n
    4. This policy allows mysqld to write into the tagged directories. Open the my-mysqld.te file:

      module my-mysqld 1.0;\n\nrequire {\n    type mysqld_t;\n    type default_t;\n    class dir write;\n}\n\n============= mysqld_t ==============\nallow mysqld_t default_t:dir write;\n
    5. Install the SELinux policy module:

      $ semodule -i my-mysqld.pp\n

    Restart the service. If you have a failure, check the journal log and follow the same procedure.

    If SELinux prevents mysqld from creating a log file inside the directory, you can view all the violations by changing the SELinux mode to permissive and then running mysqld. All violations are logged in the journal log. After this run, you can generate a local policy module, install it, and switch SELinux back to enforcing mode. Follow this procedure:

    1. Unload the current local my-mysqld policy module:

      $ semodule -r my-mysqld\n
    2. You can put a single domain into permissive mode while the other domains on the system remain in enforcing mode. Use semanage permissive with the -a parameter to change mysqld_t to permissive mode:

      $ semanage permissive -a mysqld_t\n
    3. Verify the mode change:

      $ semodule -l | grep permissive\n
      Expected output
      ...\npermissive_mysqld_t\n...\n
    4. To make searching the log easier, record the current time:

      $ date\n
    5. Start the service.

      $ service mysqld start\n
    6. MySQL starts, and SELinux logs the violations in the journal log. Check the journal log:

      $ journalctl -xe\n
    7. Stop the service:

      $ service mysqld stop\n
    8. Generate a local mysqld policy, using the time returned from step 4:

      $ ausearch --start <time> -c 'mysqld' --raw | audit2allow -M my-mysqld\n
    9. Review the policy (the policy you generate may be different):

      $ cat my-mysqld.te\n
      Expected output
      module my-mysqld 1.0;\n\nrequire {\ntype default_t;\n    type mysqld_t;\n    class dir { add_name write };\n    class file { append create open };\n}\n\n============= mysqld_t ==============\nallow mysqld_t default_t:dir { add_name write };\nallow mysqld_t default_t:file { append create open };\n
    10. Install the policy:

      $ semodule -i my-mysqld.pp\n
    11. Use semanage permissive with the -d parameter, which deletes the permissive domain for the service:

      $ semanage permissive -d mysqld_t\n
    12. Restart the service:

      $ service mysqld start\n

    Note

    Use this procedure to adjust the local mysqld policy module. Review the generated changes to ensure the rules are not too permissive.

    "},{"location":"selinux.html#set-secure_file_priv-directory","title":"Set secure_file_priv directory","text":"

    Update the SELinux tags for the /var/lib/mysql-files/ directory, used for SELECT ... INTO OUTFILE or similar operations, if required. The server needs only read/write access to the destination directory.

    To set secure_file_priv to use this directory, run the following commands to set the context:

    $ semanage fcontext -a -t mysqld_db_t \"/var/lib/mysql-files/(/.*)?\"\n$ restorecon -Rv /var/lib/mysql-files\n

    Edit the path for a different location, if needed.
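
    As a quick check, the following is a minimal sketch: set secure_file_priv in my.cnf, restart the server, and try an export. The world.City table is only an illustrative example.

    [mysqld]\nsecure_file_priv=/var/lib/mysql-files\n

    mysql> SELECT * FROM world.City INTO OUTFILE '/var/lib/mysql-files/city.csv';\n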

    "},{"location":"sequence-table.html","title":"SEQUENCE_TABLE(n) function","text":"

    Using the SEQUENCE_TABLE() function provides the following:

    • Generates sequences: Acts as an inline table-valued function that generates a sequence of numbers.

    • Table-valued function: Unlike traditional scalar functions, SEQUENCE_TABLE() returns a virtual table with a single column named value containing the generated sequence.

    • Simpler syntax: Simplifies queries that need to generate predictable sequences of numbers.

    • Flexibility: Allows dynamic definition of sequences within queries, offering more control compared to pre-defined tables for sequences.

    • Predefined sequence: Does not manage sequences like Oracle or PostgreSQL; instead, it allows definition and generation of sequences within a SELECT statement.

    • Customization: Enables customization of the starting value, the increment/decrement amount, and the number of values to generate.

    "},{"location":"sequence-table.html#version-update","title":"Version update","text":"

    Percona Server for MySQL 8.0.37 deprecated SEQUENCE_TABLE(), and this function may be removed in a future release. We recommend that you use PERCONA_SEQUENCE_TABLE() instead.

    To maintain compatibility with existing third-party software, SEQUENCE_TABLE is no longer a reserved term and can be used as a regular identifier.

    Percona Server for MySQL 8.0.20-11 introduced the SEQUENCE_TABLE() function.

    "},{"location":"sequence-table.html#table-functions","title":"Table functions","text":"

    The function is an inline table-valued function. This function creates a temporary table with multiple rows. You can use this function within a single SELECT statement. Oracle MySQL Server has only the JSON_TABLE table function. Percona Server for MySQL has both the JSON_TABLE and SEQUENCE_TABLE() table functions. A single SELECT statement generates a multi-row result set. In contrast, a scalar function (like EXP(x) or LOWER(str)) always returns a single value of a specific data type.

    "},{"location":"sequence-table.html#syntax","title":"Syntax","text":"

    As with any derived tables, a table function requires an alias in the SELECT statement.

    The result set is a single column with the predefined column name value of type BIGINT UNSIGNED. You can reference the value column in SELECT statements. Using n as the number of generated values, the basic syntax is the following:

    SEQUENCE_TABLE(n) [AS] alias\n\nSELECT ... FROM SEQUENCE_TABLE(n) [AS] alias\n

    For example, the following statements are valid:

    SELECT * FROM SEQUENCE_TABLE(n) AS tt;\nSELECT <expr(value)> FROM SEQUENCE_TABLE(n) AS tt;\n

    The first number in the series, the initial term, is defined as 0, and the series ends with a value less than n.

    "},{"location":"sequence-table.html#example-usage","title":"Example usage","text":"

    Using SEQUENCE_TABLE():

    mysql> SELECT * FROM SEQUENCE_TABLE(5) AS sequence_data;\n

    Using PERCONA_SEQUENCE_TABLE():

    mysql> SELECT * FROM PERCONA_SEQUENCE_TABLE(5) AS sequence_data;\n
    "},{"location":"sequence-table.html#basic-sequence-generation","title":"Basic sequence generation","text":"

    In this example, the following statement generates a sequence:

    mysql> SELECT * FROM SEQUENCE_TABLE(3) AS tt;\n
    Expected output
    +-------+\n| value |\n+-------+\n|     0 |\n|     1 |\n|     2 |\n+-------+\n
    "},{"location":"sequence-table.html#start-with-a-specific-value","title":"Start with a specific value","text":"

    You can define the initial value using the WHERE clause. The following example starts the sequence with 4.

    SELECT value AS result FROM SEQUENCE_TABLE(8) AS tt WHERE value >= 4;\n
    Expected output
    +--------+\n| result |\n+--------+\n|      4 |\n|      5 |\n|      6 |\n|      7 |\n+--------+\n
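
    Because value is a regular BIGINT UNSIGNED column, you can also shift the sequence arithmetically instead of discarding rows. The following sketch produces the same 4..7 result while generating only four rows:

    SELECT value + 4 AS result FROM SEQUENCE_TABLE(4) AS tt;\n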
    "},{"location":"sequence-table.html#filter-even-numbers","title":"Filter even numbers","text":"

    Consecutive terms increase or decrease by a common difference. The default common difference value is 1. However, it is possible to filter the results using the WHERE clause to simulate common differences greater than 1.

    The following example prints only even numbers from the 0..7 range:

    SELECT value AS result FROM SEQUENCE_TABLE(8) AS tt WHERE value % 2 = 0;\n
    Expected output
    +--------+\n| result |\n+--------+\n|      0 |\n|      2 |\n|      4 |\n|      6 |\n+--------+\n
    "},{"location":"sequence-table.html#generate-random-numbers","title":"Generate random numbers","text":"

    The following is an example of using the function to populate a table with a set of random numbers:

    mysql> SELECT FLOOR(RAND() * 100) AS result FROM SEQUENCE_TABLE(4) AS tt;\n

    The output could be the following:

    Expected output
    +--------+\n| result |\n+--------+\n|     24 |\n|     56 |\n|     70 |\n|     25 |\n+--------+\n
    "},{"location":"sequence-table.html#generate-random-strings","title":"Generate random strings","text":"

    You can populate a table with a set of pseudo-random strings with the following statement:

    mysql> SELECT MD5(value) AS result FROM SEQUENCE_TABLE(4) AS tt;\n
    Expected output
    +----------------------------------+\n| result                           |\n+----------------------------------+\n| f17d9c990f40f8ac215f2ecdfd7d0451 |\n| 2e5751b7cfd7f053cd29e946fb2649a4 |\n| b026324c6904b2a9cb4b88d6d61c81d1 |\n| 26ab0db90d72e28ad0ba1e22ee510510 |\n+----------------------------------+\n
    "},{"location":"sequence-table.html#add-a-sequence-to-a-table","title":"Add a sequence to a table","text":"

    You can add the sequence as a column to a new table or an existing table, as shown in this example:

    mysql> CREATE TABLE t1 AS SELECT * FROM SEQUENCE_TABLE(4) AS tt;\n\nmysql> SELECT * FROM t1;\n
    Expected output
    +-------+\n| value |\n+-------+\n|     0 |\n|     1 |\n|     2 |\n|     3 |\n+-------+\n

    Sequences are useful for various purposes, such as populating tables and generating test data.
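
    For example, the following is a minimal sketch of generating test data with the sequence; the test_data table and its columns are illustrative:

    mysql> CREATE TABLE test_data (id BIGINT UNSIGNED, rnd INT, hash CHAR(32));\n\nmysql> INSERT INTO test_data SELECT value, FLOOR(RAND() * 100), MD5(value) FROM SEQUENCE_TABLE(1000) AS tt;\n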

    "},{"location":"server-version-numbers.html","title":"Understand version numbers","text":"

    A version number identifies the product release. The product contains the latest Generally Available (GA) features at the time of that release.

    8.0.29-21.2 (8.0.29 is the base version, 21 is the minor build version, and 2 is the custom build)

    Percona uses semantic version numbering, which follows the pattern of base version, minor build version, and an optional custom build. Percona assigns unique, non-negative integers in increasing order for each minor build release. The version number combines the base MySQL 8.0 version number, the minor build version, and the custom build version, if needed.

    For example, the version numbers for Percona Server for MySQL 8.0.29-21.2 define the following information:

    • Base version - the leftmost numbers indicate MySQL 8.0 version used as a base.

    • Minor build version - an internal number that increases by one every time Percona Server for MySQL is released.

    • Custom build version - an optional number assigned to custom builds used for bug fixes. The software features, unless they\u2019re included in the bug fix, don\u2019t change.

    Percona Server for MySQL 8.0.28-19 and 8.0.28-20 are both based on MySQL 8.0.28.

    "},{"location":"show-engines.html","title":"Show storage engines","text":"

    This feature changes the comment field displayed when the SHOW STORAGE ENGINES command is executed and XtraDB is the storage engine.

    "},{"location":"show-engines.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.

    Before the Change:

    mysql> show storage engines;\n
    Expected output
    +------------+---------+----------------------------------------------------------------+--------------+------+------------+\n| Engine     | Support | Comment                                                        | Transactions | XA   | Savepoints |\n+------------+---------+----------------------------------------------------------------+--------------+------+------------+\n| InnoDB     | YES     | Supports transactions, row-level locking, and foreign keys     | YES          | YES  | YES        |\n...\n+------------+---------+----------------------------------------------------------------+--------------+------+------------+\n

    After the Change:

    mysql> show storage engines;\n
    Expected output
    +------------+---------+----------------------------------------------------------------------------+--------------+------+------------+\n| Engine     | Support | Comment                                                                    | Transactions |   XA | Savepoints |\n+------------+---------+----------------------------------------------------------------------------+--------------+------+------------+\n| InnoDB     | YES     | Percona-XtraDB, Supports transactions, row-level locking, and foreign keys |          YES | YES  | YES        |\n...\n+------------+---------+----------------------------------------------------------------------------+--------------+------+------------+\n
    "},{"location":"slow-extended.html","title":"Slow query log variables","text":"

    This feature adds microsecond time resolution and additional statistics to the slow query log output. It lets you enable or disable the slow query log at runtime, adds logging for the replica SQL thread, and adds fine-grained control over what and how much to log into the slow query log.

    "},{"location":"slow-extended.html#system-variables","title":"System Variables","text":""},{"location":"slow-extended.html#log_slow_filter","title":"log_slow_filter","text":"Option Description Command-line Yes Config file Yes Scope Global, Session Dynamic Yes

    Controls which slow queries are recorded in the slow query log based on any combination of the following values:

    Option Description filesort Logs when a query requires sorting. filesort_on_disk Records queries that sort results using temporary tables on disk. full_join Records queries that join tables without using indexes. full_scan Records queries that scan the entire table. qc_miss Logs queries that could not be served from the query cache. tmp_table Records queries that create temporary tables in memory. tmp_table_on_disk Records queries that create temporary tables on disk.

    Multiple values are separated by commas. For example: full_scan,tmp_table_on_disk
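
    For example, the following sketch records only full table scans and on-disk temporary tables for the current session:

    mysql> SET SESSION log_slow_filter = 'full_scan,tmp_table_on_disk';\n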

    If no values are specified, the filter is disabled and all queries are logged.

    "},{"location":"slow-extended.html#log_slow_rate_type","title":"log_slow_rate_type","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Enumerated Default session Range session, query

    Determines the context of log_slow_rate_limit.

    • session: The rate limit applies to the entire server session. The maximum number of slow queries allowed per second is counted across all queries executed within that session.

    • query: The rate limit applies to each individual query. Each query can be logged as slow if it exceeds the specified limit, regardless of the overall session activity.

    "},{"location":"slow-extended.html#log_slow_rate_limit","title":"log_slow_rate_limit","text":"Option Description Command-line Yes Config file Yes Scope Global, session Dynamic Yes Default 1 Range 1-1000

    The log_slow_rate_limit variable controls how often queries are logged in the slow query log. Instead of logging every query, it only logs one out of every n queries, where n is the value of log_slow_rate_limit.

    By default, n is 1, so all queries are logged. This setting helps reduce the amount of information in the slow query log, which is useful during debugging to avoid overwhelming the log file.
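
    For example, the following sketch logs roughly one out of every 100 queries; the session/query distinction is described below:

    mysql> SET GLOBAL log_slow_rate_type = 'query';\nmysql> SET GLOBAL log_slow_rate_limit = 100;\n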

    Please note: when log_slow_rate_type is session, rate limiting is disabled for the replication thread.

    Logging all queries might consume I/O bandwidth and cause the log file to grow large.

    • When log_slow_rate_type is session, this option lets you log full sessions, so you have complete records of sessions for later analysis, but you can rate-limit the number of sessions that are logged. Note that this feature does not work well if your application uses any type of connection pooling or persistent connections. Also note that if you change log_slow_rate_limit in session mode, you should reconnect for the change to take effect.

    • When log_slow_rate_type is query, this option lets you log just some queries for later analysis. For example, if you set the value to 100, then one percent of queries will be logged.

    Note that every query has a globally unique query_id, and every connection can have its own (session) log_slow_rate_limit. The decision to log a query is made in the following manner:

    • If log_slow_rate_limit = 1, log every query.

    • If log_slow_rate_limit > 1, log one out of every log_slow_rate_limit queries.

    This allows you to set up the logging behavior flexibly.

    For example, if you set the value to 100, then one out of every 100 sessions or queries is logged, depending on log_slow_rate_type.

    Percona Server for MySQL adds information about log_slow_rate_limit to the slow query log. If log_slow_rate_limit is in effect, it is reflected in the slow query log for each written query.

    Expected output
    Log_slow_rate_type: query  Log_slow_rate_limit: 10\n
    "},{"location":"slow-extended.html#log_slow_sp_statements","title":"log_slow_sp_statements","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Boolean Default TRUE

    If TRUE, statements executed by stored procedures are logged to the slow query log if it is enabled.

    Percona Server for MySQL implemented improvements for logging of stored procedures to the slow query log:

    • Each query from a stored procedure is now logged to the slow query log individually.

    • The CALL statement itself is no longer logged to the slow query log, as this would count the same query twice and lead to incorrect results.

    • Queries called inside stored procedures are annotated in the slow query log with the name of the stored procedure in which they run.

    Example of the improved stored procedure slow query log entry:

    mysql> DELIMITER //\nmysql> CREATE PROCEDURE improved_sp_log()\n       BEGIN\n         SELECT * FROM City;\n         SELECT * FROM Country;\n       END//\nmysql> DELIMITER ;\nmysql> CALL improved_sp_log();\n

    When we check the slow query log after running the stored procedure, with log_slow_sp_statements set to TRUE, it should look like this:

    Expected output
    # Time: 150109 11:38:55\n# User@Host: root[root] @ localhost []\n# Thread_id: 40  Schema: world  Last_errno: 0  Killed: 0\n# Query_time: 0.012989  Lock_time: 0.000033  Rows_sent: 4079  Rows_examined: 4079  Rows_affected: 0  Rows_read: 4079\n# Bytes_sent: 161085\n# Stored routine: world.improved_sp_log\nSET timestamp=1420803535;\nSELECT * FROM City;\n# User@Host: root[root] @ localhost []\n# Thread_id: 40  Schema: world  Last_errno: 0  Killed: 0\n# Query_time: 0.001413  Lock_time: 0.000017  Rows_sent: 4318  Rows_examined: 4318  Rows_affected: 0  Rows_read: 4318\n# Bytes_sent: 194601\n# Stored routine: world.improved_sp_log\nSET timestamp=1420803535;\n

    If the log_slow_sp_statements variable is set to FALSE:

    • An entry is added to the slow query log for the CALL statement only, and not for any of the individual statements run in that stored procedure.

    • Execution time is reported for the CALL statement as the total execution time of the CALL, including all its statements.

    If we run the same stored procedure with log_slow_sp_statements set to FALSE, the slow query log should look like this:

    Expected output
    # Time: 150109 11:51:42\n# User@Host: root[root] @ localhost []\n# Thread_id: 40  Schema: world  Last_errno: 0  Killed: 0\n# Query_time: 0.013947  Lock_time: 0.000000  Rows_sent: 4318  Rows_examined: 4318  Rows_affected: 0  Rows_read: 4318\n# Bytes_sent: 194612\nSET timestamp=1420804302;\nCALL improved_sp_log();\n

    Note

    Support for logging stored procedures doesn\u2019t involve triggers, so they won\u2019t be logged even if this feature is enabled.

    "},{"location":"slow-extended.html#log_slow_verbosity","title":"log_slow_verbosity","text":"Option Description Command-line Yes Config file Yes Scope Global, session Dynamic Yes

    Specifies how much information to include in your slow log. The value is a comma-delimited string, and can contain any combination of the following values:

    • microtime: Log queries with microsecond precision.

    • query_plan: Log information about the query\u2019s execution plan.

    • innodb: Log InnoDB statistics.

    • minimal: Equivalent to enabling just microtime.

    • standard: Equivalent to enabling microtime,query_plan.

    • full: Equivalent to microtime,query_plan,innodb.

    • profiling: Enables profiling of all queries in all connections.

    • profiling_use_getrusage: Enables usage of the getrusage function.

    • query_info: Enables printing Query_tables and Query_digest into the slow query log. These fields are disabled by default.

    You can combine the options. For example, to enable microsecond query timing and InnoDB statistics, set this option to microtime,innodb or standard.
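
    For example, the following sketch enables microsecond timing and InnoDB statistics for the current session:

    mysql> SET SESSION log_slow_verbosity = 'microtime,innodb';\n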

    "},{"location":"slow-extended.html#slow_query_log_use_global_control","title":"slow_query_log_use_global_control","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Default None

    Specifies which variables have global scope instead of local. For such variables, the global variable value is used in the current session, but without copying this value to the session value. The value is a "flag" variable: you can specify multiple values separated by commas (see the example after this list).

    • none: All variables use local scope

    • log_slow_filter: Global variable log_slow_filter has effect (instead of local)

    • log_slow_rate_limit: Global variable log_slow_rate_limit has effect (instead of local)

    • log_slow_verbosity: Global variable log_slow_verbosity has effect (instead of local)

    • long_query_time: Global variable long_query_time has effect (instead of local)

    • min_examined_row_limit: Global variable min_examined_row_limit has effect (instead of local)

    • all: All global variables have effect (instead of local)
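
    For example, the following sketch forces all sessions to use the global log_slow_filter and long_query_time values:

    mysql> SET GLOBAL slow_query_log_use_global_control = 'log_slow_filter,long_query_time';\n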

    "},{"location":"slow-extended.html#slow_query_log_always_write_time","title":"slow_query_log_always_write_time","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Default 10

    This variable specifies an additional query execution time threshold for the slow query log. When a query's execution time exceeds this threshold, the query is written to the slow query log unconditionally; that is, log_slow_rate_limit does not apply to it.
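
    For example, the following sketch unconditionally logs any query that runs longer than five seconds:

    mysql> SET GLOBAL slow_query_log_always_write_time = 5;\n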

    "},{"location":"slowlog-rotation.html","title":"Slow query log rotation and expiration","text":"

    Important

    This feature is a tech preview. Before using this feature in production, we recommend that you test restoring production from physical backups in your environment, and also use the alternative backup method for redundancy.

    This feature was implemented in Percona Server for MySQL 8.0.27-18.

    Percona has implemented two new variables, max_slowlog_size and max_slowlog_files, to provide users with the ability to control the slow query log disk usage. These variables have the same behavior as the max_binlog_size and max_binlog_files variables used for controlling the binary log.

    "},{"location":"slowlog-rotation.html#system-variables","title":"System variables","text":""},{"location":"slowlog-rotation.html#max_slowlog_size","title":"max_slowlog_size","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type numeric Default 0 (unlimited) Range 0 - 1073741824

    The max_slowlog_size variable controls when the server rotates the slow query log file based on size.

    The value is set to 0 by default, which means the server does not automatically rotate the slow query log file.

    The block size is 4096 bytes. If you set a value that is not a multiple of 4096, the server rounds it down to the nearest multiple of 4096. For example, setting max_slowlog_size to any value less than 4096 will effectively set the value to 0.

    If you set a limit for this size and enable this feature, the server will rename the slow query log file to slow_query_log_file.000001 once it reaches the specified size.

    "},{"location":"slowlog-rotation.html#max_slowlog_files","title":"max_slowlog_files","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type numeric Default 0 (unlimited) Range 0 - 102400

    This variable limits the total number of slow query log files and is used with max_slowlog_size.

    The server creates and adds slow query logs until it reaches the upper value of the range. When the upper value is reached, the server creates a new slow query log file with a higher sequence number and deletes the log file with the lowest sequence number, maintaining the total number defined by this variable.
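
    For example, the following my.cnf sketch rotates the slow query log at roughly 64 MB (a multiple of the 4096-byte block size) and keeps at most ten files; the values are illustrative:

    [mysqld]\nmax_slowlog_size=67108864\nmax_slowlog_files=10\n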

    "},{"location":"source-tarball.html","title":"Install Percona Server for MySQL from a source tarball","text":"

    Fetch and extract the source tarball.

    For example

    $ wget https://downloads.percona.com/downloads/Percona-Server-8.0/Percona-Server-8.0.26-16/binary/tarball/Percona-Server-8.0.26-16-Linux.x86_64.glibc2.12.tar.gz\n$ tar xfz Percona-Server-8.0.26-16-Linux.x86_64.glibc2.12.tar.gz\n

    To complete the installation, follow the instructions in Compile Percona Server for MySQL from Source.

    "},{"location":"ssl-improvement.html","title":"SSL improvements","text":"

    Percona Server for MySQL passes Elliptic Curve Cryptography (ECC) ciphers to OpenSSL by default.

    Note

    Although documented as supported, elliptic-curve crypto-based ciphers do not work with MySQL.

    "},{"location":"stacktrace.html","title":"Stacktrace","text":""},{"location":"stacktrace.html#stack-trace","title":"Stack trace","text":"

    Developers use the stack trace in the debug process, either during an interactive investigation or during a post-mortem. No configuration is required to generate a stack trace.

    Implemented in Percona Server for MySQL 8.0.21-12, stack trace adds the following:

    • Prints the binary BuildID: The strip utility removes unneeded sections and debugging information to reduce the binary size. This method is standard with containers, where the size of the image is essential. The BuildID lets you resolve the stack trace even when the strip utility has removed the binary symbol table.

    • Prints the server version information: The version information establishes the starting point for analysis. Some applications, such as MySQL, only print this information to a log on startup; when a crash occurs, the log may be large, rotated, or truncated.

    "},{"location":"start-transaction-with-consistent-snapshot.html","title":"Start transaction with consistent snapshot","text":"

    Percona Server for MySQL has ported the MariaDB enhancement for the START TRANSACTION WITH CONSISTENT SNAPSHOT feature to the MySQL 5.6 group commit implementation. This enhancement makes binary log positions consistent with InnoDB transaction snapshots.

    This feature is quite useful for obtaining logical backups with correct positions without running FLUSH TABLES WITH READ LOCK. The binary log position can be obtained from two newly implemented status variables: Binlog_snapshot_file and Binlog_snapshot_position. After starting a transaction using START TRANSACTION WITH CONSISTENT SNAPSHOT, these two variables provide the binlog position corresponding to the state of the database in the consistent snapshot, regardless of which other transactions have been committed since the snapshot was taken.
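
    For example, the following is a minimal sketch of reading a snapshot-consistent binary log position:

    mysql> START TRANSACTION WITH CONSISTENT SNAPSHOT;\n\nmysql> SHOW STATUS LIKE 'Binlog_snapshot_%';\n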

    "},{"location":"start-transaction-with-consistent-snapshot.html#snapshot-cloning","title":"Snapshot cloning","text":"

    The Percona Server for MySQL implementation extends the START TRANSACTION WITH CONSISTENT SNAPSHOT syntax with the optional FROM SESSION clause:

    START TRANSACTION WITH CONSISTENT SNAPSHOT FROM SESSION <session_id>;\n

    When specified, instead of creating a new snapshot of data (or binary log coordinates), all participating storage engines and the binary log create a copy of the snapshot that was created by an active transaction in the specified session. session_id is the session identifier reported in the Id column of SHOW PROCESSLIST.

    Currently, snapshot cloning is only supported by XtraDB and the binary log. As with the regular START TRANSACTION WITH CONSISTENT SNAPSHOT, snapshot clones can only be created with the REPEATABLE READ isolation level.

    For XtraDB, a transaction with a cloned snapshot will only see data visible or changed by the donor transaction. That is, the cloned transaction will see no changes committed by transactions that started after the donor transaction, not even changes made by itself. Note that in case of chained cloning the donor transaction is the first one in the chain. For example, if transaction A is cloned into transaction B, which is in turn cloned into transaction C, the latter will have read view from transaction A (i.e., the donor transaction). Therefore, it will see changes made by transaction A, but not by transaction B.

    "},{"location":"start-transaction-with-consistent-snapshot.html#mysqldump","title":"mysqldump","text":"

    mysqldump has been updated to use the new status variables automatically when they are supported by the server and both --single-transaction and --master-data are specified on the command line. Along with the mysqldump improvements introduced in Backup Locks, there is now a way to generate mysqldump backups that are guaranteed to be consistent without using FLUSH TABLES WITH READ LOCK, even if --master-data is requested.
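
    For example, the following is a sketch of such a backup invocation; the world database name is illustrative:

    $ mysqldump --single-transaction --master-data=2 world > world.sql\n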

    "},{"location":"start-transaction-with-consistent-snapshot.html#system-variables","title":"System variables","text":""},{"location":"start-transaction-with-consistent-snapshot.html#have_snapshot_cloning","title":"have_snapshot_cloning","text":"Option Description Command Line: Yes Config file No Scope: Global Dynamic: No Data type Boolean

    This server variable is implemented to help other utilities detect if the server supports the FROM SESSION extension. When available, the snapshot cloning feature and the syntax extension to START TRANSACTION WITH CONSISTENT SNAPSHOT are supported by the server, and the variable value is always YES.

    "},{"location":"start-transaction-with-consistent-snapshot.html#status-variables","title":"Status variables","text":""},{"location":"start-transaction-with-consistent-snapshot.html#binlog_snapshot_file","title":"Binlog_snapshot_file","text":"Option Description Scope: Global Data type String"},{"location":"start-transaction-with-consistent-snapshot.html#binlog_snapshot_position","title":"Binlog_snapshot_position","text":"Option Description Scope: Global Data type Numeric"},{"location":"start-transaction-with-consistent-snapshot.html#binlog_snapshot_gtid_executed","title":"Binlog_snapshot_gtid_executed","text":"Option Description Scope: Global Data type UUID

    Returns the gtid_executed state when START TRANSACTION WITH CONSISTENT SNAPSHOT starts. Within this transaction, the variable returns the same value. Other sessions can run transactions that modify gtid_executed.

    These status variables are only available when the binary log is enabled globally.

    "},{"location":"start-transaction-with-consistent-snapshot.html#other-reading","title":"Other reading","text":"
    • MariaDB Enhancements for START TRANSACTION WITH CONSISTENT SNAPSHOT
    "},{"location":"status-variables.html","title":"MyRocks status variables","text":"

    MyRocks status variables provide details about the inner workings of the storage engine and they can be useful in tuning the storage engine to a particular environment.

    You can view these variables and their values by running:

    mysql> SHOW STATUS LIKE 'rocksdb%';\n

    The following global status variables are available:

    Name Var Type rocksdb_rows_deleted Numeric rocksdb_rows_inserted Numeric rocksdb_rows_read Numeric rocksdb_rows_unfiltered_no_snapshot Numeric rocksdb_rows_updated Numeric rocksdb_rows_expired Numeric rocksdb_system_rows_deleted Numeric rocksdb_system_rows_inserted Numeric rocksdb_system_rows_read Numeric rocksdb_system_rows_updated Numeric rocksdb_memtable_total Numeric rocksdb_memtable_unflushed Numeric rocksdb_queries_point Numeric rocksdb_queries_range Numeric rocksdb_covered_secondary_key_lookups Numeric rocksdb_additional_compactions_trigger Numeric rocksdb_block_cache_add Numeric rocksdb_block_cache_add_failures Numeric rocksdb_block_cache_bytes_read Numeric rocksdb_block_cache_bytes_write Numeric rocksdb_block_cache_data_add Numeric rocksdb_block_cache_data_bytes_insert Numeric rocksdb_block_cache_data_hit Numeric rocksdb_block_cache_data_miss Numeric rocksdb_block_cache_filter_add Numeric rocksdb_block_cache_filter_bytes_evict Numeric rocksdb_block_cache_filter_bytes_insert Numeric rocksdb_block_cache_filter_hit Numeric rocksdb_block_cache_filter_miss Numeric rocksdb_block_cache_hit Numeric rocksdb_block_cache_index_add Numeric rocksdb_block_cache_index_bytes_evict Numeric rocksdb_block_cache_index_bytes_insert Numeric rocksdb_block_cache_index_hit Numeric rocksdb_block_cache_index_miss Numeric rocksdb_block_cache_miss Numeric rocksdb_block_cache_compressed_hit Numeric rocksdb_block_cache_compressed_miss Numeric rocksdb_bloom_filter_prefix_checked Numeric rocksdb_bloom_filter_prefix_useful Numeric rocksdb_bloom_filter_useful Numeric rocksdb_bytes_read Numeric rocksdb_bytes_written Numeric rocksdb_compact_read_bytes Numeric rocksdb_compact_write_bytes Numeric rocksdb_compaction_key_drop_new Numeric rocksdb_compaction_key_drop_obsolete Numeric rocksdb_compaction_key_drop_user Numeric rocksdb_flush_write_bytes Numeric rocksdb_get_hit_l0 Numeric rocksdb_get_hit_l1 Numeric rocksdb_get_hit_l2_and_up Numeric rocksdb_get_updates_since_calls Numeric rocksdb_iter_bytes_read Numeric rocksdb_memtable_hit Numeric rocksdb_memtable_miss Numeric rocksdb_no_file_closes Numeric rocksdb_no_file_errors Numeric rocksdb_no_file_opens Numeric rocksdb_num_iterators Numeric rocksdb_number_block_not_compressed Numeric rocksdb_number_db_next Numeric rocksdb_number_db_next_found Numeric rocksdb_number_db_prev Numeric rocksdb_number_db_prev_found Numeric rocksdb_number_db_seek Numeric rocksdb_number_db_seek_found Numeric rocksdb_number_deletes_filtered Numeric rocksdb_number_keys_read Numeric rocksdb_number_keys_updated Numeric rocksdb_number_keys_written Numeric rocksdb_number_merge_failures Numeric rocksdb_number_multiget_bytes_read Numeric rocksdb_number_multiget_get Numeric rocksdb_number_multiget_keys_read Numeric rocksdb_number_reseeks_iteration Numeric rocksdb_number_sst_entry_delete Numeric rocksdb_number_sst_entry_merge Numeric rocksdb_number_sst_entry_other Numeric rocksdb_number_sst_entry_put Numeric rocksdb_number_sst_entry_singledelete Numeric rocksdb_number_stat_computes Numeric rocksdb_number_superversion_acquires Numeric rocksdb_number_superversion_cleanups Numeric rocksdb_number_superversion_releases Numeric rocksdb_rate_limit_delay_millis Numeric rocksdb_row_lock_deadlocks Numeric rocksdb_row_lock_wait_timeouts Numeric rocksdb_snapshot_conflict_errors Numeric rocksdb_stall_l0_file_count_limit_slowdowns Numeric rocksdb_stall_locked_l0_file_count_limit_slowdowns Numeric rocksdb_stall_l0_file_count_limit_stops Numeric rocksdb_stall_locked_l0_file_count_limit_stops Numeric 
rocksdb_stall_pending_compaction_limit_stops Numeric rocksdb_stall_pending_compaction_limit_slowdowns Numeric rocksdb_stall_memtable_limit_stops Numeric rocksdb_stall_memtable_limit_slowdowns Numeric rocksdb_stall_total_stops Numeric rocksdb_stall_total_slowdowns Numeric rocksdb_stall_micros Numeric rocksdb_wal_bytes Numeric rocksdb_wal_group_syncs Numeric rocksdb_wal_synced Numeric rocksdb_write_other Numeric rocksdb_write_self Numeric rocksdb_write_timedout Numeric rocksdb_write_wal Numeric"},{"location":"status-variables.html#rocksdb_rows_deleted","title":"rocksdb_rows_deleted","text":"

    This variable shows the number of rows that were deleted from MyRocks tables.

    "},{"location":"status-variables.html#rocksdb_rows_inserted","title":"rocksdb_rows_inserted","text":"

    This variable shows the number of rows that were inserted into MyRocks tables.

    "},{"location":"status-variables.html#rocksdb_rows_read","title":"rocksdb_rows_read","text":"

    This variable shows the number of rows that were read from MyRocks tables.

    "},{"location":"status-variables.html#rocksdb_rows_unfiltered_no_snapshot","title":"rocksdb_rows_unfiltered_no_snapshot","text":"

    This variable shows how many reads need TTL and have no snapshot timestamp.

    "},{"location":"status-variables.html#rocksdb_rows_updated","title":"rocksdb_rows_updated","text":"

    This variable shows the number of rows that were updated in MyRocks tables.

    "},{"location":"status-variables.html#rocksdb_rows_expired","title":"rocksdb_rows_expired","text":"

    This variable shows the number of expired rows in MyRocks tables.

    "},{"location":"status-variables.html#rocksdb_system_rows_deleted","title":"rocksdb_system_rows_deleted","text":"

    This variable shows the number of rows that were deleted from MyRocks system tables.

    "},{"location":"status-variables.html#rocksdb_system_rows_inserted","title":"rocksdb_system_rows_inserted","text":"

    This variable shows the number of rows that were inserted into MyRocks system tables.

    "},{"location":"status-variables.html#rocksdb_system_rows_read","title":"rocksdb_system_rows_read","text":"

    This variable shows the number of rows that were read from MyRocks system tables.

    "},{"location":"status-variables.html#rocksdb_system_rows_updated","title":"rocksdb_system_rows_updated","text":"

    This variable shows the number of rows that were updated in MyRocks system tables.

    "},{"location":"status-variables.html#rocksdb_memtable_total","title":"rocksdb_memtable_total","text":"

    This variable shows the memory usage, in bytes, of all memtables.

    "},{"location":"status-variables.html#rocksdb_memtable_unflushed","title":"rocksdb_memtable_unflushed","text":"

    This variable shows the memory usage, in bytes, of all unflushed memtables.

    "},{"location":"status-variables.html#rocksdb_queries_point","title":"rocksdb_queries_point","text":"

    This variable shows the number of single row queries.

    "},{"location":"status-variables.html#rocksdb_queries_range","title":"rocksdb_queries_range","text":"

    This variable shows the number of multi/range row queries.

    "},{"location":"status-variables.html#rocksdb_covered_secondary_key_lookups","title":"rocksdb_covered_secondary_key_lookups","text":"

    This variable shows the number of lookups via the secondary index that returned all fields requested directly from the secondary index.

    "},{"location":"status-variables.html#rocksdb_additional_compactions_trigger","title":"rocksdb_additional_compactions_trigger","text":"

    This variable shows the number of triggered additional compactions. MyRocks triggers an additional compaction if (number of deletions / number of entries) > (rocksdb_compaction_sequential_deletes / rocksdb_compaction_sequential_deletes_window) in the SST file.

    "},{"location":"status-variables.html#rocksdb_block_cache_add","title":"rocksdb_block_cache_add","text":"

    This variable shows the number of blocks added to block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_add_failures","title":"rocksdb_block_cache_add_failures","text":"

    This variable shows the number of failures when adding blocks to block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_bytes_read","title":"rocksdb_block_cache_bytes_read","text":"

    This variable shows the number of bytes read from cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_bytes_write","title":"rocksdb_block_cache_bytes_write","text":"

    This variable shows the number of bytes written into cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_data_add","title":"rocksdb_block_cache_data_add","text":"

    This variable shows the number of data blocks added to block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_data_bytes_insert","title":"rocksdb_block_cache_data_bytes_insert","text":"

    This variable shows the number of bytes of data blocks inserted into cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_data_hit","title":"rocksdb_block_cache_data_hit","text":"

    This variable shows the number of cache hits when accessing the data block from the block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_data_miss","title":"rocksdb_block_cache_data_miss","text":"

    This variable shows the number of cache misses when accessing the data block from the block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_filter_add","title":"rocksdb_block_cache_filter_add","text":"

    This variable shows the number of filter blocks added to block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_filter_bytes_evict","title":"rocksdb_block_cache_filter_bytes_evict","text":"

    This variable shows the number of bytes of bloom filter blocks removed from cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_filter_bytes_insert","title":"rocksdb_block_cache_filter_bytes_insert","text":"

    This variable shows the number of bytes of bloom filter blocks inserted into cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_filter_hit","title":"rocksdb_block_cache_filter_hit","text":"

    This variable shows the number of cache hits when accessing the filter block from the block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_filter_miss","title":"rocksdb_block_cache_filter_miss","text":"

    This variable shows the number of cache misses when accessing the filter block from the block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_hit","title":"rocksdb_block_cache_hit","text":"

    This variable shows the total number of block cache hits.

    "},{"location":"status-variables.html#rocksdb_block_cache_index_add","title":"rocksdb_block_cache_index_add","text":"

    This variable shows the number of index blocks added to block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_index_bytes_evict","title":"rocksdb_block_cache_index_bytes_evict","text":"

    This variable shows the number of bytes of index block erased from cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_index_bytes_insert","title":"rocksdb_block_cache_index_bytes_insert","text":"

    This variable shows the number of bytes of index blocks inserted into cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_index_hit","title":"rocksdb_block_cache_index_hit","text":"

    This variable shows the total number of block cache index hits.

    "},{"location":"status-variables.html#rocksdb_block_cache_index_miss","title":"rocksdb_block_cache_index_miss","text":"

    This variable shows the number of cache misses when accessing the index block from the block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_miss","title":"rocksdb_block_cache_miss","text":"

    This variable shows the total number of block cache misses.

    "},{"location":"status-variables.html#rocksdb_block_cache_compressed_hit","title":"rocksdb_block_cache_compressed_hit","text":"

    This variable shows the number of hits in the compressed block cache.

    "},{"location":"status-variables.html#rocksdb_block_cache_compressed_miss","title":"rocksdb_block_cache_compressed_miss","text":"

    This variable shows the number of misses in the compressed block cache.

    "},{"location":"status-variables.html#rocksdb_bloom_filter_prefix_checked","title":"rocksdb_bloom_filter_prefix_checked","text":"

    This variable shows the number of times the bloom filter was checked before creating an iterator on a file.

    "},{"location":"status-variables.html#rocksdb_bloom_filter_prefix_useful","title":"rocksdb_bloom_filter_prefix_useful","text":"

    This variable shows the number of times the check was useful in avoiding iterator creation (and thus likely IOPs).

    "},{"location":"status-variables.html#rocksdb_bloom_filter_useful","title":"rocksdb_bloom_filter_useful","text":"

    This variable shows the number of times bloom filter has avoided file reads.

    "},{"location":"status-variables.html#rocksdb_bytes_read","title":"rocksdb_bytes_read","text":"

    This variable shows the total number of uncompressed bytes read. It could be either from memtables, cache, or table files.

    "},{"location":"status-variables.html#rocksdb_bytes_written","title":"rocksdb_bytes_written","text":"

    This variable shows the total number of uncompressed bytes written.

    "},{"location":"status-variables.html#rocksdb_compact_read_bytes","title":"rocksdb_compact_read_bytes","text":"

    This variable shows the number of bytes read during compaction.

    "},{"location":"status-variables.html#rocksdb_compact_write_bytes","title":"rocksdb_compact_write_bytes","text":"

    This variable shows the number of bytes written during compaction.

    "},{"location":"status-variables.html#rocksdb_compaction_key_drop_new","title":"rocksdb_compaction_key_drop_new","text":"

    This variable shows the number of key drops during compaction because the key was overwritten with a newer value.

    "},{"location":"status-variables.html#rocksdb_compaction_key_drop_obsolete","title":"rocksdb_compaction_key_drop_obsolete","text":"

    This variable shows the number of key drops during compaction because the key was obsolete.

    "},{"location":"status-variables.html#rocksdb_compaction_key_drop_user","title":"rocksdb_compaction_key_drop_user","text":"

    This variable shows the number of key drops during compaction because a user compaction function dropped the key.

    "},{"location":"status-variables.html#rocksdb_flush_write_bytes","title":"rocksdb_flush_write_bytes","text":"

    This variable shows the number of bytes written during flush.

    "},{"location":"status-variables.html#rocksdb_get_hit_l0","title":"rocksdb_get_hit_l0","text":"

    This variable shows the number of Get() queries served by L0.

    "},{"location":"status-variables.html#rocksdb_get_hit_l1","title":"rocksdb_get_hit_l1","text":"

    This variable shows the number of Get() queries served by L1.

    "},{"location":"status-variables.html#rocksdb_get_hit_l2_and_up","title":"rocksdb_get_hit_l2_and_up","text":"

    This variable shows the number of Get() queries served by L2 and up.

    "},{"location":"status-variables.html#rocksdb_get_updates_since_calls","title":"rocksdb_get_updates_since_calls","text":"

    This variable shows the number of calls to the GetUpdatesSince function. It is useful for keeping track of transaction log iterator refreshes.

    "},{"location":"status-variables.html#rocksdb_iter_bytes_read","title":"rocksdb_iter_bytes_read","text":"

    This variable shows the number of uncompressed bytes read from an iterator. It includes the size of the key and the value.

    "},{"location":"status-variables.html#rocksdb_memtable_hit","title":"rocksdb_memtable_hit","text":"

    This variable shows the number of memtable hits.

    "},{"location":"status-variables.html#rocksdb_memtable_miss","title":"rocksdb_memtable_miss","text":"

    This variable shows the number of memtable misses.

    "},{"location":"status-variables.html#rocksdb_no_file_closes","title":"rocksdb_no_file_closes","text":"

    This variable shows the number of times files were closed.

    "},{"location":"status-variables.html#rocksdb_no_file_errors","title":"rocksdb_no_file_errors","text":"

    This variable shows the number of errors encountered while trying to read data from an SST file.

    "},{"location":"status-variables.html#rocksdb_no_file_opens","title":"rocksdb_no_file_opens","text":"

    This variable shows the number of times files were opened.

    "},{"location":"status-variables.html#rocksdb_num_iterators","title":"rocksdb_num_iterators","text":"

    This variable shows the number of currently open iterators.

    "},{"location":"status-variables.html#rocksdb_number_block_not_compressed","title":"rocksdb_number_block_not_compressed","text":"

    This variable shows the number of uncompressed blocks.

    "},{"location":"status-variables.html#rocksdb_number_db_next","title":"rocksdb_number_db_next","text":"

    This variable shows the number of calls to next.

    "},{"location":"status-variables.html#rocksdb_number_db_next_found","title":"rocksdb_number_db_next_found","text":"

    This variable shows the number of calls to next that returned data.

    "},{"location":"status-variables.html#rocksdb_number_db_prev","title":"rocksdb_number_db_prev","text":"

    This variable shows the number of calls to prev.

    "},{"location":"status-variables.html#rocksdb_number_db_prev_found","title":"rocksdb_number_db_prev_found","text":"

    This variable shows the number of calls to prev that returned data.

    "},{"location":"status-variables.html#rocksdb_number_db_seek","title":"rocksdb_number_db_seek","text":"

    This variable shows the number of calls to seek.

    "},{"location":"status-variables.html#rocksdb_number_db_seek_found","title":"rocksdb_number_db_seek_found","text":"

    This variable shows the number of calls to seek that returned data.

    "},{"location":"status-variables.html#rocksdb_number_deletes_filtered","title":"rocksdb_number_deletes_filtered","text":"

    This variable shows the number of deleted records that did not need to be written to storage because the key did not exist.

    "},{"location":"status-variables.html#rocksdb_number_keys_read","title":"rocksdb_number_keys_read","text":"

    This variable shows the number of keys read.

    "},{"location":"status-variables.html#rocksdb_number_keys_updated","title":"rocksdb_number_keys_updated","text":"

    This variable shows the number of keys updated, if inplace update is enabled.

    "},{"location":"status-variables.html#rocksdb_number_keys_written","title":"rocksdb_number_keys_written","text":"

    This variable shows the number of keys written to the database.

    "},{"location":"status-variables.html#rocksdb_number_merge_failures","title":"rocksdb_number_merge_failures","text":"

    This variable shows the number of failures performing merge operator actions in RocksDB.

    "},{"location":"status-variables.html#rocksdb_number_multiget_bytes_read","title":"rocksdb_number_multiget_bytes_read","text":"

    This variable shows the number of bytes read during RocksDB MultiGet() calls.

    "},{"location":"status-variables.html#rocksdb_number_multiget_get","title":"rocksdb_number_multiget_get","text":"

    This variable shows the number of MultiGet() requests to RocksDB.

    "},{"location":"status-variables.html#rocksdb_number_multiget_keys_read","title":"rocksdb_number_multiget_keys_read","text":"

    This variable shows the number of keys read via MultiGet().

    "},{"location":"status-variables.html#rocksdb_number_reseeks_iteration","title":"rocksdb_number_reseeks_iteration","text":"

    This variable shows the number of times a reseek happened inside an iteration to skip over a large number of keys with the same userkey.

    "},{"location":"status-variables.html#rocksdb_number_sst_entry_delete","title":"rocksdb_number_sst_entry_delete","text":"

    This variable shows the total number of delete markers written by MyRocks.

    "},{"location":"status-variables.html#rocksdb_number_sst_entry_merge","title":"rocksdb_number_sst_entry_merge","text":"

    This variable shows the total number of merge keys written by MyRocks.

    "},{"location":"status-variables.html#rocksdb_number_sst_entry_other","title":"rocksdb_number_sst_entry_other","text":"

    This variable shows the total number of non-delete, non-merge, non-put keys written by MyRocks.

    "},{"location":"status-variables.html#rocksdb_number_sst_entry_put","title":"rocksdb_number_sst_entry_put","text":"

    This variable shows the total number of put keys written by MyRocks.

    "},{"location":"status-variables.html#rocksdb_number_sst_entry_singledelete","title":"rocksdb_number_sst_entry_singledelete","text":"

    This variable shows the total number of single delete keys written by MyRocks.

    "},{"location":"status-variables.html#rocksdb_number_stat_computes","title":"rocksdb_number_stat_computes","text":"

    This variable isn\u2019t used anymore and will be removed in future releases.

    "},{"location":"status-variables.html#rocksdb_number_superversion_acquires","title":"rocksdb_number_superversion_acquires","text":"

    This variable shows the number of times the superversion structure has been acquired in RocksDB. This structure is used for tracking all of the files for the database.

    "},{"location":"status-variables.html#rocksdb_number_superversion_cleanups","title":"rocksdb_number_superversion_cleanups","text":""},{"location":"status-variables.html#rocksdb_number_superversion_releases","title":"rocksdb_number_superversion_releases","text":""},{"location":"status-variables.html#rocksdb_rate_limit_delay_millis","title":"rocksdb_rate_limit_delay_millis","text":"

    This variable was removed in Percona Server for MySQL 5.7.23-23.

    "},{"location":"status-variables.html#rocksdb_row_lock_deadlocks","title":"rocksdb_row_lock_deadlocks","text":"

    This variable shows the total number of deadlocks that have been detected since the instance was started.

    "},{"location":"status-variables.html#rocksdb_row_lock_wait_timeouts","title":"rocksdb_row_lock_wait_timeouts","text":"

    This variable shows the total number of row lock wait timeouts that have been detected since the instance was started.

    "},{"location":"status-variables.html#rocksdb_snapshot_conflict_errors","title":"rocksdb_snapshot_conflict_errors","text":"

    This variable shows the number of snapshot conflict errors occurring during write transactions that force the transaction to roll back.

    "},{"location":"status-variables.html#rocksdb_stall_l0_file_count_limit_slowdowns","title":"rocksdb_stall_l0_file_count_limit_slowdowns","text":"

    This variable shows the slowdowns in write due to L0 being close to full.

    "},{"location":"status-variables.html#rocksdb_stall_locked_l0_file_count_limit_slowdowns","title":"rocksdb_stall_locked_l0_file_count_limit_slowdowns","text":"

    This variable shows the slowdowns in write due to L0 being close to full and compaction for L0 is already in progress.

    "},{"location":"status-variables.html#rocksdb_stall_l0_file_count_limit_stops","title":"rocksdb_stall_l0_file_count_limit_stops","text":"

    This variable shows the stalls in write due to L0 being full.

    "},{"location":"status-variables.html#rocksdb_stall_locked_l0_file_count_limit_stops","title":"rocksdb_stall_locked_l0_file_count_limit_stops","text":"

    This variable shows the stalls in write due to L0 being full and compaction for L0 is already in progress.

    "},{"location":"status-variables.html#rocksdb_stall_pending_compaction_limit_stops","title":"rocksdb_stall_pending_compaction_limit_stops","text":"

    This variable shows the stalls in write due to hitting limits set for max number of pending compaction bytes.

    "},{"location":"status-variables.html#rocksdb_stall_pending_compaction_limit_slowdowns","title":"rocksdb_stall_pending_compaction_limit_slowdowns","text":"

    This variable shows the slowdowns in write due to getting close to limits set for max number of pending compaction bytes.

    "},{"location":"status-variables.html#rocksdb_stall_memtable_limit_stops","title":"rocksdb_stall_memtable_limit_stops","text":"

    This variable shows the stalls in write due to hitting the maximum number of memtables allowed.

    "},{"location":"status-variables.html#rocksdb_stall_memtable_limit_slowdowns","title":"rocksdb_stall_memtable_limit_slowdowns","text":"

    This variable shows the slowdowns in writes due to getting close to max number of memtables allowed.

    "},{"location":"status-variables.html#rocksdb_stall_total_stops","title":"rocksdb_stall_total_stops","text":"

    This variable shows the total number of write stalls.

    "},{"location":"status-variables.html#rocksdb_stall_total_slowdowns","title":"rocksdb_stall_total_slowdowns","text":"

    This variable shows the total number of write slowdowns.

    "},{"location":"status-variables.html#rocksdb_stall_micros","title":"rocksdb_stall_micros","text":"

    This variable shows how long (in microseconds) the writer had to wait for compaction or flush to finish.

    "},{"location":"status-variables.html#rocksdb_wal_bytes","title":"rocksdb_wal_bytes","text":"

    This variable shows the number of bytes written to the WAL.

    "},{"location":"status-variables.html#rocksdb_wal_group_syncs","title":"rocksdb_wal_group_syncs","text":"

    This variable shows the number of group commit WAL file syncs that have occurred.

    "},{"location":"status-variables.html#rocksdb_wal_synced","title":"rocksdb_wal_synced","text":"

    This variable shows the number of times WAL sync was done.

    "},{"location":"status-variables.html#rocksdb_write_other","title":"rocksdb_write_other","text":"

    This variable shows the number of writes processed by another thread.

    "},{"location":"status-variables.html#rocksdb_write_self","title":"rocksdb_write_self","text":"

    This variable shows the number of writes that were processed by a requesting thread.

    "},{"location":"status-variables.html#rocksdb_write_timedout","title":"rocksdb_write_timedout","text":"

    This variable shows the number of writes that timed out.

    "},{"location":"status-variables.html#rocksdb_write_wal","title":"rocksdb_write_wal","text":"

    This variable shows the number of Write calls that request WAL.

    "},{"location":"telemetry.html","title":"Telemetry and data collection","text":"

    Percona has the following types of telemetry:

    • Installation-time telemetry

    • Continuous telemetry

    By understanding these types of telemetry systems and their respective features, you can effectively implement and manage them to gather valuable insights and improve your systems and software.

    You control whether to share this information. The program is optional. You can disable either or both telemetry systems if you don\u2019t want to share anonymous data.

    Percona protects your privacy. We don't gather personal information. All collected data is anonymous, preventing the identification of individual users or servers. Our Percona Privacy policy provides more details on data handling.

    Percona includes the telemetry systems only in software packages, compressed archives (tarballs), and Docker images.

    "},{"location":"telemetry.html#why-telemetry-matters","title":"Why telemetry matters","text":"

    Telemetry in Percona Server for MySQL has the following qualities:

    • See How People Use Your Software: Telemetry collects anonymous data on how users interact with our software. This tells developers which features are popular, which ones are confusing, and if anything is causing crashes.

    • Identify Issues Early: Telemetry can catch bugs or performance problems before they become widespread.

    Benefits for Users in the Long Run:

| Advantages | Description |
|---|---|
| Faster Bug Fixes | With telemetry data, developers can pinpoint issues affecting specific users and prioritize fixing them quickly. |
| Better Features | Telemetry helps developers understand user needs and preferences. This allows them to focus on features that will be genuinely useful and improve your overall experience. |
| A More Stable Experience | By identifying and resolving issues early, telemetry helps create a more stable and reliable software experience for everyone. |

"},{"location":"telemetry.html#installation-time-telemetry","title":"Installation-time telemetry","text":"

This telemetry runs only once, during software installation or Docker container startup. It collects data at the moment of installation, such as system configuration, hardware specifications, software version, and environment details. After the installation is completed, this telemetry process does not run again or collect additional data.

    This telemetry helps us to gain insights into the initial setup to tailor future updates or support.

    "},{"location":"telemetry.html#installation-time-telemetry-file-example","title":"Installation-time telemetry file example","text":"

    An example of the data collected is the following:

    [{\"id\" : \"c416c3ee-48cd-471c-9733-37c2886f8231\",\n\"product_family\" : \"PRODUCT_FAMILY_PS\",\n\"instanceId\" : \"6aef422e-56a7-4530-af9d-94cc02198343\",\n\"createTime\" : \"2023-10-16T10:46:23Z\",\n\"metrics\":\n[{\"key\" : \"deployment\",\"value\" : \"PACKAGE\"},\n{\"key\" : \"pillar_version\",\"value\" : \"8.0.35-27\"},\n{\"key\" : \"OS\",\"value\" : \"Oracle Linux Server 8.8\"},\n{\"key\" : \"hardware_arch\",\"value\" : \"x86_64 x86_64\"}]}]\n
    "},{"location":"telemetry.html#disable-installation-telemetry","title":"Disable installation telemetry","text":"

    This telemetry feature is enabled by default.

    You can disable this telemetry if you decide not to send installation data to Percona. Set the PERCONA_TELEMETRY_DISABLE=1 environment variable for either the root user or the operating system before installing.

    These actions do not affect the continuous telemetry system.

On a Debian-derived distribution, add the environment variable before the installation process:

$ sudo PERCONA_TELEMETRY_DISABLE=1 apt install percona-server-server\n

On a Red Hat-derived distribution, add the environment variable before the installation process:

$ sudo PERCONA_TELEMETRY_DISABLE=1 yum install percona-server-server\n

On Docker, add the environment variable when running a command in a new container:

$ docker run -d -e MYSQL_ROOT_PASSWORD=test1234# -e PERCONA_TELEMETRY_DISABLE=1 --name=percona-server percona/percona-server:8.1\n
    "},{"location":"telemetry.html#continuous-telemetry","title":"Continuous telemetry","text":"

    This telemetry system involves setting up a telemetry agent and a database (DB) component. It continuously collects and sends information daily.

    The telemetry agent runs at scheduled daily intervals to collect data. The agent gathers data (for example, usage statistics) and sends this information to the Percona platform.

The continuous telemetry system requires a telemetry agent and a specific folder to write data when installed from compressed archives (tarballs). If these elements are available, telemetry collects and sends the data.

    "},{"location":"telemetry.html#elements-of-the-continuous-telemetry-system","title":"Elements of the continuous telemetry system","text":"

    Percona collects information using these elements:

| Function | Description |
|---|---|
| Percona Telemetry DB component | This component collects metrics directly from the database and stores them in a metrics file. |
| Metrics File | This standalone file on the database host's file system stores the collected metrics. |
| Telemetry Agent | This independent process runs on your database host's operating system and performs the following tasks: collects OS-level metrics; reads the metrics file and adds the OS-level metrics; sends the complete set of metrics to the Percona Platform; collects the list of installed Percona packages using the local package manager. |

    Telemetry uses the Percona Platform with these components:

| Functions | Description |
|---|---|
| Telemetry Service | This service offers an API endpoint for sending telemetry data. It handles incoming requests and saves the data into telemetry storage. |
| Telemetry Storage | This component stores all telemetry data for the long term. |

"},{"location":"telemetry.html#overview-of-the-db-component","title":"Overview of the DB component","text":"

Percona Server for MySQL includes a DB component by default during installation. This component extends the database's functionality in one of these ways, depending on the specific database:

    • Modifying the source code directly

    • Adding modular components

    • Adding self-contained extensions

    The DB component has the following qualities:

    • Collects metrics from the database daily

    • Writes these metrics to a new JSON file on the local file system, named with a timestamp and the .json extension

    • Stores only the most recent week\u2019s data by deleting older Metrics files before creating a new one.

    The DB component does NOT collect the following:

    • Database names

    • User names or credentials

    • Data entered by users

    "},{"location":"telemetry.html#locations-of-metrics-files-and-telemetry-history","title":"Locations of metrics files and telemetry history","text":"

    Percona stores the Metrics file in one of the following directories on the local file system. The location depends on the product.

    • Telemetry root path - /usr/local/percona/telemetry

    • PSMDB (mongod) root path - ${telemetry root path}/psmdb/

    • PSMDB (mongos) root path - ${telemetry root path}/psmdbs/

    • PS root path - ${telemetry root path}/ps/

    • PXC root path - ${telemetry root path}/pxc/

    • PG root path - ${telemetry root path}/pg/

    Percona archives the telemetry history in ${telemetry root path}/history/.
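
For example, to see which metrics the DB component has collected for Percona Server for MySQL, you can list and print the files in the PS root path (a minimal sketch; the timestamped file names will differ on your host):

$ ls /usr/local/percona/telemetry/ps/\n$ cat /usr/local/percona/telemetry/ps/*.json\n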

    "},{"location":"telemetry.html#metrics-file-format","title":"Metrics file format","text":"

The Metrics file uses the JavaScript Object Notation (JSON) format. Percona reserves the right to extend the current set of JSON structure attributes in the future.

    The following is an example of the Metrics file content:

    {\n  \"db_instance_id\": \"e83c568c-e140-11ee-8320-7e207666b18a\",\n  \"pillar_version\": \"8.0.35-27\",\n  \"active_plugins\": [\n    \"binlog\",\n    \"mysql_native_password\",\n    \"sha256_password\",\n    \"caching_sha2_password\",\n    \"sha2_cache_cleaner\",\n    \"daemon_keyring_proxy_plugin\",\n    \"PERFORMANCE_SCHEMA\",\n    \"CSV\",\n    \"MEMORY\",\n    \"InnoDB\",\n    \"INNODB_TRX\",\n    \"INNODB_CMP\",\n    \"INNODB_CMP_RESET\",\n    \"INNODB_CMPMEM\",\n    \"INNODB_CMPMEM_RESET\",\n    \"INNODB_CMP_PER_INDEX\",\n    \"INNODB_CMP_PER_INDEX_RESET\",\n    \"INNODB_BUFFER_PAGE\",\n    \"INNODB_BUFFER_PAGE_LRU\",\n    \"INNODB_BUFFER_POOL_STATS\",\n    \"INNODB_TEMP_TABLE_INFO\",\n    \"INNODB_METRICS\",\n    \"INNODB_FT_DEFAULT_STOPWORD\",\n    \"INNODB_FT_DELETED\",\n    \"INNODB_FT_BEING_DELETED\",\n    \"INNODB_FT_CONFIG\",\n    \"INNODB_FT_INDEX_CACHE\",\n    \"INNODB_FT_INDEX_TABLE\",\n    \"INNODB_TABLES\",\n    \"INNODB_TABLESTATS\",\n    \"INNODB_INDEXES\",\n    \"INNODB_TABLESPACES\",\n    \"INNODB_COLUMNS\",\n    \"INNODB_VIRTUAL\",\n    \"INNODB_CACHED_INDEXES\",\n    \"INNODB_SESSION_TEMP_TABLESPACES\",\n    \"MyISAM\",\n    \"MRG_MYISAM\",\n    \"TempTable\",\n    \"ARCHIVE\",\n    \"BLACKHOLE\",\n    \"ngram\",\n    \"mysqlx_cache_cleaner\",\n    \"mysqlx\",\n    \"ROCKSDB\",\n    \"rpl_semi_sync_source\",\n    \"ROCKSDB_CFSTATS\",\n    \"ROCKSDB_DBSTATS\",\n    \"ROCKSDB_PERF_CONTEXT\",\n    \"ROCKSDB_PERF_CONTEXT_GLOBAL\",\n    \"ROCKSDB_CF_OPTIONS\",\n    \"ROCKSDB_GLOBAL_INFO\",\n    \"ROCKSDB_COMPACTION_HISTORY\",\n    \"ROCKSDB_COMPACTION_STATS\",\n    \"ROCKSDB_ACTIVE_COMPACTION_STATS\",\n    \"ROCKSDB_DDL\",\n    \"ROCKSDB_INDEX_FILE_MAP\",\n    \"ROCKSDB_LOCKS\",\n    \"ROCKSDB_TRX\",\n    \"ROCKSDB_DEADLOCK\"\n  ],\n  \"active_components\": [\n    \"file://component_percona_telemetry\"\n  ],\n  \"uptime\": \"6185\",\n  \"databases_count\": \"7\",\n  \"databases_size\": \"33149\",\n  \"se_engines_in_use\": [\n    \"InnoDB\",\n    \"ROCKSDB\"\n  ],\n  \"replication_info\": {\n    \"is_semisync_source\": \"1\",\n    \"is_replica\": \"1\"\n  }\n}\n
    "},{"location":"telemetry.html#percona-telemetry-agent","title":"Percona telemetry agent","text":"

    This program, called percona-telemetry-agent, constantly runs in the background on your server\u2019s operating system. It manages JSON files, which store the collected data in a specific location (${telemetry root path}). This agent can create, read, write, and delete these files.

    The agent\u2019s log file, containing information about its activity, is located at /var/log/percona/telemetry-agent.log.

In the first 24 hours, no information is collected or sent. After that period, the agent tries to send the collected information to Percona's servers (Percona Platform) daily. If this operation fails, the agent retries up to five times. After the data is successfully sent, the agent saves a copy of the sent data in a separate "history" folder and then deletes the original file created by the database.

    The agent won\u2019t send any data if the target directory doesn\u2019t contain specific files related to Percona software.

    "},{"location":"telemetry.html#telemetry-agent-payload-example","title":"Telemetry agent payload example","text":"

    The following is an example of a telemetry agent payload:

    {\n  \"reports\": [\n    {\n      \"id\": \"B5BDC47B-B717-4EF5-AEDF-41A17C9C18BB\",\n      \"createTime\": \"2023-09-01T10:56:49Z\",\n      \"instanceId\": \"B5BDC47B-B717-4EF5-AEDF-41A17C9C18BA\",\n      \"productFamily\": \"PRODUCT_FAMILY_PS\",\n      \"metrics\": [\n        {\n          \"key\": \"OS\",\n          \"value\": \"Ubuntu\"\n        },\n        {\n          \"key\": \"pillar_version\",\n          \"value\": \"8.0.33-25\"\n        }\n      ]\n    }\n  ]\n}\n

    The agent sends information about the database and metrics.

| Key | Description |
|---|---|
| id | A randomly generated Universally Unique Identifier (UUID) version 4 for the request |
| createTime | UNIX timestamp |
| instanceId | The DB host ID. The value can be taken from the instanceId field of /usr/local/percona/telemetry_uuid, or generated as a UUID version 4 if the file is absent. |
| productFamily | The value from the file path |
| metrics | An array of key:value pairs collected from the Metrics file |

    The following operating system-level metrics are sent with each check:

| Key | Description |
|---|---|
| OS | The name of the operating system |
| hardware_arch | CPU architecture used on the DB host |
| deployment | How the application was deployed. The possible values are "PACKAGE" or "DOCKER". |
| installed_packages | A list of the installed Percona packages |

    The information includes the following:

    • Package name

    • Package version - the same format as Red Hat Enterprise Linux or Debian

    • Package repository - if possible

The package names must fit one of the following patterns:

    • percona-*

    • Percona-*

    • proxysql*

    • pmm

    • etcd*

    • haproxy

    • patroni

    • pg*

    • postgis

    • wal2json

    "},{"location":"telemetry.html#disable-continuous-telemetry","title":"Disable continuous telemetry","text":"

    Percona software enables the continuous telemetry system by default. Disable the Telemetry agent and uninstall the DB component to turn off this telemetry completely.

    These actions do not affect Installation-time telemetry.

    "},{"location":"telemetry.html#disable-the-telemetry-agent","title":"Disable the telemetry agent","text":"

You can disable the telemetry agent either temporarily or permanently.

To disable temporarily, turn off telemetry until the next server restart:

$ systemctl stop percona-telemetry-agent\n

To disable permanently:

$ systemctl disable percona-telemetry-agent\n
    "},{"location":"telemetry.html#telemetry-agent-dependencies-and-removal-considerations","title":"Telemetry agent dependencies and removal considerations","text":"

Installing a Linux package also installs its mandatory dependencies. The Telemetry agent is a mandatory dependency for Percona Server for MySQL packages; because of this dependency, removing the agent also removes the database package.

    On YUM-based systems, the system removes the Telemetry agent package when you remove the last dependency package.

    On APT-based systems, you must use the --autoremove option to remove all dependencies, as the system doesn\u2019t automatically remove the Telemetry agent when you remove the database package.

    The --autoremove option only removes unnecessary dependencies. It doesn\u2019t remove dependencies required by other packages or guarantee the removal of all package-associated dependencies.

    "},{"location":"telemetry.html#disable-db-component","title":"Disable DB component","text":"

The DB component continues to generate daily telemetry files even after you stop the telemetry agent service. These files are kept for seven days and then deleted.

    Uninstall the component on the server without restarting the database server:

    mysql> UNINSTALL COMPONENT \"file://component_percona_telemetry\";\n

    Restarting the database server after uninstalling the component can reactivate the telemetry component. This action happens because the server reloads settings during restart, including any instructions for telemetry.

    To prevent this reactivation, edit the my.cnf configuration file. Add this line:

    [mysqld]\n\npercona_telemetry_disable=1\n

    Restart the server after editing the configuration file. This setting ensures that the telemetry remains disabled even after a server restart.
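
To confirm that the component is no longer registered after the restart, you can query the server's component registry (a sketch using the standard mysql.component table):

mysql> SELECT * FROM mysql.component;\n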

    "},{"location":"thread-based-profiling.html","title":"Thread based profiling","text":"

Percona Server for MySQL now uses thread-based profiling by default, instead of process-based profiling. With process-based profiling, threads on the server other than the one being profiled can affect the profiling information.

    "},{"location":"thread-based-profiling.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.

Thread-based profiling uses the information provided by the kernel getrusage() function. Since kernel version 2.6.26, thread-based resource usage is available through the RUSAGE_THREAD flag. This means thread-based profiling is used if you're running kernel 2.6.26 or newer, or if RUSAGE_THREAD has been backported.

This feature is enabled by default if your system supports it; otherwise, the server falls back to process-based profiling.
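
To check whether your kernel is recent enough for per-thread accounting, inspect its version (the version string in the output is illustrative):

$ uname -r\n5.14.0-362.el9.x86_64\n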

    "},{"location":"threadpool.html","title":"Thread pool","text":"

    Thread pooling can improve performance and scalability for MySQL databases. This technique reuses a fixed number of threads to handle multiple client connections and execute statements. It reduces the overhead of creating and destroying threads and avoids the contention and context switching that can occur when there are too many threads.

    If you have fewer than 20,000 connections, using the thread pool does not provide significant benefits. It\u2019s better to keep thread pooling disabled and use the default method.

    The default method, called one-thread-per-connection, creates a new thread for each client that connects to the MySQL server. This thread manages all queries and responses for that connection until it\u2019s closed. This approach works well for a moderate number of connections, but it can become inefficient as the number of connections increases.

MySQL supports thread pooling through the thread pool plugin, which replaces the default one-thread-per-connection model. The thread pool consists of several thread groups, each managing a set of client connections. Each thread group has a listener thread that listens for incoming statements from the connections assigned to the group. When a statement arrives, the thread group either begins executing it immediately or queues it for later execution. The thread pool exposes several system variables that can be used to configure its operation, such as thread_pool_size, thread_pool_algorithm, thread_pool_stall_limit, and others.

The thread pool plugin consists of several thread groups, each of which manages a set of client connections. As connections are established, the thread pool assigns them to thread groups using the round-robin method. This method distributes connections fairly and efficiently. Here's how it works:

1. The thread pool starts with a set number of thread groups.

2. When a new task arrives, the pool needs to assign it to a group.

3. It does this by going through the groups in order, one by one. For example, with four thread groups the assignment works like this:

• Task 1 goes to Group 1

• Task 2 goes to Group 2

• Task 3 goes to Group 3

• Task 4 goes to Group 4

• Task 5 goes back to Group 1

4. This pattern continues, always moving to the next group and starting over when it reaches the end.

5. Each group handles its assigned tasks using its available threads.

    This round-robin approach spreads work evenly across all groups. It prevents any single group from getting overloaded while others sit idle. This method helps maintain balanced performance across the system.

MySQL executes statements using one thread per client connection. When the number of connections increases past a specific point, performance degrades. This feature introduces a dynamic thread pool, which enables the server to maintain top performance even with a large number of client connections. By using the thread pool, the server decreases the number of threads, which reduces context switching and hot lock contention. The thread pool is most effective with OLTP workloads (relatively short CPU-bound queries).

    Set the thread pool variable thread_handling to pool-of-threads by adding the following line to my.cnf:

    thread_handling=pool-of-threads\n

Although the default values for the thread pool should provide good performance, you can perform additional tuning with the dynamic system variables. The goal is to minimize the number of open transactions on the server. Short-running transactions commit faster and deallocate server resources and locks.
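
For example, a starting configuration in my.cnf might look like this (the values are illustrative only; thread_pool_size defaults to the number of processors, and thread_pool_stall_limit is not dynamic, so it belongs in the configuration file):

[mysqld]\nthread_handling=pool-of-threads\nthread_pool_size=16\nthread_pool_oversubscribe=3\nthread_pool_stall_limit=500\n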

Due to the following differences, this implementation is not compatible with upstream:

• The thread pool is built into the server; upstream implements it as a plugin

• It does not minimize the number of concurrent transactions

    Priority Queue:

    A queue that assigns a priority to each data element and processes them according to their priority. The data element with the highest priority is served first, regardless of its order in the queue. A priority queue can be implemented using an array, a linked list, a heap, or a binary search tree. It can also be ascending or descending, meaning that the highest priority is either the smallest or the largest value.

    "},{"location":"threadpool.html#version-specific-information","title":"Version specific information","text":"

    Starting with 8.0.14, Percona Server for MySQL uses the upstream implementation of the admin_port. The variables extra_port and extra_max_connections are removed and not supported. Remove the extra_port and extra_max_connections variables from your configuration file before upgrading to 8.0.14 or higher. In 8.0.14 or higher, the variables cause a boot error, and the server refuses to start.

    Implemented in 8.0.12-1: We ported the Thread Pool feature from Percona Server for MySQL 5.7.

    "},{"location":"threadpool.html#priority-connection-scheduling","title":"Priority connection scheduling","text":"

    The thread pool limits the number of concurrently running queries. The number of open transactions may remain high. Connections with already-started transactions are added to the end of the queue. A high number of open transactions has implications for the currently running queries. The thread_pool_high_prio_tickets variable controls the high-priority queue policy and assigns tickets to each new connection.

    The thread pool adds the connection to the high-priority queue and decrements the ticket if the connection has the following attributes:

    • Has an open transaction

    • Has a non-zero number of high-priority tickets

Otherwise, the thread pool adds the connection to the low-priority queue and resets its ticket count to the initial value.

Each time it picks the next connection to process, the thread pool checks the high-priority queue first. When the high-priority queue is empty, the thread pool picks connections from the low-priority queue. The default behavior is to put events from already-started transactions into the high-priority queue.

If the value equals 0, all connections are put into the low-priority queue. If the value exceeds zero, each connection can be put into the high-priority queue.

The thread_pool_high_prio_mode variable provides finer control: it can prioritize all statements for a connection or assign all of a connection's statements to the low-priority queue. This variable is described below.

    "},{"location":"threadpool.html#low-priority-queue-throttling","title":"Low-priority queue throttling","text":"

One case that can limit thread pool performance, and even lead to deadlocks under high concurrency, is when thread groups are oversubscribed because active threads have reached the oversubscribe limit, yet all or most worker threads are waiting on locks currently held by a transaction from another connection that is not currently in the thread pool.

In this case, the oversubscribe limit does not account for those threads in the pool that have marked themselves inactive. As a result, the number of threads (both active and waiting) in the pool grows until it hits the thread_pool_max_threads value. If the connection executing the transaction holding the lock has managed to enter the thread pool by then, we get a large (depending on the thread_pool_max_threads value) number of concurrently running threads and, thus, suboptimal performance. Otherwise, we get a deadlock because no more threads can be created to process those transactions and release the locks.

Such situations are prevented by throttling the low-priority queue when the total number of worker threads (both active and waiting ones) reaches the oversubscribe limit. In that case, the pool does not start new transactions; it only creates new threads to process queued events from already-started transactions.

    "},{"location":"threadpool.html#handling-long-network-waits","title":"Handling long network waits","text":"

Specific workloads (large result sets, BLOBs, slow clients) can wait longer on network I/O (socket reads and writes). Whenever the server waits on network I/O, it notifies the thread pool so the pool can start a new query by waking a waiting thread or, sometimes, creating a new one. This implementation has been ported from MariaDB patch MDEV-156.

    "},{"location":"threadpool.html#system-variables","title":"System variables","text":""},{"location":"threadpool.html#thread_handling","title":"thread_handling","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type String Default one-thread-per-connection

    This variable defines how the server handles threads for connections from the client.

| Values | Description |
|---|---|
| one-thread-per-connection | One thread handles all requests for a connection |
| pool-of-threads | A thread pool handles requests for all connections |
| no-threads | A single thread for all connections (debugging mode) |

"},{"location":"threadpool.html#thread_pool_idle_timeout","title":"thread_pool_idle_timeout","text":"

| Option | Description |
|---|---|
| Command-line | Yes |
| Config file | Yes |
| Scope | Global |
| Dynamic | Yes |
| Data type | Numeric |
| Default value | 60 (seconds) |

    This variable can limit the time an idle thread should wait before exiting.

    "},{"location":"threadpool.html#thread_pool_high_prio_mode","title":"thread_pool_high_prio_mode","text":"

    This variable provides more fine-grained control over high-priority scheduling globally or per connection.

    The following values are allowed:

    • transactions (the default). In this mode, only statements from already started transactions may go into the high-priority queue depending on the number of high-priority tickets currently available in a connection (see thread_pool_high_prio_tickets).

    • statements. In this mode, all individual statements go into the high-priority queue, regardless of the transactional state and the number of available high-priority tickets. Use this value to prioritize AUTOCOMMIT transactions or other statements, such as administrative ones. Setting this value globally essentially disables high-priority scheduling. All connections use the high-priority queue.

    • none. This mode disables the priority queue for a connection. Certain types of connections, such as monitoring, are insensitive to execution latency and do not allocate the server resources that would impact the performance of other connections. These types of connections do not require high-priority scheduling. Setting this value globally essentially disables high-priority scheduling. All connections use the low-priority queue.
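
For example, to exempt a monitoring connection from priority scheduling, you might set the mode for that session only (a sketch; the variable also accepts a global assignment):

mysql> SET thread_pool_high_prio_mode = 'none';\n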

    "},{"location":"threadpool.html#thread_pool_high_prio_tickets","title":"thread_pool_high_prio_tickets","text":"Option Description Command-line: Yes Config file: Yes Scope: Global, Session Dynamic: Yes Data type: Numeric Default value: 4294967295

This variable controls the high-priority queue policy. It assigns the selected number of tickets to each new connection for entering the high-priority queue. Setting this variable to 0 disables the high-priority queue.
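
For example, to disable the high-priority queue globally (illustrative):

mysql> SET GLOBAL thread_pool_high_prio_tickets = 0;\n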

    "},{"location":"threadpool.html#thread_pool_max_threads","title":"thread_pool_max_threads","text":"Option Description Command-line: Yes Config file: Yes Scope: Global Dynamic: Yes Data type: Numeric Default value: 100000

    This variable can limit the maximum number of threads in the pool. When the limit is reached, the server does not create new threads.

    "},{"location":"threadpool.html#thread_pool_oversubscribe","title":"thread_pool_oversubscribe","text":"Option Description Command-line: Yes Config file: Yes Scope: Global Dynamic: Yes Data type: Numeric Default value: 3

Determines the number of threads that can run simultaneously. A value lower than 3 could cause extra sleep and wake-up actions.

    "},{"location":"threadpool.html#thread_pool_size","title":"thread_pool_size","text":"Option Description Command-line: Yes Config file: Yes Scope: Global Dynamic: Yes Data type: Numeric Default value: Number of processors

    Defines the number of threads that can use the CPU simultaneously.

    "},{"location":"threadpool.html#thread_pool_stall_limit","title":"thread_pool_stall_limit","text":"Option Description Command-line: Yes Config file: Yes Scope: Global Dynamic: No Data type: Numeric Default value: 500 (ms)

    Defines the number of milliseconds before a running thread is considered stalled. When this limit is reached, the thread pool will wake up or create another thread. This variable prevents a long-running query from monopolizing the pool.

    "},{"location":"threadpool.html#extra_port","title":"extra_port","text":"

    The variable was removed in Percona Server for MySQL 8.0.14.

    It specifies an additional port that Percona Server for MySQL listens to. This port can be used in case no new connections can be established due to all worker threads being busy or being locked when the pool-of-threads feature is enabled.

    The following command connects to the extra port:

    mysql --port='extra-port-number' --protocol=tcp\n
    "},{"location":"threadpool.html#extra_max_connections","title":"extra_max_connections","text":"Option Description Command-line: Yes Config file: Yes Scope: Global Dynamic: No Data type: Numeric Default value: 1

    The variable was removed in Percona Server for MySQL 8.0.14. This variable can be used to specify the maximum allowed number of connections plus one extra SUPER user connection on the extra_port. This can be used with the extra_port variable to access the server in case no new connections can be established due to all worker threads being busy or being locked when the pool-of-threads feature is enabled.

    "},{"location":"threadpool.html#status-variables","title":"Status variables","text":""},{"location":"threadpool.html#threadpool_idle_threads","title":"Threadpool_idle_threads","text":"Option Description Scope: Global Data type: Numeric

    This status variable shows the number of idle threads in the pool.

    "},{"location":"threadpool.html#threadpool_threads","title":"Threadpool_threads","text":"Option Description Scope: Global Data type: Numeric

    This status variable shows the number of threads in the pool.

    "},{"location":"toku-backup.html","title":"Percona TokuBackup","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    Percona TokuBackup is an open-source hot backup utility for MySQL servers running the TokuDB storage engine (including Percona Server for MySQL and MariaDB). It does not lock your database during backup. The TokuBackup library intercepts system calls that write files and duplicates the writes to the backup directory.

    Note

    This feature is currently considered tech preview and should not be used in a production environment.

    "},{"location":"toku-backup.html#installing-from-binaries","title":"Installing From Binaries","text":"

The installation of TokuBackup can be performed with the ps-admin script.

To install Percona TokuBackup, complete the following steps. Run the commands as root or by using the sudo command.

1. Run ps-admin --enable-tokubackup to add the preload-hotbackup option into the [mysqld_safe] section of my.cnf.

  The output could be:

  Checking SELinux status... INFO: SELinux is disabled.\n\nChecking if preload-hotbackup option is already set in config file... INFO: Option preload-hotbackup is not set in the config file.\n\nChecking TokuBackup plugin status... INFO: TokuBackup plugin is not installed.\n\nAdding preload-hotbackup option into /etc/my.cnf INFO: Successfully added preload-hotbackup option into /etc/my.cnf PLEASE RESTART MYSQL SERVICE AND RUN THIS SCRIPT AGAIN TO FINISH INSTALLATION!\n

2. Restart the mysql service:

  $ service mysql restart\n

3. Run ps-admin --enable-tokubackup again to finish the installation of the TokuBackup plugin.

  The output could be:

  Checking SELinux status... INFO: SELinux is disabled.\n\nChecking if preload-hotbackup option is already set in config file... INFO: Option preload-hotbackup is set in the config file.\n\nChecking TokuBackup plugin status... INFO: TokuBackup plugin is not installed.\n\nChecking if Percona Server is running with libHotBackup.so preloaded... INFO: Percona Server is running with libHotBackup.so preloaded.\n\nInstalling TokuBackup plugin... INFO: Successfully installed TokuBackup plugin.\n

    "},{"location":"toku-backup.html#making-a-backup","title":"Making a Backup","text":"

To run Percona TokuBackup, the backup destination directory must exist, be empty, be writable, and be owned by the same user under which the MySQL server is running (usually mysql).
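
For example, you might prepare the destination directory like this (the path is illustrative):

$ sudo mkdir -p /data/backup\n$ sudo chown mysql:mysql /data/backup\n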

    Once this directory is created, the backup can be run using the following command:

    mysql> set tokudb_backup_dir='/path_to_empty_directory';\n

    Note

Setting the tokudb_backup_dir variable automatically starts the backup process to the specified directory. Percona TokuBackup takes a full backup each time; currently, there is no incremental backup option.

If you get an error at this step (for example, caused by a misconfiguration), the Reporting Errors section explains how to find out the reason.

    "},{"location":"toku-backup.html#restoring-from-backup","title":"Restoring From Backup","text":"

    Percona TokuBackup does not have any functionality for restoring a backup. You can use rsync or cp to restore the files. You should check that the restored files have the correct ownership and permissions.

    NOTE: Make sure that the datadir is empty and that MySQL server is shut down before restoring from backup. You can\u2019t restore to a datadir of a running mysqld instance (except when importing a partial backup).

    The following example shows how you might use the rsync command to restore the backup:

    $ rsync -avrP /data/backup/ /var/lib/mysql/\n

    Since attributes of files are preserved, in most cases you will need to change their ownership to mysql before starting the database server. Otherwise, the files will be owned by the user who created the backup.

    $ chown -R mysql:mysql /var/lib/mysql\n

If you have changed the default TokuDB data directory (tokudb_data_dir), the TokuDB log directory (tokudb_log_dir), or both, you will see separate folders for each setting in the backup directory after taking a backup. You'll need to restore each folder separately:

    $ rsync -avrP /data/backup/mysql_data_dir/ /var/lib/mysql/\n$ rsync -avrP /data/backup/tokudb_data_dir/ /path/to/original/tokudb_data_dir/\n$ rsync -avrP /data/backup/tokudb_log_dir/ /path/to/original/tokudb_log_dir/\n$ chown -R mysql:mysql /var/lib/mysql\n$ chown -R mysql:mysql /path/to/original/tokudb_data_dir\n$ chown -R mysql:mysql /path/to/original/tokudb_log_dir\n
    "},{"location":"toku-backup.html#advanced-configuration","title":"Advanced Configuration","text":""},{"location":"toku-backup.html#monitoring-progress","title":"Monitoring Progress","text":"

    TokuBackup updates the PROCESSLIST state while the backup is in progress. You can see the output by running SHOW PROCESSLIST or SHOW FULL PROCESSLIST.

    "},{"location":"toku-backup.html#excluding-source-files","title":"Excluding Source Files","text":"

    You can exclude certain files and directories based on a regular expression set in the tokudb_backup_exclude session variable. If the source file name matches the excluded regular expression, then the source file is excluded from backup.

    For example, to exclude all lost+found directories from backup, use the following command:

    mysql> SET tokudb_backup_exclude='/lost\\\\+found($|/)';\n

    Note

    The server pid file is excluded by default. If you\u2019re providing your own additions to the exclusions and have the pid file in the default location, you will need to add the mysqld_safe.pid entry.

    "},{"location":"toku-backup.html#throttling-backup-rate","title":"Throttling Backup Rate","text":"

    You can throttle the backup rate using the tokudb_backup_throttle session-level variable. This variable throttles the write rate in bytes per second of the backup to prevent TokuBackup from crowding out other jobs in the system. The default and max value is 18446744073709551615.

    mysql> SET tokudb_backup_throttle=1000000;\n
    "},{"location":"toku-backup.html#restricting-backup-target","title":"Restricting Backup Target","text":"

    You can restrict the location of the destination directory where the backups can be located using the tokudb_backup_allowed_prefix system-level variable. Attempts to backup to a location outside of the specified directory or its children will result in an error.

    The default is null, backups have no restricted locations. This read-only variable can be set in the my.cnf configuration file and displayed with the SHOW VARIABLES command:

    mysql> SHOW VARIABLES LIKE 'tokudb_backup_allowed_prefix';\n

    The output could be:

    +------------------------------+-----------+\n| Variable_name                | Value     |\n+------------------------------+-----------+\n| tokudb_backup_allowed_prefix | /dumpdir  |\n+------------------------------+-----------+\n
    "},{"location":"toku-backup.html#reporting-errors","title":"Reporting Errors","text":"

    Percona TokuBackup uses two variables to capture errors. They are tokudb_backup_last_error and tokudb_backup_last_error_string. When TokuBackup encounters an error, these will report on the error number and the error string respectively. For example, the following output shows these parameters following an attempted backup to a directory that was not empty:

    mysql> SET tokudb_backup_dir='/tmp/backupdir';\n

    The output could be:

    ERROR 1231 (42000): Variable 'tokudb_backup_dir' can't be set to the value of '/tmp/backupdir'\n\nmysql> SELECT @@tokudb_backup_last_error;\n+----------------------------+\n| @@tokudb_backup_last_error |\n+----------------------------+\n|                         17 |\n+----------------------------+\n
    mysql> SELECT @@tokudb_backup_last_error_string;\n

    The output could be:

    +---------------------------------------------------+\n| @@tokudb_backup_last_error_string                 |\n+---------------------------------------------------+\n| tokudb backup couldn't create needed directories. |\n+---------------------------------------------------+\n
    "},{"location":"toku-backup.html#using-tokudb-hot-backup-for-replication","title":"Using TokuDB Hot Backup for Replication","text":"

TokuDB Hot Backup makes a transactionally consistent copy of the TokuDB files while applications read and write to these files. The TokuDB hot backup library intercepts certain system calls that write files and duplicates the writes on backup files while copying files to the backup directory. The copied files contain the same content as the original files.

TokuDB Hot Backup also has an API. This API includes the start capturing and stop capturing commands. While capturing is active, if a portion of a file that has already been copied to the backup location is changed, these changes are also applied to the backup copy.

Backups are often used to provision replicas. You must know the last executed global transaction identifier (GTID) or binary log position for both the replica and source configuration.

To lock tables, use FLUSH TABLES WITH READ LOCK or use the smart locks like LOCK TABLES FOR BACKUP or LOCK BINLOG FOR BACKUP.

During the copy process, the binlog is flushed, and the changes are copied to the backup by the "capturing" mechanism. After everything has been copied, and while the "capturing" mechanism is still running, use LOCK BINLOG FOR BACKUP. After this statement is executed, the binlog is flushed, the changes are captured, and any queries that could change the binlog position or executed GTID are blocked.

After this command, we can stop capturing, retrieve the last executed GTID or binlog position, and unlock the binlog.
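
A minimal sketch of that final sequence (assuming the file copy and capturing are already underway):

mysql> LOCK BINLOG FOR BACKUP;\nmysql> SHOW MASTER STATUS;\n-- stop capturing through the TokuBackup API here\nmysql> UNLOCK BINLOG;\n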

    After a backup is taken, there are the following files in the backup directory:

    • tokubackup_slave_info

    • tokubackup_binlog_info

    These files contain information for replica and source. You can use this information to start a new replica from the source or replica.

    The SHOW MASTER STATUS and SHOW SLAVE STATUS commands provide the information.

    Important

    As of MySQL 8.0.22, the SHOW SLAVE STATUS statement is deprecated. Use SHOW REPLICA STATUS instead.

In specific binlog formats, a binary log event can contain statements that produce temporary tables on the replica side, and the result of further statements may depend on the temporary table content. Typically, temporary tables are not selected for backup because they are created in a separate directory. A backup taken while such binlog-created temporary tables exist can cause issues when restored, because the temporary tables themselves are not restored and the data may be inconsistent.

Two system variables address this: --tokudb-backup-safe-slave, which enables or disables the safe-slave mode, and --tokudb-backup-safe-slave-timeout, which defines the maximum amount of time in seconds to wait for the temporary tables to disappear. In safe-slave mode, when used with LOCK BINLOG FOR BACKUP, the replica SQL thread is stopped and checked for temporary tables produced by the replica. If temporary tables exist, the replica SQL thread is restarted until there are no temporary tables or the defined timeout is reached.

You should not use this option with group replication.

    "},{"location":"toku-backup.html#create-a-backup-with-a-timestamp","title":"Create a Backup with a Timestamp","text":"

    If you plan to store more than one backup in a location, you should add a timestamp to the backup directory name.

    A sample Bash script has this information:

    #!/bin/bash\n\ntm=$(date \"+%Y-%m-%d-%H-%M-%S\");\nbackup_dir=$PWD/backup/$tm;\nmkdir -p $backup_dir;\nbin/mysql -uroot -e \"set tokudb_backup_dir='$backup_dir'\"\n
    "},{"location":"toku-backup.html#limitations-and-known-issues","title":"Limitations and known issues","text":"
    • You must disable InnoDB asynchronous IO if backing up InnoDB tables with TokuBackup. Otherwise you will have inconsistent, unrecoverable backups. The appropriate setting is innodb_use_native_aio=0.

    • To be able to run Point-In-Time-Recovery you\u2019ll need to manually get the binary log position.

    • Transactional storage engines (TokuDB and InnoDB) will perform recovery on the backup copy of the database when it is first started.

    • Tables using non-transactional storage engines (MyISAM) are not locked during the copy and may report issues when starting up the backup. It is best to avoid operations that modify these tables at the end of a hot backup operation (adding/changing users, stored procedures, etc.).

    • The database is copied locally to the path specified in /path/to/backup. This folder must exist, be writable, be empty, and contain enough space for a full copy of the database.

    • TokuBackup always makes a backup of the MySQL datadir and optionally the tokudb_data_dir, tokudb_log_dir, and the binary log folder. The latter three are only backed up separately if they are not the same as or contained in the MySQL datadir. None of these three folders can be a parent of the MySQL datadir.

    • No other directory structures are supported. All InnoDB, MyISAM, and other storage engine files must be within the MySQL datadir.

    • TokuBackup does not follow symbolic links.

    • TokuBackup does not backup MySQL configuration file(s).

    • TokuBackup does not backup tablespaces if they are out of datadir.

    • Due to upstream bug #80183, TokuBackup can\u2019t recover backed-up table data if backup was taken while running OPTIMIZE TABLE or ALTER TABLE ... TABLESPACE.

    • TokuBackup doesn\u2019t support incremental backups.

    "},{"location":"tokudb-background-analyze-table.html","title":"TokuDB background ANALYZE TABLE","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

Percona Server for MySQL can automatically analyze tables in the background based on a measured change in data. This is done by a background job manager that can perform operations on a background thread.

    "},{"location":"tokudb-background-analyze-table.html#background-jobs","title":"Background Jobs","text":"

Background jobs and their schedule are transient and are not persisted anywhere. Any currently running job is terminated on shutdown, and all scheduled jobs are forgotten on server restart. There can't be two jobs scheduled or running on the same table at any one point in time. If you manually invoke an ANALYZE TABLE that conflicts with either a pending or running job, the running job is canceled and the user's task runs immediately in the foreground. All the scheduled and running background jobs can be viewed by querying the TOKUDB_BACKGROUND_JOB_STATUS table.

A new tokudb_analyze_in_background variable controls whether ANALYZE TABLE is dispatched to the background process or runs in the foreground. To control the function of ANALYZE TABLE, a new tokudb_analyze_mode variable has been implemented. This variable offers options to cancel any running or scheduled job on the specified table (TOKUDB_ANALYZE_CANCEL), use the existing analysis algorithm (TOKUDB_ANALYZE_STANDARD), or recount the logical rows in the table and update the persistent count (TOKUDB_ANALYZE_RECOUNT_ROWS).

TOKUDB_ANALYZE_RECOUNT_ROWS is a new mechanism used to perform a logical recount of all rows in a table and persist that as the basis value for the table row estimate. This mode was added for tables that have been upgraded from an older version of TokuDB that only reported physical row counts and never had a proper logical row count. Newly created tables and partitions begin counting logical rows correctly from their creation and should not need to be recounted unless some odd edge condition causes the logical count to become inaccurate over time. This analysis mode has no effect on the table cardinality counts. It takes the currently set session values for tokudb_analyze_in_background and tokudb_analyze_throttle; changing the global or session instances of these values after scheduling has no effect on the job.

Any background job, both pending and running, can be canceled by setting tokudb_analyze_mode to TOKUDB_ANALYZE_CANCEL and issuing ANALYZE TABLE on the table whose jobs you want to cancel.
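
For example, to cancel all jobs for a given table (the table name is illustrative):

mysql> SET SESSION tokudb_analyze_mode = 'TOKUDB_ANALYZE_CANCEL';\nmysql> ANALYZE TABLE mydb.mytable;\n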

    "},{"location":"tokudb-background-analyze-table.html#auto-analysis","title":"Auto analysis","text":"

To implement background analysis and the gathering of cardinality statistics on TokuDB tables, a new delta value is maintained in memory for each TokuDB table. This value is not persisted anywhere, and it is reset to 0 on a server start. It is incremented for each INSERT/UPDATE/DELETE command and ignores the impact of transactions (rollback specifically). When this delta value exceeds the tokudb_auto_analyze percentage of rows in the table, an analysis is performed according to the current session's settings. Other analyses for this table are disabled until this analysis completes. When it completes, the delta is reset to 0 to begin recalculating table changes for the next potential analysis.

Status values are now reported to the server immediately upon completion of any analysis (previously, new status values were not used until the table had been closed and re-opened). Half-time direction reversal of the analysis has also been implemented: if a tokudb_analyze_time is in effect and the analysis has not reached the halfway point of the index by the time tokudb_analyze_time/2 has elapsed, it stops the forward progress and restarts the analysis from the last/rightmost row in the table, progressing leftwards and keeping/adding to the status information accumulated from the first half of the scan.

For small ratios of table_rows / tokudb_auto_analyze, auto analysis runs for almost every change. The trigger formula is: if (table_delta >= ((table_rows * tokudb_auto_analyze) / 100)) then run ANALYZE TABLE. If a user manually invokes an ANALYZE TABLE while tokudb_auto_analyze is enabled and there are no conflicting background jobs, the user's ANALYZE TABLE behaves exactly as if the delta level had been exceeded: the analysis is executed and the delta reset to 0 upon completion.
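
For example, with table_rows = 1,000,000 and tokudb_auto_analyze = 30, an automatic analysis is triggered once the in-memory delta reaches 1,000,000 * 30 / 100 = 300,000 changed rows (values are illustrative).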

    "},{"location":"tokudb-background-analyze-table.html#system-variables","title":"System Variables","text":""},{"location":"tokudb-background-analyze-table.html#tokudb_analyze_in_background","title":"tokudb_analyze_in_background","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Boolean Default ON

    When this variable is set to ON it will dispatch any ANALYZE TABLE job to a background process and return immediately, otherwise ANALYZE TABLE will run in foreground/client context.

    "},{"location":"tokudb-background-analyze-table.html#tokudb_analyze_mode","title":"tokudb_analyze_mode","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type ENUM Default TOKUDB_ANALYZE_STANDARD Range TOKUDB_ANALYZE_CANCEL, TOKUDB_ANALYZE_STANDARD, TOKUDB_ANALYZE_RECOUNT_ROWS

    This variable is used to control the function of ANALYZE TABLE. Possible values are:

• TOKUDB_ANALYZE_CANCEL - Cancel any running or scheduled job on the specified table.

• TOKUDB_ANALYZE_STANDARD - Use the existing analysis algorithm. This is the standard table cardinality analysis mode used to obtain cardinality statistics for a table and its indexes. It takes the currently set session values for tokudb_analyze_time, tokudb_analyze_in_background, and tokudb_analyze_throttle at the time of its scheduling, whether scheduled by a user-invoked ANALYZE TABLE or automatically as a result of the tokudb_auto_analyze threshold being hit. Changing the global or session instances of these values after scheduling has no effect on the scheduled job.

• TOKUDB_ANALYZE_RECOUNT_ROWS - Recount the logical rows in the table and update the persistent count. This mechanism performs a logical recount of all rows in a table and persists that as the basis value for the table row estimate. This mode was added for tables upgraded from an older version of TokuDB/PerconaFT that only reported physical row counts and never had a proper logical row count. Newly created tables and partitions begin counting logical rows correctly from their creation and should not need to be recounted unless some odd edge condition causes the logical count to become inaccurate over time. This analysis mode has no effect on the table cardinality counts. It takes the currently set session values for tokudb_analyze_in_background and tokudb_analyze_throttle; changing the global or session instances of these values after scheduling has no effect on the job.
    "},{"location":"tokudb-background-analyze-table.html#tokudb_analyze_throttle","title":"tokudb_analyze_throttle","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Numeric Default 0

This variable defines the maximum number of keys to visit per second when performing ANALYZE TABLE in either TOKUDB_ANALYZE_STANDARD or TOKUDB_ANALYZE_RECOUNT_ROWS mode.

    "},{"location":"tokudb-background-analyze-table.html#tokudb_analyze_time","title":"tokudb_analyze_time","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Numeric Default 5

    This session variable controls the number of seconds an analyze operation will spend on each index when calculating cardinality. Cardinality is shown by executing the following command:

SHOW INDEXES FROM table_name;\n

    If an analyze is never performed on a table then the cardinality is 1 for primary key indexes and unique secondary indexes, and NULL (unknown) for all other indexes. Proper cardinality can lead to improved performance of complex SQL statements.

    "},{"location":"tokudb-background-analyze-table.html#tokudb_auto_analyze","title":"tokudb_auto_analyze","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Numeric Default 30

    Percentage of table change as INSERT/UPDATE/DELETE commands to trigger an ANALYZE TABLE using the current session tokudb_analyze_in_background, tokudb_analyze_mode, tokudb_analyze_throttle, and tokudb_analyze_time settings. If this variable is enabled and tokudb_analyze_in_background variable is set to OFF, analysis will be performed directly within the client thread context that triggered the analysis. NOTE: InnoDB enabled this functionality by default when they introduced it. Due to the potential unexpected new load it might place on a server, it is disabled by default in TokuDB.

    "},{"location":"tokudb-background-analyze-table.html#tokudb_cardinality_scale_percent","title":"tokudb_cardinality_scale_percent","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Numeric Default 100 Range 0-100

Percentage used to scale table/index statistics before sending them to the server, to make an index appear either more or less unique than it actually is. InnoDB has a hard-coded scaling factor of 50%: if a table of 200 rows had an index with 40 unique values, InnoDB would return 200/40/2, or 2, for the index. The TokuDB formula is the same but expressed as a percentage: for the same table and index, (200/40 * tokudb_cardinality_scale_percent) / 100 with a scale of 50% also yields 2.

    "},{"location":"tokudb-background-analyze-table.html#information_schema-tables","title":"INFORMATION_SCHEMA Tables","text":"

    INFORMATION_SCHEMA.TOKUDB_BACKGROUND_JOB_STATUS

| Column name | Description |
|---|---|
| id | Simple monotonically incrementing job id, resets to 0 on server start |
| database_name | Database name |
| table_name | Table name |
| job_type | Type of job, either TOKUDB_ANALYZE_STANDARD or TOKUDB_ANALYZE_RECOUNT_ROWS |
| job_params | Param values used by this job in string format. For example: TOKUDB_ANALYZE_DELETE_TIME=1.0; TOKUDB_ANALYZE_TIME=5; TOKUDB_ANALYZE_THROTTLE=2048; |
| scheduler | Either USER or AUTO to indicate if the job was explicitly scheduled by a user or scheduled as an automatic trigger |
| scheduled_time | The time the job was scheduled |
| started_time | The time the job was started |
| status | Current job status if running. For example: ANALYZE TABLE standard db.tbl.idx 3 of 5 50% rows 10% time scanning forward |

    This table holds the information on scheduled and running background ANALYZE TABLE jobs for TokuDB tables.
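
To view the scheduled and running jobs, query the table directly:

mysql> SELECT * FROM information_schema.TOKUDB_BACKGROUND_JOB_STATUS;\n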

    "},{"location":"tokudb-faq.html","title":"TokuDB frequently asked questions","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    "},{"location":"tokudb-faq.html#transactional-operations","title":"Transactional Operations","text":""},{"location":"tokudb-faq.html#what-transactional-operations-does-tokudb-support","title":"What transactional operations does TokuDB support?","text":"

    TokuDB supports BEGIN TRANSACTION, END TRANSACTION, COMMIT, ROLLBACK, SAVEPOINT, and RELEASE SAVEPOINT.

    "},{"location":"tokudb-faq.html#tokudb-and-the-file-system","title":"TokuDB and the File System","text":""},{"location":"tokudb-faq.html#how-can-i-determine-which-files-belong-to-the-various-tables-and-indexes-in-my-schemas","title":"How can I determine which files belong to the various tables and indexes in my schemas?","text":"

    The tokudb_file_map plugin lists all Fractal Tree Indexes and their corresponding data files. The internal_file_name is the actual file name (in the data folder).

mysql> SELECT * FROM information_schema.tokudb_file_map;\n

    The output could be:

    +--------------------------+---------------------------------------+---------------+-------------+------------------------+\n| dictionary_name          | internal_file_name                    | table_schema  | table_name  | table_dictionary_name  |\n+--------------------------+---------------------------------------+---------------+-------------+------------------------+\n| ./test/tmc-key-idx_col2  | ./_test_tmc_key_idx_col2_a_14.tokudb  | test          | tmc         | key_idx_col2           |\n| ./test/tmc-main          | ./_test_tmc_main_9_14.tokudb          | test          | tmc         | main                   |\n| ./test/tmc-status        | ./_test_tmc_status_8_14.tokudb        | test          | tmc         | status                 |\n+--------------------------+---------------------------------------+---------------+-------------+------------------------+\n
    "},{"location":"tokudb-faq.html#full-disks","title":"Full Disks","text":""},{"location":"tokudb-faq.html#what-happens-when-the-disk-system-fills-up","title":"What happens when the disk system fills up?","text":"

The disk system may fill up during bulk load operations, such as LOAD DATA INFILE or CREATE INDEX, or during incremental operations like INSERT.

In the bulk case, running out of disk space will cause the statement to fail with ERROR 1030 (HY000): Got error 1 from storage engine. The temporary space used by the bulk loader will be released. If this happens, you can use a separate physical disk for the temporary files (for more information, see tokudb_tmp_dir). If the server runs out of free space, TokuDB asserts, stopping the server to prevent corruption of existing data files.

Otherwise, disk space can run low during non-bulk operations. When available space is below a user-configurable reserve (5% by default), inserts are prevented and transactions that perform inserts are aborted. If the disk becomes completely full, TokuDB will freeze until some disk space is made available.

    Details about the disk system:

    • There is a free-space reserve requirement, which is a user-configurable parameter given as a percentage of the total space in the file system. The default reserve is five percent. This value is available in the global variable tokudb_fs_reserve_percent. We recommend that this reserve be at least half the size of your physical memory.

    TokuDB polls the file system every five seconds to determine how much free space is available. If the free space dips below the reserve, then further table inserts are prohibited. Any transaction that attempts to insert rows will be aborted. Inserts are re-enabled when twice the reserve is available in the file system (so freeing a small amount of disk storage will not be sufficient to resume inserts). Warning messages are sent to the system error log when free space dips below twice the reserve and again when free space dips below the reserve.

Even with inserts prohibited, it is still possible for the file system to become completely full. For example, this can happen because another storage engine or another application consumes disk space.

    • If the file system becomes completely full, then TokuDB will freeze. It will not crash, but it will not respond to most SQL commands until some disk space is made available. When TokuDB is frozen in this state, it will still respond to the following command:

      SHOW ENGINE TokuDB STATUS;\n

Making disk space available will allow the storage engine to continue running, but inserts will still be prohibited until twice the reserve is free.

      Note

      Engine status displays a field indicating if disk free space is above twice the reserve, below twice the reserve, or below the reserve. It will also display a special warning if the disk is completely full.

    • In order to make space available on this system you can:

      • Add some disk space to the filesystem.

      • Delete some non-TokuDB files manually.

      • If the disk is not completely full, you may be able to reclaim space by aborting any transactions that are very old. Old transactions can consume large volumes of disk space in the recovery log.

      • If the disk is not completely full, you can drop indexes or drop tables from your TokuDB databases.

      • Deleting large numbers of rows from an existing table and then closing the table may free some space, but it may not. Deleting rows may simply leave unused space (available for new inserts) inside TokuDB data files rather than shrink the files (internal fragmentation).

    The fine print:

    • The TokuDB storage engine can use up to three separate file systems simultaneously, one each for the data, the recovery log, and the error log. All three are monitored, and if any one of the three falls below the relevant threshold then a warning message will be issued and inserts may be prohibited.

    • Warning messages to the error log are not repeated unless available disk space has been above the relevant threshold for at least one minute. This prevents excess messages in the error log if the disk free space is fluctuating around the limit.

    • Even if there are no other storage engines or other applications running, it is still possible for TokuDB to consume more disk space when operations such as row delete and query are performed, or when checkpoints are taken. This can happen because TokuDB can write cached information when it is time-efficient rather than when inserts are issued by the application, because operations in addition to insert (such as delete) create log entries, and also because of internal fragmentation of TokuDB data files.

The tokudb_fs_reserve_percent variable cannot be changed once the system has started. It can only be set in my.cnf or on the mysqld command line.
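
Because the variable is not dynamic, it has to be set at startup; a minimal my.cnf sketch, with an illustrative value rather than a recommendation:

[mysqld]\n# free-space reserve as a percentage of the file system (default 5)\ntokudb_fs_reserve_percent = 10\n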

    "},{"location":"tokudb-faq.html#backup","title":"Backup","text":""},{"location":"tokudb-faq.html#how-do-i-back-up-a-system-with-tokudb-tables","title":"How do I back up a system with TokuDB tables?","text":""},{"location":"tokudb-faq.html#taking-backups-with-percona-tokubackup","title":"Taking backups with Percona TokuBackup","text":"

TokuDB is capable of performing online backups with Percona TokuBackup. To perform a backup, execute backup to '/path/to/backup';. This creates a backup of the server and returns when complete. The backup can be used by another server using a copy of the binaries on the source server. You can view the progress of the backup by executing SHOW PROCESSLIST;. TokuBackup produces a copy of your running MySQL server that is consistent at the end time of the backup process. The thread copying files from source to destination can be throttled by setting the tokudb_backup_throttle server variable. For more information, see Percona TokuBackup.
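
A minimal session sketch using the statement form quoted above; the throttle value (bytes per second) is illustrative, and setting it globally is an assumption:

mysql> SET GLOBAL tokudb_backup_throttle = 10485760; -- ~10 MB/s, illustrative\nmysql> backup to '/path/to/backup';\n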

    The following conditions apply:

* Currently, *TokuBackup* only supports tables using the *TokuDB* storage engine and the *MyISAM* tables in the `mysql` database.\n\n!!! warning\n\n    You must disable *InnoDB* asynchronous IO if backing up *InnoDB* tables via the *TokuBackup* utility; otherwise you will have inconsistent, unrecoverable backups. The appropriate setting is innodb_use_native_aio set to `0`.\n\n\n* Transactional storage engines (*TokuDB* and *InnoDB*) will perform recovery on the backup copy of the database when it is first started.\n\n\n* Tables using non-transactional storage engines (*MyISAM*) are not locked during the copy and may report issues when starting up the backup. It is best to avoid operations that modify these tables at the end of a hot backup operation (adding/changing users, stored procedures, etc.).\n\n\n* The database is copied locally to the path specified in `/path/to/backup`. This folder must exist, be writable, be empty, and contain enough space for a full copy of the database.\n\n\n* *TokuBackup* always makes a backup of the *MySQL* `datadir` and optionally the tokudb_data_dir, tokudb_log_dir, and the binary log folder. The latter three are only backed up separately if they are not the same as or contained in the *MySQL* `datadir`. None of these three folders can be a parent of the *MySQL* `datadir`.\n\n\n* A folder is created in the given backup destination for each of the source folders.\n\n\n* No other directory structures are supported. All *InnoDB*, *MyISAM*, and other storage engine files must be within the *MySQL* `datadir`.\n\n\n* *TokuBackup* does not follow symbolic links.\n
    "},{"location":"tokudb-faq.html#other-options-for-taking-backups","title":"Other options for taking backups","text":"

    TokuDB tables are represented in the file system with dictionary files, log files, and metadata files. A consistent copy of all of these files must be made during a backup. Copying the files while they may be modified by a running MySQL may result in an inconsistent copy of the database.

    LVM snapshots may be used to get a consistent snapshot of all of the TokuDB files. The LVM snapshot may then be backed up at leisure.

    The SELECT INTO OUTFILE statement or mysqldump application may also be used to get a logical backup of the database.

    "},{"location":"tokudb-faq.html#references","title":"References","text":"

    The MySQL 5.5 reference manual describes several backup methods and strategies. In addition, we recommend reading the backup and recovery chapter in High Performance MySQL, 3rd Edition, by Baron Schwartz, Peter Zaitsev, and Vadim Tkachenko, Copyright 2012, O\u2019Reilly Media.

    "},{"location":"tokudb-faq.html#cold-backup","title":"Cold Backup","text":"

When MySQL is shut down, a copy of the MySQL data directory, the TokuDB data directory, and the TokuDB log directory can be made. In the simplest configuration, the TokuDB files are stored in the MySQL data directory with all of the other MySQL files, so backing up this directory is sufficient.

    "},{"location":"tokudb-faq.html#hot-backup-using-mylvmbackup","title":"Hot Backup using mylvmbackup","text":"

The mylvmbackup utility, located on Launchpad, works with TokuDB. It does everything required to get consistent copies of all of the MySQL tables, including MyISAM and InnoDB tables: it creates the LVM snapshots and backs up the snapshots.

    "},{"location":"tokudb-faq.html#logical-snapshots","title":"Logical Snapshots","text":"

A logical snapshot of a database uses SQL statements to retrieve table rows so that they can later be restored. When used within a transaction, a consistent snapshot of the database can be taken. This method can be used to export tables from one database server and import them into another server.

    The SELECT INTO OUTFILE statement is used to take a logical snapshot of a database. The LOAD DATA INFILE statement is used to load the table data. Please see the MySQL 5.6 reference manual for details.
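
A minimal sketch, assuming a hypothetical table mydb.t1 and a server-writable output path; wrapping the export in a transaction keeps the snapshot consistent:

mysql> START TRANSACTION WITH CONSISTENT SNAPSHOT;\nmysql> SELECT * FROM mydb.t1 INTO OUTFILE '/tmp/t1.csv';\nmysql> COMMIT;\nmysql> LOAD DATA INFILE '/tmp/t1.csv' INTO TABLE mydb.t1; -- on the destination server\n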

    Note

Do not use mysqlhotcopy to back up TokuDB tables. This script is incompatible with TokuDB.

    "},{"location":"tokudb-faq.html#missing-log-files","title":"Missing Log Files","text":""},{"location":"tokudb-faq.html#what-do-i-do-if-i-delete-my-logs-files-or-they-are-otherwise-missing","title":"What do I do if I delete my logs files or they are otherwise missing?","text":"

    You\u2019ll need to recover from a backup. It is essential that the log files be present in order to restart the database.

    "},{"location":"tokudb-faq.html#isolation-levels","title":"Isolation Levels","text":""},{"location":"tokudb-faq.html#what-is-the-default-isolation-level-for-tokudb","title":"What is the default isolation level for TokuDB?","text":"

    It is repeatable-read (MVCC).

    "},{"location":"tokudb-faq.html#how-can-i-change-the-isolation-level","title":"How can I change the isolation level?","text":"

    TokuDB supports repeatable-read, serializable, read-uncommitted and read-committed isolation levels (other levels are not supported). TokuDB employs pessimistic locking, and aborts a transaction when a lock conflict is detected.

    To guarantee that lock conflicts do not occur, use repeatable-read, read-uncommitted or read-committed isolation level.
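
For example, to switch the current session to read-committed (standard MySQL syntax):

mysql> SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;\nmysql> SELECT @@transaction_isolation;\n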

    "},{"location":"tokudb-faq.html#lock-wait-timeout-exceeded","title":"Lock Wait Timeout Exceeded","text":""},{"location":"tokudb-faq.html#why-do-my-mysql-clients-get-lock-timeout-errors-for-my-update-queries-and-what-should-my-application-do-when-it-gets-these-errors","title":"Why do my MySQL clients get lock timeout errors for my update queries? And what should my application do when it gets these errors?","text":"

Updates can get lock timeouts if another transaction holds a lock on the rows being updated for longer than the TokuDB lock timeout. You may want to increase this timeout.
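
A sketch of raising the timeout via the tokudb_lock_timeout variable (milliseconds; the value is illustrative):

mysql> SET GLOBAL tokudb_lock_timeout = 60000; -- wait up to 60 seconds for row locks\n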

    If an update deadlocks, then the transaction should abort and retry.

    For more information on diagnosing locking issues, see Lock Visualization in TokuDB.

    "},{"location":"tokudb-faq.html#row-size","title":"Row Size","text":""},{"location":"tokudb-faq.html#what-is-the-maximum-row-size","title":"What is the maximum row size?","text":"

    The maximum row size is 32 MiB.

    "},{"location":"tokudb-faq.html#nfs-cifs","title":"NFS & CIFS","text":""},{"location":"tokudb-faq.html#can-the-data-directories-reside-on-a-disk-that-is-nfs-or-cifs-mounted","title":"Can the data directories reside on a disk that is NFS or CIFS mounted?","text":"

Yes, we do have customers running NFS and CIFS volumes in production today. However, both of these disk types can pose a challenge to performance and data integrity due to their complexity. If you're seeking performance, keep in mind that the switching infrastructure and protocols of a traditional network were not designed for low response times and can be very difficult to troubleshoot. If you're concerned with data integrity, the possible data caching at the NFS level can cause inconsistencies between the logs and data files that may never be detected in the event of a crash. If you are thinking of using an NFS or CIFS mount, we recommend using synchronous mount options, which are available from the NFS mount man page, but these settings may decrease performance. For further discussion please look here.

    "},{"location":"tokudb-faq.html#using-other-storage-engines","title":"Using Other Storage Engines","text":""},{"location":"tokudb-faq.html#can-the-myisam-and-innodb-storage-engines-be-used","title":"Can the MyISAM and InnoDB Storage Engines be used?","text":"

MyISAM and InnoDB can be used directly in conjunction with TokuDB. Note that you should not over-commit memory between InnoDB and TokuDB: the total memory assigned to both caches must be less than physical memory.
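
A my.cnf sketch of splitting memory between the two engines, assuming tokudb_cache_size for the TokuDB cache; the values are illustrative for a 16GB host:

[mysqld]\n# keep the sum of both caches well below physical memory\ninnodb_buffer_pool_size = 6G\ntokudb_cache_size = 6G\n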

    "},{"location":"tokudb-faq.html#can-the-federated-storage-engines-be-used","title":"Can the Federated Storage Engines be used?","text":"

    The Federated Storage Engine can also be used, however it is disabled by default in MySQL. It can be enabled by either running mysqld with --federated as a command line parameter, or by putting federated in the [mysqld] section of the my.cnf file.
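
A minimal my.cnf sketch of the second approach:

[mysqld]\nfederated\n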

    For more information see the MySQL 8.0 Reference Manual: FEDERATED Storage Engine.

    "},{"location":"tokudb-faq.html#using-mysql-patches-with-tokudb","title":"Using MySQL Patches with TokuDB","text":""},{"location":"tokudb-faq.html#can-i-use-mysql-source-code-patches-with-tokudb","title":"Can I use MySQL source code patches with TokuDB?","text":"

    Yes, but you need to apply Percona patches as well as your patches to MySQL to build a binary that works with the Percona Fractal Tree library.

    "},{"location":"tokudb-faq.html#truncate-table-vs-delete-from-table","title":"Truncate Table vs Delete from Table","text":""},{"location":"tokudb-faq.html#which-is-faster-truncate-table-or-delete-from-table","title":"Which is faster, TRUNCATE TABLE or DELETE FROM TABLE?","text":"

    Use TRUNCATE TABLE whenever possible. A table truncation runs in constant time, whereas a DELETE FROM TABLE requires a row-by-row deletion and thus runs in time linear to the table size.

    "},{"location":"tokudb-faq.html#foreign-keys","title":"Foreign Keys","text":""},{"location":"tokudb-faq.html#does-tokudb-enforce-foreign-key-constraints","title":"Does TokuDB enforce foreign key constraints?","text":"

    No. TokuDB ignores foreign key declarations.

    "},{"location":"tokudb-faq.html#dropping-indexes","title":"Dropping Indexes","text":""},{"location":"tokudb-faq.html#is-dropping-an-index-in-tokudb-hot","title":"Is dropping an index in TokuDB hot?","text":"

    No, the table is locked for the amount of time it takes the file system to delete the file associated with the index.

    "},{"location":"tokudb-file-management.html","title":"TokuDB file management","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

As mentioned in TokuDB files and file types, Percona FT is particular when validating its data set. If a file goes missing, can't be accessed, or seems to contain nonsensical data, Percona FT will assert, abort, or fail to start. It does this not to annoy you, but to try to protect you from doing any further damage to your data.

    This document contains examples of common file maintenance operations and instructions on how to safely execute these operations.

The tokudb_dir_per_db option addressed two shortcomings: the renaming of data files on table/index rename, and the ability to group data files together within a directory that represents a single database. This feature is enabled by default.

    The tokudb_dir_cmd variable can be used to edit the contents of the TokuDB/PerconaFT directory map.

    "},{"location":"tokudb-file-management.html#moving-tokudb-data-files-to-a-location-outside-of-the-default-mysql-datadir","title":"Moving TokuDB data files to a location outside of the default MySQL datadir","text":"

TokuDB uses the location specified by the tokudb_data_dir variable for all of its data files. If the tokudb_data_dir variable is not explicitly set, TokuDB will use the location specified by the server's datadir for these files.

    The TokuDB data files are protected from concurrent process access by the __tokudb_lock_dont_delete_me_data file that is located in the same directory as the TokuDB data files.

    TokuDB data files may be moved to other locations with symlinks left behind in their place. If those symlinks refer to files on other physical data volumes, the tokudb_fs_reserve_percent monitor will not traverse the symlink and monitor the real location for adequate space in the file system.

To safely move your TokuDB data files, follow these steps (a shell sketch follows the list):

    1. Shut the server down cleanly.

    2. Change the tokudb_data_dir in your my.cnf configuration file to the location where you wish to store your TokuDB data files.

    3. Create your new target directory.

4. Move your *.tokudb files and your __tokudb_lock_dont_delete_me_data file from the current location to the new location.

    5. Restart your server.
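
A shell sketch of these steps with illustrative paths; adjust the service commands and directories to your system:

$ systemctl stop mysql\n$ mkdir -p /mnt/fast/tokudb-data\n$ mv /var/lib/mysql/*.tokudb /mnt/fast/tokudb-data/\n$ mv /var/lib/mysql/__tokudb_lock_dont_delete_me_data /mnt/fast/tokudb-data/\n$ # my.cnf now contains: tokudb_data_dir = /mnt/fast/tokudb-data\n$ systemctl start mysql\n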

    "},{"location":"tokudb-file-management.html#moving-tokudb-temporary-files-to-a-location-outside-of-the-default-mysql-datadir","title":"Moving TokuDB temporary files to a location outside of the default MySQL datadir","text":"

TokuDB will use the location specified by the tokudb_tmp_dir variable for all of its temporary files. If the tokudb_tmp_dir variable is not explicitly set, TokuDB will use the location specified by the tokudb_data_dir variable. If the tokudb_data_dir variable is also not explicitly set, TokuDB will use the location specified by the server's datadir for these files.

    TokuDB temporary files are protected from concurrent process access by the __tokudb_lock_dont_delete_me_temp file that is located in the same directory as the TokuDB temporary files.

    If you locate your TokuDB temporary files on a physical volume that is different from where your TokuDB data files or recovery log files are located, the tokudb_fs_reserve_percent monitor will not monitor their location for adequate space in the file system.

    To safely move your TokuDB temporary files:

    1. Shut the server down cleanly. A clean shutdown will ensure that there are no temporary files that need to be relocated.

    2. Change the tokudb_tmp_dir variable in your my.cnf configuration file to the location where you wish to store your new TokuDB temporary files.

    3. Create your new target directory.

    4. Move your __tokudb_lock_dont_delete_me_temp file from the current location to the new location.

    5. Restart your server.

    "},{"location":"tokudb-file-management.html#moving-tokudb-recovery-log-files-to-a-location-outside-of-the-default-mysql-datadir","title":"Moving TokuDB recovery log files to a location outside of the default MySQL datadir","text":"

TokuDB will use the location specified by the tokudb_log_dir variable for all of its recovery log files. If the tokudb_log_dir variable is not explicitly set, TokuDB will use the location specified by the server's datadir for these files.

    The TokuDB recovery log files are protected from concurrent process access by the __tokudb_lock_dont_delete_me_logs file that is located in the same directory as the TokuDB recovery log files.

    TokuDB recovery log files may be moved to another location with symlinks left behind in place of the tokudb_log_dir. If that symlink refers to a directory on another physical data volume, the tokudb_fs_reserve_percent monitor will not traverse the symlink and monitor the real location for adequate space in the file system.

    To safely move your TokuDB recovery log files:

    1. Shut the server down cleanly.

    2. Change the tokudb_log_dir in your my.cnf configuration file to the location where you wish to store your TokuDB recovery log files.

    3. Create your new target directory.

4. Move your log*.tokulog* files and your __tokudb_lock_dont_delete_me_logs file from the current location to the new location.

    5. Restart your server.

    "},{"location":"tokudb-file-management.html#improved-table-renaming-functionality","title":"Improved table renaming functionality","text":"

When you rename a TokuDB table via SQL, the data files on disk keep their original names and only the mapping in the Percona FT directory file is changed to map the new dictionary name to the original internal file names. This makes it difficult to quickly match database/table/index names to their actual files on disk, requiring you to cross-reference the TOKUDB_FILE_MAP table.

    The tokudb_dir_per_db variable is implemented to address this issue.

When tokudb_dir_per_db is enabled (ON by default), this is no longer the case. When you rename a table, the mapping in the Percona FT directory file will be updated and the files will be renamed on disk to reflect the new table name.

    "},{"location":"tokudb-file-management.html#improved-directory-layout-functionality","title":"Improved directory layout functionality","text":"

    Many users have had issues with managing the huge volume of individual files that TokuDB and Percona FT use. The tokudb_dir_per_db variable addresses this issue.

When the tokudb_dir_per_db variable is enabled (ON by default), all new tables and indexes will be placed within their corresponding database directory within the tokudb_data_dir or server datadir.

If you have the tokudb_data_dir variable set to something other than the server datadir, TokuDB will create a directory matching the name of the database, but upon dropping the database, this directory will remain behind.

    Existing table files will not be automatically relocated to their corresponding database directory.

You can easily move a table's data files into the new scheme and the proper database directory with a few steps:

    mysql> SET GLOBAL tokudb_dir_per_db=true;\nmysql> RENAME TABLE <table> TO <tmp_table>;\nmysql> RENAME TABLE <tmp_table> TO <table>;\n

    Note

Two renames are needed because MySQL doesn't allow you to rename a table to itself. The first rename moves the table to the temporary name and relocates the table files into the owning database directory. The second rename sets the table name back to the original name. Tables can also be renamed/moved across databases and will be placed correctly into the corresponding database directory.

    Warning

Be careful when renaming tables if you have used any tricks to symlink the database directories onto different storage volumes: in that case the move is not a simple directory rename on the same volume but a physical copy across volumes. This can take quite some time and prevents access to the table being moved during the copy.

    "},{"location":"tokudb-file-management.html#system-variables","title":"System Variables","text":""},{"location":"tokudb-file-management.html#tokudb_dir_cmd","title":"tokudb_dir_cmd","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type String

    This variable is used to send commands to edit TokuDB directory files.

    Warning

    Use this variable only if you know what you are doing otherwise it WILL lead to data loss.
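
A hypothetical sketch of issuing a directory-map command; the command grammar shown here is an assumption and must be verified against your server version before use:

mysql> SET GLOBAL tokudb_dir_cmd = 'attach ./test/t1-main ./_test_t1_main_a_1.tokudb'; -- hypothetical syntax\n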

    "},{"location":"tokudb-file-management.html#status-variables","title":"Status Variables","text":""},{"location":"tokudb-file-management.html#tokudb_dir_cmd_last_error","title":"tokudb_dir_cmd_last_error","text":"Option Description Scope Global Data type Numeric

This variable contains the error number of the last command executed via the tokudb_dir_cmd variable.

    "},{"location":"tokudb-file-management.html#tokudb_dir_cmd_last_error_string","title":"tokudb_dir_cmd_last_error_string","text":"Option Description Scope Global Data type Numeric

This variable contains the error string of the last command executed via the tokudb_dir_cmd variable.

    "},{"location":"tokudb-files-file-types.html","title":"TokuDB files and file types","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    The TokuDB file set consists of many different files that all serve various purposes.

    If you have any TokuDB data, your data directory should look similar to this:

    root@server:/var/lib/mysql# ls -lah\n

    The output could be:

    ...\n-rw-rw----  1 mysql mysql  76M Oct 13 18:45 ibdata1\n...\n-rw-rw----  1 mysql mysql  16K Oct 13 15:52 tokudb.directory\n-rw-rw----  1 mysql mysql  16K Oct 13 15:52 tokudb.environment\n-rw-------  1 mysql mysql    0 Oct 13 15:52 __tokudb_lock_dont_delete_me_data\n-rw-------  1 mysql mysql    0 Oct 13 15:52 __tokudb_lock_dont_delete_me_environment\n-rw-------  1 mysql mysql    0 Oct 13 15:52 __tokudb_lock_dont_delete_me_logs\n-rw-------  1 mysql mysql    0 Oct 13 15:52 __tokudb_lock_dont_delete_me_recovery\n-rw-------  1 mysql mysql    0 Oct 13 15:52 __tokudb_lock_dont_delete_me_temp\n-rw-rw----  1 mysql mysql  16K Oct 13 15:52 tokudb.rollback\n...\n

    This document lists the different types of TokuDB and Percona Fractal Tree files, explains their purpose, shows their location and how to move them around.

    "},{"location":"tokudb-files-file-types.html#tokudbenvironment","title":"tokudb.environment","text":"

    This file is the root of the Percona FT file set and contains various bits of metadata about the system, such as creation times, current file format versions, etc.

    Percona FT will create/expect this file in the directory specified by the MySQL datadir.

    "},{"location":"tokudb-files-file-types.html#tokudbrollback","title":"tokudb.rollback","text":"

    Every transaction within Percona FT maintains its own transaction rollback log. These logs are stored together within a single Percona FT dictionary file and take up space within the Percona FT cachetable (just like any other Percona FT dictionary).

    The transaction rollback logs will undo any changes made by a transaction if the transaction is explicitly rolled back, or rolled back via recovery as a result of an uncommitted transaction when a crash occurs.

    Percona FT will create/expect this file in the directory specified by the MySQL datadir.

    "},{"location":"tokudb-files-file-types.html#tokudbdirectory","title":"tokudb.directory","text":"

    Percona FT maintains a mapping of a dictionary name (example: sbtest.sbtest1.main) to an internal file name (example: _sbtest_sbtest1_main_xx_x_xx.tokudb). This mapping is stored within this single Percona FT dictionary file and takes up space within the Percona FT cachetable just like any other Percona FT dictionary.

    Percona FT will create/expect this file in the directory specified by the MySQL datadir.

    "},{"location":"tokudb-files-file-types.html#dictionary-files","title":"Dictionary files","text":"

    TokuDB dictionary (data) files store actual user data. For each MySQL table there will be:

    • One status dictionary that contains metadata about the table.

    • One main dictionary that stores the full primary key (an imaginary key is used if one was not explicitly specified) and full row data.

    • One key dictionary for each additional key/index on the table.

    These are typically named: _<database>_<table>_<key>_<internal_txn_id>.tokudb

    Percona FT creates/expects these files in the directory specified by tokudb_data_dir if set, otherwise the MySQL datadir is used.

    "},{"location":"tokudb-files-file-types.html#recovery-log-files","title":"Recovery log files","text":"

The Percona FT recovery log records every operation that modifies a Percona FT dictionary. Periodically, the system takes a snapshot called a checkpoint. This checkpoint ensures that the modifications recorded within the Percona FT recovery logs have been applied to the appropriate dictionary files up to a known point in time and synced to disk.

    These files have a rolling naming convention, but use: log<log_file_number>.tokulog<log_file_format_version>.

    Percona FT creates/expects these files in the directory specified by tokudb_log_dir if set, otherwise the MySQL datadir is used.

    Percona FT does not track what log files should or shouldn\u2019t be present. Upon startup, it discovers the logs in the log directory, and replays them in order. If the wrong logs are present, the recovery aborts and possibly damages the dictionaries.

    "},{"location":"tokudb-files-file-types.html#temporary-files","title":"Temporary files","text":"

    Percona FT might need to create some temporary files in order to perform some operations. When the bulk loader is active, these temporary files might grow to be quite large.

    As different operations start and finish, the files will come and go.

There are no temporary files left behind upon a clean shutdown.

    Percona FT creates/expects these files in the directory specified by tokudb_tmp_dir if set. If not, the tokudb_data_dir is used if set, otherwise the MySQL datadir is used.

    "},{"location":"tokudb-files-file-types.html#lock-files","title":"Lock files","text":"

    Percona FT uses lock files to prevent multiple processes from accessing and writing to the files in the assorted Percona FT functionality areas. Each lock file will be in the same directory as the file(s) that it is protecting.

    These empty files are only used as semaphores across processes. They are safe to delete/ignore as long as no server instances are currently running and using the data set.

    __tokudb_lock_dont_delete_me_environment

    __tokudb_lock_dont_delete_me_recovery

    __tokudb_lock_dont_delete_me_logs

    __tokudb_lock_dont_delete_me_data

    __tokudb_lock_dont_delete_me_temp

Percona FT validates its data set. If a file goes missing, cannot be found, or seems to contain nonsensical data, it will assert, abort, or fail to start. It does this not to annoy you, but to try to protect you from doing any further damage to your data.

    "},{"location":"tokudb-fractal-tree-indexing.html","title":"TokuDB fractal tree indexing","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    Fractal Tree indexing is the technology behind TokuDB and is protected by multiple patents. This index type enhances the traditional B-tree data structure used in other database engines and optimizes performance for modern hardware and data sets.

    "},{"location":"tokudb-fractal-tree-indexing.html#background","title":"Background","text":"

The B-tree data structure was optimized for large blocks of data, but its performance is limited by I/O bandwidth. The size of a production database generally exceeds available main memory, so most leaves in a tree are stored on disk, not in RAM. If a leaf is not in main memory, inserting information requires a disk I/O operation. Continually adding RAM to keep pace with data growth is too expensive.

    "},{"location":"tokudb-fractal-tree-indexing.html#buffers","title":"Buffers","text":"

    Like a B-tree structure, a fractal tree index is a tree data structure, but each node has buffers that allow messages to be stored. Insertions, deletions, and updates are inserted into the buffers as messages. Buffers let each disk operation be more efficient by writing large amounts of data. Buffers also avoid the common B-tree scenario when disk writes change only a small amount of data.

    In fractal tree indexes, non-leaf (internal) nodes have child nodes. The number of child nodes is variable and based on a pre-defined range. When data is inserted or deleted from a node, the number of child nodes changes. Internal nodes may join or split to maintain the defined range. When the buffer is full, the messages are flushed to children nodes.

The fractal tree index data structure involves the same algorithmic complexity for queries as a B-tree. There is no data loss because queries follow the path from root to leaf and pass through all messages; a query sees the current state of the data even if changes have not yet been propagated to the corresponding leaves.

    Each message is stamped with a unique message sequence number (MSN) when the message is stored in a non-leaf node message buffer. The MSN maintains the order of messages and ensures the messages are only applied once to leaf nodes when the leaf node is updated by messages.

Buffers are also serialized to disk, so messages in internal nodes are not lost in the case of a crash or outage. If a write happened after a checkpoint but before a crash, recovery replays the operation from the log.

    "},{"location":"tokudb-installation.html","title":"TokuDB installation","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    Percona Server for MySQL is compatible with the separately available TokuDB storage engine package. The TokuDB engine must be separately downloaded and then enabled as a plug-in component. This package can be installed alongside the standard Percona Server for MySQL 8.0 releases and does not require any specially adapted version of Percona Server for MySQL.

    The TokuDB storage engine is a scalable, ACID and MVCC compliant storage engine that provides indexing-based query improvements, offers online schema modifications, and reduces replica lag for both hard disk drives and flash memory. This storage engine is specifically designed for high performance on write-intensive workloads which is achieved with Fractal Tree indexing. To learn more about Fractal Tree indexing, you can visit the following Wikipedia page.

    Warning

    Only the Percona supplied TokuDB engine should be used with Percona Server for MySQL 8.0. A TokuDB engine downloaded from other sources is not compatible. TokuDB file formats are not the same across MySQL variants. Migrating from one variant to any other variant requires a logical data dump and reload.

    "},{"location":"tokudb-installation.html#prerequisites","title":"Prerequisites","text":""},{"location":"tokudb-installation.html#libjemalloc-library","title":"libjemalloc library","text":"

    TokuDB storage engine requires libjemalloc library 3.3.0 or greater. If the version in the distribution repository is lower than that you can use one from Percona Software Repositories or download it.

If libjemalloc was not installed and enabled before, it will be installed automatically when the TokuDB storage engine package is installed with the apt or yum package manager, but the Percona Server for MySQL instance should be restarted for libjemalloc to be loaded. This way libjemalloc will be loaded with LD_PRELOAD. You can also enable libjemalloc by specifying the malloc-lib variable in the [mysqld_safe] section of the my.cnf file:

    [mysqld_safe]\nmalloc-lib= /path/to/jemalloc\n
    "},{"location":"tokudb-installation.html#transparent-huge-pages","title":"Transparent huge pages","text":"

TokuDB won't be able to start if transparent huge pages are enabled. Transparent huge pages are a feature available in newer kernel versions. You can check whether transparent huge pages are enabled with the following command.
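
The bracketed value in the output marks the active setting; the output shown is illustrative:

$ cat /sys/kernel/mm/transparent_hugepage/enabled\nalways madvise [never]\n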

If transparent huge pages are enabled and you try to start the TokuDB engine, you'll get the following message in your error.log:

    Transparent huge pages are enabled, according to /sys/kernel/mm/redhat_transparent_hugepage/enabled\nTransparent huge pages are enabled, according to /sys/kernel/mm/transparent_hugepage/enabled\n

You can disable transparent huge pages permanently by passing transparent_hugepage=never to the kernel in your bootloader.

    Note

For this change to take effect you must reboot your server.

    You can disable the transparent huge pages by running the following command as root.

    Note

    This setting lasts only until the server is rebooted.

    echo never > /sys/kernel/mm/transparent_hugepage/enabled\necho never > /sys/kernel/mm/transparent_hugepage/defrag\n
    "},{"location":"tokudb-installation.html#installation","title":"Installation","text":"

    The TokuDB storage engine for Percona Server for MySQL is currently available in our apt and yum repositories.

    You can install the Percona Server for MySQL with the TokuDB engine by using the respective package manager:

    For yum, use the following command:

        $ yum install percona-server-tokudb.x86_64\n

    For apt, use the following command:

        $ apt install percona-server-tokudb\n
    "},{"location":"tokudb-installation.html#enabling-the-tokudb-storage-engine","title":"Enabling the TokuDB Storage Engine","text":"

Once the TokuDB server package is installed, you need to enable the TokuDB storage engine.

Percona Server for MySQL has implemented the ps-admin script to make enabling the TokuDB storage engine easier. This script will automatically disable transparent huge pages, if they're enabled, and install and enable the TokuDB storage engine with all the required plugins. You need to run this script as root or with sudo. The script should only be used for local installations and should not be used to install TokuDB on a remote server. Run the script with the required parameters:

    $ ps-admin --enable-tokudb -uroot -pPassw0rd\n

The following output will be displayed:

    Checking if Percona server is running with jemalloc enabled...\n>> Percona server is running with jemalloc enabled.\n\nChecking transparent huge pages status on the system...\n>> Transparent huge pages are currently disabled on the system.\n\nChecking if thp-setting=never option is already set in config file...\n>> Option thp-setting=never is not set in the config file.\n>> (needed only if THP is not disabled permanently on the system)\n\nChecking TokuDB plugin status...\n>> TokuDB plugin is not installed.\n\nAdding thp-setting=never option into /etc/mysql/my.cnf\n>> Successfuly added thp-setting=never option into /etc/mysql/my.cnf\n\nInstalling TokuDB engine...\n>> Successfuly installed TokuDB plugin.\n

If the script returns no errors, the TokuDB storage engine should be successfully enabled on your server. You can verify this by running SHOW ENGINES;

    "},{"location":"tokudb-installation.html#enabling-the-tokudb-storage-engine-manually","title":"Enabling the TokuDB Storage Engine Manually","text":"

If you don't want to use ps-admin, you'll need to manually install the storage engine and required plugins.

    INSTALL PLUGIN tokudb SONAME 'ha_tokudb.so';\nINSTALL PLUGIN tokudb_file_map SONAME 'ha_tokudb.so';\nINSTALL PLUGIN tokudb_fractal_tree_info SONAME 'ha_tokudb.so';\nINSTALL PLUGIN tokudb_fractal_tree_block_map SONAME 'ha_tokudb.so';\nINSTALL PLUGIN tokudb_trx SONAME 'ha_tokudb.so';\nINSTALL PLUGIN tokudb_locks SONAME 'ha_tokudb.so';\nINSTALL PLUGIN tokudb_lock_waits SONAME 'ha_tokudb.so';\nINSTALL PLUGIN tokudb_background_job_status SONAME 'ha_tokudb.so';\n

    After the engine has been installed it should be present in the engines list. To check if the engine has been correctly installed and active: SHOW ENGINES;

    To check if all the TokuDB plugins have been installed correctly you should run: SHOW PLUGINS;

    "},{"location":"tokudb-installation.html#tokudb-version","title":"TokuDB Version","text":"

    TokuDB storage engine version can be checked with: SELECT @@tokudb_version;

    "},{"location":"tokudb-installation.html#upgrade","title":"Upgrade","text":"

    Before upgrading to Percona Server for MySQL 8.0, make sure that your system is ready by running mysqlcheck: mysqlcheck -u root -p --all-databases --check-upgrade

    Warning

    With partitioned tables that use the TokuDB or MyRocks storage engine, the upgrade only works with native partitioning.

    "},{"location":"tokudb-intro.html","title":"TokuDB introduction","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see TokuDB version changes.

    TokuDB is a highly scalable, zero-maintenance downtime MySQL storage engine that delivers indexing-based query acceleration, improved replication performance, unparalleled compression, and live schema modification. The TokuDB storage engine is a scalable, ACID and MVCC compliant storage engine that provides indexing-based query improvements, offers online schema modifications, and reduces replica lag for both hard disk drives and flash memory. This storage engine is specifically designed for high performance on write-intensive workloads which is achieved with Fractal Tree indexing.

Percona Server for MySQL is compatible with the separately available TokuDB storage engine package. The TokuDB engine must be separately downloaded and then enabled as a plug-in component. This package can be installed alongside standard Percona Server for MySQL releases and does not require any specially adapted version of Percona Server for MySQL.

    Warning

Only the Percona supplied TokuDB engine should be used with Percona Server for MySQL. A TokuDB engine downloaded from other sources is not compatible. TokuDB file formats are not the same across MySQL variants. Migrating from one variant to any other variant requires a logical data dump and reload.

    Additional features unique to TokuDB include:

    • Up to 25x Data Compression

    • Fast Inserts

    • Eliminates Replica Lag with Read Free Replication

    • Hot Schema Changes

    • Hot Index Creation - TokuDB tables support insertions, deletions and queries with no down time while indexes are being added to that table

    • Hot column addition, deletion, expansion, and rename - TokuDB tables support insertions, deletions and queries without down-time when an alter table adds, deletes, expands, or renames columns

    • On-line Backup

    Note

    The TokuDB storage engine does not support the nowait and skip locked modifiers introduced in the InnoDB storage engine with MySQL 8.0.

    For more information on installing and using TokuDB click on the following links:

    • TokuDB Installation

    • Using TokuDB

    • Getting Started with TokuDB

    • TokuDB Variables

    • Percona TokuBackup

    • TokuDB Troubleshooting

    • Frequently Asked Questions

    • Migrating and Removing the TokuDB storage engine

    "},{"location":"tokudb-intro.html#getting-the-most-from-tokudb","title":"Getting the Most from TokuDB","text":"

    Compression

    *TokuDB* compresses all data on disk, including indexes. Compression\nlowers cost by reducing the amount of storage required and frees up disk\nspace for additional indexes to achieve improved query performance. Depending\non the compressibility of the data, we have seen compression ratios up to 25x\nfor high compression. Compression can also lead to improved performance since\nless data needs to be read from and written to disk.\n

    Fast Insertions and Deletions

    TokuDB\u2019s Fractal Tree technology enables fast\nindexed insertions and deletions. Fractal Trees match B-trees in their\nindexing sweet spot (sequential data) and are up to two orders of magnitude\nfaster for random data with high cardinality.\n

    Eliminates Replica Lag

*TokuDB* replication replicas can be configured to process\nthe replication stream with virtually no read IO. Uniqueness checking is\nperformed on the *TokuDB* source and can be skipped on all *TokuDB*\nreplicas. Also, row-based replication ensures that all before and after row\nimages are captured in the binary logs, so the *TokuDB* replicas can harness\nthe power of Fractal Tree indexes and bypass traditional read-modify-write\nbehavior. This \u201cRead Free Replication\u201d ensures that replication replicas do not\nfall behind the source and can be used for read scaling, backups, and\ndisaster recovery, without sharding, expensive hardware, or limits on what\ncan be replicated.\n

    Hot Index Creation

    *TokuDB* allows the addition of indexes to an existing table\nwhile inserts and queries are being performed on that table. This means that\n*MySQL* can be run continuously with no blocking of queries or insertions\nwhile indexes are added and eliminates the down-time that index changes would\notherwise require.\n

    Hot Column Addition, Deletion, Expansion and Rename

    *TokuDB* allows the addition\nof new columns to an existing table, the deletion of existing columns from an\nexisting table, the expansion of `char`, `varchar`, `varbinary`, and\n`integer` type columns in an existing table, and the renaming of an\nexisting column while inserts and queries are being performed on that table.\n

    Online (Hot) Backup

*TokuDB* can create backups of online database servers without downtime.\n

    Fast Indexing

    In practice, slow indexing often leads users to choose a smaller\nnumber of sub-optimal indexes in order to keep up with incoming data\nrates. These sub-optimal indexes result in disproportionately slower queries,\nsince the difference in speed between a query with an index and the same\nquery when no index is available can be many orders of magnitude. Thus, fast\nindexing means fast queries.\n

    Clustering Keys and Other Indexing Improvements

    *TokuDB* tables are clustered on\nthe primary key. *TokuDB* also supports clustering secondary keys, providing\nbetter performance on a broader range of queries. A clustering key includes\n(or clusters) all of the columns in a table along with the key. As a result,\none can efficiently retrieve any column when doing a range query on a\nclustering key. Also, with *TokuDB*, an auto-increment column can be used in\nany index and in any position within an index. Lastly, *TokuDB* indexes can\ninclude up to 32 columns.\n

    Less Aging/Fragmentation

    *TokuDB* can run much longer, likely indefinitely,\nwithout the need to perform the customary practice of dump/reload or\n`OPTIMIZE TABLE` to restore database performance. The key is the\nfundamental difference with which the Fractal Tree stores data on\ndisk. Since, by default, the Fractal Tree will store data in 4MB chunks\n(pre-compression), as compared to InnoDB\u2019s 16KB, *TokuDB* has the ability to\navoid \u201cdatabase disorder\u201d up to 250x better than InnoDB.\n

    Bulk Loader

    *TokuDB* uses a parallel loader to create tables and offline\nindexes. This parallel loader will use multiple cores for fast offline table\nand index creation.\n

    Full-Featured Database

    *TokuDB* supports fully ACID-compliant transactions, MVCC\n(Multi-Version Concurrency Control), serialized isolation levels, row-level\nlocking, and XA. *TokuDB* scales with high number of client connections, even\nfor large tables.\n

    Lock Diagnostics

    *TokuDB* provides users with the tools to diagnose locking and\ndeadlock issues. For more information, see [Lock Visualization in TokuDB](tokudb_troubleshooting.md#lock-visualization-in-tokudb).\n

    Progress Tracking

    Running `SHOW PROCESSLIST` when adding indexes provides\nstatus on how many rows have been processed. Running `SHOW PROCESSLIST`\nalso shows progress on queries, as well as insertions, deletions and\nupdates. This information is helpful for estimating how long operations will\ntake to complete.\n

    Fast Recovery

    *TokuDB* supports very fast recovery, typically less than a minute.\n
    "},{"location":"tokudb-performance-schema.html","title":"TokuDB Performance Schema integration","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

TokuDB is integrated with Performance Schema.

    This integration can be used for profiling additional TokuDB operations.

TokuDB instruments available in Performance Schema can be seen in the PERFORMANCE_SCHEMA.SETUP_INSTRUMENTS table:

    mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE \"%/fti/%\";\n

    The output could be the following:

    +------------------------------------------------------------+---------+-------+\n| NAME                                                       | ENABLED | TIMED |\n+------------------------------------------------------------+---------+-------+\n| wait/synch/mutex/fti/kibbutz_mutex                         | NO      | NO    |\n| wait/synch/mutex/fti/minicron_p_mutex                      | NO      | NO    |\n| wait/synch/mutex/fti/queue_result_mutex                    | NO      | NO    |\n| wait/synch/mutex/fti/tpool_lock_mutex                      | NO      | NO    |\n| wait/synch/mutex/fti/workset_lock_mutex                    | NO      | NO    |\n| wait/synch/mutex/fti/bjm_jobs_lock_mutex                   | NO      | NO    |\n| wait/synch/mutex/fti/log_internal_lock_mutex               | NO      | NO    |\n| wait/synch/mutex/fti/cachetable_ev_thread_lock_mutex       | NO      | NO    |\n| wait/synch/mutex/fti/cachetable_disk_nb_mutex              | NO      | NO    |\n| wait/synch/mutex/fti/safe_file_size_lock_mutex             | NO      | NO    |\n| wait/synch/mutex/fti/cachetable_m_mutex_key                | NO      | NO    |\n| wait/synch/mutex/fti/checkpoint_safe_mutex                 | NO      | NO    |\n| wait/synch/mutex/fti/ft_ref_lock_mutex                     | NO      | NO    |\n| wait/synch/mutex/fti/ft_open_close_lock_mutex              | NO      | NO    |\n| wait/synch/mutex/fti/loader_error_mutex                    | NO      | NO    |\n| wait/synch/mutex/fti/bfs_mutex                             | NO      | NO    |\n| wait/synch/mutex/fti/loader_bl_mutex                       | NO      | NO    |\n| wait/synch/mutex/fti/loader_fi_lock_mutex                  | NO      | NO    |\n| wait/synch/mutex/fti/loader_out_mutex                      | NO      | NO    |\n| wait/synch/mutex/fti/result_output_condition_lock_mutex    | NO      | NO    |\n| wait/synch/mutex/fti/block_table_mutex                     | NO      | NO    |\n| wait/synch/mutex/fti/rollback_log_node_cache_mutex         | NO      | NO    |\n| wait/synch/mutex/fti/txn_lock_mutex                        | NO      | NO    |\n| wait/synch/mutex/fti/txn_state_lock_mutex                  | NO      | NO    |\n| wait/synch/mutex/fti/txn_child_manager_mutex               | NO      | NO    |\n| wait/synch/mutex/fti/txn_manager_lock_mutex                | NO      | NO    |\n| wait/synch/mutex/fti/treenode_mutex                        | NO      | NO    |\n| wait/synch/mutex/fti/locktree_request_info_mutex           | NO      | NO    |\n| wait/synch/mutex/fti/locktree_request_info_retry_mutex_key | NO      | NO    |\n| wait/synch/mutex/fti/manager_mutex                         | NO      | NO    |\n| wait/synch/mutex/fti/manager_escalation_mutex              | NO      | NO    |\n| wait/synch/mutex/fti/db_txn_struct_i_txn_mutex             | NO      | NO    |\n| wait/synch/mutex/fti/manager_escalator_mutex               | NO      | NO    |\n| wait/synch/mutex/fti/indexer_i_indexer_lock_mutex          | NO      | NO    |\n| wait/synch/mutex/fti/indexer_i_indexer_estimate_lock_mutex | NO      | NO    |\n| wait/synch/mutex/fti/fti_probe_1                           | NO      | NO    |\n| wait/synch/rwlock/fti/multi_operation_lock                 | NO      | NO    |\n| wait/synch/rwlock/fti/low_priority_multi_operation_lock    | NO      | NO    |\n| wait/synch/rwlock/fti/cachetable_m_list_lock               | NO      | NO    |\n| wait/synch/rwlock/fti/cachetable_m_pending_lock_expensive  | NO      | NO    |\n| 
wait/synch/rwlock/fti/cachetable_m_pending_lock_cheap      | NO      | NO    |\n| wait/synch/rwlock/fti/cachetable_m_lock                    | NO      | NO    |\n| wait/synch/rwlock/fti/result_i_open_dbs_rwlock             | NO      | NO    |\n| wait/synch/rwlock/fti/checkpoint_safe_rwlock               | NO      | NO    |\n| wait/synch/rwlock/fti/cachetable_value                     | NO      | NO    |\n| wait/synch/rwlock/fti/safe_file_size_lock_rwlock           | NO      | NO    |\n| wait/synch/rwlock/fti/cachetable_disk_nb_rwlock            | NO      | NO    |\n| wait/synch/cond/fti/result_state_cond                      | NO      | NO    |\n| wait/synch/cond/fti/bjm_jobs_wait                          | NO      | NO    |\n| wait/synch/cond/fti/cachetable_p_refcount_wait             | NO      | NO    |\n| wait/synch/cond/fti/cachetable_m_flow_control_cond         | NO      | NO    |\n| wait/synch/cond/fti/cachetable_m_ev_thread_cond            | NO      | NO    |\n| wait/synch/cond/fti/bfs_cond                               | NO      | NO    |\n| wait/synch/cond/fti/result_output_condition                | NO      | NO    |\n| wait/synch/cond/fti/manager_m_escalator_done               | NO      | NO    |\n| wait/synch/cond/fti/lock_request_m_wait_cond               | NO      | NO    |\n| wait/synch/cond/fti/queue_result_cond                      | NO      | NO    |\n| wait/synch/cond/fti/ws_worker_wait                         | NO      | NO    |\n| wait/synch/cond/fti/rwlock_wait_read                       | NO      | NO    |\n| wait/synch/cond/fti/rwlock_wait_write                      | NO      | NO    |\n| wait/synch/cond/fti/rwlock_cond                            | NO      | NO    |\n| wait/synch/cond/fti/tp_thread_wait                         | NO      | NO    |\n| wait/synch/cond/fti/tp_pool_wait_free                      | NO      | NO    |\n| wait/synch/cond/fti/frwlock_m_wait_read                    | NO      | NO    |\n| wait/synch/cond/fti/kibbutz_k_cond                         | NO      | NO    |\n| wait/synch/cond/fti/minicron_p_condvar                     | NO      | NO    |\n| wait/synch/cond/fti/locktree_request_info_retry_cv_key     | NO      | NO    |\n| wait/io/file/fti/tokudb_data_file                          | YES     | YES   |\n| wait/io/file/fti/tokudb_load_file                          | YES     | YES   |\n| wait/io/file/fti/tokudb_tmp_file                           | YES     | YES   |\n| wait/io/file/fti/tokudb_log_file                           | YES     | YES   |\n+------------------------------------------------------------+---------+-------+\n

For TokuDB-related objects, the following clauses can be used when querying Performance Schema tables:

    • WHERE EVENT_NAME LIKE '%fti%' or

    • WHERE NAME LIKE '%fti%'

    For example, to get the information about TokuDB related events you can query PERFORMANCE_SCHEMA.events_waits_summary_global_by_event_name like:

    mysql> SELECT * FROM performance_schema.events_waits_summary_global_by_event_name WHERE EVENT_NAME LIKE '%fti%';\n

    The output could be the following:

    +-----------------------------------------+------------+----------------+----------------+----------------+----------------+\n| EVENT_NAME                              | COUNT_STAR | SUM_TIMER_WAIT | MIN_TIMER_WAIT | AVG_TIMER_WAIT | MAX_TIMER_WAIT |\n+-----------------------------------------+------------+----------------+----------------+----------------+----------------+\n| wait/synch/mutex/fti/kibbutz_mutex      |          0 |              0 |              0 |              0 |              0 |\n| wait/synch/mutex/fti/minicron_p_mutex   |          0 |              0 |              0 |              0 |              0 |\n| wait/synch/mutex/fti/queue_result_mutex |          0 |              0 |              0 |              0 |              0 |\n| wait/synch/mutex/fti/tpool_lock_mutex   |          0 |              0 |              0 |              0 |              0 |\n| wait/synch/mutex/fti/workset_lock_mutex |          0 |              0 |              0 |              0 |              0 |\n...\n| wait/io/file/fti/tokudb_data_file       |         30 |      179862410 |              0 |        5995080 |       68488420 |\n| wait/io/file/fti/tokudb_load_file       |          0 |              0 |              0 |              0 |              0 |\n| wait/io/file/fti/tokudb_tmp_file        |          0 |              0 |              0 |              0 |              0 |\n| wait/io/file/fti/tokudb_log_file        |       1367 |  2925647870145 |              0 |     2140195785 |    12013357720 |\n+-----------------------------------------+------------+----------------+----------------+----------------+----------------+\n71 rows in set (0.02 sec)\n
    "},{"location":"tokudb-quickstart.html","title":"Get started with TokuDB","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    • Operating Systems

      TokuDB is currently supported on 64-bit Linux only.

    • Memory

      TokuDB requires at least 1 GB of main memory.

      For the best results, run with at least 2 GB of main memory.

    • Disk space and configuration

      Make sure to allocate enough disk space for data, indexes and logs.

      Due to its high compression, TokuDB may achieve up to 25x space savings on data and indexes over InnoDB.

    "},{"location":"tokudb-quickstart.html#creating-tables-and-loading-data","title":"Creating Tables and Loading Data","text":"

    TokuDB tables are created the same way as other tables in MySQL by specifying ENGINE=TokuDB in the table definition. For example, the following command creates a table with a single column and uses the TokuDB storage engine to store its data:

    mysql> CREATE TABLE example (
        id INT(11) NOT NULL
    ) ENGINE=TokuDB;
    "},{"location":"tokudb-quickstart.html#loading-data","title":"Loading data","text":"

    Once TokuDB tables have been created, data can be inserted or loaded using standard MySQL insert or bulk load operations. For example, the following command loads data from a file into the table:

    mysql> LOAD DATA INFILE 'file.csv'
    INTO TABLE example;

    Note

    For more information about loading data, see the MySQL 8.0 reference manual.

    "},{"location":"tokudb-quickstart.html#migrating-data-from-an-existing-database","title":"Migrating Data from an Existing Database","text":"

    Use the following command to convert an existing table for the TokuDB storage engine:

    mysql> ALTER TABLE example
    ENGINE=TokuDB;
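    To confirm that the conversion succeeded, you can check the table's engine through the standard INFORMATION_SCHEMA (a quick sketch; the table name example is illustrative):

    ```
    mysql> SELECT TABLE_NAME, ENGINE
    FROM information_schema.TABLES
    WHERE TABLE_NAME = 'example';
    ```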
    "},{"location":"tokudb-quickstart.html#bulk-loading-data","title":"Bulk Loading Data","text":"

    The TokuDB bulk loader imports data much faster than regular MySQL with InnoDB. To make use of the loader you need flat files in either comma separated or tab separated format. The MySQL LOAD DATA INFILE statement will invoke the bulk loader if the table is empty. Keep in mind that while this is the most convenient and, in most cases, the fastest way to initialize a TokuDB table, it may not be replication safe if applied to the source.

    To obtain the logical backup and then bulk load into TokuDB, follow these steps:

    1. Create a logical backup of the original table. The easiest way to achieve this is using SELECT ... INTO OUTFILE. Keep in mind that the file will be created on the server: SELECT * FROM example INTO OUTFILE 'file.csv';

    2. Copy the output file either to the destination server or the client machine from which you plan to load it.

    3. Load the data into the server using LOAD DATA INFILE. If loading from a machine other than the server, use the LOCAL keyword to point to the file on the local machine. Keep in mind that you will need enough disk space in the temporary directory on the server, since the local file is copied onto the server by the MySQL client utility: LOAD DATA [LOCAL] INFILE 'file.csv'; A combined sketch of all three steps follows.
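    Putting the three steps together, a minimal sketch (the table name example and the path /tmp/file.csv are illustrative; the OUTFILE path must be permitted by secure_file_priv, and LOCAL requires local_infile to be enabled):

    ```
    -- on the source server
    SELECT * FROM example INTO OUTFILE '/tmp/file.csv';

    -- on the destination server, after copying the file over
    LOAD DATA LOCAL INFILE '/tmp/file.csv' INTO TABLE example;
    ```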

    It is possible to create the CSV file using either mysqldump or the MySQL client utility as well, in which case the resulting file will reside in a local directory. In these two cases, make sure to use the correct command-line options to create a file compatible with LOAD DATA INFILE.

    The bulk loader will use more space than normal for logs and temporary files while running; make sure that your file system has enough disk space to process your load. As a rule of thumb, it should be approximately 1.5 times the size of the raw data.

    Note

    Please read the original MySQL Documentation to understand the needed privileges and replication issues around LOAD DATA INFILE.

    "},{"location":"tokudb-quickstart.html#considerations-to-run-tokudb-in-production","title":"Considerations to Run TokuDB in Production","text":"

    In most cases, the default options should be left in place to run TokuDB; however, it is a good idea to review some of the configuration parameters.

    "},{"location":"tokudb-quickstart.html#memory-allocation","title":"Memory allocation","text":"

    TokuDB will allocate 50% of the installed RAM for its own cache (the tokudb_cache_size global variable). While this is optimal in most situations, there are cases where it may lead to memory over-allocation. If the system tries to allocate more memory than is available, the machine will begin swapping and run much slower than normal.

    It is necessary to set the tokudb_cache_size to a value other than the default in the following cases:

    Running other memory-heavy processes on the same server as TokuDB

    In many cases, the database process needs to share the system with other server processes, such as additional database instances, an HTTP server, an application server, an e-mail server, or monitoring systems. In order to properly configure TokuDB's memory consumption, it is important to understand how much free memory will be left and assign a sensible value for *TokuDB*. There is no fixed rule, but a conservative choice would be 50% of available RAM while all the other processes are running. If the result is under 2 GB, you should consider moving some of the other processes to a different system or using a dedicated database server.

    tokudb_cache_size is a static variable, so it needs to be set before starting the server and cannot be changed while the server is running. For example, to set TokuDB's cache to 4G, add the following line to your `my.cnf` file:

    ```
    tokudb_cache_size = 4G
    ```
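    After restarting the server, a quick sketch to verify the effective value (the variable is reported in bytes):

    ```
    mysql> SHOW GLOBAL VARIABLES LIKE 'tokudb_cache_size';
    ```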

    System using InnoDB and TokuDB

    When using both the *TokuDB* and *InnoDB* storage engines, you need to manage the cache size for each. For example, on a server with 16 GB of RAM you could use the following values in your configuration file:

    ```
    innodb_buffer_pool_size = 2G
    tokudb_cache_size = 8G
    ```

    Using TokuDB with Federated or FederatedX tables

    The Federated engine in *MySQL* and FederatedX in *MariaDB* allow you to connect to a table on a remote server and query it as if it were a local table (see the MySQL Documentation: 14.11. The FEDERATED Storage Engine for details). When accessing the remote table, these engines may import the complete table contents to the local server in order to execute a query. In this case, you have to make sure that there is enough free memory on the server to handle these remote tables. For example, if your remote table is 8 GB in size, the server needs more than 8 GB of free RAM to process queries against that table without going into swap or crashing the *MySQL* process. There are no parameters to limit the amount of memory that the Federated or FederatedX engine will allocate while importing the remote dataset.
    "},{"location":"tokudb-quickstart.html#specifying-the-location-for-files","title":"Specifying the Location for Files","text":"

    As with InnoDB, it is possible to specify locations other than the defaults for TokuDB's data, log, and temporary files. This way you can distribute the load and control disk space usage. The following variables control file locations:

    • tokudb_data_dir: This variable defines the directory where the TokuDB tables are stored. The default location for TokuDB's data files is the MySQL data directory.

    • tokudb_log_dir: This variable defines the directory where the TokuDB log files are stored. The default location for TokuDB's log files is the MySQL data directory. Configuring a separate log directory is somewhat involved and should be done only if absolutely necessary. We recommend keeping the data and log files under the same directory.

    • tokudb_tmp_dir: This variable defines the directory where the TokuDB bulk loader stores temporary files. The bulk loader can create large temporary files while it is loading a table, so putting these temporary files on a disk separate from the data directory can be useful. For example, it can make sense to use a high-performance disk for the data directory and a very inexpensive disk for the temporary directory. The default location for TokuDB's temporary files is the MySQL data directory.
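    For example, a minimal my.cnf sketch that keeps data and log files together but moves the bulk loader's temporary files to a cheaper disk (the paths are illustrative; they must exist and be writable by the mysqld user before the server starts):

    ```
    [mysqld]
    tokudb_data_dir = /data/tokudb
    tokudb_log_dir  = /data/tokudb
    tokudb_tmp_dir  = /cheap-disk/tokudb-tmp
    ```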

    "},{"location":"tokudb-quickstart.html#table-maintenance","title":"Table Maintenance","text":"

    The fractal tree provides fast performance by inserting small messages into the buffers of the fractal tree, instead of requiring a potential I/O operation for every row updated, as a B-tree does. Additional background information on how fractal trees operate is available in the TokuDB documentation. For tables whose workload pattern includes a high number of sequential deletes, it may be beneficial to flush these delete messages down to the basement nodes to allow faster access. This operation is performed with the OPTIMIZE command.

    The following extensions to the OPTIMIZE command have been added in TokuDB version 7.5.5:

    "},{"location":"tokudb-quickstart.html#hot-optimize-throttling","title":"Hot Optimize Throttling","text":"

    By default, table optimization runs with all available resources. To limit the resources used, it is possible to limit the speed of table optimization. The tokudb_optimize_throttle session variable determines an upper bound on how many fractal tree leaf nodes per second are optimized. The default is 0 (no upper bound), with a valid range of [0,1000000]. For example, to limit table optimization to one leaf node per second, set tokudb_optimize_throttle=1, as in the sketch below.
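    A minimal sketch that throttles a one-off optimization (the table name t is illustrative and matches the examples that follow):

    ```
    mysql> SET tokudb_optimize_throttle=1;
    OPTIMIZE TABLE t;
    ```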

    "},{"location":"tokudb-quickstart.html#optimize-a-single-index-of-a-table","title":"Optimize a Single Index of a Table","text":"

    To optimize a single index in a table, the tokudb_optimize_index_name session variable can be set to select the index by name. For example, to optimize the primary key of a table:

    mysql> SET tokudb_optimize_index_name='primary';
    OPTIMIZE TABLE t;
    "},{"location":"tokudb-quickstart.html#optimize-a-subset-of-a-fractal-tree-index","title":"Optimize a Subset of a Fractal Tree Index","text":"

    For patterns where the left side of the tree has many deletions (a common pattern with increasing id or date values), it may be useful to optimize only a fraction of the tree. In this case, it is possible to optimize a subset of a fractal tree starting at the left side. The tokudb_optimize_index_fraction session variable controls the size of the subtree. Valid values are in the range [0.0,1.0], with a default of 1.0 (optimize the whole tree). For example, to optimize the leftmost 10% of the primary key:

    SET tokudb_optimize_index_name='primary';
    SET tokudb_optimize_index_fraction=0.1;
    OPTIMIZE TABLE t;
    "},{"location":"tokudb-status-variables.html","title":"TokuDB status variables","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    TokuDB status variables provide details about the inner workings of the TokuDB storage engine, and they can be useful in tuning the storage engine for a particular environment.

    You can view these variables and their values by running:

    mysql> SHOW STATUS LIKE 'tokudb%';
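    The same LIKE pattern can be narrowed to a single family of counters. For example, to look only at the cachetable miss counters described below:

    ```
    mysql> SHOW GLOBAL STATUS LIKE 'Tokudb_CACHETABLE_MISS%';
    ```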
    "},{"location":"tokudb-status-variables.html#tokudb-status-variables-summary","title":"TokuDB Status Variables Summary","text":"

    The following global status variables are available:

    | Name | Var Type |
    | --- | --- |
    | Tokudb_DB_OPENS | integer |
    | Tokudb_DB_CLOSES | integer |
    | Tokudb_DB_OPEN_CURRENT | integer |
    | Tokudb_DB_OPEN_MAX | integer |
    | Tokudb_LEAF_ENTRY_MAX_COMMITTED_XR | integer |
    | Tokudb_LEAF_ENTRY_MAX_PROVISIONAL_XR | integer |
    | Tokudb_LEAF_ENTRY_EXPANDED | integer |
    | Tokudb_LEAF_ENTRY_MAX_MEMSIZE | integer |
    | Tokudb_LEAF_ENTRY_APPLY_GC_BYTES_IN | integer |
    | Tokudb_LEAF_ENTRY_APPLY_GC_BYTES_OUT | integer |
    | Tokudb_LEAF_ENTRY_NORMAL_GC_BYTES_IN | integer |
    | Tokudb_LEAF_ENTRY_NORMAL_GC_BYTES_OUT | integer |
    | Tokudb_CHECKPOINT_PERIOD | integer |
    | Tokudb_CHECKPOINT_FOOTPRINT | integer |
    | Tokudb_CHECKPOINT_LAST_BEGAN | datetime |
    | Tokudb_CHECKPOINT_LAST_COMPLETE_BEGAN | datetime |
    | Tokudb_CHECKPOINT_LAST_COMPLETE_ENDED | datetime |
    | Tokudb_CHECKPOINT_DURATION | integer |
    | Tokudb_CHECKPOINT_DURATION_LAST | integer |
    | Tokudb_CHECKPOINT_LAST_LSN | integer |
    | Tokudb_CHECKPOINT_TAKEN | integer |
    | Tokudb_CHECKPOINT_FAILED | integer |
    | Tokudb_CHECKPOINT_WAITERS_NOW | integer |
    | Tokudb_CHECKPOINT_WAITERS_MAX | integer |
    | Tokudb_CHECKPOINT_CLIENT_WAIT_ON_MO | integer |
    | Tokudb_CHECKPOINT_CLIENT_WAIT_ON_CS | integer |
    | Tokudb_CHECKPOINT_BEGIN_TIME | integer |
    | Tokudb_CHECKPOINT_LONG_BEGIN_TIME | integer |
    | Tokudb_CHECKPOINT_LONG_BEGIN_COUNT | integer |
    | Tokudb_CHECKPOINT_END_TIME | integer |
    | Tokudb_CHECKPOINT_LONG_END_TIME | integer |
    | Tokudb_CHECKPOINT_LONG_END_COUNT | integer |
    | Tokudb_CACHETABLE_MISS | integer |
    | Tokudb_CACHETABLE_MISS_TIME | integer |
    | Tokudb_CACHETABLE_PREFETCHES | integer |
    | Tokudb_CACHETABLE_SIZE_CURRENT | integer |
    | Tokudb_CACHETABLE_SIZE_LIMIT | integer |
    | Tokudb_CACHETABLE_SIZE_WRITING | integer |
    | Tokudb_CACHETABLE_SIZE_NONLEAF | integer |
    | Tokudb_CACHETABLE_SIZE_LEAF | integer |
    | Tokudb_CACHETABLE_SIZE_ROLLBACK | integer |
    | Tokudb_CACHETABLE_SIZE_CACHEPRESSURE | integer |
    | Tokudb_CACHETABLE_SIZE_CLONED | integer |
    | Tokudb_CACHETABLE_EVICTIONS | integer |
    | Tokudb_CACHETABLE_CLEANER_EXECUTIONS | integer |
    | Tokudb_CACHETABLE_CLEANER_PERIOD | integer |
    | Tokudb_CACHETABLE_CLEANER_ITERATIONS | integer |
    | Tokudb_CACHETABLE_WAIT_PRESSURE_COUNT | integer |
    | Tokudb_CACHETABLE_WAIT_PRESSURE_TIME | integer |
    | Tokudb_CACHETABLE_LONG_WAIT_PRESSURE_COUNT | integer |
    | Tokudb_CACHETABLE_LONG_WAIT_PRESSURE_TIME | integer |
    | Tokudb_CACHETABLE_POOL_CLIENT_NUM_THREADS | integer |
    | Tokudb_CACHETABLE_POOL_CLIENT_NUM_THREADS_ACTIVE | integer |
    | Tokudb_CACHETABLE_POOL_CLIENT_QUEUE_SIZE | integer |
    | Tokudb_CACHETABLE_POOL_CLIENT_MAX_QUEUE_SIZE | integer |
    | Tokudb_CACHETABLE_POOL_CLIENT_TOTAL_ITEMS_PROCESSED | integer |
    | Tokudb_CACHETABLE_POOL_CLIENT_TOTAL_EXECUTION_TIME | integer |
    | Tokudb_CACHETABLE_POOL_CACHETABLE_NUM_THREADS | integer |
    | Tokudb_CACHETABLE_POOL_CACHETABLE_NUM_THREADS_ACTIVE | integer |
    | Tokudb_CACHETABLE_POOL_CACHETABLE_QUEUE_SIZE | integer |
    | Tokudb_CACHETABLE_POOL_CACHETABLE_MAX_QUEUE_SIZE | integer |
    | Tokudb_CACHETABLE_POOL_CACHETABLE_TOTAL_ITEMS_PROCESSED | integer |
    | Tokudb_CACHETABLE_POOL_CACHETABLE_TOTAL_EXECUTION_TIME | integer |
    | Tokudb_CACHETABLE_POOL_CHECKPOINT_NUM_THREADS | integer |
    | Tokudb_CACHETABLE_POOL_CHECKPOINT_NUM_THREADS_ACTIVE | integer |
    | Tokudb_CACHETABLE_POOL_CHECKPOINT_QUEUE_SIZE | integer |
    | Tokudb_CACHETABLE_POOL_CHECKPOINT_MAX_QUEUE_SIZE | integer |
    | Tokudb_CACHETABLE_POOL_CHECKPOINT_TOTAL_ITEMS_PROCESSED | integer |
    | Tokudb_CACHETABLE_POOL_CHECKPOINT_TOTAL_EXECUTION_TIME | integer |
    | Tokudb_LOCKTREE_MEMORY_SIZE | integer |
    | Tokudb_LOCKTREE_MEMORY_SIZE_LIMIT | integer |
    | Tokudb_LOCKTREE_ESCALATION_NUM | integer |
    | Tokudb_LOCKTREE_ESCALATION_SECONDS | numeric |
    | Tokudb_LOCKTREE_LATEST_POST_ESCALATION_MEMORY_SIZE | integer |
    | Tokudb_LOCKTREE_OPEN_CURRENT | integer |
    | Tokudb_LOCKTREE_PENDING_LOCK_REQUESTS | integer |
    | Tokudb_LOCKTREE_STO_ELIGIBLE_NUM | integer |
    | Tokudb_LOCKTREE_STO_ENDED_NUM | integer |
    | Tokudb_LOCKTREE_STO_ENDED_SECONDS | numeric |
    | Tokudb_LOCKTREE_WAIT_COUNT | integer |
    | Tokudb_LOCKTREE_WAIT_TIME | integer |
    | Tokudb_LOCKTREE_LONG_WAIT_COUNT | integer |
    | Tokudb_LOCKTREE_LONG_WAIT_TIME | integer |
    | Tokudb_LOCKTREE_TIMEOUT_COUNT | integer |
    | Tokudb_LOCKTREE_WAIT_ESCALATION_COUNT | integer |
    | Tokudb_LOCKTREE_WAIT_ESCALATION_TIME | integer |
    | Tokudb_LOCKTREE_LONG_WAIT_ESCALATION_COUNT | integer |
    | Tokudb_LOCKTREE_LONG_WAIT_ESCALATION_TIME | integer |
    | Tokudb_DICTIONARY_UPDATES | integer |
    | Tokudb_DICTIONARY_BROADCAST_UPDATES | integer |
    | Tokudb_DESCRIPTOR_SET | integer |
    | Tokudb_MESSAGES_IGNORED_BY_LEAF_DUE_TO_MSN | integer |
    | Tokudb_TOTAL_SEARCH_RETRIES | integer |
    | Tokudb_SEARCH_TRIES_GT_HEIGHT | integer |
    | Tokudb_SEARCH_TRIES_GT_HEIGHTPLUS3 | integer |
    | Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT | integer |
    | Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_BYTES | integer |
    | Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_UNCOMPRESSED_BYTES | integer |
    | Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_SECONDS | numeric |
    | Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT | integer |
    | Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_BYTES | integer |
    | Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_UNCOMPRESSE | integer |
    | Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_SECONDS | numeric |
    | Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT | integer |
    | Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_BYTES | integer |
    | Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_UNCOMPRESSED_BYTES | integer |
    | Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_SECONDS | numeric |
    | Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT | integer |
    | Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_BYTES | integer |
    | Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_UNCOMPRESSED_BY | integer |
    | Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_SECONDS | numeric |
    | Tokudb_LEAF_NODE_COMPRESSION_RATIO | numeric |
    | Tokudb_NONLEAF_NODE_COMPRESSION_RATIO | numeric |
    | Tokudb_OVERALL_NODE_COMPRESSION_RATIO | numeric |
    | Tokudb_NONLEAF_NODE_PARTIAL_EVICTIONS | numeric |
    | Tokudb_NONLEAF_NODE_PARTIAL_EVICTIONS_BYTES | integer |
    | Tokudb_LEAF_NODE_PARTIAL_EVICTIONS | integer |
    | Tokudb_LEAF_NODE_PARTIAL_EVICTIONS_BYTES | integer |
    | Tokudb_LEAF_NODE_FULL_EVICTIONS | integer |
    | Tokudb_LEAF_NODE_FULL_EVICTIONS_BYTES | integer |
    | Tokudb_NONLEAF_NODE_FULL_EVICTIONS | integer |
    | Tokudb_NONLEAF_NODE_FULL_EVICTIONS_BYTES | integer |
    | Tokudb_LEAF_NODES_CREATED | integer |
    | Tokudb_NONLEAF_NODES_CREATED | integer |
    | Tokudb_LEAF_NODES_DESTROYED | integer |
    | Tokudb_NONLEAF_NODES_DESTROYED | integer |
    | Tokudb_MESSAGES_INJECTED_AT_ROOT_BYTES | integer |
    | Tokudb_MESSAGES_FLUSHED_FROM_H1_TO_LEAVES_BYTES | integer |
    | Tokudb_MESSAGES_IN_TREES_ESTIMATE_BYTES | integer |
    | Tokudb_MESSAGES_INJECTED_AT_ROOT | integer |
    | Tokudb_BROADCASE_MESSAGES_INJECTED_AT_ROOT | integer |
    | Tokudb_BASEMENTS_DECOMPRESSED_TARGET_QUERY | integer |
    | Tokudb_BASEMENTS_DECOMPRESSED_PRELOCKED_RANGE | integer |
    | Tokudb_BASEMENTS_DECOMPRESSED_PREFETCH | integer |
    | Tokudb_BASEMENTS_DECOMPRESSED_FOR_WRITE | integer |
    | Tokudb_BUFFERS_DECOMPRESSED_TARGET_QUERY | integer |
    | Tokudb_BUFFERS_DECOMPRESSED_PRELOCKED_RANGE | integer |
    | Tokudb_BUFFERS_DECOMPRESSED_PREFETCH | integer |
    | Tokudb_BUFFERS_DECOMPRESSED_FOR_WRITE | integer |
    | Tokudb_PIVOTS_FETCHED_FOR_QUERY | integer |
    | Tokudb_PIVOTS_FETCHED_FOR_QUERY_BYTES | integer |
    | Tokudb_PIVOTS_FETCHED_FOR_QUERY_SECONDS | numeric |
    | Tokudb_PIVOTS_FETCHED_FOR_PREFETCH | integer |
    | Tokudb_PIVOTS_FETCHED_FOR_PREFETCH_BYTES | integer |
    | Tokudb_PIVOTS_FETCHED_FOR_PREFETCH_SECONDS | numeric |
    | Tokudb_PIVOTS_FETCHED_FOR_WRITE | integer |
    | Tokudb_PIVOTS_FETCHED_FOR_WRITE_BYTES | integer |
    | Tokudb_PIVOTS_FETCHED_FOR_WRITE_SECONDS | numeric |
    | Tokudb_BASEMENTS_FETCHED_TARGET_QUERY | integer |
    | Tokudb_BASEMENTS_FETCHED_TARGET_QUERY_BYTES | integer |
    | Tokudb_BASEMENTS_FETCHED_TARGET_QUERY_SECONDS | numeric |
    | Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE | integer |
    | Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE_BYTES | integer |
    | Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE_SECONDS | numeric |
    | Tokudb_BASEMENTS_FETCHED_PREFETCH | integer |
    | Tokudb_BASEMENTS_FETCHED_PREFETCH_BYTES | integer |
    | Tokudb_BASEMENTS_FETCHED_PREFETCH_SECONDS | numeric |
    | Tokudb_BASEMENTS_FETCHED_FOR_WRITE | integer |
    | Tokudb_BASEMENTS_FETCHED_FOR_WRITE_BYTES | integer |
    | Tokudb_BASEMENTS_FETCHED_FOR_WRITE_SECONDS | numeric |
    | Tokudb_BUFFERS_FETCHED_TARGET_QUERY | integer |
    | Tokudb_BUFFERS_FETCHED_TARGET_QUERY_BYTES | integer |
    | Tokudb_BUFFERS_FETCHED_TARGET_QUERY_SECONDS | numeric |
    | Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE | integer |
    | Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE_BYTES | integer |
    | Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE_SECONDS | numeric |
    | Tokudb_BUFFERS_FETCHED_PREFETCH | integer |
    | Tokudb_BUFFERS_FETCHED_PREFETCH_BYTES | integer |
    | Tokudb_BUFFERS_FETCHED_PREFETCH_SECONDS | numeric |
    | Tokudb_BUFFERS_FETCHED_FOR_WRITE | integer |
    | Tokudb_BUFFERS_FETCHED_FOR_WRITE_BYTES | integer |
    | Tokudb_BUFFERS_FETCHED_FOR_WRITE_SECONDS | integer |
    | Tokudb_LEAF_COMPRESSION_TO_MEMORY_SECONDS | numeric |
    | Tokudb_LEAF_SERIALIZATION_TO_MEMORY_SECONDS | numeric |
    | Tokudb_LEAF_DECOMPRESSION_TO_MEMORY_SECONDS | numeric |
    | Tokudb_LEAF_DESERIALIZATION_TO_MEMORY_SECONDS | numeric |
    | Tokudb_NONLEAF_COMPRESSION_TO_MEMORY_SECONDS | numeric |
    | Tokudb_NONLEAF_SERIALIZATION_TO_MEMORY_SECONDS | numeric |
    | Tokudb_NONLEAF_DECOMPRESSION_TO_MEMORY_SECONDS | numeric |
    | Tokudb_NONLEAF_DESERIALIZATION_TO_MEMORY_SECONDS | numeric |
    | Tokudb_PROMOTION_ROOTS_SPLIT | integer |
    | Tokudb_PROMOTION_LEAF_ROOTS_INJECTED_INTO | integer |
    | Tokudb_PROMOTION_H1_ROOTS_INJECTED_INTO | integer |
    | Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_0 | integer |
    | Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_1 | integer |
    | Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_2 | integer |
    | Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_3 | integer |
    | Tokudb_PROMOTION_INJECTIONS_LOWER_THAN_DEPTH_3 | integer |
    | Tokudb_PROMOTION_STOPPED_NONEMPTY_BUFFER | integer |
    | Tokudb_PROMOTION_STOPPED_AT_HEIGHT_1 | integer |
    | Tokudb_PROMOTION_STOPPED_CHILD_LOCKED_OR_NOT_IN_MEMORY | integer |
    | Tokudb_PROMOTION_STOPPED_CHILD_NOT_FULLY_IN_MEMORY | integer |
    | Tokudb_PROMOTION_STOPPED_AFTER_LOCKING_CHILD | integer |
    | Tokudb_BASEMENT_DESERIALIZATION_FIXED_KEY | integer |
    | Tokudb_BASEMENT_DESERIALIZATION_VARIABLE_KEY | integer |
    | Tokudb_PRO_RIGHTMOST_LEAF_SHORTCUT_SUCCESS | integer |
    | Tokudb_PRO_RIGHTMOST_LEAF_SHORTCUT_FAIL_POS | integer |
    | Tokudb_RIGHTMOST_LEAF_SHORTCUT_FAIL_REACTIVE | integer |
    | Tokudb_CURSOR_SKIP_DELETED_LEAF_ENTRY | integer |
    | Tokudb_FLUSHER_CLEANER_TOTAL_NODES | integer |
    | Tokudb_FLUSHER_CLEANER_H1_NODES | integer |
    | Tokudb_FLUSHER_CLEANER_HGT1_NODES | integer |
    | Tokudb_FLUSHER_CLEANER_EMPTY_NODES | integer |
    | Tokudb_FLUSHER_CLEANER_NODES_DIRTIED | integer |
    | Tokudb_FLUSHER_CLEANER_MAX_BUFFER_SIZE | integer |
    | Tokudb_FLUSHER_CLEANER_MIN_BUFFER_SIZE | integer |
    | Tokudb_FLUSHER_CLEANER_TOTAL_BUFFER_SIZE | integer |
    | Tokudb_FLUSHER_CLEANER_MAX_BUFFER_WORKDONE | integer |
    | Tokudb_FLUSHER_CLEANER_MIN_BUFFER_WORKDONE | integer |
    | Tokudb_FLUSHER_CLEANER_TOTAL_BUFFER_WORKDONE | integer |
    | Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_STARTED | integer |
    | Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_RUNNING | integer |
    | Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_COMPLETED | integer |
    | Tokudb_FLUSHER_CLEANER_NUM_DIRTIED_FOR_LEAF_MERGE | integer |
    | Tokudb_FLUSHER_FLUSH_TOTAL | integer |
    | Tokudb_FLUSHER_FLUSH_IN_MEMORY | integer |
    | Tokudb_FLUSHER_FLUSH_NEEDED_IO | integer |
    | Tokudb_FLUSHER_FLUSH_CASCADES | integer |
    | Tokudb_FLUSHER_FLUSH_CASCADES_1 | integer |
    | Tokudb_FLUSHER_FLUSH_CASCADES_2 | integer |
    | Tokudb_FLUSHER_FLUSH_CASCADES_3 | integer |
    | Tokudb_FLUSHER_FLUSH_CASCADES_4 | integer |
    | Tokudb_FLUSHER_FLUSH_CASCADES_5 | integer |
    | Tokudb_FLUSHER_FLUSH_CASCADES_GT_5 | integer |
    | Tokudb_FLUSHER_SPLIT_LEAF | integer |
    | Tokudb_FLUSHER_SPLIT_NONLEAF | integer |
    | Tokudb_FLUSHER_MERGE_LEAF | integer |
    | Tokudb_FLUSHER_MERGE_NONLEAF | integer |
    | Tokudb_FLUSHER_BALANCE_LEAF | integer |
    | Tokudb_HOT_NUM_STARTED | integer |
    | Tokudb_HOT_NUM_COMPLETED | integer |
    | Tokudb_HOT_NUM_ABORTED | integer |
    | Tokudb_HOT_MAX_ROOT_FLUSH_COUNT | integer |
    | Tokudb_TXN_BEGIN | integer |
    | Tokudb_TXN_BEGIN_READ_ONLY | integer |
    | Tokudb_TXN_COMMITS | integer |
    | Tokudb_TXN_ABORTS | integer |
    | Tokudb_LOGGER_NEXT_LSN | integer |
    | Tokudb_LOGGER_WRITES | integer |
    | Tokudb_LOGGER_WRITES_BYTES | integer |
    | Tokudb_LOGGER_WRITES_UNCOMPRESSED_BYTES | integer |
    | Tokudb_LOGGER_WRITES_SECONDS | numeric |
    | Tokudb_LOGGER_WAIT_LONG | integer |
    | Tokudb_LOADER_NUM_CREATED | integer |
    | Tokudb_LOADER_NUM_CURRENT | integer |
    | Tokudb_LOADER_NUM_MAX | integer |
    | Tokudb_MEMORY_MALLOC_COUNT | integer |
    | Tokudb_MEMORY_FREE_COUNT | integer |
    | Tokudb_MEMORY_REALLOC_COUNT | integer |
    | Tokudb_MEMORY_MALLOC_FAIL | integer |
    | Tokudb_MEMORY_REALLOC_FAIL | integer |
    | Tokudb_MEMORY_REQUESTED | integer |
    | Tokudb_MEMORY_USED | integer |
    | Tokudb_MEMORY_FREED | integer |
    | Tokudb_MEMORY_MAX_REQUESTED_SIZE | integer |
    | Tokudb_MEMORY_LAST_FAILED_SIZE | integer |
    | Tokudb_MEM_ESTIMATED_MAXIMUM_MEMORY_FOOTPRINT | integer |
    | Tokudb_MEMORY_MALLOCATOR_VERSION | string |
    | Tokudb_MEMORY_MMAP_THRESHOLD | integer |
    | Tokudb_FILESYSTEM_THREADS_BLOCKED_BY_FULL_DISK | integer |
    | Tokudb_FILESYSTEM_FSYNC_TIME | integer |
    | Tokudb_FILESYSTEM_FSYNC_NUM | integer |
    | Tokudb_FILESYSTEM_LONG_FSYNC_TIME | integer |
    | Tokudb_FILESYSTEM_LONG_FSYNC_NUM | integer |
    "},{"location":"tokudb-status-variables.html#tokudb_db_opens","title":"Tokudb_DB_OPENS","text":"

    This variable shows the number of times an individual PerconaFT dictionary file was opened. This is not a useful value for a regular user, due to the layers of open/close caching on top.

    "},{"location":"tokudb-status-variables.html#tokudb_db_closes","title":"Tokudb_DB_CLOSES","text":"

    This variable shows the number of times an individual PerconaFT dictionary file was closed. This is not a useful value for a regular user, due to the layers of open/close caching on top.

    "},{"location":"tokudb-status-variables.html#tokudb_db_open_current","title":"Tokudb_DB_OPEN_CURRENT","text":"

    This variable shows the number of currently opened databases.

    "},{"location":"tokudb-status-variables.html#tokudb_db_open_max","title":"Tokudb_DB_OPEN_MAX","text":"

    This variable shows the maximum number of concurrently opened databases.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_entry_max_committed_xr","title":"Tokudb_LEAF_ENTRY_MAX_COMMITTED_XR","text":"

    This variable shows the maximum number of committed transaction records that were stored on disk in a new or modified row.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_entry_max_provisional_xr","title":"Tokudb_LEAF_ENTRY_MAX_PROVISIONAL_XR","text":"

    This variable shows the maximum number of provisional transaction records that were stored on disk in a new or modified row.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_entry_expanded","title":"Tokudb_LEAF_ENTRY_EXPANDED","text":"

    This variable shows the number of times that an expanded memory mechanism was used to store a new or modified row on disk.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_entry_max_memsize","title":"Tokudb_LEAF_ENTRY_MAX_MEMSIZE","text":"

    This variable shows the maximum number of bytes that were stored on disk as a new or modified row. This is the maximum uncompressed size of any row stored in TokuDB that was created or modified since the server started.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_entry_apply_gc_bytes_in","title":"Tokudb_LEAF_ENTRY_APPLY_GC_BYTES_IN","text":"

    This variable shows the total number of bytes of leaf nodes data before performing garbage collection for non-flush events.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_entry_apply_gc_bytes_out","title":"Tokudb_LEAF_ENTRY_APPLY_GC_BYTES_OUT","text":"

    This variable shows the total number of bytes of leaf nodes data after performing garbage collection for non-flush events.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_entry_normal_gc_bytes_in","title":"Tokudb_LEAF_ENTRY_NORMAL_GC_BYTES_IN","text":"

    This variable shows the total number of bytes of leaf nodes data before performing garbage collection for flush events.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_entry_normal_gc_bytes_out","title":"Tokudb_LEAF_ENTRY_NORMAL_GC_BYTES_OUT","text":"

    This variable shows the total number of bytes of leaf nodes data after performing garbage collection for flush events.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_period","title":"Tokudb_CHECKPOINT_PERIOD","text":"

    This variable shows the interval in seconds between the end of an automatic checkpoint and the beginning of the next automatic checkpoint.
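    The interval itself is controlled by the tokudb_checkpoint_period system variable, which is documented as dynamic. A sketch of lowering it at runtime (the value 30 is illustrative):

    ```
    mysql> SET GLOBAL tokudb_checkpoint_period=30;
    ```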

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_footprint","title":"Tokudb_CHECKPOINT_FOOTPRINT","text":"

    This variable shows what stage the checkpointer is at. It is used for debugging purposes only and is not a useful value for a normal user.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_last_began","title":"Tokudb_CHECKPOINT_LAST_BEGAN","text":"

    This variable shows the time the last checkpoint began. If a checkpoint is currently in progress, then this time may be later than the time the last checkpoint completed. If no checkpoint has ever taken place, then this value will be Dec 31, 1969 on Linux hosts.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_last_complete_began","title":"Tokudb_CHECKPOINT_LAST_COMPLETE_BEGAN","text":"

    This variable shows the time the last complete checkpoint started. Any data that changed after this time will not be captured in the checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_last_complete_ended","title":"Tokudb_CHECKPOINT_LAST_COMPLETE_ENDED","text":"

    This variable shows the time the last complete checkpoint ended.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_duration","title":"Tokudb_CHECKPOINT_DURATION","text":"

    This variable shows the time (in seconds) required to complete all checkpoints.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_duration_last","title":"Tokudb_CHECKPOINT_DURATION_LAST","text":"

    This variable shows the time (in seconds) required to complete the last checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_last_lsn","title":"Tokudb_CHECKPOINT_LAST_LSN","text":"

    This variable shows the last successful checkpoint LSN. Each checkpoint from the time the PerconaFT environment is created has a monotonically incrementing LSN. This is not a useful value for a normal user to use for any purpose other than having some idea of how many checkpoints have occurred since the system was first created.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_taken","title":"Tokudb_CHECKPOINT_TAKEN","text":"

    This variable shows the number of complete checkpoints that have been taken.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_failed","title":"Tokudb_CHECKPOINT_FAILED","text":"

    This variable shows the number of checkpoints that have failed for any reason.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_waiters_now","title":"Tokudb_CHECKPOINT_WAITERS_NOW","text":"

    This variable shows the current number of threads waiting for the checkpoint safe lock. This is not a useful value for a regular user.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_waiters_max","title":"Tokudb_CHECKPOINT_WAITERS_MAX","text":"

    This variable shows the maximum number of threads that concurrently waited for the checkpoint safe lock. This is not a useful value for a regular user.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_client_wait_on_mo","title":"Tokudb_CHECKPOINT_CLIENT_WAIT_ON_MO","text":"

    This variable shows the number of times a non-checkpoint client thread waited for the multi-operation lock. This is an internal rwlock, similar in nature to the InnoDB kernel mutex; it effectively halts all access to the PerconaFT API when write locked. The begin phase of the checkpoint takes this lock for a brief period.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_client_wait_on_cs","title":"Tokudb_CHECKPOINT_CLIENT_WAIT_ON_CS","text":"

    This variable shows the number of times a non-checkpoint client thread waited for the checkpoint-safe lock. This is the lock taken when you SET tokudb_checkpoint_lock=1. If a client trying to lock/postpone the checkpointer has to wait for the currently running checkpoint to complete, that wait time is reflected here and summed. This is not a useful metric, as regular users should never manipulate the checkpoint lock.
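    For illustration only, since regular users should not manipulate this lock, a sketch of taking and releasing the checkpoint-safe lock from a session:

    ```
    mysql> SET tokudb_checkpoint_lock=1;   -- postpone checkpoints while critical work runs
    mysql> SET tokudb_checkpoint_lock=0;   -- allow checkpoints again
    ```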

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_begin_time","title":"Tokudb_CHECKPOINT_BEGIN_TIME","text":"

    This variable shows the cumulative time (in microseconds) required to mark all dirty nodes as pending a checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_long_begin_time","title":"Tokudb_CHECKPOINT_LONG_BEGIN_TIME","text":"

    This variable shows the cumulative actual time (in microseconds) of checkpoint begin stages that took longer than 1 second.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_long_begin_count","title":"Tokudb_CHECKPOINT_LONG_BEGIN_COUNT","text":"

    This variable shows the number of checkpoints whose begin stage took longer than 1 second.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_end_time","title":"Tokudb_CHECKPOINT_END_TIME","text":"

    This variable shows the time, in seconds, spent in the checkpoint end operation.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_long_end_time","title":"Tokudb_CHECKPOINT_LONG_END_TIME","text":"

    This variable shows the total time of long checkpoints in seconds.

    "},{"location":"tokudb-status-variables.html#tokudb_checkpoint_long_end_count","title":"Tokudb_CHECKPOINT_LONG_END_COUNT","text":"

    This variable shows the number of checkpoints whose end_checkpoint operations exceeded 1 minute.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_miss","title":"Tokudb_CACHETABLE_MISS","text":"

    This variable shows the number of times the application was unable to access the data in the internal cache. A cache miss means that data will need to be read from disk.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_miss_time","title":"Tokudb_CACHETABLE_MISS_TIME","text":"

    This variable shows the total time, in microseconds, of how long the database has had to wait for a disk read to complete.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_prefetches","title":"Tokudb_CACHETABLE_PREFETCHES","text":"

    This variable shows the total number of times that a block of memory has been prefetched into the database's cache. Data is prefetched when the database's algorithms determine that a block of memory is likely to be accessed by the application.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_size_current","title":"Tokudb_CACHETABLE_SIZE_CURRENT","text":"

    This variable shows how much of the uncompressed data, in bytes, is currently in the database's internal cache.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_size_limit","title":"Tokudb_CACHETABLE_SIZE_LIMIT","text":"

    This variable shows how much of the uncompressed data, in bytes, will fit in the database's internal cache.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_size_writing","title":"Tokudb_CACHETABLE_SIZE_WRITING","text":"

    This variable shows the number of bytes that are currently queued up to be written to disk.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_size_nonleaf","title":"Tokudb_CACHETABLE_SIZE_NONLEAF","text":"

    This variable shows the amount of memory, in bytes, the current set of non-leaf nodes occupy in the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_size_leaf","title":"Tokudb_CACHETABLE_SIZE_LEAF","text":"

    This variable shows the amount of memory, in bytes, the current set of (decompressed) leaf nodes occupy in the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_size_rollback","title":"Tokudb_CACHETABLE_SIZE_ROLLBACK","text":"

    This variable shows the rollback nodes size, in bytes, in the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_size_cachepressure","title":"Tokudb_CACHETABLE_SIZE_CACHEPRESSURE","text":"

    This variable shows the number of bytes causing cache pressure (the sum of the buffers and work-done counters); this helps you understand whether the cleaner threads are keeping up with the workload. It is best viewed as part of the ratio cache pressure / cache table size: the closer that ratio is to 1, the higher the cache pressure.
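    A sketch of computing that ratio directly from the MySQL 8.0 status tables (assuming "cache table size" here refers to Tokudb_CACHETABLE_SIZE_CURRENT):

    ```
    mysql> SELECT
      (SELECT VARIABLE_VALUE FROM performance_schema.global_status
        WHERE VARIABLE_NAME = 'Tokudb_CACHETABLE_SIZE_CACHEPRESSURE') /
      (SELECT VARIABLE_VALUE FROM performance_schema.global_status
        WHERE VARIABLE_NAME = 'Tokudb_CACHETABLE_SIZE_CURRENT') AS cache_pressure_ratio;
    ```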

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_size_cloned","title":"Tokudb_CACHETABLE_SIZE_CLONED","text":"

    This variable shows the amount of memory, in bytes, currently used for cloned nodes. During the checkpoint operation, dirty nodes are cloned prior to serialization/compression and then written to disk. Afterwards, the memory for the cloned block is returned for reuse.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_evictions","title":"Tokudb_CACHETABLE_EVICTIONS","text":"

    This variable shows the number of blocks evicted from the cache. On its own this is not a useful number, as its impact on performance depends entirely on the hardware and workload in use. For example, two workloads, one random and one linear, over the same starting data set will have wildly different eviction patterns.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_cleaner_executions","title":"Tokudb_CACHETABLE_CLEANER_EXECUTIONS","text":"

    This variable shows the total number of times the cleaner thread loop has executed.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_cleaner_period","title":"Tokudb_CACHETABLE_CLEANER_PERIOD","text":"

    TokuDB includes a cleaner thread that optimizes indexes in the background. This variable is the time, in seconds, between the completion of a group of cleaner operations and the beginning of the next group of cleaner operations. The cleaner operations run on a background thread performing work that does not need to be done on the client thread.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_cleaner_iterations","title":"Tokudb_CACHETABLE_CLEANER_ITERATIONS","text":"

    This variable shows the number of cleaner operations that are performed every cleaner period.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_wait_pressure_count","title":"Tokudb_CACHETABLE_WAIT_PRESSURE_COUNT","text":"

    This variable shows the number of times a thread was stalled due to cache pressure.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_wait_pressure_time","title":"Tokudb_CACHETABLE_WAIT_PRESSURE_TIME","text":"

    This variable shows the total time, in microseconds, waiting on cache pressure to subside.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_long_wait_pressure_count","title":"Tokudb_CACHETABLE_LONG_WAIT_PRESSURE_COUNT","text":"

    This variable shows the number of times a thread was stalled for more than one second due to cache pressure.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_long_wait_pressure_time","title":"Tokudb_CACHETABLE_LONG_WAIT_PRESSURE_TIME","text":"

    This variable shows the total time, in microseconds, waiting on cache pressure to subside for more than one second.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_client_num_threads","title":"Tokudb_CACHETABLE_POOL_CLIENT_NUM_THREADS","text":"

    This variable shows the number of threads in the client thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_client_num_threads_active","title":"Tokudb_CACHETABLE_POOL_CLIENT_NUM_THREADS_ACTIVE","text":"

    This variable shows the number of currently active threads in the client thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_client_queue_size","title":"Tokudb_CACHETABLE_POOL_CLIENT_QUEUE_SIZE","text":"

    This variable shows the number of currently queued work items in the client thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_client_max_queue_size","title":"Tokudb_CACHETABLE_POOL_CLIENT_MAX_QUEUE_SIZE","text":"

    This variable shows the largest number of queued work items in the client thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_client_total_items_processed","title":"Tokudb_CACHETABLE_POOL_CLIENT_TOTAL_ITEMS_PROCESSED","text":"

    This variable shows the total number of work items processed in the client thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_client_total_execution_time","title":"Tokudb_CACHETABLE_POOL_CLIENT_TOTAL_EXECUTION_TIME","text":"

    This variable shows the total execution time of processing work items in the client thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_cachetable_num_threads","title":"Tokudb_CACHETABLE_POOL_CACHETABLE_NUM_THREADS","text":"

    This variable shows the number of threads in the cachetable thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_cachetable_num_threads_active","title":"Tokudb_CACHETABLE_POOL_CACHETABLE_NUM_THREADS_ACTIVE","text":"

    This variable shows the number of currently active threads in the cachetable thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_cachetable_queue_size","title":"Tokudb_CACHETABLE_POOL_CACHETABLE_QUEUE_SIZE","text":"

    This variable shows the number of currently queued work items in the cachetable thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_cachetable_max_queue_size","title":"Tokudb_CACHETABLE_POOL_CACHETABLE_MAX_QUEUE_SIZE","text":"

    This variable shows the largest number of queued work items in the cachetable thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_cachetable_total_items_processed","title":"Tokudb_CACHETABLE_POOL_CACHETABLE_TOTAL_ITEMS_PROCESSED","text":"

    This variable shows the total number of work items processed in the cachetable thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_cachetable_total_execution_time","title":"Tokudb_CACHETABLE_POOL_CACHETABLE_TOTAL_EXECUTION_TIME","text":"

    This variable shows the total execution time of processing work items in the cachetable thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_checkpoint_num_threads","title":"Tokudb_CACHETABLE_POOL_CHECKPOINT_NUM_THREADS","text":"

    This variable shows the number of threads in the checkpoint thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_checkpoint_num_threads_active","title":"Tokudb_CACHETABLE_POOL_CHECKPOINT_NUM_THREADS_ACTIVE","text":"

    This variable shows the number of currently active threads in the checkpoint thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_checkpoint_queue_size","title":"Tokudb_CACHETABLE_POOL_CHECKPOINT_QUEUE_SIZE","text":"

    This variable shows the number of currently queued work items in the checkpoint thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_checkpoint_max_queue_size","title":"Tokudb_CACHETABLE_POOL_CHECKPOINT_MAX_QUEUE_SIZE","text":"

    This variable shows the largest number of queued work items in the checkpoint thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_checkpoint_total_items_processed","title":"Tokudb_CACHETABLE_POOL_CHECKPOINT_TOTAL_ITEMS_PROCESSED","text":"

    This variable shows the total number of work items processed in the checkpoint thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_cachetable_pool_checkpoint_total_execution_time","title":"Tokudb_CACHETABLE_POOL_CHECKPOINT_TOTAL_EXECUTION_TIME","text":"

    This variable shows the total execution time of processing work items in the checkpoint thread pool.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_memory_size","title":"Tokudb_LOCKTREE_MEMORY_SIZE","text":"

    This variable shows the amount of memory, in bytes, that the locktree is currently using.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_memory_size_limit","title":"Tokudb_LOCKTREE_MEMORY_SIZE_LIMIT","text":"

    This variable shows the maximum amount of memory, in bytes, that the locktree is allowed to use.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_escalation_num","title":"Tokudb_LOCKTREE_ESCALATION_NUM","text":"

    This variable shows the number of times the locktree needed to run lock escalation to reduce its memory footprint.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_escalation_seconds","title":"Tokudb_LOCKTREE_ESCALATION_SECONDS","text":"

    This variable shows the total number of seconds spent performing locktree escalation.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_latest_post_escalation_memory_size","title":"Tokudb_LOCKTREE_LATEST_POST_ESCALATION_MEMORY_SIZE","text":"

    This variable shows the locktree size, in bytes, after the most recent locktree escalation.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_open_current","title":"Tokudb_LOCKTREE_OPEN_CURRENT","text":"

    This variable shows the number of locktrees that are currently opened.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_pending_lock_requests","title":"Tokudb_LOCKTREE_PENDING_LOCK_REQUESTS","text":"

    This variable shows the number of requests waiting for a lock grant.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_sto_eligible_num","title":"Tokudb_LOCKTREE_STO_ELIGIBLE_NUM","text":"

    This variable shows the number of locktrees eligible for the single transaction optimization (STO). STO optimizations are behaviors that can happen within the locktree when there is exactly one transaction active within it. This is not a useful value for a regular user.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_sto_ended_num","title":"Tokudb_LOCKTREE_STO_ENDED_NUM","text":"

    This variable shows the total number of times a single transaction optimization was ended early because another transaction started. STO optimizations are behaviors that can happen within the locktree when there is exactly one transaction active within it. This is not a useful value for a regular user.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_sto_ended_seconds","title":"Tokudb_LOCKTREE_STO_ENDED_SECONDS","text":"

    This variable shows the total number of seconds spent ending single transaction optimizations. STO optimizations are behaviors that can happen within the locktree when there is exactly one transaction active within it. This is not a useful value for a regular user.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_wait_count","title":"Tokudb_LOCKTREE_WAIT_COUNT","text":"

    This variable shows the number of times that a lock request could not be acquired because of a conflict with some other transaction. A PerconaFT lock request cycles as it tries to obtain a lock: if it cannot get the lock, it sleeps/waits until a timeout, checks for the lock again, and repeats. This value indicates the number of cycles needed before the lock was obtained.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_wait_time","title":"Tokudb_LOCKTREE_WAIT_TIME","text":"

    This variable shows the total time, in microseconds, spent by clients waiting for a lock conflict to be resolved.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_long_wait_count","title":"Tokudb_LOCKTREE_LONG_WAIT_COUNT","text":"

    This variable shows the number of lock waits longer than one second in duration.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_long_wait_time","title":"Tokudb_LOCKTREE_LONG_WAIT_TIME","text":"

    This variable shows the total time, in microseconds, of the long waits.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_timeout_count","title":"Tokudb_LOCKTREE_TIMEOUT_COUNT","text":"

    This variable shows the number of times that a lock request timed out.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_wait_escalation_count","title":"Tokudb_LOCKTREE_WAIT_ESCALATION_COUNT","text":"

    When the sum of the sizes of the locks taken reaches the lock tree limit, lock escalation runs on a background thread. The client threads need to wait for escalation to consolidate locks and free up memory. This variable shows the number of times a client thread had to wait on lock escalation.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_wait_escalation_time","title":"Tokudb_LOCKTREE_WAIT_ESCALATION_TIME","text":"

    This variable shows the total time, in microseconds, that a client thread spent waiting for lock escalation to free up memory.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_long_wait_escalation_count","title":"Tokudb_LOCKTREE_LONG_WAIT_ESCALATION_COUNT","text":"

    This variable shows the number of times that a client thread had to wait on lock escalation where the wait time was greater than one second.

    "},{"location":"tokudb-status-variables.html#tokudb_locktree_long_wait_escalation_time","title":"Tokudb_LOCKTREE_LONG_WAIT_ESCALATION_TIME","text":"

    This variable shows the total time, in microseconds, of the long waits for lock escalation to free up memory.

    "},{"location":"tokudb-status-variables.html#tokudb_dictionary_updates","title":"Tokudb_DICTIONARY_UPDATES","text":"

    This variable shows the total number of rows that have been updated in all primary and secondary indexes combined, if those updates have been done with a separate recovery log entry per index.

    "},{"location":"tokudb-status-variables.html#tokudb_dictionary_broadcast_updates","title":"Tokudb_DICTIONARY_BROADCAST_UPDATES","text":"

    This variable shows the number of broadcast updates that have been successfully performed. A broadcast update is an update that affects all rows in a dictionary.

    "},{"location":"tokudb-status-variables.html#tokudb_descriptor_set","title":"Tokudb_DESCRIPTOR_SET","text":"

    This variable shows the number of times a descriptor was updated when the entire dictionary was updated (for example, when the schema was changed).

    "},{"location":"tokudb-status-variables.html#tokudb_messages_ignored_by_leaf_due_to_msn","title":"Tokudb_MESSAGES_IGNORED_BY_LEAF_DUE_TO_MSN","text":"

    This variable shows the number of messages that were ignored by a leaf because the message had already been applied.

    "},{"location":"tokudb-status-variables.html#tokudb_total_search_retries","title":"Tokudb_TOTAL_SEARCH_RETRIES","text":"

    This is an internal value that is of no use to anyone other than a developer debugging a specific query/search issue.

    "},{"location":"tokudb-status-variables.html#tokudb_search_tries_gt_height","title":"Tokudb_SEARCH_TRIES_GT_HEIGHT","text":"

    This is an internal value that is of no use to anyone other than a developer debugging a specific query/search issue.

    "},{"location":"tokudb-status-variables.html#tokudb_search_tries_gt_heightplus3","title":"Tokudb_SEARCH_TRIES_GT_HEIGHTPLUS3","text":"

    This is an internal value that is of no use to anyone other than a developer debugging a specific query/search issue.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_flushed_not_checkpoint","title":"Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT","text":"

    This variable shows the number of leaf nodes flushed to disk, not for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_flushed_not_checkpoint_bytes","title":"Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_BYTES","text":"

    This variable shows the size, in bytes, of leaf nodes flushed to disk, not for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_flushed_not_checkpoint_uncompressed_bytes","title":"Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_UNCOMPRESSED_BYTES","text":"

    This variable shows the size, in bytes, of uncompressed leaf nodes flushed to disk not for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_flushed_not_checkpoint_seconds","title":"Tokudb_LEAF_NODES_FLUSHED_NOT_CHECKPOINT_SECONDS","text":"

    This variable shows the number of seconds spent waiting for I/O when writing leaf nodes flushed to disk, not for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_flushed_to_disk_not_checkpoint","title":"Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT","text":"

    This variable shows the number of non-leaf nodes flushed to disk, not for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_flushed_to_disk_not_checkpoint_bytes","title":"Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_BYTES","text":"

    This variable shows the size, in bytes, of non-leaf nodes flushed to disk, not for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_flushed_to_disk_not_checkpoint_uncompresse","title":"Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_UNCOMPRESSE","text":"

    This variable shows the size, in bytes, of uncompressed non-leaf nodes flushed to disk not for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_flushed_to_disk_not_checkpoint_seconds","title":"Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_NOT_CHECKPOINT_SECONDS","text":"

    This variable shows the number of seconds spent waiting for I/O when writing non-leaf nodes flushed to disk, not for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_flushed_checkpoint","title":"Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT","text":"

    This variable shows the number of leaf nodes flushed to disk, for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_flushed_checkpoint_bytes","title":"Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_BYTES","text":"

    This variable shows the size, in bytes, of leaf nodes flushed to disk, for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_flushed_checkpoint_uncompressed_bytes","title":"Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_UNCOMPRESSED_BYTES","text":"

    This variable shows the size, in bytes, of uncompressed leaf nodes flushed to disk for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_flushed_checkpoint_seconds","title":"Tokudb_LEAF_NODES_FLUSHED_CHECKPOINT_SECONDS","text":"

    This variable shows the number of seconds spent waiting for I/O when writing leaf nodes flushed to disk for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_flushed_to_disk_checkpoint","title":"Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT","text":"

    This variable shows the number of non-leaf nodes flushed to disk, for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_flushed_to_disk_checkpoint_bytes","title":"Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_BYTES","text":"

    This variable shows the size, in bytes, of non-leaf nodes flushed to disk, for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_flushed_to_disk_checkpoint_uncompressed_by","title":"Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_UNCOMPRESSED_BY","text":"

    This variable shows the size, in bytes, of uncompressed non-leaf nodes flushed to disk for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_flushed_to_disk_checkpoint_seconds","title":"Tokudb_NONLEAF_NODES_FLUSHED_TO_DISK_CHECKPOINT_SECONDS","text":"

    This variable shows the number of seconds spent waiting for I/O when writing non-leaf nodes flushed to disk for checkpoint.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_node_compression_ratio","title":"Tokudb_LEAF_NODE_COMPRESSION_RATIO","text":"

    This variable shows the ratio of uncompressed bytes (in-memory) to compressed bytes (on-disk) for leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_node_compression_ratio","title":"Tokudb_NONLEAF_NODE_COMPRESSION_RATIO","text":"

    This variable shows the ratio of uncompressed bytes (in-memory) to compressed bytes (on-disk) for non-leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_overall_node_compression_ratio","title":"Tokudb_OVERALL_NODE_COMPRESSION_RATIO","text":"

    This variable shows the ratio of uncompressed bytes (in-memory) to compressed bytes (on-disk) for all nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_node_partial_evictions","title":"Tokudb_NONLEAF_NODE_PARTIAL_EVICTIONS","text":"

    This variable shows the number of times a partition of a non-leaf node was evicted from the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_node_partial_evictions_bytes","title":"Tokudb_NONLEAF_NODE_PARTIAL_EVICTIONS_BYTES","text":"

    This variable shows the amount, in bytes, of memory freed by evicting partitions of non-leaf nodes from the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_node_partial_evictions","title":"Tokudb_LEAF_NODE_PARTIAL_EVICTIONS","text":"

    This variable shows the number of times a partition of a leaf node was evicted from the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_node_partial_evictions_bytes","title":"Tokudb_LEAF_NODE_PARTIAL_EVICTIONS_BYTES","text":"

    This variable shows the amount, in bytes, of memory freed by evicting partitions of leaf nodes from the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_node_full_evictions","title":"Tokudb_LEAF_NODE_FULL_EVICTIONS","text":"

    This variable shows the number of times a full leaf node was evicted from the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_node_full_evictions_bytes","title":"Tokudb_LEAF_NODE_FULL_EVICTIONS_BYTES","text":"

    This variable shows the amount, in bytes, of memory freed by evicting full leaf nodes from the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_node_full_evictions","title":"Tokudb_NONLEAF_NODE_FULL_EVICTIONS","text":"

    This variable shows the number of times a full non-leaf node was evicted from the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_node_full_evictions_bytes","title":"Tokudb_NONLEAF_NODE_FULL_EVICTIONS_BYTES","text":"

    This variable shows the amount, in bytes, of memory freed by evicting full non-leaf nodes from the cache.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_created","title":"Tokudb_LEAF_NODES_CREATED","text":"

    This variable shows the number of created leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_created","title":"Tokudb_NONLEAF_NODES_CREATED","text":"

    This variable shows the number of created non-leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_nodes_destroyed","title":"Tokudb_LEAF_NODES_DESTROYED","text":"

    This variable shows the number of destroyed leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_nodes_destroyed","title":"Tokudb_NONLEAF_NODES_DESTROYED","text":"

    This variable shows the number of destroyed non-leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_messages_injected_at_root_bytes","title":"Tokudb_MESSAGES_INJECTED_AT_ROOT_BYTES","text":"

    This variable shows the size, in bytes, of messages injected at root (for all trees).

    "},{"location":"tokudb-status-variables.html#tokudb_messages_flushed_from_h1_to_leaves_bytes","title":"Tokudb_MESSAGES_FLUSHED_FROM_H1_TO_LEAVES_BYTES","text":"

    This variable shows the size, in bytes, of messages flushed from h1 nodes to leaves.

    "},{"location":"tokudb-status-variables.html#tokudb_messages_in_trees_estimate_bytes","title":"Tokudb_MESSAGES_IN_TREES_ESTIMATE_BYTES","text":"

    This variable shows the estimated size, in bytes, of messages currently in trees.

    "},{"location":"tokudb-status-variables.html#tokudb_messages_injected_at_root","title":"Tokudb_MESSAGES_INJECTED_AT_ROOT","text":"

    This variable shows the number of messages that were injected at the root node of a tree.

    "},{"location":"tokudb-status-variables.html#tokudb_broadcase_messages_injected_at_root","title":"Tokudb_BROADCASE_MESSAGES_INJECTED_AT_ROOT","text":"

    This variable shows the number of broadcast messages dropped into the root node of a tree. These result from operations such as OPTIMIZE TABLE and a few others. This is not a useful metric for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_decompressed_target_query","title":"Tokudb_BASEMENTS_DECOMPRESSED_TARGET_QUERY","text":"

    This variable shows the number of basement nodes decompressed for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_decompressed_prelocked_range","title":"Tokudb_BASEMENTS_DECOMPRESSED_PRELOCKED_RANGE","text":"

    This variable shows the number of basement nodes aggressively decompressed by queries.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_decompressed_prefetch","title":"Tokudb_BASEMENTS_DECOMPRESSED_PREFETCH","text":"

    This variable shows the number of basement nodes decompressed by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_decompressed_for_write","title":"Tokudb_BASEMENTS_DECOMPRESSED_FOR_WRITE","text":"

    This variable shows the number of basement nodes decompressed for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_decompressed_target_query","title":"Tokudb_BUFFERS_DECOMPRESSED_TARGET_QUERY","text":"

    This variable shows the number of buffers decompressed for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_decompressed_prelocked_range","title":"Tokudb_BUFFERS_DECOMPRESSED_PRELOCKED_RANGE","text":"

    This variable shows the number of buffers aggressively decompressed by queries.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_decompressed_prefetch","title":"Tokudb_BUFFERS_DECOMPRESSED_PREFETCH","text":"

    This variable shows the number of buffers decompressed by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_decompressed_for_write","title":"Tokudb_BUFFERS_DECOMPRESSED_FOR_WRITE","text":"

    This variable shows the number of buffers decompressed for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_query","title":"Tokudb_PIVOTS_FETCHED_FOR_QUERY","text":"

    This variable shows the number of pivot nodes fetched for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_query_bytes","title":"Tokudb_PIVOTS_FETCHED_FOR_QUERY_BYTES","text":"

    This variable shows the number of bytes of pivot nodes fetched for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_query_seconds","title":"Tokudb_PIVOTS_FETCHED_FOR_QUERY_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching pivot nodes for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_prefetch","title":"Tokudb_PIVOTS_FETCHED_FOR_PREFETCH","text":"

    This variable shows the number of pivot nodes fetched by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_prefetch_bytes","title":"Tokudb_PIVOTS_FETCHED_FOR_PREFETCH_BYTES","text":"

    This variable shows the number of bytes of pivot nodes fetched by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_prefetch_seconds","title":"Tokudb_PIVOTS_FETCHED_FOR_PREFETCH_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching pivot nodes by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_write","title":"Tokudb_PIVOTS_FETCHED_FOR_WRITE","text":"

    This variable shows the number of pivot nodes fetched for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_write_bytes","title":"Tokudb_PIVOTS_FETCHED_FOR_WRITE_BYTES","text":"

    This variable shows the number of bytes of pivot nodes fetched for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_pivots_fetched_for_write_seconds","title":"Tokudb_PIVOTS_FETCHED_FOR_WRITE_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching pivot nodes for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_target_query","title":"Tokudb_BASEMENTS_FETCHED_TARGET_QUERY","text":"

    This variable shows the number of basement nodes fetched from disk for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_target_query_bytes","title":"Tokudb_BASEMENTS_FETCHED_TARGET_QUERY_BYTES","text":"

    This variable shows the number of basement node bytes fetched from disk for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_target_query_seconds","title":"Tokudb_BASEMENTS_FETCHED_TARGET_QUERY_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching basement nodes from disk for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_prelocked_range","title":"Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE","text":"

    This variable shows the number of basement nodes fetched from disk aggressively.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_prelocked_range_bytes","title":"Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE_BYTES","text":"

    This variable shows the number of basement node bytes fetched from disk aggressively.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_prelocked_range_seconds","title":"Tokudb_BASEMENTS_FETCHED_PRELOCKED_RANGE_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching basement nodes from disk aggressively.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_prefetch","title":"Tokudb_BASEMENTS_FETCHED_PREFETCH","text":"

    This variable shows the number of basement nodes fetched from disk by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_prefetch_bytes","title":"Tokudb_BASEMENTS_FETCHED_PREFETCH_BYTES","text":"

    This variable shows the number of basement node bytes fetched from disk by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_prefetch_seconds","title":"Tokudb_BASEMENTS_FETCHED_PREFETCH_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching basement nodes from disk by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_for_write","title":"Tokudb_BASEMENTS_FETCHED_FOR_WRITE","text":"

    This variable shows the number of basement nodes fetched from disk for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_for_write_bytes","title":"Tokudb_BASEMENTS_FETCHED_FOR_WRITE_BYTES","text":"

    This variable shows the number of basement node bytes fetched from disk for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_basements_fetched_for_write_seconds","title":"Tokudb_BASEMENTS_FETCHED_FOR_WRITE_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching basement nodes from disk for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_target_query","title":"Tokudb_BUFFERS_FETCHED_TARGET_QUERY","text":"

    This variable shows the number of buffers fetched from disk for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_target_query_bytes","title":"Tokudb_BUFFERS_FETCHED_TARGET_QUERY_BYTES","text":"

    This variable shows the number of buffer bytes fetched from disk for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_target_query_seconds","title":"Tokudb_BUFFERS_FETCHED_TARGET_QUERY_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching buffers from disk for queries.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_prelocked_range","title":"Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE","text":"

    This variable shows the number of buffers fetched from disk aggressively.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_prelocked_range_bytes","title":"Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE_BYTES","text":"

    This variable shows the number of buffer bytes fetched from disk aggressively.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_prelocked_range_seconds","title":"Tokudb_BUFFERS_FETCHED_PRELOCKED_RANGE_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching buffers from disk aggressively.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_prefetch","title":"Tokudb_BUFFERS_FETCHED_PREFETCH","text":"

    This variable shows the number of buffers fetched from disk by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_prefetch_bytes","title":"Tokudb_BUFFERS_FETCHED_PREFETCH_BYTES","text":"

    This variable shows the number of buffer bytes fetched from disk by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_prefetch_seconds","title":"Tokudb_BUFFERS_FETCHED_PREFETCH_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching buffers from disk by a prefetch thread.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_for_write","title":"Tokudb_BUFFERS_FETCHED_FOR_WRITE","text":"

    This variable shows the number of buffers fetched from disk for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_for_write_bytes","title":"Tokudb_BUFFERS_FETCHED_FOR_WRITE_BYTES","text":"

    This variable shows the number of buffer bytes fetched from disk for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_buffers_fetched_for_write_seconds","title":"Tokudb_BUFFERS_FETCHED_FOR_WRITE_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when fetching buffers from disk for writes.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_compression_to_memory_seconds","title":"Tokudb_LEAF_COMPRESSION_TO_MEMORY_SECONDS","text":"

    This variable shows the total time, in seconds, spent compressing leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_serialization_to_memory_seconds","title":"Tokudb_LEAF_SERIALIZATION_TO_MEMORY_SECONDS","text":"

    This variable shows the total time, in seconds, spent serializing leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_decompression_to_memory_seconds","title":"Tokudb_LEAF_DECOMPRESSION_TO_MEMORY_SECONDS","text":"

    This variable shows the total time, in seconds, spent decompressing leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_leaf_deserialization_to_memory_seconds","title":"Tokudb_LEAF_DESERIALIZATION_TO_MEMORY_SECONDS","text":"

    This variable shows the total time, in seconds, spent deserializing leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_compression_to_memory_seconds","title":"Tokudb_NONLEAF_COMPRESSION_TO_MEMORY_SECONDS","text":"

    This variable shows the total time, in seconds, spent compressing non-leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_serialization_to_memory_seconds","title":"Tokudb_NONLEAF_SERIALIZATION_TO_MEMORY_SECONDS","text":"

    This variable shows the total time, in seconds, spent serializing non-leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_decompression_to_memory_seconds","title":"Tokudb_NONLEAF_DECOMPRESSION_TO_MEMORY_SECONDS","text":"

    This variable shows the total time, in seconds, spent decompressing non-leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_nonleaf_deserialization_to_memory_seconds","title":"Tokudb_NONLEAF_DESERIALIZATION_TO_MEMORY_SECONDS","text":"

    This variable shows the total time, in seconds, spent deserializing non-leaf nodes.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_roots_split","title":"Tokudb_PROMOTION_ROOTS_SPLIT","text":"

    This variable shows the number of times the root split during promotion.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_leaf_roots_injected_into","title":"Tokudb_PROMOTION_LEAF_ROOTS_INJECTED_INTO","text":"

    This variable shows the number of times a message stopped at a root with height 0.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_h1_roots_injected_into","title":"Tokudb_PROMOTION_H1_ROOTS_INJECTED_INTO","text":"

    This variable shows the number of times a message stopped at a root with height 1.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_injections_at_depth_0","title":"Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_0","text":"

    This variable shows the number of times a message stopped at depth 0.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_injections_at_depth_1","title":"Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_1","text":"

    This variable shows the number of times a message stopped at depth 1.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_injections_at_depth_2","title":"Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_2","text":"

    This variable shows the number of times a message stopped at depth 2.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_injections_at_depth_3","title":"Tokudb_PROMOTION_INJECTIONS_AT_DEPTH_3","text":"

    This variable shows the number of times a message stopped at depth 3.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_injections_lower_than_depth_3","title":"Tokudb_PROMOTION_INJECTIONS_LOWER_THAN_DEPTH_3","text":"

    This variable shows the number of times a message was promoted past depth 3.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_stopped_nonempty_buffer","title":"Tokudb_PROMOTION_STOPPED_NONEMPTY_BUFFER","text":"

    This variable shows the number of times a message stopped because it reached a nonempty buffer.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_stopped_at_height_1","title":"Tokudb_PROMOTION_STOPPED_AT_HEIGHT_1","text":"

    This variable shows the number of times a message stopped because it had reached height 1.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_stopped_child_locked_or_not_in_memory","title":"Tokudb_PROMOTION_STOPPED_CHILD_LOCKED_OR_NOT_IN_MEMORY","text":"

    This variable shows the number of times a message stopped because it could not cheaply get access to a child (the child was locked or not in memory).

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_stopped_child_not_fully_in_memory","title":"Tokudb_PROMOTION_STOPPED_CHILD_NOT_FULLY_IN_MEMORY","text":"

    This variable shows the number of times a message stopped because the child was not fully in memory.

    "},{"location":"tokudb-status-variables.html#tokudb_promotion_stopped_after_locking_child","title":"Tokudb_PROMOTION_STOPPED_AFTER_LOCKING_CHILD","text":"

    This variable shows the number of times a message stopped before a child which had been locked.

    "},{"location":"tokudb-status-variables.html#tokudb_basement_deserialization_fixed_key","title":"Tokudb_BASEMENT_DESERIALIZATION_FIXED_KEY","text":"

    This variable shows the number of basement nodes deserialized where all keys had the same size, leaving the basement in a format that is optimal for in-memory workloads.

    "},{"location":"tokudb-status-variables.html#tokudb_basement_deserialization_variable_key","title":"Tokudb_BASEMENT_DESERIALIZATION_VARIABLE_KEY","text":"

    This variable shows the number of basement nodes deserialized where the keys did not all have the same size, making them ineligible for the in-memory optimization.

    "},{"location":"tokudb-status-variables.html#tokudb_pro_rightmost_leaf_shortcut_success","title":"Tokudb_PRO_RIGHTMOST_LEAF_SHORTCUT_SUCCESS","text":"

    This variable shows the number of times a message injection detected a series of sequential inserts to the rightmost side of the tree and successfully applied an insert message directly to the rightmost leaf node. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_pro_rightmost_leaf_shortcut_fail_pos","title":"Tokudb_PRO_RIGHTMOST_LEAF_SHORTCUT_FAIL_POS","text":"

    This variable shows the number of times a message injection detected a series of sequential inserts to the rightmost side of the tree and was unable to apply an insert message directly to the rightmost leaf node because the key did not continue the sequence. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_rightmost_leaf_shortcut_fail_reactive","title":"Tokudb_RIGHTMOST_LEAF_SHORTCUT_FAIL_REACTIVE","text":"

    This variable shows the number of times a message injection detected a series of sequential inserts to the rightmost side of the tree and was unable to apply an insert message directly to the rightmost leaf node because the leaf was full. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_cursor_skip_deleted_leaf_entry","title":"Tokudb_CURSOR_SKIP_DELETED_LEAF_ENTRY","text":"

    This variable shows the number of leaf entries skipped during search/scan because the result of message application and reconciliation of the leaf entry MVCC stack reveals that the leaf entry is deleted in the current transaction's view. It is a good indicator that there might be excessive garbage in a tree if a range scan seems to take too long.
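
    As a hedged illustration of using this counter, one can sample it before and after a suspiciously slow range scan and reclaim garbage if the delta is large; the table and column names (t1, id) below are hypothetical:

    SHOW GLOBAL STATUS LIKE 'Tokudb_CURSOR_SKIP_DELETED_LEAF_ENTRY';\nSELECT COUNT(*) FROM t1 WHERE id BETWEEN 1 AND 1000000;\nSHOW GLOBAL STATUS LIKE 'Tokudb_CURSOR_SKIP_DELETED_LEAF_ENTRY';\nOPTIMIZE TABLE t1;\n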

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_total_nodes","title":"Tokudb_FLUSHER_CLEANER_TOTAL_NODES","text":"

    This variable shows the total number of nodes potentially flushed by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_h1_nodes","title":"Tokudb_FLUSHER_CLEANER_H1_NODES","text":"

    This variable shows the number of height 1 nodes that had messages flushed by flusher or cleaner threads, i.e., internal nodes immediately above leaf nodes. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_hgt1_nodes","title":"Tokudb_FLUSHER_CLEANER_HGT1_NODES","text":"

    This variable shows the number of nodes with height greater than 1 that had messages flushed by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_empty_nodes","title":"Tokudb_FLUSHER_CLEANER_EMPTY_NODES","text":"

    This variable shows the number of nodes cleaned by flusher or cleaner threads that had empty message buffers. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_nodes_dirtied","title":"Tokudb_FLUSHER_CLEANER_NODES_DIRTIED","text":"

    This variable shows the number of nodes dirtied by flusher or cleaner threads as a result of flushing messages downward. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_max_buffer_size","title":"Tokudb_FLUSHER_CLEANER_MAX_BUFFER_SIZE","text":"

    This variable shows the maximum bytes in a message buffer flushed by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_min_buffer_size","title":"Tokudb_FLUSHER_CLEANER_MIN_BUFFER_SIZE","text":"

    This variable shows the minimum bytes in a message buffer flushed by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_total_buffer_size","title":"Tokudb_FLUSHER_CLEANER_TOTAL_BUFFER_SIZE","text":"

    This variable shows the total bytes in buffers flushed by flusher and cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_max_buffer_workdone","title":"Tokudb_FLUSHER_CLEANER_MAX_BUFFER_WORKDONE","text":"

    This variable shows the maximum bytes worth of work done in a message buffer flushed by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_min_buffer_workdone","title":"Tokudb_FLUSHER_CLEANER_MIN_BUFFER_WORKDONE","text":"

    This variable shows the minimum bytes worth of work done in a message buffer flushed by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_total_buffer_workdone","title":"Tokudb_FLUSHER_CLEANER_TOTAL_BUFFER_WORKDONE","text":"

    This variable shows the total bytes worth of work done in buffers flushed by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_num_leaf_merges_started","title":"Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_STARTED","text":"

    This variable shows the number of times flusher and cleaner threads tried to merge two leaf nodes. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_num_leaf_merges_running","title":"Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_RUNNING","text":"

    This variable shows the number of leaf merges by flusher and cleaner threads that are currently in progress. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_num_leaf_merges_completed","title":"Tokudb_FLUSHER_CLEANER_NUM_LEAF_MERGES_COMPLETED","text":"

    This variable shows the number of successful leaf merges performed by flusher and cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_cleaner_num_dirtied_for_leaf_merge","title":"Tokudb_FLUSHER_CLEANER_NUM_DIRTIED_FOR_LEAF_MERGE","text":"

    This variable shows the number of nodes dirtied by flusher or cleaner threads performing leaf node merges. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_total","title":"Tokudb_FLUSHER_FLUSH_TOTAL","text":"

    This variable shows the total number of flushes done by flusher threads or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_in_memory","title":"Tokudb_FLUSHER_FLUSH_IN_MEMORY","text":"

    This variable shows the number of in-memory flushes (requiring no disk reads) by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_needed_io","title":"Tokudb_FLUSHER_FLUSH_NEEDED_IO","text":"

    This variable shows the number of flushes by flusher or cleaner threads that required reading data from disk. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_cascades","title":"Tokudb_FLUSHER_FLUSH_CASCADES","text":"

    This variable shows the number of flushes by flusher or cleaner threads that triggered a flush in a child node. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_cascades_1","title":"Tokudb_FLUSHER_FLUSH_CASCADES_1","text":"

    This variable shows the number of flushes by flusher or cleaner threads that triggered one cascading flush. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_cascades_2","title":"Tokudb_FLUSHER_FLUSH_CASCADES_2","text":"

    This variable shows the number of flushes by flusher or cleaner threads that triggered two cascading flushes. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_cascades_3","title":"Tokudb_FLUSHER_FLUSH_CASCADES_3","text":"

    This variable shows the number of flushes by flusher or cleaner threads that triggered three cascading flushes. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_cascades_4","title":"Tokudb_FLUSHER_FLUSH_CASCADES_4","text":"

    This variable shows the number of flushes by flusher or cleaner threads that triggered four cascading flushes. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_cascades_5","title":"Tokudb_FLUSHER_FLUSH_CASCADES_5","text":"

    This variable shows the number of flushes by flusher or cleaner threads that triggered five cascading flushes. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_flush_cascades_gt_5","title":"Tokudb_FLUSHER_FLUSH_CASCADES_GT_5","text":"

    This variable shows the number of flushes by flusher or cleaner threads that triggered more than five cascading flushes. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_split_leaf","title":"Tokudb_FLUSHER_SPLIT_LEAF","text":"

    This variable shows the total number of leaf node splits done by flusher threads or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_split_nonleaf","title":"Tokudb_FLUSHER_SPLIT_NONLEAF","text":"

    This variable shows the total number of non-leaf node splits done by flusher threads or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_merge_leaf","title":"Tokudb_FLUSHER_MERGE_LEAF","text":"

    This variable shows the total number of leaf node merges done by flusher threads or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_merge_nonleaf","title":"Tokudb_FLUSHER_MERGE_NONLEAF","text":"

    This variable shows the total number of non-leaf node merges done by flusher threads or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_flusher_balance_leaf","title":"Tokudb_FLUSHER_BALANCE_LEAF","text":"

    This variable shows the number of times two adjacent leaf nodes were rebalanced or had their content redistributed evenly by flusher or cleaner threads. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_hot_num_started","title":"Tokudb_HOT_NUM_STARTED","text":"

    This variable shows the number of hot operations started (OPTIMIZE TABLE). This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_hot_num_completed","title":"Tokudb_HOT_NUM_COMPLETED","text":"

    This variable shows the number of hot operations completed (OPTIMIZE TABLE). This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_hot_num_aborted","title":"Tokudb_HOT_NUM_ABORTED","text":"

    This variable shows the number of hot operations aborted (OPTIMIZE TABLE). This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_hot_max_root_flush_count","title":"Tokudb_HOT_MAX_ROOT_FLUSH_COUNT","text":"

    This variable shows the maximum number of flushes from the root ever required to optimize trees. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_txn_begin","title":"Tokudb_TXN_BEGIN","text":"

    This variable shows the number of transactions that have been started.

    "},{"location":"tokudb-status-variables.html#tokudb_txn_begin_read_only","title":"Tokudb_TXN_BEGIN_READ_ONLY","text":"

    This variable shows the number of read-only transactions started.

    "},{"location":"tokudb-status-variables.html#tokudb_txn_commits","title":"Tokudb_TXN_COMMITS","text":"

    This variable shows the total number of transactions that have been committed.

    "},{"location":"tokudb-status-variables.html#tokudb_txn_aborts","title":"Tokudb_TXN_ABORTS","text":"

    This variable shows the total number of transactions that have been aborted.
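
    A minimal sketch for watching the commit/abort balance; this assumes only the standard performance_schema.global_status table available in MySQL 8.0:

    SELECT VARIABLE_NAME, VARIABLE_VALUE\nFROM performance_schema.global_status\nWHERE VARIABLE_NAME IN ('Tokudb_TXN_COMMITS','Tokudb_TXN_ABORTS');\n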

    "},{"location":"tokudb-status-variables.html#tokudb_logger_next_lsn","title":"Tokudb_LOGGER_NEXT_LSN","text":"

    This variable shows the recovery logger's next LSN. This is not a useful value for a regular user to use for any purpose.

    "},{"location":"tokudb-status-variables.html#tokudb_logger_writes","title":"Tokudb_LOGGER_WRITES","text":"

    This variable shows the number of times the logger has written to disk.

    "},{"location":"tokudb-status-variables.html#tokudb_logger_writes_bytes","title":"Tokudb_LOGGER_WRITES_BYTES","text":"

    This variable shows the number of bytes the logger has written to disk.

    "},{"location":"tokudb-status-variables.html#tokudb_logger_writes_uncompressed_bytes","title":"Tokudb_LOGGER_WRITES_UNCOMPRESSED_BYTES","text":"

    This variable shows the number of uncompressed bytes the logger has written to disk.

    "},{"location":"tokudb-status-variables.html#tokudb_logger_writes_seconds","title":"Tokudb_LOGGER_WRITES_SECONDS","text":"

    This variable shows the number of seconds waiting for I/O when writing logs to disk.

    "},{"location":"tokudb-status-variables.html#tokudb_logger_wait_long","title":"Tokudb_LOGGER_WAIT_LONG","text":"

    This variable shows the number of times a logger write operation required 100ms or more.

    "},{"location":"tokudb-status-variables.html#tokudb_loader_num_created","title":"Tokudb_LOADER_NUM_CREATED","text":"

    This variable shows the number of times one of our internal objects, a loader, has been created.

    "},{"location":"tokudb-status-variables.html#tokudb_loader_num_current","title":"Tokudb_LOADER_NUM_CURRENT","text":"

    This variable shows the number of loaders that currently exist.

    "},{"location":"tokudb-status-variables.html#tokudb_loader_num_max","title":"Tokudb_LOADER_NUM_MAX","text":"

    This variable shows the maximum number of loaders that ever existed simultaneously.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_malloc_count","title":"Tokudb_MEMORY_MALLOC_COUNT","text":"

    This variable shows the number of malloc operations by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_free_count","title":"Tokudb_MEMORY_FREE_COUNT","text":"

    This variable shows the number of free operations by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_realloc_count","title":"Tokudb_MEMORY_REALLOC_COUNT","text":"

    This variable shows the number of realloc operations by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_malloc_fail","title":"Tokudb_MEMORY_MALLOC_FAIL","text":"

    This variable shows the number of PerconaFT malloc operations that failed.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_realloc_fail","title":"Tokudb_MEMORY_REALLOC_FAIL","text":"

    This variable shows the number of PerconaFT realloc operations that failed.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_requested","title":"Tokudb_MEMORY_REQUESTED","text":"

    This variable shows the number of bytes requested by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_used","title":"Tokudb_MEMORY_USED","text":"

    This variable shows the number of bytes used (requested + overhead) by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_freed","title":"Tokudb_MEMORY_FREED","text":"

    This variable shows the number of bytes freed by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_max_requested_size","title":"Tokudb_MEMORY_MAX_REQUESTED_SIZE","text":"

    This variable shows the largest attempted allocation size by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_last_failed_size","title":"Tokudb_MEMORY_LAST_FAILED_SIZE","text":"

    This variable shows the size of the last failed allocation attempt by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_mem_estimated_maximum_memory_footprint","title":"Tokudb_MEM_ESTIMATED_MAXIMUM_MEMORY_FOOTPRINT","text":"

    This variable shows the maximum memory footprint of the storage engine, that is, the maximum value of (used - freed).

    "},{"location":"tokudb-status-variables.html#tokudb_memory_mallocator_version","title":"Tokudb_MEMORY_MALLOCATOR_VERSION","text":"

    This variable shows the version of the memory allocator library detected by PerconaFT.

    "},{"location":"tokudb-status-variables.html#tokudb_memory_mmap_threshold","title":"Tokudb_MEMORY_MMAP_THRESHOLD","text":"

    This variable shows the mmap threshold in PerconaFT; allocations larger than this threshold are mmap'ed.
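
    To get a quick picture of PerconaFT allocator activity, the memory counters above can be listed together; this is only an illustrative query:

    SHOW GLOBAL STATUS LIKE 'Tokudb_MEM%';\n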

    "},{"location":"tokudb-status-variables.html#tokudb_filesystem_threads_blocked_by_full_disk","title":"Tokudb_FILESYSTEM_THREADS_BLOCKED_BY_FULL_DISK","text":"

    This variable shows the number of threads that are currently blocked because they are attempting to write to a full disk. This is normally zero. If this value is non-zero, then a warning will appear in the disk free space field.

    "},{"location":"tokudb-status-variables.html#tokudb_filesystem_fsync_time","title":"Tokudb_FILESYSTEM_FSYNC_TIME","text":"

    This variable shows the total time, in microseconds, used to fsync to disk.

    "},{"location":"tokudb-status-variables.html#tokudb_filesystem_fsync_num","title":"Tokudb_FILESYSTEM_FSYNC_NUM","text":"

    This variable shows the total number of times the database has flushed the operating system\u2019s file buffers to disk.

    "},{"location":"tokudb-status-variables.html#tokudb_filesystem_long_fsync_time","title":"Tokudb_FILESYSTEM_LONG_FSYNC_TIME","text":"

    This variable shows the total time, in microseconds, used to fsync to disk when the operation required more than one second.

    "},{"location":"tokudb-status-variables.html#tokudb_filesystem_long_fsync_num","title":"Tokudb_FILESYSTEM_LONG_FSYNC_NUM","text":"

    This variable shows the total number of times the database has flushed the operating system\u2019s file buffers to disk when the operation required more than one second.
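
    As a rough worked example (illustrative numbers only): if Tokudb_FILESYSTEM_FSYNC_TIME is 5,000,000 microseconds and Tokudb_FILESYSTEM_FSYNC_NUM is 1,000, the average fsync took about 5,000 microseconds (5 ms), while a growing Tokudb_FILESYSTEM_LONG_FSYNC_NUM points to occasional multi-second stalls.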

    "},{"location":"tokudb-troubleshooting.html","title":"TokuDB troubleshooting","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    "},{"location":"tokudb-troubleshooting.html#known-issues","title":"Known Issues","text":"

    Replication and binary logging: TokuDB supports binary logging and replication with one restriction. TokuDB does not implement a lock on the auto-increment function, so concurrent insert statements with one or more of the statements inserting multiple rows may result in a non-deterministic interleaving of the auto-increment values. When running replication with these concurrent inserts, the auto-increment values on the replica table may not match the auto-increment values on the source table. Note that this is only an issue with Statement Based Replication (SBR), and not Row Based Replication (RBR).

    For more information about auto-increment and replication, see the MySQL Reference Manual: AUTO_INCREMENT handling in InnoDB.

    In addition, when using REPLACE INTO or INSERT IGNORE on tables with no secondary indexes or tables where secondary indexes are subsets of the primary, the session variable tokudb_pk_insert_mode controls whether row-based replication will work.

    Uninformative error message: The LOAD DATA INFILE command can sometimes

    produce `ERROR 1030 (HY000): Got error 1 from storage engine`. The message\nshould say that the error is caused by insufficient disk space for the\ntemporary files created by the loader.\n

    Transparent Huge Pages: TokuDB will refuse to start if transparent huge

    pages are enabled. Transparent huge page support can be disabled by issuing the\nfollowing as root:\n
    # echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled\n


    Note

    The previous command needs to be executed after every reboot, because it defaults to always.
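
    On distributions that do not use the Red Hat-specific sysfs path, the upstream kernel exposes the equivalent switch at a slightly different location; a sketch, assuming a standard kernel layout:

    # echo never > /sys/kernel/mm/transparent_hugepage/enabled\n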

    XA behavior vs. InnoDB: InnoDB forces a deadlocked XA transaction to

    abort; *TokuDB* does not.\n

    Disabling the unique checks: For tables with unique keys, every insertion into the table causes a lookup by key followed by an insertion, if the key is not in the table. This greatly limits insertion performance. If one knows by design that the rows being inserted into the table have unique keys, then one can disable the key lookup prior to insertion.

    If your primary key is an auto-increment key, and none of your secondary keys are declared to be unique, then setting unique_checks=OFF will provide limited performance gains. On the other hand, if your primary key has a lot of entropy (it looks random), or your secondary keys are declared unique and have a lot of entropy, then disabling unique checks can provide a significant performance boost.

    If unique_checks is disabled when the primary key is not unique, secondary indexes may become corrupted. In this case, the indexes should be dropped and rebuilt. This behavior differs from that of InnoDB, in which uniqueness is always checked on the primary key, and setting unique_checks to off turns off uniqueness checking on secondary indexes only. Turning off uniqueness checking on the primary key can provide large performance boosts, but it should only be done when the primary key is known to be unique.
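
    A minimal sketch of a bulk load with uniqueness checking disabled; the table t1 and its rows are hypothetical, and this should only be done when the keys are known to be unique by construction:

    SET unique_checks=OFF;\nINSERT INTO t1 VALUES (1),(2),(3);\nSET unique_checks=ON;\n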

    Group Replication: The TokuDB storage engine does not support Group Replication.

    As of 8.0.17, InnoDB supports multi-valued indexes. TokuDB does not support this feature.

    As of 8.0.17, InnoDB supports the Clone Plugin and the Clone Plugin API. TokuDB tables do not support either of these features.

    "},{"location":"tokudb-troubleshooting.html#lock-visualization-in-tokudb","title":"Lock Visualization in TokuDB","text":"

    TokuDB uses key range locks to implement serializable transactions; the locks are acquired as the transaction progresses and released when the transaction commits or aborts (this implements two-phase locking).

    TokuDB stores these locks in a data structure called the lock tree. The lock tree stores the set of range locks granted to each transaction. In addition, the lock tree stores the set of locks that are not granted due to a conflict with locks granted to some other transaction. When these other transactions are retired, these pending lock requests are retried. If a pending lock request is not granted before the lock timer expires, then the lock request is aborted.

    Lock visualization in TokuDB exposes the state of the lock tree with tables in the information schema. We also provide a mechanism that may be used by a database client to retrieve details about lock conflicts that it encountered while executing a transaction.

    "},{"location":"tokudb-troubleshooting.html#the-tokudb_trx-table","title":"The TOKUDB_TRX table","text":"

    The TOKUDB_TRX table in the INFORMATION_SCHEMA maps TokuDB transaction identifiers to MySQL client identifiers. This mapping allows one to associate a TokuDB transaction with a MySQL client operation.

    The following query returns the MySQL clients that have a live TokuDB transaction:

    SELECT * FROM INFORMATION_SCHEMA.TOKUDB_TRX,\nINFORMATION_SCHEMA.PROCESSLIST\nWHERE trx_mysql_thread_id = id;\n
    "},{"location":"tokudb-troubleshooting.html#the-tokudb_locks-table","title":"The TOKUDB_LOCKS table","text":"

    The tokudb_locks table in the information schema contains the set of locks granted to TokuDB transactions.

    The following query returns all of the locks granted to some TokuDB transaction:

    SELECT * FROM INFORMATION_SCHEMA.TOKUDB_LOCKS;\n

    The following query returns the locks granted to some MySQL client:

    SELECT id FROM INFORMATION_SCHEMA.TOKUDB_LOCKS,\nINFORMATION_SCHEMA.PROCESSLIST\nWHERE locks_mysql_thread_id = id;\n
    "},{"location":"tokudb-troubleshooting.html#the-tokudb_lock_waits-table","title":"The TOKUDB_LOCK_WAITS table","text":"

    The tokudb_lock_waits table in the information schema contains the set of lock requests that are not granted due to a lock conflict with some other transaction.

    The following query returns the locks that are waiting to be granted due to a lock conflict with some other transaction:

    SELECT * FROM INFORMATION_SCHEMA.TOKUDB_LOCK_WAITS;\n
    "},{"location":"tokudb-troubleshooting.html#supporting-explicit-default-value-expressions-as-of-8013-3","title":"Supporting explicit DEFAULT value expressions as of 8.0.13-3","text":"

    TokuDB does not support explicit DEFAULT value expressions as of version 8.0.13-3.

    "},{"location":"tokudb-troubleshooting.html#the-tokudb_lock_timeout_debug-session-variable","title":"The tokudb_lock_timeout_debug session variable","text":"

    The tokudb_lock_timeout_debug session variable controls how lock timeouts and lock deadlocks seen by the database client are reported.

    The following values are available (a usage example follows the list):

    • 0

      No lock timeouts or lock deadlocks are reported.

    • 1

      A JSON document that describes the lock conflict is stored in the tokudb_last_lock_timeout session variable.

    • 2

      A JSON document that describes the lock conflict is printed to the MySQL error log.

      Supported since 7.5.5: In addition to the JSON document describing the lock conflict, the following lines are printed to the MySQL error log:

      • A line containing the blocked thread id and blocked SQL

      • A line containing the blocking thread id and the blocking SQL.

    • 3

      A JSON document that describes the lock conflict is stored in the tokudb_last_lock_timeout session variable and is printed to the MySQL error log.

      Supported since 7.5.5: In addition to the JSON document describing the lock conflict, the following lines are printed to the MySQL error log:

      • A line containing the blocked thread id and blocked SQL

      • A line containing the blocking thread id and the blocking SQL.
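
    For example, to both store the JSON document in the session variable and print it to the error log for the current session:

    SET SESSION tokudb_lock_timeout_debug = 3;\n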

    "},{"location":"tokudb-troubleshooting.html#the-tokudb_last_lock_timeout-session-variable","title":"The tokudb_last_lock_timeout session variable","text":"

    The tokudb_last_lock_timeout session variable contains a JSON document that describes the last lock conflict seen by the current MySQL client. It gets set when a blocked lock request times out or a lock deadlock is detected. The tokudb_lock_timeout_debug session variable should have bit 0 set (decimal 1).
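
    A short sketch of enabling bit 0 and then reading the stored document after a timeout has occurred:

    SET SESSION tokudb_lock_timeout_debug = 1;\nSELECT @@tokudb_last_lock_timeout;\n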

    "},{"location":"tokudb-troubleshooting.html#example","title":"Example","text":"

    Suppose that we create a table with a single column that is the primary key.

    mysql> SHOW CREATE TABLE table;\n\nCreate Table: CREATE TABLE `table` (\n`id` int(11) NOT NULL,\nPRIMARY KEY (`id`)) ENGINE=TokuDB DEFAULT CHARSET=latin1\n

    Suppose that we have two MySQL clients with IDs 1 and 2, respectively. Suppose that MySQL client 1 inserts some values into the table. TokuDB transaction 51 is created for the insert statement. Since autocommit is disabled, transaction 51 is still live after the insert statement completes, and we can query the tokudb_locks table in the information schema to see the locks that are held by the transaction.

    mysql> SET AUTOCOMMIT=OFF;\nmysql> INSERT INTO table VALUES (1),(10),(100);\n
    mysql> SELECT * FROM INFORMATION_SCHEMA.TOKUDB_LOCKS;\n
    mysql> SELECT * FROM INFORMATION_SCHEMA.TOKUDB_LOCK_WAITS;\n

    The keys are currently hex dumped.

    Now we switch to the other MySQL client with ID 2.

    mysql> INSERT INTO table VALUES (2),(20),(100);\n

    The insert gets blocked since there is a conflict on the primary key with value 100.

    The granted TokuDB locks are:

    SELECT * FROM INFORMATION_SCHEMA.TOKUDB_LOCKS;\n

    The locks that are pending due to a conflict are:

    SELECT * FROM INFORMATION_SCHEMA.TOKUDB_LOCK_WAITS;\n

    The output could be:

    +-------------------+-----------------+------------------+---------------------+----------------------+-----------------------+--------------------+------------------+-----------------------------+\n| requesting_trx_id | blocking_trx_id | lock_waits_dname | lock_waits_key_left | lock_waits_key_right | lock_waits_start_time | locks_table_schema | locks_table_name | locks_table_dictionary_name |\n+-------------------+-----------------+------------------+---------------------+----------------------+-----------------------+--------------------+------------------+-----------------------------+\n|                62 |              51 | ./test/t-main    | 0064000000          | 0064000000           |         1380656990910 | test               | t                | main                        |\n+-------------------+-----------------+------------------+---------------------+----------------------+-----------------------+--------------------+------------------+-----------------------------+\n

    Eventually, the lock for client 2 times out, and we can retrieve a JSON document that describes the conflict.

    SELECT @@TOKUDB_LAST_LOCK_TIMEOUT;\n
    ROLLBACK;\n

    Since transaction 62 was rolled back, all of the locks taken by it are released.

    SELECT * FROM INFORMATION_SCHEMA.TOKUDB_LOCKS;\n
    "},{"location":"tokudb-troubleshooting.html#engine-status","title":"Engine Status","text":"

    Engine status provides details about the inner workings of TokuDB and can be useful in tuning your particular environment. The engine status can be determined by running the following command:
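
    SHOW ENGINE tokudb STATUS;\n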

    The following is a reference of the table status statements:

    Table Status

    Description

    disk free space

    This is a gross estimate of how much of your file system is available. Possible displays in this field are:

    • More than twice the reserve (\u201cmore than 10 percent of total file system space\u201d)

    • Less than twice the reserve

    • Less than the reserve

    • File system is completely full

    time of environment creation

    This is the time when the TokuDB storage engine was first started up. Normally, this is when mysqld was initially installed with TokuDB. If the environment was upgraded from TokuDB 4.x (4.2.0 or later), then this will be displayed as \u201cDec 31, 1969\u201d on Linux hosts.

    time of engine startup

    This is the time when the TokuDB storage engine started up. Normally, this is when mysqld started.

    time now

    Current date/time on server.

    db opens

    This is the number of times an individual PerconaFT dictionary file was opened. This is not a useful value for a regular user to use for any purpose due to layers of open/close caching on top.

    db closes

    This is the number of times an individual PerconaFT dictionary file was closed. This is not a useful value for a regular user to use for any purpose due to layers of open/close caching on top.

    num open dbs now

    This is the number of currently open databases.

    max open dbs

    This is the maximum number of concurrently opened databases.

    period, in ms, that recovery log is automatically fsynced

    fsync() frequency in milliseconds.

    dictionary inserts

    This is the total number of rows that have been inserted into all primary and secondary indexes combined, when those inserts have been done with a separate recovery log entry per index. For example, inserting a row into a table with one primary and two secondary indexes will increase this count by three, if the inserts were done with separate recovery log entries.

    dictionary inserts fail

    This is the number of single-index insert operations that failed.

    dictionary deletes

    This is the total number of rows that have been deleted from all primary and secondary indexes combined, if those deletes have been done with a separate recovery log entry per index.

    dictionary deletes fail

    This is the number of single-index delete operations that failed.

    dictionary updates

    This is the total number of rows that have been updated in all primary and secondary indexes combined, if those updates have been done with a separate recovery log entry per index.

    dictionary updates fail

    This is the number of single-index update operations that failed.

    dictionary broadcast updates

    This is the number of broadcast updates that have been successfully performed. A broadcast update is an update that affects all rows in a dictionary.

    dictionary broadcast updates fail

    This is the number of broadcast updates that have failed.

    dictionary multi inserts

    This is the total number of rows that have been inserted into all primary and secondary indexes combined, when those inserts have been done with a single recovery log entry for the entire row. (For example, inserting a row into a table with one primary and two secondary indexes will normally increase this count by three).

    dictionary multi inserts fail

    This is the number of multi-index insert operations that failed.

    dictionary multi deletes

    This is the total number of rows that have been deleted from all primary and secondary indexes combined, when those deletes have been done with a single recovery log entry for the entire row.

    dictionary multi deletes fail

    This is the number of multi-index delete operations that failed.

    dictionary updates multi

    This is the total number of rows that have been updated in all primary and secondary indexes combined, if those updates have been done with a single recovery log entry for the entire row.

    dictionary updates fail multi

    This is the number of multi-index update operations that failed.

    le: max committed xr

    This is the maximum number of committed transaction records that were stored on disk in a new or modified row.

    le: max provisional xr

    This is the maximum number of provisional transaction records that were stored on disk in a new or modified row.

    le: expanded

    This is the number of times that an expanded memory mechanism was used to store a new or modified row on disk.

    le: max memsize

    This is the maximum number of bytes that were stored on disk as a new or modified row. This is the maximum uncompressed size of any row stored in TokuDB that was created or modified since the server started.

    le: size of leafentries before garbage collection (during message application)

    Total number of bytes of leaf node data before performing garbage collection for non-flush events.

    le: size of leafentries after garbage collection (during message application)

    Total number of bytes of leaf node data after performing garbage collection for non-flush events.

    le: size of leafentries before garbage collection (outside message application)

    Total number of bytes of leaf node data before performing garbage collection for flush events.

    le: size of leafentries after garbage collection (outside message application)

    Total number of bytes of leaf node data after performing garbage collection for flush events.

    checkpoint: period

    This is the interval in seconds between the end of an automatic checkpoint and the beginning of the next automatic checkpoint.

    checkpoint: footprint

    Where the database is in the checkpoint process.

    checkpoint: last checkpoint began

    This is the time the last checkpoint began. If a checkpoint is currently in progress, then this time may be later than the time the last checkpoint completed.

    Note

    If no checkpoint has ever taken place, then this value will be Dec 31, 1969 on Linux hosts.

    checkpoint: last complete checkpoint began

    This is the time the last complete checkpoint started. Any data that changed after this time will not be captured in the checkpoint.

    checkpoint: last complete checkpoint ended

    This is the time the last complete checkpoint ended.

    checkpoint: time spent during checkpoint (begin and end phases)

    Time (in seconds) required to complete all checkpoints.

    checkpoint: time spent during last checkpoint (begin and end phases)

    Time (in seconds) required to complete the last checkpoint.

    checkpoint: last complete checkpoint LSN

    This is the Log Sequence Number of the last complete checkpoint.

    checkpoint: checkpoints taken

    This is the number of complete checkpoints that have been taken.

    checkpoint: checkpoints failed

    This is the number of checkpoints that have failed for any reason.

    checkpoint: waiters now

    This is the current number of threads simultaneously waiting for the checkpoint-safe lock to perform a checkpoint.

    checkpoint: waiters max

    This is the maximum number of threads ever simultaneously waiting for the checkpoint-safe lock to perform a checkpoint.

    checkpoint: non-checkpoint client wait on mo lock

    The number of times a non-checkpoint client thread waited for the multi-operation lock.

    checkpoint: non-checkpoint client wait on cs lock

    The number of times a non-checkpoint client thread waited for the checkpoint-safe lock.

    checkpoint: checkpoint begin time

    Cumulative time (in microseconds) required to mark all dirty nodes as pending a checkpoint.

    checkpoint: long checkpoint begin time

    The total time, in microseconds, of long checkpoint begins. A long checkpoint begin is one taking more than 1 second.

    checkpoint: long checkpoint begin count

    The total number of times a checkpoint begin took more than 1 second.

    checkpoint: checkpoint end time

    The time spent in checkpoint end operation in seconds.

    checkpoint: long checkpoint end time

    The time spent, in seconds, in checkpoint end operations that took more than 1 minute.

    checkpoint: long checkpoint end count

    This is the count of end_checkpoint operations that exceeded 1 minute.

    cachetable: miss

    This is a count of how many times the application was unable to access your data in the internal cache.

    cachetable: miss time

    This is the total time, in microseconds, of how long the database has had to wait for a disk read to complete.

    cachetable: prefetches

    This is the total number of times that a block of memory has been prefetched into the database's cache. Data is prefetched when the database's algorithms determine that a block of memory is likely to be accessed by the application.

    cachetable: size current

    This shows how much of the uncompressed data, in bytes, is currently in the database's internal cache.

    cachetable: size limit

    This shows how much of the uncompressed data, in bytes, will fit in the database's internal cache.

    cachetable: size writing

    This is the number of bytes that are currently queued up to be written to disk.

    cachetable: size nonleaf

    This shows the amount of memory, in bytes, the current set of non-leaf nodes occupy in the cache.

    cachetable: size leaf

    This shows the amount of memory, in bytes, the current set of (decompressed) leaf nodes occupy in the cache.

    cachetable: size rollback

    This shows the rollback nodes size, in bytes, in the cache.

    cachetable: size cachepressure

    This shows the number of bytes causing cache pressure (the sum of the buffers and work-done counters). It helps you understand whether the cleaner threads are keeping up with the workload, and is best viewed as part of a ratio of cache pressure to cache table size: the closer that ratio is to 1, the higher the cache pressure. For example, a cache pressure value of 6 GB against an 8 GB cache table gives a ratio of 0.75, indicating fairly high pressure.

    cachetable: size currently cloned data for checkpoint

    Amount of memory, in bytes, currently used for cloned nodes. During the checkpoint operation, dirty nodes are cloned prior to serialization/compression and then written to disk, after which the memory for the cloned block is returned for reuse.

    cachetable: evictions

    Number of blocks evicted from cache.

    cachetable: cleaner executions

    Total number of times the cleaner thread loop has executed.

    cachetable: cleaner period

    TokuDB includes a cleaner thread that optimizes indexes in the background. This variable is the time, in seconds, between the completion of a group of cleaner operations and the beginning of the next group of cleaner operations. The cleaner operations run on a background thread performing work that does not need to be done on the client thread.

    cachetable: cleaner iterations

    This is the number of cleaner operations that are performed every cleaner period.

    cachetable: number of waits on cache pressure

    The number of times a thread was stalled due to cache pressure.

    cachetable: time waiting on cache pressure

    Total time, in microseconds, waiting on cache pressure to subside.

    cachetable: number of long waits on cache pressure

    The number of times a thread was stalled for more than 1 second due to cache pressure.

    cachetable: long time waiting on cache pressure

    Total time, in microseconds, waiting on cache pressure to subside for more than 1 second.

    cachetable: client pool: number of threads in pool

    The number of threads in the client thread pool.

    cachetable: client pool: number of currently active threads in pool

    The number of currently active threads in the client thread pool.

    cachetable: client pool: number of currently queued work items

    The number of currently queued work items in the client thread pool.

    cachetable: client pool: largest number of queued work items

    The largest number of queued work items in the client thread pool.

    cachetable: client pool: total number of work items processed

    The total number of work items processed in the client thread pool.

    cachetable: client pool: total execution time of processing work items

    The total execution time of processing work items in the client thread pool.

    cachetable: cachetable pool: number of threads in pool

    The number of threads in the cachetable thread pool.

    cachetable: cachetable pool: number of currently active threads in pool

    The number of currently active threads in the cachetable thread pool.

    cachetable: cachetable pool: number of currently queued work items

    The number of currently queued work items in the cachetable thread pool.

    cachetable: cachetable pool: largest number of queued work items

    The largest number of queued work items in the cachetable thread pool.

    cachetable: cachetable pool: total number of work items processed

    The total number of work items processed in the cachetable thread pool.

    cachetable: cachetable pool: total execution time of processing work items

    The total execution time of processing work items in the cachetable thread pool.

    cachetable: checkpoint pool: number of threads in pool

    The number of threads in the checkpoint thread pool.

    cachetable: checkpoint pool: number of currently active threads in pool

    The number of currently active threads in the checkpoint thread pool.

    cachetable: checkpoint pool: number of currently queued work items

    The number of currently queued work items in the checkpoint thread pool.

    cachetable: checkpoint pool: largest number of queued work items

    The largest number of queued work items in the checkpoint thread pool.

    cachetable: checkpoint pool: total number of work items processed

    The total number of work items processed in the checkpoint thread pool.

    cachetable: checkpoint pool: total execution time of processing work items

    The total execution time of processing work items in the checkpoint thread pool.

    locktree: memory size

    The amount of memory, in bytes, that the locktree is currently using.

    locktree: memory size limit

    The maximum amount of memory, in bytes, that the locktree is allowed to use.

    locktree: number of times lock escalation ran

    Number of times the locktree needed to run lock escalation to reduce its memory footprint.

    locktree: time spent running escalation (seconds)

    Total number of seconds spent performing locktree escalation.

    locktree: latest post-escalation memory size

    Size of the locktree, in bytes, after the most recent locktree escalation.

    locktree: number of locktrees open now

    Number of locktrees currently open.

    locktree: number of pending lock requests

    Number of requests waiting for a lock grant.

    locktree: number of locktrees eligible for the STO

    Number of locktrees eligible for "Single Transaction Optimizations". STO optimizations are behaviors that can happen within the locktree when there is exactly one transaction active within the locktree. This is not a useful value for a regular user to use for any purpose.

    locktree: number of times a locktree ended the STO early

    Total number of times a "single transaction optimization" was ended early due to another transaction starting.

    locktree: time spent ending the STO early (seconds)

    Total number of seconds spent ending "Single Transaction Optimizations". STO optimizations are behaviors that can happen within the locktree when there is exactly one transaction active within the locktree. This is not a useful value for a regular user to use for any purpose.

    locktree: number of wait locks

    Number of times that a lock request could not be acquired because of a conflict with some other transaction.

    locktree: time waiting for locks

    Total time, in microseconds, spent by a client waiting for a lock conflict to be resolved.

    locktree: number of long wait locks

    Number of lock waits greater than 1 second in duration.

    locktree: long time waiting for locks

    Total time, in microseconds, of the long waits.

    locktree: number of lock timeouts

    The number of times that a lock request timed out.

    locktree: number of waits on lock escalation

    When the sum of the sizes of the locks taken reaches the locktree limit, lock escalation runs on a background thread. Client threads must wait for escalation to consolidate locks and free up memory. This counts the number of times a client thread had to wait on lock escalation.

    locktree: time waiting on lock escalation

    Total time, in microseconds, that a client thread spent waiting for lock escalation to free up memory.

    locktree: number of long waits on lock escalation

    Number of times that a client thread had to wait on lock escalation and the wait time was greater than 1 second.

    locktree: long time waiting on lock escalation

    Total time, in microseconds, of the long waits for lock escalation to free up memory.

    ft: dictionary updates

    This is the total number of rows that have been updated in all primary and secondary indexes combined, if those updates have been done with a separate recovery log entry per index.

    ft: dictionary broadcast updates

    This is the number of broadcast updates that have been successfully performed. A broadcast update is an update that affects all rows in a dictionary.

    ft: descriptor set

    This is the number of times a descriptor was updated when the entire dictionary was updated (for example, when the schema has been changed).

    ft: messages ignored by leaf due to msn

    The number of messages that were ignored by a leaf because they had already been applied.

    ft: total search retries due to TRY AGAIN

    Total number of search retries due to TRY AGAIN. This is an internal value that is of no use to anyone other than a developer debugging a specific query/search issue.

    ft: searches requiring more tries than the height of the tree

    Number of searches that required more tries than the height of the tree.

    ft: searches requiring more tries than the height of the tree plus three

    Number of searches that required more tries than the height of the tree plus three.

    ft: leaf nodes flushed to disk (not for checkpoint)

    Number of leaf nodes flushed to disk, not for checkpoint.

    ft: leaf nodes flushed to disk (not for checkpoint) (bytes)

    Number of bytes of leaf nodes flushed to disk, not for checkpoint.

    ft: leaf nodes flushed to disk (not for checkpoint) (uncompressed bytes)

    Number of uncompressed bytes of leaf nodes flushed to disk, not for checkpoint.

    ft: leaf nodes flushed to disk (not for checkpoint) (seconds)

    Number of seconds waiting for IO when writing leaf nodes flushed to disk, not for checkpoint.

    ft: nonleaf nodes flushed to disk (not for checkpoint)

    Number of non-leaf nodes flushed to disk, not for checkpoint.

    ft: nonleaf nodes flushed to disk (not for checkpoint) (bytes)

    Number of bytes of non-leaf nodes flushed to disk, not for checkpoint.

    ft: nonleaf nodes flushed to disk (not for checkpoint) (uncompressed bytes)

    Number of uncompressed bytes of non-leaf nodes flushed to disk, not for checkpoint.

    ft: nonleaf nodes flushed to disk (not for checkpoint) (seconds)

    Number of seconds waiting for I/O when writing non-leaf nodes flushed to disk, not for checkpoint.

    ft: leaf nodes flushed to disk (for checkpoint)

    Number of leaf nodes flushed to disk for checkpoint.

    ft: leaf nodes flushed to disk (for checkpoint) (bytes)

    Number of bytes of leaf nodes flushed to disk for checkpoint.

    ft: leaf nodes flushed to disk (for checkpoint) (uncompressed bytes)

    Number of uncompressed bytes of leaf nodes flushed to disk for checkpoint.

    ft: leaf nodes flushed to disk (for checkpoint) (seconds)

    Number of seconds waiting for IO when writing leaf nodes flushed to disk for checkpoint.

    ft: nonleaf nodes flushed to disk (for checkpoint)

    Number of non-leaf nodes flushed to disk for checkpoint.

    ft: nonleaf nodes flushed to disk (for checkpoint) (bytes)

    Number of bytes of non-leaf nodes flushed to disk for checkpoint.

    ft: nonleaf nodes flushed to disk (for checkpoint) (uncompressed bytes)

    Number of uncompressed bytes of non-leaf nodes flushed to disk for checkpoint.

    ft: nonleaf nodes flushed to disk (for checkpoint) (seconds)

    Number of seconds waiting for IO when writing non-leaf nodes flushed to disk for checkpoint.

    ft: uncompressed / compressed bytes written (leaf)

    Ratio of uncompressed bytes (in-memory) to compressed bytes (on-disk) for leaf nodes.

    ft: uncompressed / compressed bytes written (nonleaf)

    Ratio of uncompressed bytes (in-memory) to compressed bytes (on-disk) for non-leaf nodes.

    ft: uncompressed / compressed bytes written (overall)

    Ratio of uncompressed bytes (in-memory) to compressed bytes (on-disk) for all nodes.

    ft: nonleaf node partial evictions

    The number of times a partition of a non-leaf node was evicted from the cache.

    ft: nonleaf node partial evictions (bytes)

    The number of bytes freed by evicting partitions of non-leaf nodes from the cache.

    ft: leaf node partial evictions

    The number of times a partition of a leaf node was evicted from the cache.

    ft: leaf node partial evictions (bytes)

    The number of bytes freed by evicting partitions of leaf nodes from the cache.

    ft: leaf node full evictions

    The number of times a full leaf node was evicted from the cache.

    ft: leaf node full evictions (bytes)

    The number of bytes freed by evicting full leaf nodes from the cache.

    ft: nonleaf node full evictions (bytes)

    The number of bytes freed by evicting full non-leaf nodes from the cache.

    ft: nonleaf node full evictions

    The number of times a full non-leaf node was evicted from the cache.

    ft: leaf nodes created

    Number of created leaf nodes.

    ft: nonleaf nodes created

    Number of created non-leaf nodes.

    ft: leaf nodes destroyed

    Number of destroyed leaf nodes.

    ft: nonleaf nodes destroyed

    Number of destroyed non-leaf nodes.

    ft: bytes of messages injected at root (all trees)

    Amount of messages, in bytes, injected at root (for all trees).

    ft: bytes of messages flushed from h1 nodes to leaves

    Amount of messages, in bytes, flushed from h1 nodes to leaves.

    ft: bytes of messages currently in trees (estimate)

    Amount of messages, in bytes, currently in trees (estimate).

    ft: messages injected at root

    Number of messages injected at root node of a tree.

    ft: broadcast messages injected at root

    Number of broadcast messages injected at root node of a tree.

    ft: basements decompressed as a target of a query

    Number of basement nodes decompressed for queries.

    ft: basements decompressed for prelocked range

    Number of basement nodes decompressed by queries aggressively.

    ft: basements decompressed for prefetch

    Number of basement nodes decompressed by a prefetch thread.

    ft: basements decompressed for write

    Number of basement nodes decompressed for writes.

    ft: buffers decompressed as a target of a query

    Number of buffers decompressed for queries.

    ft: buffers decompressed for prelocked range

    Number of buffers decompressed by queries aggressively.

    ft: buffers decompressed for prefetch

    Number of buffers decompressed by a prefetch thread.

    ft: buffers decompressed for write

    Number of buffers decompressed for writes.

    ft: pivots fetched for query

    Number of pivot nodes fetched for queries.

    ft: pivots fetched for query (bytes)

    Number of bytes of pivot nodes fetched for queries.

    ft: pivots fetched for query (seconds)

    Number of seconds waiting for I/O when fetching pivot nodes for queries.

    ft: pivots fetched for prefetch

    Number of pivot nodes fetched by a prefetch thread.

    ft: pivots fetched for prefetch (bytes)

    Number of bytes of pivot nodes fetched by a prefetch thread.

    ft: pivots fetched for prefetch (seconds)

    Number of seconds waiting for I/O when fetching pivot nodes by a prefetch thread.

    ft: pivots fetched for write

    Number of pivot nodes fetched for writes.

    ft: pivots fetched for write (bytes)

    Number of bytes of pivot nodes fetched for writes.

    ft: pivots fetched for write (seconds)

    Number of seconds waiting for I/O when fetching pivot nodes for writes.

    ft: basements fetched as a target of a query

    Number of basement nodes fetched from disk for queries.

    ft: basements fetched as a target of a query (bytes)

    Number of basement node bytes fetched from disk for queries.

    ft: basements fetched as a target of a query (seconds)

    Number of seconds waiting for IO when fetching basement nodes from disk for queries.

    ft: basements fetched for prelocked range

    Number of basement nodes fetched from disk aggressively.

    ft: basements fetched for prelocked range (bytes)

    Number of basement node bytes fetched from disk aggressively.

    ft: basements fetched for prelocked range (seconds)

    Number of seconds waiting for I/O when fetching basement nodes from disk aggressively.

    ft: basements fetched for prefetch

    Number of basement nodes fetched from disk by a prefetch thread.

    ft: basements fetched for prefetch (bytes)

    Number of basement node bytes fetched from disk by a prefetch thread.

    ft: basements fetched for prefetch (seconds)

    Number of seconds waiting for I/O when fetching basement nodes from disk by a prefetch thread.

    ft: basements fetched for write

    Number of basement nodes fetched from disk for writes.

    ft: basements fetched for write (bytes)

    Number of basement node bytes fetched from disk for writes.

    ft: basements fetched for write (seconds)

    Number of seconds waiting for I/O when fetching basement nodes from disk for writes.

    ft: buffers fetched as a target of a query

    Number of buffers fetched from disk for queries.

    ft: buffers fetched as a target of a query (bytes)

    Number of buffer bytes fetched from disk for queries.

    ft: buffers fetched as a target of a query (seconds)

    Number of seconds waiting for I/O when fetching buffers from disk for queries.

    ft: buffers fetched for prelocked range

    Number of buffers fetched from disk aggressively.

    ft: buffers fetched for prelocked range (bytes)

    Number of buffer bytes fetched from disk aggressively.

    ft: buffers fetched for prelocked range (seconds)

    Number of seconds waiting for I/O when fetching buffers from disk aggressively.

    ft: buffers fetched for prefetch

    Number of buffers fetched from disk by a prefetch thread.

    ft: buffers fetched for prefetch (bytes)

    Number of buffer bytes fetched from disk by a prefetch thread.

    ft: buffers fetched for prefetch (seconds)

    Number of seconds waiting for I/O when fetching buffers from disk by a prefetch thread.

    ft: buffers fetched for write

    Number of buffers fetched from disk for writes.

    ft: buffers fetched for write (bytes)

    Number of buffer bytes fetched from disk for writes.

    ft: buffers fetched for write (seconds)

    Number of seconds waiting for I/O when fetching buffers from disk for writes.

    ft: leaf compression to memory (seconds)

    Total time, in seconds, spent compressing leaf nodes.

    ft: leaf serialization to memory (seconds)

    Total time, in seconds, spent serializing leaf nodes.

    ft: leaf decompression to memory (seconds)

    Total time, in seconds, spent decompressing leaf nodes.

    ft: leaf deserialization to memory (seconds)

    Total time, in seconds, spent deserializing leaf nodes.

    ft: nonleaf compression to memory (seconds)

    Total time, in seconds, spent compressing non leaf nodes.

    ft: nonleaf serialization to memory (seconds)

    Total time, in seconds, spent serializing non leaf nodes.

    ft: nonleaf decompression to memory (seconds)

    Total time, in seconds, spent decompressing non leaf nodes.

    ft: nonleaf deserialization to memory (seconds)

    Total time, in seconds, spent deserializing non leaf nodes.

    ft: promotion: roots split

    Number of times the root split during promotion.

    ft: promotion: leaf roots injected into

    Number of times a message stopped at a root with height 0.

    ft: promotion: h1 roots injected into

    Number of times a message stopped at a root with height 1.

    ft: promotion: injections at depth 0

    Number of times a message stopped at depth 0.

    ft: promotion: injections at depth 1

    Number of times a message stopped at depth 1.

    ft: promotion: injections at depth 2

    Number of times a message stopped at depth 2.

    ft: promotion: injections at depth 3

    Number of times a message stopped at depth 3.

    ft: promotion: injections lower than depth 3

    Number of times a message was promoted past depth 3.

    ft: promotion: stopped because of a nonempty buffer

    Number of times a message stopped because it reached a nonempty buffer.

    ft: promotion: stopped at height 1

    Number of times a message stopped because it had reached height 1.

    ft: promotion: stopped because the child was locked or not at all in memory

    Number of times promotion was stopped because the child node was locked or not at all in memory. This is not a useful value for a regular user to use for any purpose.

    ft: promotion: stopped because the child was not fully in memory

    Number of times promotion was stopped because the child node was not fully in memory. This is not a useful value for a regular user to use for any purpose.

    ft: promotion: stopped anyway, after locking the child

    Number of times a message stopped before a child which had been locked.

    ft: basement nodes deserialized with fixed-keysize

    The number of basement nodes deserialized where all keys had the same size, leaving the basement in a format that is optimal for in-memory workloads.

    ft: basement nodes deserialized with variable-keysize

    The number of basement nodes deserialized where all keys did not have the same size, and were thus ineligible for an in-memory optimization.

    ft: promotion: succeeded in using the rightmost leaf shortcut

    Rightmost insertions used the rightmost-leaf pin path, meaning that the Fractal Tree index detected and properly optimized rightmost inserts.

    ft: promotion: tried the rightmost leaf shortcut but failed (out-of-bounds)

    Rightmost insertions did not use the rightmost-leaf pin path, due to the insert not actually being into the rightmost leaf node.

    ft: promotion: tried the rightmost leaf shortcut but failed (child reactive)

    Rightmost insertions did not use the rightmost-leaf pin path, due to the leaf being too large (needed to split).

    ft: cursor skipped deleted leaf entries

    Number of leaf entries skipped during search/scan because message application and reconciliation of the leaf entry MVCC stack reveal that the leaf entry is deleted in the current transaction's view. If a range scan seems to take too long, this is a good indicator that there might be excessive garbage in a tree.

    ft flusher: total nodes potentially flushed by cleaner thread

    Total number of nodes whose buffers are potentially flushed by cleaner thread.

    ft flusher: height-one nodes flushed by cleaner thread

    Number of nodes of height one whose message buffers are flushed by cleaner thread.

    ft flusher: height-greater-than-one nodes flushed by cleaner thread

    Number of nodes of height > 1 whose message buffers are flushed by cleaner thread.

    ft flusher: nodes cleaned which had empty buffers

    Number of nodes that are selected by cleaner, but whose buffers are empty.

    ft flusher: nodes dirtied by cleaner thread

    Number of nodes that are made dirty by the cleaner thread.

    ft flusher: max bytes in a buffer flushed by cleaner thread

    Max number of bytes in message buffer flushed by cleaner thread.

    ft flusher: min bytes in a buffer flushed by cleaner thread

    Min number of bytes in message buffer flushed by cleaner thread.

    ft flusher: total bytes in buffers flushed by cleaner thread

    Total number of bytes in message buffers flushed by cleaner thread.

    ft flusher: max workdone in a buffer flushed by cleaner thread

    Max workdone value of any message buffer flushed by cleaner thread.

    ft flusher: min workdone in a buffer flushed by cleaner thread

    Min workdone value of any message buffer flushed by cleaner thread.

    ft flusher: total workdone in buffers flushed by cleaner thread

    Total workdone value of message buffers flushed by cleaner thread.

    ft flusher: times cleaner thread tries to merge a leaf

    The number of times the cleaner thread tries to merge a leaf.

    ft flusher: cleaner thread leaf merges in progress

    The number of cleaner thread leaf merges in progress.

    ft flusher: cleaner thread leaf merges successful

    The number of times the cleaner thread successfully merges a leaf.

    ft flusher: nodes dirtied by cleaner thread leaf merges

    The number of nodes dirtied by the "flush from root" process to merge a leaf node.

    ft flusher: total number of flushes done by flusher threads or cleaner threads

    Total number of flushes done by flusher threads or cleaner threads.

    ft flusher: number of in memory flushes

    Number of in-memory flushes.

    ft flusher: number of flushes that read something off disk

    Number of flushes that had to read a child (or part) off disk.

    ft flusher: number of flushes that triggered another flush in child

    Number of flushes that triggered another flush in the child.

    ft flusher: number of flushes that triggered 1 cascading flush

    Number of flushes that triggered 1 cascading flush.

    ft flusher: number of flushes that triggered 2 cascading flushes

    Number of flushes that triggered 2 cascading flushes.

    ft flusher: number of flushes that triggered 3 cascading flushes

    Number of flushes that triggered 3 cascading flushes.

    ft flusher: number of flushes that triggered 4 cascading flushes

    Number of flushes that triggered 4 cascading flushes.

    ft flusher: number of flushes that triggered 5 cascading flushes

    Number of flushes that triggered 5 cascading flushes.

    ft flusher: number of flushes that triggered over 5 cascading flushes

    Number of flushes that triggered more than 5 cascading flushes.

    ft flusher: leaf node splits

    Number of leaf nodes split.

    ft flusher: nonleaf node splits

    Number of non-leaf nodes split.

    ft flusher: leaf node merges

    Number of times leaf nodes are merged.

    ft flusher: nonleaf node merges

    Number of times non-leaf nodes are merged.

    ft flusher: leaf node balances

    Number of times a leaf node is balanced.

    hot: operations ever started

    This variable shows the number of hot operations started (OPTIMIZE TABLE). This is not a useful value for a regular user to use for any purpose.

    hot: operations successfully completed

    The number of hot operations that have successfully completed (OPTIMIZE TABLE). This is not a useful value for a regular user to use for any purpose.

    hot: operations aborted

    The number of hot operations that have been aborted (OPTIMIZE TABLE). This is not a useful value for a regular user to use for any purpose.

    hot: max number of flushes from root ever required to optimize a tree

    The maximum number of flushes from the root ever required to optimize a tree.

    txn: begin

    This is the number of transactions that have been started.

    txn: begin read only

    Number of read only transactions started.

    txn: successful commits

    This is the total number of transactions that have been committed.

    txn: aborts

    This is the total number of transactions that have been aborted.

    logger: next LSN

    This is the next unassigned Log Sequence Number. It will be assigned to the next entry in the recovery log.

    logger: writes

    Number of times the logger has written to disk.

    logger: writes (bytes)

    Number of bytes the logger has written to disk.

    logger: writes (uncompressed bytes)

    Number of uncompressed bytes the logger has written to disk.

    logger: writes (seconds)

    Number of seconds waiting for I/O when writing logs to disk.

    logger: number of long logger write operations

    Number of times a logger write operation required 100ms or more.

    indexer: number of indexers successfully created

    This is the number of times one of our internal objects, an indexer, has been created.

    indexer: number of calls to toku_indexer_create_indexer() that failed

    This is the number of times an indexer was requested but could not be created.

    indexer: number of calls to indexer->build() succeeded

    This is the total number of times that indexes were created using an indexer.

    indexer: number of calls to indexer->build() failed

    This is the total number of times that indexes were unable to be created using an indexer.

    indexer: number of calls to indexer->close() that succeeded

    This is the number of indexers that successfully created the requested index(es).

    indexer: number of calls to indexer->close() that failed

    This is the number of indexers that were unable to create the requested index(es).

    indexer: number of calls to indexer->abort()

    This is the number of indexers that were aborted.

    indexer: number of indexers currently in existence

    This is the number of indexers that currently exist.

    indexer: max number of indexers that ever existed simultaneously

    This is the maximum number of indexers that ever existed simultaneously.

    loader: number of loaders successfully created

    This is the number of times one of our internal objects, a loader, has been created.

    loader: number of calls to toku_loader_create_loader() that failed

    This is the number of times a loader was requested but could not be created.

    loader: number of calls to loader->put() succeeded

    This is the total number of rows that were inserted using a loader.

    loader: number of calls to loader->put() failed

    This is the total number of rows that were unable to be inserted using a loader.

    loader: number of calls to loader->close() that succeeded

    This is the number of loaders that successfully created the requested table.

    loader: number of calls to loader->close() that failed

    This is the number of loaders that were unable to create the requested table.

    loader: number of calls to loader->abort()

    This is the number of loaders that were aborted.

    loader: number of loaders currently in existence

    This is the number of loaders that currently exist.

    loader: max number of loaders that ever existed simultaneously

    This is the maximum number of loaders that ever existed simultaneously.

    memory: number of malloc operations

    Number of calls to malloc().

    memory: number of free operations

    Number of calls to free().

    memory: number of realloc operations

    Number of calls to realloc().

    memory: number of malloc operations that failed

    Number of failed calls to malloc().

    memory: number of realloc operations that failed

    Number of failed calls to realloc().

    memory: number of bytes requested

    Total number of bytes requested from the memory allocator library.

    memory: number of bytes freed

    Total number of bytes allocated by the memory allocator library that have been freed (used - freed = bytes in use).

    memory: largest attempted allocation size

    Largest number of bytes in a single successful malloc() operation.

    memory: size of the last failed allocation attempt

    Largest number of bytes in a single failed malloc() operation.

    memory: number of bytes used (requested + overhead)

    Total number of bytes allocated by memory allocator library.

    memory: estimated maximum memory footprint

    Maximum memory footprint of the storage engine, the max value of (used - freed).

    memory: mallocator version

    Version string from in-use memory allocator.

    memory: mmap threshold

    The threshold for malloc to use mmap.

    filesystem: ENOSPC redzone state

    The state of how much disk space exists with respect to the red zone value. The red zone is space greater than tokudb_fs_reserve_percent and less than a full disk. An example of inspecting this state follows the value list below.

    Valid values are:

    0

    Space is available

    1

    Warning: free space is within 2x of the red zone value. Operations are allowed, but the engine status prints a warning.

    2

    In the red zone; insert operations are blocked

    3

    All operations are blocked
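
    As a sketch of how to inspect this state, the value is reported as the filesystem: ENOSPC redzone state row in the engine status output:

    mysql> SHOW ENGINE tokudb STATUS;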

    filesystem: threads currently blocked by full disk

    This is the number of threads that are currently blocked because they are attempting to write to a full disk. This is normally zero. If this value is non-zero, then a warning will appear in the "disk free space" field.

    filesystem: number of operations rejected by enospc prevention (red zone)

    This is the number of database inserts that have been rejected because the amount of disk free space was less than the reserve.

    filesystem: most recent disk full

    This is the most recent time when the disk file system was entirely full. If the disk has never been full, then this value will be Dec 31, 1969 on Linux hosts.

    filesystem: number of write operations that returned ENOSPC

    This is the number of times that an attempt to write to disk failed because the disk was full. If the disk is full, this number will continue increasing until space is available.

    filesystem: fsync time

    This is the total time, in microseconds, used to fsync to disk.

    filesystem: fsync count

    This is the total number of times the database has flushed the operating system's file buffers to disk.

    filesystem: long fsync time

    This is the total time, in microseconds, used to fsync to disk when the operation required more than 1 second.

    filesystem: long fsync count

    This is the total number of times the database has flushed the operating system's file buffers to disk and the operation required more than 1 second.

    context: tree traversals blocked by a full fetch

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of a full fetch.

    context: tree traversals blocked by a partial fetch

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of a partial fetch.

    context: tree traversals blocked by a full eviction

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of a full eviction.

    context: tree traversals blocked by a partial eviction

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of a partial eviction.

    context: tree traversals blocked by a message injection

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of message injection.

    context: tree traversals blocked by a message application

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of message application (applying fresh ancestors messages to a basement node).

    context: tree traversals blocked by a flush

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of a buffer flush from parent to child.

    context: tree traversals blocked by the cleaner thread

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of a cleaner thread.

    context: tree traversals blocked by something uninstrumented

    Number of times node rwlock contention was observed while pinning nodes from root to leaf because of something uninstrumented.

    context: promotion blocked by a full fetch (should never happen)

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of a full fetch.

    context: promotion blocked by a partial fetch (should never happen)

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of a partial fetch.

    context: promotion blocked by a full eviction (should never happen)

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of a full eviction.

    context: promotion blocked by a partial eviction (should never happen)

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of a partial eviction.

    context: promotion blocked by a message injection

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of message injection.

    context: promotion blocked by a message application

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of message application (applying fresh ancestors messages to a basement node).

    context: promotion blocked by a flush

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of a buffer flush from parent to child.

    context: promotion blocked by the cleaner thread

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of a cleaner thread.

    context: promotion blocked by something uninstrumented

    Number of times node rwlock contention was observed within promotion (pinning nodes from root to the buffer to receive the message) because of something uninstrumented.

    context: something uninstrumented blocked by something uninstrumented

    Number of times node rwlock contention was observed for an uninstrumented process because of something uninstrumented.

    handlerton: primary key bytes inserted

    Total number of bytes inserted into all primary key indexes.

    "},{"location":"tokudb-variables.html","title":"TokuDB variables","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    Like all storage engines, TokuDB has variables to tune performance and control behavior. Fractal Tree algorithms are designed for near-optimal performance, and TokuDB's default settings should work well in most situations, eliminating the need for complex and time-consuming tuning.

    "},{"location":"tokudb-variables.html#tokudb-server-variables","title":"TokuDB Server Variables","text":"Name Cmd-Line Option File Var Scope Dynamic tokudb_alter_print_error Yes Yes Session, Global Yes tokudb_analyze_delete_fraction Yes Yes Session, Global Yes tokudb_analyze_in_background Yes Yes Session, Global Yes tokudb_analyze_mode Yes Yes Session, Global Yes tokudb_analyze_throttle Yes Yes Session, Global Yes tokudb_analyze_time Yes Yes Session, Global Yes tokudb_auto_analyze Yes Yes Session, Global Yes tokudb_backup_allowed_prefix No Yes Global No tokudb_backup_dir No Yes Session No tokudb_backup_exclude Yes Yes Session, Global Yes tokudb_backup_last_error Yes Yes Session, Global Yes tokudb_backup_last_error_string Yes Yes Session, Global Yes tokudb_backup_plugin_version No No Global No tokudb_backup_throttle Yes Yes Session, Global Yes tokudb_backup_version No No Global No tokudb_block_size Yes Yes Session, Global Yes tokudb_bulk_fetch Yes Yes Session, Global Yes tokudb_cachetable_pool_threads Yes Yes Global No tokudb_cardinality_scale_percent Yes Yes Global Yes tokudb_check_jemalloc Yes Yes Global No tokudb_checkpoint_lock Yes Yes Global No tokudb_checkpoint_on_flush_logs Yes Yes Global Yes tokudb_checkpoint_pool_threads Yes Yes Global Yes tokudb_checkpointing_period Yes Yes Global Yes tokudb_cleaner_iterations Yes Yes Global Yes tokudb_cleaner_period Yes Yes Global Yes tokudb_client_pool_threads Yes Yes Global No tokudb_commit_sync Yes Yes Session, Global Yes [tokudb_compress_buffers_before_eviction](#tokudb_compress_buffers_before_eviction Yes Yes Global No tokudb_create_index_online Yes Yes Session, Global Yes tokudb_data_dir Yes Yes Global No tokudb_debug Yes Yes Global Yes tokudb_dir_per_db Yes Yes Global Yes tokudb_directio Yes Yes Global No tokudb_disable_hot_alter Yes Yes Session, Global Yes tokudb_disable_prefetching Yes Yes Session, Global Yes tokudb_disable_slow_alter Yes Yes Session, Global Yes tokudb_empty_scan Yes Yes Session, Global Yes tokudb_enable_fast_update Yes Yes Session, Global Yes tokudb_enable_fast_upsert Yes Yes Session, Global Yes tokudb_enable_partial_eviction Yes Yes Global No tokudb_fanout Yes Yes Session, Global Yes tokudb_fs_reserve_percent Yes Yes Global No tokudb_fsync_log_period Yes Yes Global Yes tokudb_hide_default_row_format Yes Yes Session, Global Yes tokudb_killed_time Yes Yes Session, Global Yes tokudb_last_lock_timeout Yes Yes Session, Global Yes tokudb_load_save_space Yes Yes Session, Global Yes tokudb_loader_memory_size Yes Yes Session, Global Yes tokudb_lock_timeout Yes Yes Session, Global Yes tokudb_lock_timeout_debug Yes Yes Session, Global Yes tokudb_log_dir Yes Yes Global No tokudb_max_lock_memory Yes Yes Global No tokudb_optimize_index_fraction Yes Yes Session, Global Yes tokudb_optimize_index_name Yes Yes Session, Global Yes tokudb_optimize_throttle Yes Yes Session, Global Yes tokudb_pk_insert_mode Yes Yes Session, Global Yes tokudb_prelock_empty Yes Yes Session, Global Yes tokudb_read_block_size Yes Yes Session, Global Yes tokudb_read_buf_size Yes Yes Session, Global Yes tokudb_read_status_frequency Yes Yes Global Yes tokudb_row_format Yes Yes Session, Global Yes tokudb_rpl_check_readonly Yes Yes Session, Global Yes tokudb_rpl_lookup_rows Yes Yes Session, Global Yes tokudb_rpl_lookup_rows_delay Yes Yes Session, Global Yes tokudb_rpl_unique_checks Yes Yes Session, Global Yes tokudb_rpl_unique_checks_delay Yes Yes Session, Global Yes tokudb_strip_frm_data Yes Yes Global No tokudb_support_xa Yes Yes Session, Global Yes 
tokudb_tmp_dir Yes Yes Global No tokudb_version No No Global No tokudb_write_status_frequency Yes Yes Global Yes"},{"location":"tokudb-variables.html#tokudb_alter_print_error","title":"tokudb_alter_print_error","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Boolean Default OFF

    When set to ON, errors are printed to the client during ALTER TABLE operations on TokuDB tables.

    "},{"location":"tokudb-variables.html#tokudb_analyze_delete_fraction","title":"tokudb_analyze_delete_fraction","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Numeric Default 1.000000 Range 0.0 - 1.000000

    This variable controls whether deleted rows in the fractal tree are reported to the client and to the MySQL error log during an ANALYZE TABLE operation on a TokuDB table. When set to 1, nothing is reported. When set to 0.1, and at least 10% of the rows scanned by ANALYZE were deleted rows that are not yet garbage collected, a report is returned to the client and written to the MySQL error log.
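
    For example, a minimal session-level sketch (the table name t1 is hypothetical) that triggers a report when at least 10% of the scanned rows are not-yet-collected deletes:

    mysql> SET SESSION tokudb_analyze_delete_fraction=0.1;
    mysql> ANALYZE TABLE t1;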

    "},{"location":"tokudb-variables.html#tokudb_backup_allowed_prefix","title":"tokudb_backup_allowed_prefix","text":"Option Description Command-line No Config file Yes Scope Global Dynamic No Data type String Default NULL

    This system-level variable restricts the location of the destination directory where backups can be placed. Attempts to back up to a location outside of the directory this variable points to, or its children, will result in an error.

    The default is NULL; in that case, backup locations are not restricted. This read-only variable can be set in the my.cnf configuration file and displayed with the SHOW VARIABLES command when the Percona TokuBackup plugin is loaded.

    mysql> SHOW VARIABLES LIKE 'tokudb_backup_allowed_prefix';

    The output could be:

    +------------------------------+-----------+
    | Variable_name                | Value     |
    +------------------------------+-----------+
    | tokudb_backup_allowed_prefix | /dumpdir  |
    +------------------------------+-----------+
    "},{"location":"tokudb-variables.html#tokudb_backup_dir","title":"tokudb_backup_dir","text":"Option Description Command-line No Config file No Scope Session Dynamic Yes Data type String Default NULL

    When set, this session-level variable serves two purposes: it points to the destination directory where the backup will be dumped, and it kicks off the backup as soon as it is set. For more information see Percona TokuBackup.
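
    For example (the destination path /dumpdir/backup1 is hypothetical), setting the variable starts the backup immediately:

    mysql> SET tokudb_backup_dir='/dumpdir/backup1';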

    "},{"location":"tokudb-variables.html#tokudb_backup_exclude","title":"tokudb_backup_exclude","text":"Option Description Command-line No Config file No Scope Session Dynamic Yes Data type String Default (mysqld_safe.pid)+

    Use this variable to set a regular expression that defines source files excluded from backup. For example, to exclude all lost+found directories, use the following command:

    mysql> set tokudb_backup_exclude='/lost\\+found($|/)';

    For more information see Percona TokuBackup.

    "},{"location":"tokudb-variables.html#tokudb_backup_last_error","title":"tokudb_backup_last_error","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 0

    This session variable will contain the error number from the last backup. 0 indicates success. For more information see Percona TokuBackup.

    "},{"location":"tokudb-variables.html#tokudb_backup_last_error_string","title":"tokudb_backup_last_error_string","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type String Default NULL

    This session variable will contain the error string from the last backup. For more information see Percona TokuBackup.

    "},{"location":"tokudb-variables.html#tokudb_backup_plugin_version","title":"tokudb_backup_plugin_version","text":"Option Description Command-line No Config file No Scope Global Dynamic No Data type String

    This read-only server variable documents the version of the TokuBackup plugin. For more information see Percona TokuBackup.

    "},{"location":"tokudb-variables.html#tokudb_backup_throttle","title":"tokudb_backup_throttle","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 18446744073709551615

    This variable specifies the maximum number of bytes per second the copier of a hot backup process will consume. Lowering its value will cause the hot backup operation to take more time but consume less I/O on the server. The default value is 18446744073709551615 which means no throttling. For more information see Percona TokuBackup.
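
    For example, to limit the hot backup copier to roughly 10 MB per second (the figure is illustrative):

    mysql> SET tokudb_backup_throttle=10485760;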

    "},{"location":"tokudb-variables.html#tokudb_backup_version","title":"tokudb_backup_version","text":"Option Description Command-line No Config file No Scope Global Dynamic No Data type String

    This read-only server variable documents the version of the hot backup library. For more information see Percona TokuBackup.

    "},{"location":"tokudb-variables.html#tokudb_block_size","title":"tokudb_block_size","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 512 MB Range 4096 - 4294967295

    This variable controls the maximum size of a node in memory before messages must be flushed or the node must be split.

    Changing the value of tokudb_block_size only affects subsequently created tables and indexes. The value of this variable cannot be changed for an existing table/index without a dump and reload.
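
    Because the value only affects tables and indexes created afterward, a sketch would set the session variable before creating the table (the table name t1 and the 4 MB value are illustrative):

    mysql> SET SESSION tokudb_block_size=4194304;
    mysql> CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=TokuDB;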

    "},{"location":"tokudb-variables.html#tokudb_bulk_fetch","title":"tokudb_bulk_fetch","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    This variable determines if our bulk fetch algorithm is used for SELECT statements. SELECT statements include pure SELECT ... statements, as well as INSERT INTO table-name ... SELECT ..., CREATE TABLE table-name ... SELECT ..., REPLACE INTO table-name ... SELECT ..., INSERT IGNORE INTO table-name ... SELECT ..., and INSERT INTO table-name ... SELECT ... ON DUPLICATE KEY UPDATE.

    "},{"location":"tokudb-variables.html#tokudb_cache_size","title":"tokudb_cache_size","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Numeric

    This variable configures the size, in bytes, of the TokuDB cache table. The default cache table size is ½ of physical memory. Percona highly recommends using the default setting if you are using buffered I/O; if you are using direct I/O, consider setting this parameter to 80% of available memory.

    Consider decreasing tokudb_cache_size if excessive swapping is causing performance problems. Swapping may occur when running multiple MySQL server instances or if other running applications use large amounts of physical memory.
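
    Because this variable is not dynamic, it must be set at server startup; for example, in my.cnf (the 16G figure is illustrative, e.g. roughly 80% of a 20 GB host when using direct I/O):

    [mysqld]
    tokudb_cache_size=16G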

    "},{"location":"tokudb-variables.html#tokudb_cachetable_pool_threads","title":"tokudb_cachetable_pool_threads","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 0 Range 0 - 1024

    This variable defines the number of threads for the cachetable worker thread pool. This pool is used to perform node prefetches, and to serialize, compress, and write nodes during cachetable eviction. The default value of 0 calculates the pool size to be num_cpu_threads * 2.

    "},{"location":"tokudb-variables.html#tokudb_check_jemalloc","title":"tokudb_check_jemalloc","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Boolean Default OFF

    This variable enables/disables a startup check that verifies jemalloc is linked and is the correct version, and that transparent huge pages are disabled. Used for testing only.

    "},{"location":"tokudb-variables.html#tokudb_checkpoint_lock","title":"tokudb_checkpoint_lock","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default OFF

    When enabled, checkpointing is disabled. Although this is a session variable, it acts like a global one: any session that disables checkpointing disables it globally. If a session sets this lock and then disconnects or terminates for any reason, the lock will not be released. Special purpose only; do not use this in your application.
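
    A sketch of the intended special-purpose use, bracketing an external snapshot so that no checkpoint runs in between (the snapshot step is a placeholder):

    mysql> SET tokudb_checkpoint_lock=ON;
    -- take the volume snapshot here (placeholder step)
    mysql> SET tokudb_checkpoint_lock=OFF;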

    "},{"location":"tokudb-variables.html#tokudb_checkpoint_on_flush_logs","title":"tokudb_checkpoint_on_flush_logs","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Boolean Default OFF

    When enabled, a FLUSH LOGS command from the server forces a checkpoint.

    "},{"location":"tokudb-variables.html#tokudb_checkpoint_pool_threads","title":"tokudb_checkpoint_pool_threads","text":"Option Description Command-line Yes Config file Yes Scope Dynamic No Data type Numeric Default 0 Range 0 - 1024

    This defines the number of threads for the checkpoint worker thread pool. This pool is used to serialize, compress, and write nodes cloned during checkpoint. The default of 0 uses the old algorithm, which sets the pool size to num_cpu_threads / 4.

    "},{"location":"tokudb-variables.html#tokudb_checkpointing_period","title":"tokudb_checkpointing_period","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 60 Range 0 - 4294967295

    This variable specifies the time in seconds between the beginning of one checkpoint and the beginning of the next. The default time between TokuDB checkpoints is 60 seconds. We recommend leaving this variable unchanged.

    "},{"location":"tokudb-variables.html#tokudb_cleaner_iterations","title":"tokudb_cleaner_iterations","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 5 Range 0 - 18446744073709551615

    This variable specifies how many internal nodes get processed in each tokudb_cleaner_period period. The default value is 5. Setting this variable to 0 turns off cleaner threads.

    "},{"location":"tokudb-variables.html#tokudb_cleaner_period","title":"tokudb_cleaner_period","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 1 Range 0 - 18446744073709551615

    This variable specifies how often in seconds the cleaner thread runs. The default value is 1. Setting this variable to 0 turns off cleaner threads.
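
    For example, cleaner threads can be turned off entirely by setting either variable to 0:

    mysql> SET GLOBAL tokudb_cleaner_period=0;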

    "},{"location":"tokudb-variables.html#tokudb_client_pool_threads","title":"tokudb_client_pool_threads","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Numeric Default 0 Range 0 - 1024

    This variable defines the number of threads for the client operations thread pool. This pool is used to perform node maintenance on over/undersized nodes, such as message flushing down the tree, node splits, and node merges. The default of 0 uses the old algorithm, which sets the pool size to 1 * num_cpu_threads.

    "},{"location":"tokudb-variables.html#tokudb_commit_sync","title":"tokudb_commit_sync","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    Session variable tokudb_commit_sync controls whether or not the transaction log is flushed when a transaction commits. The default behavior is that the transaction log is flushed by the commit. Flushing the transaction log requires a disk write and may adversely affect the performance of your application.

    To disable synchronous flushing of the transaction log, disable the tokudb_commit_sync session variable as follows:

    SET tokudb_commit_sync=OFF;

    Disabling this variable may make the system run faster. However, transactions committed since the last checkpoint are not guaranteed to survive a crash.

    Warning

    By disabling this variable and/or setting tokudb_fsync_log_period to a non-zero value, you effectively downgrade the durability of the storage engine. If a crash occurs within this window, you will lose data. The same issue also appears if you use some kind of volume snapshot for backups.
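
    As a sketch of that trade-off (assuming tokudb_fsync_log_period is given in milliseconds), one might disable synchronous flushing and fsync the log once per second instead, accepting up to a second of lost transactions on a crash:

    mysql> SET tokudb_commit_sync=OFF;
    mysql> SET GLOBAL tokudb_fsync_log_period=1000;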

    "},{"location":"tokudb-variables.html#tokudb_compress_buffers_before_eviction","title":"tokudb_compress_buffers_before_eviction","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Boolean Default ON

    When this variable is enabled, the evictor compresses unused internal node partitions to reduce memory requirements, as a first step of partial eviction, before fully evicting the partition and eventually the entire node.

    "},{"location":"tokudb-variables.html#tokudb_create_index_online","title":"tokudb_create_index_online","text":"

    This variable controls whether indexes created with the CREATE INDEX command are hot (if enabled), or offline (if disabled). Hot index creation means that the table is available for inserts and queries while the index is being created. Offline index creation means that the table is not available for inserts and queries while the index is being created.

    Note

    Hot index creation is slower than offline index creation.
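
    For example, a minimal sketch of creating a hot index for the current session (the table and index names are hypothetical):

    SET tokudb_create_index_online=ON;\nCREATE INDEX idx_last_name ON employees(last_name);\n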

    "},{"location":"tokudb-variables.html#tokudb_data_dir","title":"tokudb_data_dir","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type String Default NULL

    This variable configures the directory name where the TokuDB tables are stored. The default value is NULL which uses the location of the MySQL data directory. For more information check TokuDB files and file types and TokuDB file management.

    "},{"location":"tokudb-variables.html#tokudb_debug","title":"tokudb_debug","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 0 Range 0 - 18446744073709551615

    This variable enables mysqld debug printing to STDERR for TokuDB. It produces a tremendous amount of output that is useful only to TokuDB developers and is not recommended for any production use. The value is a ULONG bit mask:

    #define TOKUDB_DEBUG_INIT                   (1<<0)\n#define TOKUDB_DEBUG_OPEN                   (1<<1)\n#define TOKUDB_DEBUG_ENTER                  (1<<2)\n#define TOKUDB_DEBUG_RETURN                 (1<<3)\n#define TOKUDB_DEBUG_ERROR                  (1<<4)\n#define TOKUDB_DEBUG_TXN                    (1<<5)\n#define TOKUDB_DEBUG_AUTO_INCREMENT         (1<<6)\n#define TOKUDB_DEBUG_INDEX_KEY              (1<<7)\n#define TOKUDB_DEBUG_LOCK                   (1<<8)\n#define TOKUDB_DEBUG_CHECK_KEY              (1<<9)\n#define TOKUDB_DEBUG_HIDE_DDL_LOCK_ERRORS   (1<<10)\n#define TOKUDB_DEBUG_ALTER_TABLE            (1<<11)\n#define TOKUDB_DEBUG_UPSERT                 (1<<12)\n#define TOKUDB_DEBUG_CHECK                  (1<<13)\n#define TOKUDB_DEBUG_ANALYZE                (1<<14)\n#define TOKUDB_DEBUG_XA                     (1<<15)\n#define TOKUDB_DEBUG_SHARE                  (1<<16)\n
    "},{"location":"tokudb-variables.html#tokudb_dir_per_db","title":"tokudb_dir_per_db","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Boolean Default ON

    When this variable is set to ON, all new tables and indexes are placed within their corresponding database directory within the tokudb_data_dir or system datadir. Existing table files are not automatically relocated to their corresponding database directory. If you rename a table while this variable is enabled, the mapping in the Percona FT directory file is updated and the files are renamed on disk to reflect the new table name. For more information check TokuDB files and file types and TokuDB file management.

    "},{"location":"tokudb-variables.html#tokudb_directio","title":"tokudb_directio","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Boolean Default OFF

    When enabled, TokuDB employs Direct I/O rather than Buffered I/O for writes. When using Direct I/O, consider increasing tokudb_cache_size from its default of \u00bd physical memory.

    "},{"location":"tokudb-variables.html#tokudb_disable_hot_alter","title":"tokudb_disable_hot_alter","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default OFF

    This variable is used specifically for testing or to disable hot alter in case there are bugs. Do not use it in production.

    "},{"location":"tokudb-variables.html#tokudb_disable_prefetching","title":"tokudb_disable_prefetching","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default OFF

    TokuDB attempts to aggressively prefetch additional blocks of rows, which is helpful for most range queries but may create unnecessary I/O for range queries with LIMIT clauses. Prefetching is enabled by default (this variable is set to 0); setting this variable to 1 disables prefetching.
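
    For example, a hedged sketch of disabling prefetching before a LIMIT query (the table name t1 and the predicate are hypothetical):

    SET SESSION tokudb_disable_prefetching=ON;\nSELECT * FROM t1 WHERE id > 1000 LIMIT 10;\n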

    "},{"location":"tokudb-variables.html#tokudb_disable_slow_alter","title":"tokudb_disable_slow_alter","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default OFF

    This variable is used specifically for testing or to disable hot alter in case there are bugs; do not use it in production. It controls whether slow ALTER TABLE operations are allowed. For example, the following command is slow because HCADER does not allow a mixture of column additions, deletions, or expansions:

    ALTER TABLE table\nADD COLUMN column_a INT,\nDROP COLUMN column_b;\n

    By default, tokudb_disable_slow_alter is disabled and such statements run using the slow method. When the variable is enabled, the engine reports the operation back to MySQL as unsupported, resulting in the following output:

    ERROR 1112 (42000): Table 'test_slow' uses an extension that doesn't exist in this MySQL version\n
    "},{"location":"tokudb-variables.html#tokudb_empty_scan","title":"tokudb_empty_scan","text":"

    This variable defines the direction used when scanning a table to check whether it is empty for the bulk loader.

    "},{"location":"tokudb-variables.html#tokudb_enable_fast_update","title":"tokudb_enable_fast_update","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Boolean Default OFF

    This variable toggles the fast updates feature ON/OFF for the UPDATE statement. Fast updates optimize qualifying queries to avoid random reads during their execution.

    "},{"location":"tokudb-variables.html#tokudb_enable_fast_upsert","title":"tokudb_enable_fast_upsert","text":"Option Description Command-line Yes Config file Yes Scope Global/Session Dynamic Yes Data type Boolean Default OFF

    This variable toggles the fast updates feature ON/OFF for the INSERT statement. Fast updates optimize qualifying queries to avoid random reads during their execution.

    "},{"location":"tokudb-variables.html#tokudb_enable_partial_eviction","title":"tokudb_enable_partial_eviction","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Boolean Default OFF

    This variable controls whether partial eviction of nodes is enabled or disabled.

    "},{"location":"tokudb-variables.html#tokudb_fanout","title":"tokudb_fanout","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 16 Range 2-16384

    This variable controls the Fractal Tree fanout. The fanout defines the number of pivot keys or child nodes for each internal tree node. Changing the value of tokudb_fanout only affects subsequently created tables and indexes. The value of this variable cannot be changed for an existing table/index without a dump and reload.

    "},{"location":"tokudb-variables.html#tokudb_fs_reserve_percent","title":"tokudb_fs_reserve_percent","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Numeric Default 5 Range 0-100

    This variable controls the percentage of the file system that must be available for inserts to be allowed. By default, this is set to 5. We recommend that this reserve be at least half the size of your physical memory. See Full Disks for more information.

    "},{"location":"tokudb-variables.html#tokudb_fsync_log_period","title":"tokudb_fsync_log_period","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 0 Range 0-4294967295

    This variable controls the frequency, in milliseconds, of fsync() operations. If set to 0, the fsync() behavior is controlled only by tokudb_commit_sync, which can be ON or OFF.
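
    For example, a minimal sketch that groups log flushes into roughly one-second intervals; as noted in the warning for tokudb_commit_sync, this combination downgrades durability:

    SET GLOBAL tokudb_fsync_log_period=1000;\nSET tokudb_commit_sync=OFF;\n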

    "},{"location":"tokudb-variables.html#tokudb_hide_default_row_format","title":"tokudb_hide_default_row_format","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    This variable is used to hide the ROW_FORMAT in the SHOW CREATE TABLE output. If zlib compression is used, the row format shows as DEFAULT.

    "},{"location":"tokudb-variables.html#tokudb_killed_time","title":"tokudb_killed_time","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 4000 Range 0-18446744073709551615

    This variable specifies the frequency, in milliseconds, at which a lock wait checks whether the lock request has been killed.

    "},{"location":"tokudb-variables.html#tokudb_last_lock_timeout","title":"tokudb_last_lock_timeout","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type String Default NULL

    This variable contains a JSON document that describes the last lock conflict seen by the current MySQL client. It gets set when a blocked lock request times out or a lock deadlock is detected.

    The tokudb_lock_timeout_debug session variable must have bit 0 set for this behavior, otherwise this session variable will be empty.
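
    For example, a minimal sketch of enabling the reporting bit and then reading the last lock conflict after a timeout occurs:

    SET SESSION tokudb_lock_timeout_debug=1;\nSELECT @@tokudb_last_lock_timeout;\n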

    "},{"location":"tokudb-variables.html#tokudb_load_save_space","title":"tokudb_load_save_space","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    This session variable changes the behavior of the bulk loader. When it is disabled, the bulk loader stores intermediate data using uncompressed files (which consumes additional disk space); when it is enabled, the intermediate files are compressed (which consumes additional CPU).

    Note

    The location of the temporary disk space used by the bulk loader may be specified with the tokudb_tmp_dir server variable.

    If a LOAD DATA INFILE statement fails with the error message ERROR 1030 (HY000): Got error 1 from storage engine, then there may not be enough disk space for the optimized loader, so disable tokudb_prelock_empty and try again. More information is available in Known Issues.

    "},{"location":"tokudb-variables.html#tokudb_loader_memory_size","title":"tokudb_loader_memory_size","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 100000000 Range 0-18446744073709551615

    This variable limits the amount of memory (in bytes) that the TokuDB bulk loader will use for each loader instance. Increasing this value may provide a performance benefit when loading extremely large tables with several secondary indexes.

    Note

    Memory allocated to a loader is taken from the TokuDB cache, defined in tokudb_cache_size, and may impact the running workload\u2019s performance as existing cached data must be ejected for the loader to begin.
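
    For example, a hedged sketch of raising the loader memory for the current session before a large bulk load (the 1 GB value is illustrative only):

    SET SESSION tokudb_loader_memory_size=1000000000;\n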

    "},{"location":"tokudb-variables.html#tokudb_lock_timeout","title":"tokudb_lock_timeout","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 4000 Range 0-18446744073709551615

    This variable controls the amount of time that a transaction will wait for a lock held by another transaction to be released. If the conflicting transaction does not release the lock within the lock timeout, the transaction that was waiting for the lock will get a lock timeout error. The units are milliseconds. A value of 0 disables lock waits. The default value is 4000 (four seconds).

    If your application gets a lock wait timeout error (-30994), increasing tokudb_lock_timeout may help. If your application gets a deadlock found error (-30995), you need to abort the current transaction and retry it.
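
    For example, a minimal sketch of raising the lock timeout to 60 seconds for the current session (the value is illustrative):

    SET SESSION tokudb_lock_timeout=60000;\n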

    "},{"location":"tokudb-variables.html#tokudb_lock_timeout_debug","title":"tokudb_lock_timeout_debug","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 1 Range 0-3

    The following values are available:

    • 0: No lock timeouts or lock deadlocks are reported.

    • 1: A JSON document that describes the lock conflict is stored in the tokudb_last_lock_timeout session variable.

    • 2: A JSON document that describes the lock conflict is printed to the MySQL error log.

    In addition to the JSON document describing the lock conflict, the following lines are printed to the MySQL error log:

    * A line containing the blocked thread id and blocked SQL\n\n* A line containing the blocking thread id and the blocking SQL.\n
    • 3: A JSON document that describes the lock conflict is stored in the tokudb_last_lock_timeout session variable and is printed to the MySQL error log.

    In addition to the JSON document describing the lock conflict, the following lines are printed to the MySQL error log:

    * A line containing the blocked thread id and blocked SQL\n\n* A line containing the blocking thread id and the blocking SQL.\n
    "},{"location":"tokudb-variables.html#tokudb_log_dir","title":"tokudb_log_dir","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type String Default NULL

    This variable specifies the directory where the TokuDB log files are stored. The default value is NULL which uses the location of the MySQL data directory. Configuring a separate log directory is somewhat involved. Please contact Percona support for more details. For more information check TokuDB files and file types and TokuDB file management.

    Warning

    After changing the TokuDB log directory path, move the old TokuDB recovery log file to the new directory before starting the MySQL server, and make sure the log file is owned by the mysql user. Otherwise, the server will fail to initialize the TokuDB storage engine on restart.

    "},{"location":"tokudb-variables.html#tokudb_max_lock_memory","title":"tokudb_max_lock_memory","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type Numeric Default 65560320 Range 0-18446744073709551615

    This variable specifies the maximum amount of memory for the PerconaFT lock table.

    "},{"location":"tokudb-variables.html#tokudb_optimize_index_fraction","title":"tokudb_optimize_index_fraction","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 1.000000 Range 0.000000 - 1.000000

    For patterns where the left side of the tree has many deletions (a common pattern with increasing id or date values), it may be useful to delete a percentage of the tree. In this case, it\u2019s possible to optimize a subset of a fractal tree starting at the left side. The tokudb_optimize_index_fraction session variable controls the size of the sub tree. Valid values are in the range [0.0,1.0] with default 1.0 (optimize the whole tree).
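
    For example, a minimal sketch of optimizing only the leftmost 10% of the tree (the table name t1 is hypothetical):

    SET tokudb_optimize_index_fraction=0.1;\nOPTIMIZE TABLE t1;\n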

    "},{"location":"tokudb-variables.html#tokudb_optimize_index_name","title":"tokudb_optimize_index_name","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type String Default NULL

    This variable can be used to optimize a single index in a table; set it to the name of the index you want to optimize.
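
    For example, a minimal sketch of optimizing a single index (the table and index names are hypothetical):

    SET tokudb_optimize_index_name='idx_last_name';\nOPTIMIZE TABLE t1;\n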

    "},{"location":"tokudb-variables.html#tokudb_optimize_throttle","title":"tokudb_optimize_throttle","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 0 Range 0-18446744073709551615

    By default, table optimization runs with all available resources. To limit resource consumption, you can limit the speed of table optimization. This variable determines an upper bound on how many fractal tree leaf nodes per second are optimized. The default value of 0 imposes no limit.

    "},{"location":"tokudb-variables.html#tokudb_pk_insert_mode","title":"tokudb_pk_insert_mode","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 1 Range 0-3

    Note

    The tokudb_pk_insert_mode session variable was removed and the behavior is now that of the former tokudb_pk_insert_mode set to 1. The optimization will be used where safe and not used where not safe.

    "},{"location":"tokudb-variables.html#tokudb_prelock_empty","title":"tokudb_prelock_empty","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    By default, TokuDB preemptively grabs an entire table lock on empty tables. If one transaction is doing the loading, such as when the user is doing a table load into an empty table, this default provides a considerable speedup.

    However, if multiple transactions try to do concurrent operations on an empty table, all but one transaction will be locked out. Disabling tokudb_prelock_empty optimizes for this multi-transaction case by turning off preemptive pre-locking.

    Note

    If this variable is set to OFF, fast bulk loading is turned off as well.
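
    For example, a hedged sketch of turning off preemptive pre-locking for the current session before several transactions insert into the same empty table:

    SET SESSION tokudb_prelock_empty=OFF;\n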

    "},{"location":"tokudb-variables.html#tokudb_read_block_size","title":"tokudb_read_block_size","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 16384 (16KB) Range 4096 - 4294967295

    Fractal tree leaves are subdivided into read blocks in order to speed up point queries. This variable controls the target uncompressed size of the read blocks. The units are bytes, and the default is 16384 (16 KB). A smaller value favors read performance for point and small range scans over large range scans and higher compression. The minimum value of this variable is 4096 (4 KB).

    Changing the value of tokudb_read_block_size only affects subsequently created tables. The value of this variable cannot be changed for an existing table without a dump and reload.

    "},{"location":"tokudb-variables.html#tokudb_read_buf_size","title":"tokudb_read_buf_size","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 131072 (128KB) Range 0 - 1048576

    This variable controls the size of the buffer used to store values that are bulk fetched as part of a large range query. Its unit is bytes and its default value is 131,072 (128 KB).

    A value of 0 turns off bulk fetching. Each client keeps a buffer of this size, so lower the value in situations where a large number of clients simultaneously query a table.

    "},{"location":"tokudb-variables.html#tokudb_read_status_frequency","title":"tokudb_read_status_frequency","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 10000 Range 0 - 4294967295

    This variable controls how often progress is measured for display in SHOW PROCESSLIST, expressed as a number of reads. Reads are defined as SELECT queries.

    For slow queries, it can be helpful to set this variable and tokudb_write_status_frequency to 1, and then run SHOW PROCESSLIST several times to understand what progress is being made.
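
    For example, a minimal sketch of the diagnostic approach described above:

    SET GLOBAL tokudb_read_status_frequency=1;\nSET GLOBAL tokudb_write_status_frequency=1;\nSHOW PROCESSLIST;\n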

    "},{"location":"tokudb-variables.html#tokudb_row_format","title":"tokudb_row_format","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type ENUM Default TOKUDB_QUICKLZ Range TOKUDB_DEFAULT, TOKUDB_FAST, TOKUDB_SMALL, TOKUDB_ZLIB, TOKUDB_QUICKLZ, TOKUDB_LZMA, TOKUDB_SNAPPY, TOKUDB_UNCOMPRESSED

    This controls the default compression algorithm used to compress data. For more information on compression algorithms see Compression Details.
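
    For example, a minimal sketch of selecting a different compression algorithm for subsequently created tables in the current session (the table definition is hypothetical):

    SET SESSION tokudb_row_format=TOKUDB_LZMA;\nCREATE TABLE t1 (id INT PRIMARY KEY, payload VARCHAR(255)) ENGINE=TokuDB;\n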

    "},{"location":"tokudb-variables.html#tokudb_rpl_check_readonly","title":"tokudb_rpl_check_readonly","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    The TokuDB replication code runs row events from the binary log with Read Free Replication when the replica is in read-only mode. This variable is used to disable the replica read-only check in the TokuDB replication code.

    Disabling the check allows Read Free Replication to run when the replica is NOT read-only. By default, tokudb_rpl_check_readonly is enabled (the code checks that the replica is read-only). Do NOT change this value unless you completely understand the implications!

    "},{"location":"tokudb-variables.html#tokudb_rpl_lookup_rows","title":"tokudb_rpl_lookup_rows","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    When disabled, TokuDB replication replicas skip row lookups for delete row log events and update row log events, which eliminates all associated read I/O for these operations.

    Warning

    TokuDB Read Free Replication will not propagate UPDATE and DELETE events reliably if a TokuDB table is missing a primary key, which will eventually lead to data inconsistency on the replica.

    Note

    Optimization is only enabled when read_only is set to 1 and binlog_format is ROW.

    "},{"location":"tokudb-variables.html#tokudb_rpl_lookup_rows_delay","title":"tokudb_rpl_lookup_rows_delay","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 0 Range 0 - 18446744073709551615

    This variable allows for simulation of long disk reads by sleeping for the given number of microseconds prior to the row lookup query. It should only be set to a non-zero value for testing.

    "},{"location":"tokudb-variables.html#tokudb_rpl_unique_checks","title":"tokudb_rpl_unique_checks","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    When disabled, TokuDB replication replicas skip uniqueness checks on inserts and updates, which eliminates all associated read I/O for these operations.

    Note

    Optimization is only enabled when read_only is set to 1 and binlog_format is ROW.

    "},{"location":"tokudb-variables.html#tokudb_rpl_unique_checks_delay","title":"tokudb_rpl_unique_checks_delay","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Numeric Default 0 Range 0 - 18446744073709551615

    This variable allows for simulation of long disk reads by sleeping for the given number of microseconds prior to the row lookup query. It should only be set to a non-zero value for testing.

    "},{"location":"tokudb-variables.html#tokudb_strip_frm_data","title":"tokudb_strip_frm_data","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Boolean Default OFF

    When this variable is set to ON, the server checks all the status files during startup and removes the embedded .frm metadata. This variable can be used to assist in TokuDB data recovery.

    Warning

    Use this variable only if you know what you’re doing; otherwise, it could lead to data loss.

    "},{"location":"tokudb-variables.html#tokudb_support_xa","title":"tokudb_support_xa","text":"Option Description Command-line Yes Config file Yes Scope Session, Global Dynamic Yes Data type Boolean Default ON

    This variable defines whether or not the prepare phase of an XA transaction performs an fsync().

    "},{"location":"tokudb-variables.html#tokudb_tmp_dir","title":"tokudb_tmp_dir","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic No Data type String

    This variable specifies the directory where the TokuDB bulk loader stores temporary files. The bulk loader can create large temporary files while it is loading a table, so putting these temporary files on a disk separate from the data directory can be useful.

    For example, it can make sense to use a high-performance disk for the data directory and a very inexpensive disk for the temporary directory. The default location for TokuDB\u2019s temporary files is the MySQL data directory.

    tokudb_load_save_space determines whether the data is compressed or not. The error message ERROR 1030 (HY000): Got error 1 from storage engine could indicate that the disk has run out of space.

    For more information check TokuDB files and file types and TokuDB file management.

    "},{"location":"tokudb-variables.html#tokudb_version","title":"tokudb_version","text":"Option Description Command-line No Config file No Scope Global Dynamic No Data type String

    This read-only variable documents the version of the TokuDB storage engine.

    "},{"location":"tokudb-variables.html#tokudb_write_status_frequency","title":"tokudb_write_status_frequency","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type Numeric Default 1000 Range 0 - 4294967295

    This variable controls how often progress is measured for display in SHOW PROCESSLIST, expressed as a number of writes. Writes are defined as INSERT, UPDATE, and DELETE queries.

    For slow queries, it can be helpful to set this variable and tokudb_read_status_frequency to 1, and then run SHOW PROCESSLIST several times to understand what progress is being made.

    "},{"location":"tokudb-version-changes.html","title":"TokuDB changes in Percona Server for MySQL by version","text":""},{"location":"tokudb-version-changes.html#removed","title":"Removed","text":"

    Percona Server for MySQL 8.0.28-19 removes the TokuDB storage engine; this storage engine is no longer supported.

    We have removed the storage engine from the installation packages and disabled the storage engine in our binary builds.

    "},{"location":"tokudb-version-changes.html#disabled","title":"Disabled","text":"

    Percona Server for MySQL 8.0.26-16 includes the TokuDB storage engine plugins in the binary builds and packages but disables them.

    The tokudb_enabled option and the tokudb_backup_enabled option control the state of the plugins and have a default setting of FALSE. The plugins fail to initialize and print a deprecation message if you attempt to load them.

    We recommend migrating the data to the MyRocks storage engine.

    Set the tokudb_enabled and tokudb_backup_enabled options to TRUE in your my.cnf configuration file.

    This action enables the plugins needed for migration.

    [tokudb]\n\n# Enable TokuDB\ntokudb_enabled=TRUE\n# Enable TokuDB backup\ntokudb_backup_enabled=TRUE\n

    After saving these changes, restart your server instance to apply the new settings. This restart is crucial since it initializes the plugins and prepares your system for migration to MyRocks.

    "},{"location":"tokudb-version-changes.html#deprecated","title":"Deprecated","text":"

    The TokuDB storage engine was declared deprecated in Percona Server for MySQL 8.0. For more information, see the Percona blog post: Heads-Up: TokuDB Support Changes and Future Removal from Percona Server for MySQL 8.0.

    "},{"location":"topic-index.html","title":"Topic index","text":"
    • Adaptive network buffers
    • Advanced encryption key rotation
    • Apt pinning the Percona Server for MySQL 8.0 packages
    • Audit Log Filter compression and encryption
    • Audit Log Filter file format overview
    • Audit Log Filter format - JSON
    • Audit Log Filter format - XML (new style)
    • Audit Log Filter format - XML (old style)
    • Audit Log Filter functions, options, and variables
    • Audit Log Filter naming conventions
    • Audit Log Filter overview
    • Audit Log Filter restrictions
    • Audit Log Filter security
    • Audit log plugin
    • Backup and restore overview
    • Backup locks
    • Binary logs and replication improvements
    • Binary tarball file names available based on the Percona Server for MySQL version
    • Build APT packages
    • Compare the data masking component to the data masking plugin
    • Compile Percona Server for MySQL 8.0 from source
    • Compressed columns with dictionaries
    • Copyright and licensing information
    • Data at Rest encryption
    • Data masking component functions
    • Data masking overview
    • Data masking plugin functions
    • Development of Percona Server for MySQL
    • Differences between Percona MyRocks and Facebook MyRocks
    • Disable Audit Log Filter logging
    • Docker environment variables
    • Downgrade Percona Server for MySQL
    • Encrypt Binary Log files and Relay Log files
    • Encrypt doublewrite buffers
    • Encrypt File-Per-Table Tablespace
    • Encrypt schema or general tablespace
    • Encrypt system tablespace
    • Encrypt temporary files
    • Encrypt the undo tablespace
    • Encrypting the Redo Log data
    • Encryption functions
    • Enforcing storage engine
    • Expanded fast index creation
    • Extended mysqlbinlog
    • Extended mysqldump
    • Extended SELECT INTO OUTFILE/DUMPFILE
    • Extended show engine InnoDB status
    • Extended SHOW GRANTS
    • Fast updates with TokuDB
    • FIDO authentication plugin
    • Files in the DEB package built for Percona Server for MySQL 8.0
    • Files in the RPM package built for Percona Server for MySQL
    • Filter the Audit Log Filter logs
    • Frequently asked questions
    • Gap locks detection
    • Get started with TokuDB
    • Glossary
    • Group replication system variables
    • Handle corrupt tables
    • Home
    • Improved InnoDB I/O scalability
    • Improved MEMORY storage engine
    • Index of INFORMATION_SCHEMA tables
    • InnoDB full-text search improvements
    • InnoDB page fragmentation counters
    • Install and remove the data masking plugin
    • Install from Percona Software repository
    • Install Percona Server for MySQL 8.0 from a binary tarball
    • Install Percona Server for MySQL 8.0 using downloaded DEB packages
    • Install Percona Server for MySQL from a source tarball
    • Install Percona Server for MySQL using downloaded RPM packages
    • Install Percona Server for MySQL
    • Install the Audit Log Filter
    • Install the data masking component
    • Install using Docker
    • Installing and configuring Percona Server for MySQL with ZenFS support
    • Jemalloc memory allocation profiling
    • Kill idle transactions
    • LDAP authentication plugin system variables
    • Limit the estimation of records in a Query
    • Limiting the disk space used by binary log files
    • List of features available in Percona Server for MySQL releases
    • List of variables introduced in Percona Server for MySQL 8.0
    • Manage group replication flow control
    • Manage the Audit Log Filter files
    • Migrate and remove the TokuDB storage engine
    • Misc. INFORMATION_SCHEMA tables
    • Multiple page asynchronous I/O requests
    • MyRocks column families
    • MyRocks data loading
    • MyRocks Information Schema tables
    • MyRocks limitations
    • MyRocks server variables
    • MyRocks status variables
    • PAM authentication plugin
    • Percona MyRocks installation guide
    • Percona MyRocks introduction
    • Percona Product download instructions
    • Percona Server for MySQL feature comparison
    • Percona TokuBackup
    • Percona Toolkit UDFs
    • Performance Schema MyRocks changes
    • Post-installation
    • Prefix index queries optimization
    • Process list
    • Quickstart guide for Percona Server for MySQL
    • Reading Audit Log Filter files
    • Release notes index
    • Rotate the master key
    • Run Percona Server for MySQL 8.0 after APT repository installation
    • Run Percona Server for MySQL
    • SEQUENCE_TABLE(n) function
    • Show storage engines
    • Slow query log rotation and expiration
    • Slow query log
    • SSL improvements
    • Stack trace
    • Start transaction with consistent snapshot
    • Support for PROXY protocol
    • The Percona XtraDB storage engine
    • The ProcFS plugin
    • The secure_log_path variable
    • Thread based profiling
    • Thread pool
    • TokuDB background ANALYZE TABLE
    • TokuDB file management
    • TokuDB files and file types
    • TokuDB fractal tree indexing
    • TokuDB frequently asked questions
    • TokuDB installation
    • TokuDB introduction
    • TokuDB Performance Schema integration
    • TokuDB status variables
    • TokuDB troubleshooting
    • TokuDB variables
    • Too many connections warning
    • Trademark policy
    • Trigger updates
    • Understand version numbers
    • Uninstall Audit Log Filter
    • UNINSTALL COMPONENT
    • Uninstall Percona Server for MySQL
    • Uninstall Percona Server for MySQL 8.0 using the APT package manager
    • Uninstall the data masking component
    • Updated supported features
    • Use an APT repository to install Percona Server for MySQL 8.0
    • Use Percona Monitoring and Management (PMM) Advisors
    • Use the Amazon Key Management Service (AWS KMS)
    • Use the keyring component or keyring plugin
    • Use TokuDB
    • User statistics
    • Using LDAP authentication plugins
    • Using libcoredumper
    • Using the Key Management Interoperability Protocol (KMIP)
    • Utility user
    • Verify the encryption for tables, tablespaces, and schemas
    • Working with AppArmor
    • Working with SELinux
    • XtraDB changed page tracking
    • XtraDB performance improvements for I/O-bound highly-concurrent workloads
    "},{"location":"trademark-policy.html","title":"Trademark policy","text":"

    This Trademark Policy is to ensure that users of Percona-branded products or services know that what they receive has really been developed, approved, tested, and maintained by Percona. Trademarks help to prevent confusion in the marketplace, by distinguishing one company\u2019s or person\u2019s products and services from another\u2019s.

    Percona owns a number of marks, including but not limited to Percona, XtraDB, Percona XtraDB, XtraBackup, Percona XtraBackup, Percona Server for MySQL, and Percona Live, plus the distinctive visual icons and logos associated with these marks. Both the unregistered and registered marks of Percona are protected.

    Use of any Percona trademark in the name, URL, or another identifying characteristic of any product, service, website, or other use is not permitted without Percona\u2019s written permission with the following three limited exceptions.

    First, you may use the appropriate Percona mark when making a nominative fair use reference to a bona fide Percona product.

    Second, when Percona has released a product under a version of the GNU General Public License (\u201cGPL\u201d), you may use the appropriate Percona mark when distributing a verbatim copy of that product in accordance with the terms and conditions of the GPL.

    Third, you may use the appropriate Percona mark to refer to a distribution of GPL-released Percona software that has been modified with minor changes for the sole purpose of allowing the software to operate on an operating system or hardware platform for which Percona has not yet released the software, provided that those third party changes do not affect the behavior, functionality, features, design or performance of the software. Users who acquire this Percona-branded software receive substantially exact implementations of the Percona software.

    Percona reserves the right to revoke this authorization at any time in its sole discretion. For example, if Percona believes that your modification is beyond the scope of the limited license granted in this Policy or that your use of the Percona mark is detrimental to Percona, Percona will revoke this authorization. Upon revocation, you must immediately cease using the applicable Percona mark. If you do not immediately cease using the Percona mark upon revocation, Percona may take action to protect its rights and interests in the Percona mark. Percona does not grant any license to use any Percona mark for any other modified versions of Percona software; such use will require our prior written permission.

    Neither trademark law nor any of the exceptions set forth in this Trademark Policy permit you to truncate, modify, or otherwise use any Percona mark as part of your own brand. For example, if XYZ creates a modified version of the Percona Server for MySQL, XYZ may not brand that modification as \u201cXYZ Percona Server\u201d or \u201cPercona XYZ Server\u201d, even if that modification otherwise complies with the third exception noted above.

    In all cases, you must comply with applicable law, the underlying license, and this Trademark Policy, as amended from time to time. For instance, any mention of Percona trademarks should include the full trademarked name, with proper spelling and capitalization, along with attribution of ownership to Percona Inc. For example, the full proper name for XtraBackup is Percona XtraBackup. However, it is acceptable to omit the word \u201cPercona\u201d for brevity on the second and subsequent uses, where such omission does not cause confusion.

    In the event of doubt as to any of the conditions or exceptions outlined in this Trademark Policy, please contact trademarks@percona.com for assistance and we will do our very best to be helpful.

    "},{"location":"trigger-updates.html","title":"Trigger updates","text":"

    Clients can issue simultaneous queries for a table. To avoid scalability problems, each thread handling a query has its own table instance. The server uses a special cache, called the Table Cache, which contains open table instances. Using the cache avoids paying the resource penalty of opening and closing tables for each statement.

    The table_open_cache system variable sets soft limits on the cache size. This limit can be temporarily exceeded if the currently executing queries require more open tables than specified. However, when these queries complete, the server closes the unused table instances from this cache using the least recently used (LRU) algorithm.

    The table_open_cache_instances system variable shows the number of open tables cache instances.

    For more information, see How MySQL opens and closes tables.

    Opening a table with triggers in Table Cache also parses the trigger definitions and associates the open table instance with its own instances of the defined trigger bodies. When a connection executes a DML statement and must run a trigger, that connection gets its own instance of the trigger body for that specific open table instance. As a result of this approach, caching open table instances and also caching an associated trigger body for each trigger can consume a surprising amount of memory.

    "},{"location":"trigger-updates.html#version-specific-information","title":"Version specific information","text":"

    Percona Server for MySQL 8.0.34 adds the open_tables_with_triggers status variable.

    Percona Server for MySQL 8.0.31 adds the following abilities:

    • Avoid using table instances with fully-loaded and parsed triggers by read-only queries
    • Show trigger CREATE statements even if the statement is unparseable

    The additional system variable reduces the Table Cache memory consumption on the server when tables that contain trigger definitions are also part of a significant read-only workload.

    "},{"location":"trigger-updates.html#system-variables","title":"System variables","text":""},{"location":"trigger-updates.html#table_open_cache_triggers","title":"table_open_cache_triggers","text":"Option Description Command-line --table-open-cache-triggers Dynamic Yes Scope Global Data type Integer Default 524288 Minimum value 1 Maximum value 524288

    This variable allows you to set a soft limit on the maximum number of open table instances with fully-loaded triggers in the Table Cache. By default, the value is the maximum value so that existing users do not observe a change in behavior.

    If the number of open table instances with fully-loaded triggers exceeds the value, then unused table instances with fully-loaded triggers are removed. This operation uses the least recently used (LRU) method for managing storage areas.

    The value can be set as a start-up option or changed dynamically.
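
    For example, a hedged sketch of lowering the soft limit at runtime and then checking the related status counters (the value is illustrative only):

    SET GLOBAL table_open_cache_triggers=100000;\nSHOW GLOBAL STATUS LIKE 'table_open_cache_triggers%';\n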

    "},{"location":"trigger-updates.html#status-variables","title":"Status variables","text":"

    The following status variables are available:

    Variable name Description Open_tables_with_triggers The current number of TABLE instances with fully-loaded triggers in the table_open_cache. table_open_cache_triggers_hits A hit means the statement required an open table instance with fully-loaded triggers and was able to get one from the table_open_cache. table_open_cache_triggers_misses A miss means the statement required an open table instance with fully-loaded triggers but could not find one in the table_open_cache. The statement may find a table instance without fully-loaded triggers and finalize the loading of triggers for it. table_open_cache_triggers_overflows An overflow indicates the number of unused table instances with triggers that were expelled from the table_open_cache due to the table_open_cache_triggers soft limit. This variable may indicate that the table_open_cache_triggers value should be increased."},{"location":"trigger-updates.html#show-create-trigger-statment-changes","title":"SHOW CREATE TRIGGER statement changes","text":"

    The SHOW CREATE TRIGGER statement shows the CREATE statement used to create the trigger. The statement also shows definitions which can no longer be parsed. For example, you can show the definition of a trigger created before a server upgrade which changed the trigger syntax.
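
    For example, a minimal sketch (the trigger name ins_sum is hypothetical):

    SHOW CREATE TRIGGER ins_sum;\n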

    "},{"location":"udf-percona-toolkit.html","title":"Percona Toolkit UDFs","text":"

    Percona Server for MySQL provides three Percona Toolkit UDFs that implement faster checksums:

    • libfnv1a_udf

    • libfnv_udf

    • libmurmur_udf

    "},{"location":"udf-percona-toolkit.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"udf-percona-toolkit.html#other-information","title":"Other information","text":"
    • Author/Origin: Baron Schwartz
    "},{"location":"udf-percona-toolkit.html#installation","title":"Installation","text":"

    These UDFs are part of the Percona Server for MySQL packages. To install one of the UDFs into the server, execute one of the following commands, depending on which UDF you want to install:

    mysql -e \"CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'\"\nmysql -e \"CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'\"\nmysql -e \"CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'\"\n

    Executing each of these commands will install its respective UDF into the server.
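
    Once installed, the functions can be called from SQL; a minimal sketch (the input string is arbitrary):

    SELECT fnv1a_64('hello world');\nSELECT fnv_64('hello world');\nSELECT murmur_hash('hello world');\n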

    "},{"location":"udf-percona-toolkit.html#troubleshooting","title":"Troubleshooting","text":"

    If you get the error:

    Error message
    ERROR 1126 (HY000): Can't open shared library 'fnv_udf.so' (errno: 22 fnv_udf.so: cannot open shared object file: No such file or directory)\n

    Then you may need to copy the .so file to another location in your system. Try both /lib and /usr/lib. Look at your environment\u2019s $LD_LIBRARY_PATH variable for clues. If none is set, and neither /lib nor /usr/lib works, you may need to set LD_LIBRARY_PATH to /lib or /usr/lib.

    "},{"location":"udf-percona-toolkit.html#other-reading","title":"Other reading","text":"
    • Percona Toolkit documentation
    "},{"location":"uninstall-audit-log-filter.html","title":"Uninstall Audit Log Filter","text":"

    To remove the plugin, run the following:

    mysql> DROP TABLE IF EXISTS mysql.audit_log_user;\nmysql> DROP TABLE IF EXISTS mysql.audit_log_filter;\nmysql> UNINSTALL PLUGIN audit_log_filter;\n
    "},{"location":"uninstall-component.html","title":"UNINSTALL COMPONENT","text":"

    The UNINSTALL COMPONENT does the following:

    • Deactivates the component
    • Uninstalls the component

    The statement does not undo any persisted variables.

    If an error, such as a misspelled component name, occurs, the statement fails and nothing happens.

    You can uninstall multiple components at the same time.
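
    For example, a minimal sketch of uninstalling two components in one statement (componentA and componentB are hypothetical names):

    UNINSTALL COMPONENT 'file://componentA', 'file://componentB';\n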

    "},{"location":"uninstall-component.html#required-privilege","title":"Required privilege","text":"

    The statement requires the DELETE privilege for the mysql.component system table. Executing the statement removes the registration row from this table.

    "},{"location":"uninstall-component.html#example","title":"Example","text":"

    The following is an example of the UNINSTALL COMPONENT statement.

    mysql > UNINSTALL COMPONENT 'file://componentA' ;\n
    "},{"location":"uninstall-data-masking-component.html","title":"Uninstall the data masking component","text":"

    The following steps uninstall the component:

    1. Uninstall the component and its loadable functions with UNINSTALL COMPONENT.

      mysql> UNINSTALL COMPONENT 'file://component_masking_functions';\n
    2. Drop masking_dictionaries.

      mysql> DROP TABLE mysql.masking_dictionaries;\n
    "},{"location":"uninstall-data-masking-component.html#useful-links","title":"Useful links","text":"

    Install the data masking component

    Data masking component functions

    "},{"location":"upgrade-changes-deprecated.html","title":"Deprecated in MySQL 8.0","text":"

    The utf8mb3 character set is deprecated. Please use utf8mb4 instead. The utf8mb3 character set is valid in MySQL 8.0, however, it is recommended to use utf8mb4 for the improved Unicode support. Review Migrating to utf8mb4: Things to consider for more information.

    The caching_sha2_password plugin is the default authentication plugin in MySQL 8.0 and provides a superset of the capabilities of the sha256_password authentication plugin. The sha256_password plugin is deprecated. The new default, caching_sha2_password, offers more secure password hashing and improved client connection authentication than the previously used mysql_native_password. Existing users created with the mysql_native_password plugin can still log in to the database. New users are created with the caching_sha2_password plugin unless you change the default authentication plugin.

    The mysql_upgrade client is deprecated because its capabilities for upgrading the system tables in the mysql system schema and objects in other schemas have been moved into the MySQL server. As of MySQL 8.0.16, the server performs all tasks previously handled by mysql_upgrade. The upgrade process starts automatically when a new MySQL binary runs with an older data directory. The mysql_upgrade_info file, a text file created in the data directory, stores the MySQL version number. New InnoDB files are created in the data directory.

    After installing a new MySQL version, the server now automatically performs all necessary upgrade tasks at the next startup and is not dependent on the DBA invoking mysql_upgrade.

    In addition, the server updates the contents of the help tables (something mysql_upgrade did not do). A new upgrade option provides control over how the server performs automatic data dictionary and server upgrade operations at startup. The validate_password plugin has been reimplemented to use the server component infrastructure. The plugin form of validate_password is still available but is deprecated.

    The following are also deprecated:

    • The ENGINE clause for the ALTER TABLESPACE and DROP TABLESPACE statements.
    • The PAD_CHAR_TO_FULL_LENGTH SQL mode.
    • AUTO_INCREMENT support for columns of type FLOAT and DOUBLE (and any synonyms). Consider removing the AUTO_INCREMENT attribute from such columns, or convert them to an integer type.
    • The UNSIGNED attribute for columns of type FLOAT, DOUBLE, and DECIMAL (and any synonyms). Consider using a simple CHECK constraint instead for such columns.
    • FLOAT(M,D) and DOUBLE(M,D) syntax to specify the number of digits for columns of type FLOAT and DOUBLE (and any synonyms). This syntax is a nonstandard MySQL extension.
    • The nonstandard C-style &&, ||, and ! operators that are synonyms for the standard SQL AND, OR, and NOT operators, respectively. Applications that use the nonstandard operators should be adjusted to use the standard operators.

    The relay_log_info_file system variable and --master-info-file option are deprecated. Previously, these were used to specify the name of the relay log info log and master info log when relay_log_info_repository=FILE and master_info_repository=FILE were set, but those settings have been deprecated. The use of files for the relay log info log and master info log has been superseded by crash-safe slave tables, which are the default in MySQL 8.0. The use of the MYSQL_PWD environment variable to specify a MySQL password is deprecated.

    "},{"location":"upgrade-changes-general.html","title":"General changes","text":"

    MySQL now incorporates a transactional data dictionary that stores information about database objects.

    An atomic DDL statement combines the data dictionary updates, storage engine operations, and binary log writes associated with a DDL operation into a single, atomic transaction.

    The MySQL server now automatically performs all necessary upgrade tasks at the next startup to upgrade the system tables in the mysql schema, as well as objects in other schemas such as the sys schema and user schemas. It is no longer required to manually invoke mysql_upgrade as of version 8.0.16.

    MySQL Server now supports SSL session reuse by default with a timeout setting that maintains a session cache that establishes the period a client is permitted to request session reuse for new connections.

    MySQL now supports the creation and management of resource groups, and permits assigning threads running within the server to particular groups so that threads execute according to the resources available to the group.

    MySQL table encryption can now be managed globally by defining and enforcing encryption defaults. The default_table_encryption variable defines an encryption default for newly created schemas and general tablespace. These defaults are enforced by enabling the table_encryption_privilege_check variable.
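
    For example, a minimal sketch of setting the encryption default so that newly created schemas inherit it (the schema name app_data is hypothetical):

    SET GLOBAL default_table_encryption=ON;\nCREATE SCHEMA app_data;\n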

    The default character set has changed from latin1 to utf8mb4. The utf8mb4 character set has several new collations available.

    MySQL supports the use of expressions as default values for the BLOB, TEXT, GEOMETRY, and JSON data types.

    MySQL now has a backup lock that permits DMLs during an online backup while preventing operations that could result in an inconsistent snapshot.

    MySQL Server now permits a TCP/IP port to be configured specifically for administrative connections. This administration port is available even when max_connections connections are already established on the primary port.

    MySQL Server now supports invisible indexes which are not used by the optimizer, and makes it possible to test the effect of removing an index without actually removing it.

    MySQL Server now has a Document Store for developing both SQL and NoSQL document applications using a single database.

    MySQL 8.0 makes it possible to persist global, dynamic server variables using the SET PERSIST command instead of the usual SET GLOBAL one.
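
    For example, a minimal sketch that persists a dynamic variable across restarts (the value is illustrative):

    SET PERSIST max_connections=500;\n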

    "},{"location":"upgrade-changes-innodb.html","title":"InnoDB changes","text":"

    The maximum auto-increment counter value is now persistent across server restarts.

    When encountering index tree corruption, InnoDB writes a corruption flag to the redo log, which makes the corruption flag crash-safe.

    A new dynamic variable, innodb_deadlock_detect, may be used to disable deadlock detection.

    InnoDB temporary tables are now created in the shared temporary tablespace, ibtmp1.

    System tables and data dictionary tables are now created in a single InnoDB tablespace file named mysql.ibd in the MySQL data directory.

    By default, undo logs now reside in two undo tablespaces, not in the system tablespace, and are created when the MySQL instance is initialized.

    The new innodb_dedicated_server variable, disabled by default, can be used to have InnoDB automatically configure several options based on detected server memory.

    Tablespace files can be moved or restored to a new location while the server is offline using the innodb_directories option.

    "},{"location":"upgrade-changes-removed.html","title":"Removed in MySQL 8.0","text":"

    The innodb_locks_unsafe_for_binlog system variable is removed. The READ COMMITTED isolation level provides similar functionality.

    Using GRANT to create users. Instead, use CREATE USER. Following this practice makes the NO_AUTO_CREATE_USER SQL mode immaterial for GRANT statements, so it too is removed, and an error now is written to the server log when the presence of this value for the sql_mode option in the options file prevents mysqld from starting.

    Using GRANT to modify account properties other than privilege assignments. This includes authentication, SSL, and resource-limit properties. Instead, establish such properties at account-creation time with CREATE USER or modify them afterward with ALTER USER.

    IDENTIFIED BY PASSWORD \u2018auth_string\u2019 syntax for CREATE USER and GRANT. Instead, use IDENTIFIED WITH auth_plugin AS \u2018auth_string\u2019 for CREATE USER and ALTER USER, where the \u2018auth_string\u2019 value is in a format compatible with the named plugin.

    The PASSWORD() function is removed. Additionally, PASSWORD() removal means that the SET PASSWORD … = PASSWORD(‘auth_string’) syntax is no longer available. The old_passwords system variable is also removed.

    The query cache was removed. Removal includes the following:

    • The FLUSH QUERY CACHE and RESET QUERY CACHE statements.
    • These system variables: query_cache_limit, query_cache_min_res_unit, query_cache_size, query_cache_type, query_cache_wlock_invalidate.
    • These status variables: Qcache_free_blocks, Qcache_free_memory, Qcache_hits, Qcache_inserts, Qcache_lowmem_prunes, Qcache_not_cached, Qcache_queries_in_cache, Qcache_total_blocks.

    The removal of the query cache also removed the following thread states:

    • checking privileges on cached query
    • checking query cache for a query
    • invalidating query cache entries
    • sending cached result to the client
    • storing result in the query cache
    • Waiting for query cache lock

    The tx_isolation and tx_read_only system variables have been removed. Use transaction_isolation and transaction_read_only instead.

    The sync_frm system variable is removed because .frm files are obsolete.

    The following were also removed:

    • The secure_auth system variable and --secure-auth client option. The MYSQL_SECURE_AUTH option for the mysql_options() C API function was also removed.
    • The log_warnings system variable and --log-warnings server option. Use the log_error_verbosity system variable instead.
    • The global scope for the sql_log_bin system variable. sql_log_bin has session scope only, and applications that rely on accessing @@GLOBAL.sql_log_bin should be adjusted.
    • The unused date_format, datetime_format, time_format, and max_tmp_tables system variables.
    • The deprecated ASC and DESC qualifiers for GROUP BY clauses. Queries that previously relied on GROUP BY sorting may produce results that differ from previous MySQL versions. To produce a given sort order, provide an ORDER BY clause.
    • The parser no longer treats \N as a synonym for NULL in SQL statements. Use NULL instead. This change does not affect text file import or export operations performed with LOAD DATA or SELECT … INTO OUTFILE, for which NULL continues to be represented by \N.
    • The client-side --ssl and --ssl-verify-server-cert options. Use --ssl-mode=REQUIRED instead of --ssl=1 or --enable-ssl. Use --ssl-mode=DISABLED instead of --ssl=0, --skip-ssl, or --disable-ssl. Use --ssl-mode=VERIFY_IDENTITY instead of the --ssl-verify-server-cert option.
    • The mysql_install_db program. Data directory initialization should be performed by invoking mysqld with the --initialize or --initialize-insecure option instead. In addition, the --bootstrap option for mysqld that was used by mysql_install_db was removed, as was the INSTALL_SCRIPTDIR CMake option that controlled the installation location for mysql_install_db.
    • The mysql_plugin utility. Alternatives include loading plugins at server startup using the --plugin-load or --plugin-load-add option, or at runtime using the INSTALL PLUGIN statement.

    The resolveip utility is removed. nslookup, host, or dig can be used instead.

    Review Features removed in MySQL 8.0 for more information.

    "},{"location":"upgrade-changes-secure.html","title":"Security & account management changes in MySQL 8.0","text":"

    The grant tables in the mysql system database are now InnoDB (transactional) tables.

    A new caching_sha2_password authentication plugin is available. Like the sha256_password plugin, caching_sha2_password implements SHA-256 password hashing, but uses caching to address latency issues at connect time.

    MySQL now supports roles, which are named collections of privileges. Roles can be created and dropped. Roles can have privileges granted to and revoked from them. Roles can be granted to and revoked from user accounts.
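A minimal sketch of the role workflow (the role, database, and account names are hypothetical):

CREATE ROLE 'app_read';
GRANT SELECT ON app_db.* TO 'app_read';      -- grant privileges to the role
GRANT 'app_read' TO 'alice'@'%';             -- grant the role to an account
SET DEFAULT ROLE 'app_read' TO 'alice'@'%';  -- activate it automatically at login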

    MySQL now incorporates the concept of user account categories, with system and regular users distinguished according to whether they have the SYSTEM_USER privilege.

    MySQL now maintains information about password history, enabling restrictions on reuse of previous passwords.
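For example, a sketch that prevents a hypothetical account from reusing its last five passwords:

ALTER USER 'alice'@'%' PASSWORD HISTORY 5;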

MySQL now supports FIPS mode if it is compiled using OpenSSL and an OpenSSL library and FIPS Object Module are available at runtime.

    MySQL now enables administrators to configure user accounts such that too many consecutive login failures due to incorrect passwords cause temporary account locking.
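For example, a sketch (hypothetical account; the clause is available as of MySQL 8.0.19) that locks the account for two days after three consecutive failed login attempts:

ALTER USER 'alice'@'%' FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 2;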

    As of MySQL 8.0.27, MySQL supports multi-factor authentication (MFA), which makes it possible to create accounts that have up to three authentication methods.
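A minimal sketch of a two-factor account, assuming an LDAP SASL authentication plugin is installed; the names and the choice of second factor are illustrative:

CREATE USER 'bob'@'%'
  IDENTIFIED WITH caching_sha2_password BY 'first_factor_password'
  AND IDENTIFIED WITH authentication_ldap_sasl;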

    "},{"location":"upgrade-cutover.html","title":"Upgrade by migrating from one environment to another","text":"

A cut-over upgrade involves migrating an exact copy of the database server from one environment to an upgraded server without redesigning the database, code, or supported features. The environments must have the same hardware and operating system.

During the migration, all writes on the current server are stopped and application traffic is redirected to the new server. After the application begins writing to the new server, you can start tearing down the old environment.

    The benefits are the following:

    • Can upgrade the operating system and database server at the same time

    • Allows an upgrade of the hardware

    • Requires only one migration

The disadvantage is that an entire new environment must be built.

    "},{"location":"upgrade-percona-repos.html","title":"Upgrade using the Percona repositories","text":"

    We recommend using the Percona repositories to upgrade your server.

    Find the instructions on how to enable the repositories in the following documents:

    • Percona APT Repository

    • Percona RPM Repository

If you used the TokuDB storage engine in Percona Server for MySQL 5.7, we recommend that you migrate to either MyRocks or InnoDB, verify the migration, and then upgrade to 8.0. Percona Server for MySQL 8.0.28-19 removed the TokuDB storage engine.

DEB-based distributions

    Run the following commands as root or use the sudo command.

    1. Make a full backup (or dump if possible) of your database. Move the database configuration file, my.cnf, to another directory to save it. If the configuration file is not moved, it can be overwritten.

    2. Stop the server with the appropriate command for your system:

$ systemctl stop mysql\n
3. Modify the database configuration file, my.cnf, as needed.

4. Install Percona Server for MySQL:

      $ sudo apt update\n$ sudo apt install curl\n$ curl -O https://repo.percona.com/apt/percona-release_latest.generic_all.deb \n$ sudo apt install gnupg2 lsb-release ./percona-release_latest.generic_all.deb\n$ sudo apt update\n$ sudo percona-release setup ps80\n$ sudo apt install percona-server-server\n
5. Install the storage engine packages.

      Percona Server for MySQL 8.0.28-19 removes TokuDB. For more information, see TokuDB Introduction.

      If you used the TokuDB storage engine in Percona Server for MySQL 5.7, we recommend that you migrate to either MyRocks or InnoDB, verify the migration, and then upgrade to 8.0.

      If you used the MyRocks storage engine in Percona Server for MySQL 5.7, install the percona-server-rocksdb package:

      $ sudo apt install percona-server-rocksdb\n
6. Run the upgrade:

      Starting with Percona Server for MySQL 8.0.16-7, mysql_upgrade is deprecated. After this version, no operation occurs and this utility generates a message. The mysqld binary automatically runs the upgrade process if needed.

To find more information, see MySQL Upgrade Process Upgrades.

      If you are upgrading to a Percona Server for MySQL version before 8.0.16-7, the installation script does not automatically run mysql_upgrade. Run mysql_upgrade manually.

      $ mysql_upgrade\n
      Expected output
      Checking if update is needed.\nChecking server version.\nRunning queries to upgrade MySQL server.\nChecking system database.\nmysql.columns_priv                                 OK\nmysql.db                                           OK\nmysql.engine_cost                                  OK\n...\nUpgrade process completed successfully.\nChecking if update is needed.\n
7. Restart the service:

$ sudo systemctl restart mysql\n

After the service has been successfully restarted, you can use the new Percona Server for MySQL 8.0.

RPM-based distributions

Run the following commands as root or use the sudo command.

    1. Make a full backup (or dump if possible) of your database. Copy the database configuration file, for example, my.cnf, to another directory to save it.

    2. Stop the server with the appropriate command for your system:

$ systemctl stop mysql\n
    3. Check your installed packages with rpm -qa | grep Percona-Server.

    4. Remove the packages without dependencies. This command only removes the specified packages and leaves any dependent packages. The command does not prompt for confirmation:

      $ rpm -qa | grep Percona-Server | xargs rpm -e --nodeps\n

It is important to remove the packages without their dependencies: many other packages may depend on them (as they replace mysql) and would also be removed if --nodeps were omitted.

      To remove the listed packages, run:

      $ rpm -qa | grep '^mysql-' | xargs rpm -e --nodeps\n
    5. Install the percona-server-server package:

$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release setup ps80\n$ sudo yum install percona-server-server\n

    6. Install the storage engine packages. Percona Server for MySQL 8.0.28-19 removes TokuDB. For more information, see TokuDB Introduction.

      If you used the TokuDB storage engine in Percona Server for MySQL 5.7, we recommend that you migrate to either MyRocks or InnoDB, verify the migration, and then upgrade to 8.0.

      If you used the MyRocks storage engine in Percona Server for MySQL 5.7, install the percona-server-rocksdb package:

      $ yum install percona-server-rocksdb\n
    7. Modify your configuration file, my.cnf, and reinstall the plugins if necessary.

8. Run the upgrade:

      Starting with Percona Server for MySQL 8.0.16-7, mysql_upgrade is deprecated. After this version, no operation occurs and this utility generates a message. The mysqld binary automatically runs the upgrade process if needed.

To find more information, see MySQL Upgrade Process Upgrades.

If you are upgrading to a Percona Server for MySQL version before 8.0.16-7, you can start the mysql service using service mysql start. Use mysql_upgrade to migrate to the new grant tables; it rebuilds the required indexes and makes the required modifications:

      $ mysql_upgrade\n
    9. Restart the service.

$ systemctl restart mysql\n

After the service has been successfully restarted, you can use the new Percona Server for MySQL 8.0.

    "},{"location":"upgrade-plan.html","title":"Plan an upgrade","text":"

    A database upgrade must be planned and tested. The downtime requirements are a major factor in the upgrade strategy.

    The following steps should be reviewed and tested:

| Step | Description |
|---|---|
| Review current environment | Research which server components/plugins are used, which application versions are used, the user connections, and when the database has the heaviest traffic. |
| Hardware and software requirements | Review whether any infrastructure upgrades are required to support the database upgrade. Consider if the hardware and the software must be upgraded. |
| Do regression testing | Perform regression testing in the test environment. |
| Upgrade sequence | Review the high-priority processes to plan testing the process. |
| Back up and rollback | Plan what happens if the upgrade fails. Back up the important files, including configuration files. Be aware that after the database server is upgraded, downgrading is not supported. |
| Release notes | Review the release notes for the new version. |

"},{"location":"upgrade-pro.html","title":"Upgrade to Percona Server for MySQL Pro","text":"

    Percona Server for MySQL Pro includes the capabilities that are typically requested by large enterprises. Percona Server for MySQL Pro contains packages created and tested by Percona. These packages are supported only for Percona Customers with a subscription.

    Become a Percona Customer

    Review Get more help for ways that we can work with you.

    This document provides instructions on upgrading from Percona Server for MySQL to Percona Server for MySQL Pro.

    "},{"location":"upgrade-pro.html#preconditions","title":"Preconditions","text":"

Request access to the Pro repository from Percona Support. You will receive the client ID and the access token, which you use when downloading the packages.

    Check files in packages built for Percona Server for MySQL Pro

    "},{"location":"upgrade-pro.html#procedure","title":"Procedure","text":"
    1. Configure the repository

      On Debian and UbuntuOn RHEL and derivatives
      1. Create the /etc/apt/sources.list.d/psmysql-pro.list configuration file with the following contents

        To get the OPERATING_SYSTEM value, run lsb_release -sc.

        /etc/apt/sources.list.d/psmysql-pro.list
        deb http://repo.percona.com/private/[CLIENTID]-[TOKEN]/ps-80-pro/apt/ OPERATING_SYSTEM main\n
      2. Update the local cache

        $ sudo apt update\n

      Create the /etc/yum.repos.d/psmysql-pro.repo configuration file with the following contents

      /etc/yum.repos.d/psmysql-pro.repo
      [ps-8.0-pro]\nname=PS_8.0_PRO\nbaseurl=http://repo.percona.com/private/[CLIENTID]-[TOKEN]/ps-80-pro/yum/main/$releasever/RPMS/x86_64\nenabled=1\ngpgkey = https://repo.percona.com/yum/PERCONA-PACKAGING-KEY\n
    2. Stop the mysql server

      $ sudo systemctl stop mysql\n
    3. Install Percona Server for MySQL Pro packages

      On Debian and UbuntuOn RHEL and derivatives
      $ sudo apt install -y percona-server-server-pro\n

      Install other required packages. Check files in the DEB package built for Percona Server for MySQL 8.0.

The --allowerasing option allows yum to remove existing packages that conflict with the new installation. This is often necessary when upgrading or reinstalling software.

      $ sudo yum install --allowerasing percona-server-server-pro\n

      Install other required packages. Check files in the RPM package built for Percona Server for MySQL 8.0.

    4. Start the server

      $ sudo systemctl start mysql\n

    Note

    On Debian 12, you may receive the following warning after running systemctl commands:

    Warning: The unit file, source configuration file, or drop-ins of mysql.service changed on disk. Run 'systemctl daemon-reload' to reload units.\n

    Run the suggested command:

    $ sudo systemctl daemon-reload\n

    Downgrade from Percona Server for MySQL Pro

    "},{"location":"upgrade-pt.html","title":"Percona Tools that can help with an upgrade","text":"

Percona Toolkit includes several tools that can help with upgrade planning and make the entire process much easier and less prone to downtime or issues.

| Name | Description |
|---|---|
| pt-upgrade | Helps you run application SELECT queries and generates reports on how each query pattern performs on servers across different versions of MySQL. |
| pt-query-digest | Best practice dictates gathering and testing all application queries by activating the slow log for a period of time, so most companies end up with an enormous amount of slow log data. The pt-query-digest tool can assist in query digest preparation for your upgrade testing. |
| pt-config-diff | Helps determine the differences in MySQL settings between configuration files and server variables. This allows comparison of the upgraded version to the previous version, validating configuration differences. |
| pt-show-grants | Extracts, orders, and then prints grants for MySQL user accounts. This can help you export and back up your MySQL grants before an upgrade, or easily replicate users from one server to another by extracting the grants from the first server and piping the output directly into another server. |

    For more information, see Percona Toolkit

    "},{"location":"upgrade-standalone-packages.html","title":"Upgrade using Standalone Packages","text":"

Make a full backup (or dump if possible) of your database. Move the database configuration file, my.cnf, to another directory to save it. Stop the server with /etc/init.d/mysql stop.

Debian-derived distributions
    1. Remove the installed packages with their dependencies: sudo apt autoremove percona-server percona-client

    2. Do the required modifications in the database configuration file my.cnf.

    3. Download the following packages for your architecture:

      • percona-server-server

      • percona-server-client

      • percona-server-common

      • libperconaserverclient21

      The following example will download Percona Server for MySQL 8.0.29-21 release packages for Debian 11.0:

      $ wget https://downloads.percona.com/downloads/Percona-Server-LATEST/Percona-Server-8.0.29-21/binary/debian/bullseye/x86_64/Percona-Server-8.0.29-21-rc59f87d2854-bullseye-x86_64-bundle.tar\n
    4. Unpack the bundle to get the packages: tar xvf Percona-Server-8.0.29-21-rc59f87d2854-bullseye-x86_64-bundle.tar.

      After you unpack the bundle, you should see the following packages:

      $ ls *.deb\n
      Expected output
libperconaserverclient21-dev_8.0.29-21-1.bullseye_amd64.deb  \npercona-server-dbg_8.0.29-21-1.bullseye_amd64.deb\nlibperconaserverclient21_8.0.29-21-1.bullseye_amd64.deb      \npercona-server-rocksdb_8.0.29-21-1.bullseye_amd64.deb\npercona-mysql-router_8.0.29-21-1.bullseye_amd64.deb\npercona-server-server_8.0.29-21-1.bullseye_amd64.deb\npercona-server-client_8.0.29-21-1.bullseye_amd64.deb     \npercona-server-source_8.0.29-21-1.bullseye_amd64.deb\npercona-server-common_8.0.29-21-1.bullseye_amd64.deb     \npercona-server-test_8.0.29-21-1.bullseye_amd64.deb\n
    5. Install Percona Server for MySQL:

      $ sudo dpkg -i *.deb\n

This command installs the packages from the bundle. Another option is to download or specify only the packages you need to run Percona Server for MySQL (libperconaserverclient21-dev_8.0.29-21-1.bullseye_amd64.deb, percona-server-client_8.0.29-21-1.bullseye_amd64.deb, percona-server-common_8.0.29-21-1.bullseye_amd64.deb, and percona-server-server_8.0.29-21-1.bullseye_amd64.deb).

      Warning

When installing packages manually, you must resolve all the dependencies and install missing packages yourself. At least the following packages should be installed before installing Percona Server for MySQL 8.0:

• libmecab2
• libjemalloc1
• zlib1g-dev
• libaio1

    6. Running the upgrade:

Starting with Percona Server for MySQL 8.0.16-7, mysql_upgrade is deprecated. The functionality moved to the mysqld binary, which automatically runs the upgrade process if needed. If you attempt to run mysql_upgrade, no operation happens and the following message appears: "The mysql_upgrade client is now deprecated. The actions executed by the upgrade client are now done by the server." To find more information, see MySQL Upgrade Process Upgrades.

If you are upgrading to a Percona Server for MySQL version before 8.0.16-7, the installation script does not automatically run mysql_upgrade. You must run mysql_upgrade manually.

      $ mysql_upgrade\n
      Expected output
      Checking if update is needed.\nChecking server version.\nRunning queries to upgrade MySQL server.\nChecking system database.\nmysql.columns_priv                                 OK\nmysql.db                                           OK\nmysql.engine_cost                                  OK\n...\nUpgrade process completed successfully.\nChecking if update is needed.\n
7. Restart the service with service mysql restart. After the service has been successfully restarted, use the new Percona Server for MySQL 8.0.

Red Hat-derived distributions

1. Check the installed packages:

      $ rpm -qa | grep percona-server\n
      Expected output
      percona-server-shared-8.0.29-21.1.el8.x86_64\npercona-server-shared-compat-8.0.29-21.1.el8.x86_64\npercona-server-client-8.0.29-21.1.el8.x86_64\npercona-server-server-8.0.29-21.1.el8.x86_64\n

      You may have the shared-compat package, which is required for compatibility.

    2. Remove the packages without dependencies with rpm -qa | grep percona-server | xargs rpm -e --nodeps.

It is important that you remove the packages without their dependencies: many other packages may depend on them (as they replace mysql) and would also be removed if --nodeps were omitted.

      To remove the listed packages, run:

$ rpm -qa | grep '^mysql-' | xargs rpm -e --nodeps\n
    3. Download the packages of the desired series for your architecture from the download page. The easiest way is to download the bundle which contains all the packages. The following example downloads Percona Server for MySQL 8.0.29-21 release packages for CentOS 8:

      $ wget https://downloads.percona.com/downloads/Percona-Server-LATEST/Percona-Server-8.0.29-21/binary/redhat/8/x86_64/Percona-Server-8.0.29-21-rc59f87d2854-el8-x86_64-bundle.tar\n
    4. Unpack the bundle to get the packages

      $ tar xvf Percona-Server-8.0.29-21-rc59f87d2854-el8-x86_64-bundle.tar\n

After you unpack the bundle, run ls *.rpm to list the packages.

    5. Install Percona Server for MySQL:

      $ sudo rpm -ivh percona-server-server-8.0.29-21.1.el8.x86_64.rpm \\\n> percona-server-client-8.0.29-21.1.el8.x86_64.rpm \\\n> percona-server-shared-8.0.29-21.1.el8.x86_64.rpm \\\n> percona-server-shared-compat-8.0.29-21.1.el8.x86_64.rpm\n

      This command installs only packages required to run the Percona Server for MySQL 8.0.

6. You can install all the packages (for debugging, testing, etc.) with sudo rpm -ivh *.rpm.

      Note

      When manually installing packages, you must resolve all the dependencies and install missing ones.

7. Modify your configuration file, my.cnf, and install the plugins if necessary. If you are using the TokuDB storage engine, you must comment out all the TokuDB-specific variables in your configuration file(s) before starting the server; otherwise, the server will not start.

RHEL/CentOS automatically backs up the previous configuration file to /etc/my.cnf.rpmsave and installs the default my.cnf. After the upgrade/install process completes, you can move the old configuration file back (after you remove all the unsupported system variables).

8. The schema of the grant tables has changed, so the server must be started without reading the grants. Add the following line to the [mysqld] section of my.cnf:
    [mysqld]\nskip-grant-tables\n

Start the mysql server with service mysql start.

9. Run the upgrade:

      Starting with Percona Server for MySQL 8.0.16-7, mysql_upgrade is deprecated. After this version, no operation occurs and this utility generates a message. The mysqld binary automatically runs the upgrade process if needed.

      To find more information, see MySQL Upgrade Process Upgrades

If you are upgrading to a Percona Server for MySQL version before 8.0.16-7, run mysql_upgrade to migrate to the new grant tables; it rebuilds the required indexes and makes the required modifications.

10. Remove the skip-grant-tables line from my.cnf and restart the server with service mysql restart. After the service has been successfully restarted, you can use the new Percona Server for MySQL 8.0.

    "},{"location":"upgrade-strategies.html","title":"Upgrade strategies","text":"

    There are different strategies to consider when upgrading from MySQL 5.7 to MySQL 8.

    "},{"location":"upgrade-strategies.html#in-place-upgrade","title":"In-place upgrade","text":"

An in-place upgrade to version 8.0 does not allow a rollback. The in-place upgrade strategy is not recommended and should be used only as a last resort.

An in-place upgrade involves shutting down the 5.7 server and replacing the server binaries or packages with new ones. At this point, the new server version can be started on the existing data directory. If the new version is earlier than 8.0.16, you should run mysql_upgrade. Note that the server should be configured to perform a slow shutdown by setting innodb_fast_shutdown=0 before the shutdown.
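A minimal sketch of that shutdown step on the old 5.7 server, before replacing the binaries:

SET GLOBAL innodb_fast_shutdown = 0;  -- force a slow, clean shutdown
SHUTDOWN;                             -- requires the SHUTDOWN privilege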

The benefits and considerations are:

    • Less additional infrastructure cost compared to a new environment, but nodes must be tested.
    • An upgrade can be completed over weeks with cool-down periods between reader node upgrades.
    • Requires a failover of production traffic, and for minimal downtime you must have good high-availability tools.

If you use XA transactions with InnoDB, run XA RECOVER before upgrading to check for uncommitted XA transactions. If results are returned, either commit or roll back the XA transactions by issuing an XA COMMIT or XA ROLLBACK statement.
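For example, a sketch of that check (the xid value is illustrative):

XA RECOVER;               -- lists any prepared but uncommitted XA transactions
XA COMMIT 'example_xid';  -- or: XA ROLLBACK 'example_xid';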

    "},{"location":"upgrade-strategies.html#new-environment-with-cut-over","title":"New environment with cut over","text":"

    Upgrading with a new environment involves provisioning a duplicate environment with the same number of servers with the same hardware specs and same operating system as the current production nodes.

On the newly provisioned hardware, the target MySQL version is installed. The new environment is set up, and the production data is recovered. Remember that you can use pt-config-diff to verify MySQL configurations.

Replication from the current source to the newly built environment is then established. At cutover time, all writes on the current source are halted, and the application traffic is redirected to the new source. The cutover can be done using a virtual IP address or by redirecting the application itself. Once writes are being received on the new environment, you are in a fail-forward situation, and the old environment can be torn down.

This strategy has an additional infrastructure cost, as an entire new environment must be built. In exchange, it lets you upgrade both the operating system and the DBMS at the same time, makes hardware upgrades easy, and requires only a single cutover window.

    "},{"location":"upgrade-tokudb-myrocks.html","title":"Upgrade from systems that use the MyRocks or TokuDB storage engine and partitioned tables","text":"

Due to a limitation imposed by MySQL, the storage engine itself must provide support for partitioning. MySQL 8.0 provides native partitioning support only for the InnoDB storage engine.

    If you use partitioned tables with the MyRocks or TokuDB storage engine, the upgrade may fail if you do not enable the native partitioning provided by the storage engine.

    TokuDB is deprecated. For more information, see TokuDB Introduction.

    Before you attempt the upgrade, check whether you have any tables that are not using the native partitioning.

    $ mysqlcheck -u root --all-databases --check-upgrade\n

If such tables are found, mysqlcheck issues a warning.

    Enable either the rocksdb_enable_native_partition variable or the tokudb_enable_native_partition variable depending on the storage engine and restart the server.

    Your next step is to alter the tables that are not using the native partitioning with the UPGRADE PARTITIONING clause:

    ALTER TABLE <table-name> UPGRADE PARTITIONING\n
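As a hedged helper, the following sketch generates the ALTER statement for every partitioned MyRocks or TokuDB table, assuming the 5.7 server reports partitioned tables through the CREATE_OPTIONS column:

SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` UPGRADE PARTITIONING;') AS stmt
FROM INFORMATION_SCHEMA.TABLES
WHERE CREATE_OPTIONS LIKE '%partitioned%'
  AND ENGINE IN ('ROCKSDB', 'TokuDB');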

Complete these steps for each table that mysqlcheck lists. Otherwise, the upgrade to 8.0 fails and your error log contains messages like the following:

    2018-12-17T18:34:14.152660Z 2 [ERROR] [MY-013140] [Server] The 'partitioning' feature is not available; you need to remove '--skip-partition' or use MySQL built with '-DWITH_PARTITION_STORAGE_ENGINE=1'\n2018-12-17T18:34:14.152679Z 2 [ERROR] [MY-013140] [Server] Can't find file: './comp_test/t1_RocksDB_lz4.frm' (errno: 0 - Success)\n2018-12-17T18:34:14.152691Z 2 [ERROR] [MY-013137] [Server] Can't find file: './comp_test/t1_RocksDB_lz4.frm' (OS errno: 0 - Success)\n
    "},{"location":"upgrade-tokudb-myrocks.html#perform-a-distribution-upgrade-in-place-on-a-system-with-installed-percona-packages","title":"Perform a distribution upgrade in-place on a system with installed Percona packages","text":"

    The following is the recommended process for performing a distribution upgrade on a system with the Percona packages installed:

    1. Record the installed Percona packages.

    2. Back up the data and configurations.

    3. Uninstall the Percona packages without removing the configuration file or data.

    4. Perform the upgrade by following the distribution upgrade instructions

    5. Reboot the system.

    6. Install the Percona packages intended for the upgraded version of the distribution.

    "},{"location":"upgrade.html","title":"Upgrade from 5.7 to 8.0 overview","text":"

    Upgrading your server to 8.0 has the following benefits:

| Benefits | Description |
|---|---|
| Security fixes | These patches and updates protect your data from cyberattacks and address vulnerabilities or bugs in the database software. |
| New or improved features | You have access to new or improved features which enhance the functionality, performance, and availability of the database. |
| Reduced labor | You can automate some routine tasks. |
| Relevance | Your customers and stakeholders have changing needs and expectations. Using the latest version can help to deliver solutions faster. |
| Reduced operational costs | An upgraded database server can help reduce your operational costs because the server has improved efficiency and scalability. |

    Not upgrading your database can have the following risks:

| Risks | Description |
|---|---|
| Security risks | Your database server is vulnerable to cyberattacks because you do not receive security fixes. These attacks can result in data breaches, data loss, and data corruption, which can harm the organization's reputation and cost it money. |
| Service risks | You do not benefit from new or improved features. This risk may cause poor user experience, reduced productivity, and increased downtime. |
| Support risks | You are limited in support access. This risk can result in longer resolution times, unresolved issues, and higher support costs. |
| Compatibility risks | You can experience compatibility issues with hardware, operating systems, or applications because the older version is not supported on newer platforms. At some point, the database server is no longer supportable. |
| Failure risk | A failure in hardware, the operating system, or an application may force an upgrade at the wrong time. |

    Review Get more help for ways that we can work with you.

    Create a test environment to verify the upgrade before you upgrade the production servers. The test environment is crucial to the success of the upgrade. There is no supported downgrade procedure. You can try to replicate from an 8.0 version to 5.7 or restore a backup.

    Several tools in the Percona Toolkit can help with the upgrade process.

    We recommend upgrading to the latest version. The following topics describe the major changes from 5.7 to 8.0:

    • General changes
    • InnoDB changes
    • Security and account management changes
    • Deprecated in 8.0
    • Removed in 8.0

    Review the documentation for other changes between 5.7 to 8.0.

    Review Upgrade Strategies for an overview of the major strategies.

    The following list summarizes a number of the changes in the 8.0 series and has useful guides that can help you perform a smooth upgrade. We strongly recommend reading this information:

    • Upgrading MySQL

    • Before You Begin

    • Upgrade Paths

    • Changes in MySQL 8.0

    • Preparing your Installation for Upgrade

    • MySQL 8 Minor Version Upgrades Are ONE-WAY Only

    • Percona Utilities That Make Major MySQL Version Upgrades Easier

    • Percona Server for MySQL 8.0 Release notes

    • Upgrade Troubleshooting

    • Rebuilding or Repairing Tables or Indexes

    Review other Percona blogs that contain upgrade information.

As of Percona Server for MySQL 8.0.15-5, Percona Server for MySQL uses the upstream implementation of binary log file encryption and relay log file encryption.

    "},{"location":"upgrade.html#known-limitation","title":"Known limitation","text":"

The Percona 5.7 and Percona 8.0 Dockerfiles use different user IDs (UIDs). This difference can create compatibility and permission issues. The UID determines the permissions for the anonymous volume mounts. Because the UIDs differ between versions, the container does not have the necessary permissions to access or modify these volumes.

    "},{"location":"user-stats.html","title":"User statistics","text":"

    This feature adds several INFORMATION_SCHEMA tables, several commands, and the userstat variable. The tables and commands can be used to understand the server activity better and identify the source of the load.

    The functionality is disabled by default and must be enabled by setting userstat to ON. It works by keeping several hash tables in memory. To avoid contention over global mutexes, each connection has its own local statistics, which are occasionally merged into the global statistics, and the local statistics are then reset to 0.
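A minimal sketch of enabling the collection at runtime (both variables are dynamic and global):

SET GLOBAL userstat = ON;           -- master switch for statistics collection
SET GLOBAL thread_statistics = ON;  -- optionally also collect per-thread statistics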

    "},{"location":"user-stats.html#version-specific-information","title":"Version specific information","text":"
    • 8.0.12-1: The feature was ported from Percona Server for MySQL 5.7.
    "},{"location":"user-stats.html#other-information","title":"Other information","text":"
    • Author/Origin: Google; Percona added the INFORMATION_SCHEMA tables and the userstat variable.
    "},{"location":"user-stats.html#system-variables","title":"System variables","text":""},{"location":"user-stats.html#userstat","title":"userstat","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type BOOLEAN Default OFF Range ON/OFF

Enables or disables collection of statistics. The default is OFF, meaning no statistics are gathered. This is to ensure that the statistics collection doesn't cause any extra load on the server unless desired.

    "},{"location":"user-stats.html#thread_statistics","title":"thread_statistics","text":"Option Description Command-line Yes Config file Yes Scope Global Dynamic Yes Data type BOOLEAN Default OFF Range ON/OFF

Enables or disables collection of thread statistics. The default is OFF, meaning no thread statistics are gathered. This is to ensure that the statistics collection doesn't cause any extra load on the server unless desired. The userstat variable must also be enabled for thread statistics to be collected.

    "},{"location":"user-stats.html#information_schema-tables","title":"INFORMATION_SCHEMA Tables","text":""},{"location":"user-stats.html#information_schemaclient_statistics","title":"INFORMATION_SCHEMA.CLIENT_STATISTICS","text":"Column Name Description \u2018CLIENT\u2019 \u2018The IP address or hostname from which the connection originated.\u2019 \u2018TOTAL_CONNECTIONS\u2019 \u2018The number of connections created for this client.\u2019 \u2018CONCURRENT_CONNECTIONS\u2019 \u2018The number of concurrent connections for this client.\u2019 \u2018CONNECTED_TIME\u2019 \u2018The cumulative number of seconds elapsed while there were connections from this client.\u2019 \u2018BUSY_TIME\u2019 \u2018The cumulative number of seconds there was activity on connections from this client.\u2019 \u2018CPU_TIME\u2019 \u2018The cumulative CPU time elapsed, in seconds, while servicing this client\u2019s connections.\u2019 \u2018BYTES_RECEIVED\u2019 \u2018The number of bytes received from this client\u2019s connections.\u2019 \u2018BYTES_SENT\u2019 \u2018The number of bytes sent to this client\u2019s connections.\u2019 \u2018BINLOG_BYTES_WRITTEN\u2019 \u2018The number of bytes written to the binary log from this client\u2019s connections.\u2019 \u2018ROWS_FETCHED\u2019 \u2018The number of rows fetched by this client\u2019s connections.\u2019 \u2018ROWS_UPDATED\u2019 \u2018The number of rows updated by this client\u2019s connections.\u2019 \u2018TABLE_ROWS_READ\u2019 \u2018The number of rows read from tables by this client\u2019s connections. (It may be different from ROWS_FETCHED.)\u2019 \u2018SELECT_COMMANDS\u2019 \u2018The number of SELECT commands executed from this client\u2019s connections.\u2019 \u2018UPDATE_COMMANDS\u2019 \u2018The number of UPDATE commands executed from this client\u2019s connections.\u2019 \u2018OTHER_COMMANDS\u2019 \u2018The number of other commands executed from this client\u2019s connections.\u2019 \u2018COMMIT_TRANSACTIONS\u2019 \u2018The number of COMMIT commands issued by this client\u2019s connections.\u2019 \u2018ROLLBACK_TRANSACTIONS\u2019 \u2018The number of ROLLBACK commands issued by this client\u2019s connections.\u2019 \u2018DENIED_CONNECTIONS\u2019 \u2018The number of connections denied to this client.\u2019 \u2018LOST_CONNECTIONS\u2019 \u2018The number of this client\u2019s connections that were terminated uncleanly.\u2019 \u2018ACCESS_DENIED\u2019 \u2018The number of times this client\u2019s connections issued commands that were denied.\u2019 \u2018EMPTY_QUERIES\u2019 \u2018The number of times this client\u2019s connections sent empty queries to the server.\u2019

This table holds statistics about client connections. The Percona version of the feature restricts this table's visibility to users who have the SUPER or PROCESS privilege.

    For example:

mysql> SELECT * FROM INFORMATION_SCHEMA.CLIENT_STATISTICS\\G\n
    Expected output
    *************************** 1. row ***************************\n                CLIENT: 10.1.12.30\n     TOTAL_CONNECTIONS: 20\nCONCURRENT_CONNECTIONS: 0\n        CONNECTED_TIME: 0\n             BUSY_TIME: 93\n              CPU_TIME: 48\n        BYTES_RECEIVED: 5031\n            BYTES_SENT: 276926\n   BINLOG_BYTES_WRITTEN: 217\n          ROWS_FETCHED: 81\n          ROWS_UPDATED: 0\n       TABLE_ROWS_READ: 52836023\n       SELECT_COMMANDS: 26\n       UPDATE_COMMANDS: 1\n        OTHER_COMMANDS: 145\n   COMMIT_TRANSACTIONS: 1\n ROLLBACK_TRANSACTIONS: 0\n    DENIED_CONNECTIONS: 0\n      LOST_CONNECTIONS: 0\n         ACCESS_DENIED: 0\n         EMPTY_QUERIES: 0\n
    "},{"location":"user-stats.html#information_schema-tables_1","title":"INFORMATION_SCHEMA tables","text":""},{"location":"user-stats.html#information_schemaindex_statistics","title":"INFORMATION_SCHEMA.INDEX_STATISTICS","text":"Column Name Description \u2018TABLE_SCHEMA\u2019 \u2018The schema (database) name.\u2019 \u2018TABLE_NAME\u2019 \u2018The table name.\u2019 \u2018INDEX_NAME\u2019 \u2018The index name (as visible in SHOW CREATE TABLE).\u2019 \u2018ROWS_READ\u2019 \u2018The number of rows read from this index.\u2019

    This table shows statistics on index usage. An older version of the feature contained a single column that had the TABLE_SCHEMA, TABLE_NAME, and INDEX_NAME columns concatenated together. The Percona version of the feature separates these into three columns. Users can see entries only for tables to which they have SELECT access.

    This table makes it possible to do many things that were difficult or impossible previously. For example, you can use it to find unused indexes and generate DROP commands to remove them.
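For example, a hedged sketch that lists candidate unused secondary indexes by checking which defined indexes never appear in INDEX_STATISTICS (the results are only meaningful after the server has run a representative workload with userstat enabled):

SELECT s.TABLE_SCHEMA, s.TABLE_NAME, s.INDEX_NAME
FROM INFORMATION_SCHEMA.STATISTICS s
LEFT JOIN INFORMATION_SCHEMA.INDEX_STATISTICS u
       ON u.TABLE_SCHEMA = s.TABLE_SCHEMA
      AND u.TABLE_NAME   = s.TABLE_NAME
      AND u.INDEX_NAME   = s.INDEX_NAME
WHERE u.INDEX_NAME IS NULL                -- never read since the last flush
  AND s.INDEX_NAME <> 'PRIMARY'
  AND s.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
GROUP BY s.TABLE_SCHEMA, s.TABLE_NAME, s.INDEX_NAME;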

    Example:

    mysql> SELECT * FROM INFORMATION_SCHEMA.INDEX_STATISTICS WHERE TABLE_NAME='tables_priv';\n
    Expected output
    +--------------+-----------------------+--------------------+-----------+\n| TABLE_SCHEMA | TABLE_NAME            | INDEX_NAME         | ROWS_READ |\n+--------------+-----------------------+--------------------+-----------+\n| mysql        | tables_priv           | PRIMARY            |         2 |\n+--------------+-----------------------+--------------------+-----------+\n

    Note

The current implementation of index statistics doesn't support partitioned tables.

    "},{"location":"user-stats.html#information_schematable_statistics","title":"INFORMATION_SCHEMA.TABLE_STATISTICS","text":"Column Name Description \u2018TABLE_SCHEMA\u2019 \u2018The schema (database) name.\u2019 \u2018TABLE_NAME\u2019 \u2018The table name.\u2019 \u2018ROWS_READ\u2019 \u2018The number of rows read from the table.\u2019 \u2018ROWS_CHANGED\u2019 \u2018The number of rows changed in the table.\u2019 \u2018ROWS_CHANGED_X_INDEXES\u2019 \u2018The number of rows changed in the table, multiplied by the number of indexes changed.\u2019

    This table is similar in function to the INDEX_STATISTICS table.

    For example:

    mysql> SELECT * FROM INFORMATION_SCHEMA.TABLE_STATISTICS WHERE TABLE_NAME='tables_priv';\n
    Expected output
    +--------------+-------------------------------+-----------+--------------+------------------------+\n| TABLE_SCHEMA | TABLE_NAME                    | ROWS_READ | ROWS_CHANGED | ROWS_CHANGED_X_INDEXES |\n+--------------+-------------------------------+-----------+--------------+------------------------+\n| mysql        | tables_priv                   |         2 |            0 |                      0 |\n+--------------+-------------------------------+-----------+--------------+------------------------+\n

    Note

The current implementation of table statistics doesn't support partitioned tables.

    "},{"location":"user-stats.html#information_schemathread_statistics","title":"INFORMATION_SCHEMA.THREAD_STATISTICS","text":"Column Name Description \u2018THREAD_ID\u2019 \u2018Thread ID\u2019 \u2018TOTAL_CONNECTIONS\u2019 \u2018The number of connections created from this thread.\u2019 \u2018CONNECTED_TIME\u2019 \u2018The cumulative number of seconds elapsed while there were connections from this thread.\u2019 \u2018BUSY_TIME\u2019 \u2018The cumulative number of seconds there was activity from this thread.\u2019 \u2018CPU_TIME\u2019 \u2018The cumulative CPU time elapsed while servicing this thread.\u2019 \u2018BYTES_RECEIVED\u2019 \u2018The number of bytes received from this thread.\u2019 \u2018BYTES_SENT\u2019 \u2018The number of bytes sent to this thread.\u2019 \u2018BINLOG_BYTES_WRITTEN\u2019 \u2018The number of bytes written to the binary log from this thread.\u2019 \u2018ROWS_FETCHED\u2019 \u2018The number of rows fetched by this thread.\u2019 \u2018ROWS_UPDATED\u2019 \u2018The number of rows updated by this thread.\u2019 \u2018TABLE_ROWS_READ\u2019 \u2018The number of rows read from tables by this tread.\u2019 \u2018SELECT_COMMANDS\u2019 \u2018The number of SELECT commands executed from this thread.\u2019 \u2018UPDATE_COMMANDS\u2019 \u2018The number of UPDATE commands executed from this thread.\u2019 \u2018OTHER_COMMANDS\u2019 \u2018The number of other commands executed from this thread.\u2019 \u2018COMMIT_TRANSACTIONS\u2019 \u2018The number of COMMIT commands issued by this thread.\u2019 \u2018ROLLBACK_TRANSACTIONS\u2019 \u2018The number of ROLLBACK commands issued by this thread.\u2019 \u2018DENIED_CONNECTIONS\u2019 \u2018The number of connections denied to this thread.\u2019 \u2018LOST_CONNECTIONS\u2019 \u2018The number of thread connections that were terminated uncleanly.\u2019 \u2018ACCESS_DENIED\u2019 \u2018The number of times this thread issued commands that were denied.\u2019 \u2018EMPTY_QUERIES\u2019 \u2018The number of times this thread sent empty queries to the server.\u2019 \u2018TOTAL_SSL_CONNECTIONS\u2019 \u2018The number of thread connections that used SSL.\u2019

    In order for this table to be populated with statistics, the additional variable thread_statistics should be set to ON.

    "},{"location":"user-stats.html#information_schemauser_statistics","title":"INFORMATION_SCHEMA.USER_STATISTICS","text":"Column Name Description \u2018USER\u2019 \u2018The username. The value #mysql_system_user# appears when there is no username (such as for the replica SQL thread).\u2019 \u2018TOTAL_CONNECTIONS\u2019 \u2018The number of connections created from this user.\u2019 \u2018CONCURRENT_CONNECTIONS\u2019 \u2018The number of concurrent connections for this user.\u2019 \u2018CONNECTED_TIME\u2019 \u2018The cumulative number of seconds elapsed while there were connections from this user.\u2019 \u2018BUSY_TIME\u2019 \u2018The cumulative number of seconds there was activity on connections from this user.\u2019 \u2018CPU_TIME\u2019 \u2018The cumulative CPU time elapsed, in seconds, while servicing this user\u2019s connections.\u2019 \u2018BYTES_RECEIVED\u2019 \u2018The number of bytes received from this user\u2019s connections.\u2019 \u2018BYTES_SENT\u2019 \u2018The number of bytes sent to this user\u2019s connections.\u2019 \u2018BINLOG_BYTES_WRITTEN\u2019 \u2018The number of bytes written to the binary log from this user\u2019s connections.\u2019 \u2018ROWS_FETCHED\u2019 \u2018The number of rows fetched by this user\u2019s connections.\u2019 \u2018ROWS_UPDATED\u2019 \u2018The number of rows updated by this user\u2019s connections.\u2019 \u2018TABLE_ROWS_READ\u2019 \u2018The number of rows read from tables by this user\u2019s connections. (It may be different from ROWS_FETCHED.)\u2019 \u2018SELECT_COMMANDS\u2019 \u2018The number of SELECT commands executed from this user\u2019s connections.\u2019 \u2018UPDATE_COMMANDS\u2019 \u2018The number of UPDATE commands executed from this user\u2019s connections.\u2019 \u2018OTHER_COMMANDS\u2019 \u2018The number of other commands executed from this user\u2019s connections.\u2019 \u2018COMMIT_TRANSACTIONS\u2019 \u2018The number of COMMIT commands issued by this user\u2019s connections.\u2019 \u2018ROLLBACK_TRANSACTIONS\u2019 \u2018The number of ROLLBACK commands issued by this user\u2019s connections.\u2019 \u2018DENIED_CONNECTIONS\u2019 \u2018The number of connections denied to this user.\u2019 \u2018LOST_CONNECTIONS\u2019 \u2018The number of this user\u2019s connections that were terminated uncleanly.\u2019 \u2018ACCESS_DENIED\u2019 \u2018The number of times this user\u2019s connections issued commands that were denied.\u2019 \u2018EMPTY_QUERIES\u2019 \u2018The number of times this user\u2019s connections sent empty queries to the server.\u2019

This table contains information about user activity. The Percona version of the patch restricts this table's visibility to users who have the SUPER or PROCESS privilege.

    The table gives answers to questions such as which users cause the most load, and whether any users are being abusive. It also lets you measure how close to capacity the server may be. For example, you can use it to find out whether replication is likely to start falling behind.

    Example:

mysql> SELECT * FROM INFORMATION_SCHEMA.USER_STATISTICS\\G\n
    Expected output
    *************************** 1. row ***************************\n                  USER: root\n     TOTAL_CONNECTIONS: 5592\n CONCURRENT_CONNECTIONS: 0\n         CONNECTED_TIME: 6844\n             BUSY_TIME: 179\n              CPU_TIME: 72\n        BYTES_RECEIVED: 603344\n            BYTES_SENT: 15663832\n  BINLOG_BYTES_WRITTEN: 217\n          ROWS_FETCHED: 9793\n          ROWS_UPDATED: 0\n       TABLE_ROWS_READ: 52836023\n       SELECT_COMMANDS: 9701\n       UPDATE_COMMANDS: 1\n        OTHER_COMMANDS: 2614\n   COMMIT_TRANSACTIONS: 1\n ROLLBACK_TRANSACTIONS: 0\n    DENIED_CONNECTIONS: 0\n      LOST_CONNECTIONS: 0\n         ACCESS_DENIED: 0\n         EMPTY_QUERIES: 0\n
    "},{"location":"user-stats.html#commands-provided","title":"Commands Provided","text":"
    • FLUSH CLIENT_STATISTICS

    • FLUSH INDEX_STATISTICS

    • FLUSH TABLE_STATISTICS

    • FLUSH THREAD_STATISTICS

    • FLUSH USER_STATISTICS

    These commands discard the specified type of stored statistical information.

    • SHOW CLIENT_STATISTICS

    • SHOW INDEX_STATISTICS

    • SHOW TABLE_STATISTICS

    • SHOW THREAD_STATISTICS

    • SHOW USER_STATISTICS

    These commands are another way to display the information you can get from the INFORMATION_SCHEMA tables. The commands accept WHERE clauses. They also accept but ignore LIKE clauses.
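For example, a short sketch combining the two command families (the user name is hypothetical):

SHOW USER_STATISTICS WHERE USER = 'app_user';  -- filter rows with a WHERE clause
FLUSH USER_STATISTICS;                         -- discard the stored user statistics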

    "},{"location":"user-stats.html#status-variables","title":"Status Variables","text":""},{"location":"user-stats.html#com_show_client_statistics","title":"Com_show_client_statistics","text":"Option Description Scope Global/Session Data type numeric

    The Com_show_client_statistics statement counter variable indicates the number of times the statement SHOW CLIENT_STATISTICS has been executed.

    "},{"location":"user-stats.html#com_show_index_statistics","title":"Com_show_index_statistics","text":"Option Description Scope Global/Session Data type numeric

    The Com_show_index_statistics statement counter variable indicates the number of times the statement SHOW INDEX_STATISTICS has been executed.

    "},{"location":"user-stats.html#com_show_table_statistics","title":"Com_show_table_statistics","text":"Option Description Scope Global/Session Data type numeric

    The Com_show_table_statistics statement counter variable indicates the number of times the statement SHOW TABLE_STATISTICS has been executed.

    "},{"location":"user-stats.html#com_show_thread_statistics","title":"Com_show_thread_statistics","text":"Option Description Scope Global/Session Data type numeric

    The Com_show_thread_statistics statement counter variable indicates the number of times the statement SHOW THREAD_STATISTICS has been executed.

    "},{"location":"user-stats.html#com_show_user_statistics","title":"Com_show_user_statistics","text":"Option Description Scope Global/Session Data type numeric

    The Com_show_user_statistics statement counter variable indicates the number of times the statement SHOW USER_STATISTICS has been executed.

    "},{"location":"using-amz-kms.html","title":"Use the Amazon Key Management Service (AWS KMS)","text":"

    As of Percona Server for MySQL 8.0.30-22, the Amazon Key Management Service (AWS KMS) feature is Generally Available (GA).

    Percona Server for MySQL 8.0.28-20 adds support for the Amazon Key Management Service (AWS KMS). Percona Server generates the keyring keys. Amazon Web Services (AWS) encrypts the keyring data.

    The AWS KMS lets you create and manage cryptographic keys across AWS services. For more information, see the AWS Key Management Service Documentation.

    To use the AWS KMS component, do the following:

    • Have an AWS user account. This account has an access key and a secret key.

    • Create a KMS key ID. The KMS key can then be referenced in the configuration either by its ID, alias (the key can have any number of aliases), or ARN.

    "},{"location":"using-amz-kms.html#component-installation","title":"Component installation","text":"

    You should only load the AWS KMS component with a manifest file. The server uses this manifest file, and the component consults its configuration file during initialization.

    For more information, see Installing and Uninstalling Components

    You should create a global manifest file named mysqld.my in the installation directory and, optionally, create a local manifest file, also named mysqld.my in a data directory.

    To install a KMS component, do the following:

    1. Write a manifest in a valid JSON format

    2. Write a configuration file

    A manifest file indicates which component to load. The server does not load the component if the manifest file associated with the component does not exist. The server reads the global manifest file from the installation directory during startup. The global manifest file can contain the required information or point to a local manifest file in the data directory. If you have multiple server instances that use different keyring components, use a local manifest file in each data directory to load the correct keyring component for that instance.

    Note

    Enable only one keyring plugin or one keyring component at a time for each server instance. Enabling multiple keyring plugins or keyring components or mixing keyring plugins or keyring components is not supported and may result in data loss.

    The following example is a global manifest file that does not use local manifests:

    {\n \"read_local_manifest\": false,\n \"components\": \"file://component_keyring_kms\"\n}\n

    The following is an example of a global manifest file that points to a local manifest file:

    {\n \"read_local_manifest\": true\n}\n

    The following is an example of a local manifest file:

    {\n \"components\": \"file://component_keyring_kms\"\n}\n

    The configuration settings are either in a global configuration file or a local configuration file. The settings are the same.

    The KMS configuration file has the following options:

• read_local_config - if set to true in the global configuration file, the component reads a local configuration file in the data directory instead (mirroring read_local_manifest for manifests).

    • path - the location of the JSON keyring database file.

    • read_only - if true, the keyring cannot be modified.

    • kms_key - the identifier of an AWS KMS master key. The user must create this key before creating the manifest file. The identifier can be one of the following:

      • UUID

      • Alias

      • ARN

    For more information, see Finding the key ID and key ARN.

• region - the AWS region where the KMS key is stored. Any HTTP request connects to this region.

    • auth_key - an AWS user authentication key. The user must have access to the KMS key.

• secret_access_key - the secret key (API "password") for the AWS user.

    Note

    The configuration file contains authentication information. Only the MySQL process should be able to read this file.

    The following JSON is an example of a configuration file:

    {\n \"read_local_config\": \"true/false\",\n \"path\": \"/usr/local/mysql/keyring-mysql/aws-keyring-data\",\n \"region\": \"eu-central-1\",\n \"kms_key\": \"UUID, alias or ARN as displayed by the KMS console\",\n \"auth_key\": \"AWS user key\",\n \"secret_access_key\": \"AWS user secret key\"\n}\n

    For more information, see Keyring Component installation

    "},{"location":"using-keyring-plugin.html","title":"Use the keyring component or keyring plugin","text":"

    The keyring_vault plugin can store the encryption keys inside the HashiCorp Vault.

    See also

    Hashicorp Documentation: Installing Vault Hashicorp Documentation: Production Hardening

    Percona Server for MySQL may use either of the following plugins:

    • keyring_file stores the keyring data locally

• keyring_vault provides an interface for the database to a HashiCorp Vault server that stores and secures the encryption keys.

    Note

    The keyring_file plugin should not be used for regulatory compliance.

    To install the plugin, follow the installing and uninstalling plugins instructions.

    "},{"location":"using-keyring-plugin.html#load-the-keyring-plugin","title":"Load the keyring plugin","text":"

You should load the plugin at server startup with the --early-plugin-load option to enable keyrings.

    Warning

    Only one keyring plugin should be enabled at a time. Enabling multiple keyring plugins is not supported and may result in data loss.

We recommend that you load the plugin in the configuration file to facilitate recovery for encrypted tables. Also, redo log encryption and undo log encryption cannot be used without --early-plugin-load; the normal plugin load happens too late in startup.

    Note

The keyring_vault extension, ".so", and the file location for the Vault configuration should be changed to match your operating system's extension and file locations.

    To use the keyring_vault, you can add this option to your configuration file:

[mysqld]\nearly-plugin-load=\"keyring_vault=keyring_vault.so\"\nloose-keyring_vault_config=\"/home/mysql/keyring_vault.conf\"\n

    You could also run the following command which loads the keyring_file plugin:

    $ mysqld --early-plugin-load=\"keyring_file=keyring_file.so\"\n

    Note

    If a server starts with multiple plugins loaded early, the --early-plugin-load option should contain the plugin names in a double-quoted list, separating each plugin name by a semicolon. The double quotes ensure the semicolons do not create issues when executing the list in a script.

    After installing the plugin, you must also point the keyring_vault_config variable to the keyring_vault configuration file.

    The keyring_vault_config file has the following information:

    • vault_url - the Vault server address

    • secret_mount_point - the mount point name where the keyring_vault stores the keys.

    • secret_mount_point_version - the KV Secrets Engine version (kv or kv-v2) used. Implemented in Percona Server for MySQL 8.0.23-14.

    • token - a token generated by the Vault server

• vault_ca [optional] - if the machine does not trust the Vault's CA certificate, this variable points to the CA certificate used to sign the Vault's certificates

    This is an example of a configuration file:

    vault_url = https://vault.public.com:8202\nsecret_mount_point = secret\nsecret_mount_point_version = AUTO\ntoken = {randomly-generated-alphanumeric-string}\nvault_ca = /data/keyring_vault_confs/vault_ca.crt\n

    Warning

    Each secret_mount_point must be used by only one server. If multiple servers use the same secret_mount_point, the behavior is unpredictable.

    The first time a key is fetched from a keyring, the keyring_vault communicates with the Vault server to retrieve the key type and data.

    "},{"location":"using-keyring-plugin.html#secret_mount_point_version-information","title":"secret_mount_point_version information","text":"

Implemented in Percona Server for MySQL 8.0.23-14, the secret_mount_point_version can be 1, 2, or AUTO, or the parameter can be omitted from the configuration file.

| Value | Description |
|---|---|
| 1 | Works with KV Secrets Engine - Version 1 (kv). When forming key operation URLs, the secret_mount_point is always used without any transformations. For example, to return a key named skey, the URL is /v1/<secret_mount_point>/skey. |
| 2 | Works with KV Secrets Engine - Version 2 (kv-v2). The initialization logic splits the secret_mount_point parameter into two parts: the mount_point_path (the mount path under which the Vault server secret was created) and the directory_path (a virtual directory suffix that can be used to create virtual namespaces with the same real mount point). Both parts are needed to form key access URLs: /v1/<mount_point_path>/data/<directory_path>/skey. |
| AUTO | An autodetection mechanism probes and determines whether the secrets engine version is kv or kv-v2 and, based on the outcome, either uses the secret_mount_point as is or splits it into two parts. |
| Not listed | If the secret_mount_point_version is not listed in the configuration file, the behavior is the same as AUTO. |

    If you set the secret_mount_point_version to 2 but the path pointed by secret_mount_point is based on KV Secrets Engine - Version 1 (kv), an error is reported, and the plugin fails to initialize.

    If you set the secret_mount_point_version to 1 but the path pointed by secret_mount_point is based on KV Secrets Engine - Version 2 (kv-v2), the plugin initialization succeeds but any MySQL keyring-related operations fail.

    "},{"location":"using-keyring-plugin.html#upgrade-from-8022-13-or-earlier-to-8023-14-or-later","title":"Upgrade from 8.0.22-13 or earlier to 8.0.23-14 or later","text":"

The keyring_vault plugin configuration files created before Percona Server for MySQL 8.0.23-14 work only with KV Secrets Engine - Version 1 (kv) and do not have the secret_mount_point_version parameter. After the upgrade to 8.0.23-14 or later, the secret_mount_point_version is implicitly treated as AUTO: the secrets engine is probed and the version is determined to be 1.

    "},{"location":"using-keyring-plugin.html#upgrade-from-vault-secrets-engine-version-1-to-version-2","title":"Upgrade from Vault Secrets Engine Version 1 to Version 2","text":"

    You can upgrade from the Vault Secrets Engine Version 1 to Version 2. Use either of the following methods:

• Set the secret_mount_point_version to AUTO, or leave the variable unset, in the keyring_vault plugin configuration files on all Percona Servers. The AUTO value ensures that the autodetection mechanism is invoked during the plugin's initialization.

    • Set the secret_mount_point_version to 2 to ensure that plugins do not initialize unless the kv to kv-v2 upgrade completes.
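
The secrets engine upgrade itself is performed on the Vault side. For example, assuming the secrets engine is mounted at secret/ (a sketch; adjust the mount point to your deployment):

$ vault kv enable-versioning secret/\n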

    Note

    The keyring_vault plugin that works with kv-v2 secret engines does not use the built-in key versioning capabilities. The keyring key versions are encoded into key names.

    "},{"location":"using-keyring-plugin.html#kv-secret-engine-considerations-for-upgrading-from-57-to-80","title":"KV Secret Engine considerations for upgrading from 5.7 to 8.0","text":"

When you upgrade from Percona Server for MySQL 5.7.32 or older, you can only use KV Secrets Engine 1 (kv). You can upgrade to any version of Percona Server for MySQL 8.0. The old keyring_vault plugin and the new keyring_vault plugin work correctly with the existing Vault Server data under the existing keyring_vault plugin configuration file.

If you upgrade from Percona Server for MySQL 5.7.33 or newer, you have the following options:

• If you are using KV Secrets Engine 1 (kv), you can upgrade to any version of Percona Server for MySQL 8.0.

• If you are using KV Secrets Engine 2 (kv-v2), you must upgrade to Percona Server for MySQL 8.0.23 or newer. Percona Server for MySQL 8.0.23-14 is the first version of the 8.0 series with a keyring_vault plugin that supports kv-v2.

A user-created key can be deleted only with the keyring_udf plugin, which removes the key from both the in-memory hash map and the Vault server. You cannot delete system keys, such as the master key.

    This plugin supports the SQL interface for keyring key management described in the General-Purpose Keyring Key-Management Functions manual. The plugin library contains user-defined keyring functions allowing access to the internal keyring service functions. To enable the functions, you must enable the keyring_udf plugin:

    mysql> INSTALL PLUGIN keyring_udf SONAME 'keyring_udf.so';\n

    Note

    The keyring_udf plugin must be installed. Using the user-defined functions without the keyring_udf plugin generates an error.

    You must also create keyring encryption user-defined functions.
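
For example, the general-purpose keyring functions can be created as follows (a sketch based on the MySQL manual; the library suffix may differ on your platform):

mysql> CREATE FUNCTION keyring_key_generate RETURNS INTEGER SONAME 'keyring_udf.so';\nmysql> CREATE FUNCTION keyring_key_store RETURNS INTEGER SONAME 'keyring_udf.so';\nmysql> CREATE FUNCTION keyring_key_fetch RETURNS STRING SONAME 'keyring_udf.so';\nmysql> CREATE FUNCTION keyring_key_remove RETURNS INTEGER SONAME 'keyring_udf.so';\n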

    "},{"location":"using-keyring-plugin.html#use-the-keyring_file-component","title":"Use the keyring_file component","text":"

    See keyring component installation for information on installing the component.

    Warning

    The keyring_file component should not be used for regulatory compliance.

    See also

    MySQL Documentation: Using the keyring_file component

    "},{"location":"using-keyring-plugin.html#system-variables","title":"System variables","text":""},{"location":"using-keyring-plugin.html#keyring_vault_config","title":"keyring_vault_config","text":"Option Description Command-line \u2013keyring-vault-config Scope Global Dynamic Yes Data type Text Default

This variable defines the location of the keyring_vault plugin configuration file.

    "},{"location":"using-keyring-plugin.html#keyring_vault_timeout","title":"keyring_vault_timeout","text":"Option Description Command-line \u2013keyring-vault-timeout Scope Global Dynamic Yes Data type Numeric Default 15

Sets the duration in seconds for the Vault server connection timeout. The default value is 15. The allowed range is from 0 to 86400. Setting this variable to 0 disables the timeout, making the server wait an infinite amount of time.
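
Because the variable is dynamic with global scope, the timeout can also be adjusted at runtime. For example:

mysql> SET GLOBAL keyring_vault_timeout = 30;\n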

    "},{"location":"using-kmip.html","title":"Using the Key Management Interoperability Protocol (KMIP)","text":"

    As of Percona Server for MySQL 8.0.30-22, the Key Management Interoperability Protocol (KMIP) feature is Generally Available (GA).

    Percona Server for MySQL 8.0.27-18 adds support for the OASIS Key Management Interoperability Protocol (KMIP). This implementation was tested with the PyKMIP server and the HashiCorp Vault Enterprise KMIP Secrets Engine.

    KMIP enables communication between key management systems and the database server. The protocol can do the following:

    • Streamline encryption key management

    • Eliminate redundant key management processes

    "},{"location":"using-kmip.html#component-installation","title":"Component installation","text":"

The KMIP component must be installed with a manifest. A keyring component is not loaded with the --early-plugin-load option on the server. The server uses a manifest, and the component consults its configuration file during initialization. You should only load a keyring component with a manifest file. Do not use the INSTALL COMPONENT statement, which loads the keyring components too late in the startup sequence of the server: InnoDB requires the component, but because components are registered in the mysql.component table, this table is loaded only after InnoDB initialization.

You should create a global manifest file named mysqld.my in the installation directory and, optionally, create a local manifest file, also named mysqld.my, in a data directory.

    To install a keyring component, you must do the following:

    1. Write a manifest in a valid JSON format

    2. Write a configuration file

A manifest file indicates which component to load. If the manifest file does not exist, the server does not load the component associated with that file. During startup, the server reads the global manifest file from the installation directory. The global manifest file can contain the required information or point to a local manifest file located in the data directory. If you have multiple server instances that use different keyring components, use a local manifest file in each data directory to load the correct keyring component for that instance.

    Note

    Enable only one keyring plugin or one keyring component at a time for each server instance. Enabling multiple keyring plugins or keyring components or mixing keyring plugins or keyring components is not supported and may result in data loss.

    The following is an example of a global manifest file that does not use local manifests:

    {\n \"read_local_manifest\": false,\n \"components\": \"file://component_keyring_kmip\"\n}\n

    The following is an example of a global manifest file that points to a local manifest file:

    {\n \"read_local_manifest\": true\n}\n

    The following is an example of a local manifest file:

    {\n \"components\": \"file://component_keyring_kmip\"\n}\n

The configuration settings are either in a global configuration file or a local configuration file. The settings are the same. The following is a JSON example of a configuration file:

    {\n \"server_addr\": \"127.0.0.1\",\n \"server_port\": \"5696\",\n \"client_ca\": \"client_certificate.pem\",\n \"client_key\": \"client_key.pem\",\n \"server_ca\": \"root_certificate.pem\"\n}\n

For more information, see Keyring Component installation.

    "},{"location":"using-tokudb.html","title":"Use TokuDB","text":"

    Starting with Percona Server for MySQL 8.0.28-19 (2022-05-12), the TokuDB storage engine is no longer supported. For more information, see the TokuDB Introduction and TokuDB version changes.

    "},{"location":"using-tokudb.html#fast-insertions-and-richer-indexes","title":"Fast Insertions and Richer Indexes","text":"

    TokuDB\u2019s fast indexing enables fast queries through the use of rich indexes, such as covering and clustering indexes. It\u2019s worth investing some time to optimize index definitions to get the best performance from MySQL and TokuDB. Here are some resources to get you started:

    • \u201cUnderstanding Indexing\u201d by Zardosht Kasheff (video)

    • Rule of Thumb for Choosing Column Order in Indexes

    • Covering Indexes: Orders-of-Magnitude Improvements

    • Introducing Multiple Clustering Indexes

    • Clustering Indexes vs. Covering Indexes

    • How Clustering Indexes Sometimes Helps UPDATE and DELETE Performance

    • High Performance MySQL, 3rd Edition by Baron Schwartz, Peter Zaitsev, Vadim Tkachenko, Copyright 2012, O\u2019Reilly Media. See Chapter 5, Indexing for High Performance.

    "},{"location":"using-tokudb.html#clustering-secondary-indexes","title":"Clustering Secondary Indexes","text":"

    One of the keys to exploiting TokuDB\u2019s strength in indexing is to make use of clustering secondary indexes.

TokuDB allows a secondary key to be defined as a clustering key. This means that all of the columns in the table are clustered with the secondary key. The Percona Server for MySQL parser and query optimizer support multiple clustering keys when the TokuDB engine is used. This means that the query optimizer avoids primary clustered index reads and replaces them with secondary clustered index reads in certain scenarios.

The parser has been extended to support the following syntax:

    CREATE TABLE ... ( ..., CLUSTERING KEY identifier (column list), ...\nCREATE TABLE ... ( ..., UNIQUE CLUSTERING KEY identifier (column list), ...\nCREATE TABLE ... ( ..., CLUSTERING UNIQUE KEY identifier (column list), ...\nCREATE TABLE ... ( ..., CONSTRAINT identifier UNIQUE CLUSTERING KEY identifier (column list), ...\nCREATE TABLE ... ( ..., CONSTRAINT identifier CLUSTERING UNIQUE KEY identifier (column list), ...\n\nCREATE TABLE ... (... column type CLUSTERING [UNIQUE] [KEY], ...)\nCREATE TABLE ... (... column type [UNIQUE] CLUSTERING [KEY], ...)\n\nALTER TABLE ..., ADD CLUSTERING INDEX identifier (column list), ...\nALTER TABLE ..., ADD UNIQUE CLUSTERING INDEX identifier (column list), ...\nALTER TABLE ..., ADD CLUSTERING UNIQUE INDEX identifier (column list), ...\nALTER TABLE ..., ADD CONSTRAINT identifier UNIQUE CLUSTERING INDEX identifier (column list), ...\nALTER TABLE ..., ADD CONSTRAINT identifier CLUSTERING UNIQUE INDEX identifier (column list), ...\n\nCREATE CLUSTERING INDEX identifier ON ...\n

    To define a secondary index as clustering, simply add the word CLUSTERING before the key definition. For example:

    CREATE TABLE foo (\n  column_a INT,\n  column_b INT,\n  column_c INT,\n  PRIMARY KEY index_a (column_a),\n  CLUSTERING KEY index_b (column_b)) ENGINE = TokuDB;\n

    In the previous example, the primary table is indexed on column_a. Additionally, there is a secondary clustering index (named index_b) sorted on column_b. Unlike non-clustered indexes, clustering indexes include all the columns of a table and can be used as covering indexes. For example, the following query will run very fast using the clustering index_b:

    SELECT column_c\n  FROM foo\n  WHERE column_b BETWEEN 10 AND 100;\n

    This index is sorted on column_b, making the WHERE clause fast, and includes column_c, which avoids lookups in the primary table to satisfy the query.

    TokuDB makes clustering indexes feasible because of its excellent compression and very high indexing rates. For more information about using clustering indexes, see Introducing Multiple Clustering Indexes.
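
A clustering index can also be added to an existing table. For example, extending the foo table above (a sketch):

mysql> ALTER TABLE foo ADD CLUSTERING INDEX index_c (column_c);\n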

    "},{"location":"using-tokudb.html#hot-index-creation","title":"Hot Index Creation","text":"

    TokuDB enables you to add indexes to an existing table and still perform inserts and queries on that table while the index is being created.

    The ONLINE keyword is not used. Instead, the value of the tokudb_create_index_online client session variable is examined.

    Hot index creation is invoked using the CREATE INDEX command after setting tokudb_create_index_online to on as follows:

    mysql> SET tokudb_create_index_online=on;\nQuery OK, 0 rows affected (0.00 sec)\n\nmysql> CREATE INDEX index ON foo (field_name);\n

In contrast, using the ALTER TABLE command to create an index creates the index offline (with the table unavailable for inserts or queries), regardless of the value of tokudb_create_index_online. The only way to hot create an index is to use the CREATE INDEX command.

Hot creating an index is slower than creating the index offline, and progress depends on how busy the mysqld server is with other tasks. The progress of the index creation can be monitored with the SHOW PROCESSLIST command (in another client). Once the index creation completes, the new index is used in future query plans.

    If more than one hot CREATE INDEX is issued for a particular table, the indexes will be created serially. An index creation that is waiting for another to complete will be shown as Locked in SHOW PROCESSLIST. We recommend that each CREATE INDEX be allowed to complete before the next one is started.

    "},{"location":"using-tokudb.html#hot-column-add-delete-expand-and-rename-hcader","title":"Hot Column Add, Delete, Expand, and Rename (HCADER)","text":"

    TokuDB enables you to add or delete columns in an existing table, expand char, varchar, varbinary, and integer type columns in an existing table, or rename an existing column in a table with little blocking of other updates and queries. HCADER typically blocks other queries with a table lock for no more than a few seconds. After that initial short-term table locking, the system modifies each row (when adding, deleting, or expanding columns) later, when the row is next brought into main memory from disk. For column rename, all the work is done during the seconds of downtime. On-disk rows need not be modified.

    To get good performance from HCADER, observe the following guidelines:

    • The work of altering the table for column addition, deletion, or expansion is performed as subsequent operations touch parts of the Fractal Tree, both in the primary index and secondary indexes.

You can force the column addition, deletion, or expansion work to be performed all at once using the standard syntax of OPTIMIZE TABLE X, when a column has been added to, deleted from, or expanded in table X (see the example after this list). It is important to note that as of TokuDB version 7.1.0, OPTIMIZE TABLE is also hot, so a table supports updates and queries without blocking while an OPTIMIZE TABLE is being performed. Also, a hot OPTIMIZE TABLE does not rebuild the indexes, since TokuDB indexes do not age. Rather, it flushes all background work, such as that induced by a hot column addition, deletion, or expansion.

    • Each hot column addition, deletion, or expansion operation must be performed individually (with its own SQL statement). If you want to add, delete, or expand multiple columns use multiple statements.

    • Avoid adding, deleting, or expanding a column at the same time as adding or dropping an index.

    • The time that the table lock is held can vary. The table-locking time for HCADER is dominated by the time it takes to flush dirty pages, because MySQL closes the table after altering it. If a checkpoint has happened recently, this operation is fast (on the order of seconds). However, if the table has many dirty pages, then the flushing stage can take on the order of minutes.

    • Avoid dropping a column that is part of an index. If a column to be dropped is part of an index, then dropping that column is slow. To drop a column that is part of an index, first drop the indexes that reference the column in one alter table statement, and then drop the column in another statement.

• Hot column expansion operations are only supported for char, varchar, varbinary, and integer data types. Hot column expansion is not supported if the given column is part of the primary key or any secondary keys.

    • Rename only one column per statement. Renaming more than one column will revert to the standard MySQL blocking behavior. The proper syntax is as follows:

    ALTER TABLE table\n  CHANGE column_old column_new\n  DATA_TYPE REQUIRED_NESS DEFAULT\n

    Here\u2019s an example of how that might look:

    ALTER TABLE table\n  CHANGE column_old column_new\n  INT(10) NOT NULL;\n

    Notice that all of the column attributes must be specified. ALTER TABLE table CHANGE column_old column_new; induces a slow, blocking column rename.

    • Hot column rename does not support the following data types: TIME, ENUM, BLOB, TINYBLOB, MEDIUMBLOB, LONGBLOB. Renaming columns of these types will revert to the standard MySQL blocking behavior.

    • Temporary tables cannot take advantage of HCADER. Temporary tables are typically small anyway, so altering them using the standard method is usually fast.
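
For example, a hot column addition followed by an explicit flush of the resulting background work might look like this (a sketch with the hypothetical table foo):

mysql> ALTER TABLE foo ADD COLUMN column_d INT;\nmysql> OPTIMIZE TABLE foo;\n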

    "},{"location":"using-tokudb.html#compression-details","title":"Compression Details","text":"

TokuDB offers different levels of compression, which trade off between the amount of CPU used and the compression achieved. Standard compression uses less CPU but generally compresses at a lower level; high compression uses more CPU and generally compresses at a higher level. We have seen compression up to 25x on customer data.

    Compression in TokuDB occurs on background threads, which means that high compression need not slow down your database. Indeed, in some settings, we\u2019ve seen higher overall database performance with high compression.

    Note

    We recommend that users use standard compression on machines with six or fewer cores, and high compression on machines with more than six cores.

    The ultimate choice depends on the particulars of how a database is used, and we recommend that users use the default settings unless they have profiled their system with high compression in place.

The table is compressed using whichever row format is specified in the session variable tokudb_row_format. If no row format is specified and tokudb_row_format is not set, the QUICKLZ compression algorithm is used.
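
For example, to compress new TokuDB tables in the current session with the lzma library (a sketch; the accepted values are described below):

mysql> SET SESSION tokudb_row_format=tokudb_lzma;\nmysql> CREATE TABLE t_compressed (a INT) ENGINE=TokuDB;\n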

    The row_format and tokudb_row_format variables accept the following values:

• TOKUDB_DEFAULT - Sets the compression to the default behavior. As of TokuDB 7.1.0, the default behavior is to compress using the zlib library. In the future this behavior may change.

• TOKUDB_FAST - Sets the compression to use the quicklz library.

• TOKUDB_SMALL - Sets the compression to use the lzma library.

• TOKUDB_ZLIB - Compress using the zlib library, which provides mid-range compression and CPU utilization.

• TOKUDB_QUICKLZ - Compress using the quicklz library, which provides light compression and low CPU utilization.

• TOKUDB_LZMA - Compress using the lzma library, which provides the highest compression and high CPU utilization.

• TOKUDB_SNAPPY - Compress using the snappy library, which aims for very high speeds and reasonable compression.

• TOKUDB_UNCOMPRESSED - Turns off compression; useful for tables with data that cannot be compressed.

"},{"location":"using-tokudb.html#read-free-replication","title":"Read Free Replication","text":"

    TokuDB replicas can be configured to perform significantly less read IO in order to apply changes from the source. By utilizing the power of Fractal Tree indexes:

    • insert/update/delete operations can be configured to eliminate read-modify-write behavior and simply inject messages into the appropriate Fractal Tree indexes

    • update/delete operations can be configured to eliminate the IO required for uniqueness checking

    To enable Read Free Replication, the servers must be configured as follows:

    • On the replication source:

      • Enable row based replication: set BINLOG_FORMAT=ROW
    • On the replication replica(s):

      • The replica must be in read-only mode: set read_only=1

      • Disable unique checks: set tokudb_rpl_unique_checks=0

      • Disable lookups (read-modify-write): set tokudb_rpl_lookup_rows=0
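
A minimal replica configuration sketch (my.cnf) combining these settings:

[mysqld]\nread_only=1\ntokudb_rpl_unique_checks=0\ntokudb_rpl_lookup_rows=0\n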

    Note

    You can modify one or both behaviors on the replica(s).

    Note

    As long as the source is using row based replication, this optimization is available on a TokuDB replica. This means that it\u2019s available even if the source is using InnoDB or MyISAM tables, or running non-TokuDB binaries.

    Warning

TokuDB Read Free Replication will not propagate UPDATE and DELETE events reliably if the TokuDB table is missing a primary key, which will eventually lead to data inconsistency on the replica.

    "},{"location":"using-tokudb.html#transactions-and-acid-compliant-recovery","title":"Transactions and ACID-compliant Recovery","text":"

    By default, TokuDB checkpoints all open tables regularly and logs all changes between checkpoints, so that after a power failure or system crash, TokuDB will restore all tables into their fully ACID-compliant state. That is, all committed transactions will be reflected in the tables, and any transaction not committed at the time of failure will be rolled back.

    The default checkpoint period is every 60 seconds, and this specifies the time from the beginning of one checkpoint to the beginning of the next. If a checkpoint requires more than the defined checkpoint period to complete, the next checkpoint begins immediately. It is also related to the frequency with which log files are trimmed, as described below. The user can induce a checkpoint at any time by issuing the FLUSH LOGS command. When a database is shut down normally it is also checkpointed and all open transactions are aborted. The logs are trimmed at startup.

    "},{"location":"using-tokudb.html#managing-log-size","title":"Managing Log Size","text":"

    TokuDB keeps log files back to the most recent checkpoint. Whenever a log file reaches 100 MB, a new log file is started. Whenever there is a checkpoint, all log files older than the checkpoint are discarded. If the checkpoint period is set to be a very large number, logs will get trimmed less frequently. This value is set to 60 seconds by default.

    TokuDB also keeps rollback logs for each open transaction. The size of each log is proportional to the amount of work done by its transaction and is stored compressed on disk. Rollback logs are trimmed when the associated transaction completes.

    "},{"location":"using-tokudb.html#recovery","title":"Recovery","text":"

    Recovery is fully automatic with TokuDB. TokuDB uses both the log files and rollback logs to recover from a crash. The time to recover from a crash is proportional to the combined size of the log files and uncompressed size of rollback logs. Thus, if there were no long-standing transactions open at the time of the most recent checkpoint, recovery will take less than a minute.

    "},{"location":"using-tokudb.html#disabling-the-write-cache","title":"Disabling the Write Cache","text":"

When using any transaction-safe database, it is essential that you understand the write-caching characteristics of your hardware. TokuDB provides transaction-safe (ACID compliant) data storage for MySQL. However, if the underlying operating system or hardware does not actually write data to disk when it says it did, the system can corrupt your database when the machine crashes. For example, TokuDB cannot guarantee proper recovery if it is mounted on an NFS volume. It is always safe to disable the write cache, but you may be giving up some performance.

    For most configurations you must disable the write cache on your disk drives. On ATA/SATA drives, the following command should disable the write cache:

    $ hdparm -W0 /dev/hda\n

    There are some cases when you can keep the write cache, for example:

    • Write caching can remain enabled when using XFS, but only if XFS reports that disk write barriers work. If you see one of the following messages in /var/log/messages, then you must disable the write cache:

      • Disabling barriers, not supported with external log device

      • Disabling barriers, not supported by the underlying device

      • Disabling barriers, trial barrier write failed

    XFS write barriers appear to succeed for single disks (with no LVM), or for certain kernels such as that provided by Fedora 12.

    In the following cases, you must disable the write cache:

    • If you use the ext3 filesystem

    • If you use LVM (although recent Linux kernels, such as Fedora 12, have fixed this problem)

    • If you use Linux\u2019s software RAID

    • If you use a RAID controller with battery-backed-up memory. This may seem counter-intuitive.

    In summary, you should disable the write cache, unless you have a very specific reason not to do so.

    "},{"location":"using-tokudb.html#progress-tracking","title":"Progress Tracking","text":"

    TokuDB has a system for tracking progress of long running statements, thereby removing the need to define triggers to track statement execution, as follows:

    • Bulk Load: When loading large tables using LOAD DATA INFILE commands, doing a SHOW PROCESSLIST command in a separate client session shows progress. There are two progress stages. The first will state something like Inserted about 1000000 rows. After all rows are processed like this, the next stage tracks progress by showing what fraction of the work is done (e.g. Loading of data about 45% done)

• Adding Indexes: When adding indexes via ALTER TABLE or CREATE INDEX, the SHOW PROCESSLIST command shows progress, including an estimate of the number of rows processed. Use this information to verify that progress is being made. Similar to bulk loading, the first stage shows how many rows have been processed, and the second stage shows progress as a fraction.

    • Commits and Aborts: When committing or aborting a transaction, the command SHOW PROCESSLIST will include an estimate of the transactional operations processed.

    "},{"location":"using-tokudb.html#migrating-to-tokudb","title":"Migrating to TokuDB","text":"

    To convert an existing table to use the TokuDB engine, run ALTER TABLE... ENGINE=TokuDB. If you wish to load from a file, use LOAD DATA INFILE and not mysqldump. Using mysqldump will be much slower. To create a file that can be loaded with LOAD DATA INFILE, refer to the INTO OUTFILE option of the SELECT Syntax.
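
For example, a migration through an intermediate file might look like this (a sketch with the hypothetical table foo):

mysql> SELECT * FROM foo INTO OUTFILE '/tmp/foo.txt';\nmysql> CREATE TABLE foo_tokudb LIKE foo;\nmysql> ALTER TABLE foo_tokudb ENGINE=TokuDB;\nmysql> LOAD DATA INFILE '/tmp/foo.txt' INTO TABLE foo_tokudb;\n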

    Note

    Creating this file does not save the schema of your table, so you may want to create a copy of that as well.

    "},{"location":"utility-user.html","title":"Utility user","text":"

Percona Server for MySQL has implemented the ability to have a MySQL user who has system access to do administrative tasks but limited access to user schemas. This feature is especially useful to those operating MySQL as a Service.

    This user has a mixed and special scope of abilities and protection:

    • Utility user does not appear in the mysql.user table and can not be modified by any other user, including root.

    • Utility user does not appear in INFORMATION_SCHEMA.USER_STATISTICS, INFORMATION_SCHEMA.CLIENT_STATISTICS or THREAD_STATISTICS tables or in any performance_schema tables.

    • Utility user\u2019s queries may appear in the general and slow logs.

• Utility user does not have the ability to create, modify, delete, or see any schemas or data not specified, except for information_schema.

    • Utility user may modify all visible, non-read-only system variables (see expanded_option_modifiers functionality).

    • Utility user may see, create, modify and delete other system users only if given access to the mysql schema.

    • Regular users may be granted proxy rights to the utility user but attempts to impersonate the utility user fail. The utility user may not be granted proxy rights on any regular user.

For example, GRANT PROXY ON utility_user TO regular_user; does not fail, but any actual attempt to impersonate the utility user fails.

GRANT PROXY ON regular_user TO utility_user; fails when utility_user is an exact match or is more specific than the utility user specified.

    At server start, the server notes in the log output that the utility user exists and the schemas that the utility user can access.
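
A minimal my.cnf sketch combining the options described below (the user name, password, and schema names are placeholders):

[mysqld]\nutility_user=frank@localhost\nutility_user_password=Passw0rD\nutility_user_schema_access=schema1,schema2\nutility_user_privileges="CREATE,DROP,LOCK TABLES"\n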

    "},{"location":"utility-user.html#version-specific-information","title":"Version specific information","text":"
    • The utility_user_dynamic_privileges variable was implemented in Percona Server for MySQL 8.0.20-11.

    • Percona Server for MySQL 8.0.17-8: The feature was ported from Percona Server for MySQL 5.7.

    "},{"location":"utility-user.html#system-variables","title":"System variables","text":"

To support a special type of MySQL user, which has a very limited and special amount of control over the system and cannot be seen or modified by any other user, including the root user, the following options have been added.

    "},{"location":"utility-user.html#utility_user","title":"utility_user","text":"Option Description Command Line: Yes Config file utility_user=<user@host> Scope: Global Dynamic: No Data type String Default NULL

    Specifies a MySQL user that will be added to the internal list of users and recognized as the utility user.

Option utility_user specifies the user that the system creates and recognizes as the utility user. The host in the utility user specification follows conventions described in the MySQL manual; for example, the conventions allow wildcards and IP masks. Anonymous user names are not permitted for the utility user name.

This user must not be an exact match to any other user that exists in the mysql.user table. If the server detects that the user specified with this option exactly matches any user within the mysql.user table on startup, the server reports an error and exits gracefully.

If host name wildcards are used and a more specific user specification is identified on startup, the server reports a warning and continues.

    Error message
utility_user=frank@% and frank@localhost exists within the mysql.user table.\n

    If a client attempts to create a MySQL user that matches this user specification exactly or if host name wildcards are used for the utility user and the user being created has the same name and a more specific host, the creation attempt fails with an error.

    Error message
utility_user=frank@% and CREATE USER 'frank@localhost';\n

As a result of these requirements, it is strongly recommended that a unique user name and a reasonably specific host be used.

Scripts or tools should verify that they are running as the correct user by executing SELECT CURRENT_USER() and comparing the result against the known utility user.
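
For example, assuming the placeholder utility user frank@localhost:

mysql> SELECT CURRENT_USER();\n+-----------------+\n| CURRENT_USER()  |\n+-----------------+\n| frank@localhost |\n+-----------------+\n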

    "},{"location":"utility-user.html#utility_user_password","title":"utility_user_password","text":"Option Description Command Line: Yes Config file utility_user_password=password Scope: Global Dynamic: No Data type String Default NULL

    Specifies the password required for the utility user.

Option utility_user_password specifies the password for the utility user; it must be specified, or the server exits with an error.

    Utility user password
    utility_user_password=Passw0rD\n
    "},{"location":"utility-user.html#utility_user_schema_access","title":"utility_user_schema_access","text":"Option Description Command Line: Yes Config file utility_user_schema_access=schema,schema,schema Scope: Global Dynamic: No Data type String Default NULL

    Specifies the schemas that the utility user has access to in a comma delimited list.

Option utility_user_schema_access specifies the name(s) of the schema(s) that the utility user has access to read, write, and modify. If a particular schema named here does not exist on startup, it is ignored. If a schema by the name of any of those listed in this option is created after the server is started, the utility user has full access to it.

    Utility user schema access
    utility_user_schema_access=schema1,schema2,schema3\n
    "},{"location":"utility-user.html#utility_user_privileges","title":"utility_user_privileges","text":"Option Description Command Line: Yes Config file utility_user_privileges=privilege1,privilege2,privilege3 Scope: Global Dynamic: No Data type String Default NULL

    This variable can be used to specify a comma-separated list of extra access privileges to grant to the utility user. Supported values for the privileges list are: SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, GRANT, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE

    Option utility-user-privileges allows a comma-separated list of extra access privileges to grant to the utility user.

    Utility user privileges
utility-user-privileges="CREATE,DROP,LOCK TABLES"\n
    "},{"location":"utility-user.html#utility_user_dynamic_privileges","title":"utility_user_dynamic_privileges","text":"Option Description Command Line: Yes Config file utility_user_dynamic_privileges=privilege1,privilege2,privilege3 Scope: Global Dynamic: No Data type String Default NULL

    This variable was implemented in Percona Server for MySQL 8.0.20-11.

    This variable allows a comma-separated list of extra access dynamic privileges to grant to the utility user. The supported values for the dynamic privileges are:

    • APPLICATION_PASSWORD_ADMIN

    • AUDIT_ADMIN

    • BACKUP_ADMIN

    • BINLOG_ADMIN

    • BINLOG_ENCRYPTION_ADMIN

    • CLONE_ADMIN

    • CONNECTION_ADMIN

    • ENCRYPTION_KEY_ADMIN

    • FIREWALL_ADMIN

    • FIREWALL_USER

    • GROUP_REPLICATION_ADMIN

    • INNODB_REDO_LOG_ARCHIVE

    • NDB_STORED_USER

    • PERSIST_RO_VARIABLES_ADMIN

    • REPLICATION_APPLIER

    • REPLICATION_SLAVE_ADMIN

    • RESOURCE_GROUP_ADMIN

    • RESOURCE_GROUP_USER

    • ROLE_ADMIN

    • SESSION_VARIABLES_ADMIN

    • SET_USER_ID

    • SHOW_ROUTINE

    • SYSTEM_USER

    • SYSTEM_VARIABLES_ADMIN

    • TABLE_ENCRYPTION_ADMIN

    • VERSION_TOKEN_ADMIN

    • XA_RECOVER_ADMIN

    Other dynamic privileges may be defined by plugins.

Option utility_user_dynamic_privileges allows a comma-separated list of extra access dynamic privileges to grant to the utility user.

    Utility user dynamic privileges
utility_user_dynamic_privileges="SYSTEM_USER,AUDIT_ADMIN"\n
    "},{"location":"uuid-versions.html","title":"UUID_VX component","text":"

    A Universally Unique Identifier (UUID) is a 128-bit number used to identify information uniquely in computer systems. It is often represented as a 32-character hexadecimal string divided into five groups separated by hyphens.

• Global Uniqueness - UUIDs ensure that each identifier is unique across different databases and systems without needing a central authority to manage the IDs. This prevents ID conflicts when merging data from multiple sources.

• Decentralized Generation - Since UUIDs can be generated independently by different systems, there is no need for coordination. This is particularly useful in distributed environments where systems might not have constant communication with each other.

• Scalability - UUIDs support scalability in distributed databases. New records can be added without worrying about generating duplicate IDs, even when data is inserted concurrently across multiple nodes.

• Improved Data Merging - When data from various sources is combined, UUIDs prevent conflicts, making the merging process simpler and more reliable.

• Security - UUIDs, especially those generated randomly (like UUIDv4), are hard to predict, adding a layer of security when used as identifiers.

    The following table describes the UUID versions:

• Version 1 (Time-based) - Generated using the current time and a node identifier (usually the MAC address). Ensures uniqueness over time and across nodes.

• Version 2 (DCE Security) - Similar to version 1 but includes additional information such as POSIX UID/GID. Often used in environments requiring enhanced security.

• Version 3 (Name-based, MD5 hash) - Generated from a namespace identifier and a name (string). Uses the MD5 hashing algorithm to ensure the UUID is derived from the namespace and name.

• Version 4 (Random) - Generated using random numbers. Offers high uniqueness and is easy to generate without requiring specific inputs.

• Version 5 (Name-based, SHA-1 hash) - Similar to version 3 but uses the SHA-1 hashing algorithm. Provides a stronger hash function than MD5.

• Version 6 (Time-ordered) - A reordered version of UUIDv1 for better indexing and storage efficiency. Combines timestamp and random or unique data.

• Version 7 (Unix Epoch Time) - Combines a high-precision timestamp with random data. Provides unique, time-ordered UUIDs that are ideal for database indexing.

• Version 8 (Custom) - Reserved for user-defined purposes and experimental uses. Allows custom formats and structures according to specific requirements.

    UUID version 4 (UUIDv4) generates a unique identifier using random numbers. This randomness ensures a high level of uniqueness without needing a central authority to manage IDs. However, using UUIDv4 as a primary key in a distributed database is not recommended. The random nature of UUIDv4 leads to several issues:

• Inefficient Indexing - UUIDv4 does not follow any order, causing inefficient indexing. Databases struggle to keep records organized, leading to slower query performance.

• Fragmentation - The random distribution of UUIDv4 can cause data fragmentation, making database storage less efficient.

• Storage Space - UUIDs are larger (128 bits) than traditional integer keys, consuming more storage space and memory.

For better performance and efficiency in a distributed database, consider using UUIDv7, which incorporates a timestamp to provide ordering.

    UUID version 7 (UUIDv7) creates time-ordered identifiers by encoding a Unix timestamp with millisecond precision in the first 48 bits. It uses 6 bits to specify the UUID version and variant, while the remaining 74 bits are random. This time-ordering results in nearly sequential values, which helps improve index performance and locality in distributed systems.

    "},{"location":"uuid-versions.html#install-the-uuid_vx-component","title":"Install the UUID_VX component","text":"
    mysql> INSTALL COMPONENT 'file://component_uuid_vx_udf';\n
    Expected output
    Query OK, 0 rows affected (0.03 sec) \n
    "},{"location":"uuid-versions.html#character-sets-available","title":"Character sets available","text":"

    The following character sets are used in the component:

• ascii - Used everywhere UUID strings are returned by functions or accepted as function arguments.

• utf8mb4 - Used for string arguments in hash-based UUID generators, like the UUID_V3() and UUID_V5() functions.

• binary - Used for arguments in the BIN_TO_UUID_VX() function and for results from the UUID_VX_TO_BIN() function.

"},{"location":"uuid-versions.html#functions-available-in-uuid_vx","title":"Functions available in UUID_VX","text":"

    The following functions are compatible with all UUID versions:

• BIN_TO_UUID_VX() - One string argument that must be a hexadecimal string of exactly 32 characters (16 bytes). The function returns a UUID with binary data from the argument. It returns an error for all other inputs.

• IS_MAX_UUID_VX() - One string argument that represents a UUID in standard or hexadecimal form. The function returns true if the argument is a valid UUID and is a MAX UUID. It returns false for all other inputs. If the argument is NULL, it returns NULL. If the argument cannot be parsed as a UUID, the function throws an error.

• IS_NIL_UUID_VX() - One string argument representing a UUID in standard or hexadecimal form. The function returns true if the string is a NIL UUID. If the argument is NULL, it returns NULL. If the argument is not a valid UUID, it throws an error.

• IS_UUID_VX() - One string argument that represents a UUID in either standard or hexadecimal form. The function returns true if the argument is a valid UUID. If the argument is NULL, it returns NULL. For any other input, it returns false.

• MAX_UUID_VX() - No argument. The function generates a MAX UUID, which has all 128 bits set to one (FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF). The result is the opposite of the NIL UUID.

• NIL_UUID_VX() - No argument. The function generates a NIL UUID, which has all 128 bits set to zero (00000000-0000-0000-0000-000000000000).

• UUID_VX_TO_BIN() - One string argument, formatted as a UUID or in hexadecimal form. The function converts the string argument to its binary representation.

• UUID_VX_VARIANT() - One string argument that represents a UUID in either standard or hexadecimal format. The function returns the UUID variant, throws an error if the argument is not a valid UUID, or returns NULL if the input is NULL.

• UUID_VX_VERSION() - One string representing a UUID in standard or hexadecimal form. The function returns the version of the UUID (1-8). The function throws an error if the argument is not a valid UUID in formatted or hexadecimal form, or returns NULL if the argument is NULL. If the argument is a valid UUID string but has an unknown version value (outside of the 1-8 range), the function returns -1.

"},{"location":"uuid-versions.html#examples-of-functions-for-all-uuid-versions","title":"Examples of functions for all UUID versions","text":"
    mysql> SELECT is_uuid_vx('01900bf6-0eb0-715a-80f4-636367e07777');\n
    Expected output
    +----------------------------------------------------+\n| is_uuid_vx('01900bf6-0eb0-715a-80f4-636367e07777') |\n+----------------------------------------------------+\n|                                                  1 |\n+----------------------------------------------------+\n
    mysql> SELECT uuid_vx_version('01900bf6-0eb0-715a-80f4-636367e07777');\n
    Expected output
    +---------------------------------------------------------+\n| uuid_vx_version('01900bf6-0eb0-715a-80f4-636367e07777') |\n+---------------------------------------------------------+\n|                                                       7 |\n+---------------------------------------------------------+\n
     mysql> SELECT uuid_vx_variant('01900bf6-0eb0-715a-80f4-636367e07777');\n
    Expected output
    +---------------------------------------------------------+\n| uuid_vx_variant('01900bf6-0eb0-715a-80f4-636367e07777') |\n+---------------------------------------------------------+\n|                                                       1 |\n+---------------------------------------------------------+\n
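
The conversion functions can be exercised the same way. For instance, converting the same sample UUID to binary and back (a sketch; HEX() renders the 16-byte binary result):

mysql> SELECT hex(uuid_vx_to_bin('01900bf6-0eb0-715a-80f4-636367e07777'));\nmysql> SELECT bin_to_uuid_vx('01900bf60eb0715a80f4636367e07777');\n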
    "},{"location":"uuid-versions.html#uuid-generator-functions","title":"UUID generator functions","text":"

    The following functions generate specific UUID versions:

• UUID_V1() - No argument. Generates a version 1 UUID based on a timestamp. If possible, use UUID_V7() instead.

• UUID_V3() - One or two arguments: the first argument is a string that is hashed with MD5 and used in the UUID; the second argument is optional and specifies a namespace (integer values: DNS: 0, URL: 1, OID: 2, X.500: 3; default is 1, or URL). Generates a version 3 UUID based on a name. Note: MD5 is outdated and not secure. Use with caution and avoid exposing sensitive data.

• UUID_V4() - No argument. The function generates a version 4 UUID using random numbers and is similar to the built-in UUID() function.

• UUID_V5() - One or two arguments: the first argument is a string that is hashed with SHA1 and used in the UUID; the second argument is optional and specifies a namespace (integer values: DNS: 0, URL: 1, OID: 2, X.500: 3; default is 1, or URL). Generates a version 5 UUID based on a name. Note: SHA1 is better than MD5 but still not secure. Use with caution and avoid exposing sensitive data.

• UUID_V6() - No argument. Generates a version 6 UUID based on a timestamp. If possible, use UUID_V7() instead.

• UUID_V7() - Either no argument or one integer argument: the argument is the number of milliseconds to adjust the timestamp forward or backward (negative values). Generates a version 7 UUID based on a timestamp. If there is no argument, no timestamp shift occurs. A timestamp shift can hide the actual creation time of the record.

The UUID_V3() and UUID_V5() functions do not validate the string argument, such as whether the URL is formatted correctly or the DNS name is correct. These functions generate a string hash and then add that hash to a UUID with the defined namespace. The user specifies the string.

    "},{"location":"uuid-versions.html#uuid-generator-examples","title":"UUID generator examples","text":"

    UUID version 1:

    mysql> SELECT uuid_v1();\n
    Expected output
    +--------------------------------------+\n| uuid_v1()                            |\n+--------------------------------------+\n| 14c22f93-2962-11ef-9078-c3abf1c446bb |\n+--------------------------------------+\n

UUID version 3 with one argument uses the default UUID namespace, \u201cURL\u201d:

    mysql> SELECT uuid_v3('http://example.com');\n
    Expected output
    +--------------------------------------+\n| uuid_v3('http://example.com')        |\n+--------------------------------------+\n| d632b50c-7913-3137-ae9a-2d93f56e70d5 |\n+--------------------------------------+\n

UUID version 3 with two arguments, where the UUID namespace is explicitly set to \u201cURL\u201d:

    mysql> SELECT uuid_v3('http://example.com', 1);\n
    Expected output
+--------------------------------------+\n| uuid_v3('http://example.com', 1)     |\n+--------------------------------------+\n| d632b50c-7913-3137-ae9a-2d93f56e70d5 |\n+--------------------------------------+\n

UUID version 3 with two arguments, where the UUID namespace is explicitly set to \u201cDNS\u201d:

    mysql> SELECT uuid_v3('example.com',0);\n
    Expected output
    +--------------------------------------+\n| uuid_v3('example.com',0)             |\n+--------------------------------------+\n| 9073926b-929f-31c2-abc9-fad77ae3e8eb |\n+--------------------------------------+\n

    UUID version 4:

    mysql> SELECT uuid_v4();\n
    Expected output
    +--------------------------------------+\n| uuid_v4()                            |\n+--------------------------------------+\n| a408e4ad-9b98-4edb-a105-40f22648a928 |\n+--------------------------------------+\n

    UUID version 5:

    mysql> SELECT uuid_v5(\"http://example.com\");\n
    Expected output
    +--------------------------------------+\n| uuid_v5(\"http://example.com\")        |\n+--------------------------------------+\n| 8c9ddcb0-8084-5a7f-a988-1095ab18b5df |\n+--------------------------------------+\n

    UUID version 6:

    mysql> SELECT uuid_v6();\n
    Expected output
    +--------------------------------------+\n| uuid_v6()                            |\n+--------------------------------------+\n| 1ef29686-2168-64a7-b9a2-adb13f80f118 |\n+--------------------------------------+\n

    UUID version 7 generation:

mysql> SELECT uuid_v7();\n
    Expected output
    +--------------------------------------+\n| uuid_v7()                            |\n+--------------------------------------+\n| 019010f6-0426-70f0-80b0-b63decd3d7d1 |\n+--------------------------------------+\n1 row in set (0.00 sec)\n

UUID version 7 with the timestamp offset 84,000 seconds (84,000,000 milliseconds) into the future:

    mysql> SELECT uuid_v7(84000000);\n
    Expected output
    +--------------------------------------+\n| uuid_v7(84000000)                    |\n+--------------------------------------+\n| 019015f8-c7c4-70b4-8043-fe241c2be36c |\n+--------------------------------------+\n
    "},{"location":"uuid-versions.html#time-based-functions","title":"Time-based functions","text":"

    The following functions are used only with time-based UUIDs, specifically versions 1, 6, and 7.

• UUID_VX_TO_TIMESTAMP() - One string argument. Returns a timestamp string like \u201c2024-05-29 18:04:14.201\u201d. If the argument is not parsable as a UUID of version 1, 6, or 7, the function throws an error. The function always uses UTC time, regardless of system settings or time zone settings in MySQL.

• UUID_VX_TO_TIMESTAMP_TZ() - One string argument. Returns a timestamp string with the time zone, like \u201cWed May 29 18:05:07 2024 GMT\u201d. If the argument is not parsable as a UUID of version 1, 6, or 7, the function throws an error. The function always uses UTC time (GMT time zone), regardless of system settings or time zone settings in MySQL.

• UUID_VX_TO_UNIXTIME() - One string argument. Returns the number of milliseconds since the Epoch. If the argument is not parsable as a UUID of version 1, 6, or 7, the function throws an error.

"},{"location":"uuid-versions.html#timestamp-based-function-examples","title":"Timestamp-based function examples","text":"
    mysql> SELECT uuid_vx_to_timestamp('01900bf6-0eb0-715a-80f4-636367e07777');\n
    Expected output
    +--------------------------------------------------------------+\n| uuid_vx_to_timestamp('01900bf6-0eb0-715a-80f4-636367e07777') |\n+--------------------------------------------------------------+\n| 2024-06-12 10:19:53.392                                      |\n+--------------------------------------------------------------+\n1 row in set (0.00 sec)\n
    mysql> SELECT uuid_vx_to_timestamp_tz('01900bf6-0eb0-715a-80f4-636367e07777');\n
    Expected output
    +-----------------------------------------------------------------+\n| uuid_vx_to_timestamp_tz('01900bf6-0eb0-715a-80f4-636367e07777') |\n+-----------------------------------------------------------------+\n| Wed Jun 12 10:19:53 2024 GMT                                    |\n+-----------------------------------------------------------------+\n
    mysql> SELECT uuid_vx_to_unixtime('01900bf6-0eb0-715a-80f4-636367e07777');\n
    Expected output
    +-------------------------------------------------------------+\n| uuid_vx_to_unixtime('01900bf6-0eb0-715a-80f4-636367e07777') |\n+-------------------------------------------------------------+\n|                                               1718187593392 |\n+-------------------------------------------------------------+\n
    "},{"location":"uuid-versions.html#uninstall-the-uuid_vx-component","title":"Uninstall the UUID_VX component","text":"
    mysql> UNINSTALL COMPONENT 'file://component_uuid_vx_udf';\n
    Expected output
    Query OK, 0 rows affected (0.03 sec)\n
    "},{"location":"variables.html","title":"MyRocks server variables","text":"

The MyRocks server variables expose the configuration of the underlying RocksDB engine. There are several ways to set these variables:

    • For production deployments, you should have all variables defined in the configuration file.

    • Dynamic variables can be changed at runtime using the SET statement.

    • If you want to test things out, you can set some of the variables when starting mysqld using corresponding command-line options.

    If a variable was not set in either the configuration file or as a command-line option, the default value is used.
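
For example, a dynamic global variable, such as rocksdb_alter_column_default_inplace described below, can be changed at runtime (a sketch):

mysql> SET GLOBAL rocksdb_alter_column_default_inplace = OFF;\n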

    Also, all variables can exist in one or both of the following scopes:

    • Global scope defines how the variable affects overall server operation.

    • Session scope defines how the variable affects operation for individual client connections.

    Variable Name rocksdb_access_hint_on_compaction_start rocksdb_advise_random_on_open rocksdb_allow_concurrent_memtable_write rocksdb_allow_to_start_after_corruption rocksdb_allow_mmap_reads rocksdb_allow_mmap_writes rocksdb_allow_unsafe_alter rocksdb_alter_column_default_inplace rocksdb_alter_table_comment_inplace rocksdb_base_background_compactions rocksdb_blind_delete_primary_key rocksdb_block_cache_numshardbits rocksdb_block_cache_size rocksdb_bulk_load_fail_if_not_bottommost_level rocksdb_bulk_load_partial_index rocksdb_bulk_load_use_sst_partitioner rocksdb_block_restart_interval rocksdb_block_size rocksdb_block_size_deviation rocksdb_bulk_load rocksdb_bulk_load_allow_sk rocksdb_bulk_load_allow_unsorted rocksdb_bulk_load_size rocksdb_bytes_per_sync rocksdb_cache_dump rocksdb_cache_high_pri_pool_ratio rocksdb_cache_index_and_filter_blocks rocksdb_cache_index_and_filter_with_high_priority rocksdb_cancel_manual_compactions rocksdb_charge_memory rocksdb_check_iterate_bounds rocksdb_checksums_pct rocksdb_collect_sst_properties rocksdb_column_default_value_as_expression rocksdb_commit_in_the_middle rocksdb_commit_time_batch_for_recovery rocksdb_compact_cf rocksdb_compact_lzero_now rocksdb_compaction_readahead_size rocksdb_compaction_sequential_deletes rocksdb_compaction_sequential_deletes_count_sd rocksdb_compaction_sequential_deletes_file_size rocksdb_compaction_sequential_deletes_window rocksdb_concurrent_prepare rocksdb_converter_record_cached_length rocksdb_corrupt_data_action rocksdb_create_checkpoint rocksdb_create_if_missing rocksdb_create_missing_column_families rocksdb_create_temporary_checkpoint rocksdb_datadir rocksdb_db_write_buffer_size rocksdb_deadlock_detect rocksdb_deadlock_detect_depth rocksdb_debug_cardinality_multipler rocksdb_debug_manual_compaction_delay rocksdb_debug_optimizer_no_zero_cardinality rocksdb_debug_ttl_ignore_pk rocksdb_debug_ttl_read_filter_ts rocksdb_debug_ttl_rec_ts rocksdb_debug_ttl_snapshot_ts rocksdb_default_cf_options rocksdb_delayed_write_rate rocksdb_delete_cf rocksdb_delete_obsolete_files_period_micros rocksdb_disable_file_deletions rocksdb_disable_instant_ddl rocksdb_enable_bulk_load_api rocksdb_enable_delete_range_for_drop_index rocksdb_enable_insert_with_update_caching rocksdb_enable_iterate_bounds rocksdb_enable_pipelined_write rocksdb_enable_remove_orphaned_dropped_cfs rocksdb_enable_ttl rocksdb_enable_ttl_read_filtering rocksdb_enable_thread_tracking rocksdb_enable_write_thread_adaptive_yield rocksdb_error_if_exists rocksdb_error_on_suboptimal_collation rocksdb_file_checksums rocksdb_flush_log_at_trx_commit rocksdb_flush_memtable_on_analyze rocksdb_force_compute_memtable_stats rocksdb_force_compute_memtable_stats_cachetime rocksdb_force_flush_memtable_and_lzero_now rocksdb_force_flush_memtable_now rocksdb_force_index_records_in_range rocksdb_hash_index_allow_collision rocksdb_ignore_unknown_options rocksdb_index_type rocksdb_info_log_level rocksdb_is_fd_close_on_exec rocksdb_keep_log_file_num rocksdb_large_prefix rocksdb_lock_scanned_rows rocksdb_lock_wait_timeout rocksdb_log_file_time_to_roll rocksdb_manifest_preallocation_size rocksdb_manual_compaction_bottommost_level rocksdb_manual_compaction_threads rocksdb_manual_wal_flush rocksdb_master_skip_tx_api rocksdb_max_background_compactions rocksdb_max_background_flushes rocksdb_max_background_jobs rocksdb_max_bottom_pri_background_compactions rocksdb_max_compaction_history rocksdb_max_file_opening_threads rocksdb_max_latest_deadlocks rocksdb_max_log_file_size rocksdb_max_manifest_file_size 
rocksdb_max_manual_compactions rocksdb_max_open_files rocksdb_max_row_locks rocksdb_max_subcompactions rocksdb_max_total_wal_size rocksdb_merge_buf_size rocksdb_merge_combine_read_size rocksdb_merge_tmp_file_removal_delay_ms rocksdb_new_table_reader_for_compaction_inputs rocksdb_no_block_cache rocksdb_no_create_column_family rocksdb_override_cf_options rocksdb_paranoid_checks rocksdb_partial_index_ignore_killed rocksdb_partial_index_sort_max_mem rocksdb_pause_background_work rocksdb_partial_index_blind_delete rocksdb_perf_context_level rocksdb_persistent_cache_path rocksdb_persistent_cache_size_mb rocksdb_pin_l0_filter_and_index_blocks_in_cache rocksdb_print_snapshot_conflict_queries rocksdb_protection_bytes_per_key rocksdb_rate_limiter_bytes_per_sec rocksdb_read_free_rpl rocksdb_read_free_rpl_tables rocksdb_records_in_range rocksdb_reset_stats rocksdb_rollback_on_timeout rocksdb_rpl_skip_tx_api rocksdb_seconds_between_stat_computes rocksdb_signal_drop_index_thread rocksdb_sim_cache_size rocksdb_skip_bloom_filter_on_read rocksdb_skip_fill_cache rocksdb_skip_locks_if_skip_unique_check rocksdb_sst_mgr_rate_bytes_per_sec rocksdb_stats_dump_period_sec rocksdb_stats_level rocksdb_stats_recalc_rate rocksdb_store_row_debug_checksums rocksdb_strict_collation_check rocksdb_strict_collation_exceptions rocksdb_table_cache_numshardbits rocksdb_table_stats_background_thread_nice_value rocksdb_table_stats_max_num_rows_scanned rocksdb_table_stats_recalc_threshold_count rocksdb_table_stats_recalc_threshold_pct rocksdb_table_stats_sampling_pct rocksdb_table_stats_use_table_scan rocksdb_tmpdir rocksdb_two_write_queues rocksdb_trace_block_cache_access rocksdb_trace_queries rocksdb_trace_sst_api rocksdb_track_and_verify_wals_in_manifest rocksdb_unsafe_for_binlog rocksdb_update_cf_options rocksdb_use_adaptive_mutex rocksdb_use_default_sk_cf rocksdb_use_direct_io_for_flush_and_compaction rocksdb_use_direct_reads rocksdb_use_fsync rocksdb_use_hyper_clock_cache rocksdb_use_write_buffer_manager rocksdb_validate_tables rocksdb_verify_row_debug_checksums rocksdb_wal_bytes_per_sync rocksdb_wal_dir rocksdb_wal_recovery_mode rocksdb_wal_size_limit_mb rocksdb_wal_ttl_seconds rocksdb_whole_key_filtering rocksdb_write_batch_flush_threshold rocksdb_write_batch_max_bytes rocksdb_write_disable_wal rocksdb_write_ignore_missing_column_families rocksdb_write_policy"},{"location":"variables.html#rocksdb_access_hint_on_compaction_start","title":"rocksdb_access_hint_on_compaction_start","text":"Option Description Command-line \u2013rocksdb-access-hint-on-compaction-start Dynamic No Scope Global Data type String or numeric Default NORMAL or 1

    Specifies the file access pattern once a compaction is started, applied to all input files of a compaction. Possible values are:

    • 0 = NONE

    • 1 = NORMAL (default)

    • 2 = SEQUENTIAL

    • 3 = WILLNEED
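Since this variable is not dynamic, it can only be set at server startup. A minimal my.cnf sketch (the SEQUENTIAL hint here is an illustrative choice, not a recommendation):

[mysqld]
# Hint that compaction input files will be read sequentially
rocksdb-access-hint-on-compaction-start = SEQUENTIAL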

    "},{"location":"variables.html#rocksdb_advise_random_on_open","title":"rocksdb_advise_random_on_open","text":"Option Description Command-line \u2013rocksdb-advise-random-on-open Dynamic No Scope Global Data type Boolean Default ON

Specifies whether to hint to the underlying file system that the file access pattern is random when a data file is opened. Enabled by default.

    "},{"location":"variables.html#rocksdb_allow_concurrent_memtable_write","title":"rocksdb_allow_concurrent_memtable_write","text":"Option Description Command-line \u2013rocksdb-allow-concurrent-memtable-write Dynamic No Scope Global Data type Boolean Default OFF

Specifies whether to allow multiple writers to update memtables in parallel. Disabled by default.

Note

Not all memtables support concurrent writes.

    "},{"location":"variables.html#rocksdb_allow_to_start_after_corruption","title":"rocksdb_allow_to_start_after_corruption","text":"Option Description Command-line \u2013rocksdb_allow_to_start_after_corruption Dynamic No Scope Global Data type Boolean Default OFF

Specifies whether to allow the server to restart after MyRocks has reported data corruption. Disabled by default.

Once corruption is detected, the server writes a marker file (named ROCKSDB_CORRUPTED) in the data directory and aborts. If the marker file exists, mysqld exits on startup with an error message. Startup continues to fail until the problem is solved or until mysqld is started with this variable turned on in the command line.

    "},{"location":"variables.html#rocksdb_allow_mmap_reads","title":"rocksdb_allow_mmap_reads","text":"Option Description Command-line \u2013rocksdb-allow-mmap-reads Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether to allow the OS to map a data file into memory for reads. Disabled by default. If you enable this, make sure that rocksdb_use_direct_reads is disabled.

    "},{"location":"variables.html#rocksdb_allow_mmap_writes","title":"rocksdb_allow_mmap_writes","text":"Option Description Command-line \u2013rocksdb-allow-mmap-writes Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether to allow the OS to map a data file into memory for writes. Disabled by default.

    "},{"location":"variables.html#rocksdb_allow_unsafe_alter","title":"rocksdb_allow_unsafe_alter","text":"Option Description Command-line \u2013rocksdb-allow-unsafe-alter Dynamic No Scope Global Data type Boolean Default OFF

Enables crash-unsafe INPLACE ADD|DROP PARTITION operations.

    "},{"location":"variables.html#rocksdb_alter_column_default_inplace","title":"rocksdb_alter_column_default_inplace","text":"Option Description Command-line \u2013rocksdb-alter-column-default-inplace Dynamic Yes Scope Global Data type Boolean Default ON

    Allows an inplace alter for the ALTER COLUMN default operation.

    "},{"location":"variables.html#rocksdb_alter_table_comment_inplace","title":"rocksdb_alter_table_comment_inplace","text":"Option Description Command-line \u2013rocksdb_alter_table_comment_inplace Dynamic Yes Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

Allows changing a table comment in place with ALTER TABLE ... COMMENT.

    This variable is disabled (OFF) by default.

    "},{"location":"variables.html#rocksdb_base_background_compactions","title":"rocksdb_base_background_compactions","text":"Option Description Command-line \u2013rocksdb-base-background-compactions Dynamic No Scope Global Data type Numeric Default 1

    Specifies the suggested number of concurrent background compaction jobs, submitted to the default LOW priority thread pool in RocksDB. The default is 1. The allowed range of values is from -1 to 64. The maximum value depends on the rocksdb_max_background_compactions variable. This variable was replaced with rocksdb_max_background_jobs, which automatically decides how many threads to allocate toward flush/compaction.

    "},{"location":"variables.html#rocksdb_blind_delete_primary_key","title":"rocksdb_blind_delete_primary_key","text":"Option Description Command-line \u2013rocksdb-blind-delete-primary-key Dynamic Yes Scope Global, Session Data type Boolean Default OFF

The variable was implemented in Percona Server for MySQL 8.0.20-11. Skips verifying whether rows exist before executing deletes. The following conditions must be met:

    • The variable is enabled

• Only a single table is listed in the DELETE statement

    • The table has only a primary key with no secondary keys
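For example, a minimal sketch assuming a hypothetical table t1 whose only key is the primary key on id:

SET SESSION rocksdb_blind_delete_primary_key = ON;
-- Executed without first verifying that the row exists
DELETE FROM t1 WHERE id = 42;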

    "},{"location":"variables.html#rocksdb_block_cache_numshardbits","title":"rocksdb_block_cache_numshardbits","text":"Option Description Command-line \u2013rocksdb-block-cache-numshardbits Dynamic No Scope Global Data type Numeric Default -1

    This variable has been implemented in Percona Server for MySQL 8.0.36-28.

This variable specifies the number of shards (numShardBits) for the block cache in RocksDB. The cache is sharded into 2^numShardBits shards by the key hash.

    The default value is -1. The -1 value means that RocksDB automatically determines the number of shards for the block cache based on the cache capacity.

    The minimum value is -1 and the maximum value is 8.

    "},{"location":"variables.html#rocksdb_block_cache_size","title":"rocksdb_block_cache_size","text":"Option Description Command-line \u2013rocksdb-block-cache-size Dynamic No Scope Global Data type Numeric Default 536870912

    Specifies the size of the LRU block cache for RocksDB. This memory is reserved for the block cache, which is in addition to any filesystem caching that may occur.

    Minimum value is 1024, because that\u2019s the size of one block.

    Default value is 536870912.

    Maximum value is 9223372036854775807.

    "},{"location":"variables.html#rocksdb_bulk_load_fail_if_not_bottommost_level","title":"rocksdb_bulk_load_fail_if_not_bottommost_level","text":"Option Description Command-line \u2013rocksdb_bulk_load_fail_if_not_bottommost_level Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

When this variable is enabled, the bulk load fails if an SST file created during the bulk load cannot be placed at the bottommost level in RocksDB.

    This variable can be enabled or disabled only when the rocksdb_bulk_load is OFF.

    This variable is disabled (OFF) by default.

    Warning

Disabling rocksdb_bulk_load_fail_if_not_bottommost_level may cause a severe performance impact.

    "},{"location":"variables.html#rocksdb_block_restart_interval","title":"rocksdb_block_restart_interval","text":"Option Description Command-line \u2013rocksdb-block-restart-interval Dynamic No Scope Global Data type Numeric Default 16

    Specifies the number of keys for each set of delta encoded data. Default value is 16. Allowed range is from 1 to 2147483647.

    "},{"location":"variables.html#rocksdb_block_size","title":"rocksdb_block_size","text":"Option Description Command-line \u2013rocksdb-block-size Dynamic No Scope Global Data type Numeric Default 16 KB

    Specifies the size of the data block for reading RocksDB data files. The default value is 16 KB. The allowed range is from 1024 to 18446744073709551615 bytes.

    "},{"location":"variables.html#rocksdb_block_size_deviation","title":"rocksdb_block_size_deviation","text":"Option Description Command-line \u2013rocksdb-block-size-deviation Dynamic No Scope Global Data type Numeric Default 10

Specifies the threshold for free space allowed in a data block (see rocksdb_block_size). If there is less space remaining, close the block (and write to a new block). Default value is 10, meaning that the block is not closed until less than 10 percent of free space remains.

    Allowed range is from 1 to 2147483647.

    "},{"location":"variables.html#rocksdb_bulk_load_allow_sk","title":"rocksdb_bulk_load_allow_sk","text":"Option Description Command-line \u2013rocksdb-bulk-load-allow-sk Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Enabling this variable allows secondary keys to be added using the bulk loading feature. This variable can be enabled or disabled only when the rocksdb_bulk_load is OFF.

    "},{"location":"variables.html#rocksdb_bulk_load_allow_unsorted","title":"rocksdb_bulk_load_allow_unsorted","text":"Option Description Command-line \u2013rocksdb-bulk-load-allow-unsorted Dynamic Yes Scope Global, Session Data type Boolean Default OFF

By default, the bulk loader requires its input to be sorted in primary key order. If this variable is enabled, unsorted inputs are also allowed and are sorted by the bulk loader itself, at a performance penalty.

    "},{"location":"variables.html#rocksdb_bulk_load","title":"rocksdb_bulk_load","text":"Option Description Command-line \u2013rocksdb-bulk-load Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether to use bulk load: MyRocks will ignore checking keys for uniqueness or acquiring locks during transactions. Disabled by default. Enable this only if you are certain that there are no row conflicts, for example, when setting up a new MyRocks instance from a MySQL dump.

    When the rocksdb_bulk_load variable is enabled, it behaves as if the variable rocksdb_commit_in_the_middle is enabled, even if the variable rocksdb_commit_in_the_middle is disabled.
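A typical bulk-load session, sketched with a hypothetical table t1 and data already sorted in primary key order:

SET SESSION rocksdb_bulk_load = ON;
INSERT INTO t1 VALUES (1, 'a'), (2, 'b'), (3, 'c');  -- keys arrive in primary key order
SET SESSION rocksdb_bulk_load = OFF;  -- switching it back OFF finalizes the bulk load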

    "},{"location":"variables.html#rocksdb_bulk_load_partial_index","title":"rocksdb_bulk_load_partial_index","text":"Option Description Command-line \u2013rocksdb-bulk-load-partial-index Dynamic Yes Scope Local Data type Boolean Default ON

    The variable was implemented in Percona Server for MySQL 8.0.27-18. Materializes partial index during bulk load instead of leaving the index empty.

    "},{"location":"variables.html#rocksdb_bulk_load_use_sst_partitioner","title":"rocksdb_bulk_load_use_sst_partitioner","text":"Option Description Command-line \u2013rocksdb_bulk_load_use_sst_partitioner Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

If enabled, this variable uses an SST partitioner to split SST files so that bulk-load SST files can be ingested into the bottommost level.

    This variable is disabled (OFF) by default.

    "},{"location":"variables.html#rocksdb_bulk_load_size","title":"rocksdb_bulk_load_size","text":"Option Description Command-line \u2013rocksdb-bulk-load-size Dynamic Yes Scope Global, Session Data type Numeric Default 1000

    Specifies the number of keys to accumulate before committing them to the storage engine when bulk load is enabled (see rocksdb_bulk_load). Default value is 1000, which means that a batch can contain up to 1000 records before they are implicitly committed. Allowed range is from 1 to 1073741824.

    "},{"location":"variables.html#rocksdb_bytes_per_sync","title":"rocksdb_bytes_per_sync","text":"Option Description Command-line \u2013rocksdb-bytes-per-sync Dynamic Yes Scope Global Data type Numeric Default 0

Specifies how often the OS should sync files to disk as they are being written, asynchronously, in the background. This operation can be used to smooth out write I/O over time. Default value is 0, meaning that files are never synced. Allowed range is up to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_cache_dump","title":"rocksdb_cache_dump","text":"Option Description Command-line \u2013rocksdb-cache-dump Dynamic No Scope Global Data type Boolean Default ON

    The variable was implemented in Percona Server for MySQL 8.0.20-11. Includes RocksDB block cache content in core dump. This variable is enabled by default.

    "},{"location":"variables.html#rocksdb_cache_high_pri_pool_ratio","title":"rocksdb_cache_high_pri_pool_ratio","text":"Option Description Command-line \u2013rocksdb-cache-high-pri-pool-ratio Dynamic No Scope Global Data type Double Default 0.0

This variable specifies the ratio of the block cache capacity reserved for the high-priority pool. The default value and minimum value is 0.0. The maximum value is 1.0.

    "},{"location":"variables.html#rocksdb_cache_index_and_filter_blocks","title":"rocksdb_cache_index_and_filter_blocks","text":"Option Description Command-line \u2013rocksdb-cache-index-and-filter-blocks Dynamic No Scope Global Data type Boolean Default ON

    Specifies whether RocksDB should use the block cache for caching the index and bloomfilter data blocks from each data file. Enabled by default. If you disable this feature, RocksDB allocates additional memory to maintain these data blocks.

    "},{"location":"variables.html#rocksdb_cache_index_and_filter_with_high_priority","title":"rocksdb_cache_index_and_filter_with_high_priority","text":"Option Description Command-line \u2013rocksdb-cache-index-and-filter-with-high-priority Dynamic No Scope Global Data type Boolean Default ON

    Specifies whether RocksDB should use the block cache with high priority for caching the index and bloomfilter data blocks from each data file. Enabled by default. If you disable this feature, RocksDB allocates additional memory to maintain these data blocks.

    "},{"location":"variables.html#rocksdb_cancel_manual_compactions","title":"rocksdb_cancel_manual_compactions","text":"Option Description Command-line \u2013rocksdb-cancel-manual-compactions Dynamic Yes Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.27-18. Cancels all ongoing manual compactions.

    "},{"location":"variables.html#rocksdb_charge_memory","title":"rocksdb_charge_memory","text":"Option Description Command-line \u2013rocksdb_charge_memory Dynamic No Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

This variable is a tech preview and may be removed in future releases.

Turns on the RocksDB memory-charging features (BlockBasedTableOptions::cache_usage_options.options.charged) from configuration (cnf) files. This variable is related to rocksdb_use_write_buffer_manager.

    This variable is disabled (OFF) by default.

    "},{"location":"variables.html#rocksdb_check_iterate_bounds","title":"rocksdb_check_iterate_bounds","text":"Option Description Command-line \u2013rocksdb-check-iterate-bounds Dynamic Yes Scope Global, Session Data type Boolean Default ON

    This variable has been implemented in Percona Server for MySQL 8.0.36-28.

This variable enables checking the upper and lower bounds of the RocksDB iterator during iteration. The default value is ON, which means this variable is enabled.

    "},{"location":"variables.html#rocksdb_checksums_pct","title":"rocksdb_checksums_pct","text":"Option Description Command-line \u2013rocksdb-checksums-pct Dynamic Yes Scope Global, Session Data type Numeric Default 100

    Specifies the percentage of rows to be checksummed. Default value is 100 (checksum all rows). Allowed range is from 0 to 100.

    "},{"location":"variables.html#rocksdb_collect_sst_properties","title":"rocksdb_collect_sst_properties","text":"Option Description Command-line \u2013rocksdb-collect-sst-properties Dynamic No Scope Global Data type Boolean Default ON

    Specifies whether to collect statistics on each data file to improve optimizer behavior. Enabled by default.

    "},{"location":"variables.html#rocksdb_column_default_value_as_expression","title":"rocksdb_column_default_value_as_expression","text":"Option Description Command-line \u2013rocksdb_column_default_value_as_expression Dynamic Yes Scope Global Data type Boolean Default ON

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

Allows setting a function as the default value for a column.

    This variable is enabled (ON) by default.

    "},{"location":"variables.html#rocksdb_commit_in_the_middle","title":"rocksdb_commit_in_the_middle","text":"Option Description Command-line \u2013rocksdb-commit-in-the-middle Dynamic Yes Scope Global Data type Boolean Default OFF

    Specifies whether to commit rows implicitly when a batch contains more than the value of rocksdb_bulk_load_size.

    This option should only be enabled at the time of data import because it may cause locking errors.

    This variable is disabled by default. When the rocksdb_bulk_load variable is enabled, it behaves as if the variable rocksdb_commit_in_the_middle is enabled, even if the variable rocksdb_commit_in_the_middle is disabled.

    "},{"location":"variables.html#rocksdb_commit_time_batch_for_recovery","title":"rocksdb_commit_time_batch_for_recovery","text":"Option Description Command-line \u2013rocksdb-commit-time-batch-for-recovery Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether to write the commit time write batch into the database or not.

    Note

    If the commit time write batch is only useful for recovery, then writing to WAL is enough.

    "},{"location":"variables.html#rocksdb_compact_cf","title":"rocksdb_compact_cf","text":"Option Description Command-line \u2013rocksdb-compact-cf Dynamic Yes Scope Global Data type String Default

    Specifies the name of the column family to compact.
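For example, to trigger a manual compaction of the built-in default column family:

SET GLOBAL rocksdb_compact_cf = 'default';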

    "},{"location":"variables.html#rocksdb_compact_lzero_now","title":"rocksdb_compact_lzero_now","text":"Option Description Command-line \u2013rocksdb-compact-lzero-now Dynamic Yes Scope Global Data type Boolean Default OFF

    This variable has been implemented in Percona Server for MySQL 8.0.36-28.

    This variable acts as a trigger. Set the variable to ON, rocksdb-compact-lzero-now=ON, to immediately compact all the Level 0 (L0) files. After all the L0 files are compacted, the variable value automatically switches to OFF.
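For example:

SET GLOBAL rocksdb_compact_lzero_now = ON;  -- compacts all L0 files, then reverts to OFF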

    "},{"location":"variables.html#rocksdb_compaction_readahead_size","title":"rocksdb_compaction_readahead_size","text":"Option Description Command-line \u2013rocksdb-compaction-readahead-size Dynamic Yes Scope Global Data type Numeric Default 0

Specifies the size of reads to perform ahead of compaction. Default value is 0. Set this to at least 2 megabytes (2097152) when using MyRocks with spinning disks to ensure sequential reads instead of random reads. Maximum allowed value is 18446744073709551615.

    Note

    If you set this variable to a non-zero value, rocksdb_new_table_reader_for_compaction_inputs is enabled.

    "},{"location":"variables.html#rocksdb_compaction_sequential_deletes","title":"rocksdb_compaction_sequential_deletes","text":"Option Description Command-line \u2013rocksdb-compaction-sequential-deletes Dynamic Yes Scope Global Data type Numeric Default 149999

    Note

    In version Percona Server for MySQL 8.0.36-28 and later, the default value is changed from 0 to 149999.

    Specifies the threshold to trigger compaction on a file if it has more than this number of sequential delete markers.

    The default value is 149999.

    Maximum allowed value is 2000000 (two million delete markers).

    Note

    Depending on workload patterns, MyRocks can potentially maintain large numbers of delete markers, which increases latency of queries. This compaction feature will reduce latency, but may also increase the MyRocks write rate. Use this variable together with rocksdb_compaction_sequential_deletes_file_size to only perform compaction on large files.

    "},{"location":"variables.html#rocksdb_compaction_sequential_deletes_count_sd","title":"rocksdb_compaction_sequential_deletes_count_sd","text":"Option Description Command-line \u2013rocksdb-compaction-sequential-deletes-count-sd Dynamic Yes Scope Global Data type Boolean Default ON

    Note

    In version Percona Server for MySQL 8.0.36-28 and later, the default value is changed from OFF to ON.

    Specifies whether to count single deletes as delete markers recognized by rocksdb_compaction_sequential_deletes.

The default value is ON, which means the variable is enabled.

    "},{"location":"variables.html#rocksdb_compaction_sequential_deletes_file_size","title":"rocksdb_compaction_sequential_deletes_file_size","text":"Option Description Command-line \u2013rocksdb-compaction-sequential-deletes-file-size Dynamic Yes Scope Global Data type Numeric Default 0

    Specifies the minimum file size required to trigger compaction on it by rocksdb_compaction_sequential_deletes. Default value is 0, meaning that compaction is triggered regardless of file size. Allowed range is from -1 to 9223372036854775807.

    "},{"location":"variables.html#rocksdb_compaction_sequential_deletes_window","title":"rocksdb_compaction_sequential_deletes_window","text":"Option Description Command-line \u2013rocksdb-compaction-sequential-deletes-window Dynamic Yes Scope Global Data type Numeric Default 150000

    Note

    In version Percona Server for MySQL 8.0.36-28 and later, the default value is changed from 0 to 150000.

    Specifies the size of the window for counting delete markers by rocksdb_compaction_sequential_deletes. Default value is 150000. Allowed range is up to 2000000 (two million).

    "},{"location":"variables.html#rocksdb_concurrent_prepare","title":"rocksdb_concurrent_prepare","text":"Option Description Command-line \u2013rocksdb-concurrent_prepare Dynamic No Scope Global Data type Boolean Default ON

When enabled, this variable allows and encourages threads that are using two-phase commit to prepare in parallel. This variable was renamed in upstream MyRocks to rocksdb_two_write_queues.

    "},{"location":"variables.html#rocksdb_corrupt_data_action","title":"rocksdb_corrupt_data_action","text":"Option Description Command-line \u2013rocksdb_corrupt_data_action Dynamic Yes Scope Global Data type enum { ERROR = 0, ABORT_SERVER, WARNING }; Default ERROR

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

    This variable controls the behavior when hitting the data corruption in MyRocks.

    You can select one of the following actions:

    • ERROR - fail the query with the error HA_ERR_ROCKSDB_CORRUPT_DATA

    • ABORT_SERVER - crash the server

    • WARNING - pass the query with warning

The default value is ERROR, which means the query fails with the error HA_ERR_ROCKSDB_CORRUPT_DATA.

    "},{"location":"variables.html#rocksdb_converter_record_cached_length","title":"rocksdb_converter_record_cached_length","text":"Option Description Command-line \u2013rocksdb_converter_record_cached_length Dynamic Yes Scope Global Data type Numeric Default 0

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

    Specifies the maximum number of bytes to cache on table handler for encoding table record data.

    If the used memory exceeds rocksdb_converter_record_cached_length, the memory is released when the handler is returned to the table handler cache.

The minimum value and the default value are 0 (zero), which means there is no limit. The maximum value is UINT64_MAX (0xffffffffffffffff).

    "},{"location":"variables.html#rocksdb_create_checkpoint","title":"rocksdb_create_checkpoint","text":"Option Description Command-line \u2013rocksdb-create-checkpoint Dynamic Yes Scope Global Data type String Default

    Specifies the directory where MyRocks should create a checkpoint. Empty by default.
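For example, with a hypothetical target path:

SET GLOBAL rocksdb_create_checkpoint = '/data/backups/myrocks-checkpoint';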

    "},{"location":"variables.html#rocksdb_create_if_missing","title":"rocksdb_create_if_missing","text":"Option Description Command-line \u2013rocksdb-create-if-missing Dynamic No Scope Global Data type Boolean Default ON

    Specifies whether MyRocks should create its database if it does not exist. Enabled by default.

    "},{"location":"variables.html#rocksdb_create_missing_column_families","title":"rocksdb_create_missing_column_families","text":"Option Description Command-line \u2013rocksdb-create-missing-column-families Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether MyRocks should create new column families if they do not exist. Disabled by default.

    "},{"location":"variables.html#rocksdb_create_temporary_checkpoint","title":"rocksdb_create_temporary_checkpoint","text":"Option Description Command-line \u2013rocksdb-create-temporary-checkpoint Dynamic Yes Scope Session Data type String

This variable has been implemented in Percona Server for MySQL 8.0.15-6. When set, it creates a temporary RocksDB \u2018checkpoint\u2019 or \u2018snapshot\u2019 in the datadir. If the session ends with an existing checkpoint, or if the variable is reset to another value, the checkpoint is removed. This variable should be used by backup tools. Prolonged use or other misuse can have serious side effects on the server instance.

    "},{"location":"variables.html#rocksdb_datadir","title":"rocksdb_datadir","text":"Option Description Command-line \u2013rocksdb-datadir Dynamic No Scope Global Data type String Default ./.rocksdb

    Specifies the location of the MyRocks data directory. By default, it is created in the current working directory.

    "},{"location":"variables.html#rocksdb_db_write_buffer_size","title":"rocksdb_db_write_buffer_size","text":"Option Description Command-line \u2013rocksdb-db-write-buffer-size Dynamic No Scope Global Data type Numeric Default 0

    Specifies the maximum size of all memtables used to store writes in MyRocks across all column families. When this size is reached, the data is flushed to persistent media. The default value is 0. The allowed range is up to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_deadlock_detect","title":"rocksdb_deadlock_detect","text":"Option Description Command-line \u2013rocksdb-deadlock-detect Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether MyRocks should detect deadlocks. Disabled by default.

    "},{"location":"variables.html#rocksdb_deadlock_detect_depth","title":"rocksdb_deadlock_detect_depth","text":"Option Description Command-line \u2013rocksdb-deadlock-detect-depth Dynamic Yes Scope Global, Session Data type Numeric Default 50

    Specifies the number of transactions deadlock detection will traverse through before assuming deadlock.

    "},{"location":"variables.html#rocksdb_debug_cardinality_multiplier","title":"rocksdb_debug_cardinality_multiplier","text":"Option Description Command-line \u2013rocksdb-debug-cardinality-multiplier Dynamic Yes Scope Global Data type UINT Default 2

The cardinality multiplier used in tests. The minimum value is 0. The maximum value is 2147483647 (INT_MAX).

    "},{"location":"variables.html#rocksdb_debug_manual_compaction_delay","title":"rocksdb_debug_manual_compaction_delay","text":"Option Description Command-line \u2013rocksdb-debug-manual-compaction-delay Dynamic Yes Scope Global Data type UINT Default 0

    Only use this variable when debugging.

This variable specifies a sleep, in seconds, to simulate long-running compactions. The minimum value is 0. The maximum value is 4294967295 (UINT_MAX).

    "},{"location":"variables.html#rocksdb_debug_optimizer_no_zero_cardinality","title":"rocksdb_debug_optimizer_no_zero_cardinality","text":"Option Description Command-line \u2013rocksdb-debug-optimizer-no-zero-cardinality Dynamic Yes Scope Global Data type Boolean Default ON

    Specifies whether MyRocks should prevent zero cardinality by always overriding it with some value.

    "},{"location":"variables.html#rocksdb_debug_ttl_ignore_pk","title":"rocksdb_debug_ttl_ignore_pk","text":"Option Description Command-line \u2013rocksdb-debug-ttl-ignore-pk Dynamic Yes Scope Global Data type Boolean Default OFF

    For debugging purposes only. If true, compaction filtering will not occur on Primary Key TTL data. This variable is a no-op in non-debug builds.

    "},{"location":"variables.html#rocksdb_debug_ttl_read_filter_ts","title":"rocksdb_debug_ttl_read_filter_ts","text":"Option Description Command-line \u2013rocksdb_debug-ttl-read-filter-ts Dynamic Yes Scope Global Data type Numeric Default 0

    For debugging purposes only. Overrides the TTL read filtering time to time + debug_ttl_read_filter_ts. A value of 0 denotes that the variable is not set. This variable is a no-op in non-debug builds.

    "},{"location":"variables.html#rocksdb_debug_ttl_rec_ts","title":"rocksdb_debug_ttl_rec_ts","text":"Option Description Command-line \u2013rocksdb-debug-ttl-rec-ts Dynamic Yes Scope Global Data type Numeric Default 0

For debugging purposes only. Overrides the TTL of records to now() + debug_ttl_rec_ts. The value can be positive or negative to simulate a record inserted in the past or in the future. A value of 0 denotes that the variable is not set. This variable is a no-op in non-debug builds.

    "},{"location":"variables.html#rocksdb_debug_ttl_snapshot_ts","title":"rocksdb_debug_ttl_snapshot_ts","text":"Option Description Command-line \u2013rocksdb-debug-ttl-snapshot-ts Dynamic Yes Scope Global Data type Numeric Default 0

For debugging purposes only. Sets the snapshot during compaction to now() + debug_ttl_snapshot_ts.

The value can be positive or negative to simulate a snapshot taken in the past or in the future. A value of 0 denotes that the variable is not set. This variable is a no-op in non-debug builds.

    "},{"location":"variables.html#rocksdb_default_cf_options","title":"rocksdb_default_cf_options","text":"Option Description Command-line \u2013rocksdb-default-cf-options Dynamic No Scope Global Data type String

The default value is:

block_based_table_factory= {cache_index_and_filter_blocks=1;filter_policy=bloomfilter:10:false;whole_key_filtering=1};level_compaction_dynamic_level_bytes=true;optimize_filters_for_hits=true;compaction_pri=kMinOverlappingRatio;compression=kLZ4Compression;bottommost_compression=kLZ4Compression;

    Specifies the default column family options for MyRocks. On startup, the server applies this option to all existing column families. This option is read-only at runtime.

    "},{"location":"variables.html#rocksdb_delayed_write_rate","title":"rocksdb_delayed_write_rate","text":"Option Description Command-line \u2013rocksdb-delayed-write-rate Dynamic Yes Scope Global Data type Numeric Default 16777216

    Specifies the write rate in bytes per second, which should be used if MyRocks hits a soft limit or threshold for writes. Default value is 16777216 (16 MB/sec). Allowed range is from 0 to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_delete_cf","title":"rocksdb_delete_cf","text":"Option Description Command-line \u2013rocksdb-delete-cf Dynamic Yes Scope Global Data type String Default \u201c\u201d

The variable was implemented in Percona Server for MySQL 8.0.20-11. Deletes the column family by name. The default value is \u201c\u201d, an empty string.

    For example:

SET @@global.ROCKSDB_DELETE_CF = 'cf_primary_key';
    "},{"location":"variables.html#rocksdb_delete_obsolete_files_period_micros","title":"rocksdb_delete_obsolete_files_period_micros","text":"Option Description Command-line \u2013rocksdb-delete-obsolete-files-period-micros Dynamic No Scope Global Data type Numeric Default 21600000000

    Specifies the period in microseconds to delete obsolete files regardless of files removed during compaction. Default value is 21600000000 (6 hours). Allowed range is up to 9223372036854775807.

    "},{"location":"variables.html#rocksdb_disable_file_deletions","title":"rocksdb_disable_file_deletions","text":"Option Description Command-line \u2013rocksdb-disable-file-deletions Dynamic Yes Scope Session Data type Boolean Default OFF

This variable has been implemented in Percona Server for MySQL 8.0.15-6. It allows a client to temporarily disable RocksDB deletion of old WAL and .sst files for the purposes of making a consistent backup. If the client session terminates for any reason after disabling deletions without re-enabling them, deletions are explicitly re-enabled. This variable should be used by backup tools. Prolonged use or other misuse can have serious side effects on the server instance.

    "},{"location":"variables.html#rocksdb_disable_instant_ddl","title":"rocksdb_disable_instant_ddl","text":"Option Description Command-line \u2013rocksdb_disable_instant_ddl Dynamic Yes Scope Global Data type Boolean Default ON

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

    Disables instant DDL during ALTER TABLE operations.

    This variable is enabled (ON) by default.

    "},{"location":"variables.html#rocksdb_enable_bulk_load_api","title":"rocksdb_enable_bulk_load_api","text":"Option Description Command-line \u2013rocksdb-enable-bulk-load-api Dynamic No Scope Global Data type Boolean Default ON

Specifies whether to use the SSTFileWriter feature for bulk loading. This feature bypasses the memtable but requires keys to be inserted into the table in either ascending or descending order. Enabled by default. If disabled, bulk loading uses the normal write path via the memtable and does not require keys to be inserted in any order.

    "},{"location":"variables.html#rocksdb_enable_delete_range_for_drop_index","title":"rocksdb_enable_delete_range_for_drop_index","text":"Option Description Command-line \u2013rocksdb_enable_delete_range_for_drop_index Dynamic Yes Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

Enables dropping a table or index by calling DeleteRange.

    This option is disabled (OFF) by default.

    "},{"location":"variables.html#rocksdb_enable_insert_with_update_caching","title":"rocksdb_enable_insert_with_update_caching","text":"Option Description Command-line \u2013rocksdb-enable-insert-with-update-caching Dynamic Yes Scope Global Data type Boolean Default ON

The variable was implemented in Percona Server for MySQL 8.0.20-11. Specifies whether to enable the optimization in which the read from a failed insertion attempt is cached and reused for the update in INSERT ... ON DUPLICATE KEY UPDATE.

    "},{"location":"variables.html#rocksdb_enable_iterate_bounds","title":"rocksdb_enable_iterate_bounds","text":"Option Description Command-line \u2013rocksdb-enable-iterate-bounds Dynamic Yes Scope Global, Local Data type Boolean Default ON

The variable was implemented in Percona Server for MySQL 8.0.20-11. Enables the RocksDB iterator upper and lower bounds in read options.

    "},{"location":"variables.html#rocksdb_enable_pipelined_write","title":"rocksdb_enable_pipelined_write","text":"Option Description Command-line \u2013rocksdb-enable-pipelined-write Dynamic No Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.25-15.

    DBOptions::enable_pipelined_write for RocksDB.

    If enable_pipelined_write is ON, a separate write thread is maintained for WAL write and memtable write. A write thread first enters the WAL writer queue and then the memtable writer queue. A pending thread on the WAL writer queue only waits for the previous WAL write operations but does not wait for memtable write operations. Enabling the feature may improve write throughput and reduce latency of the prepare phase of a two-phase commit.

    "},{"location":"variables.html#rocksdb_enable_remove_orphaned_dropped_cfs","title":"rocksdb_enable_remove_orphaned_dropped_cfs","text":"Option Description Command-line \u2013rocksdb-enable-remove-orphaned-dropped-cfs Dynamic Yes Scope Global Data type Boolean Default ON

    The variable was implemented in Percona Server for MySQL 8.0.20-11. Enables the removal of dropped column families (cfs) from metadata if the cfs do not exist in the cf manager.

    The default value is ON.

    "},{"location":"variables.html#rocksdb_enable_ttl","title":"rocksdb_enable_ttl","text":"Option Description Command-line \u2013rocksdb-enable-ttl Dynamic No Scope Global Data type Boolean Default ON

    By default, this variable removes expired Time-to-Live (TTL) records during compaction. TTL records are entries in the database that are automatically set to expire after a specified period of time and can be deleted during the compaction process.

    Set this variable to OFF to keep expired TTL records during compaction.

    "},{"location":"variables.html#rocksdb_enable_ttl_read_filtering","title":"rocksdb_enable_ttl_read_filtering","text":"Option Description Command-line \u2013rocksdb-enable-ttl-read-filtering Dynamic Yes Scope Global Data type Boolean Default ON

    For tables with TTL, expired records are skipped/filtered out during processing and in query results.

    Disabling this option allows these records to be seen, but the rows may disappear during transactions since they are deleted during compaction. Use with caution.

    "},{"location":"variables.html#rocksdb_enable_thread_tracking","title":"rocksdb_enable_thread_tracking","text":"Option Description Command-line \u2013rocksdb-enable-thread-tracking Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether to enable tracking the status of threads accessing the database. Disabled by default. If enabled, thread status will be available via GetThreadList().

    "},{"location":"variables.html#rocksdb_enable_write_thread_adaptive_yield","title":"rocksdb_enable_write_thread_adaptive_yield","text":"Option Description Command-line \u2013rocksdb-enable-write-thread-adaptive-yield Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether the MyRocks write batch group leader should wait up to the maximum allowed time before blocking on a mutex. Disabled by default. Enable it to increase throughput for concurrent workloads.

    "},{"location":"variables.html#rocksdb_error_if_exists","title":"rocksdb_error_if_exists","text":"Option Description Command-line \u2013rocksdb-error-if-exists Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether to report an error when a database already exists. Disabled by default.

    "},{"location":"variables.html#rocksdb_error_on_suboptimal_collation","title":"rocksdb_error_on_suboptimal_collation","text":"Option Description Command-line \u2013rocksdb-error-on-suboptimal-collation Dynamic No Scope Global Data type Boolean Default ON

    Specifies whether to report an error instead of a warning if an index is created on a char field where the table has a sub-optimal collation (case insensitive). Enabled by default.

    "},{"location":"variables.html#rocksdb_file_checksums","title":"rocksdb_file_checksums","text":"Option Description Command-line \u2013rocksdb-file-checksums Dynamic No Scope Global Data type Boolean Default OFF

    This variable has been implemented in Percona Server for MySQL 8.0.36-28.

    This variable controls whether to write and check RocksDB file-level checksums. The default value is OFF which means the variable is disabled.

    "},{"location":"variables.html#rocksdb_flush_log_at_trx_commit","title":"rocksdb_flush_log_at_trx_commit","text":"Option Description Command-line \u2013rocksdb-flush-log-at-trx-commit Dynamic Yes Scope Global, Session Data type Numeric Default 1

    Specifies whether to sync on every transaction commit, similar to innodb_flush_log_at_trx_commit. Enabled by default, which ensures ACID compliance.

    Possible values:

    • 0: Do not sync on transaction commit. This provides better performance, but may lead to data inconsistency in case of a crash.

    • 1: Sync on every transaction commit. This is set by default and recommended as it ensures data consistency, but reduces performance.

    • 2: Sync every second.
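For example, a sketch that trades some durability for throughput by syncing once per second:

SET GLOBAL rocksdb_flush_log_at_trx_commit = 2;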

    "},{"location":"variables.html#rocksdb_flush_memtable_on_analyze","title":"rocksdb_flush_memtable_on_analyze","text":"Option Description Command-line \u2013rocksdb-flush-memtable-on-analyze Dynamic Yes Scope Global, Session Data type Boolean Default ON

    Specifies whether to flush the memtable when running ANALYZE on a table. Enabled by default. This ensures accurate cardinality by including data in the memtable for calculating stats.

    "},{"location":"variables.html#rocksdb_force_compute_memtable_stats","title":"rocksdb_force_compute_memtable_stats","text":"Option Description Command-line \u2013rocksdb-force-compute-memtable-stats Dynamic Yes Scope Global Data type Boolean Default ON

    Specifies whether data in the memtables should be included for calculating index statistics used by the query optimizer. Enabled by default. This provides better accuracy, but may reduce performance.

    "},{"location":"variables.html#rocksdb_force_compute_memtable_stats_cachetime","title":"rocksdb_force_compute_memtable_stats_cachetime","text":"Option Description Command-line \u2013rocksdb-force-compute-memtable-stats-cachetime Dynamic Yes Scope Global Data type Numeric Default 60000000

Specifies for how long, in microseconds, the cached value of memtable statistics should be used instead of computing it every time during query plan analysis.

    "},{"location":"variables.html#rocksdb_force_flush_memtable_and_lzero_now","title":"rocksdb_force_flush_memtable_and_lzero_now","text":"Option Description Command-line \u2013rocksdb-force-flush-memtable-and-lzero-now Dynamic Yes Scope Global Data type Boolean Default OFF

    Works similar to rocksdb_force_flush_memtable_now but also flushes all L0 files.

    "},{"location":"variables.html#rocksdb_force_flush_memtable_now","title":"rocksdb_force_flush_memtable_now","text":"Option Description Command-line \u2013rocksdb-force-flush-memtable-now Dynamic Yes Scope Global Data type Boolean Default OFF

    Note

    In version Percona Server for MySQL 8.0.36-28 and later, the default value is changed from ON to OFF.

    This variable acts as a trigger. Set the variable to ON, rocksdb_force_flush_memtable_now=ON, to immediately flush all memtables. After all memtables are flushed, the variable value automatically switches to OFF.

    Warning

    Use with caution! Write requests will be blocked until all memtables are flushed.
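For example:

SET GLOBAL rocksdb_force_flush_memtable_now = ON;  -- flushes all memtables, then reverts to OFF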

    "},{"location":"variables.html#rocksdb_force_index_records_in_range","title":"rocksdb_force_index_records_in_range","text":"Option Description Command-line \u2013rocksdb-force-index-records-in-range Dynamic Yes Scope Global, Session Data type Numeric Default 1

    Specifies the value used to override the number of rows returned to query optimizer when FORCE INDEX is used. Default value is 1. Allowed range is from 0 to 2147483647. Set to 0 if you do not want to override the returned value.

    "},{"location":"variables.html#rocksdb_hash_index_allow_collision","title":"rocksdb_hash_index_allow_collision","text":"Option Description Command-line \u2013rocksdb-hash-index-allow-collision Dynamic No Scope Global Data type Boolean Default ON

Specifies whether hash collisions are allowed. Enabled by default, which uses less memory. If disabled, the full prefix is stored to prevent hash collisions.

    "},{"location":"variables.html#rocksdb_ignore_unknown_options","title":"rocksdb_ignore_unknown_options","text":"Option Description Command-line Dynamic No Scope Global Data type Boolean Default ON

When enabled, RocksDB ignores unknown options instead of exiting.

    "},{"location":"variables.html#rocksdb_index_type","title":"rocksdb_index_type","text":"Option Description Command-line \u2013rocksdb-index-type Dynamic No Scope Global Data type Enum Default kBinarySearch

    Specifies the type of indexing used by MyRocks:

    • kBinarySearch: Binary search (default).

    • kHashSearch: Hash search.

    "},{"location":"variables.html#rocksdb_info_log_level","title":"rocksdb_info_log_level","text":"Option Description Command-line \u2013rocksdb-info-log-level Dynamic Yes Scope Global Data type Enum Default error_level

    Specifies the level for filtering messages written by MyRocks to the mysqld log.

    • debug_level: Maximum logging (everything including debugging log messages)

    • info_level

    • warn_level

    • error_level (default)

    • fatal_level: Minimum logging (only fatal error messages logged)
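For example, to capture maximum detail while diagnosing a problem:

SET GLOBAL rocksdb_info_log_level = 'debug_level';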

    "},{"location":"variables.html#rocksdb_is_fd_close_on_exec","title":"rocksdb_is_fd_close_on_exec","text":"Option Description Command-line \u2013rocksdb-is-fd-close-on-exec Dynamic No Scope Global Data type Boolean Default ON

Specifies whether child processes should inherit open file handles. Enabled by default.

    "},{"location":"variables.html#rocksdb_large_prefix","title":"rocksdb_large_prefix","text":"Option Description Command-line \u2013rocksdb-large-prefix Dynamic Yes Scope Global Data type Boolean Default ON

    This variable is deprecated in Percona Server for MySQL 8.0.36-28 and will be removed in a future release.

    When enabled, this option allows index key prefixes longer than 767 bytes (up to 3072 bytes). The values for rocksdb_large_prefix should be the same between source and replica.

    Note

    In version Percona Server for MySQL 8.0.16-7 and later, the default value is changed to ON.

    "},{"location":"variables.html#rocksdb_keep_log_file_num","title":"rocksdb_keep_log_file_num","text":"Option Description Command-line \u2013rocksdb-keep-log-file-num Dynamic No Scope Global Data type Numeric Default 1000

    Specifies the maximum number of info log files to keep. Default value is 1000. Allowed range is from 1 to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_lock_scanned_rows","title":"rocksdb_lock_scanned_rows","text":"Option Description Command-line \u2013rocksdb-lock-scanned-rows Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether to hold the lock on rows that are scanned during UPDATE and not actually updated. Disabled by default.

    "},{"location":"variables.html#rocksdb_lock_wait_timeout","title":"rocksdb_lock_wait_timeout","text":"Option Description Command-line \u2013rocksdb-lock-wait-timeout Dynamic Yes Scope Global, Session Data type Numeric Default 1

    Specifies the number of seconds MyRocks should wait to acquire a row lock before aborting the request. Default value is 1. Allowed range is up to 1073741824.

    "},{"location":"variables.html#rocksdb_log_file_time_to_roll","title":"rocksdb_log_file_time_to_roll","text":"Option Description Command-line \u2013rocksdb-log-file-time-to-roll Dynamic No Scope Global Data type Numeric Default 0

    Specifies the period (in seconds) for rotating the info log files. Default value is 0, meaning that the log file is not rotated. Allowed range is up to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_manifest_preallocation_size","title":"rocksdb_manifest_preallocation_size","text":"Option Description Command-line \u2013rocksdb-manifest-preallocation-size Dynamic No Scope Global Data type Numeric Default 0

    Specifies the number of bytes to preallocate for the MANIFEST file used by MyRocks to store information about column families, levels, active files, etc. Default value is 0. Allowed range is up to 18446744073709551615.

    Note

    A value of 4194304 (4 MB) is reasonable to reduce random I/O on XFS.

    "},{"location":"variables.html#rocksdb_manual_compaction_bottommost_level","title":"rocksdb_manual_compaction_bottommost_level","text":"Option Description Command-line \u2013rocksdb-manual-compaction-bottommost-level Dynamic Yes Scope Local Data type Enum Default kForceOptimized

    Option for bottommost level compaction during manual compaction:

    • kSkip - Skip bottommost level compaction

    • kIfHaveCompactionFilter - Only compact bottommost level if there is a compaction filter

    • kForce - Always compact bottommost level

    • kForceOptimized - Always compact bottommost level but in bottommost level avoid double-compacting files created in the same compaction
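For example, a sketch that forces bottommost-level compaction for a manual compaction issued from the same session (the column family name is hypothetical, and it is assumed the session-level setting applies to compactions triggered from that session):

SET SESSION rocksdb_manual_compaction_bottommost_level = 'kForce';
SET GLOBAL rocksdb_compact_cf = 'cf_orders';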

    "},{"location":"variables.html#rocksdb_manual_compaction_threads","title":"rocksdb_manual_compaction_threads","text":"Option Description Command-line \u2013rocksdb-manual-compaction-threads Dynamic Yes Scope Local Data type INT Default 0

    The variable defines the number of RocksDB threads to run for a manual compaction. The minimum value is 0. The maximum value is 120.

    "},{"location":"variables.html#rocksdb_manual_wal_flush","title":"rocksdb_manual_wal_flush","text":"Option Description Command-line \u2013rocksdb-manual-wal-flush Dynamic No Scope Global Data type Boolean Default ON

    This variable can be used to disable automatic/timed WAL flushing and instead rely on the application to do the flushing.

    "},{"location":"variables.html#rocksdb_master_skip_tx_api","title":"rocksdb_master_skip_tx_api","text":"Option Description Command-line Dynamic Yes Scope Global, Session Data type Boolean Default OFF

The variable was implemented in Percona Server for MySQL 8.0.20-11. When enabled, MyRocks uses the WriteBatch API, which is faster. The session does not hold any lock on row access. This variable has no effect on a replica.

    Note

    Due to the disabled row locks, improper use of the variable can cause data corruption or inconsistency.

    "},{"location":"variables.html#rocksdb_max_background_compactions","title":"rocksdb_max_background_compactions","text":"Option Description Command-line \u2013rocksdb-max-background-compactions Dynamic Yes Scope Global Data type Numeric Default -1

    The variable was implemented in Percona Server for MySQL 8.0.20-11.

Sets DBOptions::max_background_compactions for RocksDB. The default value is -1. The allowed range is from -1 to 64. This variable was replaced by rocksdb_max_background_jobs, which automatically decides how many threads to allocate towards flush/compaction, and was later re-implemented in Percona Server for MySQL 8.0.20-11.

    "},{"location":"variables.html#rocksdb_max_background_flushes","title":"rocksdb_max_background_flushes","text":"Option Description Command-line \u2013rocksdb-max-background-flushes Dynamic No Scope Global Data type Numeric Default -1

    The variable was implemented in Percona Server for MySQL 8.0.20-11.

Sets DBOptions::max_background_flushes for RocksDB. The default value is -1. The allowed range is from -1 to 64. This variable has been replaced by rocksdb_max_background_jobs, which automatically decides how many threads to allocate towards flush/compaction.

    "},{"location":"variables.html#rocksdb_max_background_jobs","title":"rocksdb_max_background_jobs","text":"Option Description Command-line \u2013rocksdb-max-background-jobs Dynamic Yes Scope Global Data type Numeric Default 2

This variable replaced the rocksdb_base_background_compactions, rocksdb_max_background_compactions, and rocksdb_max_background_flushes variables. It specifies the maximum number of background jobs and automatically decides how many threads to allocate towards flush/compaction. It was implemented to reduce the number of (confusing) options users can tweak, pushing the responsibility down to the RocksDB level.

    "},{"location":"variables.html#rocksdb_max_bottom_pri_background_compactions","title":"rocksdb_max_bottom_pri_background_compactions","text":"Option Description Command-line \u2013rocksdb_max_bottom_pri_background_compactions Dynamic No Data type Unsigned integer Default 0

The variable was implemented in Percona Server for MySQL 8.0.20-11. Creates the specified number of threads, sets a lower CPU priority for them, and lets compactions use them. The maximum compaction concurrency is capped by rocksdb_max_background_compactions or rocksdb_max_background_jobs.

    The minimum value is 0 and the maximum value is 64.

    "},{"location":"variables.html#rocksdb_max_compaction_history","title":"rocksdb_max_compaction_history","text":"Option Description Command-line \u2013rocksdb-max-compaction-history Dynamic Yes Scope Global Data type Unsigned integer Default 64

    The minimum value is 0 and the maximum value is UINT64_MAX.

Tracks the history for at most rocksdb_max_compaction_history completed compactions. The history is in the INFORMATION_SCHEMA.ROCKSDB_COMPACTION_HISTORY table.

    "},{"location":"variables.html#rocksdb_max_file_opening_threads","title":"rocksdb_max_file_opening_threads","text":"Option Description Command-line \u2013rocksdb-max-file-opening-threads Dynamic No Scope Global Data type Numeric Default 16

    This variable has been implemented in Percona Server for MySQL 8.0.36-28.

    This variable sets DBOptions::max_file_opening_threads for RocksDB. The default value is 16. The minimum value is 1 and the maximum value is 2147483647 (INT_MAX).

    "},{"location":"variables.html#rocksdb_max_latest_deadlocks","title":"rocksdb_max_latest_deadlocks","text":"Option Description Command-line \u2013rocksdb-max-latest-deadlocks Dynamic Yes Scope Global Data type Numeric Default 5

    Specifies the maximum number of recent deadlocks to store.

    "},{"location":"variables.html#rocksdb_max_log_file_size","title":"rocksdb_max_log_file_size","text":"Option Description Command-line \u2013rocksdb-max-log-file-size Dynamic No Scope Global Data type Numeric Default 0

    Specifies the maximum size for info log files, after which the log is rotated. Default value is 0, meaning that only one log file is used. Allowed range is up to 18446744073709551615.

    Also see rocksdb_log_file_time_to_roll.

    "},{"location":"variables.html#rocksdb_max_manifest_file_size","title":"rocksdb_max_manifest_file_size","text":"Option Description Command-line \u2013rocksdb-manifest-log-file-size Dynamic No Scope Global Data type Numeric Default 18446744073709551615

    Specifies the maximum size of the MANIFEST data file, after which it is rotated. Default value is also the maximum, making it practically unlimited: only one manifest file is used.

    "},{"location":"variables.html#rocksdb_max_manual_compactions","title":"rocksdb_max_manual_compactions","text":"Option Description Command-line \u2013rocksdb-max-manual-compactions Dynamic Yes Scope Global Data type UINT Default 10

The variable defines the maximum number of pending plus ongoing manual compactions. The minimum value is 0 and the default value is 10. The maximum value is 4294967295 (UINT_MAX).

    "},{"location":"variables.html#rocksdb_max_open_files","title":"rocksdb_max_open_files","text":"Option Description Command-line \u2013rocksdb-max-open-files Dynamic No Scope Global Data type Numeric Default 1000

Specifies the maximum number of file handles opened by MyRocks. Values in the range between 0 and open_files_limit are taken as they are. If the rocksdb_max_open_files value is greater than open_files_limit, it is reset to \u00bd of open_files_limit, and a warning is emitted to the mysqld error log. A value of -2 denotes auto-tuning: rocksdb_max_open_files is simply set to \u00bd of open_files_limit. Finally, -1 means no limit, i.e. an infinite number of file handles.

    Warning

    Setting rocksdb_max_open_files to -1 is dangerous, as the server may quickly run out of file handles in this case.

    "},{"location":"variables.html#rocksdb_max_row_locks","title":"rocksdb_max_row_locks","text":"Option Description Command-line \u2013rocksdb-max-row-locks Dynamic Yes Scope Global Data type Numeric Default 1048576

    Specifies the limit on the maximum number of row locks a transaction can have before it fails. Default value is also the maximum, making it practically unlimited: transactions never fail due to row locks.

    "},{"location":"variables.html#rocksdb_max_subcompactions","title":"rocksdb_max_subcompactions","text":"Option Description Command-line \u2013rocksdb-max-subcompactions Dynamic No Scope Global Data type Numeric Default 1

    Specifies the maximum number of threads allowed for each compaction job. Default value of 1 means no subcompactions (one thread per compaction job). Allowed range is up to 64.

    "},{"location":"variables.html#rocksdb_max_total_wal_size","title":"rocksdb_max_total_wal_size","text":"Option Description Command-line \u2013rocksdb-max-total-wal-size Dynamic No Scope Global Data type Numeric Default 2 GB

Specifies the maximum total size of WAL (write-ahead log) files, after which memtables are flushed. Default value is 2 GB. The allowed range is up to 9223372036854775807.

    "},{"location":"variables.html#rocksdb_merge_buf_size","title":"rocksdb_merge_buf_size","text":"Option Description Command-line \u2013rocksdb-merge-buf-size Dynamic Yes Scope Global Data type Numeric Default 67108864

    Specifies the size (in bytes) of the merge-sort buffers used to accumulate data during secondary key creation. New entries are written directly to the lowest level in the database, instead of updating indexes through the memtable and L0. These values are sorted using merge-sort, with buffers set to 64 MB by default (67108864). Allowed range is from 100 to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_merge_combine_read_size","title":"rocksdb_merge_combine_read_size","text":"Option Description Command-line \u2013rocksdb-merge-combine-read-size Dynamic Yes Scope Global Data type Numeric Default 1073741824

    Specifies the size (in bytes) of the merge-combine buffer used for the merge-sort algorithm as described in rocksdb_merge_buf_size. Default size is 1 GB (1073741824). Allowed range is from 100 to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_merge_tmp_file_removal_delay_ms","title":"rocksdb_merge_tmp_file_removal_delay_ms","text":"Option Description Command-line \u2013rocksdb_merge_tmp_file_removal_delay_ms Dynamic Yes Scope Global, Session Data type Numeric Default 0

    Fast secondary index creation creates merge files when needed. After finishing secondary index creation, the merge files are removed. By default, file removal is done without any sleep, so gigabytes of merge files may be removed in under a second, which can cause trim stalls on flash storage. This variable can be used to rate-limit the removal by specifying a delay in milliseconds.

    "},{"location":"variables.html#rocksdb_new_table_reader_for_compaction_inputs","title":"rocksdb_new_table_reader_for_compaction_inputs","text":"Option Description Command-line \u2013rocksdb-new-table-reader-for-compaction-inputs Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether MyRocks should create a new file descriptor and table reader for each compaction input. Disabled by default. Enabling this may increase memory consumption, but will also allow pre-fetch options to be specified for compaction input files without impacting table readers used for user queries.

    "},{"location":"variables.html#rocksdb_no_block_cache","title":"rocksdb_no_block_cache","text":"Option Description Command-line \u2013rocksdb-no-block-cache Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether to disable the block cache for column families. Variable is disabled by default, meaning that using the block cache is allowed.

    "},{"location":"variables.html#rocksdb_no_create_column_family","title":"rocksdb_no_create_column_family","text":"Option Description Command-line \u2013rocksdb-no-create-column-family Dynamic No Scope Global Data type Boolean Default ON

    Controls the processing of the column family name given in the COMMENT clause in the CREATE TABLE or ALTER TABLE statement in case the column family name does not refer to an existing column family.

    If rocksdb_no_create_column_family is set to OFF, a new column family is created and the new index is placed into it.

    If rocksdb_no_create_column_family is set to ON, no new column family is created and the index is placed into the default column family. A warning is issued in this case, informing that the specified column family does not exist and cannot be created.
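
    For example, a sketch of placing a table's primary key into a named column family via the COMMENT clause (the table and column family names here are hypothetical):

    mysql> CREATE TABLE t1 (id INT PRIMARY KEY COMMENT 'cfname=cf1') ENGINE=ROCKSDB;\n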

    "},{"location":"variables.html#rocksdb_override_cf_options","title":"rocksdb_override_cf_options","text":"Option Description Command-line \u2013rocksdb-override-cf-options Dynamic No Scope Global Data type String Default

    Specifies option overrides for each column family. Empty by default.
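
    For example, a my.cnf sketch that overrides the write buffer size for two hypothetical column families (names and sizes are illustrative):

    rocksdb_override_cf_options='cf1={write_buffer_size=32m};cf2={write_buffer_size=64m}'\n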

    "},{"location":"variables.html#rocksdb_paranoid_checks","title":"rocksdb_paranoid_checks","text":"Option Description Command-line \u2013rocksdb-paranoid-checks Dynamic No Scope Global Data type Boolean Default ON

    Specifies whether MyRocks should re-read the data file as soon as it is created to verify correctness. Enabled by default.

    "},{"location":"variables.html#rocksdb_partial_index_ignore_killed","title":"rocksdb_partial_index_ignore_killed","text":"Option Description Command-line \u2013rocksdb-partial-index-ignore-killed Dynamic Yes Scope Global Data type Boolean Default ON

    This variable has been implemented in Percona Server for MySQL 8.0.36-28.

    If this variable is set to ON, the partial index materialization ignores the killed flag and continues materialization until completion. If queries are killed during materialization due to timeout, the work done so far is wasted, and the killed query will likely be retried later, hitting the same issue.

    The default value is ON, which means this variable is enabled.

    "},{"location":"variables.html#rocksdb_partial_index_sort_max_mem","title":"rocksdb_partial_index_sort_max_mem","text":"Option Description Command-line \u2013rocksdb-partial-index-sort-max-mem Dynamic Yes Scope Local Data type Unsigned Integer Default 0

    The variable was implemented in Percona Server for MySQL 8.0.27-18. Specifies the maximum memory to use when sorting an unmaterialized group for partial indexes. A value of 0 (zero) means no limit.

    "},{"location":"variables.html#rocksdb_pause_background_work","title":"rocksdb_pause_background_work","text":"Option Description Command-line \u2013rocksdb-pause-background-work Dynamic Yes Scope Global Data type Boolean Default OFF

    Specifies whether MyRocks should pause all background operations. Disabled by default. There is no practical reason for a user to ever use this variable because it is intended as a test synchronization tool for the MyRocks MTR test suites.

    Warning

    If rocksdb_force_flush_memtable_now is set to 1 while rocksdb_pause_background_work is set to 1, the client that issued rocksdb_force_flush_memtable_now=1 is blocked indefinitely until rocksdb_pause_background_work is set to 0.

    "},{"location":"variables.html#rocksdb_partial_index_blind_delete","title":"rocksdb_partial_index_blind_delete","text":"Option Description Command-line \u2013rocksdb_partial_index_blind_delete Dynamic Yes Scope Global Data type Boolean Default ON

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

    If enabled, the server does not read from the partial index to check whether the key exists before deleting it; the delete marker is written unconditionally.

    If the variable is disabled (OFF), the server always reads from the partial index to check whether the key exists before deleting it.

    This variable is enabled (ON) by default.

    "},{"location":"variables.html#rocksdb_perf_context_level","title":"rocksdb_perf_context_level","text":"Option Description Command-line \u2013rocksdb-perf-context-level Dynamic Yes Scope Global, Session Data type Numeric Default 0

    Specifies the level of information to capture with the Perf Context plugins. The default value is 0. The allowed range is up to 5.

    Value Description 1 Disable perf stats 2 Enable only count stats 3 Enable count stats and time stats except for mutexes 4 Enable count stats and time stats, except for wall time or CPU time for mutexes 5 Enable all count stats and time stats"},{"location":"variables.html#rocksdb_persistent_cache_path","title":"rocksdb_persistent_cache_path","text":"Option Description Command-line \u2013rocksdb-persistent-cache-path Dynamic No Scope Global Data type String Default

    Specifies the path to the persistent cache. Set this together with rocksdb_persistent_cache_size_mb.

    "},{"location":"variables.html#rocksdb_persistent_cache_size_mb","title":"rocksdb_persistent_cache_size_mb","text":"Option Description Command-line \u2013rocksdb-persistent-cache-size-mb Dynamic No Scope Global Data type Numeric Default 0

    Specifies the size of the persistent cache in megabytes. Default is 0 (persistent cache disabled). Allowed range is up to 18446744073709551615. Set this together with rocksdb_persistent_cache_path.
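
    For example, a my.cnf sketch that enables a 1 GB persistent cache (the path and size are illustrative):

    rocksdb_persistent_cache_path=/mnt/fast_ssd/rocksdb_cache\nrocksdb_persistent_cache_size_mb=1024\n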

    "},{"location":"variables.html#rocksdb_pin_l0_filter_and_index_blocks_in_cache","title":"rocksdb_pin_l0_filter_and_index_blocks_in_cache","text":"Option Description Command-line \u2013rocksdb-pin-l0-filter-and-index-blocks-in-cache Dynamic No Scope Global Data type Boolean Default ON

    Specifies whether MyRocks pins the filter and index blocks in the cache if rocksdb_cache_index_and_filter_blocks is enabled. Enabled by default.

    "},{"location":"variables.html#rocksdb_print_snapshot_conflict_queries","title":"rocksdb_print_snapshot_conflict_queries","text":"Option Description Command-line \u2013rocksdb-print-snapshot-conflict-queries Dynamic Yes Scope Global Data type Boolean Default OFF

    Specifies whether queries that generate snapshot conflicts should be logged to the error log. Disabled by default.

    "},{"location":"variables.html#rocksdb_protection_bytes_per_key","title":"rocksdb_protection_bytes_per_key","text":"Option Description Command-line \u2013rocksdb_protection_bytes_per_key Dynamic Yes Scope Global, Session Data type Numeric Default 0

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

    This variable is used to configure WriteOptions::protection_bytes_per_key. The default value is 0 (disabled). When this variable is set to 1, 2, 4, or 8, it uses that number of bytes per key value to protect entries in the WriteBatch.

    The minimum value is 0.

    The maximum value is ULONG_MAX (0xFFFFFFFF).

    "},{"location":"variables.html#rocksdb_rate_limiter_bytes_per_sec","title":"rocksdb_rate_limiter_bytes_per_sec","text":"Option Description Command-line \u2013rocksdb-rate-limiter-bytes-per-sec Dynamic Yes Scope Global Data type Numeric Default 0

    Specifies the maximum rate at which MyRocks can write to media via memtable flushes and compaction. Default value is 0 (write rate is not limited). Allowed range is up to 9223372036854775807.

    "},{"location":"variables.html#rocksdb_read_free_rpl","title":"rocksdb_read_free_rpl","text":"Option Description Command-line \u2013rocksdb-read-free-rpl Dynamic Yes Scope Global Data type Enum Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.20-11. Enables read-free replication on the replica, which performs replication without row lookups.

    The options are the following:

    • OFF - Disables the variable

    • PK_SK - Enables the variable on all tables with a primary key

    • PK_ONLY - Enables the variable on tables where the only key is the primary key
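
    For example, a sketch that enables read-free replication for all tables with a primary key (the variable is dynamic and global):

    mysql> SET GLOBAL rocksdb_read_free_rpl = PK_SK;\n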

    "},{"location":"variables.html#rocksdb_read_free_rpl_tables","title":"rocksdb_read_free_rpl_tables","text":"Option Description Command-line \u2013rocksdb-read-free-rpl-tables Dynamic Yes Scope Global, Session Data type String Default

    The variable was disabled in Percona Server for MySQL 8.0.20-11. We recommend that you use rocksdb_read_free_rpl instead of this variable.

    This variable lists tables (as a regular expression) that should use read-free replication on the replica (that is, replication without row lookups). Empty by default.

    "},{"location":"variables.html#rocksdb_records_in_range","title":"rocksdb_records_in_range","text":"Option Description Command-line \u2013rocksdb-records-in-range Dynamic Yes Scope Global, Session Data type Numeric Default 0

    Specifies the value to override the result of records_in_range(). Default value is 0. Allowed range is up to 2147483647.

    "},{"location":"variables.html#rocksdb_reset_stats","title":"rocksdb_reset_stats","text":"Option Description Command-line \u2013rocksdb-reset-stats Dynamic Yes Scope Global Data type Boolean Default OFF

    Resets MyRocks internal statistics dynamically (without restarting the server).

    "},{"location":"variables.html#rocksdb_rollback_on_timeout","title":"rocksdb_rollback_on_timeout","text":"Option Description Command-line \u2013rocksdb-rollback-on-timeout Dynamic Yes Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.20-11. By default, only the last statement of a transaction is rolled back. If --rocksdb-rollback-on-timeout=ON, a transaction timeout causes a rollback of the entire transaction.

    "},{"location":"variables.html#rocksdb_rpl_skip_tx_api","title":"rocksdb_rpl_skip_tx_api","text":"Option Description Command-line \u2013rocksdb-rpl-skip-tx-api Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether write batches should be used for the replication thread instead of the transaction API. Disabled by default.

    Two conditions are necessary to use it: row-based replication format and a replica operating in super read-only mode.

    "},{"location":"variables.html#rocksdb_seconds_between_stat_computes","title":"rocksdb_seconds_between_stat_computes","text":"Option Description Command-line \u2013rocksdb-seconds-between-stat-computes Dynamic Yes Scope Global Data type Numeric Default 3600

    Specifies the number of seconds to wait between recomputation of table statistics for the optimizer. During that time, only changed indexes are updated. Default value is 3600. The allowed range is from 0 to 4294967295.

    "},{"location":"variables.html#rocksdb_signal_drop_index_thread","title":"rocksdb_signal_drop_index_thread","text":"Option Description Command-line \u2013rocksdb-signal-drop-index-thread Dynamic Yes Scope Global Data type Boolean Default OFF

    Signals the MyRocks drop index thread to wake up.

    "},{"location":"variables.html#rocksdb_sim_cache_size","title":"rocksdb_sim_cache_size","text":"Option Description Command-line \u2013rocksdb-sim-cache-size Dynamic No Scope Global Data type Numeric Default 0

    Enables the simulated cache, which lets you estimate the hit/miss rate for a specific cache size without changing the real block cache.

    "},{"location":"variables.html#rocksdb_skip_bloom_filter_on_read","title":"rocksdb_skip_bloom_filter_on_read","text":"Option Description Command-line \u2013rocksdb-skip-bloom-filter-on_read Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether bloom filters should be skipped on reads. Disabled by default (bloom filters are not skipped).

    "},{"location":"variables.html#rocksdb_skip_fill_cache","title":"rocksdb_skip_fill_cache","text":"Option Description Command-line \u2013rocksdb-skip-fill-cache Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether to skip caching data on read requests. Disabled by default (caching is not skipped).

    "},{"location":"variables.html#rocksdb_skip_locks_if_skip_unique_check","title":"rocksdb_skip_locks_if_skip_unique_check","text":"Option Description Command-line rocksdb_skip_locks_if_skip_unique_check Dynamic Yes Scope Global Data type Boolean Default OFF

    Skip row locking when unique checks are disabled.

    "},{"location":"variables.html#rocksdb_sst_mgr_rate_bytes_per_sec","title":"rocksdb_sst_mgr_rate_bytes_per_sec","text":"Option Description Command-line \u2013rocksdb-sst-mgr-rate-bytes-per-sec Dynamic Yes Scope Global, Session Data type Numeric Default 0

    Specifies the maximum rate for writing to data files. Default value is 0. This option is not effective on HDD. Allowed range is from 0 to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_stats_dump_period_sec","title":"rocksdb_stats_dump_period_sec","text":"Option Description Command-line \u2013rocksdb-stats-dump-period-sec Dynamic No Scope Global Data type Numeric Default 600

    Specifies the period in seconds for performing a dump of the MyRocks statistics to the info log. Default value is 600. Allowed range is up to 2147483647.

    "},{"location":"variables.html#rocksdb_stats_level","title":"rocksdb_stats_level","text":"Option Description Command-line \u2013rocksdb-stats-level Dynamic Yes Scope Global Data type Numeric Default 0

    The variable was implemented in Percona Server for MySQL 8.0.20-11. Controls the RocksDB statistics level. The default value is \u201c0\u201d (kExceptHistogramOrTimers), which is the fastest level. The maximum value is \u201c4\u201d.

    "},{"location":"variables.html#rocksdb_stats_recalc_rate","title":"rocksdb_stats_recalc_rate","text":"Option Description Command-line \u2013rocksdb-stats-recalc-rate Dynamic No Scope Global Data type Numeric Default 0

    The variable was implemented in Percona Server for MySQL 8.0.20-11. Specifies the number of indexes to recalculate per second. Periodically recalculating index statistics ensures that they match the actual sums from the SST files. Default value is 0. Allowed range is up to 4294967295.

    "},{"location":"variables.html#rocksdb_store_row_debug_checksums","title":"rocksdb_store_row_debug_checksums","text":"Option Description Command-line \u2013rocksdb-store-row-debug-checksums Dynamic Yes Scope Global Data type Boolean Default OFF

    Specifies whether to include checksums when writing index or table records. Disabled by default.

    "},{"location":"variables.html#rocksdb_strict_collation_check","title":"rocksdb_strict_collation_check","text":"Option Description Command-line \u2013rocksdb-strict-collation-check Dynamic Yes Scope Global Data type Boolean Default ON

    This variable is considered deprecated in version 8.0.23-14.

    Specifies whether to check and verify that table indexes have proper collation settings. Enabled by default.

    "},{"location":"variables.html#rocksdb_strict_collation_exceptions","title":"rocksdb_strict_collation_exceptions","text":"Option Description Command-line \u2013rocksdb-strict-collation-exceptions Dynamic Yes Scope Global Data type String Default

    This variable is considered deprecated in version 8.0.23-14.

    Lists tables (as a regular expression) that should be excluded from verifying case-sensitive collation enforced by rocksdb_strict_collation_check. Empty by default.

    "},{"location":"variables.html#rocksdb_table_cache_numshardbits","title":"rocksdb_table_cache_numshardbits","text":"Option Description Command-line \u2013rocksdb-table-cache-numshardbits Dynamic No Scope Global Data type Numeric Default 6

    Specifies the number of table caches. The default value is 6. The allowed range is from 0 to 19.

    "},{"location":"variables.html#rocksdb_table_stats_background_thread_nice_value","title":"rocksdb_table_stats_background_thread_nice_value","text":"Option Description Command-line \u2013rocksdb-table-stats-background-thread-nice-value Dynamic Yes Scope Global Data type Numeric Default 19

    The variable was implemented in Percona Server for MySQL 8.0.20-11.

    The nice value for the index stats background thread. The minimum value is -20 (THREAD_PRIO_MIN). The maximum value is 19 (THREAD_PRIO_MAX).

    "},{"location":"variables.html#rocksdb_table_stats_max_num_rows_scanned","title":"rocksdb_table_stats_max_num_rows_scanned","text":"Option Description Command-line \u2013rocksdb-table-stats-max-num-rows-scanned Dynamic Yes Scope Global Data type Numeric Default 0

    The variable was implemented in Percona Server for MySQL 8.0.20-11.

    The maximum number of rows to scan in a table scan based on a cardinality calculation. The minimum is 0 (every modification triggers a stats recalculation). The maximum is 18,446,744,073,709,551,615.

    "},{"location":"variables.html#rocksdb_table_stats_recalc_threshold_count","title":"rocksdb_table_stats_recalc_threshold_count","text":"Option Description Command-line \u2013rocksdb-table-stats-recalc-threshold-count Dynamic Yes Scope Global Data type Numeric Default 100

    The variable was implemented in Percona Server for MySQL 8.0.20-11.

    The number of modified rows to trigger a stats recalculation. This is a dependent variable for stats recalculation. The minimum is 0. The maximum is 18,446,744,073,709,551,615.

    "},{"location":"variables.html#rocksdb_table_stats_recalc_threshold_pct","title":"rocksdb_table_stats_recalc_threshold_pct","text":"Option Description Command-line \u2013rocksdb-table-stats-recalc-threshold-pct Dynamic Yes Scope Global Data type Numeric Default 10

    The variable was implemented in Percona Server for MySQL 8.0.20-11.

    The percentage of the number of modified rows over the total number of rows to trigger stats recalculations. This is a dependent variable for stats recalculation. The minimum value is 0. The maximum value is 100 (RDB_TBL_STATS_RECALC_THRESHOLD_PCT_MAX).

    "},{"location":"variables.html#rocksdb_table_stats_sampling_pct","title":"rocksdb_table_stats_sampling_pct","text":"Option Description Command-line \u2013rocksdb-table-stats-sampling-pct Dynamic Yes Scope Global Data type Numeric Default 10

    Specifies the percentage of entries to sample when collecting statistics about table properties. Default value is 10. Allowed range is from 0 to 100.

    "},{"location":"variables.html#rocksdb_table_stats_use_table_scan","title":"rocksdb_table_stats_use_table_scan","text":"Option Description Command-line \u2013rocksdb-table-stats-use-table-scan Dynamic Yes Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.20-11. Enables table-scan-based index calculations. The default value is OFF.

    "},{"location":"variables.html#rocksdb_tmpdir","title":"rocksdb_tmpdir","text":"Option Description Command-line \u2013rocksdb-tmpdir Dynamic Yes Scope Global, Session Data type String Default

    Specifies the path to the directory for temporary files during DDL operations.

    "},{"location":"variables.html#rocksdb_trace_block_cache_access","title":"rocksdb_trace_block_cache_access","text":"Option Description Command-line \u2013rocksdb-trace-block-cache-access Dynamic Yes Scope Global Data type String Default \"\"

    The variable was implemented in Percona Server for MySQL 8.0.20-11. Defines the block cache trace option string. The format is sampling_frequency:max_trace_file_size:trace_file_name. The sampling_frequency and max_trace_file_size values are positive integers. The block accesses are saved to rocksdb_datadir/block_cache_traces/trace_file_name. The default value is an empty string.
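
    For example, a sketch that samples every block access and caps the trace file at roughly 120 MB (the values and file name are hypothetical):

    mysql> SET GLOBAL rocksdb_trace_block_cache_access = '1:120000000:block_trace_file';\n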

    "},{"location":"variables.html#rocksdb_trace_queries","title":"rocksdb_trace_queries","text":"Option Description Command-line \u2013rocksdb-trace-queries Dynamic Yes Scope Global Data type String Default \"\"

    This variable is a trace option string. The format is sampling_frequency:max_trace_file_size:trace_file_name. The sampling_frequency and max_trace_file_size are positive integers. The queries are saved to the rocksdb_datadir/queries_traces/trace_file_name.

    "},{"location":"variables.html#rocksdb_trace_sst_api","title":"rocksdb_trace_sst_api","text":"Option Description Command-line \u2013rocksdb-trace-sst-api Dynamic Yes Scope Global Data type Boolean Default OFF

    Specifies whether to generate trace output in the log for each call to SstFileWriter. Disabled by default.

    "},{"location":"variables.html#rocksdb_track_and_verify_wals_in_manifest","title":"rocksdb_track_and_verify_wals_in_manifest","text":"Option Description Command-line \u2013rocksdb-track-and-verify-wals-in-manifest Dynamic No Scope Global Data type Boolean Default ON

    Sets DBOptions::track_and_verify_wals_in_manifest for RocksDB.

    "},{"location":"variables.html#rocksdb_two_write_queues","title":"rocksdb_two_write_queues","text":"Option Description Command-line \u2013rocksdb-track-and-verify-wals-in-manifest Dynamic No Scope Global Data type Boolean Default ON

    When enabled, this variable allows threads that use two-phase commit to prepare in parallel.

    "},{"location":"variables.html#rocksdb_unsafe_for_binlog","title":"rocksdb_unsafe_for_binlog","text":"Option Description Command-line \u2013rocksdb-unsafe-for-binlog Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether to allow statement-based binary logging which may break consistency. Disabled by default.

    "},{"location":"variables.html#rocksdb_update_cf_options","title":"rocksdb_update_cf_options","text":"Option Description Command-line \u2013rocksdb-update-cf-options Dynamic No Scope Global Data type String Default

    Specifies option updates for each column family. Empty by default.

    "},{"location":"variables.html#rocksdb_use_adaptive_mutex","title":"rocksdb_use_adaptive_mutex","text":"Option Description Command-line \u2013rocksdb-use-adaptive-mutex Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether to use the adaptive mutex, which spins in user space before resorting to the kernel. Disabled by default.

    "},{"location":"variables.html#rocksdb_use_default_sk_cf","title":"rocksdb_use_default_sk_cf","text":"Option Description Command-line \u2013rocksdb-use-default-sk-cf Dynamic No Scope Global Data type Boolean Default OFF

    Uses the default_sk column family for secondary keys.

    "},{"location":"variables.html#rocksdb_use_direct_io_for_flush_and_compaction","title":"rocksdb_use_direct_io_for_flush_and_compaction","text":"Option Description Command-line \u2013rocksdb-use-direct-io-for-flush-and-compaction Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether to write to data files directly, without caches or buffers. Disabled by default.

    "},{"location":"variables.html#rocksdb_use_direct_reads","title":"rocksdb_use_direct_reads","text":"Option Description Command-line \u2013rocksdb-use-direct-reads Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether to read data files directly, without caches or buffers. Disabled by default. If you enable this, make sure that rocksdb_allow_mmap_reads is disabled.

    "},{"location":"variables.html#rocksdb_use_fsync","title":"rocksdb_use_fsync","text":"Option Description Command-line \u2013rocksdb-use-fsync Dynamic No Scope Global Data type Boolean Default OFF

    Specifies whether MyRocks should use fsync instead of fdatasync when requesting a sync of a data file. Disabled by default.

    "},{"location":"variables.html#rocksdb_use_hyper_clock_cache","title":"rocksdb_use_hyper_clock_cache","text":"Option Description Command-line \u2013rocksdb_use_hyper_clock_cache Dynamic No Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

    If enabled, RocksDB uses HyperClockCache instead of the default LRUCache.

    This variable is disabled (OFF) by default.

    "},{"location":"variables.html#rocksdb_use_write_buffer_manager","title":"rocksdb_use_write_buffer_manager","text":"Option Description Command-line \u2013rocksdb_use_write_buffer_manager Dynamic No Scope Global Data type Boolean Default OFF

    The variable was implemented in Percona Server for MySQL 8.0.33-25.

    This variable is in tech preview and may be removed in future releases.

    Enables turning on the write buffer manager (WriteBufferManager) from configuration (cnf) files. This variable is related to rocksdb_charge_memory.

    "},{"location":"variables.html#rocksdb_validate_tables","title":"rocksdb_validate_tables","text":"Option Description Command-line \u2013rocksdb-validate-tables Dynamic No Scope Global Data type Numeric Default 1

    The variable was implemented in Percona Server for MySQL 8.0.20-11. Specifies whether to verify that the MySQL data dictionary matches the MyRocks data dictionary.

    • 0: do not verify.

    • 1: verify and fail on error (default).

    • 2: verify and continue with error.

    "},{"location":"variables.html#rocksdb_verify_row_debug_checksums","title":"rocksdb_verify_row_debug_checksums","text":"Option Description Command-line \u2013rocksdb-verify-row-debug-checksums Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether to verify checksums when reading index or table records. Disabled by default.

    "},{"location":"variables.html#rocksdb_wal_bytes_per_sync","title":"rocksdb_wal_bytes_per_sync","text":"Option Description Command-line \u2013rocksdb-wal-bytes-per-sync Dynamic Yes Scope Global Data type Numeric Default 0

    Specifies how often the OS should sync WAL (write-ahead log) files to disk, asynchronously in the background, as they are being written. This operation can be used to smooth out write I/O over time. Default value is 0, meaning that files are never synced. Allowed range is up to 18446744073709551615.

    "},{"location":"variables.html#rocksdb_wal_dir","title":"rocksdb_wal_dir","text":"Option Description Command-line \u2013rocksdb-wal-dir Dynamic No Scope Global Data type String Default

    Specifies the path to the directory where MyRocks stores WAL files.

    "},{"location":"variables.html#rocksdb_wal_recovery_mode","title":"rocksdb_wal_recovery_mode","text":"Option Description Command-line \u2013rocksdb-wal-recovery-mode Dynamic Yes Scope Global Data type Numeric Default 2

    Note

    In Percona Server for MySQL 8.0.20-11 and later, the default value changed from 1 to 2.

    Specifies the level of tolerance when recovering write-ahead logs (WAL) files after a system crash.

    The following are the options:

    • 0: if the last WAL entry is corrupted, truncate the entry and either start the server normally or refuse to start.

    • 1: if a WAL entry is corrupted, the server fails to start and does not recover from the crash.

    • 2 (default): if a corrupted WAL entry is detected, truncate all entries after the detected corrupted entry. You can select this setting for replication replicas.

    • 3: if a corrupted WAL entry is detected, skip only the corrupted entry and continue applying WAL entries. This option can be dangerous.
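
    For example, a sketch that selects the strictest recovery mode (the variable is dynamic and global):

    mysql> SET GLOBAL rocksdb_wal_recovery_mode = 1;\n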

    "},{"location":"variables.html#rocksdb_wal_size_limit_mb","title":"rocksdb_wal_size_limit_mb","text":"Option Description Command-line \u2013rocksdb-wal-size-limit-mb Dynamic No Scope Global Data type Numeric Default 0

    Specifies the maximum size of all WAL files in megabytes before attempting to flush memtables and delete the oldest files. Default value is 0 (never rotated). Allowed range is up to 9223372036854775807.

    "},{"location":"variables.html#rocksdb_wal_ttl_seconds","title":"rocksdb_wal_ttl_seconds","text":"Option Description Command-line \u2013rocksdb-wal-ttl-seconds Dynamic No Scope Global Data type Numeric Default 0

    Specifies the timeout in seconds before deleting archived WAL files. Default is 0 (archived WAL files are never deleted). Allowed range is up to 9223372036854775807.

    "},{"location":"variables.html#rocksdb_whole_key_filtering","title":"rocksdb_whole_key_filtering","text":"Option Description Command-line \u2013rocksdb-whole-key-filtering Dynamic No Scope Global Data type Boolean Default ON

    Specifies whether the bloom filter should use the whole key for filtering instead of just the prefix. Enabled by default. Make sure that lookups use the whole key for matching.

    "},{"location":"variables.html#rocksdb_write_batch_flush_threshold","title":"rocksdb_write_batch_flush_threshold","text":"Option Description Command-line \u2013rocksdb-write-batch-flush-threshold Dynamic Yes Scope Local Data type Integer Default 0

    This variable specifies the maximum size of the write batch in bytes before flushing. Only valid if rocksdb_write_policy is WRITE_UNPREPARED. The default value of 0 means no limit.

    "},{"location":"variables.html#rocksdb_write_batch_max_bytes","title":"rocksdb_write_batch_max_bytes","text":"Option Description Command-line \u2013rocksdb-write-batch-max-bytes Dynamic Yes Scope Global Data type Numeric Default 0

    Specifies the maximum size of a RocksDB write batch in bytes. A value of 0 means no limit. If the limit is exceeded, the following error is reported: ERROR HY000: Status error 10 received from RocksDB: Operation aborted: Memory limit reached.

    "},{"location":"variables.html#rocksdb_write_disable_wal","title":"rocksdb_write_disable_wal","text":"Option Description Command-line \u2013rocksdb-write-disable-wal Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Lets you temporarily disable writes to WAL files, which can be useful for bulk loading.
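
    For example, a sketch that disables WAL writes for the current session during a bulk load and re-enables them afterwards:

    mysql> SET SESSION rocksdb_write_disable_wal = ON;\n-- ... bulk load statements ...\nmysql> SET SESSION rocksdb_write_disable_wal = OFF;\n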

    "},{"location":"variables.html#rocksdb_write_ignore_missing_column_families","title":"rocksdb_write_ignore_missing_column_families","text":"Option Description Command-line \u2013rocksdb-write-ignore-missing-column-families Dynamic Yes Scope Global, Session Data type Boolean Default OFF

    Specifies whether to ignore writes to column families that do not exist. Disabled by default (writes to non-existent column families are not ignored).

    "},{"location":"variables.html#rocksdb_write_policy","title":"rocksdb_write_policy","text":"Option Description Command-line \u2013rocksdb-write-policy Dynamic No Scope Global Data type String Default write_committed

    Specifies when two-phase commit data are written into the database. Allowed values are write_committed, write_prepared, and write_unprepared.

    Value Description write_committed Data written at commit time write_prepared Data written after the prepare phase of a two-phase transaction write_unprepared Data written before the prepare phase of a two-phase transaction"},{"location":"verifying-encryption.html","title":"Verify the encryption for tables, tablespaces, and schemas","text":"

    If a general tablespace contains tables, check the table information to see if the table is encrypted. When the general tablespace contains no tables, you can verify whether the tablespace itself is encrypted.

    For single tablespaces, verify the ENCRYPTION option using INFORMATION_SCHEMA.TABLES and the CREATE OPTIONS settings.

    mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM\n       INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';\n
    Expected output
    +----------------------+-------------------+------------------------------+\n| TABLE_SCHEMA         | TABLE_NAME        | CREATE_OPTIONS               |\n+----------------------+-------------------+------------------------------+\n|sample                | t1                | ENCRYPTION=\"Y\"               |\n+----------------------+-------------------+------------------------------+\n

    The flag field in INFORMATION_SCHEMA.INNODB_TABLESPACES has bit number 13 set if the tablespace is encrypted. This bit can be checked with the flag & 8192 expression, as follows:

    SELECT space, name, flag, (flag & 8192) != 0 AS encrypted FROM\nINFORMATION_SCHEMA.INNODB_TABLESPACES WHERE name in ('foo', 'test/t2', 'bar',\n'noencrypt');\n

    The encrypted table metadata is contained in the INFORMATION_SCHEMA.INNODB_TABLESPACES_ENCRYPTION table. You must have the Process privilege to view the table information.

    Note

    This table is in tech preview and may change in future releases.

       mysql> DESCRIBE INNODB_TABLESPACES_ENCRYPTION;\n
    Expected output
    +-----------------------------+--------------------+-----+----+--------+------+\n| Field                       | Type               | Null| Key| Default| Extra|\n+-----------------------------+--------------------+-----+----+--------+------+\n| SPACE                       | int(11) unsigned   | NO  |    |        |      |\n| NAME                        | varchar(655)       | YES |    |        |      |\n| ENCRYPTION_SCHEME           | int(11) unsigned   | NO  |    |        |      |\n| KEYSERVER_REQUESTS          | int(11) unsigned   | NO  |    |        |      |\n| MIN_KEY_VERSION             | int(11) unsigned   | NO  |    |        |      |\n| CURRENT_KEY_VERSION         | int(11) unsigned   | NO  |    |        |      |\n| KEY_ROTATION_PAGE_NUMBER    | bigint(21) unsigned| YES |    |        |      |\n| KEY_ROTATION_MAX_PAGE_NUMBER| bigint(21) unsigned| YES |    |        |      |\n| CURRENT_KEY_ID              | int(11) unsigned   | NO  |    |        |      |\n| ROTATING_OR_FLUSHING        | int(1) unsigned    | NO  |    |        |      |\n+-----------------------------+--------------------+-----+----+--------+------+\n
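
    For example, a query sketch that lists the key versions for encrypted tablespaces (the column selection is illustrative, taken from the structure above):

    mysql> SELECT SPACE, NAME, MIN_KEY_VERSION, CURRENT_KEY_VERSION FROM\n       INFORMATION_SCHEMA.INNODB_TABLESPACES_ENCRYPTION;\n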

    To identify encryption-enabled schemas, query the INFORMATION_SCHEMA.SCHEMATA table:

    mysql> SELECT SCHEMA_NAME, DEFAULT_ENCRYPTION FROM\nINFORMATION_SCHEMA.SCHEMATA WHERE DEFAULT_ENCRYPTION='YES';\n
    Expected output
    +------------------------------+---------------------------------+\n| SCHEMA_NAME                  | DEFAULT_ENCRYPTION              |\n+------------------------------+---------------------------------+\n| samples                      | YES                             |\n+------------------------------+---------------------------------+\n

    Note

    The SHOW CREATE SCHEMA statement returns the DEFAULT ENCRYPTION clause.
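
    For example, for the samples schema shown above (a sketch):

    mysql> SHOW CREATE SCHEMA samples;\n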

    "},{"location":"xtradb-performance-improvements-io-bound-highly-concurrent-workloads.html","title":"XtraDB performance improvements for I/O-bound highly-concurrent workloads","text":""},{"location":"xtradb-performance-improvements-io-bound-highly-concurrent-workloads.html#priority-refill-for-the-buffer-pool-free-list","title":"Priority refill for the buffer pool free list","text":"

    In highly-concurrent I/O-bound workloads the following situation may happen:

    • Buffer pool free lists are used faster than they are refilled by the LRU cleaner thread.

    • Buffer pool free lists become empty, and more and more query and utility (i.e., purge) threads stall, checking whether a buffer pool free list has become non-empty, sleeping, and performing single-page LRU flushes.

    • The number of buffer pool free list mutex waiters increases.

    • When the LRU manager thread (or a single page LRU flush by a query thread) finally produces a free page, it is starved from putting it on the buffer pool free list as it must acquire the buffer pool free list mutex too. However, being one thread in up to hundreds, the chances of a prompt acquisition are low.

    This is addressed by delegating all LRU flushes to the LRU manager thread, never attempting to evict a page or perform a single-page LRU flush from a query thread, and introducing a backoff algorithm to reduce buffer pool free list mutex pressure on empty buffer pool free lists. This is controlled through a new system variable innodb_empty_free_list_algorithm.

    "},{"location":"xtradb-performance-improvements-io-bound-highly-concurrent-workloads.html#innodb_empty_free_list_algorithm","title":"innodb_empty_free_list_algorithm","text":"Option Description Command-line: Yes Config file: Yes Scope: Global Dynamic: Yes Data type: legacy, backoff Default legacy

    When the legacy option is set, the server uses the upstream algorithm; when backoff is selected, the Percona implementation is used.
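
    For example, a sketch that selects the Percona backoff implementation at runtime (the variable is dynamic and global):

    mysql> SET GLOBAL innodb_empty_free_list_algorithm = 'backoff';\n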

    "},{"location":"xtradb-performance-improvements-io-bound-highly-concurrent-workloads.html#multi-threaded-lru-flusher","title":"Multi-threaded LRU flusher","text":"

    Percona Server for MySQL features true multi-threaded LRU flushing. In this scheme, each buffer pool instance has its own dedicated LRU manager thread that is tasked with performing LRU flushes and evictions to refill the free list of that buffer pool instance. The existing multi-threaded flusher no longer does any LRU flushing and is tasked with flush list flushing only. This design addresses the following limitations of the common multi-threaded flusher:

    • All threads still synchronize on each coordinator thread iteration. If a particular flushing job is stuck on one of the worker threads, the rest will idle until the stuck one completes.

    • The coordinator thread heuristics focus on flush list adaptive flushing without considering the state of free lists, which might be in need of urgent refill for a subset of buffer pool instances on a loaded server.

    • LRU flushing is serialized with flush list flushing for each buffer pool instance, introducing the risk that the right flushing mode will not happen for a particular instance because it is being flushed in the other mode.

    The following InnoDB metrics are no longer accounted for, as their semantics do not make sense under the current LRU flushing design: buffer_LRU_batch_flush_avg_time_slot, buffer_LRU_batch_flush_avg_pass, buffer_LRU_batch_flush_avg_time_thread, buffer_LRU_batch_flush_avg_time_est.

    The need for InnoDB recovery writer threads is also removed; consequently, all associated code is deleted.

    "},{"location":"xtradb-performance-improvements-io-bound-highly-concurrent-workloads.html#doublewrite-buffer","title":"Doublewrite buffer","text":"

    As of Percona Server for MySQL 8.0.20-11, the parallel doublewrite buffer is replaced with the MySQL implementation.

    "},{"location":"xtradb-performance-improvements-io-bound-highly-concurrent-workloads.html#innodb_parallel_doublewrite_path","title":"innodb_parallel_doublewrite_path","text":"Option Description Command-line: Yes Scope: Global Dynamic: No Data type: String Default xb_doublewrite

    As of Percona Server for MySQL 8.0.20-11, this variable is considered deprecated and has no effect. You should use innodb_doublewrite_dir.

    This variable is used to specify the location of the parallel doublewrite file. It accepts both absolute and relative paths. In the latter case they are treated as relative to the data directory.

    Percona Server for MySQL has introduced several options, available only in builds compiled with the UNIV_PERF_DEBUG C preprocessor symbol defined.

    "},{"location":"xtradb-performance-improvements-io-bound-highly-concurrent-workloads.html#innodb_sched_priority_master","title":"innodb_sched_priority_master","text":"Option Description Command-line: Yes Config file: Yes Scope: Global Dynamic: Yes Data type: Boolean

    This variable can be added to the configuration file.

    "},{"location":"xtradb-performance-improvements-io-bound-highly-concurrent-workloads.html#other-reading","title":"Other reading","text":"
    • Bug #74637 - make dirty page flushing more adaptive

    • Bug #67808 - in innodb engine, double write and multi-buffer pool instance reduce concurrency

    • Bug #69232 - buf_dblwr->mutex can be splited into two

    "},{"location":"yum-download-rpm.html","title":"Install Percona Server for MySQL using downloaded RPM packages","text":"

    You should be aware that when you install packages manually, you must resolve and install any dependencies. This process may involve finding and installing the necessary dependencies before installing the packages. Dependencies are other packages that a package may need to function correctly. For example, a package may rely on a specific library. If that library is not installed, or if the installed library is the wrong version, the package may not work correctly.

    Package managers, like APT or YUM, install the dependencies for you.

    Download the packages from Percona Product Downloads. If needed, instructions for the Percona Product Downloads are available.

    "},{"location":"yum-download-rpm.html#version-changes","title":"Version changes","text":"

    Starting with Percona Server 8.0.33-25, the RPM builds for RHEL 8 and RHEL 9 contain ARM packages with the aarch64.rpm extension. This means that Percona Server for MySQL is available for users on ARM-based systems.

    "},{"location":"yum-download-rpm.html#download-and-install-rpm-packages","title":"Download and install RPM packages","text":"

    The following example downloads Percona Server for MySQL 8.0.32-24 release packages for RHEL 8.

    1. Using Wget, the following command downloads a specific version of Percona Server for MySQL on Red Hat Enterprise Linux 8 from the Percona website.

      $ wget https://downloads.percona.com/downloads/Percona-Server-8.0/Percona-Server-8.0.32-24/binary/redhat/8/x86_64/Percona-Server-8.0.32-24-re5c6e9d2-el8-x86_64-bundle.tar\n
    2. The following command extracts the contents of Percona Server for MySQL tarball. The tar command uses these options for the extraction:

      • x - extract

      • v - a verbose description of the tar extraction

      • f - name of the archive file

      $ tar xvf Percona-Server-8.0.32-24-re5c6e9d2-el8-x86_64-bundle.tar\n
    3. The following command uses the ls utility to list the RPM files in the current directory. The command uses the *.rpm pattern. The * is a wildcard that matches any number of characters. The .rpm specifies that we only want the files that end in this extension.

      $ ls *.rpm\n
      The output should look like the following:

      Expected output
      percona-icu-data-files-8.0.32-24.1.el8.x86_64.rpm\npercona-mysql-router-8.0.32-24.1.el8.x86_64.rpm\npercona-mysql-router-debuginfo-8.0.32-24.1.el8.x86_64.rpm\npercona-server-client-8.0.32-24.1.el8.x86_64.rpm\npercona-server-client-debuginfo-8.0.32-24.1.el8.x86_64.rpm\npercona-server-debuginfo-8.0.32-24.1.el8.x86_64.rpm\npercona-server-debugsource-8.0.32-24.1.el8.x86_64.rpm\npercona-server-devel-8.0.32-24.1.el8.x86_64.rpm \npercona-server-rocksdb-8.0.32-24.1.el8.x86_64.rpm\npercona-server-rocksdb-debuginfo-8.0.32-24.1.el8.x86_64.rpm\npercona-server-server-8.0.32-24.1.el8.x86_64.rpm\npercona-server-server-debuginfo-8.0.32-24.1.el8.x86_64.rpm\npercona-server-shared-8.0.32-24.1.el8.x86_64.rpm\npercona-server-shared-compat-8.0.32-24.1.el8.x86_64.rpm\npercona-server-shared-debuginfo-8.0.32-24.1.el8.x86_64.rpm\npercona-server-test-8.0.32-24.1.el8.x86_64.rpm\npercona-server-test-debuginfo-8.0.32-24.1.el8.x86_64.rpm\n
    4. [Optional] Install jemalloc. The following command downloads a specific version of jemalloc for RHEL 8 from the Percona repository.

      $ wget https://repo.percona.com/yum/release/8/RPMS/x86_64/jemalloc-3.6.0-1.el8.x86_64.rpm\n
    5. On an EL8-based RHEL distribution or derivative, package installation requires you to disable the mysql module. We are installing a different version than the one provided by the module, so we must disable the module before installation.

      $ sudo yum module disable mysql\n
    6. The following command uses superuser privileges to install RPM packages in the current directory using the rpm command. The rpm command uses the following options:

      • i - install

      • v - verbose, describe the process in detail

      • h - display hash marks (#) to display installation progress

      $ sudo rpm -ivh *.rpm\n
    "},{"location":"yum-files.html","title":"Files in the RPM package built for Percona Server for MySQL 8.0","text":"

    Each of the Percona Server for MySQL RPM packages has a particular purpose.

    Package Contains percona-server-server Server itself (the mysqld binary) percona-server-debuginfo Debug symbols for the server percona-server-client Command line client percona-server-devel Header files needed to compile software using the client library. percona-server-shared Client shared library. percona-server-rocksdb The files for rocksdb installation. percona-mysql-router The mysql router. percona-server-shared-compat Shared libraries for software compiled against older versions of the client library. The following libraries are included in this package: libmysqlclient.so.12, libmysqlclient.so.14, libmysqlclient.so.15, libmysqlclient.so.16, and libmysqlclient.so.18. This package is not included in downloads for Red Hat Enterprise Linux 9 and derivatives. percona-server-test Includes the test suite for Percona Server for MySQL."},{"location":"yum-repo.html","title":"Install from Percona Software repository","text":"

    Ready-to-use packages are available from the Percona Server for MySQL software repositories and the download page.

    The Percona yum repository supports popular RPM-based operating systems. The easiest way to install the Percona RPM repository is to install an RPM package that configures yum and installs the Percona GPG key.

    We gather Telemetry data in the Percona packages and Docker images.

    Review Get more help for ways that we can work with you.

    "},{"location":"yum-repo.html#version-changes","title":"Version changes","text":"

    Starting with Percona Server 8.0.33-25, the RPM builds for RHEL 8 and RHEL 9 contain ARM packages with the aarch64.rpm extension. This means that Percona Server for MySQL is available for users on ARM-based systems.

    "},{"location":"yum-repo.html#supported-platforms","title":"Supported platforms","text":"

    Specific information on the supported platforms, products, and versions is described in Percona Software and Platform Lifecycle.

    "},{"location":"yum-repo.html#red-hat-certified","title":"Red Hat Certified","text":"

    Percona Server for MySQL is certified for Red Hat Enterprise Linux 8. This certification is based on common and secure best practices and successful interoperability with the operating system. Percona Server is listed in the Red Hat Ecosystem Catalog.

    "},{"location":"yum-repo.html#limitations","title":"Limitations","text":"

    The RPM packages for Red Hat Enterprise Linux 7 and the compatible derivatives do not support TLSv1.3. This version requires OpenSSL 1.1.1, which is currently unavailable on this platform.

    RHEL 8 and other EL8 systems enable the MySQL module by default. This module hides the Percona-provided packages and the module must be disabled to make these packages visible. The following command disables the module:

    $ sudo yum module disable mysql\n
    "},{"location":"yum-repo.html#install","title":"Install","text":"

    Install from Percona Software Repository

    For more information on the Percona Software repositories and configuring Percona Repositories with percona-release, see the Percona Software Repositories Documentation. Run the following commands as a root user or with sudo.

    Install on Red Hat 7

    The first command uses yum to install the Percona repository from the Percona website. The second command enables the ps-80 release series of Percona Server. The third command enables the tools repository. This repository contains additional Percona software. The fourth command installs Percona Server for MySQL.

    $ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release enable-only ps-80 release\n$ sudo percona-release enable tools release\n$ sudo yum install percona-server-server\n

    Install on Red Hat 8 or later

    The first command uses yum to install the Percona repository from the Percona website. The second command uses the percona-release script to set up the ps-80 release series of Percona Server. The third command installs Percona Server for MySQL.

    $ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm\n$ sudo percona-release setup ps-80\n$ sudo yum install percona-server-server\n
    "},{"location":"yum-repo.html#available-storage-engines","title":"Available storage engines","text":"

    Percona Server for MySQL 8.0 comes with the TokuDB storage engine and MyRocks storage engine. These storage engines are installed as plugins.

    Percona Server for MySQL 8.0.28-19 (2022-05-12) and higher do not support the TokuDB storage engine. We have removed the storage engine from the installation packages and disabled the storage engine in our binary builds. For more information, see TokuDB version changes.

    For information on how to install and configure TokuDB, refer to the TokuDB Installation guide.

    For information on how to install and configure MyRocks, refer to the Percona MyRocks Installation guide.

    "},{"location":"yum-repo.html#percona-yum-testing-repository","title":"Percona yum Testing repository","text":"

    Percona offers pre-release builds from our testing repository.

    To subscribe to the testing repository, edit /etc/yum.repos.d/percona-release.repo, update the second section, \u2018testing\u2019, and set both percona-testing-$basearch and percona-testing-noarch to enabled = 1.

    There are three sections in this file:

    • release

    • testing

    • experimental

    You must install the Percona repository first if the installation has not been done already.
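
    For illustration only, the testing entries would look similar to the following after editing (the stanza names come from the section above; the other lines in the file are elided):

    [percona-testing-$basearch]\n...\nenabled = 1\n\n[percona-testing-noarch]\n...\nenabled = 1\n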

    "},{"location":"yum-run.html","title":"Run Percona Server for MySQL","text":"

    Percona Server for MySQL stores the data files in /var/lib/mysql/ by default. The configuration file used to manage Percona Server for MySQL is /etc/my.cnf.

    "},{"location":"yum-run.html#manage-the-service","title":"Manage the service","text":"

    Use systemctl to manage system services and daemons in a Linux environment. It provides a systematic and unified interface for controlling the state of services and checking their status. With systemctl, you can start a service to initiate its operations, stop it to terminate its operations, restart it to refresh its state, and check its status to monitor its performance and health. This command is essential for system administrators who need precise control over service management tasks.

    The RHEL distributions and derivatives come with systemd as the default system and service manager.

    systemctl is a command-line utility that is used to control the systemd system and service manager. It\u2019s a primary tool for managing services on Linux distributions that use systemd.

    You can use either systemctl or service. Currently, both options are supported.

    Percona Server for MySQL is not started automatically on the RHEL distributions and derivatives after installation.

    The following command uses superuser privileges to start, check the status, stop, and restart the MySQL service using systemctl.

    $ sudo systemctl start mysql\n$ sudo systemctl status mysql\n$ sudo systemctl stop mysql\n$ sudo systemctl restart mysql\n
    "},{"location":"yum-run.html#selinux-and-security","title":"SELinux and security","text":"

    Security-Enhanced Linux (SELinux) is a security feature of the Linux operating system that provides a mechanism for supporting access control security policies. This ability includes mandatory access controls (MAC), which the United States National Security Agency (NSA) implemented and first introduced in CentOS and Red Hat Enterprise Linux.

    As you administer your system, SELinux gives you a framework to manage access controls, such as setting permissions and deciding which users or programs can access specific files. SELinux operates on the principle of least privilege, where every process and system user runs with only the minimum permissions necessary to function.

    For information on SELinux, see Working with SELinux.

    The RHEL 8 distributions and derivatives have added a system-wide cryptographic policies component. This component lets administrators manage the cryptographic compliance of the entire system with a single command. This ability simplifies the task of meeting specific security requirements for cryptographic algorithms.

    "},{"location":"yum-uninstall.html","title":"Uninstall Percona Server for MySQL","text":"

    To completely uninstall Percona Server for MySQL, remove all the installed packages and data files.

    1. Stop the Percona Server for MySQL service:

      $ sudo systemctl stop mysql\n
    2. As a superuser, either root or using sudo, use yum and the remove option. You can use the remove option to remove a specific package or a group of packages. In the example, the command removes all packages starting with percona-server.

      $ sudo yum remove percona-server*\n
    3. The first command removes the /var/lib/mysql directory and everything within it. The second command removes the /etc/my.cnf file, the main configuration file for Percona Server for MySQL.

      Warning

      This step removes all the packages and deletes all the data files (databases, tables, logs, and other files). Take a backup before doing this in case you need the data.

      $ rm -rf /var/lib/mysql\n$ rm -f /etc/my.cnf\n
    "},{"location":"zenfs.html","title":"Installing and configuring Percona Server for MySQL with ZenFS support","text":"

    Implemented in Percona Server for MySQL 8.0.26-16.

    A solid state drive (SSD) does not overwrite data like a magnetic hard disk drive. Data must be written to an empty page. A known SSD issue is write amplification, which occurs when the same data is written multiple times.

    An SSD is organized in pages and blocks. Data is written in pages and erased in blocks. For example, suppose a page holds 8 KB of data and the application updates one sector (512 bytes) of that page. The controller reads the page into RAM, marks the old page as stale, updates the sector, and then writes a new page with this 8 KB of data. The process uses the storage space efficiently but also shortens the SSD lifespan because the SSD parts do wear out.

    Garbage collection can also cause large-scale write amplification. The stale data is erased in blocks, which can consist of hundreds of pages. The SSD controller searches for pages that are marked stale. Pages that are not stale but are stored in that block are moved to another block before the block is erased and marked ready for use.

    The zone storage model organizes the SSD into a set of zones that are uniform in size and uses the Zoned Namespaces (ZNS) technology. ZNS is optimized for an SSD and exposes this zoned block storage interface between the host and SSD. ZNS enables smart data placement. Writes are sequential within a zone.

    ZenFS is a file system plugin for RocksDB which uses the RocksDB file system to place files into zones on a raw zoned block device (ZBD). The plugin adds native support for ZNS, avoids on device garbage collection, and minimizes write amplification. File data is stored in a set of extents. Within a zone, extents are a contiguous part of the address space. Garbage collection is an option, but this selection can cause write amplification.

    ZenFS depends on the libzbd user library and requires a Linux kernel that supports NVMe Zoned Namespaces. The kernel must be configured with zoned block device support enabled.
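
    As a quick sanity check (these are standard Linux kernel interfaces, not commands from the original text), you can verify that the running kernel was built with zoned block device support and that the device reports a zoned model:

      $ # CONFIG_BLK_DEV_ZONED=y indicates zoned block device support
      $ grep CONFIG_BLK_DEV_ZONED /boot/config-$(uname -r)
      $ # a ZNS device reports host-managed here
      $ cat /sys/block/<short_block_device_name>/queue/zoned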

    Read the Western Digital and Percona deliver Ultrastar DC ZN540 Zoned Namespace SSD support for Percona Server for MySQL PDF for more information.

    The following procedure installs Percona Server for MySQL and then configures --rocksdb-fs-uri=zenfs://dev:<short_block_device_name> for data storage.

    Note

    The <block_device_name> can have a short name designation, which is the <short_block_device_name>. For example, if the <block_device_name> is /dev/nvme0n2, remove the /dev/ portion; the <short_block_device_name> is nvme0n2. The block device name and the short block device name must be substituted with the appropriate names from your system. To indicate that such a substitution is needed in statements, we use <block_device_name> and <short_block_device_name>.
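
    For instance, the option can go in the server configuration file. The following minimal sketch assumes the example device nvme0n2 from this note and uses the option spelling shown above; it is illustrative, not a snippet from the original instructions:

      [mysqld]
      # place MyRocks data on the zoned device through ZenFS
      rocksdb-fs-uri=zenfs://dev:nvme0n2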

    For the moment, the ZenFS plugin can be enabled in the following distributions:

    Distribution Name | Notes
    --- | ---
    Debian 11.1 | Able to run the ZenFS plugin
    Ubuntu 20.04.3 | Requires the 5.11 HWE kernel patched with the allow blk-zoned ioctls without CAP_SYS_ADMIN patch

    On Ubuntu 20.04, the binaries with ZenFS support can run on the standard 5.4 kernel, but in that case the ZenFS functionality is not enabled.
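
    To confirm which kernel you are running before deciding whether ZenFS can be enabled (a standard command; the output is an illustrative example):

      $ uname -r
      5.11.0-40-generic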

    Other Linux distributions are adding support for ZenFS, but Percona does not provide installation packages for those distributions.

    "},{"location":"zenfs.html#installation","title":"Installation","text":"

    Start with the installation of Percona Server for MySQL.

    1. The steps are listed here for convenience; for an explanation, see Installing Percona Server for MySQL from Percona apt repository.

      $ wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
      $ sudo apt install gnupg2 lsb-release ./percona-release_latest.*_all.deb
      $ sudo percona-release setup ps80
    2. Install Percona Server for MySQL with MyRocks and the ZenFS plugin package. The binaries are listed in the Installing Percona Server for MySQL from a Binary Tarball section of the Percona Server for MySQL installation instructions.

      $ sudo apt install percona-server-server
    3. Install the RocksDB plugin package. This package copies ha_rocksdb.so into a predefined location but does not enable the RocksDB storage engine; one way to enable it is sketched after these steps.

      $ sudo apt install percona-server-rocksdb
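
    At this point the plugin library is installed but the storage engine is inactive. One way to enable it is with the ps-admin helper script that ships with Percona Server; this step is a sketch, not part of the original instructions:

      $ # enable the RocksDB storage engine, supplying MySQL credentials
      $ sudo ps-admin --enable-rocksdb -u root -p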
    "},{"location":"zenfs.html#configuration","title":"Configuration","text":"
    1. Identify your ZBD device, <block_device_name>, with lsblk. Add the -o option and specify which columns to print.

      In the example, the NAME column returns the block device name, the SIZE column returns the size of the device, and the ZONED column reports whether the device uses the zoned model. The value host-managed identifies a zoned block device.

      $ lsblk -o NAME,SIZE,ZONED
      NAME                         SIZE  ZONED
      sda                        247.9G  none
      |-sda1                     230.9G  none
      |-sda2                          1G  none
      |-sda3                         16G  none
      <short_block_device_name>    7.2T  host-managed
    2. Change the ownership of <block_device_name> to the mysql:mysql user account.

      $ sudo chown mysql:mysql <block_device_name>
    3. Change the permissions on <block_device_name> so that the owner can read and write and the mysql group can read, in case the group members must take a backup.

      $ sudo chmod 640 <block_device_name>
    4. Change the scheduler to mq-deadline with a udev rule. Create /etc/udev/rules.d/60-scheduler.rules if the file does not exist, and add the following rule:

      ACTION==\"add|change\", KERNEL==\"<short_block_device_name>\", ATTR{queue/scheduler}=\"mq-deadline\"\n
    5. Restart the machine to apply the rule.

    6. Verify that the rule was applied correctly by running the following command:

      $ cat /sys/block/<short_block_device_name>/queue/scheduler
    7. Check that the output of the previous command matches:

      [mq-deadline] none
    8. Create an auxiliary directory for ZenFS, for example /var/lib/mysql_aux_nvme0n2.

      The ZenFS auxiliary directory is a regular (POSIX) directory used internally to resolve file locks and shared access. There are no strict requirements for its location, but the directory must be writable by the mysql:mysql UNIX system user account. Each ZBD must have its own auxiliary directory. We recommend placing this directory at the same level as /var/lib/mysql, the default Percona Server for MySQL directory; see the sketch below.
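
      For example, the directory could be created and handed to the mysql user as follows. This is a sketch using the example name from this step, and the 750 permission mode is an illustrative choice, not from the original text:

      $ # create the per-device auxiliary directory and give mysql ownership
      $ sudo mkdir /var/lib/mysql_aux_nvme0n2
      $ sudo chown mysql:mysql /var/lib/mysql_aux_nvme0n2
      $ sudo chmod 750 /var/lib/mysql_aux_nvme0n2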

      Note

      AppArmor is enabled by default in Debian 11. If your AppArmor mode is set to enforce, you must edit the profile to allow access to these locations. Add the following rules to usr.sbin.mysqld:

      /var/lib/mysql_aux_*/ r,
      /var/lib/mysql_aux_*/** rwk,

      Don't forget to reload the policy if you make edits:

      $ sudo service apparmor reload

      For more information, see Working with AppArmor.

      Note

      If you must configure ZenFS to use a directory inside /var/lib (owned by root:root, without write permissions for other user accounts), edit your AppArmor profile as described in the earlier step, if needed, and do the following steps manually: