+ How the system refers to this metric, e.g., sql.bytesin.
+
+
+
+
+ Downsampler
+
+
+
+ The "Downsampler" operation combines the individual datapoints over a longer period into a single datapoint. We store one data point every ten seconds, but for queries over long time spans the backend lowers the resolution of the returned data, returning only one data point per minute, per five minutes, or even per hour in the case of the 30-day view.
+
+
+ Options:
+
+
+ AVG: Returns the average value over the time period.
+
+ MIN: Returns the lowest value seen.
+
+ MAX: Returns the highest value seen.
+
+ SUM: Returns the sum of all values seen.
+
+
+
+
+
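+ The four downsampler operations are simple reductions over the raw 10-second samples in a window. The following is a minimal sketch of that arithmetic (illustrative only, not the actual timeseries engine code):

```go
package main

import "fmt"

// downsample reduces the raw samples in one window to a single datapoint
// using one of the four operations: AVG, MIN, MAX, or SUM.
func downsample(samples []float64, op string) float64 {
	result := samples[0]
	sum := 0.0
	for _, s := range samples {
		sum += s
		if op == "MIN" && s < result {
			result = s
		}
		if op == "MAX" && s > result {
			result = s
		}
	}
	switch op {
	case "AVG":
		return sum / float64(len(samples))
	case "SUM":
		return sum
	}
	return result
}

func main() {
	// Six 10-second samples covering a one-minute window.
	window := []float64{4, 8, 6, 2, 10, 6}
	fmt.Println(downsample(window, "AVG")) // 6
	fmt.Println(downsample(window, "MIN")) // 2
	fmt.Println(downsample(window, "MAX")) // 10
	fmt.Println(downsample(window, "SUM")) // 36
}
```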
+
+ Aggregator
+
+
+
+ Used to combine data points from different nodes. It has the same operations available as the Downsampler.
+
+
+ Options:
+
+
+ AVG: Returns the average value over the time period.
+
+ MIN: Returns the lowest value seen.
+
+ MAX: Returns the highest value seen.
+
+ SUM: Returns the sum of all values seen.
+
+
+
+
+
+
+ Rate
+
+
+
+ Determines how to display the rate of change during the selected time period.
+
+
+ Options:
+
+
+
+ Normal: Returns the actual recorded value.
+
+
+ Rate: Returns the rate of change of the value per second.
+
+
+ Non-negative Rate: Returns the rate of change, but returns 0 instead of negative values. Many of the stats we track are monotonically increasing counters, so each sample is just the total value of that counter; the rate of change of the counter represents the rate of events being counted, which is usually what you want to graph. "Non-negative Rate" is needed because the counters are stored in memory: if a node restarts, the counter resets to zero, whereas normally it only increases.
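+ As a sketch of the difference between the two rate options: the per-second rate is the delta between successive counter samples divided by the sample interval, and the non-negative variant clamps the negative delta produced by a counter reset to zero. (Illustrative only; a fixed 10-second interval is assumed.)

```go
package main

import "fmt"

const sampleInterval = 10.0 // seconds between stored samples

// rate returns the per-second rate of change between two counter samples.
func rate(prev, cur float64) float64 {
	return (cur - prev) / sampleInterval
}

// nonNegativeRate clamps negative rates (e.g., after a node restart
// resets an in-memory counter to zero) to 0.
func nonNegativeRate(prev, cur float64) float64 {
	r := rate(prev, cur)
	if r < 0 {
		return 0
	}
	return r
}

func main() {
	fmt.Println(rate(100, 150))          // 5: counter grew by 50 in 10s
	fmt.Println(rate(150, 0))            // -15: node restarted, counter reset
	fmt.Println(nonNegativeRate(150, 0)) // 0: reset clamped instead of a negative spike
}
```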
+
+
+
+
+
+
+
+ Source
+
+
+ The set of nodes being queried, which is either:
+
+
+ The entire cluster.
+
+
+ A single, named node.
+
+
+
+
+
+
+ Per Node
+
+
+ If checked, the chart will show a line for each node's value of this metric.
+
+
+
+
diff --git a/_includes/v20.2/admin-ui/admin-ui-log-files.md b/_includes/v20.2/admin-ui/admin-ui-log-files.md
new file mode 100644
index 00000000000..51ed9c3aee5
--- /dev/null
+++ b/_includes/v20.2/admin-ui/admin-ui-log-files.md
@@ -0,0 +1,7 @@
+Log files can be accessed using the Admin UI, which displays them in JSON format.
+
+1. [Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click [**Advanced Debug**](admin-ui-debug-pages.html) in the left-hand navigation.
+
+2. Under **Raw Status Endpoints (JSON)**, click **Log Files** to view the JSON of all collected logs.
+
+3. Copy one of the log filenames. Then click **Specific Log File** and replace the `cockroach.log` placeholder in the URL with the filename.
\ No newline at end of file
diff --git a/_includes/v20.2/admin-ui/admin-ui-metrics-navigation.md b/_includes/v20.2/admin-ui/admin-ui-metrics-navigation.md
new file mode 100644
index 00000000000..6516c21dfce
--- /dev/null
+++ b/_includes/v20.2/admin-ui/admin-ui-metrics-navigation.md
@@ -0,0 +1,5 @@
+## Dashboard navigation
+
+Use the **Graph** menu to display metrics for your entire cluster or for a specific node.
+
+To the right of the Graph and Dashboard menus, a range selector allows you to filter the view for a predefined timeframe or custom date/time range. Use the navigation buttons to move to the previous, next, or current timeframe. Note that the active timeframe is reflected in the URL and can be easily shared.
\ No newline at end of file
diff --git a/_includes/v20.2/admin-ui/logical-bytes.md b/_includes/v20.2/admin-ui/logical-bytes.md
new file mode 100644
index 00000000000..e85f04cea92
--- /dev/null
+++ b/_includes/v20.2/admin-ui/logical-bytes.md
@@ -0,0 +1 @@
+Logical bytes reflect the approximate number of bytes stored in the database. This value may deviate from the number of physical bytes on disk, due to factors such as compression and [write amplification](https://en.wikipedia.org/wiki/Write_amplification).
\ No newline at end of file
diff --git a/_includes/v20.2/app/BasicExample.java b/_includes/v20.2/app/BasicExample.java
new file mode 100644
index 00000000000..d63c30f6aba
--- /dev/null
+++ b/_includes/v20.2/app/BasicExample.java
@@ -0,0 +1,437 @@
+import java.util.*;
+import java.time.*;
+import java.sql.*;
+import javax.sql.DataSource;
+
+import org.postgresql.ds.PGSimpleDataSource;
+
+/*
+ Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
+
+ Then, compile and run this example like so:
+
+ $ export CLASSPATH=.:/path/to/postgresql.jar
+ $ javac BasicExample.java && java BasicExample
+
+ To build the javadoc:
+
+ $ javadoc -package -cp .:./path/to/postgresql.jar BasicExample.java
+
+ At a high level, this code consists of two classes:
+
+ 1. BasicExample, which is where the application logic lives.
+
+ 2. BasicExampleDAO, which is used by the application to access the
+ data store.
+
+*/
+
+public class BasicExample {
+
+ public static void main(String[] args) {
+
+ // Configure the database connection.
+ PGSimpleDataSource ds = new PGSimpleDataSource();
+ ds.setServerName("localhost");
+ ds.setPortNumber(26257);
+ ds.setDatabaseName("bank");
+ ds.setUser("maxroach");
+ ds.setPassword(null);
+ ds.setSsl(true);
+ ds.setSslMode("require");
+ ds.setSslCert("certs/client.maxroach.crt");
+ ds.setSslKey("certs/client.maxroach.key.pk8");
+ ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string
+ ds.setApplicationName("BasicExample");
+
+ // Create DAO.
+ BasicExampleDAO dao = new BasicExampleDAO(ds);
+
+ // Test our retry handling logic if FORCE_RETRY is true. This
+ // method is only used to test the retry logic. It is not
+ // necessary in production code.
+ dao.testRetryHandling();
+
+ // Set up the 'accounts' table.
+ dao.createAccounts();
+
+ // Insert a few accounts "by hand", using INSERTs on the backend.
+        Map<String, String> balances = new HashMap<>();
+ balances.put("1", "1000");
+ balances.put("2", "250");
+ int updatedAccounts = dao.updateAccounts(balances);
+ System.out.printf("BasicExampleDAO.updateAccounts:\n => %s total updated accounts\n", updatedAccounts);
+
+ // How much money is in these accounts?
+ int balance1 = dao.getAccountBalance(1);
+ int balance2 = dao.getAccountBalance(2);
+ System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
+
+ // Transfer $100 from account 1 to account 2
+ int fromAccount = 1;
+ int toAccount = 2;
+ int transferAmount = 100;
+ int transferredAccounts = dao.transferFunds(fromAccount, toAccount, transferAmount);
+ if (transferredAccounts != -1) {
+ System.out.printf("BasicExampleDAO.transferFunds:\n => $%s transferred between accounts %s and %s, %s rows updated\n", transferAmount, fromAccount, toAccount, transferredAccounts);
+ }
+
+ balance1 = dao.getAccountBalance(1);
+ balance2 = dao.getAccountBalance(2);
+ System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
+
+ // Bulk insertion example using JDBC's batching support.
+ int totalRowsInserted = dao.bulkInsertRandomAccountData();
+ System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n => finished, %s total rows inserted\n", totalRowsInserted);
+
+ // Print out 10 account values.
+ int accountsRead = dao.readAccounts(10);
+
+ // Drop the 'accounts' table so this code can be run again.
+ dao.tearDown();
+ }
+}
+
+/**
+ * Data access object used by 'BasicExample'. Abstraction over some
+ * common CockroachDB operations, including:
+ *
+ * - Auto-handling transaction retries in the 'runSQL' method
+ *
+ * - Example of bulk inserts in the 'bulkInsertRandomAccountData'
+ * method
+ */
+
+class BasicExampleDAO {
+
+ private static final int MAX_RETRY_COUNT = 3;
+ private static final String SAVEPOINT_NAME = "cockroach_restart";
+ private static final String RETRY_SQL_STATE = "40001";
+ private static final boolean FORCE_RETRY = false;
+
+ private final DataSource ds;
+
+ BasicExampleDAO(DataSource ds) {
+ this.ds = ds;
+ }
+
+ /**
+ Used to test the retry logic in 'runSQL'. It is not necessary
+ in production code.
+ */
+ void testRetryHandling() {
+ if (this.FORCE_RETRY) {
+ runSQL("SELECT crdb_internal.force_retry('1s':::INTERVAL)");
+ }
+ }
+
+ /**
+ * Run SQL code in a way that automatically handles the
+ * transaction retry logic so we don't have to duplicate it in
+ * various places.
+ *
+ * @param sqlCode a String containing the SQL code you want to
+ * execute. Can have placeholders, e.g., "INSERT INTO accounts
+ * (id, balance) VALUES (?, ?)".
+ *
+ * @param args String Varargs to fill in the SQL code's
+ * placeholders.
+ * @return Integer Number of rows updated, or -1 if an error is thrown.
+ */
+ public Integer runSQL(String sqlCode, String... args) {
+
+ // This block is only used to emit class and method names in
+ // the program output. It is not necessary in production
+ // code.
+ StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
+ StackTraceElement elem = stacktrace[2];
+ String callerClass = elem.getClassName();
+ String callerMethod = elem.getMethodName();
+
+ int rv = 0;
+
+ try (Connection connection = ds.getConnection()) {
+
+ // We're managing the commit lifecycle ourselves so we can
+ // automatically issue transaction retries.
+ connection.setAutoCommit(false);
+
+ int retryCount = 0;
+
+ while (retryCount < MAX_RETRY_COUNT) {
+
+ Savepoint sp = connection.setSavepoint(SAVEPOINT_NAME);
+
+ // This block is only used to test the retry logic.
+ // It is not necessary in production code. See also
+ // the method 'testRetryHandling()'.
+ if (FORCE_RETRY) {
+ forceRetry(connection); // SELECT 1
+ }
+
+ try (PreparedStatement pstmt = connection.prepareStatement(sqlCode)) {
+
+ // Loop over the args and insert them into the
+ // prepared statement based on their types. In
+ // this simple example we classify the argument
+ // types as "integers" and "everything else"
+ // (a.k.a. strings).
+                    for (int i = 0; i < args.length; i++) {
+                        int place = i + 1;
+                        String arg = args[i];
+
+                        try {
+                            int val = Integer.parseInt(arg);
+                            pstmt.setInt(place, val);
+                        } catch (NumberFormatException e) {
+                            pstmt.setString(place, arg);
+                        }
+                    }
+
+                    if (pstmt.execute()) {
+                        // We know that `pstmt.getResultSet()` will not return
+                        // `null` if `pstmt.execute()` returned true.
+                        ResultSet rs = pstmt.getResultSet();
+                        ResultSetMetaData rsmeta = rs.getMetaData();
+                        int colCount = rsmeta.getColumnCount();
+
+                        // This printed output is for debugging and/or demonstration
+                        // purposes only. It would not be necessary in production code.
+                        System.out.printf("\n%s.%s:\n    '%s'\n", callerClass, callerMethod, pstmt);
+
+                        while (rs.next()) {
+                            for (int i = 1; i <= colCount; i++) {
+                                String name = rsmeta.getColumnName(i);
+                                String type = rsmeta.getColumnTypeName(i);
+
+                                // In this simple example we only expect integer
+                                // values (INT8, the CockroachDB default). This
+                                // could be made into a switch statement to handle
+                                // the various SQL types needed by the application.
+                                if ("int8".equals(type)) {
+                                    int val = rs.getInt(name);
+
+                                    // This printed output is for debugging and/or
+                                    // demonstration purposes only.
+                                    System.out.printf("    %-8s => %10s\n", name, val);
+ }
+ }
+ }
+ } else {
+ int updateCount = pstmt.getUpdateCount();
+ rv += updateCount;
+
+ // This printed output is for debugging and/or demonstration
+ // purposes only. It would not be necessary in production code.
+ System.out.printf("\n%s.%s:\n '%s'\n", callerClass, callerMethod, pstmt);
+ }
+
+ connection.releaseSavepoint(sp);
+ connection.commit();
+ break;
+
+ } catch (SQLException e) {
+
+ if (RETRY_SQL_STATE.equals(e.getSQLState())) {
+ System.out.printf("retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n",
+ e.getSQLState(), e.getMessage(), retryCount);
+ connection.rollback(sp);
+ retryCount++;
+ rv = -1;
+ } else {
+ rv = -1;
+ throw e;
+ }
+ }
+ }
+ } catch (SQLException e) {
+ System.out.printf("BasicExampleDAO.runSQL ERROR: { state => %s, cause => %s, message => %s }\n",
+ e.getSQLState(), e.getCause(), e.getMessage());
+ rv = -1;
+ }
+
+ return rv;
+ }
+
+ /**
+ * Helper method called by 'testRetryHandling'. It simply issues
+ * a "SELECT 1" inside the transaction to force a retry. This is
+ * necessary to take the connection's session out of the AutoRetry
+ * state, since otherwise the other statements in the session will
+ * be retried automatically, and the client (us) will not see a
+ * retry error. Note that this information is taken from the
+ * following test:
+ * https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/logic_test/manual_retry
+ *
+ * @param connection Connection
+ */
+ private void forceRetry(Connection connection) throws SQLException {
+ try (PreparedStatement statement = connection.prepareStatement("SELECT 1")){
+ statement.executeQuery();
+ }
+ }
+
+ /**
+ * Creates a fresh, empty accounts table in the database.
+ */
+ public void createAccounts() {
+ runSQL("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))");
+ };
+
+ /**
+ * Update accounts by passing in a Map of (ID, Balance) pairs.
+ *
+ * @param accounts (Map)
+ * @return The number of updated accounts (int)
+ */
+    public int updateAccounts(Map<String, String> accounts) {
+        int rows = 0;
+        for (Map.Entry<String, String> account : accounts.entrySet()) {
+
+ String k = account.getKey();
+ String v = account.getValue();
+
+ String[] args = {k, v};
+ rows += runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", args);
+ }
+ return rows;
+ }
+
+ /**
+ * Transfer funds between one account and another. Handles
+ * transaction retries in case of conflict automatically on the
+ * backend.
+ * @param fromId (int)
+ * @param toId (int)
+ * @param amount (int)
+ * @return The number of updated accounts (int)
+ */
+ public int transferFunds(int fromId, int toId, int amount) {
+ String sFromId = Integer.toString(fromId);
+ String sToId = Integer.toString(toId);
+ String sAmount = Integer.toString(amount);
+
+ // We have omitted explicit BEGIN/COMMIT statements for
+ // brevity. Individual statements are treated as implicit
+ // transactions by CockroachDB (see
+ // https://www.cockroachlabs.com/docs/stable/transactions.html#individual-statements).
+
+ String sqlCode = "UPSERT INTO accounts (id, balance) VALUES" +
+ "(?, ((SELECT balance FROM accounts WHERE id = ?) - ?))," +
+ "(?, ((SELECT balance FROM accounts WHERE id = ?) + ?))";
+
+ return runSQL(sqlCode, sFromId, sFromId, sAmount, sToId, sToId, sAmount);
+ }
+
+ /**
+ * Get the account balance for one account.
+ *
+ * We skip using the retry logic in 'runSQL()' here for the
+ * following reasons:
+ *
+ * 1. Since this is a single read ("SELECT"), we don't expect any
+ * transaction conflicts to handle
+ *
+ * 2. We need to return the balance as an integer
+ *
+ * @param id (int)
+ * @return balance (int)
+ */
+ public int getAccountBalance(int id) {
+ int balance = 0;
+
+ try (Connection connection = ds.getConnection()) {
+
+ // Check the current balance.
+ ResultSet res = connection.createStatement()
+ .executeQuery("SELECT balance FROM accounts WHERE id = "
+ + id);
+ if(!res.next()) {
+                System.out.printf("No users in the table with id %d\n", id);
+ } else {
+ balance = res.getInt("balance");
+ }
+ } catch (SQLException e) {
+ System.out.printf("BasicExampleDAO.getAccountBalance ERROR: { state => %s, cause => %s, message => %s }\n",
+ e.getSQLState(), e.getCause(), e.getMessage());
+ }
+
+ return balance;
+ }
+
+ /**
+ * Insert randomized account data (ID, balance) using the JDBC
+ * fast path for bulk inserts. The fastest way to get data into
+ * CockroachDB is the IMPORT statement. However, if you must bulk
+ * ingest from the application using INSERT statements, the best
+ * option is the method shown here. It will require the following:
+ *
+ * 1. Add `rewriteBatchedInserts=true` to your JDBC connection
+ * settings (see the connection info in 'BasicExample.main').
+ *
+ * 2. Inserting in batches of 128 rows, as used inside this method
+ * (see BATCH_SIZE), since the PGJDBC driver's logic works best
+ * with powers of two, such that a batch of size 128 can be 6x
+ * faster than a batch of size 250.
+ * @return The number of new accounts inserted (int)
+ */
+ public int bulkInsertRandomAccountData() {
+
+ Random random = new Random();
+ int BATCH_SIZE = 128;
+ int totalNewAccounts = 0;
+
+ try (Connection connection = ds.getConnection()) {
+
+ // We're managing the commit lifecycle ourselves so we can
+ // control the size of our batch inserts.
+ connection.setAutoCommit(false);
+
+ // In this example we are adding 500 rows to the database,
+ // but it could be any number. What's important is that
+ // the batch size is 128.
+ try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
+ for (int i=0; i<=(500/BATCH_SIZE);i++) {
+                    for (int j = 0; j < BATCH_SIZE; j++) {
+                        int id = random.nextInt(1000000000);
+                        int balance = random.nextInt(1000000000);
+                        pstmt.setInt(1, id);
+                        pstmt.setInt(2, balance);
+                        pstmt.addBatch();
+                        totalNewAccounts++;
+                    }
+                    int[] count = pstmt.executeBatch();
+                    System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n    => %s row(s) updated in this batch\n", count.length);
+ }
+ connection.commit();
+ } catch (SQLException e) {
+ System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
+ e.getSQLState(), e.getCause(), e.getMessage());
+ }
+ } catch (SQLException e) {
+ System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
+ e.getSQLState(), e.getCause(), e.getMessage());
+ }
+ return totalNewAccounts;
+ }
+
+ /**
+ * Read out a subset of accounts from the data store.
+ *
+ * @param limit (int)
+ * @return Number of accounts read (int)
+ */
+ public int readAccounts(int limit) {
+ return runSQL("SELECT id, balance FROM accounts LIMIT ?", Integer.toString(limit));
+ }
+
+ /**
+ * Perform any necessary cleanup of the data store so it can be
+ * used again.
+ */
+ public void tearDown() {
+ runSQL("DROP TABLE accounts;");
+ }
+}
diff --git a/_includes/v20.2/app/BasicSample.java b/_includes/v20.2/app/BasicSample.java
new file mode 100644
index 00000000000..25d326dd4e0
--- /dev/null
+++ b/_includes/v20.2/app/BasicSample.java
@@ -0,0 +1,55 @@
+import java.sql.*;
+import java.util.Properties;
+
+/*
+ Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
+
+ Then, compile and run this example like so:
+
+ $ export CLASSPATH=.:/path/to/postgresql.jar
+ $ javac BasicSample.java && java BasicSample
+*/
+
+public class BasicSample {
+ public static void main(String[] args)
+ throws ClassNotFoundException, SQLException {
+
+ // Load the Postgres JDBC driver.
+ Class.forName("org.postgresql.Driver");
+
+ // Connect to the "bank" database.
+ Properties props = new Properties();
+ props.setProperty("user", "maxroach");
+ props.setProperty("sslmode", "require");
+ props.setProperty("sslrootcert", "certs/ca.crt");
+ props.setProperty("sslkey", "certs/client.maxroach.key.pk8");
+ props.setProperty("sslcert", "certs/client.maxroach.crt");
+ props.setProperty("ApplicationName", "roachtest");
+
+ Connection db = DriverManager
+ .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
+
+ try {
+ // Create the "accounts" table.
+ db.createStatement()
+ .execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
+
+ // Insert two rows into the "accounts" table.
+ db.createStatement()
+ .execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
+
+ // Print out the balances.
+ System.out.println("Initial balances:");
+ ResultSet res = db.createStatement()
+ .executeQuery("SELECT id, balance FROM accounts");
+ while (res.next()) {
+ System.out.printf("\taccount %s: %s\n",
+ res.getInt("id"),
+ res.getInt("balance"));
+ }
+ } finally {
+ // Close the database connection.
+ db.close();
+ }
+ }
+}
diff --git a/_includes/v20.2/app/TxnSample.java b/_includes/v20.2/app/TxnSample.java
new file mode 100644
index 00000000000..624e67c80d5
--- /dev/null
+++ b/_includes/v20.2/app/TxnSample.java
@@ -0,0 +1,148 @@
+import java.sql.*;
+import java.util.Properties;
+
+/*
+ Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
+
+ Then, compile and run this example like so:
+
+ $ export CLASSPATH=.:/path/to/postgresql.jar
+ $ javac TxnSample.java && java TxnSample
+*/
+
+// Ambiguous whether the transaction committed or not.
+class AmbiguousCommitException extends SQLException{
+ public AmbiguousCommitException(Throwable cause) {
+ super(cause);
+ }
+}
+
+class InsufficientBalanceException extends Exception {}
+
+class AccountNotFoundException extends Exception {
+ public int account;
+ public AccountNotFoundException(int account) {
+ this.account = account;
+ }
+}
+
+// A simple interface that provides a retryable lambda expression.
+interface RetryableTransaction {
+ public void run(Connection conn)
+ throws SQLException, InsufficientBalanceException,
+ AccountNotFoundException, AmbiguousCommitException;
+}
+
+public class TxnSample {
+ public static RetryableTransaction transferFunds(int from, int to, int amount) {
+ return new RetryableTransaction() {
+ public void run(Connection conn)
+ throws SQLException, InsufficientBalanceException,
+ AccountNotFoundException, AmbiguousCommitException {
+
+ // Check the current balance.
+ ResultSet res = conn.createStatement()
+ .executeQuery("SELECT balance FROM accounts WHERE id = "
+ + from);
+ if(!res.next()) {
+ throw new AccountNotFoundException(from);
+ }
+
+ int balance = res.getInt("balance");
+                if(balance < amount) {
+ throw new InsufficientBalanceException();
+ }
+
+ // Perform the transfer.
+ conn.createStatement()
+ .executeUpdate("UPDATE accounts SET balance = balance - "
+ + amount + " where id = " + from);
+ conn.createStatement()
+ .executeUpdate("UPDATE accounts SET balance = balance + "
+ + amount + " where id = " + to);
+ }
+ };
+ }
+
+ public static void retryTransaction(Connection conn, RetryableTransaction tx)
+ throws SQLException, InsufficientBalanceException,
+ AccountNotFoundException, AmbiguousCommitException {
+
+ Savepoint sp = conn.setSavepoint("cockroach_restart");
+ while(true) {
+ boolean releaseAttempted = false;
+ try {
+ tx.run(conn);
+ releaseAttempted = true;
+ conn.releaseSavepoint(sp);
+ break;
+ }
+ catch(SQLException e) {
+ String sqlState = e.getSQLState();
+
+ // Check if the error code indicates a SERIALIZATION_FAILURE.
+ if(sqlState.equals("40001")) {
+ // Signal the database that we will attempt a retry.
+ conn.rollback(sp);
+ } else if(releaseAttempted) {
+ throw new AmbiguousCommitException(e);
+ } else {
+ throw e;
+ }
+ }
+ }
+ conn.commit();
+ }
+
+ public static void main(String[] args)
+ throws ClassNotFoundException, SQLException {
+
+ // Load the Postgres JDBC driver.
+ Class.forName("org.postgresql.Driver");
+
+ // Connect to the 'bank' database.
+ Properties props = new Properties();
+ props.setProperty("user", "maxroach");
+ props.setProperty("sslmode", "require");
+ props.setProperty("sslrootcert", "certs/ca.crt");
+ props.setProperty("sslkey", "certs/client.maxroach.key.pk8");
+ props.setProperty("sslcert", "certs/client.maxroach.crt");
+ props.setProperty("ApplicationName", "roachtest");
+
+ Connection db = DriverManager
+ .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
+
+
+ try {
+ // We need to turn off autocommit mode to allow for
+ // multi-statement transactions.
+ db.setAutoCommit(false);
+
+ // Perform the transfer. This assumes the 'accounts'
+ // table has already been created in the database.
+ RetryableTransaction transfer = transferFunds(1, 2, 100);
+ retryTransaction(db, transfer);
+
+ // Check balances after transfer.
+ db.setAutoCommit(true);
+ ResultSet res = db.createStatement()
+ .executeQuery("SELECT id, balance FROM accounts");
+ while (res.next()) {
+ System.out.printf("\taccount %s: %s\n", res.getInt("id"),
+ res.getInt("balance"));
+ }
+
+ } catch(InsufficientBalanceException e) {
+ System.out.println("Insufficient balance");
+ } catch(AccountNotFoundException e) {
+ System.out.println("No users in the table with id " + e.account);
+ } catch(AmbiguousCommitException e) {
+ System.out.println("Ambiguous result encountered: " + e);
+ } catch(SQLException e) {
+ System.out.println("SQLException encountered:" + e);
+ } finally {
+ // Close the database connection.
+ db.close();
+ }
+ }
+}
diff --git a/_includes/v20.2/app/activerecord-basic-sample.rb b/_includes/v20.2/app/activerecord-basic-sample.rb
new file mode 100644
index 00000000000..f1d35e1de3a
--- /dev/null
+++ b/_includes/v20.2/app/activerecord-basic-sample.rb
@@ -0,0 +1,48 @@
+require 'active_record'
+require 'activerecord-cockroachdb-adapter'
+require 'pg'
+
+# Connect to CockroachDB through ActiveRecord.
+# In Rails, this configuration would go in config/database.yml as usual.
+ActiveRecord::Base.establish_connection(
+ adapter: 'cockroachdb',
+ username: 'maxroach',
+ database: 'bank',
+ host: 'localhost',
+ port: 26257,
+ sslmode: 'require',
+ sslrootcert: 'certs/ca.crt',
+ sslkey: 'certs/client.maxroach.key',
+ sslcert: 'certs/client.maxroach.crt'
+)
+
+
+# Define the Account model.
+# In Rails, this would go in app/models/ as usual.
+class Account < ActiveRecord::Base
+ validates :id, presence: true
+ validates :balance, presence: true
+end
+
+# Define a migration for the accounts table.
+# In Rails, this would go in db/migrate/ as usual.
+class Schema < ActiveRecord::Migration[5.0]
+ def change
+ create_table :accounts, force: true do |t|
+ t.integer :balance
+ end
+ end
+end
+
+# Run the schema migration by hand.
+# In Rails, this would be done via rake db:migrate as usual.
+Schema.new.change()
+
+# Create two accounts, inserting two rows into the accounts table.
+Account.create(id: 1, balance: 1000)
+Account.create(id: 2, balance: 250)
+
+# Retrieve accounts and print out the balances
+Account.all.each do |acct|
+ puts "#{acct.id} #{acct.balance}"
+end
diff --git a/_includes/v20.2/app/basic-sample.c b/_includes/v20.2/app/basic-sample.c
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/_includes/v20.2/app/basic-sample.clj b/_includes/v20.2/app/basic-sample.clj
new file mode 100644
index 00000000000..10c98fff2ba
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.clj
@@ -0,0 +1,35 @@
+(ns test.test
+ (:require [clojure.java.jdbc :as j]
+ [test.util :as util]))
+
+;; Define the connection parameters to the cluster.
+(def db-spec {:dbtype "postgresql"
+ :dbname "bank"
+ :host "localhost"
+ :port "26257"
+ :ssl true
+ :sslmode "require"
+ :sslcert "certs/client.maxroach.crt"
+ :sslkey "certs/client.maxroach.key.pk8"
+ :user "maxroach"})
+
+(defn test-basic []
+ ;; Connect to the cluster and run the code below with
+ ;; the connection object bound to 'conn'.
+ (j/with-db-connection [conn db-spec]
+
+ ;; Insert two rows into the "accounts" table.
+ (j/insert! conn :accounts {:id 1 :balance 1000})
+ (j/insert! conn :accounts {:id 2 :balance 250})
+
+ ;; Print out the balances.
+ (println "Initial balances:")
+ (->> (j/query conn ["SELECT id, balance FROM accounts"])
+ (map println)
+ doall)
+
+ ))
+
+
+(defn -main [& args]
+ (test-basic))
diff --git a/_includes/v20.2/app/basic-sample.cpp b/_includes/v20.2/app/basic-sample.cpp
new file mode 100644
index 00000000000..67b6c1d1062
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.cpp
@@ -0,0 +1,39 @@
+#include <cassert>
+#include <functional>
+#include <iostream>
+#include <stdexcept>
+#include <string>
+#include <pqxx/pqxx>
+
+using namespace std;
+
+int main() {
+ try {
+ // Connect to the "bank" database.
+ pqxx::connection c("dbname=bank user=maxroach sslmode=require sslkey=certs/client.maxroach.key sslcert=certs/client.maxroach.crt port=26257 host=localhost");
+
+ pqxx::nontransaction w(c);
+
+ // Create the "accounts" table.
+ w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
+
+ // Insert two rows into the "accounts" table.
+ w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
+
+ // Print out the balances.
+ cout << "Initial balances:" << endl;
+ pqxx::result r = w.exec("SELECT id, balance FROM accounts");
+ for (auto row : r) {
+      cout << row[0].as<int>() << ' ' << row[1].as<int>() << endl;
+ }
+
+    w.commit(); // Note this doesn't do anything
+                // for a nontransaction, but is still required.
+ }
+ catch (const exception &e) {
+ cerr << e.what() << endl;
+ return 1;
+ }
+ cout << "Success" << endl;
+ return 0;
+}
diff --git a/_includes/v20.2/app/basic-sample.cs b/_includes/v20.2/app/basic-sample.cs
new file mode 100644
index 00000000000..d23bcf9eb11
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.cs
@@ -0,0 +1,101 @@
+using System;
+using System.Data;
+using System.Security.Cryptography.X509Certificates;
+using System.Net.Security;
+using Npgsql;
+
+namespace Cockroach
+{
+ class MainClass
+ {
+ static void Main(string[] args)
+ {
+ var connStringBuilder = new NpgsqlConnectionStringBuilder();
+ connStringBuilder.Host = "localhost";
+ connStringBuilder.Port = 26257;
+ connStringBuilder.SslMode = SslMode.Require;
+ connStringBuilder.Username = "maxroach";
+ connStringBuilder.Database = "bank";
+ Simple(connStringBuilder.ConnectionString);
+ }
+
+ static void Simple(string connString)
+ {
+ using (var conn = new NpgsqlConnection(connString))
+ {
+ conn.ProvideClientCertificatesCallback += ProvideClientCertificatesCallback;
+ conn.UserCertificateValidationCallback += UserCertificateValidationCallback;
+ conn.Open();
+
+ // Create the "accounts" table.
+ new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
+
+ // Insert two rows into the "accounts" table.
+ using (var cmd = new NpgsqlCommand())
+ {
+ cmd.Connection = conn;
+ cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
+ cmd.Parameters.AddWithValue("id1", 1);
+ cmd.Parameters.AddWithValue("val1", 1000);
+ cmd.Parameters.AddWithValue("id2", 2);
+ cmd.Parameters.AddWithValue("val2", 250);
+ cmd.ExecuteNonQuery();
+ }
+
+ // Print out the balances.
+ System.Console.WriteLine("Initial balances:");
+ using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
+ using (var reader = cmd.ExecuteReader())
+ while (reader.Read())
+ Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
+ }
+ }
+
+ static void ProvideClientCertificatesCallback(X509CertificateCollection clientCerts)
+ {
+ // To be able to add a certificate with a private key included, we must convert it to
+ // a PKCS #12 format. The following openssl command does this:
+ // openssl pkcs12 -password pass: -inkey client.maxroach.key -in client.maxroach.crt -export -out client.maxroach.pfx
+ // As of 2018-12-10, you need to provide a password for this to work on macOS.
+ // See https://github.com/dotnet/corefx/issues/24225
+
+ // Note that the password used during X509 cert creation below
+ // must match the password used in the openssl command above.
+ clientCerts.Add(new X509Certificate2("client.maxroach.pfx", "pass"));
+ }
+
+ // By default, .Net does all of its certificate verification using the system certificate store.
+ // This callback is necessary to validate the server certificate against a CA certificate file.
+ static bool UserCertificateValidationCallback(object sender, X509Certificate certificate, X509Chain defaultChain, SslPolicyErrors defaultErrors)
+ {
+ X509Certificate2 caCert = new X509Certificate2("ca.crt");
+ X509Chain caCertChain = new X509Chain();
+ caCertChain.ChainPolicy = new X509ChainPolicy()
+ {
+ RevocationMode = X509RevocationMode.NoCheck,
+ RevocationFlag = X509RevocationFlag.EntireChain
+ };
+ caCertChain.ChainPolicy.ExtraStore.Add(caCert);
+
+ X509Certificate2 serverCert = new X509Certificate2(certificate);
+
+ caCertChain.Build(serverCert);
+ if (caCertChain.ChainStatus.Length == 0)
+ {
+ // No errors
+ return true;
+ }
+
+ foreach (X509ChainStatus status in caCertChain.ChainStatus)
+ {
+ // Check if we got any errors other than UntrustedRoot (which we will always get if we don't install the CA cert to the system store)
+ if (status.Status != X509ChainStatusFlags.UntrustedRoot)
+ {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ }
+}
diff --git a/_includes/v20.2/app/basic-sample.go b/_includes/v20.2/app/basic-sample.go
new file mode 100644
index 00000000000..6e22c858dbb
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.go
@@ -0,0 +1,46 @@
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ "log"
+
+ _ "github.com/lib/pq"
+)
+
+func main() {
+ // Connect to the "bank" database.
+ db, err := sql.Open("postgres",
+ "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
+ if err != nil {
+ log.Fatal("error connecting to the database: ", err)
+ }
+ defer db.Close()
+
+ // Create the "accounts" table.
+ if _, err := db.Exec(
+ "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
+ log.Fatal(err)
+ }
+
+ // Insert two rows into the "accounts" table.
+ if _, err := db.Exec(
+ "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
+ log.Fatal(err)
+ }
+
+ // Print out the balances.
+ rows, err := db.Query("SELECT id, balance FROM accounts")
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer rows.Close()
+ fmt.Println("Initial balances:")
+ for rows.Next() {
+ var id, balance int
+ if err := rows.Scan(&id, &balance); err != nil {
+ log.Fatal(err)
+ }
+ fmt.Printf("%d %d\n", id, balance)
+ }
+}
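The Go sample above follows a connect → create table → insert → query flow. As a minimal, cluster-free sketch of that same flow (using Python's stdlib `sqlite3` in place of a CockroachDB connection, so everything here is illustrative only, not part of the sample):

```python
import sqlite3

# Connect to an in-memory database (stand-in for the "bank" database).
conn = sqlite3.connect(":memory:")

# Create the "accounts" table.
conn.execute(
    "CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

# Insert two rows into the "accounts" table.
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")

# Read back and print the balances.
rows = list(conn.execute("SELECT id, balance FROM accounts ORDER BY id"))
print("Initial balances:")
for row_id, balance in rows:
    print(row_id, balance)

conn.close()
```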
diff --git a/_includes/v20.2/app/basic-sample.js b/_includes/v20.2/app/basic-sample.js
new file mode 100644
index 00000000000..4e86cb2cbca
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.js
@@ -0,0 +1,63 @@
+var async = require('async');
+var fs = require('fs');
+var pg = require('pg');
+
+// Connect to the "bank" database.
+var config = {
+ user: 'maxroach',
+ host: 'localhost',
+ database: 'bank',
+ port: 26257,
+ ssl: {
+ ca: fs.readFileSync('certs/ca.crt')
+ .toString(),
+ key: fs.readFileSync('certs/client.maxroach.key')
+ .toString(),
+ cert: fs.readFileSync('certs/client.maxroach.crt')
+ .toString()
+ }
+};
+
+// Create a pool.
+var pool = new pg.Pool(config);
+
+pool.connect(function (err, client, done) {
+
+ // Close communication with the database and exit.
+ var finish = function () {
+ done();
+ process.exit();
+ };
+
+ if (err) {
+ console.error('could not connect to cockroachdb', err);
+ finish();
+ }
+ async.waterfall([
+ function (next) {
+ // Create the 'accounts' table.
+ client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next);
+ },
+ function (results, next) {
+ // Insert two rows into the 'accounts' table.
+ client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next);
+ },
+ function (results, next) {
+ // Print out account balances.
+ client.query('SELECT id, balance FROM accounts;', next);
+ },
+ ],
+ function (err, results) {
+ if (err) {
+ console.error('Error inserting into and selecting from accounts: ', err);
+ finish();
+ }
+
+ console.log('Initial balances:');
+ results.rows.forEach(function (row) {
+ console.log(row);
+ });
+
+ finish();
+ });
+});
diff --git a/_includes/v20.2/app/basic-sample.php b/_includes/v20.2/app/basic-sample.php
new file mode 100644
index 00000000000..4edae09b12a
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.php
@@ -0,0 +1,20 @@
+<?php
+
+try {
+ $dbh = new PDO('pgsql:host=localhost port=26257 dbname=bank sslmode=require sslrootcert=certs/ca.crt sslkey=certs/client.maxroach.key sslcert=certs/client.maxroach.crt',
+ 'maxroach', null, array(
+ PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
+ PDO::ATTR_EMULATE_PREPARES => true,
+ PDO::ATTR_PERSISTENT => true
+ ));
+
+ $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
+
+ print "Account balances:\r\n";
+ foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
+ print $row['id'] . ': ' . $row['balance'] . "\r\n";
+ }
+} catch (Exception $e) {
+ print $e->getMessage() . "\r\n";
+ exit(1);
+}
+?>
diff --git a/_includes/v20.2/app/basic-sample.py b/_includes/v20.2/app/basic-sample.py
new file mode 100644
index 00000000000..189d8c91797
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.py
@@ -0,0 +1,152 @@
+#!/usr/bin/env python3
+
+import psycopg2
+import psycopg2.errorcodes
+import time
+import logging
+import random
+
+
+def create_accounts(conn):
+ with conn.cursor() as cur:
+ cur.execute('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
+ cur.execute('UPSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
+ logging.debug("create_accounts(): status message: {}".format(cur.statusmessage))
+ conn.commit()
+
+
+def print_balances(conn):
+ with conn.cursor() as cur:
+ cur.execute("SELECT id, balance FROM accounts")
+ logging.debug("print_balances(): status message: {}".format(cur.statusmessage))
+ rows = cur.fetchall()
+ conn.commit()
+ print("Balances at {}".format(time.asctime()))
+ for row in rows:
+ print([str(cell) for cell in row])
+
+
+def delete_accounts(conn):
+ with conn.cursor() as cur:
+ cur.execute("DELETE FROM bank.accounts")
+ logging.debug("delete_accounts(): status message: {}".format(cur.statusmessage))
+ conn.commit()
+
+
+# Wrapper for a transaction.
+# This automatically re-calls "op" with the open transaction as an argument
+# as long as the database server asks for the transaction to be retried.
+def run_transaction(conn, op):
+ retries = 0
+ max_retries = 3
+ with conn:
+ while True:
+ retries += 1
+ if retries == max_retries:
+ err_msg = "Transaction did not succeed after {} retries".format(max_retries)
+ raise ValueError(err_msg)
+
+ try:
+ op(conn)
+
+ # If we reach this point, we were able to commit, so we break
+ # from the retry loop.
+ break
+ except psycopg2.Error as e:
+ logging.debug("e.pgcode: {}".format(e.pgcode))
+ if e.pgcode == '40001':
+ # This is a retry error, so we roll back the current
+ # transaction and sleep for a bit before retrying. The
+ # sleep time increases for each failed transaction.
+ conn.rollback()
+ logging.debug("EXECUTE SERIALIZATION_FAILURE BRANCH")
+ sleep_seconds = (2**retries) * 0.1 * (random.random() + 0.5)
+ logging.debug("Sleeping {} seconds".format(sleep_seconds))
+ time.sleep(sleep_seconds)
+ continue
+ else:
+ logging.debug("EXECUTE NON-SERIALIZATION_FAILURE BRANCH")
+ raise e
+
+
+# This function is used to test the transaction retry logic. It can be deleted
+# from production code.
+def test_retry_loop(conn):
+ with conn.cursor() as cur:
+ # The first statement in a transaction can be retried transparently on
+ # the server, so we need to add a dummy statement so that our
+ # force_retry() statement isn't the first one.
+ cur.execute('SELECT now()')
+ cur.execute("SELECT crdb_internal.force_retry('1s'::INTERVAL)")
+ logging.debug("test_retry_loop(): status message: {}".format(cur.statusmessage))
+
+
+def transfer_funds(conn, frm, to, amount):
+ with conn.cursor() as cur:
+
+ # Check the current balance.
+ cur.execute("SELECT balance FROM accounts WHERE id = %s", (frm,))
+ from_balance = cur.fetchone()[0]
+ if from_balance < amount:
+ err_msg = "Insufficient funds in account {}: have {}, need {}".format(frm, from_balance, amount)
+ raise RuntimeError(err_msg)
+
+ # Perform the transfer.
+ cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
+ (amount, frm))
+ cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
+ (amount, to))
+ conn.commit()
+ logging.debug("transfer_funds(): status message: {}".format(cur.statusmessage))
+
+
+def main():
+
+ conn = psycopg2.connect(
+ database='bank',
+ user='maxroach',
+ sslmode='require',
+ sslrootcert='certs/ca.crt',
+ sslkey='certs/client.maxroach.key',
+ sslcert='certs/client.maxroach.crt',
+ port=26257,
+ host='localhost'
+ )
+
+ # Uncomment the below to turn on logging to the console. This was useful
+ # when testing transaction retry handling. It is not necessary for
+ # production code.
+ # log_level = getattr(logging, 'DEBUG', None)
+ # logging.basicConfig(level=log_level)
+
+ create_accounts(conn)
+
+ print_balances(conn)
+
+ amount = 100
+ fromId = 1
+ toId = 2
+
+ try:
+ run_transaction(conn, lambda conn: transfer_funds(conn, fromId, toId, amount))
+
+ # The function below is used to test the transaction retry logic. It
+ # can be deleted from production code.
+ # run_transaction(conn, lambda conn: test_retry_loop(conn))
+ except ValueError as ve:
+ # Below, we print the error and continue on so this example is easy to
+ # run (and run, and run...). In real code you should handle this error
+ # and any others thrown by the database interaction.
+ logging.debug("run_transaction(conn, op) failed: {}".format(ve))
+ pass
+
+ print_balances(conn)
+
+ delete_accounts(conn)
+
+ # Close communication with the database.
+ conn.close()
+
+
+if __name__ == '__main__':
+ main()
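`run_transaction` above sleeps `(2**retries) * 0.1 * (random.random() + 0.5)` seconds between attempts. That jittered exponential backoff can be sketched standalone (the helper name is ours, not part of the sample):

```python
import random

def backoff_seconds(retries, base=0.1):
    # Exponential backoff with jitter: the wait window doubles each
    # retry, scaled by a random factor in [0.5, 1.5).
    return (2 ** retries) * base * (random.random() + 0.5)

# Each attempt waits roughly twice as long as the last, on average.
waits = [backoff_seconds(r) for r in range(1, 4)]
```

The jitter matters under contention: it spreads out retries from competing clients so they do not all hammer the server again at the same instant.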
diff --git a/_includes/v20.2/app/basic-sample.rb b/_includes/v20.2/app/basic-sample.rb
new file mode 100644
index 00000000000..93f0dc3d20c
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.rb
@@ -0,0 +1,31 @@
+# Import the driver.
+require 'pg'
+
+# Connect to the "bank" database.
+conn = PG.connect(
+ user: 'maxroach',
+ dbname: 'bank',
+ host: 'localhost',
+ port: 26257,
+ sslmode: 'require',
+ sslrootcert: 'certs/ca.crt',
+ sslkey: 'certs/client.maxroach.key',
+ sslcert: 'certs/client.maxroach.crt'
+)
+
+# Create the "accounts" table.
+conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
+
+# Insert two rows into the "accounts" table.
+conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
+
+# Print out the balances.
+puts 'Initial balances:'
+conn.exec('SELECT id, balance FROM accounts') do |res|
+ res.each do |row|
+ puts row
+ end
+end
+
+# Close communication with the database.
+conn.close()
diff --git a/_includes/v20.2/app/basic-sample.rs b/_includes/v20.2/app/basic-sample.rs
new file mode 100644
index 00000000000..4a078991cd8
--- /dev/null
+++ b/_includes/v20.2/app/basic-sample.rs
@@ -0,0 +1,45 @@
+use openssl::error::ErrorStack;
+use openssl::ssl::{SslConnector, SslFiletype, SslMethod};
+use postgres::Client;
+use postgres_openssl::MakeTlsConnector;
+
+fn ssl_config() -> Result<MakeTlsConnector, ErrorStack> {
+ let mut builder = SslConnector::builder(SslMethod::tls())?;
+ builder.set_ca_file("certs/ca.crt")?;
+ builder.set_certificate_chain_file("certs/client.maxroach.crt")?;
+ builder.set_private_key_file("certs/client.maxroach.key", SslFiletype::PEM)?;
+ Ok(MakeTlsConnector::new(builder.build()))
+}
+
+fn main() {
+ let connector = ssl_config().unwrap();
+ let mut client =
+ Client::connect("postgresql://maxroach@localhost:26257/bank", connector).unwrap();
+
+ // Create the "accounts" table.
+ client
+ .execute(
+ "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)",
+ &[],
+ )
+ .unwrap();
+
+ // Insert two rows into the "accounts" table.
+ client
+ .execute(
+ "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)",
+ &[],
+ )
+ .unwrap();
+
+ // Print out the balances.
+ println!("Initial balances:");
+ for row in &client
+ .query("SELECT id, balance FROM accounts", &[])
+ .unwrap()
+ {
+ let id: i64 = row.get(0);
+ let balance: i64 = row.get(1);
+ println!("{} {}", id, balance);
+ }
+}
diff --git a/_includes/v20.2/app/before-you-begin.md b/_includes/v20.2/app/before-you-begin.md
new file mode 100644
index 00000000000..dfb97226414
--- /dev/null
+++ b/_includes/v20.2/app/before-you-begin.md
@@ -0,0 +1,8 @@
+1. [Install CockroachDB](install-cockroachdb.html).
+2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster.
+3. Choose the instructions that correspond to whether your cluster is secure or insecure:
+
+
+
+
+
diff --git a/_includes/v20.2/app/create-maxroach-user-and-bank-database.md b/_includes/v20.2/app/create-maxroach-user-and-bank-database.md
new file mode 100644
index 00000000000..4d5b4626013
--- /dev/null
+++ b/_includes/v20.2/app/create-maxroach-user-and-bank-database.md
@@ -0,0 +1,32 @@
+Start the [built-in SQL shell](cockroach-sql.html):
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach sql --certs-dir=certs
+~~~
+
+In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE USER IF NOT EXISTS maxroach;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE DATABASE bank;
+~~~
+
+Give the `maxroach` user the necessary permissions:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> GRANT ALL ON DATABASE bank TO maxroach;
+~~~
+
+Exit the SQL shell:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> \q
+~~~
diff --git a/_includes/v20.2/app/django-basic-sample/models.py b/_includes/v20.2/app/django-basic-sample/models.py
new file mode 100644
index 00000000000..6068f8bbb8e
--- /dev/null
+++ b/_includes/v20.2/app/django-basic-sample/models.py
@@ -0,0 +1,17 @@
+from django.db import models
+
+class Customers(models.Model):
+ id = models.AutoField(primary_key=True)
+ name = models.CharField(max_length=250)
+
+class Products(models.Model):
+ id = models.AutoField(primary_key=True)
+ name = models.CharField(max_length=250)
+ price = models.DecimalField(max_digits=18, decimal_places=2)
+
+class Orders(models.Model):
+ id = models.AutoField(primary_key=True)
+ subtotal = models.DecimalField(max_digits=18, decimal_places=2)
+ customer = models.ForeignKey(Customers, on_delete=models.CASCADE, null=True)
+ product = models.ManyToManyField(Products)
+
diff --git a/_includes/v20.2/app/django-basic-sample/settings.py b/_includes/v20.2/app/django-basic-sample/settings.py
new file mode 100644
index 00000000000..c94721d61e5
--- /dev/null
+++ b/_includes/v20.2/app/django-basic-sample/settings.py
@@ -0,0 +1,125 @@
+"""
+Django settings for myproject project.
+
+Generated by 'django-admin startproject' using Django 3.0.
+
+For more information on this file, see
+https://docs.djangoproject.com/en/3.0/topics/settings/
+
+For the full list of settings and their values, see
+https://docs.djangoproject.com/en/3.0/ref/settings/
+"""
+
+import os
+
+# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
+BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+
+
+# Quick-start development settings - unsuitable for production
+# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
+
+# SECURITY WARNING: keep the secret key used in production secret!
+SECRET_KEY = 'spl=g73)8-)ja%x*k1eje4d#&24#t)zao^s$6vc1rdk(e3t!e('
+
+# SECURITY WARNING: don't run with debug turned on in production!
+DEBUG = True
+
+ALLOWED_HOSTS = ['0.0.0.0']
+
+
+# Application definition
+
+INSTALLED_APPS = [
+ 'django.contrib.admin',
+ 'django.contrib.auth',
+ 'django.contrib.contenttypes',
+ 'django.contrib.sessions',
+ 'django.contrib.messages',
+ 'django.contrib.staticfiles',
+ 'myproject',
+]
+
+MIDDLEWARE = [
+ 'django.middleware.security.SecurityMiddleware',
+ 'django.contrib.sessions.middleware.SessionMiddleware',
+ 'django.middleware.common.CommonMiddleware',
+ 'django.middleware.csrf.CsrfViewMiddleware',
+ 'django.contrib.auth.middleware.AuthenticationMiddleware',
+ 'django.contrib.messages.middleware.MessageMiddleware',
+ 'django.middleware.clickjacking.XFrameOptionsMiddleware',
+]
+
+ROOT_URLCONF = 'myproject.urls'
+
+TEMPLATES = [
+ {
+ 'BACKEND': 'django.template.backends.django.DjangoTemplates',
+ 'DIRS': [],
+ 'APP_DIRS': True,
+ 'OPTIONS': {
+ 'context_processors': [
+ 'django.template.context_processors.debug',
+ 'django.template.context_processors.request',
+ 'django.contrib.auth.context_processors.auth',
+ 'django.contrib.messages.context_processors.messages',
+ ],
+ },
+ },
+]
+
+WSGI_APPLICATION = 'myproject.wsgi.application'
+
+
+# Database
+# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
+
+DATABASES = {
+ 'default': {
+ 'ENGINE': 'django_cockroachdb',
+ 'NAME': 'bank',
+ 'USER': 'django',
+ 'PASSWORD': 'password',
+ 'HOST': 'localhost',
+ 'PORT': '26257',
+ }
+}
+
+
+# Password validation
+# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
+
+AUTH_PASSWORD_VALIDATORS = [
+ {
+ 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
+ },
+]
+
+
+# Internationalization
+# https://docs.djangoproject.com/en/3.0/topics/i18n/
+
+LANGUAGE_CODE = 'en-us'
+
+TIME_ZONE = 'UTC'
+
+USE_I18N = True
+
+USE_L10N = True
+
+USE_TZ = True
+
+
+# Static files (CSS, JavaScript, Images)
+# https://docs.djangoproject.com/en/3.0/howto/static-files/
+
+STATIC_URL = '/static/'
diff --git a/_includes/v20.2/app/django-basic-sample/urls.py b/_includes/v20.2/app/django-basic-sample/urls.py
new file mode 100644
index 00000000000..9550d713ffa
--- /dev/null
+++ b/_includes/v20.2/app/django-basic-sample/urls.py
@@ -0,0 +1,20 @@
+from django.contrib import admin
+from django.urls import path
+
+from .views import CustomersView, OrdersView, PingView, ProductView
+
+urlpatterns = [
+ path('admin/', admin.site.urls),
+
+ path('ping/', PingView.as_view()),
+
+ # Endpoints for customers URL.
+ path('customer/', CustomersView.as_view(), name='customers'),
+ path('customer/<int:id>/', CustomersView.as_view(), name='customers'),
+
+ # Endpoints for products URL.
+ path('product/', ProductView.as_view(), name='product'),
+ path('product/<int:id>/', ProductView.as_view(), name='product'),
+
+ path('order/', OrdersView.as_view(), name='order'),
+]
diff --git a/_includes/v20.2/app/django-basic-sample/views.py b/_includes/v20.2/app/django-basic-sample/views.py
new file mode 100644
index 00000000000..78143916ee8
--- /dev/null
+++ b/_includes/v20.2/app/django-basic-sample/views.py
@@ -0,0 +1,107 @@
+from django.http import JsonResponse, HttpResponse
+from django.utils.decorators import method_decorator
+from django.views.generic import View
+from django.views.decorators.csrf import csrf_exempt
+from django.db import Error, IntegrityError
+from django.db.transaction import atomic
+
+import json
+import sys
+import time
+
+from psycopg2 import errorcodes
+
+from .models import *
+
+# Warning: Do not use retry_on_exception in an inner nested transaction.
+def retry_on_exception(num_retries=3, on_failure=HttpResponse(status=500), delay_=0.5, backoff_=1.5):
+ def retry(view):
+ def wrapper(*args, **kwargs):
+ delay = delay_
+ for i in range(num_retries):
+ try:
+ return view(*args, **kwargs)
+ except IntegrityError as ex:
+ if i == num_retries - 1:
+ return on_failure
+ elif getattr(ex.__cause__, 'pgcode', '') == errorcodes.SERIALIZATION_FAILURE:
+ time.sleep(delay)
+ delay *= backoff_
+ except Error as ex:
+ return on_failure
+ return wrapper
+ return retry
+
+class PingView(View):
+ def get(self, request, *args, **kwargs):
+ return HttpResponse("python/django", status=200)
+
+@method_decorator(csrf_exempt, name='dispatch')
+class CustomersView(View):
+ def get(self, request, id=None, *args, **kwargs):
+ if id is None:
+ customers = list(Customers.objects.values())
+ else:
+ customers = list(Customers.objects.filter(id=id).values())
+ return JsonResponse(customers, safe=False)
+
+ @retry_on_exception(3)
+ @atomic
+ def post(self, request, *args, **kwargs):
+ form_data = json.loads(request.body.decode())
+ name = form_data['name']
+ c = Customers(name=name)
+ c.save()
+ return HttpResponse(status=200)
+
+ @retry_on_exception(3)
+ @atomic
+ def delete(self, request, id=None, *args, **kwargs):
+ if id is None:
+ return HttpResponse(status=404)
+ Customers.objects.filter(id=id).delete()
+ return HttpResponse(status=200)
+
+ # The PUT method is shadowed by the POST method, so there doesn't seem
+ # to be a reason to include it.
+
+@method_decorator(csrf_exempt, name='dispatch')
+class ProductView(View):
+ def get(self, request, id=None, *args, **kwargs):
+ if id is None:
+ products = list(Products.objects.values())
+ else:
+ products = list(Products.objects.filter(id=id).values())
+ return JsonResponse(products, safe=False)
+
+ @retry_on_exception(3)
+ @atomic
+ def post(self, request, *args, **kwargs):
+ form_data = json.loads(request.body.decode())
+ name, price = form_data['name'], form_data['price']
+ p = Products(name=name, price=price)
+ p.save()
+ return HttpResponse(status=200)
+
+ # The REST API outlined in the GitHub repo does not require PUT or
+ # DELETE methods for /product/.
+
+@method_decorator(csrf_exempt, name='dispatch')
+class OrdersView(View):
+ def get(self, request, id=None, *args, **kwargs):
+ if id is None:
+ orders = list(Orders.objects.values())
+ else:
+ orders = list(Orders.objects.filter(id=id).values())
+ return JsonResponse(orders, safe=False)
+
+ @retry_on_exception(3)
+ @atomic
+ def post(self, request, *args, **kwargs):
+ form_data = json.loads(request.body.decode())
+ c = Customers.objects.get(id=form_data['customer']['id'])
+ o = Orders(subtotal=form_data['subtotal'], customer=c)
+ o.save()
+ for p in form_data['products']:
+ p = Products.objects.get(id=p['id'])
+ o.product.add(p)
+ o.save()
+ return HttpResponse(status=200)
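The `retry_on_exception` decorator above retries a view only when the underlying error is a serialization failure. The same decorator pattern can be sketched without Django (the `RetryableError` class and all names here are ours, standing in for the psycopg2 error check):

```python
import time

class RetryableError(Exception):
    """Stand-in for a serialization-failure (SQLSTATE 40001) error."""

def retry_on_exception(num_retries=3, delay_=0.01, backoff_=1.5):
    def retry(fn):
        def wrapper(*args, **kwargs):
            delay = delay_
            for i in range(num_retries):
                try:
                    return fn(*args, **kwargs)
                except RetryableError:
                    if i == num_retries - 1:
                        raise          # out of attempts: surface the error
                    time.sleep(delay)  # back off before the next attempt
                    delay *= backoff_
        return wrapper
    return retry

attempts = []

@retry_on_exception(num_retries=3)
def flaky():
    # Fails twice, then succeeds, to exercise the retry loop.
    attempts.append(1)
    if len(attempts) < 3:
        raise RetryableError()
    return "ok"

result = flaky()
```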
diff --git a/_includes/v20.2/app/for-a-complete-example-go.md b/_includes/v20.2/app/for-a-complete-example-go.md
new file mode 100644
index 00000000000..e0144fb1f4f
--- /dev/null
+++ b/_includes/v20.2/app/for-a-complete-example-go.md
@@ -0,0 +1,4 @@
+For complete examples, see:
+
+- [Build a Go App with CockroachDB](build-a-go-app-with-cockroachdb.html) (pq)
+- [Build a Go App with CockroachDB and GORM](build-a-go-app-with-cockroachdb-gorm.html)
diff --git a/_includes/v20.2/app/for-a-complete-example-java.md b/_includes/v20.2/app/for-a-complete-example-java.md
new file mode 100644
index 00000000000..b4c63135ae0
--- /dev/null
+++ b/_includes/v20.2/app/for-a-complete-example-java.md
@@ -0,0 +1,4 @@
+For complete examples, see:
+
+- [Build a Java App with CockroachDB](build-a-java-app-with-cockroachdb.html) (JDBC)
+- [Build a Java App with CockroachDB and Hibernate](build-a-java-app-with-cockroachdb-hibernate.html)
diff --git a/_includes/v20.2/app/for-a-complete-example-python.md b/_includes/v20.2/app/for-a-complete-example-python.md
new file mode 100644
index 00000000000..432aa82a1d6
--- /dev/null
+++ b/_includes/v20.2/app/for-a-complete-example-python.md
@@ -0,0 +1,6 @@
+For complete examples, see:
+
+- [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb.html) (psycopg2)
+- [Build a Python App with CockroachDB and SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html)
+- [Build a Python App with CockroachDB and Django](build-a-python-app-with-cockroachdb-django.html)
+- [Build a Python App with CockroachDB and PonyORM](build-a-python-app-with-cockroachdb-pony.html)
diff --git a/_includes/v20.2/app/gorm-basic-sample.go b/_includes/v20.2/app/gorm-basic-sample.go
new file mode 100644
index 00000000000..d18948b80b2
--- /dev/null
+++ b/_includes/v20.2/app/gorm-basic-sample.go
@@ -0,0 +1,41 @@
+package main
+
+import (
+ "fmt"
+ "log"
+
+ // Import GORM-related packages.
+ "github.com/jinzhu/gorm"
+ _ "github.com/jinzhu/gorm/dialects/postgres"
+)
+
+// Account is our model, which corresponds to the "accounts" database table.
+type Account struct {
+ ID int `gorm:"primary_key"`
+ Balance int
+}
+
+func main() {
+ // Connect to the "bank" database as the "maxroach" user.
+ const addr = "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt"
+ db, err := gorm.Open("postgres", addr)
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer db.Close()
+
+ // Automatically create the "accounts" table based on the Account model.
+ db.AutoMigrate(&Account{})
+
+ // Insert two rows into the "accounts" table.
+ db.Create(&Account{ID: 1, Balance: 1000})
+ db.Create(&Account{ID: 2, Balance: 250})
+
+ // Print out the balances.
+ var accounts []Account
+ db.Find(&accounts)
+ fmt.Println("Initial balances:")
+ for _, account := range accounts {
+ fmt.Printf("%d %d\n", account.ID, account.Balance)
+ }
+}
diff --git a/_includes/v20.2/app/gorm-sample.go b/_includes/v20.2/app/gorm-sample.go
new file mode 100644
index 00000000000..a49089c5509
--- /dev/null
+++ b/_includes/v20.2/app/gorm-sample.go
@@ -0,0 +1,206 @@
+package main
+
+import (
+ "fmt"
+ "log"
+ "math"
+ "math/rand"
+ "time"
+
+ // Import GORM-related packages.
+ "github.com/jinzhu/gorm"
+ _ "github.com/jinzhu/gorm/dialects/postgres"
+
+ // Necessary in order to check for transaction retry error codes.
+ "github.com/lib/pq"
+)
+
+// Account is our model, which corresponds to the "accounts" database
+// table.
+type Account struct {
+ ID int `gorm:"primary_key"`
+ Balance int
+}
+
+// Functions of type `txnFunc` are passed as arguments to our
+// `runTransaction` wrapper that handles transaction retries for us
+// (see implementation below).
+type txnFunc func(*gorm.DB) error
+
+// This function is used for testing the transaction retry loop. It
+// can be deleted from production code.
+var forceRetryLoop txnFunc = func(db *gorm.DB) error {
+
+ // The first statement in a transaction can be retried transparently
+ // on the server, so we need to add a dummy statement so that our
+ // force_retry statement isn't the first one.
+ if err := db.Exec("SELECT now()").Error; err != nil {
+ return err
+ }
+ // Used to force a transaction retry.
+ if err := db.Exec("SELECT crdb_internal.force_retry('1s'::INTERVAL)").Error; err != nil {
+ return err
+ }
+ return nil
+}
+
+func transferFunds(db *gorm.DB, fromID int, toID int, amount int) error {
+ var fromAccount Account
+ var toAccount Account
+
+ db.First(&fromAccount, fromID)
+ db.First(&toAccount, toID)
+
+ if fromAccount.Balance < amount {
+ return fmt.Errorf("account %d balance %d is lower than transfer amount %d", fromAccount.ID, fromAccount.Balance, amount)
+ }
+
+ fromAccount.Balance -= amount
+ toAccount.Balance += amount
+
+ if err := db.Save(&fromAccount).Error; err != nil {
+ return err
+ }
+ if err := db.Save(&toAccount).Error; err != nil {
+ return err
+ }
+ return nil
+}
+
+func main() {
+ // Connect to the "bank" database as the "maxroach" user.
+ const addr = "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt"
+ db, err := gorm.Open("postgres", addr)
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer db.Close()
+
+ // Set to `true` and GORM will print out all DB queries.
+ db.LogMode(false)
+
+ // Automatically create the "accounts" table based on the Account
+ // model.
+ db.AutoMigrate(&Account{})
+
+ // Insert two rows into the "accounts" table.
+ var fromID = 1
+ var toID = 2
+ db.Create(&Account{ID: fromID, Balance: 1000})
+ db.Create(&Account{ID: toID, Balance: 250})
+
+ // The sequence of steps in this section is:
+ // 1. Print account balances.
+ // 2. Set up some Accounts and transfer funds between them inside
+ // a transaction.
+ // 3. Print account balances again to verify the transfer occurred.
+
+ // Print balances before transfer.
+ printBalances(db)
+
+ // The amount to be transferred between the accounts.
+ var amount = 100
+
+ // Transfer funds between accounts. To handle potential
+ // transaction retry errors, we wrap the call to `transferFunds`
+ // in `runTransaction`, a wrapper which implements a retry loop
+ // with exponential backoff around our access to the database (see
+ // the implementation for details).
+ if err := runTransaction(db,
+ func(*gorm.DB) error {
+ return transferFunds(db, fromID, toID, amount)
+ },
+ ); err != nil {
+ // If the error is returned, it's either:
+ // 1. Not a transaction retry error, i.e., some other kind
+ // of database error that you should handle here.
+ // 2. A transaction retry error that has occurred more than
+ // N times (defined by the `maxRetries` variable inside
+ // `runTransaction`), in which case you will need to figure
+ // out why your database access is resulting in so much
+ // contention (see 'Understanding and avoiding transaction
+ // contention':
+ // https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention)
+ fmt.Println(err)
+ }
+
+ // Print balances after transfer to ensure that it worked.
+ printBalances(db)
+
+ // Delete accounts so we can start fresh when we want to run this
+ // program again.
+ deleteAccounts(db)
+}
+
+// Wrapper for a transaction. This automatically re-calls `fn` with
+// the open transaction as an argument as long as the database server
+// asks for the transaction to be retried.
+func runTransaction(db *gorm.DB, fn txnFunc) error {
+ var maxRetries = 3
+ for retries := 0; retries <= maxRetries; retries++ {
+ if retries == maxRetries {
+ return fmt.Errorf("hit max of %d retries, aborting", retries)
+ }
+ txn := db.Begin()
+ if err := fn(txn); err != nil {
+ // We need to cast GORM's db.Error to *pq.Error so we can
+ // detect the Postgres transaction retry error code and
+ // handle retries appropriately.
+ pqErr, ok := err.(*pq.Error)
+ if ok && pqErr.Code == "40001" {
+ // Since this is a transaction retry error, we
+ // ROLLBACK the transaction and sleep a little before
+ // trying again. Each time through the loop we sleep
+ // for a little longer than the last time
+ // (A.K.A. exponential backoff).
+ txn.Rollback()
+ var sleepMs = math.Pow(2, float64(retries)) * 100 * (rand.Float64() + 0.5)
+ fmt.Printf("Hit 40001 transaction retry error, sleeping %v milliseconds\n", sleepMs)
+ time.Sleep(time.Millisecond * time.Duration(sleepMs))
+ } else {
+ // If it's not a retry error, it's some other sort of
+ // DB interaction error that needs to be handled by
+ // the caller.
+ return err
+ }
+ } else {
+ // All went well, so we try to commit and break out of the
+ // retry loop if possible.
+ if err := txn.Commit().Error; err != nil {
+ pqErr, ok := err.(*pq.Error)
+ if ok && pqErr.Code == "40001" {
+ // However, our attempt to COMMIT could also
+ // result in a retry error, in which case we
+ // continue back through the loop and try again.
+ continue
+ } else {
+ // If it's not a retry error, it's some other sort
+ // of DB interaction error that needs to be
+ // handled by the caller.
+ return err
+ }
+ }
+ break
+ }
+ }
+ return nil
+}
+
+func printBalances(db *gorm.DB) {
+ var accounts []Account
+ db.Find(&accounts)
+ fmt.Printf("Balance at '%s':\n", time.Now())
+ for _, account := range accounts {
+ fmt.Printf("%d %d\n", account.ID, account.Balance)
+ }
+}
+
+func deleteAccounts(db *gorm.DB) error {
+ // Used to tear down the accounts table so we can re-run this
+ // program.
+ err := db.Exec("DELETE from accounts where ID > 0").Error
+ if err != nil {
+ return err
+ }
+ return nil
+}
diff --git a/_includes/v20.2/app/hibernate-basic-sample/Sample.java b/_includes/v20.2/app/hibernate-basic-sample/Sample.java
new file mode 100644
index 00000000000..58d28f37a4b
--- /dev/null
+++ b/_includes/v20.2/app/hibernate-basic-sample/Sample.java
@@ -0,0 +1,236 @@
+package com.cockroachlabs;
+
+import org.hibernate.Session;
+import org.hibernate.SessionFactory;
+import org.hibernate.Transaction;
+import org.hibernate.JDBCException;
+import org.hibernate.cfg.Configuration;
+
+import java.util.*;
+import java.util.function.Function;
+
+import javax.persistence.Column;
+import javax.persistence.Entity;
+import javax.persistence.Id;
+import javax.persistence.Table;
+
+public class Sample {
+
+ private static final Random RAND = new Random();
+ private static final boolean FORCE_RETRY = false;
+ private static final String RETRY_SQL_STATE = "40001";
+ private static final int MAX_ATTEMPT_COUNT = 6;
+
+ // Account is our model, which corresponds to the "accounts" database table.
+ @Entity
+ @Table(name="accounts")
+ public static class Account {
+ @Id
+ @Column(name="id")
+ public long id;
+
+ public long getId() {
+ return id;
+ }
+
+ @Column(name="balance")
+ public long balance;
+ public long getBalance() {
+ return balance;
+ }
+ public void setBalance(long newBalance) {
+ this.balance = newBalance;
+ }
+
+ // Convenience constructor.
+ public Account(int id, int balance) {
+ this.id = id;
+ this.balance = balance;
+ }
+
+ // Hibernate needs a default (no-arg) constructor to create model objects.
+ public Account() {}
+ }
+
+ private static Function<Session, Long> addAccounts() throws JDBCException {
+ Function<Session, Long> f = s -> {
+ long rv = 0;
+ try {
+ s.save(new Account(1, 1000));
+ s.save(new Account(2, 250));
+ s.save(new Account(3, 314159));
+ rv = 1;
+ System.out.printf("APP: addAccounts() --> %d\n", rv);
+ } catch (JDBCException e) {
+ throw e;
+ }
+ return rv;
+ };
+ return f;
+ }
+
+ private static Function<Session, Long> transferFunds(long fromId, long toId, long amount) throws JDBCException {
+ Function<Session, Long> f = s -> {
+ long rv = 0;
+ try {
+ Account fromAccount = (Account) s.get(Account.class, fromId);
+ Account toAccount = (Account) s.get(Account.class, toId);
+ if (amount <= fromAccount.getBalance()) {
+ fromAccount.balance -= amount;
+ toAccount.balance += amount;
+ s.save(fromAccount);
+ s.save(toAccount);
+ rv = amount;
+ System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
+ }
+ } catch (JDBCException e) {
+ throw e;
+ }
+ return rv;
+ };
+ return f;
+ }
+
+ // Test our retry handling logic if FORCE_RETRY is true. This
+ // method is only used to test the retry logic. It is not
+ // intended for production code.
+    private static Function<Session, Long> forceRetryLogic() throws JDBCException {
+        Function<Session, Long> f = s -> {
+ long rv = -1;
+ try {
+ System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
+ s.createNativeQuery("SELECT crdb_internal.force_retry('1s')").executeUpdate();
+ } catch (JDBCException e) {
+ System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
+ throw e;
+ }
+ return rv;
+ };
+ return f;
+ }
+
+    private static Function<Session, Long> getAccountBalance(long id) throws JDBCException {
+        Function<Session, Long> f = s -> {
+ long balance;
+ try {
+ Account account = s.get(Account.class, id);
+ balance = account.getBalance();
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
+ } catch (JDBCException e) {
+ throw e;
+ }
+ return balance;
+ };
+ return f;
+ }
+
+ // Run SQL code in a way that automatically handles the
+ // transaction retry logic so we don't have to duplicate it in
+ // various places.
+    private static long runTransaction(Session session, Function<Session, Long> fn) {
+ long rv = 0;
+ int attemptCount = 0;
+
+ while (attemptCount < MAX_ATTEMPT_COUNT) {
+ attemptCount++;
+
+ if (attemptCount > 1) {
+ System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount);
+ }
+
+ Transaction txn = session.beginTransaction();
+ System.out.printf("APP: BEGIN;\n");
+
+ if (attemptCount == MAX_ATTEMPT_COUNT) {
+ String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
+ throw new RuntimeException(err);
+ }
+
+ // This block is only used to test the retry logic.
+ // It is not necessary in production code. See also
+ // the method 'testRetryLogic()'.
+ if (FORCE_RETRY) {
+ session.createNativeQuery("SELECT now()").list();
+ }
+
+ try {
+ rv = fn.apply(session);
+ if (rv != -1) {
+ txn.commit();
+ System.out.printf("APP: COMMIT;\n");
+ break;
+ }
+ } catch (JDBCException e) {
+ if (RETRY_SQL_STATE.equals(e.getSQLState())) {
+ // Since this is a transaction retry error, we
+ // roll back the transaction and sleep a little
+ // before trying again. Each time through the
+ // loop we sleep for a little longer than the last
+ // time (A.K.A. exponential backoff).
+ System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", e.getSQLState(), e.getMessage(), attemptCount);
+ System.out.printf("APP: ROLLBACK;\n");
+ txn.rollback();
+ int sleepMillis = (int)(Math.pow(2, attemptCount) * 100) + RAND.nextInt(100);
+ System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
+ try {
+ Thread.sleep(sleepMillis);
+ } catch (InterruptedException ignored) {
+ // no-op
+ }
+ rv = -1;
+ } else {
+ throw e;
+ }
+ }
+ }
+ return rv;
+ }
+
+ public static void main(String[] args) {
+ // Create a SessionFactory based on our hibernate.cfg.xml configuration
+ // file, which defines how to connect to the database.
+ SessionFactory sessionFactory =
+ new Configuration()
+ .configure("hibernate.cfg.xml")
+ .addAnnotatedClass(Account.class)
+ .buildSessionFactory();
+
+ try (Session session = sessionFactory.openSession()) {
+ long fromAccountId = 1;
+ long toAccountId = 2;
+ long transferAmount = 100;
+
+ if (FORCE_RETRY) {
+ System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
+ runTransaction(session, forceRetryLogic());
+ } else {
+
+ runTransaction(session, addAccounts());
+ long fromBalance = runTransaction(session, getAccountBalance(fromAccountId));
+ long toBalance = runTransaction(session, getAccountBalance(toAccountId));
+ if (fromBalance != -1 && toBalance != -1) {
+ // Success!
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
+ }
+
+ // Transfer $100 from account 1 to account 2
+ long transferResult = runTransaction(session, transferFunds(fromAccountId, toAccountId, transferAmount));
+ if (transferResult != -1) {
+ // Success!
+ System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
+
+ long fromBalanceAfter = runTransaction(session, getAccountBalance(fromAccountId));
+ long toBalanceAfter = runTransaction(session, getAccountBalance(toAccountId));
+ if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
+ // Success!
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
+ }
+ }
+ }
+ } finally {
+ sessionFactory.close();
+ }
+ }
+}
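The retry loop in `runTransaction` above combines a bounded attempt count with exponential backoff plus random jitter before each re-run. A minimal Python sketch of the same pattern, with illustrative names that are not part of the sample itself:

```python
import random

MAX_ATTEMPT_COUNT = 6
RETRY_SQL_STATE = "40001"

def backoff_millis(attempt, rng=random.Random(0)):
    # 2^attempt * 100ms plus up to 99ms of jitter, as in runTransaction.
    return int(2 ** attempt * 100) + rng.randint(0, 99)

def run_transaction(fn, max_attempts=MAX_ATTEMPT_COUNT):
    # Re-run fn while it raises a serialization failure (SQLSTATE 40001),
    # giving up after max_attempts tries.
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except RuntimeError as exc:
            if str(exc) != RETRY_SQL_STATE:
                raise  # not retryable; surface it
            last_error = exc
            # A real implementation would roll back the transaction and
            # sleep backoff_millis(attempt) before the next attempt.
    raise last_error
```

The jitter matters in practice: without it, competing transactions that conflicted once tend to wake up and conflict again on the same schedule.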
diff --git a/_includes/v20.2/app/hibernate-basic-sample/build.gradle b/_includes/v20.2/app/hibernate-basic-sample/build.gradle
new file mode 100644
index 00000000000..36f33d73fe6
--- /dev/null
+++ b/_includes/v20.2/app/hibernate-basic-sample/build.gradle
@@ -0,0 +1,16 @@
+group 'com.cockroachlabs'
+version '1.0'
+
+apply plugin: 'java'
+apply plugin: 'application'
+
+mainClassName = 'com.cockroachlabs.Sample'
+
+repositories {
+ mavenCentral()
+}
+
+dependencies {
+ compile 'org.hibernate:hibernate-core:5.2.4.Final'
+ compile 'org.postgresql:postgresql:42.2.2.jre7'
+}
diff --git a/_includes/v20.2/app/hibernate-basic-sample/hibernate-basic-sample.tgz b/_includes/v20.2/app/hibernate-basic-sample/hibernate-basic-sample.tgz
new file mode 100644
index 00000000000..3e729bf439e
Binary files /dev/null and b/_includes/v20.2/app/hibernate-basic-sample/hibernate-basic-sample.tgz differ
diff --git a/_includes/v20.2/app/hibernate-basic-sample/hibernate.cfg.xml b/_includes/v20.2/app/hibernate-basic-sample/hibernate.cfg.xml
new file mode 100644
index 00000000000..454a4950ad0
--- /dev/null
+++ b/_includes/v20.2/app/hibernate-basic-sample/hibernate.cfg.xml
@@ -0,0 +1,21 @@
+<?xml version='1.0' encoding='utf-8'?>
+<!DOCTYPE hibernate-configuration PUBLIC
+        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
+        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
+<hibernate-configuration>
+    <session-factory>
+        <!-- Database connection settings -->
+        <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
+        <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQL95Dialect</property>
+        <property name="hibernate.connection.url">jdbc:postgresql://localhost:26257/bank?sslmode=disable</property>
+        <property name="hibernate.connection.username">maxroach</property>
+
+        <!-- Required so a table can be created from the 'Account' entity -->
+        <property name="hibernate.hbm2ddl.auto">create</property>
+
+        <!-- Echo generated SQL to the console for debugging -->
+        <property name="hibernate.show_sql">true</property>
+        <property name="hibernate.format_sql">true</property>
+    </session-factory>
+</hibernate-configuration>
diff --git a/_includes/v20.2/app/insecure/BasicExample.java b/_includes/v20.2/app/insecure/BasicExample.java
new file mode 100644
index 00000000000..de86e98d93f
--- /dev/null
+++ b/_includes/v20.2/app/insecure/BasicExample.java
@@ -0,0 +1,433 @@
+import java.util.*;
+import java.time.*;
+import java.sql.*;
+import javax.sql.DataSource;
+
+import org.postgresql.ds.PGSimpleDataSource;
+
+/*
+ Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
+
+ Then, compile and run this example like so:
+
+ $ export CLASSPATH=.:/path/to/postgresql.jar
+ $ javac BasicExample.java && java BasicExample
+
+ To build the javadoc:
+
+ $ javadoc -package -cp .:./path/to/postgresql.jar BasicExample.java
+
+ At a high level, this code consists of two classes:
+
+ 1. BasicExample, which is where the application logic lives.
+
+ 2. BasicExampleDAO, which is used by the application to access the
+ data store.
+
+*/
+
+public class BasicExample {
+
+ public static void main(String[] args) {
+
+ // Configure the database connection.
+ PGSimpleDataSource ds = new PGSimpleDataSource();
+ ds.setServerName("localhost");
+ ds.setPortNumber(26257);
+ ds.setDatabaseName("bank");
+ ds.setUser("maxroach");
+ ds.setPassword(null);
+ ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string
+ ds.setApplicationName("BasicExample");
+
+ // Create DAO.
+ BasicExampleDAO dao = new BasicExampleDAO(ds);
+
+ // Test our retry handling logic if FORCE_RETRY is true. This
+ // method is only used to test the retry logic. It is not
+ // necessary in production code.
+ dao.testRetryHandling();
+
+ // Set up the 'accounts' table.
+ dao.createAccounts();
+
+ // Insert a few accounts "by hand", using INSERTs on the backend.
+        Map<String, String> balances = new HashMap<>();
+ balances.put("1", "1000");
+ balances.put("2", "250");
+ int updatedAccounts = dao.updateAccounts(balances);
+ System.out.printf("BasicExampleDAO.updateAccounts:\n => %s total updated accounts\n", updatedAccounts);
+
+ // How much money is in these accounts?
+ int balance1 = dao.getAccountBalance(1);
+ int balance2 = dao.getAccountBalance(2);
+ System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
+
+ // Transfer $100 from account 1 to account 2
+ int fromAccount = 1;
+ int toAccount = 2;
+ int transferAmount = 100;
+ int transferredAccounts = dao.transferFunds(fromAccount, toAccount, transferAmount);
+ if (transferredAccounts != -1) {
+ System.out.printf("BasicExampleDAO.transferFunds:\n => $%s transferred between accounts %s and %s, %s rows updated\n", transferAmount, fromAccount, toAccount, transferredAccounts);
+ }
+
+ balance1 = dao.getAccountBalance(1);
+ balance2 = dao.getAccountBalance(2);
+ System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
+
+ // Bulk insertion example using JDBC's batching support.
+ int totalRowsInserted = dao.bulkInsertRandomAccountData();
+ System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n => finished, %s total rows inserted\n", totalRowsInserted);
+
+ // Print out 10 account values.
+ int accountsRead = dao.readAccounts(10);
+
+ // Drop the 'accounts' table so this code can be run again.
+ dao.tearDown();
+ }
+}
+
+/**
+ * Data access object used by 'BasicExample'. Abstraction over some
+ * common CockroachDB operations, including:
+ *
+ * - Auto-handling transaction retries in the 'runSQL' method
+ *
+ * - Example of bulk inserts in the 'bulkInsertRandomAccountData'
+ * method
+ */
+
+class BasicExampleDAO {
+
+ private static final int MAX_RETRY_COUNT = 3;
+ private static final String SAVEPOINT_NAME = "cockroach_restart";
+ private static final String RETRY_SQL_STATE = "40001";
+ private static final boolean FORCE_RETRY = false;
+
+ private final DataSource ds;
+
+ BasicExampleDAO(DataSource ds) {
+ this.ds = ds;
+ }
+
+ /**
+ Used to test the retry logic in 'runSQL'. It is not necessary
+ in production code.
+ */
+ void testRetryHandling() {
+        if (FORCE_RETRY) {
+ runSQL("SELECT crdb_internal.force_retry('1s':::INTERVAL)");
+ }
+ }
+
+ /**
+ * Run SQL code in a way that automatically handles the
+ * transaction retry logic so we don't have to duplicate it in
+ * various places.
+ *
+ * @param sqlCode a String containing the SQL code you want to
+ * execute. Can have placeholders, e.g., "INSERT INTO accounts
+ * (id, balance) VALUES (?, ?)".
+ *
+ * @param args String Varargs to fill in the SQL code's
+ * placeholders.
+ * @return Integer Number of rows updated, or -1 if an error is thrown.
+ */
+ public Integer runSQL(String sqlCode, String... args) {
+
+ // This block is only used to emit class and method names in
+ // the program output. It is not necessary in production
+ // code.
+ StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
+ StackTraceElement elem = stacktrace[2];
+ String callerClass = elem.getClassName();
+ String callerMethod = elem.getMethodName();
+
+ int rv = 0;
+
+ try (Connection connection = ds.getConnection()) {
+
+ // We're managing the commit lifecycle ourselves so we can
+ // automatically issue transaction retries.
+ connection.setAutoCommit(false);
+
+ int retryCount = 0;
+
+ while (retryCount < MAX_RETRY_COUNT) {
+
+ Savepoint sp = connection.setSavepoint(SAVEPOINT_NAME);
+
+ // This block is only used to test the retry logic.
+ // It is not necessary in production code. See also
+ // the method 'testRetryHandling()'.
+ if (FORCE_RETRY) {
+ forceRetry(connection); // SELECT 1
+ }
+
+ try (PreparedStatement pstmt = connection.prepareStatement(sqlCode)) {
+
+ // Loop over the args and insert them into the
+ // prepared statement based on their types. In
+ // this simple example we classify the argument
+ // types as "integers" and "everything else"
+ // (a.k.a. strings).
+                    for (int i=0; i<args.length; i++) {
+                        int place = i + 1;
+                        String arg = args[i];
+
+                        try {
+                            int val = Integer.parseInt(arg);
+                            pstmt.setInt(place, val);
+                        } catch (NumberFormatException e) {
+                            pstmt.setString(place, arg);
+                        }
+                    }
+
+                    if (pstmt.execute()) {
+                        // We know that `pstmt.getResultSet()` will
+                        // not return `null` if `pstmt.execute()` was
+                        // true.
+                        ResultSet rs = pstmt.getResultSet();
+                        ResultSetMetaData rsmeta = rs.getMetaData();
+                        int colCount = rsmeta.getColumnCount();
+
+                        // This printed output is for debugging and/or demonstration
+                        // purposes only. It would not be necessary in production code.
+                        System.out.printf("\n%s.%s:\n    '%s'\n", callerClass, callerMethod, pstmt);
+
+                        while (rs.next()) {
+                            for (int i=1; i <= colCount; i++) {
+                                String name = rsmeta.getColumnName(i);
+                                String type = rsmeta.getColumnTypeName(i);
+
+                                // In this "bank account" example we know we are only
+                                // handling integer values (INT8, the CockroachDB default).
+                                if ("int8".equals(type)) {
+                                    int val = rs.getInt(name);
+                                    System.out.printf("    %-8s => %10s\n", name, val);
+                                }
+                            }
+                        }
+ } else {
+ int updateCount = pstmt.getUpdateCount();
+ rv += updateCount;
+
+ // This printed output is for debugging and/or demonstration
+ // purposes only. It would not be necessary in production code.
+ System.out.printf("\n%s.%s:\n '%s'\n", callerClass, callerMethod, pstmt);
+ }
+
+ connection.releaseSavepoint(sp);
+ connection.commit();
+ break;
+
+ } catch (SQLException e) {
+
+ if (RETRY_SQL_STATE.equals(e.getSQLState())) {
+ System.out.printf("retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n",
+ e.getSQLState(), e.getMessage(), retryCount);
+ connection.rollback(sp);
+ retryCount++;
+ rv = -1;
+ } else {
+ rv = -1;
+ throw e;
+ }
+ }
+ }
+ } catch (SQLException e) {
+ System.out.printf("BasicExampleDAO.runSQL ERROR: { state => %s, cause => %s, message => %s }\n",
+ e.getSQLState(), e.getCause(), e.getMessage());
+ rv = -1;
+ }
+
+ return rv;
+ }
+
+ /**
+ * Helper method called by 'testRetryHandling'. It simply issues
+ * a "SELECT 1" inside the transaction to force a retry. This is
+ * necessary to take the connection's session out of the AutoRetry
+ * state, since otherwise the other statements in the session will
+ * be retried automatically, and the client (us) will not see a
+ * retry error. Note that this information is taken from the
+ * following test:
+ * https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/logic_test/manual_retry
+ *
+ * @param connection Connection
+ */
+ private void forceRetry(Connection connection) throws SQLException {
+ try (PreparedStatement statement = connection.prepareStatement("SELECT 1")){
+ statement.executeQuery();
+ }
+ }
+
+ /**
+ * Creates a fresh, empty accounts table in the database.
+ */
+ public void createAccounts() {
+ runSQL("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))");
+    }
+
+ /**
+ * Update accounts by passing in a Map of (ID, Balance) pairs.
+ *
+ * @param accounts (Map)
+ * @return The number of updated accounts (int)
+ */
+    public int updateAccounts(Map<String, String> accounts) {
+ int rows = 0;
+        for (Map.Entry<String, String> account : accounts.entrySet()) {
+
+ String k = account.getKey();
+ String v = account.getValue();
+
+ String[] args = {k, v};
+ rows += runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", args);
+ }
+ return rows;
+ }
+
+ /**
+ * Transfer funds between one account and another. Handles
+ * transaction retries in case of conflict automatically on the
+ * backend.
+ * @param fromId (int)
+ * @param toId (int)
+ * @param amount (int)
+ * @return The number of updated accounts (int)
+ */
+ public int transferFunds(int fromId, int toId, int amount) {
+ String sFromId = Integer.toString(fromId);
+ String sToId = Integer.toString(toId);
+ String sAmount = Integer.toString(amount);
+
+ // We have omitted explicit BEGIN/COMMIT statements for
+ // brevity. Individual statements are treated as implicit
+ // transactions by CockroachDB (see
+ // https://www.cockroachlabs.com/docs/stable/transactions.html#individual-statements).
+
+ String sqlCode = "UPSERT INTO accounts (id, balance) VALUES" +
+ "(?, ((SELECT balance FROM accounts WHERE id = ?) - ?))," +
+ "(?, ((SELECT balance FROM accounts WHERE id = ?) + ?))";
+
+ return runSQL(sqlCode, sFromId, sFromId, sAmount, sToId, sToId, sAmount);
+ }
+
+ /**
+ * Get the account balance for one account.
+ *
+ * We skip using the retry logic in 'runSQL()' here for the
+ * following reasons:
+ *
+ * 1. Since this is a single read ("SELECT"), we don't expect any
+ * transaction conflicts to handle
+ *
+ * 2. We need to return the balance as an integer
+ *
+ * @param id (int)
+ * @return balance (int)
+ */
+ public int getAccountBalance(int id) {
+ int balance = 0;
+
+ try (Connection connection = ds.getConnection()) {
+
+ // Check the current balance.
+ ResultSet res = connection.createStatement()
+ .executeQuery("SELECT balance FROM accounts WHERE id = "
+ + id);
+ if(!res.next()) {
+                System.out.printf("No users in the table with id %d\n", id);
+ } else {
+ balance = res.getInt("balance");
+ }
+ } catch (SQLException e) {
+ System.out.printf("BasicExampleDAO.getAccountBalance ERROR: { state => %s, cause => %s, message => %s }\n",
+ e.getSQLState(), e.getCause(), e.getMessage());
+ }
+
+ return balance;
+ }
+
+ /**
+ * Insert randomized account data (ID, balance) using the JDBC
+ * fast path for bulk inserts. The fastest way to get data into
+ * CockroachDB is the IMPORT statement. However, if you must bulk
+ * ingest from the application using INSERT statements, the best
+ * option is the method shown here. It will require the following:
+ *
+ * 1. Add `rewriteBatchedInserts=true` to your JDBC connection
+ * settings (see the connection info in 'BasicExample.main').
+ *
+ * 2. Inserting in batches of 128 rows, as used inside this method
+ * (see BATCH_SIZE), since the PGJDBC driver's logic works best
+ * with powers of two, such that a batch of size 128 can be 6x
+ * faster than a batch of size 250.
+ * @return The number of new accounts inserted (int)
+ */
+ public int bulkInsertRandomAccountData() {
+
+ Random random = new Random();
+ int BATCH_SIZE = 128;
+ int totalNewAccounts = 0;
+
+ try (Connection connection = ds.getConnection()) {
+
+ // We're managing the commit lifecycle ourselves so we can
+ // control the size of our batch inserts.
+ connection.setAutoCommit(false);
+
+ // In this example we are adding 500 rows to the database,
+ // but it could be any number. What's important is that
+ // the batch size is 128.
+ try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
+ for (int i=0; i<=(500/BATCH_SIZE);i++) {
+                    for (int j=0; j<BATCH_SIZE; j++) {
+                        int id = random.nextInt(1000000000);
+                        int balance = random.nextInt(1000000000);
+                        pstmt.setInt(1, id);
+                        pstmt.setInt(2, balance);
+                        pstmt.addBatch();
+                    }
+                    int[] count = pstmt.executeBatch();
+                    totalNewAccounts += count.length;
+                    System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n    => %s row(s) updated in this batch\n", count.length);
+ }
+ connection.commit();
+ } catch (SQLException e) {
+ System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
+ e.getSQLState(), e.getCause(), e.getMessage());
+ }
+ } catch (SQLException e) {
+ System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
+ e.getSQLState(), e.getCause(), e.getMessage());
+ }
+ return totalNewAccounts;
+ }
+
+ /**
+ * Read out a subset of accounts from the data store.
+ *
+ * @param limit (int)
+ * @return Number of accounts read (int)
+ */
+ public int readAccounts(int limit) {
+ return runSQL("SELECT id, balance FROM accounts LIMIT ?", Integer.toString(limit));
+ }
+
+ /**
+ * Perform any necessary cleanup of the data store so it can be
+ * used again.
+ */
+ public void tearDown() {
+ runSQL("DROP TABLE accounts;");
+ }
+}
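A side effect of the nested loops in `bulkInsertRandomAccountData` is that the outer loop always runs `(500 / BATCH_SIZE) + 1` full batches, so the number of rows actually inserted is rounded up to the next multiple of the batch size. A small Python sketch of that arithmetic (the helper name is illustrative, not part of the sample):

```python
BATCH_SIZE = 128

def rows_actually_inserted(target_rows, batch_size=BATCH_SIZE):
    # Mirrors `for (int i=0; i<=(target/BATCH_SIZE); i++)` with a full
    # inner batch per iteration: the total is rounded up to a whole
    # number of batches, not capped at target_rows.
    batches = target_rows // batch_size + 1
    return batches * batch_size
```

So asking for 500 rows with a batch size of 128 inserts 512 rows in 4 batches; code that needs an exact count must truncate the final batch.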
diff --git a/_includes/v20.2/app/insecure/BasicSample.java b/_includes/v20.2/app/insecure/BasicSample.java
new file mode 100644
index 00000000000..001d38feb48
--- /dev/null
+++ b/_includes/v20.2/app/insecure/BasicSample.java
@@ -0,0 +1,51 @@
+import java.sql.*;
+import java.util.Properties;
+
+/*
+ Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
+
+ Then, compile and run this example like so:
+
+ $ export CLASSPATH=.:/path/to/postgresql.jar
+ $ javac BasicSample.java && java BasicSample
+*/
+
+public class BasicSample {
+ public static void main(String[] args)
+ throws ClassNotFoundException, SQLException {
+
+ // Load the Postgres JDBC driver.
+ Class.forName("org.postgresql.Driver");
+
+ // Connect to the "bank" database.
+ Properties props = new Properties();
+ props.setProperty("user", "maxroach");
+ props.setProperty("sslmode", "disable");
+
+ Connection db = DriverManager
+ .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
+
+ try {
+ // Create the "accounts" table.
+ db.createStatement()
+ .execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
+
+ // Insert two rows into the "accounts" table.
+ db.createStatement()
+ .execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
+
+ // Print out the balances.
+ System.out.println("Initial balances:");
+ ResultSet res = db.createStatement()
+ .executeQuery("SELECT id, balance FROM accounts");
+ while (res.next()) {
+ System.out.printf("\taccount %s: %s\n",
+ res.getInt("id"),
+ res.getInt("balance"));
+ }
+ } finally {
+ // Close the database connection.
+ db.close();
+ }
+ }
+}
diff --git a/_includes/v20.2/app/insecure/TxnSample.java b/_includes/v20.2/app/insecure/TxnSample.java
new file mode 100644
index 00000000000..11021ec0e71
--- /dev/null
+++ b/_includes/v20.2/app/insecure/TxnSample.java
@@ -0,0 +1,145 @@
+import java.sql.*;
+import java.util.Properties;
+
+/*
+ Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
+
+ Then, compile and run this example like so:
+
+ $ export CLASSPATH=.:/path/to/postgresql.jar
+ $ javac TxnSample.java && java TxnSample
+*/
+
+// Ambiguous whether the transaction committed or not.
+class AmbiguousCommitException extends SQLException{
+ public AmbiguousCommitException(Throwable cause) {
+ super(cause);
+ }
+}
+
+class InsufficientBalanceException extends Exception {}
+
+class AccountNotFoundException extends Exception {
+ public int account;
+ public AccountNotFoundException(int account) {
+ this.account = account;
+ }
+}
+
+// A simple interface that provides a retryable lambda expression.
+interface RetryableTransaction {
+ public void run(Connection conn)
+ throws SQLException, InsufficientBalanceException,
+ AccountNotFoundException, AmbiguousCommitException;
+}
+
+public class TxnSample {
+ public static RetryableTransaction transferFunds(int from, int to, int amount) {
+ return new RetryableTransaction() {
+ public void run(Connection conn)
+ throws SQLException, InsufficientBalanceException,
+ AccountNotFoundException, AmbiguousCommitException {
+
+ // Check the current balance.
+ ResultSet res = conn.createStatement()
+ .executeQuery("SELECT balance FROM accounts WHERE id = "
+ + from);
+ if(!res.next()) {
+ throw new AccountNotFoundException(from);
+ }
+
+ int balance = res.getInt("balance");
+                if(balance < amount) {
+ throw new InsufficientBalanceException();
+ }
+
+ // Perform the transfer.
+ conn.createStatement()
+ .executeUpdate("UPDATE accounts SET balance = balance - "
+ + amount + " where id = " + from);
+ conn.createStatement()
+ .executeUpdate("UPDATE accounts SET balance = balance + "
+ + amount + " where id = " + to);
+ }
+ };
+ }
+
+ public static void retryTransaction(Connection conn, RetryableTransaction tx)
+ throws SQLException, InsufficientBalanceException,
+ AccountNotFoundException, AmbiguousCommitException {
+
+ Savepoint sp = conn.setSavepoint("cockroach_restart");
+ while(true) {
+ boolean releaseAttempted = false;
+ try {
+ tx.run(conn);
+ releaseAttempted = true;
+ conn.releaseSavepoint(sp);
+ }
+ catch(SQLException e) {
+ String sqlState = e.getSQLState();
+
+ // Check if the error code indicates a SERIALIZATION_FAILURE.
+ if(sqlState.equals("40001")) {
+ // Signal the database that we will attempt a retry.
+ conn.rollback(sp);
+ continue;
+ } else if(releaseAttempted) {
+ throw new AmbiguousCommitException(e);
+ } else {
+ throw e;
+ }
+ }
+ break;
+ }
+ conn.commit();
+ }
+
+ public static void main(String[] args)
+ throws ClassNotFoundException, SQLException {
+
+ // Load the Postgres JDBC driver.
+ Class.forName("org.postgresql.Driver");
+
+ // Connect to the 'bank' database.
+ Properties props = new Properties();
+ props.setProperty("user", "maxroach");
+ props.setProperty("sslmode", "disable");
+
+ Connection db = DriverManager
+ .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
+
+
+ try {
+ // We need to turn off autocommit mode to allow for
+ // multi-statement transactions.
+ db.setAutoCommit(false);
+
+ // Perform the transfer. This assumes the 'accounts'
+ // table has already been created in the database.
+ RetryableTransaction transfer = transferFunds(1, 2, 100);
+ retryTransaction(db, transfer);
+
+ // Check balances after transfer.
+ db.setAutoCommit(true);
+ ResultSet res = db.createStatement()
+ .executeQuery("SELECT id, balance FROM accounts");
+ while (res.next()) {
+ System.out.printf("\taccount %s: %s\n", res.getInt("id"),
+ res.getInt("balance"));
+ }
+
+ } catch(InsufficientBalanceException e) {
+ System.out.println("Insufficient balance");
+ } catch(AccountNotFoundException e) {
+ System.out.println("No users in the table with id " + e.account);
+ } catch(AmbiguousCommitException e) {
+ System.out.println("Ambiguous result encountered: " + e);
+ } catch(SQLException e) {
+            System.out.println("SQLException encountered: " + e);
+ } finally {
+ // Close the database connection.
+ db.close();
+ }
+ }
+}
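`retryTransaction` above implements the client-side savepoint protocol: set a savepoint once, run the work, release the savepoint and commit on success, and roll back to the savepoint and re-run on SQLSTATE 40001. A toy Python sketch of the same control flow against a stub connection API (all names here are illustrative, not a real driver API):

```python
class RetryError(Exception):
    """Stands in for a SQLException carrying SQLSTATE 40001."""

def retry_transaction(conn, tx, savepoint_name="cockroach_restart"):
    # The savepoint is set once, before the loop; each failed attempt
    # rolls back to it rather than creating a new one.
    sp = conn.set_savepoint(savepoint_name)
    while True:
        try:
            tx(conn)
            conn.release_savepoint(sp)
            break
        except RetryError:
            # Serialization failure: signal the database we will retry.
            conn.rollback_to(sp)
    conn.commit()
```

Note the ordering: the release happens before the commit, which is why the Java version needs `AmbiguousCommitException` when an error arrives after the release was attempted.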
diff --git a/_includes/v20.2/app/insecure/activerecord-basic-sample.rb b/_includes/v20.2/app/insecure/activerecord-basic-sample.rb
new file mode 100644
index 00000000000..601838ee789
--- /dev/null
+++ b/_includes/v20.2/app/insecure/activerecord-basic-sample.rb
@@ -0,0 +1,44 @@
+require 'active_record'
+require 'activerecord-cockroachdb-adapter'
+require 'pg'
+
+# Connect to CockroachDB through ActiveRecord.
+# In Rails, this configuration would go in config/database.yml as usual.
+ActiveRecord::Base.establish_connection(
+ adapter: 'cockroachdb',
+ username: 'maxroach',
+ database: 'bank',
+ host: 'localhost',
+ port: 26257,
+ sslmode: 'disable'
+)
+
+# Define the Account model.
+# In Rails, this would go in app/models/ as usual.
+class Account < ActiveRecord::Base
+ validates :id, presence: true
+ validates :balance, presence: true
+end
+
+# Define a migration for the accounts table.
+# In Rails, this would go in db/migrate/ as usual.
+class Schema < ActiveRecord::Migration[5.0]
+ def change
+ create_table :accounts, force: true do |t|
+ t.integer :balance
+ end
+ end
+end
+
+# Run the schema migration by hand.
+# In Rails, this would be done via rake db:migrate as usual.
+Schema.new.change()
+
+# Create two accounts, inserting two rows into the accounts table.
+Account.create(id: 1, balance: 1000)
+Account.create(id: 2, balance: 250)
+
+# Retrieve accounts and print out the balances
+Account.all.each do |acct|
+ puts "#{acct.id} #{acct.balance}"
+end
diff --git a/_includes/v20.2/app/insecure/basic-sample.clj b/_includes/v20.2/app/insecure/basic-sample.clj
new file mode 100644
index 00000000000..182b78b675e
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.clj
@@ -0,0 +1,31 @@
+(ns test.test
+ (:require [clojure.java.jdbc :as j]
+ [test.util :as util]))
+
+;; Define the connection parameters to the cluster.
+(def db-spec {:dbtype "postgresql"
+ :dbname "bank"
+ :host "localhost"
+ :port "26257"
+ :user "maxroach"})
+
+(defn test-basic []
+ ;; Connect to the cluster and run the code below with
+ ;; the connection object bound to 'conn'.
+ (j/with-db-connection [conn db-spec]
+
+ ;; Insert two rows into the "accounts" table.
+ (j/insert! conn :accounts {:id 1 :balance 1000})
+ (j/insert! conn :accounts {:id 2 :balance 250})
+
+ ;; Print out the balances.
+ (println "Initial balances:")
+ (->> (j/query conn ["SELECT id, balance FROM accounts"])
+ (map println)
+ doall)
+
+ ))
+
+
+(defn -main [& args]
+ (test-basic))
diff --git a/_includes/v20.2/app/insecure/basic-sample.cpp b/_includes/v20.2/app/insecure/basic-sample.cpp
new file mode 100644
index 00000000000..a06d84d1a25
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.cpp
@@ -0,0 +1,39 @@
+#include <cassert>
+#include <functional>
+#include <iostream>
+#include <stdexcept>
+#include <string>
+#include <pqxx/pqxx>
+
+using namespace std;
+
+int main() {
+ try {
+ // Connect to the "bank" database.
+ pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
+
+ pqxx::nontransaction w(c);
+
+ // Create the "accounts" table.
+ w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
+
+ // Insert two rows into the "accounts" table.
+ w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
+
+ // Print out the balances.
+ cout << "Initial balances:" << endl;
+ pqxx::result r = w.exec("SELECT id, balance FROM accounts");
+ for (auto row : r) {
+      cout << row[0].as<int>() << ' ' << row[1].as<int>() << endl;
+ }
+
+    w.commit(); // Note this doesn't do anything
+                // for a nontransaction, but is still required.
+ }
+ catch (const exception &e) {
+ cerr << e.what() << endl;
+ return 1;
+ }
+ cout << "Success" << endl;
+ return 0;
+}
diff --git a/_includes/v20.2/app/insecure/basic-sample.cs b/_includes/v20.2/app/insecure/basic-sample.cs
new file mode 100644
index 00000000000..b7cf8e1ff3f
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.cs
@@ -0,0 +1,50 @@
+using System;
+using System.Data;
+using Npgsql;
+
+namespace Cockroach
+{
+ class MainClass
+ {
+ static void Main(string[] args)
+ {
+ var connStringBuilder = new NpgsqlConnectionStringBuilder();
+ connStringBuilder.Host = "localhost";
+ connStringBuilder.Port = 26257;
+ connStringBuilder.SslMode = SslMode.Disable;
+ connStringBuilder.Username = "maxroach";
+ connStringBuilder.Database = "bank";
+ Simple(connStringBuilder.ConnectionString);
+ }
+
+ static void Simple(string connString)
+ {
+ using (var conn = new NpgsqlConnection(connString))
+ {
+ conn.Open();
+
+ // Create the "accounts" table.
+ new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
+
+ // Insert two rows into the "accounts" table.
+ using (var cmd = new NpgsqlCommand())
+ {
+ cmd.Connection = conn;
+ cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
+ cmd.Parameters.AddWithValue("id1", 1);
+ cmd.Parameters.AddWithValue("val1", 1000);
+ cmd.Parameters.AddWithValue("id2", 2);
+ cmd.Parameters.AddWithValue("val2", 250);
+ cmd.ExecuteNonQuery();
+ }
+
+ // Print out the balances.
+ System.Console.WriteLine("Initial balances:");
+ using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
+ using (var reader = cmd.ExecuteReader())
+ while (reader.Read())
+ Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
+ }
+ }
+ }
+}
diff --git a/_includes/v20.2/app/insecure/basic-sample.go b/_includes/v20.2/app/insecure/basic-sample.go
new file mode 100644
index 00000000000..6a647f51641
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.go
@@ -0,0 +1,44 @@
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ "log"
+
+ _ "github.com/lib/pq"
+)
+
+func main() {
+ // Connect to the "bank" database.
+ db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
+ if err != nil {
+ log.Fatal("error connecting to the database: ", err)
+ }
+
+ // Create the "accounts" table.
+ if _, err := db.Exec(
+ "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
+ log.Fatal(err)
+ }
+
+ // Insert two rows into the "accounts" table.
+ if _, err := db.Exec(
+ "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
+ log.Fatal(err)
+ }
+
+ // Print out the balances.
+ rows, err := db.Query("SELECT id, balance FROM accounts")
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer rows.Close()
+ fmt.Println("Initial balances:")
+ for rows.Next() {
+ var id, balance int
+ if err := rows.Scan(&id, &balance); err != nil {
+ log.Fatal(err)
+ }
+ fmt.Printf("%d %d\n", id, balance)
+ }
+}
diff --git a/_includes/v20.2/app/insecure/basic-sample.js b/_includes/v20.2/app/insecure/basic-sample.js
new file mode 100644
index 00000000000..f89ea020a74
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.js
@@ -0,0 +1,55 @@
+var async = require('async');
+var fs = require('fs');
+var pg = require('pg');
+
+// Connect to the "bank" database.
+var config = {
+ user: 'maxroach',
+ host: 'localhost',
+ database: 'bank',
+ port: 26257
+};
+
+// Create a pool.
+var pool = new pg.Pool(config);
+
+pool.connect(function (err, client, done) {
+
+ // Close communication with the database and exit.
+ var finish = function () {
+ done();
+ process.exit();
+ };
+
+ if (err) {
+ console.error('could not connect to cockroachdb', err);
+ finish();
+ }
+ async.waterfall([
+ function (next) {
+ // Create the 'accounts' table.
+ client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next);
+ },
+ function (results, next) {
+ // Insert two rows into the 'accounts' table.
+ client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next);
+ },
+ function (results, next) {
+ // Print out account balances.
+ client.query('SELECT id, balance FROM accounts;', next);
+ },
+ ],
+ function (err, results) {
+ if (err) {
+ console.error('Error inserting into and selecting from accounts: ', err);
+ finish();
+ }
+
+ console.log('Initial balances:');
+ results.rows.forEach(function (row) {
+ console.log(row);
+ });
+
+ finish();
+ });
+});
diff --git a/_includes/v20.2/app/insecure/basic-sample.php b/_includes/v20.2/app/insecure/basic-sample.php
new file mode 100644
index 00000000000..db5a26e3111
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.php
@@ -0,0 +1,21 @@
+<?php
+
+try {
+ $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
+ 'maxroach', null, array(
+ PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
+ PDO::ATTR_EMULATE_PREPARES => true,
+ PDO::ATTR_PERSISTENT => true
+ ));
+
+ $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
+
+ print "Account balances:\r\n";
+ foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
+ print $row['id'] . ': ' . $row['balance'] . "\r\n";
+ }
+} catch (Exception $e) {
+ print $e->getMessage() . "\r\n";
+ exit(1);
+}
+?>
diff --git a/_includes/v20.2/app/insecure/basic-sample.py b/_includes/v20.2/app/insecure/basic-sample.py
new file mode 100644
index 00000000000..c0df43b8afc
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.py
@@ -0,0 +1,144 @@
+#!/usr/bin/env python3
+
+import psycopg2
+import psycopg2.errorcodes
+import time
+import logging
+import random
+
+
+def create_accounts(conn):
+ with conn.cursor() as cur:
+ cur.execute('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
+ cur.execute('UPSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
+ logging.debug("create_accounts(): status message: {}".format(cur.statusmessage))
+ conn.commit()
+
+
+def print_balances(conn):
+ with conn.cursor() as cur:
+ cur.execute("SELECT id, balance FROM accounts")
+ logging.debug("print_balances(): status message: {}".format(cur.statusmessage))
+ rows = cur.fetchall()
+ conn.commit()
+ print("Balances at {}".format(time.asctime()))
+ for row in rows:
+ print([str(cell) for cell in row])
+
+
+def delete_accounts(conn):
+ with conn.cursor() as cur:
+ cur.execute("DELETE FROM accounts")
+ logging.debug("delete_accounts(): status message: {}".format(cur.statusmessage))
+ conn.commit()
+
+
+# Wrapper for a transaction.
+# This automatically re-calls "op" with the open transaction as an argument
+# as long as the database server asks for the transaction to be retried.
+def run_transaction(conn, op):
+ retries = 0
+ max_retries = 3
+ with conn:
+ while True:
+ retries += 1
+ if retries > max_retries:
+ err_msg = "Transaction did not succeed after {} retries".format(max_retries)
+ raise ValueError(err_msg)
+
+ try:
+ op(conn)
+
+ # If we reach this point, we were able to commit, so we break
+ # from the retry loop.
+ break
+ except psycopg2.Error as e:
+ logging.debug("e.pgcode: {}".format(e.pgcode))
+ if e.pgcode == '40001':
+ # This is a retry error, so we roll back the current
+ # transaction and sleep for a bit before retrying. The
+ # sleep time increases for each failed transaction.
+ conn.rollback()
+ logging.debug("EXECUTE SERIALIZATION_FAILURE BRANCH")
+ sleep_ms = (2**retries) * 0.1 * (random.random() + 0.5)
+ logging.debug("Sleeping {} seconds".format(sleep_ms))
+ time.sleep(sleep_ms)
+ continue
+ else:
+ logging.debug("EXECUTE NON-SERIALIZATION_FAILURE BRANCH")
+ raise e
+
+
+# This function is used to test the transaction retry logic. It can be deleted
+# from production code.
+def test_retry_loop(conn):
+ with conn.cursor() as cur:
+ # The first statement in a transaction can be retried transparently on
+ # the server, so we need to add a dummy statement so that our
+ # force_retry() statement isn't the first one.
+ cur.execute('SELECT now()')
+ cur.execute("SELECT crdb_internal.force_retry('1s'::INTERVAL)")
+ logging.debug("test_retry_loop(): status message: {}".format(cur.statusmessage))
+
+
+def transfer_funds(conn, frm, to, amount):
+ with conn.cursor() as cur:
+
+ # Check the current balance.
+ cur.execute("SELECT balance FROM accounts WHERE id = %s", (frm,))
+ from_balance = cur.fetchone()[0]
+ if from_balance < amount:
+ err_msg = "Insufficient funds in account {}: have {}, need {}".format(frm, from_balance, amount)
+ raise RuntimeError(err_msg)
+
+ # Perform the transfer.
+ cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
+ (amount, frm))
+ cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
+ (amount, to))
+ conn.commit()
+ logging.debug("transfer_funds(): status message: {}".format(cur.statusmessage))
+
+
+def main():
+
+ dsn = 'postgresql://maxroach@localhost:26257/bank?sslmode=disable'
+ conn = psycopg2.connect(dsn)
+
+ # Uncomment the below to turn on logging to the console. This was useful
+ # when testing transaction retry handling. It is not necessary for
+ # production code.
+ # log_level = getattr(logging, 'DEBUG', None)
+ # logging.basicConfig(level=log_level)
+
+ create_accounts(conn)
+
+ print_balances(conn)
+
+ amount = 100
+ fromId = 1
+ toId = 2
+
+ try:
+ run_transaction(conn, lambda conn: transfer_funds(conn, fromId, toId, amount))
+
+ # The function below is used to test the transaction retry logic. It
+ # can be deleted from production code.
+ # run_transaction(conn, lambda conn: test_retry_loop(conn))
+ except ValueError as ve:
+ # Below, we print the error and continue on so this example is easy to
+ # run (and run, and run...). In real code you should handle this error
+ # and any others thrown by the database interaction.
+ logging.debug("run_transaction(conn, op) failed: {}".format(ve))
+ pass
+
+ print_balances(conn)
+
+ delete_accounts(conn)
+
+ # Close communication with the database.
+ conn.close()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/_includes/v20.2/app/insecure/basic-sample.rb b/_includes/v20.2/app/insecure/basic-sample.rb
new file mode 100644
index 00000000000..904460381f6
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.rb
@@ -0,0 +1,28 @@
+# Import the driver.
+require 'pg'
+
+# Connect to the "bank" database.
+conn = PG.connect(
+ user: 'maxroach',
+ dbname: 'bank',
+ host: 'localhost',
+ port: 26257,
+ sslmode: 'disable'
+)
+
+# Create the "accounts" table.
+conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
+
+# Insert two rows into the "accounts" table.
+conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
+
+# Print out the balances.
+puts 'Initial balances:'
+conn.exec('SELECT id, balance FROM accounts') do |res|
+ res.each do |row|
+ puts row
+ end
+end
+
+# Close communication with the database.
+conn.close()
diff --git a/_includes/v20.2/app/insecure/basic-sample.rs b/_includes/v20.2/app/insecure/basic-sample.rs
new file mode 100644
index 00000000000..8b7c3b115a9
--- /dev/null
+++ b/_includes/v20.2/app/insecure/basic-sample.rs
@@ -0,0 +1,32 @@
+use postgres::{Client, NoTls};
+
+fn main() {
+ let mut client = Client::connect("postgresql://maxroach@localhost:26257/bank", NoTls).unwrap();
+
+ // Create the "accounts" table.
+ client
+ .execute(
+ "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)",
+ &[],
+ )
+ .unwrap();
+
+ // Insert two rows into the "accounts" table.
+ client
+ .execute(
+ "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)",
+ &[],
+ )
+ .unwrap();
+
+ // Print out the balances.
+ println!("Initial balances:");
+ for row in &client
+ .query("SELECT id, balance FROM accounts", &[])
+ .unwrap()
+ {
+ let id: i64 = row.get(0);
+ let balance: i64 = row.get(1);
+ println!("{} {}", id, balance);
+ }
+}
diff --git a/_includes/v20.2/app/insecure/create-maxroach-user-and-bank-database.md b/_includes/v20.2/app/insecure/create-maxroach-user-and-bank-database.md
new file mode 100644
index 00000000000..5beb4cdd508
--- /dev/null
+++ b/_includes/v20.2/app/insecure/create-maxroach-user-and-bank-database.md
@@ -0,0 +1,32 @@
+Start the [built-in SQL shell](cockroach-sql.html):
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach sql --insecure
+~~~
+
+In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE USER IF NOT EXISTS maxroach;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE DATABASE bank;
+~~~
+
+Give the `maxroach` user the necessary permissions:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> GRANT ALL ON DATABASE bank TO maxroach;
+~~~
+
+Exit the SQL shell:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> \q
+~~~
diff --git a/_includes/v20.2/app/insecure/django-basic-sample/models.py b/_includes/v20.2/app/insecure/django-basic-sample/models.py
new file mode 100644
index 00000000000..6068f8bbb8e
--- /dev/null
+++ b/_includes/v20.2/app/insecure/django-basic-sample/models.py
@@ -0,0 +1,17 @@
+from django.db import models
+
+class Customers(models.Model):
+ id = models.AutoField(primary_key=True)
+ name = models.CharField(max_length=250)
+
+class Products(models.Model):
+ id = models.AutoField(primary_key=True)
+ name = models.CharField(max_length=250)
+ price = models.DecimalField(max_digits=18, decimal_places=2)
+
+class Orders(models.Model):
+ id = models.AutoField(primary_key=True)
+ subtotal = models.DecimalField(max_digits=18, decimal_places=2)
+ customer = models.ForeignKey(Customers, on_delete=models.CASCADE, null=True)
+ product = models.ManyToManyField(Products)
+
diff --git a/_includes/v20.2/app/insecure/django-basic-sample/settings.py b/_includes/v20.2/app/insecure/django-basic-sample/settings.py
new file mode 100644
index 00000000000..d23c128e33f
--- /dev/null
+++ b/_includes/v20.2/app/insecure/django-basic-sample/settings.py
@@ -0,0 +1,124 @@
+"""
+Django settings for myproject project.
+
+Generated by 'django-admin startproject' using Django 3.0.
+
+For more information on this file, see
+https://docs.djangoproject.com/en/3.0/topics/settings/
+
+For the full list of settings and their values, see
+https://docs.djangoproject.com/en/3.0/ref/settings/
+"""
+
+import os
+
+# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
+BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+
+
+# Quick-start development settings - unsuitable for production
+# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
+
+# SECURITY WARNING: keep the secret key used in production secret!
+SECRET_KEY = 'spl=g73)8-)ja%x*k1eje4d#&24#t)zao^s$6vc1rdk(e3t!e('
+
+# SECURITY WARNING: don't run with debug turned on in production!
+DEBUG = True
+
+ALLOWED_HOSTS = ['0.0.0.0']
+
+
+# Application definition
+
+INSTALLED_APPS = [
+ 'django.contrib.admin',
+ 'django.contrib.auth',
+ 'django.contrib.contenttypes',
+ 'django.contrib.sessions',
+ 'django.contrib.messages',
+ 'django.contrib.staticfiles',
+ 'myproject',
+]
+
+MIDDLEWARE = [
+ 'django.middleware.security.SecurityMiddleware',
+ 'django.contrib.sessions.middleware.SessionMiddleware',
+ 'django.middleware.common.CommonMiddleware',
+ 'django.middleware.csrf.CsrfViewMiddleware',
+ 'django.contrib.auth.middleware.AuthenticationMiddleware',
+ 'django.contrib.messages.middleware.MessageMiddleware',
+ 'django.middleware.clickjacking.XFrameOptionsMiddleware',
+]
+
+ROOT_URLCONF = 'myproject.urls'
+
+TEMPLATES = [
+ {
+ 'BACKEND': 'django.template.backends.django.DjangoTemplates',
+ 'DIRS': [],
+ 'APP_DIRS': True,
+ 'OPTIONS': {
+ 'context_processors': [
+ 'django.template.context_processors.debug',
+ 'django.template.context_processors.request',
+ 'django.contrib.auth.context_processors.auth',
+ 'django.contrib.messages.context_processors.messages',
+ ],
+ },
+ },
+]
+
+WSGI_APPLICATION = 'myproject.wsgi.application'
+
+
+# Database
+# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
+
+DATABASES = {
+ 'default': {
+ 'ENGINE': 'django_cockroachdb',
+ 'NAME': 'bank',
+ 'USER': 'django',
+ 'HOST': 'localhost',
+ 'PORT': '26257',
+ }
+}
+
+
+# Password validation
+# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
+
+AUTH_PASSWORD_VALIDATORS = [
+ {
+ 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
+ },
+]
+
+
+# Internationalization
+# https://docs.djangoproject.com/en/3.0/topics/i18n/
+
+LANGUAGE_CODE = 'en-us'
+
+TIME_ZONE = 'UTC'
+
+USE_I18N = True
+
+USE_L10N = True
+
+USE_TZ = True
+
+
+# Static files (CSS, JavaScript, Images)
+# https://docs.djangoproject.com/en/3.0/howto/static-files/
+
+STATIC_URL = '/static/'
diff --git a/_includes/v20.2/app/insecure/django-basic-sample/urls.py b/_includes/v20.2/app/insecure/django-basic-sample/urls.py
new file mode 100644
index 00000000000..9550d713ffa
--- /dev/null
+++ b/_includes/v20.2/app/insecure/django-basic-sample/urls.py
@@ -0,0 +1,20 @@
+from django.contrib import admin
+from django.urls import path
+
+from .views import CustomersView, OrdersView, PingView, ProductView
+
+urlpatterns = [
+ path('admin/', admin.site.urls),
+
+ path('ping/', PingView.as_view()),
+
+ # Endpoints for customers URL.
+ path('customer/', CustomersView.as_view(), name='customers'),
+ path('customer/<int:id>/', CustomersView.as_view(), name='customers'),
+
+ # Endpoints for products URL.
+ path('product/', ProductView.as_view(), name='product'),
+ path('product/<int:id>/', ProductView.as_view(), name='product'),
+
+ path('order/', OrdersView.as_view(), name='order'),
+]
diff --git a/_includes/v20.2/app/insecure/django-basic-sample/views.py b/_includes/v20.2/app/insecure/django-basic-sample/views.py
new file mode 100644
index 00000000000..78143916ee8
--- /dev/null
+++ b/_includes/v20.2/app/insecure/django-basic-sample/views.py
@@ -0,0 +1,108 @@
+from django.http import JsonResponse, HttpResponse
+from django.utils.decorators import method_decorator
+from django.views.generic import View
+from django.views.decorators.csrf import csrf_exempt
+from django.db import Error, IntegrityError
+from django.db.transaction import atomic
+from psycopg2 import errorcodes
+
+import json
+import sys
+import time
+
+from .models import *
+
+# Warning: Do not use retry_on_exception in an inner nested transaction.
+def retry_on_exception(num_retries=3, on_failure=HttpResponse(status=500), delay_=0.5, backoff_=1.5):
+ def retry(view):
+ def wrapper(*args, **kwargs):
+ delay = delay_
+ for i in range(num_retries):
+ try:
+ return view(*args, **kwargs)
+ except IntegrityError as ex:
+ if i == num_retries - 1:
+ return on_failure
+ elif getattr(ex.__cause__, 'pgcode', '') == errorcodes.SERIALIZATION_FAILURE:
+ time.sleep(delay)
+ delay *= backoff_
+ except Error as ex:
+ return on_failure
+ return wrapper
+ return retry
+
+class PingView(View):
+ def get(self, request, *args, **kwargs):
+ return HttpResponse("python/django", status=200)
+
+@method_decorator(csrf_exempt, name='dispatch')
+class CustomersView(View):
+ def get(self, request, id=None, *args, **kwargs):
+ if id is None:
+ customers = list(Customers.objects.values())
+ else:
+ customers = list(Customers.objects.filter(id=id).values())
+ return JsonResponse(customers, safe=False)
+
+ @retry_on_exception(3)
+ @atomic
+ def post(self, request, *args, **kwargs):
+ form_data = json.loads(request.body.decode())
+ name = form_data['name']
+ c = Customers(name=name)
+ c.save()
+ return HttpResponse(status=200)
+
+ @retry_on_exception(3)
+ @atomic
+ def delete(self, request, id=None, *args, **kwargs):
+ if id is None:
+ return HttpResponse(status=404)
+ Customers.objects.filter(id=id).delete()
+ return HttpResponse(status=200)
+
+ # The PUT method is shadowed by the POST method, so there doesn't seem
+ # to be a reason to include it.
+
+@method_decorator(csrf_exempt, name='dispatch')
+class ProductView(View):
+ def get(self, request, id=None, *args, **kwargs):
+ if id is None:
+ products = list(Products.objects.values())
+ else:
+ products = list(Products.objects.filter(id=id).values())
+ return JsonResponse(products, safe=False)
+
+ @retry_on_exception(3)
+ @atomic
+ def post(self, request, *args, **kwargs):
+ form_data = json.loads(request.body.decode())
+ name, price = form_data['name'], form_data['price']
+ p = Products(name=name, price=price)
+ p.save()
+ return HttpResponse(status=200)
+
+ # The REST API outlined in the GitHub repo does not require /product/
+ # to support PUT or DELETE methods.
+
+@method_decorator(csrf_exempt, name='dispatch')
+class OrdersView(View):
+ def get(self, request, id=None, *args, **kwargs):
+ if id is None:
+ orders = list(Orders.objects.values())
+ else:
+ orders = list(Orders.objects.filter(id=id).values())
+ return JsonResponse(orders, safe=False)
+
+ @retry_on_exception(3)
+ @atomic
+ def post(self, request, *args, **kwargs):
+ form_data = json.loads(request.body.decode())
+ c = Customers.objects.get(id=form_data['customer']['id'])
+ o = Orders(subtotal=form_data['subtotal'], customer=c)
+ o.save()
+ for p in form_data['products']:
+ p = Products.objects.get(id=p['id'])
+ o.product.add(p)
+ o.save()
+ return HttpResponse(status=200)
diff --git a/_includes/v20.2/app/insecure/gorm-basic-sample.go b/_includes/v20.2/app/insecure/gorm-basic-sample.go
new file mode 100644
index 00000000000..b8529962c2b
--- /dev/null
+++ b/_includes/v20.2/app/insecure/gorm-basic-sample.go
@@ -0,0 +1,41 @@
+package main
+
+import (
+ "fmt"
+ "log"
+
+ // Import GORM-related packages.
+ "github.com/jinzhu/gorm"
+ _ "github.com/jinzhu/gorm/dialects/postgres"
+)
+
+// Account is our model, which corresponds to the "accounts" database table.
+type Account struct {
+ ID int `gorm:"primary_key"`
+ Balance int
+}
+
+func main() {
+ // Connect to the "bank" database as the "maxroach" user.
+ const addr = "postgresql://maxroach@localhost:26257/bank?sslmode=disable"
+ db, err := gorm.Open("postgres", addr)
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer db.Close()
+
+ // Automatically create the "accounts" table based on the Account model.
+ db.AutoMigrate(&Account{})
+
+ // Insert two rows into the "accounts" table.
+ db.Create(&Account{ID: 1, Balance: 1000})
+ db.Create(&Account{ID: 2, Balance: 250})
+
+ // Print out the balances.
+ var accounts []Account
+ db.Find(&accounts)
+ fmt.Println("Initial balances:")
+ for _, account := range accounts {
+ fmt.Printf("%d %d\n", account.ID, account.Balance)
+ }
+}
diff --git a/_includes/v20.2/app/insecure/gorm-sample.go b/_includes/v20.2/app/insecure/gorm-sample.go
new file mode 100644
index 00000000000..cf12ee40ebc
--- /dev/null
+++ b/_includes/v20.2/app/insecure/gorm-sample.go
@@ -0,0 +1,206 @@
+package main
+
+import (
+ "fmt"
+ "log"
+ "math"
+ "math/rand"
+ "time"
+
+ // Import GORM-related packages.
+ "github.com/jinzhu/gorm"
+ _ "github.com/jinzhu/gorm/dialects/postgres"
+
+ // Necessary in order to check for transaction retry error codes.
+ "github.com/lib/pq"
+)
+
+// Account is our model, which corresponds to the "accounts" database
+// table.
+type Account struct {
+ ID int `gorm:"primary_key"`
+ Balance int
+}
+
+// Functions of type `txnFunc` are passed as arguments to our
+// `runTransaction` wrapper that handles transaction retries for us
+// (see implementation below).
+type txnFunc func(*gorm.DB) error
+
+// This function is used for testing the transaction retry loop. It
+// can be deleted from production code.
+var forceRetryLoop txnFunc = func(db *gorm.DB) error {
+
+ // The first statement in a transaction can be retried transparently
+ // on the server, so we need to add a dummy statement so that our
+ // force_retry statement isn't the first one.
+ if err := db.Exec("SELECT now()").Error; err != nil {
+ return err
+ }
+ // Used to force a transaction retry.
+ if err := db.Exec("SELECT crdb_internal.force_retry('1s'::INTERVAL)").Error; err != nil {
+ return err
+ }
+ return nil
+}
+
+func transferFunds(db *gorm.DB, fromID int, toID int, amount int) error {
+ var fromAccount Account
+ var toAccount Account
+
+ db.First(&fromAccount, fromID)
+ db.First(&toAccount, toID)
+
+ if fromAccount.Balance < amount {
+ return fmt.Errorf("account %d balance %d is lower than transfer amount %d", fromAccount.ID, fromAccount.Balance, amount)
+ }
+
+ fromAccount.Balance -= amount
+ toAccount.Balance += amount
+
+ if err := db.Save(&fromAccount).Error; err != nil {
+ return err
+ }
+ if err := db.Save(&toAccount).Error; err != nil {
+ return err
+ }
+ return nil
+}
+
+func main() {
+ // Connect to the "bank" database as the "maxroach" user.
+ const addr = "postgresql://maxroach@localhost:26257/bank?sslmode=disable"
+ db, err := gorm.Open("postgres", addr)
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer db.Close()
+
+ // Set to `true` and GORM will print out all DB queries.
+ db.LogMode(false)
+
+ // Automatically create the "accounts" table based on the Account
+ // model.
+ db.AutoMigrate(&Account{})
+
+ // Insert two rows into the "accounts" table.
+ var fromID = 1
+ var toID = 2
+ db.Create(&Account{ID: fromID, Balance: 1000})
+ db.Create(&Account{ID: toID, Balance: 250})
+
+ // The sequence of steps in this section is:
+ // 1. Print account balances.
+ // 2. Set up some Accounts and transfer funds between them inside
+ // a transaction.
+ // 3. Print account balances again to verify the transfer occurred.
+
+ // Print balances before transfer.
+ printBalances(db)
+
+ // The amount to be transferred between the accounts.
+ var amount = 100
+
+ // Transfer funds between accounts. To handle potential
+ // transaction retry errors, we wrap the call to `transferFunds`
+ // in `runTransaction`, a wrapper which implements a retry loop
+ // with exponential backoff around our access to the database (see
+ // the implementation for details).
+ if err := runTransaction(db,
+ func(*gorm.DB) error {
+ return transferFunds(db, fromID, toID, amount)
+ },
+ ); err != nil {
+ // If the error is returned, it's either:
+ // 1. Not a transaction retry error, i.e., some other kind
+ // of database error that you should handle here.
+ // 2. A transaction retry error that has occurred more than
+ // N times (defined by the `maxRetries` variable inside
+ // `runTransaction`), in which case you will need to figure
+ // out why your database access is resulting in so much
+ // contention (see 'Understanding and avoiding transaction
+ // contention':
+ // https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention)
+ fmt.Println(err)
+ }
+
+ // Print balances after transfer to ensure that it worked.
+ printBalances(db)
+
+ // Delete accounts so we can start fresh when we want to run this
+ // program again.
+ deleteAccounts(db)
+}
+
+// Wrapper for a transaction. This automatically re-calls `fn` with
+// the open transaction as an argument as long as the database server
+// asks for the transaction to be retried.
+func runTransaction(db *gorm.DB, fn txnFunc) error {
+ var maxRetries = 3
+ for retries := 0; retries <= maxRetries; retries++ {
+ if retries == maxRetries {
+ return fmt.Errorf("hit max of %d retries, aborting", retries)
+ }
+ txn := db.Begin()
+ if err := fn(txn); err != nil {
+ // We need to cast GORM's db.Error to *pq.Error so we can
+ // detect the Postgres transaction retry error code and
+ // handle retries appropriately.
+ pqErr := err.(*pq.Error)
+ if pqErr.Code == "40001" {
+ // Since this is a transaction retry error, we
+ // ROLLBACK the transaction and sleep a little before
+ // trying again. Each time through the loop we sleep
+ // for a little longer than the last time
+ // (A.K.A. exponential backoff).
+ txn.Rollback()
+ var sleepMs = math.Pow(2, float64(retries)) * 100 * (rand.Float64() + 0.5)
+ fmt.Printf("Hit 40001 transaction retry error, sleeping %v milliseconds\n", sleepMs)
+ time.Sleep(time.Millisecond * time.Duration(sleepMs))
+ } else {
+ // If it's not a retry error, it's some other sort of
+ // DB interaction error that needs to be handled by
+ // the caller.
+ return err
+ }
+ } else {
+ // All went well, so we try to commit and break out of the
+ // retry loop if possible.
+ if err := txn.Commit().Error; err != nil {
+ pqErr := err.(*pq.Error)
+ if pqErr.Code == "40001" {
+ // However, our attempt to COMMIT could also
+ // result in a retry error, in which case we
+ // continue back through the loop and try again.
+ continue
+ } else {
+ // If it's not a retry error, it's some other sort
+ // of DB interaction error that needs to be
+ // handled by the caller.
+ return err
+ }
+ }
+ break
+ }
+ }
+ return nil
+}
+
+func printBalances(db *gorm.DB) {
+ var accounts []Account
+ db.Find(&accounts)
+ fmt.Printf("Balance at '%s':\n", time.Now())
+ for _, account := range accounts {
+ fmt.Printf("%d %d\n", account.ID, account.Balance)
+ }
+}
+
+func deleteAccounts(db *gorm.DB) error {
+ // Used to tear down the accounts table so we can re-run this
+ // program.
+ err := db.Exec("DELETE from accounts where ID > 0").Error
+ if err != nil {
+ return err
+ }
+ return nil
+}
diff --git a/_includes/v20.2/app/insecure/hibernate-basic-sample/Sample.java b/_includes/v20.2/app/insecure/hibernate-basic-sample/Sample.java
new file mode 100644
index 00000000000..58d28f37a4b
--- /dev/null
+++ b/_includes/v20.2/app/insecure/hibernate-basic-sample/Sample.java
@@ -0,0 +1,236 @@
+package com.cockroachlabs;
+
+import org.hibernate.Session;
+import org.hibernate.SessionFactory;
+import org.hibernate.Transaction;
+import org.hibernate.JDBCException;
+import org.hibernate.cfg.Configuration;
+
+import java.util.*;
+import java.util.function.Function;
+
+import javax.persistence.Column;
+import javax.persistence.Entity;
+import javax.persistence.Id;
+import javax.persistence.Table;
+
+public class Sample {
+
+ private static final Random RAND = new Random();
+ private static final boolean FORCE_RETRY = false;
+ private static final String RETRY_SQL_STATE = "40001";
+ private static final int MAX_ATTEMPT_COUNT = 6;
+
+ // Account is our model, which corresponds to the "accounts" database table.
+ @Entity
+ @Table(name="accounts")
+ public static class Account {
+ @Id
+ @Column(name="id")
+ public long id;
+
+ public long getId() {
+ return id;
+ }
+
+ @Column(name="balance")
+ public long balance;
+ public long getBalance() {
+ return balance;
+ }
+ public void setBalance(long newBalance) {
+ this.balance = newBalance;
+ }
+
+ // Convenience constructor.
+ public Account(int id, int balance) {
+ this.id = id;
+ this.balance = balance;
+ }
+
+ // Hibernate needs a default (no-arg) constructor to create model objects.
+ public Account() {}
+ }
+
+ private static Function<Session, Long> addAccounts() throws JDBCException {
+ Function<Session, Long> f = s -> {
+ long rv = 0;
+ try {
+ s.save(new Account(1, 1000));
+ s.save(new Account(2, 250));
+ s.save(new Account(3, 314159));
+ rv = 1;
+ System.out.printf("APP: addAccounts() --> %d\n", rv);
+ } catch (JDBCException e) {
+ throw e;
+ }
+ return rv;
+ };
+ return f;
+ }
+
+ private static Function<Session, Long> transferFunds(long fromId, long toId, long amount) throws JDBCException {
+ Function<Session, Long> f = s -> {
+ long rv = 0;
+ try {
+ Account fromAccount = (Account) s.get(Account.class, fromId);
+ Account toAccount = (Account) s.get(Account.class, toId);
+ if (!(amount > fromAccount.getBalance())) {
+ fromAccount.balance -= amount;
+ toAccount.balance += amount;
+ s.save(fromAccount);
+ s.save(toAccount);
+ rv = amount;
+ System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
+ }
+ } catch (JDBCException e) {
+ throw e;
+ }
+ return rv;
+ };
+ return f;
+ }
+
+ // Test our retry handling logic if FORCE_RETRY is true. This
+ // method is only used to test the retry logic. It is not
+ // intended for production code.
+ private static Function<Session, Long> forceRetryLogic() throws JDBCException {
+ Function<Session, Long> f = s -> {
+ long rv = -1;
+ try {
+ System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
+ s.createNativeQuery("SELECT crdb_internal.force_retry('1s')").executeUpdate();
+ } catch (JDBCException e) {
+ System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
+ throw e;
+ }
+ return rv;
+ };
+ return f;
+ }
+
+ private static Function<Session, Long> getAccountBalance(long id) throws JDBCException {
+ Function<Session, Long> f = s -> {
+ long balance;
+ try {
+ Account account = s.get(Account.class, id);
+ balance = account.getBalance();
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
+ } catch (JDBCException e) {
+ throw e;
+ }
+ return balance;
+ };
+ return f;
+ }
+
+ // Run SQL code in a way that automatically handles the
+ // transaction retry logic so we don't have to duplicate it in
+ // various places.
+ private static long runTransaction(Session session, Function<Session, Long> fn) {
+ long rv = 0;
+ int attemptCount = 0;
+
+ while (attemptCount < MAX_ATTEMPT_COUNT) {
+ attemptCount++;
+
+ if (attemptCount > 1) {
+ System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount);
+ }
+
+ Transaction txn = session.beginTransaction();
+ System.out.printf("APP: BEGIN;\n");
+
+ if (attemptCount == MAX_ATTEMPT_COUNT) {
+ String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
+ throw new RuntimeException(err);
+ }
+
+ // This block is only used to test the retry logic.
+ // It is not necessary in production code. See also
+ // the method 'testRetryLogic()'.
+ if (FORCE_RETRY) {
+ session.createNativeQuery("SELECT now()").list();
+ }
+
+ try {
+ rv = fn.apply(session);
+ if (rv != -1) {
+ txn.commit();
+ System.out.printf("APP: COMMIT;\n");
+ break;
+ }
+ } catch (JDBCException e) {
+ if (RETRY_SQL_STATE.equals(e.getSQLState())) {
+ // Since this is a transaction retry error, we
+ // roll back the transaction and sleep a little
+ // before trying again. Each time through the
+ // loop we sleep for a little longer than the last
+ // time (A.K.A. exponential backoff).
+ System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", e.getSQLState(), e.getMessage(), attemptCount);
+ System.out.printf("APP: ROLLBACK;\n");
+ txn.rollback();
+ int sleepMillis = (int)(Math.pow(2, attemptCount) * 100) + RAND.nextInt(100);
+ System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
+ try {
+ Thread.sleep(sleepMillis);
+ } catch (InterruptedException ignored) {
+ // no-op
+ }
+ rv = -1;
+ } else {
+ throw e;
+ }
+ }
+ }
+ return rv;
+ }
+
+ public static void main(String[] args) {
+ // Create a SessionFactory based on our hibernate.cfg.xml configuration
+ // file, which defines how to connect to the database.
+ SessionFactory sessionFactory =
+ new Configuration()
+ .configure("hibernate.cfg.xml")
+ .addAnnotatedClass(Account.class)
+ .buildSessionFactory();
+
+ try (Session session = sessionFactory.openSession()) {
+ long fromAccountId = 1;
+ long toAccountId = 2;
+ long transferAmount = 100;
+
+ if (FORCE_RETRY) {
+ System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
+ runTransaction(session, forceRetryLogic());
+ } else {
+
+ runTransaction(session, addAccounts());
+ long fromBalance = runTransaction(session, getAccountBalance(fromAccountId));
+ long toBalance = runTransaction(session, getAccountBalance(toAccountId));
+ if (fromBalance != -1 && toBalance != -1) {
+ // Success!
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
+ }
+
+ // Transfer $100 from account 1 to account 2
+ long transferResult = runTransaction(session, transferFunds(fromAccountId, toAccountId, transferAmount));
+ if (transferResult != -1) {
+ // Success!
+ System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
+
+ long fromBalanceAfter = runTransaction(session, getAccountBalance(fromAccountId));
+ long toBalanceAfter = runTransaction(session, getAccountBalance(toAccountId));
+ if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
+ // Success!
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
+ }
+ }
+ }
+ } finally {
+ sessionFactory.close();
+ }
+ }
+}
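The retry loop above sleeps for `(2 ** attemptCount) * 100` milliseconds plus up to 100 ms of random jitter before each retry. A standalone sketch of that backoff schedule (names are illustrative, not part of the sample):

```python
import random

# One shared, seeded RNG so the sketch is deterministic; a real
# client would use an unseeded generator.
_RNG = random.Random(0)

def backoff_millis(attempt, base_ms=100, jitter_ms=100):
    """Exponential backoff with jitter: (2 ** attempt) * base_ms plus
    up to jitter_ms of random noise, as in the sample's sleep."""
    return int((2 ** attempt) * base_ms) + _RNG.randrange(jitter_ms)

# The deterministic part doubles with each attempt; jitter spreads
# out retries from concurrent clients.
for attempt in range(1, 5):
    delay = backoff_millis(attempt)
    assert (2 ** attempt) * 100 <= delay < (2 ** attempt) * 100 + 100
```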
diff --git a/_includes/v20.2/app/insecure/hibernate-basic-sample/build.gradle b/_includes/v20.2/app/insecure/hibernate-basic-sample/build.gradle
new file mode 100644
index 00000000000..36f33d73fe6
--- /dev/null
+++ b/_includes/v20.2/app/insecure/hibernate-basic-sample/build.gradle
@@ -0,0 +1,16 @@
+group 'com.cockroachlabs'
+version '1.0'
+
+apply plugin: 'java'
+apply plugin: 'application'
+
+mainClassName = 'com.cockroachlabs.Sample'
+
+repositories {
+ mavenCentral()
+}
+
+dependencies {
+ compile 'org.hibernate:hibernate-core:5.2.4.Final'
+ compile 'org.postgresql:postgresql:42.2.2.jre7'
+}
diff --git a/_includes/v20.2/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz b/_includes/v20.2/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz
new file mode 100644
index 00000000000..8205b379229
Binary files /dev/null and b/_includes/v20.2/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz differ
diff --git a/_includes/v20.2/app/insecure/hibernate-basic-sample/hibernate.cfg.xml b/_includes/v20.2/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
new file mode 100644
index 00000000000..ad27c7d746c
--- /dev/null
+++ b/_includes/v20.2/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
@@ -0,0 +1,20 @@
+<?xml version='1.0' encoding='utf-8'?>
+<!DOCTYPE hibernate-configuration PUBLIC
+        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
+        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
+<hibernate-configuration>
+    <session-factory>
+        <!-- Database connection settings -->
+        <property name="connection.driver_class">org.postgresql.Driver</property>
+        <property name="dialect">org.hibernate.dialect.PostgreSQL95Dialect</property>
+        <property name="hibernate.connection.url">jdbc:postgresql://127.0.0.1:26257/bank?sslmode=disable</property>
+        <property name="hibernate.connection.username">maxroach</property>
+
+        <!-- Drop and re-create the database schema on startup. -->
+        <property name="hbm2ddl.auto">create</property>
+
+        <!-- Echo all executed SQL to stdout. -->
+        <property name="show_sql">true</property>
+        <property name="format_sql">true</property>
+    </session-factory>
+</hibernate-configuration>
diff --git a/_includes/v20.2/app/insecure/jooq-basic-sample/Sample.java b/_includes/v20.2/app/insecure/jooq-basic-sample/Sample.java
new file mode 100644
index 00000000000..fdb8aa3115d
--- /dev/null
+++ b/_includes/v20.2/app/insecure/jooq-basic-sample/Sample.java
@@ -0,0 +1,215 @@
+package com.cockroachlabs;
+
+import com.cockroachlabs.example.jooq.db.Tables;
+import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord;
+import org.jooq.DSLContext;
+import org.jooq.SQLDialect;
+import org.jooq.Source;
+import org.jooq.conf.RenderQuotedNames;
+import org.jooq.conf.Settings;
+import org.jooq.exception.DataAccessException;
+import org.jooq.impl.DSL;
+
+import java.io.InputStream;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.*;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+
+import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS;
+
+public class Sample {
+
+ private static final Random RAND = new Random();
+ private static final boolean FORCE_RETRY = false;
+ private static final String RETRY_SQL_STATE = "40001";
+ private static final int MAX_ATTEMPT_COUNT = 6;
+
+ private static Function<DSLContext, Long> addAccounts() {
+ return ctx -> {
+ long rv = 0;
+
+ ctx.delete(ACCOUNTS).execute();
+ ctx.batchInsert(
+ new AccountsRecord(1L, 1000L),
+ new AccountsRecord(2L, 250L),
+ new AccountsRecord(3L, 314159L)
+ ).execute();
+
+ rv = 1;
+ System.out.printf("APP: addAccounts() --> %d\n", rv);
+ return rv;
+ };
+ }
+
+ private static Function<DSLContext, Long> transferFunds(long fromId, long toId, long amount) {
+ return ctx -> {
+ long rv = 0;
+
+ AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId));
+ AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId));
+
+ if (!(amount > fromAccount.getBalance())) {
+ fromAccount.setBalance(fromAccount.getBalance() - amount);
+ toAccount.setBalance(toAccount.getBalance() + amount);
+
+ ctx.batchUpdate(fromAccount, toAccount).execute();
+ rv = amount;
+ System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
+ }
+
+ return rv;
+ };
+ }
+
+ // Test our retry handling logic if FORCE_RETRY is true. This
+ // method is only used to test the retry logic. It is not
+ // intended for production code.
+ private static Function<DSLContext, Long> forceRetryLogic() {
+ return ctx -> {
+ long rv = -1;
+ try {
+ System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
+ ctx.execute("SELECT crdb_internal.force_retry('1s')");
+ } catch (DataAccessException e) {
+ System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
+ throw e;
+ }
+ return rv;
+ };
+ }
+
+ private static Function<DSLContext, Long> getAccountBalance(long id) {
+ return ctx -> {
+ AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id));
+ long balance = account.getBalance();
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
+ return balance;
+ };
+ }
+
+ // Run SQL code in a way that automatically handles the
+ // transaction retry logic so we don't have to duplicate it in
+ // various places.
+ private static long runTransaction(DSLContext session, Function<DSLContext, Long> fn) {
+ AtomicLong rv = new AtomicLong(0L);
+ AtomicInteger attemptCount = new AtomicInteger(0);
+
+ while (attemptCount.get() < MAX_ATTEMPT_COUNT) {
+ attemptCount.incrementAndGet();
+
+ if (attemptCount.get() > 1) {
+ System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get());
+ }
+
+ if (session.connectionResult(connection -> {
+ connection.setAutoCommit(false);
+ System.out.printf("APP: BEGIN;\n");
+
+ if (attemptCount.get() == MAX_ATTEMPT_COUNT) {
+ String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
+ throw new RuntimeException(err);
+ }
+
+ // This block is only used to test the retry logic.
+ // It is not necessary in production code. See also
+ // the method 'testRetryLogic()'.
+ if (FORCE_RETRY) {
+ session.fetch("SELECT now()");
+ }
+
+ try {
+ rv.set(fn.apply(session));
+ if (rv.get() != -1) {
+ connection.commit();
+ System.out.printf("APP: COMMIT;\n");
+ return true;
+ }
+ } catch (DataAccessException | SQLException e) {
+ String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState();
+
+ if (RETRY_SQL_STATE.equals(sqlState)) {
+ // Since this is a transaction retry error, we
+ // roll back the transaction and sleep a little
+ // before trying again. Each time through the
+ // loop we sleep for a little longer than the last
+ // time (A.K.A. exponential backoff).
+ System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get());
+ System.out.printf("APP: ROLLBACK;\n");
+ connection.rollback();
+ int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100);
+ System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
+ try {
+ Thread.sleep(sleepMillis);
+ } catch (InterruptedException ignored) {
+ // no-op
+ }
+ rv.set(-1L);
+ } else {
+ throw e;
+ }
+ }
+
+ return false;
+ })) {
+ break;
+ }
+ }
+
+ return rv.get();
+ }
+
+ public static void main(String[] args) throws Exception {
+ try (Connection connection = DriverManager.getConnection(
+ "jdbc:postgresql://localhost:26257/bank?sslmode=disable",
+ "maxroach",
+ ""
+ )) {
+ DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings()
+ .withExecuteLogging(true)
+ .withRenderQuotedNames(RenderQuotedNames.NEVER));
+
+ // Initialise database with db.sql script
+ try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) {
+ ctx.parser().parse(Source.of(in).readString()).executeBatch();
+ }
+
+ long fromAccountId = 1;
+ long toAccountId = 2;
+ long transferAmount = 100;
+
+ if (FORCE_RETRY) {
+ System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
+ runTransaction(ctx, forceRetryLogic());
+ } else {
+
+ runTransaction(ctx, addAccounts());
+ long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId));
+ long toBalance = runTransaction(ctx, getAccountBalance(toAccountId));
+ if (fromBalance != -1 && toBalance != -1) {
+ // Success!
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
+ }
+
+ // Transfer $100 from account 1 to account 2
+ long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount));
+ if (transferResult != -1) {
+ // Success!
+ System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
+
+ long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId));
+ long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId));
+ if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
+ // Success!
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
+ }
+ }
+ }
+ }
+ }
+}
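Both ORM samples above key their retry decision off SQLSTATE `40001` (serialization failure) and give up after `MAX_ATTEMPT_COUNT` tries. A minimal, framework-free sketch of that classify-and-retry loop (the exception class and function names are illustrative, not part of the samples):

```python
RETRY_SQL_STATE = "40001"
MAX_ATTEMPT_COUNT = 6

class RetryableError(Exception):
    """Stands in for a driver error carrying a SQLSTATE code."""
    def __init__(self, sqlstate):
        super().__init__(sqlstate)
        self.sqlstate = sqlstate

def run_with_retries(op):
    # Re-run op while it raises SQLSTATE 40001, up to MAX_ATTEMPT_COUNT.
    for attempt in range(1, MAX_ATTEMPT_COUNT + 1):
        try:
            return op()
        except RetryableError as e:
            # Non-retryable codes, or running out of attempts, propagate.
            if e.sqlstate != RETRY_SQL_STATE or attempt == MAX_ATTEMPT_COUNT:
                raise

# An op that fails twice with 40001, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RetryableError("40001")
    return "ok"

assert run_with_retries(flaky) == "ok"
assert calls["n"] == 3
```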
diff --git a/_includes/v20.2/app/insecure/jooq-basic-sample/jooq-basic-sample.zip b/_includes/v20.2/app/insecure/jooq-basic-sample/jooq-basic-sample.zip
new file mode 100644
index 00000000000..f11f86b8f43
Binary files /dev/null and b/_includes/v20.2/app/insecure/jooq-basic-sample/jooq-basic-sample.zip differ
diff --git a/_includes/v20.2/app/insecure/pony-basic-sample.py b/_includes/v20.2/app/insecure/pony-basic-sample.py
new file mode 100644
index 00000000000..3d367179d5b
--- /dev/null
+++ b/_includes/v20.2/app/insecure/pony-basic-sample.py
@@ -0,0 +1,69 @@
+import random
+from math import floor
+from pony.orm import *
+
+db = Database()
+
+# The Account class corresponds to the "accounts" database table.
+
+
+class Account(db.Entity):
+ _table_ = 'accounts'
+ id = PrimaryKey(int)
+ balance = Required(int)
+
+
+db_params = dict(provider='cockroach', user='maxroach',
+ host='localhost', port=26257, database='bank', sslmode='disable')
+
+
+sql_debug(True) # Print all generated SQL queries to stdout
+db.bind(**db_params) # Bind Database object to the real database
+db.generate_mapping(create_tables=True) # Create tables
+
+
+# Store the account IDs we create for later use.
+
+seen_account_ids = set()
+
+
+# The code below generates random IDs for new accounts.
+
+@db_session # db_session decorator manages the transactions
+def create_random_accounts(n):
+ elems = iter(range(n))
+ for i in elems:
+ billion = 1000000000
+ new_id = floor(random.random() * billion)
+ seen_account_ids.add(new_id)
+ # Create new account
+ Account(id=new_id, balance=floor(random.random() * 1000000))
+
+
+create_random_accounts(100)
+
+
+def get_random_account_id():
+ id = random.choice(tuple(seen_account_ids))
+ return id
+
+
+@db_session(retry=10) # retry of the optimistic transaction
+def transfer_funds_randomly():
+ """
+ Cuts a randomly selected account's balance in half, and gives the
+ other half to some other randomly selected account.
+ """
+
+ source_id = get_random_account_id()
+ sink_id = get_random_account_id()
+
+ source = Account.get(id=source_id)
+ amount = floor(source.balance / 2)
+
+ if source.balance < amount:
+ raise Exception("Insufficient funds")
+
+ source.balance -= amount
+ sink = Account.get(id=sink_id)
+ sink.balance += amount
diff --git a/_includes/v20.2/app/insecure/sequelize-basic-sample.js b/_includes/v20.2/app/insecure/sequelize-basic-sample.js
new file mode 100644
index 00000000000..ca92b98e375
--- /dev/null
+++ b/_includes/v20.2/app/insecure/sequelize-basic-sample.js
@@ -0,0 +1,35 @@
+var Sequelize = require('sequelize-cockroachdb');
+
+// Connect to CockroachDB through Sequelize.
+var sequelize = new Sequelize('bank', 'maxroach', '', {
+ dialect: 'postgres',
+ port: 26257,
+ logging: false
+});
+
+// Define the Account model for the "accounts" table.
+var Account = sequelize.define('accounts', {
+ id: { type: Sequelize.INTEGER, primaryKey: true },
+ balance: { type: Sequelize.INTEGER }
+});
+
+// Create the "accounts" table.
+Account.sync({force: true}).then(function() {
+ // Insert two rows into the "accounts" table.
+ return Account.bulkCreate([
+ {id: 1, balance: 1000},
+ {id: 2, balance: 250}
+ ]);
+}).then(function() {
+ // Retrieve accounts.
+ return Account.findAll();
+}).then(function(accounts) {
+ // Print out the balances.
+ accounts.forEach(function(account) {
+ console.log(account.id + ' ' + account.balance);
+ });
+ process.exit(0);
+}).catch(function(err) {
+ console.error('error: ' + err.message);
+ process.exit(1);
+});
diff --git a/_includes/v20.2/app/insecure/txn-sample.clj b/_includes/v20.2/app/insecure/txn-sample.clj
new file mode 100644
index 00000000000..0e2d9df55e3
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.clj
@@ -0,0 +1,44 @@
+(ns test.test
+ (:require [clojure.java.jdbc :as j]
+ [test.util :as util]))
+
+;; Define the connection parameters to the cluster.
+(def db-spec {:dbtype "postgresql"
+ :dbname "bank"
+ :host "localhost"
+ :port "26257"
+ :user "maxroach"})
+
+;; The transaction we want to run.
+(defn transferFunds
+ [txn from to amount]
+
+ ;; Check the current balance.
+ (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from])
+ (mapv :balance)
+ (first))]
+ (when (< fromBalance amount)
+ (throw (Exception. "Insufficient funds"))))
+
+ ;; Perform the transfer.
+ (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)])
+ (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)]))
+
+(defn test-txn []
+ ;; Connect to the cluster and run the code below with
+ ;; the connection object bound to 'conn'.
+ (j/with-db-connection [conn db-spec]
+
+ ;; Execute the transaction within an automatic retry block;
+ ;; the transaction object is bound to 'txn'.
+ (util/with-txn-retry [txn conn]
+ (transferFunds txn 1 2 100))
+
+ ;; Execute a query outside of an automatic retry block.
+ (println "Balances after transfer:")
+ (->> (j/query conn ["SELECT id, balance FROM accounts"])
+ (map println)
+ (doall))))
+
+(defn -main [& args]
+ (test-txn))
diff --git a/_includes/v20.2/app/insecure/txn-sample.cpp b/_includes/v20.2/app/insecure/txn-sample.cpp
new file mode 100644
index 00000000000..0f65137be22
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.cpp
@@ -0,0 +1,74 @@
+#include <cassert>
+#include <functional>
+#include <iostream>
+#include <stdexcept>
+#include <string>
+#include <pqxx/pqxx>
+
+using namespace std;
+
+void transferFunds(
+ pqxx::dbtransaction *tx, int from, int to, int amount) {
+ // Read the balance.
+ pqxx::result r = tx->exec(
+ "SELECT balance FROM accounts WHERE id = " + to_string(from));
+ assert(r.size() == 1);
+ int fromBalance = r[0][0].as<int>();
+
+ if (fromBalance < amount) {
+ throw domain_error("insufficient funds");
+ }
+
+ // Perform the transfer.
+ tx->exec("UPDATE accounts SET balance = balance - "
+ + to_string(amount) + " WHERE id = " + to_string(from));
+ tx->exec("UPDATE accounts SET balance = balance + "
+ + to_string(amount) + " WHERE id = " + to_string(to));
+}
+
+
+// ExecuteTx runs fn inside a transaction and retries it as needed.
+// On non-retryable failures, the transaction is aborted and rolled
+// back; on success, the transaction is committed.
+//
+// For more information about CockroachDB's transaction model see
+// https://cockroachlabs.com/docs/transactions.html.
+//
+// NOTE: the supplied exec closure should not have external side
+// effects beyond changes to the database.
+void executeTx(
+    pqxx::connection *c, function<void (pqxx::dbtransaction *)> fn) {
+ pqxx::work tx(*c);
+ while (true) {
+ try {
+ pqxx::subtransaction s(tx, "cockroach_restart");
+ fn(&s);
+ s.commit();
+ break;
+ } catch (const pqxx::pqxx_exception& e) {
+ // Swallow "transaction restart" errors; the transaction will be retried.
+ // Unfortunately libpqxx doesn't give us access to the error code, so we
+ // do string matching to identify retryable errors.
+ if (string(e.base().what()).find("restart transaction:") == string::npos) {
+ throw;
+ }
+ }
+ }
+ tx.commit();
+}
+
+int main() {
+ try {
+ pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
+
+ executeTx(&c, [](pqxx::dbtransaction *tx) {
+ transferFunds(tx, 1, 2, 100);
+ });
+ }
+ catch (const exception &e) {
+ cerr << e.what() << endl;
+ return 1;
+ }
+ cout << "Success" << endl;
+ return 0;
+}
diff --git a/_includes/v20.2/app/insecure/txn-sample.cs b/_includes/v20.2/app/insecure/txn-sample.cs
new file mode 100644
index 00000000000..f64a664ccff
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.cs
@@ -0,0 +1,120 @@
+using System;
+using System.Data;
+using Npgsql;
+
+namespace Cockroach
+{
+ class MainClass
+ {
+ static void Main(string[] args)
+ {
+ var connStringBuilder = new NpgsqlConnectionStringBuilder();
+ connStringBuilder.Host = "localhost";
+ connStringBuilder.Port = 26257;
+ connStringBuilder.SslMode = SslMode.Disable;
+ connStringBuilder.Username = "maxroach";
+ connStringBuilder.Database = "bank";
+ TxnSample(connStringBuilder.ConnectionString);
+ }
+
+ static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount)
+ {
+ int balance = 0;
+ using (var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran))
+ using (var reader = cmd.ExecuteReader())
+ {
+ if (reader.Read())
+ {
+ balance = reader.GetInt32(0);
+ }
+ else
+ {
+ throw new DataException(String.Format("Account id={0} not found", from));
+ }
+ }
+ if (balance < amount)
+ {
+ throw new DataException(String.Format("Insufficient balance in account id={0}", from));
+ }
+ using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran))
+ {
+ cmd.ExecuteNonQuery();
+ }
+ using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran))
+ {
+ cmd.ExecuteNonQuery();
+ }
+ }
+
+ static void TxnSample(string connString)
+ {
+ using (var conn = new NpgsqlConnection(connString))
+ {
+ conn.Open();
+
+ // Create the "accounts" table.
+ new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
+
+ // Insert two rows into the "accounts" table.
+ using (var cmd = new NpgsqlCommand())
+ {
+ cmd.Connection = conn;
+ cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
+ cmd.Parameters.AddWithValue("id1", 1);
+ cmd.Parameters.AddWithValue("val1", 1000);
+ cmd.Parameters.AddWithValue("id2", 2);
+ cmd.Parameters.AddWithValue("val2", 250);
+ cmd.ExecuteNonQuery();
+ }
+
+ // Print out the balances.
+ System.Console.WriteLine("Initial balances:");
+ using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
+ using (var reader = cmd.ExecuteReader())
+ while (reader.Read())
+ Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
+
+ try
+ {
+ using (var tran = conn.BeginTransaction())
+ {
+ tran.Save("cockroach_restart");
+ while (true)
+ {
+ try
+ {
+ TransferFunds(conn, tran, 1, 2, 100);
+ tran.Commit();
+ break;
+ }
+ catch (NpgsqlException e)
+ {
+ // Check if the error code indicates a SERIALIZATION_FAILURE.
+ if (e.ErrorCode == 40001)
+ {
+ // Signal the database that we will attempt a retry.
+ tran.Rollback("cockroach_restart");
+ }
+ else
+ {
+ throw;
+ }
+ }
+ }
+ }
+ }
+ catch (DataException e)
+ {
+ Console.WriteLine(e.Message);
+ }
+
+ // Now printout the results.
+ Console.WriteLine("Final balances:");
+ using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
+ using (var reader = cmd.ExecuteReader())
+ while (reader.Read())
+ Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
+ }
+ }
+ }
+}
diff --git a/_includes/v20.2/app/insecure/txn-sample.go b/_includes/v20.2/app/insecure/txn-sample.go
new file mode 100644
index 00000000000..2c0cd1b6da6
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.go
@@ -0,0 +1,51 @@
+package main
+
+import (
+ "context"
+ "database/sql"
+ "fmt"
+ "log"
+
+ "github.com/cockroachdb/cockroach-go/crdb"
+ _ "github.com/lib/pq"
+)
+
+func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
+ // Read the balance.
+ var fromBalance int
+ if err := tx.QueryRow(
+ "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
+ return err
+ }
+
+ if fromBalance < amount {
+ return fmt.Errorf("insufficient funds")
+ }
+
+ // Perform the transfer.
+ if _, err := tx.Exec(
+ "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
+ return err
+ }
+ if _, err := tx.Exec(
+ "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
+ return err
+ }
+ return nil
+}
+
+func main() {
+ db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
+ if err != nil {
+ log.Fatal("error connecting to the database: ", err)
+ }
+
+ // Run a transfer in a transaction.
+ err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
+ return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
+ })
+ if err == nil {
+ fmt.Println("Success")
+ } else {
+ log.Fatal("error: ", err)
+ }
+}
diff --git a/_includes/v20.2/app/insecure/txn-sample.js b/_includes/v20.2/app/insecure/txn-sample.js
new file mode 100644
index 00000000000..c44309b01a2
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.js
@@ -0,0 +1,146 @@
+var async = require('async');
+var fs = require('fs');
+var pg = require('pg');
+
+// Connect to the bank database.
+
+var config = {
+ user: 'maxroach',
+ host: 'localhost',
+ database: 'bank',
+ port: 26257
+};
+
+// Wrapper for a transaction. This automatically re-calls "op" with
+// the client as an argument as long as the database server asks for
+// the transaction to be retried.
+
+function txnWrapper(client, op, next) {
+ client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) {
+ if (err) {
+ return next(err);
+ }
+
+ var released = false;
+ async.doWhilst(function (done) {
+ var handleError = function (err) {
+ // If we got an error, see if it's a retryable one
+ // and, if so, restart.
+ if (err.code === '40001') {
+ // Signal the database that we'll retry.
+ return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done);
+ }
+ // A non-retryable error; break out of the
+ // doWhilst with an error.
+ return done(err);
+ };
+
+ // Attempt the work.
+ op(client, function (err) {
+ if (err) {
+ return handleError(err);
+ }
+ var opResults = arguments;
+
+ // If we reach this point, release and commit.
+ client.query('RELEASE SAVEPOINT cockroach_restart', function (err) {
+ if (err) {
+ return handleError(err);
+ }
+ released = true;
+ return done.apply(null, opResults);
+ });
+ });
+ },
+ function () {
+ return !released;
+ },
+ function (err) {
+ if (err) {
+ client.query('ROLLBACK', function () {
+ next(err);
+ });
+ } else {
+ var txnResults = arguments;
+ client.query('COMMIT', function (err) {
+ if (err) {
+ return next(err);
+ } else {
+ return next.apply(null, txnResults);
+ }
+ });
+ }
+ });
+ });
+}
+
+// The transaction we want to run.
+
+function transferFunds(client, from, to, amount, next) {
+ // Check the current balance.
+ client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) {
+ if (err) {
+ return next(err);
+ } else if (results.rows.length === 0) {
+ return next(new Error('account not found in table'));
+ }
+
+ var acctBal = results.rows[0].balance;
+ if (acctBal >= amount) {
+ // Perform the transfer.
+ async.waterfall([
+ function (next) {
+ // Subtract amount from account 1.
+ client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next);
+ },
+ function (updateResult, next) {
+ // Add amount to account 2.
+ client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next);
+ },
+ function (updateResult, next) {
+ // Fetch account balances after updates.
+ client.query('SELECT id, balance FROM accounts', function (err, selectResult) {
+ next(err, selectResult ? selectResult.rows : null);
+ });
+ }
+ ], next);
+ } else {
+ next(new Error('insufficient funds'));
+ }
+ });
+}
+
+// Create a pool.
+var pool = new pg.Pool(config);
+
+pool.connect(function (err, client, done) {
+ // Closes communication with the database and exits.
+ var finish = function () {
+ done();
+ process.exit();
+ };
+
+ if (err) {
+ console.error('could not connect to cockroachdb', err);
+ finish();
+ }
+
+ // Execute the transaction.
+ txnWrapper(client,
+ function (client, next) {
+ transferFunds(client, 1, 2, 100, next);
+ },
+ function (err, results) {
+ if (err) {
+ console.error('error performing transaction', err);
+ finish();
+ }
+
+ console.log('Balances after transfer:');
+ results.forEach(function (result) {
+ console.log(result);
+ });
+
+ finish();
+ });
+});
diff --git a/_includes/v20.2/app/insecure/txn-sample.php b/_includes/v20.2/app/insecure/txn-sample.php
new file mode 100644
index 00000000000..e060d311cc3
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.php
@@ -0,0 +1,71 @@
+<?php
+
+function transferMoney($dbh, $from, $to, $amount) {
+  try {
+    $dbh->beginTransaction();
+ // This savepoint allows us to retry our transaction.
+ $dbh->exec("SAVEPOINT cockroach_restart");
+ } catch (Exception $e) {
+ throw $e;
+ }
+
+ while (true) {
+ try {
+ $stmt = $dbh->prepare(
+ 'UPDATE accounts SET balance = balance + :deposit ' .
+ 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
+
+ // First, withdraw the money from the old account (if possible).
+ $stmt->bindValue(':account', $from, PDO::PARAM_INT);
+ $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
+ $stmt->execute();
+ if ($stmt->rowCount() == 0) {
+ print "source account does not exist or is underfunded\r\n";
+ return;
+ }
+
+ // Next, deposit into the new account (if it exists).
+ $stmt->bindValue(':account', $to, PDO::PARAM_INT);
+ $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
+ $stmt->execute();
+ if ($stmt->rowCount() == 0) {
+ print "destination account does not exist\r\n";
+ return;
+ }
+
+ // Attempt to release the savepoint (which is really the commit).
+ $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
+ $dbh->commit();
+ return;
+ } catch (PDOException $e) {
+ if ($e->getCode() != '40001') {
+ // Non-recoverable error. Rollback and bubble error up the chain.
+ $dbh->rollBack();
+ throw $e;
+ } else {
+ // Cockroach transaction retry code. Rollback to the savepoint and
+ // restart.
+ $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
+ }
+ }
+ }
+}
+
+try {
+ $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
+ 'maxroach', null, array(
+ PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
+ PDO::ATTR_EMULATE_PREPARES => true,
+ ));
+
+ transferMoney($dbh, 1, 2, 10);
+
+ print "Account balances after transfer:\r\n";
+ foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
+ print $row['id'] . ': ' . $row['balance'] . "\r\n";
+ }
+} catch (Exception $e) {
+ print $e->getMessage() . "\r\n";
+ exit(1);
+}
+?>
diff --git a/_includes/v20.2/app/insecure/txn-sample.py b/_includes/v20.2/app/insecure/txn-sample.py
new file mode 100644
index 00000000000..2ea05a85704
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.py
@@ -0,0 +1,73 @@
+# Import the driver.
+import psycopg2
+import psycopg2.errorcodes
+
+# Connect to the cluster.
+conn = psycopg2.connect(
+ database='bank',
+ user='maxroach',
+ sslmode='disable',
+ port=26257,
+ host='localhost'
+)
+
+def onestmt(conn, sql):
+ with conn.cursor() as cur:
+ cur.execute(sql)
+
+
+# Wrapper for a transaction.
+# This automatically re-calls "op" with the open transaction as an argument
+# as long as the database server asks for the transaction to be retried.
+def run_transaction(conn, op):
+ with conn:
+ onestmt(conn, "SAVEPOINT cockroach_restart")
+ while True:
+ try:
+ # Attempt the work.
+ op(conn)
+
+ # If we reach this point, commit.
+ onestmt(conn, "RELEASE SAVEPOINT cockroach_restart")
+ break
+
+ except psycopg2.OperationalError as e:
+ if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
+ # A non-retryable error; report this up the call stack.
+ raise e
+ # Signal the database that we'll retry.
+ onestmt(conn, "ROLLBACK TO SAVEPOINT cockroach_restart")
+
+
+# The transaction we want to run.
+def transfer_funds(txn, frm, to, amount):
+ with txn.cursor() as cur:
+
+ # Check the current balance.
+ cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm))
+ from_balance = cur.fetchone()[0]
+ if from_balance < amount:
+ raise Exception("Insufficient funds")
+
+ # Perform the transfer.
+ cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
+ (amount, frm))
+ cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
+ (amount, to))
+
+
+# Execute the transaction.
+run_transaction(conn, lambda conn: transfer_funds(conn, 1, 2, 100))
+
+
+with conn:
+ with conn.cursor() as cur:
+ # Check account balances.
+ cur.execute("SELECT id, balance FROM accounts")
+ rows = cur.fetchall()
+ print('Balances after transfer:')
+ for row in rows:
+ print([str(cell) for cell in row])
+
+# Close communication with the database.
+conn.close()
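The Python, PHP, Ruby, and JavaScript samples all follow the same savepoint protocol: issue `SAVEPOINT cockroach_restart` after the transaction opens, `RELEASE SAVEPOINT` once the work succeeds, and `ROLLBACK TO SAVEPOINT` on a serialization failure before retrying. A sketch that records the statement sequence for a run that retries once (the fake executor and exception class stand in for a real connection and driver error):

```python
class SerializationFailure(Exception):
    """Stands in for SQLSTATE 40001 from a real driver."""

def run_transaction(execute, op):
    # Mirror the samples: set a savepoint, retry op on failure,
    # release the savepoint once op succeeds.
    execute("SAVEPOINT cockroach_restart")
    while True:
        try:
            op()
            execute("RELEASE SAVEPOINT cockroach_restart")
            break
        except SerializationFailure:
            # Signal the database that we'll retry.
            execute("ROLLBACK TO SAVEPOINT cockroach_restart")

log = []
attempts = {"n": 0}

def fake_op():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise SerializationFailure()

run_transaction(log.append, fake_op)
assert log == [
    "SAVEPOINT cockroach_restart",
    "ROLLBACK TO SAVEPOINT cockroach_restart",
    "RELEASE SAVEPOINT cockroach_restart",
]
```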
diff --git a/_includes/v20.2/app/insecure/txn-sample.rb b/_includes/v20.2/app/insecure/txn-sample.rb
new file mode 100644
index 00000000000..416efb9e24d
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.rb
@@ -0,0 +1,49 @@
+# Import the driver.
+require 'pg'
+
+# Wrapper for a transaction.
+# This automatically re-calls "op" with the open transaction as an argument
+# as long as the database server asks for the transaction to be retried.
+def run_transaction(conn)
+ conn.transaction do |txn|
+ txn.exec('SAVEPOINT cockroach_restart')
+ while
+ begin
+ # Attempt the work.
+ yield txn
+
+ # If we reach this point, commit.
+ txn.exec('RELEASE SAVEPOINT cockroach_restart')
+ break
+ rescue PG::TRSerializationFailure
+ txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
+ end
+ end
+ end
+end
+
+def transfer_funds(txn, from, to, amount)
+ txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
+ res.each do |row|
+ raise 'insufficient funds' if Integer(row['balance']) < amount
+ end
+ end
+ txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
+ txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
+end
+
+# Connect to the "bank" database.
+conn = PG.connect(
+ user: 'maxroach',
+ dbname: 'bank',
+ host: 'localhost',
+ port: 26257,
+ sslmode: 'disable'
+)
+
+run_transaction(conn) do |txn|
+ transfer_funds(txn, 1, 2, 100)
+end
+
+# Close communication with the database.
+conn.close()
diff --git a/_includes/v20.2/app/insecure/txn-sample.rs b/_includes/v20.2/app/insecure/txn-sample.rs
new file mode 100644
index 00000000000..d1dd0e021c9
--- /dev/null
+++ b/_includes/v20.2/app/insecure/txn-sample.rs
@@ -0,0 +1,60 @@
+use postgres::{error::SqlState, Client, Error, NoTls, Transaction};
+
+/// Runs op inside a transaction and retries it as needed.
+/// On non-retryable failures, the transaction is aborted and
+/// rolled back; on success, the transaction is committed.
+fn execute_txn<T, F>(client: &mut Client, op: F) -> Result<T, Error>
+where
+    F: Fn(&mut Transaction) -> Result<T, Error>,
+{
+ let mut txn = client.transaction()?;
+ loop {
+ let mut sp = txn.savepoint("cockroach_restart")?;
+ match op(&mut sp).and_then(|t| sp.commit().map(|_| t)) {
+ Err(ref err)
+ if err
+ .code()
+ .map(|e| *e == SqlState::T_R_SERIALIZATION_FAILURE)
+ .unwrap_or(false) => {}
+ r => break r,
+ }
+ }
+ .and_then(|t| txn.commit().map(|_| t))
+}
+
+fn transfer_funds(txn: &mut Transaction, from: i64, to: i64, amount: i64) -> Result<(), Error> {
+ // Read the balance.
+ let from_balance: i64 = txn
+ .query_one("SELECT balance FROM accounts WHERE id = $1", &[&from])?
+ .get(0);
+
+ assert!(from_balance >= amount);
+
+ // Perform the transfer.
+ txn.execute(
+ "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
+ &[&amount, &from],
+ )?;
+ txn.execute(
+ "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
+ &[&amount, &to],
+ )?;
+ Ok(())
+}
+
+fn main() {
+ let mut client = Client::connect("postgresql://maxroach@localhost:26257/bank", NoTls).unwrap();
+
+ // Run a transfer in a transaction.
+ execute_txn(&mut client, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
+
+ // Check account balances after the transaction.
+ for row in &client
+ .query("SELECT id, balance FROM accounts", &[])
+ .unwrap()
+ {
+ let id: i64 = row.get(0);
+ let balance: i64 = row.get(1);
+ println!("{} {}", id, balance);
+ }
+}
diff --git a/_includes/v20.2/app/jooq-basic-sample/Sample.java b/_includes/v20.2/app/jooq-basic-sample/Sample.java
new file mode 100644
index 00000000000..9baf2057561
--- /dev/null
+++ b/_includes/v20.2/app/jooq-basic-sample/Sample.java
@@ -0,0 +1,215 @@
+package com.cockroachlabs;
+
+import com.cockroachlabs.example.jooq.db.Tables;
+import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord;
+import org.jooq.DSLContext;
+import org.jooq.SQLDialect;
+import org.jooq.Source;
+import org.jooq.conf.RenderQuotedNames;
+import org.jooq.conf.Settings;
+import org.jooq.exception.DataAccessException;
+import org.jooq.impl.DSL;
+
+import java.io.InputStream;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.*;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+
+import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS;
+
+public class Sample {
+
+ private static final Random RAND = new Random();
+ private static final boolean FORCE_RETRY = false;
+ private static final String RETRY_SQL_STATE = "40001";
+ private static final int MAX_ATTEMPT_COUNT = 6;
+
+ private static Function<DSLContext, Long> addAccounts() {
+ return ctx -> {
+ long rv = 0;
+
+ ctx.delete(ACCOUNTS).execute();
+ ctx.batchInsert(
+ new AccountsRecord(1L, 1000L),
+ new AccountsRecord(2L, 250L),
+ new AccountsRecord(3L, 314159L)
+ ).execute();
+
+ rv = 1;
+ System.out.printf("APP: addAccounts() --> %d\n", rv);
+ return rv;
+ };
+ }
+
+ private static Function<DSLContext, Long> transferFunds(long fromId, long toId, long amount) {
+ return ctx -> {
+ long rv = 0;
+
+ AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId));
+ AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId));
+
+ if (!(amount > fromAccount.getBalance())) {
+ fromAccount.setBalance(fromAccount.getBalance() - amount);
+ toAccount.setBalance(toAccount.getBalance() + amount);
+
+ ctx.batchUpdate(fromAccount, toAccount).execute();
+ rv = amount;
+ System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
+ }
+
+ return rv;
+ };
+ }
+
+ // Test our retry handling logic if FORCE_RETRY is true. This
+ // method is only used to test the retry logic. It is not
+ // intended for production code.
+ private static Function<DSLContext, Long> forceRetryLogic() {
+ return ctx -> {
+ long rv = -1;
+ try {
+ System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
+ ctx.execute("SELECT crdb_internal.force_retry('1s')");
+ } catch (DataAccessException e) {
+ System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
+ throw e;
+ }
+ return rv;
+ };
+ }
+
+ private static Function<DSLContext, Long> getAccountBalance(long id) {
+ return ctx -> {
+ AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id));
+ long balance = account.getBalance();
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
+ return balance;
+ };
+ }
+
+ // Run SQL code in a way that automatically handles the
+ // transaction retry logic so we don't have to duplicate it in
+ // various places.
+ private static long runTransaction(DSLContext session, Function<DSLContext, Long> fn) {
+ AtomicLong rv = new AtomicLong(0L);
+ AtomicInteger attemptCount = new AtomicInteger(0);
+
+ while (attemptCount.get() < MAX_ATTEMPT_COUNT) {
+ attemptCount.incrementAndGet();
+
+ if (attemptCount.get() > 1) {
+ System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get());
+ }
+
+ if (session.connectionResult(connection -> {
+ connection.setAutoCommit(false);
+ System.out.printf("APP: BEGIN;\n");
+
+ if (attemptCount.get() == MAX_ATTEMPT_COUNT) {
+ String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
+ throw new RuntimeException(err);
+ }
+
+ // This block is only used to test the retry logic.
+ // It is not necessary in production code. See also
+ // the method 'testRetryLogic()'.
+ if (FORCE_RETRY) {
+ session.fetch("SELECT now()");
+ }
+
+ try {
+ rv.set(fn.apply(session));
+ if (rv.get() != -1) {
+ connection.commit();
+ System.out.printf("APP: COMMIT;\n");
+ return true;
+ }
+ } catch (DataAccessException | SQLException e) {
+ String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState();
+
+ if (RETRY_SQL_STATE.equals(sqlState)) {
+ // Since this is a transaction retry error, we
+ // roll back the transaction and sleep a little
+ // before trying again. Each time through the
+ // loop we sleep for a little longer than the last
+ // time (A.K.A. exponential backoff).
+ System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get());
+ System.out.printf("APP: ROLLBACK;\n");
+ connection.rollback();
+ int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100);
+ System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
+ try {
+ Thread.sleep(sleepMillis);
+ } catch (InterruptedException ignored) {
+ // no-op
+ }
+ rv.set(-1L);
+ } else {
+ throw e;
+ }
+ }
+
+ return false;
+ })) {
+ break;
+ }
+ }
+
+ return rv.get();
+ }
+
+ public static void main(String[] args) throws Exception {
+ try (Connection connection = DriverManager.getConnection(
+ "jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key.pk8&sslcert=certs/client.maxroach.crt",
+ "maxroach",
+ ""
+ )) {
+ DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings()
+ .withExecuteLogging(true)
+ .withRenderQuotedNames(RenderQuotedNames.NEVER));
+
+ // Initialise database with db.sql script
+ try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) {
+ ctx.parser().parse(Source.of(in).readString()).executeBatch();
+ }
+
+ long fromAccountId = 1;
+ long toAccountId = 2;
+ long transferAmount = 100;
+
+ if (FORCE_RETRY) {
+ System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
+ runTransaction(ctx, forceRetryLogic());
+ } else {
+
+ runTransaction(ctx, addAccounts());
+ long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId));
+ long toBalance = runTransaction(ctx, getAccountBalance(toAccountId));
+ if (fromBalance != -1 && toBalance != -1) {
+ // Success!
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
+ }
+
+ // Transfer $100 from account 1 to account 2
+ long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount));
+ if (transferResult != -1) {
+ // Success!
+ System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult);
+
+ long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId));
+ long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId));
+ if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
+ // Success!
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
+ System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
+ }
+ }
+ }
+ }
+ }
+}
diff --git a/_includes/v20.2/app/jooq-basic-sample/jooq-basic-sample.zip b/_includes/v20.2/app/jooq-basic-sample/jooq-basic-sample.zip
new file mode 100644
index 00000000000..859305478c0
Binary files /dev/null and b/_includes/v20.2/app/jooq-basic-sample/jooq-basic-sample.zip differ
diff --git a/_includes/v20.2/app/pony-basic-sample.py b/_includes/v20.2/app/pony-basic-sample.py
new file mode 100644
index 00000000000..436650c7208
--- /dev/null
+++ b/_includes/v20.2/app/pony-basic-sample.py
@@ -0,0 +1,69 @@
+import random
+from math import floor
+from pony.orm import *
+
+db = Database()
+
+# The Account class corresponds to the "accounts" database table.
+
+
+class Account(db.Entity):
+ _table_ = 'accounts'
+ id = PrimaryKey(int)
+ balance = Required(int)
+
+
+db_params = dict(provider='cockroach', user='maxroach', host='localhost', port=26257, database='bank', sslmode='require',
+ sslrootcert='certs/ca.crt', sslkey='certs/client.maxroach.key', sslcert='certs/client.maxroach.crt')
+
+
+sql_debug(True) # Print all generated SQL queries to stdout
+db.bind(**db_params) # Bind Database object to the real database
+db.generate_mapping(create_tables=True) # Create tables
+
+
+# Store the account IDs we create for later use.
+
+seen_account_ids = set()
+
+
+# The code below generates random IDs for new accounts.
+
+@db_session # db_session decorator manages the transactions
+def create_random_accounts(n):
+ elems = iter(range(n))
+ for i in elems:
+ billion = 1000000000
+ new_id = floor(random.random() * billion)
+ seen_account_ids.add(new_id)
+ # Create new account
+ Account(id=new_id, balance=floor(random.random() * 1000000))
+
+
+create_random_accounts(100)
+
+
+def get_random_account_id():
+ id = random.choice(tuple(seen_account_ids))
+ return id
+
+
+@db_session(retry=10) # retry of the optimistic transaction
+def transfer_funds_randomly():
+ """
+ Cuts a randomly selected account's balance in half, and gives the
+ other half to some other randomly selected account.
+ """
+
+ source_id = get_random_account_id()
+ sink_id = get_random_account_id()
+
+ source = Account.get(id=source_id)
+ amount = floor(source.balance / 2)
+
+ if source.balance < amount:
+ raise Exception("Insufficient funds")
+
+ source.balance -= amount
+ sink = Account.get(id=sink_id)
+ sink.balance += amount
diff --git a/_includes/v20.2/app/project.clj b/_includes/v20.2/app/project.clj
new file mode 100644
index 00000000000..41efc324b59
--- /dev/null
+++ b/_includes/v20.2/app/project.clj
@@ -0,0 +1,7 @@
+(defproject test "0.1"
+ :description "CockroachDB test"
+ :url "http://cockroachlabs.com/"
+ :dependencies [[org.clojure/clojure "1.8.0"]
+ [org.clojure/java.jdbc "0.6.1"]
+ [org.postgresql/postgresql "9.4.1211"]]
+ :main test.test)
diff --git a/_includes/v20.2/app/retry-errors.md b/_includes/v20.2/app/retry-errors.md
new file mode 100644
index 00000000000..5f219f53e12
--- /dev/null
+++ b/_includes/v20.2/app/retry-errors.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+Your application should [use a retry loop to handle transaction errors](error-handling-and-troubleshooting.html#transaction-retry-errors) that can occur under contention.
+{{site.data.alerts.end}}
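
The callout above recommends a client-side retry loop for contention errors. A minimal sketch in Python follows; `RetryError` and `run_with_retries` are illustrative names, with `RetryError` standing in for whatever exception your driver raises for SQLSTATE 40001 (not a real driver API):

```python
import random
import time


class RetryError(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001."""


def run_with_retries(op, max_retries=5, base_delay=0.001):
    """Call op(); on a RetryError, back off exponentially (with
    jitter) and retry, up to max_retries attempts."""
    for attempt in range(1, max_retries + 1):
        try:
            return op()
        except RetryError:
            if attempt == max_retries:
                raise
            # Sleep a bit longer after each failed attempt.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
```

In a real application, `op` would open a transaction, do its reads and writes, and commit, with the rollback happening before each retry.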
diff --git a/_includes/v20.2/app/see-also-links.md b/_includes/v20.2/app/see-also-links.md
new file mode 100644
index 00000000000..e5dd6173c99
--- /dev/null
+++ b/_includes/v20.2/app/see-also-links.md
@@ -0,0 +1,9 @@
+You might also be interested in the following pages:
+
+- [Client Connection Parameters](connection-parameters.html)
+- [Data Replication](demo-data-replication.html)
+- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
+- [Automatic Rebalancing](demo-automatic-rebalancing.html)
+- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
+- [Follow-the-Workload](demo-follow-the-workload.html)
+- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
diff --git a/_includes/v20.2/app/sequelize-basic-sample.js b/_includes/v20.2/app/sequelize-basic-sample.js
new file mode 100644
index 00000000000..d87ff2ca5a5
--- /dev/null
+++ b/_includes/v20.2/app/sequelize-basic-sample.js
@@ -0,0 +1,62 @@
+var Sequelize = require('sequelize-cockroachdb');
+var fs = require('fs');
+
+// Connect to CockroachDB through Sequelize.
+var sequelize = new Sequelize('bank', 'maxroach', '', {
+ dialect: 'postgres',
+ port: 26257,
+ logging: false,
+ dialectOptions: {
+ ssl: {
+ ca: fs.readFileSync('certs/ca.crt')
+ .toString(),
+ key: fs.readFileSync('certs/client.maxroach.key')
+ .toString(),
+ cert: fs.readFileSync('certs/client.maxroach.crt')
+ .toString()
+ }
+ }
+});
+
+// Define the Account model for the "accounts" table.
+var Account = sequelize.define('accounts', {
+ id: {
+ type: Sequelize.INTEGER,
+ primaryKey: true
+ },
+ balance: {
+ type: Sequelize.INTEGER
+ }
+});
+
+// Create the "accounts" table.
+Account.sync({
+ force: true
+ })
+ .then(function () {
+ // Insert two rows into the "accounts" table.
+ return Account.bulkCreate([{
+ id: 1,
+ balance: 1000
+ },
+ {
+ id: 2,
+ balance: 250
+ }
+ ]);
+ })
+ .then(function () {
+ // Retrieve accounts.
+ return Account.findAll();
+ })
+ .then(function (accounts) {
+ // Print out the balances.
+ accounts.forEach(function (account) {
+ console.log(account.id + ' ' + account.balance);
+ });
+ process.exit(0);
+ })
+ .catch(function (err) {
+ console.error('error: ' + err.message);
+ process.exit(1);
+ });
diff --git a/_includes/v20.2/app/sqlalchemy-basic-sample.py b/_includes/v20.2/app/sqlalchemy-basic-sample.py
new file mode 100644
index 00000000000..6fa27d5691f
--- /dev/null
+++ b/_includes/v20.2/app/sqlalchemy-basic-sample.py
@@ -0,0 +1,110 @@
+import random
+from math import floor
+from sqlalchemy import create_engine, Column, Integer
+from sqlalchemy.ext.declarative import declarative_base
+from sqlalchemy.orm import sessionmaker
+from cockroachdb.sqlalchemy import run_transaction
+
+Base = declarative_base()
+
+
+# The Account class corresponds to the "accounts" database table.
+class Account(Base):
+ __tablename__ = 'accounts'
+ id = Column(Integer, primary_key=True)
+ balance = Column(Integer)
+
+
+# Create an engine to communicate with the database. The
+# "cockroachdb://" prefix for the engine URL indicates that we are
+# connecting to CockroachDB using the 'cockroachdb' dialect.
+# For more information, see
+# https://github.com/cockroachdb/sqlalchemy-cockroachdb.
+
+secure_cluster = True # Set to False for insecure clusters
+connect_args = {}
+
+if secure_cluster:
+ connect_args = {
+ 'sslmode': 'require',
+ 'sslrootcert': 'certs/ca.crt',
+ 'sslkey': 'certs/client.maxroach.key',
+ 'sslcert': 'certs/client.maxroach.crt'
+ }
+else:
+ connect_args = {'sslmode': 'disable'}
+
+engine = create_engine(
+ 'cockroachdb://maxroach@localhost:26257/bank',
+ connect_args=connect_args,
+ echo=True # Log SQL queries to stdout
+)
+
+# Automatically create the "accounts" table based on the Account class.
+Base.metadata.create_all(engine)
+
+
+# Store the account IDs we create for later use.
+
+seen_account_ids = set()
+
+
+# The code below generates random IDs for new accounts.
+
+def create_random_accounts(sess, n):
+ """Create N new accounts with random IDs and random account balances.
+
+ Note that since this is a demo, we don't do any work to ensure the
+ new IDs don't collide with existing IDs.
+ """
+ new_accounts = []
+ elems = iter(range(n))
+ for i in elems:
+ billion = 1000000000
+ new_id = floor(random.random()*billion)
+ seen_account_ids.add(new_id)
+ new_accounts.append(
+ Account(
+ id=new_id,
+ balance=floor(random.random()*1000000)
+ )
+ )
+ sess.add_all(new_accounts)
+
+
+run_transaction(sessionmaker(bind=engine),
+ lambda s: create_random_accounts(s, 100))
+
+
+# Helper for getting random existing account IDs.
+
+def get_random_account_id():
+ id = random.choice(tuple(seen_account_ids))
+ return id
+
+
+def transfer_funds_randomly(session):
+ """Transfer money randomly between accounts (during SESSION).
+
+ Cuts a randomly selected account's balance in half, and gives the
+ other half to some other randomly selected account.
+ """
+ source_id = get_random_account_id()
+ sink_id = get_random_account_id()
+
+ source = session.query(Account).filter_by(id=source_id).one()
+ amount = floor(source.balance/2)
+
+ # Check balance of the first account.
+ if source.balance < amount:
+ raise Exception("Insufficient funds")
+
+ source.balance -= amount
+ session.query(Account).filter_by(id=sink_id).update(
+ {"balance": (Account.balance + amount)}
+ )
+
+
+# Run the transfer inside a transaction.
+
+run_transaction(sessionmaker(bind=engine), transfer_funds_randomly)
diff --git a/_includes/v20.2/app/sqlalchemy-large-txns.py b/_includes/v20.2/app/sqlalchemy-large-txns.py
new file mode 100644
index 00000000000..bc7399b663c
--- /dev/null
+++ b/_includes/v20.2/app/sqlalchemy-large-txns.py
@@ -0,0 +1,60 @@
+from sqlalchemy import create_engine, Column, Float, Integer
+from sqlalchemy.ext.declarative import declarative_base
+from sqlalchemy.orm import sessionmaker
+from cockroachdb.sqlalchemy import run_transaction
+from random import random
+
+Base = declarative_base()
+
+# The code below assumes you are running as 'root' and have run
+# the following SQL statements against an insecure cluster.
+
+# CREATE DATABASE pointstore;
+
+# USE pointstore;
+
+# CREATE TABLE points (
+# id INT PRIMARY KEY DEFAULT unique_rowid(),
+# x FLOAT NOT NULL,
+# y FLOAT NOT NULL,
+# z FLOAT NOT NULL
+# );
+
+engine = create_engine(
+ 'cockroachdb://root@localhost:26257/pointstore',
+ connect_args={
+ 'sslmode': 'disable',
+ },
+ echo=True
+)
+
+
+class Point(Base):
+ __tablename__ = 'points'
+ id = Column(Integer, primary_key=True)
+ x = Column(Float)
+ y = Column(Float)
+ z = Column(Float)
+
+
+def add_points(num_points):
+ chunk_size = 1000 # Tune this based on object sizes.
+
+ def add_points_helper(sess, chunk, num_points):
+ points = []
+ for i in range(chunk, min(chunk + chunk_size, num_points)):
+ points.append(
+ Point(x=random()*1024, y=random()*1024, z=random()*1024)
+ )
+ sess.bulk_save_objects(points)
+
+ for chunk in range(0, num_points, chunk_size):
+ run_transaction(
+ sessionmaker(bind=engine),
+ lambda s: add_points_helper(
+ s, chunk, min(chunk + chunk_size, num_points)
+ )
+ )
+
+
+add_points(10000)
diff --git a/_includes/v20.2/app/txn-sample.clj b/_includes/v20.2/app/txn-sample.clj
new file mode 100644
index 00000000000..c093078ebc4
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.clj
@@ -0,0 +1,48 @@
+(ns test.test
+ (:require [clojure.java.jdbc :as j]
+ [test.util :as util]))
+
+;; Define the connection parameters to the cluster.
+(def db-spec {:dbtype "postgresql"
+ :dbname "bank"
+ :host "localhost"
+ :port "26257"
+ :ssl true
+ :sslmode "require"
+ :sslcert "certs/client.maxroach.crt"
+ :sslkey "certs/client.maxroach.key.pk8"
+ :user "maxroach"})
+
+;; The transaction we want to run.
+(defn transferFunds
+ [txn from to amount]
+
+ ;; Check the current balance.
+ (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from])
+ (mapv :balance)
+ (first))]
+ (when (< fromBalance amount)
+ (throw (Exception. "Insufficient funds"))))
+
+ ;; Perform the transfer.
+ (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)])
+ (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)]))
+
+(defn test-txn []
+ ;; Connect to the cluster and run the code below with
+ ;; the connection object bound to 'conn'.
+ (j/with-db-connection [conn db-spec]
+
+ ;; Execute the transaction within an automatic retry block;
+ ;; the transaction object is bound to 'txn'.
+ (util/with-txn-retry [txn conn]
+ (transferFunds txn 1 2 100))
+
+ ;; Execute a query outside of an automatic retry block.
+ (println "Balances after transfer:")
+ (->> (j/query conn ["SELECT id, balance FROM accounts"])
+ (map println)
+ (doall))))
+
+(defn -main [& args]
+ (test-txn))
diff --git a/_includes/v20.2/app/txn-sample.cpp b/_includes/v20.2/app/txn-sample.cpp
new file mode 100644
index 00000000000..728e4a2e5cc
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.cpp
@@ -0,0 +1,74 @@
+#include <cassert>
+#include <functional>
+#include <iostream>
+#include <stdexcept>
+#include <string>
+#include <pqxx/pqxx>
+
+using namespace std;
+
+void transferFunds(
+ pqxx::dbtransaction *tx, int from, int to, int amount) {
+ // Read the balance.
+ pqxx::result r = tx->exec(
+ "SELECT balance FROM accounts WHERE id = " + to_string(from));
+ assert(r.size() == 1);
+ int fromBalance = r[0][0].as<int>();
+
+ if (fromBalance < amount) {
+ throw domain_error("insufficient funds");
+ }
+
+ // Perform the transfer.
+ tx->exec("UPDATE accounts SET balance = balance - "
+ + to_string(amount) + " WHERE id = " + to_string(from));
+ tx->exec("UPDATE accounts SET balance = balance + "
+ + to_string(amount) + " WHERE id = " + to_string(to));
+}
+
+
+// ExecuteTx runs fn inside a transaction and retries it as needed.
+// On non-retryable failures, the transaction is aborted and rolled
+// back; on success, the transaction is committed.
+//
+// For more information about CockroachDB's transaction model see
+// https://cockroachlabs.com/docs/transactions.html.
+//
+// NOTE: the supplied exec closure should not have external side
+// effects beyond changes to the database.
+void executeTx(
+ pqxx::connection *c, function<void (pqxx::dbtransaction *)> fn) {
+ pqxx::work tx(*c);
+ while (true) {
+ try {
+ pqxx::subtransaction s(tx, "cockroach_restart");
+ fn(&s);
+ s.commit();
+ break;
+ } catch (const pqxx::pqxx_exception& e) {
+ // Swallow "transaction restart" errors; the transaction will be retried.
+ // Unfortunately libpqxx doesn't give us access to the error code, so we
+ // do string matching to identify retryable errors.
+ if (string(e.base().what()).find("restart transaction:") == string::npos) {
+ throw;
+ }
+ }
+ }
+ tx.commit();
+}
+
+int main() {
+ try {
+ pqxx::connection c("dbname=bank user=maxroach sslmode=require sslkey=certs/client.maxroach.key sslcert=certs/client.maxroach.crt port=26257 host=localhost");
+
+ executeTx(&c, [](pqxx::dbtransaction *tx) {
+ transferFunds(tx, 1, 2, 100);
+ });
+ }
+ catch (const exception &e) {
+ cerr << e.what() << endl;
+ return 1;
+ }
+ cout << "Success" << endl;
+ return 0;
+}
diff --git a/_includes/v20.2/app/txn-sample.cs b/_includes/v20.2/app/txn-sample.cs
new file mode 100644
index 00000000000..ced5063a4b9
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.cs
@@ -0,0 +1,168 @@
+using System;
+using System.Data;
+using System.Security.Cryptography.X509Certificates;
+using System.Net.Security;
+using Npgsql;
+
+namespace Cockroach
+{
+ class MainClass
+ {
+ static void Main(string[] args)
+ {
+ var connStringBuilder = new NpgsqlConnectionStringBuilder();
+ connStringBuilder.Host = "localhost";
+ connStringBuilder.Port = 26257;
+ connStringBuilder.SslMode = SslMode.Require;
+ connStringBuilder.Username = "maxroach";
+ connStringBuilder.Database = "bank";
+ TxnSample(connStringBuilder.ConnectionString);
+ }
+
+ static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount)
+ {
+ int balance = 0;
+ using (var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran))
+ using (var reader = cmd.ExecuteReader())
+ {
+ if (reader.Read())
+ {
+ balance = reader.GetInt32(0);
+ }
+ else
+ {
+ throw new DataException(String.Format("Account id={0} not found", from));
+ }
+ }
+ if (balance < amount)
+ {
+ throw new DataException(String.Format("Insufficient balance in account id={0}", from));
+ }
+ using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran))
+ {
+ cmd.ExecuteNonQuery();
+ }
+ using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran))
+ {
+ cmd.ExecuteNonQuery();
+ }
+ }
+
+ static void TxnSample(string connString)
+ {
+ using (var conn = new NpgsqlConnection(connString))
+ {
+ conn.ProvideClientCertificatesCallback += ProvideClientCertificatesCallback;
+ conn.UserCertificateValidationCallback += UserCertificateValidationCallback;
+
+ conn.Open();
+
+ // Create the "accounts" table.
+ new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
+
+ // Insert two rows into the "accounts" table.
+ using (var cmd = new NpgsqlCommand())
+ {
+ cmd.Connection = conn;
+ cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
+ cmd.Parameters.AddWithValue("id1", 1);
+ cmd.Parameters.AddWithValue("val1", 1000);
+ cmd.Parameters.AddWithValue("id2", 2);
+ cmd.Parameters.AddWithValue("val2", 250);
+ cmd.ExecuteNonQuery();
+ }
+
+ // Print out the balances.
+ System.Console.WriteLine("Initial balances:");
+ using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
+ using (var reader = cmd.ExecuteReader())
+ while (reader.Read())
+ Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
+
+ try
+ {
+ using (var tran = conn.BeginTransaction())
+ {
+ tran.Save("cockroach_restart");
+ while (true)
+ {
+ try
+ {
+ TransferFunds(conn, tran, 1, 2, 100);
+ tran.Commit();
+ break;
+ }
+ catch (NpgsqlException e)
+ {
+ // Check if the error code indicates a SERIALIZATION_FAILURE.
+ if (e.ErrorCode == 40001)
+ {
+ // Signal the database that we will attempt a retry.
+ tran.Rollback("cockroach_restart");
+ }
+ else
+ {
+ throw;
+ }
+ }
+ }
+ }
+ }
+ catch (DataException e)
+ {
+ Console.WriteLine(e.Message);
+ }
+
+ // Now printout the results.
+ Console.WriteLine("Final balances:");
+ using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
+ using (var reader = cmd.ExecuteReader())
+ while (reader.Read())
+ Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
+ }
+ }
+
+ static void ProvideClientCertificatesCallback(X509CertificateCollection clientCerts)
+ {
+ // To be able to add a certificate with a private key included, we must convert it to
+ // a PKCS #12 format. The following openssl command does this:
+ // openssl pkcs12 -inkey client.maxroach.key -in client.maxroach.crt -export -out client.maxroach.pfx
+ // As of 2018-12-10, you need to provide a password for this to work on macOS.
+ // See https://github.com/dotnet/corefx/issues/24225
+ clientCerts.Add(new X509Certificate2("client.maxroach.pfx", "pass"));
+ }
+
+ // By default, .Net does all of its certificate verification using the system certificate store.
+ // This callback is necessary to validate the server certificate against a CA certificate file.
+ static bool UserCertificateValidationCallback(object sender, X509Certificate certificate, X509Chain defaultChain, SslPolicyErrors defaultErrors)
+ {
+ X509Certificate2 caCert = new X509Certificate2("ca.crt");
+ X509Chain caCertChain = new X509Chain();
+ caCertChain.ChainPolicy = new X509ChainPolicy()
+ {
+ RevocationMode = X509RevocationMode.NoCheck,
+ RevocationFlag = X509RevocationFlag.EntireChain
+ };
+ caCertChain.ChainPolicy.ExtraStore.Add(caCert);
+
+ X509Certificate2 serverCert = new X509Certificate2(certificate);
+
+ caCertChain.Build(serverCert);
+ if (caCertChain.ChainStatus.Length == 0)
+ {
+ // No errors
+ return true;
+ }
+
+ foreach (X509ChainStatus status in caCertChain.ChainStatus)
+ {
+ // Check if we got any errors other than UntrustedRoot (which we will always get if we don't install the CA cert to the system store)
+ if (status.Status != X509ChainStatusFlags.UntrustedRoot)
+ {
+ return false;
+ }
+ }
+ return true;
+ }
+ }
+}
diff --git a/_includes/v20.2/app/txn-sample.go b/_includes/v20.2/app/txn-sample.go
new file mode 100644
index 00000000000..fc15275abca
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.go
@@ -0,0 +1,53 @@
+package main
+
+import (
+ "context"
+ "database/sql"
+ "fmt"
+ "log"
+
+ "github.com/cockroachdb/cockroach-go/crdb"
+ _ "github.com/lib/pq" // registers the "postgres" driver used by sql.Open
+)
+
+func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
+ // Read the balance.
+ var fromBalance int
+ if err := tx.QueryRow(
+ "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
+ return err
+ }
+
+ if fromBalance < amount {
+ return fmt.Errorf("insufficient funds")
+ }
+
+ // Perform the transfer.
+ if _, err := tx.Exec(
+ "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
+ return err
+ }
+ if _, err := tx.Exec(
+ "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
+ return err
+ }
+ return nil
+}
+
+func main() {
+ db, err := sql.Open("postgres",
+ "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
+ if err != nil {
+ log.Fatal("error connecting to the database: ", err)
+ }
+ defer db.Close()
+
+ // Run a transfer in a transaction.
+ err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
+ return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
+ })
+ if err == nil {
+ fmt.Println("Success")
+ } else {
+ log.Fatal("error: ", err)
+ }
+}
diff --git a/_includes/v20.2/app/txn-sample.js b/_includes/v20.2/app/txn-sample.js
new file mode 100644
index 00000000000..1eebaacad30
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.js
@@ -0,0 +1,154 @@
+var async = require('async');
+var fs = require('fs');
+var pg = require('pg');
+
+// Connect to the bank database.
+
+var config = {
+ user: 'maxroach',
+ host: 'localhost',
+ database: 'bank',
+ port: 26257,
+ ssl: {
+ ca: fs.readFileSync('certs/ca.crt')
+ .toString(),
+ key: fs.readFileSync('certs/client.maxroach.key')
+ .toString(),
+ cert: fs.readFileSync('certs/client.maxroach.crt')
+ .toString()
+ }
+};
+
+// Wrapper for a transaction. This automatically re-calls "op" with
+// the client as an argument as long as the database server asks for
+// the transaction to be retried.
+
+function txnWrapper(client, op, next) {
+ client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) {
+ if (err) {
+ return next(err);
+ }
+
+ var released = false;
+ async.doWhilst(function (done) {
+ var handleError = function (err) {
+ // If we got an error, see if it's a retryable one
+ // and, if so, restart.
+ if (err.code === '40001') {
+ // Signal the database that we'll retry.
+ return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done);
+ }
+ // A non-retryable error; break out of the
+ // doWhilst with an error.
+ return done(err);
+ };
+
+ // Attempt the work.
+ op(client, function (err) {
+ if (err) {
+ return handleError(err);
+ }
+ var opResults = arguments;
+
+ // If we reach this point, release and commit.
+ client.query('RELEASE SAVEPOINT cockroach_restart', function (err) {
+ if (err) {
+ return handleError(err);
+ }
+ released = true;
+ return done.apply(null, opResults);
+ });
+ });
+ },
+ function () {
+ return !released;
+ },
+ function (err) {
+ if (err) {
+ client.query('ROLLBACK', function () {
+ next(err);
+ });
+ } else {
+ var txnResults = arguments;
+ client.query('COMMIT', function (err) {
+ if (err) {
+ return next(err);
+ } else {
+ return next.apply(null, txnResults);
+ }
+ });
+ }
+ });
+ });
+}
+
+// The transaction we want to run.
+
+function transferFunds(client, from, to, amount, next) {
+ // Check the current balance.
+ client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) {
+ if (err) {
+ return next(err);
+ } else if (results.rows.length === 0) {
+ return next(new Error('account not found in table'));
+ }
+
+ var acctBal = results.rows[0].balance;
+ if (acctBal >= amount) {
+ // Perform the transfer.
+ async.waterfall([
+ function (next) {
+ // Subtract amount from account 1.
+ client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next);
+ },
+ function (updateResult, next) {
+ // Add amount to account 2.
+ client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next);
+ },
+ function (updateResult, next) {
+ // Fetch account balances after updates.
+ client.query('SELECT id, balance FROM accounts', function (err, selectResult) {
+ next(err, selectResult ? selectResult.rows : null);
+ });
+ }
+ ], next);
+ } else {
+ next(new Error('insufficient funds'));
+ }
+ });
+}
+
+// Create a pool.
+var pool = new pg.Pool(config);
+
+pool.connect(function (err, client, done) {
+ // Closes communication with the database and exits.
+ var finish = function () {
+ done();
+ process.exit();
+ };
+
+ if (err) {
+ console.error('could not connect to cockroachdb', err);
+ finish();
+ }
+
+ // Execute the transaction.
+ txnWrapper(client,
+ function (client, next) {
+ transferFunds(client, 1, 2, 100, next);
+ },
+ function (err, results) {
+ if (err) {
+ console.error('error performing transaction', err);
+ finish();
+ }
+
+ console.log('Balances after transfer:');
+ results.forEach(function (result) {
+ console.log(result);
+ });
+
+ finish();
+ });
+});
diff --git a/_includes/v20.2/app/txn-sample.php b/_includes/v20.2/app/txn-sample.php
new file mode 100644
index 00000000000..363dbcd73cd
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.php
@@ -0,0 +1,71 @@
+<?php
+
+function transferMoney($dbh, $from, $to, $amount) {
+  try {
+    $dbh->beginTransaction();
+    // This savepoint allows us to retry our transaction.
+    $dbh->exec("SAVEPOINT cockroach_restart");
+  } catch (Exception $e) {
+    throw $e;
+  }
+
+ while (true) {
+ try {
+ $stmt = $dbh->prepare(
+ 'UPDATE accounts SET balance = balance + :deposit ' .
+ 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
+
+ // First, withdraw the money from the old account (if possible).
+ $stmt->bindValue(':account', $from, PDO::PARAM_INT);
+ $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
+ $stmt->execute();
+ if ($stmt->rowCount() == 0) {
+ print "source account does not exist or is underfunded\r\n";
+ return;
+ }
+
+ // Next, deposit into the new account (if it exists).
+ $stmt->bindValue(':account', $to, PDO::PARAM_INT);
+ $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
+ $stmt->execute();
+ if ($stmt->rowCount() == 0) {
+ print "destination account does not exist\r\n";
+ return;
+ }
+
+ // Attempt to release the savepoint (which is really the commit).
+ $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
+ $dbh->commit();
+ return;
+ } catch (PDOException $e) {
+ if ($e->getCode() != '40001') {
+ // Non-recoverable error. Rollback and bubble error up the chain.
+ $dbh->rollBack();
+ throw $e;
+ } else {
+ // Cockroach transaction retry code. Rollback to the savepoint and
+ // restart.
+ $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
+ }
+ }
+ }
+}
+
+try {
+ $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt',
+ 'maxroach', null, array(
+ PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
+ PDO::ATTR_EMULATE_PREPARES => true,
+ ));
+
+ transferMoney($dbh, 1, 2, 10);
+
+ print "Account balances after transfer:\r\n";
+ foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
+ print $row['id'] . ': ' . $row['balance'] . "\r\n";
+ }
+} catch (Exception $e) {
+ print $e->getMessage() . "\r\n";
+ exit(1);
+}
+?>
diff --git a/_includes/v20.2/app/txn-sample.py b/_includes/v20.2/app/txn-sample.py
new file mode 100644
index 00000000000..d4c86a36cc8
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.py
@@ -0,0 +1,76 @@
+# Import the driver.
+import psycopg2
+import psycopg2.errorcodes
+
+# Connect to the cluster.
+conn = psycopg2.connect(
+ database='bank',
+ user='maxroach',
+ sslmode='require',
+ sslrootcert='certs/ca.crt',
+ sslkey='certs/client.maxroach.key',
+ sslcert='certs/client.maxroach.crt',
+ port=26257,
+ host='localhost'
+)
+
+def onestmt(conn, sql):
+ with conn.cursor() as cur:
+ cur.execute(sql)
+
+
+# Wrapper for a transaction.
+# This automatically re-calls "op" with the open transaction as an argument
+# as long as the database server asks for the transaction to be retried.
+def run_transaction(conn, op):
+ with conn:
+ onestmt(conn, "SAVEPOINT cockroach_restart")
+ while True:
+ try:
+ # Attempt the work.
+ op(conn)
+
+ # If we reach this point, commit.
+ onestmt(conn, "RELEASE SAVEPOINT cockroach_restart")
+ break
+
+ except psycopg2.OperationalError as e:
+ if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
+ # A non-retryable error; report this up the call stack.
+ raise e
+ # Signal the database that we'll retry.
+ onestmt(conn, "ROLLBACK TO SAVEPOINT cockroach_restart")
+
+
+# The transaction we want to run.
+def transfer_funds(txn, frm, to, amount):
+ with txn.cursor() as cur:
+
+ # Check the current balance.
+ cur.execute("SELECT balance FROM accounts WHERE id = %s", (frm,))
+ from_balance = cur.fetchone()[0]
+ if from_balance < amount:
+ raise ValueError("Insufficient funds")
+
+ # Perform the transfer.
+ cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
+ (amount, frm))
+ cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
+ (amount, to))
+
+
+# Execute the transaction.
+run_transaction(conn, lambda conn: transfer_funds(conn, 1, 2, 100))
+
+
+with conn:
+ with conn.cursor() as cur:
+ # Check account balances.
+ cur.execute("SELECT id, balance FROM accounts")
+ rows = cur.fetchall()
+ print('Balances after transfer:')
+ for row in rows:
+ print([str(cell) for cell in row])
+
+# Close communication with the database.
+conn.close()
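The retry loop in `run_transaction` above can be exercised without a running cluster by stubbing out the database calls. The sketch below is only an illustration of the control flow; `FakeRetryError` and the counter are invented stand-ins for a `40001` serialization failure:

```python
class FakeRetryError(Exception):
    """Stands in for a retryable 40001 serialization failure."""

def run_with_retries(op):
    # Mirrors run_transaction: retry on retryable errors,
    # let anything else propagate to the caller.
    attempts = 0
    while True:
        attempts += 1
        try:
            return op(), attempts
        except FakeRetryError:
            continue  # i.e., ROLLBACK TO SAVEPOINT, then retry

calls = {"n": 0}

def flaky_op():
    # Fails twice with a retryable error, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeRetryError()
    return "committed"

result, attempts = run_with_retries(flaky_op)
print(result, attempts)  # committed 3
```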
diff --git a/_includes/v20.2/app/txn-sample.rb b/_includes/v20.2/app/txn-sample.rb
new file mode 100644
index 00000000000..1c3e028fdf7
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.rb
@@ -0,0 +1,52 @@
+# Import the driver.
+require 'pg'
+
+# Wrapper for a transaction.
+# This automatically re-calls "op" with the open transaction as an argument
+# as long as the database server asks for the transaction to be retried.
+def run_transaction(conn)
+ conn.transaction do |txn|
+ txn.exec('SAVEPOINT cockroach_restart')
+ loop do
+ begin
+ # Attempt the work.
+ yield txn
+
+ # If we reach this point, commit.
+ txn.exec('RELEASE SAVEPOINT cockroach_restart')
+ break
+ rescue PG::TRSerializationFailure
+ txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
+ end
+ end
+ end
+end
+
+def transfer_funds(txn, from, to, amount)
+ txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
+ res.each do |row|
+ raise 'insufficient funds' if Integer(row['balance']) < amount
+ end
+ end
+ txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
+ txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
+end
+
+# Connect to the "bank" database.
+conn = PG.connect(
+ user: 'maxroach',
+ dbname: 'bank',
+ host: 'localhost',
+ port: 26257,
+ sslmode: 'require',
+ sslrootcert: 'certs/ca.crt',
+ sslkey: 'certs/client.maxroach.key',
+ sslcert: 'certs/client.maxroach.crt'
+)
+
+run_transaction(conn) do |txn|
+ transfer_funds(txn, 1, 2, 100)
+end
+
+# Close communication with the database.
+conn.close()
diff --git a/_includes/v20.2/app/txn-sample.rs b/_includes/v20.2/app/txn-sample.rs
new file mode 100644
index 00000000000..c8e099b89e6
--- /dev/null
+++ b/_includes/v20.2/app/txn-sample.rs
@@ -0,0 +1,73 @@
+use openssl::error::ErrorStack;
+use openssl::ssl::{SslConnector, SslFiletype, SslMethod};
+use postgres::{error::SqlState, Client, Error, Transaction};
+use postgres_openssl::MakeTlsConnector;
+
+/// Runs op inside a transaction and retries it as needed.
+/// On non-retryable failures, the transaction is aborted and
+/// rolled back; on success, the transaction is committed.
+fn execute_txn<T, F>(client: &mut Client, op: F) -> Result<T, Error>
+where
+    F: Fn(&mut Transaction) -> Result<T, Error>,
+{
+ let mut txn = client.transaction()?;
+ loop {
+ let mut sp = txn.savepoint("cockroach_restart")?;
+ match op(&mut sp).and_then(|t| sp.commit().map(|_| t)) {
+ Err(ref err)
+ if err
+ .code()
+ .map(|e| *e == SqlState::T_R_SERIALIZATION_FAILURE)
+ .unwrap_or(false) => {}
+ r => break r,
+ }
+ }
+ .and_then(|t| txn.commit().map(|_| t))
+}
+
+fn transfer_funds(txn: &mut Transaction, from: i64, to: i64, amount: i64) -> Result<(), Error> {
+ // Read the balance.
+ let from_balance: i64 = txn
+ .query_one("SELECT balance FROM accounts WHERE id = $1", &[&from])?
+ .get(0);
+
+ assert!(from_balance >= amount);
+
+ // Perform the transfer.
+ txn.execute(
+ "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
+ &[&amount, &from],
+ )?;
+ txn.execute(
+ "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
+ &[&amount, &to],
+ )?;
+ Ok(())
+}
+
+fn ssl_config() -> Result<MakeTlsConnector, ErrorStack> {
+ let mut builder = SslConnector::builder(SslMethod::tls())?;
+ builder.set_ca_file("certs/ca.crt")?;
+ builder.set_certificate_chain_file("certs/client.maxroach.crt")?;
+ builder.set_private_key_file("certs/client.maxroach.key", SslFiletype::PEM)?;
+ Ok(MakeTlsConnector::new(builder.build()))
+}
+
+fn main() {
+ let connector = ssl_config().unwrap();
+ let mut client =
+ Client::connect("postgresql://maxroach@localhost:26257/bank", connector).unwrap();
+
+ // Run a transfer in a transaction.
+ execute_txn(&mut client, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
+
+ // Check account balances after the transaction.
+ for row in &client
+ .query("SELECT id, balance FROM accounts", &[])
+ .unwrap()
+ {
+ let id: i64 = row.get(0);
+ let balance: i64 = row.get(1);
+ println!("{} {}", id, balance);
+ }
+}
diff --git a/_includes/v20.2/app/util.clj b/_includes/v20.2/app/util.clj
new file mode 100644
index 00000000000..d040affe794
--- /dev/null
+++ b/_includes/v20.2/app/util.clj
@@ -0,0 +1,38 @@
+(ns test.util
+ (:require [clojure.java.jdbc :as j]
+ [clojure.walk :as walk]))
+
+(defn txn-restart-err?
+ "Takes an exception and returns true if it is a CockroachDB retry error."
+ [e]
+ (when-let [m (.getMessage e)]
+ (condp instance? e
+ java.sql.BatchUpdateException
+ (and (re-find #"getNextExc" m)
+ (txn-restart-err? (.getNextException e)))
+
+ org.postgresql.util.PSQLException
+ (= (.getSQLState e) "40001") ; 40001 is the code returned by CockroachDB retry errors.
+
+ false)))
+
+;; Wrapper for a transaction.
+;; This automatically invokes the body again as long as the database server
+;; asks for the transaction to be retried.
+
+(defmacro with-txn-retry
+ "Wrap an evaluation within a CockroachDB retry block."
+ [[txn c] & body]
+ `(j/with-db-transaction [~txn ~c]
+ (loop []
+ (j/execute! ~txn ["savepoint cockroach_restart"])
+ (let [res# (try (let [r# (do ~@body)]
+ {:ok r#})
+ (catch java.sql.SQLException e#
+ (if (txn-restart-err? e#)
+ {:retry true}
+ (throw e#))))]
+ (if (:retry res#)
+ (do (j/execute! ~txn ["rollback to savepoint cockroach_restart"])
+ (recur))
+ (:ok res#))))))
diff --git a/_includes/v20.2/backups/advanced-examples-list.md b/_includes/v20.2/backups/advanced-examples-list.md
new file mode 100644
index 00000000000..a4ccb450002
--- /dev/null
+++ b/_includes/v20.2/backups/advanced-examples-list.md
@@ -0,0 +1,9 @@
+For examples of advanced `BACKUP` and `RESTORE` use cases, see [Back up and Restore Data - Advanced Options](backup-and-restore-advanced-options.html). Advanced examples include:
+
+- [Incremental backups with a specified destination](backup-and-restore-advanced-options.html#incremental-backups-with-explicitly-specified-destinations)
+- [Backup with revision history and point-in-time restore](backup-and-restore-advanced-options.html#backup-with-revision-history-and-point-in-time-restore)
+- [Locality-aware backup and restore](backup-and-restore-advanced-options.html#locality-aware-backup-and-restore)
+- [Encrypted backup and restore](backup-and-restore-advanced-options.html#encrypted-backup-and-restore)
+- [Restore into a different database](backup-and-restore-advanced-options.html#restore-into-a-different-database)
+- [Remove the foreign key before restore](backup-and-restore-advanced-options.html#remove-the-foreign-key-before-restore)
+- [Restoring users from `system.users` backup](backup-and-restore-advanced-options.html#restoring-users-from-system-users-backup)
diff --git a/_includes/v20.2/backups/encrypted-backup-description.md b/_includes/v20.2/backups/encrypted-backup-description.md
new file mode 100644
index 00000000000..f96843d623b
--- /dev/null
+++ b/_includes/v20.2/backups/encrypted-backup-description.md
@@ -0,0 +1,11 @@
+You can encrypt full or incremental backups by using the [`encryption_passphrase` option](backup.html#with-encryption-passphrase). Files written by the backup (including `BACKUP` manifests and data files) are encrypted using a key derived from the specified passphrase. To restore the encrypted backup, the same `encryption_passphrase` option (with the same passphrase) must be included in the [`RESTORE`](restore.html) statement.
+
+When used with [incremental backups](backup.html#incremental-backups), the `encryption_passphrase` option is applied to all the [backup file URLs](backup.html#backup-file-urls), which means the same passphrase must be used when appending another incremental backup to an existing backup. Similarly, when used with [locality-aware backups](backup-and-restore-advanced-options.html#locality-aware-backup-and-restore), the passphrase provided is applied to files in all localities.
+
+Encryption is done using [AES-256-GCM](https://en.wikipedia.org/wiki/Galois/Counter_Mode), and GCM is used to both encrypt and authenticate the files. A random [salt](https://en.wikipedia.org/wiki/Salt_(cryptography)) is used to derive a once-per-backup [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) key from the specified passphrase, and then a random [initialization vector](https://en.wikipedia.org/wiki/Initialization_vector) is used per-file. CockroachDB uses [PBKDF2](https://en.wikipedia.org/wiki/PBKDF2) with 64,000 iterations for the key derivation.
+
+{{site.data.alerts.callout_info}}
+`BACKUP` and `RESTORE` will use more memory when using encryption, as both the plain-text and cipher-text of a given file are held in memory during encryption and decryption.
+{{site.data.alerts.end}}
+
+For an example of an encrypted backup, see [Create an encrypted backup](backup-and-restore-advanced-options.html#create-an-encrypted-backup).
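The key-derivation step described above can be sketched with Python's standard library. This is only an illustration of PBKDF2 with 64,000 iterations and a random once-per-backup salt, not CockroachDB's actual implementation; in particular, the hash function (SHA-256) and the salt and key sizes here are assumptions:

```python
import hashlib
import os

def derive_backup_key(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC over the passphrase with 64,000 iterations,
    # yielding a 32-byte (256-bit) AES key.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 64_000, dklen=32)

salt = os.urandom(16)  # random salt, generated once per backup
key = derive_backup_key("correct horse battery staple", salt)
print(len(key))  # 32
```

The same passphrase and salt always derive the same key, which is why restoring requires the original passphrase.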
diff --git a/_includes/v20.2/cdc/core-csv.md b/_includes/v20.2/cdc/core-csv.md
new file mode 100644
index 00000000000..4ee6bfc587d
--- /dev/null
+++ b/_includes/v20.2/cdc/core-csv.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. To set the display format, use the [`--format=csv` flag](cockroach-sql.html#sql-flag-format) when starting the [built-in SQL client](cockroach-sql.html), or set the [`\set display_format=csv` option](cockroach-sql.html#client-side-options) once the SQL client is open.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/cdc/core-url.md b/_includes/v20.2/cdc/core-url.md
new file mode 100644
index 00000000000..7241e203aa7
--- /dev/null
+++ b/_includes/v20.2/cdc/core-url.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](cancel-query.html) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/cdc/create-core-changefeed-avro.md b/_includes/v20.2/cdc/create-core-changefeed-avro.md
new file mode 100644
index 00000000000..3846c0ffd34
--- /dev/null
+++ b/_includes/v20.2/cdc/create-core-changefeed-avro.md
@@ -0,0 +1,104 @@
+In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas.
+
+1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start-single-node \
+ --insecure \
+ --listen-addr=localhost \
+ --background
+ ~~~
+
+2. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/).
+
+3. Move into the extracted `confluent-` directory and start Confluent:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ./bin/confluent start
+ ~~~
+
+ Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives).
+
+4. As the `root` user, open the [built-in SQL client](cockroach-sql.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv
+ ~~~
+
+ {% include {{ page.version.version }}/cdc/core-url.md %}
+
+ {% include {{ page.version.version }}/cdc/core-csv.md %}
+
+5. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SET CLUSTER SETTING kv.rangefeed.enabled = true;
+ ~~~
+
+6. Create table `bar`:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE TABLE bar (a INT PRIMARY KEY);
+ ~~~
+
+7. Insert a row into the table:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > INSERT INTO bar VALUES (0);
+ ~~~
+
+8. Start the core changefeed:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > EXPERIMENTAL CHANGEFEED FOR bar WITH format = experimental_avro, confluent_schema_registry = 'http://localhost:8081';
+ ~~~
+
+ ~~~
+ table,key,value
+ bar,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000
+ ~~~
+
+9. In a new terminal, add another row:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)"
+ ~~~
+
+10. Back in the terminal where the core changefeed is streaming, the following output appears:
+
+ ~~~
+ bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002
+ ~~~
+
+ Note that records may take a couple of seconds to display in the core changefeed.
+
+11. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running.
+
+12. To stop `cockroach`, run:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach quit --insecure
+ ~~~
+
+13. To stop Confluent, move into the extracted `confluent-` directory and stop Confluent:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ./bin/confluent stop
+ ~~~
+
+ To stop all Confluent processes, use:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ./bin/confluent destroy
+ ~~~
diff --git a/_includes/v20.2/cdc/create-core-changefeed.md b/_includes/v20.2/cdc/create-core-changefeed.md
new file mode 100644
index 00000000000..0e9c876a00a
--- /dev/null
+++ b/_includes/v20.2/cdc/create-core-changefeed.md
@@ -0,0 +1,80 @@
+In this example, you'll set up a core changefeed for a single-node cluster.
+
+1. In a terminal window, start `cockroach`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start \
+ --insecure \
+ --listen-addr=localhost \
+ --background
+ ~~~
+
+2. As the `root` user, open the [built-in SQL client](cockroach-sql.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ --url="postgresql://root@127.0.0.1:26257?sslmode=disable" \
+ --format=csv
+ ~~~
+
+ {% include {{ page.version.version }}/cdc/core-url.md %}
+
+ {% include {{ page.version.version }}/cdc/core-csv.md %}
+
+3. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SET CLUSTER SETTING kv.rangefeed.enabled = true;
+ ~~~
+
+4. Create table `foo`:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE TABLE foo (a INT PRIMARY KEY);
+ ~~~
+
+5. Insert a row into the table:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > INSERT INTO foo VALUES (0);
+ ~~~
+
+6. Start the core changefeed:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > EXPERIMENTAL CHANGEFEED FOR foo;
+ ~~~
+ ~~~
+ table,key,value
+ foo,[0],"{""after"": {""a"": 0}}"
+ ~~~
+
+7. In a new terminal, add another row:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)"
+ ~~~
+
+8. Back in the terminal where the core changefeed is streaming, the following output appears:
+
+ ~~~
+ foo,[1],"{""after"": {""a"": 1}}"
+ ~~~
+
+ Note that records may take a couple of seconds to display in the core changefeed.
+
+9. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running.
+
+10. To stop `cockroach`, run:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach quit --insecure
+ ~~~
diff --git a/_includes/v20.2/cdc/print-key.md b/_includes/v20.2/cdc/print-key.md
new file mode 100644
index 00000000000..ab0b0924d30
--- /dev/null
+++ b/_includes/v20.2/cdc/print-key.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+This example only prints the value. To print both the key and value of each message in the changefeed (e.g., to observe what happens with `DELETE`s), use the `--property print.key=true` flag.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/client-transaction-retry.md b/_includes/v20.2/client-transaction-retry.md
new file mode 100644
index 00000000000..6a54534169e
--- /dev/null
+++ b/_includes/v20.2/client-transaction-retry.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+With the default `SERIALIZABLE` [isolation level](transactions.html#isolation-levels), CockroachDB may require the client to [retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a [generic retry function](transactions.html#client-side-intervention) that runs inside a transaction and retries it as needed. The code sample below shows how it is used.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/computed-columns/add-computed-column.md b/_includes/v20.2/computed-columns/add-computed-column.md
new file mode 100644
index 00000000000..c670b1c7285
--- /dev/null
+++ b/_includes/v20.2/computed-columns/add-computed-column.md
@@ -0,0 +1,55 @@
+In this example, create a table:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE x (
+ a INT NULL,
+ b INT NULL AS (a * 2) STORED,
+ c INT NULL AS (a + 4) STORED,
+ FAMILY "primary" (a, b, rowid, c)
+ );
+~~~
+
+Then, insert a row of data:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO x VALUES (6);
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM x;
+~~~
+
+~~~
++---+----+----+
+| a | b | c |
++---+----+----+
+| 6 | 12 | 10 |
++---+----+----+
+(1 row)
+~~~
+
+Now add another computed column to the table:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED;
+~~~
+
+The `d` column is added to the table and computed from the `a` column divided by 2.
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM x;
+~~~
+
+~~~
++---+----+----+---+
+| a | b | c | d |
++---+----+----+---+
+| 6 | 12 | 10 | 3 |
++---+----+----+---+
+(1 row)
+~~~
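The arithmetic that the stored columns perform can be checked outside the database. A minimal sketch (the function name is invented for illustration); note that SQL's `//` is floor division, which Python spells the same way:

```python
def computed_row(a: int) -> dict:
    # Mirrors the generation expressions on table x:
    # b = a * 2, c = a + 4, d = a // 2 (floor division).
    return {"a": a, "b": a * 2, "c": a + 4, "d": a // 2}

print(computed_row(6))  # {'a': 6, 'b': 12, 'c': 10, 'd': 3}
```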
diff --git a/_includes/v20.2/computed-columns/convert-computed-column.md b/_includes/v20.2/computed-columns/convert-computed-column.md
new file mode 100644
index 00000000000..12fd6e7d418
--- /dev/null
+++ b/_includes/v20.2/computed-columns/convert-computed-column.md
@@ -0,0 +1,108 @@
+You can convert a stored, computed column into a regular column by using `ALTER TABLE`.
+
+In this example, create a simple table with a computed column:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE office_dogs (
+ id INT PRIMARY KEY,
+ first_name STRING,
+ last_name STRING,
+ full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED
+ );
+~~~
+
+Then, insert a few rows of data:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO office_dogs (id, first_name, last_name) VALUES
+ (1, 'Petee', 'Hirata'),
+ (2, 'Carl', 'Kimball'),
+ (3, 'Ernie', 'Narayan');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM office_dogs;
+~~~
+
+~~~
++----+------------+-----------+---------------+
+| id | first_name | last_name | full_name |
++----+------------+-----------+---------------+
+| 1 | Petee | Hirata | Petee Hirata |
+| 2 | Carl | Kimball | Carl Kimball |
+| 3 | Ernie | Narayan | Ernie Narayan |
++----+------------+-----------+---------------+
+(3 rows)
+~~~
+
+The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). You can view the column details with the [`SHOW COLUMNS`](show-columns.html) statement:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SHOW COLUMNS FROM office_dogs;
+~~~
+
+~~~
++-------------+-----------+-------------+----------------+------------------------------------+-------------+
+| column_name | data_type | is_nullable | column_default | generation_expression | indices |
++-------------+-----------+-------------+----------------+------------------------------------+-------------+
+| id | INT | false | NULL | | {"primary"} |
+| first_name | STRING | true | NULL | | {} |
+| last_name | STRING | true | NULL | | {} |
+| full_name | STRING | true | NULL | concat(first_name, ' ', last_name) | {} |
++-------------+-----------+-------------+----------------+------------------------------------+-------------+
+(4 rows)
+~~~
+
+Now, convert the computed column (`full_name`) to a regular column:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED;
+~~~
+
+Check that the computed column was converted:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SHOW COLUMNS FROM office_dogs;
+~~~
+
+~~~
++-------------+-----------+-------------+----------------+-----------------------+-------------+
+| column_name | data_type | is_nullable | column_default | generation_expression | indices |
++-------------+-----------+-------------+----------------+-----------------------+-------------+
+| id | INT | false | NULL | | {"primary"} |
+| first_name | STRING | true | NULL | | {} |
+| last_name | STRING | true | NULL | | {} |
+| full_name | STRING | true | NULL | | {} |
++-------------+-----------+-------------+----------------+-----------------------+-------------+
+(4 rows)
+~~~
+
+The computed column is now a regular column and can be updated as such:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is not computed');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM office_dogs;
+~~~
+
+~~~
++----+------------+-----------+----------------------+
+| id | first_name | last_name | full_name |
++----+------------+-----------+----------------------+
+| 1 | Petee | Hirata | Petee Hirata |
+| 2 | Carl | Kimball | Carl Kimball |
+| 3 | Ernie | Narayan | Ernie Narayan |
+| 4 | Lola | McDog | This is not computed |
++----+------------+-----------+----------------------+
+(4 rows)
+~~~
diff --git a/_includes/v20.2/computed-columns/jsonb.md b/_includes/v20.2/computed-columns/jsonb.md
new file mode 100644
index 00000000000..76a5b08ad8a
--- /dev/null
+++ b/_includes/v20.2/computed-columns/jsonb.md
@@ -0,0 +1,35 @@
+In this example, create a table with a `JSONB` column and a computed column:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE student_profiles (
+ id STRING PRIMARY KEY AS (profile->>'id') STORED,
+ profile JSONB
+);
+~~~
+
+Then, insert a few rows of data:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO student_profiles (profile) VALUES
+ ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'),
+ ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'),
+ ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM student_profiles;
+~~~
+~~~
++--------+---------------------------------------------------------------------------------------------------------------------+
+| id | profile |
++--------+---------------------------------------------------------------------------------------------------------------------+
+| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"} |
+| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"} |
+| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} |
++--------+---------------------------------------------------------------------------------------------------------------------+
+~~~
+
+The primary key `id` is computed as a field from the `profile` column.
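The `profile->>'id'` generation expression extracts a JSON field as text. The equivalent lookup can be sketched in Python with the standard `json` module (the sample rows are abbreviated):

```python
import json

rows = [
    '{"id": "d78236", "name": "Arthur Read"}',
    '{"name": "Buster Bunny", "id": "f98112"}',
]

# ->> returns the field as text, or NULL (None here) when it is absent.
ids = [json.loads(r).get("id") for r in rows]
print(ids)  # ['d78236', 'f98112']
```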
diff --git a/_includes/v20.2/computed-columns/partitioning.md b/_includes/v20.2/computed-columns/partitioning.md
new file mode 100644
index 00000000000..926c45793b4
--- /dev/null
+++ b/_includes/v20.2/computed-columns/partitioning.md
@@ -0,0 +1,53 @@
+{{site.data.alerts.callout_info}}Partitioning is an enterprise feature. To request and enable a trial or full enterprise license, see Enterprise Licensing.{{site.data.alerts.end}}
+
+In this example, create a table with geo-partitioning and a computed column:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE user_locations (
+ locality STRING AS (CASE
+ WHEN country IN ('ca', 'mx', 'us') THEN 'north_america'
+ WHEN country IN ('au', 'nz') THEN 'australia'
+ END) STORED,
+ id SERIAL,
+ name STRING,
+ country STRING,
+ PRIMARY KEY (locality, id))
+ PARTITION BY LIST (locality)
+ (PARTITION north_america VALUES IN ('north_america'),
+ PARTITION australia VALUES IN ('australia'));
+~~~
+
+Then, insert a few rows of data:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO user_locations (name, country) VALUES
+ ('Leonard McCoy', 'us'),
+ ('Uhura', 'nz'),
+ ('Spock', 'ca'),
+ ('James Kirk', 'us'),
+ ('Scotty', 'mx'),
+ ('Hikaru Sulu', 'us'),
+ ('Pavel Chekov', 'au');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM user_locations;
+~~~
+~~~
++---------------+--------------------+---------------+---------+
+| locality | id | name | country |
++---------------+--------------------+---------------+---------+
+| australia | 333153890100609025 | Uhura | nz |
+| australia | 333153890100772865 | Pavel Chekov | au |
+| north_america | 333153890100576257 | Leonard McCoy | us |
+| north_america | 333153890100641793 | Spock | ca |
+| north_america | 333153890100674561 | James Kirk | us |
+| north_america | 333153890100707329 | Scotty | mx |
+| north_america | 333153890100740097 | Hikaru Sulu | us |
++---------------+--------------------+---------------+---------+
+~~~
+
+The `locality` column is computed from the `country` column.
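The `CASE` expression that generates `locality` maps each country code to a partition value. A minimal sketch of the same mapping (the function name is invented for illustration):

```python
def locality(country: str):
    # Mirrors the CASE expression in the locality generation column.
    if country in ('ca', 'mx', 'us'):
        return 'north_america'
    if country in ('au', 'nz'):
        return 'australia'
    return None  # CASE with no ELSE branch yields NULL

print([locality(c) for c in ('us', 'nz', 'jp')])
# ['north_america', 'australia', None]
```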
diff --git a/_includes/v20.2/computed-columns/secondary-index.md b/_includes/v20.2/computed-columns/secondary-index.md
new file mode 100644
index 00000000000..e274db59d7e
--- /dev/null
+++ b/_includes/v20.2/computed-columns/secondary-index.md
@@ -0,0 +1,63 @@
+In this example, create a table with a computed column and an index on that column:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE gymnastics (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ athlete STRING,
+ vault DECIMAL,
+ bars DECIMAL,
+ beam DECIMAL,
+ floor DECIMAL,
+ combined_score DECIMAL AS (vault + bars + beam + floor) STORED,
+ INDEX total (combined_score DESC)
+ );
+~~~
+
+Then, insert a few rows of data:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES
+ ('Simone Biles', 15.933, 14.800, 15.300, 15.800),
+ ('Gabby Douglas', 0, 15.766, 0, 0),
+ ('Laurie Hernandez', 15.100, 0, 15.233, 14.833),
+ ('Madison Kocian', 0, 15.933, 0, 0),
+ ('Aly Raisman', 15.833, 0, 15.000, 15.366);
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM gymnastics;
+~~~
+~~~
++--------------------------------------+------------------+--------+--------+--------+--------+----------------+
+| id | athlete | vault | bars | beam | floor | combined_score |
++--------------------------------------+------------------+--------+--------+--------+--------+----------------+
+| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman | 15.833 | 0 | 15.000 | 15.366 | 46.199 |
+| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian | 0 | 15.933 | 0 | 0 | 15.933 |
+| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas | 0 | 15.766 | 0 | 0 | 15.766 |
+| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 | 0 | 15.233 | 14.833 | 45.166 |
+| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles | 15.933 | 14.800 | 15.300 | 15.800 | 61.833 |
++--------------------------------------+------------------+--------+--------+--------+--------+----------------+
+~~~
+
+Now, run a query using the secondary index:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
+~~~
+~~~
++------------------+----------------+
+| athlete | combined_score |
++------------------+----------------+
+| Simone Biles | 61.833 |
+| Aly Raisman | 46.199 |
+| Laurie Hernandez | 45.166 |
+| Madison Kocian | 15.933 |
+| Gabby Douglas | 15.766 |
++------------------+----------------+
+~~~
+
+The athlete with the highest combined score of 61.833 is Simone Biles.
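Because the query sorts on `combined_score`, the optimizer can read directly from the `total` secondary index. You can confirm this by inspecting the query plan (plan output varies by version):

{% include copy-clipboard.html %}
~~~ sql
> EXPLAIN SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
~~~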
diff --git a/_includes/v20.2/computed-columns/simple.md b/_includes/v20.2/computed-columns/simple.md
new file mode 100644
index 00000000000..49045fc6cb7
--- /dev/null
+++ b/_includes/v20.2/computed-columns/simple.md
@@ -0,0 +1,40 @@
+In this example, let's create a simple table with a computed column:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE users (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ city STRING,
+ first_name STRING,
+ last_name STRING,
+ full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED,
+ address STRING,
+ credit_card STRING,
+ dl STRING UNIQUE CHECK (LENGTH(dl) < 8)
+);
+~~~
+
+Then, insert a few rows of data:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO users (first_name, last_name) VALUES
+ ('Lola', 'McDog'),
+ ('Carl', 'Kimball'),
+ ('Ernie', 'Narayan');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM users;
+~~~
+~~~
+ id | city | first_name | last_name | full_name | address | credit_card | dl
++--------------------------------------+------+------------+-----------+---------------+---------+-------------+------+
+ 5740da29-cc0c-47af-921c-b275d21d4c76 | NULL | Ernie | Narayan | Ernie Narayan | NULL | NULL | NULL
+ e7e0b748-9194-4d71-9343-cd65218848f0 | NULL | Lola | McDog | Lola McDog | NULL | NULL | NULL
+ f00e4715-8ca7-4d5a-8de5-ef1d5d8092f3 | NULL | Carl | Kimball | Carl Kimball | NULL | NULL | NULL
+(3 rows)
+~~~
+
+The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html).
diff --git a/_includes/v20.2/faq/auto-generate-unique-ids.html b/_includes/v20.2/faq/auto-generate-unique-ids.html
new file mode 100644
index 00000000000..c1269995b2e
--- /dev/null
+++ b/_includes/v20.2/faq/auto-generate-unique-ids.html
@@ -0,0 +1,107 @@
+To auto-generate unique row IDs, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE users (
+ id UUID NOT NULL DEFAULT gen_random_uuid(),
+ city STRING NOT NULL,
+ name STRING NULL,
+ address STRING NULL,
+ credit_card STRING NULL,
+ CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
+ FAMILY "primary" (id, city, name, address, credit_card)
+);
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM users;
+~~~
+
+~~~
+ id | city | name | address | credit_card
++--------------------------------------+----------+-------+---------+-------------+
+ cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL | NULL
+ 2382564e-702f-42d9-a139-b6df535ae00a | seattle | Eric | NULL | NULL
+ 7d27e40b-263a-4891-b29b-d59135e55650 | seattle | Dan | NULL | NULL
+(3 rows)
+~~~
+
+Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE users2 (
+ id BYTES DEFAULT uuid_v4(),
+ city STRING NOT NULL,
+ name STRING NULL,
+ address STRING NULL,
+ credit_card STRING NULL,
+ CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
+ FAMILY "primary" (id, city, name, address, credit_card)
+);
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO users2 (name, city) VALUES ('Anna', 'new york'), ('Jonah', 'seattle'), ('Terry', 'chicago');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM users2;
+~~~
+
+~~~
+ id | city | name | address | credit_card
++------------------------------------------------+----------+-------+---------+-------------+
+ 4\244\277\323/\261M\007\213\275*\0060\346\025z | chicago | Terry | NULL | NULL
+ \273*t=u.F\010\274f/}\313\332\373a | new york | Anna | NULL | NULL
+ \004\\\364nP\024L)\252\364\222r$\274O0 | seattle | Jonah | NULL | NULL
+(3 rows)
+~~~
+
+In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 512 MiB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load.
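To get a feel for why collisions are not a practical concern, the following birthday-bound approximation estimates the collision probability for `n` randomly generated IDs (a sketch; a version-4 UUID carries 122 random bits of its 128):

~~~ python
def uuid_collision_probability(n, random_bits=122):
    # Birthday-bound approximation: p ≈ n * (n - 1) / (2 * 2^random_bits).
    return (n * (n - 1)) / (2 * 2**random_bits)

# Even a billion generated IDs have a vanishingly small chance of colliding.
p = uuid_collision_probability(10**9)
~~~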
+
+This approach has the disadvantage of creating a primary key that may not be useful in a query directly, which can require a join with another table or a secondary index.
+
+If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE users3 (
+ id INT DEFAULT unique_rowid(),
+ city STRING NOT NULL,
+ name STRING NULL,
+ address STRING NULL,
+ credit_card STRING NULL,
+ CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
+ FAMILY "primary" (id, city, name, address, credit_card)
+);
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO users3 (name, city) VALUES ('Blake', 'chicago'), ('Hannah', 'seattle'), ('Bobby', 'seattle');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM users3;
+~~~
+
+~~~
+ id | city | name | address | credit_card
++--------------------+---------+--------+---------+-------------+
+ 469048192112197633 | chicago | Blake | NULL | NULL
+ 469048192112263169 | seattle | Hannah | NULL | NULL
+ 469048192112295937 | seattle | Bobby | NULL | NULL
+(3 rows)
+~~~
+
+Upon insert or upsert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Also, there can be gaps and the order is not completely guaranteed.
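The sample IDs above illustrate this structure. Assuming the commonly described layout (the low 15 bits hold the node ID and the remaining high bits hold a timestamp component; the exact epoch and resolution are implementation details), a rough decomposition looks like:

~~~ python
NODE_ID_BITS = 15  # assumed layout: low 15 bits = node ID, high bits = timestamp

def decompose_rowid(rowid):
    # Split a unique_rowid() value into (timestamp_component, node_id).
    return rowid >> NODE_ID_BITS, rowid & ((1 << NODE_ID_BITS) - 1)

# The three IDs from the example above were generated on the same gateway
# node, so they share a node ID and their timestamp components increase.
ids = [469048192112197633, 469048192112263169, 469048192112295937]
parts = [decompose_rowid(i) for i in ids]
~~~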
diff --git a/_includes/v20.2/faq/clock-synchronization-effects.md b/_includes/v20.2/faq/clock-synchronization-effects.md
new file mode 100644
index 00000000000..98e0d13888f
--- /dev/null
+++ b/_includes/v20.2/faq/clock-synchronization-effects.md
@@ -0,0 +1,26 @@
+CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed, it spontaneously shuts down. This offset defaults to 500ms but can be changed via the [`--max-offset`](cockroach-start.html#flags-max-offset) flag when starting each node.
+
+While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node.
+
+The one rare case to note is when a node's clock suddenly jumps beyond the maximum offset before the node detects it. Although extremely unlikely, this could occur, for example, when running CockroachDB inside a VM and the VM hypervisor decides to migrate the VM to different hardware with a different time. In this case, there can be a small window of time between when the node's clock becomes unsynchronized and when the node spontaneously shuts down. During this window, it would be possible for a client to read stale data and write data derived from stale reads. To protect against this, we recommend using the `server.clock.forward_jump_check_enabled` and `server.clock.persist_upper_bound_interval` [cluster settings](cluster-settings.html).
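For example, to enable both protections (the interval below is only an illustrative value; choose one appropriate for your deployment):

{% include copy-clipboard.html %}
~~~ sql
> SET CLUSTER SETTING server.clock.forward_jump_check_enabled = true;
~~~

{% include copy-clipboard.html %}
~~~ sql
> SET CLUSTER SETTING server.clock.persist_upper_bound_interval = '10s';
~~~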
+
+### Considerations
+
+When setting up clock synchronization:
+
+- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing).
+- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should.
+- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.
+- Do not run more than one clock sync service on VMs where `cockroach` is running.
+
+### Tutorials
+
+For guidance on synchronizing clocks, see the tutorial for your deployment environment:
+
+Environment | Featured Approach
+------------|---------------------
+[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service.
+[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service.
+[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service.
+[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service.
+[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service.
diff --git a/_includes/v20.2/faq/clock-synchronization-monitoring.html b/_includes/v20.2/faq/clock-synchronization-monitoring.html
new file mode 100644
index 00000000000..6db8b963acb
--- /dev/null
+++ b/_includes/v20.2/faq/clock-synchronization-monitoring.html
@@ -0,0 +1,8 @@
+As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus timeseries database. Two of these metrics export how close each node's clock is to the clock of all other nodes:
+
+Metric | Definition
+-------|-----------
+`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds
+`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds
+
+As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will kill itself if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset.
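In concrete terms, with the default 500ms maximum offset a node self-terminates when its mean offset reaches 400ms (80%). A monitoring check might alert earlier, at some fraction of that kill threshold (the 60% fraction below is just an example choice):

~~~ python
def should_alert(mean_offset_nanos, max_offset_seconds=0.5, alert_fraction=0.6):
    # Nodes shut down at 80% of --max-offset; alert well before that point.
    max_offset_nanos = max_offset_seconds * 1e9
    return abs(mean_offset_nanos) >= alert_fraction * max_offset_nanos

# With the defaults, a 350ms mean offset warrants an alert; 100ms does not.
~~~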
diff --git a/_includes/v20.2/faq/differences-between-numberings.md b/_includes/v20.2/faq/differences-between-numberings.md
new file mode 100644
index 00000000000..741ec4f8066
--- /dev/null
+++ b/_includes/v20.2/faq/differences-between-numberings.md
@@ -0,0 +1,11 @@
+
+| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences |
+|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------|
+| Size | 16 bytes | 8 bytes | 1 to 8 bytes |
+| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered |
+| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause contention |
+| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values |
+| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local |
+| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher |
+| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node |
+| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited |
diff --git a/_includes/v20.2/faq/planned-maintenance.md b/_includes/v20.2/faq/planned-maintenance.md
new file mode 100644
index 00000000000..e8a3562b602
--- /dev/null
+++ b/_includes/v20.2/faq/planned-maintenance.md
@@ -0,0 +1,22 @@
+By default, if a node stays offline for more than 5 minutes, the cluster will consider it dead and will rebalance its data to other nodes. Before temporarily stopping nodes for planned maintenance (e.g., upgrading system software), if you expect any nodes to be offline for longer than 5 minutes, you can prevent the cluster from unnecessarily rebalancing data off the nodes by increasing the `server.time_until_store_dead` [cluster setting](cluster-settings.html) to match the estimated maintenance window.
+
+For example, let's say you want to maintain a group of servers, and the nodes running on the servers may be offline for up to 15 minutes as a result. Before shutting down the nodes, you would change the `server.time_until_store_dead` cluster setting as follows:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING server.time_until_store_dead = '15m0s';
+~~~
+
+After completing the maintenance work and [restarting the nodes](cockroach-start.html), you would then change the setting back to its default:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING server.time_until_store_dead = '5m0s';
+~~~
+
+It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING server.shutdown.drain_wait = '10s';
+~~~
diff --git a/_includes/v20.2/faq/sequential-numbers.md b/_includes/v20.2/faq/sequential-numbers.md
new file mode 100644
index 00000000000..8a4794b9243
--- /dev/null
+++ b/_includes/v20.2/faq/sequential-numbers.md
@@ -0,0 +1,8 @@
+Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations:
+
+- Unless you need roughly-ordered numbers, we recommend using [`UUID`](uuid.html) values instead. See the [previous
+FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details.
+- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that
+consumes a lower sequence number commits after a transaction that consumes a higher number).
+- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause contention on few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers.
+- {% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
diff --git a/_includes/v20.2/faq/sequential-transactions.md b/_includes/v20.2/faq/sequential-transactions.md
new file mode 100644
index 00000000000..684f2ce5d2a
--- /dev/null
+++ b/_includes/v20.2/faq/sequential-transactions.md
@@ -0,0 +1,19 @@
+Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly
+solutions instead. For example, CockroachDB's [time travel queries (`AS OF SYSTEM
+TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following:
+
+- Paginating through all the changes to a table or dataset
+- Determining the order of changes to data over time
+- Determining the state of data at some point in the past
+- Determining the changes to data between two points of time
+
+Note also that the values generated by `unique_rowid()`, described in the previous FAQ entries, provide an approximate time ordering.
+
+However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows:
+
+- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);`
+- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;`
+
+This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result.
+
+If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs.
diff --git a/_includes/v20.2/faq/simulate-key-value-store.html b/_includes/v20.2/faq/simulate-key-value-store.html
new file mode 100644
index 00000000000..4772fa5358c
--- /dev/null
+++ b/_includes/v20.2/faq/simulate-key-value-store.html
@@ -0,0 +1,13 @@
+CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key:
+
+~~~ sql
+> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES);
+~~~
+
+When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation:
+
+~~~ sql
+> UPSERT INTO kv VALUES (1, b'hello');
+~~~
+
+This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises.
diff --git a/_includes/v20.2/faq/sql-query-logging.md b/_includes/v20.2/faq/sql-query-logging.md
new file mode 100644
index 00000000000..4e3a8b27780
--- /dev/null
+++ b/_includes/v20.2/faq/sql-query-logging.md
@@ -0,0 +1,143 @@
+There are several ways to log SQL queries. The type of logging you use will depend on your requirements.
+
+- For per-table audit logs, turn on [SQL audit logs](#sql-audit-logs).
+- For system troubleshooting and performance optimization, turn on [cluster-wide execution logs](#cluster-wide-execution-logs) and [slow query logs](#slow-query-logs).
+- For connection troubleshooting, turn on [authentication logs](#authentication-logs).
+- For local testing, turn on [per-node execution logs](#per-node-execution-logs).
+
+### SQL audit logs
+
+{% include {{ page.version.version }}/misc/experimental-warning.md %}
+
+SQL audit logging is useful if you want to log all queries that are run against specific tables.
+
+- For a tutorial, see [SQL Audit Logging](sql-audit-logging.html).
+- For SQL reference documentation, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html).
+- Note that SQL audit logs perform one disk I/O per event and will impact performance.
+
+### Cluster-wide execution logs
+
+For production clusters, the best way to log all queries is to turn on the [cluster-wide setting](cluster-settings.html) `sql.trace.log_statement_execute`:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING sql.trace.log_statement_execute = true;
+~~~
+
+With this setting on, each node of the cluster writes all SQL queries it executes to a secondary `cockroach-sql-exec` log file. Use the symlink `cockroach-sql-exec.log` to open the most recent log. When you no longer need to log queries, you can turn the setting back off:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING sql.trace.log_statement_execute = false;
+~~~
+
+Log files are written to CockroachDB's standard [log directory](debug-and-error-logs.html#write-to-file).
+
+### Slow query logs
+
+Another useful [cluster setting](cluster-settings.html) is `sql.log.slow_query.latency_threshold`, which is used to log only queries whose service latency exceeds a specified threshold value (e.g., 100 milliseconds):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING sql.log.slow_query.latency_threshold = '100ms';
+~~~
+
+Each node that serves as a gateway will then record slow SQL queries to a `cockroach-sql-slow` log file. Use the symlink `cockroach-sql-slow.log` to open the most recent log. For more details on logging slow queries, see [Using the slow query log](query-behavior-troubleshooting.html#using-the-slow-query-log).
+
+Log files are written to CockroachDB's standard [log directory](debug-and-error-logs.html#write-to-file).
+
+### Authentication logs
+
+{% include {{ page.version.version }}/misc/experimental-warning.md %}
+
+SQL client connections can be logged by turning on the `server.auth_log.sql_connections.enabled` [cluster setting](cluster-settings.html):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING server.auth_log.sql_connections.enabled = true;
+~~~
+
+This will log connection established and connection terminated events to a `cockroach-auth` log file. Use the symlink `cockroach-auth.log` to open the most recent log.
+
+{{site.data.alerts.callout_info}}
+In addition to SQL sessions, connection events can include SQL-based liveness probe attempts, as well as attempts to use the [PostgreSQL cancel protocol](https://www.postgresql.org/docs/current/protocol-flow.html#id-1.10.5.7.9).
+{{site.data.alerts.end}}
+
+This example log shows both types of connection events over a `hostssl` (TLS certificate over TCP) connection:
+
+~~~
+I200219 05:08:43.083907 5235 sql/pgwire/server.go:445 [n1,client=[::1]:34588] 22 received connection
+I200219 05:08:44.171384 5235 sql/pgwire/server.go:453 [n1,client=[::1]:34588,hostssl] 26 disconnected; duration: 1.087489893s
+~~~
+
+Along with the above, SQL client authenticated sessions can be logged by turning on the `server.auth_log.sql_sessions.enabled` [cluster setting](cluster-settings.html):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING server.auth_log.sql_sessions.enabled = true;
+~~~
+
+This logs authentication method selection, authentication method application, authentication method result, and session termination events to the `cockroach-auth` log file. Use the symlink `cockroach-auth.log` to open the most recent log.
+
+This example log shows authentication success over a `hostssl` (TLS certificate over TCP) connection:
+
+~~~
+I200219 05:08:43.089501 5149 sql/pgwire/auth.go:327 [n1,client=[::1]:34588,hostssl,user=root] 23 connection matches HBA rule:
+# TYPE DATABASE USER ADDRESS METHOD OPTIONS
+host all root all cert-password
+I200219 05:08:43.091045 5149 sql/pgwire/auth.go:327 [n1,client=[::1]:34588,hostssl,user=root] 24 authentication succeeded
+I200219 05:08:44.169684 5235 sql/pgwire/conn.go:216 [n1,client=[::1]:34588,hostssl,user=root] 25 session terminated; duration: 1.080240961s
+~~~
+
+This example log shows authentication failure log over a `local` (password over Unix socket) connection:
+
+~~~
+I200219 05:02:18.148961 1037 sql/pgwire/auth.go:327 [n1,client,local,user=root] 17 connection matches HBA rule:
+# TYPE DATABASE USER ADDRESS METHOD OPTIONS
+local all all password
+I200219 05:02:18.151644 1037 sql/pgwire/auth.go:327 [n1,client,local,user=root] 18 user has no password defined
+I200219 05:02:18.152863 1037 sql/pgwire/auth.go:327 [n1,client,local,user=root] 19 authentication failed: password authentication failed for user root
+I200219 05:02:18.154168 1036 sql/pgwire/conn.go:216 [n1,client,local,user=root] 20 session terminated; duration: 5.261538ms
+~~~
+
+For complete logging of client connections, we recommend enabling both `server.auth_log.sql_connections.enabled` and `server.auth_log.sql_sessions.enabled`. Note that both logs perform one disk I/O per event and will impact performance.
+
+For more details on authentication and certificates, see [Authentication](authentication.html).
+
+Log files are written to CockroachDB's standard [log directory](debug-and-error-logs.html#write-to-file).
+
+### Per-node execution logs
+
+Alternatively, if you are testing CockroachDB locally and want to log queries executed just by a specific node, you can either pass a CLI flag at node startup, or execute a SQL function on a running node.
+
+To enable logging when starting a new node from the CLI, pass the `--vmodule` flag to the [`cockroach start`](cockroach-start.html) command. For example, to start a single node locally and log all client-generated SQL queries it executes, you'd run:
+
+~~~ shell
+$ cockroach start --insecure --listen-addr=localhost --vmodule=exec_log=2 --join=
+~~~
+
+{{site.data.alerts.callout_success}}
+To log CockroachDB-generated SQL queries as well, use `--vmodule=exec_log=3`.
+{{site.data.alerts.end}}
+
+From the SQL prompt on a running node, execute the `crdb_internal.set_vmodule()` [function](functions-and-operators.html):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT crdb_internal.set_vmodule('exec_log=2');
+~~~
+
+This will result in the following output:
+
+~~~
+ crdb_internal.set_vmodule
++---------------------------+
+ 0
+(1 row)
+~~~
+
+Once the logging is enabled, all client-generated SQL queries executed by the node will be written to the primary [CockroachDB log file](debug-and-error-logs.html) as follows:
+
+~~~
+I180402 19:12:28.112957 394661 sql/exec_log.go:173 [n1,client=127.0.0.1:50155,user=root] exec "psql" {} "SELECT version()" {} 0.795 1 ""
+~~~
diff --git a/_includes/v20.2/faq/when-to-interleave-tables.html b/_includes/v20.2/faq/when-to-interleave-tables.html
new file mode 100644
index 00000000000..a65196ad693
--- /dev/null
+++ b/_includes/v20.2/faq/when-to-interleave-tables.html
@@ -0,0 +1,5 @@
+You're most likely to benefit from interleaved tables when:
+
+ - Your tables form a [hierarchy](interleave-in-parent.html#interleaved-hierarchy)
+ - Queries maximize the [benefits of interleaving](interleave-in-parent.html#benefits)
+ - Queries do not suffer too greatly from interleaving's [tradeoffs](interleave-in-parent.html#tradeoffs)
diff --git a/_includes/v20.2/json/json-sample.go b/_includes/v20.2/json/json-sample.go
new file mode 100644
index 00000000000..75b15e95baf
--- /dev/null
+++ b/_includes/v20.2/json/json-sample.go
@@ -0,0 +1,79 @@
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "time"
+
+ _ "github.com/lib/pq"
+)
+
+func main() {
+ db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257")
+ if err != nil {
+ panic(err)
+ }
+
+ // The Reddit API wants us to tell it where to start from. The first request
+ // we just say "null" to say "from the start", subsequent requests will use
+ // the value received from the last call.
+ after := "null"
+
+ for i := 0; i < 300; i++ {
+ after, err = makeReq(db, after)
+ if err != nil {
+ panic(err)
+ }
+ // Reddit limits to 30 requests per minute, so don't do any more than that.
+ time.Sleep(2 * time.Second)
+ }
+}
+
+func makeReq(db *sql.DB, after string) (string, error) {
+ // First, make a request to reddit using the appropriate "after" string.
+ client := &http.Client{}
+ req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil)
+ if err != nil {
+ return "", err
+ }
+
+ req.Header.Add("User-Agent", `Go`)
+
+ resp, err := client.Do(req)
+ if err != nil {
+ return "", err
+ }
+ defer resp.Body.Close()
+
+ res, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return "", err
+ }
+
+ // We've gotten back our JSON from reddit, we can use a couple SQL tricks to
+ // accomplish multiple things at once.
+ // The JSON reddit returns looks like this:
+ // {
+ // "data": {
+ // "children": [ ... ]
+ // },
+ // "after": ...
+ // }
+ // We structure our query so that we extract the `children` field, and then
+ // expand that and insert each individual element into the database as a
+ // separate row. We then return the "after" field so we know how to make the
+ // next request.
+ r, err := db.Query(`
+ INSERT INTO jsonb_test.programming (posts)
+ SELECT json_array_elements($1->'data'->'children')
+ RETURNING $1->'data'->'after'`,
+ string(res))
+ if err != nil {
+ return "", err
+ }
+
+ // Since we did a RETURNING, we need to grab the result of our query.
+ defer r.Close()
+ var newAfter string
+ if r.Next() {
+ if err := r.Scan(&newAfter); err != nil {
+ return "", err
+ }
+ }
+
+ return newAfter, nil
+}
diff --git a/_includes/v20.2/json/json-sample.py b/_includes/v20.2/json/json-sample.py
new file mode 100644
index 00000000000..64ab9dad0d0
--- /dev/null
+++ b/_includes/v20.2/json/json-sample.py
@@ -0,0 +1,44 @@
+import json
+import psycopg2
+import requests
+import time
+
+conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257)
+conn.set_session(autocommit=True)
+cur = conn.cursor()
+
+# The Reddit API wants us to tell it where to start from. For the first
+# request we send "null", meaning "from the start"; subsequent requests
+# use the value received from the previous response.
+url = "https://www.reddit.com/r/programming.json"
+after = {"after": "null"}
+
+for n in range(300):
+ # First, make a request to reddit using the appropriate "after" string.
+ req = requests.get(url, params=after, headers={"User-Agent": "Python"})
+
+ # Decode the JSON and set "after" for the next request.
+ resp = req.json()
+ after = {"after": str(resp['data']['after'])}
+
+ # Convert the JSON to a string to send to the database.
+ data = json.dumps(resp)
+
+ # The JSON reddit returns looks like this:
+ # {
+ # "data": {
+ # "children": [ ... ]
+ # },
+ # "after": ...
+ # }
+ # We structure our query so that we extract the `children` field, and then
+ # expand that and insert each individual element into the database as a
+ # separate row.
+ cur.execute("""INSERT INTO jsonb_test.programming (posts)
+ SELECT json_array_elements(%s->'data'->'children')""", (data,))
+
+ # Reddit limits to 30 requests per minute, so don't do any more than that.
+ time.sleep(2)
+
+cur.close()
+conn.close()
diff --git a/_includes/v20.2/known-limitations/adding-stores-to-node.md b/_includes/v20.2/known-limitations/adding-stores-to-node.md
new file mode 100644
index 00000000000..206d98718a3
--- /dev/null
+++ b/_includes/v20.2/known-limitations/adding-stores-to-node.md
@@ -0,0 +1,5 @@
+After a node has initially joined a cluster, it is not possible to add additional [stores](cockroach-start.html#store) to the node. If the node is stopped and restarted with additional stores, it will fail to reconnect to the cluster.
+
+To work around this limitation, [decommission the node](remove-nodes.html), remove its data directory, and then run [`cockroach start`](cockroach-start.html) to join the cluster again as a new node.
+
+[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/39415)
diff --git a/_includes/v20.2/known-limitations/cdc.md b/_includes/v20.2/known-limitations/cdc.md
new file mode 100644
index 00000000000..80f482a1cd0
--- /dev/null
+++ b/_includes/v20.2/known-limitations/cdc.md
@@ -0,0 +1,10 @@
+- Changefeeds only work on tables with a single [column family](column-families.html) (which is the default for new tables).
+- Changefeeds do not share internal buffers, so each running changefeed will increase total memory usage. To watch multiple tables, we recommend creating a changefeed with a comma-separated list of tables.
+- Many DDL queries (including [`TRUNCATE`](truncate.html) and [`DROP TABLE`](drop-table.html)) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended).
+- Changefeeds cannot be [backed up](backup.html) or [restored](restore.html).
+- Partial or intermittent sink unavailability may impact changefeed stability; however, [ordering guarantees](change-data-capture.html#ordering-guarantees) will still hold for as long as a changefeed [remains active](change-data-capture.html#monitor-a-changefeed).
+- Changefeeds cannot be altered. To alter, cancel the changefeed and [create a new one with updated settings from where it left off](create-changefeed.html#start-a-new-changefeed-where-another-ended).
+- Additional target options will be added, including partitions and ranges of primary key rows.
+- There is an open correctness issue with changefeeds connected to cloud storage sinks where new row information will display with a lower timestamp than what has already been emitted, which violates our [ordering guarantees](change-data-capture.html#ordering-guarantees).
+- Changefeeds do not pick up data ingested with the [`IMPORT INTO`](import-into.html) statement.
+- Using a [cloud storage sink](create-changefeed.html#cloud-storage-sink) only works with `JSON` and emits [newline-delimited json](http://ndjson.org) files.
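+
+For example, a single changefeed can watch multiple tables at once (a sketch; the table names and Kafka address below are placeholders):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE CHANGEFEED FOR TABLE orders, customers INTO 'kafka://localhost:9092';
+~~~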
diff --git a/_includes/v20.2/known-limitations/correlated-ctes.md b/_includes/v20.2/known-limitations/correlated-ctes.md
new file mode 100644
index 00000000000..165c052816b
--- /dev/null
+++ b/_includes/v20.2/known-limitations/correlated-ctes.md
@@ -0,0 +1,24 @@
+### Correlated common table expressions
+
+CockroachDB does not support correlated common table expressions. This means that a CTE cannot refer to a variable defined outside the scope of that CTE.
+
+For example, the following query returns an error:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM users
+ WHERE id =
+ (WITH rides_home AS
+ (SELECT revenue FROM rides
+ WHERE end_address = address)
+ SELECT rider_id FROM rides_home);
+~~~
+
+~~~
+ERROR: CTEs may not be correlated
+SQLSTATE: 0A000
+~~~
+
+This query returns an error because the `WITH rides_home` clause references a column (`address`) returned by the `SELECT` statement at the top level of the query, outside the `rides_home` CTE definition.
+
+For details, see the tracking issue: [cockroachdb/cockroach#42540](https://github.com/cockroachdb/cockroach/issues/42540).
\ No newline at end of file
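+
+One way to avoid the correlation (a sketch against the same `users` and `rides` tables; verify that it matches the semantics you intend) is to move the outer column reference into an ordinary join condition:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT users.*
+    FROM users
+    JOIN rides ON rides.rider_id = users.id
+                AND rides.end_address = users.address;
+~~~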
diff --git a/_includes/v20.2/known-limitations/dropping-renaming-during-upgrade.md b/_includes/v20.2/known-limitations/dropping-renaming-during-upgrade.md
new file mode 100644
index 00000000000..38f7f9ddd87
--- /dev/null
+++ b/_includes/v20.2/known-limitations/dropping-renaming-during-upgrade.md
@@ -0,0 +1,10 @@
+When upgrading from v20.1.x to v20.2.0, as soon as any node of the cluster has run v20.2.0, it is important to avoid dropping, renaming, or truncating tables, views, sequences, or databases on the v20.1 nodes. This is true even in cases where nodes were upgraded to v20.2.0 and then rolled back to v20.1.
+
+In this case, avoid running the following operations against v20.1 nodes:
+
+- [`DROP TABLE`](drop-table.html), [`TRUNCATE TABLE`](truncate.html), [`RENAME TABLE`](rename-table.html)
+- [`DROP VIEW`](drop-view.html)
+- [`DROP SEQUENCE`](drop-sequence.html), [`RENAME SEQUENCE`](rename-sequence.html)
+- [`DROP DATABASE`](drop-database.html), [`RENAME DATABASE`](rename-database.html)
+
+Running any of these operations against v20.1 nodes will result in inconsistency between two internal tables, `system.namespace` and `system.namespace2`. This inconsistency will prevent you from being able to recreate the dropped or renamed objects; the returned error will be `ERROR: relation already exists`. In the case of a dropped or renamed database, [`SHOW DATABASES`](show-databases.html) will also return an error: `ERROR: internal error: "" is not a database`.
diff --git a/_includes/v20.2/known-limitations/dump-table-with-collations.md b/_includes/v20.2/known-limitations/dump-table-with-collations.md
new file mode 100644
index 00000000000..50c700b0e1b
--- /dev/null
+++ b/_includes/v20.2/known-limitations/dump-table-with-collations.md
@@ -0,0 +1,55 @@
+When using [`cockroach dump`](cockroach-dump.html) to dump the data of a table containing [collations](collate.html), the resulting `INSERT`s do not include the relevant collation clauses. For example:
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach start-single-node --insecure
+~~~
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach sql --insecure
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE de_names (name STRING COLLATE de PRIMARY KEY);
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO de_names VALUES
+ ('Backhaus' COLLATE de),
+ ('Bär' COLLATE de),
+ ('Baz' COLLATE de)
+ ;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> \q
+~~~
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach dump defaultdb de_names --insecure > dump.sql
+~~~
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cat dump.sql
+~~~
+
+~~~
+CREATE TABLE de_names (
+ name STRING COLLATE de NOT NULL,
+ CONSTRAINT "primary" PRIMARY KEY (name ASC),
+ FAMILY "primary" (name)
+);
+
+INSERT INTO de_names (name) VALUES
+ ('Backhaus'),
+ (e'B\u00E4r'),
+ ('Baz');
+~~~
+
+[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/48278)
diff --git a/_includes/v20.2/known-limitations/dump-table-with-no-columns.md b/_includes/v20.2/known-limitations/dump-table-with-no-columns.md
new file mode 100644
index 00000000000..9dc903636c5
--- /dev/null
+++ b/_includes/v20.2/known-limitations/dump-table-with-no-columns.md
@@ -0,0 +1 @@
+It is not currently possible to use [`cockroach dump`](cockroach-dump.html) to dump the schema and data of a table with no user-defined columns. See [#35462](https://github.com/cockroachdb/cockroach/issues/35462) for more details.
diff --git a/_includes/v20.2/known-limitations/import-high-disk-contention.md b/_includes/v20.2/known-limitations/import-high-disk-contention.md
new file mode 100644
index 00000000000..48b9c63acf2
--- /dev/null
+++ b/_includes/v20.2/known-limitations/import-high-disk-contention.md
@@ -0,0 +1,6 @@
+[`IMPORT`](import.html) can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
+~~~
diff --git a/_includes/v20.2/known-limitations/import-interleaved-table.md b/_includes/v20.2/known-limitations/import-interleaved-table.md
new file mode 100644
index 00000000000..1cde934b496
--- /dev/null
+++ b/_includes/v20.2/known-limitations/import-interleaved-table.md
@@ -0,0 +1 @@
+After using [`cockroach dump`](cockroach-dump.html) to dump the schema and data of an interleaved table, the output must be edited before it can be imported via [`IMPORT`](import.html). See [#35462](https://github.com/cockroachdb/cockroach/issues/35462) for the workaround and more details.
diff --git a/_includes/v20.2/known-limitations/node-map.md b/_includes/v20.2/known-limitations/node-map.md
new file mode 100644
index 00000000000..df9ef58486e
--- /dev/null
+++ b/_includes/v20.2/known-limitations/node-map.md
@@ -0,0 +1,8 @@
+You cannot assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration:
+
+| Node | Region | Datacenter |
+| ------ | ------ | ------ |
+| Node1 | us-east | datacenter-1 |
+| Node2 | us-west | datacenter-1 |
+
+In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the Node Map will not be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the Node Map will be displayed.
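+
+Coordinates are assigned by inserting rows into the `system.locations` table. For example, assigning them at the region level (a sketch; the coordinate values below are placeholders):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO system.locations VALUES
+  ('region', 'us-east', 37.47, -76.33),
+  ('region', 'us-west', 43.80, -120.55);
+~~~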
diff --git a/_includes/v20.2/known-limitations/partitioning-with-placeholders.md b/_includes/v20.2/known-limitations/partitioning-with-placeholders.md
new file mode 100644
index 00000000000..b3c3345200d
--- /dev/null
+++ b/_includes/v20.2/known-limitations/partitioning-with-placeholders.md
@@ -0,0 +1 @@
+When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause.
diff --git a/_includes/v20.2/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md b/_includes/v20.2/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md
new file mode 100644
index 00000000000..b7d947bb4c9
--- /dev/null
+++ b/_includes/v20.2/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md
@@ -0,0 +1,64 @@
+Schema change [DDL](https://en.wikipedia.org/wiki/Data_definition_language#ALTER_statement) statements that run inside a multi-statement transaction with non-DDL statements can fail at [`COMMIT`](commit-transaction.html) time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded.
+
+If such a failure occurs, CockroachDB will emit a new CockroachDB-specific error code, `XXA00`, and the following error message:
+
+```
+transaction committed but schema change aborted with error:
+HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
+Manual inspection may be required to determine the actual state of the database.
+```
+
+{{site.data.alerts.callout_info}}
+This limitation exists in versions of CockroachDB prior to 19.2. In these older versions, CockroachDB returned the Postgres error code `40003`, `"statement completion unknown"`.
+{{site.data.alerts.end}}
+
+{{site.data.alerts.callout_danger}}
+If you must execute schema change DDL statements inside a multi-statement transaction, we **strongly recommend** checking for this error code and handling it appropriately every time you execute such transactions.
+{{site.data.alerts.end}}
+
+This error will occur in various scenarios, including but not limited to:
+
+- Creating a unique index fails because values aren't unique.
+- The evaluation of a computed value fails.
+- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column.
+
+To see an example of this error, start by creating the following table.
+
+{% include copy-clipboard.html %}
+~~~ sql
+CREATE TABLE T(x INT);
+INSERT INTO T(x) VALUES (1), (2), (3);
+~~~
+
+Then, enter the following multi-statement transaction, which will trigger the error.
+
+{% include copy-clipboard.html %}
+~~~ sql
+BEGIN;
+ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x);
+INSERT INTO T(x) VALUES (3);
+COMMIT;
+~~~
+
+~~~
+pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x"
+HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed.
+Manual inspection may be required to determine the actual state of the database.
+~~~
+
+In this example, the [`INSERT`](insert.html) statement committed, but the [`ALTER TABLE`](alter-table.html) statement adding a [`UNIQUE` constraint](unique.html) failed. We can verify this by looking at the data in table `t` and seeing that the additional non-unique value `3` was successfully inserted.
+
+{% include copy-clipboard.html %}
+~~~ sql
+SELECT * FROM t;
+~~~
+
+~~~
+ x
++---+
+ 1
+ 2
+ 3
+ 3
+(4 rows)
+~~~
diff --git a/_includes/v20.2/known-limitations/schema-changes-between-prepared-statements.md b/_includes/v20.2/known-limitations/schema-changes-between-prepared-statements.md
new file mode 100644
index 00000000000..c739262b4b8
--- /dev/null
+++ b/_includes/v20.2/known-limitations/schema-changes-between-prepared-statements.md
@@ -0,0 +1,42 @@
+When the schema of a table targeted by a prepared statement changes before the prepared statement is executed, CockroachDB allows the prepared statement to return results based on the changed table schema, for example:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE users (id INT PRIMARY KEY);
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> PREPARE prep1 AS SELECT * FROM users;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> ALTER TABLE users ADD COLUMN name STRING;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> INSERT INTO users VALUES (1, 'Max Roach');
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> EXECUTE prep1;
+~~~
+
+~~~
++----+-----------+
+| id | name |
++----+-----------+
+| 1 | Max Roach |
++----+-----------+
+(1 row)
+~~~
+
+We therefore recommend **not** using `SELECT *` in queries that will be repeated, whether via prepared statements or otherwise.
+
+Also, a prepared [`INSERT`](insert.html), [`UPSERT`](upsert.html), or [`DELETE`](delete.html) statement acts inconsistently when the schema of the table being written to is changed before the prepared statement is executed:
+
+- If the number of columns has increased, the prepared statement returns an error but nonetheless writes the data.
+- If the number of columns remains the same but the types have changed, the prepared statement writes the data and does not return an error.
diff --git a/_includes/v20.2/known-limitations/schema-changes-within-transactions.md b/_includes/v20.2/known-limitations/schema-changes-within-transactions.md
new file mode 100644
index 00000000000..89ece41a023
--- /dev/null
+++ b/_includes/v20.2/known-limitations/schema-changes-within-transactions.md
@@ -0,0 +1,12 @@
+Within a single [transaction](transactions.html):
+
+- DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions. For more details, [see examples of unsupported statements](online-schema-changes.html#examples-of-statements-that-fail).
+- As of version v2.1, you can run schema changes inside the same transaction as a [`CREATE TABLE`](create-table.html) statement. For more information, [see this example](online-schema-changes.html#run-schema-changes-inside-a-transaction-with-create-table).
+- A `CREATE TABLE` statement containing [`FOREIGN KEY`](foreign-key.html) or [`INTERLEAVE`](interleave-in-parent.html) clauses cannot be followed by statements that reference the new table.
+- A table cannot be dropped and then recreated with the same name. This is not possible within a single transaction because `DROP TABLE` does not immediately drop the name of the table. As a workaround, split the [`DROP TABLE`](drop-table.html) and [`CREATE TABLE`](create-table.html) statements into separate transactions.
+- [Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed](#schema-change-ddl-statements-inside-a-multi-statement-transaction-can-fail-while-other-statements-succeed).
+- As of v19.1, some schema changes can be used in combination in a single `ALTER TABLE` statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically).
+
+{{site.data.alerts.callout_info}}
+If a schema change within a transaction fails, manual intervention may be needed to determine which schema change failed. After identifying the failed schema change(s), you can retry them.
+{{site.data.alerts.end}}
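+
+As a sketch of the first workaround above (the table and column names are placeholders), a transaction that would mix DDL and DML can be split so that the DDL runs in its own implicit transaction:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> BEGIN; INSERT INTO users (id) VALUES (1); COMMIT;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> ALTER TABLE users ADD COLUMN name STRING;
+~~~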
diff --git a/_includes/v20.2/known-limitations/system-range-replication.md b/_includes/v20.2/known-limitations/system-range-replication.md
new file mode 100644
index 00000000000..1d649d04834
--- /dev/null
+++ b/_includes/v20.2/known-limitations/system-range-replication.md
@@ -0,0 +1 @@
+Changes to the [`default` cluster-wide replication zone](configure-replication-zones.html#edit-the-default-replication-zone) are automatically applied to existing replication zones, including pre-configured zones for important system ranges that must remain available for the cluster as a whole to remain available. The zones for these system ranges have an initial replication factor of 5 to make them more resilient to node failure. However, if you increase the `default` zone's replication factor above 5, consider [increasing the replication factor for important system ranges](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) as well.
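+
+For example, after raising the `default` zone's replication factor to 7, an important system range such as `liveness` can be raised to match (a sketch; adjust the range names and replication factor to your cluster):
+
+{% include copy-clipboard.html %}
+~~~ sql
+> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 7;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> ALTER RANGE liveness CONFIGURE ZONE USING num_replicas = 7;
+~~~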
diff --git a/_includes/v20.2/metric-names.md b/_includes/v20.2/metric-names.md
new file mode 100644
index 00000000000..80098b223b9
--- /dev/null
+++ b/_includes/v20.2/metric-names.md
@@ -0,0 +1,246 @@
+Name | Help
+-----|-----
+`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas)
+`addsstable.copies` | Number of SSTable ingestions that required copying files during application
+`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders)
+`build.timestamp` | Build information
+`capacity.available` | Available storage capacity
+`capacity.reserved` | Capacity reserved for snapshots
+`capacity.used` | Used storage capacity
+`capacity` | Total storage capacity
+`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds
+`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds
+`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges
+`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine
+`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine
+`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions
+`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue
+`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted
+`distsender.batches.partial` | Number of partial batches processed
+`distsender.batches` | Number of batches processed
+`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered
+`distsender.rpc.sent.local` | Number of local RPCs sent
+`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors
+`distsender.rpc.sent` | Number of RPCs sent
+`exec.error` | Number of batch KV requests that failed to execute on this node
+`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node
+`exec.success` | Number of batch KV requests executed successfully on this node
+`gcbytesage` | Cumulative age of non-live data in seconds
+`gossip.bytes.received` | Number of received gossip bytes
+`gossip.bytes.sent` | Number of sent gossip bytes
+`gossip.connections.incoming` | Number of active incoming gossip connections
+`gossip.connections.outgoing` | Number of active outgoing gossip connections
+`gossip.connections.refused` | Number of refused incoming gossip connections
+`gossip.infos.received` | Number of received gossip Info objects
+`gossip.infos.sent` | Number of sent gossip Info objects
+`intentage` | Cumulative age of intents in seconds
+`intentbytes` | Number of bytes in intent KV pairs
+`intentcount` | Count of intent keys
+`keybytes` | Number of bytes taken up by keys
+`keycount` | Count of all keys
+`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated
+`leases.epoch` | Number of replica leaseholders using epoch-based leases
+`leases.error` | Number of failed lease requests
+`leases.expiration` | Number of replica leaseholders using expiration-based leases
+`leases.success` | Number of successful lease requests
+`leases.transfers.error` | Number of failed lease transfers
+`leases.transfers.success` | Number of successful lease transfers
+`livebytes` | Number of bytes of live data (keys plus values)
+`livecount` | Count of live keys
+`liveness.epochincrements` | Number of times this node has incremented its liveness epoch
+`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node
+`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds
+`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node
+`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live)
+`node-id` | node ID with labels for advertised RPC and HTTP addresses
+`queue.consistency.pending` | Number of pending replicas in the consistency checker queue
+`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue
+`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue
+`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue
+`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal
+`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal
+`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine
+`queue.gc.info.intentsconsidered` | Number of 'old' intents
+`queue.gc.info.intenttxns` | Number of associated distinct transactions
+`queue.gc.info.numkeysaffected` | Number of keys with GC'able data
+`queue.gc.info.pushtxn` | Number of attempted pushes
+`queue.gc.info.resolvesuccess` | Number of successful intent resolutions
+`queue.gc.info.resolvetotal` | Number of attempted intent resolutions
+`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns
+`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns
+`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns
+`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine
+`queue.gc.pending` | Number of pending replicas in the GC queue
+`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue
+`queue.gc.process.success` | Number of replicas successfully processed by the GC queue
+`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue
+`queue.raftlog.pending` | Number of pending replicas in the Raft log queue
+`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue
+`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue
+`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue
+`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue
+`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue
+`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue
+`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue
+`queue.replicagc.pending` | Number of pending replicas in the replica GC queue
+`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue
+`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue
+`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue
+`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue
+`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue
+`queue.replicate.pending` | Number of pending replicas in the replicate queue
+`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue
+`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue
+`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue
+`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options
+`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue
+`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage)
+`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition)
+`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue
+`queue.split.pending` | Number of pending replicas in the split queue
+`queue.split.process.failure` | Number of replicas which failed processing in the split queue
+`queue.split.process.success` | Number of replicas successfully processed by the split queue
+`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue
+`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue
+`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue
+`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue
+`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue
+`raft.commandsapplied` | Count of Raft commands applied
+`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue
+`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced
+`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands
+`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries
+`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick()
+`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working
+`raft.rcvd.app` | Number of MsgApp messages received by this store
+`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store
+`raft.rcvd.dropped` | Number of dropped incoming Raft messages
+`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store
+`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store
+`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store
+`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store
+`raft.rcvd.prop` | Number of MsgProp messages received by this store
+`raft.rcvd.snap` | Number of MsgSnap messages received by this store
+`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store
+`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store
+`raft.rcvd.vote` | Number of MsgVote messages received by this store
+`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store
+`raft.ticks` | Number of Raft ticks queued
+`raftlog.behind` | Number of Raft log entries followers on other stores are behind
+`raftlog.truncated` | Number of Raft log entries truncated
+`range.adds` | Number of range additions
+`range.raftleadertransfers` | Number of raft leader transfers
+`range.removes` | Number of range removals
+`range.snapshots.generated` | Number of generated snapshots
+`range.snapshots.normal-applied` | Number of applied snapshots
+`range.snapshots.preemptive-applied` | Number of applied pre-emptive snapshots
+`range.splits` | Number of range splits
+`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum
+`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target
+`ranges` | Number of ranges
+`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
+`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined
+`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined
+`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined
+`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue
+`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue
+`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue
+`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree
+`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue
+`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store
+`replicas.leaders` | Number of raft leaders
+`replicas.leaseholders` | Number of lease holders
+`replicas.quiescent` | Number of quiesced replicas
+`replicas.reserved` | Number of replicas reserved for snapshots
+`replicas` | Number of replicas
+`requests.backpressure.split` | Number of backpressured writes waiting on a Range split
+`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue
+`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender
+`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease
+`requests.slow.raft` | Number of requests that have been stuck for a long time in raft
+`rocksdb.block.cache.hits` | Count of block cache hits
+`rocksdb.block.cache.misses` | Count of block cache misses
+`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache
+`rocksdb.block.cache.usage` | Bytes used by the block cache
+`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked
+`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation
+`rocksdb.compactions` | Number of table compactions
+`rocksdb.flushes` | Number of table flushes
+`rocksdb.memtable.total-size` | Current size of memtable in bytes
+`rocksdb.num-sstables` | Number of rocksdb SSTables
+`rocksdb.read-amplification` | Number of disk reads per query
+`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks
+`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds
+`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error.
+`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error.
+`sql.bytesin` | Number of sql bytes received
+`sql.bytesout` | Number of sql bytes sent
+`sql.conns` | Number of active sql connections
+`sql.ddl.count` | Number of SQL DDL statements
+`sql.delete.count` | Number of SQL DELETE statements
+`sql.distsql.exec.latency` | Latency in nanoseconds of DistSQL statement execution
+`sql.distsql.flows.active` | Number of distributed SQL flows currently active
+`sql.distsql.flows.total` | Number of distributed SQL flows executed
+`sql.distsql.queries.active` | Number of distributed SQL queries currently active
+`sql.distsql.queries.total` | Number of distributed SQL queries executed
+`sql.distsql.select.count` | Number of DistSQL SELECT statements
+`sql.distsql.service.latency` | Latency in nanoseconds of DistSQL request execution
+`sql.exec.latency` | Latency in nanoseconds of SQL statement execution
+`sql.insert.count` | Number of SQL INSERT statements
+`sql.mem.current` | Current sql statement memory usage
+`sql.mem.distsql.current` | Current sql statement memory usage for distsql
+`sql.mem.distsql.max` | Memory usage per sql statement for distsql
+`sql.mem.max` | Memory usage per sql statement
+`sql.mem.session.current` | Current sql session memory usage
+`sql.mem.session.max` | Memory usage per sql session
+`sql.mem.txn.current` | Current sql transaction memory usage
+`sql.mem.txn.max` | Memory usage per sql transaction
+`sql.misc.count` | Number of other SQL statements
+`sql.query.count` | Number of SQL queries
+`sql.select.count` | Number of SQL SELECT statements
+`sql.service.latency` | Latency in nanoseconds of SQL request execution
+`sql.txn.abort.count` | Number of SQL transaction ABORT statements
+`sql.txn.begin.count` | Number of SQL transaction BEGIN statements
+`sql.txn.commit.count` | Number of SQL transaction COMMIT statements
+`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements
+`sql.update.count` | Number of SQL UPDATE statements
+`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo
+`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released
+`sys.cgocalls` | Total number of cgo calls
+`sys.cpu.sys.ns` | Total system cpu time in nanoseconds
+`sys.cpu.sys.percent` | Current system cpu percentage
+`sys.cpu.user.ns` | Total user cpu time in nanoseconds
+`sys.cpu.user.percent` | Current user cpu percentage
+`sys.fd.open` | Process open file descriptors
+`sys.fd.softlimit` | Process open FD soft limit
+`sys.gc.count` | Total number of GC runs
+`sys.gc.pause.ns` | Total GC pause in nanoseconds
+`sys.gc.pause.percent` | Current GC pause percentage
+`sys.go.allocbytes` | Current bytes of memory allocated by go
+`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released
+`sys.goroutines` | Current number of goroutines
+`sys.rss` | Current process RSS
+`sys.uptime` | Process uptime in seconds
+`sysbytes` | Number of bytes in system KV pairs
+`syscount` | Count of system KV pairs
+`timeseries.write.bytes` | Total size in bytes of metric samples written to disk
+`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk
+`timeseries.write.samples` | Total number of metric samples written to disk
+`totalbytes` | Total number of bytes taken up by keys and values including non-live data
+`tscache.skl.read.pages` | Number of pages in the read timestamp cache
+`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache
+`tscache.skl.write.pages` | Number of pages in the write timestamp cache
+`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache
+`txn.abandons` | Number of abandoned KV transactions
+`txn.aborts` | Number of aborted KV transactions
+`txn.autoretries` | Number of automatic retries to avoid serializable restarts
+`txn.commits1PC` | Number of committed one-phase KV transactions
+`txn.commits` | Number of committed KV transactions (including 1PC)
+`txn.durations` | KV transaction durations in nanoseconds
+`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command
+`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer
+`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE
+`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first
+`txn.restarts` | Number of restarted KV transactions
+`valbytes` | Number of bytes taken up by values
+`valcount` | Count of all values
diff --git a/_includes/v20.2/misc/available-capacity-metric.md b/_includes/v20.2/misc/available-capacity-metric.md
new file mode 100644
index 00000000000..61dbcb9cbf2
--- /dev/null
+++ b/_includes/v20.2/misc/available-capacity-metric.md
@@ -0,0 +1 @@
+If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is [not recommended in production](recommended-production-settings.html#topology)), you must explicitly [set the store size](cockroach-start.html#store) per node in order to display the correct capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity.
\ No newline at end of file
diff --git a/_includes/v20.2/misc/aws-locations.md b/_includes/v20.2/misc/aws-locations.md
new file mode 100644
index 00000000000..8b073c1f230
--- /dev/null
+++ b/_includes/v20.2/misc/aws-locations.md
@@ -0,0 +1,18 @@
+| Location | SQL Statement |
+| ------ | ------ |
+| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`|
+| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -82.907123)` |
+| US West (N. California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` |
+| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` |
+| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` |
+| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` |
+| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` |
+| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` |
+| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` |
+| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` |
+| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` |
+| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` |
+| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` |
+| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` |
+| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` |
+| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` |
diff --git a/_includes/v20.2/misc/azure-locations.md b/_includes/v20.2/misc/azure-locations.md
new file mode 100644
index 00000000000..7119ff8b7cb
--- /dev/null
+++ b/_includes/v20.2/misc/azure-locations.md
@@ -0,0 +1,30 @@
+| Location | SQL Statement |
+| -------- | ------------- |
+| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` |
+| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` |
+| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` |
+| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` |
+| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` |
+| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` |
+| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` |
+| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` |
+| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` |
+| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` |
+| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` |
+| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` |
+| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` |
+| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` |
+| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` |
+| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` |
+| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` |
+| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` |
+| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` |
+| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` |
+| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` |
+| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` |
+| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` |
+| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` |
+| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` |
+| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` |
+| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` |
+| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` |
diff --git a/_includes/v20.2/misc/basic-terms.md b/_includes/v20.2/misc/basic-terms.md
new file mode 100644
index 00000000000..be108648c8b
--- /dev/null
+++ b/_includes/v20.2/misc/basic-terms.md
@@ -0,0 +1,9 @@
+Term | Definition
+-----|------------
+**Cluster** | Your CockroachDB deployment, which acts as a single logical application.
+**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster.
+**Range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.
From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as that range reaches 512 MiB in size, it splits into two ranges. This process continues for these new ranges as the table and its indexes continue growing.
+**Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node.
+**Leaseholder** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.
Unlike writes, read requests access the leaseholder and send the results to the client without needing to coordinate with any of the other range replicas. This reduces the network round trips involved and is possible because the leaseholder is guaranteed to be up-to-date due to the fact that all write requests also go to the leaseholder.
+**Raft Leader** | For each range, one of the replicas is the "leader" for write requests. Via the [Raft consensus protocol](https://www.cockroachlabs.com/docs/{{ page.version.version }}/architecture/replication-layer.html#raft), this replica ensures that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder.
+**Raft Log** | For each range, a time-ordered log of writes to the range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication.
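One way to see these terms in practice is to inspect a table's ranges; assuming a table named `t` exists, `SHOW RANGES` reports each range's span, leaseholder, and replicas:

{% include copy-clipboard.html %}
~~~ sql
> SHOW RANGES FROM TABLE t;
~~~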
diff --git a/_includes/v20.2/misc/beta-warning.md b/_includes/v20.2/misc/beta-warning.md
new file mode 100644
index 00000000000..d326ecc3647
--- /dev/null
+++ b/_includes/v20.2/misc/beta-warning.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_danger}}
+**This is a beta feature.** It is currently undergoing continued testing. Please [file a GitHub issue](https://www.cockroachlabs.com/docs/stable/file-an-issue.html) with us if you identify a bug.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/misc/chrome-localhost.md b/_includes/v20.2/misc/chrome-localhost.md
new file mode 100644
index 00000000000..24f9bb159a3
--- /dev/null
+++ b/_includes/v20.2/misc/chrome-localhost.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+If you are using Google Chrome, and you are getting an error about not being able to reach `localhost` because its certificate has been revoked, go to chrome://flags/#allow-insecure-localhost, enable "Allow invalid certificates for resources loaded from localhost", and then restart the browser. Enabling this Chrome feature degrades security for all sites running on `localhost`, not just CockroachDB's Admin UI, so be sure to enable the feature only temporarily.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/misc/client-side-intervention-example.md b/_includes/v20.2/misc/client-side-intervention-example.md
new file mode 100644
index 00000000000..347a6160dcb
--- /dev/null
+++ b/_includes/v20.2/misc/client-side-intervention-example.md
@@ -0,0 +1,27 @@
+The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic](advanced-client-side-transaction-retries.html), so it can be used from any programming language or environment. In particular, your retry loop must:
+
+- Raise an error if the `max_retries` limit is reached
+- Retry on `40001` error codes
+- [`COMMIT`](commit-transaction.html) at the end of the `try` block
+- Implement [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) logic as shown below for best performance
+
+~~~ python
+n = 0
+while true:
+    n++
+    if n == max_retries:
+        throw Error("did not succeed within N retries")
+    try:
+        # add logic here to run all your statements
+        conn.exec('COMMIT')
+        break  # success; exit the retry loop
+    catch error:
+        if error.code != "40001":
+            throw error
+        else:
+            # This is a retry error, so we roll back the current transaction
+            # and sleep for a bit before retrying. The sleep time increases
+            # for each failed transaction. Adapted from
+            # https://colintemple.com/2017/03/java-exponential-backoff/
+            conn.exec('ROLLBACK')
+            sleep_ms = int(((2**n) * 100) + rand(100 - 1) + 1)
+            sleep(sleep_ms)  # Assumes your sleep() takes milliseconds
+~~~
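The backoff arithmetic from the pseudocode can be expressed as a small, self-contained Python helper (a sketch; the function name `backoff_ms` is illustrative, not part of any driver API):

~~~ python
import random

def backoff_ms(n: int) -> int:
    """Exponential backoff with jitter: 2^n * 100 ms, plus 1-99 ms of noise."""
    return (2 ** n) * 100 + random.randint(1, 99)
~~~

Successive retries then wait roughly 200 ms, 400 ms, 800 ms, and so on, with the jitter spreading out retries from clients that failed at the same moment.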
diff --git a/_includes/v20.2/misc/customizing-the-savepoint-name.md b/_includes/v20.2/misc/customizing-the-savepoint-name.md
new file mode 100644
index 00000000000..ed895f906f3
--- /dev/null
+++ b/_includes/v20.2/misc/customizing-the-savepoint-name.md
@@ -0,0 +1,5 @@
+Set the `force_savepoint_restart` [session variable](set-vars.html#supported-variables) to `true` to enable using a custom name for the [retry savepoint](advanced-client-side-transaction-retries.html#retry-savepoints).
+
+Once this variable is set, the [`SAVEPOINT`](savepoint.html) statement will accept any name for the retry savepoint, not just `cockroach_restart`. In addition, it causes every savepoint name to be equivalent to `cockroach_restart`, therefore disallowing the use of [nested transactions](transactions.html#nested-transactions).
+
+This feature exists to support applications that want to use the [advanced client-side transaction retry protocol](advanced-client-side-transaction-retries.html), but cannot customize the name of savepoints to be `cockroach_restart`. For example, this may be necessary because you are using an ORM that requires its own names for savepoints.
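A minimal sketch of the resulting flow, using an arbitrary savepoint name (`my_retry` is illustrative):

{% include copy-clipboard.html %}
~~~ sql
> SET force_savepoint_restart = true;
> BEGIN;
> SAVEPOINT my_retry;
-- application statements here
> RELEASE SAVEPOINT my_retry;
> COMMIT;
~~~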
diff --git a/_includes/v20.2/misc/debug-subcommands.md b/_includes/v20.2/misc/debug-subcommands.md
new file mode 100644
index 00000000000..379047a6441
--- /dev/null
+++ b/_includes/v20.2/misc/debug-subcommands.md
@@ -0,0 +1,3 @@
+While the `cockroach debug` command has a few subcommands, users are expected to use only the [`zip`](cockroach-debug-zip.html), [`encryption-active-key`](cockroach-debug-encryption-active-key.html), [`merge-logs`](cockroach-debug-merge-logs.html), and [`ballast`](cockroach-debug-ballast.html) subcommands.
+
+The other `debug` subcommands are useful only to CockroachDB's developers and contributors.
diff --git a/_includes/v20.2/misc/delete-statistics.md b/_includes/v20.2/misc/delete-statistics.md
new file mode 100644
index 00000000000..a568055e583
--- /dev/null
+++ b/_includes/v20.2/misc/delete-statistics.md
@@ -0,0 +1,17 @@
+To delete statistics for all tables in all databases:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> DELETE FROM system.table_statistics WHERE true;
+~~~
+
+To delete a named set of statistics (e.g., one named "my_stats"), run a query like the following:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> DELETE FROM system.table_statistics WHERE name = 'my_stats';
+~~~
+
+After deleting statistics, restart the nodes in your cluster to clear the statistics caches.
+
+For more information about the `DELETE` statement, see [`DELETE`](delete.html).
diff --git a/_includes/v20.2/misc/diagnostics-callout.html b/_includes/v20.2/misc/diagnostics-callout.html
new file mode 100644
index 00000000000..a969a8cf152
--- /dev/null
+++ b/_includes/v20.2/misc/diagnostics-callout.html
@@ -0,0 +1 @@
+{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}}
diff --git a/_includes/v20.2/misc/drivers.md b/_includes/v20.2/misc/drivers.md
new file mode 100644
index 00000000000..871f6a830a4
--- /dev/null
+++ b/_includes/v20.2/misc/drivers.md
@@ -0,0 +1,18 @@
+{{site.data.alerts.callout_info}}
+Applications may encounter incompatibilities when using advanced or obscure features of a driver or ORM with **beta-level** support. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
+{{site.data.alerts.end}}
+
+| App Language | Drivers | ORMs | Support level |
+|--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+------|
+| Python | [psycopg2](build-a-python-app-with-cockroachdb.html) | [SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html) [Django](build-a-python-app-with-cockroachdb-django.html) [PonyORM](build-a-python-app-with-cockroachdb-pony.html) [peewee](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#cockroach-database) | Full |
+| Java | [JDBC](build-a-java-app-with-cockroachdb.html) | [Hibernate](build-a-java-app-with-cockroachdb-hibernate.html) [jOOQ](build-a-java-app-with-cockroachdb-jooq.html) | Full |
+| Go | [pq](build-a-go-app-with-cockroachdb.html) | [GORM](build-a-go-app-with-cockroachdb-gorm.html) | Full |
+| Ruby | [pg](build-a-ruby-app-with-cockroachdb.html) | [ActiveRecord](build-a-ruby-app-with-cockroachdb-activerecord.html) | Beta |
+| Node.js | [pg](build-a-nodejs-app-with-cockroachdb.html) | [Sequelize](build-a-nodejs-app-with-cockroachdb-sequelize.html) | Beta |
+| C | [libpq](http://www.postgresql.org/docs/9.5/static/libpq.html) | No ORMs tested | Beta |
+| C++ | [libpqxx](build-a-c++-app-with-cockroachdb.html) | No ORMs tested | Beta |
+| C# (.NET) | [Npgsql](build-a-csharp-app-with-cockroachdb.html) | No ORMs tested | Beta |
+| Clojure | [java.jdbc](build-a-clojure-app-with-cockroachdb.html) | No ORMs tested | Beta |
+| PHP | [php-pgsql](build-a-php-app-with-cockroachdb.html) | No ORMs tested | Beta |
+| Rust | postgres {% comment %} This link is in HTML instead of Markdown because HTML proofer dies bc of https://github.com/rust-lang/crates.io/issues/163 {% endcomment %} | No ORMs tested | Beta |
+| TypeScript | No drivers tested | [TypeORM](https://typeorm.io/#/) | Beta |
diff --git a/_includes/v20.2/misc/enterprise-features.md b/_includes/v20.2/misc/enterprise-features.md
new file mode 100644
index 00000000000..704a3d32e34
--- /dev/null
+++ b/_includes/v20.2/misc/enterprise-features.md
@@ -0,0 +1,12 @@
+Feature | Description
+--------+-------------------------
+[Geo-Partitioning](topology-geo-partitioned-replicas.html) | This feature gives you row-level control of how and where your data is stored to dramatically reduce read and write latencies and assist in meeting regulatory requirements in multi-region deployments.
+[Follower Reads](follower-reads.html) | This feature reduces read latency in multi-region deployments by using the closest replica at the expense of reading slightly historical data.
+[`BACKUP`](backup.html) | This feature creates full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp, stored on a service such as AWS S3, Google Cloud Storage, NFS, or HTTP storage.
Backups can be locality-aware such that each node writes files only to the backup destination that matches the node's [locality](cockroach-start.html#locality). This is useful for reducing cloud storage data transfer costs by keeping data within cloud regions and complying with data domiciling requirements.
+[`RESTORE`](restore.html) | This feature restores your cluster's schemas and data from an enterprise `BACKUP`.
+[Change Data Capture](change-data-capture.html) (CDC) | This feature provides efficient, distributed, row-level [change feeds into Apache Kafka](create-changefeed.html) for downstream processing such as reporting, caching, or full-text indexing.
+[Node Map](enable-node-map.html) | This feature visualizes the geographical configuration of a cluster by plotting node localities on a world map.
+[Locality-Aware Index Selection](cost-based-optimizer.html#preferring-the-nearest-index) | Given [multiple identical indexes](topology-duplicate-indexes.html) that have different locality constraints using [replication zones](configure-replication-zones.html), the cost-based optimizer will prefer the index that is closest to the gateway node that is planning the query. In multi-region deployments, this can lead to performance improvements due to improved data locality and reduced network traffic.
+[Encryption at Rest](encryption.html#encryption-at-rest-enterprise) | Supplementing CockroachDB's encryption in flight capabilities, this feature provides transparent encryption of a node's data on the local disk. It allows encryption of all files on disk using AES in counter mode, with all key sizes allowed.
+[GSSAPI with Kerberos Authentication](gssapi_authentication.html) | CockroachDB supports the Generic Security Services API (GSSAPI) with Kerberos authentication, which lets you use an external enterprise directory system that supports Kerberos, such as Active Directory.
+[`EXPORT`](export.html) | This feature uses the CockroachDB distributed execution engine to quickly get large sets of data out of CockroachDB in a CSV format that can be ingested by downstream systems.
diff --git a/_includes/v20.2/misc/experimental-warning.md b/_includes/v20.2/misc/experimental-warning.md
new file mode 100644
index 00000000000..d38a9755593
--- /dev/null
+++ b/_includes/v20.2/misc/experimental-warning.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_danger}}
+**This is an experimental feature**. The interface and output are subject to change.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/misc/explore-benefits-see-also.md b/_includes/v20.2/misc/explore-benefits-see-also.md
new file mode 100644
index 00000000000..72cbc961e3b
--- /dev/null
+++ b/_includes/v20.2/misc/explore-benefits-see-also.md
@@ -0,0 +1,8 @@
+- [Replication & Rebalancing](demo-replication-and-rebalancing.html)
+- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
+- [Low Latency Multi-Region Deployment](demo-low-latency-multi-region-deployment.html)
+- [Serializable Transactions](demo-serializable.html)
+- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
+- [Follow-the-Workload](demo-follow-the-workload.html)
+- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
+- [JSON Support](demo-json-support.html)
diff --git a/_includes/v20.2/misc/external-urls.md b/_includes/v20.2/misc/external-urls.md
new file mode 100644
index 00000000000..9242847e5b2
--- /dev/null
+++ b/_includes/v20.2/misc/external-urls.md
@@ -0,0 +1,48 @@
+~~~
+[scheme]://[host]/[path]?[parameters]
+~~~
+
+Location | Scheme | Host | Parameters |
+|-------------------------------------------------------------+-------------+--------------------------------------------------+----------------------------------------------------------------------------
+Amazon | `s3` | Bucket name | `AUTH` [1](#considerations) (optional; can be `implicit` or `specified`), `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`
+Azure | `azure` | N/A (see [Example file URLs](#example-file-urls)) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME`
+Google Cloud [2](#considerations) | `gs` | Bucket name | `AUTH` (optional; can be `default`, `implicit`, or `specified`), `CREDENTIALS`
+HTTP [3](#considerations) | `http` | Remote host | N/A
+NFS/Local [4](#considerations) | `nodelocal` | `nodeID` or `self` [5](#considerations) (see [Example file URLs](#example-file-urls)) | N/A
+S3-compatible services [6](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` [7](#considerations) (optional), `AWS_ENDPOINT`
+
+{{site.data.alerts.callout_info}}
+The location parameters often contain special characters that need to be URI-encoded. Use JavaScript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
+{{site.data.alerts.end}}
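The same percent-encoding is available in most languages' standard libraries; for example, in Python (the credential values below are made up for illustration):

~~~ python
from urllib.parse import quote

# Hypothetical secret key containing '/', '+', and '=' -- all of which
# must be percent-encoded before being embedded in a storage URL.
secret = "q7/ab+cd="
url = ("s3://acme-co/employees.sql"
       "?AWS_ACCESS_KEY_ID=AKIA123"
       "&AWS_SECRET_ACCESS_KEY=" + quote(secret, safe=""))
~~~

Passing `safe=""` ensures that `/`, which `quote` would otherwise leave untouched, is also encoded.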
+
+{{site.data.alerts.callout_info}}
+If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables when starting CockroachDB.
+
+ If you cannot run a full proxy, you can disable external HTTP(S) access (as well as custom HTTP(S) endpoints) when performing bulk operations (e.g., `BACKUP`, `RESTORE`, etc.) by using the [`--external-io-disable-http` flag](cockroach-start.html#security). You can also disable the use of implicit credentials when accessing external cloud storage services for various bulk operations by using the [`--external-io-disable-implicit-credentials` flag](cockroach-start.html#security).
+{{site.data.alerts.end}}
+
+
+
+- 1 If the `AUTH` parameter is not provided, AWS connections default to `specified` and the access keys must be provided in the URI parameters. If the `AUTH` parameter is `implicit`, the access keys can be omitted and [the credentials will be loaded from the environment](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/).
+
+- 2 If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) will be used if it is non-empty, otherwise the `implicit` behavior is used. If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is `specified`, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the `CREDENTIALS` parameter. The JSON key object should be base64-encoded (using the standard encoding in [RFC 4648](https://tools.ietf.org/html/rfc4648)).
+
+- 3 You can create your own HTTP server with [Caddy or nginx](create-a-file-server.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs.
+
+- 4 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](cockroach-start.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled.
+
+- 5 Using a `nodeID` is required and the data files will be in the `extern` directory of the specified node. In most cases (including single-node clusters), using `nodelocal://1/` is sufficient. Use `self` if you do not want to specify a `nodeID`, and the individual data files will be in the `extern` directories of arbitrary nodes; however, to work correctly, each node must have the [`--external-io-dir` flag](cockroach-start.html#general) point to the same NFS mount or other network-backed, shared storage.
+
+- 6 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service.
+
+- 7 The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it.
+
+#### Example file URLs
+
+Location | Example
+-------------+----------------------------------------------------------------------------------
+Amazon S3 | `s3://acme-co/employees.sql?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456`
+Azure | `azure://employees.sql?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co`
+Google Cloud | `gs://acme-co/employees.sql`
+HTTP | `http://localhost:8080/employees.sql`
+NFS/Local | `nodelocal://1/path/employees`, `nodelocal://self/nfsmount/backups/employees` [5](#considerations)
diff --git a/_includes/v20.2/misc/force-index-selection.md b/_includes/v20.2/misc/force-index-selection.md
new file mode 100644
index 00000000000..cc9798bdd7d
--- /dev/null
+++ b/_includes/v20.2/misc/force-index-selection.md
@@ -0,0 +1,61 @@
+By using the explicit index annotation, you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) when reading from a named table.
+
+{{site.data.alerts.callout_info}}
+Index selection can impact [performance](performance-best-practices-overview.html), but does not change the result of a query.
+{{site.data.alerts.end}}
+
+The syntax to force a scan of a specific index is:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM table@my_idx;
+~~~
+
+This is equivalent to the longer expression:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM table@{FORCE_INDEX=my_idx};
+~~~
+
+The syntax to force a **reverse scan** of a specific index is:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM table@{FORCE_INDEX=my_idx,DESC};
+~~~
+
+Forcing a reverse scan is sometimes useful during [performance tuning](performance-best-practices-overview.html). For reference, the full syntax for choosing an index and its scan direction is:
+
+{% include copy-clipboard.html %}
+~~~ sql
+SELECT * FROM table@{FORCE_INDEX=idx[,DIRECTION]}
+~~~
+
+where the optional `DIRECTION` is either `ASC` (ascending) or `DESC` (descending).
+
+When a direction is specified, that scan direction is forced; otherwise the [cost-based optimizer](cost-based-optimizer.html) is free to choose the direction it calculates will result in the best performance.
+
+You can verify that the optimizer is choosing your desired scan direction using [`EXPLAIN (OPT)`](explain.html#opt-option). For example, given the table
+
+{% include copy-clipboard.html %}
+~~~ sql
+> CREATE TABLE kv (k INT PRIMARY KEY, v INT);
+~~~
+
+you can check the scan direction with:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> EXPLAIN (opt) SELECT * FROM kv@{FORCE_INDEX=primary,DESC};
+~~~
+
+~~~
+ text
++-------------------------------------+
+ scan kv,rev
+ └── flags: force-index=primary,rev
+(2 rows)
+~~~
+
+To see all indexes available on a table, use [`SHOW INDEXES`](show-index.html).
diff --git a/_includes/v20.2/misc/gce-locations.md b/_includes/v20.2/misc/gce-locations.md
new file mode 100644
index 00000000000..22122aae78d
--- /dev/null
+++ b/_includes/v20.2/misc/gce-locations.md
@@ -0,0 +1,18 @@
+| Location | SQL Statement |
+| ------ | ------ |
+| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` |
+| us-east4 (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` |
+| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` |
+| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` |
+| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` |
+| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` |
+| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` |
+| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` |
+| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` |
+| europe-west6 (Zürich) | `INSERT into system.locations VALUES ('region', 'europe-west6', 47.3769, 8.5417)` |
+| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` |
+| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` |
+| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` |
+| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` |
+| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` |
+| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` |
diff --git a/_includes/v20.2/misc/haproxy.md b/_includes/v20.2/misc/haproxy.md
new file mode 100644
index 00000000000..375af8e937d
--- /dev/null
+++ b/_includes/v20.2/misc/haproxy.md
@@ -0,0 +1,39 @@
+By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly:
+
+ ~~~
+ global
+ maxconn 4096
+
+ defaults
+ mode tcp
+ # Timeout values should be configured for your specific use.
+ # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
+ timeout connect 10s
+ timeout client 1m
+ timeout server 1m
+ # TCP keep-alive on client side. Server already enables them.
+ option clitcpka
+
+ listen psql
+ bind :26257
+ mode tcp
+ balance roundrobin
+ option httpchk GET /health?ready=1
+ server cockroach1 :26257 check port 8080
+ server cockroach2 :26257 check port 8080
+ server cockroach3 :26257 check port 8080
+ ~~~
+
+ The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster:
+
+ Field | Description
+ ------|------------
+ `timeout connect` `timeout client` `timeout server` | Timeout values that should be suitable for most deployments.
+ `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.<br><br>This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node.
+ `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms.
+ `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests.
+ `server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the [`--advertise-addr` flag](cockroach-start.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy.
+
+ {{site.data.alerts.callout_info}}
+ For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html).
+ {{site.data.alerts.end}}
diff --git a/_includes/v20.2/misc/install-next-steps.html b/_includes/v20.2/misc/install-next-steps.html
new file mode 100644
index 00000000000..2111bdbed9c
--- /dev/null
+++ b/_includes/v20.2/misc/install-next-steps.html
@@ -0,0 +1,16 @@
+
+
If you're just getting started with CockroachDB:
+
diff --git a/_includes/v20.2/misc/linux-binary-prereqs.md b/_includes/v20.2/misc/linux-binary-prereqs.md
new file mode 100644
index 00000000000..541183fe71b
--- /dev/null
+++ b/_includes/v20.2/misc/linux-binary-prereqs.md
@@ -0,0 +1 @@
+The CockroachDB binary for Linux requires glibc, libncurses, and tzdata, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.
diff --git a/_includes/v20.2/misc/logging-flags.md b/_includes/v20.2/misc/logging-flags.md
new file mode 100644
index 00000000000..02a800a54bb
--- /dev/null
+++ b/_includes/v20.2/misc/logging-flags.md
@@ -0,0 +1,9 @@
+Flag | Description
+-----|------------
+`--log-dir` | Enable logging to files and write logs to the specified directory.<br><br>Setting `--log-dir` to a blank directory (`--log-dir=""`) disables logging to files.
+`--log-dir-max-size` | After the log directory reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-dir-max-size=1GiB`.<br><br>**Default**: 100MiB
+`--log-file-max-size` | After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.<br><br>**Default**: 10MiB
+`--log-file-verbosity` | Only write messages to log files if they are at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.<br><br>**Default**: `INFO`
+`--logtostderr` | Enable logging to `stderr` for messages at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--logtostderr=ERROR`.<br><br>If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.<br><br>Setting `--logtostderr=NONE` disables logging to `stderr`.
+`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.<br><br>When set to `false`, messages logged to `stderr` are colorized based on [severity level](debug-and-error-logs.html#severity-levels).<br><br>**Default:** `false`
+`--sql-audit-dir` | New in v2.0: If non-empty, create a SQL audit log in this directory. By default, SQL audit logs are written in the same directory as the other logs generated by CockroachDB. For more information, see [SQL Audit Logging](sql-audit-logging.html).
diff --git a/_includes/v20.2/misc/mitigate-contention-note.md b/_includes/v20.2/misc/mitigate-contention-note.md
new file mode 100644
index 00000000000..ffe3cff554a
--- /dev/null
+++ b/_includes/v20.2/misc/mitigate-contention-note.md
@@ -0,0 +1,5 @@
+{{site.data.alerts.callout_info}}
+It's possible to mitigate read-write contention and reduce transaction retries using the following techniques:
+1. By performing reads using [`AS OF SYSTEM TIME`](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).
+2. By using [`SELECT FOR UPDATE`](select-for-update.html) to order transactions by controlling concurrent access to one or more rows of a table. This reduces retries in scenarios where a transaction performs a read and then updates the same row it just read.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/misc/movr-schema.md b/_includes/v20.2/misc/movr-schema.md
new file mode 100644
index 00000000000..0ce2a8f83e7
--- /dev/null
+++ b/_includes/v20.2/misc/movr-schema.md
@@ -0,0 +1,12 @@
+The six tables in the `movr` database store user, vehicle, and ride data for MovR:
+
+Table | Description
+--------|----------------------------
+`users` | People registered for the service.
+`vehicles` | The pool of vehicles available for the service.
+`rides` | When and where users have rented a vehicle.
+`promo_codes` | Promotional codes for users.
+`user_promo_codes` | Promotional codes in use by users.
+`vehicle_location_histories` | Vehicle location history.
+
+
diff --git a/_includes/v20.2/misc/movr-workflow.md b/_includes/v20.2/misc/movr-workflow.md
new file mode 100644
index 00000000000..3dc9a61b910
--- /dev/null
+++ b/_includes/v20.2/misc/movr-workflow.md
@@ -0,0 +1,49 @@
+The workflow for MovR is as follows (with approximations of the corresponding SQL for each step):
+
+1. A user loads the app and sees the 25 closest vehicles:
+
+ ~~~ sql
+ > SELECT id, city, status, ... FROM vehicles WHERE city =
+ ~~~
+
+2. The user signs up for the service:
+
+ ~~~ sql
+ > INSERT INTO users (id, name, address, ...) VALUES ...
+ ~~~
+
+3. In some cases, the user adds their own vehicle to share:
+
+ ~~~ sql
+ > INSERT INTO vehicles (id, city, type, ...) VALUES ...
+ ~~~
+
+4. More often, the user reserves a vehicle and starts a ride, applying a promo code, if available and valid:
+
+ ~~~ sql
+ > SELECT code FROM user_promo_codes WHERE user_id = ...
+ ~~~
+
+ ~~~ sql
+ > UPDATE vehicles SET status = 'in_use' WHERE ...
+ ~~~
+
+ ~~~ sql
+ > INSERT INTO rides (id, city, start_addr, ...) VALUES ...
+ ~~~
+
+5. During the ride, MovR tracks the location of the vehicle:
+
+ ~~~ sql
+ > INSERT INTO vehicle_location_histories (city, ride_id, timestamp, lat, long) VALUES ...
+ ~~~
+
+6. The user ends the ride and releases the vehicle:
+
+ ~~~ sql
+ > UPDATE vehicles SET status = 'available' WHERE ...
+ ~~~
+
+ ~~~ sql
+ > UPDATE rides SET end_address = ...
+ ~~~
diff --git a/_includes/v20.2/misc/multi-store-nodes.md b/_includes/v20.2/misc/multi-store-nodes.md
new file mode 100644
index 00000000000..01642597169
--- /dev/null
+++ b/_includes/v20.2/misc/multi-store-nodes.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_danger}}
+In the absence of [special replication constraints](configure-replication-zones.html), CockroachDB rebalances replicas to take advantage of available storage capacity. However, in a 3-node cluster with multiple stores per node, CockroachDB is **not** able to rebalance replicas from one store to another store on the same node because this would temporarily result in the node having multiple replicas of the same range, which is not allowed. This is due to the mechanics of rebalancing, where the cluster first creates a copy of the replica at the target destination before removing the source replica. To allow this type of cross-store rebalancing, the cluster must have 4 or more nodes; this allows the cluster to create a copy of the replica on a node that doesn't already have a replica of the range before removing the source replica and then migrating the new replica to the store with more capacity on the original node.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/misc/remove-user-callout.html b/_includes/v20.2/misc/remove-user-callout.html
new file mode 100644
index 00000000000..925f83d779d
--- /dev/null
+++ b/_includes/v20.2/misc/remove-user-callout.html
@@ -0,0 +1 @@
+Removing a user does not remove that user's privileges. Therefore, to prevent a future user with an identical username from inheriting an old user's privileges, it's important to revoke a user's privileges before or after removing the user.
diff --git a/_includes/v20.2/misc/schema-change-stmt-note.md b/_includes/v20.2/misc/schema-change-stmt-note.md
new file mode 100644
index 00000000000..b522b658652
--- /dev/null
+++ b/_includes/v20.2/misc/schema-change-stmt-note.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+This statement performs a schema change. For more information about how online schema changes work in CockroachDB, see [Online Schema Changes](online-schema-changes.html).
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/misc/schema-change-view-job.md b/_includes/v20.2/misc/schema-change-view-job.md
new file mode 100644
index 00000000000..8861174d621
--- /dev/null
+++ b/_includes/v20.2/misc/schema-change-view-job.md
@@ -0,0 +1 @@
+This schema change statement is registered as a job. You can view long-running jobs with [`SHOW JOBS`](show-jobs.html).
diff --git a/_includes/v20.2/misc/session-vars.html b/_includes/v20.2/misc/session-vars.html
new file mode 100644
index 00000000000..b6d057fe2b0
--- /dev/null
+++ b/_includes/v20.2/misc/session-vars.html
@@ -0,0 +1,503 @@
+
default_transaction_read_only

The default transaction access mode for the current session. If set to on, only read operations are allowed in transactions in the current session; if set to off, both read and write operations are allowed. See SET TRANSACTION
+ for more details.
+
+ off
+
+
Yes
+
Yes
+
+
+
+
+ distsql
+
+
The query distribution mode for the session. By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node.
+
+ auto
+
+
Yes
+
Yes
+
+
+
+
+
+ enable_implicit_select_for_update
+
+
Indicates whether UPDATE statements acquire locks using the FOR UPDATE locking mode during their initial row scan, which improves performance for contended workloads. For more information about how FOR UPDATE locking works, see the documentation for SELECT FOR UPDATE.
+
+ on
+
+
Yes
+
Yes
+
+
+
+
+ enable_zigzag_join
+
+
Indicates whether the cost-based optimizer will plan certain queries using a zig-zag merge join algorithm, which searches for the desired intersection by jumping back and forth between the indexes based on the fact that after constraining indexes, they share an ordering.
+
+ on
+
+
Yes
+
Yes
+
+
+
+
+ extra_float_digits
+
+
The number of digits displayed for floating-point values. Only values between -15 and 3 are supported.
+
+ 0
+
+
Yes
+
Yes
+
+
+
+
+ reorder_joins_limit
+
+
Maximum number of joins that the optimizer will attempt to reorder when searching for an optimal query execution plan. For more information, see Join reordering.
+
+ 4
+
+
Yes
+
Yes
+
+
+
+
force_savepoint_restart
+
When set to true, allows the SAVEPOINT statement to accept any name for a savepoint.
+
+ off
+
+
Yes
+
Yes
+
+
+
+
+ node_id
+
+
The ID of the node currently connected to.
+ This variable is particularly useful for verifying load balanced connections.
results_buffer_size

The default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. This can also be set for all connections using the 'sql.defaults.results_buffer_size' cluster setting. Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retriable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Setting to 0 disables any buffering.
+
+ 16384
+
+
Yes
+
Yes
+
+
+
+
+ require_explicit_primary_keys
+
+
If on, CockroachDB throws an error for all tables created without an explicit primary key defined.
+
+
+ off
+
+
Yes
+
Yes
+
+
+
+
+ search_path
+
+
A list of schemas that will be searched to resolve unqualified table or function names. For more details, see SQL name resolution.
+
+ public
+
+
Yes
+
Yes
+
+
+
+
+ server_version
+
+
The version of PostgreSQL that CockroachDB emulates.
+
Version-dependent
+
No
+
Yes
+
+
+
+
+ server_version_num
+
+
The version of PostgreSQL that CockroachDB emulates.
+
Version-dependent
+
Yes
+
Yes
+
+
+
+
+ session_id
+
+
The ID of the current session.
+
Session-dependent
+
No
+
Yes
+
+
+
+
+ session_user
+
+
The user connected for the current session.
+
User in connection string
+
No
+
Yes
+
+
+
+
+ sql_safe_updates
+
+
If false, potentially unsafe SQL statements are allowed, including DROP of a non-empty database and all dependent objects, DELETE without a WHERE clause, UPDATE without a WHERE clause, and ALTER TABLE .. DROP COLUMN. See Allow Potentially Unsafe SQL Statements for more details.
+
+ true for interactive sessions from the built-in SQL client, false for sessions from other clients
+
Yes
+
Yes
+
+
+
+
+ statement_timeout
+
+
The amount of time a statement can run before being stopped.
+ This value can be an int (e.g., 10) and will be interpreted as milliseconds. It can also be an interval or string argument, where the string can be parsed as a valid interval (e.g., '4s'). A value of 0 turns it off.
+
+ 0s
+
+
Yes
+
Yes
+
+
+
+
+ timezone
+
+
The default time zone for the current session.
+ This session variable was named "time zone" (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
+
+ UTC
+
+
Yes
+
Yes
+
+
+
+
+ tracing
+
+
The trace recording state.
+
+ off
+
+
+
+
Yes
+
+
+
+
+ transaction_isolation
+
+
All transactions execute with SERIALIZABLE isolation. See Transactions: Isolation levels.
+ This session variable was called transaction isolation level (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
+
+ SERIALIZABLE
+
+
No
+
Yes
+
+
+
+
+ transaction_priority
+
+
The priority of the current transaction. See Transactions: Isolation levels for more details.
+ This session variable was called transaction priority (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
+
+ NORMAL
+
+
Yes
+
Yes
+
+
+
+
+ transaction_read_only
+
+
The access mode of the current transaction. See Set Transaction for more details.
+
+ off
+
+
Yes
+
Yes
+
+
+
+
+ transaction_status
+
+
The state of the current transaction. See Transactions for more details.
+ This session variable was called transaction status (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
vectorize_row_count_threshold

The minimum number of rows required to use the vectorized engine to execute a query plan.
+
+
+ 1000
+
+
Yes
+
Yes
+
+
+
+
+ client_encoding
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ UTF8
+
+
No
+
Yes
+
+
+
+
+ client_min_messages
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ notice
+
+
No
+
Yes
+
+
+
+
+ datestyle
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ ISO
+
+
No
+
Yes
+
+
+
+
+ integer_datetimes
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ on
+
+
No
+
Yes
+
+
+
+
+ intervalstyle
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ postgres
+
+
No
+
Yes
+
+
+
+
+ max_identifier_length
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ 128
+
+
No
+
Yes
+
+
+
+
+ max_index_keys
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ 32
+
+
No
+
Yes
+
+
+
+
+ standard_conforming_strings
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ on
+
+
No
+
Yes
+
+
+
+
+ server_encoding
+
+
(Reserved; exposed only for ORM compatibility.)
+
+ UTF8
+
+
Yes
+
Yes
+
+
+
+
diff --git a/_includes/v20.2/misc/sorting-delete-output.md b/_includes/v20.2/misc/sorting-delete-output.md
new file mode 100644
index 00000000000..fa0d6e54be7
--- /dev/null
+++ b/_includes/v20.2/misc/sorting-delete-output.md
@@ -0,0 +1,9 @@
+To sort the output of a `DELETE` statement, use:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> WITH a AS (DELETE ... RETURNING ...)
+ SELECT ... FROM a ORDER BY ...
+~~~
+
+For an example, see [Sort and return deleted rows](delete.html#sort-and-return-deleted-rows).
diff --git a/_includes/v20.2/orchestration/kubernetes-expand-disk-size.md b/_includes/v20.2/orchestration/kubernetes-expand-disk-size.md
new file mode 100644
index 00000000000..5f5f77b4962
--- /dev/null
+++ b/_includes/v20.2/orchestration/kubernetes-expand-disk-size.md
@@ -0,0 +1,184 @@
+You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. Increasing disk size is often beneficial for CockroachDB performance. Read our [Kubernetes performance guide](kubernetes-performance.html#disk-size) for guidance on disks.
+
+1. Get the persistent volume claims for the volumes:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pvc
+ ~~~
+
+
+ ~~~
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
+ datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
+ datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
+ ~~~
+
+
+
+ ~~~
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
+ datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
+ datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m
+ ~~~
+
+
+2. In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl describe storageclass standard
+ ~~~
+
+ ~~~
+ Name: standard
+ IsDefaultClass: Yes
+ Annotations: storageclass.kubernetes.io/is-default-class=true
+ Provisioner: kubernetes.io/gce-pd
+ Parameters: type=pd-standard
+ AllowVolumeExpansion: False
+ MountOptions:
+ ReclaimPolicy: Delete
+ VolumeBindingMode: Immediate
+ Events:
+ ~~~
+
+ If necessary, edit the storage class:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
+ ~~~
+
+ ~~~
+ storageclass.storage.k8s.io/standard patched
+ ~~~
+
+3. Edit one of the persistent volume claims to request more space:
+
+ {{site.data.alerts.callout_info}}
+ The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size.
+ {{site.data.alerts.end}}
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
+ ~~~
+
+ ~~~
+ persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
+ ~~~
+
+ ~~~
+ persistentvolumeclaim/datadir-cockroachdb-0 patched
+ ~~~
+
+
+4. Check the capacity of the persistent volume claim:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pvc datadir-my-release-cockroachdb-0
+ ~~~
+
+ ~~~
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pvc datadir-cockroachdb-0
+ ~~~
+
+ ~~~
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m
+ ~~~
+
+
+ If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity.
+
+ {{site.data.alerts.callout_success}}
+ Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`.
+ {{site.data.alerts.end}}
+
+5. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl describe pvc datadir-my-release-cockroachdb-0
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl describe pvc datadir-cockroachdb-0
+ ~~~
+
+
+ ~~~
+ Waiting for user to (re-)start a pod to finish file system resize of volume on node.
+ ~~~
+
+6. Delete the corresponding pod to restart it:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl delete pod my-release-cockroachdb-0
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl delete pod cockroachdb-0
+ ~~~
+
+
+ The `FileSystemResizePending` condition and message will be removed.
+
+7. View the updated persistent volume claim:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pvc datadir-my-release-cockroachdb-0
+ ~~~
+
+ ~~~
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pvc datadir-cockroachdb-0
+ ~~~
+
+ ~~~
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m
+ ~~~
+
+
+8. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount.
\ No newline at end of file
diff --git a/_includes/v20.2/orchestration/kubernetes-limitations.md b/_includes/v20.2/orchestration/kubernetes-limitations.md
new file mode 100644
index 00000000000..264c1e33acc
--- /dev/null
+++ b/_includes/v20.2/orchestration/kubernetes-limitations.md
@@ -0,0 +1,11 @@
+#### Kubernetes version
+
+Kubernetes 1.8 or higher is required in order to use our most up-to-date configuration files. Earlier Kubernetes releases do not support some of the options used in our configuration files. If you need to run on an older version of Kubernetes, we have kept around configuration files that work on older Kubernetes releases in the versioned subdirectories of [https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes) (e.g., [v1.7](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/v1.7)).
+
+#### Helm version
+
+Helm 3.0 or higher is required when using our instructions to [deploy via Helm](orchestrate-cockroachdb-with-kubernetes.html#step-2-start-cockroachdb).
+
+#### Storage
+
+At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider a [DaemonSet](kubernetes-performance.html#running-in-a-daemonset) deployment until StatefulSets support node-local storage.
\ No newline at end of file
diff --git a/_includes/v20.2/orchestration/kubernetes-prometheus-alertmanager.md b/_includes/v20.2/orchestration/kubernetes-prometheus-alertmanager.md
new file mode 100644
index 00000000000..98d73cf6059
--- /dev/null
+++ b/_includes/v20.2/orchestration/kubernetes-prometheus-alertmanager.md
@@ -0,0 +1,218 @@
+Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
+
+### Configure Prometheus
+
+Every node of a CockroachDB cluster exports granular timeseries metrics formatted for easy integration with [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying timeseries data. This section shows you how to orchestrate Prometheus as part of your Kubernetes cluster and pull these metrics into Prometheus for external monitoring.
+
+This guidance is based on [CoreOS's Prometheus Operator](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md), which allows a Prometheus instance to be managed using native Kubernetes concepts.
+
+{{site.data.alerts.callout_info}}
+If you're on Hosted GKE, before starting, make sure the email address associated with your Google Cloud account is part of the `cluster-admin` RBAC group, as shown in [Step 1. Start Kubernetes](#hosted-gke).
+{{site.data.alerts.end}}
+
+1. From your local workstation, edit the `cockroachdb` service to add the `prometheus: cockroachdb` label:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl label svc cockroachdb prometheus=cockroachdb
+ ~~~
+
+ ~~~
+ service/cockroachdb labeled
+ ~~~
+
+ This ensures that there is a Prometheus job and monitoring data only for the `cockroachdb` service, not for the `cockroach-public` service.
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl label svc my-release-cockroachdb prometheus=cockroachdb
+ ~~~
+
+ ~~~
+ service/my-release-cockroachdb labeled
+ ~~~
+
+ This ensures that there is a Prometheus job and monitoring data only for the `my-release-cockroachdb` service, not for the `my-release-cockroach-public` service.
+
+
+2. Install [CoreOS's Prometheus Operator](https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl apply \
+ -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml
+ ~~~
+
+ ~~~
+ clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
+ clusterrole.rbac.authorization.k8s.io/prometheus-operator created
+ serviceaccount/prometheus-operator created
+ deployment.apps/prometheus-operator created
+ ~~~
+3. Confirm that the `prometheus-operator` has started:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get deploy prometheus-operator
+ ~~~
+
+ ~~~
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ prometheus-operator 1/1 1 1 27s
+ ~~~
+
+4. Use our [`prometheus.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/prometheus.yaml) file to create the various objects necessary to run a Prometheus instance:
+
+ {{site.data.alerts.callout_success}}
+ This configuration defaults to using the Kubernetes CA for authentication.
+ {{site.data.alerts.end}}
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl apply \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml
+ ~~~
+
+ ~~~
+ serviceaccount/prometheus created
+ clusterrole.rbac.authorization.k8s.io/prometheus created
+ clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+ servicemonitor.monitoring.coreos.com/cockroachdb created
+ prometheus.monitoring.coreos.com/cockroachdb created
+ ~~~
+
+5. Access the Prometheus UI locally and verify that CockroachDB is feeding data into Prometheus:
+
+ 1. Port-forward from your local machine to the pod running Prometheus:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl port-forward prometheus-cockroachdb-0 9090
+ ~~~
+
+ 2. Go to http://localhost:9090 in your browser.
+
+ 3. To verify that each CockroachDB node is connected to Prometheus, go to **Status > Targets**. The screen should look like this:
+
+
+
+ 4. To verify that data is being collected, go to **Graph**, enter the `sys_uptime` variable in the field, click **Execute**, and then click the **Graph** tab. The screen should like this:
+
+
+
+ {{site.data.alerts.callout_success}}
+    Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, port-forward as described in [Access the Admin UI](#step-4-access-the-admin-ui) and then point your browser to http://localhost:8080/_status/vars.
+
+ For more details on using the Prometheus UI, see their [official documentation](https://prometheus.io/docs/introduction/getting_started/).
+ {{site.data.alerts.end}}
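Beyond checking the **Targets** page visually, you can query Prometheus through the same port-forward; this sketch counts targets reported as healthy (the `/api/v1/targets` endpoint is part of Prometheus's stable HTTP API):

```shell
# Count scrape targets that Prometheus currently reports as healthy.
# Assumes `kubectl port-forward prometheus-cockroachdb-0 9090` is still running.
curl -s http://localhost:9090/api/v1/targets | grep -c '"health":"up"'
```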
+
+### Configure Alertmanager
+
+Active monitoring helps you spot problems early, but it is also essential to send notifications when there are events that require investigation or intervention. This section shows you how to use [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) and CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml) to do this.
+
+1. Download our [`alertmanager-config.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alertmanager-config.yaml) configuration file:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+    $ curl -O \
+ https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager-config.yaml
+ ~~~
+
+2. Edit the `alertmanager-config.yaml` file to [specify the desired receivers for notifications](https://prometheus.io/docs/alerting/configuration/#receiver). Initially, the file contains a dummy webhook.
+
+3. Add this configuration to the Kubernetes cluster as a secret, renaming it to `alertmanager.yaml` and labelling it to make it easier to find:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create secret generic alertmanager-cockroachdb \
+ --from-file=alertmanager.yaml=alertmanager-config.yaml
+ ~~~
+
+ ~~~
+ secret/alertmanager-cockroachdb created
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl label secret alertmanager-cockroachdb app=cockroachdb
+ ~~~
+
+ ~~~
+ secret/alertmanager-cockroachdb labeled
+ ~~~
+
+ {{site.data.alerts.callout_danger}}
+ The name of the secret, `alertmanager-cockroachdb`, must match the name used in the `alertmanager.yaml` file. If they differ, the Alertmanager instance will start without configuration, and nothing will happen.
+ {{site.data.alerts.end}}
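To confirm the secret was created with the expected key, you can decode its contents (a sketch; note that the dot in the key name must be escaped in the JSONPath, and `base64 --decode` is GNU syntax, `base64 -D` on macOS):

```shell
# Print the Alertmanager configuration stored under the "alertmanager.yaml"
# key of the secret; it should match your edited alertmanager-config.yaml.
kubectl get secret alertmanager-cockroachdb \
  -o jsonpath='{.data.alertmanager\.yaml}' | base64 --decode
```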
+
+4. Use our [`alertmanager.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alertmanager.yaml) file to create the various objects necessary to run an Alertmanager instance, including a ClusterIP service so that Prometheus can forward alerts:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl apply \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager.yaml
+ ~~~
+
+ ~~~
+ alertmanager.monitoring.coreos.com/cockroachdb created
+ service/alertmanager-cockroachdb created
+ ~~~
+
+5. Verify that Alertmanager is running:
+
+ 1. Port-forward from your local machine to the pod running Alertmanager:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl port-forward alertmanager-cockroachdb-0 9093
+ ~~~
+
+ 2. Go to http://localhost:9093 in your browser. The screen should look like this:
+
+
+
+6. Ensure that the Alertmanagers are visible to Prometheus by opening http://localhost:9090/status. The screen should look like this:
+
+
+
+7. Add CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl apply \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alert-rules.yaml
+ ~~~
+
+ ~~~
+ prometheusrule.monitoring.coreos.com/prometheus-cockroachdb-rules created
+ ~~~
+
+8. Ensure that the rules are visible to Prometheus by opening http://localhost:9090/rules. The screen should look like this:
+
+
+
+9. Verify that the `TestAlertManager` example alert is firing by opening http://localhost:9090/alerts. The screen should look like this:
+
+
+
+10. To remove the example alert:
+
+ 1. Use the `kubectl edit` command to open the rules for editing:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl edit prometheusrules prometheus-cockroachdb-rules
+ ~~~
+
+ 2. Remove the `dummy.rules` block and save the file:
+
+ ~~~
+ - name: rules/dummy.rules
+ rules:
+ - alert: TestAlertManager
+ expr: vector(1)
+ ~~~
diff --git a/_includes/v20.2/orchestration/kubernetes-remove-nodes-insecure.md b/_includes/v20.2/orchestration/kubernetes-remove-nodes-insecure.md
new file mode 100644
index 00000000000..cf1eaeea910
--- /dev/null
+++ b/_includes/v20.2/orchestration/kubernetes-remove-nodes-insecure.md
@@ -0,0 +1,130 @@
+To safely remove a node from your cluster, you must first decommission the node and only then adjust the `spec.replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
+
+{{site.data.alerts.callout_danger}}
+If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html).
+{{site.data.alerts.end}}
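Before decommissioning, it can be useful to confirm the cluster has no under-replicated ranges; a sketch using a node's Prometheus endpoint (assumes a port-forward to a node's HTTP port, e.g., `kubectl port-forward cockroachdb-0 8080`, and the `ranges_underreplicated` metric exposed at `/_status/vars`):

```shell
# A value of 0 means every range currently has its full complement of replicas,
# so it is safe to begin decommissioning a node.
curl -s http://localhost:8080/_status/vars | grep '^ranges_underreplicated'
```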
+
+1. Launch a temporary interactive pod and use the `cockroach node status` command to get the internal IDs of nodes:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach:{{page.release_info.version}} \
+ --rm \
+ --restart=Never \
+ -- node status \
+ --insecure \
+ --host=cockroachdb-public
+ ~~~
+
+ ~~~
+ id | address | build | started_at | updated_at | is_available | is_live
+ +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
+ 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
+ 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
+ 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
+ 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
+ (4 rows)
+ ~~~
+
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach:{{page.release_info.version}} \
+ --rm \
+ --restart=Never \
+ -- node status \
+ --insecure \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+ ~~~
+ id | address | build | started_at | updated_at | is_available | is_live
+ +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
+ 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
+ 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
+ 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
+ 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
+ (4 rows)
+ ~~~
+
+
+2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](cockroach-node.html) command to decommission it:
+
+ {{site.data.alerts.callout_info}}
+ It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node.
+ {{site.data.alerts.end}}
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach:{{page.release_info.version}} \
+ --rm \
+ --restart=Never \
+ -- node decommission \
+ --insecure \
+ --host=cockroachdb-public
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach:{{page.release_info.version}} \
+ --rm \
+ --restart=Never \
+ -- node decommission \
+ --insecure \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+
+ You'll then see the decommissioning status print to `stderr` as it changes:
+
+ ~~~
+ id | is_live | replicas | is_decommissioning | is_draining
+ +---+---------+----------+--------------------+-------------+
+ 4 | true | 73 | true | false
+ (1 row)
+ ~~~
+
+ Once the node has been fully decommissioned and stopped, you'll see a confirmation:
+
+ ~~~
+ id | is_live | replicas | is_decommissioning | is_draining
+ +---+---------+----------+--------------------+-------------+
+ 4 | true | 0 | true | false
+ (1 row)
+
+ No more data reported on target nodes. Please verify cluster health before removing the nodes.
+ ~~~
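The "highest ordinal" selection in step 2 can be sketched as a small shell filter over saved `node status` output (the sample rows below are illustrative, reduced to the `id` and `address` columns):

```shell
# Given saved `cockroach node status` rows, pick the node ID whose address
# contains the highest StatefulSet ordinal (the number after "cockroachdb-").
status='1 cockroachdb-0.cockroachdb.default.svc.cluster.local:26257
2 cockroachdb-2.cockroachdb.default.svc.cluster.local:26257
3 cockroachdb-1.cockroachdb.default.svc.cluster.local:26257
4 cockroachdb-3.cockroachdb.default.svc.cluster.local:26257'
# Sort rows numerically by the ordinal after the hyphen, take the last
# (highest) row, and print its node ID (the first column).
target_id=$(printf '%s\n' "$status" | sort -t- -k2 -n | tail -n1 | awk '{print $1}')
echo "$target_id"
```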
+
+3. Once the node has been decommissioned, remove a pod from your StatefulSet:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl scale statefulset cockroachdb --replicas=3
+ ~~~
+
+ ~~~
+ statefulset "cockroachdb" scaled
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm upgrade \
+ my-release \
+ stable/cockroachdb \
+ --set statefulset.replicas=3 \
+ --reuse-values
+ ~~~
+
diff --git a/_includes/v20.2/orchestration/kubernetes-remove-nodes-secure.md b/_includes/v20.2/orchestration/kubernetes-remove-nodes-secure.md
new file mode 100644
index 00000000000..3d6d1103e9f
--- /dev/null
+++ b/_includes/v20.2/orchestration/kubernetes-remove-nodes-secure.md
@@ -0,0 +1,119 @@
+To safely remove a node from your cluster, you must first decommission the node and only then adjust the `spec.replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
+
+{{site.data.alerts.callout_danger}}
+If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html).
+{{site.data.alerts.end}}
+
+1. Get a shell into the `cockroachdb-client-secure` pod you created earlier and use the `cockroach node status` command to get the internal IDs of nodes:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach node status \
+ --certs-dir=/cockroach-certs \
+ --host=cockroachdb-public
+ ~~~
+
+ ~~~
+ id | address | build | started_at | updated_at | is_available | is_live
+ +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
+ 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
+ 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
+ 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
+ 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
+ (4 rows)
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach node status \
+ --certs-dir=/cockroach-certs \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+ ~~~
+ id | address | build | started_at | updated_at | is_available | is_live
+ +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
+ 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
+ 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
+ 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
+ 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
+ (4 rows)
+ ~~~
+
+
+ The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
+
+2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](cockroach-node.html) command to decommission it:
+
+ {{site.data.alerts.callout_info}}
+ It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node.
+ {{site.data.alerts.end}}
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach node decommission \
+ --certs-dir=/cockroach-certs \
+ --host=cockroachdb-public
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach node decommission \
+ --certs-dir=/cockroach-certs \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+
+ You'll then see the decommissioning status print to `stderr` as it changes:
+
+ ~~~
+ id | is_live | replicas | is_decommissioning | is_draining
+ +---+---------+----------+--------------------+-------------+
+ 4 | true | 73 | true | false
+ (1 row)
+ ~~~
+
+ Once the node has been fully decommissioned and stopped, you'll see a confirmation:
+
+ ~~~
+ id | is_live | replicas | is_decommissioning | is_draining
+ +---+---------+----------+--------------------+-------------+
+ 4 | true | 0 | true | false
+ (1 row)
+
+ No more data reported on target nodes. Please verify cluster health before removing the nodes.
+ ~~~
+
+3. Once the node has been decommissioned, remove a pod from your StatefulSet:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl scale statefulset cockroachdb --replicas=3
+ ~~~
+
+ ~~~
+ statefulset.apps/cockroachdb scaled
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm upgrade \
+ my-release \
+ stable/cockroachdb \
+ --set statefulset.replicas=3 \
+ --reuse-values
+ ~~~
+
diff --git a/_includes/v20.2/orchestration/kubernetes-scale-cluster.md b/_includes/v20.2/orchestration/kubernetes-scale-cluster.md
new file mode 100644
index 00000000000..8eb49b7d14f
--- /dev/null
+++ b/_includes/v20.2/orchestration/kubernetes-scale-cluster.md
@@ -0,0 +1,61 @@
+Your Kubernetes cluster includes 3 worker nodes, or instances, that can run pods. A CockroachDB node runs in each pod. As recommended in our [production best practices](recommended-production-settings.html#topology), you should ensure that no two pods are placed on the same worker node.
+
+
+To do this, add a new worker node and then edit your StatefulSet configuration to add another pod for the new CockroachDB node.
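To see how the current pods are spread across worker nodes before and after scaling, a sketch (the `app=cockroachdb` label matches our configuration files, but verify the label your deployment actually uses):

```shell
# The NODE column shows which worker each pod was scheduled onto; no two
# CockroachDB pods should share a worker node.
kubectl get pods -l app=cockroachdb -o wide
```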
+
+1. Add a worker node, bringing the total from 3 to 4:
+ - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
+ - On EKS, resize your [Worker Node Group](https://eksctl.io/usage/managing-nodegroups/#scaling).
+ - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
+ - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html).
+
+2. Add a pod for the new CockroachDB node:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl scale statefulset cockroachdb --replicas=4
+ ~~~
+
+ ~~~
+ statefulset.apps/cockroachdb scaled
+ ~~~
+
+ {{site.data.alerts.callout_success}}
+ If you aren't using the Kubernetes CA to sign certificates, you can now skip to step 6.
+ {{site.data.alerts.end}}
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm upgrade \
+ my-release \
+ stable/cockroachdb \
+ --set statefulset.replicas=4 \
+ --reuse-values
+ ~~~
+
+ ~~~
+ Release "my-release" has been upgraded. Happy Helming!
+ LAST DEPLOYED: Tue May 14 14:06:43 2019
+ NAMESPACE: default
+ STATUS: DEPLOYED
+
+ RESOURCES:
+ ==> v1beta1/PodDisruptionBudget
+ NAME AGE
+ my-release-cockroachdb-budget 51m
+
+ ==> v1/Pod(related)
+
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-0 1/1 Running 0 38m
+ my-release-cockroachdb-1 1/1 Running 0 39m
+ my-release-cockroachdb-2 1/1 Running 0 39m
+ my-release-cockroachdb-3 0/1 Pending 0 0s
+ my-release-cockroachdb-init-nwjkh 0/1 Completed 0 39m
+
+ ...
+ ~~~
+
diff --git a/_includes/v20.2/orchestration/kubernetes-simulate-failure.md b/_includes/v20.2/orchestration/kubernetes-simulate-failure.md
new file mode 100644
index 00000000000..d71bd76bfaa
--- /dev/null
+++ b/_includes/v20.2/orchestration/kubernetes-simulate-failure.md
@@ -0,0 +1,56 @@
+Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage.
+
+To see this in action:
+
+1. Kill one of the CockroachDB nodes:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl delete pod cockroachdb-2
+ ~~~
+
+ ~~~
+ pod "cockroachdb-2" deleted
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl delete pod my-release-cockroachdb-2
+ ~~~
+
+ ~~~
+ pod "my-release-cockroachdb-2" deleted
+ ~~~
+
+
+
+2. In the Admin UI, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy.
+
+3. Back in the terminal, verify that the pod was automatically restarted:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pod cockroachdb-2
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-2 1/1 Running 0 12s
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pod my-release-cockroachdb-2
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-2 1/1 Running 0 44s
+ ~~~
+
diff --git a/_includes/v20.2/orchestration/kubernetes-upgrade-cluster.md b/_includes/v20.2/orchestration/kubernetes-upgrade-cluster.md
new file mode 100644
index 00000000000..14c6e0e2010
--- /dev/null
+++ b/_includes/v20.2/orchestration/kubernetes-upgrade-cluster.md
@@ -0,0 +1,265 @@
+As new versions of CockroachDB are released, it's strongly recommended to upgrade to newer versions in order to pick up bug fixes, performance improvements, and new features. The [general CockroachDB upgrade documentation](upgrade-cockroach-version.html) provides best practices for how to prepare for and execute upgrades of CockroachDB clusters, but stopping and restarting processes works differently in Kubernetes.
+
+Kubernetes knows how to carry out a safe rolling upgrade process of the CockroachDB nodes. When you tell it to change the Docker image used in the CockroachDB StatefulSet, Kubernetes will go one-by-one, stopping a node, restarting it with the new image, and waiting for it to be ready to receive client requests before moving on to the next one. For more information, see [the Kubernetes documentation](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets).
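You can follow the rolling update from another terminal while it runs (a sketch; the StatefulSet name is assumed to be `cockroachdb`, or `my-release-cockroachdb` for Helm deployments):

```shell
# Blocks until the StatefulSet's rolling update completes, reporting progress
# as each pod is stopped and replaced with the new image.
kubectl rollout status statefulset/cockroachdb --watch
```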
+
+1. Decide how the upgrade will be finalized.
+
+ {{site.data.alerts.callout_info}}
+ This step is relevant only when upgrading from v20.1.x to v20.2. For upgrades within the v20.2.x series, skip this step.
+ {{site.data.alerts.end}}
+
+ By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain performance improvements and bug fixes introduced in v20.2. After finalization, however, it will no longer be possible to perform a downgrade to v20.1. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade.
+
+ We recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade:
+
+ {% if page.secure == true %}
+
+ 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html):
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+    $ kubectl exec -it cockroachdb-client-secure \
+    -- ./cockroach sql \
+ --certs-dir=/cockroach-certs \
+ --host=cockroachdb-public
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach sql \
+ --certs-dir=/cockroach-certs \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+
+
+ {% else %}
+
+ 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach \
+ --rm \
+ --restart=Never \
+ -- sql \
+ --insecure \
+ --host=cockroachdb-public
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach \
+ --rm \
+ --restart=Never \
+ -- sql \
+ --insecure \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+
+ {% endif %}
+
+ 2. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+    > SET CLUSTER SETTING cluster.preserve_downgrade_option = '20.1';
+ ~~~
+
+ 3. Exit the SQL shell and delete the temporary pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > \q
+ ~~~
+
+2. Kick off the upgrade process by changing the desired Docker image:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl patch statefulset cockroachdb \
+ --type='json' \
+ -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:{{page.release_info.version}}"}]'
+ ~~~
+
+ ~~~
+ statefulset.apps/cockroachdb patched
+ ~~~
+
+
+
+
+ {{site.data.alerts.callout_info}}
+    For Helm, you must delete the cluster initialization job that was created when the cluster was started before you can change the cluster version.
+ {{site.data.alerts.end}}
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl delete job my-release-cockroachdb-init
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm upgrade \
+ my-release \
+ stable/cockroachdb \
+ --set image.tag={{page.release_info.version}} \
+ --reuse-values
+ ~~~
+
+
+3. If you then check the status of your cluster's pods, you should see them being restarted:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 1/1 Running 0 2m
+ cockroachdb-1 1/1 Running 0 2m
+ cockroachdb-2 1/1 Running 0 2m
+ cockroachdb-3 0/1 Terminating 0 1m
+ ...
+ ~~~
+
+
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-0 1/1 Running 0 2m
+ my-release-cockroachdb-1 1/1 Running 0 3m
+ my-release-cockroachdb-2 1/1 Running 0 3m
+ my-release-cockroachdb-3 0/1 ContainerCreating 0 25s
+ my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s
+ ...
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster.
+ {{site.data.alerts.end}}
+
+
+4. This will continue until all of the pods have restarted and are running the new image. To check the image of each pod and determine whether they've all been upgraded, run:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods \
+    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
+ ~~~
+
+
+ ~~~
+ cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
+ cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
+ cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
+ cockroachdb-3 cockroachdb/cockroach:{{page.release_info.version}}
+ ...
+ ~~~
+
+
+
+ ~~~
+ my-release-cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
+ my-release-cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
+ my-release-cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
+ my-release-cockroachdb-3 cockroachdb/cockroach:{{page.release_info.version}}
+ ...
+ ~~~
+
+
+ You can also check the CockroachDB version of each node in the Admin UI:
+
+
+
+5. Finish the upgrade.
+
+ {{site.data.alerts.callout_info}}
+ This step is relevant only when upgrading from v20.1.x to v20.2. For upgrades within the v20.2.x series, skip this step.
+ {{site.data.alerts.end}}
+
+ If you disabled auto-finalization in step 1 above, monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.
+
+ Once you are satisfied with the new version, re-enable auto-finalization:
+
+ {% if page.secure == true %}
+
+ 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html):
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach sql \
+ --certs-dir=/cockroach-certs \
+ --host=cockroachdb-public
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach sql \
+ --certs-dir=/cockroach-certs \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+
+ {% else %}
+
+ 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach \
+ --rm \
+ --restart=Never \
+ -- sql \
+ --insecure \
+ --host=cockroachdb-public
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach \
+ --rm \
+ --restart=Never \
+ -- sql \
+ --insecure \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+
+ {% endif %}
+
+ 2. Re-enable auto-finalization:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
+ ~~~
+
+ 3. Exit the SQL shell and delete the temporary pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > \q
+ ~~~
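The callout above limits when finalization matters: only upgrades that cross a release series (v20.1.x to v20.2) need the manual finish step, while patch upgrades within v20.2.x do not. That rule can be captured in a short sketch (the helper name and version parsing are illustrative, not part of CockroachDB):

```python
def finalize_step_applies(old_version: str, new_version: str) -> bool:
    """True when the upgrade crosses a release series (e.g., v20.1.x -> v20.2.x)."""
    series = lambda v: tuple(v.lstrip("v").split(".")[:2])
    return series(old_version) != series(new_version)

finalize_step_applies("v20.1.8", "v20.2.0")   # cross-series: the finalize step applies
finalize_step_applies("v20.2.0", "v20.2.1")   # patch upgrade: skip the step
```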
diff --git a/_includes/v20.2/orchestration/local-start-kubernetes.md b/_includes/v20.2/orchestration/local-start-kubernetes.md
new file mode 100644
index 00000000000..081c2274c0f
--- /dev/null
+++ b/_includes/v20.2/orchestration/local-start-kubernetes.md
@@ -0,0 +1,24 @@
+## Before you begin
+
+Before getting started, it's helpful to review some Kubernetes-specific terminology:
+
+Feature | Description
+--------|------------
+[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
+[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
+[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
+[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart. When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
+[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.
+
+## Step 1. Start Kubernetes
+
+1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation.
+
+    {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the `maxUnavailable` field and `PodDisruptionBudget` resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}}
+
+2. Start a local Kubernetes cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ minikube start
+ ~~~
diff --git a/_includes/v20.2/orchestration/monitor-cluster.md b/_includes/v20.2/orchestration/monitor-cluster.md
new file mode 100644
index 00000000000..16914b30ca3
--- /dev/null
+++ b/_includes/v20.2/orchestration/monitor-cluster.md
@@ -0,0 +1,69 @@
+To access the cluster's [Admin UI](admin-ui-overview.html):
+
+{% if page.secure == true %}
+
+1. On secure clusters, [certain pages of the Admin UI](admin-ui-overview.html#admin-ui-access) can only be accessed by `admin` users.
+
+ Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach sql \
+ --certs-dir=/cockroach-certs \
+ --host=cockroachdb-public
+ ~~~
+
+1. Assign `roach` to the `admin` role (you only need to do this once):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > GRANT admin TO roach;
+ ~~~
+
+1. Exit the SQL shell and pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > \q
+ ~~~
+
+{% endif %}
+
+1. In a new terminal window, port-forward from your local machine to one of the pods:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl port-forward cockroachdb-0 8080
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl port-forward my-release-cockroachdb-0 8080
+ ~~~
+
+
+ ~~~
+ Forwarding from 127.0.0.1:8080 -> 8080
+ ~~~
+
+ {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the Admin UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}}
+
+{% if page.secure == true %}
+
+1. Go to https://localhost:8080 and log in with the username and password you created earlier.
+
+ {% include {{ page.version.version }}/misc/chrome-localhost.md %}
+
+{% else %}
+
+1. Go to http://localhost:8080.
+
+{% endif %}
+
+1. In the UI, verify that the cluster is running as expected:
+ - Click **View nodes list** on the right to ensure that all nodes successfully joined the cluster.
+ - Click the **Databases** tab on the left to verify that `bank` is listed.
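Once the port-forward is running, the only difference between the secure and insecure access URLs is the scheme, as the two branches above show. A trivial illustrative helper (the function name is ours, not from the docs):

```python
def admin_ui_url(secure: bool, port: int = 8080) -> str:
    """Secure clusters serve the Admin UI over HTTPS; insecure clusters over HTTP."""
    scheme = "https" if secure else "http"
    return f"{scheme}://localhost:{port}"

admin_ui_url(True)   # 'https://localhost:8080'
admin_ui_url(False)  # 'http://localhost:8080'
```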
diff --git a/_includes/v20.2/orchestration/start-cockroachdb-helm-insecure.md b/_includes/v20.2/orchestration/start-cockroachdb-helm-insecure.md
new file mode 100644
index 00000000000..35ca5d4ea70
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-cockroachdb-helm-insecure.md
@@ -0,0 +1,94 @@
+1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the official `stable` chart repository:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm repo add stable https://kubernetes-charts.storage.googleapis.com
+ ~~~
+
+ ~~~
+ "stable" has been added to your repositories
+ ~~~
+
+2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/helm/charts/blob/master/stable/cockroachdb/Chart.yaml):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm repo update
+ ~~~
+
+3. Modify our Helm chart's [`values.yaml`](https://github.com/helm/charts/blob/master/stable/cockroachdb/values.yaml) parameters for your deployment scenario.
+
+ Create a `my-values.yaml` file to override the defaults in `values.yaml`, substituting your own values in this example based on the guidelines below.
+
+ {% include copy-clipboard.html %}
+ ~~~
+ statefulset:
+ resources:
+ limits:
+ memory: "8Gi"
+ requests:
+ memory: "8Gi"
+ conf:
+ cache: "2Gi"
+ max-sql-memory: "2Gi"
+ ~~~
+
+ 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`.
+
+ {{site.data.alerts.callout_success}}
+ For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`.
+ {{site.data.alerts.end}}
+
+ 2. You may want to modify `storage.persistentVolume.size` and `storage.persistentVolume.storageClass` for your use case. This chart defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type).
+
+ {{site.data.alerts.callout_info}}
+ If necessary, you can [expand disk size](orchestrate-cockroachdb-with-kubernetes.html#expand-disk-size) after the cluster is live.
+ {{site.data.alerts.end}}
+
+4. Install the CockroachDB Helm chart.
+
+ Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`.
+
+ {{site.data.alerts.callout_info}}
+ This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
+ {{site.data.alerts.end}}
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm install my-release --values my-values.yaml stable/cockroachdb
+ ~~~
+
+ Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
+
+5. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-0 1/1 Running 0 8m
+ my-release-cockroachdb-1 1/1 Running 0 8m
+ my-release-cockroachdb-2 1/1 Running 0 8m
+ my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
+ ~~~
+
+6. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
+ pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
+ pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
+ ~~~
+
+{{site.data.alerts.callout_success}}
+The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
+{{site.data.alerts.end}}
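The sizing guidance in step 3 (give `conf.cache` and `conf.max-sql-memory` each 1/4 of the pod's memory request/limit) is easy to compute. A quick illustrative helper, not part of the chart:

```python
def sizing(memory_gib: int) -> dict:
    """Rule of thumb from the text: cache and max-sql-memory each get 1/4 of the memory allocation."""
    quarter = memory_gib // 4
    return {"cache": f"{quarter}Gi", "max-sql-memory": f"{quarter}Gi"}

sizing(8)  # {'cache': '2Gi', 'max-sql-memory': '2Gi'}, matching the example values.yaml
```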
diff --git a/_includes/v20.2/orchestration/start-cockroachdb-helm-secure.md b/_includes/v20.2/orchestration/start-cockroachdb-helm-secure.md
new file mode 100644
index 00000000000..dd10c4c5d19
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-cockroachdb-helm-secure.md
@@ -0,0 +1,185 @@
+1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the official `stable` chart repository:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm repo add stable https://kubernetes-charts.storage.googleapis.com
+ ~~~
+
+ ~~~
+ "stable" has been added to your repositories
+ ~~~
+
+2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/helm/charts/blob/master/stable/cockroachdb/Chart.yaml):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm repo update
+ ~~~
+
+3. Modify our Helm chart's [`values.yaml`](https://github.com/helm/charts/blob/master/stable/cockroachdb/values.yaml) parameters for your deployment scenario.
+
+ Create a `my-values.yaml` file to override the defaults in `values.yaml`, substituting your own values in this example based on the guidelines below.
+
+ {% include copy-clipboard.html %}
+ ~~~
+ statefulset:
+ resources:
+ limits:
+ memory: "8Gi"
+ requests:
+ memory: "8Gi"
+ conf:
+ cache: "2Gi"
+ max-sql-memory: "2Gi"
+ tls:
+ enabled: true
+ ~~~
+
+ 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`.
+
+ {{site.data.alerts.callout_success}}
+ For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`.
+ {{site.data.alerts.end}}
+
+ 2. You may want to modify `storage.persistentVolume.size` and `storage.persistentVolume.storageClass` for your use case. This chart defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type).
+
+ {{site.data.alerts.callout_info}}
+ If necessary, you can [expand disk size](orchestrate-cockroachdb-with-kubernetes.html#expand-disk-size) after the cluster is live.
+ {{site.data.alerts.end}}
+
+ 3. For a secure deployment, set `tls.enabled` to true.
+
+4. Install the CockroachDB Helm chart.
+
+ Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`.
+
+ {{site.data.alerts.callout_info}}
+ This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
+ {{site.data.alerts.end}}
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm install my-release --values my-values.yaml stable/cockroachdb
+ ~~~
+
+ Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
+
+5. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the CockroachDB node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificate, at which point the CockroachDB node is started in the pod.
+
+ 1. Get the names of the `Pending` CSRs:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get csr
+ ~~~
+
+ ~~~
+ NAME AGE REQUESTOR CONDITION
+ default.client.root 21s system:serviceaccount:default:my-release-cockroachdb Pending
+ default.node.my-release-cockroachdb-0 15s system:serviceaccount:default:my-release-cockroachdb Pending
+ default.node.my-release-cockroachdb-1 16s system:serviceaccount:default:my-release-cockroachdb Pending
+ default.node.my-release-cockroachdb-2 15s system:serviceaccount:default:my-release-cockroachdb Pending
+ ...
+ ~~~
+
+ If you do not see a `Pending` CSR, wait a minute and try again.
+
+ 2. Examine the CSR for the first pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl describe csr default.node.my-release-cockroachdb-0
+ ~~~
+
+ ~~~
+ Name: default.node.my-release-cockroachdb-0
+ Labels: <none>
+ Annotations: <none>
+ CreationTimestamp: Mon, 10 Dec 2018 05:36:35 -0500
+ Requesting User: system:serviceaccount:default:my-release-cockroachdb
+ Status: Pending
+ Subject:
+ Common Name: node
+ Serial Number:
+ Organization: Cockroach
+ Subject Alternative Names:
+ DNS Names: localhost
+ my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local
+ my-release-cockroachdb-0.my-release-cockroachdb
+ my-release-cockroachdb-public
+ my-release-cockroachdb-public.default.svc.cluster.local
+ IP Addresses: 127.0.0.1
+ Events:
+ ~~~
+
+ 3. If everything looks correct, approve the CSR for the first pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl certificate approve default.node.my-release-cockroachdb-0
+ ~~~
+
+ ~~~
+ certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-0 approved
+ ~~~
+
+ 4. Repeat steps 2 and 3 for the other 2 pods.
+
+6. Confirm that three pods are `Running` successfully:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-0 0/1 Running 0 6m
+ my-release-cockroachdb-1 0/1 Running 0 6m
+ my-release-cockroachdb-2 0/1 Running 0 6m
+ my-release-cockroachdb-init-hxzsc 0/1 Init:0/1 0 6m
+ ~~~
+
+7. Approve the CSR for the one-off pod from which cluster initialization happens:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl certificate approve default.client.root
+ ~~~
+
+ ~~~
+ certificatesigningrequest.certificates.k8s.io/default.client.root approved
+ ~~~
+
+8. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-0 1/1 Running 0 8m
+ my-release-cockroachdb-1 1/1 Running 0 8m
+ my-release-cockroachdb-2 1/1 Running 0 8m
+ my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
+ ~~~
+
+9. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
+ pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
+ pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
+ ~~~
+
+{{site.data.alerts.callout_success}}
+The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
+{{site.data.alerts.end}}
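The CSR approval loop above boils down to picking out the rows of `kubectl get csr` whose `CONDITION` is `Pending`. A sketch of that filtering over sample output (plain text parsing only, no cluster required):

```python
SAMPLE = """\
NAME                                    AGE  REQUESTOR                                              CONDITION
default.client.root                     21s  system:serviceaccount:default:my-release-cockroachdb  Pending
default.node.my-release-cockroachdb-0   15s  system:serviceaccount:default:my-release-cockroachdb  Approved,Issued
"""

def pending_csrs(table: str) -> list:
    """Return the NAME column of every row whose CONDITION is Pending."""
    rows = (line.split() for line in table.strip().splitlines()[1:])
    return [row[0] for row in rows if row[-1] == "Pending"]

pending_csrs(SAMPLE)  # ['default.client.root']
```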
diff --git a/_includes/v20.2/orchestration/start-cockroachdb-insecure.md b/_includes/v20.2/orchestration/start-cockroachdb-insecure.md
new file mode 100644
index 00000000000..cb3910c0fb0
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-cockroachdb-insecure.md
@@ -0,0 +1,121 @@
+1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it.
+
+ Download [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
+ ~~~
+
+ {{site.data.alerts.callout_danger}}
+ To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you must set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. Specify this amount by adjusting `resources.requests.memory` and `resources.limits.memory` in `cockroachdb-statefulset.yaml`. Their values should be identical.
+
+ We recommend setting `cache` and `max-sql-memory` each to 1/4 of your memory allocation. For example, if you are allocating 8Gi of memory to each CockroachDB node, substitute the following values in [this line](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml#L146):
+ {{site.data.alerts.end}}
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ --cache 2Gi --max-sql-memory 2Gi
+ ~~~
+
+ Use the file to create the StatefulSet and start the cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create -f cockroachdb-statefulset.yaml
+ ~~~
+
+ ~~~
+ service/cockroachdb-public created
+ service/cockroachdb created
+ poddisruptionbudget.policy/cockroachdb-budget created
+ statefulset.apps/cockroachdb created
+ ~~~
+
+ Alternatively, if you'd rather start with a configuration file that has been customized for performance:
+
+ 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml
+ ~~~
+
+ 2. Modify the file wherever there is a `TODO` comment.
+
+ 3. Use the file to create the StatefulSet and start the cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create -f cockroachdb-statefulset-insecure.yaml
+ ~~~
+
+2. Confirm that three pods are `Running` successfully. Note that they will not
+ be considered `Ready` until after the cluster has been initialized:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 0/1 Running 0 2m
+ cockroachdb-1 0/1 Running 0 2m
+ cockroachdb-2 0/1 Running 0 2m
+ ~~~
+
+3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get persistentvolumes
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
+ pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
+ pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
+ pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
+ ~~~
+
+4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
+ ~~~
+
+ ~~~
+ job.batch/cluster-init created
+ ~~~
+
+5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get job cluster-init
+ ~~~
+
+ ~~~
+ NAME COMPLETIONS DURATION AGE
+ cluster-init 1/1 7s 27s
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cluster-init-cqf8l 0/1 Completed 0 56s
+ cockroachdb-0 1/1 Running 0 7m51s
+ cockroachdb-1 1/1 Running 0 7m51s
+ cockroachdb-2 1/1 Running 0 7m51s
+ ~~~
+
+{{site.data.alerts.callout_success}}
+The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
+{{site.data.alerts.end}}
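Step 2 notes that pods run but are not considered `Ready` until the cluster is initialized, and the later `kubectl get pods` output shows this as the `READY` column flipping from `0/1` to `1/1`. An illustrative parser for that column (sample text only; real checks would query the API server):

```python
def all_ready(pods_output: str) -> bool:
    """True when every data row's READY column reads n/n (e.g., 1/1)."""
    for line in pods_output.strip().splitlines()[1:]:
        current, desired = line.split()[1].split("/")  # READY column, e.g. "0/1"
        if current != desired:
            return False
    return True

BEFORE = """\
NAME            READY   STATUS    RESTARTS   AGE
cockroachdb-0   0/1     Running   0          2m
"""
AFTER = """\
NAME            READY   STATUS    RESTARTS   AGE
cockroachdb-0   1/1     Running   0          7m51s
"""
all_ready(BEFORE)  # False: still waiting on initialization
all_ready(AFTER)   # True
```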
diff --git a/_includes/v20.2/orchestration/start-cockroachdb-local-helm-insecure.md b/_includes/v20.2/orchestration/start-cockroachdb-local-helm-insecure.md
new file mode 100644
index 00000000000..1aa46194329
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-cockroachdb-local-helm-insecure.md
@@ -0,0 +1,65 @@
+1. [Install the Helm client](https://helm.sh/docs/intro/install) and add the official `stable` chart repository:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm repo add stable https://kubernetes-charts.storage.googleapis.com
+ ~~~
+
+ ~~~
+ "stable" has been added to your repositories
+ ~~~
+
+2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/helm/charts/blob/master/stable/cockroachdb/Chart.yaml):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm repo update
+ ~~~
+
+3. Install the CockroachDB Helm chart.
+
+ Provide a "release" name to identify and track this particular deployment of the chart.
+
+ {{site.data.alerts.callout_info}}
+ This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
+ {{site.data.alerts.end}}
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm install my-release stable/cockroachdb
+ ~~~
+
+ Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
+
+4. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-0 1/1 Running 0 8m
+ my-release-cockroachdb-1 1/1 Running 0 8m
+ my-release-cockroachdb-2 1/1 Running 0 8m
+ my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
+ ~~~
+
+5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
+ pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
+ pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
+ ~~~
+
+{{site.data.alerts.callout_success}}
+The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
+{{site.data.alerts.end}}
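The release-name constraints in the callout (start and end with an alphanumeric character; otherwise only lowercase alphanumerics, `-`, or `.`) can be expressed as a pattern. This is a sketch of the rule as stated in the text, not Helm's or Kubernetes' authoritative validation:

```python
import re

RELEASE_NAME = re.compile(r"^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$")

def valid_release_name(name: str) -> bool:
    """Check a candidate release name against the stated naming rule."""
    return bool(RELEASE_NAME.match(name))

valid_release_name("my-release")   # True
valid_release_name("My_Release")   # False: uppercase and underscore
valid_release_name("-release")     # False: must start with an alphanumeric
```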
diff --git a/_includes/v20.2/orchestration/start-cockroachdb-local-helm-secure.md b/_includes/v20.2/orchestration/start-cockroachdb-local-helm-secure.md
new file mode 100644
index 00000000000..bc11de7d52f
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-cockroachdb-local-helm-secure.md
@@ -0,0 +1,162 @@
+1. [Install the Helm client](https://helm.sh/docs/intro/install) and add the official `stable` chart repository:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm repo add stable https://kubernetes-charts.storage.googleapis.com
+ ~~~
+
+ ~~~
+ "stable" has been added to your repositories
+ ~~~
+
+2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/helm/charts/blob/master/stable/cockroachdb/Chart.yaml):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm repo update
+ ~~~
+
+3. Modify our Helm chart's [`values.yaml`](https://github.com/helm/charts/blob/master/stable/cockroachdb/values.yaml) parameters for your deployment scenario.
+
+ Create a `my-values.yaml` file to override the defaults. For a secure deployment, set `tls.enabled` to true:
+
+ {% include copy-clipboard.html %}
+ ~~~
+ tls:
+ enabled: true
+ ~~~
+
+4. Install the CockroachDB Helm chart.
+
+ Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`.
+
+ {{site.data.alerts.callout_info}}
+ This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
+ {{site.data.alerts.end}}
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ helm install my-release --values my-values.yaml stable/cockroachdb
+ ~~~
+
+ Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
+
+5. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the CockroachDB node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificate, at which point the CockroachDB node is started in the pod.
+
+ 1. Get the names of the `Pending` CSRs:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get csr
+ ~~~
+
+ ~~~
+ NAME AGE REQUESTOR CONDITION
+ default.client.root 21s system:serviceaccount:default:my-release-cockroachdb Pending
+ default.node.my-release-cockroachdb-0 15s system:serviceaccount:default:my-release-cockroachdb Pending
+ default.node.my-release-cockroachdb-1 16s system:serviceaccount:default:my-release-cockroachdb Pending
+ default.node.my-release-cockroachdb-2 15s system:serviceaccount:default:my-release-cockroachdb Pending
+ ...
+ ~~~
+
+ If you do not see a `Pending` CSR, wait a minute and try again.
+
+ 2. Examine the CSR for the first pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl describe csr default.node.my-release-cockroachdb-0
+ ~~~
+
+ ~~~
+ Name: default.node.my-release-cockroachdb-0
+ Labels: <none>
+ Annotations: <none>
+ CreationTimestamp: Mon, 10 Dec 2018 05:36:35 -0500
+ Requesting User: system:serviceaccount:default:my-release-cockroachdb
+ Status: Pending
+ Subject:
+ Common Name: node
+ Serial Number:
+ Organization: Cockroach
+ Subject Alternative Names:
+ DNS Names: localhost
+ my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local
+ my-release-cockroachdb-0.my-release-cockroachdb
+ my-release-cockroachdb-public
+ my-release-cockroachdb-public.default.svc.cluster.local
+ IP Addresses: 127.0.0.1
+ Events:
+ ~~~
+
+ 3. If everything looks correct, approve the CSR for the first pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl certificate approve default.node.my-release-cockroachdb-0
+ ~~~
+
+ ~~~
+ certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-0 approved
+ ~~~
+
+ 4. Repeat steps 2 and 3 for the other 2 pods.
+
+6. Confirm that three pods are `Running` successfully:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-0 0/1 Running 0 6m
+ my-release-cockroachdb-1 0/1 Running 0 6m
+ my-release-cockroachdb-2 0/1 Running 0 6m
+ my-release-cockroachdb-init-hxzsc 0/1 Init:0/1 0 6m
+ ~~~
+
+7. Approve the CSR for the one-off pod from which cluster initialization happens:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl certificate approve default.client.root
+ ~~~
+
+ ~~~
+ certificatesigningrequest.certificates.k8s.io/default.client.root approved
+ ~~~
+
+9. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ my-release-cockroachdb-0 1/1 Running 0 8m
+ my-release-cockroachdb-1 1/1 Running 0 8m
+ my-release-cockroachdb-2 1/1 Running 0 8m
+ my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h
+ ~~~
+
+10. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
+ pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
+ pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
+ ~~~
+
+{{site.data.alerts.callout_success}}
+The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/orchestration/start-cockroachdb-local-insecure.md b/_includes/v20.2/orchestration/start-cockroachdb-local-insecure.md
new file mode 100644
index 00000000000..bebb6eb3062
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-cockroachdb-local-insecure.md
@@ -0,0 +1,83 @@
+1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
+ ~~~
+
+ ~~~
+ service/cockroachdb-public created
+ service/cockroachdb created
+ poddisruptionbudget.policy/cockroachdb-budget created
+ statefulset.apps/cockroachdb created
+ ~~~
+
+2. Confirm that three pods are `Running` successfully. Note that they will not
+ be considered `Ready` until after the cluster has been initialized:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 0/1 Running 0 2m
+ cockroachdb-1 0/1 Running 0 2m
+ cockroachdb-2 0/1 Running 0 2m
+ ~~~
+
+3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
+ pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
+ pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
+ pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
+ ~~~
+
+4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
+ ~~~
+
+ ~~~
+ job.batch/cluster-init created
+ ~~~
+
+5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get job cluster-init
+ ~~~
+
+ ~~~
+ NAME COMPLETIONS DURATION AGE
+ cluster-init 1/1 7s 27s
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cluster-init-cqf8l 0/1 Completed 0 56s
+ cockroachdb-0 1/1 Running 0 7m51s
+ cockroachdb-1 1/1 Running 0 7m51s
+ cockroachdb-2 1/1 Running 0 7m51s
+ ~~~
+
+{{site.data.alerts.callout_success}}
+The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/orchestration/start-cockroachdb-local-secure.md b/_includes/v20.2/orchestration/start-cockroachdb-local-secure.md
new file mode 100644
index 00000000000..0558bcdf3b2
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-cockroachdb-local-secure.md
@@ -0,0 +1,366 @@
+Download and modify our StatefulSet configuration, depending on how you want to sign your certificates.
+
+{{site.data.alerts.callout_danger}}
+Some environments, such as Amazon EKS, do not support certificates signed by Kubernetes' built-in CA. In this case, use the second configuration below.
+{{site.data.alerts.end}}
+
+- Using the Kubernetes CA: [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml).
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
+ ~~~
+
+- Using a non-Kubernetes CA: [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml)
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml
+ ~~~
+
+{{site.data.alerts.callout_success}}
+If you change the StatefulSet name from the default `cockroachdb`, be sure to start and end with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
+{{site.data.alerts.end}}
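If you do rename the StatefulSet, the naming rule above can be checked with a shell regex before deploying. The function below is our own approximation of the requirement, not an official validator:

~~~ shell
# Succeeds if the name starts and ends with a lowercase alphanumeric and
# otherwise contains only lowercase alphanumerics, '-', or '.'.
is_valid_csr_name() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$'
}

is_valid_csr_name "my-release-cockroachdb" && echo "valid"
is_valid_csr_name "My_Cluster" || echo "invalid"
~~~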
+
+#### Initialize the cluster
+
+Choose the authentication method that corresponds to the StatefulSet configuration you downloaded and modified above.
+
+- [Kubernetes CA](#kubernetes-ca)
+- [Non-Kubernetes CA](#non-kubernetes-ca)
+
+{{site.data.alerts.callout_success}}
+The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
+{{site.data.alerts.end}}
+
+##### Kubernetes CA
+
+1. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create -f cockroachdb-statefulset-secure.yaml
+ ~~~
+
+ ~~~
+ serviceaccount/cockroachdb created
+ role.rbac.authorization.k8s.io/cockroachdb created
+ clusterrole.rbac.authorization.k8s.io/cockroachdb created
+ rolebinding.rbac.authorization.k8s.io/cockroachdb created
+ clusterrolebinding.rbac.authorization.k8s.io/cockroachdb created
+ service/cockroachdb-public created
+ service/cockroachdb created
+ poddisruptionbudget.policy/cockroachdb-budget created
+ statefulset.apps/cockroachdb created
+ ~~~
+
+2. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
+
+ 1. Get the names of the `Pending` CSRs:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get csr
+ ~~~
+
+ ~~~
+ NAME AGE REQUESTOR CONDITION
+ default.node.cockroachdb-0 1m system:serviceaccount:default:cockroachdb Pending
+ default.node.cockroachdb-1 1m system:serviceaccount:default:cockroachdb Pending
+ default.node.cockroachdb-2 1m system:serviceaccount:default:cockroachdb Pending
+ ...
+ ~~~
+
+ If you do not see a `Pending` CSR, wait a minute and try again.
+
+ 2. Examine the CSR for the first pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl describe csr default.node.cockroachdb-0
+ ~~~
+
+ ~~~
+ Name: default.node.cockroachdb-0
+ Labels: <none>
+ Annotations: <none>
+ CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500
+ Requesting User: system:serviceaccount:default:cockroachdb
+ Status: Pending
+ Subject:
+ Common Name: node
+ Serial Number:
+ Organization: Cockroach
+ Subject Alternative Names:
+ DNS Names: localhost
+ cockroachdb-0.cockroachdb.default.svc.cluster.local
+ cockroachdb-0.cockroachdb
+ cockroachdb-public
+ cockroachdb-public.default.svc.cluster.local
+ IP Addresses: 127.0.0.1
+ 10.48.1.6
+ Events: <none>
+ ~~~
+
+ 3. If everything looks correct, approve the CSR for the first pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl certificate approve default.node.cockroachdb-0
+ ~~~
+
+ ~~~
+ certificatesigningrequest "default.node.cockroachdb-0" approved
+ ~~~
+
+ 4. Repeat steps 2 and 3 for the other 2 pods.
+
+3. Initialize the CockroachDB cluster:
+
+ 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 0/1 Running 0 2m
+ cockroachdb-1 0/1 Running 0 2m
+ cockroachdb-2 0/1 Running 0 2m
+ ~~~
+
+ 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m
+ pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m
+ pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m
+ ~~~
+
+ 3. Use our [`cluster-init-secure.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml
+ ~~~
+
+ ~~~
+ job.batch/cluster-init-secure created
+ ~~~
+
+ 4. Approve the CSR for the one-off pod from which cluster initialization happens:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl certificate approve default.client.root
+ ~~~
+
+ ~~~
+ certificatesigningrequest.certificates.k8s.io/default.client.root approved
+ ~~~
+
+ 5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get job cluster-init-secure
+ ~~~
+
+ ~~~
+ NAME COMPLETIONS DURATION AGE
+ cluster-init-secure 1/1 23s 35s
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cluster-init-secure-q8s7v 0/1 Completed 0 55s
+ cockroachdb-0 1/1 Running 0 3m
+ cockroachdb-1 1/1 Running 0 3m
+ cockroachdb-2 1/1 Running 0 3m
+ ~~~
+
+##### Non-Kubernetes CA
+
+{{site.data.alerts.callout_info}}
+The below steps use [`cockroach cert` commands](cockroach-cert.html) to quickly generate and sign the CockroachDB node and client certificates. Read our [Authentication](authentication.html#using-digital-certificates-with-cockroachdb) docs to learn about other methods of signing certificates.
+{{site.data.alerts.end}}
+
+1. Create two directories:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir certs
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir my-safe-directory
+ ~~~
+
+ Directory | Description
+ ----------|------------
+ `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory.
+ `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
+
+2. Create the CA certificate and key pair:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-ca \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+3. Create a client certificate and key pair for the root user:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-client \
+ root \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+4. Upload the client certificate and key to the Kubernetes cluster as a secret:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create secret \
+ generic cockroachdb.client.root \
+ --from-file=certs
+ ~~~
+
+ ~~~
+ secret/cockroachdb.client.root created
+ ~~~
+
+5. Create the certificate and key pair for your CockroachDB nodes:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-node \
+ localhost 127.0.0.1 \
+ cockroachdb-public \
+ cockroachdb-public.default \
+ cockroachdb-public.default.svc.cluster.local \
+ *.cockroachdb \
+ *.cockroachdb.default \
+ *.cockroachdb.default.svc.cluster.local \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+6. Upload the node certificate and key to the Kubernetes cluster as a secret:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create secret \
+ generic cockroachdb.node \
+ --from-file=certs
+ ~~~
+
+ ~~~
+ secret/cockroachdb.node created
+ ~~~
+
+7. Check that the secrets were created on the cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get secrets
+ ~~~
+
+ ~~~
+ NAME TYPE DATA AGE
+ cockroachdb.client.root Opaque 3 41m
+ cockroachdb.node Opaque 5 14s
+ default-token-6qjdb kubernetes.io/service-account-token 3 4m
+ ~~~
+
+8. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create -f cockroachdb-statefulset.yaml
+ ~~~
+
+ ~~~
+ serviceaccount/cockroachdb created
+ role.rbac.authorization.k8s.io/cockroachdb created
+ rolebinding.rbac.authorization.k8s.io/cockroachdb created
+ service/cockroachdb-public created
+ service/cockroachdb created
+ poddisruptionbudget.policy/cockroachdb-budget created
+ statefulset.apps/cockroachdb created
+ ~~~
+
+9. Initialize the CockroachDB cluster:
+
+ 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 0/1 Running 0 2m
+ cockroachdb-1 0/1 Running 0 2m
+ cockroachdb-2 0/1 Running 0 2m
+ ~~~
+
+ 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m
+ pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m
+ pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m
+ ~~~
+
+ 3. Run `cockroach init` on one of the pods to complete the node startup process and have them join together as a cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-0 \
+ -- /cockroach/cockroach init \
+ --certs-dir=/cockroach/cockroach-certs
+ ~~~
+
+ ~~~
+ Cluster successfully initialized
+ ~~~
+
+ 4. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 1/1 Running 0 3m
+ cockroachdb-1 1/1 Running 0 3m
+ cockroachdb-2 1/1 Running 0 3m
+ ~~~
\ No newline at end of file
diff --git a/_includes/v20.2/orchestration/start-cockroachdb-secure.md b/_includes/v20.2/orchestration/start-cockroachdb-secure.md
new file mode 100644
index 00000000000..8b46d80cc48
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-cockroachdb-secure.md
@@ -0,0 +1,401 @@
+Download and modify our StatefulSet configuration, depending on how you want to sign your certificates.
+
+{{site.data.alerts.callout_danger}}
+Some environments, such as Amazon EKS, do not support certificates signed by Kubernetes' built-in CA. In this case, use the second configuration below.
+{{site.data.alerts.end}}
+
+- Using the Kubernetes CA: [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml).
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
+ ~~~
+
+- Using a non-Kubernetes CA: [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml)
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml
+ ~~~
+
+#### Set up configuration file
+
+Modify the values in the StatefulSet configuration.
+
+1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. In the `containers` specification, set this amount in both `resources.requests.memory` and `resources.limits.memory`.
+
+ ~~~
+ resources:
+ requests:
+ memory: "8Gi"
+ limits:
+ memory: "8Gi"
+ ~~~
+
+ We recommend setting `cache` and `max-sql-memory` each to 1/4 of the memory allocation. These are defined as placeholder percentages in the StatefulSet command that creates the CockroachDB nodes:
+
+ ~~~
+ - "exec /cockroach/cockroach start --logtostderr --certs-dir /cockroach/cockroach-certs --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb --cache 25% --max-sql-memory 25%"
+ ~~~
+
+ {{site.data.alerts.callout_success}}
+ For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`.
+ {{site.data.alerts.end}}
+
+ ~~~
+ --cache 2Gi --max-sql-memory 2Gi
+ ~~~
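The 1/4 rule is simple arithmetic; the sketch below is our illustration, not part of the official manifest, and `MEMORY_GIB` stands in for whatever you set as `resources.limits.memory`:

~~~ shell
# Each flag gets 1/4 of the pod's memory limit.
MEMORY_GIB=8
QUARTER_GIB=$((MEMORY_GIB / 4))
FLAGS="--cache ${QUARTER_GIB}Gi --max-sql-memory ${QUARTER_GIB}Gi"
echo "$FLAGS"
~~~

For an 8Gi limit this prints `--cache 2Gi --max-sql-memory 2Gi`, matching the example above.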
+
+2. In the `volumeClaimTemplates` specification, you may want to modify `resources.requests.storage` for your use case. This configuration defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type).
+
+ ~~~
+ resources:
+ requests:
+ storage: "100Gi"
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ If necessary, you can [expand disk size](orchestrate-cockroachdb-with-kubernetes.html#expand-disk-size) after the cluster is live.
+ {{site.data.alerts.end}}
+
+{{site.data.alerts.callout_success}}
+If you change the StatefulSet name from the default `cockroachdb`, be sure to start and end with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
+{{site.data.alerts.end}}
+
+#### Initialize the cluster
+
+Choose the authentication method that corresponds to the StatefulSet configuration you downloaded and modified above.
+
+- [Kubernetes CA](#kubernetes-ca)
+- [Non-Kubernetes CA](#non-kubernetes-ca)
+
+{{site.data.alerts.callout_success}}
+The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
+{{site.data.alerts.end}}
+
+##### Kubernetes CA
+
+1. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create -f cockroachdb-statefulset-secure.yaml
+ ~~~
+
+ ~~~
+ serviceaccount/cockroachdb created
+ role.rbac.authorization.k8s.io/cockroachdb created
+ clusterrole.rbac.authorization.k8s.io/cockroachdb created
+ rolebinding.rbac.authorization.k8s.io/cockroachdb created
+ clusterrolebinding.rbac.authorization.k8s.io/cockroachdb created
+ service/cockroachdb-public created
+ service/cockroachdb created
+ poddisruptionbudget.policy/cockroachdb-budget created
+ statefulset.apps/cockroachdb created
+ ~~~
+
+2. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
+
+ 1. Get the names of the `Pending` CSRs:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get csr
+ ~~~
+
+ ~~~
+ NAME AGE REQUESTOR CONDITION
+ default.node.cockroachdb-0 1m system:serviceaccount:default:cockroachdb Pending
+ default.node.cockroachdb-1 1m system:serviceaccount:default:cockroachdb Pending
+ default.node.cockroachdb-2 1m system:serviceaccount:default:cockroachdb Pending
+ ...
+ ~~~
+
+ If you do not see a `Pending` CSR, wait a minute and try again.
+
+ 2. Examine the CSR for the first pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl describe csr default.node.cockroachdb-0
+ ~~~
+
+ ~~~
+ Name: default.node.cockroachdb-0
+ Labels: <none>
+ Annotations: <none>
+ CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500
+ Requesting User: system:serviceaccount:default:cockroachdb
+ Status: Pending
+ Subject:
+ Common Name: node
+ Serial Number:
+ Organization: Cockroach
+ Subject Alternative Names:
+ DNS Names: localhost
+ cockroachdb-0.cockroachdb.default.svc.cluster.local
+ cockroachdb-0.cockroachdb
+ cockroachdb-public
+ cockroachdb-public.default.svc.cluster.local
+ IP Addresses: 127.0.0.1
+ 10.48.1.6
+ Events: <none>
+ ~~~
+
+ 3. If everything looks correct, approve the CSR for the first pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl certificate approve default.node.cockroachdb-0
+ ~~~
+
+ ~~~
+ certificatesigningrequest "default.node.cockroachdb-0" approved
+ ~~~
+
+ 4. Repeat steps 2 and 3 for the other 2 pods.
+
+3. Initialize the CockroachDB cluster:
+
+ 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 0/1 Running 0 2m
+ cockroachdb-1 0/1 Running 0 2m
+ cockroachdb-2 0/1 Running 0 2m
+ ~~~
+
+ 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m
+ pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m
+ pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m
+ ~~~
+
+ 3. Use our [`cluster-init-secure.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml
+ ~~~
+
+ ~~~
+ job.batch/cluster-init-secure created
+ ~~~
+
+ 4. Approve the CSR for the one-off pod from which cluster initialization happens:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl certificate approve default.client.root
+ ~~~
+
+ ~~~
+ certificatesigningrequest.certificates.k8s.io/default.client.root approved
+ ~~~
+
+ 5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get job cluster-init-secure
+ ~~~
+
+ ~~~
+ NAME COMPLETIONS DURATION AGE
+ cluster-init-secure 1/1 23s 35s
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cluster-init-secure-q8s7v 0/1 Completed 0 55s
+ cockroachdb-0 1/1 Running 0 3m
+ cockroachdb-1 1/1 Running 0 3m
+ cockroachdb-2 1/1 Running 0 3m
+ ~~~
+
+##### Non-Kubernetes CA
+
+{{site.data.alerts.callout_info}}
+The below steps use [`cockroach cert` commands](cockroach-cert.html) to quickly generate and sign the CockroachDB node and client certificates. Read our [Authentication](authentication.html#using-digital-certificates-with-cockroachdb) docs to learn about other methods of signing certificates.
+{{site.data.alerts.end}}
+
+1. Create two directories:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir certs my-safe-directory
+ ~~~
+
+ Directory | Description
+ ----------|------------
+ `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory.
+ `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
+
+2. Create the CA certificate and key pair:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-ca \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+3. Create a client certificate and key pair for the root user:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-client \
+ root \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+4. Upload the client certificate and key to the Kubernetes cluster as a secret:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create secret \
+ generic cockroachdb.client.root \
+ --from-file=certs
+ ~~~
+
+ ~~~
+ secret/cockroachdb.client.root created
+ ~~~
+
+5. Create the certificate and key pair for your CockroachDB nodes:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-node \
+ localhost 127.0.0.1 \
+ cockroachdb-public \
+ cockroachdb-public.default \
+ cockroachdb-public.default.svc.cluster.local \
+ *.cockroachdb \
+ *.cockroachdb.default \
+ *.cockroachdb.default.svc.cluster.local \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+6. Upload the node certificate and key to the Kubernetes cluster as a secret:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create secret \
+ generic cockroachdb.node \
+ --from-file=certs
+ ~~~
+
+ ~~~
+ secret/cockroachdb.node created
+ ~~~
+
+7. Check that the secrets were created on the cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get secrets
+ ~~~
+
+ ~~~
+ NAME TYPE DATA AGE
+ cockroachdb.client.root Opaque 3 41m
+ cockroachdb.node Opaque 5 14s
+ default-token-6qjdb kubernetes.io/service-account-token 3 4m
+ ~~~
+
+8. Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create -f cockroachdb-statefulset.yaml
+ ~~~
+
+ ~~~
+ serviceaccount/cockroachdb created
+ role.rbac.authorization.k8s.io/cockroachdb created
+ rolebinding.rbac.authorization.k8s.io/cockroachdb created
+ service/cockroachdb-public created
+ service/cockroachdb created
+ poddisruptionbudget.policy/cockroachdb-budget created
+ statefulset.apps/cockroachdb created
+ ~~~
+
+9. Initialize the CockroachDB cluster:
+
+ 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 0/1 Running 0 2m
+ cockroachdb-1 0/1 Running 0 2m
+ cockroachdb-2 0/1 Running 0 2m
+ ~~~
+
+ 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pv
+ ~~~
+
+ ~~~
+ NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m
+ pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m
+ pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m
+ ~~~
+
+ 3. Run `cockroach init` on one of the pods to complete the node startup process and have them join together as a cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-0 \
+ -- /cockroach/cockroach init \
+ --certs-dir=/cockroach/cockroach-certs
+ ~~~
+
+ ~~~
+ Cluster successfully initialized
+ ~~~
+
+ 4. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get pods
+ ~~~
+
+ ~~~
+ NAME READY STATUS RESTARTS AGE
+ cockroachdb-0 1/1 Running 0 3m
+ cockroachdb-1 1/1 Running 0 3m
+ cockroachdb-2 1/1 Running 0 3m
+ ~~~
\ No newline at end of file
diff --git a/_includes/v20.2/orchestration/start-kubernetes.md b/_includes/v20.2/orchestration/start-kubernetes.md
new file mode 100644
index 00000000000..e8a5a22dc91
--- /dev/null
+++ b/_includes/v20.2/orchestration/start-kubernetes.md
@@ -0,0 +1,98 @@
+Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service, the hosted Amazon Elastic Kubernetes Service (EKS), or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice.
+
+- [Hosted GKE](#hosted-gke)
+- [Hosted EKS](#hosted-eks)
+- [Manual GCE](#manual-gce)
+- [Manual AWS](#manual-aws)
+
+### Hosted GKE
+
+1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation.
+
+ This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
+
+ {{site.data.alerts.callout_success}}The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the CockroachDB Admin UI using the steps in this guide.{{site.data.alerts.end}}
+
+2. From your local workstation, start the Kubernetes cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ gcloud container clusters create cockroachdb --machine-type n1-standard-4
+ ~~~
+
+ ~~~
+ Creating cluster cockroachdb...done.
+ ~~~
+
+ This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--machine-type` flag tells the node pool to use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
+
+ The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.
+
+3. Get the email address associated with your Google Cloud account:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ gcloud info | grep Account
+ ~~~
+
+ ~~~
+ Account: [your.google.cloud.email@example.org]
+ ~~~
+
+ {{site.data.alerts.callout_danger}}
+ This command returns your email address in all lowercase. However, in the next step, you must enter the address with the exact capitalization it was registered with. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com.
+ {{site.data.alerts.end}}
+
+4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create clusterrolebinding $USER-cluster-admin-binding \
+ --clusterrole=cluster-admin \
+ --user=
+ ~~~
+
+ ~~~
+ clusterrolebinding.rbac.authorization.k8s.io/your.username-cluster-admin-binding created
+ ~~~
+
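+ To confirm that the binding was created, you can optionally query it back (this check is an optional addition to the steps above):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get clusterrolebinding $USER-cluster-admin-binding
+ ~~~
+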
+### Hosted EKS
+
+1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation.
+
+ This includes installing and configuring the AWS CLI and `eksctl`, which is the command-line tool used to create and delete Kubernetes clusters on EKS, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
+
+2. From your local workstation, start the Kubernetes cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ eksctl create cluster \
+ --name cockroachdb \
+ --nodegroup-name standard-workers \
+ --node-type m5.xlarge \
+ --nodes 3 \
+ --nodes-min 1 \
+ --nodes-max 4 \
+ --node-ami auto
+ ~~~
+
+ This creates EKS instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
+
+ Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster.
+
+3. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/home) to verify that the stacks `eksctl-cockroachdb-cluster` and `eksctl-cockroachdb-nodegroup-standard-workers` were successfully created. Be sure that your region is selected in the console.
+
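+ As a quick optional check, confirm that `kubectl` can reach the new cluster by listing its worker nodes:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl get nodes
+ ~~~
+
+ All three nodes should report a `Ready` status.
+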
+### Manual GCE
+
+From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on Google Compute Engine](https://kubernetes.io/docs/setup/turnkey/gce/) documentation.
+
+The process includes:
+
+- Creating a Google Cloud Platform account, installing `gcloud`, and other prerequisites.
+- Downloading and installing the latest Kubernetes release.
+- Creating GCE instances and joining them into a single Kubernetes cluster.
+- Installing `kubectl`, the command-line tool used to manage Kubernetes from your workstation.
+
+### Manual AWS
+
+From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on AWS EC2](https://kubernetes.io/docs/setup/turnkey/aws/) documentation.
diff --git a/_includes/v20.2/orchestration/test-cluster-insecure.md b/_includes/v20.2/orchestration/test-cluster-insecure.md
new file mode 100644
index 00000000000..153c8f918f0
--- /dev/null
+++ b/_includes/v20.2/orchestration/test-cluster-insecure.md
@@ -0,0 +1,72 @@
+1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it:
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach:{{page.release_info.version}} \
+ --rm \
+ --restart=Never \
+ -- sql \
+ --insecure \
+ --host=cockroachdb-public
+ ~~~
+
+
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl run cockroachdb -it \
+ --image=cockroachdb/cockroach:{{page.release_info.version}} \
+ --rm \
+ --restart=Never \
+ -- sql \
+ --insecure \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+
+2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE DATABASE bank;
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE TABLE bank.accounts (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ balance DECIMAL
+ );
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > INSERT INTO bank.accounts (balance)
+ VALUES
+ (1000.50), (20000), (380), (500), (55000);
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SELECT * FROM bank.accounts;
+ ~~~
+
+ ~~~
+ id | balance
+ +--------------------------------------+---------+
+ 6f123370-c48c-41ff-b384-2c185590af2b | 380
+ 990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50
+ ac31c671-40bf-4a7b-8bee-452cff8a4026 | 500
+ d58afd93-5be9-42ba-b2e2-dc00dcedf409 | 20000
+ e6d8f696-87f5-4d3c-a377-8e152fdc27f7 | 55000
+ (5 rows)
+ ~~~
+
+3. Exit the SQL shell and delete the temporary pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > \q
+ ~~~
diff --git a/_includes/v20.2/orchestration/test-cluster-secure.md b/_includes/v20.2/orchestration/test-cluster-secure.md
new file mode 100644
index 00000000000..9cb18c8bf79
--- /dev/null
+++ b/_includes/v20.2/orchestration/test-cluster-secure.md
@@ -0,0 +1,202 @@
+To use the built-in SQL client, you need to launch a pod that runs indefinitely with the `cockroach` binary inside it, get a shell into the pod, and then start the built-in SQL client.
+
+
+- Using the Kubernetes CA: [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml)
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
+ ~~~
+
+- Using a non-Kubernetes CA: [`client.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/client.yaml)
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create \
+ -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/client.yaml
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. If you issue client certificates for other users, however, be sure your SQL usernames contain only lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
+ {{site.data.alerts.end}}
+
+ ~~~
+ pod/cockroachdb-client-secure created
+ ~~~
+
+1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach sql \
+ --certs-dir=/cockroach-certs \
+ --host=cockroachdb-public
+ ~~~
+
+ ~~~
+ # Welcome to the cockroach SQL interface.
+ # All statements must be terminated by a semicolon.
+ # To exit: CTRL + D.
+ #
+ # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
+ # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
+
+ # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4
+ #
+ # Enter \? for a brief introduction.
+ #
+ root@cockroachdb-public:26257/defaultdb>
+ ~~~
+
+2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE DATABASE bank;
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > INSERT INTO bank.accounts VALUES (1, 1000.50);
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SELECT * FROM bank.accounts;
+ ~~~
+
+ ~~~
+ id | balance
+ +----+---------+
+ 1 | 1000.50
+ (1 row)
+ ~~~
+
+3. [Create a user with a password](create-user.html#create-a-user-with-a-password):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
+ ~~~
+
+ You will need this username and password to access the Admin UI later.
+
+4. Exit the SQL shell and pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > \q
+ ~~~
+
+
+
+1. From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) file to launch a pod and keep it running indefinitely.
+
+ 1. Download the file:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ curl -O \
+ https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
+ ~~~
+
+ 1. In the file, change `serviceAccountName: cockroachdb` to `serviceAccountName: my-release-cockroachdb`.
+
+ 1. Use the file to launch a pod and keep it running indefinitely:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl create -f client-secure.yaml
+ ~~~
+
+ ~~~
+ pod "cockroachdb-client-secure" created
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. If you issue client certificates for other users, however, be sure your SQL usernames contain only lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
+ {{site.data.alerts.end}}
+
+2. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ kubectl exec -it cockroachdb-client-secure \
+ -- ./cockroach sql \
+ --certs-dir=/cockroach-certs \
+ --host=my-release-cockroachdb-public
+ ~~~
+
+ ~~~
+ # Welcome to the cockroach SQL interface.
+ # All statements must be terminated by a semicolon.
+ # To exit: CTRL + D.
+ #
+ # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
+ # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
+
+ # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4
+ #
+ # Enter \? for a brief introduction.
+ #
+ root@my-release-cockroachdb-public:26257/defaultdb>
+ ~~~
+
+3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE DATABASE bank;
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > INSERT INTO bank.accounts VALUES (1, 1000.50);
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SELECT * FROM bank.accounts;
+ ~~~
+
+ ~~~
+ id | balance
+ +----+---------+
+ 1 | 1000.50
+ (1 row)
+ ~~~
+
+4. [Create a user with a password](create-user.html#create-a-user-with-a-password):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
+ ~~~
+
+ You will need this username and password to access the Admin UI later.
+
+5. Exit the SQL shell and pod:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > \q
+ ~~~
+
+
+{{site.data.alerts.callout_success}}
+This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command.
+
+If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`.
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/performance/check-rebalancing-after-partitioning.md b/_includes/v20.2/performance/check-rebalancing-after-partitioning.md
new file mode 100644
index 00000000000..cbd783fd0b7
--- /dev/null
+++ b/_includes/v20.2/performance/check-rebalancing-after-partitioning.md
@@ -0,0 +1,41 @@
+Over the next few minutes, CockroachDB will rebalance all partitions based on the constraints you defined.
+
+To check this at a high level, access the Admin UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning:
+
+
+
+To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table:
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach sql \
+{{page.certs}} \
+--host= \
+--database=movr \
+--execute="SELECT * FROM \
+[SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles] \
+WHERE \"start_key\" IS NOT NULL \
+ AND \"start_key\" NOT LIKE '%Prefix%';"
+~~~
+
+~~~
+ start_key | end_key | range_id | replicas | lease_holder
++------------------+----------------------------+----------+----------+--------------+
+ /"boston" | /"boston"/PrefixEnd | 105 | {1,2,3} | 3
+ /"los angeles" | /"los angeles"/PrefixEnd | 121 | {7,8,9} | 8
+ /"new york" | /"new york"/PrefixEnd | 101 | {1,2,3} | 3
+ /"san francisco" | /"san francisco"/PrefixEnd | 117 | {7,8,9} | 8
+ /"seattle" | /"seattle"/PrefixEnd | 113 | {4,5,6} | 5
+ /"washington dc" | /"washington dc"/PrefixEnd | 109 | {1,2,3} | 1
+(6 rows)
+~~~
+
+For reference, here's how the nodes map to zones:
+
+Node IDs | Zone
+---------|-----
+1-3 | `us-east1-b` (South Carolina)
+4-6 | `us-west1-a` (Oregon)
+7-9 | `us-west2-a` (Los Angeles)
+
+We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1-b`, replicas for Seattle are located on nodes 4-6 in `us-west1-a`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2-a`.
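+
+To cross-check the zone constraints behind this placement, you can also use the `SHOW PARTITIONS` statement, which lists each partition along with its configured constraints:
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach sql \
+{{page.certs}} \
+--host= \
+--database=movr \
+--execute="SHOW PARTITIONS FROM TABLE vehicles;"
+~~~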
diff --git a/_includes/v20.2/performance/check-rebalancing.md b/_includes/v20.2/performance/check-rebalancing.md
new file mode 100644
index 00000000000..fff329ec7cc
--- /dev/null
+++ b/_includes/v20.2/performance/check-rebalancing.md
@@ -0,0 +1,33 @@
+Since you started each node with the `--locality` flag set to its GCE zone, over the next few minutes, CockroachDB will rebalance data evenly across the zones.
+
+To check this, access the Admin UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes:
+
+
+
+For reference, here's how the nodes map to zones:
+
+Node IDs | Zone
+---------|-----
+1-3 | `us-east1-b` (South Carolina)
+4-6 | `us-west1-a` (Oregon)
+7-9 | `us-west2-a` (Los Angeles)
+
+To verify even balancing at range level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement:
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach sql \
+{{page.certs}} \
+--host= \
+--database=movr \
+--execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;"
+~~~
+
+~~~
+ start_key | end_key | range_id | replicas | lease_holder
++-----------+---------+----------+----------+--------------+
+ NULL | NULL | 33 | {3,4,7} | 7
+(1 row)
+~~~
+
+In this case, we can see that, for the single range containing `vehicles` data, one replica is in each zone, and the leaseholder is in the `us-west2-a` zone.
diff --git a/_includes/v20.2/performance/configure-network.md b/_includes/v20.2/performance/configure-network.md
new file mode 100644
index 00000000000..7cd3e3cbcc6
--- /dev/null
+++ b/_includes/v20.2/performance/configure-network.md
@@ -0,0 +1,18 @@
+CockroachDB requires TCP communication on two ports:
+
+- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster)
+- **8080** (`tcp:8080`) for accessing the Admin UI
+
+Since GCE instances communicate on their internal IP addresses by default, you do not need to take any action to enable inter-node communication. However, to access the Admin UI from your local network, you must [create a firewall rule for your project](https://cloud.google.com/vpc/docs/using-firewalls):
+
+Field | Recommended Value
+------|------------------
+Name | **cockroachweb**
+Source filter | IP ranges
+Source IP ranges | Your local network's IP ranges
+Allowed protocols | **tcp:8080**
+Target tags | `cockroachdb`
+
+{{site.data.alerts.callout_info}}
+The **tag** feature will let you easily apply the rule to your instances.
+{{site.data.alerts.end}}
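+
+As a sketch, an equivalent rule can be created from the command line with `gcloud` (the source range below is a placeholder; substitute your local network's IP ranges):
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ gcloud compute firewall-rules create cockroachweb \
+--allow tcp:8080 \
+--source-ranges 203.0.113.0/24 \
+--target-tags cockroachdb
+~~~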
diff --git a/_includes/v20.2/performance/import-movr.md b/_includes/v20.2/performance/import-movr.md
new file mode 100644
index 00000000000..a0fe2dc710a
--- /dev/null
+++ b/_includes/v20.2/performance/import-movr.md
@@ -0,0 +1,160 @@
+Now you'll import Movr data representing users, vehicles, and rides in 3 eastern US cities (New York, Boston, and Washington DC) and 3 western US cities (Los Angeles, San Francisco, and Seattle).
+
+1. Still on the fourth instance, start the [built-in SQL shell](cockroach-sql.html), pointing it at one of the CockroachDB nodes:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql {{page.certs}} --host=
+ ~~~
+
+2. Create the `movr` database and set it as the default:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE DATABASE movr;
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SET DATABASE = movr;
+ ~~~
+
+3. Use the [`IMPORT`](import.html) statement to create and populate the `users`, `vehicles`, and `rides` tables:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > IMPORT TABLE users (
+ id UUID NOT NULL,
+ city STRING NOT NULL,
+ name STRING NULL,
+ address STRING NULL,
+ credit_card STRING NULL,
+ CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
+ )
+ CSV DATA (
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/users/n1.0.csv'
+ );
+ ~~~
+
+ ~~~
+ job_id | status | fraction_completed | rows | index_entries | system_records | bytes
+ +--------------------+-----------+--------------------+------+---------------+----------------+--------+
+ 390345990764396545 | succeeded | 1 | 1998 | 0 | 0 | 241052
+ (1 row)
+
+ Time: 2.882582355s
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > IMPORT TABLE vehicles (
+ id UUID NOT NULL,
+ city STRING NOT NULL,
+ type STRING NULL,
+ owner_id UUID NULL,
+ creation_time TIMESTAMP NULL,
+ status STRING NULL,
+ ext JSON NULL,
+ mycol STRING NULL,
+ CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
+ INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC)
+ )
+ CSV DATA (
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/vehicles/n1.0.csv'
+ );
+ ~~~
+
+ ~~~
+ job_id | status | fraction_completed | rows | index_entries | system_records | bytes
+ +--------------------+-----------+--------------------+-------+---------------+----------------+---------+
+ 390346109887250433 | succeeded | 1 | 19998 | 19998 | 0 | 3558767
+ (1 row)
+
+ Time: 5.803841493s
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > IMPORT TABLE rides (
+ id UUID NOT NULL,
+ city STRING NOT NULL,
+ vehicle_city STRING NULL,
+ rider_id UUID NULL,
+ vehicle_id UUID NULL,
+ start_address STRING NULL,
+ end_address STRING NULL,
+ start_time TIMESTAMP NULL,
+ end_time TIMESTAMP NULL,
+ revenue DECIMAL(10,2) NULL,
+ CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
+ INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
+ INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
+ CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
+ )
+ CSV DATA (
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.0.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.1.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.2.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.3.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.4.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.5.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.6.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.7.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.8.csv',
+ 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.9.csv'
+ );
+ ~~~
+
+ ~~~
+ job_id | status | fraction_completed | rows | index_entries | system_records | bytes
+ +--------------------+-----------+--------------------+--------+---------------+----------------+-----------+
+ 390346325693792257 | succeeded | 1 | 999996 | 1999992 | 0 | 339741841
+ (1 row)
+
+ Time: 44.620371424s
+ ~~~
+
+ {{site.data.alerts.callout_success}}
+ You can observe the progress of imports as well as all schema change operations (e.g., adding secondary indexes) on the [**Jobs** page](admin-ui-jobs-page.html) of the Admin UI.
+ {{site.data.alerts.end}}
+
+4. Logically, there should be a number of [foreign key](foreign-key.html) relationships between the tables:
+
+ Referencing columns | Referenced columns
+ --------------------|-------------------
+ `vehicles.city`, `vehicles.owner_id` | `users.city`, `users.id`
+ `rides.city`, `rides.rider_id` | `users.city`, `users.id`
+ `rides.vehicle_city`, `rides.vehicle_id` | `vehicles.city`, `vehicles.id`
+
+ As mentioned earlier, it wasn't possible to put these relationships in place during `IMPORT`, but it was possible to create the required secondary indexes. Now, let's add the foreign key constraints:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > ALTER TABLE vehicles
+ ADD CONSTRAINT fk_city_ref_users
+ FOREIGN KEY (city, owner_id)
+ REFERENCES users (city, id);
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > ALTER TABLE rides
+ ADD CONSTRAINT fk_city_ref_users
+ FOREIGN KEY (city, rider_id)
+ REFERENCES users (city, id);
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > ALTER TABLE rides
+ ADD CONSTRAINT fk_vehicle_city_ref_vehicles
+ FOREIGN KEY (vehicle_city, vehicle_id)
+ REFERENCES vehicles (city, id);
+ ~~~
+
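+ If you'd like to confirm that the constraints are now in place, `SHOW CONSTRAINTS` lists them (this check is optional):
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SHOW CONSTRAINTS FROM rides;
+ ~~~
+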
+5. Exit the built-in SQL shell:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > \q
+ ~~~
diff --git a/_includes/v20.2/performance/overview.md b/_includes/v20.2/performance/overview.md
new file mode 100644
index 00000000000..8707c1daf10
--- /dev/null
+++ b/_includes/v20.2/performance/overview.md
@@ -0,0 +1,38 @@
+### Topology
+
+You'll start with a 3-node CockroachDB cluster in a single Google Compute Engine (GCE) zone, with an extra instance for running a client application workload:
+
+
+
+{{site.data.alerts.callout_info}}
+Within a single GCE zone, network latency between instances should be sub-millisecond.
+{{site.data.alerts.end}}
+
+You'll then scale the cluster to 9 nodes running across 3 GCE regions, with an extra instance in each region for a client application workload:
+
+
+
+{{site.data.alerts.callout_info}}
+Network latencies will increase with geographic distance between nodes. You can observe this in the [Network Latency page](admin-ui-network-latency-page.html) of the Admin UI.
+{{site.data.alerts.end}}
+
+To reproduce the performance demonstrated in this tutorial:
+
+- For each CockroachDB node, you'll use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/#localssds) disk.
+- For running the client application workload, you'll use smaller instances, such as `n1-standard-1`.
+
+### Schema
+
+Your schema and data will be based on our open-source, fictional peer-to-peer vehicle-sharing application, [MovR](movr.html).
+
+
+
+A few notes about the schema:
+
+- There are just three self-explanatory tables: In essence, `users` represents the people registered for the service, `vehicles` represents the pool of vehicles for the service, and `rides` represents when and where users have rented a vehicle.
+- Each table has a composite primary key, with `city` being first in the key. Although not necessary initially in the single-region deployment, once you scale the cluster to multiple regions, these compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. As such, this tutorial demonstrates a schema designed for future scaling.
+- The [`IMPORT`](import.html) feature you'll use to import the data does not support foreign keys, so you'll import the data without [foreign key constraints](foreign-key.html). However, the import will create the secondary indexes required to add the foreign keys later.
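+
+For example, here is the shape of the `users` table you'll import later in this tutorial, with `city` leading the composite primary key:
+
+~~~ sql
+CREATE TABLE users (
+    id UUID NOT NULL,
+    city STRING NOT NULL,
+    name STRING NULL,
+    address STRING NULL,
+    credit_card STRING NULL,
+    CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
+);
+~~~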
+
+### Important concepts
+
+To understand the techniques in this tutorial, and to be able to apply them in your own scenarios, it's important to first understand [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html). Review that document before getting started here.
diff --git a/_includes/v20.2/performance/partition-by-city.md b/_includes/v20.2/performance/partition-by-city.md
new file mode 100644
index 00000000000..d1c4df6e6ec
--- /dev/null
+++ b/_includes/v20.2/performance/partition-by-city.md
@@ -0,0 +1,419 @@
+For this service, the most effective technique for improving read and write latency is to [geo-partition](partitioning.html) the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges. Once ranges are defined in this way, we can then use the [replication zone](configure-replication-zones.html) feature to pin partitions to specific locations, ensuring that read and write requests from users in a specific city do not have to leave that region.
+
+1. Partitioning is an enterprise feature, so start off by [registering for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/).
+
+2. Once you've received the trial license, SSH to any node in your cluster and [apply the license](enterprise-licensing.html#set-a-license):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --host= \
+ --execute="SET CLUSTER SETTING cluster.organization = '';"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --host= \
+ --execute="SET CLUSTER SETTING enterprise.license = '';"
+ ~~~
+
+3. Define partitions for all tables and their secondary indexes.
+
+ Start with the `users` table:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --database=movr \
+ --host= \
+ --execute="ALTER TABLE users \
+ PARTITION BY LIST (city) ( \
+ PARTITION new_york VALUES IN ('new york'), \
+ PARTITION boston VALUES IN ('boston'), \
+ PARTITION washington_dc VALUES IN ('washington dc'), \
+ PARTITION seattle VALUES IN ('seattle'), \
+ PARTITION san_francisco VALUES IN ('san francisco'), \
+ PARTITION los_angeles VALUES IN ('los angeles') \
+ );"
+ ~~~
+
+ Now define partitions for the `vehicles` table and its secondary indexes:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --database=movr \
+ --host= \
+ --execute="ALTER TABLE vehicles \
+ PARTITION BY LIST (city) ( \
+ PARTITION new_york VALUES IN ('new york'), \
+ PARTITION boston VALUES IN ('boston'), \
+ PARTITION washington_dc VALUES IN ('washington dc'), \
+ PARTITION seattle VALUES IN ('seattle'), \
+ PARTITION san_francisco VALUES IN ('san francisco'), \
+ PARTITION los_angeles VALUES IN ('los angeles') \
+ );"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --database=movr \
+ --host= \
+ --execute="ALTER INDEX vehicles_auto_index_fk_city_ref_users \
+ PARTITION BY LIST (city) ( \
+ PARTITION new_york VALUES IN ('new york'), \
+ PARTITION boston VALUES IN ('boston'), \
+ PARTITION washington_dc VALUES IN ('washington dc'), \
+ PARTITION seattle VALUES IN ('seattle'), \
+ PARTITION san_francisco VALUES IN ('san francisco'), \
+ PARTITION los_angeles VALUES IN ('los angeles') \
+ );"
+ ~~~
+
+ Next, define partitions for the `rides` table and its secondary indexes:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --database=movr \
+ --host= \
+ --execute="ALTER TABLE rides \
+ PARTITION BY LIST (city) ( \
+ PARTITION new_york VALUES IN ('new york'), \
+ PARTITION boston VALUES IN ('boston'), \
+ PARTITION washington_dc VALUES IN ('washington dc'), \
+ PARTITION seattle VALUES IN ('seattle'), \
+ PARTITION san_francisco VALUES IN ('san francisco'), \
+ PARTITION los_angeles VALUES IN ('los angeles') \
+ );"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --database=movr \
+ --host= \
+ --execute="ALTER INDEX rides_auto_index_fk_city_ref_users \
+ PARTITION BY LIST (city) ( \
+ PARTITION new_york VALUES IN ('new york'), \
+ PARTITION boston VALUES IN ('boston'), \
+ PARTITION washington_dc VALUES IN ('washington dc'), \
+ PARTITION seattle VALUES IN ('seattle'), \
+ PARTITION san_francisco VALUES IN ('san francisco'), \
+ PARTITION los_angeles VALUES IN ('los angeles') \
+ );"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --database=movr \
+ --host= \
+ --execute="ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles \
+ PARTITION BY LIST (vehicle_city) ( \
+ PARTITION new_york VALUES IN ('new york'), \
+ PARTITION boston VALUES IN ('boston'), \
+ PARTITION washington_dc VALUES IN ('washington dc'), \
+ PARTITION seattle VALUES IN ('seattle'), \
+ PARTITION san_francisco VALUES IN ('san francisco'), \
+ PARTITION los_angeles VALUES IN ('los angeles') \
+ );"
+ ~~~
+
+ Finally, drop an unused index on `rides` rather than partition it:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql \
+ {{page.certs}} \
+ --database=movr \
+ --host= \
+ --execute="DROP INDEX rides_start_time_idx;"
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ The `rides` table contains 1 million rows, so dropping this index will take a few minutes.
+ {{site.data.alerts.end}}
+
+4. Now [create replication zones](configure-replication-zones.html#create-a-replication-zone-for-a-partition) to require city data to be stored on specific nodes based on node locality.
+
+ City | Locality
+ -----|---------
+ New York | `zone=us-east1-b`
+ Boston | `zone=us-east1-b`
+ Washington DC | `zone=us-east1-b`
+ Seattle | `zone=us-west1-a`
+ San Francisco | `zone=us-west2-a`
+ Los Angeles | `zone=us-west2-a`
+
+ {{site.data.alerts.callout_info}}
+ Since our nodes are located in 3 specific GCE zones, we're only going to use the `zone=` portion of node locality. If we were using multiple zones per region, we would likely use the `region=` portion of the node locality instead.
+ {{site.data.alerts.end}}
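    The per-partition statements below are repetitive, so they can also be generated from the city-to-zone table above with a small helper. This is a convenience sketch, not part of the tutorial; `gen_zone_configs` is a hypothetical name, and each printed statement would still be passed to `cockroach sql --execute`:

    ```shell
    # Print one ALTER PARTITION ... CONFIGURE ZONE statement per city for a
    # given target (a table or index). The mapping mirrors the table above.
    gen_zone_configs() {
      local target="$1"   # e.g., "TABLE movr.users"
      while read -r partition zone; do
        echo "ALTER PARTITION ${partition} OF ${target} CONFIGURE ZONE USING constraints='[+zone=${zone}]';"
      done <<EOF
    new_york us-east1-b
    boston us-east1-b
    washington_dc us-east1-b
    seattle us-west1-a
    san_francisco us-west2-a
    los_angeles us-west2-a
    EOF
    }

    # Print the six statements for the users table:
    gen_zone_configs "TABLE movr.users"
    ```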
+
+ Start with the `users` table partitions:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ Move on to the `vehicles` table and secondary index partitions:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION boston OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX vehicles_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ Finish with the `rides` table and secondary index partitions:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION new_york OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION boston OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION washington_dc OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION seattle OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION san_francisco OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_city_ref_users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --execute="ALTER PARTITION los_angeles OF INDEX rides_auto_index_fk_vehicle_city_ref_vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
+ {{page.certs}} \
+ --host=
+ ~~~
diff --git a/_includes/v20.2/performance/scale-cluster.md b/_includes/v20.2/performance/scale-cluster.md
new file mode 100644
index 00000000000..92aeaddf5b8
--- /dev/null
+++ b/_includes/v20.2/performance/scale-cluster.md
@@ -0,0 +1,61 @@
+1. SSH to one of the `n1-standard-4` instances in the `us-west1-a` zone.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+3. Run the [`cockroach start`](cockroach-start.html) command:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start \
+ {{page.certs}} \
+ --advertise-host= \
+ --join= \
+ --locality=cloud=gce,region=us-west1,zone=us-west1-a \
+ --cache=.25 \
+ --max-sql-memory=.25 \
+ --background
+ ~~~
+
+4. Repeat steps 1 - 3 for the other two `n1-standard-4` instances in the `us-west1-a` zone.
+
+5. SSH to one of the `n1-standard-4` instances in the `us-west2-a` zone.
+
+6. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+7. Run the [`cockroach start`](cockroach-start.html) command:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start \
+ {{page.certs}} \
+ --advertise-host= \
+ --join= \
+ --locality=cloud=gce,region=us-west2,zone=us-west2-a \
+ --cache=.25 \
+ --max-sql-memory=.25 \
+ --background
+ ~~~
+
+8. Repeat steps 5 - 7 for the other two `n1-standard-4` instances in the `us-west2-a` zone.
diff --git a/_includes/v20.2/performance/start-cluster.md b/_includes/v20.2/performance/start-cluster.md
new file mode 100644
index 00000000000..0847b3b268f
--- /dev/null
+++ b/_includes/v20.2/performance/start-cluster.md
@@ -0,0 +1,60 @@
+#### Start the nodes
+
+1. SSH to the first `n1-standard-4` instance.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+3. Run the [`cockroach start`](cockroach-start.html) command:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start \
+ {{page.certs}} \
+ --advertise-host= \
+ --join=:26257,:26257,:26257 \
+ --locality=cloud=gce,region=us-east1,zone=us-east1-b \
+ --cache=.25 \
+ --max-sql-memory=.25 \
+ --background
+ ~~~
+
+4. Repeat steps 1 - 3 for the other two `n1-standard-4` instances. Be sure to adjust the `--advertise-host` flag each time.
+
+#### Initialize the cluster
+
+1. SSH to the fourth instance, the one not running a CockroachDB node.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+4. Run the [`cockroach init`](cockroach-init.html) command:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach init {{page.certs}} --host=
+ ~~~
+
+ Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients.
diff --git a/_includes/v20.2/performance/test-performance-after-partitioning.md b/_includes/v20.2/performance/test-performance-after-partitioning.md
new file mode 100644
index 00000000000..16c07a9f92d
--- /dev/null
+++ b/_includes/v20.2/performance/test-performance-after-partitioning.md
@@ -0,0 +1,93 @@
+After partitioning, reads and writes for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city.
+
+To check this, let's repeat a few of the read and write queries that we executed before partitioning in [step 12](#step-12-test-performance).
+
+#### Reads
+
+Again imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
+
+1. SSH to the instance in `us-east1-b` with the Python client.
+
+2. Query for the data:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ {{page.app}} \
+ --host= \
+ --statement="SELECT id, ext FROM vehicles \
+ WHERE city = 'new york' \
+ AND type = 'bike' \
+ AND status = 'in_use'" \
+ --repeat=50 \
+ --times
+ ~~~
+
+ ~~~
+ Result:
+ ['id', 'ext']
+ ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
+ ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
+ ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
+ ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
+ ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
+ ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
+ ...
+
+ Times (milliseconds):
+ [20.065784454345703, 7.866144180297852, 8.362054824829102, 9.08803939819336, 7.925987243652344, 7.543087005615234, 7.786035537719727, 8.227825164794922, 7.907867431640625, 7.654905319213867, 7.793903350830078, 7.627964019775391, 7.833957672119141, 7.858037948608398, 7.474184036254883, 9.459972381591797, 7.726192474365234, 7.194995880126953, 7.364034652709961, 7.25102424621582, 7.650852203369141, 7.663965225219727, 9.334087371826172, 7.810115814208984, 7.543087005615234, 7.134914398193359, 7.922887802124023, 7.220029830932617, 7.606029510498047, 7.208108901977539, 7.333993911743164, 7.464170455932617, 7.679939270019531, 7.436990737915039, 7.62486457824707, 7.235050201416016, 7.420063018798828, 7.795095443725586, 7.39598274230957, 7.546901702880859, 7.582187652587891, 7.9669952392578125, 7.418155670166016, 7.539033889770508, 7.805109024047852, 7.086992263793945, 7.069826126098633, 7.833957672119141, 7.43412971496582, 7.035017013549805]
+
+ Median time (milliseconds):
+ 7.62641429901
+ ~~~
+
+Before partitioning, this query took a median time of 72.02ms. After partitioning, the query took a median time of only 7.62ms.
+
+#### Writes
+
+Now let's again imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
+
+1. SSH to the instance in `us-west1-a` with the Python client.
+
+2. Create 100 Seattle-based users:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ {{page.app}} \
+ --host= \
+ --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \
+ --repeat=100 \
+ --times
+ ~~~
+
+ ~~~
+ Times (milliseconds):
+ [41.8248176574707, 9.701967239379883, 8.725166320800781, 9.058952331542969, 7.819175720214844, 6.247997283935547, 10.265827178955078, 7.627964019775391, 9.120941162109375, 7.977008819580078, 9.247064590454102, 8.929967880249023, 9.610176086425781, 14.40286636352539, 8.588075637817383, 8.67319107055664, 9.417057037353516, 7.652044296264648, 8.917093276977539, 9.135961532592773, 8.604049682617188, 9.220123291015625, 7.578134536743164, 9.096860885620117, 8.942842483520508, 8.63790512084961, 7.722139358520508, 13.59701156616211, 9.176015853881836, 11.484146118164062, 9.212017059326172, 7.563114166259766, 8.793115615844727, 8.80289077758789, 7.827043533325195, 7.6389312744140625, 17.47584342956543, 9.436845779418945, 7.63392448425293, 8.594989776611328, 9.002208709716797, 8.93402099609375, 8.71896743774414, 8.76307487487793, 8.156061172485352, 8.729934692382812, 8.738040924072266, 8.25190544128418, 8.971929550170898, 7.460832595825195, 8.889198303222656, 8.45789909362793, 8.761167526245117, 10.223865509033203, 8.892059326171875, 8.961915969848633, 8.968114852905273, 7.750988006591797, 7.761955261230469, 9.199142456054688, 9.02700424194336, 9.509086608886719, 9.428977966308594, 7.902860641479492, 8.940935134887695, 8.615970611572266, 8.75401496887207, 7.906913757324219, 8.179187774658203, 11.447906494140625, 8.71419906616211, 9.202003479003906, 9.263038635253906, 9.089946746826172, 8.92496109008789, 10.32114028930664, 7.913827896118164, 9.464025497436523, 10.612010955810547, 8.78596305847168, 8.878946304321289, 7.575035095214844, 10.657072067260742, 8.777856826782227, 8.649110794067383, 9.012937545776367, 8.931875228881836, 9.31406021118164, 9.396076202392578, 8.908987045288086, 8.002996444702148, 9.089946746826172, 7.5588226318359375, 8.918046951293945, 12.117862701416016, 7.266998291015625, 8.074045181274414, 8.955001831054688, 8.868932723999023, 8.755922317504883]
+
+ Median time (milliseconds):
+ 8.90052318573
+ ~~~
+
+ Before partitioning, this query took a median time of 48.40ms. After partitioning, the query took a median time of only 8.90ms.
+
+3. SSH to the instance in `us-east1-b` with the Python client.
+
+4. Create 100 new NY-based users:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ {{page.app}} \
+ --host= \
+ --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \
+ --repeat=100 \
+ --times
+ ~~~
+
+ ~~~
+ Times (milliseconds):
+ [276.3068675994873, 9.830951690673828, 8.772134780883789, 9.304046630859375, 8.24880599975586, 7.959842681884766, 7.848978042602539, 7.879018783569336, 7.754087448120117, 10.724067687988281, 13.960123062133789, 9.825944900512695, 9.60993766784668, 9.273052215576172, 9.41920280456543, 8.040904998779297, 16.484975814819336, 10.178089141845703, 8.322000503540039, 9.468793869018555, 8.002042770385742, 9.185075759887695, 9.54294204711914, 9.387016296386719, 9.676933288574219, 13.051986694335938, 9.506940841674805, 12.327909469604492, 10.377168655395508, 15.023946762084961, 9.985923767089844, 7.853031158447266, 9.43303108215332, 9.164094924926758, 10.941028594970703, 9.37199592590332, 12.359857559204102, 8.975028991699219, 7.728099822998047, 8.310079574584961, 9.792089462280273, 9.448051452636719, 8.057117462158203, 9.37795639038086, 9.753942489624023, 9.576082229614258, 8.192062377929688, 9.392023086547852, 7.97581672668457, 8.165121078491211, 9.660959243774414, 8.270978927612305, 9.901046752929688, 8.085966110229492, 10.581016540527344, 9.831905364990234, 7.883787155151367, 8.077859878540039, 8.161067962646484, 10.02812385559082, 7.9898834228515625, 9.840965270996094, 9.452104568481445, 9.747028350830078, 9.003162384033203, 9.206056594848633, 9.274005889892578, 7.8449249267578125, 8.827924728393555, 9.322881698608398, 12.08186149597168, 8.76307487487793, 8.353948593139648, 8.182048797607422, 7.736921310424805, 9.31406021118164, 9.263992309570312, 9.282112121582031, 7.823944091796875, 9.11712646484375, 8.099079132080078, 9.156942367553711, 8.363962173461914, 10.974884033203125, 8.729934692382812, 9.2620849609375, 9.27591323852539, 8.272886276245117, 8.25190544128418, 8.093118667602539, 9.259939193725586, 8.413076400756836, 8.198976516723633, 9.95182991027832, 8.024930953979492, 8.895158767700195, 8.243083953857422, 9.076833724975586, 9.994029998779297, 10.149955749511719]
+
+ Median time (milliseconds):
+ 9.26303863525
+ ~~~
+
+ Before partitioning, this query took a median time of 116.86ms. After partitioning, the query took a median time of only 9.26ms.
diff --git a/_includes/v20.2/performance/test-performance.md b/_includes/v20.2/performance/test-performance.md
new file mode 100644
index 00000000000..2009ac9653f
--- /dev/null
+++ b/_includes/v20.2/performance/test-performance.md
@@ -0,0 +1,146 @@
+In general, all of the tuning techniques featured in the single-region scenario above still apply in a multi-region deployment. However, the fact that data and leaseholders are spread across the US means greater latencies in many cases.
+
+#### Reads
+
+For example, imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
+
+1. SSH to the instance in `us-east1-b` with the Python client.
+
+2. Query for the data:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ {{page.app}} \
+ --host= \
+ --statement="SELECT id, ext FROM vehicles \
+ WHERE city = 'new york' \
+ AND type = 'bike' \
+ AND status = 'in_use'" \
+ --repeat=50 \
+ --times
+ ~~~
+
+ ~~~
+ Result:
+ ['id', 'ext']
+ ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
+ ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
+ ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
+ ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
+ ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
+ ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
+ ...
+
+ Times (milliseconds):
+ [933.8209629058838, 72.02410697937012, 72.45206832885742, 72.39294052124023, 72.8158950805664, 72.07584381103516, 72.21412658691406, 71.96712493896484, 71.75517082214355, 72.16811180114746, 71.78592681884766, 72.91603088378906, 71.91109657287598, 71.4719295501709, 72.40676879882812, 71.8080997467041, 71.84004783630371, 71.98500633239746, 72.40891456604004, 73.75001907348633, 71.45905494689941, 71.53081893920898, 71.46596908569336, 72.07608222961426, 71.94995880126953, 71.41804695129395, 71.29096984863281, 72.11899757385254, 71.63381576538086, 71.3050365447998, 71.83194160461426, 71.20394706726074, 70.9981918334961, 72.79205322265625, 72.63493537902832, 72.15285301208496, 71.8698501586914, 72.30591773986816, 71.53582572937012, 72.69001007080078, 72.03006744384766, 72.56317138671875, 71.61688804626465, 72.17121124267578, 70.20092010498047, 72.12018966674805, 73.34589958190918, 73.01592826843262, 71.49410247802734, 72.19099998474121]
+
+ Median time (milliseconds):
+ 72.0270872116
+ ~~~
+
+As we saw earlier, the leaseholder for the `vehicles` table is in `us-west2-a` (Los Angeles), so our query had to go from the gateway node in `us-east1-b` all the way to the west coast and then back again before returning data to the client.
+
+For contrast, imagine we are now a Movr administrator in Los Angeles, and we want to get the IDs and descriptions of all Los Angeles-based bikes that are currently in use:
+
+1. SSH to the instance in `us-west2-a` with the Python client.
+
+2. Query for the data:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ {{page.app}} \
+ --host= \
+ --statement="SELECT id, ext FROM vehicles \
+ WHERE city = 'los angeles' \
+ AND type = 'bike' \
+ AND status = 'in_use'" \
+ --repeat=50 \
+ --times
+ ~~~
+
+ ~~~
+ Result:
+ ['id', 'ext']
+ ['00078349-94d4-43e6-92be-8b0d1ac7ee9f', "{u'color': u'blue', u'brand': u'Merida'}"]
+ ['003f84c4-fa14-47b2-92d4-35a3dddd2d75', "{u'color': u'red', u'brand': u'Kona'}"]
+ ['0107a133-7762-4392-b1d9-496eb30ee5f9', "{u'color': u'yellow', u'brand': u'Kona'}"]
+ ['0144498b-4c4f-4036-8465-93a6bea502a3', "{u'color': u'blue', u'brand': u'Pinarello'}"]
+ ['01476004-fb10-4201-9e56-aadeb427f98a', "{u'color': u'black', u'brand': u'Merida'}"]
+
+ Times (milliseconds):
+ [782.6759815216064, 8.564949035644531, 8.226156234741211, 7.949113845825195, 7.86590576171875, 7.842063903808594, 7.674932479858398, 7.555961608886719, 7.642984390258789, 8.024930953979492, 7.717132568359375, 8.46409797668457, 7.520914077758789, 7.6541900634765625, 7.458925247192383, 7.671833038330078, 7.740020751953125, 7.771015167236328, 7.598161697387695, 8.411169052124023, 7.408857345581055, 7.469892501831055, 7.524967193603516, 7.764101028442383, 7.750988006591797, 7.2460174560546875, 6.927967071533203, 7.822990417480469, 7.27391242980957, 7.730960845947266, 7.4710845947265625, 7.4310302734375, 7.33494758605957, 7.455110549926758, 7.021188735961914, 7.083892822265625, 7.812976837158203, 7.625102996826172, 7.447957992553711, 7.179021835327148, 7.504940032958984, 7.224082946777344, 7.257938385009766, 7.714986801147461, 7.4939727783203125, 7.6160430908203125, 7.578849792480469, 7.890939712524414, 7.546901702880859, 7.411956787109375]
+
+ Median time (milliseconds):
+ 7.6071023941
+ ~~~
+
+Because the leaseholder for `vehicles` is in the same zone as the client request, this query took just 7.60ms compared to the similar query in New York that took 72.02ms.
+
+#### Writes
+
+The geographic distribution of data impacts write performance as well. For example, imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
+
+1. SSH to the instance in `us-west1-a` with the Python client.
+
+2. Create 100 Seattle-based users:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ {{page.app}} \
+ --host= \
+ --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \
+ --repeat=100 \
+ --times
+ ~~~
+
+ ~~~
+ Times (milliseconds):
+ [277.4538993835449, 50.12702941894531, 47.75214195251465, 48.13408851623535, 47.872066497802734, 48.65407943725586, 47.78695106506348, 49.14689064025879, 52.770137786865234, 49.00097846984863, 48.68602752685547, 47.387123107910156, 47.36208915710449, 47.6841926574707, 46.49209976196289, 47.06096649169922, 46.753883361816406, 46.304941177368164, 48.90894889831543, 48.63715171813965, 48.37393760681152, 49.23295974731445, 50.13418197631836, 48.310041427612305, 48.57516288757324, 47.62911796569824, 47.77693748474121, 47.505855560302734, 47.89996147155762, 49.79205131530762, 50.76479911804199, 50.21500587463379, 48.73299598693848, 47.55592346191406, 47.35088348388672, 46.7071533203125, 43.00808906555176, 43.1060791015625, 46.02813720703125, 47.91092872619629, 68.71294975280762, 49.241065979003906, 48.9039421081543, 47.82295227050781, 48.26998710632324, 47.631025314331055, 64.51892852783203, 48.12812805175781, 67.33417510986328, 48.603057861328125, 50.31013488769531, 51.02396011352539, 51.45716667175293, 50.85396766662598, 49.07512664794922, 47.49894142150879, 44.67201232910156, 43.827056884765625, 44.412851333618164, 46.69189453125, 49.55601692199707, 49.16882514953613, 49.88598823547363, 49.31306838989258, 46.875, 46.69594764709473, 48.31886291503906, 48.378944396972656, 49.0570068359375, 49.417972564697266, 48.22111129760742, 50.662994384765625, 50.58097839355469, 75.44088363647461, 51.05400085449219, 50.85110664367676, 48.187971115112305, 56.7781925201416, 42.47403144836426, 46.2191104888916, 53.96890640258789, 46.697139739990234, 48.99096488952637, 49.1330623626709, 46.34690284729004, 47.09315299987793, 46.39410972595215, 46.51689529418945, 47.58000373840332, 47.924041748046875, 48.426151275634766, 50.22597312927246, 50.1859188079834, 50.37498474121094, 49.861907958984375, 51.477909088134766, 73.09293746948242, 48.779964447021484, 45.13692855834961, 42.2968864440918]
+
+ Median time (milliseconds):
+ 48.4025478363
+ ~~~
+
+3. SSH to the instance in `us-east1-b` with the Python client.
+
+4. Create 100 new NY-based users:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ {{page.app}} \
+ --host= \
+ --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \
+ --repeat=100 \
+ --times
+ ~~~
+
+ ~~~
+ Times (milliseconds):
+ [131.05082511901855, 116.88899993896484, 115.15498161315918, 117.095947265625, 121.04082107543945, 115.8750057220459, 113.80696296691895, 113.05880546569824, 118.41201782226562, 125.30899047851562, 117.5389289855957, 115.23890495300293, 116.84799194335938, 120.0411319732666, 115.62800407409668, 115.08989334106445, 113.37089538574219, 115.15498161315918, 115.96989631652832, 133.1961154937744, 114.25995826721191, 118.09396743774414, 122.24102020263672, 116.14608764648438, 114.80998992919922, 131.9139003753662, 114.54391479492188, 115.15307426452637, 116.7759895324707, 135.10799407958984, 117.18511581420898, 120.15485763549805, 118.0570125579834, 114.52388763427734, 115.28396606445312, 130.00011444091797, 126.45292282104492, 142.69423484802246, 117.60401725769043, 134.08493995666504, 117.47002601623535, 115.75007438659668, 117.98381805419922, 115.83089828491211, 114.88890647888184, 113.23404312133789, 121.1700439453125, 117.84791946411133, 115.35286903381348, 115.0820255279541, 116.99700355529785, 116.67394638061523, 116.1041259765625, 114.67289924621582, 112.98894882202148, 117.1119213104248, 119.78602409362793, 114.57300186157227, 129.58717346191406, 118.37983131408691, 126.68204307556152, 118.30306053161621, 113.27195167541504, 114.22920227050781, 115.80777168273926, 116.81294441223145, 114.76683616638184, 115.1430606842041, 117.29192733764648, 118.24417114257812, 116.56999588012695, 113.8620376586914, 114.88819122314453, 120.80597877502441, 132.39002227783203, 131.00910186767578, 114.56179618835449, 117.03896522521973, 117.72680282592773, 115.6010627746582, 115.27681350708008, 114.52317237854004, 114.87483978271484, 117.78903007507324, 116.65701866149902, 122.6949691772461, 117.65193939208984, 120.5449104309082, 115.61179161071777, 117.54202842712402, 114.70890045166016, 113.58809471130371, 129.7171115875244, 117.57993698120117, 117.1119213104248, 117.64001846313477, 140.66505432128906, 136.41691207885742, 116.24789237976074, 115.19908905029297]
+
+ Median time (milliseconds):
+ 116.868495941
+ ~~~
+
+It took 48.40ms to create a user in Seattle and 116.86ms to create a user in New York. To better understand this discrepancy, let's look at the distribution of data for the `users` table:
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach sql \
+{{page.certs}} \
+--host= \
+--database=movr \
+--execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;"
+~~~
+
+~~~
+ start_key | end_key | range_id | replicas | lease_holder
++-----------+---------+----------+----------+--------------+
+ NULL | NULL | 49 | {2,6,8} | 6
+(1 row)
+~~~
+
+For the single range containing `users` data, one replica is in each zone, with the leaseholder in the `us-west1-a` zone. This means that:
+
+- When creating a user in Seattle, the request doesn't have to leave the zone to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-west1-b` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client.
+- When creating a user in New York, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-west1-a`. It then has to wait for confirmation from either the replica in `us-west1-b` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client back in the east.
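The two bullets above can be turned into a back-of-the-envelope model (the RTT figures below are assumptions for illustration, not measurements from this cluster):

```python
# Sketch of single-range write latency with three replicas: the client
# reaches the leaseholder, which then needs one consensus acknowledgment
# from the nearer of the two other replicas. RTTs are assumed values (ms).
rtt_west_to_la = 5     # us-west1-a <-> us-west1-b (assumed)
rtt_west_to_east = 70  # us-west1-a <-> us-east1-b (assumed)

def write_latency(client_to_leaseholder_rtt):
    # Client round trip to the leaseholder, plus the fastest consensus
    # round trip (the leaseholder only waits for one other replica).
    fastest_consensus = min(rtt_west_to_la, rtt_west_to_east)
    return client_to_leaseholder_rtt + fastest_consensus

seattle = write_latency(0)                  # client in the leaseholder's zone
new_york = write_latency(rtt_west_to_east)  # client a cross-country round trip away
print(seattle, new_york)
```

With these assumed numbers the model gives roughly 5 ms versus 75 ms, which matches the shape of the measurements above, if not the exact magnitudes.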
diff --git a/_includes/v20.2/performance/tuning-secure.py b/_includes/v20.2/performance/tuning-secure.py
new file mode 100644
index 00000000000..a644dbb1c87
--- /dev/null
+++ b/_includes/v20.2/performance/tuning-secure.py
@@ -0,0 +1,77 @@
+#!/usr/bin/env python
+
+import argparse
+import psycopg2
+import time
+
+parser = argparse.ArgumentParser(
+ description="test performance of statements against movr database")
+parser.add_argument("--host", required=True,
+ help="ip address of one of the CockroachDB nodes")
+parser.add_argument("--statement", required=True,
+ help="statement to execute")
+parser.add_argument("--repeat", type=int,
+ help="number of times to repeat the statement", default = 20)
+parser.add_argument("--times",
+ help="print time for each repetition of the statement", action="store_true")
+parser.add_argument("--cumulative",
+ help="print cumulative time for all repetitions of the statement", action="store_true")
+args = parser.parse_args()
+
+conn = psycopg2.connect(
+ database='movr',
+ user='root',
+ host=args.host,
+ port=26257,
+ sslmode='require',
+ sslrootcert='certs/ca.crt',
+ sslkey='certs/client.root.key',
+ sslcert='certs/client.root.crt'
+)
+conn.set_session(autocommit=True)
+cur = conn.cursor()
+
+def median(lst):
+ n = len(lst)
+ if n < 1:
+ return None
+ if n % 2 == 1:
+ return sorted(lst)[n//2]
+ else:
+ return sum(sorted(lst)[n//2-1:n//2+1])/2.0
+
+times = list()
+for n in range(args.repeat):
+ start = time.time()
+ statement = args.statement
+ cur.execute(statement)
+ if n < 1:
+ if cur.description is not None:
+ colnames = [desc[0] for desc in cur.description]
+ print("")
+ print("Result:")
+ print(colnames)
+ rows = cur.fetchall()
+ for row in rows:
+ print([str(cell) for cell in row])
+ end = time.time()
+ times.append((end - start)* 1000)
+
+cur.close()
+conn.close()
+
+print("")
+if args.times:
+ print("Times (milliseconds):")
+ print(times)
+ print("")
+# print("Average time (milliseconds):")
+# print(float(sum(times))/len(times))
+# print("")
+print("Median time (milliseconds):")
+print(median(times))
+print("")
+if args.cumulative:
+ print("Cumulative time (milliseconds):")
+ print(sum(times))
+ print("")
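The `median` helper above is the only statistic the script reports by default; a quick, illustrative sanity check of its behavior:

```python
# Same logic as the script's median helper, exercised on small inputs.
def median(lst):
    n = len(lst)
    if n < 1:
        return None
    if n % 2 == 1:
        return sorted(lst)[n//2]
    else:
        return sum(sorted(lst)[n//2-1:n//2+1])/2.0

assert median([]) is None                     # no samples: nothing to report
assert median([116.9]) == 116.9               # single sample
assert median([120, 110, 130]) == 120         # odd count: middle value
assert median([110, 120, 130, 140]) == 125.0  # even count: mean of the middle two
```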
diff --git a/_includes/v20.2/performance/tuning.py b/_includes/v20.2/performance/tuning.py
new file mode 100644
index 00000000000..dcb567dad91
--- /dev/null
+++ b/_includes/v20.2/performance/tuning.py
@@ -0,0 +1,73 @@
+#!/usr/bin/env python
+
+import argparse
+import psycopg2
+import time
+
+parser = argparse.ArgumentParser(
+ description="test performance of statements against movr database")
+parser.add_argument("--host", required=True,
+ help="ip address of one of the CockroachDB nodes")
+parser.add_argument("--statement", required=True,
+ help="statement to execute")
+parser.add_argument("--repeat", type=int,
+ help="number of times to repeat the statement", default = 20)
+parser.add_argument("--times",
+ help="print time for each repetition of the statement", action="store_true")
+parser.add_argument("--cumulative",
+ help="print cumulative time for all repetitions of the statement", action="store_true")
+args = parser.parse_args()
+
+conn = psycopg2.connect(
+ database='movr',
+ user='root',
+ host=args.host,
+ port=26257
+)
+conn.set_session(autocommit=True)
+cur = conn.cursor()
+
+def median(lst):
+ n = len(lst)
+ if n < 1:
+ return None
+ if n % 2 == 1:
+ return sorted(lst)[n//2]
+ else:
+ return sum(sorted(lst)[n//2-1:n//2+1])/2.0
+
+times = list()
+for n in range(args.repeat):
+ start = time.time()
+ statement = args.statement
+ cur.execute(statement)
+ if n < 1:
+ if cur.description is not None:
+ colnames = [desc[0] for desc in cur.description]
+ print("")
+ print("Result:")
+ print(colnames)
+ rows = cur.fetchall()
+ for row in rows:
+ print([str(cell) for cell in row])
+ end = time.time()
+ times.append((end - start)* 1000)
+
+cur.close()
+conn.close()
+
+print("")
+if args.times:
+ print("Times (milliseconds):")
+ print(times)
+ print("")
+# print("Average time (milliseconds):")
+# print(float(sum(times))/len(times))
+# print("")
+print("Median time (milliseconds):")
+print(median(times))
+print("")
+if args.cumulative:
+ print("Cumulative time (milliseconds):")
+ print(sum(times))
+ print("")
diff --git a/_includes/v20.2/performance/use-hash-sharded-indexes.md b/_includes/v20.2/performance/use-hash-sharded-indexes.md
new file mode 100644
index 00000000000..ff487520578
--- /dev/null
+++ b/_includes/v20.2/performance/use-hash-sharded-indexes.md
@@ -0,0 +1 @@
+For performance reasons, we [discourage indexing on sequential keys](indexes.html#indexing-columns). If, however, you are working with a table that must be indexed on sequential keys, you should use [hash-sharded indexes](indexes.html#hash-sharded-indexes). Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hotspots and improving write performance on sequentially-keyed indexes at a small cost to read performance.
\ No newline at end of file
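The mechanism can be sketched in plain Python (a model of the concept only; CockroachDB's actual hash function differs): the index prefixes each key with a computed shard, so strictly sequential keys spread across the bucket count instead of piling onto a single range.

```python
import hashlib

BUCKET_COUNT = 8  # illustrative; corresponds to the index's bucket count

def shard(key, bucket_count=BUCKET_COUNT):
    # Deterministic stand-in for the index's internal hash function.
    digest = hashlib.sha256(str(key).encode()).hexdigest()
    return int(digest, 16) % bucket_count

# 1,000 strictly sequential keys, as a sequential write workload would generate.
shards = [shard(k) for k in range(1000)]

# Sequential inserts now touch every shard rather than a single hot range.
print(sorted(set(shards)))  # [0, 1, 2, 3, 4, 5, 6, 7]
```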
diff --git a/_includes/v20.2/prod-deployment/advertise-addr-join.md b/_includes/v20.2/prod-deployment/advertise-addr-join.md
new file mode 100644
index 00000000000..67019d1fcea
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/advertise-addr-join.md
@@ -0,0 +1,4 @@
+Flag | Description
+-----|------------
+`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
+`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
diff --git a/_includes/v20.2/prod-deployment/backup.sh b/_includes/v20.2/prod-deployment/backup.sh
new file mode 100644
index 00000000000..b1621eeb96a
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/backup.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+set -euo pipefail
+
+# This script creates a full backup the first time it runs in a given week
+# and incremental backups on subsequent runs in the same week: the backup
+# destination embeds the week number, and BACKUP appends an incremental
+# backup when the destination already contains a full backup.
+
+what="" # Leave empty for a full cluster backup, or set to "DATABASE database_name" to back up a single database.
+base="/backups" # The URL where you want to store the backup.
+extra="" # Any additional parameters that need to be appended to the BACKUP URI, e.g., AWS key params.
+backup_parameters= # e.g., "WITH revision_history"
+
+# Customize the `cockroach sql` command with `--host`, `--certs-dir` or `--insecure`, `--port`, and additional flags as needed to connect to the SQL client.
+runsql() { cockroach sql --insecure -e "$1"; }
+
+destination="${base}/$(date +"%Y-%V")${extra}" # %V is the week number of the year, with Monday as the first day of the week.
+
+runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' $backup_parameters"
+echo "backed up to ${destination}"
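The script's rotation hinges on the destination string: `%V` changes once per week, so the first run of a week targets a fresh URI and later runs in the same week reuse it. A sketch of the same computation (the base path is illustrative):

```python
import datetime

base = "/backups"  # mirrors the script's $base; illustrative path
today = datetime.date.today()

# strftime("%Y-%V") matches the shell's $(date +"%Y-%V"): year plus ISO
# week number, so the value is stable within a week and changes between weeks.
destination = "{}/{}".format(base, today.strftime("%Y-%V"))
print(destination)  # e.g., /backups/2020-23
```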
diff --git a/_includes/v20.2/prod-deployment/insecure-initialize-cluster.md b/_includes/v20.2/prod-deployment/insecure-initialize-cluster.md
new file mode 100644
index 00000000000..b21a1a6fd97
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/insecure-initialize-cluster.md
@@ -0,0 +1,12 @@
+On your local machine, complete the node startup process and have them join together as a cluster:
+
+1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already.
+
+2. Run the [`cockroach init`](cockroach-init.html) command, with the `--host` flag set to the address of any node:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach init --insecure --host=
+ ~~~
+
+ Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
diff --git a/_includes/v20.2/prod-deployment/insecure-recommendations.md b/_includes/v20.2/prod-deployment/insecure-recommendations.md
new file mode 100644
index 00000000000..11bcbe83d83
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/insecure-recommendations.md
@@ -0,0 +1,13 @@
+- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks:
+ - Your cluster is open to any client that can access any node's IP addresses.
+ - Any user, even `root`, can log in without providing a password.
+ - Any user, connecting as `root`, can read or write any data in your cluster.
+ - There is no network encryption or authentication, and thus no confidentiality.
+
+- Decide how you want to access your Admin UI:
+
+ Access Level | Description
+ -------------|------------
+ Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
+ Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
+ Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI.
diff --git a/_includes/v20.2/prod-deployment/insecure-requirements.md b/_includes/v20.2/prod-deployment/insecure-requirements.md
new file mode 100644
index 00000000000..170a566be3a
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/insecure-requirements.md
@@ -0,0 +1,9 @@
+- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries.
+
+- Your network configuration must allow TCP communication on the following ports:
+ - `26257` for intra-cluster and client-cluster communication
+ - `8080` to expose your Admin UI
+
+- Carefully review the [Production Checklist](recommended-production-settings.html) and recommended [Topology Patterns](topology-patterns.html).
+
+{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %}
\ No newline at end of file
diff --git a/_includes/v20.2/prod-deployment/insecure-scale-cluster.md b/_includes/v20.2/prod-deployment/insecure-scale-cluster.md
new file mode 100644
index 00000000000..349f159c2f0
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/insecure-scale-cluster.md
@@ -0,0 +1,117 @@
+You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
+
+
+
+
+
+
+
+
+
+For each additional node you want to add to the cluster, complete the following steps:
+
+1. SSH to the machine where you want the node to run.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+ If you get a permissions error, prefix the command with `sudo`.
+
+4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier).
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start \
+ --insecure \
+ --advertise-addr= \
+ --join=,, \
+ --cache=.25 \
+ --max-sql-memory=.25 \
+ --background
+ ~~~
+
+5. Update your load balancer to recognize the new node.
+
+
+
+
+
+For each additional node you want to add to the cluster, complete the following steps:
+
+1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+ If you get a permissions error, prefix the command with `sudo`.
+
+4. Create the Cockroach directory:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir /var/lib/cockroach
+ ~~~
+
+5. Create a Unix user named `cockroach`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ useradd cockroach
+ ~~~
+
+6. Change the ownership of the `cockroach` directory to the `cockroach` user:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ chown cockroach /var/lib/cockroach
+ ~~~
+
+7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -q https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service
+ ~~~
+
+ Alternatively, you can create the file yourself and copy the script into it:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %}
+ ~~~
+
+ Save the file in the `/etc/systemd/system/` directory.
+
+8. Customize the sample configuration template for your deployment:
+
+ Specify values for the following flags in the sample configuration template:
+
+ {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
+
+9. Repeat these steps for each additional node that you want in your cluster.
+
+
diff --git a/_includes/v20.2/prod-deployment/insecure-start-nodes.md b/_includes/v20.2/prod-deployment/insecure-start-nodes.md
new file mode 100644
index 00000000000..6f962336a73
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/insecure-start-nodes.md
@@ -0,0 +1,148 @@
+You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
+
+
+
+
+
+
+
+
+
+For each initial node of your cluster, complete the following steps:
+
+{{site.data.alerts.callout_info}}
+After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
+{{site.data.alerts.end}}
+
+1. SSH to the machine where you want the node to run.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+ If you get a permissions error, prefix the command with `sudo`.
+
+4. Run the [`cockroach start`](cockroach-start.html) command:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start \
+ --insecure \
+ --advertise-addr= \
+ --join=,, \
+ --cache=.25 \
+ --max-sql-memory=.25 \
+ --background
+ ~~~
+
+ This command primes the node to start, using the following flags:
+
+ Flag | Description
+ -----|------------
+ `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication.
+ `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
+ `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
+ `--cache` `--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
+ `--background` | Starts the node in the background so you gain control of the terminal to issue more commands.
+
+ When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality).
+
+ For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html).
+
+5. Repeat these steps for each additional node that you want in your cluster.
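As a quick illustration of what `.25` means for the `--cache` and `--max-sql-memory` flags above (a hypothetical 16 GiB machine, not a recommendation):

```python
# Byte budgets implied by --cache=.25 and --max-sql-memory=.25 on an
# assumed 16 GiB host.
total_memory_bytes = 16 * 1024**3  # hypothetical machine
fraction = 0.25                    # the value passed to each flag

budget_bytes = int(total_memory_bytes * fraction)
print(budget_bytes // 1024**3)  # 4 GiB each for the cache and for SQL memory
```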
+
+
+
+
+
+For each initial node of your cluster, complete the following steps:
+
+{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
+
+1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+ If you get a permissions error, prefix the command with `sudo`.
+
+4. Create the Cockroach directory:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir /var/lib/cockroach
+ ~~~
+
+5. Create a Unix user named `cockroach`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ useradd cockroach
+ ~~~
+
+6. Change the ownership of the `cockroach` directory to the `cockroach` user:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ chown cockroach /var/lib/cockroach
+ ~~~
+
+7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -q https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service
+ ~~~
+
+ Alternatively, you can create the file yourself and copy the script into it:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %}
+ ~~~
+
+8. In the sample configuration template, specify values for the following flags:
+
+ {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
+
+ When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality).
+
+ For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html).
+
+9. Start the CockroachDB cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ systemctl start insecurecockroachdb
+ ~~~
+
+10. Repeat these steps for each additional node that you want in your cluster.
+
+{{site.data.alerts.callout_info}}
+`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb`.
+{{site.data.alerts.end}}
+
+
diff --git a/_includes/v20.2/prod-deployment/insecure-test-cluster.md b/_includes/v20.2/prod-deployment/insecure-test-cluster.md
new file mode 100644
index 00000000000..996b3c9a2d7
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/insecure-test-cluster.md
@@ -0,0 +1,41 @@
+CockroachDB replicates and distributes data behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway.
+
+When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes.
+
+Use the [built-in SQL client](cockroach-sql.html) locally as follows:
+
+1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --insecure --host=
+ ~~~
+
+2. Create an `insecurenodetest` database:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE DATABASE insecurenodetest;
+ ~~~
+
+3. View the cluster's databases, which will include `insecurenodetest`:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SHOW DATABASES;
+ ~~~
+
+ ~~~
+ +--------------------+
+ | Database |
+ +--------------------+
+ | crdb_internal |
+ | information_schema |
+ | insecurenodetest |
+ | pg_catalog |
+ | system |
+ +--------------------+
+ (5 rows)
+ ~~~
+
+4. Use `\q` to exit the SQL shell.
diff --git a/_includes/v20.2/prod-deployment/insecure-test-load-balancing.md b/_includes/v20.2/prod-deployment/insecure-test-load-balancing.md
new file mode 100644
index 00000000000..9e594e0a864
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/insecure-test-load-balancing.md
@@ -0,0 +1,41 @@
+CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
+
+{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}}
+
+1. SSH to the machine where you want to run the sample TPC-C workload.
+
+ This should be a machine that is not running a CockroachDB node.
+
+2. Download `workload` and make it executable:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
+ ~~~
+
+3. Rename and copy `workload` into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i workload.LATEST /usr/local/bin/workload
+ ~~~
+
+4. Start the TPC-C workload, pointing it at the IP address of the load balancer:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ workload run tpcc \
+ --drop \
+ --init \
+ --duration=20m \
+ --tolerate-errors \
+ "postgresql://root@:26257/tpcc?sslmode=disable"
+ ~~~
+
+ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.
+
+ {{site.data.alerts.callout_success}}For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`.{{site.data.alerts.end}}
+
+5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
+
+ Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
diff --git a/_includes/v20.2/prod-deployment/insecurecockroachdb.service b/_includes/v20.2/prod-deployment/insecurecockroachdb.service
new file mode 100644
index 00000000000..b027b941009
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/insecurecockroachdb.service
@@ -0,0 +1,16 @@
+[Unit]
+Description=Cockroach Database cluster node
+Requires=network.target
+[Service]
+Type=notify
+WorkingDirectory=/var/lib/cockroach
+ExecStart=/usr/local/bin/cockroach start --insecure --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25
+TimeoutStopSec=60
+Restart=always
+RestartSec=10
+StandardOutput=syslog
+StandardError=syslog
+SyslogIdentifier=cockroach
+User=cockroach
+[Install]
+WantedBy=default.target
diff --git a/_includes/v20.2/prod-deployment/monitor-cluster.md b/_includes/v20.2/prod-deployment/monitor-cluster.md
new file mode 100644
index 00000000000..cb8185eac19
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/monitor-cluster.md
@@ -0,0 +1,3 @@
+Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
+
+For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html).
diff --git a/_includes/v20.2/prod-deployment/prod-see-also.md b/_includes/v20.2/prod-deployment/prod-see-also.md
new file mode 100644
index 00000000000..aa39f71bd9f
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/prod-see-also.md
@@ -0,0 +1,8 @@
+- [Production Checklist](recommended-production-settings.html)
+- [Manual Deployment](manual-deployment.html)
+- [Orchestrated Deployment](orchestration.html)
+- [Monitoring and Alerting](monitoring-and-alerting.html)
+- [Performance Benchmarking](performance-benchmarking-with-tpc-c-1k-warehouses.html)
+- [Performance Tuning](performance-tuning.html)
+- [Test Deployment](deploy-a-test-cluster.html)
+- [Local Deployment](start-a-local-cluster.html)
diff --git a/_includes/v20.2/prod-deployment/secure-generate-certificates.md b/_includes/v20.2/prod-deployment/secure-generate-certificates.md
new file mode 100644
index 00000000000..2792ddc6099
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/secure-generate-certificates.md
@@ -0,0 +1,201 @@
+You can use either `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands.
+
+Locally, you'll need to [create the following certificates and keys](cockroach-cert.html):
+
+- A certificate authority (CA) key pair (`ca.crt` and `ca.key`).
+- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers.
+- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine.
+
+{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}}
+
+1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already.
+
+2. Create two directories:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir certs
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir my-safe-directory
+ ~~~
+ - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes.
+ - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes.
+
+3. Create the CA certificate and key:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-ca \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-node \
+ \
+ \
+ \
+ \
+ localhost \
+ 127.0.0.1 \
+ \
+ \
+ \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+5. Upload the CA certificate and node certificate and key to the first node:
+
+ {% if page.title contains "Google" %}
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ gcloud compute ssh \
+ --project \
+ --command "mkdir certs"
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ `gcloud compute ssh` associates your public SSH key with the GCP project and is only needed when connecting to the first node. See the [GCP docs](https://cloud.google.com/sdk/gcloud/reference/compute/ssh) for more details.
+ {{site.data.alerts.end}}
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ scp certs/ca.crt \
+ certs/node.crt \
+ certs/node.key \
+ @:~/certs
+ ~~~
+
+ {% elsif page.title contains "AWS" %}
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ssh-add /path/.pem
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ssh @ "mkdir certs"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ scp certs/ca.crt \
+ certs/node.crt \
+ certs/node.key \
+ @:~/certs
+ ~~~
+
+ {% else %}
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ssh @ "mkdir certs"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ scp certs/ca.crt \
+ certs/node.crt \
+ certs/node.key \
+ @:~/certs
+ ~~~
+ {% endif %}
+
+6. Delete the local copy of the node certificate and key:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ rm certs/node.crt certs/node.key
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag.
+ {{site.data.alerts.end}}
+
+7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-node \
+ \
+ \
+ \
+ \
+ localhost \
+ 127.0.0.1 \
+ \
+ \
+ \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+8. Upload the CA certificate and node certificate and key to the second node:
+
+ {% if page.title contains "AWS" %}
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ssh @ "mkdir certs"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ scp certs/ca.crt \
+ certs/node.crt \
+ certs/node.key \
+ @:~/certs
+ ~~~
+
+ {% else %}
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ssh @ "mkdir certs"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ scp certs/ca.crt \
+ certs/node.crt \
+ certs/node.key \
+ @:~/certs
+ ~~~
+ {% endif %}
+
+9. Repeat steps 6 - 8 for each additional node.
+
+10. Create a client certificate and key for the `root` user:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach cert create-client \
+ root \
+ --certs-dir=certs \
+ --ca-key=my-safe-directory/ca.key
+ ~~~
+
+11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ ssh @ "mkdir certs"
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ scp certs/ca.crt \
+ certs/client.root.crt \
+ certs/client.root.key \
+ @:~/certs
+ ~~~
+
+ In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you might also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well.
+
+{{site.data.alerts.callout_info}}
+On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster).
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/prod-deployment/secure-initialize-cluster.md b/_includes/v20.2/prod-deployment/secure-initialize-cluster.md
new file mode 100644
index 00000000000..77443246e9b
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/secure-initialize-cluster.md
@@ -0,0 +1,8 @@
+On your local machine, run the [`cockroach init`](cockroach-init.html) command to complete the node startup process and have them join together as a cluster:
+
+{% include copy-clipboard.html %}
+~~~ shell
+$ cockroach init --certs-dir=certs --host=
+~~~
+
+After running this command, each node prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
diff --git a/_includes/v20.2/prod-deployment/secure-recommendations.md b/_includes/v20.2/prod-deployment/secure-recommendations.md
new file mode 100644
index 00000000000..85b0b0b31d0
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/secure-recommendations.md
@@ -0,0 +1,7 @@
+- Decide how you want to access your Admin UI:
+
+ Access Level | Description
+ -------------|------------
+ Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
+ Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
+ Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI.
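+
+A "completely closed" setup can still reach the Admin UI through an SSH tunnel. As a sketch (the node address and SSH user below are placeholders, not values from this guide), the command builds and prints the port-forwarding invocation to run from your local machine:
+
+{% include copy-clipboard.html %}
+~~~ shell
+# Forward local port 8080 to the Admin UI port on a node; run the printed
+# command in its own terminal, then open https://localhost:8080 locally.
+tunnel_cmd="ssh -N -L 8080:localhost:8080 ssh-user@node-address"
+echo "$tunnel_cmd"
+~~~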
diff --git a/_includes/v20.2/prod-deployment/secure-requirements.md b/_includes/v20.2/prod-deployment/secure-requirements.md
new file mode 100644
index 00000000000..78ba1467141
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/secure-requirements.md
@@ -0,0 +1,11 @@
+- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates.
+
+- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries.
+
+- Your network configuration must allow TCP communication on the following ports:
+ - `26257` for intra-cluster and client-cluster communication
+ - `8080` to expose your Admin UI
+
+- Carefully review the [Production Checklist](recommended-production-settings.html) and recommended [Topology Patterns](topology-patterns.html).
+
+{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %}
\ No newline at end of file
diff --git a/_includes/v20.2/prod-deployment/secure-scale-cluster.md b/_includes/v20.2/prod-deployment/secure-scale-cluster.md
new file mode 100644
index 00000000000..b86aab81822
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/secure-scale-cluster.md
@@ -0,0 +1,124 @@
+You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
+
+
+
+
+
+
+
+
+
+For each additional node you want to add to the cluster, complete the following steps:
+
+1. SSH to the machine where you want the node to run.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+ If you get a permissions error, prefix the command with `sudo`.
+
+4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier).
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start \
+ --certs-dir=certs \
+ --advertise-addr= \
+ --join=,, \
+ --cache=.25 \
+ --max-sql-memory=.25 \
+ --background
+ ~~~
+
+5. Update your load balancer to recognize the new node.
+
+
+
+
+
+For each additional node you want to add to the cluster, complete the following steps:
+
+1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+ If you get a permissions error, prefix the command with `sudo`.
+
+4. Create the Cockroach directory:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir /var/lib/cockroach
+ ~~~
+
+5. Create a Unix user named `cockroach`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ useradd cockroach
+ ~~~
+
+6. Move the `certs` directory to the `cockroach` directory:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mv certs /var/lib/cockroach/
+ ~~~
+
+7. Change the ownership of the `cockroach` directory to the user `cockroach`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ chown -R cockroach:cockroach /var/lib/cockroach
+ ~~~
+
+8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service):
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
+ ~~~
+
+ Alternatively, you can create the file yourself and copy the script into it:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
+ ~~~
+
+ Save the file in the `/etc/systemd/system/` directory.
+
+9. Customize the sample configuration template for your deployment:
+
+ Specify values for the following flags in the sample configuration template:
+
+ {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
+
+10. Repeat these steps for each additional node that you want in your cluster.
+
+
diff --git a/_includes/v20.2/prod-deployment/secure-start-nodes.md b/_includes/v20.2/prod-deployment/secure-start-nodes.md
new file mode 100644
index 00000000000..5a3b441913c
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/secure-start-nodes.md
@@ -0,0 +1,153 @@
+You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
+
+
+
+
+
+
+
+
+
+For each initial node of your cluster, complete the following steps:
+
+{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
+
+1. SSH to the machine where you want the node to run.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+ If you get a permissions error, prefix the command with `sudo`.
+
+4. Run the [`cockroach start`](cockroach-start.html) command:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach start \
+ --certs-dir=certs \
+ --advertise-addr= \
+ --join=,, \
+ --cache=.25 \
+ --max-sql-memory=.25 \
+ --background
+ ~~~
+
+ This command primes the node to start, using the following flags:
+
+ Flag | Description
+ -----|------------
+ `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node.
+ `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
+ `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
+ `--cache` `--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
+ `--background` | Starts the node in the background so you gain control of the terminal to issue more commands.
+
+ When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality).
+
+ For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=:8080`. To set these options manually, see [Start a Node](cockroach-start.html).
+
+5. Repeat these steps for each additional node that you want in your cluster.
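+
+As a back-of-the-envelope check of what `--cache=.25` and `--max-sql-memory=.25` grant on a given machine, the sketch below computes 25% of system memory (it reads `/proc/meminfo` on Linux and falls back to an assumed 8 GiB elsewhere):
+
+{% include copy-clipboard.html %}
+~~~ shell
+# 25% of total RAM: the budget granted to the cache and to SQL memory.
+total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || echo 8388608)
+budget_mib=$(( total_kb / 4 / 1024 ))
+echo "cache / max-sql-memory budget: ${budget_mib} MiB each"
+~~~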
+
+
+
+
+
+For each initial node of your cluster, complete the following steps:
+
+{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
+
+1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
+
+2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget -qO- https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
+ | tar xvz
+ ~~~
+
+3. Copy the binary into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
+ ~~~
+
+ If you get a permissions error, prefix the command with `sudo`.
+
+4. Create the Cockroach directory:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mkdir /var/lib/cockroach
+ ~~~
+
+5. Create a Unix user named `cockroach`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ useradd cockroach
+ ~~~
+
+6. Move the `certs` directory to the `cockroach` directory:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ mv certs /var/lib/cockroach/
+ ~~~
+
+7. Change the ownership of the `cockroach` directory to the user `cockroach`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ chown -R cockroach:cockroach /var/lib/cockroach
+ ~~~
+
+8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo wget -qO /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
+ ~~~
+
+ Alternatively, you can create the file yourself and copy the script into it:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
+ ~~~
+
+9. In the sample configuration template, specify values for the following flags:
+
+ {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
+
+ When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality).
+
+ For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html).
+
+10. Start the CockroachDB cluster:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ systemctl start securecockroachdb
+ ~~~
+
+11. Repeat these steps for each additional node that you want in your cluster.
+
+{{site.data.alerts.callout_info}}
+`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb`.
+{{site.data.alerts.end}}
+
+
diff --git a/_includes/v20.2/prod-deployment/secure-test-cluster.md b/_includes/v20.2/prod-deployment/secure-test-cluster.md
new file mode 100644
index 00000000000..f3a1a4a5e89
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/secure-test-cluster.md
@@ -0,0 +1,41 @@
+CockroachDB replicates and distributes data behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway.
+
+When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes.
+
+Use the [built-in SQL client](cockroach-sql.html) locally as follows:
+
+1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cockroach sql --certs-dir=certs --host=
+ ~~~
+
+2. Create a `securenodetest` database:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > CREATE DATABASE securenodetest;
+ ~~~
+
+3. View the cluster's databases, which will include `securenodetest`:
+
+ {% include copy-clipboard.html %}
+ ~~~ sql
+ > SHOW DATABASES;
+ ~~~
+
+ ~~~
+ +--------------------+
+ | Database |
+ +--------------------+
+ | crdb_internal |
+ | information_schema |
+ | securenodetest |
+ | pg_catalog |
+ | system |
+ +--------------------+
+ (5 rows)
+ ~~~
+
+4. Use `\q` to exit the SQL shell.
\ No newline at end of file
diff --git a/_includes/v20.2/prod-deployment/secure-test-load-balancing.md b/_includes/v20.2/prod-deployment/secure-test-load-balancing.md
new file mode 100644
index 00000000000..45fb876eaf6
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/secure-test-load-balancing.md
@@ -0,0 +1,43 @@
+CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
+
+{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}}
+
+1. SSH to the machine where you want to run the sample TPC-C workload.
+
+ This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files.
+
+2. Download `workload` and make it executable:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
+ ~~~
+
+3. Rename and copy `workload` into the `PATH`:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ cp -i workload.LATEST /usr/local/bin/workload
+ ~~~
+
+4. Start the TPC-C workload, pointing it at the IP address of the load balancer and the location of the `ca.crt`, `client.root.crt`, and `client.root.key` files:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ workload run tpcc \
+ --drop \
+ --init \
+ --duration=20m \
+ --tolerate-errors \
+ "postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
+ ~~~
+
+ This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.
+
+ {{site.data.alerts.callout_success}}For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`.{{site.data.alerts.end}}
+
+5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
+
+ For each user who should have access to the Admin UI for a secure cluster, [create a user with a password](create-user.html#create-a-user-with-a-password) and [assign them to an `admin` role if necessary](admin-ui-overview.html#admin-ui-access). On accessing the Admin UI, the users will see a Login screen, where they will need to enter their usernames and passwords.
+
+ Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
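+
+The secure connection string in step 4 packs four TLS parameters into a single URL. A sketch of how it is assembled (the load balancer address is a placeholder; the `certs/` paths match the files uploaded earlier in this guide):
+
+{% include copy-clipboard.html %}
+~~~ shell
+# Build the TPC-C connection URL piece by piece.
+lb_addr="lb-address"   # placeholder for your load balancer's IP address
+url="postgresql://root@${lb_addr}:26257/tpcc?sslmode=verify-full"
+url="${url}&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
+echo "$url"
+~~~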
diff --git a/_includes/v20.2/prod-deployment/securecockroachdb.service b/_includes/v20.2/prod-deployment/securecockroachdb.service
new file mode 100644
index 00000000000..39054cf2e1d
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/securecockroachdb.service
@@ -0,0 +1,16 @@
+[Unit]
+Description=Cockroach Database cluster node
+Requires=network.target
+[Service]
+Type=notify
+WorkingDirectory=/var/lib/cockroach
+ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25
+TimeoutStopSec=60
+Restart=always
+RestartSec=10
+StandardOutput=syslog
+StandardError=syslog
+SyslogIdentifier=cockroach
+User=cockroach
+[Install]
+WantedBy=default.target
diff --git a/_includes/v20.2/prod-deployment/synchronize-clocks.md b/_includes/v20.2/prod-deployment/synchronize-clocks.md
new file mode 100644
index 00000000000..9d0fed14d5d
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/synchronize-clocks.md
@@ -0,0 +1,179 @@
+CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.
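+
+Concretely, with the default maximum offset, the self-termination threshold works out as follows:
+
+~~~ shell
+# 80% of the default 500ms max offset: the skew at which a node shuts down.
+max_offset_ms=500
+threshold_ms=$(( max_offset_ms * 80 / 100 ))
+echo "shutdown threshold: ${threshold_ms}ms"   # 400ms
+~~~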
+
+{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %}
+
+[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well.
+
+1. SSH to the first machine.
+
+2. Disable `timesyncd`, which tends to be active by default on some Linux distributions:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo timedatectl set-ntp no
+ ~~~
+
+ Verify that `timesyncd` is off:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ timedatectl
+ ~~~
+
+ Look for `Network time on: no` or `NTP enabled: no` in the output.
+
+3. Install the `ntp` package:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo apt-get install ntp
+ ~~~
+
+4. Stop the NTP daemon:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo service ntp stop
+ ~~~
+
+5. Sync the machine's clock with Google's NTP service:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo ntpd -b time.google.com
+ ~~~
+
+ To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
+
+ {% include copy-clipboard.html %}
+ ~~~
+ server time1.google.com iburst
+ server time2.google.com iburst
+ server time3.google.com iburst
+ server time4.google.com iburst
+ ~~~
+
+ Restart the NTP daemon:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo service ntp start
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+ {{site.data.alerts.end}}
+
+6. Verify that the machine is using a Google NTP server:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo ntpq -p
+ ~~~
+
+ The active NTP server will be marked with an asterisk.
+
+7. Repeat these steps for each machine where a CockroachDB node will run.
+
+{% elsif page.title contains "Google" %}
+
+Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should:
+
+- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/managing-instances#configure_ntp_for_your_instances).
+- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+
+{% elsif page.title contains "AWS" %}
+
+Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.
+
+- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
+ - Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out.
+ - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server.
+- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+
+{% elsif page.title contains "Azure" %}
+
+[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems.
+
+1. SSH to the first machine.
+
+2. Find the ID of the Hyper-V Time Synchronization device:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus
+ ~~~
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3
+ ~~~
+
+ ~~~
+ VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization]
+ Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee}
+ Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee
+ Rel_ID=12, target_cpu=0
+ ~~~
+
+3. Unbind the device, using the `Device_ID` from the previous command's output:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ echo | sudo tee /sys/bus/vmbus/drivers/hv_util/unbind
+ ~~~
+
+4. Install the `ntp` package:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo apt-get install ntp
+ ~~~
+
+5. Stop the NTP daemon:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo service ntp stop
+ ~~~
+
+6. Sync the machine's clock with Google's NTP service:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo ntpd -b time.google.com
+ ~~~
+
+ To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
+
+ {% include copy-clipboard.html %}
+ ~~~
+ server time1.google.com iburst
+ server time2.google.com iburst
+ server time3.google.com iburst
+ server time4.google.com iburst
+ ~~~
+
+ Restart the NTP daemon:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo service ntp start
+ ~~~
+
+ {{site.data.alerts.callout_info}}
+ We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
+ {{site.data.alerts.end}}
+
+7. Verify that the machine is using a Google NTP server:
+
+ {% include copy-clipboard.html %}
+ ~~~ shell
+ $ sudo ntpq -p
+ ~~~
+
+ The active NTP server will be marked with an asterisk.
+
+8. Repeat these steps for each machine where a CockroachDB node will run.
+
+{% endif %}
diff --git a/_includes/v20.2/prod-deployment/topology-recommendations.md b/_includes/v20.2/prod-deployment/topology-recommendations.md
new file mode 100644
index 00000000000..05e3cd71a0c
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/topology-recommendations.md
@@ -0,0 +1,13 @@
+- Run each node on a separate machine. Since CockroachDB replicates across nodes, running more than one node per machine increases the risk of data loss if a machine fails. Likewise, if a machine has multiple disks or SSDs, run one node with multiple `--store` flags and not one node per disk. For more details about stores, see [Start a Node](cockroach-start.html#store).
+
+- When starting each node, use the [`--locality`](cockroach-start.html#locality) flag to describe the node's location, for example, `--locality=region=west,zone=us-west-1`. The key-value pairs should be ordered from most to least inclusive, and the keys and order of key-value pairs must be the same on all nodes.
+
+- When deploying in a single availability zone:
+
+ - To be able to tolerate the failure of any 1 node, use at least 3 nodes with the [`default` 3-way replication factor](configure-replication-zones.html#view-the-default-replication-zone). In this case, if 1 node fails, each range retains 2 of its 3 replicas, a majority.
+
+ - To be able to tolerate 2 simultaneous node failures, use at least 5 nodes and [increase the `default` replication factor for user data](configure-replication-zones.html#edit-the-default-replication-zone) to 5. The replication factor for [important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) is 5 by default, so no adjustments are needed for internal data. In this case, if 2 nodes fail at the same time, each range retains 3 of its 5 replicas, a majority.
+
+- When deploying across multiple availability zones:
+ - To be able to tolerate the failure of 1 entire AZ in a region, use at least 3 AZs per region and set `--locality` on each node to spread data evenly across regions and AZs. In this case, if 1 AZ goes offline, the 2 remaining AZs retain a majority of replicas.
+ - To be able to tolerate the failure of 1 entire region, use at least 3 regions.
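+
+The failure-tolerance numbers above follow from simple majority arithmetic, sketched here:
+
+~~~ shell
+# A range with replication factor rf keeps a majority after losing
+# floor((rf - 1) / 2) replicas.
+for rf in 3 5; do
+  tolerated=$(( (rf - 1) / 2 ))
+  echo "replication factor ${rf}: tolerates ${tolerated} simultaneous failure(s)"
+done
+~~~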
\ No newline at end of file
diff --git a/_includes/v20.2/prod-deployment/use-cluster.md b/_includes/v20.2/prod-deployment/use-cluster.md
new file mode 100644
index 00000000000..e513a09f046
--- /dev/null
+++ b/_includes/v20.2/prod-deployment/use-cluster.md
@@ -0,0 +1,11 @@
+Now that your deployment is working, you can:
+
+1. [Implement your data model](sql-statements.html).
+2. [Create users](create-user.html) and [grant them privileges](grant.html).
+3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node.
+
+You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see [Configure Replication Zones](configure-replication-zones.html).
+
+{{site.data.alerts.callout_danger}}
+When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
+{{site.data.alerts.end}}
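As a sketch of the adjustment described in the callout above, the replication factor for the cluster-wide `default` zone can be raised with a `CONFIGURE ZONE` statement and then verified; see [Configure Replication Zones](configure-replication-zones.html) for the full syntax and for targeting individual system ranges:

~~~ sql
> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
> SHOW ZONE CONFIGURATION FOR RANGE default;
~~~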
diff --git a/_includes/v20.2/sql/aggregates.md b/_includes/v20.2/sql/aggregates.md
new file mode 100644
index 00000000000..d74c6ba9c61
--- /dev/null
+++ b/_includes/v20.2/sql/aggregates.md
@@ -0,0 +1,189 @@
+
+Calculates the bitwise XOR of the selected values.
+
+
+
diff --git a/_includes/v20.2/sql/begin-transaction-as-of-system-time-example.md b/_includes/v20.2/sql/begin-transaction-as-of-system-time-example.md
new file mode 100644
index 00000000000..ca8735152cd
--- /dev/null
+++ b/_includes/v20.2/sql/begin-transaction-as-of-system-time-example.md
@@ -0,0 +1,19 @@
+{% include copy-clipboard.html %}
+~~~ sql
+> BEGIN AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00';
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM orders;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM products;
+~~~
+
+{% include copy-clipboard.html %}
+~~~ sql
+> COMMIT;
+~~~
diff --git a/_includes/v20.2/sql/combine-alter-table-commands.md b/_includes/v20.2/sql/combine-alter-table-commands.md
new file mode 100644
index 00000000000..62839cce017
--- /dev/null
+++ b/_includes/v20.2/sql/combine-alter-table-commands.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_success}}
+This command can be combined with other `ALTER TABLE` commands in a single statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically).
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/sql/connection-parameters.md b/_includes/v20.2/sql/connection-parameters.md
new file mode 100644
index 00000000000..417a19d4e56
--- /dev/null
+++ b/_includes/v20.2/sql/connection-parameters.md
@@ -0,0 +1,8 @@
+Flag | Description
+-----|------------
+`--host` | The server host and port number to connect to. This can be the address of any node in the cluster. **Env Variable:** `COCKROACH_HOST` **Default:** `localhost:26257`
+`--port` `-p` | The server port to connect to. Note: The port number can also be specified via `--host`. **Env Variable:** `COCKROACH_PORT` **Default:** `26257`
+`--user` `-u` | The [SQL user](create-user.html) that will own the client session. **Env Variable:** `COCKROACH_USER` **Default:** `root`
+`--insecure` | Use an insecure connection. **Env Variable:** `COCKROACH_INSECURE` **Default:** `false`
+`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key. **Env Variable:** `COCKROACH_CERTS_DIR` **Default:** `${HOME}/.cockroach-certs/`
+`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments. **Env Variable:** `COCKROACH_URL` **Default:** no URL
diff --git a/_includes/v20.2/sql/crdb-internal-partitions-example.md b/_includes/v20.2/sql/crdb-internal-partitions-example.md
new file mode 100644
index 00000000000..39819038fce
--- /dev/null
+++ b/_includes/v20.2/sql/crdb-internal-partitions-example.md
@@ -0,0 +1,43 @@
+## Querying partitions programmatically
+
+The `crdb_internal.partitions` internal table contains information about the partitions in your database. In testing, scripting, and other programmatic environments, we recommend querying this table for partition information instead of using the `SHOW PARTITIONS` statement. For example, to get all `us_west` partitions in your database, you can run the following query:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT * FROM crdb_internal.partitions WHERE name='us_west';
+~~~
+
+~~~
+ table_id | index_id | parent_name | name | columns | column_names | list_value | range_value | zone_id | subzone_id
++----------+----------+-------------+---------+---------+--------------+-------------------------------------------------+-------------+---------+------------+
+ 53 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 0 | 0
+ 54 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 1
+ 54 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 2
+ 55 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 1
+ 55 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 2
+ 55 | 3 | NULL | us_west | 1 | vehicle_city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 3
+ 56 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 56 | 1
+ 58 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 58 | 1
+(8 rows)
+~~~
+
+Other internal tables, like `crdb_internal.tables`, include information that could be useful in conjunction with `crdb_internal.partitions`.
+
+For example, if you want the output for your partitions to include the name of the table and database, you can perform a join of the two tables:
+
+{% include copy-clipboard.html %}
+~~~ sql
+> SELECT
+ partitions.name AS partition_name, column_names, list_value, tables.name AS table_name, database_name
+ FROM crdb_internal.partitions JOIN crdb_internal.tables ON partitions.table_id=tables.table_id
+ WHERE tables.name='users';
+~~~
+
+~~~
+ partition_name | column_names | list_value | table_name | database_name
++----------------+--------------+-------------------------------------------------+------------+---------------+
+ us_west | city | ('seattle'), ('san francisco'), ('los angeles') | users | movr
+ us_east | city | ('new york'), ('boston'), ('washington dc') | users | movr
+ europe_west | city | ('amsterdam'), ('paris'), ('rome') | users | movr
+(3 rows)
+~~~
diff --git a/_includes/v20.2/sql/crdb-internal-partitions.md b/_includes/v20.2/sql/crdb-internal-partitions.md
new file mode 100644
index 00000000000..ebab5abe4ed
--- /dev/null
+++ b/_includes/v20.2/sql/crdb-internal-partitions.md
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_success}}
+In testing, scripting, and other programmatic environments, we recommend querying the `crdb_internal.partitions` internal table for partition information instead of using the `SHOW PARTITIONS` statement. For more information, see [Querying partitions programmatically](show-partitions.html#querying-partitions-programmatically).
+{{site.data.alerts.end}}
diff --git a/_includes/v20.2/sql/diagrams/add_column.html b/_includes/v20.2/sql/diagrams/add_column.html
new file mode 100644
index 00000000000..f59fd135d0e
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/add_column.html
@@ -0,0 +1,52 @@
+
diff --git a/_includes/v20.2/sql/diagrams/add_constraint.html b/_includes/v20.2/sql/diagrams/add_constraint.html
new file mode 100644
index 00000000000..a8f3b1c9c61
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/add_constraint.html
@@ -0,0 +1,38 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_column.html b/_includes/v20.2/sql/diagrams/alter_column.html
new file mode 100644
index 00000000000..538c7895cd9
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_column.html
@@ -0,0 +1,90 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_index_partition_by.html b/_includes/v20.2/sql/diagrams/alter_index_partition_by.html
new file mode 100644
index 00000000000..55136f4ad23
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_index_partition_by.html
@@ -0,0 +1,72 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_primary_key.html b/_includes/v20.2/sql/diagrams/alter_primary_key.html
new file mode 100644
index 00000000000..5996d6b8681
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_primary_key.html
@@ -0,0 +1,71 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_role.html b/_includes/v20.2/sql/diagrams/alter_role.html
new file mode 100644
index 00000000000..6291ec7cca0
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_role.html
@@ -0,0 +1,29 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_sequence_options.html b/_includes/v20.2/sql/diagrams/alter_sequence_options.html
new file mode 100644
index 00000000000..7d3068e25e9
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_sequence_options.html
@@ -0,0 +1,72 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_table.html b/_includes/v20.2/sql/diagrams/alter_table.html
new file mode 100644
index 00000000000..47092b1ed90
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_table.html
@@ -0,0 +1,255 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_table_partition_by.html b/_includes/v20.2/sql/diagrams/alter_table_partition_by.html
new file mode 100644
index 00000000000..073c8794394
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_table_partition_by.html
@@ -0,0 +1,81 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/alter_type.html b/_includes/v20.2/sql/diagrams/alter_type.html
new file mode 100644
index 00000000000..ace962f3b99
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_type.html
@@ -0,0 +1,45 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_user_password.html b/_includes/v20.2/sql/diagrams/alter_user_password.html
new file mode 100644
index 00000000000..0e014933d1b
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_user_password.html
@@ -0,0 +1,31 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_view.html b/_includes/v20.2/sql/diagrams/alter_view.html
new file mode 100644
index 00000000000..2e481fa60aa
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_view.html
@@ -0,0 +1,36 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/alter_zone_database.html b/_includes/v20.2/sql/diagrams/alter_zone_database.html
new file mode 100644
index 00000000000..11eeb471abb
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_zone_database.html
@@ -0,0 +1,61 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_zone_index.html b/_includes/v20.2/sql/diagrams/alter_zone_index.html
new file mode 100644
index 00000000000..ef64e2314d3
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_zone_index.html
@@ -0,0 +1,66 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_zone_partition.html b/_includes/v20.2/sql/diagrams/alter_zone_partition.html
new file mode 100644
index 00000000000..69ee2d0eb57
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_zone_partition.html
@@ -0,0 +1,84 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_zone_range.html b/_includes/v20.2/sql/diagrams/alter_zone_range.html
new file mode 100644
index 00000000000..890dcc7240c
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_zone_range.html
@@ -0,0 +1,61 @@
+
diff --git a/_includes/v20.2/sql/diagrams/alter_zone_table.html b/_includes/v20.2/sql/diagrams/alter_zone_table.html
new file mode 100644
index 00000000000..11c233ebc84
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/alter_zone_table.html
@@ -0,0 +1,61 @@
+
diff --git a/_includes/v20.2/sql/diagrams/backup.html b/_includes/v20.2/sql/diagrams/backup.html
new file mode 100644
index 00000000000..b2e6b998113
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/backup.html
@@ -0,0 +1,50 @@
+
diff --git a/_includes/v20.2/sql/diagrams/begin_transaction.html b/_includes/v20.2/sql/diagrams/begin_transaction.html
new file mode 100644
index 00000000000..7e40de65c56
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/begin_transaction.html
@@ -0,0 +1,50 @@
+
diff --git a/_includes/v20.2/sql/diagrams/cancel.html b/_includes/v20.2/sql/diagrams/cancel.html
new file mode 100644
index 00000000000..5091140ae13
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/cancel.html
@@ -0,0 +1,19 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/cancel_job.html b/_includes/v20.2/sql/diagrams/cancel_job.html
new file mode 100644
index 00000000000..e8cbeb150fe
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/cancel_job.html
@@ -0,0 +1,24 @@
+
diff --git a/_includes/v20.2/sql/diagrams/cancel_query.html b/_includes/v20.2/sql/diagrams/cancel_query.html
new file mode 100644
index 00000000000..612db072eb4
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/cancel_query.html
@@ -0,0 +1,36 @@
+
diff --git a/_includes/v20.2/sql/diagrams/cancel_session.html b/_includes/v20.2/sql/diagrams/cancel_session.html
new file mode 100644
index 00000000000..857f87adb18
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/cancel_session.html
@@ -0,0 +1,36 @@
+
diff --git a/_includes/v20.2/sql/diagrams/check_column_level.html b/_includes/v20.2/sql/diagrams/check_column_level.html
new file mode 100644
index 00000000000..59eec3e3c15
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/check_column_level.html
@@ -0,0 +1,70 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/check_table_level.html b/_includes/v20.2/sql/diagrams/check_table_level.html
new file mode 100644
index 00000000000..6066d637220
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/check_table_level.html
@@ -0,0 +1,60 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/col_qual_list.html b/_includes/v20.2/sql/diagrams/col_qual_list.html
new file mode 100644
index 00000000000..290034152a6
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/col_qual_list.html
@@ -0,0 +1,115 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/col_qualification.html b/_includes/v20.2/sql/diagrams/col_qualification.html
new file mode 100644
index 00000000000..71573b90314
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/col_qualification.html
@@ -0,0 +1,154 @@
+
diff --git a/_includes/v20.2/sql/diagrams/column_def.html b/_includes/v20.2/sql/diagrams/column_def.html
new file mode 100644
index 00000000000..284e8dc5838
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/column_def.html
@@ -0,0 +1,23 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/comment.html b/_includes/v20.2/sql/diagrams/comment.html
new file mode 100644
index 00000000000..d9933514585
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/comment.html
@@ -0,0 +1,53 @@
+
diff --git a/_includes/v20.2/sql/diagrams/commit_transaction.html b/_includes/v20.2/sql/diagrams/commit_transaction.html
new file mode 100644
index 00000000000..12914f3e1cb
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/commit_transaction.html
@@ -0,0 +1,17 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/create_as_col_qual_list.html b/_includes/v20.2/sql/diagrams/create_as_col_qual_list.html
new file mode 100644
index 00000000000..791829c3ba3
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_as_col_qual_list.html
@@ -0,0 +1,17 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_as_constraint_def.html b/_includes/v20.2/sql/diagrams/create_as_constraint_def.html
new file mode 100644
index 00000000000..3699d2b1833
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_as_constraint_def.html
@@ -0,0 +1,20 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_changefeed.html b/_includes/v20.2/sql/diagrams/create_changefeed.html
new file mode 100644
index 00000000000..82b77b8360e
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_changefeed.html
@@ -0,0 +1,46 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_database.html b/_includes/v20.2/sql/diagrams/create_database.html
new file mode 100644
index 00000000000..c621b08e138
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_database.html
@@ -0,0 +1,42 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/create_index.html b/_includes/v20.2/sql/diagrams/create_index.html
new file mode 100644
index 00000000000..efef77b3721
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_index.html
@@ -0,0 +1,122 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_inverted_index.html b/_includes/v20.2/sql/diagrams/create_inverted_index.html
new file mode 100644
index 00000000000..92de493da93
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_inverted_index.html
@@ -0,0 +1,99 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_role.html b/_includes/v20.2/sql/diagrams/create_role.html
new file mode 100644
index 00000000000..3c9c43dedf3
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_role.html
@@ -0,0 +1,28 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/create_sequence.html b/_includes/v20.2/sql/diagrams/create_sequence.html
new file mode 100644
index 00000000000..134f4e59596
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_sequence.html
@@ -0,0 +1,80 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_stats.html b/_includes/v20.2/sql/diagrams/create_stats.html
new file mode 100644
index 00000000000..c02186ee5cb
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_stats.html
@@ -0,0 +1,25 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_table.html b/_includes/v20.2/sql/diagrams/create_table.html
new file mode 100644
index 00000000000..5a24a98e25c
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_table.html
@@ -0,0 +1,71 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_table_as.html b/_includes/v20.2/sql/diagrams/create_table_as.html
new file mode 100644
index 00000000000..8023d7826d5
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_table_as.html
@@ -0,0 +1,79 @@
+
diff --git a/_includes/v20.2/sql/diagrams/create_user.html b/_includes/v20.2/sql/diagrams/create_user.html
new file mode 100644
index 00000000000..1dc78bb289a
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_user.html
@@ -0,0 +1,39 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/create_view.html b/_includes/v20.2/sql/diagrams/create_view.html
new file mode 100644
index 00000000000..fe10170bff9
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/create_view.html
@@ -0,0 +1,51 @@
+
diff --git a/_includes/v20.2/sql/diagrams/default_value_column_level.html b/_includes/v20.2/sql/diagrams/default_value_column_level.html
new file mode 100644
index 00000000000..0ba9afca9c4
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/default_value_column_level.html
@@ -0,0 +1,64 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/delete.html b/_includes/v20.2/sql/diagrams/delete.html
new file mode 100644
index 00000000000..746c3d41e08
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/delete.html
@@ -0,0 +1,52 @@
+
diff --git a/_includes/v20.2/sql/diagrams/drop.html b/_includes/v20.2/sql/diagrams/drop.html
new file mode 100644
index 00000000000..c6034b008aa
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop.html
@@ -0,0 +1,33 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/drop_column.html b/_includes/v20.2/sql/diagrams/drop_column.html
new file mode 100644
index 00000000000..384f5219d9d
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_column.html
@@ -0,0 +1,43 @@
+
diff --git a/_includes/v20.2/sql/diagrams/drop_constraint.html b/_includes/v20.2/sql/diagrams/drop_constraint.html
new file mode 100644
index 00000000000..77cea230ccd
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_constraint.html
@@ -0,0 +1,45 @@
+
diff --git a/_includes/v20.2/sql/diagrams/drop_database.html b/_includes/v20.2/sql/diagrams/drop_database.html
new file mode 100644
index 00000000000..038eb0befc1
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_database.html
@@ -0,0 +1,31 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/drop_index.html b/_includes/v20.2/sql/diagrams/drop_index.html
new file mode 100644
index 00000000000..3e50bf8d4b1
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_index.html
@@ -0,0 +1,41 @@
+
diff --git a/_includes/v20.2/sql/diagrams/drop_role.html b/_includes/v20.2/sql/diagrams/drop_role.html
new file mode 100644
index 00000000000..0037ebf56ce
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_role.html
@@ -0,0 +1,25 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/drop_sequence.html b/_includes/v20.2/sql/diagrams/drop_sequence.html
new file mode 100644
index 00000000000..6507f7dec30
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_sequence.html
@@ -0,0 +1,34 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/drop_table.html b/_includes/v20.2/sql/diagrams/drop_table.html
new file mode 100644
index 00000000000..18ad4fdd502
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_table.html
@@ -0,0 +1,34 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/drop_user.html b/_includes/v20.2/sql/diagrams/drop_user.html
new file mode 100644
index 00000000000..57c3db991b9
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_user.html
@@ -0,0 +1,28 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/drop_view.html b/_includes/v20.2/sql/diagrams/drop_view.html
new file mode 100644
index 00000000000..d95db116000
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/drop_view.html
@@ -0,0 +1,34 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/experimental_audit.html b/_includes/v20.2/sql/diagrams/experimental_audit.html
new file mode 100644
index 00000000000..46cc527074a
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/experimental_audit.html
@@ -0,0 +1,39 @@
+
diff --git a/_includes/v20.2/sql/diagrams/explain.html b/_includes/v20.2/sql/diagrams/explain.html
new file mode 100644
index 00000000000..eb8f361e704
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/explain.html
@@ -0,0 +1,35 @@
+
diff --git a/_includes/v20.2/sql/diagrams/explain_analyze.html b/_includes/v20.2/sql/diagrams/explain_analyze.html
new file mode 100644
index 00000000000..37d76fa8351
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/explain_analyze.html
@@ -0,0 +1,36 @@
+
diff --git a/_includes/v20.2/sql/diagrams/export.html b/_includes/v20.2/sql/diagrams/export.html
new file mode 100644
index 00000000000..05ad8e2a864
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/export.html
@@ -0,0 +1,36 @@
+
diff --git a/_includes/v20.2/sql/diagrams/family_def.html b/_includes/v20.2/sql/diagrams/family_def.html
new file mode 100644
index 00000000000..1dda01d9e79
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/family_def.html
@@ -0,0 +1,30 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/foreign_key_column_level.html b/_includes/v20.2/sql/diagrams/foreign_key_column_level.html
new file mode 100644
index 00000000000..a963e586425
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/foreign_key_column_level.html
@@ -0,0 +1,75 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/foreign_key_table_level.html b/_includes/v20.2/sql/diagrams/foreign_key_table_level.html
new file mode 100644
index 00000000000..2eb3498af46
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/foreign_key_table_level.html
@@ -0,0 +1,85 @@
+
\ No newline at end of file
diff --git a/_includes/v20.2/sql/diagrams/grammar.html b/_includes/v20.2/sql/diagrams/grammar.html
new file mode 100644
index 00000000000..71d8cf930a9
--- /dev/null
+++ b/_includes/v20.2/sql/diagrams/grammar.html
@@ -0,0 +1,10848 @@
+