diff --git a/src/archived/training/architecture-overview.md b/src/archived/training/architecture-overview.md
deleted file mode 100644
index 621efbbe5e9..00000000000
--- a/src/archived/training/architecture-overview.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Architecture Overview
-summary: Learn more about CockroachDB's underlying architecture
-toc: false
-sidebar_data: sidebar-data-training.json
----
-
-Watch the rest of Alex Robinson's talk, where he explains the CockroachDB architecture and how it was built. You can also read through a related set of slides.
-
-## What's next?
-
-[Cluster Startup and Scaling](cluster-startup-and-scaling.html)
diff --git a/src/archived/training/backup-and-restore.md b/src/archived/training/backup-and-restore.md
deleted file mode 100644
index 32cf9154bee..00000000000
--- a/src/archived/training/backup-and-restore.md
+++ /dev/null
@@ -1,335 +0,0 @@
----
-title: Backup and Restore
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Start a 3-node cluster
-
-Start and initialize an insecure cluster like you did in previous modules.
-
-1. Start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-2. Start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-3. Start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-4. Perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-## Step 2. Perform a "core" backup
-
-1. Use the [`cockroach gen`](../cockroach-gen.html) command to generate an example `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach gen example-data startrek | cockroach sql --insecure
- ~~~
-
-2. Check the contents of the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SELECT * FROM startrek.episodes LIMIT 1;" \
- --execute="SELECT * FROM startrek.quotes LIMIT 1;"
- ~~~
-
- ~~~
- id | season | num | title | stardate
- +----+--------+-----+--------------+-------------+
- 1 | 1 | 1 | The Man Trap | 1531.100000
- (1 row)
- quote | characters | stardate | episode
- +----------------------------------------------------------------------+------------------------+----------+---------+
- "... freedom ... is a worship word..." "It is our worship word too." | Cloud William and Kirk | NULL | 52
- (1 row)
- ~~~
-
-3. Use the [`cockroach dump`](../cockroach-dump.html) command to create a SQL dump file for the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach dump startrek \
- --insecure \
- --host=localhost:26257 > startrek_backup.sql
- ~~~
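-
-    You can also dump a single table rather than the whole database (a sketch; here just the `episodes` table):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach dump startrek episodes \
-    --insecure \
-    --host=localhost:26257 > episodes_backup.sql
-    ~~~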
-
-4. Take a look at the generated `startrek_backup.sql` file.
-
- You'll see that it contains the SQL for recreating the two tables in the `startrek` database and inserting all current rows into those tables.
-
- ~~~
- CREATE TABLE episodes (
- id INT NOT NULL,
- season INT NULL,
- num INT NULL,
- title STRING NULL,
- stardate DECIMAL NULL,
- CONSTRAINT "primary" PRIMARY KEY (id ASC),
- FAMILY "primary" (id, season, num, title, stardate)
- );
-
- CREATE TABLE quotes (
- quote STRING NULL,
- characters STRING NULL,
- stardate DECIMAL NULL,
- episode INT NULL,
- CONSTRAINT fk_episode_ref_episodes FOREIGN KEY (episode) REFERENCES episodes (id),
- INDEX quotes_episode_idx (episode ASC),
- FAMILY "primary" (quote, characters, stardate, episode, rowid)
- );
-
- INSERT INTO episodes (id, season, num, title, stardate) VALUES
- (1, 1, 1, 'The Man Trap', 1531.1),
- (2, 1, 2, 'Charlie X', 1533.6),
- (3, 1, 3, 'Where No Man Has Gone Before', 1312.4),
- (4, 1, 4, 'The Naked Time', 1704.2),
- (5, 1, 5, 'The Enemy Within', 1672.1),
- ...
- ~~~
-
-## Step 3. Perform a "core" restore
-
-Now imagine the tables in the `startrek` database have changed and you want to restore them to their state at the time of the dump.
-
-1. Drop the tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="DROP TABLE startrek.episodes,startrek.quotes CASCADE;"
- ~~~
-
-2. Confirm that the tables in the `startrek` database are gone:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW TABLES FROM startrek;"
- ~~~
-
- ~~~
- table_name
- +------------+
- (0 rows)
- ~~~
-
-3. Restore the tables in the `startrek` database from the dump file:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --database=startrek < startrek_backup.sql
- ~~~
-
-4. Check the contents of the `startrek` database again:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SELECT * FROM startrek.episodes LIMIT 1;" \
- --execute="SELECT * FROM startrek.quotes LIMIT 1;"
- ~~~
-
- ~~~
- id | season | num | title | stardate
- +----+--------+-----+--------------+-------------+
- 1 | 1 | 1 | The Man Trap | 1531.100000
- (1 row)
- quote | characters | stardate | episode
- +----------------------------------------------------------------------+------------------------+----------+---------+
- "... freedom ... is a worship word..." "It is our worship word too." | Cloud William and Kirk | NULL | 52
- (1 row)
- ~~~
-
-## Step 4. Perform an "enterprise" backup
-
-Next, you'll use the enterprise `BACKUP` feature to create a backup of the `startrek` database on S3.
-
-1. If you requested and enabled a trial enterprise license in the [Geo-Partitioning](geo-partitioning.html) module, skip to step 2. Otherwise, [request a trial enterprise license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/) and then enable your license:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SET CLUSTER SETTING cluster.organization = '';"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SET CLUSTER SETTING enterprise.license = '';"
- ~~~
-
-2. Use the `BACKUP` SQL statement to generate a backup of the `startrek` database and store it on S3. To ensure your backup doesn't conflict with anyone else's, prefix the filename with your name:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="BACKUP DATABASE startrek TO 's3://cockroach-training/[name]-training?AWS_ACCESS_KEY_ID={{site.training.aws_access_key}}&AWS_SECRET_ACCESS_KEY={{site.training.aws_secret_access_key}}';"
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+------+---------------+----------------+-------+
- 441707640059723777 | succeeded | 1 | 279 | 200 | 0 | 30519
- (1 row)
- ~~~
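-
-3. Optionally, inspect what the backup contains with the `SHOW BACKUP` statement (a sketch; use the same `[name]-training` prefix as in the previous step):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql \
-    --insecure \
-    --host=localhost:26257 \
-    --execute="SHOW BACKUP 's3://cockroach-training/[name]-training?AWS_ACCESS_KEY_ID={{site.training.aws_access_key}}&AWS_SECRET_ACCESS_KEY={{site.training.aws_secret_access_key}}';"
-    ~~~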
-
-## Step 5. Perform an "enterprise" restore
-
-Again, imagine the tables in the `startrek` database have changed and you want to restore them from the enterprise backup.
-
-1. Drop the tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="DROP TABLE startrek.episodes,startrek.quotes CASCADE;"
- ~~~
-
-2. Confirm that the tables in the `startrek` database are gone:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW TABLES FROM startrek;"
- ~~~
-
- ~~~
- table_name
- +------------+
- (0 rows)
- ~~~
-
-3. Restore the `startrek` database from the enterprise backup, again making sure to prefix the filename with your name:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="RESTORE startrek.* FROM 's3://cockroach-training/[name]-training?AWS_ACCESS_KEY_ID={{site.training.aws_access_key}}&AWS_SECRET_ACCESS_KEY={{site.training.aws_secret_access_key}}';"
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+------+---------------+----------------+-------+
- 441707928464261121 | succeeded | 1 | 279 | 200 | 0 | 30519
- (1 row)
- ~~~
-
-4. Check the contents of the `startrek` database again:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SELECT * FROM startrek.episodes LIMIT 1;" \
- --execute="SELECT * FROM startrek.quotes LIMIT 1;"
- ~~~
-
- ~~~
- id | season | num | title | stardate
- +----+--------+-----+--------------+-------------+
- 1 | 1 | 1 | The Man Trap | 1531.100000
- (1 row)
- quote | characters | stardate | episode
- +----------------------------------------------------------------------+------------------------+----------+---------+
- "... freedom ... is a worship word..." "It is our worship word too." | Cloud William and Kirk | NULL | 52
- (1 row)
- ~~~
-
-## Step 6. Clean up
-
-In the next module, you'll start a new cluster from scratch, so take a moment to clean things up.
-
-1. Terminate all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3
- ~~~
-
-## What's next?
-
-[Cluster Upgrade](cluster-upgrade.html)
diff --git a/src/archived/training/client-connection-troubleshooting.md b/src/archived/training/client-connection-troubleshooting.md
deleted file mode 100644
index b081ffc4543..00000000000
--- a/src/archived/training/client-connection-troubleshooting.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-title: Client Connection Troubleshooting
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-
-
-## Before you begin
-
-Make sure you have already completed [Node Startup Troubleshooting](node-startup-troubleshooting.html) and have 6 nodes running securely.
-
-## Problem 1: SSL required
-
-In this scenario, you try to connect a user without providing a client certificate.
-
-### Step 1. Simulate the problem
-
-1. In a new terminal, as the `root` user, create a new user called `kirk`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257 \
- --execute="CREATE USER kirk;"
- ~~~
-
-2. As the `kirk` user, try to connect to the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257 \
- --user=kirk \
- --execute="SHOW DATABASES;"
- ~~~
-
- Because `kirk` doesn't have a client certificate in the `certs` directory, the cluster asks for the user's password:
-
- ~~~
- Enter password:
- ~~~
-
-3. Because `kirk` doesn't have a password, press **Enter**.
-
- The connection attempt fails, and the following error is printed to `stderr`:
-
- ~~~
- Error: pq: invalid password
- Failed running "sql"
- ~~~
-
-### Step 2. Resolve the problem
-
-To successfully connect the user, you must first either generate a client certificate or create a password for the user. It's generally best to use certificates over passwords, so do that here.
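-
-If you did want the password route instead, you could set one as `root` (a sketch; `'treknology'` is just an example password):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---certs-dir=certs \
---host=localhost:26257 \
---execute="ALTER USER kirk WITH PASSWORD 'treknology';"
-~~~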
-
-1. Generate a client certificate for the `kirk` user:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- kirk \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-2. As the `kirk` user, try to connect to the cluster again:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257 \
- --user=kirk \
- --execute="SHOW DATABASES;"
- ~~~
-
- This time, the connection attempt succeeds:
-
- ~~~
- database_name
- +---------------+
- (0 rows)
- ~~~
-
-## Problem 2: Wrong host or port
-
-In this scenario, you try to connect the `kirk` user again but specify a `--port` that is not in use by any of the existing nodes.
-
-### Step 1. Simulate the problem
-
-Try to connect the `kirk` user:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---certs-dir=certs \
---host=localhost:26257 \
---user=kirk \
---port=20000 \
---execute="SHOW DATABASES;"
-~~~
-
-The connection attempt fails, and the following is printed to `stderr`:
-
-~~~
-Error: unable to connect or connection lost.
-
-Please check the address and credentials such as certificates (if attempting to
-communicate with a secure cluster).
-
-dial tcp [::1]:20000: connect: connection refused
-Failed running "sql"
-~~~
-
-### Step 2. Resolve the problem
-
-To successfully connect the user, try again using a correct `--port`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---certs-dir=certs \
---host=localhost:26257 \
---user=kirk \
---port=26259 \
---execute="SHOW DATABASES;"
-~~~
-
-This time, the connection attempt succeeds:
-
-~~~
- database_name
-+---------------+
-(0 rows)
-~~~
-
-## Clean up
-
-In the next module, you'll start a new cluster from scratch, so take a moment to clean things up.
-
-1. Terminate all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3 node4 node5 node6
- ~~~
-
-## What's next?
-
-[Under-Replication Troubleshooting](under-replication-troubleshooting.html)
diff --git a/src/archived/training/cluster-startup-and-scaling.md b/src/archived/training/cluster-startup-and-scaling.md
deleted file mode 100644
index bc6a1cadc3b..00000000000
--- a/src/archived/training/cluster-startup-and-scaling.md
+++ /dev/null
@@ -1,319 +0,0 @@
----
-title: Cluster Startup and Scaling
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
----
-
-
-
-
-
-## Step 1. Install CockroachDB
-
-1. Download the CockroachDB archive for your OS, and extract the binary:
-
-2. Copy the binary into your `PATH`.
-
-    {{site.data.alerts.callout_info}}
-    If you get a permissions error, prefix the command with `sudo`.
-    {{site.data.alerts.end}}
-
-3. Clean up the directory where you unpacked the binary:
-
-
-
-    You can also execute the `cockroach` binary directly from its download
-    location, but the rest of the training documentation assumes you have the
-    binary in your `PATH`.
-
-## Step 2. Start a node
-
-Use the [`cockroach start`](../cockroach-start.html) command to start a node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=node1 \
---listen-addr=localhost:26257 \
---http-addr=localhost:8080 \
---join=localhost:26257,localhost:26258,localhost:26259 \
---background
-~~~
-
-You'll see the following message:
-
-~~~
-*
-* WARNING: RUNNING IN INSECURE MODE!
-*
-* - Your cluster is open for any client that can access localhost.
-* - Any user, even root, can log in without providing a password.
-* - Any user, connecting as root, can read or write any data in your cluster.
-* - There is no network encryption nor authentication, and thus no confidentiality.
-*
-* Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.2/secure-a-cluster.html
-*
-*
-* INFO: initial startup completed, will now wait for `cockroach init`
-* or a join to a running cluster to start accepting clients.
-* Check the log file(s) for progress.
-*
-~~~
-
-## Step 3. Understand the flags you used
-
-Before moving on, take a moment to understand the flags you used with the `cockroach start` command:
-
-Flag | Description
------|------------
-`--insecure` | Indicates that the node will communicate without encryption.<br><br>You'll start all other nodes with this flag, as well as all other `cockroach` commands you'll use against the cluster.<br><br>Without this flag, `cockroach` expects to be able to find security certificates to encrypt its communication. More about these in a later module.
-`--store` | The location where the node stores its data and logs.<br><br>Since you'll be running all nodes on your computer, you need to specify a unique storage location for each node. In contrast, in a real deployment, with one node per machine, it's fine to let `cockroach` use its default storage location (`cockroach-data`).
-`--listen-addr` `--http-addr` | The IP address/hostname and port to listen on for connections from other nodes and clients, and for Admin UI HTTP requests, respectively.<br><br>Again, since you'll be running all nodes on your computer, you need to specify unique ports for each node. In contrast, in a real deployment, with one node per machine, it's fine to let `cockroach` use its default TCP port (`26257`) and HTTP port (`8080`).
-`--join` | The addresses and ports of all of your initial nodes.<br><br>You'll use this exact `--join` flag when starting all other nodes.
-`--background` | The node will run in the background.
-
-{{site.data.alerts.callout_success}}
-You can run `cockroach start --help` to get help on this command directly in your terminal and `cockroach --help` to get help on other commands.
-{{site.data.alerts.end}}
-
-## Step 4. Start two more nodes
-
-Start two more nodes, using the same `cockroach start` command as earlier but with unique `--store`, `--listen-addr`, and `--http-addr` flags for each new node.
-
-1. Start the second node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start the third node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-## Step 5. Initialize the cluster
-
-1. Use the [`cockroach init`](../cockroach-init.html) command to perform a one-time initialization of the cluster, sending the request to any node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
- You'll see the following message:
-
- ~~~
- Cluster successfully initialized
- ~~~
-
-2. Look at the startup details in the server log:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ grep 'node starting' node1/logs/cockroach.log -A 11
- ~~~
-
- The output will look something like this:
-
- ~~~
- CockroachDB node starting at 2019-10-01 20:14:55.358954 +0000 UTC (took 27.9s)
- build: CCL {{page.release_info.version}} @ 2019/09/25 15:18:08 (go1.12.6)
- webui: http://localhost:8080
- sql: postgresql://root@localhost:26257?sslmode=disable
- client flags: cockroach --host=localhost:26257 --insecure
- logs: /Users//cockroachdb-training/node1/logs
- temp dir: /Users//cockroachdb-training/node1/cockroach-temp462678173
- external I/O path: /Users//cockroachdb-training/node1/extern
- store[0]: path=/Users//cockroachdb-training/node1
- status: initialized new cluster
- clusterID: fdc056a4-0cc0-4b29-b435-60e1db239f82
- nodeID: 1
- ~~~
-
- Field | Description
- ------|------------
- `build` | The version of CockroachDB you are running.
- `webui` | The URL for accessing the Admin UI.
- `sql` | The connection URL for your client.
- `client flags` | The flags to use when connecting to the node via [`cockroach` client commands](../cockroach-commands.html).
- `logs` | The directory containing debug log data.
- `temp dir` | The temporary store directory of the node.
- `external I/O path` | The external IO directory with which the local file access paths are prefixed while performing [backup](../backup.html) and [restore](../restore.html) operations using local node directories or NFS drives.
- `store[n]` | The directory containing store data, where `[n]` is the index of the store, e.g., `store[0]` for the first store, `store[1]` for the second store.
- `status` | Whether the node is the first in the cluster (`initialized new cluster`), joined an existing cluster for the first time (`initialized new node, joined pre-existing cluster`), or rejoined an existing cluster (`restarted pre-existing node`).
- `clusterID` | The ID of the cluster.
- `nodeID` | The ID of the node.
-
-## Step 6. Verify that the cluster is live
-
-1. Use the [`cockroach node status`](../cockroach-node.html) command to check that all 3 nodes are part of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach node status --insecure --host=localhost:26257
- ~~~
-
- ~~~
- id | address | sql_address | build | started_at | updated_at | locality | is_available | is_live
- +----+-----------------+-----------------+-----------------------------------------+----------------------------------+----------------------------------+----------+--------------+---------+
- 1 | localhost:26257 | localhost:26257 | v19.2.0 | 2019-10-01 20:14:55.249457+00:00 | 2019-10-01 20:16:07.283866+00:00 | | true | true
- 2 | localhost:26258 | localhost:26258 | v19.2.0 | 2019-10-01 20:14:55.445079+00:00 | 2019-10-01 20:16:02.972943+00:00 | | true | true
- 3 | localhost:26259 | localhost:26259 | v19.2.0 | 2019-10-01 20:14:55.857631+00:00 | 2019-10-01 20:16:03.389338+00:00 | | true | true
- (3 rows)
- ~~~
-
-2. Use the [`cockroach sql`](../cockroach-sql.html) command to query the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW DATABASES;"
- ~~~
-
- ~~~
- database_name
- +---------------+
- defaultdb
- postgres
- system
- (3 rows)
- ~~~
-
- You just queried the node listening on `26257`, but every other node is a SQL gateway to the cluster as well. We'll learn more about CockroachDB SQL and the built-in SQL client in a later module.
-
-## Step 7. Look at the current state of replication
-
-1. To understand replication in CockroachDB, it's important to review a few concepts from the architecture:
-
- Concept | Description
- --------|------------
-    **Range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.<br><br>From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as a range reaches 512 MiB in size, it splits into two ranges. This process continues as the table and its indexes continue growing.
-    **Replica** | CockroachDB replicates each range 3 times by default and stores each replica on a different node.<br><br>In a later module, you'll learn how to control replication.
-
-2. With those concepts in mind, open the Admin UI at http://localhost:8080 and view the **Node List**:
-
-
-
- Note that the **Replicas** count is the same on all three nodes. This indicates:
- - There are this many initial "ranges" of data in the cluster. These are all internal "system" ranges since you haven't added any table data yet.
- - Each range has been replicated 3 times (according to the CockroachDB default).
-    - For each range, each replica is stored on a different node.
-
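-You can also inspect ranges and their replica placement from SQL via the internal `crdb_internal.ranges` table (a sketch; `crdb_internal` is an unversioned, debug-only interface):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=localhost:26257 \
---execute="SELECT range_id, replicas, lease_holder FROM crdb_internal.ranges LIMIT 5;"
-~~~
-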
-## Step 8. Scale the cluster
-
-Adding more nodes to your cluster is even easier than starting the cluster. Just like before, you use the `cockroach start` command with unique `--store`, `--listen-addr`, and `--http-addr` flags for each new node. But this time, you do not have to follow up with the `cockroach init` command or any other commands.
-
-1. Start the fourth node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- cockroach start \
- --insecure \
- --store=node4 \
- --listen-addr=localhost:26260 \
- --http-addr=localhost:8083 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start the fifth node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- cockroach start \
- --insecure \
- --store=node5 \
- --listen-addr=localhost:26261 \
- --http-addr=localhost:8084 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
- As soon as you run these commands, the nodes join the cluster. There's no need to run the `cockroach init` command or any other commands.
-
-## Step 9. Watch data rebalance across all 5 nodes
-
-Go back to the **Live Nodes** list in the Admin UI and watch how the **Replicas** are automatically rebalanced to utilize the additional capacity of the new nodes:
-
-
-
-Another way to observe this is to click **Metrics** in the upper left and scroll down to the **Replicas per Node** graph:
-
-
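-If you prefer the command line, `cockroach node status` can also report per-node range and replica counts (a sketch; the `--ranges` flag adds the relevant columns):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node status --ranges --insecure --host=localhost:26257
-~~~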
-
-## What's next?
-
-[Fault Tolerance and Automated Repair](fault-tolerance-and-automated-repair.html)
diff --git a/src/archived/training/cluster-unavailability-troubleshooting.md b/src/archived/training/cluster-unavailability-troubleshooting.md
deleted file mode 100644
index f14010dcf44..00000000000
--- a/src/archived/training/cluster-unavailability-troubleshooting.md
+++ /dev/null
@@ -1,107 +0,0 @@
----
-title: Cluster Unavailability Troubleshooting
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-
-
-## Before you begin
-
-Make sure you have already completed [Under-Replication Troubleshooting](under-replication-troubleshooting.html) and have a cluster of 3 nodes running.
-
-## Step 1. Simulate the problem
-
-1. In the terminal where node 2 is running, press **CTRL-C**.
-
-2. In the terminal where node 3 is running, press **CTRL-C**. You may need to press **CTRL-C** a second time to force this node to terminate.
-
-## Step 2. Troubleshoot the problem
-
-1. Go back to the Admin UI:
-
-
-
- You'll notice that an error is shown and timeseries metrics are no longer being reported.
-
-2. In a new terminal, try to query the one node that was not terminated:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW DATABASES;" \
- --logtostderr=WARNING
- ~~~
-
-    Because all ranges in the cluster, including the critical system ranges, no longer have a majority of their replicas, the cluster as a whole cannot make progress, so the query will hang indefinitely.
-
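-3. You can also check whether the surviving node is accepting client traffic via its HTTP health endpoint (a sketch; `?ready=1` asks about readiness to serve clients rather than just process liveness):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ curl http://localhost:8080/health?ready=1
-    ~~~
-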
-## Step 3. Resolve the problem
-
-1. In the terminal where node 2 was running, restart the node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-2. In the terminal where node 3 was running, restart the node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-3. Go back to the terminal where you issued the query.
-
- All ranges have a majority of their replicas again, and so the query executes and succeeds:
-
- ~~~
- database_name
- +---------------+
- defaultdb
- postgres
- system
- (3 rows)
- ~~~
-
-## Clean up
-
-In the next module, you'll start a new cluster from scratch, so take a moment to clean things up.
-
-1. Terminate all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3
- ~~~
-
-## What's next?
-
-[Data Unavailability Troubleshooting](data-unavailability-troubleshooting.html)
diff --git a/src/archived/training/cluster-upgrade.md b/src/archived/training/cluster-upgrade.md
deleted file mode 100644
index d82f12b67cc..00000000000
--- a/src/archived/training/cluster-upgrade.md
+++ /dev/null
@@ -1,249 +0,0 @@
----
-title: Cluster Upgrade
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Install CockroachDB v19.1
-
-1. Download the CockroachDB v19.1 archive for your OS, and extract the binary:
-
-
-
-
-
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-v19.1.1.darwin-10.9-amd64.tgz \
- | tar -xz
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-v19.1.1.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-
-2. Move the v19.1 binary into the parent `cockroachdb-training` directory:
-
-
-
-## Step 2. Start a cluster running v19.1
-
-Start and initialize a cluster like you did in previous modules, but this time using the v19.1 binary.
-
-1. In a new terminal, start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ./cockroach-v19.1 start \
- --insecure \
- --store=node1 \
- --host=localhost \
- --port=26257 \
- --http-port=8080 \
- --join=localhost:26257,localhost:26258,localhost:26259
-    ~~~
-
-2. In a new terminal, start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ./cockroach-v19.1 start \
- --insecure \
- --store=node2 \
- --host=localhost \
- --port=26258 \
- --http-port=8081 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-3. In a new terminal, start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ./cockroach-v19.1 start \
- --insecure \
- --store=node3 \
- --host=localhost \
- --port=26259 \
- --http-port=8082 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-4. In a new terminal, perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ./cockroach-v19.1 init --insecure
- ~~~
-
-{{site.data.alerts.callout_info}}
-You can prevent a (manual or automatic) cluster version upgrade from being finalized by using the `cluster.preserve_downgrade_option` cluster setting; finalization is blocked until you reset the setting. See the full [Cluster Upgrade](../upgrade-cockroach-version.html) documentation for details.
-{{site.data.alerts.end}}
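-
-For example, to block finalization until you've validated v19.2 (a sketch; `'19.1'` names the version you want to preserve the ability to downgrade to):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=localhost:26257 \
---execute="SET CLUSTER SETTING cluster.preserve_downgrade_option = '19.1';"
-~~~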
-
-## Step 3. Upgrade the first node to v19.2
-
-1. In node 1's terminal, press **CTRL-C** to terminate the `cockroach` process.
-
-2. Verify that node 1 has been terminated:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ps | grep cockroach
- ~~~
-
- You should **not** see a `cockroach` process with `--store=node1` and `--port=26257`.
-
- ~~~
- 49659 ttys001 0:02.43 ./cockroach-v19.1 start --insecure --store=node2 --host=localhost --port=26258 --http-port=8081 --join=localhost:26257,localhost:26258,localhost:26259
- 49671 ttys002 0:02.32 ./cockroach-v19.1 start --insecure --store=node3 --host=localhost --port=26259 --http-port=8082 --join=localhost:26257,localhost:26258,localhost:26259
- 49705 ttys015 0:00.00 grep cockroach
-    ~~~
-
-3. In node 1's terminal, restart the node using the v19.2 binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259
-    ~~~
-
-4. Go to the Admin UI at http://localhost:8081 to view the **Node List** and then verify that the node has rejoined the cluster using the new version of the binary.
-
-## Step 4. Upgrade the rest of the nodes to v19.2
-
-1. In node 2's terminal, press **CTRL-C** to terminate the `cockroach` process.
-
-2. Verify that node 2 has been terminated:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ps | grep cockroach
- ~~~
-
- You should not see a `cockroach` process with `--store=node2` and `--port=26258`.
-
- ~~~
- 49659 ttys001 0:07.05 ./cockroach-v19.1 start --insecure --store=node3 --host=localhost --port=26259 --http-port=8082 --join=localhost:26257,localhost:26258,localhost:26259
- 49824 ttys002 0:00.00 grep cockroach
- 49717 ttys015 0:05.76 ./cockroach start --insecure --store=node1 --listen-addr=localhost:26257 --http-addr=localhost:8080 --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-3. Restart the node using the v19.2 binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259
-    ~~~
-
-4. Wait 1 minute.
-
-5. In node 3's terminal, press **CTRL-C** to terminate the `cockroach` process.
-
-6. Verify that node 3 has been terminated:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ps | grep cockroach
- ~~~
-
- You should not see a `cockroach` process with `--store=node3` and `--port=26259`.
-
- ~~~
- 49869 ttys001 0:00.01 grep cockroach
- 49849 ttys002 0:02.38 ./cockroach start --insecure --store=node2 --listen-addr=localhost:26258 --http-addr=localhost:8081 --join=localhost:26257,localhost:26258,localhost:26259
- 49717 ttys015 0:10.88 ./cockroach start --insecure --store=node1 --listen-addr=localhost:26257 --http-addr=localhost:8080 --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-7. Restart the node using the v19.2 binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259
-    ~~~
-
-## Step 5. Check your cluster's versions
-
-Back in the Admin UI, you'll see that all 3 nodes now have the same, upgraded version. You can also use the `cockroach node status` command to check each node's version:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node status \
---insecure
-~~~
-
-~~~
- id | address | sql_address | build | started_at | updated_at | locality | is_available | is_live
-+----+-----------------+-----------------+-----------------------------------------+----------------------------------+----------------------------------+----------+--------------+---------+
- 1 | localhost:26257 | localhost:26257 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:14:55.249457+00:00 | 2019-10-01 20:16:07.283866+00:00 | | true | true
- 2 | localhost:26258 | localhost:26258 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:14:55.445079+00:00 | 2019-10-01 20:16:02.972943+00:00 | | true | true
- 3 | localhost:26259 | localhost:26259 | v19.2.0-alpha.20190606-2479-gd98e0839dc | 2019-10-01 20:14:55.857631+00:00 | 2019-10-01 20:16:03.389338+00:00 | | true | true
-(3 rows)
-~~~
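-
-You can also confirm that the upgrade has been finalized by checking the `version` cluster setting (a sketch):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=localhost:26257 \
---execute="SHOW CLUSTER SETTING version;"
-~~~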
-
-## Step 6. Clean up
-
-This is the last module of the training, so feel free to stop your cluster and clean things up.
-
-1. Stop all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3
- ~~~
diff --git a/src/archived/training/data-corruption-troubleshooting.md b/src/archived/training/data-corruption-troubleshooting.md
deleted file mode 100644
index 71ce5a7e675..00000000000
--- a/src/archived/training/data-corruption-troubleshooting.md
+++ /dev/null
@@ -1,185 +0,0 @@
----
-title: Data Corruption Troubleshooting
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Start a 3-node cluster
-
-1. In a new terminal, start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --logtostderr=WARNING
-    ~~~
-
-2. In a new terminal, start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --logtostderr=WARNING
- ~~~
-
-3. In a new terminal, start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --logtostderr=WARNING
- ~~~
-
-4. In a new terminal, perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-## Step 2. Prepare to simulate the problem
-
-Before you can manually corrupt data, you need to import enough data so that the cluster creates persistent `.sst` files.
-
-1. Create a database into which you'll import a new table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="CREATE DATABASE import_test;"
- ~~~
-
-2. Run the [`IMPORT`](../import.html) command, using schema and data files we've made publicly available on Google Cloud Storage:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --database="import_test" \
- --execute="IMPORT TABLE orders CREATE USING 'https://storage.googleapis.com/cockroach-fixtures/tpch-csv/schema/orders.sql' CSV DATA ('https://storage.googleapis.com/cockroach-fixtures/tpch-csv/sf-1/orders.tbl.1') WITH delimiter = '|';"
- ~~~
-
- The import will take a minute or two. Once it completes, you'll see a confirmation with details:
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+--------+---------------+----------------+----------+
- 378521252933861377 | succeeded | 1 | 187500 | 375000 | 0 | 26346739
- (1 row)
- ~~~
-
-## Step 3. Simulate the problem
-
-1. In the same terminal, look in the data directory of `node3`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ls node3
- ~~~
-
- ~~~
- 000003.log IDENTITY OPTIONS-000005 cockroach.http-addr
- 000006.sst LOCK auxiliary cockroach.listen-addr
- COCKROACHDB_VERSION MANIFEST-000001 cockroach-temp478417278 logs
- CURRENT MANIFEST-000007 cockroach.advertise-addr temp-dirs-record.txt
- ~~~
-
-2. Delete one of the `.sst` files.
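-
-    For example (a sketch; the filename on your machine will differ, so substitute one of the `.sst` files listed in the previous step):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ rm node3/000006.sst
-    ~~~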
-
-3. In the terminal where node 3 is running, press **CTRL-C** to stop it.
-
-4. Try to restart node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --logtostderr=WARNING
- ~~~
-
- The startup process will fail, and you'll see the following printed to `stderr`:
-
- ~~~
- W180209 10:45:03.684512 1 cli/start.go:697 Using the default setting for --cache (128 MiB).
- A significantly larger value is usually needed for good performance.
- If you have a dedicated server a reasonable setting is --cache=25% (2.0 GiB).
- W180209 10:45:03.805541 37 gossip/gossip.go:1241 [n?] no incoming or outgoing connections
- E180209 10:45:03.808537 1 cli/error.go:68 cockroach server exited with error: failed to create engines: could not open rocksdb instance: Corruption: Sst file size mismatch: /Users/jesseseldess/cockroachdb-training/cockroach-{{page.release_info.version}}.darwin-10.9-amd64/node3/000006.sst. Size recorded in manifest 2626945, actual size 2626210
- *
- * ERROR: cockroach server exited with error: failed to create engines: could not open rocksdb instance: Corruption: Sst file size mismatch: /Users/jesseseldess/cockroachdb-training/cockroach-{{page.release_info.version}}.darwin-10.9-amd64/node3/000006.sst. Size recorded in manifest 2626945, actual size 2626210
- *
- *
- Failed running "start"
- ~~~
-
- The error tells you that the failure has to do with RocksDB-level (i.e., storage-level) corruption. Because the node's data is corrupt, the node will not restart.
-
-## Step 4. Resolve the problem
-
-Because only 1 node's data is corrupt, the solution is to completely remove the node's data directory and restart the node.
-
-1. Remove the `node3` data directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node3
- ~~~
-
-2. In the terminal where node 3 was running, restart the node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --logtostderr=WARNING
- ~~~
-
-In this case, the cluster repairs the node using data from the other nodes. In more severe emergencies where multiple disks are corrupted, there are tools like `cockroach debug rocksdb` to let you inspect the files in more detail and try to repair them. If enough nodes/files are corrupted, [restoring from an enterprise backup](../restore.html) is best.
-
-{{site.data.alerts.callout_danger}}
-In all cases of data corruption, you should [get support from Cockroach Labs](how-to-get-support.html).
-{{site.data.alerts.end}}
-
-## What's next?
-
-[Software Panic Troubleshooting](software-panic-troubleshooting.html)
diff --git a/src/archived/training/data-import.md b/src/archived/training/data-import.md
deleted file mode 100644
index f447c3aec50..00000000000
--- a/src/archived/training/data-import.md
+++ /dev/null
@@ -1,339 +0,0 @@
----
-title: Data Import
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous lab.
-
-## Step 1. Start a 3-node cluster
-
-Start and initialize a cluster like you did in previous modules.
-
-{{site.data.alerts.callout_info}}
-To simplify the process of running multiple nodes on your local computer, you'll start them in the [background](../cockroach-start.html#general) instead of in separate terminals.
-{{site.data.alerts.end}}
-
-1. In a new terminal, start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-3. Start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-4. Perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-## Step 2. Import CSV data from remote file storage
-
-1. In a new terminal, create a database into which you'll import a new table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="CREATE DATABASE IF NOT EXISTS tabular_import;"
- ~~~
-
-2. Run the [`IMPORT`](../import.html) statement, using schema and data files we've made publicly available on Google Cloud Storage:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --database="tabular_import" \
- --execute="IMPORT TABLE orders CREATE USING 'https://storage.googleapis.com/cockroach-fixtures/tpch-csv/schema/orders.sql' CSV DATA ('https://storage.googleapis.com/cockroach-fixtures/tpch-csv/sf-1/orders.tbl.1') WITH delimiter = '|';"
- ~~~
-
- The import will take a minute or two. To check the status of the import, navigate to the **Admin UI > [Jobs page](../admin-ui-jobs-page.html)**. Once it completes, you'll see a confirmation with details:
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+--------+---------------+----------------+----------+
- 378471816945303553 | succeeded | 1 | 187500 | 375000 | 0 | 26346739
- (1 row)
- ~~~
-
-3. Check the schema of the imported `orders` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --database="tabular_import" \
- --execute="SHOW CREATE orders;"
- ~~~
-
- ~~~
- table_name | create_statement
- +------------+---------------------------------------------------------------------------------------------------------------------------------------------+
- orders | CREATE TABLE orders (
- | o_orderkey INTEGER NOT NULL,
- | o_custkey INTEGER NOT NULL,
- | o_orderstatus STRING(1) NOT NULL,
- | o_totalprice DECIMAL(15,2) NOT NULL,
- | o_orderdate DATE NOT NULL,
- | o_orderpriority STRING(15) NOT NULL,
- | o_clerk STRING(15) NOT NULL,
- | o_shippriority INTEGER NOT NULL,
- | o_comment STRING(79) NOT NULL,
- | CONSTRAINT "primary" PRIMARY KEY (o_orderkey ASC),
- | INDEX o_ck (o_custkey ASC),
- | INDEX o_od (o_orderdate ASC),
- | FAMILY "primary" (o_orderkey, o_custkey, o_orderstatus, o_totalprice, o_orderdate, o_orderpriority, o_clerk, o_shippriority, o_comment)
- | )
- (1 row)
- ~~~
-
- {{site.data.alerts.callout_info}}
- You can also view the schema by navigating to the **Admin UI > [Databases](../admin-ui-databases-page.html)** page and clicking on the table name.
- {{site.data.alerts.end}}
-
-4. Read some data from the imported `orders` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --database="tabular_import" \
- --execute="SELECT o_orderkey, o_custkey, o_comment FROM orders WHERE o_orderstatus = 'O' LIMIT 10;"
- ~~~
-
- ~~~
- o_orderkey | o_custkey | o_comment
- +------------+-----------+-------------------------------------------------------------------------------+
- 1 | 36901 | nstructions sleep furiously among
- 2 | 78002 | foxes. pending accounts at the pending, silent asymptot
- 4 | 136777 | sits. slyly regular warthogs cajole. regular, regular theodolites acro
- 7 | 39136 | ly special requests
- 32 | 130057 | ise blithely bold, regular requests. quickly unusual dep
- 34 | 61001 | ly final packages. fluffily final deposits wake blithely ideas. spe
- 35 | 127588 | zzle. carefully enticing deposits nag furio
- 36 | 115252 | quick packages are blithely. slyly silent accounts wake qu
- 38 | 124828 | haggle blithely. furiously express ideas haggle blithely furiously regular re
- 39 | 81763 | ole express, ironic requests: ir
- (10 rows)
- ~~~
-
-## Step 3. Import a PostgreSQL dump file
-
-If you're importing data from a PostgreSQL database, you can import the `.sql` file generated by the [`pg_dump`][pg_dump] command, after editing the file to be compatible with CockroachDB.
-
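-If you're generating the dump yourself, the typical invocation looks like this (a sketch; `mydb` is a placeholder for your PostgreSQL database name):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ pg_dump mydb > pg_dump.sql
-~~~
-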
-{{site.data.alerts.callout_success}}
-The `.sql` files generated by `pg_dump` provide better performance because they use the `COPY` statement instead of bulk `INSERT` statements.
-{{site.data.alerts.end}}
-
-1. Download our sample [`pg_dump.sql`](resources/pg_dump.sql) file using [`curl`][curl] or [`wget`][wget], depending on which you have installed:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O {{site.url}}/docs/{{page.version.version}}/training/resources/pg_dump.sql
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget {{site.url}}/docs/{{page.version.version}}/training/resources/pg_dump.sql
- ~~~
-
-2. Take a look at the `pg_dump.sql` file, which contains 2 tables, `customers` and `accounts`, as well as some constraints on both tables.
-
- Before this file can be imported into CockroachDB, it must be edited for compatibility as follows:
- - The `CREATE SCHEMA` statement must be removed.
- - The `ALTER SCHEMA` statement must be removed.
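-
-    If you were cleaning the file yourself, a `sed` one-liner could automate both edits (a sketch, assuming each statement starts at the beginning of a line and fits on a single line):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sed '/^CREATE SCHEMA/d; /^ALTER SCHEMA/d' pg_dump.sql > pg_dump_cleaned.sql
-    ~~~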
-
-3. Instead of manually cleaning the file, you can download our pre-cleaned version using [`curl`][curl] or [`wget`][wget]:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O {{site.url}}/docs/{{page.version.version}}/training/resources/pg_dump_cleaned.sql
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget {{site.url}}/docs/{{page.version.version}}/training/resources/pg_dump_cleaned.sql
- ~~~
-
-4. Create a database you can use for the import:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="CREATE DATABASE IF NOT EXISTS pg_import;"
- ~~~
-
-5. Import the dump:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --database=pg_import \
- --execute="IMPORT PGDUMP '{{site.url}}/docs/{{page.version.version}}/training/resources/pg_dump_cleaned.sql';"
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- --------------------+-----------+--------------------+------+---------------+----------------+-------
- 409923615993004033 | succeeded | 1 | 10 | 5 | 0 | 258
- ~~~
-
-6. Read from the imported data:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --database=pg_import \
- --execute="SELECT customers.name, accounts.balance FROM accounts JOIN customers ON accounts.customer_id = customers.id;"
- ~~~
-
- ~~~
- name | balance
- +------------------+---------+
- Bjorn Fairclough | 100
- Arturo Nevin | 200
- Juno Studwick | 400
- Naseem Joossens | 200
- Eutychia Roberts | 200
- (5 rows)
- ~~~
-
- {{site.data.alerts.callout_info}}
- You can view the schema by navigating to the **Admin UI > [Databases](../admin-ui-databases-page.html)** page and clicking on the table name.
- {{site.data.alerts.end}}
-
-## Step 4. Import a MySQL dump file
-
-If you're importing data from a MySQL database, you can import the `.sql` file generated by the [`mysqldump`][mysqldump] command.
-
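-If you're generating the dump yourself, the typical invocation looks like this (a sketch; `mydb` is a placeholder for your MySQL database name):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ mysqldump -u root -p mydb > mysql_dump.sql
-~~~
-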
-1. Download our sample [`mysql_dump.sql`](resources/mysql_dump.sql) file using [`curl`][curl] or [`wget`][wget]:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O {{site.url}}/docs/{{page.version.version}}/training/resources/mysql_dump.sql
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget {{site.url}}/docs/{{page.version.version}}/training/resources/mysql_dump.sql
- ~~~
-
-2. Take a look at the `mysql_dump.sql` file, which contains 2 tables, `customers` and `accounts`, as well as some constraints on both tables.
-
-3. Create a database you can use for the import:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="CREATE DATABASE IF NOT EXISTS mysql_import;"
- ~~~
-
-4. Import the dump:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="IMPORT MYSQLDUMP '{{site.url}}/docs/{{page.version.version}}/training/resources/mysql_dump.sql';"
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- --------------------+-----------+--------------------+------+---------------+----------------+-------
- 409923615993004033 | succeeded | 1 | 10 | 5 | 0 | 258
- ~~~
-
-5. Read from the imported data:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SELECT customers.name, accounts.balance FROM accounts JOIN customers ON accounts.customer_id = customers.id;"
- ~~~
-
- ~~~
- name | balance
- +------------------+---------+
- Bjorn Fairclough | 100
- Arturo Nevin | 200
- Juno Studwick | 400
- Naseem Joossens | 200
- Eutychia Roberts | 200
- (5 rows)
- ~~~
-
- {{site.data.alerts.callout_info}}
- You can view the schema by navigating to the **Admin UI > [Databases](../admin-ui-databases-page.html)** page and clicking on the table name.
- {{site.data.alerts.end}}
-
-## What's next?
-
-[SQL Basics](sql-basics.html)
-
-
-
-[curl]: https://curl.haxx.se/
-[wget]: https://www.gnu.org/software/wget/
-[pg_dump]: https://www.postgresql.org/docs/current/app-pgdump.html
-[mysqldump]: https://dev.mysql.com/doc/refman/8.0/en/mysqldump-sql-format.html
diff --git a/src/archived/training/data-unavailability-troubleshooting.md b/src/archived/training/data-unavailability-troubleshooting.md
deleted file mode 100644
index 6ad7e7dd4ad..00000000000
--- a/src/archived/training/data-unavailability-troubleshooting.md
+++ /dev/null
@@ -1,304 +0,0 @@
----
-title: Data Unavailability Troubleshooting
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Start a cluster spread across 3 separate localities
-
-Create a 9-node cluster, with 3 nodes in each of 3 different localities.
-
-1. In a new terminal, start node 1 in locality `us-east-1`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-1 \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. In the same terminal, perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-3. In a new terminal, start node 2 in locality `us-east-1`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-1 \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-4. In the same terminal, start node 3 in locality `us-east-1`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-1 \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-5. In the same terminal, start node 4 in locality `us-east-2`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-2 \
- --store=node4 \
- --listen-addr=localhost:26260 \
- --http-addr=localhost:8083 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-6. In the same terminal, start node 5 in locality `us-east-2`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-2 \
- --store=node5 \
- --listen-addr=localhost:26261 \
- --http-addr=localhost:8084 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-7. In the same terminal, start node 6 in locality `us-east-2`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-2 \
- --store=node6 \
- --listen-addr=localhost:26262 \
- --http-addr=localhost:8085 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-8. In the same terminal, start node 7 in locality `us-east-3`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-3 \
- --store=node7 \
- --listen-addr=localhost:26263 \
- --http-addr=localhost:8086 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-9. In the same terminal, start node 8 in locality `us-east-3`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-3 \
- --store=node8 \
- --listen-addr=localhost:26264 \
- --http-addr=localhost:8087 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-10. In the same terminal, start node 9 in locality `us-east-3`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-3 \
- --store=node9 \
- --listen-addr=localhost:26265 \
- --http-addr=localhost:8088 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-## Step 2. Prepare to simulate the problem
-
-In preparation, add a table and use a replication zone to force the table's data onto the nodes in a single locality.
-
-1. In a new terminal, generate an `intro` database with a `mytable` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach gen example-data intro | cockroach sql \
- --insecure \
- --host=localhost:26257
- ~~~
-
-2. Create a [replication zone](../configure-replication-zones.html) forcing the replicas of the `mytable` range to be located on nodes with the `datacenter=us-east-3` locality:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER TABLE intro.mytable CONFIGURE ZONE USING constraints='[+datacenter=us-east-3]';" --insecure --host=localhost:26257
- ~~~
-
-3. Use the `SHOW RANGES` SQL command to determine the nodes on which the replicas for the `mytable` table are now located:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW RANGES FROM TABLE intro.mytable;"
- ~~~
-
- ~~~
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
- +-----------+---------+----------+---------------+--------------+-----------------------+----------+------------------------------------------------------------------+
- NULL | NULL | 25 | 0.003054 | 9 | datacenter=us-east-3 | {7,8,9} | {datacenter=us-east-3,datacenter=us-east-3,datacenter=us-east-3}
- (1 row)
- ~~~
-
-4. The node IDs above may not match the order in which we started the nodes because node IDs only get allocated after `cockroach init` is run. You can verify that the nodes listed by `SHOW RANGES` are all in the `datacenter=us-east-3` locality by opening the **Node Diagnostics** debug page at http://localhost:8080/#/reports/nodes and checking the locality for each of the 3 node IDs.
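-
-    Alternatively, you can check node localities from SQL. This is a quick sketch, assuming your version exposes the `crdb_internal.kv_node_status` virtual table with a `locality` column:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql \
-      --insecure \
-      --host=localhost:26257 \
-      --execute="SELECT node_id, locality FROM crdb_internal.kv_node_status;"
-    ~~~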
-
-
-
-## Step 3. Simulate the problem
-
-Stop 2 of the nodes containing `mytable` replicas. This will cause the range to lose a majority of its replicas and become unavailable. However, all other ranges are spread evenly across all three localities because the replication zone only applies to `mytable`, so the cluster as a whole will remain available.
-
-1. Stop nodes 8 and 9:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach quit \
- --insecure \
- --host=localhost:26264
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach quit \
- --insecure \
- --host=localhost:26265
- ~~~
-
-## Step 4. Troubleshoot the problem
-
-1. In a new terminal, try to insert into the `mytable` table, pointing at a node that is still online:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="INSERT INTO intro.mytable VALUES (42, '')" \
- --logtostderr=WARNING
- ~~~
-
- Because the range for `mytable` no longer has a majority of its replicas, the query will hang indefinitely.
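-
-    If you would rather have the statement fail fast than hang, one option is to set a session timeout before retrying the `INSERT`. This sketch uses the standard `statement_timeout` session setting, so the statement returns an error after 10 seconds instead of blocking:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql \
-      --insecure \
-      --host=localhost:26257 \
-      --execute="SET statement_timeout = '10s'; INSERT INTO intro.mytable VALUES (43, '');"
-    ~~~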
-
-2. Go back to the Admin UI at http://localhost:8080 and click **Metrics** on the left.
-
-3. Select the **Replication** dashboard.
-
-4. Hover over the **Ranges** graph:
-
-
-
- You should see that 1 range is now unavailable. If the unavailable count is larger than 1, that would mean that some system ranges had a majority of replicas on the down nodes as well.
-
- The **Summary** panel on the right should tell you the same thing:
-
-
-
-5. For more insight into the ranges that are unavailable, go to the **Problem Ranges Report** at http://localhost:8080/#/reports/problemranges.
-
-
-
-## Step 5. Resolve the problem
-
-1. In a new terminal, restart the stopped nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-3 \
- --store=node8 \
- --listen-addr=localhost:26264 \
- --http-addr=localhost:8087 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=datacenter=us-east-3 \
- --store=node9 \
- --listen-addr=localhost:26265 \
- --http-addr=localhost:8088 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-2. Go back to the Admin UI, click **Metrics** on the left, and verify that ranges are no longer unavailable.
-
-3. Check back on the `INSERT` statement that was stuck and verify that it completed successfully.
-
-## Step 6. Clean up
-
-In the next lab, you'll start a new cluster from scratch, so take a moment to clean things up.
-
-1. Stop all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3 node4 node5 node6 node7 node8 node9
- ~~~
-
-## What's next?
-
-[Data Corruption Troubleshooting](data-corruption-troubleshooting.html)
diff --git a/src/archived/training/fault-tolerance-and-automated-repair.md b/src/archived/training/fault-tolerance-and-automated-repair.md
deleted file mode 100644
index d07358334b8..00000000000
--- a/src/archived/training/fault-tolerance-and-automated-repair.md
+++ /dev/null
@@ -1,281 +0,0 @@
----
-title: Fault Tolerance and Automated Repair
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
----
-
-
-
-
-
-## Before You Begin
-
-Make sure you have already completed [Cluster Startup and Scaling](cluster-startup-and-scaling.html) and have 5 nodes running locally.
-
-## Step 1. Set up load balancing
-
-In this module, you'll run a sample workload to simulate multiple client connections. Each node is an equally suitable SQL gateway for the load, but it's always recommended to spread requests evenly across nodes. You'll use the open-source [HAProxy](http://www.haproxy.org/) load balancer to do that here.
-
-1. In a new terminal, install HAProxy.
-
-
-
-
-
-
-
-
- If you're on a Mac and use Homebrew, run:
- {% include copy-clipboard.html %}
- ~~~ shell
- $ brew install haproxy
- ~~~
-
-
-
- If you're using Linux and use apt-get, run:
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo apt-get install haproxy
- ~~~
-
-
-2. Run the [`cockroach gen haproxy`](../cockroach-gen.html) command, specifying the port of any node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach gen haproxy \
- --insecure \
- --host=localhost \
- --port=26257
- ~~~
-
- This command generates an `haproxy.cfg` file automatically configured to work with the nodes of your running cluster.
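-
-    The generated file is ordinary HAProxy configuration. Roughly (the exact contents vary by CockroachDB version), the relevant section looks like this, with one `server` line per node:
-
-    ~~~
-    listen psql
-        bind :26257
-        mode tcp
-        balance roundrobin
-        option httpchk GET /health?ready=1
-        server cockroach1 localhost:26257 check port 8080
-        server cockroach2 localhost:26258 check port 8081
-        server cockroach3 localhost:26259 check port 8082
-        server cockroach4 localhost:26260 check port 8083
-        server cockroach5 localhost:26261 check port 8084
-    ~~~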
-
-3. In `haproxy.cfg`, change `bind :26257` to `bind :26000`. This changes the port on which HAProxy accepts requests to a port that is not already in use by a node.
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ sed -i.saved 's/^    bind :26257/    bind :26000/' haproxy.cfg
- ~~~
-
-4. Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ haproxy -f haproxy.cfg &
- ~~~
-
-## Step 2. Run a sample workload
-
-Now that you have a load balancer running in front of your cluster, use the YCSB workload built into CockroachDB to simulate multiple client connections, each performing mixed read/write workloads.
-
-1. In a new terminal, load the initial `ycsb` schema and data, pointing it at HAProxy's port:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload init ycsb \
- 'postgresql://root@localhost:26000?sslmode=disable'
- ~~~
-
-2. Run the `ycsb` workload, pointing it at HAProxy's port:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload run ycsb \
- --duration=20m \
- --concurrency=3 \
- --max-rate=1000 \
- --splits=50 \
- 'postgresql://root@localhost:26000?sslmode=disable'
- ~~~
-
- This command initiates 3 concurrent client workloads for 20 minutes, but limits the total load to 1000 operations per second (since you're running everything on a single machine).
-
- Also, the `--splits` flag tells the workload to manually split ranges a number of times. This is not something you'd normally do, but for the purpose of this training, it makes it easier to visualize the movement of data in the cluster.
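-
-    If you want to see the effect of the pre-splitting, you can list the resulting ranges (expect roughly one row per split):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql \
-      --insecure \
-      --host=localhost:26257 \
-      --execute="SHOW RANGES FROM TABLE ycsb.usertable;"
-    ~~~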
-
-## Step 3. Check the workload
-
-Initially, the workload creates a new database called `ycsb`, creates a `usertable` table in that database, and inserts 10,000 rows into the table. Soon, the load generator starts executing approximately 95% reads and 5% writes.
-
-1. To check the SQL queries getting executed, go back to the Admin UI at http://localhost:8080, click **Metrics** on the left, and hover over the **SQL Queries** graph at the top:
-
-
-
-2. To check the client connections from the load generator, select the **SQL** dashboard and hover over the **SQL Connections** graph:
-
-
-
- You'll notice 3 client connections for the 3 concurrent workloads from the load generator. If you want to check that HAProxy balanced each connection to a different node, you can change the **Graph** dropdown from **Cluster** to each of the nodes. For three of the nodes, you'll see a single client connection.
-
-3. To see more details about the `ycsb` database and `usertable` table, click **Databases** in the upper left and then scroll down until you see **ycsb**:
-
-
-
- You can also view the schema of the `usertable` by clicking the table name:
-
-
-
-## Step 4. Simulate a single node failure
-
-When a node fails, the cluster waits for the node to remain offline for 5 minutes by default before considering it dead. At that point, the cluster automatically repairs itself by re-replicating the data that was on the down node to the other available nodes.
-
-1. In a new terminal, reduce the amount of time the cluster waits before considering a node dead to the minimum allowed of 1 minute and 15 seconds:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26000 \
- --execute="SET CLUSTER SETTING server.time_until_store_dead = '1m15s';"
- ~~~
-
-2. Then use the [`cockroach quit`](../cockroach-quit.html) command to stop node 5:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach quit \
- --insecure \
- --host=localhost:26261
- ~~~
-
-## Step 5. Check load continuity and cluster health
-
-Go back to the Admin UI, click **Metrics** on the left, and verify that the cluster as a whole continues serving data, despite one of the nodes being unavailable and marked as **Suspect**:
-
-
-
-This shows that when all ranges are replicated 3 times (the default), the cluster can tolerate a single node failure because the surviving nodes have a majority of each range's replicas (2/3).
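-
-You can also check node liveness from the command line. This is a quick alternative to the Admin UI, using [`cockroach node status`](../cockroach-node.html) (the exact columns, such as `is_available` and `is_live`, vary by version):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node status --insecure --host=localhost:26257
-~~~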
-
-## Step 6. Watch the cluster repair itself
-
-Scroll down to the **Replicas per Node** graph:
-
-
-
-Because you reduced the time it takes for the cluster to consider the down node dead, after 1 minute or so, you'll see the replica count on nodes 1 through 4 increase. This shows the cluster repairing itself by re-replicating missing replicas.
-
-## Step 7. Prepare for two simultaneous node failures
-
-At this point, the cluster has recovered and is ready to handle another failure. However, the cluster cannot handle two _near-simultaneous_ failures in this configuration. Failures are "near-simultaneous" if they occur within the `server.time_until_store_dead` window plus the time it takes to re-replicate the dead node's replicas. If two such failures occurred in this configuration, some ranges would lose a majority of their replicas and become unavailable until one of the nodes recovered.
-
-To be able to tolerate 2 of 5 nodes failing simultaneously without any service interruption, ranges must be replicated 5 times.
-
-1. Restart node 5, using the same command you used to start the node initially:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node5 \
- --listen-addr=localhost:26261 \
- --http-addr=localhost:8084 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-2. In a new terminal, use the [`ALTER RANGE ... CONFIGURE ZONE`](../configure-zone.html) command to change the cluster's `default` replication factor to 5:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas=5;" --insecure --host=localhost:26000
- ~~~
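-
-    To confirm that the change took effect, you can inspect the zone configuration (the output format varies by version):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --execute="SHOW ZONE CONFIGURATION FOR RANGE default;" --insecure --host=localhost:26000
-    ~~~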
-
-3. Back in the Admin UI **Metrics** dashboard, watch the **Replicas per Node** graph to see how the replica count increases and evens out across all 5 nodes:
-
-
-
- This shows the cluster up-replicating so that each range has 5 replicas, one on each node.
-
-## Step 8. Simulate two simultaneous node failures
-
-1. Use the [`cockroach quit`](../cockroach-quit.html) command to stop nodes 4 and 5:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach quit --insecure --host=localhost:26260
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach quit --insecure --host=localhost:26261
- ~~~
-
-## Step 9. Check load continuity and cluster health
-
-1. Like before, go to the Admin UI, click **Metrics** on the left, and verify that the cluster as a whole continues serving data, despite 2 nodes being offline:
-
-
-
- This shows that when all ranges are replicated 5 times, the cluster can tolerate 2 simultaneous node outages because the surviving nodes have a majority of each range's replicas (3/5).
-
-2. To verify this further, use the `cockroach sql` command to count the number of rows in the `ycsb.usertable` table and verify that it is still serving reads:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SELECT count(*) FROM ycsb.usertable;"
- ~~~
-
- ~~~
- count
- +-------+
- 10000
- (1 row)
- ~~~
-
- And writes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="INSERT INTO ycsb.usertable VALUES ('asdf', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SELECT count(*) FROM ycsb.usertable;"
- ~~~
-
- ~~~
- count
- +-------+
- 10001
- (1 row)
- ~~~
-
-## Step 10. Clean up
-
-In the next module, you'll start a new cluster from scratch, so take a moment to clean things up.
-
-1. Stop all CockroachDB nodes, HAProxy, and the YCSB load generator:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ pkill -9 'cockroach|haproxy|ycsb'
- ~~~
-
- This simplified shutdown process is only appropriate for a lab/evaluation scenario.
-
-2. Remove the nodes' data directories and the HAProxy config:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3 node4 node5 haproxy.cfg
- ~~~
-
-## What's Next?
-
-[Locality and Replication Zones](locality-and-replication-zones.html)
diff --git a/src/archived/training/geo-partitioning.md b/src/archived/training/geo-partitioning.md
deleted file mode 100644
index 732cf0dbce3..00000000000
--- a/src/archived/training/geo-partitioning.md
+++ /dev/null
@@ -1,580 +0,0 @@
----
-title: Geo-Partitioning
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Start a cluster in one US region
-
-Start a cluster like you did previously, using the [`--locality`](../configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) flag to indicate that the nodes are in the `us-east1` region, with each node in a distinct datacenter:
-
-1. Start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-east1,datacenter=us-east1-a \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-east1,datacenter=us-east1-b \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-3. Start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-east1,datacenter=us-east1-c \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-4. Use the [`cockroach init`](../cockroach-init.html) command to perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-## Step 2. Expand into 2 more US regions
-
-Add 6 more nodes, 3 in the `us-west1` region and 3 in the `us-west2` region, with each node in a distinct datacenter:
-
-1. Start node 4:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-west1,datacenter=us-west1-a \
- --store=node4 \
- --listen-addr=localhost:26260 \
- --http-addr=localhost:8083 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start node 5:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-west1,datacenter=us-west1-b \
- --store=node5 \
- --listen-addr=localhost:26261 \
- --http-addr=localhost:8084 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-3. Start node 6:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-west1,datacenter=us-west1-c \
- --store=node6 \
- --listen-addr=localhost:26262 \
- --http-addr=localhost:8085 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-4. Start node 7:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-west2,datacenter=us-west2-a \
- --store=node7 \
- --listen-addr=localhost:26263 \
- --http-addr=localhost:8086 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-5. Start node 8:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-west2,datacenter=us-west2-b \
- --store=node8 \
- --listen-addr=localhost:26264 \
- --http-addr=localhost:8087 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-6. Start node 9:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us-west2,datacenter=us-west2-c \
- --store=node9 \
- --listen-addr=localhost:26265 \
- --http-addr=localhost:8088 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-## Step 3. Enable a trial enterprise license
-
-The table partitioning feature requires an [enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/).
-
-1. [Request a trial enterprise license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). You should receive your trial license via email within a few minutes.
-
-2. Enable your trial license:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SET CLUSTER SETTING cluster.organization = '';"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SET CLUSTER SETTING enterprise.license = '';"
- ~~~
-
-
-## Step 4. Load the MovR dataset
-
-Now you'll import data representing users, vehicles, and rides for the fictional vehicle-sharing app, [MovR](../movr.html).
-
-1. Use the [`cockroach workload`](../cockroach-workload.html) command:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload init movr --num-users=5000 --num-rides=50000 --num-vehicles=500
- ~~~
-
- This command creates the `movr` database with six tables: `users`, `vehicles`, `rides`, `promo_codes`, `vehicle_location_histories`, and `user_promo_codes`. The [`--num`](../cockroach-workload.html#movr-workload) flags specify a larger quantity of data to generate for the `users`, `rides`, and `vehicles` tables.
-
-
-2. Start the [built-in SQL shell](../cockroach-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure --host=localhost
- ~~~
-
-3. Use [`SHOW TABLES`](../show-tables.html) to verify that `cockroach workload` created the `movr` tables:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SHOW TABLES FROM movr;
- ~~~
- ~~~
- table_name
- +----------------------------+
- promo_codes
- rides
- user_promo_codes
- users
- vehicle_location_histories
- vehicles
- (6 rows)
- ~~~
-
-## Step 5. Check data distribution before partitioning
-
-At this point, the data for the largest MovR tables (`users`, `rides`, and `vehicles`) is evenly distributed across all three localities. For example, let's check where the replicas of the `vehicles` and `users` tables are located:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW RANGES FROM TABLE vehicles;
-~~~
-
-~~~
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
-+-----------+---------+----------+---------------+--------------+---------------------------------------+----------+---------------------------------------------------------------------------------------------------------------------------+
- NULL | NULL | 26 | 0.123054 | 2 | region=us-east1,datacenter=us-east1-b | {2,5,7} | {"region=us-east1,datacenter=us-east1-b","region=us-west1,datacenter=us-west1-b","region=us-west2,datacenter=us-west2-a"}
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW RANGES FROM TABLE users;
-~~~
-
-~~~
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
-+-----------+---------+----------+---------------+--------------+---------------------------------------+----------+---------------------------------------------------------------------------------------------------------------------------+
- NULL | NULL | 25 | 0.554324 | 3 | region=us-east1,datacenter=us-east1-c | {3,6,9} | {"region=us-east1,datacenter=us-east1-c","region=us-west1,datacenter=us-west1-c","region=us-west2,datacenter=us-west2-c"}
-(1 row)
-~~~
-
-Note: you may need to run `USE movr;` first to set the database context. For added clarity, here's a key showing how nodes map to localities:
-
-Node ID | Region | Datacenter
---------|--------|-----------
-1 | `us-east1` | `us-east1-a`
-2 | `us-east1` | `us-east1-b`
-3 | `us-east1` | `us-east1-c`
-4 | `us-west1` | `us-west1-a`
-5 | `us-west1` | `us-west1-b`
-6 | `us-west1` | `us-west1-c`
-7 | `us-west2` | `us-west2-a`
-8 | `us-west2` | `us-west2-b`
-9 | `us-west2` | `us-west2-c`
-
-In this case, for both the single range containing `vehicles` data and the single range containing `users` data, there is one replica in each region, and both leaseholders happen to be in the `us-east1` region. Because the cluster balances leases dynamically, your own output may differ.
-
-## Step 6. Consider performance before partitioning
-
-In a real deployment, with nodes truly distributed across 3 regions of the US, having the MovR data evenly spread out would mean that reads and writes would often bounce back and forth across the country, causing high read and write latencies.
-
-### Reads
-
-For example, imagine you are a MovR administrator in San Francisco, and you want to get the IDs and descriptions of all San Francisco-based bikes that are currently in use. You issue the following query to one of the nodes in the `us-west2` region:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, ext FROM vehicles
-WHERE city = 'san francisco' AND type = 'bike' AND status = 'in_use';
-~~~
-
-All requests initially go to the leaseholder for the relevant range. As you saw earlier, the leaseholder for the single range of the `vehicles` table is in the `us-east1` region, so in this case, the following would happen:
-
-1. The node receiving the request (the gateway node) in the `us-west2` region would route the request to the node in the `us-east1` region with the leaseholder.
-
-2. The leaseholder node would execute the query and return the data to the gateway node.
-
-3. The gateway node would return the data to the client.
-
-In summary, this simple read request has to travel back and forth across the entire country.
-
-### Writes
-
-The geographic distribution of the MovR data is even more likely to impact write performance. For example, imagine that a user in New York and a user in Seattle want to create new MovR accounts:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users
-VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users
-VALUES (gen_random_uuid(), 'seattle', 'Seattler', '111 East Street', '1736352379937347');
-~~~
-
-Suppose that for the single range containing `users` data, one replica is in each region, with the leaseholder in the `us-west1` region (as noted above, lease placement varies, so your leaseholder may be elsewhere). This means that:
-
-- When creating the user in Seattle, the request doesn't have to leave the region to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-east1` (New York, Boston, Washington DC) or `us-west2` (Los Angeles, San Francisco) before committing and then returning confirmation to the client.
-
-- When creating the user in New York, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-west1`. It then has to wait for confirmation from either the replica in `us-east1` (New York, Boston, Washington DC) or `us-west2` (Los Angeles, San Francisco) before committing and then returning confirmation to the client back in the west.
-
-## Step 7. Partition data by city
-
-For this service, the most effective technique for improving read and write latency is to geo-partition the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges.
-
-1. Partition the `users` table by city:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE users
- PARTITION BY LIST (city) (
- PARTITION new_york VALUES IN ('new york'),
- PARTITION boston VALUES IN ('boston'),
- PARTITION washington_dc VALUES IN ('washington dc'),
- PARTITION seattle VALUES IN ('seattle'),
- PARTITION san_francisco VALUES IN ('san francisco'),
- PARTITION los_angeles VALUES IN ('los angeles')
- );
- ~~~
-
-2. Partition the `vehicles` table by city:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE vehicles
- PARTITION BY LIST (city) (
- PARTITION new_york VALUES IN ('new york'),
- PARTITION boston VALUES IN ('boston'),
- PARTITION washington_dc VALUES IN ('washington dc'),
- PARTITION seattle VALUES IN ('seattle'),
- PARTITION san_francisco VALUES IN ('san francisco'),
- PARTITION los_angeles VALUES IN ('los angeles')
- );
- ~~~
-
-3. Partition the `rides` table by city:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE rides
- PARTITION BY LIST (city) (
- PARTITION new_york VALUES IN ('new york'),
- PARTITION boston VALUES IN ('boston'),
- PARTITION washington_dc VALUES IN ('washington dc'),
- PARTITION seattle VALUES IN ('seattle'),
- PARTITION san_francisco VALUES IN ('san francisco'),
- PARTITION los_angeles VALUES IN ('los angeles')
- );
- ~~~
-
-{{site.data.alerts.callout_info}}
-You didn't create any secondary indexes on your MovR tables. However, if you had, it would be important to partition the secondary indexes as well.
-{{site.data.alerts.end}}
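-
-For illustration only, if you had created a hypothetical secondary index `users_name_idx` on `users (city, name)`, partitioning it would look like this (a sketch; this index does not exist in this lab):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER INDEX users@users_name_idx
-PARTITION BY LIST (city) (
-    PARTITION new_york_idx VALUES IN ('new york'),
-    PARTITION seattle_idx VALUES IN ('seattle')
-);
-~~~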
-
-## Step 8. Pin partitions close to users
-
-With the data partitioned by city, you can now use [replication zones](../configure-replication-zones.html#create-a-replication-zone-for-a-partition) to require that city data be stored on specific nodes based on locality:
-
-City | Locality
------|---------
-New York | `region=us-east1`
-Boston | `region=us-east1`
-Washington DC | `region=us-east1`
-Seattle | `region=us-west1`
-San Francisco | `region=us-west2`
-Los Angeles | `region=us-west2`
-
-1. Start with the `users` table partitions:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER PARTITION new_york OF TABLE movr.users
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION boston OF TABLE movr.users
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION washington_dc OF TABLE movr.users
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION seattle OF TABLE movr.users
- CONFIGURE ZONE USING constraints='[+region=us-west1]';
-
- > ALTER PARTITION san_francisco OF TABLE movr.users
- CONFIGURE ZONE USING constraints='[+region=us-west2]';
-
- > ALTER PARTITION los_angeles OF TABLE movr.users
- CONFIGURE ZONE USING constraints='[+region=us-west2]';
- ~~~
-
-2. Move on to the `vehicles` table partitions:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER PARTITION new_york OF TABLE movr.vehicles
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION boston OF TABLE movr.vehicles
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION washington_dc OF TABLE movr.vehicles
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION seattle OF TABLE movr.vehicles
- CONFIGURE ZONE USING constraints='[+region=us-west1]';
-
- > ALTER PARTITION san_francisco OF TABLE movr.vehicles
- CONFIGURE ZONE USING constraints='[+region=us-west2]';
-
- > ALTER PARTITION los_angeles OF TABLE movr.vehicles
- CONFIGURE ZONE USING constraints='[+region=us-west2]';
- ~~~
-
-3. Finish with the `rides` table partitions:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER PARTITION new_york OF TABLE movr.rides
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION boston OF TABLE movr.rides
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION washington_dc OF TABLE movr.rides
- CONFIGURE ZONE USING constraints='[+region=us-east1]';
-
- > ALTER PARTITION seattle OF TABLE movr.rides
- CONFIGURE ZONE USING constraints='[+region=us-west1]';
-
- > ALTER PARTITION san_francisco OF TABLE movr.rides
- CONFIGURE ZONE USING constraints='[+region=us-west2]';
-
- > ALTER PARTITION los_angeles OF TABLE movr.rides
- CONFIGURE ZONE USING constraints='[+region=us-west2]';
- ~~~
-
-{{site.data.alerts.callout_info}}
-If you had created any secondary index partitions, it would be important to create replication zones for each such partition as well.
-{{site.data.alerts.end}}
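-
-Continuing the hypothetical `users_name_idx` example from Step 7, pinning one of its partitions would look like this (again, a sketch only):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER PARTITION new_york_idx OF INDEX movr.users@users_name_idx
-CONFIGURE ZONE USING constraints='[+region=us-east1]';
-~~~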
-
-## Step 9. Check data distribution after partitioning
-
-Over the next few minutes, CockroachDB will rebalance all partitions based on the constraints you defined.
-
-To check this, run the `SHOW RANGES` statement on the `vehicles` and `users` tables:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM [SHOW RANGES FROM TABLE vehicles]
-WHERE "start_key" NOT LIKE '%Prefix%';
-~~~
-
-~~~
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
-+------------------+----------------------------+----------+---------------+--------------+---------------------------------------+----------+---------------------------------------------------------------------------------------------------------------------------+
- /"boston" | /"boston"/PrefixEnd | 67 | 0.000144 | 1 | region=us-east1,datacenter=us-east1-a | {1,2,3} | {"region=us-east1,datacenter=us-east1-a","region=us-east1,datacenter=us-east1-b","region=us-east1,datacenter=us-east1-c"}
- /"washington dc" | /"washington dc"/PrefixEnd | 69 | 0.000151 | 1 | region=us-east1,datacenter=us-east1-a | {1,2,3} | {"region=us-east1,datacenter=us-east1-a","region=us-east1,datacenter=us-east1-b","region=us-east1,datacenter=us-east1-c"}
- /"new york" | /"new york"/PrefixEnd | 65 | 0.000304 | 2 | region=us-east1,datacenter=us-east1-b | {1,2,3} | {"region=us-east1,datacenter=us-east1-a","region=us-east1,datacenter=us-east1-b","region=us-east1,datacenter=us-east1-c"}
- /"seattle" | /"seattle"/PrefixEnd | 71 | 0.000167 | 5 | region=us-west1,datacenter=us-west1-b | {4,5,6} | {"region=us-west1,datacenter=us-west1-a","region=us-west1,datacenter=us-west1-b","region=us-west1,datacenter=us-west1-c"}
- /"los angeles" | /"los angeles"/PrefixEnd | 75 | 0.000158 | 8 | region=us-west2,datacenter=us-west2-b | {7,8,9} | {"region=us-west2,datacenter=us-west2-a","region=us-west2,datacenter=us-west2-b","region=us-west2,datacenter=us-west2-c"}
- /"san francisco" | /"san francisco"/PrefixEnd | 73 | 0.000307 | 8 | region=us-west2,datacenter=us-west2-b | {7,8,9} | {"region=us-west2,datacenter=us-west2-a","region=us-west2,datacenter=us-west2-b","region=us-west2,datacenter=us-west2-c"}
-(6 rows)
-~~~
-
-{{site.data.alerts.callout_info}}
-The `WHERE` clause in this query excludes the empty ranges between the city ranges. These empty ranges use the default replication zone configuration, not the zone configuration you set for the cities.
-{{site.data.alerts.end}}
-
-For added clarity, here's a key showing how nodes map to datacenters and cities:
-
-Node IDs | Region | Cities
----------|--------|-------
-1 - 3 | `region=us-east1` | New York, Boston, Washington DC
-4 - 6 | `region=us-west1` | Seattle
-7 - 9 | `region=us-west2` | San Francisco, Los Angeles
-
-We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1`, replicas for Seattle are located on nodes 4-6 in `us-west1`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2`.
-
-The same data distribution is in place for the partitions of other tables as well. For example, here's the `users` table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM [SHOW RANGES FROM TABLE users]
-WHERE "start_key" IS NOT NULL AND "start_key" NOT LIKE '%Prefix%';
-~~~
-
-~~~
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
-+------------------+----------------------------+----------+---------------+--------------+---------------------------------------+----------+---------------------------------------------------------------------------------------------------------------------------+
- /"washington dc" | /"washington dc"/PrefixEnd | 49 | 0.000468 | 2 | region=us-east1,datacenter=us-east1-b | {1,2,3} | {"region=us-east1,datacenter=us-east1-a","region=us-east1,datacenter=us-east1-b","region=us-east1,datacenter=us-east1-c"}
- /"boston" | /"boston"/PrefixEnd | 47 | 0.000438 | 3 | region=us-east1,datacenter=us-east1-c | {1,2,3} | {"region=us-east1,datacenter=us-east1-a","region=us-east1,datacenter=us-east1-b","region=us-east1,datacenter=us-east1-c"}
- /"new york" | /"new york"/PrefixEnd | 45 | 0.000553 | 3 | region=us-east1,datacenter=us-east1-c | {1,2,3} | {"region=us-east1,datacenter=us-east1-a","region=us-east1,datacenter=us-east1-b","region=us-east1,datacenter=us-east1-c"}
- /"seattle" | /"seattle"/PrefixEnd | 51 | 0.00044 | 4 | region=us-west1,datacenter=us-west1-a | {4,5,6} | {"region=us-west1,datacenter=us-west1-a","region=us-west1,datacenter=us-west1-b","region=us-west1,datacenter=us-west1-c"}
- /"los angeles" | /"los angeles"/PrefixEnd | 55 | 0.000457 | 7 | region=us-west2,datacenter=us-west2-a | {7,8,9} | {"region=us-west2,datacenter=us-west2-a","region=us-west2,datacenter=us-west2-b","region=us-west2,datacenter=us-west2-c"}
- /"san francisco" | /"san francisco"/PrefixEnd | 53 | 0.000437 | 7 | region=us-west2,datacenter=us-west2-a | {7,8,9} | {"region=us-west2,datacenter=us-west2-a","region=us-west2,datacenter=us-west2-b","region=us-west2,datacenter=us-west2-c"}
-(6 rows)
-~~~
-
-## Step 10. Consider performance after partitioning
-
-After partitioning, reads and writes for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city. To think this through, let's reconsider the read and write examples from before partitioning.
-
-### Reads
-
-Once again, imagine you are a MovR administrator in San Francisco, and you want to get the IDs and descriptions of all San Francisco-based bikes that are currently in use. You issue the following query to one of the nodes in the `us-west2` region:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, ext FROM vehicles
-WHERE city = 'san francisco' AND type = 'bike' AND status = 'in_use';
-~~~
-
-- Before partitioning, the leaseholder for the `vehicles` table was in the `us-east1` region, causing the request to travel back and forth across the entire country.
-
-- Now, as you saw above, the leaseholder for the San Francisco partition of the `vehicles` table is in the `us-west2` region. This means that the read request does not need to leave the region.
-
-### Writes
-
-Now once again imagine that a user in Seattle and a user in New York want to create new MovR accounts.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users
-VALUES (gen_random_uuid(), 'seattle', 'Seattler', '111 East Street', '1736352379937347');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users
-VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347');
-~~~
-
-- Before partitioning, the leaseholder wasn't necessarily in the same region as the node receiving the request, and replicas required to reach consensus were spread across all regions, causing increased latency.
-
-- Now, as you saw above, all 3 replicas for the Seattle partition of the `users` table are in the `us-west1` region, and all 3 replicas for the New York partition of the `users` table are in the `us-east1` region. This means that the write requests do not need to leave their respective regions to achieve consensus and commit.
-
-## Step 11. Clean up
-
-In the next module, you'll start with a fresh cluster, so take a moment to clean things up.
-
-1. Exit the SQL shell:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-2. Stop all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
- This simplified shutdown process is only appropriate for a lab/evaluation scenario.
-
-3. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3 node4 node5 node6 node7 node8 node9
- ~~~
-
-## What's next?
-
-[Orchestration with Kubernetes](orchestration-with-kubernetes.html)
diff --git a/src/archived/training/how-cockroach-labs-debugs.md b/src/archived/training/how-cockroach-labs-debugs.md
deleted file mode 100644
index bc6a46e043d..00000000000
--- a/src/archived/training/how-cockroach-labs-debugs.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: How Cockroach Labs Debugs
-toc: false
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-## What's next?
-
-[How to Get Support](how-to-get-support.html)
diff --git a/src/archived/training/how-to-get-support.md b/src/archived/training/how-to-get-support.md
deleted file mode 100644
index b29eb1c79cd..00000000000
--- a/src/archived/training/how-to-get-support.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: How to Get Support
-toc: false
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-When you encounter a problem that you cannot troubleshoot yourself (e.g., data corruption or software panic), [file an issue in the `cockroach` GitHub repository](https://github.com/cockroachdb/cockroach/issues/new) and include the following details.
-
-## Description of the problem
-
-- What happened?
-- What did you expect to happen?
-
-## Steps to reproduce
-
-Make these as granular and precise as possible.
-
-## Screenshots
-
-If any Admin UI graphs or Debug pages show the problem, include screenshots.
-
-## Debug zip of active nodes
-
-Use the [`cockroach debug zip`](../cockroach-debug-zip.html) command to create a single file with the following details from all active nodes in your cluster:
-
-- Log files
-- Schema change events
-- Node liveness
-- Gossip data
-- Stack traces
-- Range lists
-- A list of databases and tables
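-
-For example, assuming an insecure cluster listening locally on port 26257, a minimal invocation might look like this:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach debug zip ./crdb-debug.zip --insecure --host=localhost:26257
-~~~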
-
-## Logs of offline nodes
-
-If any nodes are down, manually collect the logs of the down nodes, zip them up, and include them.
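-
-For example, if a down node's data directory is `node5` and you didn't override the log directory, its logs live under `node5/logs` by default, and you could collect them like this:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ zip -r node5-logs.zip node5/logs
-~~~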
diff --git a/src/archived/training/index.md b/src/archived/training/index.md
deleted file mode 100644
index e0c76fbea92..00000000000
--- a/src/archived/training/index.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: CockroachDB Training
-summary: Learn how to use CockroachDB for your applications
-toc: false
-feedback: false
-sidebar_data: sidebar-data-training.json
----
-
-
-
-
-
-This training introduces you to the fundamentals of CockroachDB, with an emphasis on:
-
-- **Understanding the architecture**
-- **Operational basics**
-
-The modules build on each other, so it's important to complete them in order. As you go, feel free to ask questions on our public [CockroachDB Community Slack](https://cockroachdb.slack.com) or [support forum](https://forum.cockroachlabs.com/).
-
-
-
-## What's first?
-
-[Why CockroachDB?](why-cockroachdb.html)
diff --git a/src/archived/training/locality-and-replication-zones.md b/src/archived/training/locality-and-replication-zones.md
deleted file mode 100644
index fd82446e6e1..00000000000
--- a/src/archived/training/locality-and-replication-zones.md
+++ /dev/null
@@ -1,406 +0,0 @@
----
-title: Locality and Replication Zones
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Start a cluster in a single US region
-
-Start a cluster like you did previously, but this time use the [`--locality`](../configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) flag to indicate that the nodes are all in a datacenter in the Eastern region of the US.
-
-{{site.data.alerts.callout_info}}
-To simplify the process of running multiple nodes on your local computer, you'll start them in the [background](../cockroach-start.html#general) instead of in separate terminals.
-{{site.data.alerts.end}}
-
-1. In a new terminal, start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-east \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-east \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-3. Start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-east \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-4. Use the [`cockroach init`](../cockroach-init.html) command to perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-## Step 2. Check data distribution
-
-By default, CockroachDB tries to balance data evenly across specified "localities". At this point, since all three of the initial nodes have the same locality, the data is distributed across the 3 nodes. This means that for each range, one replica is on each node.
-
-To check this, open the Web UI at http://localhost:8080, view the **Node List**, and check that the replica count is the same on all nodes.
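-
-You can also check replica counts from the command line with the `--ranges` flag of [`cockroach node status`](../cockroach-node.html) (a quick sketch; the exact columns vary by version):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node status --ranges --insecure --host=localhost:26257
-~~~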
-
-## Step 3. Expand into 2 more US regions
-
-Add 6 more nodes, this time using the [`--locality`](../configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) flag to indicate that 3 nodes are in the Central region and 3 nodes are in the Western region of the US.
-
-1. In a new terminal, start node 4:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-central \
- --store=node4 \
- --listen-addr=localhost:26260 \
- --http-addr=localhost:8083 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start node 5:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-central \
- --store=node5 \
- --listen-addr=localhost:26261 \
- --http-addr=localhost:8084 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-3. Start node 6:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-central \
- --store=node6 \
- --listen-addr=localhost:26262 \
- --http-addr=localhost:8085 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
- You started nodes 4, 5, and 6 in the Central region.
-
-4. Start node 7:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-west \
- --store=node7 \
- --listen-addr=localhost:26263 \
- --http-addr=localhost:8086 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-5. Start node 8:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-west \
- --store=node8 \
- --listen-addr=localhost:26264 \
- --http-addr=localhost:8087 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-6. Start node 9:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=us,datacenter=us-west \
- --store=node9 \
- --listen-addr=localhost:26265 \
- --http-addr=localhost:8088 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
- You started nodes 7, 8, and 9 in the West region.
-
-## Step 4. Write data and verify data distribution
-
-Now that there are 3 distinct localities in the cluster, the cluster will automatically ensure that, for every range, one replica is on a node in `us-east`, one is on a node in `us-central`, and one is on a node in `us-west`.
-
-To check this, let's create a table, which initially maps to a single underlying range, and check where the replicas of the range end up.
-
-1. Use the `cockroach gen` command to generate an example `intro` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach gen example-data intro | cockroach sql \
- --insecure \
- --host=localhost:26257
- ~~~
-
-2. Use the `cockroach sql` command to verify that the table was added:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SELECT * FROM intro.mytable WHERE (l % 2) = 0;"
- ~~~
-
- ~~~
- l | v
- +----+------------------------------------------------------+
- 0 | !__aaawwmqmqmwwwaas,,_ .__aaawwwmqmqmwwaaa,,
- 2 | !"VT?!"""^~~^"""??T$Wmqaa,_auqmWBT?!"""^~~^^""??YV^
- 4 | ! "?##mW##?"-
- 6 | ! C O N G R A T S _am#Z??A#ma, Y
- 8 | ! _ummY" "9#ma, A
- 10 | ! vm#Z( )Xmms Y
- 12 | ! .j####mmm#####mm#m##6.
- 14 | ! W O W ! jmm###mm######m#mmm##6
- 16 | ! ]#me*Xm#m#mm##m#m##SX##c
- 18 | ! dm#||+*$##m#mm#m#Svvn##m
- 20 | ! :mmE=|+||S##m##m#1nvnnX##; A
- 22 | ! :m#h+|+++=Xmm#m#1nvnnvdmm; M
- 24 | ! Y $#m>+|+|||##m#1nvnnnnmm# A
- 26 | ! O ]##z+|+|+|3#mEnnnnvnd##f Z
- 28 | ! U D 4##c|+|+|]m#kvnvnno##P E
- 30 | ! I 4#ma+|++]mmhvnnvq##P` !
- 32 | ! D I ?$#q%+|dmmmvnnm##!
- 34 | ! T -4##wu#mm#pw##7'
- 36 | ! -?$##m####Y'
- 38 | ! !! "Y##Y"-
- 40 | !
- (21 rows)
- ~~~
-
-3. Use the `SHOW RANGES` SQL command to find the IDs of the nodes where the new table's replicas ended up:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW RANGES FROM TABLE intro.mytable;"
- ~~~
-
- ~~~
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
-+-----------+---------+----------+---------------+--------------+------------------------------+----------+---------------------------------------------------------------------------------------------------+
- NULL | NULL | 45 | 0.003054 | 9 | region=us,datacenter=us-west | {2,4,9} | {"region=us,datacenter=us-east","region=us,datacenter=us-central","region=us,datacenter=us-west"}
-(1 row)
- ~~~
-
-## Step 5. Expand into Europe
-
-Let's say your user-base has expanded into Europe and you want to store data there. To do so, add 3 more nodes, this time using the [`--locality`](../configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) flag to indicate that nodes are in the Western region of Europe.
-
-1. Start node 10:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=eu,datacenter=eu-west \
- --store=node10 \
- --listen-addr=localhost:26266 \
- --http-addr=localhost:8089 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start node 11:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=eu,datacenter=eu-west \
- --store=node11 \
- --listen-addr=localhost:26267 \
- --http-addr=localhost:8090 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-3. Start node 12:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --locality=region=eu,datacenter=eu-west \
- --store=node12 \
- --listen-addr=localhost:26268 \
- --http-addr=localhost:8091 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-## Step 6. Add EU-specific data
-
-Now imagine that the `intro` database you created earlier is storing data for a US-based application, and you want a completely separate database to store data for an EU-based application.
-
-1. Use the `cockroach gen` command to generate an example `startrek` database with 2 tables, `episodes` and `quotes`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach gen example-data startrek | cockroach sql \
- --insecure \
- --host=localhost:26257
- ~~~
-
-2. Use the `cockroach sql` command to verify that the tables were added:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SELECT * FROM startrek.episodes LIMIT 5;" \
- --execute="SELECT quote FROM startrek.quotes WHERE characters = 'Spock and Kirk';"
- ~~~
-
- ~~~
- id | season | num | title | stardate
- +----+--------+-----+------------------------------+-------------+
- 1 | 1 | 1 | The Man Trap | 1531.100000
- 2 | 1 | 2 | Charlie X | 1533.600000
- 3 | 1 | 3 | Where No Man Has Gone Before | 1312.400000
- 4 | 1 | 4 | The Naked Time | 1704.200000
- 5 | 1 | 5 | The Enemy Within | 1672.100000
- (5 rows)
- quote
- +--------------------------------------------+
- "Beauty is transitory." "Beauty survives."
- (1 row)
- ~~~
-
-## Step 7. Constrain data to specific regions
-
-Because you used the `--locality` flag to indicate the region for each of your nodes, constraining data to specific regions is simple.
-
-1. Use the [`ALTER DATABASE ... CONFIGURE ZONE`](../configure-zone.html) statement to create a replication zone for the `startrek` database, forcing all the data in the database to be located on EU-based nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER DATABASE startrek CONFIGURE ZONE USING constraints='[+region=eu]';" --insecure --host=localhost:26257
- ~~~
-
-2. Use the [`ALTER DATABASE ... CONFIGURE ZONE`](../configure-zone.html) statement to create a distinct replication zone for the `intro` database, forcing all the data in the database to be located on US-based nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER DATABASE intro CONFIGURE ZONE USING constraints='[+region=us]';" --insecure --host=localhost:26257
- ~~~
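-
-    Before verifying actual data placement in the next step, you can double-check both constraints by inspecting the zone configurations (output format varies by version):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql \
-      --insecure \
-      --host=localhost:26257 \
-      --execute="SHOW ZONE CONFIGURATION FOR DATABASE startrek;" \
-      --execute="SHOW ZONE CONFIGURATION FOR DATABASE intro;"
-    ~~~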
-
-## Step 8. Verify data distribution
-
-Now verify that the data for the table in the `intro` database is located on US-based nodes, and the data for the tables in the `startrek` database is located on EU-based nodes.
-
-1. Find the IDs of the nodes where replicas are stored for the `intro.mytable`, `startrek.episodes`, and `startrek.quotes` tables:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
-      --host=localhost:26257 \
- --execute="SHOW RANGES FROM TABLE intro.mytable;" \
- --execute="SHOW RANGES FROM TABLE startrek.episodes;" \
- --execute="SHOW RANGES FROM TABLE startrek.quotes;"
- ~~~
-
- Note: your result set will differ slightly from ours.
-
- ~~~
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
- +-----------+---------+----------+---------------+--------------+---------------------------------+----------+---------------------------------------------------------------------------------------------------+
- NULL | NULL | 45 | 0.003054 | 5 | region=us,datacenter=us-central | {3,5,8} | {"region=us,datacenter=us-east","region=us,datacenter=us-central","region=us,datacenter=us-west"}
- (1 row)
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
- +-----------+---------+----------+---------------+--------------+------------------------------+----------+---------------------------------------------------------------------------------------------------+
- NULL | NULL | 46 | 0.004276 | 8 | region=us,datacenter=us-west | {3,5,8} | {"region=us,datacenter=us-east","region=us,datacenter=us-central","region=us,datacenter=us-west"}
- (1 row)
- start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
- +-----------+---------+----------+---------------+--------------+---------------------------------+----------+---------------------------------------------------------------------------------------------------+
- NULL | NULL | 47 | 0.03247 | 5 | region=us,datacenter=us-central | {3,5,8} | {"region=us,datacenter=us-east","region=us,datacenter=us-central","region=us,datacenter=us-west"}
- (1 row)
- ~~~
-
-{{site.data.alerts.callout_info}}
-You can also use the Web UI's Data Distribution matrix to view the distribution of data across nodes.
-{{site.data.alerts.end}}
-
-## Step 9. Clean up
-
-Take a moment to clean things up.
-
-1. Stop all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
- This simplified shutdown process is only appropriate for a lab/evaluation scenario.
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node{1,2,3,4,5,6,7,8,9,10,11,12}
- ~~~
-
-## What's next?
-
-[Geo-Partitioning](geo-partitioning.html)
diff --git a/src/archived/training/logs.md b/src/archived/training/logs.md
deleted file mode 100644
index 45fbdc34bbe..00000000000
--- a/src/archived/training/logs.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Debug Logs
-toc: false
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-## What's next?
-
-[Node Startup Troubleshooting](node-startup-troubleshooting.html)
diff --git a/src/archived/training/monitoring-and-alerting.md b/src/archived/training/monitoring-and-alerting.md
deleted file mode 100644
index 1da08be725e..00000000000
--- a/src/archived/training/monitoring-and-alerting.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Monitoring and Alerting
-toc: false
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-## What's next?
-
-[Debug Logs](logs.html)
diff --git a/src/archived/training/network-partition-troubleshooting.md b/src/archived/training/network-partition-troubleshooting.md
deleted file mode 100644
index 0eb6d94ca55..00000000000
--- a/src/archived/training/network-partition-troubleshooting.md
+++ /dev/null
@@ -1,185 +0,0 @@
----
-title: Network Partition Troubleshooting
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
----
-
-
-
-
-
-## Before you begin
-
-This lab runs the cluster in Docker so that you can simulate a network partition between datacenters. You will need [Docker Compose](https://docs.docker.com/compose/install/) installed on your local machine, so if you do not have it, you may want to just observe this one.
-
-## Step 1. Create a cluster in Docker across 3 simulated datacenters
-
-1. Download the Docker Compose file that defines a 6-node CockroachDB cluster spread across 3 separate networks:
-
-
-
-2. Create the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ COCKROACH_VERSION={{ page.release_info.version }} docker-compose up
-    ~~~
-
-3. In a new terminal, initialize the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ docker exec -it roach-0 /cockroach/cockroach init --insecure
-    ~~~
-
-4. Verify that the cluster is working by opening the Admin UI at http://localhost:8080.
-
-## Step 2. Create a partition in the network
-
-1. Disconnect the nodes in `dc-2` from the shared network backbone:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ docker network disconnect cockroachdb-training-shared roach-4
-    ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ docker network disconnect cockroachdb-training-shared roach-5
-    ~~~
-
-## Step 3. Troubleshoot the problem
-
-1. The Admin UI should now show that 2 of the nodes in the cluster have changed from "Healthy" to "Suspect":
-
-
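-    For a command-line view of the same information, you can also run `cockroach node status` from one of the healthy nodes, reusing the `docker exec` pattern from Step 1 (a quick sanity check; output omitted here):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ docker exec -it roach-0 /cockroach/cockroach node status --insecure
-    ~~~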
-
-2. Check whether the "Suspect" nodes are still running by hitting their `/health` endpoints:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl localhost:8085/health
-    ~~~
-
- ~~~
- {
- "nodeId": 3,
- "address": {
- "networkField": "tcp",
- "addressField": "roach-5:26257"
- },
- "buildInfo": {
- "goVersion": "go1.10",
- "tag": "v2.0.0",
- "time": "2018/04/03 20:56:09",
- "revision": "a6b498b7aff14234bcde23107b9e7fa14e6a34a8",
- "cgoCompiler": "gcc 6.3.0",
- "cgoTargetTriple": "x86_64-unknown-linux-gnu",
- "platform": "linux amd64",
- "distribution": "CCL",
- "type": "release",
- "channel": "official-binary",
- "dependencies": null
- }
- }
- ~~~
-
-3. Check whether the "Suspect" nodes consider themselves live by hitting their `/_admin/v1/health` endpoints:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl localhost:8085/_admin/v1/health
-    ~~~
-
- ~~~
- {
- "error": "node is not live",
- "code": 14
- }
- ~~~
-
-4. Check the logs of the downed nodes for clues. You should find errors like "Error while dialing", "no such host", and "the connection is unavailable":
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ docker logs roach-5
-    ~~~
-
- ~~~
- ...
- I180213 20:38:14.728914 211 vendor/google.golang.org/grpc/grpclog/grpclog.go:75 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: lookup roach-1 on 127.0.0.11:53: no such host"; Reconnecting to {roach-1:26257 }
- ...
- W180213 20:38:20.093309 286 storage/node_liveness.go:342 [n5,hb] failed node liveness heartbeat: failed to send RPC: sending to all 3 replicas failed; last error: { rpc error: code = Unavailable desc = grpc: the connection is unavailable}
- ~~~
-
-5. Check whether the majority nodes are able to talk to the minority nodes at all by looking at the network latency debug page at http://localhost:8080/#/reports/network:
-
-
-
-6. If you really want to confirm that the network isn't working, try manually pinging a node in `dc-2` from a node in `dc-0`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ docker exec -t roach-0 ping roach-5
-    ~~~
-
- ~~~
- ping: unknown host
- ~~~
-
-## Step 4. Fix the partition
-
-1. Reconnect the nodes in `dc-2` to the shared network backbone:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ docker network connect cockroachdb-training-shared roach-4
-    ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ docker network connect cockroachdb-training-shared roach-5
-    ~~~
-
-2. After a few seconds, you should see the nodes return to "Healthy" in the Admin UI.
-
-## Step 5. Clean up
-
-You will not be using this Docker cluster in any other labs, so take a moment to clean things up.
-
-1. In the terminal where you ran `docker-compose up`, press **CTRL-C** to stop all the CockroachDB nodes.
-
-2. Delete all Docker resources created by the tutorial:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ docker-compose down
- ~~~
-
-## What's next?
-
-[How Cockroach Labs Debugs](how-cockroach-labs-debugs.html)
diff --git a/src/archived/training/node-decommissioning.md b/src/archived/training/node-decommissioning.md
deleted file mode 100644
index fb9b87e8645..00000000000
--- a/src/archived/training/node-decommissioning.md
+++ /dev/null
@@ -1,119 +0,0 @@
----
-title: Node Decommissioning
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
-
----
-
-
-
-
-
-## Before you begin
-
-Make sure you have already completed [Planned Maintenance](planned-maintenance.html) and have 3 nodes running locally.
-
-## Step 1. Try to decommission a node
-
-Run the `cockroach quit` command with the `--decommission` flag against node 3:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach quit \
---insecure \
---decommission \
---host=localhost:26259
-~~~
-
-{{site.data.alerts.callout_info}}
-For the purposes of this training, you use the `cockroach quit` command with the `--decommission` flag. However, in production, you'd use `cockroach node decommission` and then instruct your process manager to end the process.
-{{site.data.alerts.end}}
-
-Because the cluster has 3 nodes, with every range on every node, it is not possible to rebalance node 3's data, so the decommission process hangs:
-
-~~~
- id | is_live | replicas | is_decommissioning | is_draining
-+----+---------+----------+--------------------+-------------+
- 3 | true | 23 | true | false
-(1 row)
-............
-~~~
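-
-The default replication factor is 3, so with only 3 nodes there is no other node that can receive node 3's replicas. If you want to confirm the replication factor yourself, you can inspect the default zone configuration (a quick check, assuming the cluster from the previous lab is still listening on port 26257):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=localhost:26257 \
---execute="SHOW ZONE CONFIGURATION FOR RANGE default;"
-~~~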
-
-## Step 2. Add a fourth node
-
-In a new terminal, add a fourth node so that node 3's replicas can be rebalanced and the decommission process can complete:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=node4 \
---listen-addr=localhost:26260 \
---http-addr=localhost:8083 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-## Step 3. Verify that decommissioning completed
-
-1. Go back to the terminal where you triggered the decommission process.
-
-    You'll see that, after the fourth node was added, node 3's `replicas` count decreased to 0 and the process completed with a confirmation:
-
- ~~~
-
- id | is_live | replicas | is_decommissioning | is_draining
- +----+---------+----------+--------------------+-------------+
- 3 | true | 4 | true | false
- (1 row)
- ......
- id | is_live | replicas | is_decommissioning | is_draining
- +----+---------+----------+--------------------+-------------+
- 3 | true | 3 | true | false
- (1 row)
- ............
- id | is_live | replicas | is_decommissioning | is_draining
- +----+---------+----------+--------------------+-------------+
- 3 | true | 0 | true | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ok
- ~~~
-
-2. Open the Admin UI at http://localhost:8080, click **Metrics** on the left, and hover over the **Replicas per Node** graph in the **Overview** dashboard.
-
- You'll see that node 3 now has 0 replicas while the other nodes have equal replica counts.
-
-
-
-3. Click **Overview** on the left. About 5 minutes after the decommission process completes, you'll see node 3 listed under **Decommissioned Nodes**.
-
-
-
-## Step 4. Clean up
-
-In the next module, you'll start a new cluster from scratch, so take a moment to clean things up.
-
-1. Stop all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3 node4
- ~~~
-
-## What's next?
-
-[Backup and Restore](backup-and-restore.html)
diff --git a/src/archived/training/node-startup-troubleshooting.md b/src/archived/training/node-startup-troubleshooting.md
deleted file mode 100644
index e2bc5e8e62c..00000000000
--- a/src/archived/training/node-startup-troubleshooting.md
+++ /dev/null
@@ -1,322 +0,0 @@
----
-title: Node Startup Troubleshooting
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
-
----
-
-
-
-
-
-## Problem 1: SSL required
-
-In this scenario, you try to add a node to a secure cluster without providing the node's security certificate. You'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-### Step 1. Generate security certificates
-
-1. Create two directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir certs my-safe-directory
- ~~~
-
-2. Create the CA certificate and key:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-ca \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-3. Create the certificate and key for your nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
- localhost \
- $(hostname) \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-4. Create client certificates and keys for the `root` and `spock` users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- root \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- spock \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-### Step 2. Start a secure 3-node cluster
-
-1. Start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
-    ~~~
-
-2. Start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-3. Start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-4. Perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --certs-dir=certs --host=localhost:26257
- ~~~
-
-### Step 3. Simulate the problem
-
-In the same terminal, try to add another node, but leave out the `--certs-dir` flag:
-
-{{site.data.alerts.callout_info}}
-The `--logtostderr=WARNING` flag will make warnings and errors print to `stderr` so you do not have to manually look in the logs.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---store=node4 \
---listen-addr=localhost:26260 \
---http-addr=localhost:8083 \
---join=localhost:26257,localhost:26258,localhost:26259 \
---logtostderr=WARNING
-~~~
-
-The startup process will fail, and you'll see the following printed to `stderr`:
-
-~~~
-W191203 19:35:14.018995 1 cli/start.go:1046 Using the default setting for --cache (128 MiB).
- A significantly larger value is usually needed for good performance.
- If you have a dedicated server a reasonable setting is --cache=.25 (8.0 GiB).
-W191203 19:35:14.019049 1 cli/start.go:1059 Using the default setting for --max-sql-memory (128 MiB).
- A significantly larger value is usually needed in production.
- If you have a dedicated server a reasonable setting is --max-sql-memory=.25 (8.0 GiB).
-*
-* ERROR: cannot load certificates.
-* Check your certificate settings, set --certs-dir, or use --insecure for insecure clusters.
-*
-* failed to start server: problem with CA certificate: not found
-*
-E191203 19:35:14.137329 1 cli/error.go:233 cannot load certificates.
-Check your certificate settings, set --certs-dir, or use --insecure for insecure clusters.
-
-failed to start server: problem with CA certificate: not found
-Error: cannot load certificates.
-Check your certificate settings, set --certs-dir, or use --insecure for insecure clusters.
-
-failed to start server: problem with CA certificate: not found
-Failed running "start"
-~~~
-
-The error tells you that the failure has to do with security. Because the cluster is secure, it requires the new node to provide its security certificate in order to join.
-
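-If you're unsure whether valid certificates exist on the machine, the `cockroach cert list` command shows the contents of a certificate directory (a quick sanity check before retrying):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert list --certs-dir=certs
-~~~
-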
-### Step 4. Resolve the problem
-
-To successfully join the node to the cluster, start the node again, but this time include the `--certs-dir` flag:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---store=node4 \
---listen-addr=localhost:26260 \
---http-addr=localhost:8083 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-## Problem 2: Wrong join address
-
-In this scenario, you try to add another node to the cluster, but the `--join` address is not pointing at any of the existing nodes.
-
-### Step 1. Simulate the problem
-
-In a new terminal, try to add another node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---store=node5 \
---listen-addr=localhost:26261 \
---http-addr=localhost:8084 \
---join=localhost:20000 \
---logtostderr=WARNING
-~~~
-
-The process will never complete, and you'll see a continuous stream of warnings like this:
-
-~~~
-W180817 17:01:56.506968 886 vendor/google.golang.org/grpc/clientconn.go:942 Failed to dial localhost:20000: grpc: the connection is closing; please retry.
-W180817 17:01:56.510430 914 vendor/google.golang.org/grpc/clientconn.go:1293 grpc: addrConn.createTransport failed to connect to {localhost:20000 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:20000: connect: connection refused". Reconnecting...
-~~~
-
-These warnings tell you that the node cannot establish a connection with the address specified in the `--join` flag. Without a connection to the cluster, the node cannot join.
-
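-One way to double-check which addresses the existing nodes are actually listening on is to list the listening `cockroach` processes (a sketch that assumes `lsof` is available, as it is on most macOS and Linux machines):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ lsof -nP -iTCP -sTCP:LISTEN | grep cockroach
-~~~
-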
-### Step 2. Resolve the problem
-
-1. Press **CTRL-C** twice to stop the previous startup attempt.
-
-2. To successfully join the node to the cluster, start the node again, but this time include a correct `--join` address:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node5 \
- --listen-addr=localhost:26261 \
- --http-addr=localhost:8084 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-## Problem 3: Missing join address
-
-In this scenario, you try to add another node to the cluster, but the `--join` address is missing entirely, which causes the new node to start its own separate cluster.
-
-### Step 1. Simulate the problem
-
-1. In a new terminal, try to add another node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node6 \
- --listen-addr=localhost:26262 \
- --http-addr=localhost:8085
- ~~~
-
- The startup process succeeds but, because a `--join` address wasn't specified, the node initializes itself as a new cluster instead of joining the existing cluster. You can see this in the `status` field printed to `stdout`:
-
- ~~~
- CockroachDB node starting at 2018-02-08 16:30:26.690638 +0000 UTC (took 0.2s)
- build: CCL {{page.release_info.version}} @ 2018/01/08 17:30:06 (go1.8.3)
- webui: https://localhost:8085
- sql: postgresql://root@localhost:26262?sslcert=certs%2Fclient.root.crt&sslkey=certs%2Fclient.root.key&sslmode=verify-full&sslrootcert=certs%2Fca.crt
- RPC client flags: cockroach --host=localhost:26262 --certs-dir=certs
- logs: /Users/crossman/node6/logs
- temp dir: /Users/crossman/node6/cockroach-temp138121774
- external I/O path: /Users/crossman/node6/extern
- store[0]: path=/Users/crossman/node6
- status: initialized new cluster
- clusterID: e2514c0a-9dd5-4b2e-a20f-85183365c207
- nodeID: 1
- ~~~
-
-2. Press **CTRL-C** to stop the new node.
-
-3. Start the node again, but this time include a correct `--join` address:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node6 \
- --listen-addr=localhost:26262 \
- --http-addr=localhost:8085 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --logtostderr=WARNING
- ~~~
-
-    The startup process fails because the node's cluster ID does not match the cluster ID of the nodes it is trying to join:
-
- ~~~
- W180815 17:21:00.316845 237 gossip/client.go:123 [n1] failed to start gossip client to localhost:26258: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "9a6ed934-50e8-472a-9d55-c6ecf9130984" doesn't match server cluster ID "ab6960bb-bb61-4e6f-9190-992f219102c6"
- ~~~
-
-4. Press **CTRL-C** to stop the new node.
-
-### Step 2. Resolve the problem
-
-To successfully join the node to the cluster, you need to remove the node's data directory, which is where its incorrect cluster ID is stored, and start the node again:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ rm -rf node6
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---store=node6 \
---listen-addr=localhost:26262 \
---http-addr=localhost:8085 \
---join=localhost:26257,localhost:26258,localhost:26259 \
---background
-~~~
-
-This time, the startup process succeeds, and the `status` field (written to the logs because you used `--background`) tells you that the node joined the intended cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ grep -A 11 'CockroachDB node starting at' ./node6/logs/cockroach.log
-~~~
-
-~~~
-CockroachDB node starting at 2019-07-23 04:21:33.130572 +0000 UTC (took 0.2s)
-build: CCL {{page.release_info.version}} @ 2019/05/22 22:44:42 (go1.12.5)
-webui: https://localhost:8085
-sql: postgresql://root@localhost:26262?sslcert=certs%2Fclient.root.crt&sslkey=certs%2Fclient.root.key&sslmode=verify-full&sslrootcert=certs%2Fca.crt
-client flags: cockroach --host=localhost:26262 --certs-dir=certs
-logs: /Users/will/Downloads/temp-cockroach-cluster/node6/logs
-temp dir: /Users/will/Downloads/temp-cockroach-cluster/node6/cockroach-temp509471222
-external I/O path: /Users/will/Downloads/temp-cockroach-cluster/node6/extern
-store[0]: path=/Users/will/Downloads/temp-cockroach-cluster/node6
-status: initialized new node, joined pre-existing cluster
-clusterID: e40f17e6-b4aa-4e69-bd7e-ecd6556194c3
-nodeID: 6
-~~~
-
-## What's next?
-
-[Client Connection Troubleshooting](client-connection-troubleshooting.html)
diff --git a/src/archived/training/orchestration-with-kubernetes.md b/src/archived/training/orchestration-with-kubernetes.md
deleted file mode 100644
index c60b5fd3911..00000000000
--- a/src/archived/training/orchestration-with-kubernetes.md
+++ /dev/null
@@ -1,525 +0,0 @@
----
-title: Orchestration with Kubernetes
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
----
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-Also, before getting started, it's helpful to review some Kubernetes-specific terminology:
-
-Feature | Description
---------|------------
-[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
-[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
-[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets require Kubernetes version 1.9 or newer.
-[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.<br><br>When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
-[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.
-
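-Once the cluster is running (after Step 2 below), you can see several of these object types with a single command, for example:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl get pods,persistentvolumes,persistentvolumeclaims
-~~~
-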
-## Step 1. Start Kubernetes
-
-1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation.
-
- {{site.data.alerts.callout_info}}
-    Make sure you install `minikube` version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the `maxUnavailable` field and `PodDisruptionBudget` resource type used in the CockroachDB StatefulSet configuration.
- {{site.data.alerts.end}}
-
-2. Start a local Kubernetes cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ minikube start
- ~~~
-
-## Step 2. Start CockroachDB
-
-To start your CockroachDB cluster, you can use our StatefulSet configuration and related files directly.
-
-1. From your local workstation, use our [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
- ~~~
-
- ~~~
- serviceaccount/cockroachdb created
- role.rbac.authorization.k8s.io/cockroachdb created
- clusterrole.rbac.authorization.k8s.io/cockroachdb created
- rolebinding.rbac.authorization.k8s.io/cockroachdb created
- clusterrolebinding.rbac.authorization.k8s.io/cockroachdb created
- service/cockroachdb-public created
- service/cockroachdb created
- poddisruptionbudget.policy/cockroachdb-budget created
- statefulset.apps/cockroachdb created
- ~~~
-
-2. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
-
- 1. Get the name of the `Pending` CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.node.cockroachdb-0 24s system:serviceaccount:default:cockroachdb Pending
- default.node.cockroachdb-1 23s system:serviceaccount:default:cockroachdb Pending
- default.node.cockroachdb-2 23s system:serviceaccount:default:cockroachdb Pending
- ~~~
-
- If you do not see a `Pending` CSR, wait a minute and try again.
-
- 2. Examine the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe csr default.node.cockroachdb-0
- ~~~
-
- ~~~
- Name: default.node.cockroachdb-0
-        Labels:                <none>
-        Annotations:           <none>
- CreationTimestamp: Wed, 15 May 2019 17:11:34 -0400
- Requesting User: system:serviceaccount:default:cockroachdb
- Status: Pending
- Subject:
- Common Name: node
- Serial Number:
- Organization: Cockroach
- Subject Alternative Names:
- DNS Names: localhost
- cockroachdb-0.cockroachdb.default.svc.cluster.local
- cockroachdb-0.cockroachdb
- cockroachdb-public
- cockroachdb-public.default.svc.cluster.local
- IP Addresses: 127.0.0.1
-        Events:                <none>
- ~~~
-
- 3. If everything looks correct, approve the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.cockroachdb-0
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.node.cockroachdb-0 approved
- ~~~
-
- 4. Repeat steps 1-3 for the other 2 pods.
-
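-       If the remaining CSRs look correct when you examine them, you can approve them both at once, since `kubectl certificate approve` accepts multiple CSR names:
-
-       {% include copy-clipboard.html %}
-       ~~~ shell
-       $ kubectl certificate approve default.node.cockroachdb-1 default.node.cockroachdb-2
-       ~~~
-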
-3. Initialize the cluster:
-
- 1. Confirm that three pods are `Running` successfully. Note that they will not
- be considered `Ready` until after the cluster has been initialized:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
- 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get persistentvolumes
- ~~~
-
- ~~~
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-01ba3ca6-7756-11e9-b9fb-080027246063 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 13m
- pvc-01ccc75a-7756-11e9-b9fb-080027246063 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 13m
- pvc-01d111aa-7756-11e9-b9fb-080027246063 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 13m
- ~~~
-
- 3. Use our [`cluster-init-secure.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml) file to perform a one-time initialization that joins the nodes into a single cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml
- ~~~
-
- ~~~
- job.batch/cluster-init-secure created
- ~~~
-
- 4. Approve the CSR for the one-off pod from which cluster initialization happens:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.client.root
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.client.root approved
- ~~~
-
- 5. Confirm that cluster initialization has completed successfully. The job
- should be considered successful and the CockroachDB pods should soon be
- considered `Ready`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init-secure
- ~~~
-
- ~~~
- NAME COMPLETIONS DURATION AGE
- cluster-init-secure 1/1 10s 16s
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cluster-init-secure-fxdjl 0/1 Completed 0 53s
- cockroachdb-0 1/1 Running 0 15m
- cockroachdb-1 1/1 Running 0 15m
- cockroachdb-2 1/1 Running 0 15m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
-
-## Step 3. Use the built-in SQL client
-
-To use the built-in SQL client, you need to launch a pod that runs indefinitely with the `cockroach` binary inside it, get a shell into the pod, and then start the built-in SQL client.
-
-1. From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) file to launch a pod and keep it running indefinitely:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create \
- -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
- ~~~
-
- ~~~
- pod/cockroachdb-client-secure created
- ~~~
-
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
-
-2. Get a shell into the pod and start the CockroachDB [built-in SQL client](../cockroach-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach sql \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- # Welcome to the cockroach SQL interface.
- # All statements must be terminated by a semicolon.
- # To exit: CTRL + D.
- #
- # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) (same version as client)
- # Cluster ID: 7e1db24d-0f11-45d4-b472-bbd5f1fff858
- #
- # Enter \? for a brief introduction.
- #
- root@cockroachdb-public:26257/defaultdb>
- ~~~
-
-3. Run some basic [CockroachDB SQL statements](../learn-cockroachdb-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES (1, 1000.50);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- +----+---------+
- | id | balance |
- +----+---------+
- | 1 | 1000.5 |
- +----+---------+
- (1 row)
- ~~~
-
-4. [Create a user with a password](../create-user.html#create-a-user-with-a-password):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
- ~~~
-
- You will need this username and password to access the Admin UI later.
-
-5. Exit the SQL shell and pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-{{site.data.alerts.callout_success}}
-This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](../cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command.
-
-If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`.
-{{site.data.alerts.end}}
-
-## Step 4. Access the Admin UI
-
-To access the cluster's [Admin UI](../admin-ui-overview.html):
-
-1. Port-forward from your local machine to one of the pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward cockroachdb-0 8080
- ~~~
-
- ~~~
- Forwarding from 127.0.0.1:8080 -> 8080
- ~~~
-
- {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the Admin UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}}
-
-2. Go to https://localhost:8080 and log in with the username and password you created earlier.
-
-3. In the UI, verify that the cluster is running as expected:
- - Click **View nodes list** on the right to ensure that all nodes successfully joined the cluster.
- - Click the **Databases** tab on the left to verify that `bank` is listed.
-
-## Step 5. Simulate node failure
-
-Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage.
-
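-You can confirm the configured replica count directly from the live StatefulSet (a quick check using `kubectl`'s JSONPath output):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl get statefulset cockroachdb -o jsonpath='{.spec.replicas}'
-~~~
-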
-To see this in action:
-
-1. Stop one of the CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-2
- ~~~
-
- ~~~
- pod "cockroachdb-2" deleted
- ~~~
-
-2. In the Admin UI, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy.
-
-3. Back in the terminal, verify that the pod was automatically restarted:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-2 1/1 Running 0 12s
- ~~~
-
-## Step 6. Add nodes
-
-1. Use the `kubectl scale` command to add a pod for another CockroachDB node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=4
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb scaled
- ~~~
-
-2. Get the name of the `Pending` CSR for the new pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.client.root 8m system:serviceaccount:default:cockroachdb Approved,Issued
- default.node.cockroachdb-0 22m system:serviceaccount:default:cockroachdb Approved,Issued
- default.node.cockroachdb-1 22m system:serviceaccount:default:cockroachdb Approved,Issued
- default.node.cockroachdb-2 22m system:serviceaccount:default:cockroachdb Approved,Issued
- default.node.cockroachdb-3 2m system:serviceaccount:default:cockroachdb Pending
- ~~~
-
-3. Approve the CSR for the new pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.cockroachdb-3
- ~~~
-
- ~~~
- certificatesigningrequest.certificates.k8s.io/default.node.cockroachdb-3 approved
- ~~~
-
-4. Confirm that the pod for the fourth node, `cockroachdb-3`, is `Running` successfully:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cluster-init-secure-fxdjl 0/1 Completed 0 13m
- cockroachdb-0 1/1 Running 1 28m
- cockroachdb-1 1/1 Running 1 28m
- cockroachdb-2 1/1 Running 0 8m
- cockroachdb-3 1/1 Running 0 7m
- cockroachdb-client-secure 1/1 Running 0 12m
- ~~~
-
-## Step 7. Remove nodes
-
-To safely remove a node from your cluster, you must first decommission the node and only then adjust the `--replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, reject any new requests, and transfer all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](../remove-nodes.html).
-{{site.data.alerts.end}}
-
-1. Get a shell into the `cockroachdb-client-secure` pod you created earlier and use the `cockroach node status` command to get the internal IDs of nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node status \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+-----------------------------------------------------------+---------+----------------------------------+----------------------------------+--------------+---------+
- 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | v19.1.0 | 2019-05-15 21:37:09.875482+00:00 | 2019-05-15 21:40:41.467829+00:00 | true | true
- 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | v19.1.0 | 2019-05-15 21:31:50.21661+00:00 | 2019-05-15 21:40:41.308529+00:00 | true | true
- 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | v19.1.0 | 2019-05-15 21:37:09.746432+00:00 | 2019-05-15 21:40:41.336179+00:00 | true | true
- 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | v19.1.0 | 2019-05-15 21:37:34.962546+00:00 | 2019-05-15 21:40:44.08081+00:00 | true | true
- (4 rows)
- ~~~
-
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
-
-2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](../cockroach-node.html) command to decommission it:
-
- {{site.data.alerts.callout_info}}
-    It's important to decommission the node with the highest number in its address because, when you reduce the `--replicas` count, Kubernetes will remove the pod for that node.
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure \
- -- ./cockroach node decommission \
- --certs-dir=/cockroach-certs \
- --host=cockroachdb-public
- ~~~
-
- You'll then see the decommissioning status print to `stderr` as it changes:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 73 | true | false
- (1 row)
- ~~~
-
- Once the node has been fully decommissioned and stopped, you'll see a confirmation:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 0 | true | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ~~~
-
-3. Once the node has been decommissioned, use the `kubectl scale` command to remove a pod from your StatefulSet:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset.apps/cockroachdb scaled
- ~~~
-
-## Step 8. Clean up
-
-In the next module, you'll start with a fresh, non-orchestrated cluster. Delete the resources you created from the StatefulSet configuration, then use the `minikube delete` command to shut down and delete the minikube virtual machine and all the resources you created, including persistent volumes:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl delete \
--f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl delete job.batch/cluster-init-secure
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl delete pod cockroachdb-client-secure
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ minikube delete
-~~~
-
-~~~
-Deleting local Kubernetes cluster...
-Machine deleted.
-~~~
-
-{{site.data.alerts.callout_success}}
-To retain logs, copy them from each pod's `stderr` before deleting the cluster and all its resources. To access a pod's standard error stream, run `kubectl logs <podname>`.
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_info}}
-For information on how to optimize your deployment of CockroachDB on Kubernetes, see [CockroachDB Performance on Kubernetes](../kubernetes-performance.html).
-{{site.data.alerts.end}}
-
-## What's next?
-
-[Performance Benchmarking](performance-benchmarking.html)
diff --git a/src/archived/training/performance-benchmarking.md b/src/archived/training/performance-benchmarking.md
deleted file mode 100644
index e2e185524bc..00000000000
--- a/src/archived/training/performance-benchmarking.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-title: Performance Benchmarking with TPC-C
-summary: Learn how to benchmark CockroachDB against TPC-C.
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
----
-
-This lab walks you through [TPC-C](http://www.tpc.org/tpcc/) performance benchmarking on CockroachDB. It measures tpmC (new order transactions/minute) on a TPC-C dataset of 10 warehouses (for a total dataset size of 2GB) on 3 nodes.
-
-{{site.data.alerts.callout_info}}
-For training purposes, the dataset used in this lab is small. For instructions on how to benchmark with a larger dataset, see [Performance Benchmarking with TPC-C](../performance-benchmarking-with-tpc-c-1k-warehouses.html).
-{{site.data.alerts.end}}
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Start a 3-node cluster
-
-Start and initialize a cluster like you did in previous modules.
-
-{{site.data.alerts.callout_info}}
-To simplify the process of running multiple nodes on your local computer, you'll start them in the [background](../cockroach-start.html#general) instead of in separate terminals.
-{{site.data.alerts.end}}
-
-1. In a new terminal, start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259,localhost:26260 \
- --background
-    ~~~
-
-2. Perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-3. Start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259,localhost:26260 \
- --background
- ~~~
-
-4. Start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259,localhost:26260 \
- --background
- ~~~
-
-5. Start node 4, which will be used to run the TPC-C benchmark:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node4 \
- --listen-addr=localhost:26260 \
- --http-addr=localhost:8083 \
- --join=localhost:26257,localhost:26258,localhost:26259,localhost:26260 \
- --background
- ~~~
-
-{{site.data.alerts.callout_danger}}
-This configuration is intended for training and performance benchmarking only. For production deployments, there are other important considerations, such as ensuring that data is balanced across at least three availability zones for resiliency. See the [Production Checklist](../recommended-production-settings.html) for more details.
-{{site.data.alerts.end}}
-
-## Step 2. Load data for the benchmark
-
-CockroachDB comes with built-in load generators for simulating different types of client workloads, printing out per-operation statistics every second and totals after a specific duration or max number of operations. This step features CockroachDB's version of the TPC-C workload.
-
-Use `cockroach workload` to load the initial schema and data, connecting through the fourth node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach workload init tpcc \
---warehouses=10 \
-'postgresql://root@localhost:26260?sslmode=disable'
-~~~
-
-This will take about ten minutes to load.
-
-{{site.data.alerts.callout_success}}
-For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other load generators included in `workload`, use `cockroach workload run --help`.
-{{site.data.alerts.end}}
-
-## Step 3. Run the benchmark
-
-Run the workload for ten "warehouses" of data for five minutes (300 seconds):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach workload run tpcc \
---warehouses=10 \
---ramp=30s \
---duration=300s \
---split \
---scatter \
-'postgresql://root@localhost:26260?sslmode=disable'
-~~~
-
-## Step 4. Interpret the results
-
-Once the `workload` has finished running, you should see a final output line:
-
-~~~
-_elapsed_______tpmC____efc__avg(ms)__p50(ms)__p90(ms)__p95(ms)__p99(ms)_pMax(ms)
- 300.0s 120.8 93.9% 52.9 48.2 75.5 96.5 134.2 243.3
-~~~
-
-You will also see some audit checks and latency statistics for each individual query. For this run, some of those checks might indicate that they were `SKIPPED` due to insufficient data. For a more comprehensive test, run `workload` for a longer duration (e.g., two hours). The `tpmC` (new order transactions/minute) number is the headline number, and `efc` ("efficiency") tells you how close CockroachDB gets to the theoretical maximum `tpmC`. The TPC-C spec caps tpmC at 12.86 per warehouse, so with 10 warehouses the ceiling is 128.6 tpmC; the sample run above achieved 120.8 / 128.6 ≈ 93.9%, which is exactly the `efc` value reported.
-
-The [TPC-C specification](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-c_v5.11.0.pdf) has p90 latency requirements on the order of seconds, but as you can see here, CockroachDB far surpasses that requirement with p90 latencies in the hundreds of milliseconds.
-
-## Step 5. Clean up
-
-In the next module, you'll start with a fresh cluster, so take a moment to clean things up.
-
-1. Stop all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
- This simplified shutdown process is only appropriate for a lab/evaluation scenario.
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3 node4
- ~~~
-
-## What's next?
-
-[Data Import](data-import.html)
diff --git a/src/archived/training/planned-maintenance.md b/src/archived/training/planned-maintenance.md
deleted file mode 100644
index a383e5574ff..00000000000
--- a/src/archived/training/planned-maintenance.md
+++ /dev/null
@@ -1,195 +0,0 @@
----
-title: Planned Maintenance
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
-
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Start a 3-node cluster
-
-Start and initialize an insecure cluster like you did in previous modules.
-
-1. In a new terminal, start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259
-    ~~~
-
-2. In a new terminal, start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-3. In a new terminal, start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-4. In a new terminal, perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-## Step 2. Increase the time until a node is considered dead
-
-Let's say you need to perform some maintenance on each of your nodes, e.g., upgrade system software. For each node, you expect the maintenance and restart process to take no more than 15 minutes, and you do not want the cluster to consider a node dead and rebalance its data during this process.
-
-1. In the same terminal, increase the `server.time_until_store_dead` cluster setting:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SET CLUSTER SETTING server.time_until_store_dead = '15m0s';"
- ~~~
-
- {{site.data.alerts.callout_info}}
- Use caution when changing the `server.time_until_store_dead` setting. Setting it too high creates some risk of unavailability since CockroachDB does not respond to down nodes as quickly. However, setting it too low causes increased network and disk I/O costs, as CockroachDB rebalances data around temporary outages.
- {{site.data.alerts.end}}
-
-2. Then verify the new setting:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW CLUSTER SETTING server.time_until_store_dead;"
- ~~~
-
- ~~~
- server.time_until_store_dead
- +------------------------------+
- 15m
- (1 row)
- ~~~
-
-## Step 3. Stop, maintain, and restart nodes
-
-Stop, maintain, and restart one node at a time. This ensures that, at any point, the cluster has a majority of replicas and remains available.
-
-1. In the first node's terminal, press **CTRL-C** to stop the node.
-
-2. Imagine that you are doing some maintenance on the node.
-
- While the node is offline, you can verify the cluster's health by pointing the Admin UI to one of the nodes that is still up: http://localhost:8081 or http://localhost:8082.
-
-3. In the same terminal, rejoin the node to the cluster, using the same command that you used to start it initially:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259
-    ~~~
-
-4. In the second node's terminal, press **CTRL-C** to stop the node.
-
-5. Imagine that you are doing some maintenance on the node.
-
- While the node is offline, you can verify the cluster's health by pointing the Admin UI to one of the nodes that is still up: http://localhost:8080 or http://localhost:8082.
-
-6. In the same terminal, rejoin the node to the cluster, using the same command that you used to start it initially:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259
-    ~~~
-
-7. In the third node's terminal, press **CTRL-C** to stop the node.
-
-8. Imagine that you are doing some maintenance on the node.
-
- While the node is offline, you can verify the cluster's health by pointing the Admin UI to one of the nodes that is still up: http://localhost:8080 or http://localhost:8081.
-
-9. In the same terminal, rejoin the node to the cluster, using the same command that you used to start it initially:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259
-    ~~~
-
-## Step 4. Reset the time until a node is considered dead
-
-Now that all nodes have been maintained and restarted, you can reset the time until the cluster considers a node dead and rebalances its data.
-
-1. In a new terminal, change the `server.time_until_store_dead` cluster setting back to the default of `5m0s`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="RESET CLUSTER SETTING server.time_until_store_dead;"
- ~~~
-
-2. Then verify the new setting:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW CLUSTER SETTING server.time_until_store_dead;"
- ~~~
-
- ~~~
- server.time_until_store_dead
- +------------------------------+
- 5m
- (1 row)
- ~~~
-
-## What's next?
-
-[Node Decommissioning](node-decommissioning.html)
diff --git a/src/archived/training/production-deployment.md b/src/archived/training/production-deployment.md
deleted file mode 100644
index 5bd18f42332..00000000000
--- a/src/archived/training/production-deployment.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Production Deployment
-toc: false
-sidebar_data: sidebar-data-training.json
-block_search: false
-
----
-
-
-
-## What's next?
-
-[Monitoring and Alerting](monitoring-and-alerting.html)
diff --git a/src/archived/training/resources/docker-compose.yaml b/src/archived/training/resources/docker-compose.yaml
deleted file mode 100644
index 09ce3d4b9b7..00000000000
--- a/src/archived/training/resources/docker-compose.yaml
+++ /dev/null
@@ -1,95 +0,0 @@
-version: '3.5'
-
-networks:
- cockroachdb-training-shared:
- name: cockroachdb-training-shared
- driver: bridge
- cockroachdb-training-dc0:
- name: cockroachdb-training-dc0
- driver: bridge
- cockroachdb-training-dc1:
- name: cockroachdb-training-dc1
- driver: bridge
- cockroachdb-training-dc2:
- name: cockroachdb-training-dc2
- driver: bridge
-
-services:
-
- # DC 0 nodes
-
- roach-0:
- container_name: roach-0
- hostname: roach-0
- image: cockroachdb/cockroach:${COCKROACH_VERSION:-v2.0.0}
- networks:
- - cockroachdb-training-shared
- - cockroachdb-training-dc0
- command: start --logtostderr --insecure --locality=datacenter=dc-0 --join=roach-0,roach-1,roach-2
- ports:
- - 8080:8080
- - 26257:26257
-
- roach-1:
- container_name: roach-1
- hostname: roach-1
- image: cockroachdb/cockroach:${COCKROACH_VERSION:-v2.0.0}
- networks:
- - cockroachdb-training-shared
- - cockroachdb-training-dc0
- command: start --logtostderr --insecure --locality=datacenter=dc-0 --join=roach-0,roach-1,roach-2
- ports:
- - 8081:8080
- - 26258:26257
-
- # DC 1 nodes
-
- roach-2:
- container_name: roach-2
- hostname: roach-2
- image: cockroachdb/cockroach:${COCKROACH_VERSION:-v2.0.0}
- networks:
- - cockroachdb-training-shared
- - cockroachdb-training-dc1
- command: start --logtostderr --insecure --locality=datacenter=dc-1 --join=roach-0,roach-1,roach-2
- ports:
- - 8082:8080
- - 26259:26257
-
- roach-3:
- container_name: roach-3
- hostname: roach-3
- image: cockroachdb/cockroach:${COCKROACH_VERSION:-v2.0.0}
- networks:
- - cockroachdb-training-shared
- - cockroachdb-training-dc1
- command: start --logtostderr --insecure --locality=datacenter=dc-1 --join=roach-0,roach-1,roach-2
- ports:
- - 8083:8080
- - 26260:26257
-
- # DC 2 nodes
-
- roach-4:
- container_name: roach-4
- hostname: roach-4
- image: cockroachdb/cockroach:${COCKROACH_VERSION:-v2.0.0}
- networks:
- - cockroachdb-training-shared
- - cockroachdb-training-dc2
- command: start --logtostderr --insecure --locality=datacenter=dc-2 --join=roach-0,roach-1,roach-2
- ports:
- - 8084:8080
- - 26261:26257
-
- roach-5:
- container_name: roach-5
- hostname: roach-5
- image: cockroachdb/cockroach:${COCKROACH_VERSION:-v2.0.0}
- networks:
- - cockroachdb-training-shared
- - cockroachdb-training-dc2
- command: start --logtostderr --insecure --locality=datacenter=dc-2 --join=roach-0,roach-1,roach-2
- ports:
- - 8085:8080
- - 26262:26257
diff --git a/src/archived/training/resources/mysql_dump.sql b/src/archived/training/resources/mysql_dump.sql
deleted file mode 100644
index 4cd953f840b..00000000000
--- a/src/archived/training/resources/mysql_dump.sql
+++ /dev/null
@@ -1,77 +0,0 @@
--- MySQL dump 10.13 Distrib 5.7.22, for osx10.12 (x86_64)
---
--- Host: localhost Database: mysql_import
--- ------------------------------------------------------
--- Server version 5.7.22
-
-/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
-/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
-/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
-/*!40101 SET NAMES utf8 */;
-/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
-/*!40103 SET TIME_ZONE='+00:00' */;
-/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
-/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
-/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
-/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
-
---
--- Table structure for table `accounts`
---
-
-DROP TABLE IF EXISTS `accounts`;
-/*!40101 SET @saved_cs_client = @@character_set_client */;
-/*!40101 SET character_set_client = utf8 */;
-CREATE TABLE `accounts` (
- `customer_id` int(11) NOT NULL,
- `id` int(11) NOT NULL,
- `balance` decimal(10,0) DEFAULT NULL,
- PRIMARY KEY (`customer_id`),
- CONSTRAINT `accounts_customer_id_fkey` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-/*!40101 SET character_set_client = @saved_cs_client */;
-
---
--- Dumping data for table `accounts`
---
-
-LOCK TABLES `accounts` WRITE;
-/*!40000 ALTER TABLE `accounts` DISABLE KEYS */;
-INSERT INTO `accounts` VALUES (1,1,100),(2,2,200),(3,3,200),(4,4,400),(5,5,200);
-/*!40000 ALTER TABLE `accounts` ENABLE KEYS */;
-UNLOCK TABLES;
-
---
--- Table structure for table `customers`
---
-
-DROP TABLE IF EXISTS `customers`;
-/*!40101 SET @saved_cs_client = @@character_set_client */;
-/*!40101 SET character_set_client = utf8 */;
-CREATE TABLE `customers` (
- `id` int(11) NOT NULL,
- `name` text,
- PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-/*!40101 SET character_set_client = @saved_cs_client */;
-
---
--- Dumping data for table `customers`
---
-
-LOCK TABLES `customers` WRITE;
-/*!40000 ALTER TABLE `customers` DISABLE KEYS */;
-INSERT INTO `customers` VALUES (1,'Bjorn Fairclough'),(2,'Arturo Nevin'),(3,'Naseem Joossens'),(4,'Juno Studwick'),(5,'Eutychia Roberts');
-/*!40000 ALTER TABLE `customers` ENABLE KEYS */;
-UNLOCK TABLES;
-/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
-
-/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
-/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
-/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
-/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
-/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
-/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
-/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
-
--- Dump completed on 2018-12-20 11:34:30
diff --git a/src/archived/training/resources/pg_dump.sql b/src/archived/training/resources/pg_dump.sql
deleted file mode 100644
index 6fd9ec472bb..00000000000
--- a/src/archived/training/resources/pg_dump.sql
+++ /dev/null
@@ -1,125 +0,0 @@
---
--- PostgreSQL database dump
---
-
--- Dumped from database version 10.1
--- Dumped by pg_dump version 10.1
-
-SET statement_timeout = 0;
-SET lock_timeout = 0;
-SET idle_in_transaction_session_timeout = 0;
-SET client_encoding = 'UTF8';
-SET standard_conforming_strings = on;
-SET check_function_bodies = false;
-SET client_min_messages = warning;
-SET row_security = off;
-
---
--- Name: test; Type: SCHEMA; Schema: -; Owner: seanloiselle
---
-
-CREATE SCHEMA test;
-
-
-ALTER SCHEMA test OWNER TO seanloiselle;
-
---
--- Name: plpgsql; Type: EXTENSION; Schema: -; Owner:
---
-
-CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
-
-
---
--- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner:
---
-
-COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
-
-
-SET search_path = public, pg_catalog;
-
-SET default_tablespace = '';
-
-SET default_with_oids = false;
-
---
--- Name: accounts; Type: TABLE; Schema: public; Owner: seanloiselle
---
-
-CREATE TABLE accounts (
- customer_id integer,
- id integer NOT NULL,
- balance numeric,
- CONSTRAINT accounts_balance_check CHECK ((balance > (0)::numeric))
-);
-
-
-ALTER TABLE accounts OWNER TO seanloiselle;
-
---
--- Name: customers; Type: TABLE; Schema: public; Owner: seanloiselle
---
-
-CREATE TABLE customers (
- id integer NOT NULL,
- name text
-);
-
-
-ALTER TABLE customers OWNER TO seanloiselle;
-
---
--- Data for Name: accounts; Type: TABLE DATA; Schema: public; Owner: seanloiselle
---
-
-COPY accounts (customer_id, id, balance) FROM stdin;
-1 1 100
-2 2 200
-3 3 200
-4 4 400
-5 5 200
-\.
-
-
---
--- Data for Name: customers; Type: TABLE DATA; Schema: public; Owner: seanloiselle
---
-
-COPY customers (id, name) FROM stdin;
-1 Bjorn Fairclough
-2 Arturo Nevin
-3 Naseem Joossens
-4 Juno Studwick
-5 Eutychia Roberts
-\.
-
-
---
--- Name: accounts accounts_pkey; Type: CONSTRAINT; Schema: public; Owner: seanloiselle
---
-
-ALTER TABLE ONLY accounts
- ADD CONSTRAINT accounts_pkey PRIMARY KEY (id);
-
-
---
--- Name: customers customers_pkey; Type: CONSTRAINT; Schema: public; Owner: seanloiselle
---
-
-ALTER TABLE ONLY customers
- ADD CONSTRAINT customers_pkey PRIMARY KEY (id);
-
-
---
--- Name: accounts accounts_customer_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: seanloiselle
---
-
-ALTER TABLE ONLY accounts
- ADD CONSTRAINT accounts_customer_id_fkey FOREIGN KEY (customer_id) REFERENCES customers(id);
-
-
---
--- PostgreSQL database dump complete
---
-
diff --git a/src/archived/training/resources/pg_dump_cleaned.sql b/src/archived/training/resources/pg_dump_cleaned.sql
deleted file mode 100644
index 52f4147a257..00000000000
--- a/src/archived/training/resources/pg_dump_cleaned.sql
+++ /dev/null
@@ -1,119 +0,0 @@
---
--- PostgreSQL database dump
---
-
--- Dumped from database version 10.1
--- Dumped by pg_dump version 10.1
-
-SET statement_timeout = 0;
-SET lock_timeout = 0;
-SET idle_in_transaction_session_timeout = 0;
-SET client_encoding = 'UTF8';
-SET standard_conforming_strings = on;
-SET check_function_bodies = false;
-SET client_min_messages = warning;
-SET row_security = off;
-
---
--- Name: test; Type: SCHEMA; Schema: -; Owner: seanloiselle
---
-
---
--- Name: plpgsql; Type: EXTENSION; Schema: -; Owner:
---
-
-CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
-
-
---
--- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner:
---
-
-COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
-
-
-SET search_path = public, pg_catalog;
-
-SET default_tablespace = '';
-
-SET default_with_oids = false;
-
---
--- Name: accounts; Type: TABLE; Schema: public; Owner: seanloiselle
---
-
-CREATE TABLE accounts (
- customer_id integer,
- id integer NOT NULL,
- balance numeric,
- CONSTRAINT accounts_balance_check CHECK ((balance > (0)::numeric))
-);
-
-
-ALTER TABLE accounts OWNER TO seanloiselle;
-
---
--- Name: customers; Type: TABLE; Schema: public; Owner: seanloiselle
---
-
-CREATE TABLE customers (
- id integer NOT NULL,
- name text
-);
-
-
-ALTER TABLE customers OWNER TO seanloiselle;
-
---
--- Data for Name: accounts; Type: TABLE DATA; Schema: public; Owner: seanloiselle
---
-
-COPY accounts (customer_id, id, balance) FROM stdin;
-1 1 100
-2 2 200
-3 3 200
-4 4 400
-5 5 200
-\.
-
-
---
--- Data for Name: customers; Type: TABLE DATA; Schema: public; Owner: seanloiselle
---
-
-COPY customers (id, name) FROM stdin;
-1 Bjorn Fairclough
-2 Arturo Nevin
-3 Naseem Joossens
-4 Juno Studwick
-5 Eutychia Roberts
-\.
-
-
---
--- Name: accounts accounts_pkey; Type: CONSTRAINT; Schema: public; Owner: seanloiselle
---
-
-ALTER TABLE ONLY accounts
- ADD CONSTRAINT accounts_pkey PRIMARY KEY (id);
-
-
---
--- Name: customers customers_pkey; Type: CONSTRAINT; Schema: public; Owner: seanloiselle
---
-
-ALTER TABLE ONLY customers
- ADD CONSTRAINT customers_pkey PRIMARY KEY (id);
-
-
---
--- Name: accounts accounts_customer_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: seanloiselle
---
-
-ALTER TABLE ONLY accounts
- ADD CONSTRAINT accounts_customer_id_fkey FOREIGN KEY (customer_id) REFERENCES customers(id);
-
-
---
--- PostgreSQL database dump complete
---
diff --git a/src/archived/training/security.md b/src/archived/training/security.md
deleted file mode 100644
index 76f6396daef..00000000000
--- a/src/archived/training/security.md
+++ /dev/null
@@ -1,263 +0,0 @@
----
-title: Secure Your Cluster
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
-
----
-
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous lab.
-
-## Step 1. Generate security certificates
-
-1. Create two directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir certs my-safe-directory
- ~~~
-
- Directory | Description
- ----------|------------
- `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory.
- `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
-
-2. Create the CA certificate and key:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-ca \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-3. Create the certificate and key for your nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
- localhost \
- $(hostname) \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
- Because you're running a local cluster and all nodes use the same hostname (`localhost`), you only need a single node certificate. This is different from a production cluster, where you would need to generate a certificate and key for each node, issued to all common names and IP addresses you might use to refer to the node, as well as to any load balancer instances.
-
-4. Create client certificates and keys for the `root` and `spock` users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- root \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- spock \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
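-
- To confirm that the certificates and keys were generated as expected, you can list the contents of the `certs` directory with the `cockroach cert list` command, which summarizes each certificate in the directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert list --certs-dir=certs
- ~~~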
-
-## Step 2. Start a secure cluster
-
-Restart the nodes using the same commands you used to start them initially, but this time use the `--certs-dir` flag to point to the directory containing the node certificate and key, and leave out the `--insecure` flag.
-
-1. Start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-2. Start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-3. Start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259 \
- --background
- ~~~
-
-4. Perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --certs-dir=certs --host=localhost:26257
- ~~~
-
-## Step 3. Add data to your cluster
-
-1. Use the `cockroach gen` command to generate an example `startrek` database with 2 tables, `episodes` and `quotes`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach gen example-data startrek | cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257
- ~~~
-
-2. Create a new user called `spock` and grant `spock` the `SELECT` privilege on the `startrek.quotes` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257 \
- --execute="CREATE USER spock; GRANT SELECT ON TABLE startrek.quotes TO spock;"
- ~~~
-
-## Step 4. Authenticate a user (via client cert)
-
-1. As the `spock` user, read from the `startrek.quotes` table, using the `--certs-dir` flag to point to the directory containing the user's client cert:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257 \
- --user=spock \
- --database=startrek \
- --execute="SELECT * FROM quotes WHERE quote ~* 'creature';"
- ~~~
-
- ~~~
- quote | characters | stardate | episode
- +-------------------------------------------------------------+------------+----------+---------+
- There is a multi-legged creature crawling on your shoulder. | Spock | 3193.9 | 23
- (1 row)
- ~~~
-
-## Step 5. Authenticate a user (via password)
-
-Although we recommend always using TLS certificates to authenticate users, it's possible to authenticate a user with just a password.
-
-{{site.data.alerts.callout_info}}
-For multiple users to access the Admin UI, the `root` user must [create users with passwords](../create-user.html#create-a-user-with-a-password).
-{{site.data.alerts.end}}
-
-1. As the `root` user, open the built-in SQL shell:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257
- ~~~
-
-2. Create a new `kirk` user with the password `enterprise`. You'll have to type in the password twice at the prompt:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE USER kirk WITH PASSWORD 'enterprise';
- ~~~
-
-3. Exit the SQL shell:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-4. As the `root` user, grant `kirk` the `SELECT` privilege on the tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257 \
- --user=root \
- --execute="GRANT SELECT ON startrek.* TO kirk;"
- ~~~
-
-5. As the `kirk` user, read from the `startrek.quotes` table:
-
- {{site.data.alerts.callout_info}}
- It's necessary to include the `--certs-dir` flag even though you haven't created a cert for this user. When the cluster does not find a suitable client cert, it falls back on password authentication.
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --certs-dir=certs \
- --host=localhost:26257 \
- --user=kirk \
- --database=startrek \
- --execute="SELECT * FROM quotes WHERE quote ~* 'danger';"
- ~~~
-
- Enter `enterprise` as the password:
-
- ~~~
- Enter password:
- ~~~
-
- You'll then see the response:
-
- ~~~
- quote | characters | stardate | episode
- +------------------------------------------+---------------+----------+---------+
- Insufficient facts always invite danger. | Spock | 3141.9 | 22
- Power is danger. | The Centurion | 1709.2 | 14
- (2 rows)
- ~~~
-
-## Clean up
-
-In the next module, you'll start a new cluster from scratch, so take a moment to clean things up.
-
-1. Stop all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3 my-safe-directory certs
- ~~~
-
-## What's next?
-
-[Production Deployment](production-deployment.html)
diff --git a/src/archived/training/software-panic-troubleshooting.md b/src/archived/training/software-panic-troubleshooting.md
deleted file mode 100644
index f6300a301b6..00000000000
--- a/src/archived/training/software-panic-troubleshooting.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-title: Software Panic Troubleshooting
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
-
----
-
-
-
-
-
-## Before you begin
-
-Make sure you have already completed [Data Corruption Troubleshooting](data-corruption-troubleshooting.html) and have a cluster of 3 nodes running.
-
-## Step 1. Simulate the problem
-
-1. In a new terminal, issue a "query of death" against node 3. The query will crash the node, the connection will then fail, and you'll see an error message printed to `stderr`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26259 \
- --execute="SELECT crdb_internal.force_panic('foo');"
- ~~~
-
- ~~~
- Error: driver: bad connection
- Failed running "sql"
- ~~~
-
-## Step 2. Troubleshoot the problem
-
-In the terminal where node 3 was running, check `stdout` for details:
-
-{{site.data.alerts.callout_info}}
-You can also look in the node's full logs at `node3/logs`.
-{{site.data.alerts.end}}
-
-~~~
-E180209 14:47:54.819282 2149 sql/session.go:1370 [client=127.0.0.1:53558,user=root,n1] a SQL panic has occurred!
-*
-* ERROR: [client=127.0.0.1:53558,user=root,n1] a SQL panic has occurred!
-*
-E180209 14:47:54.819378 2149 util/log/crash_reporting.go:113 [n1] a panic has occurred!
-*
-* ERROR: [n1] a panic has occurred!
-*
-panic while executing "select crdb_internal.force_panic('foo');": foo
-
-goroutine 2149 [running]:
-runtime/debug.Stack(0x8246800, 0xc42038e540, 0x3)
- /usr/local/go/src/runtime/debug/stack.go:24 +0x79
-github.com/cockroachdb/cockroach/pkg/util/log.ReportPanic(0x8246800, 0xc42038e540, 0xc4201d2000, 0x56f2fe0, 0xc4203632a0, 0x1)
-...
-~~~
-
-The cause of the panic is clearly identified before the stack trace:
-
-~~~
-panic while executing "select crdb_internal.force_panic('foo');": foo
-~~~
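-
-To locate this line without scrolling through the terminal output, you can also search the saved logs directly (assuming the default `node3/logs` location mentioned above):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ grep -R -A 2 "a panic has occurred" node3/logs
-~~~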
-
-## Step 3. Resolve the problem
-
-With the cause identified, you should:
-
-1. Update your application to stop issuing the "query of death".
-
-2. Restart the down node.
-
-3. File an issue with Cockroach Labs. We'll cover the ideal way to do this in an upcoming module.
-
-## Step 4. Clean up
-
-In the next lab, you'll start a new cluster from scratch, so take a moment to clean things up.
-
-1. Stop the other CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3
- ~~~
-
-## What's next?
-
-[Network Partition Troubleshooting](network-partition-troubleshooting.html)
diff --git a/src/archived/training/sql-basics.md b/src/archived/training/sql-basics.md
deleted file mode 100644
index d938ad842b2..00000000000
--- a/src/archived/training/sql-basics.md
+++ /dev/null
@@ -1,439 +0,0 @@
----
-title: SQL Basics
-toc: true
-sidebar_data: sidebar-data-training.json
----
-
-
-
-
-## Before you begin
-
-Make sure you have already completed [Data Import](data-import.html).
-
-## Step 1. Start a SQL shell
-
-Use the [`cockroach sql`](../cockroach-sql.html) command to open the built-in SQL client:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --host=localhost:26257
-~~~
-
-## Step 2. Create a database and table
-
-In this training, you'll create a bank with customers and accounts. First, you'll need to [create a database](../create-database.html) in your CockroachDB cluster to store this information.
-
-1. In the built-in SQL client, create a database:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- Databases do not directly store any data; you need to describe the
- shape of the data you intend to store by [creating tables](../create-table.html) within your database.
-
-2. Create a table:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.customers (
- customer_id INTEGER PRIMARY KEY,
- name STRING,
- address STRING
- );
- ~~~
-
- You created a table called `customers` in the `bank` database with three columns: `customer_id`, `name`, and `address`. Each column has a [data type](../data-types.html). This means that the column will only accept the specified data type (i.e., `customer_id` can only be an [`INTEGER`](../int.html), `name` can only be a [`STRING`](../string.html), and `address` can only be a `STRING`).
-
- The `customer_id` column is also the table's [primary key](../primary-key.html). In CockroachDB, as in most SQL databases, it is always more efficient to search a table by primary key than by any other field, because rows are stored sorted by the primary key and every primary key value must be unique. The `name` column would therefore be an unsuitable primary key: it's likely that your bank will eventually have two customers with the same name.
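-
- You can inspect the table's structure, including its primary key, with `SHOW CREATE TABLE`:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SHOW CREATE TABLE bank.customers;
- ~~~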
-
-## Step 3. Insert data into your table
-
-Now that you have a table, [insert](../insert.html) some data into it.
-
-1. Insert a row into the table:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.customers
- VALUES (1, 'Petee', '101 5th Ave, New York, NY 10003');
- ~~~
-
- `INSERT` statements add new rows to a table. Values must be specified in the same order that the columns were declared in the `CREATE TABLE` statement. Note that strings must be surrounded with single quotes (`'`), while integers do not need quotes.
-
-2. Verify that the data was inserted successfully by using a [`SELECT` statement](../select.html) to retrieve data from the table:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT customer_id, name, address FROM bank.customers;
- ~~~
-
- ~~~
- customer_id | name | address
- +-------------+-------+------------------------------------------------+
- 1 | Petee | 101 5th Ave, New York, NY 10003
- (1 row)
- ~~~
-
-3. Insert another row:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.customers VALUES (2, 'Carl', NULL);
- ~~~
-
- We do not know Carl's address, so we use the special `NULL` value to indicate "unknown."
-
-4. Insert two rows in the same statement:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.customers VALUES
- (3, 'Lola', NULL),
- (4, 'Ernie', '1600 Pennsylvania Ave NW, Washington, DC 20500');
- ~~~
-
-5. Verify that the data was inserted successfully:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.customers;
- ~~~
-
- ~~~
- customer_id | name | address
- +-------------+-------+------------------------------------------------+
- 1 | Petee | 101 5th Ave, New York, NY 10003
- 2 | Carl | NULL
- 3 | Lola | NULL
- 4 | Ernie | 1600 Pennsylvania Ave NW, Washington, DC 20500
- (4 rows)
- ~~~
-
- The `SELECT *` shorthand is used to indicate that you want all the columns in the table without explicitly enumerating them.
-
- {{site.data.alerts.callout_info}}
- Tables are also called **relations**, in the mathematical sense of the word, which is why SQL databases are sometimes referred to as relational databases.
- {{site.data.alerts.end}}
-
-## Step 4. Create an `accounts` table
-
-Now that you have a place to store personal information about customers, create a table to store data about the customers' account(s) and balance.
-
-1. Create an `accounts` table:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (
- type STRING,
- balance DECIMAL(8, 2),
- customer_id INTEGER REFERENCES bank.customers (customer_id)
- );
- ~~~
-
- This table demonstrates two new SQL features.
-
- The first new feature is the balance column's [`DECIMAL` type](../decimal.html), which is capable of storing fractional numbers (the previously used `INTEGER` columns can only store whole numbers). The numbers in parentheses indicate the maximum size of the decimal number: `DECIMAL(8, 2)` means that a number with up to eight digits can be stored, with up to two of those digits past the decimal point. This means we can store account balances as large as `999999.99`, but no larger.
-
- The second new feature is the [foreign key](../foreign-key.html) created by the `REFERENCES` clause. Foreign keys are how SQL maintains referential integrity across different tables. Here, the foreign key guarantees that every account belongs to a real customer. Let's verify this works as intended.
-
-2. Try to open an account for a customer that doesn't exist:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES ('checking', 0.00, 5);
- ~~~
-
- ~~~
- pq: foreign key violation: value [5] not found in customers@primary [customer_id] (txn="sql txn" id=fd9f171c key=/Min rw=false pri=0.00960426 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534557981.019071738,0 orig=1534557981.019071738,0 max=1534557981.519071738,0 wto=false rop=false seq=1)
- ~~~
-
- As expected, the statement fails with a "foreign key violation" error, indicating that no customer with ID `5` exists.
-
-3. Now open an account for a valid customer:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES ('checking', 0.00, 1);
- ~~~
-
-4. Try to [delete](../delete.html) a customer record:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > DELETE FROM bank.customers WHERE customer_id = 1;
- ~~~
-
- The `WHERE` clause here acts as a filter. It indicates that we do not want to delete all the data in the `customers` table, but just the row where `customer_id=1`.
-
- ~~~
- pq: foreign key violation: values [1] in columns [customer_id] referenced in table "accounts"
- ~~~
-
- You weren't able to delete Petee's information because the customer still has accounts open (i.e., there are records in the `accounts` table).
-
-5. Delete a customer's account:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > DELETE FROM bank.accounts WHERE customer_id = 1;
- ~~~
-
-6. Now that the customer's account is deleted, you can delete the customer record:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > DELETE FROM bank.customers WHERE customer_id = 1;
- ~~~
-
-7. Create accounts for Carl (`customer_id=2`) and Ernie (`customer_id=4`):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES
- ('checking', 250.00, 2),
- ('savings', 314.15, 2),
- ('savings', 42000.00, 4);
- ~~~
-
-## Step 5. List account balances
-
-1. View account balances using a `SELECT` statement:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- type | balance | customer_id
- +----------+----------+-------------+
- checking | 250.00 | 2
- savings | 314.15 | 2
- savings | 42000.00 | 4
- (3 rows)
- ~~~
-
-2. Use a [join](../joins.html) to match customer IDs with the name and address in the `customers` table (an equivalent explicit join is shown after this list):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.customers NATURAL JOIN bank.accounts;
- ~~~
-
- ~~~
- customer_id | name | address | type | balance
- +-------------+-------+------------------------------------------------+----------+----------+
- 2 | Carl | NULL | checking | 250.00
- 2 | Carl | NULL | savings | 314.15
- 4 | Ernie | 1600 Pennsylvania Ave NW, Washington, DC 20500 | savings | 42000.00
- (3 rows)
- ~~~
-
- Now you have one view where you can see accounts alongside their customer information.
-
- While you could create one big table with all the above information, it's recommended that you create separate tables and join them. With this setup, you would only need to update data in one place.
-
-3. [Update](../update.html) Carl's address:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > UPDATE bank.customers
- SET address = '4 Privet Drive, Little Whinging, England'
- WHERE customer_id = 2;
- ~~~
-
-4. With the join, the address is updated on both accounts:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.customers NATURAL JOIN bank.accounts;
- ~~~
-
- ~~~
- customer_id | name | address | type | balance
- +-------------+-------+------------------------------------------------+----------+----------+
- 2 | Carl | 4 Privet Drive, Little Whinging, England | checking | 250.00
- 2 | Carl | 4 Privet Drive, Little Whinging, England | savings | 314.15
- 4 | Ernie | 1600 Pennsylvania Ave NW, Washington, DC 20500 | savings | 42000.00
- (3 rows)
- ~~~
-
- If you only had one big table, you'd have to remember to update Carl's address on every account, and the multiple copies would likely get out of sync.
-
- {{site.data.alerts.callout_info}}
- Designing a schema so that there is exactly one copy of each piece of data is called **normalization**, and is a key concept in relational databases.
- {{site.data.alerts.end}}
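-
- The `NATURAL JOIN` used above matches rows on all columns that share a name across the two tables (here, `customer_id`). The same query written as an explicit join, which makes the join condition visible, looks like this:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT c.customer_id, c.name, c.address, a.type, a.balance
-   FROM bank.customers AS c
-   JOIN bank.accounts AS a ON c.customer_id = a.customer_id;
- ~~~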
-
-## Step 6. Transactions
-
-Suppose Carl wants to withdraw $250 from his checking account. First, check that he has $250 in his account with one query, then perform the withdrawal in another.
-
-Here's how that would look (do not run this example yet):
-
-~~~ sql
-> SELECT balance >= 250 FROM bank.accounts WHERE type = 'checking' AND customer_id = 2;
-
- balance >= 250
-+----------------+
- true
-(1 row)
-
--- If false, quit. Otherwise, continue.
-
-> UPDATE bank.accounts SET balance = balance - 250 WHERE type = 'checking' AND customer_id = 2;
-~~~
-
-This would work most of the time, but there's a concurrency flaw. Suppose Carl issues two transfer requests for $250 at the exact same time; let's call them transfer A and transfer B.
-
-First, transfer A checks to see if there's at least $250 in Carl's checking account. There is, so it proceeds with the transfer. But before transfer A can deduct the $250 from his account, transfer B checks to see if there's $250 in the account. Transfer A hasn't deducted the money yet, so transfer B sees enough money and decides to proceed, too. When the two transfers complete, Carl will have withdrawn $250 that wasn't in his account, and the bank will have to cover the loss.
-
-This issue can be solved by using a [transaction](../transactions.html). If two transactions attempt to modify the same data at the same time, one of the transactions will get canceled.
-
-Using transactions is as simple as issuing a [`BEGIN` statement](../begin-transaction.html) to start a transaction and a [`COMMIT` statement](../commit-transaction.html) to finish it. You can also [`ROLLBACK` a transaction](../rollback-transaction.html) midway if, for example, you discover that the transfer has insufficient funds.
-
-Here's the above example in a transaction. Again, do not run this example yet.
-
-~~~ sql
-> BEGIN;
--- Now adding input for a multi-line SQL transaction client-side.
--- Press Enter two times to send the SQL text collected so far to the server, or Ctrl+C to cancel.
--- You can also use \show to display the statements entered so far.
- -> SELECT balance >= 250 FROM bank.accounts WHERE type = 'checking' AND customer_id = 2;
- -> -- press Enter again
-
- balance >= 250
-+----------------+
- true
-(1 row)
-
--- If false, issue a ROLLBACK statement. Otherwise, continue.
-
-OPEN> UPDATE bank.accounts SET balance = balance - 250 WHERE type = 'checking' AND customer_id = 2;
-UPDATE 1
-OPEN> COMMIT;
-COMMIT
-~~~
-
-Now try running two copies of the above transaction in parallel:
-
-1. In the SQL shell, run:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > BEGIN;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT balance >= 250 FROM bank.accounts WHERE type = 'checking' AND customer_id = 2;
- ~~~
-
-2. Press Enter a second time to send the SQL statement to the server.
-
-3. Open a new terminal, start a second SQL shell, and run the same:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > BEGIN;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT balance >= 250 FROM bank.accounts WHERE type = 'checking' AND customer_id = 2;
- ~~~
-
-4. Press Enter a second time to send the SQL statement to the server.
-
-5. Run:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > UPDATE bank.accounts SET balance = balance - 250 WHERE type = 'checking' AND customer_id = 2;
- ~~~
-
-6. Press Enter a second time to send the SQL statement to the server.
-
-7. Commit the transaction:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > COMMIT;
- ~~~
-
-8. Back in the first SQL shell, run:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > UPDATE bank.accounts SET balance = balance - 250 WHERE type = 'checking' AND customer_id = 2;
- ~~~
-
-9. Press Enter a second time to send the SQL statement to the server.
-
-10. Commit the transaction:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > COMMIT;
- ~~~
-
-When you reach the `COMMIT` statement, you'll see one transaction fail with an error like this:
-
-~~~
-pq: restart transaction: HandledRetryableTxnError: TransactionRetryError: retry txn (RETRY_WRITE_TOO_OLD)
-~~~
-
-CockroachDB detected that the two transactions are attempting conflicting withdrawals and canceled one of them.
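-
-When a transaction fails with a retryable error like this, the aborted transaction must be rolled back before you can try again:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ROLLBACK;
-~~~
-
-Then re-issue the same `BEGIN` ... `COMMIT` sequence from the top, starting with the balance check.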
-
-{{site.data.alerts.callout_info}}
-Any number of `SELECT`, `INSERT`, `UPDATE`, and `DELETE` queries can be placed in a transaction. This is what makes traditional SQL databases so powerful.
-{{site.data.alerts.end}}
-
-## Step 7. Aggregations
-
-`SELECT` statements aren't limited to combining data from different tables. They can also combine data in the same table using **aggregations**.
-
-1. Add all of the balances in the `accounts` table:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT SUM(balance) FROM bank.accounts;
- ~~~
-
- ~~~
- sum
- +----------+
- 42314.15
- (1 row)
- ~~~
-
-2. View the balance grouped by account type:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT type, SUM(balance) FROM bank.accounts GROUP BY type;
- ~~~
-
- ~~~
- type | sum
- +----------+----------+
- checking | 250.00
- savings | 42064.15
- (2 rows)
- ~~~
-
-Joins and aggregations can be combined and nested to express nearly any query.
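-
-For example, combining a join with an aggregation reports each customer's total balance across all of their accounts:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT c.name, SUM(a.balance) AS total_balance
-  FROM bank.customers AS c
-  JOIN bank.accounts AS a ON c.customer_id = a.customer_id
-  GROUP BY c.name;
-~~~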
-
-## What's next?
-
-- [Users and Privileges](users-and-privileges.html)
diff --git a/src/archived/training/under-replication-troubleshooting.md b/src/archived/training/under-replication-troubleshooting.md
deleted file mode 100644
index 192f664d84f..00000000000
--- a/src/archived/training/under-replication-troubleshooting.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Under-Replication Troubleshooting
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
-
----
-
-
-
-
-
-## Before you begin
-
-In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
-
-## Step 1. Start a 3-node cluster
-
-1. In a new terminal, start node 1:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node1 \
- --listen-addr=localhost:26257 \
- --http-addr=localhost:8080 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-2. In another terminal, start node 2:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node2 \
- --listen-addr=localhost:26258 \
- --http-addr=localhost:8081 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-3. In another terminal, start node 3:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-4. In another terminal, perform a one-time initialization of the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=localhost:26257
- ~~~
-
-## Step 2. Simulate the problem
-
-1. In the same terminal, reduce the amount of time the cluster waits before considering a node dead to just 1 minute and 15 seconds:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SET CLUSTER SETTING server.time_until_store_dead = '1m15s';"
- ~~~
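-
- You can confirm the new value with `SHOW CLUSTER SETTING`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW CLUSTER SETTING server.time_until_store_dead;"
- ~~~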
-
-2. In the terminal where node 3 is running, press **CTRL-C** to stop the node.
-
-## Step 3. Troubleshoot the problem
-
-1. Open the Admin UI at http://localhost:8080 and click **Metrics** on the left.
-
-2. In the upper right you'll see the **Replication Status** widget:
-
-
-
- You'll see that there are 24 ranges total, and 24 ranges are under-replicated, which means that every range in the cluster is missing 1 of 3 replicas. This is a vulnerable state because, if another node were to go offline, all ranges would lose consensus, and the entire cluster would become unavailable.
-
-## Step 4. Resolve the problem
-
-To bring the cluster back to a safe state, you need to either restart the down node or add a new node.
-
-1. In the terminal where node 3 was running, restart the node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --store=node3 \
- --listen-addr=localhost:26259 \
- --http-addr=localhost:8082 \
- --join=localhost:26257,localhost:26258,localhost:26259
- ~~~
-
-2. Soon, you'll see that there are no longer any under-replicated ranges.
-
-## What's next?
-
-[Cluster Unavailability Troubleshooting](cluster-unavailability-troubleshooting.html)
diff --git a/src/archived/training/users-and-privileges.md b/src/archived/training/users-and-privileges.md
deleted file mode 100644
index 5cd0730ec84..00000000000
--- a/src/archived/training/users-and-privileges.md
+++ /dev/null
@@ -1,320 +0,0 @@
----
-title: Users and Privileges
-toc: true
-toc_not_nested: true
-sidebar_data: sidebar-data-training.json
-block_search: false
-
----
-
-
-
-
-
-## Before you begin
-
-1. Make sure you have already completed [SQL Basics](sql-basics.html).
-
-2. Use the `cockroach gen` command to generate an example `startrek` database with 2 tables, `episodes` and `quotes`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach gen example-data startrek | cockroach sql \
- --insecure \
- --host=localhost:26257
- ~~~
-
-## Step 1. Check initial privileges
-
-Initially, no users other than `root` have privileges, and `root` has `ALL` privileges on everything in the cluster.
-
-1. Check the privileges on the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW GRANTS ON DATABASE startrek;"
- ~~~
-
- You'll see that only the `root` user (and the `admin` role to which `root` belongs) has access to the database:
-
- ~~~
- database_name | schema_name | grantee | privilege_type
- +---------------+--------------------+---------+----------------+
- startrek | crdb_internal | admin | ALL
- startrek | crdb_internal | root | ALL
- startrek | information_schema | admin | ALL
- startrek | information_schema | root | ALL
- startrek | pg_catalog | admin | ALL
- startrek | pg_catalog | root | ALL
- startrek | public | admin | ALL
- startrek | public | root | ALL
- (8 rows)
- ~~~
-
-2. Check the privileges on the tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW GRANTS ON startrek.episodes, startrek.quotes;"
- ~~~
-
- This time, you'll see that only the `root` user (and the `admin` role to which `root` belongs) has access to the tables:
-
- ~~~
- database_name | schema_name | table_name | grantee | privilege_type
- +---------------+-------------+------------+---------+----------------+
- startrek | public | episodes | admin | ALL
- startrek | public | episodes | root | ALL
- startrek | public | quotes | admin | ALL
- startrek | public | quotes | root | ALL
- (4 rows)
- ~~~
-
-## Step 2. Create a user
-
-1. Create a new user, `spock`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="CREATE USER spock;"
- ~~~
-
-2. Try to read from a table in the `startrek` database as `spock`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --user=spock \
- --database=startrek \
- --execute="SELECT count(*) FROM episodes;"
- ~~~
-
- Initially, `spock` has no privileges, so the query fails:
-
- ~~~
- Error: pq: user spock does not have SELECT privilege on relation episodes
- Failed running "sql"
- ~~~
-
-## Step 3. Grant privileges to the user
-
-1. As the `root` user, grant `spock` the `SELECT` privilege on all tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="GRANT SELECT ON TABLE startrek.* TO spock;"
- ~~~
-
-2. As the `root` user, grant `spock` the `INSERT` privilege on just the `startrek.quotes` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="GRANT INSERT ON TABLE startrek.quotes TO spock;"
- ~~~
-
-3. As the `root` user, show the privileges granted on tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW GRANTS ON TABLE startrek.quotes, startrek.episodes;"
- ~~~
-
- ~~~
- database_name | schema_name | table_name | grantee | privilege_type
- +---------------+-------------+------------+---------+----------------+
- startrek | public | episodes | admin | ALL
- startrek | public | episodes | root | ALL
- startrek | public | episodes | spock | SELECT
- startrek | public | quotes | admin | ALL
- startrek | public | quotes | root | ALL
- startrek | public | quotes | spock | INSERT
- startrek | public | quotes | spock | SELECT
- (7 rows)
- ~~~
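-
- To see only the grants for a specific user, add a `FOR` clause:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW GRANTS ON TABLE startrek.quotes, startrek.episodes FOR spock;"
- ~~~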
-
-## Step 4. Connect as the user
-
-1. As the `spock` user, read from the tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --user=spock \
- --execute="SELECT count(*) FROM startrek.quotes;" \
- --execute="SELECT count(*) FROM startrek.episodes;"
- ~~~
-
- Because `spock` has the `SELECT` privilege on the tables, the query succeeds:
-
- ~~~
- count
- +-------+
- 200
- (1 row)
- count
- +-------+
- 79
- (1 row)
- ~~~
-
-2. As the `spock` user, insert a row into the `startrek.quotes` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --user=spock \
- --execute="INSERT INTO startrek.quotes VALUES ('Blah blah', 'Spock', NULL, 52);"
- ~~~
-
- Because `spock` has the `INSERT` privilege on the table, the query succeeds:
-
- ~~~
- INSERT 1
- ~~~
-
-3. As the `spock` user, try to insert a row into the `startrek.episodes` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --user=spock \
- --execute="INSERT INTO startrek.episodes VALUES (80, 3, 25, 'The Episode That Never Was', 5951.5);"
- ~~~
-
- Because `spock` does not have the `INSERT` privilege on the table, the query fails:
-
- ~~~
- Error: pq: user spock does not have INSERT privilege on relation episodes
- Failed running "sql"
- ~~~
-
-## Step 5. Revoke privileges from the user
-
-1. As the `root` user, show the privileges granted on tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW GRANTS ON TABLE startrek.quotes, startrek.episodes;"
- ~~~
-
- ~~~
- database_name | schema_name | table_name | grantee | privilege_type
- +---------------+-------------+------------+---------+----------------+
- startrek | public | episodes | admin | ALL
- startrek | public | episodes | root | ALL
- startrek | public | episodes | spock | SELECT
- startrek | public | quotes | admin | ALL
- startrek | public | quotes | root | ALL
- startrek | public | quotes | spock | INSERT
- startrek | public | quotes | spock | SELECT
- (7 rows)
- ~~~
-
-2. As the `root` user, revoke the `SELECT` privilege on the `startrek.episodes` table from `spock`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="REVOKE SELECT ON TABLE startrek.episodes FROM spock;"
- ~~~
-
-3. As the `root` user, again show the privileges granted on tables in the `startrek` database:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --execute="SHOW GRANTS ON TABLE startrek.quotes, startrek.episodes;"
- ~~~
-
- Note that `spock` no longer has the `SELECT` privilege on the `episodes` table.
-
- ~~~
- database_name | schema_name | table_name | grantee | privilege_type
- +---------------+-------------+------------+---------+----------------+
- startrek | public | episodes | admin | ALL
- startrek | public | episodes | root | ALL
- startrek | public | quotes | admin | ALL
- startrek | public | quotes | root | ALL
- startrek | public | quotes | spock | INSERT
- startrek | public | quotes | spock | SELECT
- (6 rows)
- ~~~
-
-4. Now as the `spock` user, try to read from the `startrek.episodes` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- --insecure \
- --host=localhost:26257 \
- --user=spock \
- --execute="SELECT count(*) FROM startrek.episodes;"
- ~~~
-
- Because `spock` no longer has the `SELECT` privilege on the table, the query fails:
-
- ~~~
- Error: pq: user spock does not have SELECT privilege on relation episodes
- Failed running "sql"
- ~~~
-
-## Step 6. Clean up
-
-In the next module, you'll start with a fresh cluster, so take a moment to clean things up.
-
-1. Stop all CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ pkill -9 cockroach
- ~~~
-
- This simplified shutdown process is only appropriate for a lab/evaluation scenario.
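-
- In a production cluster, you would instead stop each node gracefully so it can drain its leases first; in the CockroachDB version used in this training, a sketch of that (run once per node) looks like:
-
- ~~~ shell
- $ cockroach quit --insecure --host=localhost:26257
- ~~~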
-
-2. Remove the nodes' data directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm -rf node1 node2 node3
- ~~~
-
-## What's next?
-
-[Security](security.html)
diff --git a/src/archived/training/why-cockroachdb.md b/src/archived/training/why-cockroachdb.md
deleted file mode 100644
index 58dad1059af..00000000000
--- a/src/archived/training/why-cockroachdb.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Why CockroachDB?
-toc: false
-sidebar_data: sidebar-data-training.json
-
----
-
-Kick off your training by watching Alex Robinson, a CockroachDB Engineer, explain the history of databases and why CockroachDB was built. You can also read through a related set of slides.
-
-
{% comment %} take the version name, force it to be lowercase, and replace all periods with hyphens. {% endcomment %}
+
+{% if release.release_type == "Testing" %}
+{% include releases/experimental-test-release.md version=release.release_name %}
+{% endif %}
+
+{% if release.withdrawn == true %}
+{% include releases/withdrawn.md %}
+{% elsif release.cloud_only == true %} {% comment %}Show the Cloud-first info instead of download links {% endcomment %}
+{{site.data.alerts.callout_info}}
+{{ r.cloud_only_message }}
+{{site.data.alerts.end}}
+{% else %}
+
+{{site.data.alerts.callout_info}}
+Experimental downloads are not qualified for production use and are not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.
+{{site.data.alerts.end}}
+
+{% comment %}Assign the JS for the experimental download prompt and store it in the Liquid variable experimental_download_js {% endcomment %}
+ {% capture experimental_download_js %}{% include_cached releases/experimental_download_dialog.md %}{% endcapture %}
+ {% capture onclick_string %}onclick="{{ experimental_download_js }}"{% endcapture %}
+ {% capture linux_arm_button_text_addendum %}{% if r.linux.linux_arm_experimental == true %} (Experimental){% endif %}{% if r.linux.linux_arm_limited_access == true %} (Limited Access){% endif %}{% endcapture %}
+
+
+
+ {% if release.docker.docker_arm == true %}
+[Multi-platform images](https://docs.docker.com/build/building/multi-platform/) include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.
+
+ {% if release.docker.docker_arm_limited_access == true %}
+Within the multi-platform image:
+- The ARM image is in **Limited Access**.
+- The Intel image is **Generally Available** for production use.
diff --git a/src/current/_includes/releases/v23.1/v23.1.0-beta.3.md b/src/current/_includes/releases/v23.1/v23.1.0-beta.3.md
index cb2d0ba34fc..ac37fbc1402 100644
--- a/src/current/_includes/releases/v23.1/v23.1.0-beta.3.md
+++ b/src/current/_includes/releases/v23.1/v23.1.0-beta.3.md
@@ -2,7 +2,7 @@
Release Date: April 24, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Backward-incompatible changes
diff --git a/src/current/_includes/releases/v23.1/v23.1.0-rc.1.md b/src/current/_includes/releases/v23.1/v23.1.0-rc.1.md
index d97e46bc2e8..0b41f9eb609 100644
--- a/src/current/_includes/releases/v23.1/v23.1.0-rc.1.md
+++ b/src/current/_includes/releases/v23.1/v23.1.0-rc.1.md
@@ -2,7 +2,7 @@
Release Date: May 2, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
SQL language changes
diff --git a/src/current/_includes/releases/v23.1/v23.1.0-rc.2.md b/src/current/_includes/releases/v23.1/v23.1.0-rc.2.md
index 9015eab3cc5..8da919350b2 100644
--- a/src/current/_includes/releases/v23.1/v23.1.0-rc.2.md
+++ b/src/current/_includes/releases/v23.1/v23.1.0-rc.2.md
@@ -2,7 +2,7 @@
Release Date: May 4, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Bug fixes
diff --git a/src/current/_includes/releases/v23.1/v23.1.0.md b/src/current/_includes/releases/v23.1/v23.1.0.md
index 89d652811d9..cd65a6571dc 100644
--- a/src/current/_includes/releases/v23.1/v23.1.0.md
+++ b/src/current/_includes/releases/v23.1/v23.1.0.md
@@ -4,7 +4,7 @@ Release Date: May 15, 2023
With the release of CockroachDB v23.1, we've added new capabilities in CockroachDB to help you migrate, build, and operate more efficiently. Check out a [summary of the most significant user-facing changes](#v23-1-0-feature-highlights) and then [upgrade to CockroachDB v23.1](https://www.cockroachlabs.com/docs/v23.1/upgrade-cockroach-version).
-{% include releases/release-downloads-docker-image.md release=include.release advisory_key="a103220"%}
+{% include releases/new-release-downloads-docker-image.md release=include.release advisory_key="a103220"%}
Feature highlights
@@ -32,6 +32,8 @@ The features highlighted below are freely available in CockroachDB {{ site.data.
}
+
+
SQL
Queries
@@ -429,6 +431,8 @@ This change will only apply to new clusters. Existing clusters will retain the 2
+
+
Backward-incompatible changes
Before [upgrading to CockroachDB v23.1](https://www.cockroachlabs.com/docs/v23.1/upgrade-cockroach-version), be sure to review the following backward-incompatible changes, as well as key cluster setting changes, and adjust your deployment as necessary.
diff --git a/src/current/_includes/releases/v23.1/v23.1.1.md b/src/current/_includes/releases/v23.1/v23.1.1.md
index 593beea4824..8098b813b27 100644
--- a/src/current/_includes/releases/v23.1/v23.1.1.md
+++ b/src/current/_includes/releases/v23.1/v23.1.1.md
@@ -2,7 +2,7 @@
Release Date: May 16, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Bug fixes
diff --git a/src/current/_includes/releases/v23.1/v23.1.10.md b/src/current/_includes/releases/v23.1/v23.1.10.md
index fb5db56f9e5..63e6b1f0015 100644
--- a/src/current/_includes/releases/v23.1/v23.1.10.md
+++ b/src/current/_includes/releases/v23.1/v23.1.10.md
@@ -2,7 +2,7 @@
Release Date: September 18, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Bug fixes
diff --git a/src/current/_includes/releases/v23.1/v23.1.11.md b/src/current/_includes/releases/v23.1/v23.1.11.md
index c888032e31f..cc6ab8d2b3b 100644
--- a/src/current/_includes/releases/v23.1/v23.1.11.md
+++ b/src/current/_includes/releases/v23.1/v23.1.11.md
@@ -2,7 +2,7 @@
Release Date: October 2, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
SQL language changes
diff --git a/src/current/_includes/releases/v23.1/v23.1.12.md b/src/current/_includes/releases/v23.1/v23.1.12.md
index 04d969044aa..f82e0ee773e 100644
--- a/src/current/_includes/releases/v23.1/v23.1.12.md
+++ b/src/current/_includes/releases/v23.1/v23.1.12.md
@@ -2,7 +2,7 @@
Release Date: November 13, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
diff --git a/src/current/_includes/releases/v23.1/v23.1.13.md b/src/current/_includes/releases/v23.1/v23.1.13.md
index d2a14a51091..25745b30199 100644
--- a/src/current/_includes/releases/v23.1/v23.1.13.md
+++ b/src/current/_includes/releases/v23.1/v23.1.13.md
@@ -2,7 +2,7 @@
Release Date: December 11, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
SQL language changes
diff --git a/src/current/_includes/releases/v23.1/v23.1.14.md b/src/current/_includes/releases/v23.1/v23.1.14.md
index a47a633ac6b..546e625a65b 100644
--- a/src/current/_includes/releases/v23.1/v23.1.14.md
+++ b/src/current/_includes/releases/v23.1/v23.1.14.md
@@ -2,7 +2,7 @@
Release Date: January 17, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
SQL language changes
@@ -17,16 +17,16 @@ Release Date: January 17, 2024
- The [**Cluster Overview** page](https://www.cockroachlabs.com/docs/v23.1/ui-cluster-overview-page) now correctly renders the background color for email signups, fixing an issue where it was difficult to read the text. [#114546][#114546]
- Updated the **CPU Time** label to **SQL CPU Time** on the [Overview page](https://www.cockroachlabs.com/docs/v23.1/ui-overview-dashboard) and clarified the tooltip. [#116448][#116448]
-- Fixed an issue where the following `AggHistogram`-powered metrics reported quantiles incorrectly in the [Overview page](https://www.cockroachlabs.com/docs/v23.1/ui-overview-dashboard). The list of affected metrics is:
- - `changefeed.message_size_hist`
- - `changefeed.parallel_io_queue_nanos`
- - `changefeed.sink_batch_hist_nanos`
- - `changefeed.flush_hist_nanos`
- - `changefeed.commit_latency`
- - `changefeed.admit_latency`
- - `jobs.row_level_ttl.span_total_duration`
- - `jobs.row_level_ttl.select_duration`
- - `jobs.row_level_ttl.delete_duration`
+- Fixed an issue where the following `AggHistogram`-powered metrics reported quantiles incorrectly in the [Overview page](https://www.cockroachlabs.com/docs/v23.1/ui-overview-dashboard). The list of affected metrics is:
+ - `changefeed.message_size_hist`
+ - `changefeed.parallel_io_queue_nanos`
+ - `changefeed.sink_batch_hist_nanos`
+ - `changefeed.flush_hist_nanos`
+ - `changefeed.commit_latency`
+ - `changefeed.admit_latency`
+ - `jobs.row_level_ttl.span_total_duration`
+ - `jobs.row_level_ttl.select_duration`
+ - `jobs.row_level_ttl.delete_duration`
This bug affected only DB Console dashboards and not the Prometheus-compatible endpoint `/_status/vars`. [#114747][#114747]
- In the **SQL Activity Transaction Details** page, you can now view a transaction fingerprint ID across multiple applications by passing a comma-separated encoded string of transaction fingerprint IDs in the `appNames` URL search parameter. [#116102][#116102]
diff --git a/src/current/_includes/releases/v23.1/v23.1.15.md b/src/current/_includes/releases/v23.1/v23.1.15.md
index 2c1ba09eb34..732fcea9b4b 100644
--- a/src/current/_includes/releases/v23.1/v23.1.15.md
+++ b/src/current/_includes/releases/v23.1/v23.1.15.md
@@ -2,7 +2,7 @@
Release Date: February 20, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
@@ -71,4 +71,3 @@ We would like to thank the following contributors from the CockroachDB community
[566a30300]: https://github.com/cockroachdb/cockroach/commit/566a30300
[7667710a0]: https://github.com/cockroachdb/cockroach/commit/7667710a0
[ce971160e]: https://github.com/cockroachdb/cockroach/commit/ce971160e
-
diff --git a/src/current/_includes/releases/v23.1/v23.1.16.md b/src/current/_includes/releases/v23.1/v23.1.16.md
index bf41904e4a4..f6040c4ed7a 100644
--- a/src/current/_includes/releases/v23.1/v23.1.16.md
+++ b/src/current/_includes/releases/v23.1/v23.1.16.md
@@ -2,7 +2,7 @@
Release Date: February 27, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Bug fixes
diff --git a/src/current/_includes/releases/v23.1/v23.1.17.md b/src/current/_includes/releases/v23.1/v23.1.17.md
index 798cb9438ff..9ff6f415e85 100644
--- a/src/current/_includes/releases/v23.1/v23.1.17.md
+++ b/src/current/_includes/releases/v23.1/v23.1.17.md
@@ -2,7 +2,7 @@
Release Date: March 19, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
diff --git a/src/current/_includes/releases/v23.1/v23.1.18.md b/src/current/_includes/releases/v23.1/v23.1.18.md
index ed5e3422fa0..932db309005 100644
--- a/src/current/_includes/releases/v23.1/v23.1.18.md
+++ b/src/current/_includes/releases/v23.1/v23.1.18.md
@@ -2,7 +2,7 @@
Release Date: April 9, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
SQL language changes
diff --git a/src/current/_includes/releases/v23.1/v23.1.19.md b/src/current/_includes/releases/v23.1/v23.1.19.md
index 1aca1134d2d..ecae59e1828 100644
--- a/src/current/_includes/releases/v23.1/v23.1.19.md
+++ b/src/current/_includes/releases/v23.1/v23.1.19.md
@@ -2,7 +2,7 @@
Release Date: April 18, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Bug fixes
diff --git a/src/current/_includes/releases/v23.1/v23.1.2.md b/src/current/_includes/releases/v23.1/v23.1.2.md
index db689d992b3..595b8442645 100644
--- a/src/current/_includes/releases/v23.1/v23.1.2.md
+++ b/src/current/_includes/releases/v23.1/v23.1.2.md
@@ -2,7 +2,7 @@
Release Date: May 30, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Backward-incompatible changes
@@ -122,11 +122,11 @@ Release Date: May 30, 2023
- Fixed a bug where [`COPY`](https://www.cockroachlabs.com/docs/v23.1/copy-from) in v23.1.0 and beta versions would incorrectly encode data with multiple column families. The data must be dropped and re-imported to be encoded correctly. [#103355][#103355]
- Optimized the overhead of [`pg_catalog.pg_description`](https://www.cockroachlabs.com/docs/v23.1/pg-catalog) and [`pg_catalog.pg_shdescription`](https://www.cockroachlabs.com/docs/v23.1/pg-catalog), which could lead to a performance regression relative to v22.2. [#103331][#103331]
- Timeseries [metric](https://www.cockroachlabs.com/docs/v23.1/metrics) counts will now show cumulative counts for a histogram rather than a windowed count. A `-sum` timeseries is also exported to keep track of the cumulative sum of all samples in the histogram. [#103444][#103444]
-- Fixed a bug where CockroachDB could produce incorrect results when evaluating queries with [`ORDER BY`](https://www.cockroachlabs.com/docs/v23.1/order-by) clause in rare circumstances. In particular, some rows could be duplicated if all of the following conditions were met:
+- Fixed a bug where CockroachDB could produce incorrect results when evaluating queries with an [`ORDER BY`](https://www.cockroachlabs.com/docs/v23.1/order-by) clause in rare circumstances. In particular, some rows could be duplicated if all of the following conditions were met:
1. The query had a `LIMIT` clause.
1. The `SORT` operation had to spill to disk (meaning that `LIMIT` number of rows used up non-trivial amounts of memory, e.g. the rows were "wide").
- 1. The `ORDER BY` clause contained multiple columns **and** the ordering on the prefix of those columns was already provided by the index.
-
+ 1. The `ORDER BY` clause contained multiple columns **and** the ordering on the prefix of those columns was already provided by the index.
+
The bug has been present since at least v22.1. [#102790][#102790]
- Fixed a bug where CockroachDB could previously encounter a nil pointer crash when populating data for [SQL Activity](https://www.cockroachlabs.com/docs/v23.1/ui-overview#sql-activity) page in some rare cases. The bug was present in v22.2.9 and v23.1.1 releases. [#103521][#103521]
- Fixed calls to undefined objects. [#103520][#103520]
@@ -153,8 +153,8 @@ Release Date: May 30, 2023
- Improved performance when joining with the `pg_description` table. [#103331][#103331]
- Added concurrency to speed up the phase of the [restore](https://www.cockroachlabs.com/docs/v23.1/restore) that ingests backed-up table statistics. [#102694][#102694]
- Added support for constrained scans using computed columns which are part of an [index](https://www.cockroachlabs.com/docs/v23.1/indexes) when there is an `IN` list or `OR`'ed predicate on the columns that appear in the computed column expression. [#103412][#103412]
-- Added two new statistics which are useful for tracking the efficiency of [snapshot transfers](https://www.cockroachlabs.com/docs/v23.1/architecture/replication-layer#snapshots). Some snapshots will always fail due to system level "races", but the goal is to keep it as low as possible.
- - `range.snapshots.recv-failed` - The number of snapshots sent attempts that are initiated but not accepted by the recipient.
+- Added two new statistics that are useful for tracking the efficiency of [snapshot transfers](https://www.cockroachlabs.com/docs/v23.1/architecture/replication-layer#snapshots). Some snapshots will always fail due to system-level "races", but the goal is to keep the failure rate as low as possible.
+ - `range.snapshots.recv-failed` - The number of snapshot send attempts that are initiated but not accepted by the recipient.
- `range.snapshots.recv-unusable` - The number of snapshots that were fully transmitted but not used. [#101837][#101837]
Build changes
diff --git a/src/current/_includes/releases/v23.1/v23.1.20.md b/src/current/_includes/releases/v23.1/v23.1.20.md
new file mode 100644
index 00000000000..770b9a3bd46
--- /dev/null
+++ b/src/current/_includes/releases/v23.1/v23.1.20.md
@@ -0,0 +1,35 @@
+## v23.1.20
+
+Release Date: May 1, 2024
+
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
+
+
SQL language changes
+
+- Added a [session variable]({% link v23.1/set-vars.md %}) `optimizer_use_improved_multi_column_selectivity_estimate`, which, if enabled, causes the [optimizer]({% link v23.1/cost-based-optimizer.md %}) to use an improved selectivity estimate for multi-column predicates (see the sketch after this list). This setting will default to `true` on v24.2 and later, and `false` on prior versions. [#123152][#123152]
+- Added three new [cluster settings]({% link v23.1/cluster-settings.md %}) for controlling [optimizer table statistics]({% link v23.1/cost-based-optimizer.md %}#table-statistics) forecasting:
+ 1. `sql.stats.forecasts.min_observations` is the minimum number of observed statistics required to produce a forecast.
+ 1. `sql.stats.forecasts.min_goodness_of_fit` is the minimum R² (goodness of fit) measurement required from all predictive models to use a forecast.
+ 1. `sql.stats.forecasts.max_decrease` is the most a prediction can decrease, expressed as the minimum ratio of the prediction to the lowest prior observation. [#123149][#123149]
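+
+A minimal sketch of enabling the settings from the two notes above; the numeric value is an illustrative assumption, not a recommended default:
+
+~~~ sql
+-- Session variable: opt into the improved multi-column selectivity estimate.
+SET optimizer_use_improved_multi_column_selectivity_estimate = on;
+
+-- Cluster setting: require at least 5 observed statistics before forecasting
+-- (5 is an arbitrary example value).
+SET CLUSTER SETTING sql.stats.forecasts.min_observations = 5;
+~~~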
+
+
Bug fixes
+
+- Statistics forecasts of zero rows by the [optimizer]({% link v23.1/cost-based-optimizer.md %}#table-statistics) can cause suboptimal query plans. Forecasting now avoids predicting zero rows for most downward-trending statistics. [#123149][#123149]
+- A [job]({% link v23.1/show-jobs.md %}) will now [log]({% link v23.1/logging.md %}#ops) rather than fail if it reports an out-of-bounds progress fraction. [#123133][#123133]
+
+
Performance improvements
+
+- Added a new [session variable]({% link v23.1/set-vars.md %}) `optimizer_use_improved_zigzag_join_costing`. When enabled, the cost of [zigzag joins]({% link v23.1/cost-based-optimizer.md %}#zigzag-joins) is updated so zigzag joins will only be chosen over scans if the zigzag joins produce fewer rows. This change only applies if the session variable `enable_zigzag_join` is also `on` (see the sketch after this list). [#123152][#123152]
+- Improved the selectivity estimation of multi-column filters by the [optimizer]({% link v23.1/cost-based-optimizer.md %}) when the multi-column distinct count is high. This avoids cases where CockroachDB significantly over-estimates the selectivity of a multi-column predicate and as a result can prevent the optimizer from choosing a bad query plan. [#123152][#123152]
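+
+A minimal sketch of enabling both session variables named above for the current session:
+
+~~~ sql
+SET enable_zigzag_join = on;
+SET optimizer_use_improved_zigzag_join_costing = on;
+~~~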
+
+
+
+
Contributors
+
+This release includes 5 merged PRs by 5 authors.
+
+
+
+- The `FORCE_INVERTED_INDEX` hint causes the [optimizer]({% link v23.1/cost-based-optimizer.md %}) to prefer a query plan that scans an [inverted index]({% link v23.1/inverted-indexes.md %}) of the hinted table (see the sketch after this list). An error is emitted if no such query plan can be generated. [#122301][#122301]
+- Introduced three new [cluster settings]({% link v23.1/cluster-settings.md %}) for controlling [table statistics]({% link v23.1/cost-based-optimizer.md %}#table-statistics) forecasting:
+ - [`sql.stats.forecasts.min_observations`]({% link v23.1/cluster-settings.md %}#setting-sql-stats-forecasts-min-observations) is the minimum number of observed statistics required to produce a forecast.
+ - [`sql.stats.forecasts.min_goodness_of_fit`]({% link v23.1/cluster-settings.md %}#setting-sql-stats-forecasts-min-goodness-of-fit) is the minimum R² (goodness of fit) measurement required from all predictive models to use a forecast.
+ - [`sql.stats.forecasts.max_decrease`]({% link v23.1/cluster-settings.md %}#setting-sql-stats-forecasts-max-decrease) is the most a prediction can decrease, expressed as the minimum ratio of the prediction to the lowest prior observation. [#122990][#122990]
+- Added a [session variable]({% link v23.1/set-vars.md %}) `optimizer_use_improved_multi_column_selectivity_estimate`, which, if enabled, causes the [optimizer]({% link v23.1/cost-based-optimizer.md %}) to use an improved selectivity estimate for multi-column predicates. This setting will default to `true` on v24.2 and later, and `false` on prior versions. [#123068][#123068]
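+
+A hedged sketch of the hint, assuming it uses the existing `@{...}` index-hint syntax and a hypothetical table `docs` with an inverted index on a `JSONB` column `body`:
+
+~~~ sql
+-- Prefer a plan that scans an inverted index of docs; an error is
+-- emitted if no such plan can be generated.
+SELECT * FROM docs@{FORCE_INVERTED_INDEX} WHERE body @> '{"published": true}';
+~~~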
+
+
Operational changes
+
+- A minimum [Raft scheduler]({% link v23.1/architecture/replication-layer.md %}#raft) concurrency is now enforced per [store]({% link v23.1/architecture/storage-layer.md %}#overview) so that nodes with many stores do not spread workers too thinly. This helps to avoid high scheduler latency across replicas on a store when load is imbalanced. [#120797][#120797]
+
+
Bug fixes
+
+- Fixed a bug introduced in v22.2.9 that could cause a slow memory leak to accumulate when opening many new connections. [#121056][#121056]
+- [Sequence]({% link v23.1/create-sequence.md %}) options for `NO MINVALUE` and `NO MAXVALUE` now match [PostgreSQL behavior](https://www.postgresql.org/docs/current/sql-createsequence.html). Sequence `MINVALUE` and `MAXVALUE` now automatically adjust to the bounds of a new integer type in [`ALTER SEQUENCE ... AS`]({% link v23.1/alter-sequence.md %}), matching PostgreSQL behavior (see the sketch after this list). [#121307][#121307]
+- Fixed a bug where the [timeseries graphs shown on the **SQL Activity Statement Fingerprint** page]({% link v23.1/ui-statements-page.md %}#charts) in the [DB Console]({% link v23.1/ui-overview.md %}) were not rendering properly. This involved fixing a bug related to setting the time range of the charts. [#121382][#121382] [#122235][#122235]
+- Fixed a bug where CockroachDB could incorrectly evaluate `IN` expressions that had `INT2` or `INT4` type on the left side, and values on the right side that were outside the range of the left side. The bug had been present since at least v21.1. [#121955][#121955]
+- Previously, on long-running [sessions]({% link v23.1/show-sessions.md %}) that issue many (hundreds of thousands or more) [transactions]({% link v23.1/transactions.md %}), CockroachDB's internal memory accounting system, whose limit is configured via the [`--max-sql-memory` flag]({% link v23.1/cockroach-start.md %}#general), could leak. This bug, in turn, could result in the error message `"root: memory budget exceeded"` for other queries. The bug was present in v23.1.17 and is now fixed. [#121949][#121949] [#122235][#122235]
+- Reintroduced [cluster setting]({% link v23.1/cluster-settings.md %}) `sql.auth.modify_cluster_setting_applies_to_all.enabled` so that mixed-version clusters can migrate off of this setting, which is deprecated in favor of the privilege [`MODIFYSQLCLUSTERSETTING`]({% link v23.1/set-cluster-setting.md %}#required-privileges). [#122055][#122055] [#122635][#122635]
+- Fixed a bug where a [`GRANT ... ON ALL TABLES`]({% link v23.1/grant.md %}) statement could fail if sequences existed and they did not support a privilege (e.g., `BACKUP`). [#122057][#122057]
+- Fixed a bug where [client certificate authentication]({% link v23.1/authentication.md %}#client-authentication) combined with [identity maps]({% link v23.1/sso-sql.md %}#identity-map-configuration) (`server.identity_map.configuration`) did not work. For the feature to work correctly, the client must specify a valid database user in the [connection string]({% link v23.1/connection-parameters.md %}). This bug had been present since v23.1. [#122746][#122746]
+- Statistics forecasts of zero rows can cause suboptimal [query plans]({% link v23.1/cost-based-optimizer.md %}). Forecasting will now avoid predicting zero rows for most downward-trending statistics. [#122990][#122990]
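+
+A minimal sketch of the adjusted sequence-bounds behavior described above, using a hypothetical sequence `seq`:
+
+~~~ sql
+CREATE SEQUENCE seq AS INT8;
+-- MINVALUE and MAXVALUE now adjust to the bounds of the new integer type,
+-- matching PostgreSQL behavior.
+ALTER SEQUENCE seq AS INT2;
+~~~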
+
+
Performance improvements
+
+- More efficient [query plans]({% link v23.1/cost-based-optimizer.md %}) are now generated for queries with text similarity filters, for example, `text_col % 'foobar'`. These plans are generated if the `optimizer_use_trigram_similarity_optimization` [session setting]({% link v23.1/set-vars.md %}) is enabled. It is disabled by default (see the sketch after this list). [#122683][#122683]
+- Added a new [session variable]({% link v23.1/set-vars.md %}) `optimizer_use_improved_zigzag_join_costing`. When enabled, the cost of [zigzag joins]({% link v23.1/cost-based-optimizer.md %}#zigzag-joins) is updated so zigzag joins will only be chosen over scans if the zigzag joins produce fewer rows. This change only applies if the session variable `enable_zigzag_join` is also `on`. [#123068][#123068]
+- Improved the selectivity estimation of multi-column filters by the [optimizer]({% link v23.1/cost-based-optimizer.md %}) when the multi-column distinct count is high. This avoids cases where CockroachDB significantly over-estimates the selectivity of a multi-column predicate and as a result can prevent the optimizer from choosing a bad query plan. [#123068][#123068]
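+
+A minimal sketch of the trigram-similarity setting described above, assuming a hypothetical table `t` with a `STRING` column `text_col`:
+
+~~~ sql
+SET optimizer_use_trigram_similarity_optimization = on;
+SELECT * FROM t WHERE text_col % 'foobar';
+~~~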
+
+
Contributors
+
+This release includes 59 merged PRs by 26 authors.
+
+
+
+[#120797]: https://github.com/cockroachdb/cockroach/pull/120797
+[#121056]: https://github.com/cockroachdb/cockroach/pull/121056
+[#121307]: https://github.com/cockroachdb/cockroach/pull/121307
+[#121382]: https://github.com/cockroachdb/cockroach/pull/121382
+[#121949]: https://github.com/cockroachdb/cockroach/pull/121949
+[#121955]: https://github.com/cockroachdb/cockroach/pull/121955
+[#122055]: https://github.com/cockroachdb/cockroach/pull/122055
+[#122057]: https://github.com/cockroachdb/cockroach/pull/122057
+[#122235]: https://github.com/cockroachdb/cockroach/pull/122235
+[#122301]: https://github.com/cockroachdb/cockroach/pull/122301
+[#122635]: https://github.com/cockroachdb/cockroach/pull/122635
+[#122683]: https://github.com/cockroachdb/cockroach/pull/122683
+[#122746]: https://github.com/cockroachdb/cockroach/pull/122746
+[#122990]: https://github.com/cockroachdb/cockroach/pull/122990
+[#123068]: https://github.com/cockroachdb/cockroach/pull/123068
diff --git a/src/current/_includes/releases/v23.1/v23.1.3.md b/src/current/_includes/releases/v23.1/v23.1.3.md
index 11efcc273c1..2fa09a04c6c 100644
--- a/src/current/_includes/releases/v23.1/v23.1.3.md
+++ b/src/current/_includes/releases/v23.1/v23.1.3.md
@@ -6,7 +6,7 @@ Release Date: June 13, 2023
A [bug](https://github.com/cockroachdb/cockroach/issues/104798) was discovered in a change included in v23.1.3 (this release). This bug can affect clusters upgrading to v23.1.3 from [v22.2.x]({% link releases/v22.2.md %}). In an affected cluster, jobs that were running during the upgrade could hang or fail to run after the upgrade is finalized. Users upgrading from v22.2.x are advised to use [v23.1.2](#v23-1-2) to upgrade, or to set the [`cluster.preserve_downgrade_option`](https://www.cockroachlabs.com/docs/v23.1/upgrade-cockroach-version#step-3-decide-how-the-upgrade-will-be-finalized) cluster setting to delay finalization of the upgrade until they can upgrade to v23.1.4.
{{site.data.alerts.end}}
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
diff --git a/src/current/_includes/releases/v23.1/v23.1.4.md b/src/current/_includes/releases/v23.1/v23.1.4.md
index a712fd8832a..5aec9530091 100644
--- a/src/current/_includes/releases/v23.1/v23.1.4.md
+++ b/src/current/_includes/releases/v23.1/v23.1.4.md
@@ -2,7 +2,7 @@
Release Date: June 20, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
diff --git a/src/current/_includes/releases/v23.1/v23.1.5.md b/src/current/_includes/releases/v23.1/v23.1.5.md
index 34493bc3338..96e04fed756 100644
--- a/src/current/_includes/releases/v23.1/v23.1.5.md
+++ b/src/current/_includes/releases/v23.1/v23.1.5.md
@@ -2,7 +2,7 @@
Release Date: July 5, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
diff --git a/src/current/_includes/releases/v23.1/v23.1.6.md b/src/current/_includes/releases/v23.1/v23.1.6.md
index 68876c7b85e..18fd56abd79 100644
--- a/src/current/_includes/releases/v23.1/v23.1.6.md
+++ b/src/current/_includes/releases/v23.1/v23.1.6.md
@@ -2,22 +2,22 @@
Release Date: July 24, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Bug fixes
- Fixed a bug in v23.1.5 where [debug zips](https://www.cockroachlabs.com/docs/v23.1/cockroach-debug-zip) were empty in the `crdb_internal.cluster_settings.txt` file. Debug zips now properly show the information from `cluster_settings`. [#107105][#107105]
- Fixed a bug where some primary indexes would incorrectly be treated internally as secondary indexes, which could cause schema change operations to fail. The bug could occur if [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v23.1/alter-table#alter-primary-key) was used on v21.1 or earlier, and the cluster was upgraded. [#106426][#106426]
-- Extended the `cockroach debug doctor` to detect [indexes](https://www.cockroachlabs.com/docs/v23.1/indexes) which could potentially lose data by being dropped when a column is stored inside them and added a check inside [`DROP INDEX`](https://www.cockroachlabs.com/docs/v23.1/drop-index) to prevent dropping indexes with this problem to avoid data loss. [#106863][#106863]
+- Extended the `cockroach debug doctor` to detect [indexes](https://www.cockroachlabs.com/docs/v23.1/indexes) which could potentially lose data by being dropped when a column is stored inside them and added a check inside [`DROP INDEX`](https://www.cockroachlabs.com/docs/v23.1/drop-index) to prevent dropping indexes with this problem to avoid data loss. [#106863][#106863]
Contributors
-This release includes 3 merged PRs by 15 authors.
+This release includes 3 merged PRs by 15 authors.
[#106863]: https://github.com/cockroachdb/cockroach/pull/106863
[#106426]: https://github.com/cockroachdb/cockroach/pull/106426
-[#107105]: https://github.com/cockroachdb/cockroach/pull/107105
\ No newline at end of file
+[#107105]: https://github.com/cockroachdb/cockroach/pull/107105
diff --git a/src/current/_includes/releases/v23.1/v23.1.7.md b/src/current/_includes/releases/v23.1/v23.1.7.md
index fc863616e63..9784e30fdb6 100644
--- a/src/current/_includes/releases/v23.1/v23.1.7.md
+++ b/src/current/_includes/releases/v23.1/v23.1.7.md
@@ -2,7 +2,7 @@
Release Date: July 31, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
SQL language changes
diff --git a/src/current/_includes/releases/v23.1/v23.1.8.md b/src/current/_includes/releases/v23.1/v23.1.8.md
index 1c529d95e80..43b294d26a2 100644
--- a/src/current/_includes/releases/v23.1/v23.1.8.md
+++ b/src/current/_includes/releases/v23.1/v23.1.8.md
@@ -2,7 +2,7 @@
Release Date: August 7, 2023
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
diff --git a/src/current/_includes/releases/v23.2/v23.2.0-rc.2.md b/src/current/_includes/releases/v23.2/v23.2.0-rc.2.md
index 942757d133a..7cdcddc0a66 100644
--- a/src/current/_includes/releases/v23.2/v23.2.0-rc.2.md
+++ b/src/current/_includes/releases/v23.2/v23.2.0-rc.2.md
@@ -2,7 +2,7 @@
Release Date: January 9, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Bug fixes
diff --git a/src/current/_includes/releases/v23.2/v23.2.0.md b/src/current/_includes/releases/v23.2/v23.2.0.md
index 36933a62ac7..c4a60255be8 100644
--- a/src/current/_includes/releases/v23.2/v23.2.0.md
+++ b/src/current/_includes/releases/v23.2/v23.2.0.md
@@ -4,7 +4,7 @@ Release Date: February 5, 2024
With the release of CockroachDB v23.2, we've added new capabilities to help you migrate, build, and operate more efficiently. See our summary of the most significant user-facing changes under [Feature Highlights](#v23-2-0-feature-highlights).
-{% include releases/release-downloads-docker-image.md release=include.release advisory_key="a103220"%}
+{% include releases/new-release-downloads-docker-image.md release=include.release advisory_key="a103220"%}
Feature highlights
@@ -23,49 +23,12 @@ This section summarizes the most significant user-facing changes in v23.2.0 and
- [Known limitations](#v23-2-0-known-limitations)
- [Additional resources](#v23-2-0-additional-resources)
-
-
{{ site.data.alerts.callout_info }}
In CockroachDB Self-Hosted, all available features are free to use unless their description specifies that an Enterprise license is required. For more information, see the [Licensing FAQ](https://www.cockroachlabs.com/docs/stable/licensing-faqs).
{{ site.data.alerts.end }}
+
+
Observability
@@ -380,6 +343,8 @@ In CockroachDB Self-Hosted, all available features are free to use unless their
+
+
Backward-incompatible changes
Before [upgrading to CockroachDB v23.2]({% link v23.2/upgrade-cockroach-version.md %}), be sure to review the following backward-incompatible changes, as well as [key cluster setting changes](#v23-2-0-cluster-settings), and adjust your deployment as necessary.
diff --git a/src/current/_includes/releases/v23.2/v23.2.1.md b/src/current/_includes/releases/v23.2/v23.2.1.md
index 3b4ed063546..929ae6c47c9 100644
--- a/src/current/_includes/releases/v23.2/v23.2.1.md
+++ b/src/current/_includes/releases/v23.2/v23.2.1.md
@@ -2,7 +2,7 @@
Release Date: February 20, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
diff --git a/src/current/_includes/releases/v23.2/v23.2.2.md b/src/current/_includes/releases/v23.2/v23.2.2.md
index acc6b3b0d20..59485e3d30f 100644
--- a/src/current/_includes/releases/v23.2/v23.2.2.md
+++ b/src/current/_includes/releases/v23.2/v23.2.2.md
@@ -2,7 +2,7 @@
Release Date: February 27, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Bug fixes
diff --git a/src/current/_includes/releases/v23.2/v23.2.3.md b/src/current/_includes/releases/v23.2/v23.2.3.md
index f7f88b45031..a03ac245ca0 100644
--- a/src/current/_includes/releases/v23.2/v23.2.3.md
+++ b/src/current/_includes/releases/v23.2/v23.2.3.md
@@ -2,7 +2,7 @@
Release Date: March 20, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
@@ -110,4 +110,4 @@ This release includes 118 merged PRs by 42 authors.
[#119738]: https://github.com/cockroachdb/cockroach/pull/119738
[#119768]: https://github.com/cockroachdb/cockroach/pull/119768
[#120076]: https://github.com/cockroachdb/cockroach/pull/120076
-[#120245]: https://github.com/cockroachdb/cockroach/pull/120245
\ No newline at end of file
+[#120245]: https://github.com/cockroachdb/cockroach/pull/120245
diff --git a/src/current/_includes/releases/v23.2/v23.2.4.md b/src/current/_includes/releases/v23.2/v23.2.4.md
index 15a74769583..f6b48661f8d 100644
--- a/src/current/_includes/releases/v23.2/v23.2.4.md
+++ b/src/current/_includes/releases/v23.2/v23.2.4.md
@@ -2,7 +2,7 @@
Release Date: April 11, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
SQL language changes
@@ -53,4 +53,4 @@ This release includes 65 merged PRs by 37 authors
[#120396]: https://github.com/cockroachdb/cockroach/pull/120396
[#120933]: https://github.com/cockroachdb/cockroach/pull/120933
[#121329]: https://github.com/cockroachdb/cockroach/pull/121329
-[#121875]: https://github.com/cockroachdb/cockroach/pull/121875
\ No newline at end of file
+[#121875]: https://github.com/cockroachdb/cockroach/pull/121875
diff --git a/src/current/_includes/releases/v23.2/v23.2.5.md b/src/current/_includes/releases/v23.2/v23.2.5.md
new file mode 100644
index 00000000000..a5fc11c5696
--- /dev/null
+++ b/src/current/_includes/releases/v23.2/v23.2.5.md
@@ -0,0 +1,65 @@
+## v23.2.5
+
+Release Date: May 7, 2024
+
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
+
+
SQL language changes
+
+- The new [cluster setting](../v23.2/cluster-settings.html) [`sql.stats.virtual_computed_columns.enabled`](../v23.2/cluster-settings.html#setting-sql-stats-virtual-computed-columns-enabled) enables collection of [table statistics](../v23.2/cost-based-optimizer.html#table-statistics) on virtual [computed columns](../v23.2/computed-columns.html). [#120923][#120923]
+- The new [session variable](../v23.2/session-variables.html) `optimizer_use_virtual_computed_column_stats` configures the [optimizer](../v23.2/cost-based-optimizer.html) to consider table statistics on virtual computed columns (see the sketch after this list). [#121179][#121179]
+- The new `FORCE_INVERTED_INDEX` [hint](../v23.2/indexes.html#selection) configures the [optimizer](../v23.2/cost-based-optimizer.html) to prefer a query plan that scans an inverted index of the hinted table. If no such query plan can be generated, an error is emitted. [#122300][#122300]
+- The [optimizer](../v23.2/cost-based-optimizer.html) can now plan constrained scans over [partial indexes](../v23.2/partial-indexes.html) in more cases, particularly on partial indexes with predicates referencing virtual [computed columns](../v23.2/computed-columns.html). [#123408][#123408]
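+
+A minimal sketch of enabling the two virtual-computed-column settings named above:
+
+~~~ sql
+SET CLUSTER SETTING sql.stats.virtual_computed_columns.enabled = true;
+SET optimizer_use_virtual_computed_column_stats = on;
+~~~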
+
+
Operational changes
+
+- A minimum [Raft](../v23.2/architecture/replication-layer.html#raft) scheduler concurrency is now enforced per [store](../v23.2/cockroach-start.html#storage) so that a node with many stores does not spread workers too thinly. This avoids high scheduler latency across [replicas](../v23.2/architecture/glossary.html#replica) on a store when load is imbalanced. [#120798][#120798]
+- A [changefeed](../v23.2/change-data-capture-overview.html) optimization to reduce duplicates during aggregator restarts has been disabled due to poor performance. [#123596][#123596]
+
+
DB Console changes
+
+- The **Commit Latency** chart in the [Changefeed Dashboard](../v23.2/ui-cdc-dashboard.html) now aggregates by max instead of by sum for multi-node changefeeds. This more accurately reflects the amount of time for events to be acknowledged by the downstream sink. [#121235][#121235]
+
+
Bug fixes
+
+- Fixed a slow memory leak when opening many new [connections](../v23.2/connect-to-the-database.html). This bug was introduced in v22.2.9 and v23.1.0. [#121055][#121055]
+- Fixed a bug that occurred when using [`ALTER TABLE`](../v23.2/alter-table.html) to drop and re-add a [`CHECK` constraint](../v23.2/check.html) with the same name. [#121055][#121055]
+- [Sequence](../v23.2/create-sequence.html) options `MINVALUE` and `MAXVALUE` now automatically adjust to the bounds of a new integer type, mirroring PostgreSQL behavior. [#121309][#121309]
+- Fixed a bug that could prevent timeseries graphs shown on the DB Console SQL Activity [Statement Details](../v23.2/ui-statements-page.html) page from rendering correctly when specifying a custom time range. [#121383][#121383]
+- Fixed a bug present since at least v21.1 that could lead to incorrect evaluation of an `IN` expression with:
+ - [`INT2` or `INT4`](../v23.2/int.html) type on the left side, and
+ - Values on the right side that are outside of the range of the left side.
+
+ [#121953][#121953]
+- Fixed a leak in reported memory usage (not the actual memory usage) by the internal memory accounting system, whose limit is configured via the [`--max-sql-memory`](../v23.2/cockroach-start.html#flags) flag, that occurred when a long-running session issued hundreds of thousands or more [transactions](../v23.2/transactions.html). This reporting bug could cause `root: memory budget exceeded` errors for other queries. The bug was introduced in v23.1.17 and v23.2.3. [#121950][#121950]
+- Fixed a bug introduced in v23.2.4 that could prevent collection of [table statistics](../v23.2/cost-based-optimizer.html#table-statistics) on tables that have virtual [computed columns](../v23.2/computed-columns.html) of a [user-defined type](../v23.2/create-type.html) when the newly introduced [cluster setting](../v23.2/cluster-settings.html) [`sql.stats.virtual_computed_columns.enabled`](../v23.2/cluster-settings.html#setting-sql-stats-virtual-computed-columns-enabled) is set to `true` (it defaults to `false`). [#122319][#122319]
+- Fixed a bug where a [`GRANT ... ON ALL TABLES`](../v23.2/grant.html) statement could fail if a sequence existed that did not support the [privilege](../v23.2/security-reference/authorization.html#privileges) being granted. [#122034][#122034]
+- Fixed an existing bug where an unused value could not be dropped from an [`ENUM`](../v23.2/enum.html) if the `ENUM` itself was referenced by a [user-defined function](../v23.2/user-defined-functions.html). A value can now be dropped from an `ENUM` as long as the value itself is not referenced by any other data element, including a user-defined function (see the sketch after this list). [#121237][#121237]
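+
+A minimal sketch of the `ENUM` fix described above, using a hypothetical `status` type:
+
+~~~ sql
+CREATE TYPE status AS ENUM ('open', 'closed', 'unused');
+-- Succeeds as long as 'unused' is not referenced by any other data
+-- element, including a user-defined function.
+ALTER TYPE status DROP VALUE 'unused';
+~~~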
+
+
+
+
Contributors
+
+This release includes 79 merged PRs by 33 authors.
+
+
@@ -13,7 +13,6 @@ Release Date: March 7, 2024
- [`ALTER CHANGEFEED`]({% link v23.2/alter-changefeed.md %}) no longer removes a [CDC query]({% link v23.2/cdc-queries.md %}) when modifying changefeed properties. [#116498][#116498]
- `changefeed.balance_range_distribution.enable` is now deprecated. Instead, use the new [cluster setting]({% link v23.2/cluster-settings.md %}) `changefeed.default_range_distribution_strategy`. `changefeed.default_range_distribution_strategy='balanced_simple'` has the same effect as setting `changefeed.balance_range_distribution.enable=true`. It does not require `initial_scan='only'`, which was required by the old setting. [#115166][#115166]
- CDC queries now correctly handle the [`changefeed_creation_timestamp`]({% link v23.2/cdc-queries.md %}#cdc-query-function-support) function. [#117520][#117520]
-- The new `WITH PRIOR REPLICATION DETAILS` option can now be passed when inspecting a virtual cluster with [`SHOW VIRTUAL CLUSTER`]({% link v23.2/show-virtual-cluster.md %}). This will request additional details about where the virtual cluster was replicated **from** and when it was activated, if that virtual cluster was created via replication. [#117636][#117636]
- The new syntax `ALTER VIRTUAL CLUSTER virtual-cluster START REPLICATION OF virtual-cluster ON physical-cluster` can now be used to reconfigure virtual clusters previously serving as sources for [physical cluster replication]({% link v23.2/physical-cluster-replication-overview.md %}) to become standbys to a promoted standby. This reverses the direction of replication while maximizing data reuse. [#117656][#117656]
- [`BACKUP`]({% link v23.2/backup.md %})s now load range information that is used to avoid a spike in metadata lookups when backups begin. [#116520][#116520]
- Clusters created to run [physical cluster replication]({% link v23.2/physical-cluster-replication-overview.md %}) no longer automatically disable the [`spanconfig.range_coalescing.system.enabled`]({% link v23.2/cluster-settings.md %}#setting-spanconfig-storage-coalesce-adjacent-enabled) and [`spanconfig.range_coalescing.application.enabled`]({% link v23.2/cluster-settings.md %}#setting-spanconfig-tenant-coalesce-adjacent-enabled) cluster settings. Users who started using physical cluster replication on v23.1 or v23.2 may wish to manually reset these settings. [#119221][#119221]
@@ -64,7 +63,7 @@ Release Date: March 7, 2024
- `OUT` and `INOUT` parameter classes are now supported in [user-defined functions]({% link v23.2/user-defined-functions.md %}). [#118610][#118610]
- Out-of-process SQL servers will now start exporting a new `sql.aggregated_livebytes` [metric]({% link v23.2/metrics.md %}). This metric gets updated once every 60 seconds by default, and its update interval can be configured via the `tenant_global_metrics_exporter_interval` [cluster setting]({% link v23.2/cluster-settings.md %}). [#119140][#119140]
- Added support for index hints with [`INSERT`]({% link v23.2/insert.md %}) and [`UPSERT`]({% link v23.2/upsert.md %}) statements. This allows `INSERT ... ON CONFLICT` and `UPSERT` queries to use index hints in the same way they are already supported for [`UPDATE`]({% link v23.2/update.md %}) and [`DELETE`]({% link v23.2/delete.md %}) statements. [#119104][#119104]
-- Added a new `ttl_disable_changefeed_replication` table storage parameter that can be used to disable changefeed replication for [row-level TTL]({% link v23.2/row-level-ttl.md %}) on a per-table basis. [#119611][#119611]
+- Added a new [`ttl_disable_changefeed_replication`]({% link v24.1/row-level-ttl.md %}#filter-changefeeds-for-tables-using-row-level-ttl) table storage parameter that can be used to disable changefeed replication for [row-level TTL]({% link v23.2/row-level-ttl.md %}) on a per-table basis. [#119611][#119611]
Operational changes
diff --git a/src/current/_includes/releases/v24.1/v24.1.0-alpha.2.md b/src/current/_includes/releases/v24.1/v24.1.0-alpha.2.md
index ee04e588c80..91703fa7ad2 100644
--- a/src/current/_includes/releases/v24.1/v24.1.0-alpha.2.md
+++ b/src/current/_includes/releases/v24.1/v24.1.0-alpha.2.md
@@ -2,7 +2,7 @@
Release Date: March 11, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
diff --git a/src/current/_includes/releases/v24.1/v24.1.0-alpha.3.md b/src/current/_includes/releases/v24.1/v24.1.0-alpha.3.md
index 49b65e3da53..a9c256e8cf7 100644
--- a/src/current/_includes/releases/v24.1/v24.1.0-alpha.3.md
+++ b/src/current/_includes/releases/v24.1/v24.1.0-alpha.3.md
@@ -2,7 +2,7 @@
Release Date: March 18, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
@@ -72,5 +72,3 @@ We would like to thank the following contributors from the CockroachDB community
[#120097]: https://github.com/cockroachdb/cockroach/pull/120097
[#120137]: https://github.com/cockroachdb/cockroach/pull/120137
[#120145]: https://github.com/cockroachdb/cockroach/pull/120145
-
-
diff --git a/src/current/_includes/releases/v24.1/v24.1.0-alpha.4.md b/src/current/_includes/releases/v24.1/v24.1.0-alpha.4.md
index ece44ea0cf8..0d48352728c 100644
--- a/src/current/_includes/releases/v24.1/v24.1.0-alpha.4.md
+++ b/src/current/_includes/releases/v24.1/v24.1.0-alpha.4.md
@@ -2,7 +2,7 @@
Release Date: March 25, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
@@ -10,15 +10,15 @@ Release Date: March 25, 2024
General changes
-- The following [metrics](../v24.1/metrics.html) were added for observability of per-store disk events:
- - `storage.disk.read.count`
- - `storage.disk.read.bytes`
- - `storage.disk.read.time`
- - `storage.disk.write.count`
- - `storage.disk.write.bytes`
- - `storage.disk.write.time`
- - `storage.disk.io.time`
- - `storage.disk.weightedio.time`
+- The following [metrics](../v24.1/metrics.html) were added for observability of per-store disk events:
+ - `storage.disk.read.count`
+ - `storage.disk.read.bytes`
+ - `storage.disk.read.time`
+ - `storage.disk.write.count`
+ - `storage.disk.write.bytes`
+ - `storage.disk.write.time`
+ - `storage.disk.io.time`
+ - `storage.disk.weightedio.time`
- `storage.disk.iopsinprogress`
The metrics match the definitions of the `sys.host.disk.*` system metrics. [#119885][#119885]
@@ -27,7 +27,7 @@ Release Date: March 25, 2024
- `server.controller.default_target_cluster` can now be set to any virtual cluster name by default, including a virtual cluster yet to be created or have service started. [#120080][#120080]
- The [`READ COMMITTED`](../v24.1/read-committed.html) isolation level now requires the cluster to have a valid enterprise license. [#120154][#120154]
-- The new boolean changefeed option `ignore_disable_changefeed_replication`, when set to `true`, prevents the changefeed from filtering events even if CDC filtering is configured via the `disable_changefeed_replication` [session variable](../v24.1/session-variables.html), `sql.ttl.changefeed_replication.disabled` [cluster setting](../v24.1/cluster-settings.html), or the `ttl_disable_changefeed_replication` [table storage parameter](../v24.1/alter-table.html#table-storage-parameters). [#120255][#120255]
+- The new boolean changefeed option [`ignore_disable_changefeed_replication`](../v24.1/create-changefeed.html#ignore-disable-changefeed), when set to `true`, prevents the changefeed from filtering events even if CDC filtering is configured via the `disable_changefeed_replication` [session variable](../v24.1/session-variables.html), `sql.ttl.changefeed_replication.disabled` [cluster setting](../v24.1/cluster-settings.html), or the `ttl_disable_changefeed_replication` [table storage parameter](../v24.1/alter-table.html#table-storage-parameters). [#120255][#120255]
SQL language changes
diff --git a/src/current/_includes/releases/v24.1/v24.1.0-alpha.5.md b/src/current/_includes/releases/v24.1/v24.1.0-alpha.5.md
index b8f0b770c62..e9e58c7fe06 100644
--- a/src/current/_includes/releases/v24.1/v24.1.0-alpha.5.md
+++ b/src/current/_includes/releases/v24.1/v24.1.0-alpha.5.md
@@ -2,7 +2,7 @@
Release Date: April 1, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
diff --git a/src/current/_includes/releases/v24.1/v24.1.0-beta.1.md b/src/current/_includes/releases/v24.1/v24.1.0-beta.1.md
index 5b59305b6c2..4a0ee755e5f 100644
--- a/src/current/_includes/releases/v24.1/v24.1.0-beta.1.md
+++ b/src/current/_includes/releases/v24.1/v24.1.0-beta.1.md
@@ -2,7 +2,7 @@
Release Date: April 17, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
SQL language changes
diff --git a/src/current/_includes/releases/v24.1/v24.1.0-beta.2.md b/src/current/_includes/releases/v24.1/v24.1.0-beta.2.md
index 6f5a9f0a481..d4110a09d5a 100644
--- a/src/current/_includes/releases/v24.1/v24.1.0-beta.2.md
+++ b/src/current/_includes/releases/v24.1/v24.1.0-beta.2.md
@@ -2,7 +2,7 @@
Release Date: April 24, 2024
-{% include releases/release-downloads-docker-image.md release=include.release %}
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
Security updates
diff --git a/src/current/_includes/releases/v24.1/v24.1.0-beta.3.md b/src/current/_includes/releases/v24.1/v24.1.0-beta.3.md
new file mode 100644
index 00000000000..0852831e401
--- /dev/null
+++ b/src/current/_includes/releases/v24.1/v24.1.0-beta.3.md
@@ -0,0 +1,60 @@
+## v24.1.0-beta.3
+
+Release Date: April 30, 2024
+
+{% include releases/release-downloads-docker-image.md release=include.release %}
+
+
SQL language changes
+
+- Updated the [`SHOW GRANTS`]({% link v24.1/show-grants.md %}) responses to display `object_type` and `object_name` columns, which replace the `relation_name` column (see the sketch after this list). [#122823][#122823]
+- Added privileges granted on [external connections]({% link v24.1/create-external-connection.md %}) to the output of the [`SHOW GRANTS`]({% link v24.1/show-grants.md %}) command. [#122823][#122823]
+- Introduced three new [cluster settings]({% link v24.1/cluster-settings.md %}) for controlling table statistics forecasting:
+ - [`sql.stats.forecasts.min_observations`]({% link v24.1/cluster-settings.md %}#setting-sql-stats-forecasts-min-observations) is the minimum number of observed statistics required to produce a forecast.
+ - [`sql.stats.forecasts.min_goodness_of_fit`]({% link v24.1/cluster-settings.md %}#setting-sql-stats-forecasts-min-goodness-of-fit) is the minimum R² (goodness of fit) measurement required from all predictive models to use a forecast.
+ - [`sql.stats.forecasts.max_decrease`]({% link v24.1/cluster-settings.md %}#setting-sql-stats-forecasts-max-decrease) is the most a prediction can decrease, expressed as the minimum ratio of the prediction to the lowest prior observation. [#122459][#122459]
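+
+A minimal sketch of the updated `SHOW GRANTS` output shape; the role name is hypothetical:
+
+~~~ sql
+SHOW GRANTS FOR max;
+-- Each row now reports object_type and object_name in place of relation_name.
+~~~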
+
+
Bug fixes
+
+- Fixed a bug that could lead to descriptors having privileges granted to roles that no longer exist. Added an automated cleanup for [dropped roles]({% link v24.1/drop-role.md %}) inside descriptors. [#122701][#122701]
+- Fixed a bug where [client certificate authentication]({% link v24.1/authentication.md %}#client-authentication) combined with [identity maps]({% link v24.1/sso-sql.md %}#identity-map-configuration) (`server.identity_map.configuration`) did not work since v23.1. For the feature to work correctly, the client must specify a valid database user in the [connection string]({% link v24.1/connection-parameters.md %}). [#122738][#122738]
+- Fixed a bug where the [row-based execution engine]({% link v24.1/architecture/sql-layer.md %}#query-execution) could drop a [`LIMIT`]({% link v24.1/limit-offset.md %}) clause when there was an [`ORDER BY`]({% link v24.1/order-by.md %}) clause, and the ordering was partially provided by an input operator. For example, this bug could occur with an ordering such as `ORDER BY a, b` when the scanned index was only ordered on column `a`. The impact of this bug was that more rows may have been returned than specified by the `LIMIT` clause. This bug is only present when not using the [vectorized execution engine]({% link v24.1/architecture/sql-layer.md %}#vectorized-query-execution). That is, when running with `SET vectorize = off;`. This bug has existed since CockroachDB v22.1. [#122837][#122837]
+- Previously, CockroachDB could run into an internal error when evaluating [PL/pgSQL]({% link v24.1/plpgsql.md %}) routines with nested blocks. The bug was present only in v24.1.0-beta versions and is now fixed. [#122939][#122939]
+- Fixed a bug where [`UPDATE`]({% link v24.1/update.md %}) and [`UPSERT`]({% link v24.1/upsert.md %}) queries with a subquery were sometimes inappropriately using implicit [`FOR UPDATE`]({% link v24.1/select-for-update.md %}) locking within the subquery. This bug has existed since implicit `FOR UPDATE` locking was introduced in v20.1. [#121391][#121391]
+- [Dropping]({% link v24.1/alter-table.md %}#drop-column) and [adding]({% link v24.1/alter-table.md %}#add-column) a column with the same name no longer results in a `column already exists` error. [#122631][#122631]
+- Fixed a bug that could cause an internal error of the form `invalid datum type given: ..., expected ...` when a `RECORD`-returning [user-defined function]({% link v24.1/user-defined-functions.md %}), used as a data source, was supplied a column definition list with mismatched types. This bug has existed since v23.1. [#122305][#122305]
+- Fixed a bug that could result in an internal error when attempting to create a [PL/pgSQL]({% link v24.1/plpgsql.md %}) routine using the (unsupported) `%ROWTYPE` syntax for a variable declaration. Now, an expected syntax error is returned instead. [#122966][#122966]
+- Fixed a bug that could result in an assertion error during evaluation of [PL/pgSQL]({% link v24.1/plpgsql.md %}) routines that invoke procedures while using `DEFAULT` arguments. The bug was present in v24.1.0-beta releases and is now fixed. [#122943][#122943]
+- Previously, privileges granted for [external connections]({% link v24.1/create-external-connection.md %}) were displayed in `SHOW SYSTEM GRANTS` with no associated object name. These privileges are no longer displayed there. Instead, use the statement `SHOW GRANTS ON EXTERNAL CONNECTION` to view external connection privileges with their associated object name (see the sketch after this list). [#122857][#122857]
+- Statistics forecasts of zero rows can cause suboptimal [query plans]({% link v24.1/cost-based-optimizer.md %}). Forecasting will now avoid predicting zero rows for most downward-trending statistics. [#122459][#122459]
+- Fixed a bug introduced in v23.2 that could cause a [PL/pgSQL]({% link v24.1/plpgsql.md %}) variable assignment to not be executed if the variable was never referenced after the assignment. [#123045][#123045]
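+
+A minimal sketch of inspecting external connection privileges as described above, assuming a hypothetical external connection named `backup_bucket`:
+
+~~~ sql
+SHOW GRANTS ON EXTERNAL CONNECTION backup_bucket;
+~~~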
+
+
Performance improvements
+
+- More efficient [query plans]({% link v24.1/cost-based-optimizer.md %}) are now generated for queries with text similarity filters, for example, `text_col % 'foobar'`. These plans are generated if the `optimizer_use_trigram_similarity_optimization` [session setting]({% link v24.1/set-vars.md %}) is enabled. It is disabled by default. [#122838][#122838]
+- The [optimizer]({% link v24.1/cost-based-optimizer.md %}) now costs `distinct-on` operators more accurately. It may produce more efficient query plans in some cases. [#122850][#122850]
+- Improved optimization speed for some statements using `GROUP BY`, `DISTINCT`, or `ON CONFLICT` by skipping the [optimizer]({% link v24.1/cost-based-optimizer.md %}) rule `SplitGroupByScanIntoUnionScans` when it is not needed. [#123034][#123034]
+
+
+
+
Contributors
+
+This release includes 56 merged PRs by 25 authors.
+
+
+
+- Added a new [session setting]({% link v24.1/session-variables.md %}) `optimizer_use_improved_multi_column_selectivity_estimate`, which, if enabled, causes the [optimizer]({% link v24.1/cost-based-optimizer.md %}) to use an improved selectivity estimate for multi-column predicates. This setting will default to `true` on v24.2 and later, and `false` on earlier versions. [#123106][#123106]
+
+
Operational changes
+
+- Added two new [metrics]({% link v24.1/metrics.md %}): `range.snapshots.upreplication.rcvd-bytes` counts the number of [Raft]({% link v24.1/architecture/replication-layer.md %}#raft) recovery snapshot bytes received, and `range.snapshots.upreplication.sent-bytes` counts the number of Raft recovery snapshot bytes sent. Also updated `range.snapshots.recovery.rcvd-bytes` and `range.snapshots.recovery.sent-bytes` to only include Raft snapshots. A new line was added to the [**Snapshot Data Received**]({% link v24.1/ui-replication-dashboard.md %}#snapshot-data-received) graph. [#123055][#123055]
+
+
DB Console changes
+
+- Added a **Replication Lag** graph to the [**Physical Cluster Replication**]({% link v24.1/physical-cluster-replication-monitoring.md %}) dashboard to measure replication lag between primary and standby clusters using [physical cluster replication]({% link v24.1/physical-cluster-replication-overview.md %}). [#123285][#123285]
+
+
Bug fixes
+
+- Fixed a bug that caused the [**Tables**]({% link v24.1/ui-databases-page.md %}#tables-view) and [**Table Details**]({% link v24.1/ui-databases-page.md %}#table-details) pages in the DB Console to display an incorrect value for **Table Stats Last Updated**. [#122816][#122816]
+- Fixed a bug in the DB Console's [**Custom Chart**]({% link v24.1/ui-custom-chart-debug-page.md %}) tool where store-level metrics were displayed only for the first store ID associated with the node. Now data is displayed for all stores present on a node, and a single time series is shown for each store, rather than an aggregated value for all of the node's stores. This allows finer-grained monitoring of store-level metrics. [#122705][#122705]
+- Fixed a bug introduced in v22.2 that could cause the internal error `attempting to append refresh spans after the tracked timestamp has moved forward` in some edge cases. [#123136][#123136]
+- Fixed a bug where a `TYPEDESC SCHEMA CHANGE` job could retry forever if the descriptor it targeted was already dropped. [#123273][#123273]
+- Fixed a bug where, if the legacy schema changer was enabled, the [`CREATE SEQUENCE`]({% link v24.1/create-sequence.md %}) command would incorrectly require the user to have the `CREATE` [privilege]({% link v24.1/security-reference/authorization.md %}#privileges) on the parent database rather than only on the parent schema. [#123289][#123289]
+- Fixed a bug where a [job]({% link v24.1/show-jobs.md %}) would fail if it reported an out-of-bounds progress fraction. The error is now logged and no longer causes the job to fail. [#122965][#122965]
+
+
Performance improvements
+
+- Added a new [session setting]({% link v24.1/session-variables.md %}) `optimizer_use_improved_zigzag_join_costing`. When enabled and when the [session setting]({% link v24.1/session-variables.md %}) `enable_zigzag_join` is also enabled, the cost of zigzag joins is updated such that a zigzag join will be chosen over a scan only if it produces fewer rows than a scan. [#123106][#123106]
+- Improved the selectivity estimation of multi-column filters when the multi-column distinct count is high. This prevents the [optimizer]({% link v24.1/cost-based-optimizer.md %}) from choosing a bad query plan due to over-estimating the selectivity of a multi-column predicate. [#123106][#123106]
+- Improved the efficiency of error handling in the [vectorized execution engine]({% link v24.1/vectorized-execution.md %}), reducing the CPU overhead of statement timeout handling and the potential for additional statement timeouts. [#123501][#123501]
+- Disabled a poorly-performing [changefeed]({% link v24.1/change-data-capture-overview.md %}) optimization that was intended to reduce duplicates during aggregator restarts. [#123597][#123597]
+
+
+
+
Contributors
+
+This release includes 57 merged PRs by 24 authors.
+
+
+
+[#122705]: https://github.com/cockroachdb/cockroach/pull/122705
+[#122816]: https://github.com/cockroachdb/cockroach/pull/122816
+[#122965]: https://github.com/cockroachdb/cockroach/pull/122965
+[#123055]: https://github.com/cockroachdb/cockroach/pull/123055
+[#123106]: https://github.com/cockroachdb/cockroach/pull/123106
+[#123136]: https://github.com/cockroachdb/cockroach/pull/123136
+[#123144]: https://github.com/cockroachdb/cockroach/pull/123144
+[#123273]: https://github.com/cockroachdb/cockroach/pull/123273
+[#123285]: https://github.com/cockroachdb/cockroach/pull/123285
+[#123289]: https://github.com/cockroachdb/cockroach/pull/123289
+[#123373]: https://github.com/cockroachdb/cockroach/pull/123373
+[#123501]: https://github.com/cockroachdb/cockroach/pull/123501
+[#123597]: https://github.com/cockroachdb/cockroach/pull/123597
diff --git a/src/current/_includes/releases/v24.1/v24.1.0.md b/src/current/_includes/releases/v24.1/v24.1.0.md
new file mode 100644
index 00000000000..5c9e947b332
--- /dev/null
+++ b/src/current/_includes/releases/v24.1/v24.1.0.md
@@ -0,0 +1,269 @@
+## v24.1.0
+
+Release Date: TBD TBD, 2024
+
+With the release of CockroachDB v24.1, we've added new capabilities to help you migrate, build, and operate more efficiently. Refer to our summary of the most significant user-facing changes under [Feature Highlights](#v24-1-0-feature-highlights).
+
+{% include releases/new-release-downloads-docker-image.md release=include.release advisory_key="a103220"%}
+
+
Feature highlights
+
+This section summarizes the most significant user-facing changes in v24.1.0 and other features recently made available to CockroachDB users across versions. For a complete list of features and changes in v24.1, including bug fixes and performance improvements, refer to the [release notes]({% link releases/index.md %}#testing-releases) for previous v24.1 testing releases. You can also search the docs for sections labeled [New in v24.1](https://www.cockroachlabs.com/docs/search?query=new+in+v24.1).
+
+- **Feature categories**
+ - [Observability](#v24-1-0-observability)
+ - [Migrations](#v24-1-0-migrations)
+ - [Security and compliance](#v24-1-0-security-and-compliance)
+ - [Disaster recovery](#v24-1-0-disaster-recovery)
+ - [Deployment and operations](#v24-1-0-deployment-and-operations)
+ - [SQL](#v24-1-0-sql)
+- **Additional information**
+ - [Backward-incompatible changes](#v24-1-0-backward-incompatible-changes)
+ - [Deprecations](#v24-1-0-deprecations)
+ - [Known limitations](#v24-1-0-known-limitations)
+ - [Additional resources](#v24-1-0-additional-resources)
+
+{{ site.data.alerts.callout_info }}
+In CockroachDB Self-Hosted, all available features are free to use unless their description specifies that an Enterprise license is required. For more information, refer to the [Licensing FAQ](https://www.cockroachlabs.com/docs/stable/licensing-faqs).
+{{ site.data.alerts.end }}
+
+
+
+
Observability
+
+Feature | Ver. | Self-Hosted | Dedicated | Serverless
+--------|------|-------------|-----------|-----------
+TBD | TBD | {% include icon-yes.html %} | {% include icon-yes.html %} | {% include icon-no.html %}
+
+
Migrations
+
+Feature | Ver. | Self-Hosted | Dedicated | Serverless
+--------|------|-------------|-----------|-----------
+TBD | TBD | {% include icon-yes.html %} | {% include icon-yes.html %} | {% include icon-yes.html %}
+
+
Disaster recovery
+
+Feature | Ver. | Self-Hosted | Dedicated | Serverless
+--------|------|-------------|-----------|-----------
+TBD | TBD | {% include icon-yes.html %} | {% include icon-no.html %} | {% include icon-no.html %}
+
+
Security and compliance
+
+Feature | Ver. | Self-Hosted | Dedicated | Serverless
+--------|------|-------------|-----------|-----------
+TBD | TBD | {% include icon-no.html %} | {% include icon-yes.html %} | {% include icon-no.html %}
+
+
Deployment and operations
+
+Feature | Ver. | Self-Hosted | Dedicated | Serverless
+--------|------|-------------|-----------|-----------
+TBD | TBD | {% include icon-no.html %} | {% include icon-yes.html %} | {% include icon-no.html %}
+
+
SQL
+
+Feature | Ver. | Self-Hosted | Dedicated | Serverless
+--------|------|-------------|-----------|-----------
+TBD | TBD | {% include icon-no.html %} | {% include icon-yes.html %} | {% include icon-no.html %}
+
+
Feature detail key
+
+Symbol | Meaning
+-------|--------
+`*` | Features marked “All*” were recently made available in the CockroachDB Cloud platform. They are available for all supported versions of CockroachDB, under the deployment methods specified in their row under Availability.
+`**` | Features marked “All**” were recently made available via migration tools maintained outside of the CockroachDB binary. They are available to use with all supported versions of CockroachDB, under the deployment methods specified in their row under Availability.
+{% include icon-yes.html %} | Feature is available for this deployment method of CockroachDB as specified in the icon’s column: CockroachDB Self-Hosted, CockroachDB Dedicated, or CockroachDB Serverless.
+{% include icon-no.html %} | Feature is not available for this deployment method of CockroachDB as specified in the icon’s column: CockroachDB Self-Hosted, CockroachDB Dedicated, or CockroachDB Serverless.
+
+
Backward-incompatible changes
+
+Before [upgrading to CockroachDB v24.1]({% link v24.1/upgrade-cockroach-version.md %}), be sure to review the following backward-incompatible changes, as well as [key cluster setting changes](#v24-1-0-cluster-settings), and adjust your deployment as necessary.
+
+- TBD
+
+
Key Cluster Setting Changes
+
+The following changes should be reviewed prior to upgrading. Default cluster settings will be used unless you have manually set a value for a setting. You can confirm which settings have non-default values by querying the `system.settings` table, as shown below.
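+
+For example, to list the settings with non-default values:
+
+~~~ sql
+SELECT * FROM system.settings;
+~~~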
+
+- TBD
+
+
Deprecations
+
+{% comment %}TODO: Intro para? Each sibling section has one.{% endcomment %}
+
+- TBD
+
+
Known limitations
+
+For information about new and unresolved limitations in CockroachDB v24.1, with suggested workarounds where applicable, refer to [Known Limitations](https://www.cockroachlabs.com/docs/v24.1/known-limitations).
+
+
Additional resources
+
+Resource | Topic | Description
+---------------------+--------------------------------------------+-------------
+Cockroach University | [Example link](https://example.com/course1)| Summary here
+Cockroach University | [Example link](https://example.com/course2)| Summary here
+Docs | [Example link](https://example.com/doc1) | Summary here
+Docs | [Example link](https://example.com/doc2) | Summary here
diff --git a/src/current/_includes/sidebar-all-releases.json b/src/current/_includes/sidebar-all-releases.json
index ec03c02ae25..266f6be8432 100644
--- a/src/current/_includes/sidebar-all-releases.json
+++ b/src/current/_includes/sidebar-all-releases.json
@@ -1,5 +1,17 @@
{%- assign versions = site.data.versions | where_exp: "versions", "versions.major_version != site.versions['dev']" | where_exp: "versions", "versions.major_version != site.versions['stable']" | map: "major_version" -%}
{%- comment -%} versions iterates through the list of major versions (e.g., v21.2) in _data/versions.csv and returns all releases that are not dev or stable. We then pull only the major version name instead of the whole dictionary with map: "major_version" {%- endcomment -%}
+{
+ "title": "Latest Production Release",
+ "urls": [
+ "/releases/{{ site.versions["stable"] }}.html"
+ ]
+}{% unless site.versions["stable"] == site.versions["dev"] %},
+{
+ "title": "Latest Testing Release",
+ "urls": [
+ "/releases/{{ site.versions["dev"] }}.html"
+ ]
+}{% endunless %},
{
"title": "CockroachDB Releases",
"urls": [
@@ -7,13 +19,7 @@
"/releases/"
{% for v in versions %}
,"/releases/{{ v }}.html"
- {% endfor %}
- ]
-},
-{
- "title": "CockroachDB Kubernetes Operator",
- "urls": [
- "/releases/kubernetes-operator.html"
+ {% endfor %}
]
},
{
@@ -43,4 +49,16 @@
,"{{ x.url }}"
{% endfor %}
]
+},
+{
+ "title": "Cloud Releases",
+ "urls": [
+ "/releases/cloud.html"
+ ]
+},
+{
+ "title": "Kubernetes Operator",
+ "urls": [
+ "/releases/kubernetes-operator.html"
+ ]
}
diff --git a/src/current/_includes/sidebar-data-v23.2.json b/src/current/_includes/sidebar-data-v23.2.json
index d253cde31b1..f2aedcaf392 100644
--- a/src/current/_includes/sidebar-data-v23.2.json
+++ b/src/current/_includes/sidebar-data-v23.2.json
@@ -7,7 +7,7 @@
]
},
{% include_cached v23.2/sidebar-data/get-started.json %},
- {% include_cached v23.2/sidebar-data/latest-releases.json %},
+ {% include_cached v23.2/sidebar-data/releases.json %},
{% include_cached v23.2/sidebar-data/feature-overview.json %},
{% include_cached v23.2/sidebar-data/connect-to-cockroachdb.json %},
{% include_cached v23.2/sidebar-data/migrate.json %},
@@ -22,6 +22,5 @@
{% include_cached v23.2/sidebar-data/sql.json %},
{% include_cached v23.2/sidebar-data/reference.json %},
{% include_cached v23.2/sidebar-data/faqs.json %},
- {% include_cached v23.2/sidebar-data/releases.json %},
{% include_cached sidebar-data-cockroach-university.json %}
]
diff --git a/src/current/_includes/sidebar-data-v24.1.json b/src/current/_includes/sidebar-data-v24.1.json
index 5401d2dd934..d9eb75825d8 100644
--- a/src/current/_includes/sidebar-data-v24.1.json
+++ b/src/current/_includes/sidebar-data-v24.1.json
@@ -7,7 +7,7 @@
]
},
{% include_cached v24.1/sidebar-data/get-started.json %},
- {% include_cached v24.1/sidebar-data/latest-releases.json %},
+ {% include_cached v24.1/sidebar-data/releases.json %},
{% include_cached v24.1/sidebar-data/feature-overview.json %},
{% include_cached v24.1/sidebar-data/connect-to-cockroachdb.json %},
{% include_cached v24.1/sidebar-data/migrate.json %},
@@ -22,6 +22,5 @@
{% include_cached v24.1/sidebar-data/sql.json %},
{% include_cached v24.1/sidebar-data/reference.json %},
{% include_cached v24.1/sidebar-data/faqs.json %},
- {% include_cached v24.1/sidebar-data/releases.json %},
{% include_cached sidebar-data-cockroach-university.json %}
]
diff --git a/src/current/_includes/sidebar-releases.json b/src/current/_includes/sidebar-releases.json
index 92e4984ad0c..26a6c95da8a 100644
--- a/src/current/_includes/sidebar-releases.json
+++ b/src/current/_includes/sidebar-releases.json
@@ -7,8 +7,20 @@
{% assign v_test = site.data.versions | where_exp: "v_test", "v_test.release_date == 'N/A'" | sort: "release_date" | last | map: "major_version" %}
{% comment %} v_test iterates through the list of major versions in _data/versions.csv and returns the single latest testing version (if the release date is in the future or not otherwise specified). It's possible there is no testing release (in between GA of a version and the first alpha of the following version). {% endcomment %}
{
- "title": "CockroachDB",
+ "title": "CockroachDB Releases",
"items": [
+ {
+ "title": "Latest Production Release",
+ "urls": [
+ "/releases/{{ site.versions["stable"] }}.html"
+ ]
+ }{% unless site.versions["stable"] == site.versions["dev"] %},
+ {
+ "title": "Latest Testing Release",
+ "urls": [
+ "/releases/{{ site.versions["dev"] }}.html"
+ ]
+ },{% endunless %}
{
"title": "All Releases",
"urls": [
@@ -23,15 +35,12 @@
]
},
{% endfor %}
- {% if v_test[0] %} {% comment %} check if a testing version is available {% endcomment %}
{
- "title": "Latest Testing Release",
+ "title": "Staged Release Process",
"urls": [
- {% comment %} check if a testing version is available and pull the latest testing version {% endcomment %}
- "/releases/{{ v_test[0] }}.html"
+ "/releases/staged-release-process.html"
]
},
- {% endif %}
{
"title": "Release Support Policy",
"urls": [
@@ -46,18 +55,6 @@
}
]
},
-{
- "title": "CockroachDB Cloud",
- "urls": [
- "/releases/cloud.html"
- ]
-},
-{
- "title": "CockroachDB Kubernetes Operator",
- "urls": [
- "/releases/kubernetes-operator.html"
- ]
-},
{% assign advisories = site.pages | where_exp: "advisories", "advisories.path contains 'advisories'" | where_exp: "advisories", "advisories.index != 'true'" %}
{
"title": "Technical Advisories",
@@ -67,4 +64,16 @@
,"{{ x.url }}"
{% endfor %}
]
+},
+{
+ "title": "Cloud Releases",
+ "urls": [
+ "/releases/cloud.html"
+ ]
+},
+{
+ "title": "Kubernetes Operator",
+ "urls": [
+ "/releases/kubernetes-operator.html"
+ ]
}
diff --git a/src/current/_includes/unsupported-version.md b/src/current/_includes/unsupported-version.md
index 4588b7babab..a29ce9a863c 100644
--- a/src/current/_includes/unsupported-version.md
+++ b/src/current/_includes/unsupported-version.md
@@ -1,15 +1,34 @@
{% assign x = site.data.versions | where_exp: "m", "m.major_version == include.major_version" | first %}
-{% unless x.maint_supp_exp_date == "N/A" or x.asst_supp_exp_date == "N/A" %}
+{% unless x.maint_supp_exp_date == "N/A" or x.asst_supp_exp_date == "N/A" %}{% comment %}Not yet GA{% endcomment %}
{% assign today = "today" | date: "%s" %} {% comment %} Fetch today's date and format it in seconds. {% endcomment %}
{% assign m = x.maint_supp_exp_date | date: "%s" %} {% comment %} Format m_raw in seconds. {% endcomment %}
{% assign a = x.asst_supp_exp_date | date: "%s" %} {% comment %} Format a_raw in seconds. {% endcomment %}
- {% if a < today %} {% comment %} If the assistance support expiration date has passed, show the unsupported message. {% endcomment %}
+ {% unless x.lts_maint_supp_exp_date == "N/A" or x.lts_asst_supp_exp_date == "N/A" %}{% comment %}No LTS releases{% endcomment %}
+ {% assign lm = x.lts_maint_supp_exp_date | date: "%s" %} {% comment %} Format the LTS maintenance support expiration date in seconds. {% endcomment %}
+ {% assign la = x.lts_asst_supp_exp_date | date: "%s" %} {% comment %} Format the LTS assistance support expiration date in seconds. {% endcomment %}
+ {% endunless %}
+{% endunless %}
+
+ {% if la < today %} {% comment %} If the LTS assistance support expiration date has passed, show the unsupported message. {% endcomment %}
+ {{site.data.alerts.callout_danger}}
+ CockroachDB {{ include.major_version }} (LTS) is no longer supported as of {{ x.lts_asst_supp_exp_date | date: "%B %e, %Y"}}. For more details, refer to the Release Support Policy.
+ {{site.data.alerts.end}}
+ {% elsif la >= today and lm < today %}{% comment %} If the LTS maintenance support expiration has passed but the version is still within the LTS assistance support period, show this message and pass the LTS assistance support expiration date. {% endcomment %}
+ {{site.data.alerts.callout_danger}}
+ Cockroach Labs will stop providing LTS Assistance Support for {{ include.major_version }} on {{ x.lts_asst_supp_exp_date | date: "%B %e, %Y" }}. Prior to that date, upgrade to a more recent version to continue receiving support. For more details, refer to the Release Support Policy.
+ {{site.data.alerts.end}}
+ {% elsif a < today and lm > today %} {% comment %} If the assistance support expiration date has passed but the LTS maintenance phase has not {% endcomment %}
+ {% if la > today %}
+ {{site.data.alerts.callout_danger}}
+ GA releases for CockroachDB {{ include.major_version }} are no longer supported. Cockroach Labs will stop providing LTS Assistance Support for {{ include.major_version }} LTS releases on {{ x.lts_asst_supp_exp_date | date: "%B %e, %Y" }}. Prior to that date, upgrade to a more recent version to continue receiving support. For more details, refer to the Release Support Policy.
+ {{site.data.alerts.end}}
+ {% endif %}
+ {% elsif a < today %}{% comment %}show the unsupported message. {% endcomment %}
{{site.data.alerts.callout_danger}}
- CockroachDB {{ include.major_version }} is no longer supported. For more details, see the Release Support Policy.
+ CockroachDB {{ include.major_version }} is no longer supported as of {{ x.asst_supp_exp_date | date: "%B %e, %Y"}}. For more details, refer to the Release Support Policy.
{{site.data.alerts.end}}
{% elsif a >= today and m < today %} {% comment %} If the maintenance support expiration has passed but the version is still within the assistance support period, show this message and pass the assistance support expiration date. {% endcomment %}
{{site.data.alerts.callout_danger}}
- Cockroach Labs will stop providing Assistance Support for {{ include.major_version }} on {{ x.asst_supp_exp_date | date: "%B %e, %Y" }}. Prior to that date, upgrade to a more recent version to continue receiving support. For more details, see the Release Support Policy.
+ Cockroach Labs will stop providing Assistance Support for {{ include.major_version }} on {{ x.asst_supp_exp_date | date: "%B %e, %Y" }}. Prior to that date, upgrade to a more recent version to continue receiving support. For more details, refer to the Release Support Policy.
{{site.data.alerts.end}}
{% endif %}
-{% endunless %}
diff --git a/src/current/_includes/v22.2/sql/privileges.md b/src/current/_includes/v22.2/sql/privileges.md
index 0e57e08012b..bf3e8066507 100644
--- a/src/current/_includes/v22.2/sql/privileges.md
+++ b/src/current/_includes/v22.2/sql/privileges.md
@@ -18,7 +18,7 @@ Privilege | Levels | Description
`RESTORE` | System, Database | Grants the ability to restore [backups]({% link {{ page.version.version }}/backup-and-restore-overview.md %}) at the system or database level. Refer to `RESTORE` [Required privileges]({% link {{ page.version.version }}/restore.md %}#required-privileges) for more details.
`SELECT` | Table, Sequence | Grants the ability to run [selection queries]({% link {{ page.version.version }}/query-data.md %}) at the table or sequence level.
`UPDATE` | Table, Sequence | Grants the ability to run [update statements]({% link {{ page.version.version }}/update-data.md %}) at the table or sequence level.
-`USAGE` | Function, Schema, Sequence, Type | Grants the ability to use [functions]({% link {{ page.version.version }}/functions-and-operators.md %}), [schemas]({% link {{ page.version.version }}/schema-design-overview.md %}), [sequences]({% link {{ page.version.version }}/create-sequence.md %}), or [user-defined types]({% link {{ page.version.version }}/create-type.md %}).
+`USAGE` | Schema, Sequence, Type | Grants the ability to use [schemas]({% link {{ page.version.version }}/schema-design-overview.md %}), [sequences]({% link {{ page.version.version }}/create-sequence.md %}), or [user-defined types]({% link {{ page.version.version }}/create-type.md %}).
`VIEWACTIVITY` | System | Grants the ability to view other users' activity statistics of a cluster.
`VIEWACTIVITYREDACTED` | System | Grants the ability to view other users' activity statistics, but prevents the role from accessing the statement diagnostics bundle in the DB Console, and viewing some columns in introspection queries that contain data about the cluster.
`VIEWCLUSTERMETADATA` | System | Grants the ability to view range information, data distribution, store information, and Raft information.
diff --git a/src/current/_includes/v23.1/sql/privileges.md b/src/current/_includes/v23.1/sql/privileges.md
index e6d3456e398..23c0193653d 100644
--- a/src/current/_includes/v23.1/sql/privileges.md
+++ b/src/current/_includes/v23.1/sql/privileges.md
@@ -22,7 +22,7 @@ Privilege | Levels | Description
`RESTORE` | System, Database | Grants the ability to restore [backups]({% link {{ page.version.version }}/backup-and-restore-overview.md %}) at the system or database level. Refer to `RESTORE` [Required privileges]({% link {{ page.version.version }}/restore.md %}#required-privileges) for more details.
`SELECT` | Table, Sequence | Grants the ability to run [selection queries]({% link {{ page.version.version }}/query-data.md %}) at the table or sequence level.
`UPDATE` | Table, Sequence | Grants the ability to run [update statements]({% link {{ page.version.version }}/update-data.md %}) at the table or sequence level.
-`USAGE` | Function, Schema, Sequence, Type | Grants the ability to use [functions]({% link {{ page.version.version }}/functions-and-operators.md %}), [schemas]({% link {{ page.version.version }}/schema-design-overview.md %}), [sequences]({% link {{ page.version.version }}/create-sequence.md %}), or [user-defined types]({% link {{ page.version.version }}/create-type.md %}).
+`USAGE` | Schema, Sequence, Type | Grants the ability to use [schemas]({% link {{ page.version.version }}/schema-design-overview.md %}), [sequences]({% link {{ page.version.version }}/create-sequence.md %}), or [user-defined types]({% link {{ page.version.version }}/create-type.md %}).
`VIEWACTIVITY` | System | Grants the ability to view other users' activity statistics of a cluster.
`VIEWACTIVITYREDACTED` | System | Grants the ability to view other users' activity statistics, but prevents the role from accessing the statement diagnostics bundle in the DB Console, and viewing some columns in introspection queries that contain data about the cluster.
`VIEWCLUSTERMETADATA` | System | Grants the ability to view range information, data distribution, store information, and Raft information.
diff --git a/src/current/_includes/v23.2/cdc/disable-replication-ttl.md b/src/current/_includes/v23.2/cdc/disable-replication-ttl.md
new file mode 100644
index 00000000000..44f71135ac5
--- /dev/null
+++ b/src/current/_includes/v23.2/cdc/disable-replication-ttl.md
@@ -0,0 +1 @@
+{% include_cached new-in.html version="v23.2" %} To prevent changefeeds from emitting deletes issued by all TTL jobs on a cluster, set the `sql.ttl.changefeed_replication.disabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) to `true`.
\ No newline at end of file
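+
+A minimal example of enabling this cluster setting (a sketch; it assumes a SQL user with the `MODIFYCLUSTERSETTING` privilege):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET CLUSTER SETTING sql.ttl.changefeed_replication.disabled = true;
+~~~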
diff --git a/src/current/_includes/v23.2/misc/session-vars.md b/src/current/_includes/v23.2/misc/session-vars.md
index a0d4d56b65f..7d978c32894 100644
--- a/src/current/_includes/v23.2/misc/session-vars.md
+++ b/src/current/_includes/v23.2/misc/session-vars.md
@@ -15,6 +15,7 @@
| `default_transaction_quality_of_service` | The default transaction quality of service for the current session. The supported options are `regular`, `critical`, and `background`. See [Set quality of service level]({% link {{ page.version.version }}/admission-control.md %}#set-quality-of-service-level-for-a-session). | `regular` | Yes | Yes |
| `default_transaction_read_only` | The default transaction access mode for the current session. If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See [`SET TRANSACTION`]({% link {{ page.version.version }}/set-transaction.md %}) for more details. | `off` | Yes | Yes |
| `default_transaction_use_follower_reads` | If set to on, all read-only transactions use [`AS OF SYSTEM TIME follower_read_timestamp()`]({% link {{ page.version.version }}/as-of-system-time.md %}) to allow the transaction to use follower reads. If set to `off`, read-only transactions will only use follower reads if an `AS OF SYSTEM TIME` clause is specified in the statement, with an interval of at least 4.8 seconds. | `off` | Yes | Yes |
+| `disable_changefeed_replication` | When `true`, [changefeeds]({% link {{ page.version.version }}/changefeed-messages.md %}#filtering-changefeed-messages) will not emit messages for any changes (e.g., `INSERT`, `UPDATE`) issued to watched tables during that session. | `false` | Yes | Yes |
| `disallow_full_table_scans` | If set to `on`, queries on "large" tables with a row count greater than [`large_full_scan_rows`](#large-full-scan-rows) will not use full table or index scans. If no other query plan is possible, queries will return an error message. This setting does not apply to internal queries, which may plan full table or index scans without checking the session variable. | `off` | Yes | Yes |
| `distsql` | The query distribution mode for the session. By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node. | `auto` | Yes | Yes |
| `enable_auto_rehoming` | When enabled, the [home regions]({% link {{ page.version.version }}/alter-table.md %}#crdb_region) of rows in [`REGIONAL BY ROW`]({% link {{ page.version.version }}/alter-table.md %}#set-the-table-locality-to-regional-by-row) tables are automatically set to the region of the [gateway node]({% link {{ page.version.version }}/ui-sessions-page.md %}#session-details-gateway-node) from which any [`UPDATE`]({% link {{ page.version.version }}/update.md %}) or [`UPSERT`]({% link {{ page.version.version }}/upsert.md %}) statements that operate on those rows originate. | `off` | Yes | Yes |
diff --git a/src/current/_includes/v23.2/physical-replication/interface-virtual-cluster.md b/src/current/_includes/v23.2/physical-replication/interface-virtual-cluster.md
index 96c5ddd74ab..8fb43c6ee1c 100644
--- a/src/current/_includes/v23.2/physical-replication/interface-virtual-cluster.md
+++ b/src/current/_includes/v23.2/physical-replication/interface-virtual-cluster.md
@@ -1,2 +1,2 @@
-- The system virtual cluster manages the cluster's control plane and the replication of the cluster's data. Admins connect to the system virtual cluster to configure and manage the underlying CockroachDB cluster, set up physical cluster replication, create and manage a virtual clusters, and observe metrics and logs for the CockroachDB cluster and each virtual cluster.
+- The system virtual cluster manages the cluster's control plane and the replication of the cluster's data. Admins connect to the system virtual cluster to configure and manage the underlying CockroachDB cluster, set up physical cluster replication, create and manage a virtual cluster, and observe metrics and logs for the CockroachDB cluster and each virtual cluster.
- Each other virtual cluster manages its own data plane. Users connect to a virtual cluster by default, rather than the system virtual cluster. To connect to the system virtual cluster, the connection string must be modified. Virtual clusters contain user data and run application workloads. When physical cluster replication is enabled, the non-system virtual cluster on both primary and secondary clusters is named `application`.
diff --git a/src/current/_includes/v23.2/physical-replication/show-virtual-cluster-data-state.md b/src/current/_includes/v23.2/physical-replication/show-virtual-cluster-data-state.md
index a39bef0fe3e..ec7230a434b 100644
--- a/src/current/_includes/v23.2/physical-replication/show-virtual-cluster-data-state.md
+++ b/src/current/_includes/v23.2/physical-replication/show-virtual-cluster-data-state.md
@@ -6,4 +6,4 @@ State | Description
`replication paused` | The replication job is paused due to an error or a manual request with [`ALTER VIRTUAL CLUSTER ... PAUSE REPLICATION`]({% link {{ page.version.version }}/alter-virtual-cluster.md %}).
`replication pending cutover` | The replication job is running and the cutover time has been set. Once the replication reaches the cutover time, the cutover will begin automatically.
`replication cutting over` | The job has started cutting over. The cutover time can no longer be changed. Once cutover is complete, a virtual cluster will be available for use with [`ALTER VIRTUAL CLUSTER ... START SHARED SERVICE`]({% link {{ page.version.version }}/alter-virtual-cluster.md %}).
-`replication error` | An error has occurred. You can find more detail in the error message and the logs.
+`replication error` | An error has occurred. You can find more detail in the error message and the [logs]({% link {{ page.version.version }}/configure-logs.md %}). **Note:** A physical cluster replication job will retry for 3 minutes before failing.
diff --git a/src/current/_includes/v23.2/sidebar-data/latest-releases.json b/src/current/_includes/v23.2/sidebar-data/latest-releases.json
deleted file mode 100644
index e25f8c0fa7a..00000000000
--- a/src/current/_includes/v23.2/sidebar-data/latest-releases.json
+++ /dev/null
@@ -1,7 +0,0 @@
-{
- "title": "Latest Releases",
- "is_top_level": true,
- "items": [
- {% include_cached sidebar-latest-releases.json %}
- ]
- }
diff --git a/src/current/_includes/v23.2/sidebar-data/releases.json b/src/current/_includes/v23.2/sidebar-data/releases.json
index fb49f7c9acc..8ee555fdc9a 100644
--- a/src/current/_includes/v23.2/sidebar-data/releases.json
+++ b/src/current/_includes/v23.2/sidebar-data/releases.json
@@ -1,5 +1,5 @@
{
- "title": "CockroachDB Releases",
+ "title": "Releases",
"is_top_level": true,
"items": [
{% include_cached sidebar-all-releases.json %}
diff --git a/src/current/_includes/v23.2/sql/privileges.md b/src/current/_includes/v23.2/sql/privileges.md
index a3a25489b48..223ce20ad7c 100644
--- a/src/current/_includes/v23.2/sql/privileges.md
+++ b/src/current/_includes/v23.2/sql/privileges.md
@@ -22,7 +22,7 @@ Privilege | Levels | Description
`RESTORE` | System, Database | Grants the ability to restore [backups]({% link {{ page.version.version }}/backup-and-restore-overview.md %}) at the system or database level. Refer to `RESTORE` [Required privileges]({% link {{ page.version.version }}/restore.md %}#required-privileges) for more details.
`SELECT` | Table, Sequence | Grants the ability to run [selection queries]({% link {{ page.version.version }}/query-data.md %}) at the table or sequence level.
`UPDATE` | Table, Sequence | Grants the ability to run [update statements]({% link {{ page.version.version }}/update-data.md %}) at the table or sequence level.
-`USAGE` | Function, Schema, Sequence, Type | Grants the ability to use [functions]({% link {{ page.version.version }}/functions-and-operators.md %}), [schemas]({% link {{ page.version.version }}/schema-design-overview.md %}), [sequences]({% link {{ page.version.version }}/create-sequence.md %}), or [user-defined types]({% link {{ page.version.version }}/create-type.md %}).
+`USAGE` | Schema, Sequence, Type | Grants the ability to use [schemas]({% link {{ page.version.version }}/schema-design-overview.md %}), [sequences]({% link {{ page.version.version }}/create-sequence.md %}), or [user-defined types]({% link {{ page.version.version }}/create-type.md %}).
`VIEWACTIVITY` | System | Grants the ability to view other users' activity statistics of a cluster.
`VIEWACTIVITYREDACTED` | System | Grants the ability to view other users' activity statistics, but prevents the role from accessing the statement diagnostics bundle in the DB Console, and viewing some columns in introspection queries that contain data about the cluster.
`VIEWCLUSTERMETADATA` | System | Grants the ability to view range information, data distribution, store information, and Raft information.
diff --git a/src/current/_includes/v24.1/cdc/disable-replication-ttl.md b/src/current/_includes/v24.1/cdc/disable-replication-ttl.md
new file mode 100644
index 00000000000..27b658a747a
--- /dev/null
+++ b/src/current/_includes/v24.1/cdc/disable-replication-ttl.md
@@ -0,0 +1,26 @@
+{% include_cached new-in.html version="v24.1" %} Use the `ttl_disable_changefeed_replication` table storage parameter to prevent changefeeds from sending `DELETE` messages issued by row-level TTL jobs for a table. Include the storage parameter when you create or alter the table. For example:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE TABLE tbl (
+ id UUID PRIMARY KEY default gen_random_uuid(),
+ value TEXT
+) WITH (ttl_expire_after = '3 weeks', ttl_job_cron = '@daily', ttl_disable_changefeed_replication = 'true');
+~~~
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+ALTER TABLE events SET (ttl_expire_after = '1 year', ttl_disable_changefeed_replication = 'true');
+~~~
+
+You can also widen the scope to the cluster by setting the `sql.ttl.changefeed_replication.disabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) to `true`. This will prevent changefeeds from emitting deletes issued by all TTL jobs on a cluster.
+
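+For example, a minimal statement to enable the cluster-wide setting (assuming a user with privileges to modify cluster settings):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET CLUSTER SETTING sql.ttl.changefeed_replication.disabled = true;
+~~~
+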
+If you want to have a changefeed ignore the storage parameter or cluster setting that disables changefeed replication, you can set the changefeed option `ignore_disable_changefeed_replication` to `true`:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE CHANGEFEED FOR TABLE table_name INTO 'external://changefeed-sink'
+ WITH resolved, ignore_disable_changefeed_replication = true;
+~~~
+
+This is useful when you have multiple use cases for different changefeeds on the same table. For example, you may have one changefeed streaming changes to another database for analytics workflows, in which you do not want to reflect row-level TTL deletes, and a second changefeed on the same table for audit logging, for which you need to persist every change.
\ No newline at end of file
diff --git a/src/current/_includes/v24.1/known-limitations/plpgsql-feature-limitations.md b/src/current/_includes/v24.1/known-limitations/plpgsql-feature-limitations.md
index 076b86d0aa5..953bb475587 100644
--- a/src/current/_includes/v24.1/known-limitations/plpgsql-feature-limitations.md
+++ b/src/current/_includes/v24.1/known-limitations/plpgsql-feature-limitations.md
@@ -1,10 +1,8 @@
-- PL/pgSQL blocks cannot be nested. [#114775](https://github.com/cockroachdb/cockroach/issues/114775)
- PL/pgSQL arguments cannot be referenced with ordinals (e.g., `$1`, `$2`). [#114701](https://github.com/cockroachdb/cockroach/issues/114701)
-- `FOR` loops, including `FOR` cursor loops, `FOR` query loops, and `FOREACH` loops, are not supported. [#105246](https://github.com/cockroachdb/cockroach/issues/105246)
-- `RETURN NEXT` and `RETURN QUERY` statements are not supported. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
-- `EXIT` and `CONTINUE` labels and conditions are not supported. [#115271](https://github.com/cockroachdb/cockroach/issues/115271)
-- `CASE` statements are not supported. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
-- `PERFORM`, `EXECUTE`, `GET DIAGNOSTICS`, and `NULL` statements are not supported for PL/pgSQL. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
+- The following statements are not supported:
+ - `FOR` loops, including `FOR` cursor loops, `FOR` query loops, and `FOREACH` loops. [#105246](https://github.com/cockroachdb/cockroach/issues/105246)
+ - `RETURN NEXT` and `RETURN QUERY`. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
+ - `PERFORM`, `EXECUTE`, `GET DIAGNOSTICS`, and `CASE`. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
- PL/pgSQL exception blocks cannot catch [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}). [#111446](https://github.com/cockroachdb/cockroach/issues/111446)
- `RAISE` statements cannot be annotated with names of schema objects related to the error (i.e., using `COLUMN`, `CONSTRAINT`, `DATATYPE`, `TABLE`, or `SCHEMA`). [#106237](https://github.com/cockroachdb/cockroach/issues/106237)
- `RAISE` statements message the client directly, and do not produce log output. [#117750](https://github.com/cockroachdb/cockroach/issues/117750)
diff --git a/src/current/_includes/v24.1/known-limitations/udf-stored-proc-limitations.md b/src/current/_includes/v24.1/known-limitations/udf-stored-proc-limitations.md
index 120ddc3b95f..0ff2a43cd90 100644
--- a/src/current/_includes/v24.1/known-limitations/udf-stored-proc-limitations.md
+++ b/src/current/_includes/v24.1/known-limitations/udf-stored-proc-limitations.md
@@ -2,7 +2,7 @@
- User-defined functions are not currently supported in:
- Expressions (column, index, constraint) in tables. [#87699](https://github.com/cockroachdb/cockroach/issues/87699)
- Views. [#87699](https://github.com/cockroachdb/cockroach/issues/87699)
- - Other user-defined functions. [#93049](https://github.com/cockroachdb/cockroach/issues/93049)
+- User-defined functions cannot call themselves recursively. [#93049](https://github.com/cockroachdb/cockroach/issues/93049)
- [Common table expressions]({% link {{ page.version.version }}/common-table-expressions.md %}) (CTE), recursive or non-recursive, are not supported in [user-defined functions]({% link {{ page.version.version }}/user-defined-functions.md %}) (UDF). That is, you cannot use a `WITH` clause in the body of a UDF. [#92961](https://github.com/cockroachdb/cockroach/issues/92961)
- The `setval` function cannot be resolved when used inside UDF bodies. [#110860](https://github.com/cockroachdb/cockroach/issues/110860)
{% endif %}
diff --git a/src/current/_includes/v24.1/misc/session-vars.md b/src/current/_includes/v24.1/misc/session-vars.md
index 8659c3e3c91..a8be2430bcd 100644
--- a/src/current/_includes/v24.1/misc/session-vars.md
+++ b/src/current/_includes/v24.1/misc/session-vars.md
@@ -16,6 +16,7 @@
| `default_transaction_quality_of_service` | The default transaction quality of service for the current session. The supported options are `regular`, `critical`, and `background`. See [Set quality of service level]({% link {{ page.version.version }}/admission-control.md %}#set-quality-of-service-level-for-a-session). | `regular` | Yes | Yes |
| `default_transaction_read_only` | The default transaction access mode for the current session. If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See [`SET TRANSACTION`]({% link {{ page.version.version }}/set-transaction.md %}) for more details. | `off` | Yes | Yes |
| `default_transaction_use_follower_reads` | If set to on, all read-only transactions use [`AS OF SYSTEM TIME follower_read_timestamp()`]({% link {{ page.version.version }}/as-of-system-time.md %}) to allow the transaction to use follower reads. If set to `off`, read-only transactions will only use follower reads if an `AS OF SYSTEM TIME` clause is specified in the statement, with an interval of at least 4.8 seconds. | `off` | Yes | Yes |
+| `disable_changefeed_replication` | When `true`, [changefeeds]({% link {{ page.version.version }}/changefeed-messages.md %}#filtering-changefeed-messages) will not emit messages for any changes (e.g., `INSERT`, `UPDATE`) issued to watched tables during that session. | `false` | Yes | Yes |
| `disallow_full_table_scans` | If set to `on`, queries on "large" tables with a row count greater than [`large_full_scan_rows`](#large-full-scan-rows) will not use full table or index scans. If no other query plan is possible, queries will return an error message. This setting does not apply to internal queries, which may plan full table or index scans without checking the session variable. | `off` | Yes | Yes |
| `distsql` | The query distribution mode for the session. By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node. | `auto` | Yes | Yes |
| `enable_auto_rehoming` | When enabled, the [home regions]({% link {{ page.version.version }}/alter-table.md %}#crdb_region) of rows in [`REGIONAL BY ROW`]({% link {{ page.version.version }}/alter-table.md %}#set-the-table-locality-to-regional-by-row) tables are automatically set to the region of the [gateway node]({% link {{ page.version.version }}/ui-sessions-page.md %}#session-details-gateway-node) from which any [`UPDATE`]({% link {{ page.version.version }}/update.md %}) or [`UPSERT`]({% link {{ page.version.version }}/upsert.md %}) statements that operate on those rows originate. | `off` | Yes | Yes |
| `enable_durable_locking_for_serializable` | Indicates whether CockroachDB replicates [`FOR UPDATE` and `FOR SHARE`]({% link {{ page.version.version }}/select-for-update.md %}#lock-strengths) locks via [Raft]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft), allowing locks to be preserved when leases are transferred. Note that replicating `FOR UPDATE` and `FOR SHARE` locks will add latency to those statements. This setting only affects `SERIALIZABLE` transactions and matches the default `READ COMMITTED` behavior when enabled. | `off` | Yes | Yes |
@@ -50,6 +51,7 @@
| `optimizer_use_lock_op_for_serializable` | If `on`, the optimizer uses a `Lock` operator to construct query plans for `SELECT` statements using the [`FOR UPDATE` and `FOR SHARE`]({% link {{ page.version.version }}/select-for-update.md %}) clauses. This setting only affects `SERIALIZABLE` transactions. `READ COMMITTED` transactions are evaluated with the `Lock` operator regardless of the setting. | `off` | Yes | Yes |
| `optimizer_use_multicol_stats` | If `on`, the optimizer uses collected multi-column statistics for cardinality estimation. | `on` | No | Yes |
| `optimizer_use_not_visible_indexes` | If `on`, the optimizer uses not visible indexes for planning. | `off` | No | Yes |
+| `plpgsql_use_strict_into` | If `on`, PL/pgSQL [`SELECT ... INTO` and `RETURNING ... INTO` statements]({% link {{ page.version.version }}/plpgsql.md %}#assign-a-result-to-a-variable) behave as though the `STRICT` option is specified. This causes the SQL statement to error if it does not return exactly one row. | `off` | Yes | Yes |
| `pg_trgm.similarity_threshold` | The threshold above which a [`%`]({% link {{ page.version.version }}/functions-and-operators.md %}#operators) string comparison returns `true`. The value must be between `0` and `1`. For more information, see [Trigram Indexes]({% link {{ page.version.version }}/trigram-indexes.md %}). | `0.3` | Yes | Yes |
| `prefer_lookup_joins_for_fks` | If `on`, the optimizer prefers [`lookup joins`]({% link {{ page.version.version }}/joins.md %}#lookup-joins) to [`merge joins`]({% link {{ page.version.version }}/joins.md %}#merge-joins) when performing [`foreign key`]({% link {{ page.version.version }}/foreign-key.md %}) checks. | `off` | Yes | Yes |
| `reorder_joins_limit` | Maximum number of joins that the optimizer will attempt to reorder when searching for an optimal query execution plan. For more information, see [Join reordering]({% link {{ page.version.version }}/cost-based-optimizer.md %}#join-reordering). | `8` | Yes | Yes |
diff --git a/src/current/_includes/v24.1/physical-replication/alter-virtual-cluster-diagram.html b/src/current/_includes/v24.1/physical-replication/alter-virtual-cluster-diagram.html
index d0444ee3525..c5400f6e9ed 100644
--- a/src/current/_includes/v24.1/physical-replication/alter-virtual-cluster-diagram.html
+++ b/src/current/_includes/v24.1/physical-replication/alter-virtual-cluster-diagram.html
@@ -1,273 +1,431 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v24.1/physical-replication/fast-cutback-syntax.md b/src/current/_includes/v24.1/physical-replication/fast-cutback-syntax.md
new file mode 100644
index 00000000000..18f5f46b5f9
--- /dev/null
+++ b/src/current/_includes/v24.1/physical-replication/fast-cutback-syntax.md
@@ -0,0 +1,8 @@
+{% include_cached new-in.html version="v24.1" %} To cut back to a cluster that was previously the primary cluster, use the [`ALTER VIRTUAL CLUSTER`]({% link {{ page.version.version }}/alter-virtual-cluster.md %}) syntax:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+ALTER VIRTUAL CLUSTER {original_primary_vc} START REPLICATION FROM {promoted_standby_vc} ON connection_string_standby;
+~~~
+
+The original primary virtual cluster may be almost up to date with the promoted standby's virtual cluster. The difference in data between the two virtual clusters will include only the writes that have been applied to the promoted standby after cutover from the primary cluster.
\ No newline at end of file
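+
+After starting the reverse replication stream, you can monitor its progress. This is a sketch; `{original_primary_vc}` is the placeholder name used in the example above:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SHOW VIRTUAL CLUSTER {original_primary_vc} WITH REPLICATION STATUS;
+~~~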
diff --git a/src/current/_includes/v24.1/physical-replication/interface-virtual-cluster.md b/src/current/_includes/v24.1/physical-replication/interface-virtual-cluster.md
index 96c5ddd74ab..aaca2fd0cb0 100644
--- a/src/current/_includes/v24.1/physical-replication/interface-virtual-cluster.md
+++ b/src/current/_includes/v24.1/physical-replication/interface-virtual-cluster.md
@@ -1,2 +1,2 @@
-- The system virtual cluster manages the cluster's control plane and the replication of the cluster's data. Admins connect to the system virtual cluster to configure and manage the underlying CockroachDB cluster, set up physical cluster replication, create and manage a virtual clusters, and observe metrics and logs for the CockroachDB cluster and each virtual cluster.
-- Each other virtual cluster manages its own data plane. Users connect to a virtual cluster by default, rather than the system virtual cluster. To connect to the system virtual cluster, the connection string must be modified. Virtual clusters contain user data and run application workloads. When physical cluster replication is enabled, the non-system virtual cluster on both primary and secondary clusters is named `application`.
+- The system virtual cluster manages the cluster's control plane and the replication of the cluster's data. Admins connect to the system virtual cluster to configure and manage the underlying CockroachDB cluster, set up physical cluster replication, create and manage a virtual cluster, and observe metrics and logs for the CockroachDB cluster and each virtual cluster.
+- Each other virtual cluster manages its own data plane. Users connect to a virtual cluster by default, rather than the system virtual cluster. To connect to the system virtual cluster, the connection string must be modified. Virtual clusters contain user data and run application workloads. When physical cluster replication is enabled, the non-system virtual cluster on both primary and secondary clusters is named `main`.
diff --git a/src/current/_includes/v24.1/physical-replication/show-virtual-cluster-data-state.md b/src/current/_includes/v24.1/physical-replication/show-virtual-cluster-data-state.md
index a39bef0fe3e..e12fd972ba2 100644
--- a/src/current/_includes/v24.1/physical-replication/show-virtual-cluster-data-state.md
+++ b/src/current/_includes/v24.1/physical-replication/show-virtual-cluster-data-state.md
@@ -5,5 +5,5 @@ State | Description
`replicating` | The replication job has started and is replicating data.
`replication paused` | The replication job is paused due to an error or a manual request with [`ALTER VIRTUAL CLUSTER ... PAUSE REPLICATION`]({% link {{ page.version.version }}/alter-virtual-cluster.md %}).
`replication pending cutover` | The replication job is running and the cutover time has been set. Once the replication reaches the cutover time, the cutover will begin automatically.
-`replication cutting over` | The job has started cutting over. The cutover time can no longer be changed. Once cutover is complete, A virtual cluster will be available for use with [`ALTER VIRTUAL CLUSTER ... START SHARED SERVICE`]({% link {{ page.version.version }}/alter-virtual-cluster.md %}).
-`replication error` | An error has occurred. You can find more detail in the error message and the logs.
+`replication cutting over` | The job has started cutting over. The cutover time can no longer be changed. Once cutover is complete, a virtual cluster will be available for use with [`ALTER VIRTUAL CLUSTER ... START SHARED SERVICE`]({% link {{ page.version.version }}/alter-virtual-cluster.md %}).
+`replication error` | An error has occurred. You can find more detail in the error message and the [logs]({% link {{ page.version.version }}/configure-logs.md %}). **Note:** A physical cluster replication job will retry for 3 minutes before failing.
diff --git a/src/current/_includes/v24.1/physical-replication/show-virtual-cluster-responses.md b/src/current/_includes/v24.1/physical-replication/show-virtual-cluster-responses.md
index a537bc94939..545fc73058c 100644
--- a/src/current/_includes/v24.1/physical-replication/show-virtual-cluster-responses.md
+++ b/src/current/_includes/v24.1/physical-replication/show-virtual-cluster-responses.md
@@ -3,12 +3,13 @@ Field | Response
`id` | The ID of a virtual cluster.
`name` | The name of the standby (destination) virtual cluster.
`data_state` | The state of the data on a virtual cluster. This can show one of the following: `initializing replication`, `ready`, `replicating`, `replication paused`, `replication pending cutover`, `replication cutting over`, `replication error`. Refer to [Data state](#data-state) for more detail on each response.
-`service_mode` | The service mode shows whether a virtual cluster is ready to accept SQL requests. This can show one `none` or `shared`. When `shared`, a virtual cluster's SQL connections will be served by the same nodes that are serving the system virtual cluster.
+`service_mode` | The service mode shows whether a virtual cluster is ready to accept SQL requests. This can show `none` or `shared`. When `shared`, a virtual cluster's SQL connections will be served by the same nodes that are serving the system virtual cluster.
`source_tenant_name` | The name of the primary (source) virtual cluster.
`source_cluster_uri` | The URI of the primary (source) cluster. The standby cluster connects to the primary cluster using this URI when [starting a replication stream]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}#step-4-start-replication).
-`replication_job_id` | The ID of the replication job.
-`replicated_time` | The latest timestamp at which the standby cluser has consistent data — that is, the latest time you can cut over to. This time advances automatically as long as the replication proceeds without error. `replicated_time` is updated periodically (every `30s`).
+`replicated_time` | The latest timestamp at which the standby cluster has consistent data — that is, the latest time you can cut over to. This time advances automatically as long as the replication proceeds without error. `replicated_time` is updated periodically (every `30s`).
`retained_time` | The earliest timestamp at which the standby cluster has consistent data — that is, the earliest time you can cut over to.
+`replication_lag` | The time between the most up-to-date replicated time and the actual time. Refer to the [Technical Overview]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}) for more detail.
`cutover_time` | The time at which the cutover will begin. This can be in the past or the future. Refer to [Cut over to a point in time]({% link {{ page.version.version }}/cutover-replication.md %}#cut-over-to-a-point-in-time).
+`status` | The status of the replication stream. This can show one of the following: `initializing replication`, `ready`, `replicating`, `replication paused`, `replication pending cutover`, `replication cutting over`, `replication error`. Refer to [Data state](#data-state) for more detail on each response.
`capability_name` | The [capability]({% link {{ page.version.version }}/create-virtual-cluster.md %}#capabilities) name.
`capability_value` | Whether the [capability]({% link {{ page.version.version }}/create-virtual-cluster.md %}#capabilities) is enabled for a virtual cluster.
diff --git a/src/current/_includes/v24.1/sidebar-data/latest-releases.json b/src/current/_includes/v24.1/sidebar-data/latest-releases.json
index e25f8c0fa7a..e69de29bb2d 100644
--- a/src/current/_includes/v24.1/sidebar-data/latest-releases.json
+++ b/src/current/_includes/v24.1/sidebar-data/latest-releases.json
@@ -1,7 +0,0 @@
-{
- "title": "Latest Releases",
- "is_top_level": true,
- "items": [
- {% include_cached sidebar-latest-releases.json %}
- ]
- }
diff --git a/src/current/_includes/v24.1/sidebar-data/releases.json b/src/current/_includes/v24.1/sidebar-data/releases.json
index fb49f7c9acc..18f2a1b7c6a 100644
--- a/src/current/_includes/v24.1/sidebar-data/releases.json
+++ b/src/current/_includes/v24.1/sidebar-data/releases.json
@@ -1,7 +1,7 @@
{
- "title": "CockroachDB Releases",
+ "title": "Releases",
"is_top_level": true,
"items": [
- {% include_cached sidebar-all-releases.json %}
+ {% include_cached sidebar-releases.json %}
]
}
diff --git a/src/current/_includes/v24.1/sidebar-data/self-hosted-deployments.json b/src/current/_includes/v24.1/sidebar-data/self-hosted-deployments.json
index d21e7c0d9d1..c9d29959c93 100644
--- a/src/current/_includes/v24.1/sidebar-data/self-hosted-deployments.json
+++ b/src/current/_includes/v24.1/sidebar-data/self-hosted-deployments.json
@@ -465,6 +465,12 @@
"/${VERSION}/ui-ttl-dashboard.html"
]
},
+ {
+ "title": "Physical Cluster Replication Dashboard",
+ "urls": [
+ "/${VERSION}/ui-physical-cluster-replication-dashboard.html"
+ ]
+ },
{
"title": "Custom Chart",
"urls": [
diff --git a/src/current/_includes/v24.1/sql/privileges.md b/src/current/_includes/v24.1/sql/privileges.md
index a3a25489b48..223ce20ad7c 100644
--- a/src/current/_includes/v24.1/sql/privileges.md
+++ b/src/current/_includes/v24.1/sql/privileges.md
@@ -22,7 +22,7 @@ Privilege | Levels | Description
`RESTORE` | System, Database | Grants the ability to restore [backups]({% link {{ page.version.version }}/backup-and-restore-overview.md %}) at the system or database level. Refer to `RESTORE` [Required privileges]({% link {{ page.version.version }}/restore.md %}#required-privileges) for more details.
`SELECT` | Table, Sequence | Grants the ability to run [selection queries]({% link {{ page.version.version }}/query-data.md %}) at the table or sequence level.
`UPDATE` | Table, Sequence | Grants the ability to run [update statements]({% link {{ page.version.version }}/update-data.md %}) at the table or sequence level.
-`USAGE` | Function, Schema, Sequence, Type | Grants the ability to use [functions]({% link {{ page.version.version }}/functions-and-operators.md %}), [schemas]({% link {{ page.version.version }}/schema-design-overview.md %}), [sequences]({% link {{ page.version.version }}/create-sequence.md %}), or [user-defined types]({% link {{ page.version.version }}/create-type.md %}).
+`USAGE` | Schema, Sequence, Type | Grants the ability to use [schemas]({% link {{ page.version.version }}/schema-design-overview.md %}), [sequences]({% link {{ page.version.version }}/create-sequence.md %}), or [user-defined types]({% link {{ page.version.version }}/create-type.md %}).
`VIEWACTIVITY` | System | Grants the ability to view other users' activity statistics of a cluster.
`VIEWACTIVITYREDACTED` | System | Grants the ability to view other users' activity statistics, but prevents the role from accessing the statement diagnostics bundle in the DB Console, and viewing some columns in introspection queries that contain data about the cluster.
`VIEWCLUSTERMETADATA` | System | Grants the ability to view range information, data distribution, store information, and Raft information.
diff --git a/src/current/_includes/v24.1/ui/insights.md b/src/current/_includes/v24.1/ui/insights.md
index 8e6270a695e..dc2a1ba3f1b 100644
--- a/src/current/_includes/v24.1/ui/insights.md
+++ b/src/current/_includes/v24.1/ui/insights.md
@@ -67,7 +67,7 @@ Additional information is displayed for the following insight types:
All transaction executions flagged with this insight type will display a **Transaction with ID {transaction ID} waited on** section which provides details of the blocking transaction execution.
1. [**Failed Execution**](#failed-execution):
Certain transaction executions flagged with this insight type will display a **Failed Execution** section with **Conflicting Transaction** and **Conflicting Location** information. The following 3 conditions are required:
- - The `sql.contention.record_serialization_conflicts.enabled` [cluster setting]({{ link_prefix }}cluster-settings.html) is set to `true`. By default, this is set to `false`.
+ - The [`sql.contention.record_serialization_conflicts.enabled`]({{ link_prefix }}cluster-settings.html#setting-sql-contention-record-serialization-conflicts-enabled) cluster setting is set to `true` (default).
- **Error Code** is `40001`, a `serialization_failure`.
- **Error Message** includes [`RETRY_SERIALIZABLE`]({{ link_prefix }}transaction-retry-error-reference.html#retry_serializable)` - failed preemptive refresh due to conflicting locks`.
@@ -199,7 +199,7 @@ The following screenshot shows the conditional details of the preceding failed t
-To capture more information in the event of a failed transaction execution due to a serialization conflict, set the `sql.contention.record_serialization_conflicts.enabled` [cluster setting]({{ link_prefix }}cluster-settings.html) to `true` (default is `false`). With this setting enabled, when the **Error Code** is `40001` and the **Error Message** specifically has [`RETRY_SERIALIZABLE - failed preemptive refresh`]({{ link_prefix }}transaction-retry-error-reference.html#failed_preemptive_refresh)` due to conflicting locks`, a conditional **Failed Execution** section is displayed with **Conflicting Transaction** and **Conflicting Location** information.
+To capture more information in the event of a failed transaction execution due to a serialization conflict, set the [`sql.contention.record_serialization_conflicts.enabled`]({{ link_prefix }}cluster-settings.html#setting-sql-contention-record-serialization-conflicts-enabled) cluster setting to `true` (default). With this setting enabled, when the **Error Code** is `40001` and the **Error Message** specifically has [`RETRY_SERIALIZABLE - failed preemptive refresh`]({{ link_prefix }}transaction-retry-error-reference.html#failed_preemptive_refresh)` due to conflicting locks`, a conditional **Failed Execution** section is displayed with **Conflicting Transaction** and **Conflicting Location** information.
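+As a sketch, the setting can be enabled explicitly (it is `true` by default):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET CLUSTER SETTING sql.contention.record_serialization_conflicts.enabled = true;
+~~~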
To troubleshoot, refer to the performance tuning recipe for [transaction retry errors]({{ link_prefix }}performance-recipes.html#transaction-retry-error).
@@ -313,7 +313,7 @@ You can configure the behavior of insights using the following [cluster settings
### Workload insights settings
-You can configure [**Workload Insights**](#workload-insights-tab) with the following [{{ link_prefix }}cluster settings]({{ link_prefix }}cluster-settings.html):
+You can configure [**Workload Insights**](#workload-insights-tab) with the following [cluster settings]({{ link_prefix }}cluster-settings.html):
| Setting | Description | Where used |
|------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------|
@@ -325,7 +325,7 @@ You can configure [**Workload Insights**](#workload-insights-tab) with the follo
|[`sql.insights.execution_insights_capacity`]({{ link_prefix }}cluster-settings.html#setting-sql-insights-execution-insights-capacity) | The maximum number of execution insights stored in each node. | [Statement executions](#statement-executions-view) |
|[`sql.contention.event_store.capacity`]({{ link_prefix }}cluster-settings.html#setting-sql-contention-event-store-capacity) | The in-memory storage capacity of the contention event store on each node. | [Transaction executions](#transaction-executions-view) |
|[`sql.contention.event_store.duration_threshold`]({{ link_prefix }}cluster-settings.html#setting-sql-contention-event-store-duration-threshold) | The minimum contention duration to cause contention events to be collected into the `crdb_internal.transaction_contention_events` table. | [Transaction executions](#transaction-executions-view) |
-|`sql.contention.record_serialization_conflicts.enabled` | enables recording `40001` errors with conflicting txn meta as `SERIALIZATION_CONFLICT` contention events into `crdb_internal.transaction_contention_events` | [Transaction executions](#transaction-executions-view) |
+|[`sql.contention.record_serialization_conflicts.enabled`]({{ link_prefix }}cluster-settings.html#setting-sql-contention-record-serialization-conflicts-enabled) | Enables recording `40001` errors, along with metadata about conflicting transactions, as `SERIALIZATION_CONFLICT` contention events into `crdb_internal.transaction_contention_events`. **Default**: `true` | [Transaction executions](#transaction-executions-view) |
#### Detect slow executions
diff --git a/src/current/_includes/v24.1/ui/statement-details.md b/src/current/_includes/v24.1/ui/statement-details.md
index 690cc701eaa..c2e91b92609 100644
--- a/src/current/_includes/v24.1/ui/statement-details.md
+++ b/src/current/_includes/v24.1/ui/statement-details.md
@@ -30,7 +30,7 @@ The **Overview** section also displays the SQL statement fingerprint statistics
|**Execution Retries** | The number of [retries]({{ link_prefix }}transactions.html#transaction-retries). |
|**Execution Count** | The total number of executions. It is calculated as the sum of first attempts and retries. |
|**Contention Time** | The amount of time spent waiting for resources. For more information about contention, see [Understanding and avoiding transaction contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). |
-|**CPU Time** | The amount of CPU time spent executing the statement. The CPU time represents the time spent and work done within SQL execution operators. |
+|**SQL CPU Time** | The amount of SQL CPU time spent executing the statement. The SQL CPU time represents the time spent and work done within SQL execution operators. It does not include SQL planning time or KV execution time. |
|**Client Wait Time** | The time spent waiting for the client to send the statement while holding the transaction open. A high wait time indicates that you should revisit the entire transaction and [batch your statements]({{ link_prefix }}transactions.html#batched-statements). |
The following screenshot shows the statement fingerprint of the query described in [Use the right index]({{ link_prefix }}apply-statement-performance-rules.html#rule-2-use-the-right-index):
@@ -60,7 +60,7 @@ Charts following the execution attributes display statement fingerprint statisti
|**Execution Retries** | The number of [retries]({{ link_prefix }}transactions.html#transaction-retries). |
|**Execution Count** | The total number of executions. It is calculated as the sum of first attempts and retries. |
|**Contention Time** | The amount of time spent waiting for resources. For more information about contention, see [Understanding and avoiding transaction contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). |
-|**CPU Time** | The amount of CPU time spent executing the statement. The CPU time represents the time spent and work done within SQL execution operators. |
+|**SQL CPU Time** | The amount of SQL CPU time spent executing the statement. The SQL CPU time represents the time spent and work done within SQL execution operators. It does not include SQL planning time or KV execution time. |
|**Client Wait Time** | The time spent waiting for the client to send the statement while holding the transaction open. A high wait time indicates that you should revisit the entire transaction and [batch your statements]({{ link_prefix }}transactions.html#batched-statements). |
The following charts summarize the executions of the statement fingerprint illustrated in [Overview](#overview):
@@ -124,10 +124,7 @@ If you click **Apply** to create the index and then execute the statement again,
The **Diagnostics** tab allows you to activate and download diagnostics for a SQL statement fingerprint.
{{site.data.alerts.callout_info}}
-The **Diagnostics** tab is not visible:
-
-- On CockroachDB {{ site.data.products.serverless }} clusters.
-- For roles with the `VIEWACTIVITYREDACTED` [system privilege]({{ link_prefix }}security-reference/authorization.html#supported-privileges) (or the legacy `VIEWACTIVITYREDACTED` [role option]({{ link_prefix }}security-reference/authorization.html#role-options)) defined.
+The **Diagnostics** tab is not visible for roles with the `VIEWACTIVITYREDACTED` [system privilege]({{ link_prefix }}security-reference/authorization.html#supported-privileges) (or the legacy `VIEWACTIVITYREDACTED` [role option]({{ link_prefix }}security-reference/authorization.html#role-options)) defined.
{{site.data.alerts.end}}
When you activate diagnostics for a fingerprint, CockroachDB waits for the next SQL query that matches this fingerprint to be run on any node. On the next match, information about the SQL statement is written to a diagnostics bundle that you can download. This bundle consists of [statement traces]({{ link_prefix }}show-trace.html) in various formats (including a JSON file that can be [imported to Jaeger]({{ link_prefix }}query-behavior-troubleshooting.html#visualize-statement-traces-in-jaeger)), a physical query plan, execution statistics, and other information about the query. The bundle contents are identical to those produced by [`EXPLAIN ANALYZE (DEBUG)`]({{ link_prefix }}explain-analyze.html#debug-option). You can use the information collected in the bundle to diagnose problematic SQL statements, such as [slow queries]({{ link_prefix }}query-behavior-troubleshooting.html#query-is-always-slow). We recommend that you share the diagnostics bundle with our [support team]({{ link_prefix }}support-resources.html), which can help you interpret the results.
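Because the bundle contents are identical to the `EXPLAIN ANALYZE (DEBUG)` output, you can also collect the same diagnostics ad hoc from a SQL shell. A minimal sketch, using a hypothetical table name:

~~~ sql
-- Runs the statement once and collects a diagnostics bundle
-- (statement trace, physical plan, execution statistics).
-- `users` is a placeholder table name.
EXPLAIN ANALYZE (DEBUG) SELECT * FROM users WHERE id = 1;
~~~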
diff --git a/src/current/_includes/v24.1/ui/statements-filter.md b/src/current/_includes/v24.1/ui/statements-filter.md
index d5a34084385..c750a860614 100644
--- a/src/current/_includes/v24.1/ui/statements-filter.md
+++ b/src/current/_includes/v24.1/ui/statements-filter.md
@@ -7,7 +7,7 @@ The statement fingerprints returned are determined by the selected **Search Crit
By default, the **Top** `100` statement fingerprints **By** [`% of All Runtime`](#percent-of-all-runtime) for the `Past Hour` are returned.
1. To change the number of results returned, select `25`, `50`, `100`, or `500` from the **Top** dropdown. To return a larger number, select `More` and choose an option: `1000`, `5000`, `10000`.
-1. To change the sort column, from the **By** dropdown, select a commonly sorted column: `% of All Runtime`, `CPU Time`, `Contention Time`, `Execution Count`, `P99 Latency`, `Statement Time`. To sort by other columns, select `More` from the dropdown and choose an option: `Last Execution Time`, `Max Latency`,`Max Memory`, `Min Latency`, `Network`, `P50 Latency`, `P90 Latency`, `Retries`, `Rows Processed`.
+1. To change the sort column, from the **By** dropdown, select a commonly sorted column: `% of All Runtime`, `SQL CPU Time`, `Contention Time`, `Execution Count`, `P99 Latency`, `Statement Time`. To sort by other columns, select `More` from the dropdown and choose an option: `Last Execution Time`, `Max Latency`, `Max Memory`, `Min Latency`, `Network`, `P50 Latency`, `P90 Latency`, `Retries`, `Rows Processed`.
{{site.data.alerts.callout_info}}
The `More` options may increase the page loading time and are not generally recommended.
{{site.data.alerts.end}}
diff --git a/src/current/_includes/v24.1/ui/statements-table.md b/src/current/_includes/v24.1/ui/statements-table.md
index ea89ec91e6d..07b200d542b 100644
--- a/src/current/_includes/v24.1/ui/statements-table.md
+++ b/src/current/_includes/v24.1/ui/statements-table.md
@@ -13,7 +13,7 @@ Application Name | The name specified by the [`application_name`]({{ link_prefix
Statement Time | Average [planning and execution time]({{ link_prefix }}architecture/sql-layer.html#sql-parser-planner-executor) of statements with this statement fingerprint within the time interval. The gray bar indicates the mean latency. The blue bar indicates one standard deviation from the mean. Hover over the bar to display exact values.
% of All Runtime | The percentage of execution time taken by this statement fingerprint compared to all other statements executed within the time period, including those not displayed. Runtime is calculated as the mean execution latency multiplied by the execution count. Note: The sum of the values in this column may not equal 100%. Each fingerprint's percentage is calculated by dividing the fingerprint's runtime by the sum of the runtimes for all statement fingerprints in the time interval. "All statement fingerprints" means all user statement fingerprints (not only those displayed by the [search criteria](#search-criteria)), as well as internal statement fingerprints that are never included in the displayed result set. The search criteria are applied after the `% of All Runtime` calculation. (A worked example follows this table.)
Contention Time | Average time statements with this fingerprint were [in contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) with other transactions within the time interval. The gray bar indicates mean contention time. The blue bar indicates one standard deviation from the mean.
-CPU Time | Average CPU time spent executing within the specified time interval. The gray bar indicates mean CPU time. The blue bar indicates one standard deviation from the mean. The CPU time includes time spent in the [SQL layer]({{ link_prefix }}architecture/sql-layer.html). It does not include time spent in the [storage layer]({{ link_prefix }}architecture/storage-layer.html).
+SQL CPU Time | Average SQL CPU time spent executing within the specified time interval. It does not include SQL planning time or KV execution time. The gray bar indicates mean SQL CPU time. The blue bar indicates one standard deviation from the mean. The SQL CPU time includes time spent in the [SQL layer]({{ link_prefix }}architecture/sql-layer.html). It does not include time spent in the [storage layer]({{ link_prefix }}architecture/storage-layer.html).
P50 Latency | The 50th latency percentile for sampled statement executions with this fingerprint.
P90 Latency | The 90th latency percentile for sampled statement executions with this fingerprint.
P99 Latency | The 99th latency percentile for sampled statement executions with this fingerprint.
@@ -27,7 +27,7 @@ Retries | Cumulative number of automatic (internal) [retries]({{ link_prefix }}t
Regions/Nodes | The regions and nodes on which statements with this fingerprint executed. Nodes are not visible for CockroachDB {{ site.data.products.serverless }} clusters or for clusters that are not multi-region.
Last Execution Time (UTC)| The timestamp when the statement was last executed.
Statement Fingerprint ID | The ID of the statement fingerprint.
-Diagnostics | Activate and download [diagnostics](#diagnostics) for this fingerprint. To activate, click the **Activate** button. The [Activate statement diagnostics](#activate-diagnostics-collection-and-download-bundles) dialog displays. After you complete the dialog, the column displays the status of diagnostics collection (**WAITING**, **READY**, or **ERROR**). Click and select a bundle to download or select **Cancel request** to cancel diagnostics bundle collection. Statements are periodically cleared from the Statements page based on the start time. To access the full history of diagnostics for the fingerprint, see the [Diagnostics](#diagnostics) tab of the Statement Details page. Diagnostics is not visible for CockroachDB {{ site.data.products.serverless }} clusters.
+Diagnostics | Activate and download [diagnostics](#diagnostics) for this fingerprint. To activate, click the **Activate** button. The [Activate statement diagnostics](#activate-diagnostics-collection-and-download-bundles) dialog displays. After you complete the dialog, the column displays the status of diagnostics collection (**WAITING**, **READY**, or **ERROR**). Click and select a bundle to download or select **Cancel request** to cancel diagnostics bundle collection. Statements are periodically cleared from the Statements page based on the start time. To access the full history of diagnostics for the fingerprint, see the [Diagnostics](#diagnostics) tab of the Statement Details page.
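The `% of All Runtime` calculation described above reduces to mean execution latency multiplied by execution count, divided by the summed runtime of all fingerprints. A minimal sketch with hypothetical figures:

~~~ sql
-- Hypothetical: a fingerprint with 2 ms mean latency executed 1,500 times,
-- out of 60 s of total runtime across all fingerprints in the interval.
SELECT 0.002 * 1500 AS runtime_seconds,                   -- 3.0
       (0.002 * 1500) / 60.0 * 100 AS pct_of_all_runtime; -- 5.0
~~~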
{{site.data.alerts.callout_info}}
To obtain the execution statistics, CockroachDB samples a percentage of the executions. If you see `no samples` displayed in the **Contention**, **Max Memory**, or **Network** columns, there are two possibilities:
diff --git a/src/current/cockroachcloud/authorization.md b/src/current/cockroachcloud/authorization.md
index c92c2ce1f92..426e9d79479 100644
--- a/src/current/cockroachcloud/authorization.md
+++ b/src/current/cockroachcloud/authorization.md
@@ -1,6 +1,6 @@
---
title: CockroachDB Cloud Access Management (Authorization) Overview
-summary: Learn about the CockroachDB {{ site.data.products.cloud }} Authorization features and concepts
+summary: Learn about CockroachDB Cloud authorization features and concepts
toc: true
docs_area: manage
---
diff --git a/src/current/cockroachcloud/ccloud-faq.md b/src/current/cockroachcloud/ccloud-faq.md
index a9feebb4436..b17c95c12ce 100644
--- a/src/current/cockroachcloud/ccloud-faq.md
+++ b/src/current/cockroachcloud/ccloud-faq.md
@@ -1,6 +1,6 @@
---
title: CockroachDB Cloud Access Management (Authorization) FAQ
-summary: Frequently asked questions about CockroachDB {{ site.data.products.cloud }}
+summary: Frequently asked questions about CockroachDB Cloud
toc: true
docs_area: manage
---
diff --git a/src/current/cockroachcloud/cloud-sso-sql.md b/src/current/cockroachcloud/cloud-sso-sql.md
index a0852729cc6..86afd9cdada 100644
--- a/src/current/cockroachcloud/cloud-sso-sql.md
+++ b/src/current/cockroachcloud/cloud-sso-sql.md
@@ -1,6 +1,6 @@
---
title: Cluster Single Sign-on (SSO) using the Cloud Console
-summary: Overview of Cluster Single Sign-on (SSO) for CockroachDB {{ site.data.products.cloud }}, review of authenticating users, configuring required cluster settings.
+summary: Overview of Cluster Single Sign-on (SSO) for CockroachDB Cloud, review of authenticating users, configuring required cluster settings.
toc: true
docs_area: manage
---
diff --git a/src/current/cockroachcloud/cluster-overview-page.md b/src/current/cockroachcloud/cluster-overview-page.md
index 27df7c9043d..66f8f10d35f 100644
--- a/src/current/cockroachcloud/cluster-overview-page.md
+++ b/src/current/cockroachcloud/cluster-overview-page.md
@@ -1,6 +1,6 @@
---
title: Cluster Overview Page
-summary: How to use the Cluster Overview page to view cluster details on CockroachDB {{ site.data.products.cloud }}.
+summary: How to use the Cluster Overview page to view cluster details on CockroachDB Cloud.
toc: true
docs_area: manage
---
diff --git a/src/current/cockroachcloud/cmek.md b/src/current/cockroachcloud/cmek.md
index c7f9f75a170..eb4603ff144 100644
--- a/src/current/cockroachcloud/cmek.md
+++ b/src/current/cockroachcloud/cmek.md
@@ -1,6 +1,6 @@
---
title: Customer-Managed Encryption Keys (CMEK) Overview
-summary: Use cryptographic keys that you manage to protect data at rest in a CockroachDB {{ site.data.products.dedicated }} cluster.
+summary: Use cryptographic keys that you manage to protect data at rest in a CockroachDB Cloud cluster.
toc: true
docs_area: manage.security
cloud: true
diff --git a/src/current/cockroachcloud/cockroachdb-dedicated-on-azure.md b/src/current/cockroachcloud/cockroachdb-dedicated-on-azure.md
index 0bf8f3a16c3..916e6ef5eb1 100644
--- a/src/current/cockroachcloud/cockroachdb-dedicated-on-azure.md
+++ b/src/current/cockroachcloud/cockroachdb-dedicated-on-azure.md
@@ -12,14 +12,9 @@ This page provides information about CockroachDB {{ site.data.products.dedicated
CockroachDB {{ site.data.products.dedicated }} clusters on Azure have the following temporary limitations. To express interest or request more information about a given limitation, contact your Cockroach Labs account team. For more details, refer to the [FAQs](#faqs).
-### Regions
-
-For the list of supported Azure regions, refer to [Azure Regions]({% link cockroachcloud/regions.md %}?filters=dedicated#azure-regions).
-
### Editing and scaling
- A cluster must have at minimum three nodes. A multi-region cluster must have at minimum three nodes per region. Single-node clusters are not supported.
-- To add or remove regions from a cluster on Azure, you must use the CockroachDB {{ site.data.products.cloud }} API. Refer to [Scale, edit or upgrade a cluster](https://www.cockroachlabs.com/docs/api/cloud/v1#patch-/api/v1/clusters/-cluster_id-).
- After it is created, a cluster's storage can be increased in place, but cannot subsequently be decreased or removed.
### Networking
@@ -28,7 +23,6 @@ For the list of supported Azure regions, refer to [Azure Regions]({% link cockro
### Observability
-- Exporting metrics to [Datadog](https://www.datadoghq.com/) is available. Enable the Datadog integration in the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/tools-page.md %}#monitor-cockroachdb-dedicated-with-datadog) or with the [Cloud API]({% link cockroachcloud/export-metrics.md %}?filters=datadog-metrics-export).
- Exporting metrics to Azure Monitor is not yet available. To express interest, contact your Cockroach Labs account team.
- [Log Export]({% link cockroachcloud/export-logs.md %}) is not yet available.
@@ -44,21 +38,17 @@ For the list of supported Azure regions, refer to [Azure Regions]({% link cockro
The following sections provide more details about CockroachDB {{ site.data.products.dedicated }} on Azure.
-### Are multi-region clusters supported?
-
-Yes.
-
### Can CockroachDB {{ site.data.products.serverless }} clusters be deployed on Azure?
CockroachDB {{ site.data.products.serverless }} is not currently available on Azure.
-### Are horizontal and vertical scaling supported?
+### Can we use {{ site.data.products.db }} credits to pay for clusters on Azure?
-Yes. Refer to [Cluster Management]({% link cockroachcloud/cluster-management.md %}).
+Yes, a CockroachDB {{ site.data.products.cloud }} organization can pay for the usage of CockroachDB {{ site.data.products.dedicated }} clusters on Azure with {{ site.data.products.db }} credits. To add additional credits to your CockroachDB {{ site.data.products.cloud }} organization, contact your Cockroach Labs account team.
-### What Azure regions can we choose?
+### Can we migrate from PostgreSQL to CockroachDB {{ site.data.products.dedicated }} on Azure?
-Refer to [Azure Regions]({% link cockroachcloud/regions.md %}?filters=dedicated#azure-regions).
+CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and the majority of PostgreSQL syntax. Refer to [SQL Feature Support](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/sql-feature-support). The same CockroachDB binaries are used across CockroachDB {{ site.data.products.cloud }} deployment environments, and all SQL features behave the same on Azure as on GCP or AWS.
### What kind of compute and storage resources are used?
@@ -66,54 +56,36 @@ Refer to [Azure Regions]({% link cockroachcloud/regions.md %}?filters=dedicated#
CockroachDB {{ site.data.products.dedicated }} clusters can be created with a minimum of 4 vCPUs per node on Azure.
-### Can we use {{ site.data.products.db }} credits to pay for clusters on Azure?
-
-Yes, existing CockroachDB {{ site.data.products.cloud }} customers can pay for the usage of CockroachDB {{ site.data.products.dedicated }} clusters on Azure with their available credits. To add additional credits to your CockroachDB {{ site.data.products.cloud }} organization, contact your Cockroach Labs account team.
-
### What backup and restore options are available for clusters on Azure?
[Managed-service backups]({% link cockroachcloud/use-managed-service-backups.md %}?filters=dedicated) automatically back up clusters on Azure, and customers can [take and restore from manual backups to Azure storage]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}) ([Blob Storage](https://azure.microsoft.com/products/storage/blobs) or [ADLS Gen 2](https://learn.microsoft.com/azure/storage/blobs/data-lake-storage-introduction)). Refer to the blog post [CockroachDB locality-aware Backups for Azure Blob](https://www.cockroachlabs.com/blog/locality-aware-backups-azure-blob/) for an example.
-### Is it possible to take encrypted backups?
-
-Yes, customers can [take and restore from encrypted backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}) on Azure storage by using an RSA key stored in [Azure Key Vault](https://learn.microsoft.com/azure/key-vault/keys/about-keys).
+You can [take and restore from encrypted backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}) on Azure storage by using an RSA key stored in [Azure Key Vault](https://learn.microsoft.com/azure/key-vault/keys/about-keys).
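As a sketch of what such a backup statement can look like, assuming heavily placeholder-laden URIs (the exact query parameters and Key Vault key reference are documented on the linked take-and-restore page):

~~~ sql
-- Back up into Azure Blob Storage, encrypting with a key held in Azure Key Vault.
-- All {placeholders} are illustrative, not literal parameter values.
BACKUP DATABASE defaultdb
  INTO 'azure-blob://{container}/{path}?AZURE_ACCOUNT_NAME={account}&AZURE_ACCOUNT_KEY={url-encoded key}'
  WITH kms = 'azure-kms:///{key}/{key version}?AZURE_TENANT_ID={tenant}&AZURE_CLIENT_ID={client}&AZURE_CLIENT_SECRET={secret}&AZURE_VAULT_NAME={vault}';
~~~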
### Are changefeeds available?
Yes, customers can create and configure [changefeeds](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/changefeed-messages) to send data events in real-time from a CockroachDB {{ site.data.products.dedicated }} cluster to a [downstream sink](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/changefeed-sinks.html) such as Kafka, Azure storage, or Webhook. [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/azure-event-hubs-kafka-overview) provides an Azure-native service that can be used with a Kafka endpoint as a sink.
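For example, a changefeed pointed at an Event Hubs Kafka endpoint might look like the following sketch. The namespace, table, and connection string are placeholders; see the changefeed sinks page for the supported sink query parameters:

~~~ sql
-- Event Hubs exposes a Kafka-compatible endpoint on port 9093.
-- The SASL username for Event Hubs is the literal string "$ConnectionString".
CREATE CHANGEFEED FOR TABLE movr.rides
  INTO 'kafka://{namespace}.servicebus.windows.net:9093?tls_enabled=true&sasl_enabled=true&sasl_mechanism=PLAIN&sasl_user=$ConnectionString&sasl_password={url-encoded connection string}'
  WITH resolved;
~~~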
-### Can we export logs and metrics from a cluster on Azure to Azure Monitor or a third-party observability service?
-
-Exporting metrics to Datadog is supported. Refer to [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster]({% link cockroachcloud/export-metrics.md %}). It’s not yet possible to export cluster logs or metrics to Azure Monitor or to another third-party observability service. To express interest in this feature, contact your Cockroach Labs account team.
+### What secure and centralized authentication methods are available for {{ site.data.products.dedicated }} clusters on Azure?
-### Are CockroachDB user-defined functions available for clusters on Azure?
+Human users can connect using [Cluster SSO]({% link cockroachcloud/cloud-sso-sql.md %}), [client certificates](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/authentication.html#using-digital-certificates-with-cockroachdb), the [`ccloud` command]({% link cockroachcloud/ccloud-get-started.md %}), or SQL clients.
-Yes, [user-defined functions](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/user-defined-functions) are supported for CockroachDB {{ site.data.products.dedicated }} clusters on Azure. The same CockroachDB binaries are used across CockroachDB {{ site.data.products.cloud }} deployment environments, and all SQL features behave the same on Azure as on GCP or AWS.
+Application users can connect using [JWT tokens](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/sso-sql) or [client certificates](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/authentication.html#using-digital-certificates-with-cockroachdb).
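For instance, a human user relying on Cluster SSO can connect from a terminal with the `ccloud` CLI. A minimal sketch, where `{cluster name}` is a placeholder:

~~~ shell
# Authenticate the CLI via the Cloud Console SSO flow, then open a SQL shell.
$ ccloud auth login
$ ccloud cluster sql {cluster name}
~~~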
-### Can we migrate from PostgreSQL to CockroachDB {{ site.data.products.dedicated }} on Azure?
+### Can we use private connectivity methods, such as Private Link, to securely connect to a cluster on Azure?
-CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and the majority of PostgreSQL syntax. Refer to [Supported SQL Feature Support](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/sql-feature-support). The same CockroachDB binaries are used across CockroachDB {{ site.data.products.cloud }} deployment environments, and all SQL features behave the same on Azure as on GCP or AWS.
+You can configure IP allowlisting to limit the IP addresses or CIDR ranges that can access a CockroachDB {{ site.data.products.dedicated }} cluster on Azure. [Azure Private Link](https://learn.microsoft.com/azure/private-link/private-link-overview) is not yet available. To express interest, contact your Cockroach Labs account team.
### How are clusters on Azure isolated from each other? Do they follow a similar approach as on AWS and GCP?
CockroachDB {{ site.data.products.cloud }} follows a similar tenant isolation approach on Azure as on GCP and AWS. Each {{ site.data.products.dedicated }} cluster is created on an [AKS cluster](https://azure.microsoft.com/products/kubernetes-service) in a unique [VNet](https://learn.microsoft.com/azure/virtual-network/virtual-networks-overview). Implementation details are subject to change.
-### Can we use Single-Sign On to sign-in to {{ site.data.products.db }} and manage clusters on Azure?
-
-Yes, [Cloud Organization SSO]({% link cockroachcloud/cloud-org-sso.md %}) is supported. This feature is unrelated to the cluster's deployment environment.
-
-### What secure and centralized authentication methods are available for {{ site.data.products.dedicated }} clusters on Azure?
-
-Human users can connect using [Cluster SSO]({% link cockroachcloud/cloud-sso-sql.md %}), [client certificates](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/authentication.html#using-digital-certificates-with-cockroachdb), or the [`ccloud` command]({% link cockroachcloud/ccloud-get-started.md %}) or SQL clients.
-
-Application users can connect using [JWT tokens](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/sso-sql) or [client certificates](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/authentication.html#using-digital-certificates-with-cockroachdb).
-
### How is data encrypted at rest in a cluster on Azure?
-Customer data at rest on cluster disks is encrypted using [server-side encryption of Azure disk storage](https://learn.microsoft.com/azure/virtual-machines/disk-encryption). CockroachDB’s [file-based encryption at rest](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/security-reference/encryption#cockroachdb-self-hosted-clusters) and [Customer-Managed Encryption Keys (CMEK)]({% link cockroachcloud/cmek.md %}) are not yet available. To express interest, contact your Cockroach Labs account team.
+Customer data at rest on cluster disks is encrypted using [server-side encryption of Azure disk storage](https://learn.microsoft.com/azure/virtual-machines/disk-encryption). [Customer-Managed Encryption Keys (CMEK)]({% link cockroachcloud/cmek.md %}) are not yet available. To express interest, contact your Cockroach Labs account team.
-All client connections to a CockroachDB {{ site.data.products.dedicated }} cluster on Azure, as well as connections between nodes, are encrypted using TLS.
+All client connections to a CockroachDB {{ site.data.products.dedicated }} cluster, as well as connections between nodes, are encrypted using TLS.
-### Can we use private connectivity methods, such as Private Link, to securely connect to a cluster on Azure?
+### Do CockroachDB {{ site.data.products.dedicated }} clusters on Azure comply with SOC 2?
-You can configure IP allowlisting to limit the IP addresses or CIDR ranges that can access a CockroachDB {{ site.data.products.dedicated }} cluster on Azure. [Azure Private Link](https://learn.microsoft.com/azure/private-link/private-link-overview) is not yet available. To express interest, contact your Cockroach Labs account team.
+CockroachDB {{ site.data.products.dedicated }} on Azure meets or exceeds the requirements of SOC 2 Type 2. Refer to [Regulatory Compliance in CockroachDB {{ site.data.products.dedicated }}]({% link cockroachcloud/compliance.md %}).
diff --git a/src/current/cockroachcloud/egress-perimeter-controls.md b/src/current/cockroachcloud/egress-perimeter-controls.md
index e5abfb6ab42..8797fe43af1 100644
--- a/src/current/cockroachcloud/egress-perimeter-controls.md
+++ b/src/current/cockroachcloud/egress-perimeter-controls.md
@@ -1,6 +1,6 @@
---
title: Egress Perimeter Controls for CockroachDB Dedicated
-summary: Learn how to configure Egress Perimeter Controls for enhanced network security on a CockroachDB {{ site.data.products.dedicated }} cluster.
+summary: Learn how to configure Egress Perimeter Controls for enhanced network security on a CockroachDB Cloud cluster.
toc: true
toc_not_nested: true
docs_area: security
diff --git a/src/current/cockroachcloud/provision-a-cluster-with-terraform.md b/src/current/cockroachcloud/provision-a-cluster-with-terraform.md
index 94fd26fe304..425831c09aa 100644
--- a/src/current/cockroachcloud/provision-a-cluster-with-terraform.md
+++ b/src/current/cockroachcloud/provision-a-cluster-with-terraform.md
@@ -96,7 +96,6 @@ In this tutorial, you will create a CockroachDB {{ site.data.products.dedicated
cloud_provider_regions = ["{cloud provider region}"]
cluster_node_count = {number of nodes}
storage_gib = {storage in GiB}
- machine_type = "{cloud provider machine type}"
allow_list_name = "{allow list name}"
cidr_ip = "{allow list CIDR IP}"
cidr_mask = {allow list CIDR mask}
@@ -110,7 +109,6 @@ In this tutorial, you will create a CockroachDB {{ site.data.products.dedicated
- `{cloud provider regions}` is the region code or codes for the cloud infrastructure provider. For multi-region clusters, separate each region with a comma.
- `{number of nodes}` is the number of nodes in each region. Cockroach Labs recommends at least 3 nodes per region, and the same number of nodes in each region for multi-region clusters.
- `{storage in GiB}` is the amount of storage specified in GiB.
- - `{cloud provider machine type}` is the machine type for the cloud infrastructure provider.
- `{allow list name}` is the name for the [IP allow list]({% link cockroachcloud/network-authorization.md %}#ip-allowlisting). Use a descriptive name to identify the IP allow list.
- `{allow list CIDR IP}` is the Classless Inter-Domain Routing (CIDR) IP address base.
- `{allow list CIDR mask}` is the CIDR mask.
@@ -126,7 +124,6 @@ In this tutorial, you will create a CockroachDB {{ site.data.products.dedicated
cloud_provider_regions = ["us-west2"]
cluster_node_count = 3
storage_gib = 15
- machine_type = "n1-standard-2"
allow_list_name = "Max's home network"
cidr_ip = "1.2.3.4"
cidr_mask = 32
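With `machine_type` gone from the variables file, a minimal `terraform.tfvars` sketch for this tutorial contains only the remaining fields shown above; node hardware is reported back by the provider instead (for example, as `num_virtual_cpus` in the plan output):

~~~
# Sketch limited to the variables that appear in this tutorial's example;
# other variables defined by the tutorial are omitted here.
cloud_provider_regions = ["us-west2"]
cluster_node_count     = 3
storage_gib            = 15
allow_list_name        = "Max's home network"
cidr_ip                = "1.2.3.4"
cidr_mask              = 32
~~~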
@@ -251,7 +248,6 @@ Terraform will perform the following actions:
+ creator_id = (known after apply)
+ dedicated = {
+ disk_iops = (known after apply)
- + machine_type = (known after apply)
+ memory_gib = (known after apply)
+ num_virtual_cpus = (known after apply)
+ storage_gib = (known after apply)
@@ -288,7 +284,6 @@ Terraform will perform the following actions:
+ creator_id = (known after apply)
+ dedicated = {
+ disk_iops = (known after apply)
- + machine_type = "n1-standard-2"
+ memory_gib = (known after apply)
+ num_virtual_cpus = (known after apply)
+ storage_gib = 15
@@ -362,7 +357,6 @@ cluster = {
"creator_id" = tostring(null)
"dedicated" = {
"disk_iops" = 450
- "machine_type" = "n1-standard-2"
"memory_gib" = 7.5
"num_virtual_cpus" = 2
"storage_gib" = 15
@@ -452,7 +446,6 @@ resource "cockroach_cluster" "example" {
creator_id = "98e75f0a-072b-44dc-95d2-cc36cd425cab"
dedicated = {
disk_iops = 450
- machine_type = "n1-standard-2"
memory_gib = 7.5
num_virtual_cpus = 2
storage_gib = 15
@@ -481,7 +474,6 @@ data "cockroach_cluster" "example" {
cockroach_version = "v22.2.0"
dedicated = {
disk_iops = 450
- machine_type = "n1-standard-2"
memory_gib = 7.5
num_virtual_cpus = 2
storage_gib = 15
@@ -506,7 +498,6 @@ cluster = {
creator_id = null
dedicated = {
disk_iops = 450
- machine_type = "n1-standard-2"
memory_gib = 7.5
num_virtual_cpus = 2
storage_gib = 15
diff --git a/src/current/cockroachcloud/serverless-unsupported-features.md b/src/current/cockroachcloud/serverless-unsupported-features.md
index 6dd94904d0d..a6705f694a0 100644
--- a/src/current/cockroachcloud/serverless-unsupported-features.md
+++ b/src/current/cockroachcloud/serverless-unsupported-features.md
@@ -50,7 +50,6 @@ The Cloud Console provides a subset of observability information from the DB Con
The Cloud Console also does not currently provide the following features available in the DB Console:
-- [Statement diagnostic bundles](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/ui-statements-page#diagnostics) on the **Statements** Page
- [Direct actions to drop unused indexes](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/ui-databases-page#index-recommendations) on the **Insights** and **Databases** pages
- [Direct actions to create missing indexes](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/ui-insights-page#schema-insights-tab) and [replace existing indexes](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/ui-insights-page#schema-insights-tab) on the **Insights** page
diff --git a/src/current/cockroachcloud/sql-audit-logging.md b/src/current/cockroachcloud/sql-audit-logging.md
index 665a093d4e4..e11be58c2d9 100644
--- a/src/current/cockroachcloud/sql-audit-logging.md
+++ b/src/current/cockroachcloud/sql-audit-logging.md
@@ -1,6 +1,6 @@
---
title: SQL Audit Logging
-summary: Learn about the SQL Audit Logging feature for CockroachDB {{ site.data.products.cloud }} clusters.
+summary: Learn about the SQL Audit Logging feature for CockroachDB Cloud clusters.
toc: true
docs_area: manage
---
diff --git a/src/current/cockroachcloud/upgrade-policy.md b/src/current/cockroachcloud/upgrade-policy.md
index 14a2b906b20..3ac7718365d 100644
--- a/src/current/cockroachcloud/upgrade-policy.md
+++ b/src/current/cockroachcloud/upgrade-policy.md
@@ -1,46 +1,45 @@
---
-title: CockroachDB Cloud Upgrade Policy
+title: CockroachDB Cloud Support and Upgrade Policy
summary: Learn about the upgrade policy for clusters deployed in CockroachDB Cloud.
toc: true
docs_area: manage
---
-This page describes the upgrade policy for CockroachDB {{ site.data.products.cloud }}. For self-hosted clusters, see the CockroachDB [Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy).
+This page describes the support and upgrade policy for clusters deployed in CockroachDB {{ site.data.products.cloud }}. For CockroachDB Self-Hosted, refer to the CockroachDB [Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy).
-Cockroach Labs uses a three-component calendar versioning scheme to name CockroachDB [releases](https://cockroachlabs.com/docs/releases/index#production-releases). The format is `YY.R.PP`, where `YY` indicates the year, `R` indicates the release (“1” or “2”, representing a typical biannual cycle), and `PP` indicates the patch release version. Example: Version 23.1.0 (abbreviated v23.1.0). Leading up to a new major version's initial GA (Generally Available) release, multiple testing builds are produced, moving from Alpha to Beta to Release Candidate. CockroachDB began using this versioning scheme with v19.1. For more details, refer to [Release Naming](https://cockroachlabs.com/docs/releases/index#release-naming).
+Cockroach Labs uses a three-component calendar versioning scheme to name CockroachDB [releases](https://cockroachlabs.com/docs/releases/index#production-releases). The format is `YY.R.PP`, where `YY` indicates the year, `R` indicates the release (historically “1” or “2”, representing a biannual cycle), and `PP` indicates the patch release version. For example: Version 23.1.0 (abbreviated v23.1.0). Leading up to a new major version's initial GA (Generally Available) release, multiple testing builds are produced, moving from Alpha to Beta to Release Candidate. CockroachDB began using this versioning scheme with v19.1. For more details, refer to [Release Naming](https://cockroachlabs.com/docs/releases/index#release-naming).
-CockroachDB {{ site.data.products.cloud }} supports the latest major version of CockroachDB and the major version immediately preceding it. Support for these versions includes patch version upgrades and security patches.
+CockroachDB {{ site.data.products.cloud }} provides support for the latest major version of CockroachDB and the major version immediately preceding it.
-{{site.data.alerts.callout_success}}
-Prior to the GA release of a major CockroachDB version, CockroachDB {{ site.data.products.dedicated }} clusters can optionally be upgraded to a [Pre-Production Preview](#pre-production-preview-upgrades) release—a beta or release candidate (RC) testing release for testing and validation of that next major version. To learn more, refer to [Upgrade to v23.2 Pre-Production Preview]({% link cockroachcloud/upgrade-to-v23.2.md %}).
-{{site.data.alerts.end}}
+CockroachDB {{ site.data.products.dedicated }} clusters are automatically upgraded to the latest patch release of the cluster’s current major version, but an [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) must initiate an upgrade to a new major version.
-{{site.data.alerts.callout_danger}}
-[CockroachDB {{ site.data.products.serverless }}]({% link cockroachcloud/quickstart.md %}) clusters are subject to automatic upgrades for both major and patch releases.
+CockroachDB {{ site.data.products.serverless }} clusters are automatically upgraded to the latest major version and to each subsequent patch release.
+
+{{site.data.alerts.callout_success}}
+Prior to the GA release of a major CockroachDB version, CockroachDB {{ site.data.products.dedicated }} clusters can optionally be upgraded to a [Pre-Production Preview](#pre-production-preview-upgrades) release—a beta or release candidate (RC) testing release for testing and validation of that next major version. To learn more, refer to [Upgrade to v24.1 Pre-Production Preview]({% link cockroachcloud/upgrade-to-v24.1.md %}).
{{site.data.alerts.end}}
## Patch version upgrades
Patch version [releases](https://www.cockroachlabs.com/docs/releases), or "maintenance" releases, contain stable, backward-compatible improvements to the major versions of CockroachDB (for example, v23.1.12 and v23.1.13).
-For CockroachDB {{ site.data.products.dedicated }} clusters, [Org Administrators]({% link cockroachcloud/authorization.md %}#org-administrator) can [set a weekly 6-hour maintenance window]({% link cockroachcloud/cluster-management.md %}#set-a-maintenance-window) during which available maintenance and patch upgrades will be applied. During the window, your cluster may experience restarts, degraded performance, and downtime for single-node clusters. Upgrades may not always be completed by the end of the window, and maintenance that is critical for security or stability may occur outside the window. Patch upgrades can also be [deferred for 60 days]({% link cockroachcloud/cluster-management.md %}#set-a-maintenance-window). If no maintenance window is configured, CockroachDB {{ site.data.products.dedicated }} clusters will be automatically upgraded to the latest supported patch version as soon as it becomes available.
+For CockroachDB {{ site.data.products.dedicated }} clusters, an [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) can [set a weekly 6-hour maintenance window]({% link cockroachcloud/cluster-management.md %}#set-a-maintenance-window) during which available patch upgrades will be applied. During the window, your cluster may experience restarts, degraded performance, and, for single-node clusters, downtime. Upgrades may not always be completed by the end of the window, and maintenance that is critical for security or stability may occur outside the window. Patch upgrades can also be [deferred for 60 days]({% link cockroachcloud/cluster-management.md %}#set-a-maintenance-window). If no maintenance window is configured, CockroachDB {{ site.data.products.dedicated }} clusters will be automatically upgraded to the latest supported patch version as soon as it becomes available.
CockroachDB {{ site.data.products.serverless }} clusters are subject to automatic upgrades to the latest supported patch version.
-**To minimize disruption to clients during cluster upgrades, it's important to use [connection retry logic]({% link cockroachcloud/production-checklist.md %}#keeping-connections-current) in your application.**
-
{{site.data.alerts.callout_danger}}
Single-node clusters will experience some downtime during cluster maintenance.
{{site.data.alerts.end}}
## Major version upgrades
-Major version [releases](https://www.cockroachlabs.com/docs/releases) (for example, v23.1.0 and v23.2.0) contain new functionality and potentially backward-incompatible changes to CockroachDB.
+Major version [releases](https://www.cockroachlabs.com/docs/releases) (for example, v23.1.0 and v23.2.0) contain new functionality and may include backward-incompatible changes to CockroachDB.
-Major version upgrades are automatic for CockroachDB {{ site.data.products.serverless }} clusters and opt-in for CockroachDB {{ site.data.products.dedicated }} clusters. [Org Administrators]({% link cockroachcloud/authorization.md %}#org-administrator) must initiate major version upgrades for CockroachDB {{ site.data.products.dedicated }} clusters. When a new major version is available, Admins will be able to [start an upgrade]({% link cockroachcloud/upgrade-to-v23.1.md %}) from the CockroachDB {{ site.data.products.cloud }} Console for clusters using the paid version of CockroachDB {{ site.data.products.dedicated }}. When a major version upgrade is initiated for a cluster, it will upgrade to the latest patch version as well.
+Major version upgrades are automatic for CockroachDB {{ site.data.products.serverless }} clusters and opt-in for CockroachDB {{ site.data.products.dedicated }} clusters. An [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) must initiate major version upgrades for CockroachDB {{ site.data.products.dedicated }} clusters. When a new major version is available, Admins will be able to [start an upgrade]({% link cockroachcloud/upgrade-to-v23.1.md %}) from the CockroachDB {{ site.data.products.cloud }} Console for clusters using CockroachDB {{ site.data.products.dedicated }}. When a major version upgrade is initiated for a cluster, it will upgrade to the latest patch version as well.
### Pre-production preview upgrades
-Prior to the GA release of a major CockroachDB version, CockroachDB {{ site.data.products.cloud }} organizations can create new clusters or upgrade existing clusters to a Pre-Production Preview release for testing and experimentation using a beta or release candidate (RC) of that next major version. Upgrading to a Pre-Production Preview is a major-version upgrade. After a cluster is upgraded to a Pre-Production Preview release, it is automatically upgraded to all subsequent releases within the same major version—including additional beta and RC releases, the GA release, and subsequent patch releases after GA, as [patch version upgrades](#patch-version-upgrades). To learn more, refer to [Upgrade to v23.2 Pre-Production Preview](https://cockroachlabs.com/docs/cockroachcloud/upgrade-to-v23.2).
+
+Prior to the GA release of a major CockroachDB version, CockroachDB {{ site.data.products.cloud }} organizations can create new {{ site.data.products.dedicated }} clusters or upgrade existing clusters to a Pre-Production Preview release for testing and experimentation using a beta or release candidate (RC) of that next major version. Upgrading to a Pre-Production Preview is a major-version upgrade. After a cluster is upgraded to a Pre-Production Preview release, it is automatically upgraded to all subsequent releases within the same major version—including additional beta and RC releases, the GA release, and subsequent patch releases after GA, as [patch version upgrades](#patch-version-upgrades). To learn more, refer to [Upgrade to v24.1 Pre-Production Preview](https://cockroachlabs.com/docs/cockroachcloud/upgrade-to-v24.1).
### Rollback support
@@ -52,18 +51,18 @@ To stop the upgrade and roll back to the latest patch release of the previous ma
If you choose to roll back a major version upgrade, your cluster will be rolled back to the latest patch release of the previous major version, which may differ from the patch release you were running before you initiated the upgrade.
{{site.data.alerts.end}}
-During rollback, nodes are reverted to that prior version one at a time, without interrupting the cluster's health and availability.
+During rollback, nodes are reverted one at a time to the latest patch release of the prior major version, without interrupting the cluster's health and availability.
-If you see problems after a major version upgrade has been finalized, it will not be possible to roll back via the CockroachDB {{ site.data.products.cloud }} Console. For assistance, [contact support](https://support.cockroachlabs.com/hc/requests/new).
+If you notice problems after a major version upgrade has been finalized, it will not be possible to roll back via the CockroachDB {{ site.data.products.cloud }} Console. For assistance, [contact support](https://support.cockroachlabs.com/hc/requests/new).
-### End of Support for older CockroachDB versions
+### End of Support for CockroachDB versions
As CockroachDB releases new major versions, older versions reach their End of Support (EOS) on CockroachDB {{ site.data.products.cloud }}. A CockroachDB version reaches EOS when it is two major versions behind the latest version. For example, when CockroachDB v21.2 was released, CockroachDB v20.2 reached EOS.
Clusters running unsupported CockroachDB versions are not eligible for our [availability SLA](https://www.cockroachlabs.com/cloud-terms-and-conditions/). Further downgrades in support may occur as per the [CockroachDB Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy).
-If you are running a CockroachDB version nearing EOS, you will be reminded at least one month before that version’s EOS that your clusters must be upgraded by the EOS date to avoid losing support. A Org Administrator can [upgrade your cluster]({% link cockroachcloud/upgrade-to-v23.1.md %}) directly from the CockroachDB {{ site.data.products.cloud }} Console.
+If you are running a CockroachDB version nearing EOS, you will be reminded at least one month before that version’s EOS that your clusters must be upgraded by the EOS date to avoid losing support. An Org Administrator can [upgrade your cluster]({% link cockroachcloud/upgrade-to-v23.2.md %}) directly from the CockroachDB {{ site.data.products.cloud }} Console.
-## See also
+## Additional information
For more details about the upgrade and finalization process, see [Upgrade to the Latest CockroachDB Version](https://cockroachlabs.com/docs/cockroachcloud/upgrade-to-v23.1).
diff --git a/src/current/cockroachcloud/upgrade-to-v24.1.md b/src/current/cockroachcloud/upgrade-to-v24.1.md
index 39d4f3516f7..ff894212a60 100644
--- a/src/current/cockroachcloud/upgrade-to-v24.1.md
+++ b/src/current/cockroachcloud/upgrade-to-v24.1.md
@@ -6,7 +6,7 @@ docs_area: manage
page_version: v24.1
prev_version: v23.2
pre_production_preview: true
-pre_production_preview_version: v24.1.0-beta.1
+pre_production_preview_version: v24.1.0-rc.1
---
{% if page.pre_production_preview == true %}
diff --git a/src/current/images/v24.1/ui_statement_fingerprint_charts.png b/src/current/images/v24.1/ui_statement_fingerprint_charts.png
index fd65961ceb8..5ab63bfe0fb 100644
Binary files a/src/current/images/v24.1/ui_statement_fingerprint_charts.png and b/src/current/images/v24.1/ui_statement_fingerprint_charts.png differ
diff --git a/src/current/images/v24.1/ui_statement_fingerprint_overview.png b/src/current/images/v24.1/ui_statement_fingerprint_overview.png
index bd2a24d3a24..18c40a80f7f 100644
Binary files a/src/current/images/v24.1/ui_statement_fingerprint_overview.png and b/src/current/images/v24.1/ui_statement_fingerprint_overview.png differ
diff --git a/src/current/releases/index.md b/src/current/releases/index.md
index 6ec063dfbc7..19655d9ffea 100644
--- a/src/current/releases/index.md
+++ b/src/current/releases/index.md
@@ -1,11 +1,11 @@
---
title: Releases
-summary: Release notes for older versions of CockroachDB.
+summary: Information about CockroachDB releases with an index of available releases and their release notes and binaries.
toc: true
docs_area: releases
toc_not_nested: true
-pre_production_preview: false
-pre_production_preview_version: v23.2.0-beta.3
+pre_production_preview: true
+pre_production_preview_version: v24.1.0-beta.1
---
{% comment %}
@@ -14,11 +14,10 @@ of this file, block-level HTML is indented in relation to the other HTML, and bl
indented in relation to the other Liquid. Please try to keep the indentation consistent. Thank you!
{% endcomment %}
-After downloading your desired release, learn how to [install CockroachDB](https://www.cockroachlabs.com/docs/stable/install-cockroachdb). Also be sure to review Cockroach Labs' [Release Support Policy]({% link releases/release-support-policy.md %}).
+After downloading a supported CockroachDB binary, learn how to [install CockroachDB](https://www.cockroachlabs.com/docs/stable/install-cockroachdb). Be sure to review Cockroach Labs' [Release Support Policy]({% link releases/release-support-policy.md %}).
-- **Generally Available (GA)** releases are qualified for production environments.
-- **Limited Access** binaries allow you to validate CockroachDB on architectures that will soon become generally available. In certain cases, limited access binaries are available only to enrolled organizations. To enroll your organization, contact your account representative.
-- **Testing** releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments. Testing releases allow you to begin testing and validating the next major version of CockroachDB early. Testing releases are not eligible for support or uptime SLA commitments.
+- **Generally Available (GA)** releases (also known as Production releases) are qualified for production environments. These may have either a default GA support type or an extended LTS (Long-Term Support) designation. Refer to [Release Support Policy]({% link releases/release-support-policy.md %}) for more information.
+- **Testing** releases are intended for testing and experimentation only; they are not qualified for production environments and are not eligible for support or uptime SLA commitments. Testing releases allow you to begin testing and validating the next major version of CockroachDB early.
- **Experimental** binaries allow you to deploy and develop with CockroachDB on architectures that are not yet qualified for production use. Experimental binaries are not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.
For more details, refer to [Release Naming](#release-naming).
@@ -29,7 +28,7 @@ In CockroachDB v22.2.x and above, a cluster that is upgraded to an alpha binary
## Staged release process
-As of 2024, CockroachDB is released under a staged delivery process. New releases are made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads.
+As of 2024, CockroachDB is released under a staged delivery process. New releases are made available for select CockroachDB Cloud organizations for two weeks before binaries are published for CockroachDB Self-Hosted downloads.
{{ experimental_js_warning }}
@@ -109,9 +108,12 @@ As of 2024, CockroachDB is released under a staged delivery process. New release
{% for r in releases %}
+
+ {% capture lts_link_linux %}{% if r.lts == true %} ([LTS]({% link releases/release-support-policy.md %}#support-types)){% endif %}{% endcapture %}
+
{% comment %} Add "Latest" class to release if it's the latest release. {% endcomment %}
- {{ r.release_name }} {% comment %} Add link to each release r. {% endcomment %}
+ {{ r.release_name }}{{ lts_link_linux }}{% comment %} Add link to each release r, decorate with link about LTS if applicable. {% endcomment %}
{% if r.release_name == latest_hotfix.release_name %}
Latest {% comment %} Add "Latest" badge to release if it's the latest release. {% endcomment %}
{% endif %}
@@ -138,7 +140,7 @@ As of 2024, CockroachDB is released under a staged delivery process. New release
{% if r.linux.linux_arm_experimental == true %}Experimental:{% endif %}
{% comment %} If a sha256sum is available for a particular release, we display a link to the file containing the sha256sum alongside the download link of the release. {% endcomment %}
{% if r.has_sql_only == true %}
-
{% comment %} If a sha256sum is available for a particular release, we display a link to the file containing the sha256sum alongside the download link of the release. {% endcomment %}
+
{% comment %} If a sha256sum is available for a particular release, we display a link to the file containing the sha256sum alongside the download link of the release. {% endcomment %}
{% endif %}
{% endif %}
@@ -192,9 +194,9 @@ macOS downloads are **experimental**. Experimental downloads are not yet qualifi
{% break %}
{% else %}
{% endif %}
@@ -269,9 +271,14 @@ macOS downloads are **experimental**. Experimental downloads are not yet qualifi
{% for r in releases %}
+
+ {% capture lts_link_docker %}{% if r.lts == true %} ([LTS]({% link releases/release-support-policy.md %}#support-types)){% endif %}{% endcapture %}
+
{% comment %} Add "Latest" class to release if it's the latest release. {% endcomment %}
- {{ r.release_name }} {% comment %} Add link to each release r. {% endcomment %}
+ {{ r.release_name }} {% comment %} Add link to each release r.
+{% endcomment %}
{% if r.release_name == latest_hotfix.release_name %}
Latest {% comment %} Add "Latest" badge to release if it's the latest release. {% endcomment %}
{% endif %}
@@ -297,7 +304,7 @@ macOS downloads are **experimental**. Experimental downloads are not yet qualifi
{% elsif r.docker.docker_arm_experimental == true %}
**Intel**: GA **ARM**: Experimental
{% else %}
- GA
+ GA{{ lts_link_docker }}
{% endif %}
@@ -318,9 +325,10 @@ macOS downloads are **experimental**. Experimental downloads are not yet qualifi
{% for r in releases %}
+
{% comment %} Add "Latest" class to release if it's the latest release. {% endcomment %}
- {{ r.release_name }} {% comment %} Add link to each release r. {% endcomment %}
+ {{ r.release_name }}{% comment %} Add link to each release r {% endcomment %}
{% if r.release_name == latest_hotfix.release_name %}
Latest {% comment %} Add "Latest" badge to release if it's the latest release. {% endcomment %}
{% endif %}
@@ -353,7 +361,7 @@ macOS downloads are **experimental**. Experimental downloads are not yet qualifi
## Release naming
-Cockroach Labs uses a three-component calendar versioning scheme to name CockroachDB [releases](https://cockroachlabs.com/docs/releases/index#production-releases). The format is `YY.R.PP`, where `YY` indicates the year, `R` indicates the release (“1” or “2”, representing a typical biannual cycle), and `PP` indicates the patch release version. Example: Version 23.1.0 (abbreviated v23.1.0). Leading up to a new major version's initial GA (Generally Available) release, multiple testing builds are produced, moving from Alpha to Beta to Release Candidate. CockroachDB began using this versioning scheme with v19.1.
+Cockroach Labs uses a three-component calendar versioning scheme to name CockroachDB [releases](https://cockroachlabs.com/docs/releases/index#production-releases). The format is `YY.R.PP`, where `YY` indicates the year, `R` indicates the release (historically “1” or “2”, representing a typical biannual cycle), and `PP` indicates the patch release version. Example: Version 23.1.0 (abbreviated v23.1.0). Leading up to a new major version's initial GA (Generally Available) release, multiple testing builds are produced, moving from Alpha to Beta to Release Candidate. CockroachDB began using this versioning scheme with v19.1.
A major release is typically produced twice a year indicating major enhancements to product functionality. A change in the `YY.R` component denotes a major release.
@@ -364,5 +372,5 @@ During development of a major version of CockroachDB, releases are produced acco
- Alpha releases are the earliest testing releases leading up to a major version's initial GA (generally available) release, and have `alpha` in the version name. Example: `v23.1.0-alpha.1`.
- Beta releases are produced after the series of alpha releases leading up to a major version's initial GA release, and tend to be more stable and introduce fewer changes than alpha releases. They have `beta` in the version name. Example: `v23.1.0-beta.1`.
- Release candidates are produced after the series of beta releases and are nearly identical to what will become the initial generally available (GA) release. Release candidates have `rc` in the version name. Example: `v23.1.0-rc.1`.
-- A major version's GA release is produced after the series of release candidates for a major version, and ends with `0`. Example: `v23.1.0`. GA releases are validated and suitable for production environments.
+- A major version's initial GA release is produced after the series of release candidates for a major version, and ends with `0`. Example: `v23.1.0`. GA releases are validated and suitable for production environments.
- Patch (maintenance) releases are produced after a major version's GA release, and are numbered sequentially. Example: `v23.1.13`.
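Taken together, the naming scheme is regular enough to validate mechanically. A minimal sketch of the pattern (the regular expression is illustrative, not an official grammar):

~~~ sql
-- Check candidate version strings against the YY.R.PP scheme,
-- with an optional -alpha.N / -beta.N / -rc.N testing suffix.
SELECT v,
       v ~ '^v[0-9]{2}\.[12]\.[0-9]+(-(alpha|beta|rc)\.[0-9]+)?$' AS matches_scheme
FROM (VALUES ('v23.1.0'), ('v23.1.0-rc.1'), ('v23.1.13'), ('23.1')) AS t(v);
~~~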
diff --git a/src/current/releases/release-support-policy.md b/src/current/releases/release-support-policy.md
index 755bab83c46..e897b9b4229 100644
--- a/src/current/releases/release-support-policy.md
+++ b/src/current/releases/release-support-policy.md
@@ -9,27 +9,38 @@ docs_area: releases
{% assign versions = site.data.versions | where_exp: "versions", "versions.release_date <= today" | sort: "release_date" | reverse %} {% comment %} Get all versions (e.g., v21.2) sorted in reverse chronological order. {% endcomment %}
-This page explains Cockroach Labs' policy for supporting [major releases]({% link releases/index.md %}) of CockroachDB.
+This page explains Cockroach Labs' policy for supporting [production releases]({% link releases/index.md %}) of CockroachDB Self-Hosted. For clusters deployed in {{ site.data.products.cloud }}, refer to the [CockroachDB {{ site.data.products.cloud }} Support and Upgrade Policy](https://www.cockroachlabs.com/docs/cockroachcloud/upgrade-policy).
-{{site.data.alerts.callout_info}}
-For CockroachDB {{ site.data.products.cloud }} clusters, see the [CockroachDB {{ site.data.products.cloud }} Upgrade Policy](https://www.cockroachlabs.com/docs/cockroachcloud/upgrade-policy).
-{{site.data.alerts.end}}
+There are two support types: GA and LTS (Long-Term Support). Each patch release of CockroachDB is assigned one of these types. The default is GA, unless otherwise specified.
-## Support cycle
+Initially, a major release series has GA support. After the series demonstrates a continuously high level of stability and performance, new patch releases are designated as LTS releases, which provide extended support windows. The support type determines the time spans of a release’s support phases: Maintenance Support, Assistance Support, and EOL (End of Life).
-Each major release of CockroachDB goes through the following support cycle:
+## Support Phases
-- **Maintenance Support:** For at least 365 days from the major release date, Cockroach Labs will produce regular patch releases that include critical security fixes and resolutions to problems identified by users.
+- **Maintenance Support**: Cockroach Labs will produce regular patch releases that include critical security fixes and resolutions to problems identified by users.
-- **Assistance Support:** Following the maintenance support period, Cockroach Labs will provide assistance support for at least an additional 180 days. During this period, the following guidelines will apply:
- - New enhancements and error corrections will not be made to the major release.
- - Cockroach Labs will direct customers to existing fixes/patches and workarounds applicable to the reported case.
- - Cockroach Labs may direct customers to [upgrade](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version) to a more current version of the product if a workaround does not exist.
- - Cockroach Labs will continue to add critical security fixes to the major release in the form of patch releases.
+- **Assistance Support**: Immediately follows the Maintenance Support period. During this period, the following guidelines apply:
+ - New enhancements will not be made to the major release.
+ - Cockroach Labs will continue to add critical security fixes to the major release in the form of patch releases.
+ - Patch releases for the purpose of resolving bugs or other errors may no longer be made to the major release.
+ - Cockroach Labs may direct customers to workarounds or other fixes applicable to the reported case.
+ - Cockroach Labs may direct customers to [upgrade](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version) to a later version of the product, to resolve or further troubleshoot an issue.
-- **End of Life (EOL):** Following the assistance support period, Cockroach Labs will no longer provide any support for the release.
+- **End of Life (EOL)**: Following the assistance support period, Cockroach Labs will no longer provide any support for the release.
-Cockroach Labs will notify you by mail or email 6 months in advance of a major release transitioning into **Assistance Support** or **EOL**.
+## Support Types
+
+* **GA Support**: The default support type for production releases. It begins with the initial production release of a major version and applies to each subsequent patch release until LTS releases begin for that major version.
+ * **Maintenance support ends**:
+ * **365 days after** the day of the **first production release** of the major version (i.e., the ‘GA release,’ ending in .0).
+ * **Assistance support ends**:
+ * **180 days after** the **Maintenance Support end date** of the release.
+ * Major versions prior to v23.1 will not have LTS releases.
+* **LTS (Long-Term Support)**: Conferred on the initial LTS maintenance release of a given major version and on its subsequent maintenance releases. LTS provides extended support windows and indicates our highest level of expected release stability and performance.
+ * **Maintenance support ends**:
+ * **365 days after** the day of the **first LTS release** of the major version.
+ * **Assistance support ends**:
+ * **365 days after** the **Maintenance Support end date** of the release.
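
To make the two schedules concrete, the end dates follow from simple interval arithmetic. A minimal sketch in SQL, using hypothetical release dates (2023-05-15 for GA, 2023-11-13 for the first LTS release):

~~~ sql
-- Hypothetical dates: GA release on 2023-05-15, first LTS release on 2023-11-13.
SELECT '2023-05-15'::DATE + '365 days'::INTERVAL AS ga_maintenance_ends,   -- 365 days after GA
       '2023-05-15'::DATE + '545 days'::INTERVAL AS ga_assistance_ends,    -- 180 days after that
       '2023-11-13'::DATE + '365 days'::INTERVAL AS lts_maintenance_ends,  -- 365 days after first LTS
       '2023-11-13'::DATE + '730 days'::INTERVAL AS lts_assistance_ends;   -- 365 days after that
~~~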
## Current supported releases
@@ -40,22 +51,134 @@ Date format: YYYY-MM-DD
-Version
-Release Date
+Major Version
+Patch Versions
+Support Type
+Initial Release
Maintenance Support ends
-Assistance Support ends (EOL Date)
+Assistance Support ends
+
{% for v in versions %}
{% assign r_latest = site.data.releases | where_exp: "r_latest", "r_latest.major_version == v.major_version" | where_exp: "r_latest", "r_latest.withdrawn != true" | sort: "release_date" | last | map: "version" %} {% comment %} Calculate the latest non-withdrawn release for a version v. {% endcomment %}
-{% if v.maint_supp_exp_date != "N/A" %}{{ v.maint_supp_exp_date }}{% endif %}
+{% if v.asst_supp_exp_date != "N/A" %}{{ v.asst_supp_exp_date }}{% endif %}
+
+  {% endif %}
+
+ {% endfor %} {% comment %} Display each non-EOL version, its release date, its maintenance support expiration date, its assistance support expiration date, and its LTS maintenance and assistance support dates. Also include links to the latest hotfix version. {% endcomment %}
+
-* Version has reached EOL
+* : This major version will receive LTS patch releases, which will be listed on an additional row when they become available.
+
+## End-of-life (EOL) releases
+
+The following releases are no longer supported.
+
+
+Major Version
+Patch Versions
+Support Type
+Initial Release
+Maintenance Support ended
+Assistance Support ended
+
+ {% for v in versions %}
+ {% assign r_latest = site.data.releases | where_exp: "r_latest", "r_latest.major_version == v.major_version" | where_exp: "r_latest", "r_latest.withdrawn != true" | sort: "release_date" | last | map: "version" %} {% comment %} Calculate the latest non-withdrawn release for a version v. {% endcomment %}
+
+ {% comment %}Evaluate whether the version is EOL for GA or LTS or both{% endcomment %}
+ {% assign r_eol = false %}
+ {% assign r_lts_eol = false %}
+ {% if v.lts_asst_supp_exp_date != "N/A" %}
+ {% comment %}This major version has LTS releases{% endcomment %}
+ {% assign r_has_lts = true %}
+ {% if v.lts_asst_supp_exp_date < today %}
+ {% comment %}LTS releases exist for this major version and are EOL{% endcomment %}
+ {% assign r_lts_eol = true %}
+ {% endif %}
+ {% endif %}
+ {% if v.asst_supp_exp_date < today %}
+ {% comment %}GA releases in this version are EOL{% endcomment %}
+ {% assign r_eol = true %}
+ {% endif %}
+
+ {% if r_eol == true %}
+ {% if r_lts_eol == true %}
+{% comment %} For LTS releases print an LTS row first{% endcomment %}
+{% if v.maint_supp_exp_date != "N/A" %}{{ v.maint_supp_exp_date }}{% endif %}
+{% if v.asst_supp_exp_date != "N/A" %}{{ v.asst_supp_exp_date }}{% endif %}
+
+  {% endif %}
+
+ {% endfor %} {% comment %} Display each EOL version, its release date, its maintenance support expiration date, its assistance support expiration date, and its LTS maintenance and assistance support dates. Also include links to the latest hotfix version. {% endcomment %}
+
+
diff --git a/src/current/releases/v24.1.md b/src/current/releases/v24.1.md
index 8e5bbbedc6d..3c1cb5de992 100644
--- a/src/current/releases/v24.1.md
+++ b/src/current/releases/v24.1.md
@@ -30,7 +30,7 @@ Get future release notes emailed to you:
{% comment %}
{% unless vers.release_date == "N/A" or vers.release_date > today %}
-To upgrade to {{ page.major_version }}, see [Upgrade to CockroachDB {{ page.major_version }}](https://www.cockroachlabs.com/docs/{{ page.major_version }}/upgrade-cockroach-version).
+To upgrade to {{ page.major_version }}, refer to [Upgrade to CockroachDB {{ page.major_version }}](https://www.cockroachlabs.com/docs/{{ page.major_version }}/upgrade-cockroach-version).
{% endunless %}
{% endcomment %}
diff --git a/src/current/v22.2/sso-sql.md b/src/current/v22.2/sso-sql.md
index fe480dffec6..444c562aa0d 100644
--- a/src/current/v22.2/sso-sql.md
+++ b/src/current/v22.2/sso-sql.md
@@ -1,6 +1,6 @@
---
title: Cluster Single Sign-on (SSO) using a JSON web token (JWT)
-summary: Overview of Cluster Single Sign-on (SSO) for CockroachDB {{ site.data.products.core }}, review of authenticating users, configuring required cluster settings.
+summary: Overview of Cluster Single Sign-on (SSO) for CockroachDB Cloud, review of authenticating users, configuring required cluster settings.
toc: true
docs_area: manage
---
diff --git a/src/current/v23.1/cockroach-debug-zip.md b/src/current/v23.1/cockroach-debug-zip.md
index be7f866e364..30e01d01685 100644
--- a/src/current/v23.1/cockroach-debug-zip.md
+++ b/src/current/v23.1/cockroach-debug-zip.md
@@ -80,7 +80,6 @@ The following information is also contained in the `.zip` file, and cannot be fi
- [Cluster Settings]({% link {{ page.version.version }}/cluster-settings.md %})
- [Metrics]({% link {{ page.version.version }}/metrics.md %})
- [Replication Reports]({% link {{ page.version.version }}/query-replication-reports.md %})
-- Problem ranges
- CPU profiles
- A script (`hot-ranges.sh`) that summarizes the hottest ranges (ranges receiving a high number of reads or writes)
@@ -114,7 +113,7 @@ Flag | Description
`--files-until` | End timestamp for log file, goroutine dump, and heap profile collection. This can be used to limit the size of the generated `.zip`, which is increased by these files. The timestamp uses the format `YYYY-MM-DD`, followed optionally by `HH:MM:SS` or `HH:MM`. For example:
`--files-until='2021-07-01 16:00'`
When specifying a narrow time window, we recommend adding extra seconds/minutes to account for uncertainties such as clock drift.
**Default:** 24 hours beyond now (to include files created during `.zip` creation)
`--include-files` | [Files](#files) to include in the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://wikipedia.org/wiki/Glob_(programming)). For example:
`--include-files=*.pprof`
Note that this flag is applied _before_ `--exclude-files`. Use [`cockroach debug list-files`]({% link {{ page.version.version }}/cockroach-debug-list-files.md %}) with this flag to see a list of files that will be contained in the `.zip`.
`--include-goroutine-stacks` | Fetch stack traces for all goroutines running on each targeted node in `nodes/*/stacks.txt` and `nodes/*/stacks_with_labels.txt` files. Note that fetching stack traces for all goroutines is a "stop-the-world" operation, which can momentarily have negative impacts on SQL service latency. Exclude these goroutine stacks by using the `--include-goroutine-stacks=false` flag. Note that any periodic goroutine dumps previously taken on the node will still be included in `nodes/*/goroutines/*.txt.gz`, as these would have already been generated and don't require any additional stop-the-world operations to be collected.
**Default:** true
-`--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.
This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). Note however many higher-level issues are ultimately related to the underlying KV layer described by these files so only set this to `false` if directed to do so by Cockroach Labs support.
**Default:** true
+`--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.
This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). However, many higher-level issues are ultimately related to the underlying KV layer described by these files. Only set this to `false` if directed to do so by Cockroach Labs support.
In addition, include problem ranges information in `reports/problemranges.json`.
**Default:** true
`--nodes` | Specify nodes to inspect as a comma-separated list or range of node IDs. For example:
`--nodes=1,10,13-15`
`--redact` | Redact sensitive data from the generated `.zip`, with the exception of range keys, which must remain unredacted because they are essential to support CockroachDB. This flag replaces the deprecated `--redact-logs` flag, which only applied to log messages contained within `.zip`. See [Redact sensitive information](#redact-sensitive-information) for an example.
`--redact-logs` | **Deprecated** Redact sensitive data from collected log files only. Use the `--redact` flag instead, which redacts sensitive data across the entire generated `.zip` as well as the collected log files. Passing the `--redact-logs` flag will be interpreted as the `--redact` flag.
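
Taken together, the flags above compose naturally in a single invocation. A hedged sketch (the output path, host, and node range are placeholders; `--insecure` assumes a non-TLS test cluster):

~~~ shell
# Collect a redacted debug bundle from nodes 1-3,
# limiting collected files to heap profiles.
cockroach debug zip ./debug.zip \
  --insecure \
  --host=localhost:26257 \
  --nodes=1-3 \
  --include-files='*.pprof' \
  --redact
~~~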
diff --git a/src/current/v23.1/sso-sql.md b/src/current/v23.1/sso-sql.md
index 1f8baedfc7e..87cb19e3bd4 100644
--- a/src/current/v23.1/sso-sql.md
+++ b/src/current/v23.1/sso-sql.md
@@ -1,6 +1,6 @@
---
title: Cluster Single Sign-on (SSO) using JSON web tokens (JWTs)
-summary: Overview of Cluster Single Sign-on (SSO) for CockroachDB {{ site.data.products.core }}, review of authenticating users, configuring required cluster settings.
+summary: Overview of Cluster Single Sign-on (SSO) for CockroachDB Cloud, review of authenticating users, configuring required cluster settings.
toc: true
docs_area: manage
---
diff --git a/src/current/v23.1/upgrade-cockroach-version.md b/src/current/v23.1/upgrade-cockroach-version.md
index 368cae848d3..a8ff465b97c 100644
--- a/src/current/v23.1/upgrade-cockroach-version.md
+++ b/src/current/v23.1/upgrade-cockroach-version.md
@@ -14,16 +14,16 @@ docs_area: manage
Because of CockroachDB's [multi-active availability]({% link {{ page.version.version }}/multi-active-availability.md %}) design, you can perform a "rolling upgrade" of your CockroachDB cluster. This means that you can upgrade nodes one at a time without interrupting the cluster's overall health and operations.
-This page describes how to upgrade to the latest **{{ page.version.version }}** release, **{{ latest.release_name }}**. To upgrade CockroachDB on Kubernetes, refer to [single-cluster]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) or [multi-cluster]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}#upgrade-the-cluster) instead.
+This page describes how to upgrade to the latest **{{ page.version.version }}** release, **{{ latest.release_name }}**{% if latest.lts == true %} ([LTS]({% link releases/release-support-policy.md %}#support-types)){% endif %}. To upgrade CockroachDB on Kubernetes, refer to [single-cluster]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) or [multi-cluster]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}#upgrade-the-cluster) instead.
## Terminology
Before upgrading, review the CockroachDB [release](../releases/) terminology:
-- A new *major release* is performed every 6 months. The major version number indicates the year of release followed by the release number, which will be either 1 or 2. For example, the latest major release is {{ actual_latest_prod.major_version }} (also written as {{ actual_latest_prod.major_version }}.0).
-- Each [supported](https://www.cockroachlabs.com/docs/releases/release-support-policy) major release is maintained across *patch releases* that fix crashes, security issues, and data correctness issues. Each patch release increments the major version number with its corresponding patch number. For example, patch releases of {{ actual_latest_prod.major_version }} use the format {{ actual_latest_prod.major_version }}.x.
-- All major and patch releases are suitable for production usage, and are therefore considered "production releases". For example, the latest production release is {{ actual_latest_prod.release_name }}.
-- Prior to an upcoming major release, alpha and beta releases and release candidates are made available. These "testing releases" are not suitable for production usage. They are intended for users who need early access to a feature before it is available in a production release. These releases append the terms `alpha`, `beta`, or `rc` to the version number.
+- A new *major release* is performed multiple times per year. The major version number indicates the year of release followed by the release number, starting with 1. For example, the latest major release is {{ actual_latest_prod.major_version }}.
+- Each [supported](https://www.cockroachlabs.com/docs/releases/release-support-policy) major release is maintained across *patch releases* that contain improvements including performance or security enhancements and bug fixes. Each patch release increments the major version number with its corresponding patch number. For example, patch releases of {{ actual_latest_prod.major_version }} use the format {{ actual_latest_prod.major_version }}.x.
+- All major and patch releases are suitable for production environments, and are therefore considered "production releases". For example, the latest production release is {{ actual_latest_prod.release_name }}.
+- Prior to an upcoming major release, alpha, beta, and release candidate (RC) binaries are made available for users who need early access to a feature before it is available in a production release. These releases append the terms `alpha`, `beta`, or `rc` to the version number. These "testing releases" are not suitable for production environments and are not eligible for support or uptime SLA commitments. For more information, refer to the [Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy).
{{site.data.alerts.callout_info}}
There are no "minor releases" of CockroachDB.
diff --git a/src/current/v23.2/changefeed-messages.md b/src/current/v23.2/changefeed-messages.md
index 9edddc161cb..57af36ec66b 100644
--- a/src/current/v23.2/changefeed-messages.md
+++ b/src/current/v23.2/changefeed-messages.md
@@ -17,6 +17,7 @@ This page describes the format and behavior of changefeed messages. You will fin
- [Resolved messages](#resolved-messages): The resolved timestamp option and how to configure it.
- [Duplicate messages](#duplicate-messages): The causes of duplicate messages from a changefeed.
- [Schema changes](#schema-changes): The effect of schema changes on a changefeed.
+- [Filtering changefeed messages](#filtering-changefeed-messages): The settings and syntax to prevent and filter the messages that changefeeds emit.
- [Message formats](#message-formats): The limitations and type mapping when creating a changefeed with different message formats.
{{site.data.alerts.callout_info}}
@@ -478,6 +479,31 @@ Refer to the [`CREATE CHANGEFEED` option table]({% link {{ page.version.version
{% include {{ page.version.version }}/cdc/virtual-computed-column-cdc.md %}
{{site.data.alerts.end}}
+## Filtering changefeed messages
+
+There are several ways to define the messages a changefeed emits, filter specific types of messages, or prevent a changefeed from emitting any messages to the sink. The following sections outline the configurable settings and SQL syntax that handle these use cases.
+
+### Prevent changefeeds from emitting row-level TTL deletes
+
+{% include_cached new-in.html version="v23.2" %} To prevent changefeeds from emitting deletes issued by all [TTL jobs]({% link {{ page.version.version }}/row-level-ttl.md %}) on a cluster, set the `sql.ttl.changefeed_replication.disabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) to `true`.
+
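For example, the setting named above could be enabled cluster-wide as follows:

~~~ sql
SET CLUSTER SETTING sql.ttl.changefeed_replication.disabled = true;
~~~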
+### Disable changefeeds from emitting messages
+
+{% include_cached new-in.html version="v23.2" %} To prevent changefeeds from emitting messages for any changes (e.g., `INSERT`, `UPDATE`) issued to watched tables during that session, set the `disable_changefeed_replication` [session variable]({% link {{ page.version.version }}/session-variables.md %}) to `true`.
+
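For example, within a SQL session:

~~~ sql
SET disable_changefeed_replication = true;
~~~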
+### Define the change data emitted to a sink
+
+When you create a changefeed, use change data capture queries to define the change data emitted to your sink.
+
+For example:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE CHANGEFEED INTO 'scheme://sink-URI' WITH updated AS SELECT column, column FROM table;
+~~~
+
+For details on syntax and examples, refer to the [Change Data Capture Queries]({% link {{ page.version.version }}/cdc-queries.md %}) page.
+
## Message formats
{% include {{ page.version.version }}/cdc/message-format-list.md %}
diff --git a/src/current/v23.2/cockroach-debug-zip.md b/src/current/v23.2/cockroach-debug-zip.md
index 93f2cc90a9e..261d5c22e08 100644
--- a/src/current/v23.2/cockroach-debug-zip.md
+++ b/src/current/v23.2/cockroach-debug-zip.md
@@ -80,7 +80,6 @@ The following information is also contained in the `.zip` file, and cannot be fi
- [Cluster Settings]({% link {{ page.version.version }}/cluster-settings.md %})
- [Metrics]({% link {{ page.version.version }}/metrics.md %})
- [Replication Reports]({% link {{ page.version.version }}/query-replication-reports.md %})
-- Problem ranges
- CPU profiles
- A script (`hot-ranges.sh`) that summarizes the hottest ranges (ranges receiving a high number of reads or writes)
@@ -114,7 +113,7 @@ Flag | Description
`--files-until` | End timestamp for log file, goroutine dump, and heap profile collection. This can be used to limit the size of the generated `.zip`, which is increased by these files. The timestamp uses the format `YYYY-MM-DD`, followed optionally by `HH:MM:SS` or `HH:MM`. For example:
`--files-until='2021-07-01 16:00'`
When specifying a narrow time window, we recommend adding extra seconds/minutes to account for uncertainties such as clock drift.
**Default:** 24 hours beyond now (to include files created during `.zip` creation)
`--include-files` | [Files](#files) to include in the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://wikipedia.org/wiki/Glob_(programming)). For example:
`--include-files=*.pprof`
Note that this flag is applied _before_ `--exclude-files`. Use [`cockroach debug list-files`]({% link {{ page.version.version }}/cockroach-debug-list-files.md %}) with this flag to see a list of files that will be contained in the `.zip`.
`--include-goroutine-stacks` | Fetch stack traces for all goroutines running on each targeted node in `nodes/*/stacks.txt` and `nodes/*/stacks_with_labels.txt` files. Note that fetching stack traces for all goroutines is a "stop-the-world" operation, which can momentarily have negative impacts on SQL service latency. Exclude these goroutine stacks by using the `--include-goroutine-stacks=false` flag. Note that any periodic goroutine dumps previously taken on the node will still be included in `nodes/*/goroutines/*.txt.gz`, as these would have already been generated and don't require any additional stop-the-world operations to be collected.
**Default:** true
-`--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.
This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). Note however many higher-level issues are ultimately related to the underlying KV layer described by these files so only set this to `false` if directed to do so by Cockroach Labs support.
**Default:** true
+`--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.
This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). However, many higher-level issues are ultimately related to the underlying KV layer described by these files. Only set this to `false` if directed to do so by Cockroach Labs support.
In addition, include problem ranges information in `reports/problemranges.json`.
**Default:** true
`--include-running-job-traces` | Include information about each running, traceable job (such as [backup]({% link {{ page.version.version }}/backup.md %}), [restore]({% link {{ page.version.version }}/restore.md %}), [import]({% link {{ page.version.version }}/import-into.md %}), [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %})) in `jobs/*/*/trace.zip` files. This involves collecting cluster-wide traces for each running job in the cluster.
**Default:** true
`--nodes` | Specify nodes to inspect as a comma-separated list or range of node IDs. For example:
`--nodes=1,10,13-15`
`--redact` | Redact sensitive data from the generated `.zip`, with the exception of range keys, which must remain unredacted because they are essential to support CockroachDB. This flag replaces the deprecated `--redact-logs` flag, which only applied to log messages contained within `.zip`. See [Redact sensitive information](#redact-sensitive-information) for an example.
diff --git a/src/current/v23.2/create-virtual-cluster.md b/src/current/v23.2/create-virtual-cluster.md
index 52a9214901c..377a280cf4a 100644
--- a/src/current/v23.2/create-virtual-cluster.md
+++ b/src/current/v23.2/create-virtual-cluster.md
@@ -57,7 +57,7 @@ When you [initiate a replication stream]({% link {{ page.version.version }}/set-
{% include_cached copy-clipboard.html %}
~~~
-'postgresql://{replication user}:{password}@{node IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full&sslrootcert=certs/{primary cert}.crt'
+'postgresql://{replication user}:{password}@{node IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full&sslrootcert=certs/{primary cert}.crt'
~~~
To form a connection string similar to the example, include the following values and query parameters. Replace values in `{...}` with the appropriate values for your configuration:
diff --git a/src/current/v23.2/cutover-replication.md b/src/current/v23.2/cutover-replication.md
index b6da91cf3de..1f767452159 100644
--- a/src/current/v23.2/cutover-replication.md
+++ b/src/current/v23.2/cutover-replication.md
@@ -44,9 +44,9 @@ SHOW VIRTUAL CLUSTER application WITH REPLICATION STATUS;
{% include_cached copy-clipboard.html %}
~~~
- id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
------+--------------------+-------------+--------------+--------------------+-----------------------------------------------------------------------------------------------------------------------+--------------------+------------------------------+-------------------------------+---------------
- 5 | application | replicating | none | application | postgresql://user:redacted@host/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 911803003607220225 | 2023-10-26 17:36:52.27978+00 | 2023-10-26 14:36:52.279781+00 | NULL
+ id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
+-----+--------------------+-------------+--------------+--------------------+-----------------------------------------------------------------------------------------------------+--------------------+------------------------------+-------------------------------+---------------
+ 5 | application | replicating | none | application | postgresql://user:redacted@host?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 911803003607220225 | 2023-10-26 17:36:52.27978+00 | 2023-10-26 14:36:52.279781+00 | NULL
~~~
Run the following from the standby cluster's SQL shell to start the cutover:
@@ -82,7 +82,7 @@ The `retained_time` response provides the earliest time to which you can cut ove
~~~
id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
---+--------------------+--------------------+--------------+--------------------+----------------------------------------------------------------------------------------------------------------------+--------------------+-------------------------------+-------------------------------+---------------
-3 | application | replicating | none | application | postgresql://{user}:redacted@{hostname}:26257/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 899090689449132033 | 2023-09-11 22:29:35.085548+00 | 2023-09-11 16:51:43.612846+00 | NULL
+3 | application | replicating | none | application | postgresql://{user}:redacted@{hostname}:26257?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 899090689449132033 | 2023-09-11 22:29:35.085548+00 | 2023-09-11 16:51:43.612846+00 | NULL
(1 row)
~~~
@@ -117,9 +117,9 @@ To monitor for when the replication stream completes, use [`SHOW VIRTUAL CLUSTER
SHOW VIRTUAL CLUSTER application WITH REPLICATION STATUS;
~~~
~~~
- id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
- -----+---------------------+-----------------------------+--------------+--------------------+-------------------------------------------------------------------------------------------------------------------+--------------------+------------------------------+-------------------------------+---------------------------------
- 4 | application | replication pending cutover | none | application | postgresql://user:redacted@3ip:26257/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 903895265809498113 | 2023-09-28 17:41:18.03092+00 | 2023-09-28 16:09:04.327473+00 | 1695922878030920020.0000000000
+ id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
+ -----+---------------------+-----------------------------+--------------+--------------------+---------------------------------------------------------------------------------------------------------------------+--------------------+------------------------------+-------------------------------+---------------------------------
+ 4 | application | replication pending cutover | none | application | postgresql://{user}:{password}@{hostname}:26257?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 903895265809498113 | 2023-09-28 17:41:18.03092+00 | 2023-09-28 16:09:04.327473+00 | 1695922878030920020.0000000000
(1 row)
~~~
diff --git a/src/current/v23.2/physical-cluster-replication-monitoring.md b/src/current/v23.2/physical-cluster-replication-monitoring.md
index bf0c8075833..a99e1b6547c 100644
--- a/src/current/v23.2/physical-cluster-replication-monitoring.md
+++ b/src/current/v23.2/physical-cluster-replication-monitoring.md
@@ -35,7 +35,7 @@ Refer to [Responses](#responses) for a description of each field.
~~~
id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
---+--------------------+--------------------+--------------+--------------------+----------------------------------------------------------------------------------------------------------------------+--------------------+-------------------------------+-------------------------------+---------------
-3 | application | replicating | none | application | postgresql://{user}:{password}@{hostname}:26257/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 899090689449132033 | 2023-09-11 22:29:35.085548+00 | 2023-09-11 16:51:43.612846+00 | NULL
+3 | application | replicating | none | application | postgresql://{user}:{password}@{hostname}:26257?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 899090689449132033 | 2023-09-11 22:29:35.085548+00 | 2023-09-11 16:51:43.612846+00 | NULL
(1 row)
~~~
diff --git a/src/current/v23.2/physical-cluster-replication-overview.md b/src/current/v23.2/physical-cluster-replication-overview.md
index 5f90783055e..4c904174d4c 100644
--- a/src/current/v23.2/physical-cluster-replication-overview.md
+++ b/src/current/v23.2/physical-cluster-replication-overview.md
@@ -91,14 +91,14 @@ To connect to a virtualized cluster using the SQL shell:
{% include_cached copy-clipboard.html %}
~~~ shell
- cockroach sql --url "postgresql://root@{your IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full" --certs-dir "certs"
+ cockroach sql --url "postgresql://root@{your IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full" --certs-dir "certs"
~~~
- For the application virtual cluster, include the `options=-ccluster=application` parameter in the `postgresql` connection URL:
{% include_cached copy-clipboard.html %}
~~~ shell
- cockroach sql --url "postgresql://root@{your IP or hostname}:26257/?options=-ccluster=application&sslmode=verify-full" --certs-dir "certs"
+ cockroach sql --url "postgresql://root@{your IP or hostname}:26257?options=-ccluster=application&sslmode=verify-full" --certs-dir "certs"
~~~
{{site.data.alerts.callout_info}}
diff --git a/src/current/v23.2/row-level-ttl.md b/src/current/v23.2/row-level-ttl.md
index 228b8464162..a6278c4f231 100644
--- a/src/current/v23.2/row-level-ttl.md
+++ b/src/current/v23.2/row-level-ttl.md
@@ -135,7 +135,7 @@ SHOW CREATE TABLE ttl_test_per_table;
The settings that control the behavior of Row-Level TTL are provided using [storage parameters]({% link {{ page.version.version }}/sql-grammar.md %}#opt_with_storage_parameter_list). These parameters can be set during table creation using [`CREATE TABLE`](#create-a-table-with-a-ttl_expiration_expression), added to an existing table using the [`ALTER TABLE`](#add-or-update-the-row-level-ttl-for-an-existing-table) statement, or [reset to default values](#reset-a-storage-parameter-to-its-default-value).
-| Description | Option | Associated cluster setting |
+| Option | Description | Associated cluster setting |
|----------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------|
| `ttl_expiration_expression` | **Recommended in v22.2+**. SQL expression that defines the TTL expiration. Must evaluate to a [`TIMESTAMPTZ`]({% link {{ page.version.version }}/timestamp.md %}). This and/or [`ttl_expire_after`](#param-ttl-expire-after) are required to enable TTL. This parameter is useful when you want to set the TTL for individual rows in the table. For an example, see [Create a table with a `ttl_expiration_expression`](#create-a-table-with-a-ttl_expiration_expression). | N/A |
| `ttl_expire_after` | The [interval]({% link {{ page.version.version }}/interval.md %}) when a TTL will expire. This and/or [`ttl_expiration_expression`](#param-ttl-expiration-expression) are required to enable TTL. Minimum value: `'1 microsecond'`. | N/A |
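
A minimal sketch of both approaches, using a hypothetical `events` table:

~~~ sql
-- Set a TTL at creation time: rows expire 30 days after insertion.
CREATE TABLE events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    description STRING,
    inserted_at TIMESTAMPTZ NOT NULL DEFAULT now()
) WITH (ttl_expire_after = '30 days');

-- Later, widen the window on the existing table.
ALTER TABLE events SET (ttl_expire_after = '90 days');
~~~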
@@ -556,6 +556,8 @@ Row-level TTL interacts with [changefeeds]({% link {{ page.version.version }}/cr
- When expired rows are deleted, a [changefeed delete message]({% link {{ page.version.version }}/changefeed-messages.md %}#delete-messages) is emitted.
+{% include {{ page.version.version }}/cdc/disable-replication-ttl.md %}
+
For guidance on how to filter changefeed messages to emit row-level TTL deletes only, refer to [Change Data Capture Queries]({% link {{ page.version.version }}/cdc-queries.md %}#reference-ttl-in-a-cdc-query).
## Backup and restore
diff --git a/src/current/v23.2/set-transaction.md b/src/current/v23.2/set-transaction.md
index b957aefaa04..c7794ae9451 100644
--- a/src/current/v23.2/set-transaction.md
+++ b/src/current/v23.2/set-transaction.md
@@ -5,7 +5,7 @@ toc: true
docs_area: reference.sql
---
-The `SET TRANSACTION` [statement]({% link {{ page.version.version }}/sql-statements.md %}) sets the transaction priority, access mode, and "as of" timestamp after you [`BEGIN`]({% link {{ page.version.version }}/begin-transaction.md %}) it but before executing the first statement that manipulates a database.
+The `SET TRANSACTION` [statement]({% link {{ page.version.version }}/sql-statements.md %}) sets the transaction priority, access mode, "as of" timestamp, and isolation level. These are applied after you [`BEGIN`]({% link {{ page.version.version }}/begin-transaction.md %}) the transaction and before executing the first statement that manipulates a database.
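
For instance, a minimal sketch of setting several modes between `BEGIN` and the first data-manipulating statement (the `accounts` table is hypothetical):

~~~ sql
BEGIN;
SET TRANSACTION PRIORITY HIGH, ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
~~~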
{{site.data.alerts.callout_info}}
{% include {{ page.version.version }}/sql/use-the-default-transaction-priority.md %}
diff --git a/src/current/v23.2/set-up-physical-cluster-replication.md b/src/current/v23.2/set-up-physical-cluster-replication.md
index 4259f4d5a19..1d97e29a142 100644
--- a/src/current/v23.2/set-up-physical-cluster-replication.md
+++ b/src/current/v23.2/set-up-physical-cluster-replication.md
@@ -78,7 +78,7 @@ Connect to your primary cluster's system virtual cluster using [`cockroach sql`]
{% include_cached copy-clipboard.html %}
~~~ shell
cockroach sql --url \
- "postgresql://root@{node IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full" \
+ "postgresql://root@{node IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full" \
--certs-dir "certs"
~~~
@@ -147,7 +147,7 @@ The standby cluster connects to the primary cluster's system virtual cluster usi
{% include_cached copy-clipboard.html %}
~~~ shell
- cockroach workload init movr "postgresql://root@{node_advertise_address}:{node_advertise_port}/?options=-ccluster=application&sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
+ cockroach workload init movr "postgresql://root@{node_advertise_address}:{node_advertise_port}?options=-ccluster=application&sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
~~~
Replace `{node_advertise_address}` and `{node_advertise_port}` with a node's [`--advertise-addr`]({% link {{ page.version.version }}/cockroach-start.md %}#flags-advert-addr) IP address or hostname and port.
@@ -165,7 +165,7 @@ The standby cluster connects to the primary cluster's system virtual cluster usi
{% include_cached copy-clipboard.html %}
~~~ shell
- cockroach workload run movr --duration=5m "postgresql://root@{node_advertise_address}:{node_advertise_port}/?options=-ccluster=application&sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
+ cockroach workload run movr --duration=5m "postgresql://root@{node_advertise_address}:{node_advertise_port}?options=-ccluster=application&sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
~~~
1. To connect to the primary cluster's application virtual cluster, use the `ccluster=application` parameter:
@@ -173,7 +173,7 @@ The standby cluster connects to the primary cluster's system virtual cluster usi
{% include_cached copy-clipboard.html %}
~~~ shell
cockroach sql --url \
- "postgresql://root@{node IP or hostname}:26257/?options=-ccluster=application&sslmode=verify-full" \
+ "postgresql://root@{node IP or hostname}:26257?options=-ccluster=application&sslmode=verify-full" \
--certs-dir "certs"
~~~
@@ -219,7 +219,7 @@ Connect to your standby cluster's system virtual cluster using [`cockroach sql`]
{% include_cached copy-clipboard.html %}
~~~ shell
cockroach sql --url \
- "postgresql://root@{node IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full" \
+ "postgresql://root@{node IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full" \
--certs-dir "certs"
~~~
@@ -310,7 +310,7 @@ The system virtual cluster in the standby cluster initiates and controls the rep
~~~ sql
CREATE VIRTUAL CLUSTER application LIKE template
FROM REPLICATION OF application
- ON 'postgresql://{replication user}:{password}@{node IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full&sslrootcert=certs/{primary cert}.crt';
+ ON 'postgresql://{replication user}:{password}@{node IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full&sslrootcert=certs/{primary cert}.crt';
~~~
{% include {{ page.version.version }}/physical-replication/like-description.md %}
@@ -356,9 +356,9 @@ The system virtual cluster in the standby cluster initiates and controls the rep
{% include_cached copy-clipboard.html %}
~~~
- id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
- ---+--------------------+--------------------+--------------+--------------------+----------------------------------------------------------------------------------------------------------------------+--------------------+-------------------------------+-------------------------------+---------------
- 3 | application | replicating | none | application | postgresql://{user}:{password}@{hostname}:26257/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 899090689449132033 | 2023-09-11 22:29:35.085548+00 | 2023-09-11 16:51:43.612846+00 | NULL
+ id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
+ ---+--------------------+--------------------+--------------+--------------------+---------------------------------------------------------------------------------------------------------------------+--------------------+-------------------------------+-------------------------------+---------------
+ 3 | application | replicating | none | application | postgresql://{user}:{password}@{hostname}:26257?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 899090689449132033 | 2023-09-11 22:29:35.085548+00 | 2023-09-11 16:51:43.612846+00 | NULL
(1 row)
~~~
@@ -372,9 +372,9 @@ For additional detail on the standard CockroachDB connection parameters, refer t
Cluster | Interface | Usage | URL and Parameters
--------+-----------+-------+------------+----
-Primary | System | Set up a replication user and view running virtual clusters. Connect with [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). | `"postgresql://root@{node IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full"`
`options=-ccluster=system`
`sslmode=verify-full`
Use the `--certs-dir` flag to specify the path to your certificate.
-Primary | Application | Add and run a workload with [`cockroach workload`]({% link {{ page.version.version }}/cockroach-workload.md %}). | `"postgresql://root@{node IP or hostname}:{26257}/?options=-ccluster=application&sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"`
{% include {{ page.version.version }}/connect/cockroach-workload-parameters.md %} As a result, for the example in this tutorial, you will need:
`options=-ccluster=application`
`sslmode=verify-full`
`sslrootcert={path}/certs/ca.crt`
`sslcert={path}/certs/client.root.crt`
`sslkey={path}/certs/client.root.key`
-Standby | System | Manage the replication stream. Connect with [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). | `"postgresql://root@{node IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full"`
`options=-ccluster=system`
`sslmode=verify-full`
Use the `--certs-dir` flag to specify the path to your certificate.
+Primary | System | Set up a replication user and view running virtual clusters. Connect with [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). | `"postgresql://root@{node IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full"`
`options=-ccluster=system`
`sslmode=verify-full`
Use the `--certs-dir` flag to specify the path to your certificate.
+Primary | Application | Add and run a workload with [`cockroach workload`]({% link {{ page.version.version }}/cockroach-workload.md %}). | `"postgresql://root@{node IP or hostname}:{26257}?options=-ccluster=application&sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"`
{% include {{ page.version.version }}/connect/cockroach-workload-parameters.md %} As a result, for the example in this tutorial, you will need:
`options=-ccluster=application`
`sslmode=verify-full`
`sslrootcert={path}/certs/ca.crt`
`sslcert={path}/certs/client.root.crt`
`sslkey={path}/certs/client.root.key`
+Standby | System | Manage the replication stream. Connect with [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). | `"postgresql://root@{node IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full"`
`options=-ccluster=system`
`sslmode=verify-full`
Use the `--certs-dir` flag to specify the path to your certificate.
## What's next
diff --git a/src/current/v23.2/show-virtual-cluster.md b/src/current/v23.2/show-virtual-cluster.md
index 761c3f2ae41..a10395705a9 100644
--- a/src/current/v23.2/show-virtual-cluster.md
+++ b/src/current/v23.2/show-virtual-cluster.md
@@ -77,7 +77,7 @@ SHOW VIRTUAL CLUSTER application;
~~~
id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
-----+--------------------+-------------+--------------+--------------------+-----------------------------------------------------------------------------------------------------------------------+--------------------+------------------------------+-------------------------------+---------------
- 5 | application | replicating | none | application | postgresql://user:redacted@host/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 911803003607220225 | 2023-10-26 17:36:52.27978+00 | 2023-10-26 14:36:52.279781+00 | NULL
+ 5 | application | replicating | none | application | postgresql://user:redacted@host?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 911803003607220225 | 2023-10-26 17:36:52.27978+00 | 2023-10-26 14:36:52.279781+00 | NULL
~~~
### Show replication status
diff --git a/src/current/v23.2/sso-sql.md b/src/current/v23.2/sso-sql.md
index 163700877e4..aaf460bc01e 100644
--- a/src/current/v23.2/sso-sql.md
+++ b/src/current/v23.2/sso-sql.md
@@ -1,6 +1,6 @@
---
title: Cluster Single Sign-on (SSO) using JSON web tokens (JWTs)
-summary: Overview of Cluster Single Sign-on (SSO) for CockroachDB {{ site.data.products.core }}, review of authenticating users, configuring required cluster settings.
+summary: Overview of Cluster Single Sign-on (SSO) for CockroachDB Self-Hosted, review of authenticating users, configuring required cluster settings.
toc: true
docs_area: manage
---
diff --git a/src/current/v23.2/upgrade-cockroach-version.md b/src/current/v23.2/upgrade-cockroach-version.md
index 45ab6da0936..42be2dd9389 100644
--- a/src/current/v23.2/upgrade-cockroach-version.md
+++ b/src/current/v23.2/upgrade-cockroach-version.md
@@ -14,16 +14,16 @@ docs_area: manage
Because of CockroachDB's [multi-active availability]({% link {{ page.version.version }}/multi-active-availability.md %}) design, you can perform a "rolling upgrade" of your CockroachDB cluster. This means that you can upgrade nodes one at a time without interrupting the cluster's overall health and operations.
-This page describes how to upgrade to the latest **{{ page.version.version }}** release, **{{ latest.release_name }}**. To upgrade CockroachDB on Kubernetes, refer to [single-cluster]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) or [multi-cluster]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}#upgrade-the-cluster) instead.
+This page describes how to upgrade to the latest **{{ page.version.version }}** release, **{{ latest.release_name }}**{% if latest.lts == true %} ([LTS]({% link releases/release-support-policy.md %}#support-types)){% endif %}. To upgrade CockroachDB on Kubernetes, refer to [single-cluster]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) or [multi-cluster]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}#upgrade-the-cluster) instead.
## Terminology
Before upgrading, review the CockroachDB [release](../releases/) terminology:
-- A new *major release* is performed every 6 months. The major version number indicates the year of release followed by the release number, which will be either 1 or 2. For example, the latest major release is {{ actual_latest_prod.major_version }} (also written as {{ actual_latest_prod.major_version }}.0).
-- Each [supported](https://www.cockroachlabs.com/docs/releases/release-support-policy) major release is maintained across *patch releases* that fix crashes, security issues, and data correctness issues. Each patch release increments the major version number with its corresponding patch number. For example, patch releases of {{ actual_latest_prod.major_version }} use the format {{ actual_latest_prod.major_version }}.x.
-- All major and patch releases are suitable for production usage, and are therefore considered "production releases". For example, the latest production release is {{ actual_latest_prod.release_name }}.
-- Prior to an upcoming major release, alpha and beta releases and release candidates are made available. These "testing releases" are not suitable for production usage. They are intended for users who need early access to a feature before it is available in a production release. These releases append the terms `alpha`, `beta`, or `rc` to the version number.
+- A new *major release* is performed multiple times per year. The major version number indicates the year of release followed by the release number, starting with 1. For example, the latest major release is {{ actual_latest_prod.major_version }}.
+- Each [supported](https://www.cockroachlabs.com/docs/releases/release-support-policy) major release is maintained across *patch releases* that contain improvements including performance or security enhancements and bug fixes. Each patch release increments the major version number with its corresponding patch number. For example, patch releases of {{ actual_latest_prod.major_version }} use the format {{ actual_latest_prod.major_version }}.x.
+- All major and patch releases are suitable for production environments, and are therefore considered "production releases". For example, the latest production release is {{ actual_latest_prod.release_name }}.
+- Prior to an upcoming major release, alpha, beta, and release candidate (RC) binaries are made available for users who need early access to a feature before it is available in a production release. These releases append the terms `alpha`, `beta`, or `rc` to the version number. These "testing releases" are not suitable for production environments and are not eligible for support or uptime SLA commitments. For more information, refer to the [Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy).
{{site.data.alerts.callout_info}}
There are no "minor releases" of CockroachDB.
diff --git a/src/current/v23.2/work-with-virtual-clusters.md b/src/current/v23.2/work-with-virtual-clusters.md
index 8ec5cc4409f..972b00508b7 100644
--- a/src/current/v23.2/work-with-virtual-clusters.md
+++ b/src/current/v23.2/work-with-virtual-clusters.md
@@ -34,7 +34,7 @@ For example:
{% include_cached copy-clipboard.html %}
~~~ shell
cockroach sql --url \
-"postgresql://root@{node IP or hostname}:26257/?options=-options=-ccluster={virtual_cluster_name}&sslmode=verify-full" \
+"postgresql://root@{node IP or hostname}:26257?options=-options=-ccluster={virtual_cluster_name}&sslmode=verify-full" \
--certs-dir "certs"
~~~
@@ -58,7 +58,7 @@ For example, to connect to the system virtual cluster using the `cockroach sql`
{% include_cached copy-clipboard.html %}
~~~ shell
cockroach sql --url \
-"postgresql://root@{node IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full" \
+"postgresql://root@{node IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full" \
--certs-dir "certs"
~~~
diff --git a/src/current/v24.1/alter-virtual-cluster.md b/src/current/v24.1/alter-virtual-cluster.md
index 3253008dc52..6855d076760 100644
--- a/src/current/v24.1/alter-virtual-cluster.md
+++ b/src/current/v24.1/alter-virtual-cluster.md
@@ -44,13 +44,15 @@ Parameter | Description
`RESUME REPLICATION` | Resume the replication stream.
`COMPLETE REPLICATION TO` | Set the time to complete the replication. Use:
`SYSTEM TIME` to specify a [timestamp]({% link {{ page.version.version }}/as-of-system-time.md %}). Refer to [Cut over to a point in time]({% link {{ page.version.version }}/cutover-replication.md %}#cut-over-to-a-point-in-time) for an example.
`LATEST` to specify the most recent replicated timestamp. Refer to [Cut over to a point in time]({% link {{ page.version.version }}/cutover-replication.md %}#cut-over-to-the-most-recent-replicated-time) for an example.
`SET REPLICATION RETENTION = duration` | Change the [duration]({% link {{ page.version.version }}/interval.md %}) of the retention window that will control how far in the past you can [cut over]({% link {{ page.version.version }}/cutover-replication.md %}) to.
{% include {{ page.version.version }}/physical-replication/retention.md %}
+`SET REPLICATION EXPIRATION WINDOW = duration` | Override the producer job's default expiration window of 24 hours. The expiration window determines how long the producer job continues to run without a heartbeat from the consumer job; see the sketch after this table. Refer to the [Technical Overview]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}) for more details.
+`START REPLICATION OF virtual_cluster_spec ON physical_cluster` | Reset a virtual cluster to the time when the virtual cluster on the promoted standby diverged from it. To reuse as much of the existing data on the original primary cluster as possible, you can run this statement as part of the [cutback]({% link {{ page.version.version }}/cutover-replication.md %}#cut-back-to-the-primary-cluster) process. This command fails if the virtual cluster was not originally replicated from the original primary cluster.
+`START SERVICE SHARED` | Start a virtual cluster so it is ready to accept SQL connections after cutover.
+`RENAME TO virtual_cluster_spec` | Rename a virtual cluster.
+`STOP SERVICE` | Stop the `shared` service for a virtual cluster. The virtual cluster's `data_state` will still be `ready` so that the service can be restarted.
`GRANT ALL CAPABILITIES` | Grant a virtual cluster all [capabilities]({% link {{ page.version.version }}/create-virtual-cluster.md %}#capabilities).
`REVOKE ALL CAPABILITIES` | Revoke all [capabilities]({% link {{ page.version.version }}/create-virtual-cluster.md %}#capabilities) from a virtual cluster.
`GRANT CAPABILITY virtual_cluster_capability_list` | Specify a [capability]({% link {{ page.version.version }}/create-virtual-cluster.md %}#capabilities) to grant to a virtual cluster.
`REVOKE CAPABILITY virtual_cluster_capability_list` | Revoke a [capability]({% link {{ page.version.version }}/create-virtual-cluster.md %}#capabilities) from a virtual cluster.
-`RENAME TO virtual_cluster_spec` | Rename a virtual cluster.
-`START SERVICE SHARED` | Start a virtual cluster. That is, start the standby's virtual cluster so it is ready to accept SQL connections after cutover.
-`STOP SERVICE` | Stop the `shared` service for a virtual cluster. Note that the virtual cluster's `data_state` will remain as `ready` for the service to be started once again.
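
As a sketch of the new expiration-window parameter (the virtual cluster name `main` matches the examples below; the 48-hour duration is hypothetical):

~~~ sql
ALTER VIRTUAL CLUSTER main SET REPLICATION EXPIRATION WINDOW = '48h';
~~~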
## Examples
@@ -60,7 +62,7 @@ To start the [cutover]({% link {{ page.version.version }}/cutover-replication.md
{% include_cached copy-clipboard.html %}
~~~ sql
-ALTER VIRTUAL CLUSTER application COMPLETE REPLICATION TO {cutover time specification};
+ALTER VIRTUAL CLUSTER main COMPLETE REPLICATION TO {cutover time specification};
~~~
You can use either:
@@ -68,13 +70,17 @@ You can use either:
- `SYSTEM TIME` to specify a [timestamp]({% link {{ page.version.version }}/as-of-system-time.md %}).
- `LATEST` to specify the most recent replicated timestamp.
+### Start the cutback process
+
+{% include {{ page.version.version }}/physical-replication/fast-cutback-syntax.md %}
+
### Set a retention window
You can change the retention window to protect data from [garbage collection]({% link {{ page.version.version }}/architecture/storage-layer.md %}#garbage-collection). The retention window controls how far in the past you can [cut over]({% link {{ page.version.version }}/cutover-replication.md %}) to:
{% include_cached copy-clipboard.html %}
~~~ sql
-ALTER VIRTUAL CLUSTER application SET REPLICATION RETENTION = '24h';
+ALTER VIRTUAL CLUSTER main SET REPLICATION RETENTION = '24h';
~~~
{% include {{ page.version.version }}/physical-replication/retention.md %}
@@ -85,14 +91,14 @@ When a virtual cluster is [`ready`]({% link {{ page.version.version }}/show-virt
{% include_cached copy-clipboard.html %}
~~~ sql
-ALTER VIRTUAL CLUSTER application START SHARED SERVICE;
+ALTER VIRTUAL CLUSTER main START SERVICE SHARED;
~~~
To stop the `shared` service for a virtual cluster and prevent it from accepting SQL connections:
{% include_cached copy-clipboard.html %}
~~~ sql
-ALTER VIRTUAL CLUSTER application STOP SERVICE;
+ALTER VIRTUAL CLUSTER main STOP SERVICE;
~~~
## See also
diff --git a/src/current/v24.1/automatic-cpu-profiler.md b/src/current/v24.1/automatic-cpu-profiler.md
index abb173e88d5..1bdcde8af51 100644
--- a/src/current/v24.1/automatic-cpu-profiler.md
+++ b/src/current/v24.1/automatic-cpu-profiler.md
@@ -8,42 +8,28 @@ docs_area: manage
This feature automatically captures CPU profiles, which can make it easier to investigate and troubleshoot spikes in CPU usage or erratic CPU load on certain nodes. A CPU profile shows the functions that use the most CPU time, sampled over a window of time. You can collect a CPU Profile manually on the [Advanced Debug page]({% link {{ page.version.version }}/ui-debug-pages.md %}). However, it may be difficult to manually capture a profile of a short CPU spike at the right point in time. Automatic CPU profile capture enables the investigation of CPU load in this and other cases, such as periodic high CPU.
-{{site.data.alerts.callout_danger}}
-We strongly recommend only using the Automatic CPU Profiler when working directly with the [Cockroach Labs support team]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
## Configuration
You can configure automatic CPU profile capture with the following [cluster settings]({% link {{ page.version.version }}/cluster-settings.md %}):
-Cluster Setting | Description | Default Value | Recommended Value
-----------------|-------------|---------------|------------------
-`server.cpu_profile.cpu_usage_combined_threshold` | The baseline value for when CPU profiles should be taken. Collect profiles from each node that meets threshold. This setting enables and disables automatic cpu profiling.<br><br>If a value of 0 is set, a profile will be taken every time the `server.cpu_profile.interval` has passed or the provided usage is increasing.<br>If a value greater than 0 and less than or equal to 100 is set, the profiler is enabled.<br>If a value over 100 is set, the profiler is disabled. | MAX integer, such as `9223372036854775807` | `80`
-`server.cpu_profile.interval` | The period of time after which the [high-water mark](#high-water-mark-threshold) resets to the baseline value. | `5m0s` (5 minutes) | `1m40s` (100 seconds)
-`server.cpu_profile.duration` | The length of time a CPU profile is taken. | `10s` (10 seconds) | `1s` or `2s`
-`server.cpu_profile.total_dump_size_limit` | Maximum combined disk size for preserving CPU profiles. | `128 MiB` (128 Mebibytes) |
-
-### Enabling automatic CPU profile capture
-
-To enable automatic CPU profile capture, you must [set]({% link {{ page.version.version }}/set-cluster-setting.md %}) `server.cpu_profile.cpu_usage_combined_threshold` to a value between `0` and `100`. Preferably, use the [recommended value](#recommended-values).
-
-### Recommended values
+Cluster Setting | Description | Default Value
+----------------|-------------|---------------
+`server.cpu_profile.cpu_usage_combined_threshold` | The baseline threshold of CPU usage at which a CPU profile is taken from a node. This value is a percentage.<br><br>If a value of `0` is set, a profile is taken every time the `server.cpu_profile.interval` has passed or the provided usage is increasing.<br>If a value greater than `0` and less than or equal to `100` is set, the profiler is enabled (default).<br>If a value greater than `100` is set, the profiler is disabled. | `65`
+`server.cpu_profile.interval` | The period of time after which the [high-water mark](#high-water-mark-threshold) resets to the baseline value. | `20m0s` (20 minutes)
+`server.cpu_profile.duration` | The length of time a CPU profile is taken. | `10s` (10 seconds)
+`server.cpu_profile.total_dump_size_limit` | Maximum combined disk size for preserving CPU profiles. | `128 MiB` (128 Mebibytes)
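+
+For example, to set the baseline threshold to its default of `65` (a minimal sketch using the cluster setting from the table above):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET CLUSTER SETTING server.cpu_profile.cpu_usage_combined_threshold = 65;
+~~~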
-- Set `server.cpu_profile.cpu_usage_combined_threshold` to `80` for 80%.
-- Set `server.cpu_profile.duration` to a lower value, for example `1s` or `2s`. This minimizes the impact of [overhead](#overhead) on your cluster compared to the current default value.
-- Set `server.cpu_profile.interval` to a lower value, for example `1m40s` (1 minute 40 seconds).
-
-### High-water mark threshold
+## High-water mark threshold
The Automatic CPU Profiler runs asynchronously in the background. After every second, the Automatic CPU Profiler checks if the CPU usage exceeds the high-water mark threshold. If so, it captures a CPU profile. If a profile capture is already in progress, a second profile is not taken.
-The Automatic CPU Profiler uses the configuration options to determine the high-water mark threshold. For example, with `duration` set to `2s` , `interval` set to `1m40s`, and `cpu_usage_combined_threshold` set to `80`:
+The Automatic CPU Profiler uses the configuration options to determine the high-water mark threshold. For example, with `duration` set to `10s`, `interval` set to `20m0s`, and `cpu_usage_combined_threshold` set to `65`:
-- At `time0` the CPU usage polled is 82 percent which exceeds the baseline threshold of `80`, so a `2s` profile is captured and the high-water mark threshold becomes `82`.
-- After the `2s` profile capture, the Automatic CPU Profiler continues to check every second if the CPU usage now exceeds `82` percent.
-- At `time1` the CPU usage polled is 85 percent, another profile is taken for `2s` and the high-water mark threshold becomes `85`.
-- At `time2`, the `1m40s` interval after `time0`, the high-water mark threshold is reset to the baseline threshold of `80`.
-- The Automatic CPU Profiler continues its every second polling and captures profiles when CPU usage exceeds the high-water mark threshold beginning at the `80` percent baseline.
+- At `time0`, the CPU usage polled is `70` percent. This exceeds the baseline threshold of `65`, so a `10s` profile is captured and the high-water mark threshold becomes `70`.
+- After the `10s` profile capture, the Automatic CPU Profiler continues to check every second if the CPU usage now exceeds `70` percent.
+- At `time1`, the CPU usage polled is `80` percent; another profile is taken for `10s`, and the high-water mark threshold becomes `80`.
+- At `time2`, the `20m0s` interval after `time0`, the high-water mark threshold is reset to the baseline threshold of `65`.
+- The Automatic CPU Profiler continues to poll every second, and captures a profile whenever CPU usage exceeds the high-water mark threshold.
## Accessing CPU profiles
@@ -55,19 +41,7 @@ The Automatic CPU Profiler uses the configuration options to determine the high-
Enabling the automatic CPU profile capture on a cluster will add overhead to the cluster in the form of potential increases in latency and CPU usage.
-{{site.data.alerts.callout_info}}
-The decision to enable this feature should be done when advised by the [Cockroach Labs support team]({% link {{ page.version.version }}/support-resources.md %}).
-{{site.data.alerts.end}}
-
- Monitor the following metrics:
- [P99 latency]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#service-latency-sql-99th-percentile)
- P50 latency by creating a [custom chart]({% link {{ page.version.version }}/ui-custom-chart-debug-page.md %}) for the `sql.exec.latency-p50` metric
- [CPU usage]({% link {{ page.version.version }}/ui-hardware-dashboard.md %}#cpu-percent)
-- We anticipate a sub-10% regression on these foreground latency metrics. This overhead to your cluster may be deemed acceptable in order to collect CPU profiles that are necessary to troubleshoot problems in your cluster. Please consult with Cockroach Labs Support.
-- Overhead only occurs during profile capture, not when it is idle. If the cluster can tolerate a `server.cpu_profile.duration` (for example, 1 second) increase in latency to capture the CPU profile, consider enabling the Automatic CPU Profiler.
-
-{{site.data.alerts.callout_danger}}
-Do not have the Automatic CPU Profiler enabled by default.
-
-If you enabled the Automatic CPU Profiler and then notice unacceptable overhead to your cluster, we recommend you immediately disable the Automatic CPU Profiler.
-{{site.data.alerts.end}}
diff --git a/src/current/v24.1/changefeed-messages.md b/src/current/v24.1/changefeed-messages.md
index 489481a3142..67492e0b81e 100644
--- a/src/current/v24.1/changefeed-messages.md
+++ b/src/current/v24.1/changefeed-messages.md
@@ -17,6 +17,7 @@ This page describes the format and behavior of changefeed messages. You will fin
- [Resolved messages](#resolved-messages): The resolved timestamp option and how to configure it.
- [Duplicate messages](#duplicate-messages): The causes of duplicate messages from a changefeed.
- [Schema changes](#schema-changes): The effect of schema changes on a changefeed.
+- [Filtering changefeed messages](#filtering-changefeed-messages): The settings and syntax to filter or suppress the messages that changefeeds emit.
- [Message formats](#message-formats): The limitations and type mapping when creating a changefeed with different message formats.
{{site.data.alerts.callout_info}}
@@ -478,6 +479,31 @@ Refer to the [`CREATE CHANGEFEED` option table]({% link {{ page.version.version
{% include {{ page.version.version }}/cdc/virtual-computed-column-cdc.md %}
{{site.data.alerts.end}}
+## Filtering changefeed messages
+
+There are several ways to define the messages a changefeed emits, filter specific types of messages, or prevent changefeed messages from emitting to the sink entirely. The following sections outline configurable settings and SQL syntax to handle different use cases.
+
+### Prevent changefeeds from emitting row-level TTL deletes
+
+{% include {{ page.version.version }}/cdc/disable-replication-ttl.md %}
+
+### Disable changefeeds from emitting messages
+
+To prevent changefeeds from emitting messages for any changes (e.g., `INSERT`, `UPDATE`) issued to watched tables during a session, set the `disable_changefeed_replication` [session variable]({% link {{ page.version.version }}/session-variables.md %}) to `true` in that session.
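+
+For example, a minimal sketch, run in the session issuing the writes:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET disable_changefeed_replication = true;
+~~~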
+
+### Define the change data emitted to a sink
+
+When you create a changefeed, use change data capture queries to define the change data emitted to your sink.
+
+For example:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE CHANGEFEED INTO 'scheme://sink-URI' WITH updated AS SELECT column, column FROM table;
+~~~
+
+For details on syntax and examples, refer to the [Change Data Capture Queries]({% link {{ page.version.version }}/cdc-queries.md %}) page.
+
## Message formats
{% include {{ page.version.version }}/cdc/message-format-list.md %}
diff --git a/src/current/v24.1/cockroach-debug-zip.md b/src/current/v24.1/cockroach-debug-zip.md
index 93f2cc90a9e..261d5c22e08 100644
--- a/src/current/v24.1/cockroach-debug-zip.md
+++ b/src/current/v24.1/cockroach-debug-zip.md
@@ -80,7 +80,6 @@ The following information is also contained in the `.zip` file, and cannot be fi
- [Cluster Settings]({% link {{ page.version.version }}/cluster-settings.md %})
- [Metrics]({% link {{ page.version.version }}/metrics.md %})
- [Replication Reports]({% link {{ page.version.version }}/query-replication-reports.md %})
-- Problem ranges
- CPU profiles
- A script (`hot-ranges.sh`) that summarizes the hottest ranges (ranges receiving a high number of reads or writes)
@@ -114,7 +113,7 @@ Flag | Description
`--files-until` | End timestamp for log file, goroutine dump, and heap profile collection. This can be used to limit the size of the generated `.zip`, which is increased by these files. The timestamp uses the format `YYYY-MM-DD`, followed optionally by `HH:MM:SS` or `HH:MM`. For example:<br><br>`--files-until='2021-07-01 16:00'`<br><br>When specifying a narrow time window, we recommend adding extra seconds/minutes to account for uncertainties such as clock drift.<br><br>**Default:** 24 hours beyond now (to include files created during `.zip` creation)
`--include-files` | [Files](#files) to include in the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://wikipedia.org/wiki/Glob_(programming)). For example:<br><br>`--include-files=*.pprof`<br><br>Note that this flag is applied _before_ `--exclude-files`. Use [`cockroach debug list-files`]({% link {{ page.version.version }}/cockroach-debug-list-files.md %}) with this flag to see a list of files that will be contained in the `.zip`.
`--include-goroutine-stacks` | Fetch stack traces for all goroutines running on each targeted node in `nodes/*/stacks.txt` and `nodes/*/stacks_with_labels.txt` files. Note that fetching stack traces for all goroutines is a "stop-the-world" operation, which can momentarily have negative impacts on SQL service latency. Exclude these goroutine stacks by using the `--include-goroutine-stacks=false` flag. Note that any periodic goroutine dumps previously taken on the node will still be included in `nodes/*/goroutines/*.txt.gz`, as these would have already been generated and don't require any additional stop-the-world operations to be collected.<br><br>**Default:** true
-`--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.<br><br>This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). Note however many higher-level issues are ultimately related to the underlying KV layer described by these files so only set this to `false` if directed to do so by Cockroach Labs support.<br><br>**Default:** true
+`--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.<br><br>This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). However, many higher-level issues are ultimately related to the underlying KV layer described by these files. Only set this to `false` if directed to do so by Cockroach Labs support.<br><br>In addition, include problem ranges information in `reports/problemranges.json`.<br><br>**Default:** true
`--include-running-job-traces` | Include information about each running, traceable job (such as [backup]({% link {{ page.version.version }}/backup.md %}), [restore]({% link {{ page.version.version }}/restore.md %}), [import]({% link {{ page.version.version }}/import-into.md %}), [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %})) in `jobs/*/*/trace.zip` files. This involves collecting cluster-wide traces for each running job in the cluster.<br><br>**Default:** true
`--nodes` | Specify nodes to inspect as a comma-separated list or range of node IDs. For example:<br><br>`--nodes=1,10,13-15`
`--redact` | Redact sensitive data from the generated `.zip`, with the exception of range keys, which must remain unredacted because they are essential to support CockroachDB. This flag replaces the deprecated `--redact-logs` flag, which only applied to log messages contained within `.zip`. See [Redact sensitive information](#redact-sensitive-information) for an example.
diff --git a/src/current/v24.1/create-changefeed.md b/src/current/v24.1/create-changefeed.md
index 99884b4a267..6a084278824 100644
--- a/src/current/v24.1/create-changefeed.md
+++ b/src/current/v24.1/create-changefeed.md
@@ -175,6 +175,7 @@ Option | Value | Description
`format` | `json` / `avro` / `csv` / `parquet` | Format of the emitted message.<br><br>`avro`: For mappings of CockroachDB types to Avro types, [refer to the table]({% link {{ page.version.version }}/changefeed-messages.md %}#avro-types) and detail on [Avro limitations]({% link {{ page.version.version }}/changefeed-messages.md %}#avro-limitations). **Note:** [`confluent_schema_registry`](#confluent-registry) is required with `format=avro`.<br><br>`csv`: You cannot combine `format=csv` with the [`diff`](#diff-opt) or [`resolved`](#resolved-option) options. Changefeeds use the same CSV format as the [`EXPORT`](export.html) statement. Refer to [Export data with changefeeds]({% link {{ page.version.version }}/export-data-with-changefeeds.md %}) for details using these options to create a changefeed as an alternative to `EXPORT`. **Note:** [`initial_scan = 'only'`](#initial-scan) is required with `format=csv`.<br><br>`parquet`: Cloud storage is the only supported sink. The [`topic_in_value`](#topic-in-value) option is not compatible with `parquet` format.<br><br>Default: `format=json`.
`full_table_name` | N/A | Use fully qualified table name in topics, subjects, schemas, and record output instead of the default table name. This can prevent unintended behavior when the same table name is present in multiple databases.<br><br>**Note:** This option cannot modify existing table names used as topics, subjects, etc., as part of an [`ALTER CHANGEFEED`]({% link {{ page.version.version }}/alter-changefeed.md %}) statement. To modify a topic, subject, etc., to use a fully qualified table name, create a new changefeed with this option.<br><br>Example: `CREATE CHANGEFEED FOR foo... WITH full_table_name` will create the topic name `defaultdb.public.foo` instead of `foo`.
`gc_protect_expires_after` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Automatically expires protected timestamp records that are older than the defined duration. In the case where a changefeed job remains paused, `gc_protect_expires_after` will trigger the underlying protected timestamp record to expire and cancel the changefeed job to prevent accumulation of protected data.<br><br>Refer to [Protect Changefeed Data from Garbage Collection]({% link {{ page.version.version }}/protect-changefeed-data.md %}) for more detail on protecting changefeed data.
+New in v24.1: `ignore_disable_changefeed_replication` | [`BOOL`]({% link {{ page.version.version }}/bool.md %}) | When set to `true`, the changefeed **will emit** events even if CDC filtering for TTL jobs is configured using the `disable_changefeed_replication` [session variable]({% link {{ page.version.version }}/set-vars.md %}), `sql.ttl.changefeed_replication.disabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}), or the `ttl_disable_changefeed_replication` [table storage parameter]({% link {{ page.version.version }}/row-level-ttl.md %}).<br><br>Refer to [Filter changefeeds for tables using TTL](#filter-changefeeds-for-tables-using-row-level-ttl) for usage details.
`initial_scan` | `yes`/`no`/`only` | Control whether or not an initial scan will occur at the start time of a changefeed. Only one `initial_scan` option (`yes`, `no`, or `only`) can be used. If none of these are set, an initial scan will occur if there is no [`cursor`](#cursor-option), and will not occur if there is one. This preserves the behavior from previous releases. With `initial_scan = 'only'` set, the changefeed job will end with a successful status (`succeeded`) after the initial scan completes. You cannot specify `yes`, `no`, `only` simultaneously.<br><br>If used in conjunction with `cursor`, an initial scan will be performed at the cursor timestamp. If no `cursor` is specified, the initial scan is performed at `now()`.<br><br>Although the [`initial_scan` / `no_initial_scan`](https://www.cockroachlabs.com/docs/v21.2/create-changefeed#initial-scan) syntax from previous versions is still supported, you cannot combine the previous and current syntax.<br><br>Default: `initial_scan = 'yes'`
`kafka_sink_config` | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Set fields to configure the required level of message acknowledgement from the Kafka server, the version of the server, and batching parameters for Kafka sinks. Set the message file compression type. See [Kafka sink configuration]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka-sink-configuration) for more detail on configuring all the available fields for this option.<br><br>Example: `CREATE CHANGEFEED FOR table INTO 'kafka://localhost:9092' WITH kafka_sink_config='{"Flush": {"MaxMessages": 1, "Frequency": "1s"}, "RequiredAcks": "ONE"}'`
`key_column` | `'column'` | Override the key used in [message metadata]({% link {{ page.version.version }}/changefeed-messages.md %}). This changes the key hashed to determine downstream partitions. In sinks that support partitioning by message, CockroachDB uses the [32-bit FNV-1a](https://wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function) hashing algorithm to determine which partition to send to.<br><br>**Note:** `key_column` does not preserve ordering of messages from CockroachDB to the downstream sink, therefore you must also include the [`unordered`](#unordered) option in your changefeed creation statement. It does not affect per-key [ordering guarantees]({% link {{ page.version.version }}/changefeed-messages.md %}#ordering-and-delivery-guarantees) or the output of [`key_in_value`](#key-in-value).<br><br>See the [Define a key to determine the changefeed sink partition](#define-a-key-to-determine-the-changefeed-sink-partition) example.
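
For instance, a changefeed that uses the options above to perform a one-time CSV export (a sketch; the table name and sink URI are placeholders):

{% include_cached copy-clipboard.html %}
~~~ sql
CREATE CHANGEFEED FOR TABLE movr.users INTO 'external://cloud-storage' WITH format = csv, initial_scan = 'only';
~~~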
@@ -343,6 +344,12 @@ CREATE CHANGEFEED FOR TABLE table_name INTO 'external://kafka_sink'
WITH resolved;
~~~
+### Filter changefeeds for tables using row-level TTL
+
+{% include {{ page.version.version }}/cdc/disable-replication-ttl.md %}
+
+For guidance on how to filter changefeed messages to emit [row-level TTL]({% link {{ page.version.version }}/row-level-ttl.md %}) deletes only, refer to [Change Data Capture Queries]({% link {{ page.version.version }}/cdc-queries.md %}#reference-ttl-in-a-cdc-query).
+
### Manage a changefeed
For {{ site.data.products.enterprise }} changefeeds, use [`SHOW CHANGEFEED JOBS`]({% link {{ page.version.version }}/show-jobs.md %}) to check the status of your changefeed jobs:
diff --git a/src/current/v24.1/create-function.md b/src/current/v24.1/create-function.md
index f88e368119d..11a00b5d8fc 100644
--- a/src/current/v24.1/create-function.md
+++ b/src/current/v24.1/create-function.md
@@ -194,6 +194,82 @@ SELECT last_rider();
(1 row)
~~~
+### Create a function that uses `OUT` and `INOUT` parameters
+
+The following statement uses a combination of `OUT` and `INOUT` parameters to modify a provided value and output the result. An `OUT` parameter returns a value, while an `INOUT` parameter passes an input value and returns a value.
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE OR REPLACE FUNCTION double_triple(INOUT double INT, OUT triple INT) AS
+ $$
+ BEGIN
+ double := double * 2;
+ triple := double * 3;
+ END;
+ $$ LANGUAGE PLpgSQL;
+~~~
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SELECT double_triple(1);
+~~~
+
+~~~
+ double_triple
+-----------------
+ (2,6)
+~~~
+
+The `CREATE FUNCTION` statement does not need a `RETURN` statement because this is added implicitly for a function with `OUT` parameters:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SHOW CREATE FUNCTION double_triple;
+~~~
+
+~~~
+ function_name | create_statement
+----------------+---------------------------------------------------------------------------
+ double_triple | CREATE FUNCTION public.double_triple(INOUT double INT8, OUT triple INT8)
+ | RETURNS RECORD
+ | VOLATILE
+ | NOT LEAKPROOF
+ | CALLED ON NULL INPUT
+ | LANGUAGE plpgsql
+ | AS $$
+ | BEGIN
+ | double := double * 2;
+ | triple := double * 3;
+ | END;
+ | $$
+~~~
+
+### Create a function that invokes a function
+
+The following statement defines a function that invokes the [`double_triple` example function](#create-a-function-that-uses-out-and-inout-parameters).
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE OR REPLACE FUNCTION f(input_value INT)
+ RETURNS RECORD
+ AS $$
+ BEGIN
+ RETURN double_triple(input_value);
+ END;
+ $$ LANGUAGE PLpgSQL;
+~~~
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SELECT f(1);
+~~~
+
+~~~
+ f
+---------
+ (2,6)
+~~~
+
### Create a function that uses a loop
{% include {{ page.version.version }}/sql/udf-plpgsql-example.md %}
diff --git a/src/current/v24.1/create-procedure.md b/src/current/v24.1/create-procedure.md
index 767ca831ed7..7bdf233dd02 100644
--- a/src/current/v24.1/create-procedure.md
+++ b/src/current/v24.1/create-procedure.md
@@ -68,6 +68,67 @@ NOTICE: (1,foo)
CALL
~~~
+### Create a stored procedure that uses `OUT` and `INOUT` parameters
+
+The following example uses a combination of `OUT` and `INOUT` parameters to modify a provided value and output the result. An `OUT` parameter returns a value, while an `INOUT` parameter passes an input value and returns a value.
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE OR REPLACE PROCEDURE double_triple(INOUT double INT, OUT triple INT) AS
+ $$
+ BEGIN
+ double := double * 2;
+ triple := double * 3;
+ END;
+ $$ LANGUAGE PLpgSQL;
+~~~
+
+When calling a procedure, you need to supply placeholder values for any `OUT` parameters. A `NULL` value is commonly used. When [calling a procedure from another routine](#create-a-stored-procedure-that-calls-a-procedure), you should declare variables that will store the results of the `OUT` parameters.
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CALL double_triple(1, NULL);
+~~~
+
+~~~
+ double | triple
+---------+---------
+ 2 | 6
+~~~
+
+### Create a stored procedure that calls a procedure
+
+The following example defines a procedure that calls the [`double_triple` example procedure](#create-a-stored-procedure-that-uses-out-and-inout-parameters). The `triple_result` variable is assigned the result of the `OUT` parameter, while the `double_input` variable both provides the input and stores the result of the `INOUT` parameter.
+
+{{site.data.alerts.callout_info}}
+A procedure with `OUT` parameters can only be [called from a PL/pgSQL routine]({% link {{ page.version.version }}/plpgsql.md %}#call-a-procedure).
+{{site.data.alerts.end}}
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE OR REPLACE PROCEDURE p(double_input INT) AS
+ $$
+ DECLARE
+ triple_result INT;
+ BEGIN
+ CALL double_triple(double_input, triple_result);
+ RAISE NOTICE 'Doubled value: %', double_input;
+ RAISE NOTICE 'Tripled value: %', triple_result;
+ END
+ $$ LANGUAGE PLpgSQL;
+~~~
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CALL p(1);
+~~~
+
+~~~
+NOTICE: Doubled value: 2
+NOTICE: Tripled value: 6
+CALL
+~~~
+
### Create a stored procedure that uses conditional logic
The following example uses [PL/pgSQL conditional statements]({% link {{ page.version.version }}/plpgsql.md %}#write-conditional-statements):
diff --git a/src/current/v24.1/create-virtual-cluster.md b/src/current/v24.1/create-virtual-cluster.md
index 44b989c75a5..1f886e276f3 100644
--- a/src/current/v24.1/create-virtual-cluster.md
+++ b/src/current/v24.1/create-virtual-cluster.md
@@ -57,7 +57,7 @@ When you [initiate a replication stream]({% link {{ page.version.version }}/set-
{% include_cached copy-clipboard.html %}
~~~
-'postgresql://{replication user}:{password}@{node IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full&sslrootcert=certs/{primary cert}.crt'
+'postgresql://{replication user}:{password}@{node IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full&sslrootcert=certs/{primary cert}.crt'
~~~
To form a connection string similar to the example, include the following values and query parameters. Replace values in `{...}` with the appropriate values for your configuration:
@@ -77,20 +77,20 @@ Value | Description
Cockroach Labs does not recommend changing the default capabilities of created virtual clusters.
{{site.data.alerts.end}}
-_Capabilities_ control what a virtual cluster can do. The [configuration profile]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}#start-the-standby-cluster) included at startup creates the `template` virtual cluster with the same set of capabilities per CockroachDB version. When you start a replication stream, you can specify the `template` VC with `LIKE` to ensure other virtual clusters on the standby cluster will work in the same way. `LIKE` will refer to a virtual cluster on the CockroachDB cluster you're running the statement from.
+_Capabilities_ control what a virtual cluster can do. When you start a replication stream, you can use `LIKE` to specify an existing virtual cluster whose capabilities the new virtual cluster will match, ensuring that virtual clusters on the standby cluster work in the same way. `LIKE` refers to a virtual cluster on the CockroachDB cluster you're running the statement from.
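+
+For example, a minimal sketch that assumes an existing virtual cluster named `template` whose capabilities the new virtual cluster will match:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE VIRTUAL CLUSTER main LIKE template FROM REPLICATION OF main ON 'postgresql://{connection string to primary}';
+~~~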
## Examples
### Start a replication stream
-To start a replication stream to the standby of the primary's application virtual cluster:
+To start a replication stream to the standby of the primary's virtual cluster:
{% include_cached copy-clipboard.html %}
~~~ sql
-CREATE VIRTUAL CLUSTER application LIKE template FROM REPLICATION OF application ON 'postgresql://{connection string to primary}';
+CREATE VIRTUAL CLUSTER main FROM REPLICATION OF main ON 'postgresql://{connection string to primary}';
~~~
-This will create a virtual cluster in the standby cluster that is based on the `template` virtual cluster, which is created during [cluster startup with `--config-profile`]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}#start-the-primary-cluster). The standby's system virtual cluster will connect to the primary cluster to initiate the replication stream job. For detail on the replication stream, refer to the [Responses]({% link {{ page.version.version }}/show-virtual-cluster.md %}#responses) for `SHOW VIRTUAL CLUSTER`.
+This will create a `main` virtual cluster in the standby cluster. The standby's system virtual cluster will connect to the primary cluster to initiate the replication stream job. For detail on the replication stream, refer to the [Responses]({% link {{ page.version.version }}/show-virtual-cluster.md %}#responses) for `SHOW VIRTUAL CLUSTER`.
### Specify a retention window for a replication stream
@@ -98,10 +98,10 @@ When you initiate a replication stream, you can specify a retention window to pr
{% include_cached copy-clipboard.html %}
~~~ sql
-CREATE VIRTUAL CLUSTER application LIKE template FROM REPLICATION OF application ON 'postgresql://{connection string to primary}' WITH RETENTION '36h';
+CREATE VIRTUAL CLUSTER main FROM REPLICATION OF main ON 'postgresql://{connection string to primary}' WITH RETENTION '36h';
~~~
-This will initiate a replication stream from the primary cluster into the standby cluster's new `standbyapplication` virtual cluster. The `RETENTION` option allows you to specify a timestamp in the past for cutover to the standby cluster. After cutover, the `standbyapplication` will be transactionally consistent to any timestamp within that retention window.
+This will initiate a replication stream from the primary cluster into the standby cluster's new `main` virtual cluster. The `RETENTION` option allows you to specify a timestamp in the past for cutover to the standby cluster. After cutover, the standby `main` virtual cluster will be transactionally consistent to any timestamp within that retention window.
{% include {{ page.version.version }}/physical-replication/retention.md %}
diff --git a/src/current/v24.1/cutover-replication.md b/src/current/v24.1/cutover-replication.md
index 0ead39ad859..9c9f49c482f 100644
--- a/src/current/v24.1/cutover-replication.md
+++ b/src/current/v24.1/cutover-replication.md
@@ -20,7 +20,7 @@ The cutover is a two-step process on the standby cluster:
Initiating a cutover is a manual process that makes the standby cluster ready to accept SQL connections. However, the cutover process does **not** automatically redirect traffic to the standby cluster. Once the cutover is complete, you must redirect application traffic to the standby (new) cluster. If you do not manually redirect traffic, writes to the primary (original) cluster may be lost.
{{site.data.alerts.end}}
-After a cutover, you may want to _cut back_ to the original primary cluster. That is, set up the original primary cluster to once again accept application traffic. This requires you to configure another full replication stream in the opposite direction from the original standby (now primary) to the original primary. For more detail, refer to [Cut back to the primary cluster](#cut-back-to-the-primary-cluster).
+After a cutover, you may want to _cut back_ to the original primary cluster (or a different cluster) so that it once again accepts application traffic. For more details, refer to [Cut back to the primary cluster](#cut-back-to-the-primary-cluster).
## Step 1. Initiate the cutover
@@ -39,21 +39,22 @@ To view the current replication timestamp, use:
{% include_cached copy-clipboard.html %}
~~~ sql
-SHOW VIRTUAL CLUSTER application WITH REPLICATION STATUS;
+SHOW VIRTUAL CLUSTER main WITH REPLICATION STATUS;
~~~
{% include_cached copy-clipboard.html %}
~~~
- id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
------+--------------------+-------------+--------------+--------------------+-----------------------------------------------------------------------------------------------------------------------+--------------------+------------------------------+-------------------------------+---------------
- 5 | application | replicating | none | application | postgresql://user:redacted@host/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 911803003607220225 | 2023-10-26 17:36:52.27978+00 | 2023-10-26 14:36:52.279781+00 | NULL
+ id | name | source_tenant_name | source_cluster_uri | retained_time | replicated_time | replication_lag | cutover_time | status
+-----+------+--------------------+-------------------------------------------------+---------------------------------+------------------------+-----------------+--------------+--------------
+ 3 | main | main | postgresql://user@hostname or IP:26257?redacted | 2024-04-18 10:07:45.000001+00 | 2024-04-18 14:07:45+00 | 00:00:19.602682 | NULL | replicating
+(1 row)
~~~
Run the following from the standby cluster's SQL shell to start the cutover:
{% include_cached copy-clipboard.html %}
~~~ sql
-ALTER VIRTUAL CLUSTER application COMPLETE REPLICATION TO LATEST;
+ALTER VIRTUAL CLUSTER main COMPLETE REPLICATION TO LATEST;
~~~
The `cutover_time` is the timestamp at which the replicated data is consistent. The cluster will revert any data above this timestamp:
@@ -73,16 +74,15 @@ To select a [specific time]({% link {{ page.version.version }}/as-of-system-time
{% include_cached copy-clipboard.html %}
~~~ sql
-SHOW VIRTUAL CLUSTER application WITH REPLICATION STATUS;
+SHOW VIRTUAL CLUSTER main WITH REPLICATION STATUS;
~~~
The `retained_time` response provides the earliest time to which you can cut over.
-{% include_cached copy-clipboard.html %}
~~~
-id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
----+--------------------+--------------------+--------------+--------------------+----------------------------------------------------------------------------------------------------------------------+--------------------+-------------------------------+-------------------------------+---------------
-3 | application | replicating | none | application | postgresql://{user}:redacted@{hostname}:26257/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 899090689449132033 | 2023-09-11 22:29:35.085548+00 | 2023-09-11 16:51:43.612846+00 | NULL
+ id | name | source_tenant_name | source_cluster_uri | retained_time | replicated_time | replication_lag | cutover_time | status
+-----+------+--------------------+-------------------------------------------------+-------------------------------+------------------------+-----------------+--------------+--------------
+ 3 | main | main | postgresql://user@hostname or IP:26257?redacted | 2024-04-18 10:07:45.000001+00 | 2024-04-18 14:07:45+00 | 00:00:19.602682 | NULL | replicating
(1 row)
~~~
@@ -90,7 +90,7 @@ Specify a timestamp:
{% include_cached copy-clipboard.html %}
~~~ sql
-ALTER VIRTUAL CLUSTER application COMPLETE REPLICATION TO SYSTEM TIME '-1h';
+ALTER VIRTUAL CLUSTER main COMPLETE REPLICATION TO SYSTEM TIME '-1h';
~~~
Refer to [Using different timestamp formats]({% link {{ page.version.version }}/as-of-system-time.md %}#using-different-timestamp-formats) for more information.
@@ -99,13 +99,16 @@ Similarly, to cut over to a specific time in the future:
{% include_cached copy-clipboard.html %}
~~~ sql
-ALTER VIRTUAL CLUSTER application COMPLETE REPLICATION TO SYSTEM TIME '+5h';
+ALTER VIRTUAL CLUSTER main COMPLETE REPLICATION TO SYSTEM TIME '+5h';
~~~
A future cutover will proceed once the replicated data has reached the specified time.
{{site.data.alerts.callout_info}}
-To monitor for when the replication stream completes, use [`SHOW VIRTUAL CLUSTER ... WITH REPLICATION STATUS`]({% link {{ page.version.version }}/show-virtual-cluster.md %}) to find the replication stream's `replication_job_id`, which you can pass to `SHOW JOB WHEN COMPLETE job_id` as the `job_id`. Refer to the `SHOW JOBS` page for [details]({% link {{ page.version.version }}/show-jobs.md %}#parameters) and an [example]({% link {{ page.version.version }}/show-jobs.md %}#show-job-when-complete).
+To monitor for when the replication stream completes, do the following:
+
+1. Find the replication stream's `job_id` using `SELECT * FROM [SHOW JOBS] WHERE job_type = 'REPLICATION STREAM INGESTION';`
+1. Run `SHOW JOB WHEN COMPLETE job_id`. Refer to the `SHOW JOBS` page for [details]({% link {{ page.version.version }}/show-jobs.md %}#parameters) and an [example]({% link {{ page.version.version }}/show-jobs.md %}#show-job-when-complete).
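+
+For example, a minimal sketch with an illustrative job ID:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- The job ID below is illustrative; use the ID returned by SHOW JOBS.
+SHOW JOB WHEN COMPLETE 899090689449132033;
+~~~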
{{site.data.alerts.end}}
## Step 2. Complete the cutover
@@ -114,12 +117,12 @@ To monitor for when the replication stream completes, use [`SHOW VIRTUAL CLUSTER
{% include_cached copy-clipboard.html %}
~~~ sql
- SHOW VIRTUAL CLUSTER application WITH REPLICATION STATUS;
+ SHOW VIRTUAL CLUSTER main WITH REPLICATION STATUS;
~~~
~~~
- id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
- -----+---------------------+-----------------------------+--------------+--------------------+-------------------------------------------------------------------------------------------------------------------+--------------------+------------------------------+-------------------------------+---------------------------------
- 4 | application | replication pending cutover | none | application | postgresql://user:redacted@3ip:26257/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 903895265809498113 | 2023-09-28 17:41:18.03092+00 | 2023-09-28 16:09:04.327473+00 | 1695922878030920020.0000000000
+ id | name | source_tenant_name | source_cluster_uri | retained_time | replicated_time | replication_lag | cutover_time | status
+ ---+------+--------------------+-------------------------------------------------+-------------------------------+------------------------------+-----------------+--------------------------------+--------------
+ 3 | main | main | postgresql://user@hostname or IP:26257?redacted | 2023-09-28 16:09:04.327473+00 | 2023-09-28 17:41:18.03092+00 | 00:00:19.602682 | 1695922878030920020.0000000000 | replication pending cutover
(1 row)
~~~
@@ -129,15 +132,14 @@ To monitor for when the replication stream completes, use [`SHOW VIRTUAL CLUSTER
{% include_cached copy-clipboard.html %}
~~~ sql
- ALTER VIRTUAL CLUSTER application START SERVICE SHARED;
+ ALTER VIRTUAL CLUSTER main START SERVICE SHARED;
~~~
~~~
id | name | data_state | service_mode
-----+---------------------+--------------------+---------------
1 | system | ready | shared
- 2 | template | ready | none
- 3 | application | ready | shared
+ 3 | main | ready | shared
- (3 rows)
+ (2 rows)
~~~
@@ -145,23 +147,123 @@ To monitor for when the replication stream completes, use [`SHOW VIRTUAL CLUSTER
{% include_cached copy-clipboard.html %}
~~~ sql
- SET CLUSTER SETTING server.controller.default_target_cluster='application';
+ SET CLUSTER SETTING server.controller.default_target_cluster='main';
~~~
At this point, the primary and standby clusters are entirely independent. You will need to use your own network load balancers, DNS servers, or other network configuration to direct application traffic to the standby (now primary). To enable physical cluster replication again, from the new primary to the original primary (or a completely different cluster), refer to [Cut back to the primary cluster](#cut-back-to-the-primary-cluster).
## Cut back to the primary cluster
-After cutting over to the standby cluster, you may need to move back to the original primary cluster, or a completely different cluster. This process is manual and requires starting a new replication stream.
+After cutting over to the standby cluster, you may need to cut back to the original primary cluster to serve your application.
+
+{% include {{ page.version.version }}/physical-replication/fast-cutback-syntax.md %}
+
+{{site.data.alerts.callout_info}}
+To move back to a different cluster, follow the physical cluster replication [setup]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}).
+{{site.data.alerts.end}}
+
+### Example
+
+This section illustrates the steps to cut back to the original primary cluster from the promoted standby cluster that is currently serving traffic.
+
+- **Cluster A** = original primary cluster
+- **Cluster B** = original standby cluster
+
+**Cluster B** is serving application traffic after the [cutover](#step-2-complete-the-cutover).
+
+1. To begin the cutback to **Cluster A**, the virtual cluster must first stop accepting connections. Connect to the system virtual cluster on **Cluster A**:
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ cockroach sql --url \
+ "postgresql://{user}@{node IP or hostname cluster A}:26257?options=-ccluster=system&sslmode=verify-full" \
+ --certs-dir "certs"
+ ~~~
+
+1. From the system virtual cluster on **Cluster A**, stop the service for the virtual cluster:
-For example, if you had [set up physical cluster replication]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}) between a primary and standby cluster and then cut over to the standby, the workflow to cut back to the original primary cluster would be as follows:
+ {% include_cached copy-clipboard.html %}
+ ~~~ sql
+ ALTER VIRTUAL CLUSTER {cluster_a} STOP SERVICE;
+ ~~~
+
+1. Open another terminal window and connect to the system virtual cluster for **Cluster B**:
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ shell
+ cockroach sql --url \
+ "postgresql://{user}@{node IP or hostname cluster B}:26257?options=-ccluster=system&sslmode=verify-full" \
+ --certs-dir "certs"
+ ~~~
+
+1. From the system virtual cluster on **Cluster B**, enable rangefeeds:
-- Original primary cluster = Cluster A
-- Original standby cluster = Cluster B
+ {% include_cached copy-clipboard.html %}
+ ~~~ sql
+ SET CLUSTER SETTING kv.rangefeed.enabled = 'true';
+ ~~~
+
+1. From the system virtual cluster on **Cluster A**, start the replication from **Cluster B** to **Cluster A**:
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ sql
+   ALTER VIRTUAL CLUSTER {cluster_a} START REPLICATION OF {cluster_b} ON 'postgresql://{user}@{node IP or hostname cluster B}:26257?options=-ccluster=system&sslmode=verify-full&sslrootcert=certs/{standby cert}.crt';
+ ~~~
+
+ This will reset the virtual cluster on **Cluster A** back to the time at which the same virtual cluster on **Cluster B** diverged from it. **Cluster A** will check with **Cluster B** to confirm that its virtual cluster was replicated from **Cluster A** as part of the original [physical cluster replication stream]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}).
+
+ {{site.data.alerts.callout_success}}
+ For details on connection strings, refer to the [Connection reference]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}#connection-reference).
+ {{site.data.alerts.end}}
+
+1. Check the status of the virtual cluster on **Cluster A**:
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ sql
+ SHOW VIRTUAL CLUSTER {cluster_a};
+ ~~~
+
+ ~~~
+ id | name | data_state | service_mode
+ ----+--------+--------------------+---------------
+ 1 | system | ready | shared
+ 3 | {vc_a} | replicating | none
+ (2 rows)
+ ~~~
+
+1. From **Cluster A**, start the cutover:
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ sql
+ ALTER VIRTUAL CLUSTER {cluster_a} COMPLETE REPLICATION TO LATEST;
+ ~~~
+
+ The `cutover_time` is the timestamp at which the replicated data is consistent. The cluster will revert any data above this timestamp:
+
+ ~~~
+ cutover_time
+ ----------------------------------
+ 1714497890000000000.0000000000
+ (1 row)
+ ~~~
+
+1. From **Cluster A**, bring the virtual cluster online:
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ sql
+ ALTER VIRTUAL CLUSTER {cluster_a} START SERVICE SHARED;
+ ~~~
+
+1. To make **Cluster A's** virtual cluster the default for [connection strings]({% link {{ page.version.version }}/work-with-virtual-clusters.md %}#sql-clients), set the following [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}):
+
+ {% include_cached copy-clipboard.html %}
+ ~~~ sql
+ SET CLUSTER SETTING server.controller.default_target_cluster='{cluster_a}';
+ ~~~
-1. Cluster B is now serving application traffic after the cutover.
-1. Drop the application virtual cluster from the cluster A with `DROP VIRTUAL CLUSTER`. {% comment %}link here{% endcomment %}
-1. Start a replication stream that sends updates from cluster B to cluster A. Refer to [Start replication]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}#step-4-start-replication).
+At this point, **Cluster A** is once again the primary and **Cluster B** is once again the standby. The clusters are entirely independent. You will need to use your own network load balancers, DNS servers, or other network configuration to direct application traffic to the primary (**Cluster A**). To enable physical cluster replication again, from the primary to the standby (or a completely different cluster), refer to [Set Up Physical Cluster Replication]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}).
## See also
diff --git a/src/current/v24.1/default-value.md b/src/current/v24.1/default-value.md
index fbb7fb2db08..61cdbed5a66 100644
--- a/src/current/v24.1/default-value.md
+++ b/src/current/v24.1/default-value.md
@@ -1,6 +1,6 @@
---
title: Default Value Constraint
-summary: The Default Value constraint specifies a value to populate a column with if none is provided.
+summary: The DEFAULT constraint specifies a value to populate a column with if none is provided.
toc: true
docs_area: reference.sql
---
@@ -9,7 +9,7 @@ The `DEFAULT` value [constraint]({% link {{ page.version.version }}/constraints.
## Details
-- The [data type]({% link {{ page.version.version }}/data-types.md %}) of the Default Value must be the same as the data type of the column.
+- The [data type]({% link {{ page.version.version }}/data-types.md %}) of the `DEFAULT` value must be the same as the data type of the column.
- The `DEFAULT` value constraint only applies if the column does not have a value specified in the [`INSERT`]({% link {{ page.version.version }}/insert.md %}) statement. You can still insert a `NULL` into an optional (nullable) column by explicitly inserting `NULL`. For example, `INSERT INTO foo VALUES (1, NULL);`.
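
For example, a minimal sketch (the table and column names are illustrative):

{% include_cached copy-clipboard.html %}
~~~ sql
CREATE TABLE inventories (
    product_id INT PRIMARY KEY,
    quantity_on_hand INT DEFAULT 100
);

-- quantity_on_hand is omitted, so it is populated with the default value 100.
INSERT INTO inventories (product_id) VALUES (1);
~~~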
## Syntax
diff --git a/src/current/v24.1/drop-virtual-cluster.md b/src/current/v24.1/drop-virtual-cluster.md
index 03bb7a7de7f..f9471bb839e 100644
--- a/src/current/v24.1/drop-virtual-cluster.md
+++ b/src/current/v24.1/drop-virtual-cluster.md
@@ -55,7 +55,7 @@ To remove a virtual cluster from a CockroachDB cluster:
{% include_cached copy-clipboard.html %}
~~~ sql
-DROP VIRTUAL CLUSTER IF EXISTS application;
+DROP VIRTUAL CLUSTER IF EXISTS main;
~~~
### Remove a virtual cluster without waiting for garbage collection
@@ -64,7 +64,7 @@ Use `IMMEDIATE` to drop a virtual cluster instead of waiting for data to be garb
{% include_cached copy-clipboard.html %}
~~~ sql
-DROP VIRTUAL CLUSTER IF EXISTS application IMMEDIATE;
+DROP VIRTUAL CLUSTER IF EXISTS main IMMEDIATE;
~~~
## See also
diff --git a/src/current/v24.1/licensing-faqs.md b/src/current/v24.1/licensing-faqs.md
index 97fce8eafa6..1b9bd2a5abd 100644
--- a/src/current/v24.1/licensing-faqs.md
+++ b/src/current/v24.1/licensing-faqs.md
@@ -139,7 +139,10 @@ I171116 18:11:48.279604 1514 sql/event_log.go:102 [client=[::1]:56357,user=root
## Monitor for license expiry
-You can monitor the time until your license expires with [Prometheus]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}). The `seconds_until_enterprise_license_expiry` metric reports the number of seconds until the Enterprise license on a cluster expires. It will report 0 if there is no license or a negative number if the license has already expired. For more information, see [Monitoring and Alerting]({% link {{ page.version.version }}/monitoring-and-alerting.md %}).
+You can monitor the time until your license expires in two ways:
+
+1. [DB console]({% link {{ page.version.version }}/ui-overview.md %}): The [license expiration message]({% link {{ page.version.version }}/ui-overview.md %}#license-expiration-message) displays the number of days until the expiration date or the days since the expiration date.
+1. [Prometheus]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}): The `seconds_until_enterprise_license_expiry` metric reports the number of seconds until the enterprise license on a cluster expires. It will report `0` if there is no license, and a negative number if the license has already expired. For more information, see [Monitoring and Alerting]({% link {{ page.version.version }}/monitoring-and-alerting.md %}).
## Renew an expired license
diff --git a/src/current/v24.1/physical-cluster-replication-monitoring.md b/src/current/v24.1/physical-cluster-replication-monitoring.md
index 2a3c4e62d0b..e7aacd012ea 100644
--- a/src/current/v24.1/physical-cluster-replication-monitoring.md
+++ b/src/current/v24.1/physical-cluster-replication-monitoring.md
@@ -12,7 +12,7 @@ docs_area: manage
You can monitor a physical cluster replication stream using:
- [`SHOW VIRTUAL CLUSTER ... WITH REPLICATION STATUS`](#sql-shell) in the SQL shell.
-- The [Physical Replication dashboard](#db-console) on the DB Console.
+- The [**Physical Cluster Replication** dashboard]({% link {{ page.version.version }}/ui-physical-cluster-replication-dashboard.md %}) on the [DB Console](#db-console).
- [Prometheus and Alertmanager](#prometheus) to track and alert on replication metrics.
- [`SHOW EXPERIMENTAL_FINGERPRINTS`](#data-verification) to verify data at a point in time is correct on the standby cluster.
@@ -26,16 +26,15 @@ In the standby cluster's SQL shell, you can query `SHOW VIRTUAL CLUSTER ... WITH
{% include_cached copy-clipboard.html %}
~~~ sql
-SHOW VIRTUAL CLUSTER application WITH REPLICATION STATUS;
+SHOW VIRTUAL CLUSTER main WITH REPLICATION STATUS;
~~~
Refer to [Responses](#responses) for a description of each field.
-{% include_cached copy-clipboard.html %}
~~~
-id | name | data_state | service_mode | source_tenant_name | source_cluster_uri | replication_job_id | replicated_time | retained_time | cutover_time
----+--------------------+--------------------+--------------+--------------------+----------------------------------------------------------------------------------------------------------------------+--------------------+-------------------------------+-------------------------------+---------------
-3 | application | replicating | none | application | postgresql://{user}:{password}@{hostname}:26257/?options=-ccluster%3Dsystem&sslmode=verify-full&sslrootcert=redacted | 899090689449132033 | 2023-09-11 22:29:35.085548+00 | 2023-09-11 16:51:43.612846+00 | NULL
+id | name | source_tenant_name | source_cluster_uri | retained_time | replicated_time | replication_lag | cutover_time | status
+---+------+--------------------+-------------------------------------------------+-------------------------------+------------------------------+-----------------+--------------------------------+--------------
+3 | main | main | postgresql://user@hostname or IP:26257?redacted | 2023-09-28 16:09:04.327473+00 | 2023-09-28 17:41:18.03092+00 | 00:00:19.602682 | 1695922878030920020.0000000000 | replicating
(1 row)
~~~
@@ -49,41 +48,7 @@ id | name | data_state | service_mode | source_tenant_name
## DB Console
-You can access the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) for your standby cluster at `https://{your IP or hostname}:8080/`. Select the **Metrics** page from the left-hand navigation bar, and then select **Physical Cluster Replication** from the **Dashboard** dropdown. The user that accesses the DB Console must have `admin` privileges to view this dashboard.
-
-{% include {{ page.version.version }}/ui/ui-metrics-navigation.md %}
-
-{{site.data.alerts.callout_info}}
-The **Physical Cluster Replication** dashboard tracks metrics related to physical cluster replication jobs. This is distinct from the [**Replication** dashboard]({% link {{ page.version.version }}/ui-replication-dashboard.md %}), which tracks metrics related to how data is replicated across the cluster, e.g., range status, replicas per store, and replica quiescence.
-{{site.data.alerts.end}}
-
-The **Physical Cluster Replication** dashboard contains graphs for monitoring:
-
-### Logical bytes
-
-
-
-The **Logical Bytes** graph shows you the throughput of the replicated bytes.
-
-Hovering over the graph displays:
-
-- The date and time.
-- The number of logical bytes replicated in MiB.
-
-{{site.data.alerts.callout_info}}
-When you [start a replication stream]({% link {{ page.version.version }}/set-up-physical-cluster-replication.md %}#step-4-start-replication), the **Logical Bytes** graph will record a spike of throughput as the initial scan completes. {% comment %}link to technical details here{% endcomment %}
-{{site.data.alerts.end}}
-
-### SST bytes
-
-
-
-The **SST Bytes** graph shows you the rate at which all [SST]({% link {{ page.version.version }}/architecture/storage-layer.md %}#ssts) bytes are sent to the [KV layer]({% link {{ page.version.version }}/architecture/storage-layer.md %}) by physical cluster replication jobs.
-
-Hovering over the graph displays:
-
-- The date and time.
-- The number of SST bytes replicated in MiB.
+You can use the [**Physical Cluster Replication** dashboard]({% link {{ page.version.version }}/ui-physical-cluster-replication-dashboard.md %}) of the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) to monitor [logical bytes]({% link {{ page.version.version }}/ui-physical-cluster-replication-dashboard.md %}#logical-bytes) and [SST bytes]({% link {{ page.version.version }}/ui-physical-cluster-replication-dashboard.md %}#sst-bytes) on the standby cluster.
## Prometheus
@@ -109,7 +74,7 @@ To verify that the data at a certain point in time is correct on the standby clu
{% include_cached copy-clipboard.html %}
~~~ sql
- SELECT replicated_time FROM [SHOW VIRTUAL CLUSTER standbyapplication WITH REPLICATION STATUS];
+ SELECT replicated_time FROM [SHOW VIRTUAL CLUSTER standbymain WITH REPLICATION STATUS];
~~~
~~~
replicated_time
@@ -124,12 +89,12 @@ To verify that the data at a certain point in time is correct on the standby clu
{% include_cached copy-clipboard.html %}
~~~ sql
- SELECT * FROM [SHOW EXPERIMENTAL_FINGERPRINTS FROM VIRTUAL CLUSTER application] AS OF SYSTEM TIME '2024-01-09 16:15:45.291575+00';
+ SELECT * FROM [SHOW EXPERIMENTAL_FINGERPRINTS FROM VIRTUAL CLUSTER main] AS OF SYSTEM TIME '2024-01-09 16:15:45.291575+00';
~~~
~~~
tenant_name | end_ts | fingerprint
------------+--------------------------------+----------------------
- application | 1704816945291575000.0000000000 | 2646132238164576487
+ main | 1704816945291575000.0000000000 | 2646132238164576487
(1 row)
~~~
@@ -139,12 +104,12 @@ To verify that the data at a certain point in time is correct on the standby clu
{% include_cached copy-clipboard.html %}
~~~ sql
- SELECT * FROM [SHOW EXPERIMENTAL_FINGERPRINTS FROM VIRTUAL CLUSTER standbyapplication] AS OF SYSTEM TIME '2024-01-09 16:15:45.291575+00';
+ SELECT * FROM [SHOW EXPERIMENTAL_FINGERPRINTS FROM VIRTUAL CLUSTER standbymain] AS OF SYSTEM TIME '2024-01-09 16:15:45.291575+00';
~~~
~~~
tenant_name | end_ts | fingerprint
--------------------+--------------------------------+----------------------
- standbyapplication | 1704816945291575000.0000000000 | 2646132238164576487
+ standbymain | 1704816945291575000.0000000000 | 2646132238164576487
(1 row)
~~~
diff --git a/src/current/v24.1/physical-cluster-replication-overview.md b/src/current/v24.1/physical-cluster-replication-overview.md
index 80b1ac8550a..47f10380a97 100644
--- a/src/current/v24.1/physical-cluster-replication-overview.md
+++ b/src/current/v24.1/physical-cluster-replication-overview.md
@@ -23,7 +23,7 @@ You can use physical cluster replication in a disaster recovery plan to:
- Meet your RTO (Recovery Time Objective) and RPO (Recovery Point Objective) requirements. Physical cluster replication provides lower RTO and RPO than [backup and restore]({% link {{ page.version.version }}/backup-and-restore-overview.md %}).
- Automatically replicate everything in your primary cluster to recover quickly from a control plane or full cluster failure.
-- Protect against region failure when you cannot use individual [multi-region clusters]({% link {{ page.version.version }}/multiregion-overview.md %}) — for example, if you have a two-datacenter architecture and do not have access to three regions; or, you need low-write latency in a single region. Physical cluster replication allows for an active-passive (primary-standby) structure across two clusters with the passive cluster in different region.
+- Protect against region failure when you cannot use individual [multi-region clusters]({% link {{ page.version.version }}/multiregion-overview.md %}) — for example, if you have a two-datacenter architecture and do not have access to three regions; or, you need low-write latency in a single region. Physical cluster replication allows for an active-passive (primary-standby) structure across two clusters with the passive cluster in a different region.
- Avoid conflicts in data after recovery; the replication completes to a transactionally consistent state as of a certain point in time.
## Features
@@ -56,18 +56,18 @@ For more comprehensive guides, refer to:
### Start clusters
-To initiate physical cluster replication on clusters, you must [start]({% link {{ page.version.version }}/cockroach-start.md %}) the primary and standby CockroachDB clusters with the `--config-profile` flag. This enables [cluster virtualization]({% link {{ page.version.version }}/cluster-virtualization-overview.md %}) and sets up each cluster ready for replication.
+To use physical cluster replication, you must [initialize]({% link {{ page.version.version }}/cockroach-init.md %}) the primary and standby CockroachDB clusters with the `--virtualized` and `--virtualized-empty` flags, respectively. This enables [cluster virtualization]({% link {{ page.version.version }}/cluster-virtualization-overview.md %}) and prepares each cluster for replication.
The active primary cluster that serves application traffic:
~~~shell
-cockroach start ... --config-profile replication-source
+cockroach init ... --virtualized
~~~
The passive standby cluster that will ingest the replicated data:
~~~shell
-cockroach start ... --config-profile replication-target
+cockroach init ... --virtualized-empty
~~~
The node topology of the two clusters does not need to be the same. For example, you can provision the standby cluster with fewer nodes. However, consider that:
@@ -91,14 +91,14 @@ To connect to a virtualized cluster using the SQL shell:
{% include_cached copy-clipboard.html %}
~~~ shell
- cockroach sql --url "postgresql://root@{your IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full" --certs-dir "certs"
+ cockroach sql --url "postgresql://root@{your IP or hostname}:26257?options=-ccluster=system&sslmode=verify-full" --certs-dir "certs"
~~~
-- For the application virtual cluster, include the `options=-ccluster=application` parameter in the `postgresql` connection URL:
+- For the virtual cluster, include the `options=-ccluster=main` parameter in the `postgresql` connection URL:
{% include_cached copy-clipboard.html %}
~~~ shell
- cockroach sql --url "postgresql://root@{your IP or hostname}:26257/?options=-ccluster=application&sslmode=verify-full" --certs-dir "certs"
+ cockroach sql --url "postgresql://root@{your IP or hostname}:26257?options=-ccluster=main&sslmode=verify-full" --certs-dir "certs"
~~~
{{site.data.alerts.callout_info}}
@@ -122,11 +122,11 @@ Statement | Action
### Cluster versions and upgrades
-The standby cluster host will need to be at the same major version as, or one version ahead of, the primary's application virtual cluster at the time of cutover.
+The standby cluster host will need to be at the same major version as, or one version ahead of, the primary's virtual cluster at the time of cutover.
To [upgrade]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}) a virtualized cluster, you must carefully and manually apply the upgrade. For details, refer to [Upgrades]({% link {{ page.version.version }}/work-with-virtual-clusters.md %}#upgrade-a-cluster) in the [Cluster Virtualization Overview]({% link {{ page.version.version }}/cluster-virtualization-overview.md %}).
-When physical cluster replication is enabled, we recommend following this procedure on the standby cluster first, before upgrading the primary cluster. It is preferable to avoid a situation in which the application virtual cluster, which is being replicated, is a version higher than what the standby cluster can serve if you were to cut over.
+When physical cluster replication is enabled, we recommend following this procedure on the standby cluster first, before upgrading the primary cluster. This avoids a situation in which the replicating virtual cluster is at a higher version than the standby cluster can serve if you were to cut over.
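+
+For example, before cutting over you can confirm that the versions are compatible. A minimal check, run from a SQL shell on each cluster:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SHOW CLUSTER SETTING version;
+~~~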
## Demo video
diff --git a/src/current/v24.1/physical-cluster-replication-technical-overview.md b/src/current/v24.1/physical-cluster-replication-technical-overview.md
index 60585a150e7..8b0c1e9888c 100644
--- a/src/current/v24.1/physical-cluster-replication-technical-overview.md
+++ b/src/current/v24.1/physical-cluster-replication-technical-overview.md
@@ -31,7 +31,6 @@ The stream initialization proceeds as follows:
1. The initial scan runs on the primary and backfills all data from the primary virtual cluster as of the starting timestamp of the replication stream.
1. Once the initial scan is complete, the primary then begins streaming all changes from the point of the starting timestamp.
-{% comment %}TODO Kathryn to update this graphic {% endcomment%}
### During the replication stream
@@ -41,7 +40,7 @@ The replication happens at the byte level, which means that the job is unaware o
During the job, [rangefeeds]({% link {{ page.version.version }}/create-and-configure-changefeeds.md %}#enable-rangefeeds) periodically emit resolved timestamps, which mark the time up to which the ingested data is known to be consistent. Resolved timestamps guarantee that there are no new writes from before that timestamp. This allows the standby cluster to move the [protected timestamp]({% link {{ page.version.version }}/architecture/storage-layer.md %}#protected-timestamps) forward as the replicated timestamp advances. This information is sent to the primary cluster, which allows [garbage collection]({% link {{ page.version.version }}/architecture/storage-layer.md %}#garbage-collection) to continue as the replication stream on the standby cluster advances.
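+
+For instance, you can watch the resolved timestamp advance by polling the replication status from the standby cluster's system interface. A sketch, assuming a standby virtual cluster named `standbymain`:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SELECT replicated_time FROM [SHOW VIRTUAL CLUSTER standbymain WITH REPLICATION STATUS];
+~~~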
{{site.data.alerts.callout_info}}
-If the primary cluster does not receive replicated time information from the standby after 3 days, it cancels the replication job. This ensures that an inactive replication job will not prevent garbage collection. The time at which the job is removed is configurable via the `stream_replication.job_liveness_timeout` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}).
+If the primary cluster does not receive replicated time information from the standby after 24 hours, it cancels the replication job. This ensures that an inactive replication job will not prevent garbage collection. The time at which the job is removed is configurable with [`ALTER VIRTUAL CLUSTER virtual_cluster EXPIRATION WINDOW = duration`]({% link {{ page.version.version }}/alter-virtual-cluster.md %}) syntax.
{{site.data.alerts.end}}
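+
+For example, to lengthen the window to 48 hours (a sketch, assuming the virtual cluster is named `main`; adjust the duration to fit your garbage collection and recovery needs):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+ALTER VIRTUAL CLUSTER main EXPIRATION WINDOW = '48h';
+~~~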
### Cutover and promotion process
diff --git a/src/current/v24.1/plpgsql.md b/src/current/v24.1/plpgsql.md
index 82b1b176f0d..3985eed1480 100644
--- a/src/current/v24.1/plpgsql.md
+++ b/src/current/v24.1/plpgsql.md
@@ -40,6 +40,23 @@ At the highest level, a PL/pgSQL block looks like the following:
END
~~~
+PL/pgSQL blocks can be nested. An optional label can be placed above each block. Block labels can be targeted by [`EXIT` statements](#exit-and-continue-statements).
+
+~~~ sql
+[ <<label>> ]
+[ DECLARE
+    declarations ]
+BEGIN
+  statements
+  [ <<label>> ]
+  [ DECLARE
+      declarations ]
+  BEGIN
+    statements
+  END;
+END
+~~~
+
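+For example, the following function uses a labeled inner block with its own declaration. This is a minimal sketch; the function, label, and variable names are illustrative:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE FUNCTION nested_sum() RETURNS INT AS $$
+  DECLARE
+    x INT := 1;
+  BEGIN
+    <<inner>>
+    DECLARE
+      y INT := 2;  -- visible only within the inner block
+    BEGIN
+      x := x + y;
+    END;
+    RETURN x;
+  END
+$$ LANGUAGE PLpgSQL;
+~~~
+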
When you create a function or procedure, you can enclose the entire PL/pgSQL block in dollar quotes (`$$`). Dollar quotes are not required, but are easier to use than single quotes, which require that you escape other single quotes that are within the function or procedure body.
{% include_cached copy-clipboard.html %}
@@ -61,8 +78,7 @@ For complete examples, see [Create a user-defined function using PL/pgSQL](#crea
### Declare a variable
-`DECLARE` specifies all variable definitions that are used in the function or procedure body.
-
+`DECLARE` specifies all variable definitions that are used in a block.
+
~~~ sql
DECLARE
variable_name [ CONSTANT ] data_type [ := expression ];
@@ -71,7 +87,7 @@ DECLARE
- `variable_name` is an arbitrary variable name.
- `data_type` can be a supported [SQL data type]({% link {{ page.version.version }}/data-types.md %}), [user-defined type]({% link {{ page.version.version }}/create-type.md %}), or the PL/pgSQL `REFCURSOR` type, when declaring [cursor](#declare-cursor-variables) variables.
- `CONSTANT` specifies that the variable cannot be [reassigned](#assign-a-result-to-a-variable), ensuring that its value remains constant within the block.
-- `expression` is an [expression](https://www.postgresql.org/docs/16/plpgsql-expressions.html) that provides an optional default value for the variable.
+- `expression` is an [expression](https://www.postgresql.org/docs/16/plpgsql-expressions.html) that provides an optional default value for the variable. Default values are evaluated every time a block is entered in a function or procedure.
For example:
@@ -105,14 +121,14 @@ For information about opening and using cursors, see [Open and use cursors](#ope
### Assign a result to a variable
-Use the PL/pgSQL `INTO` clause to assign a result of a [`SELECT`]({% link {{ page.version.version }}/select-clause.md %}) or mutation ([`INSERT`]({% link {{ page.version.version }}/insert.md %}), [`UPDATE`]({% link {{ page.version.version }}/update.md %}), [`DELETE`]({% link {{ page.version.version }}/delete.md %})) statement to a specified variable:
+Use the PL/pgSQL `INTO` clause to assign a result of a [`SELECT`]({% link {{ page.version.version }}/select-clause.md %}) or mutation ([`INSERT`]({% link {{ page.version.version }}/insert.md %}), [`UPDATE`]({% link {{ page.version.version }}/update.md %}), [`DELETE`]({% link {{ page.version.version }}/delete.md %})) statement to a specified variable. The optional `STRICT` clause specifies that the statement must return exactly one row; otherwise, the function or procedure will error. This behavior can be enabled by default using the [`plpgsql_use_strict_into`]({% link {{ page.version.version }}/session-variables.md %}#plpgsql-use-strict-into) session setting.
~~~ sql
-SELECT expression INTO target FROM ...;
+SELECT expression INTO [ STRICT ] target FROM ...;
~~~
~~~ sql
-[ INSERT | UPDATE | DELETE ] ... RETURNING expression INTO target;
+[ INSERT | UPDATE | DELETE ] ... RETURNING expression INTO [ STRICT ] target;
~~~
- `expression` is an [expression](https://www.postgresql.org/docs/16/plpgsql-expressions.html) that defines the result to be assigned to the variable.
@@ -146,7 +162,7 @@ NOTICE: New Row: 2
CALL
~~~
-The following [user-defined function]({% link {{ page.version.version }}/user-defined-functions.md %}) uses the `max()` [built-in function]({% link {{ page.version.version }}/functions-and-operators.md %}#aggregate-functions) to find the maximum `col` value in table `t`, and assigns the result to `i`.
+The following [user-defined function]({% link {{ page.version.version }}/user-defined-functions.md %}) uses the `max` [built-in function]({% link {{ page.version.version }}/functions-and-operators.md %}#aggregate-functions) to find the maximum `col` value in table `t`, and assigns the result to `i`.
{% include_cached copy-clipboard.html %}
~~~ sql
@@ -218,7 +234,7 @@ For usage examples of conditional statements, see [Examples](#examples).
### Write loops
-Use looping syntax to repeatedly execute statements.
+Write a loop to repeatedly execute statements.
On its own, `LOOP` executes statements infinitely.
@@ -238,25 +254,59 @@ WHILE condition LOOP
For an example, see [Create a stored procedure that uses a `WHILE` loop]({% link {{ page.version.version }}/create-procedure.md %}#create-a-stored-procedure-that-uses-a-while-loop).
-Add an `EXIT` statement to end a `LOOP` or `WHILE` statement block. This should be combined with a [conditional statement](#write-conditional-statements).
+### `EXIT` and `CONTINUE` statements
+
+Add an `EXIT` statement to end a [loop](#write-loops). An `EXIT` statement can be combined with an optional `WHEN` boolean condition.
~~~ sql
LOOP
statements;
- IF condition THEN
- EXIT;
- END IF;
+ EXIT [ WHEN condition ];
END LOOP;
~~~
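+
+For example, the following function loops until the `EXIT WHEN` condition is met. This is a minimal sketch with illustrative names:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE FUNCTION count_up() RETURNS INT AS $$
+  DECLARE
+    i INT := 0;
+  BEGIN
+    LOOP
+      i := i + 1;
+      EXIT WHEN i >= 5;  -- end the loop once i reaches 5
+    END LOOP;
+    RETURN i;
+  END
+$$ LANGUAGE PLpgSQL;
+~~~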
-Add a `CONTINUE` statement to end a `LOOP` or `WHILE` statement block, skipping any statements below `CONTINUE`, and begin the next iteration of the loop. This should be combined with a [conditional statement](#write-conditional-statements). In the following example, if the `IF` condition is met, then `CONTINUE` causes the loop to skip the second block of statements and begin again.
+Add a label to an `EXIT` statement to target a block that has a matching label. An `EXIT` statement with a label can target either a loop or a [block](#structure). An `EXIT` statement inside a block must have a label.
+
+The following `EXIT` statement will end the `label` block before the statements are executed.
+
+~~~ sql
+BEGIN
+  <<label>>