diff --git a/src/current/_includes/v22.2/backups/locality-aware-access.md b/src/current/_includes/v22.2/backups/locality-aware-access.md
new file mode 100644
index 00000000000..3d3885de4ac
--- /dev/null
+++ b/src/current/_includes/v22.2/backups/locality-aware-access.md
@@ -0,0 +1 @@
+A successful locality-aware backup job requires that each node in the cluster has access to each storage location. This is because any node in the cluster can claim the job and become the [_coordinator_](backup-architecture.html#job-creation-phase) node.
\ No newline at end of file
diff --git a/src/current/_includes/v22.2/backups/serverless-locality-aware.md b/src/current/_includes/v22.2/backups/serverless-locality-aware.md
new file mode 100644
index 00000000000..a5661dc1806
--- /dev/null
+++ b/src/current/_includes/v22.2/backups/serverless-locality-aware.md
@@ -0,0 +1 @@
+CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture](https://www.cockroachlabs.com/docs/cockroachcloud/architecture#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters scale resources depending on whether they are actively in use, which means that a SQL pod may not be available in every locality. As a result, Serverless clusters are more likely to have ranges that do not match any of the cluster's localities, which can lead to more ranges being backed up to a storage bucket in a different locality. You should consider this as you plan a backup strategy that must comply with [data domiciling](data-domiciling.html) requirements.
\ No newline at end of file
diff --git a/src/current/_includes/v23.1/backups/serverless-locality-aware.md b/src/current/_includes/v23.1/backups/serverless-locality-aware.md
new file mode 100644
index 00000000000..25b22f1f00c
--- /dev/null
+++ b/src/current/_includes/v23.1/backups/serverless-locality-aware.md
@@ -0,0 +1 @@
+CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters scale resources depending on whether they are actively in use, which means that a SQL pod may not be available in every locality. As a result, Serverless clusters are more likely to have ranges that do not match any of the cluster's localities, which can lead to more ranges being backed up to a storage bucket in a different locality. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.1/data-domiciling.md %}) requirements.
\ No newline at end of file
diff --git a/src/current/_includes/v23.2/backups/serverless-locality-aware.md b/src/current/_includes/v23.2/backups/serverless-locality-aware.md
new file mode 100644
index 00000000000..e5da195923a
--- /dev/null
+++ b/src/current/_includes/v23.2/backups/serverless-locality-aware.md
@@ -0,0 +1 @@
+CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters scale resources depending on whether they are actively in use, which means that a SQL pod may not be available in every locality. As a result, Serverless clusters are more likely to have ranges that do not match any of the cluster's localities, which can lead to more ranges being backed up to a storage bucket in a different locality. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.2/data-domiciling.md %}) requirements.
\ No newline at end of file
diff --git a/src/current/cockroachcloud/serverless-unsupported-features.md b/src/current/cockroachcloud/serverless-unsupported-features.md
index 80f53df4d35..7d0e3e8f4f1 100644
--- a/src/current/cockroachcloud/serverless-unsupported-features.md
+++ b/src/current/cockroachcloud/serverless-unsupported-features.md
@@ -15,11 +15,15 @@ You can't configure [alerts on changefeeds](https://www.cockroachlabs.com/docs/{
## Backups
-CockroachDB {{ site.data.products.serverless }} only support automated full backups. Automated [incremental](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-full-and-incremental-backups) and [revision history](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-backups-with-revision-history-and-restore-from-a-point-in-time) backups are not supported. However, [user managed incremental and revision history backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}#back-up-data) using user provided storage locations are supported.
+CockroachDB {{ site.data.products.serverless }} clusters support only automated full backups. Automated [incremental](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-full-and-incremental-backups) and [revision history](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-backups-with-revision-history-and-restore-from-a-point-in-time) backups are not supported. However, you can take manual [incremental and revision history backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}#back-up-data) to your own [cloud storage location](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/use-cloud-storage).
-Automated database and table level backups are not supported in CockroachDB {{ site.data.products.serverless }}. However, [user managed database and table level backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}#back-up-data) using user provided storage locations are supported.
+Automated database-level and table-level backups are not supported in CockroachDB {{ site.data.products.serverless }}. However, you can take manual [database-level and table-level backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}#back-up-data) to your own [cloud storage location](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/use-cloud-storage).
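+
+For illustration, the following sketch shows a manual database-level incremental backup with revision history to a customer-owned storage location. The bucket URI and credentials are placeholders; substitute your own values.
+
+~~~ sql
+-- Placeholder bucket URI and credentials. Appends an incremental backup with
+-- revision history to the most recent full backup in the collection.
+BACKUP DATABASE defaultdb INTO LATEST IN
+    's3://{your-bucket}/backups?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET KEY}'
+    AS OF SYSTEM TIME '-10s'
+    WITH revision_history;
+~~~
+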
-Both CockroachDB {{ site.data.products.serverless }} and CockroachDB {{ site.data.products.dedicated }} clusters do not support automated [locality-aware backups](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-and-restore-locality-aware-backups). However, user managed locality-aware backups using user provided storage locations are supported in CockroachDB {{ site.data.products.serverless }}, CockroachDB {{ site.data.products.dedicated }}, and CockroachDB {{ site.data.products.core }} clusters. That is, you need to configure and manage your own locality-aware backups.
+Neither CockroachDB {{ site.data.products.serverless }} nor CockroachDB {{ site.data.products.dedicated }} clusters support automated [locality-aware backups](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-and-restore-locality-aware-backups). However, you can take manual locality-aware backups to your own [cloud storage location](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/use-cloud-storage).
+
+{{site.data.alerts.callout_info}}
+{% include v23.2/backups/serverless-locality-aware.md %}
+{{site.data.alerts.end}}
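+
+As a sketch only — the bucket names and region are placeholders — a manual locality-aware backup supplies a list of URIs tagged with the `COCKROACH_LOCALITY` parameter, one of which must be `default`:
+
+~~~ sql
+-- Placeholder buckets; the locality key-value pair is URL-encoded
+-- (region=us-west becomes region%3Dus-west).
+BACKUP INTO
+    ('s3://{default-bucket}?COCKROACH_LOCALITY=default',
+     's3://{us-west-bucket}?COCKROACH_LOCALITY=region%3Dus-west')
+    AS OF SYSTEM TIME '-10s';
+~~~
+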
## Adding and removing regions
diff --git a/src/current/cockroachcloud/use-managed-service-backups.md b/src/current/cockroachcloud/use-managed-service-backups.md
index 4509a354d74..aa4fd316825 100644
--- a/src/current/cockroachcloud/use-managed-service-backups.md
+++ b/src/current/cockroachcloud/use-managed-service-backups.md
@@ -22,6 +22,7 @@ Cockroach Labs runs [full cluster backups](https://www.cockroachlabs.com/docs/{{
- By default, full backups are retained for 30 days. However, if you delete the backup schedule manually you will not be able to restore from these backups.
- Once a cluster is deleted, Cockroach Labs retains the full backups for 30 days and incremental backups for 7 days.
+- Backups are stored in the region where a [single-region cluster]({% link cockroachcloud/plan-your-cluster.md %}#cluster-configuration) is running, or in the primary region of a [multi-region cluster]({% link cockroachcloud/plan-your-cluster.md %}#multi-region-clusters). Every backup is stored entirely in a single region, which is chosen at random from the list of cluster regions at the time of cluster creation. This region is used indefinitely.
@@ -30,7 +31,7 @@ Cockroach Labs runs [full cluster backups](https://www.cockroachlabs.com/docs/{{
- By default, full backups are retained for 30 days, while incremental backups are retained for 7 days. However, if you delete the backup schedule manually or enable [CMEK]({% link cockroachcloud/cmek.md %}) on the cluster, this will affect the availability of managed backups. Refer to the [CockroachDB Cloud FAQs]({% link cockroachcloud/frequently-asked-questions.md %}#who-is-responsible-for-backup) for more detail.
- Once a cluster is deleted, Cockroach Labs retains the full backups for 30 days and incremental backups for 7 days.
-- Backups are stored in the same region that a [single-region cluster]({% link cockroachcloud/plan-your-cluster.md %}#cluster-configuration) is running in or the primary region of a [multi-region cluster](plan-your-cluster.html#multi-region-clusters).
+- Backups are stored in the region where a [single-region cluster]({% link cockroachcloud/plan-your-cluster.md %}#cluster-configuration) is running, or in the primary region of a [multi-region cluster]({% link cockroachcloud/plan-your-cluster.md %}#multi-region-clusters). Every backup is stored entirely in a single region, which is chosen at random from the list of cluster regions at the time of cluster creation. This region is used indefinitely.
{{site.data.alerts.callout_info}}
You cannot restore a backup of a multi-region database into a single-region database.
diff --git a/src/current/v22.2/backup-and-restore-overview.md b/src/current/v22.2/backup-and-restore-overview.md
index 7fae953edc9..773764b6688 100644
--- a/src/current/v22.2/backup-and-restore-overview.md
+++ b/src/current/v22.2/backup-and-restore-overview.md
@@ -18,13 +18,13 @@ For an explanation of how a backup works, see [Backup Architecture](backup-archi
## CockroachDB backup types
-{% include cockroachcloud/backup-types.md %}
+{% include cockroachcloud/backup-types.md %}
## Backup and restore product support
This table outlines the level of product support for backup and restore features in CockroachDB. See each of the pages linked in the table for usage examples:
-Backup / Restore | Description | Product Support
+Backup / Restore | Description | Product Support
------------------+--------------+-----------------
[Full backup](take-full-and-incremental-backups.html) | An un-replicated copy of your cluster, database, or table's data. A full backup is the base for any further backups. |
- All products (Enterprise license not required)
[Incremental backup](take-full-and-incremental-backups.html) | A copy of the changes in your data since the specified base backup (either a full backup or a full backup plus an incremental backup). | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — managed-service backups and customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
@@ -32,7 +32,7 @@ Backup / Restore | Description | Product Support
[Backups with revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) | A backup with revision history allows you to back up every change made within the garbage collection period leading up to and including the given timestamp. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
[Point-in-time restore](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) | A restore from an arbitrary point in time within the revision history of a backup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
[Encrypted backup and restore](take-and-restore-encrypted-backups.html) | An encrypted backup using a KMS or passphrase. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
-[Locality-aware backup and restore](take-and-restore-locality-aware-backups.html) | A backup where each node writes files only to the backup destination that matches the node locality configured at node startup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
+[Locality-aware backup and restore](take-and-restore-locality-aware-backups.html) | A backup where each node writes files to the backup destination that matches the node locality configured at node startup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license](enterprise-licensing.html)
{% include {{ page.version.version }}/backups/scheduled-backups-tip.md %}
@@ -40,14 +40,14 @@ Backup / Restore | Description | Product Support
The following table outlines SQL statements you can use to create, configure, pause, and show backup and restore jobs:
- SQL Statement | Description
+ SQL Statement | Description
----------------|---------------------------------------------------------------------------------------------
[`BACKUP`](backup.html) | Create full and incremental backups.
[`SHOW JOBS`](show-jobs.html) | Show a list of all running jobs or show the details of a specific job by its `job ID`.
[`PAUSE JOB`](pause-job.html) | Pause a backup or restore job with its `job ID`.
-[`RESUME JOB`](resume-job.html) | Resume a backup or restore job with its `job ID`.
+[`RESUME JOB`](resume-job.html) | Resume a backup or restore job with its `job ID`.
[`CANCEL JOB`](cancel-job.html) | Cancel a backup or restore job with its `job ID`.
-[`SHOW BACKUP`](show-backup.html) | Show a backup's details at the [backup collection's](take-full-and-incremental-backups.html#backup-collections) storage location.
+[`SHOW BACKUP`](show-backup.html) | Show a backup's details at the [backup collection's](take-full-and-incremental-backups.html#backup-collections) storage location.
[`RESTORE`](restore.html) | Restore full and incremental backups.
[`ALTER BACKUP`](alter-backup.html) | Add a new [KMS encryption key](take-and-restore-encrypted-backups.html#use-key-management-service) to an encrypted backup.
[`CREATE SCHEDULE FOR BACKUP`](create-schedule-for-backup.html) | Create a schedule for periodic backups.
diff --git a/src/current/v22.2/backup-architecture.md b/src/current/v22.2/backup-architecture.md
index c267938a01d..898e7705f43 100644
--- a/src/current/v22.2/backup-architecture.md
+++ b/src/current/v22.2/backup-architecture.md
@@ -5,10 +5,10 @@ toc: true
docs_area: manage
---
-CockroachDB backups operate as _jobs_, which are potentially long-running operations that could span multiple SQL sessions. Unlike regular SQL statements, which CockroachDB routes to the [optimizer](cost-based-optimizer.html) for processing, a [`BACKUP`](backup.html) statement will move into a job workflow. A backup job has four main phases:
+CockroachDB backups operate as _jobs_, which are potentially long-running operations that could span multiple SQL sessions. Unlike regular SQL statements, which CockroachDB routes to the [optimizer](cost-based-optimizer.html) for processing, a [`BACKUP`](backup.html) statement will move into a job workflow. A backup job has four main phases:
-1. [Job creation](#job-creation-phase)
-1. [Resolution](#resolution-phase)
+1. [Job creation](#job-creation-phase)
+1. [Resolution](#resolution-phase)
1. [Export data](#export-phase)
1. [Metadata writing](#metadata-writing-phase)
@@ -27,7 +27,7 @@ The following diagram illustrates the flow from `BACKUP` statement through to a
-## Job creation phase
+## Job creation phase
A backup begins by validating the general sense of the proposed backup.
@@ -44,40 +44,40 @@ CockroachDB will verify the options passed in the `BACKUP` statement and check t
The ultimate aim of the job creation phase is to complete all of these checks and write the detail of what the backup job should complete to a _job record_.
If a [`detached`](backup.html#detached) backup was requested, the `BACKUP` statement is complete as it has created an uncommitted, but otherwise ready-to-run backup job. You'll find the job ID returned as output. Without the `detached` option, the job is committed and the statement waits to return the results until the backup job starts, runs (as described in the following sections), and terminates.
-
+
Once the job record is committed, the cluster will try to run the backup job even if a client disconnects or the node handling the `BACKUP` statement terminates. From this point, the backup is a persisted job that any node in the cluster can take over executing to ensure it runs. The job record will move to the system jobs table, ready for a node to claim it.
-## Resolution phase
+## Resolution phase
-Once one of the nodes has claimed the job from the system jobs table, it will take the job record’s information and outline a plan. This node becomes the _coordinator_. In our example, **Node 2** becomes the coordinator and starts to complete the following to prepare and resolve the targets for distributed backup work:
+Once one of the nodes has claimed the job from the system jobs table, it will take the job record’s information and outline a plan. This node becomes the _coordinator_. In our example, **Node 2** becomes the coordinator and starts to complete the following to prepare and resolve the targets for distributed backup work:
- Test the connection to the storage bucket URL (`'s3://bucket'`).
- Determine the specific subdirectory for this backup, including if it should be incremental from any discovered existing directories.
- Calculate the keys of the backup data, as well as the time ranges if the backup is incremental.
-- Determine the [leaseholder](architecture/overview.html#architecture-leaseholder) nodes for the keys to back up.
+- Determine the [leaseholder](architecture/overview.html#architecture-leaseholder) nodes for the keys to back up.
- Provide a plan to the nodes that will execute the data export (typically the leaseholder node).
-To map out the storage location's directory to which the nodes will write the data, the coordinator identifies the [type](backup-and-restore-overview.html#backup-and-restore-product-support) of backup. This determines the name of the new (or edited) directory to store the backup files in. For example, if there is an existing full backup in the target storage location, the upcoming backup will be [incremental](take-full-and-incremental-backups.html#incremental-backups) and therefore append to the full backup after any existing incremental layers discovered in it.
+To map out the storage location's directory to which the nodes will write the data, the coordinator identifies the [type](backup-and-restore-overview.html#backup-and-restore-product-support) of backup. This determines the name of the new (or edited) directory to store the backup files in. For example, if there is an existing full backup in the target storage location, the upcoming backup will be [incremental](take-full-and-incremental-backups.html#incremental-backups) and therefore append to the full backup after any existing incremental layers discovered in it.
For more information on how CockroachDB structures backups in storage, see [Backup collections](take-full-and-incremental-backups.html#backup-collections).
### Key and time range resolution
-In this part of the resolution phase, the coordinator will calculate all the necessary spans of keys and their time ranges that the cluster needs to export for this backup. It divides the key spans based on which node is the [leaseholder](architecture/overview.html#architecture-leaseholder) of the range for that key span. Every node has a SQL processor on it to process the backup plan that the coordinator will pass to it. Typically, it is the backup SQL processor on the leaseholder node for the key span that will complete the export work.
+In this part of the resolution phase, the coordinator will calculate all the necessary spans of keys and their time ranges that the cluster needs to export for this backup. It divides the key spans based on which node is the [leaseholder](architecture/overview.html#architecture-leaseholder) of the range for that key span. Every node has a SQL processor on it to process the backup plan that the coordinator will pass to it. Typically, it is the backup SQL processor on the leaseholder node for the key span that will complete the export work.
Each of the node's backup SQL processors are responsible for:
1. Asking the [storage layer](architecture/storage-layer.html) for the content of each key span.
-1. Receiving the content from the storage layer.
-1. Writing it to the backup storage location or [locality-specific location](take-and-restore-locality-aware-backups.html) (whichever locality best matches the node).
+1. Receiving the content from the storage layer.
+1. Writing it to the backup storage location or [locality-specific location](take-and-restore-locality-aware-backups.html) (whichever locality best matches the node).
Since any node in a cluster can become the coordinator and all nodes could be responsible for exporting data during a backup, it is necessary that all nodes can connect to the backup storage location.
-## Export phase
+## Export phase
Once the coordinator has provided a plan to each of the backup SQL processors that specifies the backup data, the distributed export of the backup data begins.
-In the following diagram, **Node 2** and **Node 3** contain the leaseholders for the **R1** and **R2** [ranges](architecture/overview.html#architecture-range). Therefore, in this example backup job, the backup data will be exported from these nodes to the specified storage location.
+In the following diagram, **Node 2** and **Node 3** contain the leaseholders for the **R1** and **R2** [ranges](architecture/overview.html#architecture-range). Therefore, in this example backup job, the backup data will be exported from these nodes to the specified storage location.
While processing, the nodes emit progress data that tracks their backup work to the coordinator. In the diagram, **Node 3** will send progress data to **Node 2**. The coordinator node will then aggregate the progress data into checkpoint files in the storage bucket. The checkpoint files provide a marker for the backup to resume after a retryable state, such as when it has been [paused](pause-job.html).
@@ -91,6 +91,30 @@ The backup metadata files describe everything a backup contains. That is, all th
With the full backup complete, the specified storage location will contain the backup data and its metadata ready for a potential [restore](restore.html). After subsequent backups of the `movr` database to this storage location, CockroachDB will create a _backup collection_. See [Backup collections](take-full-and-incremental-backups.html#backup-collections) for information on how CockroachDB structures a collection of multiple backups.
+## Backup jobs with locality
+
+CockroachDB supports [locality-aware backups](#job-coordination-and-export-of-locality-aware-backups), which use a node's locality to determine where the backup data is stored. This section provides a technical overview of how the backup job process works for this type of backup.
+
+### Job coordination and export of locality-aware backups
+
+When you create a [locality-aware backup](take-and-restore-locality-aware-backups.html) job, any node in the cluster can [claim the backup job](#job-creation-phase). A successful locality-aware backup job requires that each node in the cluster has access to each storage location. This is because any node in the cluster can claim the job and become the coordinator node. Once each node informs the coordinator node that it has completed exporting the row data, the coordinator will start to write metadata, which involves writing to each locality bucket a partial manifest recording what row data was written to that [storage bucket](use-cloud-storage.html).
+
+Every node involved in the backup is responsible for backing up the ranges for which it was the [leaseholder](architecture/replication-layer.html#leases) at the time the coordinator planned the [distributed backup flow](#resolution-phase).
+
+Every node backing up a [range](architecture/overview.html#cockroachdb-architecture-terms) will back up to the storage bucket that most closely matches that node's locality. The backup job attempts to back up ranges through nodes matching the range's locality; however, this is not always possible. As a result, there is no guarantee that all ranges will be backed up to their locality's storage bucket. For additional detail on locality-aware backups in the context of a CockroachDB {{ site.data.products.serverless }} cluster, refer to [Job coordination on Serverless clusters](#job-coordination-on-serverless-clusters).
+
+The node exporting the row data, and the leaseholder of the range being backed up, are usually the same. However, these nodes can differ when lease transfers have occurred during the [execution](#export-phase) of the backup. In this case, the leaseholder node returns the files to the node exporting the backup data (usually a local transfer), which then writes the file to the external storage location with a locality that usually matches its own localities (with an overall preference for more specific values in the locality hierarchy). If there is no match, the `default` locality is used.
+
+For example, in the following diagram there is a three-node cluster split across three regions. The leaseholders write the ranges to be backed up to the external storage in the same region. As **Nodes 1** and **3** complete their work, they send updates to the coordinator node (**Node 2**). The coordinator will then [write the partial manifest files](#metadata-writing-phase) containing metadata about the backup work completed on each external storage location, which is stored with the backup SST files written to that storage location.
+
+During a [restore](restore.html) job, the job creation statement will need access to each of the storage locations to read the metadata files in order to complete a successful restore.
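+
+For illustration, a sketch of such a restore with placeholder bucket names — the statement is passed every locality's storage location so that it can read each partial manifest:
+
+~~~ sql
+-- Placeholder URIs; supply the full list of locality-aware backup locations.
+RESTORE DATABASE movr FROM LATEST IN
+    ('s3://us-east-bucket', 's3://us-west-bucket');
+~~~
+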
+
+
+
+#### Job coordination on Serverless clusters
+
+{% include {{ page.version.version }}/backups/serverless-locality-aware.md %}
+
## See also
- CockroachDB's general [Architecture Overview](architecture/overview.html)
diff --git a/src/current/v22.2/take-and-restore-locality-aware-backups.md b/src/current/v22.2/take-and-restore-locality-aware-backups.md
index c745a64ba2f..ddaa665d416 100644
--- a/src/current/v22.2/take-and-restore-locality-aware-backups.md
+++ b/src/current/v22.2/take-and-restore-locality-aware-backups.md
@@ -5,28 +5,31 @@ toc: true
docs_area: manage
---
-This page provides information about how to take and restore locality-aware backups.
-
{{site.data.alerts.callout_info}}
Locality-aware [`BACKUP`](backup.html) is an [Enterprise-only](https://www.cockroachlabs.com/product/cockroachdb/) feature. However, you can take [full backups](take-full-and-incremental-backups.html) without an Enterprise license.
{{site.data.alerts.end}}
-You can create locality-aware backups such that each node writes files only to the backup destination that matches the [node locality](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) configured at [node startup](cockroach-start.html).
+Locality-aware backups allow you to partition and store backup data in a way that is optimized for locality. When you run a locality-aware backup, nodes write backup data to the [cloud storage](use-cloud-storage.html) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}).
-This is useful for:
+{{site.data.alerts.callout_danger}}
+While a locality-aware backup will always write a node's data to the storage bucket that most closely matches that node's locality, a [range's](architecture/overview.html#cockroachdb-architecture-terms) locality will not necessarily match the node's locality. The backup job will attempt to back up ranges through nodes matching that range's locality; however, this is not always possible. As a result, **Cockroach Labs cannot guarantee that all ranges will be backed up to a cloud storage bucket that matches their locality.** You should consider this as you plan a backup strategy that must comply with [data domiciling](data-domiciling.html) requirements.
+{{site.data.alerts.end}}
-- Reducing cloud storage data transfer costs by keeping data within cloud regions.
-- Helping you comply with data domiciling requirements.
+A locality-aware backup is specified by a list of URIs, each of which has a `COCKROACH_LOCALITY` URL parameter whose single value is either `default` or a single locality key-value pair such as `region=us-east`. At least one `COCKROACH_LOCALITY` must be the `default`. [Restore jobs can read from a locality-aware backup](#restore-from-a-locality-aware-backup) when you provide the list of URIs that together contain the locations of all of the files for a single locality-aware backup.
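+
+For illustration, a sketch with placeholder bucket names. Note that the locality key-value pair in each URI must be [URL-encoded](https://en.wikipedia.org/wiki/Percent-encoding):
+
+~~~ sql
+-- Placeholder buckets; one URI must set COCKROACH_LOCALITY=default, and the
+-- locality key-value pair is URL-encoded (region=us-west becomes region%3Dus-west).
+BACKUP INTO
+    ('s3://us-east-bucket?COCKROACH_LOCALITY=default',
+     's3://us-west-bucket?COCKROACH_LOCALITY=region%3Dus-west')
+    AS OF SYSTEM TIME '-10s';
+~~~
+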
-A locality-aware backup is specified by a list of URIs, each of which has a `COCKROACH_LOCALITY` URL parameter whose single value is either `default` or a single locality key-value pair such as `region=us-east`. At least one `COCKROACH_LOCALITY` must be the `default`. Given a list of URIs that together contain the locations of all of the files for a single locality-aware backup, [`RESTORE` can read in that backup](#restore-from-a-locality-aware-backup).
+{% include {{ page.version.version }}/backups/locality-aware-access.md %}
-{{site.data.alerts.callout_info}}
-The locality query string parameters must be [URL-encoded](https://en.wikipedia.org/wiki/Percent-encoding).
-{{site.data.alerts.end}}
+## Technical overview
-Every node involved in the backup is responsible for backing up the ranges for which it was the [leaseholder](architecture/replication-layer.html#leases) at the time the [distributed backup flow](architecture/sql-layer.html#distsql) was planned. The locality of the node running the distributed backup flow determines where the backup files will be placed in a locality-aware backup. The node running the backup flow, and the leaseholder node of the range being backed up are usually the same, but can differ when lease transfers have occurred during the execution of the backup. The leaseholder node returns the files to the node running the backup flow (usually a local transfer), which then writes the file to the external storage location with a locality that matches its own localities (with an overall preference for more specific values in the locality hierarchy). If there is no match, the `default` locality is used.
+For a technical overview of how a locality-aware backup works, refer to [Job coordination and export of locality-aware backups]({% link {{ page.version.version }}/backup-architecture.md %}#job-coordination-and-export-of-locality-aware-backups).
-{% include {{ page.version.version }}/backups/support-products.md %}
+## Supported products
+
+Locality-aware backups are available in **CockroachDB {{ site.data.products.dedicated }}**, **CockroachDB {{ site.data.products.serverless }}**, and **CockroachDB {{ site.data.products.core }}** clusters when you are running [customer-owned backups](backup-and-restore-overview.html#cockroachdb-backup-types). For a full list of features, see [Backup and restore product support](backup-and-restore-overview.html#backup-and-restore-product-support).
+
+{{site.data.alerts.callout_info}}
+{% include {{ page.version.version }}/backups/serverless-locality-aware.md %}
+{{site.data.alerts.end}}
## Create a locality-aware backup
diff --git a/src/current/v23.1/backup-and-restore-overview.md b/src/current/v23.1/backup-and-restore-overview.md
index 0d352eb0975..c4c4fdd0374 100644
--- a/src/current/v23.1/backup-and-restore-overview.md
+++ b/src/current/v23.1/backup-and-restore-overview.md
@@ -33,7 +33,7 @@ Backup / Restore | Description | Product Support
[Backups with revision history]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %}) | A backup with revision history allows you to back up every change made within the garbage collection period leading up to and including the given timestamp. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
[Point-in-time restore]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %}) | A restore from an arbitrary point in time within the revision history of a backup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
[Encrypted backup and restore]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %}) | An encrypted backup using a KMS or passphrase. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
-[Locality-aware backup and restore]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) | A backup where each node writes files only to the backup destination that matches the node locality configured at node startup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
+[Locality-aware backup and restore]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) | A backup where each node writes files to the backup destination that matches the node locality configured at node startup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
[Locality-restricted backup execution]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}) | A backup with the `EXECUTION LOCALITY` option restricts the nodes that can execute a backup job with a defined locality filter. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
### Additional backup and restore features
@@ -45,7 +45,7 @@ Backup / Restore | Description | Product Support
{% include {{ page.version.version }}/backups/scheduled-backups-tip.md %}
-CockroachDB supports [creating schedules for periodic backups]({% link {{ page.version.version }}/create-schedule-for-backup.md %}). Scheduled backups ensure that the data to be backed up is protected from garbage collection until it has been successfully backed up. This active management of [protected timestamps]({% link {{ page.version.version }}/architecture/storage-layer.md %}#protected-timestamps) means that you can run scheduled backups at a cadence independent from the [GC TTL]({% link {{ page.version.version }}/configure-replication-zones.md %}#gc-ttlseconds) of the data.
+CockroachDB supports [creating schedules for periodic backups]({% link {{ page.version.version }}/create-schedule-for-backup.md %}). Scheduled backups ensure that the data to be backed up is protected from garbage collection until it has been successfully backed up. This active management of [protected timestamps]({% link {{ page.version.version }}/architecture/storage-layer.md %}#protected-timestamps) means that you can run scheduled backups at a cadence independent from the [GC TTL]({% link {{ page.version.version }}/configure-replication-zones.md %}#gc-ttlseconds) of the data.
For detail on scheduled backup features CockroachDB supports:
@@ -57,7 +57,7 @@ For detail on scheduled backup features CockroachDB supports:
CockroachDB supports two backup features that use a node's locality to determine how a backup job runs or where the backup data is stored:
- [Locality-restricted backup execution]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}): Specify a set of locality filters for a backup job in order to restrict the nodes that can participate in the backup process to that locality. This ensures that the backup job is executed by nodes that meet certain requirements, such as being located in a specific region or having access to a certain storage bucket.
-- [Locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}): Partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the cloud storage bucket that is closest to the node's locality. This is helpful if you want to reduce network costs or have data domiciling needs.
+- [Locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}): Partition and store backup data in a way that is optimized for locality. When you run a locality-aware backup, nodes write backup data to the [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}).
## Backup and restore SQL statements
diff --git a/src/current/v23.1/backup-architecture.md b/src/current/v23.1/backup-architecture.md
index 2200ded6940..7ee9fe99046 100644
--- a/src/current/v23.1/backup-architecture.md
+++ b/src/current/v23.1/backup-architecture.md
@@ -14,7 +14,7 @@ CockroachDB backups operate as _jobs_, which are potentially long-running operat
The [Overview](#overview) section that follows provides an outline of a backup job's process. For a more detailed explanation of how a backup job works, read from the [Job creation phase](#job-creation-phase) section.
-For a technical overview of [locality-aware backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) or [locality-restricted backup execution]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}), refer to the [Backup jobs with locality requirements](#backup-jobs-with-locality-requirements) section.
+For a technical overview of [locality-aware backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) or [locality-restricted backup execution]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}), refer to the [Backup jobs with locality requirements](#backup-jobs-with-locality) section.
## Overview
@@ -95,20 +95,22 @@ The backup metadata files describe everything a backup contains. That is, all th
With the full backup complete, the specified storage location will contain the backup data and its metadata ready for a potential [restore]({% link {{ page.version.version }}/restore.md %}). After subsequent backups of the `movr` database to this storage location, CockroachDB will create a _backup collection_. Refer to [Backup collections]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections) for information on how CockroachDB structures a collection of multiple backups.
-## Backup jobs with locality requirements
+## Backup jobs with locality
CockroachDB supports two backup features that use a node's locality to determine how a backup job runs or where the backup data is stored. This section provides a technical overview of how the backup job process works for each of these backup features:
-- [Locality-aware backup](#job-coordination-and-export-of-locality-aware-backups): Partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the cloud storage bucket that is closest to the node's locality. This is helpful if you want to reduce network costs or have data domiciling needs.
+- [Locality-aware backup](#job-coordination-and-export-of-locality-aware-backups): Partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the cloud storage bucket that is closest to the node's locality.
- [Locality-restricted backup execution](#job-coordination-using-the-execution-locality-option): Specify a set of locality filters for a backup job in order to restrict the nodes that can participate in the backup process to that locality. This ensures that the backup job is executed by nodes that meet certain requirements, such as being located in a specific region or having access to a certain storage bucket.
### Job coordination and export of locality-aware backups
When you create a [locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) job, any node in the cluster can [claim the backup job](#job-creation-phase). A successful locality-aware backup job requires that each node in the cluster has access to each storage location. This is because any node in the cluster can claim the job and become the coordinator node. Once each node informs the coordinator node that it has completed exporting the row data, the coordinator will start to write metadata, which involves writing to each locality bucket a partial manifest recording what row data was written to that [storage bucket]({% link {{ page.version.version }}/use-cloud-storage.md %}).
-Every node involved in the backup is responsible for backing up the ranges for which it was the [leaseholder]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) at the time the coordinator planned the [distributed backup flow]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase). The locality of the node ([configured at node startup]({% link {{ page.version.version }}/cockroach-start.md %}#locality)) exporting the row data determines where the backups files will be placed in a locality-aware backup.
+Every node involved in the backup is responsible for backing up the ranges for which it was the [leaseholder]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) at the time the coordinator planned the [distributed backup flow]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase).
-The node exporting the row data, and the leaseholder of the range being backed up, are usually the same. However, these nodes can differ when lease transfers have occurred during the [execution](#export-phase) of the backup. In this case, the leaseholder node returns the files to the node exporting the backup data (usually a local transfer), which then writes the file to the external storage location with a locality that matches its own localities (with an overall preference for more specific values in the locality hierarchy). If there is no match, the `default` locality is used.
+Every node backing up a [range]({% link {{ page.version.version }}/architecture/overview.md %}#range) will back up to the storage bucket that most closely matches that node's locality. The backup job attempts to back up ranges through nodes matching the range's locality; however, this is not always possible. As a result, there is no guarantee that all ranges will be backed up to their locality's storage bucket. For additional detail on locality-aware backups in the context of a CockroachDB {{ site.data.products.serverless }} cluster, refer to [Job coordination on Serverless clusters](#job-coordination-on-serverless-clusters).
+
+The node exporting the row data, and the leaseholder of the range being backed up, are usually the same. However, these nodes can differ when lease transfers have occurred during the [execution](#export-phase) of the backup. In this case, the leaseholder node returns the files to the node exporting the backup data (usually a local transfer), which then writes the file to the external storage location with a locality that usually matches its own localities (with an overall preference for more specific values in the locality hierarchy). If there is no match, the `default` locality is used.
For example, in the following diagram there is a three-node cluster split across three regions. The leaseholders write the ranges to be backed up to the external storage in the same region. As **Nodes 1** and **3** complete their work, they send updates to the coordinator node (**Node 2**). The coordinator will then [write the partial manifest files](#metadata-writing-phase) containing metadata about the backup work completed on each external storage location, which is stored with the backup SST files written to that storage location.
@@ -116,19 +118,27 @@ During a [restore]({% link {{ page.version.version }}/restore.md %}) job, the jo
+#### Job coordination on Serverless clusters
+
+{% include {{ page.version.version }}/backups/serverless-locality-aware.md %}
+
### Job coordination using the `EXECUTION LOCALITY` option
When you start or [resume]({% link {{ page.version.version }}/resume-job.md %}) a backup with [`EXECUTION LOCALITY`]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}), the backup job must determine the [coordinating node for the job](#job-creation-phase). If a node that does not match the locality filter is the first node to claim the job, the node is responsible for finding a node that does match the filter and transferring the execution to it. This transfer can result in a short delay in starting or resuming a backup job that has execution locality requirements.
If you create a backup job on a [gateway node]({% link {{ page.version.version }}/architecture/sql-layer.md %}#overview) with a locality filter that does **not** meet the filter requirement in `EXECUTION LOCALITY`, and the job does not use the [`DETACHED`]({% link {{ page.version.version }}/backup.md %}#detached) option, the job will return an error indicating that it moved execution to another node. This error is returned because when you create a job without the [`DETACHED`]({% link {{ page.version.version }}/backup.md %}#detached) option, the [job execution]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase) must run to completion by the gateway node while it is still attached to the SQL client to return the result.
-Once the coordinating node is determined, it will [assign chunks of row data]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase) to eligible nodes, and each node reads its assigned row data and backs it up. The coordinator will assign row data only to those nodes that match the backup job's the locality filter in full. The following situations could occur:
+Once the coordinating node is determined, it will [assign chunks of row data]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase) to eligible nodes, and each node reads its assigned row data and backs it up. The coordinator will assign row data only to those nodes that match the backup job's locality filter in full. The following situations could occur:
- If the [leaseholder]({% link {{ page.version.version }}/architecture/reads-and-writes-overview.md %}#architecture-leaseholder) for part of the row data matches the filter, the coordinator will assign it the matching row data to process.
- If the leaseholder does not match the locality filter, the coordinator will select a node from the eligible nodes with a preference for those with localities that are closest to the leaseholder.
When the coordinator assigns row data to a node matching the locality filter to back up, that node will read from the closest [replica]({% link {{ page.version.version }}/architecture/reads-and-writes-overview.md %}#architecture-replica). If the node is the leaseholder, or is itself a replica, it can read from itself. In the scenario where no replicas are available in the region of the assigned node, it may then read from a replica in a different region. As a result, you may want to consider [placing replicas]({% link {{ page.version.version }}/configure-replication-zones.md %}), including potentially non-voting replicas that will have less impact on read latency, in the locality or region you plan on pinning for backup job execution.
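+
+For illustration, a sketch of pinning backup execution to a single region; the storage URI is a placeholder:
+
+~~~ sql
+-- Placeholder storage URI. Only nodes whose locality matches the filter can
+-- participate in the job; detached avoids an error if the gateway node does
+-- not match the filter.
+BACKUP DATABASE movr INTO 's3://{backup-bucket}'
+    WITH EXECUTION LOCALITY = 'region=us-west-1', detached;
+~~~
+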
+{{site.data.alerts.callout_info}}
+Similarly to [locality-aware backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}), the backup job will send [ranges]({% link {{ page.version.version }}/architecture/overview.md %}#range) to the cloud storage bucket matching the node's locality.
+{{site.data.alerts.end}}
+
## See also
- CockroachDB's general [Architecture Overview]({% link {{ page.version.version }}/architecture/overview.md %})
diff --git a/src/current/v23.1/take-and-restore-locality-aware-backups.md b/src/current/v23.1/take-and-restore-locality-aware-backups.md
index b65bd119ba5..0caab37db9d 100644
--- a/src/current/v23.1/take-and-restore-locality-aware-backups.md
+++ b/src/current/v23.1/take-and-restore-locality-aware-backups.md
@@ -9,12 +9,11 @@ docs_area: manage
Locality-aware [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) is an [Enterprise-only](https://www.cockroachlabs.com/product/cockroachdb/) feature. However, you can take [full backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}) without an Enterprise license.
{{site.data.alerts.end}}
-Locality-aware backups allow you to partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}).
+Locality-aware backups allow you to partition and store backup data in a way that is optimized for locality. When you run a locality-aware backup, nodes write backup data to the [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}).
-This is useful for:
-
-- Reducing cloud storage data transfer costs by keeping data within cloud regions.
-- Helping you comply with data domiciling requirements.
+{{site.data.alerts.callout_danger}}
+While a locality-aware backup will always write a node's data to the storage bucket that most closely matches that node's locality, a [range's]({% link {{ page.version.version }}/architecture/overview.md %}#range) locality will not necessarily match the node's locality. The backup job will attempt to back up ranges through nodes matching that range's locality; however, this is not always possible. As a result, **Cockroach Labs cannot guarantee that all ranges will be backed up to a cloud storage bucket that matches their locality.** You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link {{ page.version.version }}/data-domiciling.md %}) requirements.
+{{site.data.alerts.end}}
A locality-aware backup is specified by a list of URIs, each of which has a `COCKROACH_LOCALITY` URL parameter whose single value is either `default` or a single locality key-value pair such as `region=us-east`. At least one `COCKROACH_LOCALITY` must be the `default`. [Restore jobs can read from a locality-aware backup](#restore-from-a-locality-aware-backup) when you provide the list of URIs that together contain the locations of all of the files for a single locality-aware backup.
@@ -24,12 +23,16 @@ A locality-aware backup is specified by a list of URIs, each of which has a `COC
For a technical overview of how a locality-aware backup works, refer to [Job coordination and export of locality-aware backups]({% link {{ page.version.version }}/backup-architecture.md %}#job-coordination-and-export-of-locality-aware-backups).
-{% include {{ page.version.version }}/backups/support-products.md %}
+## Supported products
+
+Locality-aware backups are available in **CockroachDB {{ site.data.products.dedicated }}**, **CockroachDB {{ site.data.products.serverless }}**, and **CockroachDB {{ site.data.products.core }}** clusters when you are running [customer-owned backups]({% link {{ page.version.version }}/backup-and-restore-overview.md %}#cockroachdb-backup-types). For a full list of features, see [Backup and restore product support]({% link {{ page.version.version }}/backup-and-restore-overview.md %}#backup-and-restore-product-support).
{{site.data.alerts.callout_info}}
-CockroachDB also supports _locality-restricted backup execution_, which allow you to specify a set of locality filters for a backup job to restrict the nodes that can participate in the backup process to that locality. This ensures that the backup job is executed by nodes that meet certain requirements, such as being located in a specific region or having access to a certain storage bucket. Refer to [Take Locality-restricted Backups]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}) for more detail.
+{% include {{ page.version.version }}/backups/serverless-locality-aware.md %}
{{site.data.alerts.end}}
+CockroachDB also supports _locality-restricted backup execution_, which allows you to specify a set of locality filters for a backup job to restrict the nodes that can participate in the backup process to that locality. This ensures that only nodes meeting certain requirements, such as being located in a specific region or having access to a certain storage bucket, can execute the backup. Refer to [Take Locality-restricted Backups]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}) for more detail.
+
## Create a locality-aware backup
For example, to create a locality-aware backup where nodes with the locality `region=us-west` write backup files to `s3://us-west-bucket`, and all other nodes write to `s3://us-east-bucket` by default, run:
diff --git a/src/current/v23.1/take-locality-restricted-backups.md b/src/current/v23.1/take-locality-restricted-backups.md
index 9745a2542ec..8e3e7db329f 100644
--- a/src/current/v23.1/take-locality-restricted-backups.md
+++ b/src/current/v23.1/take-locality-restricted-backups.md
@@ -15,7 +15,7 @@ Defining an execution locality for a backup job is useful in the following cases
- When a multi-region cluster is running heavy workloads and an aggressive backup schedule, designating a region as the "backup" locality may improve latency. For an example, refer to [Create a non-primary region for backup jobs](#create-a-non-primary-region-for-backup-jobs).
{{site.data.alerts.callout_info}}
-CockroachDB also supports _locality-aware backups_, which allow you to partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the cloud storage bucket that is closest to the node's locality. This is helpful if you want to reduce network costs or have data domiciling needs. Refer to [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) for more detail.
+CockroachDB also supports _locality-aware backups_, which allow you to partition and store backup data in a way that is optimized for locality. In general, when you run a locality-aware backup, nodes write backup data to the [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}). Refer to [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) for more detail.
{{site.data.alerts.end}}
## Technical overview
@@ -40,6 +40,7 @@ When you run a backup or restore that uses `EXECUTION LOCALITY`, consider the fo
- The backup or restore job may take slightly more time to start, because it must select the node that coordinates the backup or restore (the _coordinating node_). Refer to [Job coordination using the `EXECUTION LOCALITY` option]({% link {{ page.version.version }}/backup-architecture.md %}#job-coordination-using-the-execution-locality-option).
- Even after a backup or restore job has been pinned to a locality filter, it may still read data from another locality if no replicas of the data are available in the locality specified by the backup job's locality filter.
- If the job is created on a node that does not match the locality filter, you will receive an error even when the **job creation was successful**. This error indicates that the job execution moved to another node. To avoid this error when creating a manual job (as opposed to a [scheduled job]({% link {{ page.version.version }}/create-schedule-for-backup.md %})), you can use the [`DETACHED`]({% link {{ page.version.version }}/backup.md %}#detached) option with `EXECUTION LOCALITY`. Then, use the [`SHOW JOB WHEN COMPLETE`]({% link {{ page.version.version }}/show-jobs.md %}#show-job-when-complete) statement to determine when the job has finished. For more details, refer to [Job coordination using the `EXECUTION LOCALITY` option]({% link {{ page.version.version }}/backup-architecture.md %}#job-coordination-using-the-execution-locality-option).
+- The backup job will send [ranges]({% link {{ page.version.version }}/architecture/overview.md %}#range) to the cloud storage bucket matching the node's locality. However, a range's locality will not necessarily match the node's locality. The backup job will attempt to back up each range through a node matching that range's locality, but this is not always possible.
## Examples
diff --git a/src/current/v23.2/backup-and-restore-overview.md b/src/current/v23.2/backup-and-restore-overview.md
index 0d352eb0975..c4c4fdd0374 100644
--- a/src/current/v23.2/backup-and-restore-overview.md
+++ b/src/current/v23.2/backup-and-restore-overview.md
@@ -33,7 +33,7 @@ Backup / Restore | Description | Product Support
[Backups with revision history]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %}) | A backup with revision history allows you to back up every change made within the garbage collection period leading up to and including the given timestamp. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
[Point-in-time restore]({% link {{ page.version.version }}/take-backups-with-revision-history-and-restore-from-a-point-in-time.md %}) | A restore from an arbitrary point in time within the revision history of a backup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
[Encrypted backup and restore]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %}) | An encrypted backup using a KMS or passphrase. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
-[Locality-aware backup and restore]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) | A backup where each node writes files only to the backup destination that matches the node locality configured at node startup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
+[Locality-aware backup and restore]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) | A backup where each node writes files to the backup destination that matches the node locality configured at node startup. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
[Locality-restricted backup execution]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}) | A backup with the `EXECUTION LOCALITY` option restricts the nodes that can execute a backup job with a defined locality filter. | - CockroachDB {{ site.data.products.serverless }} — customer-owned backups
- CockroachDB {{ site.data.products.dedicated }} — customer-owned backups
- CockroachDB {{ site.data.products.core }} with an [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/enterprise-licensing.md %})
### Additional backup and restore features
@@ -45,7 +45,7 @@ Backup / Restore | Description | Product Support
{% include {{ page.version.version }}/backups/scheduled-backups-tip.md %}
-CockroachDB supports [creating schedules for periodic backups]({% link {{ page.version.version }}/create-schedule-for-backup.md %}). Scheduled backups ensure that the data to be backed up is protected from garbage collection until it has been successfully backed up. This active management of [protected timestamps]({% link {{ page.version.version }}/architecture/storage-layer.md %}#protected-timestamps) means that you can run scheduled backups at a cadence independent from the [GC TTL]({% link {{ page.version.version }}/configure-replication-zones.md %}#gc-ttlseconds) of the data.
+CockroachDB supports [creating schedules for periodic backups]({% link {{ page.version.version }}/create-schedule-for-backup.md %}). Scheduled backups ensure that the data to be backed up is protected from garbage collection until it has been successfully backed up. This active management of [protected timestamps]({% link {{ page.version.version }}/architecture/storage-layer.md %}#protected-timestamps) means that you can run scheduled backups at a cadence independent from the [GC TTL]({% link {{ page.version.version }}/configure-replication-zones.md %}#gc-ttlseconds) of the data.
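+
+As a sketch of the idea (the schedule label, database, storage URI, and cadence below are illustrative assumptions, not recommendations), a daily scheduled backup can be created with:
+
+~~~ sql
+CREATE SCHEDULE daily_movr_backup
+  FOR BACKUP DATABASE movr INTO 's3://backup-bucket/movr'
+  RECURRING '@daily'
+  WITH SCHEDULE OPTIONS first_run = 'now';
+~~~
+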
For detail on scheduled backup features CockroachDB supports:
@@ -57,7 +57,7 @@ For detail on scheduled backup features CockroachDB supports:
CockroachDB supports two backup features that use a node's locality to determine how a backup job runs or where the backup data is stored:
- [Locality-restricted backup execution]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}): Specify a set of locality filters for a backup job in order to restrict the nodes that can participate in the backup process to that locality. This ensures that the backup job is executed by nodes that meet certain requirements, such as being located in a specific region or having access to a certain storage bucket.
-- [Locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}): Partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the cloud storage bucket that is closest to the node's locality. This is helpful if you want to reduce network costs or have data domiciling needs.
+- [Locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}): Partition and store backup data in a way that is optimized for locality. When you run a locality-aware backup, nodes write backup data to the [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}).
## Backup and restore SQL statements
diff --git a/src/current/v23.2/backup-architecture.md b/src/current/v23.2/backup-architecture.md
index 2f2f148fa3d..1add5cda90b 100644
--- a/src/current/v23.2/backup-architecture.md
+++ b/src/current/v23.2/backup-architecture.md
@@ -14,7 +14,7 @@ CockroachDB backups operate as _jobs_, which are potentially long-running operat
The [Overview](#overview) section that follows provides an outline of a backup job's process. For a more detailed explanation of how a backup job works, read from the [Job creation phase](#job-creation-phase) section.
-For a technical overview of [locality-aware backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) or [locality-restricted backup execution]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}), refer to the [Backup jobs with locality requirements](#backup-jobs-with-locality-requirements) section.
+For a technical overview of [locality-aware backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) or [locality-restricted backup execution]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}), refer to the [Backup jobs with locality requirements](#backup-jobs-with-locality) section.
## Overview
@@ -95,20 +95,22 @@ The backup metadata files describe everything a backup contains. That is, all th
With the full backup complete, the specified storage location will contain the backup data and its metadata ready for a potential [restore]({% link {{ page.version.version }}/restore.md %}). After subsequent backups of the `movr` database to this storage location, CockroachDB will create a _backup collection_. Refer to [Backup collections]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections) for information on how CockroachDB structures a collection of multiple backups.
-## Backup jobs with locality requirements
+## Backup jobs with locality
CockroachDB supports two backup features that use a node's locality to determine how a backup job runs or where the backup data is stored. This section provides a technical overview of how the backup job process works for each of these backup features:
-- [Locality-aware backup](#job-coordination-and-export-of-locality-aware-backups): Partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the cloud storage bucket that is closest to the node's locality. This is helpful if you want to reduce network costs or have data domiciling needs.
+- [Locality-aware backup](#job-coordination-and-export-of-locality-aware-backups): Partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the cloud storage bucket that is closest to the node's locality.
- [Locality-restricted backup execution](#job-coordination-using-the-execution-locality-option): Specify a set of locality filters for a backup job in order to restrict the nodes that can participate in the backup process to that locality. This ensures that the backup job is executed by nodes that meet certain requirements, such as being located in a specific region or having access to a certain storage bucket.
### Job coordination and export of locality-aware backups
When you create a [locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) job, any node in the cluster can [claim the backup job](#job-creation-phase). A successful locality-aware backup job requires that each node in the cluster has access to each storage location. This is because any node in the cluster can claim the job and become the coordinator node. Once each node informs the coordinator node that it has completed exporting the row data, the coordinator will start to write metadata, which involves writing to each locality bucket a partial manifest recording what row data was written to that [storage bucket]({% link {{ page.version.version }}/use-cloud-storage.md %}).
-Every node involved in the backup is responsible for backing up the ranges for which it was the [leaseholder]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) at the time the coordinator planned the [distributed backup flow]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase). The locality of the node ([configured at node startup]({% link {{ page.version.version }}/cockroach-start.md %}#locality)) exporting the row data determines where the backups files will be placed in a locality-aware backup.
+Every node involved in the backup is responsible for backing up the ranges for which it was the [leaseholder]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) at the time the coordinator planned the [distributed backup flow]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase).
-The node exporting the row data, and the leaseholder of the range being backed up, are usually the same. However, these nodes can differ when lease transfers have occurred during the [execution](#export-phase) of the backup. In this case, the leaseholder node returns the files to the node exporting the backup data (usually a local transfer), which then writes the file to the external storage location with a locality that matches its own localities (with an overall preference for more specific values in the locality hierarchy). If there is no match, the `default` locality is used.
+Every node backing up a [range]({% link {{ page.version.version }}/architecture/overview.md %}#range) will back up to the storage bucket that most closely matches that node's locality. The backup job attempts to back up each range through a node matching the range's locality, but this is not always possible. As a result, there is no guarantee that all ranges will be backed up to their locality's storage bucket. For additional detail on locality-aware backups in the context of a CockroachDB {{ site.data.products.serverless }} cluster, refer to [Job coordination on Serverless clusters](#job-coordination-on-serverless-clusters).
+
+The node exporting the row data, and the leaseholder of the range being backed up, are usually the same. However, these nodes can differ when lease transfers have occurred during the [execution](#export-phase) of the backup. In this case, the leaseholder node returns the files to the node exporting the backup data (usually a local transfer), which then writes the file to the external storage location whose locality usually matches one of the node's own localities (with an overall preference for more specific values in the locality hierarchy). If there is no match, the `default` locality is used.
For example, in the following diagram there is a three-node cluster split across three regions. The leaseholders write the ranges to be backed up to the external storage in the same region. As **Nodes 1** and **3** complete their work, they send updates to the coordinator node (**Node 2**). The coordinator will then [write the partial manifest files](#metadata-writing-phase) containing metadata about the backup work completed on each external storage location, which is stored with the backup SST files written to that storage location.
@@ -116,19 +118,27 @@ During a [restore]({% link {{ page.version.version }}/restore.md %}) job, the jo
+#### Job coordination on Serverless clusters
+
+{% include {{ page.version.version }}/backups/serverless-locality-aware.md %}
+
### Job coordination using the `EXECUTION LOCALITY` option
When you start or [resume]({% link {{ page.version.version }}/resume-job.md %}) a backup with [`EXECUTION LOCALITY`]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}), the backup job must determine the [coordinating node for the job](#job-creation-phase). If a node that does not match the locality filter is the first node to claim the job, the node is responsible for finding a node that does match the filter and transferring the execution to it. This transfer can result in a short delay in starting or resuming a backup job that has execution locality requirements.
If you create a backup job on a [gateway node]({% link {{ page.version.version }}/architecture/sql-layer.md %}#overview) with a locality filter that does **not** meet the filter requirement in `EXECUTION LOCALITY`, and the job does not use the [`DETACHED`]({% link {{ page.version.version }}/backup.md %}#detached) option, the job will return an error indicating that it moved execution to another node. This error is returned because when you create a job without the [`DETACHED`]({% link {{ page.version.version }}/backup.md %}#detached) option, the [job execution]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase) must run to completion by the gateway node while it is still attached to the SQL client to return the result.
-Once the coordinating node is determined, it will [assign chunks of row data]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase) to eligible nodes, and each node reads its assigned row data and backs it up. The coordinator will assign row data only to those nodes that match the backup job's the locality filter in full. The following situations could occur:
+Once the coordinating node is determined, it will [assign chunks of row data]({% link {{ page.version.version }}/backup-architecture.md %}#resolution-phase) to eligible nodes, and each node reads its assigned row data and backs it up. The coordinator will assign row data only to those nodes that match the backup job's locality filter in full. The following situations could occur:
- If the [leaseholder]({% link {{ page.version.version }}/architecture/reads-and-writes-overview.md %}#architecture-leaseholder) for part of the row data matches the filter, the coordinator will assign it the matching row data to process.
- If the leaseholder does not match the locality filter, the coordinator will select a node from the eligible nodes with a preference for those with localities that are closest to the leaseholder.
When the coordinator assigns row data to a node matching the locality filter to back up, that node will read from the closest [replica]({% link {{ page.version.version }}/architecture/reads-and-writes-overview.md %}#architecture-replica). If the node is the leaseholder, or is itself a replica, it can read from itself. In the scenario where no replicas are available in the region of the assigned node, it may then read from a replica in a different region. As a result, you may want to consider [placing replicas]({% link {{ page.version.version }}/configure-replication-zones.md %}), including potentially non-voting replicas that will have less impact on read latency, in the locality or region you plan on pinning for backup job execution.
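+
+One possible approach, sketched here with illustrative replica counts and an assumed database name, is to add non-voting replicas through the zone configuration:
+
+~~~ sql
+ALTER DATABASE movr CONFIGURE ZONE USING num_replicas = 5, num_voters = 3;
+~~~
+
+With this configuration, two of the five replicas are non-voting and could then be placed in the region used for backup execution with [replication constraints]({% link {{ page.version.version }}/configure-replication-zones.md %}).
+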
+{{site.data.alerts.callout_info}}
+As with [locality-aware backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}), the backup job will send [ranges]({% link {{ page.version.version }}/architecture/overview.md %}#range) to the cloud storage bucket matching the node's locality.
+{{site.data.alerts.end}}
+
## See also
- CockroachDB's general [Architecture Overview]({% link {{ page.version.version }}/architecture/overview.md %})
diff --git a/src/current/v23.2/take-and-restore-locality-aware-backups.md b/src/current/v23.2/take-and-restore-locality-aware-backups.md
index b65bd119ba5..0caab37db9d 100644
--- a/src/current/v23.2/take-and-restore-locality-aware-backups.md
+++ b/src/current/v23.2/take-and-restore-locality-aware-backups.md
@@ -9,12 +9,11 @@ docs_area: manage
Locality-aware [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) is an [Enterprise-only](https://www.cockroachlabs.com/product/cockroachdb/) feature. However, you can take [full backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}) without an Enterprise license.
{{site.data.alerts.end}}
-Locality-aware backups allow you to partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}).
+Locality-aware backups allow you to partition and store backup data in a way that is optimized for locality. When you run a locality-aware backup, nodes write backup data to the [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}).
-This is useful for:
-
-- Reducing cloud storage data transfer costs by keeping data within cloud regions.
-- Helping you comply with data domiciling requirements.
+{{site.data.alerts.callout_danger}}
+While a locality-aware backup will always match the node's locality to the storage bucket's locality, a [range's]({% link {{ page.version.version }}/architecture/overview.md %}#range) locality will not necessarily match the node's locality. The backup job will attempt to back up each range through a node matching that range's locality, but this is not always possible. As a result, **Cockroach Labs cannot guarantee that all ranges will be backed up to a cloud storage bucket with the same locality.** You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link {{ page.version.version }}/data-domiciling.md %}) requirements.
+{{site.data.alerts.end}}
A locality-aware backup is specified by a list of URIs, each of which has a `COCKROACH_LOCALITY` URL parameter whose single value is either `default` or a single locality key-value pair such as `region=us-east`. At least one `COCKROACH_LOCALITY` must be the `default`. [Restore jobs can read from a locality-aware backup](#restore-from-a-locality-aware-backup) when you provide the list of URIs that together contain the locations of all of the files for a single locality-aware backup.
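+
+For illustration only, a restore reads the same locality-aware backup when it is given the full list of URIs. This is a sketch with placeholder bucket names (matching the example later on this page) and an assumed database name; cloud credentials are omitted:
+
+~~~ sql
+RESTORE DATABASE movr FROM LATEST IN
+  ('s3://us-east-bucket', 's3://us-west-bucket');
+~~~
+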
@@ -24,12 +23,16 @@ A locality-aware backup is specified by a list of URIs, each of which has a `COC
For a technical overview of how a locality-aware backup works, refer to [Job coordination and export of locality-aware backups]({% link {{ page.version.version }}/backup-architecture.md %}#job-coordination-and-export-of-locality-aware-backups).
-{% include {{ page.version.version }}/backups/support-products.md %}
+## Supported products
+
+Locality-aware backups are available in **CockroachDB {{ site.data.products.dedicated }}**, **CockroachDB {{ site.data.products.serverless }}**, and **CockroachDB {{ site.data.products.core }}** clusters when you are running [customer-owned backups]({% link {{ page.version.version }}/backup-and-restore-overview.md %}#cockroachdb-backup-types). For a full list of features, see [Backup and restore product support]({% link {{ page.version.version }}/backup-and-restore-overview.md %}#backup-and-restore-product-support).
{{site.data.alerts.callout_info}}
-CockroachDB also supports _locality-restricted backup execution_, which allow you to specify a set of locality filters for a backup job to restrict the nodes that can participate in the backup process to that locality. This ensures that the backup job is executed by nodes that meet certain requirements, such as being located in a specific region or having access to a certain storage bucket. Refer to [Take Locality-restricted Backups]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}) for more detail.
+{% include {{ page.version.version }}/backups/serverless-locality-aware.md %}
{{site.data.alerts.end}}
+CockroachDB also supports _locality-restricted backup execution_, which allows you to specify a set of locality filters for a backup job to restrict the nodes that can participate in the backup process to that locality. This ensures that only nodes that meet certain requirements, such as being located in a specific region or having access to a certain storage bucket, can execute the backup. Refer to [Take Locality-restricted Backups]({% link {{ page.version.version }}/take-locality-restricted-backups.md %}) for more detail.
+
## Create a locality-aware backup
For example, to create a locality-aware backup where nodes with the locality `region=us-west` write backup files to `s3://us-west-bucket`, and all other nodes write to `s3://us-east-bucket` by default, run:
diff --git a/src/current/v23.2/take-locality-restricted-backups.md b/src/current/v23.2/take-locality-restricted-backups.md
index 0fb069e1608..7c97020aa91 100644
--- a/src/current/v23.2/take-locality-restricted-backups.md
+++ b/src/current/v23.2/take-locality-restricted-backups.md
@@ -7,7 +7,7 @@ docs_area: manage
The `EXECUTION LOCALITY` option allows you to restrict the nodes that can execute a [backup]({% link {{ page.version.version }}/backup.md %}) job by using a [locality filter]({% link {{ page.version.version }}/cockroach-start.md %}#locality) when you create the backup. This will pin the [coordination of the backup job]({% link {{ page.version.version }}/backup-architecture.md %}#job-creation-phase) and the nodes that [process the row data]({% link {{ page.version.version }}/backup-architecture.md %}#export-phase) to the defined locality filter.
-{% include_cached new-in.html version="v23.2" %} Pass the `WITH EXECUTION LOCALITY` option for [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) to restrict execution of the job to nodes with matching localities.
+{% include_cached new-in.html version="v23.2" %} Pass the `WITH EXECUTION LOCALITY` option for [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) to restrict execution of the job to nodes with matching localities.
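+
+For example (a sketch only; the database name and storage URI below are placeholder assumptions), the filter is passed as a comma-separated list of locality key-value pairs:
+
+~~~ sql
+BACKUP DATABASE movr INTO 's3://backup-bucket'
+  WITH EXECUTION LOCALITY = 'region=us-west', DETACHED;
+~~~
+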
Defining an execution locality for a backup job is useful in the following cases:
@@ -15,7 +15,7 @@ Defining an execution locality for a backup job is useful in the following cases
- When a multi-region cluster is running heavy workloads and an aggressive backup schedule, designating a region as the "backup" locality may improve latency. For an example, refer to [Create a non-primary region for backup jobs](#create-a-non-primary-region-for-backup-jobs).
{{site.data.alerts.callout_info}}
-CockroachDB also supports _locality-aware backups_, which allow you to partition and store backup data in a way that is optimized for locality. This means that nodes write backup data to the cloud storage bucket that is closest to the node's locality. This is helpful if you want to reduce network costs or have data domiciling needs. Refer to [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) for more detail.
+CockroachDB also supports _locality-aware backups_, which allow you to partition and store backup data in a way that is optimized for locality. In general, when you run a locality-aware backup, nodes write backup data to the [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) bucket that is closest to the node locality configured at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}). Refer to [Take and Restore Locality-aware Backups]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}) for more detail.
{{site.data.alerts.end}}
## Technical overview
@@ -39,6 +39,7 @@ When you run a backup or restore that uses `EXECUTION LOCALITY`, consider the fo
- The backup or restore job may take slightly more time to start, because it must select the node that coordinates the backup or restore (the _coordinating node_). Refer to [Job coordination using the `EXECUTION LOCALITY` option]({% link {{ page.version.version }}/backup-architecture.md %}#job-coordination-using-the-execution-locality-option).
- Even after a backup or restore job has been pinned to a locality filter, it may still read data from another locality if no replicas of the data are available in the locality specified by the backup job's locality filter.
- If the job is created on a node that does not match the locality filter, you will receive an error even when the **job creation was successful**. This error indicates that the job execution moved to another node. To avoid this error when creating a manual job (as opposed to a [scheduled job]({% link {{ page.version.version }}/create-schedule-for-backup.md %})), you can use the [`DETACHED`]({% link {{ page.version.version }}/backup.md %}#detached) option with `EXECUTION LOCALITY`. Then, use the [`SHOW JOB WHEN COMPLETE`]({% link {{ page.version.version }}/show-jobs.md %}#show-job-when-complete) statement to determine when the job has finished. For more details, refer to [Job coordination using the `EXECUTION LOCALITY` option]({% link {{ page.version.version }}/backup-architecture.md %}#job-coordination-using-the-execution-locality-option).
+- The backup job will send [ranges]({% link {{ page.version.version }}/architecture/overview.md %}#range) to the cloud storage bucket matching the node's locality. However, a range's locality will not necessarily match the node's locality. The backup job will attempt to back up each range through a node matching that range's locality, but this is not always possible.
## Examples