diff --git a/src/current/_includes/common/tls-cipher-suites.md b/src/current/_includes/common/tls-cipher-suites.md index d6dff943ccd..a437ecacdb0 100644 --- a/src/current/_includes/common/tls-cipher-suites.md +++ b/src/current/_includes/common/tls-cipher-suites.md @@ -1,24 +1,13 @@ {% if include.list == 'enabled' %} -- `TLS_DHE_RSA_WITH_AES_128_GCM_SHA256` -- `TLS_DHE_RSA_WITH_AES_256_GCM_SHA384` -- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` -- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384` +- `TLS_AES_128_GCM_SHA256` +- `TLS_AES_256_GCM_SHA384` +- `TLS_CHACHA20_POLY1305_SHA256` - `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256` - `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384` -- `TLS_DHE_RSA_WITH_AES_128_CCM` -- `TLS_DHE_RSA_WITH_AES_256_CCM` +- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` +- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384` - `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256` - `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` -- `TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256` -- `TLS_DHE_PSK_WITH_AES_128_GCM_SHA256` -- `TLS_DHE_PSK_WITH_AES_256_GCM_SHA384` -- `TLS_DHE_PSK_WITH_AES_128_CCM` -- `TLS_DHE_PSK_WITH_AES_256_CCM` -- `TLS_ECDHE_PSK_WITH_AES_128_GCM_SHA256` -- `TLS_ECDHE_PSK_WITH_AES_256_GCM_SHA384` -- `TLS_ECDHE_PSK_WITH_AES_128_CCM_SHA256` -- `TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256` -- `TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256` {% endif %} {% if include.list == 'disabled' %} - `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA` diff --git a/src/current/cockroachcloud/managing-cmek.md b/src/current/cockroachcloud/managing-cmek.md index dffd36a08e3..bda1a00ffbc 100644 --- a/src/current/cockroachcloud/managing-cmek.md +++ b/src/current/cockroachcloud/managing-cmek.md @@ -5,7 +5,7 @@ toc: true docs_area: manage.security --- -[Customer-Managed Encryption Keys (CMEK)]({% link cockroachcloud/cmek.md %}) for CockroachDB {{ site.data.products.dedicated }} advanced allows the customer to delegate responsibility for the work of encrypting their cluster data to CockroachDB {{ site.data.products.cloud }}, while maintaining the ability to completely revoke CockroachDB {{ site.data.products.cloud }}'s access. +[Customer-Managed Encryption Keys (CMEK)]({% link cockroachcloud/cmek.md %}) for CockroachDB {{ site.data.products.advanced }} allows the customer to delegate responsibility for the work of encrypting their cluster data to CockroachDB {{ site.data.products.cloud }}, while maintaining the ability to completely revoke CockroachDB {{ site.data.products.cloud }}'s access. This page shows how to enable [Customer-Managed Encryption Keys (CMEK)]({% link cockroachcloud/cmek.md %}) for CockroachDB {{ site.data.products.advanced }} advanced. @@ -27,15 +27,90 @@ This section shows how to enable CMEK on a CockroachDB {{ site.data.products.adv +### Before you begin +
-### Step 1. Provision the cross-account IAM role +1. Make a note of your {{ site.data.products.cloud }} organization ID in the [Organization settings page](https://cockroachlabs.cloud/settings). +1. Find your {{ site.data.products.advanced }} cluster's ID. From the CockroachDB {{ site.data.products.cloud }} console [Clusters list](https://cockroachlabs.cloud/clusters), click the name of a cluster to open its **Cluster Overview** page. From the page's URL make a note of the **last 12 digits** of the portion of the URL before `/overview/`. This is the cluster ID. +1. Use the cluster ID to find the ID of the AWS account managed by CockroachDB {{ site.data.products.cloud }} that is associated with your cluster (not your own AWS account). Query the `clusters/` endpoint of the CockroachDB {{ site.data.products.cloud }} API: + {% include_cached copy-clipboard.html %} + ~~~shell + curl --request GET \ + --url https://cockroachlabs.cloud/api/v1/clusters/{YOUR_CLUSTER_ID} \ + --header 'Authorization: Bearer {YOUR_API_KEY}' | jq + ~~~ -Follow these steps to create a cross-account IAM role and give it permission to access the CMEK in AWS KMS. CockroachDB Cloud will assume this role to encrypt and decrypt using the CMEK. + In the response, verify that the `id` field matches the cluster ID you specified, then make a note of the `account_id`, which is the ID of the AWS account managed by CockroachDB {{ site.data.products.cloud }}. You will use it when you create the cross-account IAM role in [Step 1](#step-1-provision-the-cross-account-iam-role). + +
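+    Optionally, to print only the `account_id` field, you can filter the response with `jq`; the following is a minimal sketch, assuming `jq` is installed locally:
+
+    {% include_cached copy-clipboard.html %}
+    ~~~shell
+    # Print just the account_id field from the cluster details response.
+    curl --silent --request GET \
+      --url https://cockroachlabs.cloud/api/v1/clusters/{YOUR_CLUSTER_ID} \
+      --header 'Authorization: Bearer {YOUR_API_KEY}' | jq -r '.account_id'
+    ~~~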
+ +
+ +1. Make a note of your {{ site.data.products.cloud }} organization ID in the [Organization settings page](https://cockroachlabs.cloud/settings). +1. Find your {{ site.data.products.advanced }} cluster's ID. From the CockroachDB {{ site.data.products.cloud }} console [Clusters list](https://cockroachlabs.cloud/clusters), click the name of a cluster to open its **Cluster Overview** page. From the page's URL make a note of the **last 12 digits** of the portion of the URL before `/overview/`. This is the cluster ID. +1. Use the cluster ID to find the cluster's associated GCP Project ID, which is managed by CockroachDB {{ site.data.products.cloud }}. Query the `clusters/` endpoint of the CockroachDB {{ site.data.products.cloud }} API: -1. In CockroachDB Cloud, visit the CockroachDB {{ site.data.products.cloud }} [organization settings page](https://cockroachlabs.cloud/settings). Copy your organization ID, which you will need to create the IAM role: + {% include_cached copy-clipboard.html %} + ```shell + CLUSTER_ID= #{ your cluster ID } + API_KEY= #{ your API key } + curl --request GET \ + --url https://cockroachlabs.cloud/api/v1/clusters/${CLUSTER_ID} \ + --header "Authorization: Bearer ${API_KEY}" + ``` + + In the response, verify that the `id` field matches the cluster ID you specified, then make a note of the following: + - `account_id`: the GCP project ID. + - `regions`/`name`: one entry for each of the cluster's regions. CMEK must be configured in each of a cluster's regions. + + ```json + { + "id": "blahblahblah-9ebd-43d9-8f42-589c9e6fc081", + "name": "crl-prod-xyz", + "cockroach_version": "v22.1.1", + "plan": "DEDICATED", + "cloud_provider": "GCP", + "account_id": "crl-prod-xyz", + "state": "CREATED", + "creator_id": "blahblahblah-3457-471c-b0cb-c2ab15834329", + "operation_status": "CLUSTER_STATUS_UNSPECIFIED", + "config": { + "dedicated": { + "machine_type": "n1-standard-2", + "num_virtual_cpus": 2, + "storage_gib": 15, + "memory_gib": 7.5, + "disk_iops": 450 + } + }, + "regions": [ + { + "name": "us-east4", + "sql_dns": "crl-prod-xyz.gcp-us-east4.cockroachlabs.cloud", + "ui_dns": "crl-prod-xyz.gcp-us-east4.cockroachlabs.cloud", + "node_count": 1 + } + ], + "created_at": "2022-06-16T17:24:06.262259Z", + "updated_at": "2022-06-16T17:43:59.189571Z", + "deleted_at": null + } + ``` -1. Visit the [Clusters page](https://cockroachlabs.cloud/clusters). Click on the name of your cluster to open its cluster overview page. In the URL, copy the cluster ID: `https://cockroachlabs.cloud/cluster/{YOUR_CLUSTER_ID}/overview`. +1. Formulate the service account's email address, which is in the following format. Replace `{cluster_id}` with the cluster ID, and replace `{account_id}` with the GCP project ID. + + {% include_cached copy-clipboard.html %} + ~~~ text + crl-kms-user-{cluster_id}@{account_id}.iam.gserviceaccount.com + ~~~ +
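+    For example, a minimal shell sketch that composes the address from the sample values in the response above (substitute your own cluster ID and GCP project ID):
+
+    {% include_cached copy-clipboard.html %}
+    ~~~shell
+    CLUSTER_ID="589c9e6fc081"   # the last 12 digits of the sample "id" above
+    ACCOUNT_ID="crl-prod-xyz"   # the GCP project ID from the sample "account_id" above
+    echo "crl-kms-user-${CLUSTER_ID}@${ACCOUNT_ID}.iam.gserviceaccount.com"
+    ~~~
+
+    This prints `crl-kms-user-589c9e6fc081@crl-prod-xyz.iam.gserviceaccount.com`.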
+ +
+ +### Step 1. Provision the cross-account IAM role + +Follow these steps to create a cross-account IAM role and give it permission to access the CMEK in AWS KMS. CockroachDB Cloud will assume this role to encrypt and decrypt using the CMEK. 1. Use the CockroachDB Cloud API to find the ID of the AWS account managed by CockroachDB {{ site.data.products.cloud }} that is associated with your cluster (not your own AWS account): @@ -50,8 +125,8 @@ Follow these steps to create a cross-account IAM role and give it permission to 1. In the AWS console, visit the [IAM page](https://console.aws.amazon.com/iam/) and select **Roles** and click **Create role**. - For **Trusted entity type**, select **AWS account**. - - Select **Another AWS account** and set **Account ID**, provide the AWS account ID for your cluster. - - Select **Require external ID** and set **External ID** to your CockroachDB {{ site.data.products.cloud }} organization ID. + - Select **Another AWS account** and set **Account ID** to the AWS account ID that you found in [Before you begin](#before-you-begin). + - Select **Require external ID** and set **External ID** to your CockroachDB {{ site.data.products.cloud }} organization ID, which you found in [Before you begin](#before-you-begin). - Provide a name for the role. Do not enable any permissions. 1. Make a note of the Amazon Resource Name (ARN) for the new IAM role. @@ -62,27 +137,10 @@ Follow these steps to create a cross-account IAM role and give it permission to ### Step 1. Provision the cross-tenant service account -1. In CockroachDB Cloud, visit the CockroachDB {{ site.data.products.cloud }} [organization settings page](https://cockroachlabs.cloud/settings). Copy your organization ID, which you will need to create the cross-tenant service account. - -1. Visit the [Clusters page](https://cockroachlabs.cloud/clusters). Click on the name of your cluster to open its cluster overview page. In the URL, copy the cluster ID: `https://cockroachlabs.cloud/cluster/{YOUR_CLUSTER_ID}/overview`. - -1. Use the CockroachDB Cloud API to find the ID of the AWS account managed by CockroachDB {{ site.data.products.cloud }} that is associated with your cluster (not your own GCP project): - - {% include_cached copy-clipboard.html %} - ~~~ shell - CLUSTER_ID= #{ your cluster ID } - API_KEY= #{ your API key } - curl --request GET \ - --url https://cockroachlabs.cloud/api/v1/clusters/${CLUSTER_ID} \ - --header "Authorization: Bearer ${API_KEY}" - ~~~ - - In the response, the ID is stored in the `account_id` field. - 1. In the GCP Console, visit the [IAM service accounts page](https://console.cloud.google.com/iam-admin/serviceaccounts) for your project and click **+ Create service account**. Select **Cross-tenant**. 1. Click the new service account to open its details. 1. In **PERMISSIONS**, click **GRANT ACCESS**. - - For **New principals**, enter the service account ID for your cluster, which you found earlier. + - For **New principals**, enter the service account email address for your cluster, which you formulated in [Before you begin](#before-you-begin). - For **Role**, enter **Service Account Token Creator**. Click **SAVE**.
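+If you prefer the command line, you can apply the same grant with `gcloud`. The following is a sketch only: it assumes a newly created service account named `crdb-cmek-sa` in your project (a hypothetical name) and uses the CockroachDB-managed service account email address that you formulated in [Before you begin](#before-you-begin):
+
+{% include_cached copy-clipboard.html %}
+~~~shell
+# Allow the CockroachDB-managed service account to impersonate the new
+# cross-tenant service account by granting the Service Account Token Creator role.
+gcloud iam service-accounts add-iam-policy-binding \
+  crdb-cmek-sa@{YOUR_GCP_PROJECT}.iam.gserviceaccount.com \
+  --member="serviceAccount:crl-kms-user-{cluster_id}@{account_id}.iam.gserviceaccount.com" \
+  --role="roles/iam.serviceAccountTokenCreator"
+~~~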
@@ -318,7 +376,7 @@ Compile the information about the service account and key we've just created int {% include_cached copy-clipboard.html %} ~~~shell - export CLUSTER_REGION= # the region of the {{ site.data.products.dedicated}}-controlled GCP project where your cluster is located + export CLUSTER_REGION= # the region of the {{ site.data.products.advanced}}-controlled GCP project where your cluster is located export GCP_PROJECT_ID= # your GCP project ID export KEY_LOCATION= # location of your KMS key (region or 'global') export KEY_RING= # your KMS key ring name @@ -361,7 +419,7 @@ Compile the information about the service account and key we've just created int cat cmek_config.json | jq ~~~ -After you have built your CMEK configuration manifest with the details of your cluster and provisioned the service account and KMS key in GCP, return to [Enabling CMEK for a CockroachDB {{ site.data.products.dedicated }} cluster]({% link cockroachcloud/managing-cmek.md %}#step-4-activate-cmek). +After you have built your CMEK configuration manifest with the details of your cluster and provisioned the service account and KMS key in GCP, return to [Enabling CMEK for a CockroachDB {{ site.data.products.advanced }} cluster]({% link cockroachcloud/managing-cmek.md %}#step-4-activate-cmek). ### Step 4. Activate CMEK @@ -445,7 +503,7 @@ Within your KMS, **do not revoke** access to a CMEK that is in use by one or mor 1. In your cloud provider's KMS platform, revoke CockroachDB {{ site.data.products.cloud }}'s access to your CMEK key at the IAM level, either by removing the authorization the cross-account IAM role or by removing the cross-account IAM role's permission to access the key. - This will **not** immediately stop your cluster from encrypting and decrypting data, which does not take effect until you update your cluster in the next step. That is because CockroachDB does not use your CMEK key to encrypt/decrypt your cluster data itself. CockroachDB {{ site.data.products.dedicated }} accesses your CMEK key to encrypt/decrypt a key encryption key (KEK). This KEK is used to encrypt a data encryption key (DEK), which is used to encrypt/decrypt your application data. Your cluster will continue to use the already-provisioned DEK until you make the Cloud API call to revoke CMEK. + This will **not** immediately stop your cluster from encrypting and decrypting data, which does not take effect until you update your cluster in the next step. That is because CockroachDB does not use your CMEK key to encrypt/decrypt your cluster data itself. CockroachDB {{ site.data.products.advanced }} accesses your CMEK key to encrypt/decrypt a key encryption key (KEK). This KEK is used to encrypt a data encryption key (DEK), which is used to encrypt/decrypt your application data. Your cluster will continue to use the already-provisioned DEK until you make the Cloud API call to revoke CMEK. 1. Your cluster will continue to operate with the CMEK until you update it to revoke CMEK. To revoke access: @@ -466,35 +524,35 @@ Within your KMS, **do not revoke** access to a CMEK that is in use by one or mor ## Appendix: IAM policy for the CMEK key -This IAM policy is to be attached to the CMEK key. It grants the required KMS permissions to the cross-account IAM role to be used by CockroachDB {{ site.data.products.dedicated }}. +This IAM policy is to be attached to the CMEK key. It grants the required KMS permissions to the cross-account IAM role to be used by CockroachDB {{ site.data.products.advanced }}. 
Note that this IAM policy refers to the ARN for the cross-account IAM role you created at the end of [Step 1. Provision the cross-account IAM role](#step-1-provision-the-cross-account-iam-role). {% include_cached copy-clipboard.html %} ~~~json { - "Version": "2012-10-17", - "Id": "crdb-cmek-kms", - "Statement": [ - { - "Sid": "Allow use of the key for CMEK", - "Effect": "Allow", - "Principal": { - "AWS": "{ARN_OF_CROSS_ACCOUNT_IAM_ROLE}" - }, - "Action": [ - "kms:Encrypt", - "kms:Decrypt", - "kms:GenerateDataKey*", - "kms:DescribeKey", - "kms:ReEncrypt*" - ], - "Resource": "*" - }, - { - {OTHER_POLICY_STATEMENT_FOR_ADMINISTRATING_KEY} - } - ] + "Version": "2012-10-17", + "Id": "crdb-cmek-kms", + "Statement": [ + { + "Sid": "Allow use of the key for CMEK", + "Effect": "Allow", + "Principal": { + "AWS": "{ARN_OF_CROSS_ACCOUNT_IAM_ROLE}" + }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:GenerateDataKey*", + "kms:DescribeKey", + "kms:ReEncrypt*" + ], + "Resource": "*" + }, + { + {OTHER_POLICY_STATEMENT_FOR_ADMINISTRATING_KEY} + } + ] } ~~~ diff --git a/src/current/v24.1/cockroach-debug-zip.md b/src/current/v24.1/cockroach-debug-zip.md index 0779631dd4e..6ff6779ea90 100644 --- a/src/current/v24.1/cockroach-debug-zip.md +++ b/src/current/v24.1/cockroach-debug-zip.md @@ -114,7 +114,7 @@ Flag | Description `--include-files` | [Files](#files) to include in the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://wikipedia.org/wiki/Glob_(programming)). For example:

`--include-files=*.pprof`

Note that this flag is applied _before_ `--exclude-files`. Use [`cockroach debug list-files`]({% link {{ page.version.version }}/cockroach-debug-list-files.md %}) with this flag to see a list of files that will be contained in the `.zip`. `--include-goroutine-stacks` | Fetch stack traces for all goroutines running on each targeted node in `nodes/*/stacks.txt` and `nodes/*/stacks_with_labels.txt` files. Note that fetching stack traces for all goroutines is a "stop-the-world" operation, which can momentarily have negative impacts on SQL service latency. Exclude these goroutine stacks by using the `--include-goroutine-stacks=false` flag. Note that any periodic goroutine dumps previously taken on the node will still be included in `nodes/*/goroutines/*.txt.gz`, as these would have already been generated and don't require any additional stop-the-world operations to be collected.

**Default:** true `--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.

This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). However, many higher-level issues are ultimately related to the underlying KV layer described by these files. Only set this to `false` if directed to do so by Cockroach Labs support.

In addition, include problem ranges information in `reports/problemranges.json`.

**Default:** true -`--include-running-job-traces` | Include information about each running, traceable job (such as [backup]({% link {{ page.version.version }}/backup.md %}), [restore]({% link {{ page.version.version }}/restore.md %}), [import]({% link {{ page.version.version }}/import-into.md %}), [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %})) in `jobs/*/*/trace.zip` files. This involves collecting cluster-wide traces for each running job in the cluster.

**Default:** true +`--include-running-job-traces` | Include information about each traceable job that is running or reverting (such as [backup]({% link {{ page.version.version }}/backup.md %}), [restore]({% link {{ page.version.version }}/restore.md %}), [import]({% link {{ page.version.version }}/import-into.md %}), [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %})) in `jobs/*/*/trace.zip` files. This involves collecting cluster-wide traces for each such job in the cluster.

**Default:** true `--nodes` | Specify nodes to inspect as a comma-separated list or range of node IDs. For example:

`--nodes=1,10,13-15` `--redact` | Redact sensitive data from the generated `.zip`, with the exception of range keys, which must remain unredacted because they are essential to support CockroachDB. This flag replaces the deprecated `--redact-logs` flag, which only applied to log messages contained within `.zip`. See [Redact sensitive information](#redact-sensitive-information) for an example. `--redact-logs` | **Deprecated** Redact sensitive data from collected log files only. Use the `--redact` flag instead, which redacts sensitive data across the entire generated `.zip` as well as the collected log files. Passing the `--redact-logs` flag will be interpreted as the `--redact` flag. diff --git a/src/current/v24.2/cockroach-debug-zip.md b/src/current/v24.2/cockroach-debug-zip.md index 0779631dd4e..6ff6779ea90 100644 --- a/src/current/v24.2/cockroach-debug-zip.md +++ b/src/current/v24.2/cockroach-debug-zip.md @@ -114,7 +114,7 @@ Flag | Description `--include-files` | [Files](#files) to include in the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://wikipedia.org/wiki/Glob_(programming)). For example:

`--include-files=*.pprof`

Note that this flag is applied _before_ `--exclude-files`. Use [`cockroach debug list-files`]({% link {{ page.version.version }}/cockroach-debug-list-files.md %}) with this flag to see a list of files that will be contained in the `.zip`. `--include-goroutine-stacks` | Fetch stack traces for all goroutines running on each targeted node in `nodes/*/stacks.txt` and `nodes/*/stacks_with_labels.txt` files. Note that fetching stack traces for all goroutines is a "stop-the-world" operation, which can momentarily have negative impacts on SQL service latency. Exclude these goroutine stacks by using the `--include-goroutine-stacks=false` flag. Note that any periodic goroutine dumps previously taken on the node will still be included in `nodes/*/goroutines/*.txt.gz`, as these would have already been generated and don't require any additional stop-the-world operations to be collected.

**Default:** true `--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.

This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). However, many higher-level issues are ultimately related to the underlying KV layer described by these files. Only set this to `false` if directed to do so by Cockroach Labs support.

In addition, include problem ranges information in `reports/problemranges.json`.

**Default:** true -`--include-running-job-traces` | Include information about each running, traceable job (such as [backup]({% link {{ page.version.version }}/backup.md %}), [restore]({% link {{ page.version.version }}/restore.md %}), [import]({% link {{ page.version.version }}/import-into.md %}), [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %})) in `jobs/*/*/trace.zip` files. This involves collecting cluster-wide traces for each running job in the cluster.

**Default:** true +`--include-running-job-traces` | Include information about each traceable job that is running or reverting (such as [backup]({% link {{ page.version.version }}/backup.md %}), [restore]({% link {{ page.version.version }}/restore.md %}), [import]({% link {{ page.version.version }}/import-into.md %}), [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %})) in `jobs/*/*/trace.zip` files. This involves collecting cluster-wide traces for each such job in the cluster.

**Default:** true `--nodes` | Specify nodes to inspect as a comma-separated list or range of node IDs. For example:

`--nodes=1,10,13-15` `--redact` | Redact sensitive data from the generated `.zip`, with the exception of range keys, which must remain unredacted because they are essential to support CockroachDB. This flag replaces the deprecated `--redact-logs` flag, which only applied to log messages contained within `.zip`. See [Redact sensitive information](#redact-sensitive-information) for an example. `--redact-logs` | **Deprecated** Redact sensitive data from collected log files only. Use the `--redact` flag instead, which redacts sensitive data across the entire generated `.zip` as well as the collected log files. Passing the `--redact-logs` flag will be interpreted as the `--redact` flag. diff --git a/src/current/v24.3/cockroach-debug-zip.md b/src/current/v24.3/cockroach-debug-zip.md index 0779631dd4e..6ff6779ea90 100644 --- a/src/current/v24.3/cockroach-debug-zip.md +++ b/src/current/v24.3/cockroach-debug-zip.md @@ -114,7 +114,7 @@ Flag | Description `--include-files` | [Files](#files) to include in the generated `.zip`. This can be used to limit the size of the generated `.zip`, and affects logs, heap profiles, goroutine dumps, and/or CPU profiles. The files are specified as a comma-separated list of [glob patterns](https://wikipedia.org/wiki/Glob_(programming)). For example:

`--include-files=*.pprof`

Note that this flag is applied _before_ `--exclude-files`. Use [`cockroach debug list-files`]({% link {{ page.version.version }}/cockroach-debug-list-files.md %}) with this flag to see a list of files that will be contained in the `.zip`. `--include-goroutine-stacks` | Fetch stack traces for all goroutines running on each targeted node in `nodes/*/stacks.txt` and `nodes/*/stacks_with_labels.txt` files. Note that fetching stack traces for all goroutines is a "stop-the-world" operation, which can momentarily have negative impacts on SQL service latency. Exclude these goroutine stacks by using the `--include-goroutine-stacks=false` flag. Note that any periodic goroutine dumps previously taken on the node will still be included in `nodes/*/goroutines/*.txt.gz`, as these would have already been generated and don't require any additional stop-the-world operations to be collected.

**Default:** true `--include-range-info` | Include one file per node with information about the KV ranges stored on that node, in `nodes/{node ID}/ranges.json`.

This information can be vital when debugging issues that involve the [KV layer]({% link {{ page.version.version }}/architecture/overview.md %}#layers) (which includes everything below the SQL layer), such as data placement, load balancing, performance or other behaviors. In certain situations, on large clusters with large numbers of ranges, these files can be omitted if and only if the issue being investigated is already known to be in another layer of the system (for example, an error message about an unsupported feature or incompatible value in a SQL schema change or statement). However, many higher-level issues are ultimately related to the underlying KV layer described by these files. Only set this to `false` if directed to do so by Cockroach Labs support.

In addition, include problem ranges information in `reports/problemranges.json`.

**Default:** true -`--include-running-job-traces` | Include information about each running, traceable job (such as [backup]({% link {{ page.version.version }}/backup.md %}), [restore]({% link {{ page.version.version }}/restore.md %}), [import]({% link {{ page.version.version }}/import-into.md %}), [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %})) in `jobs/*/*/trace.zip` files. This involves collecting cluster-wide traces for each running job in the cluster.

**Default:** true +`--include-running-job-traces` | Include information about each traceable job that is running or reverting (such as [backup]({% link {{ page.version.version }}/backup.md %}), [restore]({% link {{ page.version.version }}/restore.md %}), [import]({% link {{ page.version.version }}/import-into.md %}), [physical cluster replication]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %})) in `jobs/*/*/trace.zip` files. This involves collecting cluster-wide traces for each such job in the cluster.

**Default:** true `--nodes` | Specify nodes to inspect as a comma-separated list or range of node IDs. For example:

`--nodes=1,10,13-15` `--redact` | Redact sensitive data from the generated `.zip`, with the exception of range keys, which must remain unredacted because they are essential to support CockroachDB. This flag replaces the deprecated `--redact-logs` flag, which only applied to log messages contained within `.zip`. See [Redact sensitive information](#redact-sensitive-information) for an example. `--redact-logs` | **Deprecated** Redact sensitive data from collected log files only. Use the `--redact` flag instead, which redacts sensitive data across the entire generated `.zip` as well as the collected log files. Passing the `--redact-logs` flag will be interpreted as the `--redact` flag.
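+
+For example, the following is a sketch (the host and certificates directory are hypothetical) that generates a redacted debug zip from nodes 1-3 and skips the stop-the-world goroutine stack collection:
+
+{% include_cached copy-clipboard.html %}
+~~~shell
+cockroach debug zip ./cockroach-debug.zip \
+  --host=localhost:26257 \
+  --certs-dir=certs \
+  --nodes=1-3 \
+  --redact \
+  --include-goroutine-stacks=false
+~~~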