From ee45f96ba78cbf63fd5e8791efe3c721404d41e5 Mon Sep 17 00:00:00 2001 From: Mike Lewis <76072290+mikeCRL@users.noreply.github.com> Date: Thu, 25 Jan 2024 16:17:50 -0500 Subject: [PATCH 01/18] Add v23.1 wildcarded path to Algolia exclusions DOC-9542 (#18246) * Add v23.1 wildcarded path to Algolia exclusions * Update exclusion approach --- src/current/_config_base.yml | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/src/current/_config_base.yml b/src/current/_config_base.yml index df20c90b60c..e0f68c68ea2 100644 --- a/src/current/_config_base.yml +++ b/src/current/_config_base.yml @@ -1,9 +1,11 @@ algolia: application_id: 7RXZLDVR5F files_to_exclude: - - index.html - - index.md - - search.html + - index.html + - index.md + - search.html + - src/current/v23.1/** + - v23.1/** index_name: cockroachcloud_docs search_api_key: 372a10456f4ed7042c531ff3a658771b settings: From 6d081f931c2f67f6180dc4386f35c67fc2d8c71a Mon Sep 17 00:00:00 2001 From: "Matt Linville (he/him)" Date: Thu, 25 Jan 2024 16:42:17 -0800 Subject: [PATCH 02/18] [DOC-9504] Document that before using the IdP-initiated SAML flow, you must contact support to be enabled (#18244) --- src/current/cockroachcloud/cloud-org-sso.md | 6 +++++- src/current/cockroachcloud/configure-cloud-org-sso.md | 4 ++-- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/src/current/cockroachcloud/cloud-org-sso.md b/src/current/cockroachcloud/cloud-org-sso.md index 3b25153344b..529565c5471 100644 --- a/src/current/cockroachcloud/cloud-org-sso.md +++ b/src/current/cockroachcloud/cloud-org-sso.md @@ -113,7 +113,11 @@ Yes. When Cloud Organization SSO is enabled for your CockroachDB {{ site.data.pr The following flows are supported: - The _service provider-initiated flow_, where you initiate configuration of Cloud Organization SSO through the CockroachDB {{ site.data.products.cloud }} Console. -- An _identity provider-initiated flow_, where you initiate configuration through an IdP such as Okta. +- The _identity provider-initiated flow_, where you initiate configuration through an IdP such as Okta. + + {{site.data.alerts.callout_info}} + To enable the IdP-initiated flow for your CockroachDB Cloud organization, contact [Cockroach Labs support](https://support.cockroachlabs.com/hc). + {{site.data.alerts.end}} #### What default role is assigned to users when autoprovisioning is enabled in a CockroachDB {{ site.data.products.cloud }} organization? diff --git a/src/current/cockroachcloud/configure-cloud-org-sso.md b/src/current/cockroachcloud/configure-cloud-org-sso.md index 1df4a4ba724..67831fd5dc7 100644 --- a/src/current/cockroachcloud/configure-cloud-org-sso.md +++ b/src/current/cockroachcloud/configure-cloud-org-sso.md @@ -154,7 +154,7 @@ To enable autoprovisioning for an SSO authentication method: ## Add a custom authentication method -You can add a custom authentication method to connect to any IdP that supports [Security Access Markup Language (SAML)](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) or [OpenID Connect (OIDC)](https://openid.net/connect/). +You can add a custom authentication method to connect to any IdP that supports [OpenID Connect (OIDC)](https://openid.net/connect/) or [Security Access Markup Language (SAML)](https://wikipedia.org/wiki/Security_Assertion_Markup_Language). 
### OIDC @@ -177,7 +177,7 @@ To configure a custom OIDC authentication method: ### SAML -To configure a custom SAML authentication method: +To configure a custom SAML authentication method using the service provider-initiated flow, follow these steps. If you need to use the identity provider-initiated flow instead, contact [Cockroach Labs support](https://support.cockroachlabs.com/hc). 1. Log in to your IdP and gather the following information, which you will use to configure CockroachDB {{ site.data.products.cloud }} SSO: 1. In a separate browser, log in to [CockroachDB {{ site.data.products.cloud }} Console](https://cockroachlabs.cloud) as a user with the [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator-legacy) role. From b0288d6e581ed0f6b95da895779be35dc42ee88b Mon Sep 17 00:00:00 2001 From: "Matt Linville (he/him)" Date: Thu, 25 Jan 2024 16:46:12 -0800 Subject: [PATCH 03/18] [DOC-9553] Correct the note about updating the CA certificate, document how to roll out a CA certificate update gradually (#18242) --- src/current/cockroachcloud/client-certs-dedicated.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/src/current/cockroachcloud/client-certs-dedicated.md b/src/current/cockroachcloud/client-certs-dedicated.md index f2ca9502678..e9f02a40772 100644 --- a/src/current/cockroachcloud/client-certs-dedicated.md +++ b/src/current/cockroachcloud/client-certs-dedicated.md @@ -239,12 +239,10 @@ resource "cockroach_client_ca_cert" "yourclustername" { ## Update the CA certificate for a cluster {{site.data.alerts.callout_danger}} -Clients must be provisioned with client certificates signed by the new CA prior to the update, or their new connections will be blocked. - -This operation also interrupts existing database connections. End users should be informed of a potential service interruption. +Clients must be provisioned with client certificates signed by the cluster's CA prior to the update, or their new connections will be blocked. {{site.data.alerts.end}} -This section shows how to replace the CA certificate used by your cluster for certificate-based client authentication. +This section shows how to replace the CA certificate used by your cluster for certificate-based client authentication. To roll out a new CA certificate gradually instead of following this procedure directly, CockroachDB supports the ability to include multiple CA certificates for a cluster by concatenating them in PEM format. This allows clients to connect as long as the client certificate is signed by either the old CA certificate or the new one. PEM format requires a blank line in between certificates. {{site.data.alerts.callout_success}} The [Cluster Administrator]({% link cockroachcloud/authorization.md %}#cluster-administrator) or [Org Administrator (legacy)]({% link cockroachcloud/authorization.md %}#org-administrator-legacy) Organization role is required to manage the CA certificate for a CockroachDB {{ site.data.products.dedicated }} cluster. 
From 9a9f17f936f0d53eff4550e94280c8dcec410ff0 Mon Sep 17 00:00:00 2001 From: Amruta Ranade Date: Fri, 26 Jan 2024 11:35:33 -0500 Subject: [PATCH 04/18] documented the staged release process --- src/current/_includes/sidebar-all-releases.json | 6 ++++++ src/current/releases/index.md | 4 ++++ src/current/releases/staged-release-process.md | 7 +++++++ 3 files changed, 17 insertions(+) create mode 100644 src/current/releases/staged-release-process.md diff --git a/src/current/_includes/sidebar-all-releases.json b/src/current/_includes/sidebar-all-releases.json index c346cbdceb3..ec03c02ae25 100644 --- a/src/current/_includes/sidebar-all-releases.json +++ b/src/current/_includes/sidebar-all-releases.json @@ -16,6 +16,12 @@ "/releases/kubernetes-operator.html" ] }, +{ + "title": "Staged Release Process", + "urls": [ + "/releases/staged-release-process.html" + ] +}, { "title": "Release Support Policy", "urls": [ diff --git a/src/current/releases/index.md b/src/current/releases/index.md index e760bbe3b5d..278451bb1aa 100644 --- a/src/current/releases/index.md +++ b/src/current/releases/index.md @@ -27,6 +27,10 @@ For more details, refer to [Release Naming](#release-naming). In CockroachDB v22.2.x and above, a cluster that is upgraded to an alpha binary of CockroachDB or a binary that was manually built from the `master` branch cannot subsequently be upgraded to a production release. {{site.data.alerts.end}} +## Staged release process + +As of 2024, CockroachDB will be released under a staged delivery process. New releases are made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. + {{ experimental_js_warning }} {% assign sections = site.data.releases | map: "release_type" | uniq | reverse %} diff --git a/src/current/releases/staged-release-process.md b/src/current/releases/staged-release-process.md new file mode 100644 index 00000000000..50293634cf7 --- /dev/null +++ b/src/current/releases/staged-release-process.md @@ -0,0 +1,7 @@ +title: Staged Release Process +summary: Learn about Cockroach Labs' staged release process for CockroachDB Cloud and Self-Hosted releases. +toc: true +docs_area: releases +--- + +As of 2024, CockroachDB will be released under a staged delivery process. New releases are made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. \ No newline at end of file From 7201896c7fce20c9d3db464cb6b85d9d1161c549 Mon Sep 17 00:00:00 2001 From: Amruta Ranade Date: Fri, 26 Jan 2024 11:39:53 -0500 Subject: [PATCH 05/18] fixed nit --- src/current/releases/staged-release-process.md | 1 + 1 file changed, 1 insertion(+) diff --git a/src/current/releases/staged-release-process.md b/src/current/releases/staged-release-process.md index 50293634cf7..5878c5caba4 100644 --- a/src/current/releases/staged-release-process.md +++ b/src/current/releases/staged-release-process.md @@ -1,3 +1,4 @@ +--- title: Staged Release Process summary: Learn about Cockroach Labs' staged release process for CockroachDB Cloud and Self-Hosted releases. 
toc: true From 9383e4b014054ac113b7dfc03857160ee076d580 Mon Sep 17 00:00:00 2001 From: Amruta Ranade <11484018+Amruta-Ranade@users.noreply.github.com> Date: Fri, 26 Jan 2024 11:43:24 -0500 Subject: [PATCH 06/18] Update index.md --- src/current/releases/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/current/releases/index.md b/src/current/releases/index.md index 278451bb1aa..a40601b73a5 100644 --- a/src/current/releases/index.md +++ b/src/current/releases/index.md @@ -29,7 +29,7 @@ In CockroachDB v22.2.x and above, a cluster that is upgraded to an alpha binary ## Staged release process -As of 2024, CockroachDB will be released under a staged delivery process. New releases are made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. +As of 2024, CockroachDB will be released under a staged delivery process. New releases will be made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. {{ experimental_js_warning }} From c949c7e50f6a799811036bfb49b2a631e1625a7d Mon Sep 17 00:00:00 2001 From: Amruta Ranade <11484018+Amruta-Ranade@users.noreply.github.com> Date: Fri, 26 Jan 2024 11:43:36 -0500 Subject: [PATCH 07/18] Update staged-release-process.md --- src/current/releases/staged-release-process.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/current/releases/staged-release-process.md b/src/current/releases/staged-release-process.md index 5878c5caba4..11d7c6d2f24 100644 --- a/src/current/releases/staged-release-process.md +++ b/src/current/releases/staged-release-process.md @@ -5,4 +5,4 @@ toc: true docs_area: releases --- -As of 2024, CockroachDB will be released under a staged delivery process. New releases are made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. \ No newline at end of file +As of 2024, CockroachDB will be released under a staged delivery process. New releases will be made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. From b30f489684271da9931dbc33cb62eb3868b0b319 Mon Sep 17 00:00:00 2001 From: Amruta Ranade Date: Fri, 26 Jan 2024 13:28:42 -0500 Subject: [PATCH 08/18] Worked on feedback --- src/current/_data/releases.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/current/_data/releases.yml b/src/current/_data/releases.yml index b2c6760111d..95156dfc595 100644 --- a/src/current/_data/releases.yml +++ b/src/current/_data/releases.yml @@ -5385,10 +5385,10 @@ release_date: '2024-01-17' release_type: Production cloud_only: true - cloud_only_message_short: 'Available in CockroachDB Dedicated. Self-hosted binaries available February 5.' + cloud_only_message_short: 'Available in CockroachDB Dedicated. Self-hosted binaries available February 5 as per the staged release process.' cloud_only_message: > CockroachDB v23.2 is now generally available for CockroachDB Dedicated, - and is scheduled to be made available for CockroachDB Self-Hosted on February 5, 2024. + and is scheduled to be made available for CockroachDB Self-Hosted on February 5, 2024 as per the staged release process. For more information, refer to Create a CockroachDB Dedicated Cluster or Upgrade to CockroachDB v23.2. 
From c5416cd90f8606daa6d2c7687fafd2bffb83533e Mon Sep 17 00:00:00 2001 From: Amruta Ranade <11484018+Amruta-Ranade@users.noreply.github.com> Date: Fri, 26 Jan 2024 14:50:22 -0500 Subject: [PATCH 09/18] Apply suggestions from code review Co-authored-by: Mike Lewis <76072290+mikeCRL@users.noreply.github.com> --- src/current/_data/releases.yml | 4 ++-- src/current/releases/index.md | 2 +- src/current/releases/staged-release-process.md | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/src/current/_data/releases.yml b/src/current/_data/releases.yml index 95156dfc595..46ce49df64b 100644 --- a/src/current/_data/releases.yml +++ b/src/current/_data/releases.yml @@ -5385,10 +5385,10 @@ release_date: '2024-01-17' release_type: Production cloud_only: true - cloud_only_message_short: 'Available in CockroachDB Dedicated. Self-hosted binaries available February 5 as per the staged release process.' + cloud_only_message_short: 'Available in CockroachDB Dedicated. Self-hosted binaries available February 5 per the staged release process.' cloud_only_message: > CockroachDB v23.2 is now generally available for CockroachDB Dedicated, - and is scheduled to be made available for CockroachDB Self-Hosted on February 5, 2024 as per the staged release process. + and is scheduled to be made available for CockroachDB Self-Hosted on February 5, 2024 per the staged release process. For more information, refer to Create a CockroachDB Dedicated Cluster or Upgrade to CockroachDB v23.2. diff --git a/src/current/releases/index.md b/src/current/releases/index.md index a40601b73a5..a9417774619 100644 --- a/src/current/releases/index.md +++ b/src/current/releases/index.md @@ -29,7 +29,7 @@ In CockroachDB v22.2.x and above, a cluster that is upgraded to an alpha binary ## Staged release process -As of 2024, CockroachDB will be released under a staged delivery process. New releases will be made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. +As of 2024, CockroachDB is released under a staged delivery process. New releases are made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. {{ experimental_js_warning }} diff --git a/src/current/releases/staged-release-process.md b/src/current/releases/staged-release-process.md index 11d7c6d2f24..2953a9a2dd9 100644 --- a/src/current/releases/staged-release-process.md +++ b/src/current/releases/staged-release-process.md @@ -5,4 +5,4 @@ toc: true docs_area: releases --- -As of 2024, CockroachDB will be released under a staged delivery process. New releases will be made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. +As of 2024, CockroachDB is released under a staged delivery process. New releases are made available for CockroachDB Cloud clusters for two weeks before binaries are published for CockroachDB Self-Hosted downloads. From 0305dc9b38a9aa7cc9178f690d7102b9a851a13d Mon Sep 17 00:00:00 2001 From: Florence Morris Date: Mon, 29 Jan 2024 09:41:17 -0500 Subject: [PATCH 10/18] (1) On Export Metrics to Dedicated Cluster page, added Prometheus filter and sections. (2) Added Cloud release note with link to Export Metrics to Dedicated Cluster page with Prometheus filter. (3) On Monitor CockroachDB with Prometheus page for self-hosted, added tip with link to Export Metrics to Dedicated Cluster page. 
(#18252) --- src/current/_data/cloud_releases.csv | 1 + .../_includes/releases/cloud/2024-01-29.md | 5 + src/current/cockroachcloud/export-metrics.md | 170 ++++++++++++++++-- .../monitor-cockroachdb-with-prometheus.md | 4 + 4 files changed, 168 insertions(+), 12 deletions(-) create mode 100644 src/current/_includes/releases/cloud/2024-01-29.md diff --git a/src/current/_data/cloud_releases.csv b/src/current/_data/cloud_releases.csv index ab2c21b8bf3..4cd9e3d4432 100644 --- a/src/current/_data/cloud_releases.csv +++ b/src/current/_data/cloud_releases.csv @@ -75,3 +75,4 @@ date,sha 2023-12-19,null 2023-12-21,null 2024-01-17,null +2024-01-29,null \ No newline at end of file diff --git a/src/current/_includes/releases/cloud/2024-01-29.md b/src/current/_includes/releases/cloud/2024-01-29.md new file mode 100644 index 00000000000..7973a5d41a4 --- /dev/null +++ b/src/current/_includes/releases/cloud/2024-01-29.md @@ -0,0 +1,5 @@ +## January 29, 2024 + +

General updates

+ +- CockroachDB {{ site.data.products.dedicated }} clusters now [export metrics]({% link cockroachcloud/export-metrics.md %}#the-metricexport-endpoint) to third-party monitoring tool [Prometheus]({% link cockroachcloud/export-metrics.md %}?filters=prometheus-metrics-export). This feature is available in [preview]({% link {{site.current_cloud_version}}/cockroachdb-feature-availability.md %}). diff --git a/src/current/cockroachcloud/export-metrics.md b/src/current/cockroachcloud/export-metrics.md index 630ab856465..7be4eb9c976 100644 --- a/src/current/cockroachcloud/export-metrics.md +++ b/src/current/cockroachcloud/export-metrics.md @@ -6,13 +6,13 @@ docs_area: manage cloud: true --- -CockroachDB {{ site.data.products.dedicated }} users can use the [Cloud API]({% link cockroachcloud/cloud-api.md %}) to configure metrics export to [AWS CloudWatch](https://aws.amazon.com/cloudwatch/) or [Datadog](https://www.datadoghq.com/). Once the export is configured, metrics will flow from all nodes in all regions of your CockroachDB {{ site.data.products.dedicated }} cluster to your chosen cloud metrics sink. +CockroachDB {{ site.data.products.dedicated }} users can use the [Cloud API]({% link cockroachcloud/cloud-api.md %}) to configure metrics export to [AWS CloudWatch](https://aws.amazon.com/cloudwatch/), [Datadog](https://www.datadoghq.com/), or [Prometheus](https://prometheus.io/). Once the export is configured, metrics will flow from all nodes in all regions of your CockroachDB {{ site.data.products.dedicated }} cluster to your chosen cloud metrics sink. {{site.data.alerts.callout_success}} -CockroachDB {{ site.data.products.dedicated }} clusters use Cloud Console instead of DB Console, and DB Console is disabled. To export metrics from a CockroachDB {{ site.data.products.core }} cluster, refer to [Monitoring and Alerting](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/monitoring-and-alerting) instead of this page. +To export metrics from a CockroachDB {{ site.data.products.core }} cluster, refer to [Monitoring and Alerting](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/monitoring-and-alerting) instead of this page. {{site.data.alerts.end}} -Exporting metrics to AWS CloudWatch is only available on CockroachDB {{ site.data.products.dedicated }} clusters which are hosted on AWS. Metrics export to Datadog is supported on all CockroachDB {{ site.data.products.dedicated }} clusters. +Exporting metrics to AWS CloudWatch is only available on CockroachDB {{ site.data.products.dedicated }} clusters which are hosted on AWS. Metrics export to Datadog and Prometheus is supported on all CockroachDB {{ site.data.products.dedicated }} clusters. {{site.data.alerts.callout_info}} {% include_cached feature-phases/preview.md %} @@ -28,8 +28,9 @@ Cloud metrics sink | Metrics export endpoint ------------------ | ---------------------------------------------------- AWS Cloudwatch | `https://cockroachlabs.cloud/api/v1/clusters/{your_cluster_id}/metricexport/cloudwatch` Datadog | `https://cockroachlabs.cloud/api/v1/clusters/{your_cluster_id}/metricexport/datadog` +Prometheus | `https://cockroachlabs.cloud/api/v1/clusters/{your_cluster_id}/metricexport/prometheus` -Access to the `metricexport` endpoints requires a valid CockroachDB {{ site.data.products.cloud }} [service account]({% link cockroachcloud/managing-access.md %}#manage-service-accounts) with the appropriate permissions (`admin` privilege or Cluster Admin role). 
+Access to the `metricexport` endpoints requires a valid CockroachDB {{ site.data.products.cloud }} [service account]({% link cockroachcloud/managing-access.md %}#manage-service-accounts) with the appropriate permissions (`admin` privilege, Cluster Administrator role, or Cluster Operator role). The following methods are available for use with the `metricexport` endpoints, and require the listed service account permissions: @@ -37,7 +38,7 @@ Method | Required permissions | Description --- | --- | --- `GET` | `ADMIN`, `EDIT`, or `READ` | Returns the current status of the metrics export configuration. `POST` | `ADMIN` or `EDIT` | Enables metrics export, or updates an existing metrics export configuration. -`DELETE` | `ADMIN` | Disables metrics export, halting all metrics export to AWS CloudWatch or Datadog. +`DELETE` | `ADMIN` | Disables metrics export, halting all metrics export to AWS CloudWatch, Datadog, or Prometheus. See [Service accounts]({% link cockroachcloud/managing-access.md %}#manage-service-accounts) for instructions on configuring a service account with these required permissions. @@ -46,11 +47,12 @@ See [Service accounts]({% link cockroachcloud/managing-access.md %}#manage-servi
+
-Exporting metrics to AWS CloudWatch is only available on CockroachDB {{ site.data.products.dedicated }} clusters which are hosted on AWS. If your CockroachDB {{ site.data.products.dedicated }} cluster is hosted on GCP or Azure, you can [export metrics to Datadog](export-metrics.html?filters=datadog-metrics-export) instead. +Exporting metrics to AWS CloudWatch is only available on CockroachDB {{ site.data.products.dedicated }} clusters which are hosted on AWS. If your CockroachDB {{ site.data.products.dedicated }} cluster is hosted on GCP or Azure, you can export metrics to [Datadog]({% link cockroachcloud/export-metrics.md %}?filters=datadog-metrics-export) or [Prometheus]({% link cockroachcloud/export-metrics.md %}?filters=prometheus-metrics-export) instead. {{site.data.alerts.callout_info}} Enabling metrics export will send around 250 metrics per node to AWS CloudWatch. Review the [AWS CloudWatch documentation](https://aws.amazon.com/cloudwatch/pricing/) to gauge how this adds to your AWS CloudWatch spend. @@ -64,7 +66,7 @@ Perform the following steps to enable metrics export from your CockroachDB {{ si 1. Visit the CockroachDB {{ site.data.products.cloud }} console [cluster page](https://cockroachlabs.cloud/clusters). 1. Click on the name of your cluster. - 1. Find your cluster ID in the URL of the single cluster overview page: `https://cockroachlabs.cloud/cluster/{your_cluster_id}/overview`. It should resemble `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. + 1. Find your cluster ID in the URL of the single cluster overview page: `https://cockroachlabs.cloud/cluster/{your_cluster_id}/overview`. The ID should resemble `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. 1. Determine your cluster's cloud provider account ID. This command uses the third-party JSON parsing tool [`jq`](https://stedolan.github.io/jq/download/) to isolate just the needed `account_id` field: @@ -128,7 +130,7 @@ Perform the following steps to enable metrics export from your CockroachDB {{ si 1. Copy the [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) of the `CockroachCloudMetricsExportRole` role found under **Summary**, which is needed for the next step. -1. Issue the following Cloud API command to enable metrics export for your CockroachDB {{ site.data.products.dedicated }} cluster: +1. Issue the following [Cloud API]({% link cockroachcloud/cloud-api.md %}) command to enable metrics export for your CockroachDB {{ site.data.products.dedicated }} cluster: {% include_cached copy-clipboard.html %} ~~~shell @@ -173,7 +175,7 @@ To enable metrics export for your CockroachDB {{ site.data.products.dedicated }} 1. Visit the CockroachDB {{ site.data.products.cloud }} console [cluster page](https://cockroachlabs.cloud/clusters). 1. Click on the name of your cluster. - 1. Find your cluster ID in the URL of the single cluster overview page: `https://cockroachlabs.cloud/cluster/{your_cluster_id}/overview`. It should resemble `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. + 1. Find your cluster ID in the URL of the single cluster overview page: `https://cockroachlabs.cloud/cluster/{your_cluster_id}/overview`. The ID should resemble `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. 1. Determine the [Datadog API key](https://docs.datadoghq.com/account_management/api-app-keys/) you'd like to use. If you don't already have one, follow the steps to [add a new Datadog API key](https://docs.datadoghq.com/account_management/api-app-keys/#add-an-api-key-or-client-token). 
@@ -214,11 +216,118 @@ A subset of CockroachDB metrics require that you explicitly [enable percentiles]
+
+ +1. Find your CockroachDB {{ site.data.products.dedicated }} cluster ID: + + 1. Visit the CockroachDB {{ site.data.products.cloud }} console [cluster page](https://cockroachlabs.cloud/clusters). + 1. Click on the name of your cluster. + 1. Find your cluster ID in the URL of the single cluster overview page: `https://cockroachlabs.cloud/cluster/{your_cluster_id}/overview`. The ID should resemble `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. + +1. Issue the following [Cloud API]({% link cockroachcloud/cloud-api.md %}) command to enable metrics export for your CockroachDB {{ site.data.products.dedicated }} cluster: + + {% include_cached copy-clipboard.html %} + ~~~shell + curl --request POST \ + --url https://cockroachlabs.cloud/api/v1/clusters/{cluster_id}/metricexport/prometheus \ + --header "Authorization: Bearer {secret_key}" + ~~~ + + Where: + - `{cluster_id}` is your CockroachDB {{ site.data.products.dedicated }} cluster ID as determined in step 1, resembling `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. + - `{secret_key}` is your CockroachDB {{ site.data.products.dedicated }} API key. See [API Access]({% link cockroachcloud/managing-access.md %}) for instructions on generating this key. + +1. Depending on the size of your cluster and how many regions it spans, the configuration may take a moment. You can monitor the ongoing status of the configuration using the following Cloud API command: + + {% include_cached copy-clipboard.html %} + ~~~shell + curl --request GET \ + --url https://cockroachlabs.cloud/api/v1/clusters/{cluster_id}/metricexport/prometheus \ + --header "Authorization: Bearer {secret_key}" + ~~~ + + Run the command periodically until the command returns a `status` of `ENABLED`, at which point the configuration across all nodes is complete. The response also includes `targets`, a map of scrape endpoints exposing metrics to regions. For example: + + ~~~ + { + "cluster_id": "f78b7feb-b6cf-4396-9d7f-494982d7d81e", + "user_message": "This integration is active.", + "status": "ENABLED", + "targets": { + "us-east4": "https://cockroachlabs.cloud/api/v1/clusters/f78b7feb-b6cf-4396-9d7f-494982d7d81e/metricexport/prometheus/us-east4/scrape" + } + } + ~~~ + + There is a separate scrape endpoint per region if you have a [multi-region]({% link {{site.current_cloud_version}}/multiregion-overview.md %}) cluster. + +1. You can test a scrape endpoint by using the following Cloud API command: + + {% include_cached copy-clipboard.html %} + ~~~shell + curl --request GET \ + --url https://cockroachlabs.cloud/api/v1/clusters/{cluster_id}/metricexport/prometheus/{cluster_region}/scrape \ + --header "Authorization: Bearer {secret_key}" + ~~~ + + Where: + - `{cluster_id}` is your CockroachDB {{ site.data.products.dedicated }} cluster ID as determined in step 1, resembling `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. + - `{cluster_region}` is a region of your CockroachDB {{ site.data.products.dedicated }} cluster as shown in the `targets` of step 3, such as `us-east4`. You can also find your cluster’s region(s) on the [Cluster Overview page]({% link cockroachcloud/cluster-overview-page.md %}). + - `{secret_key}` is your CockroachDB {{ site.data.products.dedicated }} API key. See [API Access]({% link cockroachcloud/managing-access.md %}) for instructions on generating this key. + + Metrics are labeled with the cluster name and ID, node, organization name, and region. 
The beginning lines of a metrics scrape response follows: + + ~~~ + # HELP crdb_dedicated_addsstable_applications Number of SSTable ingestions applied (i.e. applied by Replicas) + # TYPE crdb_dedicated_addsstable_applications counter + crdb_dedicated_addsstable_applications{cluster="test-gcp",instance="172.28.0.167:8080",node="cockroachdb-j5t6j",node_id="1",organization="CRL - Test",region="us-east4",store="1"} 0 + crdb_dedicated_addsstable_applications{cluster="test-gcp",instance="172.28.0.49:8080",node="cockroachdb-ttzj8",node_id="3",organization="CRL - Test",region="us-east4",store="3"} 0 + crdb_dedicated_addsstable_applications{cluster="test-gcp",instance="172.28.0.99:8080",node="cockroachdb-r5rns",node_id="2",organization="CRL - Test",region="us-east4",store="2"} 0 + # HELP crdb_dedicated_addsstable_copies number of SSTable ingestions that required copying files during application + # TYPE crdb_dedicated_addsstable_copies counter + crdb_dedicated_addsstable_copies{cluster="test-gcp",instance="172.28.0.167:8080",node="cockroachdb-j5t6j",node_id="1",organization="CRL - Test",region="us-east4",store="1"} 0 + crdb_dedicated_addsstable_copies{cluster="test-gcp",instance="172.28.0.49:8080",node="cockroachdb-ttzj8",node_id="3",organization="CRL - Test",region="us-east4",store="3"} 0 + crdb_dedicated_addsstable_copies{cluster="test-gcp",instance="172.28.0.99:8080",node="cockroachdb-r5rns",node_id="2",organization="CRL - Test",region="us-east4",store="2"} 0 + # HELP crdb_dedicated_addsstable_proposals Number of SSTable ingestions proposed (i.e. sent to Raft by lease holders) + # TYPE crdb_dedicated_addsstable_proposals counter + crdb_dedicated_addsstable_proposals{cluster="test-gcp",instance="172.28.0.167:8080",node="cockroachdb-j5t6j",node_id="1",organization="CRL - Test",region="us-east4",store="1"} 0 + crdb_dedicated_addsstable_proposals{cluster="test-gcp",instance="172.28.0.49:8080",node="cockroachdb-ttzj8",node_id="3",organization="CRL - Test",region="us-east4",store="3"} 0 + crdb_dedicated_addsstable_proposals{cluster="test-gcp",instance="172.28.0.99:8080",node="cockroachdb-r5rns",node_id="2",organization="CRL - Test",region="us-east4",store="2"} 0 + ~~~ + +1. Once metrics export has been enabled and the scrape endpoint(s) tested, you need to configure your metrics collector to periodically poll the scrape endpoint(s). Configure your [Prometheus configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) file's [`scrape_configs` section](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) as in the following example: + + {% include_cached copy-clipboard.html %} + ~~~yaml + global: + scrape_interval: 10s + evaluation_interval: 10s + + # Prometheus configuration for CockroachDB Dedicated for a single region + scrape_configs: + - job_name: '{job_name}' + metrics_path: '/api/v1/clusters/{cluster_id}/metricexport/prometheus/{cluster_region}/scrape' + static_configs: + - targets: ['cockroachlabs.cloud'] + scheme: 'https' + authorization: + credentials: '{secret_key}' + ~~~ + + Where: + - `{job_name}` is a job name you assign to scraped metrics by default, such as `scrape-cockroach-us-east4`. + - `{cluster_id}` is your CockroachDB {{ site.data.products.dedicated }} cluster ID as determined in step 1, resembling `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. + - `{cluster_region}` is a region of your CockroachDB {{ site.data.products.dedicated }} cluster as shown in the `targets` of step 3, such as `us-east4`. 
You can also find your cluster’s region(s) on the [Cluster Overview page]({% link cockroachcloud/cluster-overview-page.md %}). + - `{secret_key}` is your CockroachDB {{ site.data.products.dedicated }} API key. See [API Access]({% link cockroachcloud/managing-access.md %}) for instructions on generating this key. + +
+ ## Monitor the status of a metrics export configuration
+
@@ -257,15 +366,34 @@ Where:
+
+ +To check the status of an existing Prometheus metrics export configuration, use the following Cloud API command: + +{% include_cached copy-clipboard.html %} +~~~shell +curl --request GET \ + --url https://cockroachlabs.cloud/api/v1/clusters/{cluster_id}/metricexport/prometheus \ + --header "Authorization: Bearer {secret_key}" +~~~ + +Where: + +- `{cluster_id}` is your CockroachDB {{ site.data.products.dedicated }} cluster's cluster ID, which can be found in the URL of your [Cloud Console](https://cockroachlabs.cloud/clusters/) for the specific cluster you wish to configure, resembling `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. +- `{secret_key}` is your CockroachDB {{ site.data.products.dedicated }} API key. See [API Access]({% link cockroachcloud/managing-access.md %}) for instructions on generating this key. + +
+ ## Update an existing metrics export configuration -To update an existing CockroachDB {{ site.data.products.dedicated }} metrics export configuration, make any necessary changes to your cloud provider configuration (e.g., AWS CloudWatch or Datadog), then issue the same `POST` Cloud API command as shown in the [Enable metrics export](#enable-metrics-export) instructions for your cloud provider with the desired updated configuration. Follow the [Monitor the status of a metrics export configuration](#monitor-the-status-of-a-metrics-export-configuration) instructions to ensure the update completes successfully. +To update an existing CockroachDB {{ site.data.products.dedicated }} metrics export configuration, make any necessary changes to your cloud provider configuration (e.g., AWS CloudWatch, Datadog, or Prometheus), then issue the same `POST` Cloud API command as shown in the [Enable metrics export](#enable-metrics-export) instructions for your cloud provider with the desired updated configuration. Follow the [Monitor the status of a metrics export configuration](#monitor-the-status-of-a-metrics-export-configuration) instructions to ensure the update completes successfully. ## Disable metrics export
+
@@ -304,9 +432,27 @@ Where:
+
+ +To disable an existing Prometheus metrics export configuration, and stop sending metrics to Prometheus, use the following Cloud API command: + +{% include_cached copy-clipboard.html %} +~~~shell +curl --request DELETE \ + --url https://cockroachlabs.cloud/api/v1/clusters/{cluster_id}/metricexport/prometheus \ + --header "Authorization: Bearer {secret_key}" +~~~ + +Where: + +- `{cluster_id}` is your CockroachDB {{ site.data.products.dedicated }} cluster's cluster ID, which can be found in the URL of your [Cloud Console](https://cockroachlabs.cloud/clusters/) for the specific cluster you wish to configure, resembling `f78b7feb-b6cf-4396-9d7f-494982d7d81e`. +- `{secret_key}` is your CockroachDB {{ site.data.products.dedicated }} API key. See [API Access]({% link cockroachcloud/managing-access.md %}) for instructions on generating this key. + +
+ ## Limitations -- Metrics export to AWS CloudWatch is only available on CockroachDB {{ site.data.products.dedicated }} clusters which are hosted on AWS. If your CockroachDB {{ site.data.products.dedicated }} cluster is hosted on GCP or Azure, you can [export metrics to Datadog](export-metrics.html?filters=datadog-metrics-export) instead. +- Metrics export to AWS CloudWatch is only available on CockroachDB {{ site.data.products.dedicated }} clusters which are hosted on AWS. If your CockroachDB {{ site.data.products.dedicated }} cluster is hosted on GCP or Azure, you can export metrics to [Datadog]({% link cockroachcloud/export-metrics.md %}?filters=datadog-metrics-export) or [Prometheus]({% link cockroachcloud/export-metrics.md %}?filters=prometheus-metrics-export) instead. - AWS CloudWatch does not currently support histograms. Any histogram-type metrics emitted from your CockroachDB {{ site.data.products.dedicated }} cluster are dropped by CloudWatch. See [Prometheus metric type conversion](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-metrics-conversion.html) for more information, and [Logging dropped Prometheus metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-troubleshooting-EKS.html#ContainerInsights-Prometheus-troubleshooting-droppedmetrics) for instructions on tracking dropped histogram metrics in CloudWatch. ## Troubleshooting @@ -317,7 +463,7 @@ Be sure you are providing **your own** AWS Account ID as shown on the AWS [IAM p If you are using an existing AWS role, or are otherwise using a role name different from the example name used in this tutorial, be sure to use your own role name in step 8 in place of `CockroachCloudMetricsExportRole`. -Your CockroachDB {{ site.data.products.dedicated }} cluster must be running on AWS (not GCP or Azure) to make use of metrics export to AWS CloudWatch. If your CockroachDB {{ site.data.products.dedicated }} cluster is hosted on GCP or Azure, you can [export metrics to Datadog](export-metrics.html?filters=datadog-metrics-export) instead. +Your CockroachDB {{ site.data.products.dedicated }} cluster must be running on AWS (not GCP or Azure) to make use of metrics export to AWS CloudWatch. If your CockroachDB {{ site.data.products.dedicated }} cluster is hosted on GCP or Azure, you can export metrics to [Datadog]({% link cockroachcloud/export-metrics.md %}?filters=datadog-metrics-export) or [Prometheus]({% link cockroachcloud/export-metrics.md %}?filters=prometheus-metrics-export) instead. ## See Also diff --git a/src/current/v23.2/monitor-cockroachdb-with-prometheus.md b/src/current/v23.2/monitor-cockroachdb-with-prometheus.md index 6a28cbf42c0..c59763f8648 100644 --- a/src/current/v23.2/monitor-cockroachdb-with-prometheus.md +++ b/src/current/v23.2/monitor-cockroachdb-with-prometheus.md @@ -7,6 +7,10 @@ docs_area: manage CockroachDB generates detailed time series metrics for each node in a cluster. This page shows you how to pull these metrics into [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying time series data. It also shows you how to connect [Grafana](https://grafana.com/) and [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) to Prometheus for flexible data visualizations and notifications. +{{site.data.alerts.callout_success}} +This tutorial explores the CockroachDB {{ site.data.products.core }} integration with Prometheus. 
For the CockroachDB {{ site.data.products.dedicated }} integration with Prometheus, refer to [Export Metrics From a CockroachDB Dedicated Cluster]({% link cockroachcloud/export-metrics.md %}?filters=prometheus-metrics-export) instead of this page. +{{site.data.alerts.end}} + ## Before you begin - Make sure you have already started a CockroachDB cluster, either [locally]({% link {{ page.version.version }}/start-a-local-cluster.md %}) or in a [production environment]({% link {{ page.version.version }}/manual-deployment.md %}). From db438ff9065ede1f3035c1c640c6133e6d7af230 Mon Sep 17 00:00:00 2001 From: Mike Lewis <76072290+mikeCRL@users.noreply.github.com> Date: Mon, 29 Jan 2024 11:24:57 -0500 Subject: [PATCH 11/18] Add Enterprise license clarifications to 23.2 release notes (#18243) * Add Enterprise license clarifications to 23.2 release notes * Fix Public Preview mention * Fix second Public Preview mention * Remove duplicate entry and update PCR desc * Fix link * Update src/current/_includes/releases/v23.2/v23.2.0.md * Add column level encryption to enterprise features table --- .../_includes/releases/v23.2/v23.2.0.md | 40 +++++-------------- .../v23.2/misc/enterprise-features.md | 1 + 2 files changed, 10 insertions(+), 31 deletions(-) diff --git a/src/current/_includes/releases/v23.2/v23.2.0.md b/src/current/_includes/releases/v23.2/v23.2.0.md index 23c2da06d10..437d0028d32 100644 --- a/src/current/_includes/releases/v23.2/v23.2.0.md +++ b/src/current/_includes/releases/v23.2/v23.2.0.md @@ -14,7 +14,7 @@ This section summarizes the most significant user-facing changes in v23.2.0 and - [Observability](#v23-2-0-observability) - [Migrations](#v23-2-0-migrations) - [Security and compliance](#v23-2-0-security-and-compliance) - - [Recovery and I/O](#v23-2-0-recovery-and-io) + - [Disaster recovery](#v23-2-0-disaster-recovery) - [Deployment and operations](#v23-2-0-deployment-and-operations) - [SQL](#v23-2-0-sql) - **Additional information** @@ -62,6 +62,10 @@ This section summarizes the most significant user-facing changes in v23.2.0 and } +{{ site.data.alerts.callout_info }} +In CockroachDB Self-Hosted, all available features are free to use unless their description specifies that an Enterprise license is required. For more information, see the [Licensing FAQ](https://www.cockroachlabs.com/docs/stable/licensing-faqs). +{{ site.data.alerts.end }} +

Observability

@@ -120,7 +124,7 @@ This section summarizes the most significant user-facing changes in v23.2.0 and

Customize your own metric dashboard for CockroachDB serverless

The CockroachDB Cloud console supports additional metrics that can be customized in a single dashboard for CockroachDB Serverless.

- + @@ -188,8 +192,8 @@ This section summarizes the most significant user-facing changes in v23.2.0 and @@ -260,32 +264,6 @@ This section summarizes the most significant user-facing changes in v23.2.0 and
All>*All* {% include icon-no.html %} {% include icon-no.html %} {% include icon-yes.html %}
-

Physical Cluster Replication is now available in a public preview

-

Physical Cluster Replication is an asynchonous replication feature that allows your cluster to recover from full-cluster failure with a low RPO and RTO. In 23.2, it is an Enterprise-only Public Preview feature, requiring a CCL license, and only available for self-hosted CockroachDB deployments.

+

Physical Cluster Replication is now available in Preview

+

Physical Cluster Replication is an asynchonous replication feature that allows your cluster to recover from full-cluster failure with a low RPO and RTO. In 23.2, it is a Preview feature, requiring an Enterprise license, and only available for self-hosted CockroachDB deployments.

23.2 {% include icon-yes.html %}
-

Recovery and I/O

- - - - - - - - - - - - - - - - - - - - -
FeatureAvailability
Ver.Self-HostedDedicatedServerless
-

Physical Cluster Replication is now available in a public preview

-

Physical Cluster Replication is an asynchonous replication feature that allows your cluster to recover from full-cluster failure with a low RPO and RTO. In 23.2, it is an enterprise-only Public Preview feature, requiring a CCL license, and only available for self-hosted CockroachDB deployments.

-
23.2{% include icon-yes.html %}{% include icon-no.html %}{% include icon-no.html %}
-

SQL

@@ -352,7 +330,7 @@ This section summarizes the most significant user-facing changes in v23.2.0 and diff --git a/src/current/_includes/v23.2/misc/enterprise-features.md b/src/current/_includes/v23.2/misc/enterprise-features.md index c428ec79b07..b147d8398c2 100644 --- a/src/current/_includes/v23.2/misc/enterprise-features.md +++ b/src/current/_includes/v23.2/misc/enterprise-features.md @@ -20,6 +20,7 @@ Enterprise [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) and resto Feature | Description --------+------------------------- [Encryption at Rest]({% link {{ page.version.version }}/security-reference/encryption.md %}#encryption-at-rest-enterprise) | Enable automatic transparent encryption of a node's data on the local disk using AES in counter mode, with all key sizes allowed. This feature works together with CockroachDB's automatic encryption of data in transit. +[Column-level encryption]({% link {{ page.version.version }}/column-level-encryption.md %}) | Encrypt specific columns within a table. [GSSAPI with Kerberos Authentication]({% link {{ page.version.version }}/gssapi_authentication.md %}) | Authenticate to your cluster using identities stored in an external enterprise directory system that supports Kerberos, such as Active Directory. [Cluster Single Sign-on (SSO)]({% link {{ page.version.version }}/sso-sql.md %}) | Grant SQL access to a cluster using JSON Web Tokens (JWTs) issued by an external identity provider (IdP) or custom JWT issuer. [Single Sign-on (SSO) for DB Console]({% link {{ page.version.version }}/sso-db-console.md %}) | Grant access to a cluster's DB Console interface using SSO through an IdP that supports OIDC. From d1b805e6f5633318736e889655c27c560ae954b7 Mon Sep 17 00:00:00 2001 From: Mike Lewis <76072290+mikeCRL@users.noreply.github.com> Date: Mon, 29 Jan 2024 11:29:00 -0500 Subject: [PATCH 12/18] Remove details about multiple CA certs on Cloud (#18255) --- src/current/cockroachcloud/client-certs-dedicated.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/current/cockroachcloud/client-certs-dedicated.md b/src/current/cockroachcloud/client-certs-dedicated.md index e9f02a40772..3cf58ea7593 100644 --- a/src/current/cockroachcloud/client-certs-dedicated.md +++ b/src/current/cockroachcloud/client-certs-dedicated.md @@ -242,7 +242,7 @@ resource "cockroach_client_ca_cert" "yourclustername" { Clients must be provisioned with client certificates signed by the cluster's CA prior to the update, or their new connections will be blocked. {{site.data.alerts.end}} -This section shows how to replace the CA certificate used by your cluster for certificate-based client authentication. To roll out a new CA certificate gradually instead of following this procedure directly, CockroachDB supports the ability to include multiple CA certificates for a cluster by concatenating them in PEM format. This allows clients to connect as long as the client certificate is signed by either the old CA certificate or the new one. PEM format requires a blank line in between certificates. +This section shows how to replace the CA certificate used by your cluster for certificate-based client authentication. {{site.data.alerts.callout_success}} The [Cluster Administrator]({% link cockroachcloud/authorization.md %}#cluster-administrator) or [Org Administrator (legacy)]({% link cockroachcloud/authorization.md %}#org-administrator-legacy) Organization role is required to manage the CA certificate for a CockroachDB {{ site.data.products.dedicated }} cluster. 
From cef704e48ae0429dc239208bbfada168b4dfeb2c Mon Sep 17 00:00:00 2001 From: Kathryn Hancox <44557882+kathancox@users.noreply.github.com> Date: Mon, 29 Jan 2024 15:14:36 -0500 Subject: [PATCH 13/18] Improve description of Glacier storage class and incremental backups (#18256) --- .../misc/storage-class-glacier-incremental.md | 4 +++- .../misc/storage-class-glacier-incremental.md | 4 +++- .../misc/storage-class-glacier-incremental.md | 4 +++- src/current/v22.2/use-cloud-storage.md | 14 +++++++++++--- src/current/v23.1/use-cloud-storage.md | 14 +++++++++++--- src/current/v23.2/use-cloud-storage.md | 14 +++++++++++--- 6 files changed, 42 insertions(+), 12 deletions(-) diff --git a/src/current/_includes/v22.2/misc/storage-class-glacier-incremental.md b/src/current/_includes/v22.2/misc/storage-class-glacier-incremental.md index 92d1f6cf90d..9daebd72c14 100644 --- a/src/current/_includes/v22.2/misc/storage-class-glacier-incremental.md +++ b/src/current/_includes/v22.2/misc/storage-class-glacier-incremental.md @@ -1,3 +1,5 @@ {{site.data.alerts.callout_danger}} -[Incremental backups](take-full-and-incremental-backups.html#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups. The Glacier Flexible Retrieval or Glacier Deep Archive storage classes do not allow immediate access to S3 objects without first restoring the objects. See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail. +[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the [S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide//storage-class-intro.html#sc-glacier). Incremental backups require the reading of previous backups on an ad-hoc basis, which is not possible with backup files already in Glacier Flexible Retrieval or Glacier Deep Archive. This is because these storage classes do not allow immediate access to an S3 object without first [restoring the archived objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) to its S3 bucket. + +Refer to [Incremental backups and storage classes]({% link {{ page.version.version }}/use-cloud-storage.md %}#incremental-backups-and-archive-storage-classes) for more detail. {{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v23.1/misc/storage-class-glacier-incremental.md b/src/current/_includes/v23.1/misc/storage-class-glacier-incremental.md index f2e2cfadcbb..9daebd72c14 100644 --- a/src/current/_includes/v23.1/misc/storage-class-glacier-incremental.md +++ b/src/current/_includes/v23.1/misc/storage-class-glacier-incremental.md @@ -1,3 +1,5 @@ {{site.data.alerts.callout_danger}} -[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups. The Glacier Flexible Retrieval or Glacier Deep Archive storage classes do not allow immediate access to S3 objects without first restoring the objects. 
See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail. +[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the [S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide//storage-class-intro.html#sc-glacier). Incremental backups require the reading of previous backups on an ad-hoc basis, which is not possible with backup files already in Glacier Flexible Retrieval or Glacier Deep Archive. This is because these storage classes do not allow immediate access to an S3 object without first [restoring the archived objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) to its S3 bucket. + +Refer to [Incremental backups and storage classes]({% link {{ page.version.version }}/use-cloud-storage.md %}#incremental-backups-and-archive-storage-classes) for more detail. {{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v23.2/misc/storage-class-glacier-incremental.md b/src/current/_includes/v23.2/misc/storage-class-glacier-incremental.md index f2e2cfadcbb..9daebd72c14 100644 --- a/src/current/_includes/v23.2/misc/storage-class-glacier-incremental.md +++ b/src/current/_includes/v23.2/misc/storage-class-glacier-incremental.md @@ -1,3 +1,5 @@ {{site.data.alerts.callout_danger}} -[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups. The Glacier Flexible Retrieval or Glacier Deep Archive storage classes do not allow immediate access to S3 objects without first restoring the objects. See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail. +[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the [S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide//storage-class-intro.html#sc-glacier). Incremental backups require the reading of previous backups on an ad-hoc basis, which is not possible with backup files already in Glacier Flexible Retrieval or Glacier Deep Archive. This is because these storage classes do not allow immediate access to an S3 object without first [restoring the archived objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) to its S3 bucket. + +Refer to [Incremental backups and storage classes]({% link {{ page.version.version }}/use-cloud-storage.md %}#incremental-backups-and-archive-storage-classes) for more detail. {{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/v22.2/use-cloud-storage.md b/src/current/v22.2/use-cloud-storage.md index 2c1db7aa5a4..6e716542d65 100644 --- a/src/current/v22.2/use-cloud-storage.md +++ b/src/current/v22.2/use-cloud-storage.md @@ -253,9 +253,17 @@ The following S3 connection URI uses the `INTELLIGENT_TIERING` storage class: While Cockroach Labs supports configuring an AWS storage class, we only test against S3 Standard. We recommend implementing your own testing with other storage classes. 
-{{site.data.alerts.callout_info}} -[Incremental backups](take-full-and-incremental-backups.html#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups, which is not possible with the Glacier Flexible Retrieval or Glacier Deep Archive storage classes as they do not allow immediate access to S3 objects without first restoring the objects. See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail. -{{site.data.alerts.end}} +#### Incremental backups and archive storage classes + +[Incremental backups](take-full-and-incremental-backups.html#incremental-backups) are **not** compatible with the [S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide//storage-class-intro.html#sc-glacier). Incremental backups require the reading of previous backups on an ad-hoc basis, which is not possible with backup files already in Glacier Flexible Retrieval or Glacier Deep Archive. This is because these storage classes do not allow immediate access to an S3 object without first restoring the archived object to its S3 bucket. + +Refer to the AWS documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for steps. + +When you are restoring archived backup files from Glacier Flexible Retrieval or Glacier Deep Archive back to an S3 bucket, you must restore both the full backup and incremental backup layers for that backup. By default, CockroachDB stores the incremental backup layers in a separate top-level directory at the backup's storage location. Refer to [Backup collections](take-full-and-incremental-backups.html#backup-collections) for detail on the backup directory structure at its storage location. + +Once you have restored all layers of a backup's archived files back to its S3 bucket, you can then [restore](restore.html) the backup to your CockroachDB cluster. + +#### Supported storage classes This table lists the valid CockroachDB parameters that map to an S3 storage class: diff --git a/src/current/v23.1/use-cloud-storage.md b/src/current/v23.1/use-cloud-storage.md index bb78325c6db..583532417cf 100644 --- a/src/current/v23.1/use-cloud-storage.md +++ b/src/current/v23.1/use-cloud-storage.md @@ -251,9 +251,17 @@ The following S3 connection URI uses the `INTELLIGENT_TIERING` storage class: While Cockroach Labs supports configuring an AWS storage class, we only test against S3 Standard. We recommend implementing your own testing with other storage classes. -{{site.data.alerts.callout_info}} -[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups, which is not possible with the Glacier Flexible Retrieval or Glacier Deep Archive storage classes as they do not allow immediate access to S3 objects without first restoring the objects. See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail. 
-{{site.data.alerts.end}} +#### Incremental backups and archive storage classes + +[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the [S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide//storage-class-intro.html#sc-glacier). Incremental backups require the reading of previous backups on an ad-hoc basis, which is not possible with backup files already in Glacier Flexible Retrieval or Glacier Deep Archive. This is because these storage classes do not allow immediate access to an S3 object without first restoring the archived object to its S3 bucket. + +Refer to the AWS documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for steps. + +When you are restoring archived backup files from Glacier Flexible Retrieval or Glacier Deep Archive back to an S3 bucket, you must restore both the full backup and incremental backup layers for that backup. By default, CockroachDB stores the incremental backup layers in a separate top-level directory at the backup's storage location. Refer to [Backup collections]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections) for detail on the backup directory structure at its storage location. + +Once you have restored all layers of a backup's archived files back to its S3 bucket, you can then [restore]({% link {{ page.version.version }}/restore.md %}) the backup to your CockroachDB cluster. + +#### Supported storage classes This table lists the valid CockroachDB parameters that map to an S3 storage class: diff --git a/src/current/v23.2/use-cloud-storage.md b/src/current/v23.2/use-cloud-storage.md index e1cad97f69a..ac7219d1c30 100644 --- a/src/current/v23.2/use-cloud-storage.md +++ b/src/current/v23.2/use-cloud-storage.md @@ -255,9 +255,17 @@ The following S3 connection URI uses the `INTELLIGENT_TIERING` storage class: While Cockroach Labs supports configuring an AWS storage class, we only test against S3 Standard. We recommend implementing your own testing with other storage classes. -{{site.data.alerts.callout_info}} -[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require ad-hoc reading of previous backups, which is not possible with the Glacier Flexible Retrieval or Glacier Deep Archive storage classes as they do not allow immediate access to S3 objects without first restoring the objects. See Amazon's documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for more detail. -{{site.data.alerts.end}} +#### Incremental backups and archive storage classes + +[Incremental backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#incremental-backups) are **not** compatible with the [S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide//storage-class-intro.html#sc-glacier). Incremental backups require the reading of previous backups on an ad-hoc basis, which is not possible with backup files already in Glacier Flexible Retrieval or Glacier Deep Archive. 
This is because these storage classes do not allow immediate access to an S3 object without first restoring the archived object to its S3 bucket. + +Refer to the AWS documentation on [Restoring an archived object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) for steps. + +When you are restoring archived backup files from Glacier Flexible Retrieval or Glacier Deep Archive back to an S3 bucket, you must restore both the full backup and incremental backup layers for that backup. By default, CockroachDB stores the incremental backup layers in a separate top-level directory at the backup's storage location. Refer to [Backup collections]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#backup-collections) for detail on the backup directory structure at its storage location. + +Once you have restored all layers of a backup's archived files back to its S3 bucket, you can then [restore]({% link {{ page.version.version }}/restore.md %}) the backup to your CockroachDB cluster. + +#### Supported storage classes This table lists the valid CockroachDB parameters that map to an S3 storage class: From f74b440bf367babaf3cc5a8060135fa1e1e1e967 Mon Sep 17 00:00:00 2001 From: Ryan Kuo Date: Mon, 29 Jan 2024 13:28:27 -0500 Subject: [PATCH 14/18] clarify READ UNCOMMITTED alias --- src/current/v23.2/transactions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/current/v23.2/transactions.md b/src/current/v23.2/transactions.md index ef083d1c541..149ac6118ca 100644 --- a/src/current/v23.2/transactions.md +++ b/src/current/v23.2/transactions.md @@ -214,7 +214,7 @@ CockroachDB uses slightly different isolation levels than [ANSI SQL isolation le `SNAPSHOT`, `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` are aliases for `SERIALIZABLE`. -{% include_cached new-in.html version="v23.2" %} If [`READ COMMITTED` isolation is enabled]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) using the `sql.txn.read_committed_isolation.enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}), `READ COMMITTED` is not an alias for `SERIALIZABLE`. +{% include_cached new-in.html version="v23.2" %} If [`READ COMMITTED` isolation is enabled]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation) using the `sql.txn.read_committed_isolation.enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}), `READ COMMITTED` is no longer an alias for `SERIALIZABLE`, and `READ UNCOMMITTED` becomes an alias for `READ COMMITTED`. 
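To make the aliasing concrete, the following minimal sketch (not part of the original change) assumes the `sql.txn.read_committed_isolation.enabled` cluster setting has already been set to `true`; a transaction opened as `READ UNCOMMITTED` then reports `read committed`:

{% include_cached copy-clipboard.html %}
~~~ sql
BEGIN TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SHOW transaction_isolation;  -- "read committed" once the cluster setting is enabled
COMMIT;
~~~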
#### Comparison From aa91545097dd7ae3b1f487facb497a567f2abf81 Mon Sep 17 00:00:00 2001 From: "Matt Linville (he/him)" Date: Tue, 30 Jan 2024 12:16:48 -0800 Subject: [PATCH 15/18] [DOC-9564, DOC-9585] Update docs on connecting to a cluster with no current downloads available (#18249) * [DOC-9564] Update docs on connecting to a cluster with no current downloads available --- src/current/_data/releases.yml | 3 +- .../cockroachcloud/download-the-binary.md | 32 ---- .../connect-to-a-serverless-cluster.md | 13 +- .../cockroachcloud/connect-to-your-cluster.md | 152 ++++++------------ ...ith-flask-kubernetes-and-cockroachcloud.md | 56 +++---- .../cockroachcloud/learn-cockroachdb-sql.md | 2 +- src/current/cockroachcloud/managing-access.md | 12 +- .../quickstart-trial-cluster.md | 35 +--- .../stream-changefeed-to-snowflake-aws.md | 19 +-- .../v22.2/install-cockroachdb-linux.md | 4 +- src/current/v22.2/install-cockroachdb-mac.md | 6 +- .../v22.2/install-cockroachdb-windows.md | 4 +- .../v23.1/install-cockroachdb-linux.md | 4 +- src/current/v23.1/install-cockroachdb-mac.md | 6 +- .../v23.1/install-cockroachdb-windows.md | 2 +- .../v23.2/install-cockroachdb-linux.md | 6 +- src/current/v23.2/install-cockroachdb-mac.md | 6 +- .../v23.2/install-cockroachdb-windows.md | 4 +- 18 files changed, 129 insertions(+), 237 deletions(-) delete mode 100644 src/current/_includes/cockroachcloud/download-the-binary.md diff --git a/src/current/_data/releases.yml b/src/current/_data/releases.yml index 46ce49df64b..ed0374f75ef 100644 --- a/src/current/_data/releases.yml +++ b/src/current/_data/releases.yml @@ -5390,8 +5390,7 @@ CockroachDB v23.2 is now generally available for CockroachDB Dedicated, and is scheduled to be made available for CockroachDB Self-Hosted on February 5, 2024 per the staged release process. For more information, refer to - Create a CockroachDB Dedicated Cluster or - Upgrade to CockroachDB v23.2. + [Upgrade to CockroachDB v23.2](https://www.cockroachlabs.com/docs/cockroachcloud/upgrade-to-v23.2). To connect to a CockroachDB Dedicated cluster on v23.2, refer to [Connect to a CockroachDB Dedicated Cluster](https://www.cockroachlabs.com/docs/cockroachcloud/connect-to-your-cluster). go_version: go1.21 sha: c8ffbdc4eeb3f656085edc33d6965b8f30cd7514 has_sql_only: true diff --git a/src/current/_includes/cockroachcloud/download-the-binary.md b/src/current/_includes/cockroachcloud/download-the-binary.md deleted file mode 100644 index 3a028b2405c..00000000000 --- a/src/current/_includes/cockroachcloud/download-the-binary.md +++ /dev/null @@ -1,32 +0,0 @@ -
- If you have not done so already, run the first command in the dialog to install the CockroachDB binary in `/usr/local/bin`, which is usually in the system `PATH`. To install it into a different location, replace `/usr/local/bin/`. - - {% include_cached copy-clipboard.html %} - ~~~ shell - curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.darwin-10.9-amd64.tgz | tar -xJ && sudo cp -i cockroach-{{ page.release_info.version }}.darwin-10.9-amd64/cockroach /usr/local/bin/ - ~~~ - -
- -
- If you have not done so already, run the first command in the dialog to install the CockroachDB binary in `/usr/local/bin`, which is usually in the system `PATH`. To install it into a different location, replace `/usr/local/bin/`. - - {% include_cached copy-clipboard.html %} - ~~~ shell - curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz | tar -xz && sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -
- -
- - If you have not done so already, use PowerShell to run the first command in the dialog, which is a PowerShell script that installs the CockroachDB binary and adds its location in the system `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ErrorActionPreference = "Stop"; [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;$ProgressPreference = 'SilentlyContinue'; $null = New-Item -Type Directory -Force $env:appdata/cockroach; Invoke-WebRequest -Uri https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.windows-6.2-amd64.zip -OutFile cockroach.zip; Expand-Archive -Force -Path cockroach.zip; Copy-Item -Force "cockroach/cockroach-{{ page.release_info.version }}.windows-6.2-amd64/cockroach.exe" -Destination $env:appdata/cockroach; $Env:PATH += ";$env:appdata/cockroach" - ~~~ - - We recommend adding `;$env:appdata/cockroach` to the `PATH` variable for your system environment so you can run [`cockroach` commands](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-commands) from any shell. See [Microsoft's environment variable documentation](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_environment_variables#saving-changes-to-environment-variables) for more information. - -
diff --git a/src/current/cockroachcloud/connect-to-a-serverless-cluster.md b/src/current/cockroachcloud/connect-to-a-serverless-cluster.md index 1be84e83267..7277d9d6b08 100644 --- a/src/current/cockroachcloud/connect-to-a-serverless-cluster.md +++ b/src/current/cockroachcloud/connect-to-a-serverless-cluster.md @@ -41,7 +41,7 @@ AWS PrivateLink can be configured only after the cluster is created. For detaile Private connectivity is not available for CockroachDB {{ site.data.products.serverless }} clusters on GCP. {{site.data.alerts.end}} -## Select a connection method +## Connect to your cluster 1. Select your cluster to navigate to the cluster [**Overview** page]({% link cockroachcloud/cluster-overview-page.md %}). @@ -53,9 +53,7 @@ Private connectivity is not available for CockroachDB {{ site.data.products.serv - Select the SQL user you want to connect with from the **SQL user** dropdown. - Select the database you want to connect to from the **Database** dropdown. -## Connect to your cluster - -1. Select a connection method from the **Select option** dropdown (the instructions below will adjust accordingly): +1. Select a connection method from the **Select option / Language** dropdown (the instructions below will adjust accordingly):
@@ -87,10 +85,9 @@ For connection examples and code snippets in your language, see the following:
-1. In the **Download CA Cert** section of the dialog, select your operating system, and use the command provided to download the CA certificate to the default PostgreSQL certificate directory on your machine.
+1. If you need to download the CA certificate, first set **Select option/language** to **General Connection String** and expand the **Download CA Cert** section. Then select your operating system and use the command provided to download the CA certificate to the default PostgreSQL certificate directory on your machine.
1. If you [established a private connection using AWS PrivateLink](#establish-aws-privatelink), change **Connection type** from **Public connection** to **Private connection** to connect privately.
1. Select the **Parameters only** option of the **Select option** dropdown.
-
1. Use the connection parameters provided in the dialog to connect to your cluster using a [CockroachDB-compatible tool](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/third-party-database-tools).

    Parameter | Description
+You can connect to your cluster with any [supported version](https://www.cockroachlabs.com/docs/releases/release-support-policy#current-supported-releases) of the full CockroachDB binary or the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql). To download the full binary and connect to a CockroachDB {{ site.data.products.serverless }} cluster, follow these steps. + 1. Select **CockroachDB Client** from the **Select option/language** dropdown. 1. In the **Download CA Cert** section of the dialog, select your operating system, and use the command provided to download the CA certificate to the default PostgreSQL certificate directory on your machine. -1. In the **Download the latest CockroachDB Client** section of the dialog, select your operating system, and use the command provided to install CockroachDB. +1. In the **Download the latest CockroachDB Client** section of the dialog, select your operating system, and use the command provided to install the latest downloadable version of CockroachDB on your local system. 1. If you [established a private connection using AWS PrivateLink](#establish-aws-privatelink), change **Connection type** from **Public connection** to **Private connection** to connect privately. 1. Copy the [`cockroach sql`](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql) command and connection string provided in the **Connect** dialog, which will be used in the next step (and to connect to your cluster in the future). 1. In your terminal, enter the copied `cockroach sql` command and connection string to start the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql.html). diff --git a/src/current/cockroachcloud/connect-to-your-cluster.md b/src/current/cockroachcloud/connect-to-your-cluster.md index 44f77de6855..98ef6fea360 100644 --- a/src/current/cockroachcloud/connect-to-your-cluster.md +++ b/src/current/cockroachcloud/connect-to-your-cluster.md @@ -1,5 +1,5 @@ --- -title: Connect to a CockroachDB Cloud Dedicated Cluster +title: Connect to a CockroachDB Dedicated Cluster summary: Learn how to connect and start interacting with your cluster. toc: true docs_area: deploy @@ -63,132 +63,77 @@ Azure Private Link is not yet available for [CockroachDB {{ site.data.products.d 1. Run the command displayed on the **Accept VPC peering connection request** window using [Google Cloud Shell](https://cloud.google.com/shell) or using the [gcloud command-line tool](https://cloud.google.com/sdk/gcloud). 1. On the **Networking** page, verify the connection status is **Available**. -## Select a connection method -1. In the top right corner of the Console, click the **Connect** button. - - The **Connect** dialog displays with **IP Allowlist** selected by default. - -1. Select a **Network Security** option: - - You can use the **IP Allowlist** option if you have already [added an IP address to your allowlist.](#add-ip-addresses-to-the-allowlist) +## Connect to your cluster - For AWS clusters, you can select **AWS PrivateLink** if you have already [established a PrivateLink connection](#establish-gcp-vpc-peering-or-aws-privatelink). +1. In the top right corner of the CockroachDB {{ site.data.products.cloud }} Console, click the **Connect** button. 
- For GCP clusters, you can select **VPC Peering** if you have already: - - [Enabled VPC peering while creating your cluster]({% link cockroachcloud/create-your-cluster.md %}#step-8-enable-vpc-peering-optional) - - [Established a VPC Peering connection](#establish-gcp-vpc-peering-or-aws-privatelink) + The **Setup** page of the **Connect to cluster** dialog displays. -1. From the **User** dropdown, select the SQL user you created. -1. From the **Region** dropdown, select the region closest to where your client or application is running. -1. From the **Database** dropdown, select the database you want to connect to. +1. If you set up a private connection, click **AWS PrivateLink** (for clusters deployed in AWS) or **VPC Peering** (for clusters deployed in GCP) to connect privately. Otherwise, click **IP Allowlist**. +1. Select the **SQL User**. If you have only one SQL user, it is automatically selected. - The default database is `defaultdb`. For more information, see [Default databases](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/show-databases#preloaded-databases). + {{site.data.alerts.callout_info}} + If you forget your SQL user's password, an [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator-legacy) or a Cluster Admin on the cluster can change the password on the **SQL Users** page. + {{site.data.alerts.end}} +1. Select the **Database**. If you have only one database, it is automatically selected. +1. For a multiregion cluster, select the **Region** to connect to. If you have only one region, it is automatically selected. 1. Click **Next**. -1. Select a connection method (the instructions below will adjust accordingly): + The **Connect** page of the **Connection info** dialog displays. + +1. In the dialog, select the tab for a connection method, then follow the instructions below for that method.
- +
-## Connect to your cluster -
-To connect to your cluster with the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql): - -1. Select **Mac**, **Linux**, or **Windows** to adjust the commands used in the next steps accordingly. - -
- - - -
- -1. {% include cockroachcloud/download-the-binary.md %} - -1. In your terminal, run the second command from the dialog to create a new `certs` directory on your local machine and download the CA certificate to that directory: - - {% include cockroachcloud/download-the-cert.md %} +You can connect to your cluster with any [supported version](https://www.cockroachlabs.com/docs/releases/release-support-policy#current-supported-releases) of the full CockroachDB binary or the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql). To download the full binary and connect to a CockroachDB {{ site.data.products.dedicated }} cluster, follow these steps. -1. If you [established a private connection using VPC Peering or AWS PrivateLink](#establish-gcp-vpc-peering-or-aws-privatelink), click **VPC Peering** or **PrivateLink** to connect privately. - -1. Copy the [`cockroach sql`](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql) command and connection string provided in the Console, which will be used in the next step (and to connect to your cluster in the future): - - {% include cockroachcloud/sql-connection-string.md %} +{{site.data.alerts.callout_success}} +To download a supported version of the SQL shell instead of the full binary, visit [Releases](https://cockroachlabs.com/releases). +{{site.data.alerts.end}} -1. In your terminal, enter the copied `cockroach sql` command and connection string to start the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql). +1. Select the **Command Line** tab. +1. If CockroachDB is not installed locally, copy the command to download and install it. In your terminal, run the command. +1. If the CA certificate for the cluster is not downloaded locally, copy the command to download it. In your terminal, run the command. +1. Copy the [`cockroach sql`](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql) command, which will be used in the next step (and to connect to your cluster in the future). Click **Close**. +1. In your terminal, enter the copied `cockroach sql` command and connection string to start the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql.html). 1. Enter the SQL user's password and hit enter. {% include cockroachcloud/postgresql-special-characters.md %} - {{site.data.alerts.callout_info}} - If you forget your SQL user's password, an [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator-legacy) or a Cluster Admin on the cluster can change the password on the **SQL Users** page. Refer to: [Change a User's password](https://www.cockroachlabs.com/docs/cockroachcloud/managing-access#change-a-sql-users-password). - {{site.data.alerts.end}} - - You are now connected to the built-in SQL client, and can now run [CockroachDB SQL statements]({% link cockroachcloud/learn-cockroachdb-sql.md %}). - -
+ A welcome message displays: -
- -To connect to your cluster with your application, use the connection string provided in the Console: - -1. Select **Mac**, **Linux**, or **Windows** to adjust the commands used in the next steps accordingly. - -
- - - -
- -1. In your terminal, run the first command from the dialog to create a new `certs` directory on your local machine and download the CA certificate to that directory: - - {% include cockroachcloud/download-the-cert.md %} - -1. If you [established a private connection using VPC Peering or AWS PrivateLink](#establish-gcp-vpc-peering-or-aws-privatelink), click **VPC Peering** or **PrivateLink** to connect privately. - -1. Copy the connection string provided in the Console, which will be used to connect your application to CockroachDB {{ site.data.products.cloud }}: - -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - 'postgresql://@-..:26257/?sslmode=verify-full&sslrootcert='$HOME'/Library/CockroachCloud/certs/-ca.crt' ~~~ - -
- -
- - {% include_cached copy-clipboard.html %} - ~~~ shell - 'postgresql://@-..:26257/?sslmode=verify-full&sslrootcert='$HOME'/Library/CockroachCloud/certs/-ca.crt' + # + # Welcome to the CockroachDB SQL shell. + # All statements must be terminated by a semicolon. + # To exit, type: \q. + # ~~~ -
- -
+ You are now connected to the built-in SQL client, and can now run [CockroachDB SQL statements]({% link cockroachcloud/learn-cockroachdb-sql.md %}). - {% include_cached copy-clipboard.html %} - ~~~ shell - "postgresql://@-..:26257/?sslmode=verify-full&sslrootcert=$env:appdata\CockroachCloud\certs\$-ca.crt" - ~~~ +
-
+
-1. Add your copied connection string to your application code. +To connect to your cluster from your application: - {% include cockroachcloud/postgresql-special-characters.md %} +1. Select the **Connection string** tab. +1. If the CA certificate for the cluster is not downloaded locally, copy the command to download it. In your terminal, run the command. +1. Copy the connection string, which begins with `postgresql://`. This will be used to connect your application to CockroachDB {{ site.data.products.dedicated }}. +1. Add your copied connection string to your application code. For information about connecting to CockroachDB {{ site.data.products.serverless }} with a [supported client](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/third-party-database-tools), see [Connect to a CockroachDB Cluster](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/connect-to-the-database). +1. Click **Close**. - {{site.data.alerts.callout_info}} - If you forget your SQL user's password, an [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator-legacy) or a Cluster Admin on the cluster can change the password on the **SQL Users** page. - {{site.data.alerts.end}} +{% include cockroachcloud/postgresql-special-characters.md %} For examples, see the following: @@ -199,12 +144,21 @@ For examples, see the following:
-To connect to your cluster with a [CockroachDB-compatible tool](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/third-party-database-tools), use the connection parameters provided in the Console. +To connect to your cluster with a [CockroachDB-compatible tool](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/third-party-database-tools): + +1. If the CA certificate for the cluster is not downloaded locally, select the **Connection string** tab, then copy the command to download the CA certificate. In your terminal, run the command. +1. Select the **Connection parameters** tab. +1. Use the connection parameters provided in the dialog to connect to your cluster using a [CockroachDB-compatible tool](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/third-party-database-tools). -1. From the cluster's **Details** page, click **Connect**. -1. If you [established a private connection using VPC Peering or AWS PrivateLink](#establish-gcp-vpc-peering-or-aws-privatelink), click **VPC Peering** or **PrivateLink** to connect privately. -1. Copy the connection string and provide it to the CockroachDB-compatible tool. + Parameter | Description + --------------|------------ + `{username}` | The [SQL user]({% link cockroachcloud/managing-access.md %}#create-a-sql-user) connecting to the cluster. + `{password}` | The password for the SQL user connecting to the cluster. + `{host}` | The host on which the CockroachDB node is running. + `{port}` | The port at which the CockroachDB node is listening. + `{database}` | The name of the (existing) database. +1. Click **Close**.
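As a sketch of how those parameters map onto a client, a connection from `psql` (used here only as an example of a compatible tool) might look like the following, with each placeholder replaced by the value shown in the dialog:

{% include_cached copy-clipboard.html %}
~~~ shell
# Sketch only: substitute the connection parameters from the Console.
psql "postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert={path-to-ca-cert}"
~~~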
## What's next diff --git a/src/current/cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud.md b/src/current/cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud.md index 3c18abb51bf..beb314fbb1d 100644 --- a/src/current/cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud.md +++ b/src/current/cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud.md @@ -26,8 +26,8 @@ This tutorial shows you how to run a sample To-Do app in [Kubernetes](https://ku - [Step 1. Authorize your local workstation's network](#step-1-authorize-your-local-workstations-network) - [Step 2. Create a SQL user](#step-2-create-a-sql-user) -- [Step 3. Generate the CockroachDB client connection string](#step-3-generate-the-cockroachdb-client-connection-string) -- [Step 4. Create the CockroachDB {{ site.data.products.dedicated }} database](#step-4-create-the-cockroachdb-cloud-database) +- [Step 3. Connect to the cluster](#step-3-connect-to-the-cluster) +- [Step 4. Create the CockroachDB {{ site.data.products.dedicated }} database](#step-4-create-the-database) - [Step 5. Generate the application connection string](#step-5-generate-the-application-connection-string) ### Step 1. Authorize your local workstation's network @@ -50,36 +50,40 @@ Once you are [logged in](https://cockroachlabs.cloud/), you can use the Console {% include cockroachcloud/cockroachcloud-ask-admin.md %} -1. Navigate to your cluster's **SQL Users** page. -1. Click the **Add User** button in the top right corner. The **Add User** dialog displays. -1. Enter a **Username** and **Password**. -1. Click **Save**. +1. In the left navigation bar, click **SQL Users**. +1. Click **Add User**. The **Add User** dialog displays. +1. Enter a username and click **Generate & Save Password**. +1. Copy the generated password to a secure location, such as a password manager. +1. Click **Close**. Currently, all new SQL users are created with admin privileges. For more information and to change the default settings, see [Managing SQL users on a cluster]({% link cockroachcloud/managing-access.md %}#manage-sql-users-on-a-cluster). -### Step 3. Generate the CockroachDB client connection string +### Step 3. Connect to the cluster -1. In the top right corner of the Console, click the **Connect** button. The **Connection info** dialog displays. -1. From the **User** dropdown, select the user you created in [Step 2](#step-2-create-a-sql-user). -1. Select a **Region** to connect to. -1. From the **Database** dropdown, select `defaultdb`. -1. Run the following command to create a new `certs` directory on your local machine and download the CA certificate to that directory: -
- - - -
- {% include cockroachcloud/download-the-cert.md %} +In this step, you connect both your application and your local system to the cluster. -1. On the **Command Line** tab, copy the connection string. +1. In the top right corner of the CockroachDB {{ site.data.products.cloud }} Console, click the **Connect** button. - Edit the connection string to include your SQL user's password, then save the string in an accessible location since you'll need it to use the built-in SQL client later. + The **Setup** page of the **Connect to cluster** dialog displays. +1. Select the **SQL User** you created in [Step 2. Create a SQL user](#step-2-create-a-sql-user). +1. For **Database**, select `defaultdb`. You will change this after you follow the instructions in [Step 4. Create the database](#step-4-create-the-database). +1. Click **Next**. -### Step 4. Create the CockroachDB {{ site.data.products.cloud }} database + The **Connect** page of the **Connection info** dialog displays. -On your local workstation's terminal: +1. Select the **Command Line** tab. +1. If CockroachDB is not installed locally, copy the command to download and install it. In your terminal, run the command. +1. Select the **Connection string** tab. +1. If the CA certificate for the cluster is not downloaded locally, copy the command to download it. In your terminal, run the command. +1. Copy the connection string, which begins with `postgresql://`. This will be used to connect your application to CockroachDB {{ site.data.products.dedicated }}. +1. Click **Close**. +1. Use the connection string to connect to the cluster using `cockroach sql`: -1. If you haven't already, [Download the CockroachDB binary](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/install-cockroachdb) and copy it into the `PATH`: + {% include cockroachcloud/sql-connection-string.md %} + +### Step 4. Create the database + +On your local workstation's terminal:
@@ -88,11 +92,7 @@ On your local workstation's terminal:

- {% include cockroachcloud/download-the-binary.md %} - -1. Use the connection string generated in Step 3 to connect to CockroachDB's built-in SQL client: - - {% include cockroachcloud/sql-connection-string.md %} +1. Use the `cockroach sql` from [Step 3. Connect to the cluster](#step-3-connect-to-the-cluster) to connect to the cluster using the binary you just configured. 1. Create a database `todos`: diff --git a/src/current/cockroachcloud/learn-cockroachdb-sql.md b/src/current/cockroachcloud/learn-cockroachdb-sql.md index 46e10bbd7df..199e0d1902b 100644 --- a/src/current/cockroachcloud/learn-cockroachdb-sql.md +++ b/src/current/cockroachcloud/learn-cockroachdb-sql.md @@ -54,7 +54,7 @@ SHOW DATABASES; ## Set the default database -It's best to set the default database directly in your [connection string]({% link cockroachcloud/connect-to-your-cluster.md %}#select-a-connection-method). +It's best to set the default database directly in your connection string. Refer to [Connect to your cluster]({% link cockroachcloud/connect-to-your-cluster.md %}#connect-to-your-cluster). {% include_cached copy-clipboard.html %} ~~~ sql diff --git a/src/current/cockroachcloud/managing-access.md b/src/current/cockroachcloud/managing-access.md index c3dd37f5700..44e86129d3b 100644 --- a/src/current/cockroachcloud/managing-access.md +++ b/src/current/cockroachcloud/managing-access.md @@ -173,13 +173,11 @@ To change the API key name for an existing API key: {% include cockroachcloud/cockroachcloud-ask-admin.md %} 1. Navigate to your cluster's **SQL Users** page in the **Security** section of the left side navigation. -1. Click the **Add User** button in the top right corner. - - The **Create SQL user** modal displays. - -1. Enter a **Username**. -1. Click **Generate & save password**. -1. Copy the generated password and save it in a secure location. +1. In the left navigation bar, click **SQL Users**. +1. Click **Add User**. The **Add User** dialog displays. +1. Enter a username and click **Generate & Save Password**. +1. Copy the generated password to a secure location, such as a password manager. +1. Click **Close**. Currently, all new users are created with SQL admin privileges. For more information and to change the default settings, see [Grant privileges to a SQL user](#grant-privileges-to-a-sql-user) and [Use SQL roles to manage access](#use-sql-roles-to-manage-access).
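As a brief, hypothetical sketch of the narrower grants mentioned above (the user and table names are illustrative only, not taken from these docs):

{% include_cached copy-clipboard.html %}
~~~ sql
-- Example only: give a non-admin SQL user read and write access to a single table.
GRANT SELECT, INSERT, UPDATE ON TABLE defaultdb.public.todos TO exampleuser;
~~~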
diff --git a/src/current/cockroachcloud/quickstart-trial-cluster.md b/src/current/cockroachcloud/quickstart-trial-cluster.md index da23c80d975..ad8cd51d87d 100644 --- a/src/current/cockroachcloud/quickstart-trial-cluster.md +++ b/src/current/cockroachcloud/quickstart-trial-cluster.md @@ -53,8 +53,9 @@ Once your cluster is created, you will be redirected to the **Cluster Overview** 1. In the left navigation bar, click **SQL Users**. 1. Click **Add User**. The **Add User** dialog displays. -1. Enter a **Username** and **Password**. -1. Click **Save**. +1. Enter a username and click **Generate & Save Password**. +1. Copy the generated password to a secure location, such as a password manager. +1. Click **Close**. ## Step 3. Authorize your network @@ -64,35 +65,13 @@ Once your cluster is created, you will be redirected to the **Cluster Overview** 1. To allow the network to access the cluster's DB Console and to use the CockroachDB client to access the databases, select the **DB Console to monitor the cluster** and **CockroachDB Client to access the databases** checkboxes. 1. Click **Apply**. -## Step 4. Connect to your cluster +## Step 4. Connect to the cluster -
-{{site.data.alerts.callout_success}} -[PowerShell](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-windows) is required to complete these steps. -{{site.data.alerts.end}} -
- -1. In the top-right corner of the Console, click the **Connect** button. The **Connect** dialog will display. -1. From the **SQL user** dropdown, select the SQL user you created in [Step 2. Create a SQL user](#step-2-create-a-sql-user). -1. Verify that the `us-west2 GCP` region and `defaultdb` database are selected. -1. Click **Next**. The **Connect** tab is displayed. -1. Select **Mac**, **Linux**, or **Windows** to adjust the commands used in the next steps accordingly. - -
- - - -
- -1. {% include cockroachcloud/download-the-binary.md %} - -1. In your terminal, run the second command from the dialog to create a new `certs` directory on your local machine and download the CA certificate to that directory. - - {% include cockroachcloud/download-the-cert.md %} +To download CockroachDB locally and configure it to connect to the cluster with the SQL user you just created, refer to [Connect to a CockroachDB Serverless cluster](https://cockroachlabs.com/docs/cockroachcloud/connect-to-a-serverless-cluster). Make a note of the `cockroach sql` command provided in the **Connect** dialog. ## Step 5. Use the built-in SQL client -1. In your terminal, run the connection string provided in the third step of the dialog to connect to CockroachDB's built-in SQL client. Your username and cluster name are pre-populated for you in the dialog. +1. In your terminal, 1. Use the `cockroach sql` from [Step 4. Connect to the cluster](#step-4-connect-to-the-cluster) to connect to the cluster using the binary you just configured. {{site.data.alerts.callout_danger}} This connection string contains your password, which will be provided only once. Save it in a secure place (e.g., in a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. @@ -161,5 +140,5 @@ Learn more: Before you move into production: - [Authorize the network]({% link cockroachcloud/connect-to-your-cluster.md %}#authorize-your-network) from which your app will access the cluster. -- Download the `ca.crt` file to every machine from which you want to [connect to the cluster]({% link cockroachcloud/connect-to-your-cluster.md %}#select-a-connection-method). +- Configure every machine from which you want to [connect to the cluster]({% link cockroachcloud/connect-to-your-cluster.md %}#connect-to-your-cluster). - Review the [production checklist]({% link cockroachcloud/production-checklist.md %}). diff --git a/src/current/cockroachcloud/stream-changefeed-to-snowflake-aws.md b/src/current/cockroachcloud/stream-changefeed-to-snowflake-aws.md index 4464ad4ad6e..9453d7b23f3 100644 --- a/src/current/cockroachcloud/stream-changefeed-to-snowflake-aws.md +++ b/src/current/cockroachcloud/stream-changefeed-to-snowflake-aws.md @@ -32,19 +32,16 @@ Before you begin, make sure you have: If you have not done so already, [create a cluster]({% link cockroachcloud/create-your-cluster.md %}). -## Step 2. Configure your cluster +## Step 2. Connect to your cluster -1. Connect to the built-in SQL shell as a user with [`admin`](../{{site.current_cloud_version}}/security-reference/authorization.html#admin-role) privileges, replacing the placeholders in the [client connection string]({% link cockroachcloud/connect-to-your-cluster.md %}#select-a-connection-method) with the correct username, password, and path to the `ca.cert`: +Refer to [Connect to a CockroachDB Dedicated cluster](https://cockroachlabs.com/docs/cockroachcloud/connect-to-your-cluster) for detailed instructions on how to to: - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach sql \ - --url='postgres://:@:26257?sslmode=verify-full&sslrootcert=certs/ca.crt' - ~~~ +1. Download and install CockroachDB and your cluster's CA certificate locally. +1. Generate the `cockroach sql` command that you will use to connect to the cluster from the command line as a SQL user with [admin] privileges in the cluster. 
- {{site.data.alerts.callout_info}} - For more information on connecting to your cluster, refer to [Connect to your CockroachDB {{ site.data.products.dedicated }} Cluster]({% link cockroachcloud/connect-to-your-cluster.md %}) or [Connect to your CockroachDB {{ site.data.products.serverless }} Cluster](connect-to-a-serverless-cluster.html). - {{site.data.alerts.end}} +## Step 2. Configure your cluster + +1. In your terminal, enter the `cockroach sql` command and connection string from [Step 2. Connect to your cluster](#step-2-connect-to-your-cluster) to start the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql.html). 1. Enable [rangefeeds](../{{site.current_cloud_version}}/create-and-configure-changefeeds.html#enable-rangefeeds). Note that rangefeeds are enabled by default on {{ site.data.products.serverless }} clusters: @@ -243,4 +240,4 @@ The following points outline two potential workarounds. For detailed instruction - Snowpipe is unaware of CockroachDB [resolved timestamps](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/create-changefeed#resolved-option). This means CockroachDB transactions will not be loaded atomically and partial transactions can briefly be returned from Snowflake. - Snowpipe works best with append-only workloads, as Snowpipe lacks native ETL capabilities to perform updates to data. You may need to pre-process data before uploading it to Snowflake. -Refer to the [Create and Configure Changefeeds](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/create-and-configure-changefeeds#known-limitations) page for more general changefeed known limitations. \ No newline at end of file +Refer to the [Create and Configure Changefeeds](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/create-and-configure-changefeeds#known-limitations) page for more general changefeed known limitations. diff --git a/src/current/v22.2/install-cockroachdb-linux.md b/src/current/v22.2/install-cockroachdb-linux.md index 813b62a00a8..949a809330c 100644 --- a/src/current/v22.2/install-cockroachdb-linux.md +++ b/src/current/v22.2/install-cockroachdb-linux.md @@ -13,10 +13,10 @@ docs_area: deploy
-

See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

- {% include cockroachcloud/use-cockroachcloud-instead.md %} +See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). + Use one of the options below to install CockroachDB.
diff --git a/src/current/v22.2/install-cockroachdb-mac.md b/src/current/v22.2/install-cockroachdb-mac.md index 3e7389a3356..c1ee8ceebeb 100644 --- a/src/current/v22.2/install-cockroachdb-mac.md +++ b/src/current/v22.2/install-cockroachdb-mac.md @@ -13,15 +13,15 @@ docs_area: deploy
-

See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

+{% include cockroachcloud/use-cockroachcloud-instead.md %} + +See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). {% comment %}v22.2.0+{% endcomment %} {{site.data.alerts.callout_danger}}

On macOS ARM systems, spatial features are disabled due to an issue with macOS code signing for the GEOS libraries. Users needing spatial features on an ARM Mac may instead use Rosetta to run the Intel binary or use the Docker image distribution. Refer to GitHub tracking issue for more information.

{{site.data.alerts.end}} -{% include cockroachcloud/use-cockroachcloud-instead.md %} - {% capture arch_note_homebrew %}

For CockroachDB v22.2.x and above, Homebrew installs binaries for your system architecture, either Intel or ARM (Apple Silicon).

For previous releases, Homebrew installs Intel binaries. Intel binaries can run on ARM systems, but with a significant reduction in performance. CockroachDB on ARM for macOS is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments.

{% endcapture %} {% capture arch_note_binaries %}

For CockroachDB v22.2.x and above, download the binaries for your system architecture, either Intel or ARM (Apple Silicon).

For previous releases, download Intel binaries. Intel binaries can run on ARM systems, but with a significant reduction in performance. CockroachDB on ARM for macOS is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments.

{% endcapture %} diff --git a/src/current/v22.2/install-cockroachdb-windows.md b/src/current/v22.2/install-cockroachdb-windows.md index 2cf3080f6b4..1ad740a2de7 100644 --- a/src/current/v22.2/install-cockroachdb-windows.md +++ b/src/current/v22.2/install-cockroachdb-windows.md @@ -13,10 +13,10 @@ docs_area: deploy -See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade. - {% include cockroachcloud/use-cockroachcloud-instead.md %} +See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). + Use one of the options below to install CockroachDB.
diff --git a/src/current/v23.1/install-cockroachdb-linux.md b/src/current/v23.1/install-cockroachdb-linux.md index fc616fb477e..ae280b247d7 100644 --- a/src/current/v23.1/install-cockroachdb-linux.md +++ b/src/current/v23.1/install-cockroachdb-linux.md @@ -13,10 +13,10 @@ docs_area: deploy -

See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

- {% include cockroachcloud/use-cockroachcloud-instead.md %} +See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). + Use one of the options below to install CockroachDB. {{site.data.alerts.callout_success}} diff --git a/src/current/v23.1/install-cockroachdb-mac.md b/src/current/v23.1/install-cockroachdb-mac.md index bff4200d970..cb16bac1e86 100644 --- a/src/current/v23.1/install-cockroachdb-mac.md +++ b/src/current/v23.1/install-cockroachdb-mac.md @@ -13,15 +13,15 @@ docs_area: deploy -

See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

+{% include cockroachcloud/use-cockroachcloud-instead.md %} + +See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). {% comment %}v22.2.0+{% endcomment %} {{site.data.alerts.callout_danger}}

On macOS ARM systems, spatial features are disabled due to an issue with macOS code signing for the GEOS libraries. Users needing spatial features on an ARM Mac may instead use Rosetta to run the Intel binary or use the Docker image distribution. Refer to GitHub tracking issue for more information.

{{site.data.alerts.end}} -{% include cockroachcloud/use-cockroachcloud-instead.md %} - {% capture arch_note_homebrew %}

For CockroachDB v22.2.x and above, Homebrew installs binaries for your system architecture, either Intel or ARM (Apple Silicon).

For previous releases, Homebrew installs Intel binaries. Intel binaries can run on ARM systems, but with a significant reduction in performance. CockroachDB on ARM for macOS is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments.

{% endcapture %} {% capture arch_note_binaries %}

For CockroachDB v22.2.x and above, download the binaries for your system architecture, either Intel or ARM (Apple Silicon).

For previous releases, download Intel binaries. Intel binaries can run on ARM systems, but with a significant reduction in performance. CockroachDB on ARM for macOS is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments.

{% endcapture %} diff --git a/src/current/v23.1/install-cockroachdb-windows.md b/src/current/v23.1/install-cockroachdb-windows.md index 82df6b2f703..4c56f056a0d 100644 --- a/src/current/v23.1/install-cockroachdb-windows.md +++ b/src/current/v23.1/install-cockroachdb-windows.md @@ -13,7 +13,7 @@ docs_area: deploy -

See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

+See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). {% include cockroachcloud/use-cockroachcloud-instead.md %} diff --git a/src/current/v23.2/install-cockroachdb-linux.md b/src/current/v23.2/install-cockroachdb-linux.md index 433e75956e0..5d0fe2ef7f1 100644 --- a/src/current/v23.2/install-cockroachdb-linux.md +++ b/src/current/v23.2/install-cockroachdb-linux.md @@ -13,15 +13,13 @@ docs_area: deploy -

See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

- {% include cockroachcloud/use-cockroachcloud-instead.md %} +See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). + Use one of the options below to install CockroachDB. -{{site.data.alerts.callout_success}} To install a FIPS-compliant CockroachDB binary, refer to [Install a FIPS-compliant build of CockroachDB]({% link {{ page.version.version }}/fips.md %}). -{{site.data.alerts.end}} CockroachDB on ARM is Generally Available in v23.2.0 and above. For limitations specific to ARM, refer to Limitations. diff --git a/src/current/v23.2/install-cockroachdb-mac.md b/src/current/v23.2/install-cockroachdb-mac.md index bff4200d970..cb16bac1e86 100644 --- a/src/current/v23.2/install-cockroachdb-mac.md +++ b/src/current/v23.2/install-cockroachdb-mac.md @@ -13,15 +13,15 @@ docs_area: deploy -

See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

+{% include cockroachcloud/use-cockroachcloud-instead.md %} + +See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). {% comment %}v22.2.0+{% endcomment %} {{site.data.alerts.callout_danger}}

On macOS ARM systems, spatial features are disabled due to an issue with macOS code signing for the GEOS libraries. Users needing spatial features on an ARM Mac may instead use Rosetta to run the Intel binary or use the Docker image distribution. Refer to GitHub tracking issue for more information.

{{site.data.alerts.end}} -{% include cockroachcloud/use-cockroachcloud-instead.md %} - {% capture arch_note_homebrew %}

For CockroachDB v22.2.x and above, Homebrew installs binaries for your system architecture, either Intel or ARM (Apple Silicon).

For previous releases, Homebrew installs Intel binaries. Intel binaries can run on ARM systems, but with a significant reduction in performance. CockroachDB on ARM for macOS is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments.

{% endcapture %} {% capture arch_note_binaries %}

For CockroachDB v22.2.x and above, download the binaries for your system architecture, either Intel or ARM (Apple Silicon).

For previous releases, download Intel binaries. Intel binaries can run on ARM systems, but with a significant reduction in performance. CockroachDB on ARM for macOS is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments.

{% endcapture %} diff --git a/src/current/v23.2/install-cockroachdb-windows.md b/src/current/v23.2/install-cockroachdb-windows.md index 82df6b2f703..1ad740a2de7 100644 --- a/src/current/v23.2/install-cockroachdb-windows.md +++ b/src/current/v23.2/install-cockroachdb-windows.md @@ -13,10 +13,10 @@ docs_area: deploy -

See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

- {% include cockroachcloud/use-cockroachcloud-instead.md %} +See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). + Use one of the options below to install CockroachDB.
From 0704af06db73c836aeb5a8d675dddcc573ab1a52 Mon Sep 17 00:00:00 2001 From: Rich Loveland Date: Tue, 30 Jan 2024 17:47:22 -0500 Subject: [PATCH 16/18] Remove kv.snapshot_recovery.max_rate setting (#18247) Fixes DOC-9566 --- .../v23.2/cluster-setup-troubleshooting.md | 15 +++++---------- .../performance-benchmarking-with-tpcc-large.md | 1 - 2 files changed, 5 insertions(+), 11 deletions(-) diff --git a/src/current/v23.2/cluster-setup-troubleshooting.md b/src/current/v23.2/cluster-setup-troubleshooting.md index 05e019384a1..87ee65e2947 100644 --- a/src/current/v23.2/cluster-setup-troubleshooting.md +++ b/src/current/v23.2/cluster-setup-troubleshooting.md @@ -170,28 +170,23 @@ W180817 17:01:56.510430 914 vendor/google.golang.org/grpc/clientconn.go:1293 grp ###### Excessive snapshot rebalance and recovery rates -The `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` [cluster settings]({% link {{ page.version.version }}/cluster-settings.md %}) set the rate limits at which [snapshots]({% link {{ page.version.version }}/architecture/replication-layer.md %}#snapshots) are sent to nodes. These settings can be temporarily increased to expedite replication during an outage or when scaling a cluster up or down. +The `kv.snapshot_rebalance.max_rate` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-kv-snapshot-rebalance-max-rate) sets the rate limit at which [snapshots]({% link {{ page.version.version }}/architecture/replication-layer.md %}#snapshots) are sent to nodes. This setting can be temporarily increased to expedite replication during an outage or when scaling a cluster up or down. -However, if the settings are too high when nodes are added to the cluster, this can cause degraded performance and node crashes. We recommend **not** increasing these values by more than 2 times their [default values]({% link {{ page.version.version }}/cluster-settings.md %}) without explicit approval from Cockroach Labs. +However, if the setting is too high when nodes are added to the cluster, this can cause degraded performance and node crashes. We recommend **not** increasing this value by more than 2 times its [default value]({% link {{ page.version.version }}/cluster-settings.md %}#setting-kv-snapshot-rebalance-max-rate) without explicit approval from Cockroach Labs. -**Explanation:** If `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` are set too high for the cluster during scaling, this can cause nodes to experience ingestions faster than [compactions]({% link {{ page.version.version }}/architecture/storage-layer.md %}#compaction) can keep up, and result in an [inverted LSM]({% link {{ page.version.version }}/architecture/storage-layer.md %}#inverted-lsms). +**Explanation:** If `kv.snapshot_rebalance.max_rate` is set too high for the cluster during scaling, this can cause nodes to experience ingestions faster than [compactions]({% link {{ page.version.version }}/architecture/storage-layer.md %}#compaction) can keep up, and result in an [inverted LSM]({% link {{ page.version.version }}/architecture/storage-layer.md %}#inverted-lsms). **Solution:** [Check LSM health]({% link {{ page.version.version }}/common-issues-to-monitor.md %}#lsm-health). 
{% include {{ page.version.version }}/prod-deployment/resolution-inverted-lsm.md %} -After [compaction]({% link {{ page.version.version }}/architecture/storage-layer.md %}#compaction) has completed, lower `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` to their [default values]({% link {{ page.version.version }}/cluster-settings.md %}). As you add nodes to the cluster, slowly increase both cluster settings, if desired. This will control the rate of new ingestions for newly added nodes. Meanwhile, monitor the cluster for unhealthy increases in [IOPS]({% link {{ page.version.version }}/common-issues-to-monitor.md %}#disk-iops) and [CPU]({% link {{ page.version.version }}/common-issues-to-monitor.md %}#cpu). +After [compaction]({% link {{ page.version.version }}/architecture/storage-layer.md %}#compaction) has completed, lower `kv.snapshot_rebalance.max_rate` to its [default value]({% link {{ page.version.version }}/cluster-settings.md %}#setting-kv-snapshot-rebalance-max-rate). As you add nodes to the cluster, slowly increase the value of the cluster setting, if desired. This will control the rate of new ingestions for newly added nodes. Meanwhile, monitor the cluster for unhealthy increases in [IOPS]({% link {{ page.version.version }}/common-issues-to-monitor.md %}#disk-iops) and [CPU]({% link {{ page.version.version }}/common-issues-to-monitor.md %}#cpu). -Outside of performing cluster maintenance, return `kv.snapshot_rebalance.max_rate` and `kv.snapshot_recovery.max_rate` to their [default values]({% link {{ page.version.version }}/cluster-settings.md %}). +Outside of performing cluster maintenance, return `kv.snapshot_rebalance.max_rate` to its [default value]({% link {{ page.version.version }}/cluster-settings.md %}#setting-kv-snapshot-rebalance-max-rate). {% include_cached copy-clipboard.html %} ~~~ sql RESET CLUSTER SETTING kv.snapshot_rebalance.max_rate; ~~~ -{% include_cached copy-clipboard.html %} -~~~ sql -RESET CLUSTER SETTING kv.snapshot_recovery.max_rate; -~~~ - ## Client connection issues If a client cannot connect to the cluster, check basic network connectivity (`ping`), port connectivity (`telnet`), and certificate validity. diff --git a/src/current/v23.2/performance-benchmarking-with-tpcc-large.md b/src/current/v23.2/performance-benchmarking-with-tpcc-large.md index 9298eb40438..69774c56ea2 100644 --- a/src/current/v23.2/performance-benchmarking-with-tpcc-large.md +++ b/src/current/v23.2/performance-benchmarking-with-tpcc-large.md @@ -146,7 +146,6 @@ You'll be importing a large TPC-C data set. 
To speed that up, you can temporaril ~~~ sql SET CLUSTER SETTING kv.dist_sender.concurrency_limit = 2016; SET CLUSTER SETTING kv.snapshot_rebalance.max_rate = '256 MiB'; - SET CLUSTER SETTING kv.snapshot_recovery.max_rate = '256 MiB'; SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false; SET CLUSTER SETTING schemachanger.backfiller.max_buffer_size = '5 GiB'; SET CLUSTER SETTING rocksdb.min_wal_sync_interval = '500us'; From 0ba9979a87a98702ff96b0ebd96167383f015cbf Mon Sep 17 00:00:00 2001 From: Ryan Kuo <8740013+taroface@users.noreply.github.com> Date: Tue, 30 Jan 2024 17:51:20 -0500 Subject: [PATCH 17/18] document Read Committed performance KLs (#18260) document Read Committed performance KLs --- src/current/v23.2/read-committed.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/src/current/v23.2/read-committed.md b/src/current/v23.2/read-committed.md index 022ce9a20ec..d5bed5359f3 100644 --- a/src/current/v23.2/read-committed.md +++ b/src/current/v23.2/read-committed.md @@ -19,6 +19,8 @@ docs_area: deploy Whereas `SERIALIZABLE` isolation guarantees data correctness by placing transactions into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}), `READ COMMITTED` isolation permits some [concurrency anomalies](#concurrency-anomalies) in exchange for minimizing transaction aborts, [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries), and blocking. Compared to `SERIALIZABLE` transactions, `READ COMMITTED` transactions return fewer [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that require client-side handling. See [`READ COMMITTED` transaction behavior](#read-committed-transaction-behavior). +If your workload is already running well under `SERIALIZABLE` isolation, Cockroach Labs does not recommend changing to `READ COMMITTED` isolation unless there is a specific need. + {{site.data.alerts.callout_info}} `READ COMMITTED` on CockroachDB provides stronger isolation than `READ COMMITTED` on PostgreSQL. On CockroachDB, `READ COMMITTED` prevents anomalies within single statements. For complete details on how `READ COMMITTED` is implemented on CockroachDB, see the [Read Committed RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20230122_read_committed_isolation.md). {{site.data.alerts.end}} @@ -925,6 +927,12 @@ The following are not yet supported with `READ COMMITTED`: - [Shared locks](#locking-reads) cannot yet be promoted to exclusive locks. - [`SKIP LOCKED`]({% link {{ page.version.version }}/select-for-update.md %}#wait-policies) requests do not check for [replicated locks]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#unreplicated-locks), which can be acquired by `READ COMMITTED` transactions. +The following affect the performance of `READ COMMITTED` transactions: + +- Because locks acquired by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks, [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}), and [`SELECT FOR SHARE`]({% link {{ page.version.version }}/select-for-update.md %}) are fully replicated under `READ COMMITTED` isolation, some queries experience a delay for Raft replication. +- [Foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks are not performed in parallel under `READ COMMITTED` isolation. 
+- [`SELECT FOR UPDATE` and `SELECT FOR SHARE`]({% link {{ page.version.version }}/select-for-update.md %}) statements are less optimized under `READ COMMITTED` isolation than under `SERIALIZABLE` isolation. Under `READ COMMITTED` isolation, `SELECT FOR UPDATE` and `SELECT FOR SHARE` usually perform an extra lookup join for every locked table when compared to the same queries under `SERIALIZABLE`. In addition, some optimization steps (such as de-correlation of correlated [subqueries]({% link {{ page.version.version }}/subqueries.md %})) are not currently performed on these queries. + ## See also - [Transaction Layer]({% link {{ page.version.version }}/architecture/transaction-layer.md %}) From 0a273dcde3cafc4e258b2b557ac97cc3232e3664 Mon Sep 17 00:00:00 2001 From: Rich Loveland Date: Tue, 30 Jan 2024 18:07:29 -0500 Subject: [PATCH 18/18] Note that `pg_stat_activity` is empty (#18264) Fixes DOC-9336 --- src/current/v22.2/pg-catalog.md | 2 +- src/current/v23.1/pg-catalog.md | 2 +- src/current/v23.2/pg-catalog.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/current/v22.2/pg-catalog.md b/src/current/v22.2/pg-catalog.md index 600c9c92bc2..e7dcb0cc367 100644 --- a/src/current/v22.2/pg-catalog.md +++ b/src/current/v22.2/pg-catalog.md @@ -83,7 +83,7 @@ PostgreSQL 13 system catalog | `pg_catalog` table `pg_shdescription` | `pg_shdescription` `pg_shmem_allocations` | `pg_shmem_allocations` (empty) `pg_shseclabel` | `pg_shseclabel` -`pg_stat_activity` | `pg_stat_activity` +`pg_stat_activity` | `pg_stat_activity` (empty) `pg_stat_all_indexes` | `pg_stat_all_indexes` (empty) `pg_stat_all_tables` | `pg_stat_all_tables` (empty) `pg_stat_archiver` | `pg_stat_archiver` (empty) diff --git a/src/current/v23.1/pg-catalog.md b/src/current/v23.1/pg-catalog.md index dd110ba5d20..f3d3db93edc 100644 --- a/src/current/v23.1/pg-catalog.md +++ b/src/current/v23.1/pg-catalog.md @@ -83,7 +83,7 @@ PostgreSQL 13 system catalog | `pg_catalog` table `pg_shdescription` | `pg_shdescription` `pg_shmem_allocations` | `pg_shmem_allocations` (empty) `pg_shseclabel` | `pg_shseclabel` -`pg_stat_activity` | `pg_stat_activity` +`pg_stat_activity` | `pg_stat_activity` (empty) `pg_stat_all_indexes` | `pg_stat_all_indexes` (empty) `pg_stat_all_tables` | `pg_stat_all_tables` (empty) `pg_stat_archiver` | `pg_stat_archiver` (empty) diff --git a/src/current/v23.2/pg-catalog.md b/src/current/v23.2/pg-catalog.md index dd110ba5d20..f3d3db93edc 100644 --- a/src/current/v23.2/pg-catalog.md +++ b/src/current/v23.2/pg-catalog.md @@ -83,7 +83,7 @@ PostgreSQL 13 system catalog | `pg_catalog` table `pg_shdescription` | `pg_shdescription` `pg_shmem_allocations` | `pg_shmem_allocations` (empty) `pg_shseclabel` | `pg_shseclabel` -`pg_stat_activity` | `pg_stat_activity` +`pg_stat_activity` | `pg_stat_activity` (empty) `pg_stat_all_indexes` | `pg_stat_all_indexes` (empty) `pg_stat_all_tables` | `pg_stat_all_tables` (empty) `pg_stat_archiver` | `pg_stat_archiver` (empty)
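A minimal sketch of the behavior documented above: `pg_stat_activity` can be queried without error but returns no rows, so use CockroachDB's `SHOW SESSIONS` statement to inspect active sessions instead.

{% include_cached copy-clipboard.html %}
~~~ sql
-- pg_stat_activity exists for PostgreSQL compatibility but is empty,
-- so this query succeeds and returns zero rows.
SELECT * FROM pg_catalog.pg_stat_activity;

-- List active sessions on the cluster.
SHOW SESSIONS;
~~~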

Column Level Encryption

-CockroachDB now supports column-level encryption through a set of built-in functions. This feature allows you to encrypt one or more columns in every row of a database table, and can be useful for compliance scenarios such as adhering to PCI or GDPR.
+CockroachDB now supports column-level encryption through a set of built-in functions. This feature allows you to encrypt one or more columns in every row of a database table, and can be useful for compliance scenarios such as adhering to PCI or GDPR. An Enterprise license is required.

23.2 {% include icon-yes.html %}
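A minimal sketch of column-level encryption, assuming the pgcrypto-style `encrypt()` and `decrypt()` built-ins; the table, column, cipher choice, and hard-coded key below are illustrative assumptions only, and a production deployment would manage keys outside of SQL. Note that an Enterprise license is required, per the change above.

{% include_cached copy-clipboard.html %}
~~~ sql
-- Store the encrypted value in a BYTES column.
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email_enc BYTES
);

-- Encrypt on write; the 16-byte key is a placeholder for illustration.
INSERT INTO users (email_enc)
    VALUES (encrypt(b'alice@example.com', b'0123456789abcdef', 'aes'));

-- Decrypt on read and convert the result back to a string.
SELECT convert_from(decrypt(email_enc, b'0123456789abcdef', 'aes'), 'UTF8') AS email
FROM users;
~~~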