DOC-8113: Use Jekyll links to link to different docs pages (#17553)
Fixes DOC-8113

What this PR does:

- For all docs that have links pointing to the same folder (e.g., v23.1, cockroachcloud, releases), use `{% link path/to/page.md %}`.
- For all docs that have links pointing to a different folder, use the fully qualified URL (e.g., `https://www.cockroachlabs.com/docs/releases/v23.1`). We'll eventually break each folder off into its own microsite, so using the full URL now means each link only has to be updated once rather than twice.
- For any reference to `/{{ site.versions["stable"] }}/` or `/stable/` within links in the cockroachcloud docs only, replace it with `/{{ site.current_cloud_version }}/`.
- Updated all advisories pages so that links to the docs point to the latest version applicable to the tech advisory, instead of to the stable docs.
- Updated all releases pages so that links point to the docs for the same version as the release, instead of to the stable docs.
- Converted all HTML comments to Liquid comments, because the contents of HTML comments are both parsed by the Liquid parser and rendered on the production site.
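
In practice, the resulting link patterns look roughly like the following sketch (the pages shown are ones touched in this diff; the Liquid comments mark the intent of each form):

~~~
{% comment %} Same-folder link: Jekyll resolves the path at build time and fails the build if the target page is missing. {% endcomment %}
See [Troubleshooting naming conflicts]({% link cockroachcloud/use-managed-service-backups.md %}#troubleshooting).

{% comment %} Cross-folder link from a cockroachcloud page: a fully qualified URL with the cloud version variable, so it keeps working once each folder becomes its own microsite. {% endcomment %}
See the [`BACKUP`](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/backup) statement.

{% comment %} Liquid comments like these replace HTML comments, whose contents were parsed by Liquid and rendered on the production site. {% endcomment %}
~~~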
nickvigilante authored Aug 14, 2023
1 parent a8ca78b commit 35f3263
Showing 1,403 changed files with 20,182 additions and 19,460 deletions.
1 change: 0 additions & 1 deletion src/current/.htmltest.yml
@@ -22,7 +22,6 @@ IgnoreURLs:
- "https://docs.github.com/*"
- "https://movr.cloud"
- "https://support.cockroachlabs.com/*"
- "https://www.cockroachlabs.com/*"
- "https://www.php.net/*"
- "https://crates.io/*"
- "https://docs.pipenv.org/*"
4 changes: 2 additions & 2 deletions src/current/_includes/cockroachcloud/app/before-you-begin.md
@@ -1,5 +1,5 @@
1. [Install CockroachDB](../{{site.current_cloud_version}}/install-cockroachdb.html).
1. Start up a [secure](../{{site.current_cloud_version}}/secure-a-cluster.html) or [insecure](../{{site.current_cloud_version}}/start-a-local-cluster.html) local cluster.
1. [Install CockroachDB](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/install-cockroachdb).
1. Start up a [secure](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/secure-a-cluster) or [insecure](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/start-a-local-cluster) local cluster.
1. Choose the instructions that correspond to whether your cluster is secure or insecure:

<div class="filters filters-big clearfix">
12 changes: 6 additions & 6 deletions src/current/_includes/cockroachcloud/app/see-also-links.md
@@ -1,8 +1,8 @@
You might also be interested in the following pages:

- [Client Connection Parameters](../{{site.current_cloud_version}}/connection-parameters.html)
- [Data Replication](../{{site.current_cloud_version}}/demo-data-replication.html)
- [Fault Tolerance & Recovery](../{{site.current_cloud_version}}/demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](../{{site.current_cloud_version}}/demo-automatic-rebalancing.html)
- [Cross-Cloud Migration](../{{site.current_cloud_version}}/demo-automatic-cloud-migration.html)
- [Automated Operations](../{{site.current_cloud_version}}/orchestrate-a-local-cluster-with-kubernetes-insecure.html)
- [Client Connection Parameters](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/connection-parameters)
- [Data Replication](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/demo-data-replication)
- [Fault Tolerance & Recovery](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/demo-fault-tolerance-and-recovery)
- [Automatic Rebalancing](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/demo-automatic-rebalancing)
- [Cross-Cloud Migration](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/demo-automatic-cloud-migration)
- [Automated Operations](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/orchestrate-a-local-cluster-with-kubernetes-insecure)
10 changes: 5 additions & 5 deletions src/current/_includes/cockroachcloud/backup-examples.md
@@ -1,6 +1,6 @@
#### Back up a cluster

To take a [full backup](../{{site.current_cloud_version}}/take-full-and-incremental-backups.html#full-backups) of a cluster:
To take a [full backup](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-full-and-incremental-backups#full-backups) of a cluster:

{% include_cached copy-clipboard.html %}
~~~ sql
@@ -11,7 +11,7 @@ AS OF SYSTEM TIME '-10s';

#### Back up a database

To take a [full backup](../{{site.current_cloud_version}}/take-full-and-incremental-backups.html#full-backups) of a single database:
To take a [full backup](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-full-and-incremental-backups#full-backups) of a single database:

{% include_cached copy-clipboard.html %}
~~~ sql
@@ -20,7 +20,7 @@ INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY
AS OF SYSTEM TIME '-10s';
~~~

To take a [full backup](../{{site.current_cloud_version}}/take-full-and-incremental-backups.html#full-backups) of multiple databases:
To take a [full backup](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-full-and-incremental-backups#full-backups) of multiple databases:

{% include_cached copy-clipboard.html %}
~~~ sql
@@ -31,7 +31,7 @@ AS OF SYSTEM TIME '-10s';

#### Back up a table or view

To take a [full backup](../{{site.current_cloud_version}}/take-full-and-incremental-backups.html#full-backups) of a single table or view:
To take a [full backup](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-full-and-incremental-backups#full-backups) of a single table or view:

{% include_cached copy-clipboard.html %}
~~~ sql
@@ -40,4 +40,4 @@ INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY
AS OF SYSTEM TIME '-10s';
~~~

To resolve database or table naming conflicts during a restore, see [Troubleshooting naming conflicts](use-managed-service-backups.html#troubleshooting).
To resolve database or table naming conflicts during a restore, see [Troubleshooting naming conflicts]({% link cockroachcloud/use-managed-service-backups.md %}#troubleshooting).
4 changes: 2 additions & 2 deletions src/current/_includes/cockroachcloud/backup-types.md
@@ -1,4 +1,4 @@
CockroachDB supports two types of backups:

- **Managed-service backups**: Cockroach Labs takes automated backups of {{ site.data.products.serverless }} and {{ site.data.products.dedicated }} clusters that are stored in Cockroach Labs' cloud storage. {% if page.name != "use-managed-service-backups.md" %} See [Use Managed-Service Backups](../cockroachcloud/use-managed-service-backups.html) to learn more about the type and frequency of backups supported for both {{ site.data.products.db }} clusters. {% else %} {% endif %}
- {% if page.name == "take-and-restore-customer-owned-backups.md" %} **Customer-owned backups**: {% else %} **[Customer-owned backups](../cockroachcloud/take-and-restore-customer-owned-backups.html)**: {% endif %} You can take manual backups and store them in your [cloud storage buckets](../{{site.versions["stable"]}}/use-cloud-storage.html) using the [`BACKUP`](../{{site.versions["stable"]}}/backup.html) statement. Customer-owned backups are supported in {{ site.data.products.serverless }}, {{ site.data.products.dedicated }}, and {{ site.data.products.core }}.
- **Managed-service backups**: Cockroach Labs takes automated backups of {{ site.data.products.serverless }} and {{ site.data.products.dedicated }} clusters that are stored in Cockroach Labs' cloud storage. {% if page.name != "use-managed-service-backups.md" %} See [Use Managed-Service Backups]({% link cockroachcloud/use-managed-service-backups.md %}) to learn more about the type and frequency of backups supported for both {{ site.data.products.db }} clusters. {% else %} {% endif %}
- {% if page.name == "take-and-restore-customer-owned-backups.md" %} **Customer-owned backups**: {% else %} **[Customer-owned backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %})**: {% endif %} You can take manual backups and store them in your [cloud storage buckets](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/use-cloud-storage) using the [`BACKUP`](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/backup) statement. Customer-owned backups are supported in {{ site.data.products.serverless }}, {{ site.data.products.dedicated }}, and {{ site.data.products.core }}.
8 changes: 4 additions & 4 deletions src/current/_includes/cockroachcloud/cdc/cdc-bulk-examples.md
@@ -1,6 +1,6 @@
Change data capture (CDC) provides efficient, distributed, row-level changefeeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing.

A changefeed targets an allowlist of tables, called "watched rows". Each change to a watched row is emitted as a record to a configurable sink, like [Kafka](../{{site.current_cloud_version}}/create-changefeed.html#create-a-changefeed-connected-to-kafka) or a [cloud storage sink](../{{site.current_cloud_version}}/create-changefeed.html#create-a-changefeed-connected-to-a-cloud-storage-sink). You can manage your changefeeds with [create](../{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds.html#create), [pause](../{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds.html#pause), [resume](../{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds.html#resume), or [cancel](../{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds.html#cancel) in this version of {{ site.data.products.db }}.
A changefeed targets an allowlist of tables, called "watched rows". Each change to a watched row is emitted as a record to a configurable sink, like [Kafka](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/create-changefeed#create-a-changefeed-connected-to-kafka) or a [cloud storage sink](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/create-changefeed#create-a-changefeed-connected-to-a-cloud-storage-sink). You can manage your changefeeds with [create](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds#create), [pause](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds#pause), [resume](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds#resume), or [cancel](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds#cancel) in this version of {{ site.data.products.db }}.

#### Create a changefeed connected to Kafka

@@ -20,10 +20,10 @@ A changefeed targets an allowlist of tables, called "watched rows". Each change
~~~

{{site.data.alerts.callout_info}}
Currently, [changefeeds](../{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds.html) connected to [Kafka versions < v1.0](https://docs.confluent.io/platform/current/installation/versions-interoperability.html) are not supported in CockroachDB v21.1.
Currently, [changefeeds](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds) connected to [Kafka versions < v1.0](https://docs.confluent.io/platform/current/installation/versions-interoperability.html) are not supported in CockroachDB v21.1.
{{site.data.alerts.end}}

For more information on how to create a changefeed connected to Kafka, see [Stream Data Out of CockroachDB Using Changefeeds](../{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds.html#create-a-changefeed-connected-to-kafka) and [`CREATE CHANGEFEED`](../{{site.current_cloud_version}}/create-changefeed.html).
For more information on how to create a changefeed connected to Kafka, see [Stream Data Out of CockroachDB Using Changefeeds](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds#create-a-changefeed-connected-to-kafka) and [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/create-changefeed).

#### Create a changefeed connected to a cloud storage sink

@@ -46,4 +46,4 @@ For more information on how to create a changefeed connected to Kafka, see [Stre
(1 row)
~~~

For more information on how to create a changefeed connected to a cloud storage sink, see [Stream Data Out of CockroachDB Using Changefeeds](../{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds.html#create-a-changefeed-connected-to-a-cloud-storage-sink) and [`CREATE CHANGEFEED`](../{{site.current_cloud_version}}/create-changefeed.html).
For more information on how to create a changefeed connected to a cloud storage sink, see [Stream Data Out of CockroachDB Using Changefeeds](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/stream-data-out-of-cockroachdb-using-changefeeds#create-a-changefeed-connected-to-a-cloud-storage-sink) and [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/create-changefeed).
2 changes: 1 addition & 1 deletion src/current/_includes/cockroachcloud/cdc/core-csv.md
@@ -1,3 +1,3 @@
{{site.data.alerts.callout_info}}
To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. To set the display format, use the [`--format=csv` flag](../{{site.current_cloud_version}}/cockroach-sql.html#sql-flag-format) when starting the [built-in SQL client](../{{site.current_cloud_version}}/cockroach-sql.html), or set the [`\set display_format=csv` option](../{{site.current_cloud_version}}/cockroach-sql.html#client-side-options) once the SQL client is open.
To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. To set the display format, use the [`--format=csv` flag](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql#sql-flag-format) when starting the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql), or set the [`\set display_format=csv` option](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql#client-side-options) once the SQL client is open.
{{site.data.alerts.end}}
2 changes: 1 addition & 1 deletion src/current/_includes/cockroachcloud/cdc/core-url.md
@@ -1,3 +1,3 @@
{{site.data.alerts.callout_info}}
Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](../{{site.current_cloud_version}}/cancel-query.html) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default.
Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cancel-query) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default.
{{site.data.alerts.end}}
@@ -1,6 +1,6 @@
In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas.

1. Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster:
1. Use the [`cockroach start-single-node`]({% link cockroachcloud/cockroach-start-single-node.md %}) command to start a single-node cluster:

{% include_cached copy-clipboard.html %}
~~~ shell
@@ -21,7 +21,7 @@ In this example, you'll set up a core changefeed for a single-node cluster that

Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives).

1. As the `root` user, open the [built-in SQL client](cockroach-sql.html):
1. As the `root` user, open the [built-in SQL client](https://www.cockroachlabs.com/docs/{{ site.current_cloud_version }}/cockroach-sql):

{% include_cached copy-clipboard.html %}
~~~ shell
@@ -32,7 +32,7 @@ In this example, you'll set up a core changefeed for a single-node cluster that

{% include {{ page.version.version }}/cdc/core-csv.md %}

1. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html):
1. Enable the `kv.rangefeed.enabled` [cluster setting]({% link cockroachcloud/cluster-settings.md %}):

{% include_cached copy-clipboard.html %}
~~~ sql
@@ -1,6 +1,6 @@
In this example, you'll set up a core changefeed on your {{ site.data.products.serverless }} cluster.

1. As the `root` user, open the [built-in SQL client](../{{site.current_cloud_version}}/cockroach-sql.html):
1. As the `root` user, open the [built-in SQL client](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql):

{% include_cached copy-clipboard.html %}
~~~ shell
@@ -11,7 +11,7 @@ In this example, you'll set up a core changefeed on your {{ site.data.products.s

{% include cockroachcloud/cdc/core-csv.md %}

1. Enable the `kv.rangefeed.enabled` [cluster setting](../{{site.current_cloud_version}}/cluster-settings.html):
1. Enable the `kv.rangefeed.enabled` [cluster setting](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cluster-settings):

{% include_cached copy-clipboard.html %}
~~~ sql