diff --git a/src/current/v23.2/configure-logs.md b/src/current/v23.2/configure-logs.md
index 15a21c3f4e7..d282926f7b2 100644
--- a/src/current/v23.2/configure-logs.md
+++ b/src/current/v23.2/configure-logs.md
@@ -257,9 +257,10 @@ Along with the [common sink parameters](#common-sink-parameters), each HTTP serv
| `disable-keep-alives` | When `true`, disallows reuse of the server connection across requests.

**Default:** `false` (reuses connections) | | `compression` | Compression method for the HTTP request body. Valid values are `gzip` or `none`.

**Default:** `gzip` | | `headers` | Map of key-value string pairs that are appended to every request as custom HTTP headers. | +| `file-based-headers` | Map of key-filepath string pairs. Each specified file must contain only the value that corresponds to its key. The key from the YAML configuration and the value read from the file are appended to every request as a custom HTTP header.

For example: `file-based-headers: {X-CRDB-HEADER-KEY: /some/path/to/file_with_value, X-ANOTHER-HEADER-KEY: /other/path/to/file_with_another_value}`

A value in a file can be updated without restarting the `cockroach` process. Instead, send `SIGHUP` to the `cockroach` process to notify it to refresh the values. | | `buffering` | Configures buffering of log messages for the sink, with the following sub-parameters:

When `max-staleness` and `flush-trigger-size` are used together, whichever threshold is reached first triggers the flush. `buffering` is enabled by default for [HTTP](#output-to-http-network-collectors) log sinks. To explicitly disable log buffering, specify `buffering: NONE` instead. This setting is typically disabled for [security-related logs]({% link {{ page.version.version }}/logging-use-cases.md %}#security-and-audit-monitoring). See [Log buffering](#log-buffering-for-network-sinks) for more details and usage. |

-For an example network logging configuration, see [Logging use cases]({% link {{ page.version.version }}/logging-use-cases.md %}#network-logging). For an example that uses the `compression` and `headers` parameters, see [Configure an HTTP network collector for Datadog]({% link {{ page.version.version }}/log-sql-statistics-to-datadog.md %}#step-2-configure-an-http-network-collector-for-datadog).
+For an example network logging configuration, see [Logging use cases]({% link {{ page.version.version }}/logging-use-cases.md %}#network-logging). For an example that uses the `compression`, `headers`, `file-based-headers`, and `buffering` parameters, see [Configure an HTTP network collector for Datadog]({% link {{ page.version.version }}/log-sql-statistics-to-datadog.md %}#step-2-configure-an-http-network-collector-for-datadog).
### Output to `stderr`

diff --git a/src/current/v23.2/log-sql-statistics-to-datadog.md b/src/current/v23.2/log-sql-statistics-to-datadog.md
index e6103e24f69..495fc398998 100644
--- a/src/current/v23.2/log-sql-statistics-to-datadog.md
+++ b/src/current/v23.2/log-sql-statistics-to-datadog.md
@@ -49,7 +49,7 @@ sinks:
       format: json
       method: POST
       compression: gzip
-      headers: {DD-API-KEY: "{DATADOG API KEY}"} # replace with actual API key
+      headers: {DD-API-KEY: "DATADOG_API_KEY"} # replace with actual Datadog API key
       buffering:
         format: json-array
         max-staleness: 5s
@@ -60,6 +60,17 @@ sinks:
   channels: [] # set to empty square brackets
 ~~~

+{{site.data.alerts.callout_success}}
+If you prefer to keep the `DD-API-KEY` value in a file other than `logs.yaml`, replace the `headers` parameter with the [`file-based-headers` parameter]({% link {{ page.version.version }}/configure-logs.md %}#file-based-headers):
+
+{% include_cached copy-clipboard.html %}
+~~~ yaml
+      file-based-headers: {DD-API-KEY: "path/to/file"} # replace with path of file containing Datadog API key
+~~~
+
+The value in the file containing the Datadog API key can be updated without restarting the `cockroach` process. Instead, send `SIGHUP` to the `cockroach` process to notify it to refresh the value.
+{{site.data.alerts.end}}
+
 Pass the [`logs.yaml` file]({% link {{ page.version.version }}/configure-logs.md %}#yaml-payload) to the `cockroach` process with either the `--log-config-file` or `--log` flag.

## Step 3. Configure CockroachDB to emit query events