On Configure Logs and Log SQL statistics to Datadog pages, added file-based-headers parameter with note about refreshing values with SIGHUP. (#18182)
florence-crl authored Feb 20, 2024
1 parent 7dde81d commit fdb32a2
Showing 2 changed files with 14 additions and 2 deletions.
3 changes: 2 additions & 1 deletion src/current/v23.2/configure-logs.md
@@ -257,9 +257,10 @@ Along with the [common sink parameters](#common-sink-parameters), each HTTP serv
| `disable-keep-alives` | When `true`, disallows reuse of the server connection across requests.<br><br>**Default:** `false` (reuses connections) |
| `compression` | Compression method for the HTTP request body. Valid values are `gzip` or `none`.<br><br>**Default:** `gzip` |
| `headers` | Map of key-value string pairs which will be appended to every request as custom HTTP headers. |
|<a id="file-based-headers"></a> `file-based-headers` | Map of key-filepath string pairs. The file specified by the filepath contains only the value that corresponds to the key. The key from the YAML configuration and the value contained in the file are appended to every request as a custom HTTP header.<br><br>For example: `file-based-headers: {X-CRDB-HEADER-KEY: /some/path/to/file_with_value, X-ANOTHER-HEADER-KEY: /other/path/to/file_with_another_value}`<br><br>A value in a file can be updated without restarting the `cockroach` process. Instead, send a `SIGHUP` signal to the `cockroach` process to notify it to refresh the values. |
| `buffering` | Configures buffering of log messages for the sink, with the following sub-parameters:<br><br><ul><li>`max-staleness`: The maximum time a log message will wait in the buffer before a flush is triggered. Set to `0` to disable flushing based on elapsed time. Default: `5s`</li><li>`flush-trigger-size`: The number of bytes that will trigger the buffer to flush. Set to `0` to disable flushing based on accumulated size. Default: `1MiB`</li><li>`max-buffer-size`: The maximum size of the buffer: new log messages received when the buffer is full cause older messages to be dropped. Default: `50MiB`</li></ul>When `max-staleness` and `flush-trigger-size` are used together, whichever is reached first will trigger the flush. `buffering` is enabled by default for [HTTP](#output-to-http-network-collectors) log sinks. To explicitly disable log buffering, specify `buffering: NONE` instead. This setting is typically disabled for [security-related logs]({% link {{ page.version.version }}/logging-use-cases.md %}#security-and-audit-monitoring). See [Log buffering](#log-buffering-for-network-sinks) for more details and usage.|
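To illustrate how these parameters fit together, the following is a minimal sketch of an HTTP server sink definition. The sink name, endpoint address, header keys, and file path are placeholders for this sketch, not documented defaults:

~~~ yaml
sinks:
  http-servers:
    example-http-sink:                            # hypothetical sink name
      address: https://logs.example.com/api       # placeholder endpoint
      format: json
      method: POST
      compression: gzip
      headers: {X-Static-Header: "static-value"}              # appended to every request
      file-based-headers: {X-Secret-Header: /path/to/value}   # value read from file; refresh with SIGHUP
      buffering:
        max-staleness: 5s
        flush-trigger-size: 1MiB
        max-buffer-size: 50MiB
~~~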
- For an example network logging configuration, see [Logging use cases]({% link {{ page.version.version }}/logging-use-cases.md %}#network-logging). For an example that uses `compression` and `headers` parameters, see [Configure an HTTP network collector for Datadog]({% link {{ page.version.version }}/log-sql-statistics-to-datadog.md %}#step-2-configure-an-http-network-collector-for-datadog).
+ For an example network logging configuration, see [Logging use cases]({% link {{ page.version.version }}/logging-use-cases.md %}#network-logging). For an example that uses `compression`, `headers`, `file-based-headers`, and `buffering` parameters, see [Configure an HTTP network collector for Datadog]({% link {{ page.version.version }}/log-sql-statistics-to-datadog.md %}#step-2-configure-an-http-network-collector-for-datadog).
### Output to `stderr`
13 changes: 12 additions & 1 deletion src/current/v23.2/log-sql-statistics-to-datadog.md
@@ -49,7 +49,7 @@ sinks:
format: json
method: POST
compression: gzip
- headers: {DD-API-KEY: "{DATADOG API KEY}"} # replace with actual API key
+ headers: {DD-API-KEY: "DATADOG_API_KEY"} # replace with actual DATADOG API key
buffering:
format: json-array
max-staleness: 5s
@@ -60,6 +60,17 @@ sinks:
channels: [] # set to empty square brackets
~~~

{{site.data.alerts.callout_success}}
If you prefer to keep the `DD-API-KEY` in a file other than `logs.yaml`, replace the `headers` parameter with the [`file-based-headers` parameter]({% link {{ page.version.version }}/configure-logs.md %}#file-based-headers):

{% include_cached copy-clipboard.html %}
~~~ yaml
file-based-headers: {DD-API-KEY: "path/to/file"} # replace with path of file containing DATADOG API key
~~~

The value in the file containing the Datadog API key can be updated without restarting the `cockroach` process. Instead, send a `SIGHUP` signal to the `cockroach` process to notify it to refresh the value.
{{site.data.alerts.end}}
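The refresh flow can be sketched from the shell. The key file path is a placeholder for this sketch, and the process lookup assumes a single local `cockroach` process; adjust both for your deployment:

```shell
# Write the rotated Datadog API key to the file referenced by
# file-based-headers. /tmp/datadog-api-key is a placeholder path.
printf '%s' "NEW_DATADOG_API_KEY" > /tmp/datadog-api-key

# Signal the running cockroach process to re-read file-based header
# values without a restart. `|| true` keeps this sketch from failing
# if no matching process is running.
pkill -HUP -x cockroach || true
```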

Pass the [`logs.yaml` file]({% link {{ page.version.version }}/configure-logs.md %}#yaml-payload) to the `cockroach` process with either the `--log-config-file` or `--log` flag.

## Step 3. Configure CockroachDB to emit query events