From 2ea57efead536ed003df44a980c040d1167120dd Mon Sep 17 00:00:00 2001 From: tiffany76 <30397949+tiffany76@users.noreply.github.com> Date: Thu, 21 Mar 2024 18:58:14 -0700 Subject: [PATCH 1/8] Create new page that pulls together internal observability documentation --- .../en/docs/collector/internal-telemetry.md | 491 ++++++++++++++++++ 1 file changed, 491 insertions(+) create mode 100644 content/en/docs/collector/internal-telemetry.md diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md new file mode 100644 index 000000000000..7973e86265bf --- /dev/null +++ b/content/en/docs/collector/internal-telemetry.md @@ -0,0 +1,491 @@ +# Internal Telemetry + +The Collector offers you multiple ways to measure and monitor its health as well +as investigate issues. In this section, you'll learn how to enable internal +observability, what types of telemetry are available, and how best to use them +to monitor your Collector deployment. + +## Enabling observability internal to the Collector + +By default, the Collector exposes service telemetry in two ways currently: + +- internal metrics are exposed via a Prometheus interface which defaults to port + `8888` +- logs are emitted to stdout + +Traces are not exposed by default. There is an effort underway to [change +this][issue7532]. The work includes supporting configuration of the +OpenTelemetry SDK used to produce the Collector's internal telemetry. This +feature is currently behind two feature gates: + +```bash + --feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry +``` + +The gate `useOtelWithSDKConfigurationForInternalTelemetry` enables the Collector +to parse configuration that aligns with the [OpenTelemetry Configuration] +schema. The support for this schema is still experimental, but it does allow +telemetry to be exported via OTLP. + +The following configuration can be used in combination with the feature gates +aforementioned to emit internal metrics and traces from the Collector to an OTLP +backend: + +```yaml +service: + telemetry: + metrics: + readers: + - periodic: + interval: 5000 + exporter: + otlp: + protocol: grpc/protobuf + endpoint: https://backend:4317 + traces: + processors: + - batch: + exporter: + otlp: + protocol: grpc/protobuf + endpoint: https://backend2:4317 +``` + +See the configuration's [example][kitchen-sink] for additional configuration +options. + +Note that this configuration does not support emitting logs as there is no +support for [logs] in OpenTelemetry Go SDK at this time. + + + +To see logs for the Collector: + +On a Linux systemd system, logs can be found using `journalctl`: +`journalctl | grep otelcol` + +or to find only errors: +`journalctl | grep otelcol | grep Error` + +## Types of internal observability + + + + + +### Current values that need observation + +- Resource consumption: CPU, RAM (in the future also IO - if we implement + persistent queues) and any other metrics that may be available to Go apps + (e.g. garbage size, etc). + +- Receiving data rate, broken down by receivers and by data type + (traces/metrics). + +- Exporting data rate, broken down by exporters and by data type + (traces/metrics). + +- Data drop rate due to throttling, broken down by data type. + +- Data drop rate due to invalid data received, broken down by data type. + +- Current throttling state: Not Throttled/Throttled by Downstream/Internally + Saturated. + +- Incoming connection count, broken down by receiver. 

- Incoming connection rate (new connections per second), broken down by
  receiver.

- In-memory queue size (in bytes and in units). Note: measurements in bytes may
  be difficult / expensive to obtain and should be used cautiously.

- Persistent queue size (when supported).

- End-to-end latency (from receiver input to exporter output). Note that with
  multiple receivers/exporters we potentially have NxM data paths, each with
  different latency (plus different pipelines in the future), so realistically
  we should likely expose the average of all data paths (perhaps broken down by
  pipeline).

- Latency broken down by pipeline elements (including exporter network roundtrip
  latency for request/response protocols).

“Rate” values must reflect the average rate of the last 10 seconds. Rates must
be exposed in bytes/sec and units/sec (e.g. spans/sec).

Note: some of the current values and rates may be calculated as derivatives of
cumulative values in the backend, so it is an open question whether we want to
expose them separately or not.

### Cumulative values that need observation

- Total received data, broken down by receivers and by data type
  (traces/metrics).

- Total exported data, broken down by exporters and by data type
  (traces/metrics).

- Total dropped data due to throttling, broken down by data type.

- Total dropped data due to invalid data received, broken down by data type.

- Total incoming connection count, broken down by receiver.

- Uptime since start.

### Trace or log on events

We want to generate the following events (log and/or send as a trace with
additional data):

- Collector started/stopped.

- Collector reconfigured (if we support on-the-fly reconfiguration).

- Begin dropping due to throttling (include the throttling reason, e.g. local
  saturation, downstream saturation, downstream unavailable, etc.).

- Stop dropping due to throttling.

- Begin dropping due to invalid data (include a sample or the first piece of
  invalid data).

- Stop dropping due to invalid data.

- Crash detected (differentiate clean stopping and crash, possibly include crash
  data if available).

For begin/stop events we need to define an appropriate hysteresis to avoid
generating too many events. Note that begin/stop events cannot be detected in
the backend simply as derivatives of current rates, since the events include
additional data that is not present in the current value.

### Host metrics

The service should collect host resource metrics in addition to the service's
own process metrics. This can help determine whether a problem observed in the
service is caused by a different process on the same host.

### Data ingress

The `otelcol_receiver_accepted_spans` and
`otelcol_receiver_accepted_metric_points` metrics provide information about the
data ingested by the Collector.

### Data egress

The `otelcol_exporter_sent_spans` and `otelcol_exporter_sent_metric_points`
metrics provide information about the data exported by the Collector.

## Using metrics to monitor the Collector

### Critical monitoring

#### Data loss

Use the rate of `otelcol_processor_dropped_spans > 0` and
`otelcol_processor_dropped_metric_points > 0` to detect data loss. Depending on
your requirements, set up a minimal time window before alerting, so that you are
not notified of small losses that are not considered outages or are within the
desired reliability level.
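
For example, if the Collector's internal metrics are scraped by a
Prometheus-compatible backend, a minimal alerting-rule sketch could look like
the following. The rule names and the five-minute window are illustrative
placeholders, and depending on your setup the counters may be exposed with a
`_total` suffix, so adjust the expressions to match your environment.

```yaml
groups:
  - name: collector-data-loss # illustrative group name
    rules:
      - alert: CollectorDroppingSpans
        # Fire only when spans have been dropped continuously for five minutes,
        # ignoring short-lived blips that stay within the reliability target.
        expr: rate(otelcol_processor_dropped_spans[5m]) > 0
        for: 5m
      - alert: CollectorDroppingMetricPoints
        expr: rate(otelcol_processor_dropped_metric_points[5m]) > 0
        for: 5m
```

The `for` clause implements the minimal time window mentioned above; lengthen it
if brief drops are acceptable within your reliability targets.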

#### Low on CPU resources

Monitoring CPU resources depends on the CPU metrics available on the deployment.
For example, a Kubernetes deployment may include
`kube_pod_container_resource_limits{resource="cpu", unit="core"}`. Let's call it
`available_cores` below. The idea here is to have an upper bound on the number
of available cores and a maximum expected ingestion rate per core that is
considered safe, let's call it `safe_rate`. This should trigger an increase of
resources/instances (or raise an alert as appropriate) whenever
`(actual_rate/available_cores) > safe_rate`.

The `safe_rate` depends on the specific configuration being used.

### Secondary monitoring

#### Queue length

Most exporters offer a
[queue/retry mechanism](../exporter/exporterhelper/README.md) that is
recommended as the retry mechanism for the Collector and as such should be used
in any production deployment.

The `otelcol_exporter_queue_capacity` indicates the capacity of the retry queue
(in batches). The `otelcol_exporter_queue_size` indicates the current size of
the retry queue. You can use these two metrics to check whether the queue
capacity is enough for your workload.

The `otelcol_exporter_enqueue_failed_spans`,
`otelcol_exporter_enqueue_failed_metric_points` and
`otelcol_exporter_enqueue_failed_log_records` indicate the number of spans,
metric points, and log records that failed to be added to the sending queue.
This may be caused by a queue full of unsettled elements, so you may need to
decrease your sending rate or horizontally scale collectors.

The queue/retry mechanism also supports logging for monitoring. Check the logs
for messages like `"Dropping data because sending_queue is full"`.

#### Receive failures

Sustained rates of `otelcol_receiver_refused_spans` and
`otelcol_receiver_refused_metric_points` indicate too many errors returned to
clients. Depending on the deployment and the client’s resilience, this may
indicate data loss at the clients.

Sustained rates of `otelcol_exporter_send_failed_spans` and
`otelcol_exporter_send_failed_metric_points` indicate that the Collector is not
able to export data as expected. It doesn't imply data loss per se, since there
could be retries, but a high rate of failures could indicate issues with the
network or the backend receiving the data.

### Data flow

### Logs

Logs can be helpful in identifying issues. Always start by checking the log
output and looking for potential problems. The verbosity level defaults to
`INFO` and can be adjusted.

Set the log level in the config `service::telemetry::logs`:

```yaml
service:
  telemetry:
    logs:
      level: 'debug'
```

### Metrics

Prometheus metrics are exposed locally on port `8888` and path `/metrics`. For
containerized environments, it may be desirable to expose this port on a public
interface instead of just locally.

Set the address in the config `service::telemetry::metrics`:

```yaml
service:
  telemetry:
    metrics:
      address: ':8888'
```

A Grafana dashboard for these metrics can be found
[here](https://grafana.com/grafana/dashboards/15983-opentelemetry-collector/).

You can enhance metrics telemetry level using `level` field. The following is a
list of all possible values and their explanations.

- "none" indicates that no telemetry data should be collected.
- "basic" is the recommended level and covers the basics of the service
  telemetry.
- "normal" adds some other indicators on top of basic.
- "detailed" adds dimensions and views to the previous levels.

For example:

```yaml
service:
  telemetry:
    metrics:
      level: detailed
      address: ':8888'
```

Also note that a Collector can be configured to scrape its own metrics and send
them through configured pipelines. For example:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otelcol'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']
          metric_relabel_configs:
            - source_labels: [__name__]
              regex: '.*grpc_io.*'
              action: drop
exporters:
  debug:
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: []
      exporters: [debug]
```

### Traces

The OpenTelemetry Collector has the ability to send its own traces using the
OTLP exporter. You can send the traces to an OTLP server running on the same
OpenTelemetry Collector, so they go through the configured pipelines. For
example:

```yaml
service:
  telemetry:
    traces:
      processors:
        - batch:
            exporter:
              otlp:
                protocol: grpc/protobuf
                endpoint: ${MY_POD_IP}:4317
```

### zPages

The
[zpages](https://github.com/open-telemetry/opentelemetry-collector/tree/main/extension/zpagesextension/README.md)
extension, which if enabled is exposed locally on port `55679`, can be used to
check receivers' and exporters' trace operations via `/debug/tracez`. `zpages`
may contain error logs that the Collector does not emit.

For containerized environments, it may be desirable to expose this port on a
public interface instead of just locally. This can be configured via the
extensions configuration section. For example:

```yaml
extensions:
  zpages:
    endpoint: 0.0.0.0:55679
```

### Local exporters

[Local exporters](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter#general-information)
can be configured to inspect the data being processed by the Collector.

For live troubleshooting purposes, consider leveraging the `debug` exporter,
which can be used to confirm that data is being received, processed, and
exported by the Collector.

```yaml
receivers:
  zipkin:
exporters:
  debug:
service:
  pipelines:
    traces:
      receivers: [zipkin]
      processors: []
      exporters: [debug]
```

Get a Zipkin payload to test. For example, create a file called `trace.json` that
contains:

```json
[
  {
    "traceId": "5982fe77008310cc80f1da5e10147519",
    "parentId": "90394f6bcffb5d13",
    "id": "67fae42571535f60",
    "kind": "SERVER",
    "name": "/m/n/2.6.1",
    "timestamp": 1516781775726000,
    "duration": 26000,
    "localEndpoint": {
      "serviceName": "api"
    },
    "remoteEndpoint": {
      "serviceName": "apip"
    },
    "tags": {
      "data.http_response_code": "201"
    }
  }
]
```

With the Collector running, send this payload to the Collector.
For example: + +```console +$ curl -X POST localhost:9411/api/v2/spans -H'Content-Type: application/json' -d @trace.json +``` + +You should see a log entry like the following from the Collector: + +``` +2023-09-07T09:57:43.468-0700 info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 2} +``` + +You can also configure the `debug` exporter so the entire payload is printed: + +```yaml +exporters: + debug: + verbosity: detailed +``` + +With the modified configuration if you re-run the test above the log output +should look like: + +``` +2023-09-07T09:57:12.820-0700 info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 2} +2023-09-07T09:57:12.821-0700 info ResourceSpans #0 +Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0 +Resource attributes: + -> service.name: Str(telemetrygen) +ScopeSpans #0 +ScopeSpans SchemaURL: +InstrumentationScope telemetrygen +Span #0 + Trace ID : 0c636f29e29816ea76e6a5b8cd6601cf + Parent ID : 1a08eba9395c5243 + ID : 10cebe4b63d47cae + Name : okey-dokey + Kind : Internal + Start time : 2023-09-07 16:57:12.045933 +0000 UTC + End time : 2023-09-07 16:57:12.046058 +0000 UTC + Status code : Unset + Status message : +Attributes: + -> span.kind: Str(server) + -> net.peer.ip: Str(1.2.3.4) + -> peer.service: Str(telemetrygen) +``` + +### Health Check + +The +[health_check](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/extension/healthcheckextension/README.md) +extension, which by default is available on all interfaces on port `13133`, can +be used to ensure the Collector is functioning properly. + +```yaml +extensions: + health_check: +service: + extensions: [health_check] +``` + +It returns a response like the following: + +```json +{ + "status": "Server available", + "upSince": "2020-11-11T04:12:31.6847174Z", + "uptime": "49.0132518s" +} +``` + +### pprof + +The +[pprof](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/extension/pprofextension/README.md) +extension, which by default is available locally on port `1777`, allows you to +profile the Collector as it runs. This is an advanced use-case that should not +be needed in most circumstances. From dbca99676bed01364ef2cae70c9ae1daa91d5d3a Mon Sep 17 00:00:00 2001 From: tiffany76 <30397949+tiffany76@users.noreply.github.com> Date: Thu, 21 Mar 2024 19:12:34 -0700 Subject: [PATCH 2/8] Add front matter --- content/en/docs/collector/internal-telemetry.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md index 7973e86265bf..02c3d5022814 100644 --- a/content/en/docs/collector/internal-telemetry.md +++ b/content/en/docs/collector/internal-telemetry.md @@ -1,4 +1,8 @@ -# Internal Telemetry +--- +title: Internal Telemetry +weight: 25 +cSpell:ignore: +--- The Collector offers you multiple ways to measure and monitor its health as well as investigate issues. 
In this section, you'll learn how to enable internal From 3ce34debed85fb60b097df826378744632a69366 Mon Sep 17 00:00:00 2001 From: Patrice Chalin Date: Tue, 2 Apr 2024 06:53:33 -0400 Subject: [PATCH 3/8] Add local dictionary entries --- content/en/docs/collector/internal-telemetry.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md index 02c3d5022814..f9e515a447dd 100644 --- a/content/en/docs/collector/internal-telemetry.md +++ b/content/en/docs/collector/internal-telemetry.md @@ -1,7 +1,7 @@ --- title: Internal Telemetry weight: 25 -cSpell:ignore: +cSpell:ignore: journalctl kube otecol pprof tracez zpages --- The Collector offers you multiple ways to measure and monitor its health as well @@ -65,10 +65,10 @@ support for [logs] in OpenTelemetry Go SDK at this time. To see logs for the Collector: -On a Linux systemd system, logs can be found using `journalctl`: +On a Linux systemd system, logs can be found using `journalctl`: `journalctl | grep otelcol` -or to find only errors: +or to find only errors: `journalctl | grep otelcol | grep Error` ## Types of internal observability From 62378119e84f2ffb833a3a538b23de653766ccad Mon Sep 17 00:00:00 2001 From: Tiffany Hrabusa <30397949+tiffany76@users.noreply.github.com> Date: Wed, 3 Apr 2024 14:41:22 -0700 Subject: [PATCH 4/8] Apply suggestions from code review Co-authored-by: Patrice Chalin --- content/en/docs/collector/internal-telemetry.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md index f9e515a447dd..fc0b223bba0a 100644 --- a/content/en/docs/collector/internal-telemetry.md +++ b/content/en/docs/collector/internal-telemetry.md @@ -22,12 +22,12 @@ this][issue7532]. The work includes supporting configuration of the OpenTelemetry SDK used to produce the Collector's internal telemetry. This feature is currently behind two feature gates: -```bash +```sh --feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry ``` The gate `useOtelWithSDKConfigurationForInternalTelemetry` enables the Collector -to parse configuration that aligns with the [OpenTelemetry Configuration] +to parse configuration that aligns with the [OpenTelemetry Configuration](../configuration/) schema. The support for this schema is still experimental, but it does allow telemetry to be exported via OTLP. @@ -283,8 +283,7 @@ service: address: ':8888' ``` -A Grafana dashboard for these metrics can be found -[here](https://grafana.com/grafana/dashboards/15983-opentelemetry-collector/). +To visualize these metrics, you can use the [Grafana dashboard](https://grafana.com/grafana/dashboards/15983-opentelemetry-collector/), for example. You can enhance metrics telemetry level using `level` field. The following is a list of all possible values and their explanations. @@ -416,8 +415,8 @@ contains: With the Collector running, send this payload to the Collector. 
For example: -```console -$ curl -X POST localhost:9411/api/v2/spans -H'Content-Type: application/json' -d @trace.json +```sh +curl -X POST localhost:9411/api/v2/spans -H'Content-Type: application/json' -d @trace.json ``` You should see a log entry like the following from the Collector: From 024800c7e7d79ff2bb5db1da7ff776c0652266b2 Mon Sep 17 00:00:00 2001 From: tiffany76 <30397949+tiffany76@users.noreply.github.com> Date: Wed, 3 Apr 2024 14:53:54 -0700 Subject: [PATCH 5/8] Fix link to Exporter Helper --- .../en/docs/collector/internal-telemetry.md | 20 ++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md index fc0b223bba0a..c7e56fef4c4f 100644 --- a/content/en/docs/collector/internal-telemetry.md +++ b/content/en/docs/collector/internal-telemetry.md @@ -27,9 +27,10 @@ feature is currently behind two feature gates: ``` The gate `useOtelWithSDKConfigurationForInternalTelemetry` enables the Collector -to parse configuration that aligns with the [OpenTelemetry Configuration](../configuration/) -schema. The support for this schema is still experimental, but it does allow -telemetry to be exported via OTLP. +to parse configuration that aligns with the +[OpenTelemetry Configuration](../configuration/) schema. The support for this +schema is still experimental, but it does allow telemetry to be exported via +OTLP. The following configuration can be used in combination with the feature gates aforementioned to emit internal metrics and traces from the Collector to an OTLP @@ -68,8 +69,7 @@ To see logs for the Collector: On a Linux systemd system, logs can be found using `journalctl`: `journalctl | grep otelcol` -or to find only errors: -`journalctl | grep otelcol | grep Error` +or to find only errors: `journalctl | grep otelcol | grep Error` ## Types of internal observability @@ -217,9 +217,9 @@ The `safe_rate` depends on the specific configuration being used. #### Queue length Most exporters offer a -[queue/retry mechanism](../exporter/exporterhelper/README.md) that is -recommended as the retry mechanism for the Collector and as such should be used -in any production deployment. +[queue/retry mechanism](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/README.md) +that is recommended as the retry mechanism for the Collector and as such should +be used in any production deployment. The `otelcol_exporter_queue_capacity` indicates the capacity of the retry queue (in batches). The `otelcol_exporter_queue_size` indicates the current size of @@ -283,7 +283,9 @@ service: address: ':8888' ``` -To visualize these metrics, you can use the [Grafana dashboard](https://grafana.com/grafana/dashboards/15983-opentelemetry-collector/), for example. +To visualize these metrics, you can use the +[Grafana dashboard](https://grafana.com/grafana/dashboards/15983-opentelemetry-collector/), +for example. You can enhance metrics telemetry level using `level` field. The following is a list of all possible values and their explanations. 
From 353d6a01cf62c97c325846b14c4e4a581fcf0719 Mon Sep 17 00:00:00 2001 From: tiffany76 <30397949+tiffany76@users.noreply.github.com> Date: Wed, 3 Apr 2024 15:16:55 -0700 Subject: [PATCH 6/8] Fix linter issues --- .../en/docs/collector/internal-telemetry.md | 18 ++++++++++-------- static/refcache.json | 4 ++++ 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md index c7e56fef4c4f..ddb01ad673ae 100644 --- a/content/en/docs/collector/internal-telemetry.md +++ b/content/en/docs/collector/internal-telemetry.md @@ -18,12 +18,13 @@ By default, the Collector exposes service telemetry in two ways currently: - logs are emitted to stdout Traces are not exposed by default. There is an effort underway to [change -this][issue7532]. The work includes supporting configuration of the -OpenTelemetry SDK used to produce the Collector's internal telemetry. This -feature is currently behind two feature gates: +this][https://github.com/open-telemetry/opentelemetry-collector/issues/7532]. +The work includes supporting configuration of the OpenTelemetry SDK used to +produce the Collector's internal telemetry. This feature is currently behind two +feature gates: ```sh - --feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry +--feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry ``` The gate `useOtelWithSDKConfigurationForInternalTelemetry` enables the Collector @@ -56,8 +57,9 @@ service: endpoint: https://backend2:4317 ``` -See the configuration's [example][kitchen-sink] for additional configuration -options. +See the configuration's +[example][https://github.com/open-telemetry/opentelemetry-configuration/blob/main/examples/kitchen-sink.yaml] +for additional configuration options. Note that this configuration does not support emitting logs as there is no support for [logs] in OpenTelemetry Go SDK at this time. 
@@ -423,7 +425,7 @@ curl -X POST localhost:9411/api/v2/spans -H'Content-Type: application/json' -d @ You should see a log entry like the following from the Collector: -``` +```sh 2023-09-07T09:57:43.468-0700 info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 2} ``` @@ -438,7 +440,7 @@ exporters: With the modified configuration if you re-run the test above the log output should look like: -``` +```sh 2023-09-07T09:57:12.820-0700 info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 2} 2023-09-07T09:57:12.821-0700 info ResourceSpans #0 Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0 diff --git a/static/refcache.json b/static/refcache.json index 293e8bbcc98b..01a15ae73afa 100644 --- a/static/refcache.json +++ b/static/refcache.json @@ -4459,6 +4459,10 @@ "StatusCode": 200, "LastSeen": "2024-01-24T14:54:56.149229+01:00" }, + "https://grafana.com/grafana/dashboards/15983-opentelemetry-collector/": { + "StatusCode": 200, + "LastSeen": "2024-04-03T14:58:37.509463052-07:00" + }, "https://grafana.com/oss/opentelemetry/": { "StatusCode": 200, "LastSeen": "2024-01-18T08:52:48.999991-05:00" From b57db9632aaea29d1c58b7ad66039f02d3ab5e2d Mon Sep 17 00:00:00 2001 From: tiffany76 <30397949+tiffany76@users.noreply.github.com> Date: Wed, 3 Apr 2024 15:19:48 -0700 Subject: [PATCH 7/8] Fix parentheses in links --- content/en/docs/collector/internal-telemetry.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md index ddb01ad673ae..ea8f8b591eeb 100644 --- a/content/en/docs/collector/internal-telemetry.md +++ b/content/en/docs/collector/internal-telemetry.md @@ -18,7 +18,7 @@ By default, the Collector exposes service telemetry in two ways currently: - logs are emitted to stdout Traces are not exposed by default. There is an effort underway to [change -this][https://github.com/open-telemetry/opentelemetry-collector/issues/7532]. +this](https://github.com/open-telemetry/opentelemetry-collector/issues/7532). The work includes supporting configuration of the OpenTelemetry SDK used to produce the Collector's internal telemetry. This feature is currently behind two feature gates: @@ -58,7 +58,7 @@ service: ``` See the configuration's -[example][https://github.com/open-telemetry/opentelemetry-configuration/blob/main/examples/kitchen-sink.yaml] +[example](https://github.com/open-telemetry/opentelemetry-configuration/blob/main/examples/kitchen-sink.yaml) for additional configuration options. Note that this configuration does not support emitting logs as there is no From 651887c0e8e4863f6fd4ae67a77eafd74fadeccc Mon Sep 17 00:00:00 2001 From: tiffany76 <30397949+tiffany76@users.noreply.github.com> Date: Wed, 3 Apr 2024 15:24:03 -0700 Subject: [PATCH 8/8] Prettier fixes --- content/en/docs/collector/internal-telemetry.md | 4 ++-- static/refcache.json | 4 ++++ 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md index ea8f8b591eeb..1a581bd5b252 100644 --- a/content/en/docs/collector/internal-telemetry.md +++ b/content/en/docs/collector/internal-telemetry.md @@ -17,8 +17,8 @@ By default, the Collector exposes service telemetry in two ways currently: `8888` - logs are emitted to stdout -Traces are not exposed by default. 
There is an effort underway to [change -this](https://github.com/open-telemetry/opentelemetry-collector/issues/7532). +Traces are not exposed by default. There is an effort underway to +[change this](https://github.com/open-telemetry/opentelemetry-collector/issues/7532). The work includes supporting configuration of the OpenTelemetry SDK used to produce the Collector's internal telemetry. This feature is currently behind two feature gates: diff --git a/static/refcache.json b/static/refcache.json index 01a15ae73afa..98e040706d2c 100644 --- a/static/refcache.json +++ b/static/refcache.json @@ -3043,6 +3043,10 @@ "StatusCode": 200, "LastSeen": "2024-01-18T19:36:56.082576-05:00" }, + "https://github.com/open-telemetry/opentelemetry-collector/issues/7532": { + "StatusCode": 200, + "LastSeen": "2024-04-03T15:23:23.779279304-07:00" + }, "https://github.com/open-telemetry/opentelemetry-collector/pull/6140": { "StatusCode": 200, "LastSeen": "2024-01-30T05:18:24.402543-05:00"