From 878c850a3ca8e845ae3f06f6cc1ccbf29e9cd709 Mon Sep 17 00:00:00 2001
From: tiffany76 <30397949+tiffany76@users.noreply.github.com>
Date: Thu, 13 Jun 2024 15:39:48 -0700
Subject: [PATCH] Remove CPU monitoring section

---
 content/en/docs/collector/internal-telemetry.md | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/content/en/docs/collector/internal-telemetry.md b/content/en/docs/collector/internal-telemetry.md
index 6ce1dee0a61b..b54a555eca02 100644
--- a/content/en/docs/collector/internal-telemetry.md
+++ b/content/en/docs/collector/internal-telemetry.md
@@ -289,19 +289,6 @@ your project's requirements, select a narrow time window before alerting
 begins to avoid notifications for small losses that are within the desired
 reliability range and not considered outages.
 
-#### Low on CPU resources
-
-This depends on the CPU metrics available on the deployment, eg.:
-`kube_pod_container_resource_limits{resource="cpu", unit="core"}` for
-Kubernetes. Let's call it `available_cores`. The idea here is to have an upper
-bound of the number of available cores, and the maximum expected ingestion rate
-considered safe, let's call it `safe_rate`, per core. This should trigger
-increase of resources/ instances (or raise an alert as appropriate) whenever
-`(actual_rate/available_cores) < safe_rate`.
-
-The `safe_rate` depends on the specific configuration being used. // TODO:
-Provide reference `safe_rate` for a few selected configurations.
-
 ### Secondary monitoring
 
 #### Queue length
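
For reference, the check described in the removed section maps roughly onto a
Prometheus alerting rule like the sketch below. Everything in it is an
assumption rather than documented guidance: it assumes kube-state-metrics is
present for `kube_pod_container_resource_limits`, uses the Collector's
`otelcol_receiver_accepted_spans` counter as a stand-in for whichever
ingestion-rate metric your pipeline actually exposes, and treats `10000` as a
placeholder for a measured `safe_rate` (the removed text's TODO for reference
values was never resolved). The comparison is written as `>` rather than the
removed text's `<`, since scaling out (or alerting) is needed when the
per-core rate exceeds the safe rate, not when it is below it.

```yaml
# Hypothetical sketch, not a documented recipe. Assumes kube-state-metrics
# and the Collector's Prometheus exporter with the default otelcol_ prefix.
groups:
  - name: collector-capacity
    rules:
      - alert: CollectorIngestionRatePerCoreHigh
        # actual_rate / available_cores > safe_rate  =>  scale out or alert.
        expr: |
          sum(rate(otelcol_receiver_accepted_spans[5m]))
            /
          sum(kube_pod_container_resource_limits{resource="cpu", unit="core"})
          > 10000  # placeholder safe_rate; measure for your own configuration
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: Per-core ingestion rate exceeds the assumed safe_rate.
```

Dividing one `sum()` by the other keeps the rule meaningful as replicas are
added or removed, since both the ingestion rate and the available core count
are aggregated across the whole deployment.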