Pulling refs/heads/main into main #1474

Merged 6 commits into main on Aug 5, 2024
@@ -22,8 +22,7 @@ Get started with the Collector for Kubernetes
Default Kubernetes metrics <metrics-ootb-k8s.rst>
Upgrade <kubernetes-upgrade.rst>
Uninstall <kubernetes-uninstall.rst>
Troubleshoot <troubleshoot-k8s.rst>
Troubleshoot containers <troubleshoot-k8s-container.rst>
Troubleshoot <k8s-troubleshooting/troubleshoot-k8s-landing.rst>
Support <kubernetes-support.rst>
Tutorial: Monitor your Kubernetes environment <k8s-infrastructure-tutorial/about-k8s-tutorial.rst>
Tutorial: Configure the Collector for Kubernetes <collector-configuration-tutorial-k8s/about-collector-config-tutorial.rst>
@@ -75,8 +74,7 @@ To upgrade or uninstall, see:

If you have any installation or configuration issues, refer to:

* :ref:`otel-troubleshooting`
* :ref:`troubleshoot-k8s`
* :ref:`troubleshoot-k8s-landing`
* :ref:`kubernetes-support`

.. raw:: html
@@ -16,7 +16,7 @@ Tutorial: Monitor your Kubernetes environment in Splunk Observability Cloud
k8s-monitor-with-navigators
k8s-activate-detector

Deploy the Splunk Distribution of OpenTelemetry Collector in a Kubernetes cluster and start monitoring your Kubernetes platform using Splunk Observability Cloud.
Deploy the Splunk Distribution of the OpenTelemetry Collector in a Kubernetes cluster and start monitoring your Kubernetes platform using Splunk Observability Cloud.

.. raw:: html

@@ -1,41 +1,19 @@
.. _troubleshoot-k8s-container:

***************************************************************
Troubleshoot the Collector for Kubernetes containers
Troubleshoot Kubernetes and container runtime compatibility
***************************************************************

.. meta::
:description: Describes troubleshooting specific to the Collector for Kubernetes containers.
:description: Describes troubleshooting specific to Kubernetes and container runtime compatibility.

.. note:: For general troubleshooting, see :ref:`otel-troubleshooting` and :ref:`troubleshoot-k8s`.
.. note::

See also:

Verify if your container is running out of memory
=======================================================================

Under normal circumstances, the Collector doesn't run out of memory (OOM), even if you didn't provide enough resources for the Collector containers. OOM can only happen if the Collector is heavily throttled by the backend and the exporter sending queue grows faster than the Collector can control memory utilization. In that case you see ``429`` errors for metrics and traces or ``503`` errors for logs.

For example:

.. code-block::

2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "sapm", "error": "server responded with 429", "interval": "4.4850027s"}
2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "sapm", "dropped_items": 1348}

If you can't fix throttling by raising limits on the backend or reducing the amount of data sent through the Collector, you can avoid OOM by reducing the sending queue of the failing exporter. For example, you can reduce the ``sending_queue`` size for the ``sapm`` exporter:

.. code-block:: yaml

agent:
  config:
    exporters:
      sapm:
        sending_queue:
          queue_size: 512

You can apply a similar configuration to any other failing exporter.

Kubernetes and container runtime compatibility
=============================================================================================
* :ref:`troubleshoot-k8s-general`
* :ref:`troubleshoot-k8s-sizing`
* :ref:`troubleshoot-k8s-missing-metrics`

Kubernetes requires you to install a container runtime on each node in the cluster so that pods can run there. The Splunk Distribution of the Collector for Kubernetes supports container runtimes such as containerd, CRI-O, Docker, and Mirantis Kubernetes Engine (formerly Docker Enterprise/UCP).
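
If you aren't sure which runtime your nodes use, you can check before digging deeper. The following is a minimal sketch that assumes you have ``kubectl`` access to the cluster; the wide node listing includes a ``CONTAINER-RUNTIME`` column, for example ``containerd://1.6.21`` or ``cri-o://1.26.3``:

.. code-block::

# List every node along with its container runtime and version
kubectl get nodes -o wide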

@@ -52,7 +30,7 @@ For more information about runtimes, see :new-page:`Container runtime <https://k
.. _check-runtimes:

Troubleshoot the container runtime compatibility
--------------------------------------------------------------------
=============================================================================================

To check if you're having compatibility issues with Kubernetes and the container runtime, follow these steps:

@@ -77,7 +55,7 @@ To check if you're having compatibility issues with Kubernetes and the container
.. _ts-k8s-stats:

Check the integrity of your container stats
--------------------------------------------------------------------
=============================================================================================

Use the Kubelet Summary API to verify container, pod, and node stats. The Kubelet provides the Summary API to discover and retrieve per-node summarized stats available through the ``/stats`` endpoint.
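
For example, one way to pull a node's summarized stats before drilling into the specific checks in the following sections is through the API server proxy. This is a sketch that assumes ``kubectl`` access; replace ``<node-name>`` with one of your node names:

.. code-block::

# Retrieve the Summary API output for a single node via the API server proxy
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"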

@@ -88,7 +66,7 @@ All of the stats shown in these examples should be present unless otherwise note
.. _verify-node-stats:

Verify a node's stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------------------------------------------------

To verify a node's stats:

@@ -176,7 +154,7 @@ For reference, the following table shows the mapping for the node stat names to
.. _verify-pod-stats:

Verify a pod's stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------------------------------------------------

.. note::

@@ -268,7 +246,7 @@ For reference, the following table shows the mapping for the pod stat names to t
.. _verify-container-stats:

Verify a container's stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------------------------------------------------

.. note:: Carry out steps 1 and 2 in both :ref:`verify-node-stats` and :ref:`verify-pod-stats` before completing this section.

@@ -340,14 +318,14 @@ For reference, the following table shows the mappings for the container stat nam
- ``container.memory.major_page_faults``

Reported incompatible Kubernetes and container runtime issues
--------------------------------------------------------------------
=============================================================================================

.. note:: Managed Kubernetes services might use a modified container runtime, and the service provider might have applied custom patches or bug fixes that are not present within an unmodified container runtime.

This section describes known incompatibilities and container runtime issues.

containerd with Kubernetes 1.21.0 to 1.21.11
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------------------------------------------------

When using Kubernetes 1.21.0 to 1.21.11 with containerd, memory and network stats or metrics might be missing. The following is a list of affected metrics:

@@ -367,7 +345,7 @@ Try one of the following workarounds to resolve the issue:
- Upgrade containerd to version 1.4.x or 1.5.x.

containerd 1.4.0 to 1.4.12 with Kubernetes 1.22.0 to 1.22.8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------------------------------------------------

When using Kubernetes 1.22.0 to 1.22.8 with containerd 1.4.0 to 1.4.12, memory and network stats or metrics can be missing. The following is a list of affected metrics:

@@ -388,7 +366,7 @@ Try one of the following workarounds to resolve the issue:
- Upgrade containerd to at least version 1.4.13 or 1.5.0 to fix the missing pod memory metrics.

containerd with Kubernetes 1.23.0 to 1.23.6
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------------------------------------------------

When using Kubernetes versions 1.23.0 to 1.23.6 with containerd, memory stats or metrics can be missing. The following is a list of affected metrics:

@@ -0,0 +1,27 @@
.. _troubleshoot-k8s-landing:

*****************************************************************************************
Troubleshoot the Collector for Kubernetes
*****************************************************************************************

.. meta::
:description: Learn how to deploy the Splunk Distribution of the OpenTelemetry Collector on a Kubernetes cluster, view your cluster data, and create a detector to issue alerts.

.. toctree::
:hidden:
:maxdepth: 3

Debugging and logs <troubleshoot-k8s>
Sizing <troubleshoot-k8s-sizing>
Missing metrics <troubleshoot-k8s-missing-metrics>
Container runtime compatibility <troubleshoot-k8s-container>


To troubleshoot the Splunk Distribution of the OpenTelemetry Collector for Kubernetes, see:

* :ref:`troubleshoot-k8s`
* :ref:`troubleshoot-k8s-sizing`
* :ref:`troubleshoot-k8s-missing-metrics`
* :ref:`troubleshoot-k8s-container`


@@ -0,0 +1,90 @@
.. _troubleshoot-k8s-missing-metrics:

***************************************************************
Troubleshoot missing metrics
***************************************************************

.. meta::
:description: Describes troubleshooting specific to missing metrics in the Collector for Kubernetes.

.. note::

See also:

* :ref:`troubleshoot-k8s-general`
* :ref:`troubleshoot-k8s-sizing`
* :ref:`troubleshoot-k8s-container`

The Splunk Collector for Kubernetes is missing metrics starting with ``k8s.pod.*`` and ``k8s.node.*``
========================================================================================================

After deploying the Splunk Distribution of the OpenTelemetry Collector for Kubernetes Helm chart version 0.87.0 or higher, either as a new installation or as an upgrade, the following pod and node metrics are not collected:

* ``k8s.(pod/node).cpu.time``
* ``k8s.(pod/node).cpu.utilization``
* ``k8s.(pod/node).filesystem.available``
* ``k8s.(pod/node).filesystem.capacity``
* ``k8s.(pod/node).filesystem.usage``
* ``k8s.(pod/node).memory.available``
* ``k8s.(pod/node).memory.major_page_faults``
* ``k8s.(pod/node).memory.page_faults``
* ``k8s.(pod/node).memory.rss``
* ``k8s.(pod/node).memory.usage``
* ``k8s.(pod/node).memory.working_set``
* ``k8s.(pod/node).network.errors``
* ``k8s.(pod/node).network.io``

Confirm the metrics are missing
--------------------------------------------------------------------

To confirm that these metrics are missing, perform the following steps:

1. Confirm that the metrics are missing with the following Splunk Search Processing Language (SPL) command:

.. code-block::

| mstats count(_value) as "Val" where index="otel_metrics_0_93_3" AND metric_name IN (k8s.pod.*, k8s.node.*) by metric_name

2. Check the Collector's pod logs from the CLI of the Kubernetes node with this command:

.. code-block::

kubectl -n {namespace} logs {collector-agent-pod-name}

Note: Replace ``{namespace}`` and ``{collector-agent-pod-name}`` with the values for your environment. A filtered version of this command appears after these steps.

3. Look for a ``tls: failed to verify certificate`` error similar to the following in the agent pod logs:

.. code-block::

2024-02-28T01:11:24.614Z error scraperhelper/scrapercontroller.go:200 Error scraping metrics {"kind": "receiver", "name": "kubeletstats", "data_type": "metrics", "error": "Get \"https://10.202.38.255:10250/stats/summary\": tls: failed to verify certificate: x509: cannot validate certificate for 10.202.38.255 because it doesn't contain any IP SANs", "scraper": "kubeletstats"}
go.opentelemetry.io/collector/receiver/scraperhelper.(*controller).scrapeMetricsAndReport
go.opentelemetry.io/collector/[email protected]/scraperhelper/scrapercontroller.go:200
go.opentelemetry.io/collector/receiver/scraperhelper.(*controller).startScraping.func1
go.opentelemetry.io/collector/[email protected]/scraperhelper/scrapercontroller.go:176

Resolution
--------------------------------------------------------------------

The :ref:`kubelet-stats-receiver` collects ``k8s.pod.*`` and ``k8s.node.*`` metrics from the Kubernetes ``/stats/summary`` endpoint. As of version 0.87.0 of the Splunk Distribution of the OpenTelemetry Collector, the kubelet certificate is verified during this process to confirm that it's valid. If you are using a self-signed or invalid certificate, the Kubelet Stats receiver can't collect the metrics.

You have two options to resolve this error:

1. Add a valid certificate to your Kubernetes cluster. To learn how, see :ref:`otel-kubernetes-config`. After updating the ``values.yaml`` file, use the ``helm upgrade`` command to upgrade your Collector deployment, as shown in the sketch after the caution note below.

2. Disable certificate verification in the agent's Kubelet Stats receiver by setting ``insecure_skip_verify: true`` in the ``agent.config`` section of the ``values.yaml`` file.

For example, use the configuration below to disable certificate verification:

.. code-block:: yaml

agent:
  config:
    receivers:
      kubeletstats:
        insecure_skip_verify: true

.. caution:: Keep in mind your security requirements before disabling certificate verification.
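
With either option, apply the updated ``values.yaml`` to your existing deployment with a Helm upgrade. The following is a sketch only; the release name ``splunk-otel-collector`` and the chart reference ``splunk-otel-collector-chart/splunk-otel-collector`` are assumptions based on common defaults, so substitute the names used in your environment:

.. code-block::

# Apply the updated values.yaml to the existing Collector release
helm upgrade splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector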



@@ -0,0 +1,68 @@
.. _troubleshoot-k8s-sizing:

***************************************************************
Troubleshoot sizing for the Collector for Kubernetes
***************************************************************

.. meta::
:description: Describes troubleshooting specific to sizing the Collector for Kubernetes containers.

.. note::

See also:

* :ref:`troubleshoot-k8s-general`
* :ref:`troubleshoot-k8s-missing-metrics`
* :ref:`troubleshoot-k8s-container`

Size your Collector instance
=============================================================================================

Set the resources allocated to your Collector instance based on the amount of data you expect to handle. For more information, see :ref:`otel-sizing`.

Use the following configuration to increase the resource limits for the agent:

.. code-block:: yaml

agent:
  resources:
    limits:
      cpu: 500m
      memory: 1Gi

Set the resources allocated to your cluster receiver deployment based on the cluster size. For example, for a cluster with 100 nodes, allocate these resources:

.. code-block:: yaml

clusterReceiver:
  resources:
    limits:
      cpu: 1
      memory: 2Gi


Verify if your container is running out of memory
=======================================================================

Under normal circumstances, the Collector doesn't run out of memory (OOM), even if you didn't provide enough resources for the Collector containers. OOM can only happen if the Collector is heavily throttled by the backend and the exporter sending queue grows faster than the Collector can control memory utilization. In that case you see ``429`` errors for metrics and traces or ``503`` errors for logs.

For example:

.. code-block::

2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "sapm", "error": "server responded with 429", "interval": "4.4850027s"}
2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "sapm", "dropped_items": 1348}

If you can't fix throttling by raising limits on the backend or reducing the amount of data sent through the Collector, you can avoid OOM by reducing the sending queue of the failing exporter. For example, you can reduce the ``sending_queue`` size for the ``sapm`` exporter:

.. code-block:: yaml

agent:
  config:
    exporters:
      sapm:
        sending_queue:
          queue_size: 512

You can apply a similar configuration to any other failing exporter.
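
As a hypothetical illustration, if logs are the throttled signal (``503`` errors) and the ``splunk_hec`` exporter is the one whose queue keeps filling in your environment, the equivalent override might look like this:

.. code-block:: yaml

agent:
  config:
    exporters:
      splunk_hec:
        sending_queue:
          queue_size: 512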
