[chore] Add example showing how to use the Operator Target Allocator alongside the OpenTelemetry Collector daemonset (#1358)
atoulme authored Aug 5, 2024
1 parent 86483ec commit e8915e2
Showing 11 changed files with 1,082 additions and 0 deletions.
49 changes: 49 additions & 0 deletions examples/target-allocator/README.md
@@ -0,0 +1,49 @@
# Example of chart configuration

## Use the OpenTelemetry Operator Target Allocator

**Notice: Operator-related features should be considered experimental, at an alpha maturity level. Breaking changes may occur, or Operator features may be replaced entirely with a better alternative in the future.**

This example shows how to use the [OpenTelemetry Operator Target Allocator](https://opentelemetry.io/docs/kubernetes/operator/target-allocator/) with our Helm chart.

> The OpenTelemetry Operator comes with an optional component, the Target Allocator (TA). In a nutshell, the TA is a mechanism for decoupling the service discovery and metric collection functions of Prometheus such that they can be scaled independently. The Collector manages Prometheus metrics without needing to install Prometheus. The TA manages the configuration of the Collector’s Prometheus Receiver.
>
> The TA serves two functions:
>
> - Even distribution of Prometheus targets among a pool of Collectors
> - Discovery of Prometheus Custom Resources

In this example, we deploy the Target Allocator separately as a Kubernetes deployment, defined in the [`target-allocator.yaml`](./target-allocator.yaml) file.

This file configures the Target Allocator to watch all service monitors and pod monitors across all namespaces. It is offered as a suggestion only and should not be used in production.
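For reference, below is a minimal sketch of what such a standalone deployment could look like. The object names, labels, and image tag are illustrative assumptions; the actual [`target-allocator.yaml`](./target-allocator.yaml) in this directory also carries the RBAC objects and Target Allocator configuration that the deployment needs.

```yaml
# Hypothetical sketch of a standalone Target Allocator Deployment and Service.
# The image is the upstream Target Allocator published by the OpenTelemetry
# Operator project; pin a concrete tag in real use.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: targetallocator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: targetallocator
  template:
    metadata:
      labels:
        app: targetallocator
    spec:
      containers:
        - name: targetallocator
          image: ghcr.io/open-telemetry/opentelemetry-operator/target-allocator:latest
          ports:
            - containerPort: 8080 # the Target Allocator serves HTTP on 8080 by default
---
# Service matching the endpoint used in the collector configuration below.
apiVersion: v1
kind: Service
metadata:
  name: targetallocator-service
spec:
  selector:
    app: targetallocator
  ports:
    - port: 80
      targetPort: 8080
```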

To deploy the Target Allocator, run:
```bash
kubectl apply -f target-allocator.yaml
```

We configure the collector daemonset to connect to the Target Allocator service and receive scrape targets:

```yaml
agent:
  config:
    receivers:
      prometheus/crd:
        config:
          global:
            scrape_interval: 5s
        target_allocator:
          endpoint: http://targetallocator-service.default.svc.cluster.local:80
          interval: 10s
          collector_id: ${env:K8S_POD_NAME}
    service:
      pipelines:
        metrics:
          receivers:
            - hostmetrics
            - kubeletstats
            - otlp
            - prometheus/crd
            - signalfx
```
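The `collector_id` above expands the `K8S_POD_NAME` environment variable, so each collector pod registers with the Target Allocator under a unique identity. The chart's daemonset is assumed to inject this variable via the Kubernetes downward API; a minimal sketch of that injection, for a collector container managed outside the chart, looks like this:

```yaml
# Sketch (assumption, not part of this example's rendered manifests):
# expose the pod name to the container so ${env:K8S_POD_NAME} resolves.
env:
  - name: K8S_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```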
92 changes: 92 additions & 0 deletions examples/target-allocator/rendered_manifests/clusterRole.yaml
@@ -0,0 +1,92 @@
---
# Source: splunk-otel-collector/templates/clusterRole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: default-splunk-otel-collector
  labels:
    app.kubernetes.io/name: splunk-otel-collector
    helm.sh/chart: splunk-otel-collector-0.105.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: default
    app.kubernetes.io/version: "0.105.0"
    app: splunk-otel-collector
    chart: splunk-otel-collector-0.105.0
    release: default
    heritage: Helm
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - namespaces/status
  - nodes
  - nodes/spec
  - nodes/stats
  - nodes/proxy
  - pods
  - pods/status
  - persistentvolumeclaims
  - persistentvolumes
  - replicationcontrollers
  - replicationcontrollers/status
  - resourcequotas
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - events.k8s.io
  resources:
  - events
  - namespaces
  verbs:
  - get
  - list
  - watch
24 changes: 24 additions & 0 deletions examples/target-allocator/rendered_manifests/clusterRoleBinding.yaml
@@ -0,0 +1,24 @@
---
# Source: splunk-otel-collector/templates/clusterRoleBinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-splunk-otel-collector
  labels:
    app.kubernetes.io/name: splunk-otel-collector
    helm.sh/chart: splunk-otel-collector-0.105.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: default
    app.kubernetes.io/version: "0.105.0"
    app: splunk-otel-collector
    chart: splunk-otel-collector-0.105.0
    release: default
    heritage: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: default-splunk-otel-collector
subjects:
- kind: ServiceAccount
  name: default-splunk-otel-collector
  namespace: default