The prometheus-meta-operator watches Cluster CRs and creates prometheus-operator CRs. It is implemented using operatorkit.
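For orientation, here is a minimal sketch of the shape of an operatorkit resource; the type and method bodies below are illustrative, not the operator's actual code, and only the interface shape (Name/EnsureCreated/EnsureDeleted) is taken from operatorkit.

```go
package example

import "context"

// promOperatorCRResource is a hypothetical resource satisfying operatorkit's
// resource interface: the controller reconciles watched objects (here,
// Cluster CRs) by calling EnsureCreated / EnsureDeleted on each resource.
type promOperatorCRResource struct{}

func (r *promOperatorCRResource) Name() string {
	return "prometheus"
}

// EnsureCreated would create or update the prometheus-operator CRs
// (e.g. a Prometheus CR) corresponding to the reconciled Cluster CR.
func (r *promOperatorCRResource) EnsureCreated(ctx context.Context, obj interface{}) error {
	// ... translate the Cluster CR into prometheus-operator CRs ...
	return nil
}

// EnsureDeleted would clean those CRs up when the Cluster CR is deleted.
func (r *promOperatorCRResource) EnsureDeleted(ctx context.Context, obj interface{}) error {
	return nil
}
```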
Clone the git repository: https://github.com/giantswarm/prometheus-meta-operator.git
Build it using the standard `go build` command:
go build github.com/giantswarm/prometheus-meta-operator
You may want to regenerate the unit test files with:
go test -v ./... -update
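The -update flag follows the common Go golden-file convention: tests compare output against files on disk and rewrite those files when the flag is set. A minimal sketch of that pattern, assuming fixtures live under a testdata/ directory (names below are illustrative, not PMO's actual test code):

```go
package example

import (
	"flag"
	"os"
	"path/filepath"
	"testing"
)

// update implements the golden-file convention: `go test ./... -update`
// rewrites the expected output files instead of comparing against them.
var update = flag.Bool("update", false, "update golden files")

func TestRender(t *testing.T) {
	got := []byte("rendered output") // whatever the code under test produces
	golden := filepath.Join("testdata", "render.golden")

	if *update {
		if err := os.WriteFile(golden, got, 0644); err != nil {
			t.Fatal(err)
		}
	}

	want, err := os.ReadFile(golden)
	if err != nil {
		t.Fatal(err)
	}
	if string(got) != string(want) {
		t.Errorf("got %q, want %q", got, want)
	}
}
```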
We store modified upstream code for our own usage.
- pkg/alertmanager/config
- pkg/prometheus/common/config
Add the upstream git repository:
$ git remote add alertmanager https://github.com/prometheus/alertmanager.git
On the first run, the commands are the same as for Upgrade, except that the git subtree merge command has to be replaced with:
$ git subtree add --squash -P pkg/alertmanager/config alertmanager-config
# add upstream tags
$ git tag -d $(git tag -l)
$ git fetch alertmanager
$ git checkout v0.22.2
$ git subtree split -P config/ -b alertmanager-config
$ git checkout -b alertmanager-0.22.2 origin/master
$ git subtree merge --message "Upgrade alertmanager/config to v0.22.2" --squash -P pkg/alertmanager/config alertmanager-config
# fix conflicts (the usual way) if any
# restore local tags
$ git tag -d $(git tag -l)
$ git fetch
# push for review
$ git push -u origin HEAD
/!\ Do not merge with squash; once approved, merge to master manually.
/!\ We need to preserve the commit history, otherwise subsequent git subtree commands won't work.
$ git checkout master
$ git merge --ff-only alertmanager-0.22.2
$ git push
Prometheus-meta-operator also manages remoteWrite custom resources.
Code for the remoteWrite CRDs is in the api/v1alpha1/ directory.
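For illustration, a RemoteWrite CR could look like the following. The apiVersion and kind follow from the CRD file name below; the spec fields shown are assumptions modeled on prometheus-operator's remoteWrite shape, so check the generated CRD for the authoritative schema.

```yaml
# Hypothetical example; see config/crd/monitoring.giantswarm.io_remotewrites.yaml
# for the authoritative schema.
apiVersion: monitoring.giantswarm.io/v1alpha1
kind: RemoteWrite
metadata:
  name: example
  namespace: monitoring
spec:
  remoteWrite:
    url: https://example.com/api/v1/write
```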
The actual CRDs are in config/crd/monitoring.giantswarm.io_remotewrites.yaml
To generate the CRDs from code, just use make generate.
CRD deployment is managed within the helm chart. The remoteWrite CRD is located under the chart's templates directory as a symbolic link to the generated yaml file.
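As an illustration, such a link can be recreated as shown below; the chart paths and the relative link depth are assumptions, not verified against the repository layout.

```sh
# Hypothetical paths: link the generated CRD into the chart's templates directory.
ln -s ../../../config/crd/monitoring.giantswarm.io_remotewrites.yaml \
  helm/prometheus-meta-operator/templates/remotewrite-crd.yaml
```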
Prometheus-meta-operator provides a way of setting custom Prometheus volume size.
The Prometheus volume size can be set on the cluster CR using the dedicated annotation monitoring.giantswarm.io/prometheus-volume-size.
Three values are possible:
- small = 30 Gi
- medium = 100 Gi
- large = 200 Gi

medium is the default value.
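For example, a cluster CR annotated to use the large size could look like this; the apiVersion and kind below assume a Cluster API style Cluster CR and are illustrative:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1  # assumed cluster CR type
kind: Cluster
metadata:
  name: example
  annotations:
    monitoring.giantswarm.io/prometheus-volume-size: "large"
```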
The retention size of the Prometheus instances is set according to the volume size, applying a ratio of 90%:
- small (30 Gi) => retentionSize = 27Gi
- medium (100 Gi) => retentionSize = 90Gi
- large (200 Gi) => retentionSize = 180Gi
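In terms of the generated prometheus-operator Prometheus CR, a small cluster would end up with something like the excerpt below. The field paths are standard Prometheus CRD fields; the exact values PMO renders may differ.

```yaml
# Illustrative excerpt of a Prometheus CR for the "small" size.
spec:
  retentionSize: 27Gi
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 30Gi
```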
Check Prometheus Volume Sizing for more details.
Prometheus Meta Operator configures the Prometheus Agent instances running in workload clusters (pre-Mimir setup; cf. observability-operator).
To ingest metrics without disrupting the workloads running in the clusters, Prometheus Meta Operator can shard the running Prometheus Agents.
The default configuration is defined in PMO itself: PMO adds a new shard for every 1M time series present in the workload cluster's Prometheus running on the management cluster. To avoid scaling down too abruptly, we defined a scale-down threshold of 20%.
As this default was not enough to avoid workload disruptions, we added two ways to override the scale-up series count target and the scale-down percentage (see the sketch after the override options below).
- Those values can be configured at the installation level by overriding the following values:
prometheusAgent:
  shardScaleUpSeriesCount: 1000000
  shardScaleDownPercentage: 0.20
- Those values can also be set per cluster using the following cluster annotations:
monitoring.giantswarm.io/prometheus-agent-scale-up-series-count: "1000000"
monitoring.giantswarm.io/prometheus-agent-scale-down-percentage: "0.20"
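To make the scaling behaviour concrete, here is a minimal Go sketch of shard computation with a scale-down threshold. It models the rule described above under the stated defaults; the function name and signature are illustrative, not PMO's actual implementation.

```go
package example

import "math"

// computeShards models the documented rule: one shard per
// scaleUpSeriesCount time series (default 1,000,000), and scale down only
// once the series count drops scaleDownPercentage (default 0.20) below the
// boundary at which the current shard count was reached.
func computeShards(currentShards int, timeSeries, scaleUpSeriesCount, scaleDownPercentage float64) int {
	desired := int(math.Ceil(timeSeries / scaleUpSeriesCount))

	// Hysteresis: when fewer shards would suffice, keep the current count
	// unless the series count is also below the scale-down threshold.
	if desired < currentShards {
		threshold := scaleUpSeriesCount * (1 - scaleDownPercentage)
		if int(math.Ceil(timeSeries/threshold)) >= currentShards {
			desired = currentShards
		}
	}

	// Always run at least one shard.
	if desired < 1 {
		desired = 1
	}
	return desired
}
```

With the defaults, two shards scale down to one only once the series count drops to 800,000 or fewer, i.e. 20% below the 1M boundary that triggered the scale-up.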