Addon: expose /metrics endpoints for Prometheus #49
Conversation
#49 but maybe with tests instead of talk
Force-pushed from 7240ded to 5221e4d.
Based on the observation that the test pod in https://github.com/Yolean/kubernetes-kafka/blob/addon-metrics/test/jmx-selftest.yml takes 40-100 MB memory in GKE.
Poor results in GKE, getting pod restarts at least once per five minutes.
which might not matter because we no longer have a load-balancing service. These probes won't catch all failure modes, but if they fail we're pretty sure the container is malfunctioning. I found some sources recommending ./bin/kafka-topics.sh for probes, but to me it looks risky to introduce a dependency on another service for such things. One such source is helm/charts#144. The zookeeper probe is from https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/. An issue is that zookeeper's logs are quite verbose for every probe.
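As a concrete illustration of the kind of probe discussed above, a minimal sketch using ZooKeeper's ruok/imok four-letter word; the client port 2181 and the availability of nc in the image are assumptions, and this is not necessarily the committed manifest:
readinessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    # sketch: assumes ZooKeeper client port 2181 and nc in the image
    - '[ "imok" = "$(echo ruok | nc -w 1 127.0.0.1 2181)" ]'
  initialDelaySeconds: 10
  timeoutSeconds: 5
Every such probe opens a client connection, which is what makes ZooKeeper's logs so verbose per probe.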
@solsson I had the same problem with kafka metrics.
lowercaseOutputName: true
jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
rules:
- pattern : kafka.server<type=ReplicaFetcherManager, name=MaxLag, clientId=(.+)><>Value
- pattern : kafka.server<type=BrokerTopicMetrics, name=(.+), topic=(.+)><>OneMinuteRate
- pattern : kafka.server<type=KafkaRequestHandlerPool, name=RequestHandlerAvgIdlePercent><>OneMinuteRate
- pattern : kafka.server<type=Produce><>queue-size
- pattern : kafka.server<type=ReplicaManager, name=(.+)><>(Value|OneMinuteRate)
- pattern : kafka.server<type=controller-channel-metrics, broker-id=(.+)><>(.*)
- pattern : kafka.server<type=socket-server-metrics, networkProcessor=(.+)><>(.*)
- pattern : kafka.server<type=Fetch><>queue-size
- pattern : kafka.server<type=SessionExpireListener, name=(.+)><>OneMinuteRate
- pattern : java.lang<type=OperatingSystem><>SystemCpuLoad
- pattern : java.lang<type=Memory><HeapMemoryUsage>used
- pattern : java.lang<type=OperatingSystem><>FreePhysicalMemorySize
Prometheus global config:
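A sketch of what such a global section can look like while the exporter is slow; the numbers are assumptions for illustration, not the values actually used here:
global:
  # assumed values; scrape_timeout must not exceed scrape_interval
  scrape_interval: 60s
  scrape_timeout: 55s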
k8s container liveness probe:
livenessProbe:
httpGet:
path: /metrics
port: 8080
initialDelaySeconds: 60
periodSeconds: 60
timeoutSeconds: 60
successThreshold: 1
failureThreshold: 3
Enjoy :)
@yacut Thanks for the feedback. I noticed the default export just included everything, so I'll update the exporter config for brokers to your suggestion. I'm surprised about the performance issue though. In the tests I ran, 3 seconds were sufficient according to https://github.com/Yolean/kubernetes-kafka/blob/addon-metrics/test/metrics.yml#L80.
@solsson I'm surprised too. There are some issues about it:
I'm not an expert, but I guess the bigger the Kafka cluster (brokers/topics/partitions/message rate), the slower the responses. With our cluster size the responses take ~15-35 seconds 😟 I also noticed that the jmx exporter responds very quickly if I stop the broker and it is no longer part of cluster replication but still runs for a bit.
Thanks for the background. This looks like a weakness in jmx_exporter. Before we dig deep here, it could be worth investigating whether there are other ways to get Prometheus-compliant metrics out of Kafka.
There are not many exporters for Kafka: https://prometheus.io/docs/instrumenting/exporters/
For me the important metrics are:
If you find another exporter, it would be great, but at the moment we have no choice...
@solsson Performance improved with the following config:
lowercaseOutputName: true
jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
ssl: false
whitelistObjectNames: ["kafka.server:*","java.lang:*"]
rules:
- pattern : kafka.server<type=ReplicaFetcherManager, name=MaxLag, clientId=(.+)><>Value
- pattern : kafka.server<type=BrokerTopicMetrics, name=(.+), topic=(.+)><>OneMinuteRate
- pattern : kafka.server<type=KafkaRequestHandlerPool, name=RequestHandlerAvgIdlePercent><>OneMinuteRate
- pattern : kafka.server<type=Produce><>queue-size
- pattern : kafka.server<type=ReplicaManager, name=(.+)><>(Value|OneMinuteRate)
- pattern : kafka.server<type=controller-channel-metrics, broker-id=(.+)><>(.*)
- pattern : kafka.server<type=socket-server-metrics, networkProcessor=(.+)><>(.*)
- pattern : kafka.server<type=Fetch><>queue-size
- pattern : kafka.server<type=SessionExpireListener, name=(.+)><>OneMinuteRate
- pattern : java.lang<type=OperatingSystem><>SystemCpuLoad
- pattern : java.lang<type=Memory><HeapMemoryUsage>used
- pattern : java.lang<type=OperatingSystem><>FreePhysicalMemorySize
Prometheus scrape settings are back to normal.
@yacut Great find. Does the branch metrics-improve-scrape-times correspond to your config? I get speedy scrapes with it, and it contains the metrics I've looked for, with one exception. Have you had a look at the scrape config for zookeeper? I failed completely to extract meaningful metrics in #61.
I assume this is for the metrics container, but I don't understand port 8080. Do you think it's worth the extra JMX runs to have this kind of liveness probe, given that performance is an issue already?
@solsson Basically yes, but I don't think that matters. These patterns should cover it:
- pattern : kafka.server<type=ReplicaManager, name=(PartitionCount|UnderReplicatedPartitions)><>Value
- pattern : kafka.server<type=BrokerTopicMetrics, name=(BytesInPerSec|BytesOutPerSec|MessagesInPerSec), topic=(.+)><>OneMinuteRate
I believe the k8s container liveness probe is important because if the jmx exporter can't respond anymore then it's useless. A one-minute liveness probe period should not be a problem if you use the whitelist config and only the metrics that are important to you. In my humble opinion, the following metrics are important for zookeeper:
More info here: https://zookeeper.apache.org/doc/r3.1.2/zookeeperJMX.html
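A minimal jmx_exporter sketch for ZooKeeper in the same spirit; it assumes JMX is exposed on 127.0.0.1:5555 like the Kafka sidecar above, and it simply whitelists the org.apache.ZooKeeperService domain rather than reproducing any particular metric list:
# sketch: assumes the same local JMX port (5555) as the Kafka sidecar
lowercaseOutputName: true
jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
ssl: false
whitelistObjectNames: ["org.apache.ZooKeeperService:*", "java.lang:*"]
rules:
- pattern : ".*"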
Suggested a liveness probe in e4fadac
Confluent's release post for 1.0.0 mentions changes to metrics. Most of it, according to the release notes, is in Connect. For Kafka I found https://issues.apache.org/jira/browse/KAFKA-5341.
This reverts commit 22a314a.
for jmx containers in kafka and zoo pods
It'll just make the requests slower. Dreadfully slow on Minikube (>30s even when limit is increased to 100m).
and with 150Mi limit I got zero restarts in 48 hours.
Force-pushed from d7784d0 to d4b95d2.
But before this, how did the metrics container know which port to connect to?
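For background on the port question: Kafka's kafka-run-class.sh enables remote JMX when the JMX_PORT environment variable is set, so the broker container and the exporter sidecar have to agree on it. A hypothetical fragment (not this repo's committed manifest), matching the jmxUrl port 5555 used in the configs above:
# hypothetical broker container env; must match the exporter's jmxUrl port
env:
- name: JMX_PORT
  value: "5555"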
Try to get meaningful metrics from Zookeeper
This is a great addition, but with +100-150M memory per pod (+800M at the default scale) I'm a bit hesitant to merge. Will test more in #84.
I had always started from jmx-exporter's sample yaml for kafka, but it's much more enlightening to do as in metrics-experiment -- export everything. To inspect the result I'm using:
metrics_save() {
pod=$1
kubectl -n kafka port-forward $pod 5556:5556 &
sleep 1
time curl -o "tmp-metrics-$pod-$(date +%FT%H%M%S).txt" -f -s http://localhost:5556/metrics
kill %%
}
metrics_save kafka-0
metrics_save pzoo-0
Sample full kafka /metrics at https://gist.github.com/solsson/efb929260fd663a9e15e0ac8557c5028, zoo at https://gist.github.com/solsson/15e2bdce7c23b2d1c7aea0ef895900cb
I've been testing kafka on a cluster with quite busy nodes, and I'm having more problems with the metrics containers than with Kafka itself. Currently exporting more metrics than the committed conf, but with ssl=false.
I've also raised the memory limit to 200M. I think we must find a liveness probe that doesn't cost an additional round of JMX probing. Or drop the liveness probes, and have the monitoring system alert on stale metrics.
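A sketch of the alert-on-stale-metrics option, in Prometheus 2.x rule-file format; the job label, duration, and wording are assumptions, not anything committed in this repo:
groups:
- name: kafka-metrics
  rules:
  - alert: KafkaMetricsScrapeFailing
    # assumed job label; fires when the jmx_exporter target has been down for 10 minutes
    expr: up{job="kafka"} == 0
    for: 10m
    annotations:
      summary: "jmx_exporter on {{ $labels.instance }} has not been scraped successfully for 10 minutes"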
Good fit with https://github.com/Yolean/kubernetes-monitoring.
TODO recommend a Grafana dashboard json