
Track progress for Kubernetes 1.8 / kubernetes-kafka v3.0.0 #84

Closed
10 of 12 tasks
solsson opened this issue Oct 25, 2017 · 6 comments

solsson commented Oct 25, 2017

Manifests:

Test open PRs:

Test tests:

  • kafkacat based, delete brokers etc
  • java based, delete brokers etc

Structure:

  • Addons -> master, selected using kubectl apply [folder]
    • See "1.8-" branches like 1.8-logs-streaming
    • See label v3.1

solsson commented Oct 25, 2017

  • rename manifests from NNname.yml to NN-name.yml?
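The rename above can be sketched as a small shell loop; the filenames here are made up for illustration, not taken from the repo:

```shell
# Hypothetical rename sketch: insert a dash between the two-digit prefix
# and the rest of the name, e.g. 10broker.yml -> 10-broker.yml.
mkdir -p /tmp/manifests
cd /tmp/manifests
touch 10broker.yml 50kafka.yml          # stand-in filenames
for f in [0-9][0-9]*.yml; do
  rest=${f#??}                          # name after the 2-digit prefix
  prefix=${f%"$rest"}                   # the 2-digit prefix itself
  case "$rest" in
    -*) ;;                              # already has the dash, skip
    *)  mv "$f" "$prefix-$rest" ;;
  esac
done
ls
```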


solsson commented Oct 27, 2017

Found an explanation for why selector: needs to be added to workload API manifests.
It's from the DaemonSet documentation, but I guess it applies to other resources as well:
"The pod selector will no longer be defaulted when left empty. Selector defaulting was not compatible with kubectl apply."
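A minimal illustration of the quoted change, assuming a DaemonSet on the apps/v1beta2 API of Kubernetes 1.8 (all names are placeholders): spec.selector must now be written out explicitly and must match the pod template's labels.

```yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: example            # placeholder name
spec:
  selector:                # no longer defaulted when left empty;
    matchLabels:           # must be declared explicitly
      app: example
  template:
    metadata:
      labels:
        app: example       # must match spec.selector above
    spec:
      containers:
      - name: example
        image: busybox
```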


solsson commented Nov 5, 2017

An update on testing, based on #79 (comment). I lack the Kafka experience to interpret current results, so I think I want 3 tests that do essentially the same thing but with different clients.

What we want to assert is basically "uptime" in the face of re-scheduled broker and zk pods, caused by things like node downtime, cluster upgrades or zone outages. Tests continuously "bootstrap" + consume from a topic with 2 replicas (b8bfda8) + regularly produce messages + assert that those messages get consumed.

Measurements of throughput etc will have to wait until we have Prometheus monitoring up and running (#49 + Yolean/kubernetes-monitoring + ServiceMonitors + rules).

The three tests are:

As a complement it'll be interesting to have kafkacat with a new bootstrap for every assert run, i.e. https://github.com/Yolean/kubernetes-kafka/blob/master/test/basic-with-kafkacat.yml prior to #79.
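The per-round assertion described above (regularly produce a message, assert it gets consumed) can be sketched like this; a plain file stands in for the topic so the sketch runs without a broker, whereas the real tests produce/consume via kafkacat or a Java client:

```shell
# Sketch of one assert round: produce a unique message, then check that
# the "consumer" side has seen it. /tmp/fake-topic is a stand-in, not a
# real Kafka topic.
TOPIC=/tmp/fake-topic
MSG="probe-$(date +%s)-$$"
echo "$MSG" >> "$TOPIC"                          # produce
if tail -n 100 "$TOPIC" | grep -q "$MSG"; then   # consume + assert
  echo "OK: $MSG consumed"
else
  echo "FAIL: $MSG never consumed" >&2
  exit 1
fi
```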


solsson commented Nov 5, 2017

Killing pods and watching test readiness, I tend to need a tab that shows a human-readable variant of #60. My new favorite oneliner 😄 (with the alias from solsson/kubectx#1 (comment)):

while :; do k get pods --all-namespaces -w; done | gawk '{ print strftime("%FT%T"), $0; fflush() }'
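The timestamping half of the oneliner can be seen in isolation below; a POSIX read loop stands in for gawk's strftime prefix, and printf feeds it fake pod events so it runs without a cluster:

```shell
# Prefix each incoming line with an ISO-8601 timestamp, like the gawk
# stage above does for `kubectl get pods -w` output. The pod lines here
# are fabricated examples.
printf 'pod-a Running\npod-b Pending\n' | while IFS= read -r line; do
  printf '%s %s\n' "$(date +%FT%T)" "$line"
done
```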

@solsson solsson added this to the v3.0 milestone Nov 6, 2017

solsson commented Nov 6, 2017

Tests have been put into practice in #79 (comment)

Huge improvement that we don't spin up 2 JVMs every 10s :)


solsson commented Nov 7, 2017

Closing this messy ticket in favor of https://github.com/Yolean/kubernetes-kafka/milestone/1, with scope reductions -> https://github.com/Yolean/kubernetes-kafka/milestone/2.
