
Kafka and Zookeeper deployments with TLS on Kubernetes

Self Hosted Cluster

I followed this guide, High Available Kubernetes Cluster Setup using Kubespray, to deploy my cluster on Scaleway.

After you have configured your Ansible inventory based on that doc, you can deploy the cluster with this command:

ansible-playbook -b -v -i inventory/prod/hosts.ini cluster.yml

1 - A single node can act as both worker and master, but the best practice is at least 3 master nodes and 3 worker nodes.

Note :

1 - If your cluster handles heavy traffic, take care of out-of-resource handling in the kubelet: reserve some memory and CPU on each node. If you don't, the kernel on a node under memory pressure will start killing processes, which means you can lose the node or even the whole cluster (see the sketch after this list).

2 - Don't use master nodes as workers.
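
A minimal sketch of kubelet resource reservation, assuming you manage the kubelet through a KubeletConfiguration file (the exact amounts are placeholders; size them for your workload):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# keep some capacity for the OS and for Kubernetes system daemons
systemReserved:
  cpu: "500m"
  memory: "1Gi"
kubeReserved:
  cpu: "500m"
  memory: "1Gi"
# evict pods before the kernel OOM killer has to step in
evictionHard:
  memory.available: "500Mi"
  nodefs.available: "10%"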

Deploy Cluster on GKE

Step 1 ( Install Terraform ):

Download Terraform from the link below and add the binary to your PATH. Download HERE
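
On Linux this typically comes down to unzipping the release archive and moving the binary onto your PATH (the archive name depends on the version you downloaded):

unzip terraform_*_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform version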

Step 2 ( Download Google Cloud Service Key and Enable kubernetes API ):

We need a way for the Terraform runtime to authenticate with the GCP API. Go to the Cloud Console, navigate to IAM & Admin > Service Accounts, and click Create Service Account with the Project Editor role. Your browser will download a JSON file containing the details of the service account and a private key that can authenticate as a project editor to your project. Keep this JSON file safe!

cd deploy-gke-cluster
mkdir creds
cp DOWNLOADEDSERVICEKEY.json creds/serviceaccount.json
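
The Terraform config in deploy-gke-cluster can then pick up this key; a minimal sketch of how the google provider is usually wired to it (project ID and region here are placeholders):

provider "google" {
  credentials = file("creds/serviceaccount.json")
  project     = "your-project-id"
  region      = "europe-west1"
}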

Step 3 ( Deploy GKE Cluster )

Then you can deploy the cluster on GCP with the terraform apply command (run terraform init first to download the provider plugins):

terraform init

terraform plan

terraform apply
Setting up Helm

Initialize Helm (v2) and its Tiller component to work with the cluster:

kubectl delete --namespace kube-system svc tiller-deploy
kubectl delete --namespace kube-system deploy tiller-deploy
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
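
To verify that Tiller came back up with the new service account:

kubectl --namespace kube-system get pods -l app=helm,name=tiller
helm version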
Secret CA (self-generated Certificate Authority key and certificate)

I used the domain kafka-0.kafka-headless.default.svc.roham.pinsvc.net when generating the certificate, since this URL is resolvable inside the cluster.

NOTE about tls.sh

This file contains some passwords. The best approach is to define variables in your CI/CD platform and pass the values in, or to use HashiCorp Vault.
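
For example, assuming tls.sh reads its password from an environment variable (the variable name and Vault path here are hypothetical):

# pull the keystore password out of Vault instead of hard-coding it
export KEYSTORE_PASSWORD="$(vault kv get -field=password secret/kafka/keystore)"
./tls.sh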

chmod +x tls.sh

./tls.sh

After the script runs, it generates JKS files and some PEM files in an SSL folder that applications can use to authenticate to Kafka. I also generated client JKS files, to be safe.
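
The script in this repo may differ, but a minimal sketch of the typical CA/keystore/truststore steps such a script performs (password and CN taken from the values used elsewhere in this doc):

# self-signed CA
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -subj "/CN=roham.pinsvc.net" -passout pass:qwerrewq
# server keystore keyed to the broker's in-cluster DNS name
keytool -keystore kafka.server.keystore.jks -alias localhost -genkey -keyalg RSA -validity 365 -storepass qwerrewq -keypass qwerrewq -dname "CN=kafka-0.kafka-headless.default.svc.roham.pinsvc.net"
# sign the broker certificate with the CA and import the chain
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file -storepass qwerrewq
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:qwerrewq
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass qwerrewq -noprompt
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed -storepass qwerrewq -noprompt
# truststore containing the CA
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert -storepass qwerrewq -noprompt

Rename the server JKS files to the names used throughout the rest of this setup: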

mv kafka.server.truststore.jks kafka.truststore.jks
mv kafka.server.keystore.jks kafka-0.keystore.jks

Note

Running tls.sh also produces other JKS files, such as kafka.client.truststore.jks and kafka.client.keystore.jks, which you can use for your clients; I used the same JKS files as the server.

NOTE:

How to automate tls.sh with Helm charts

To make this happen we can use an initContainer inside the StatefulSet, with an image that has cluster access (kubectl) to create the secret:

spec:
  initContainers:
  - name: gangway-certs
    image: yourimage:tag
    imagePullPolicy: Always
    # run the whole pipeline as one shell script; with "sh -c", extra
    # list items after the script become positional parameters rather
    # than commands to run
    command:
    - sh
    - -c
    - |
      ./tls.sh
      mv kafka.server.truststore.jks kafka.truststore.jks
      mv kafka.server.keystore.jks kafka-0.keystore.jks
      kubectl create secret generic hatch-ca --from-file=./kafka.truststore.jks --from-file=./kafka-0.keystore.jks --from-file=./ca-cert --from-file=ca-key
    volumeMounts:
    - mountPath: /certs/
      name: hatch-ca
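
Note that kubectl inside the initContainer needs RBAC permission to create secrets. A minimal sketch, assuming the pod runs under the default service account (names here are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-creator
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-creator
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-creator
subjects:
- kind: ServiceAccount
  name: default
  namespace: default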
NOTE

I also included the CA in the hatch-ca secret. This means that if you plan to generate certificates for services other than Kafka/Zookeeper based on this CA, you can create an issuer for it.

We can now create an Issuer referencing the Secret resource we just created. (Note that cert-manager's CA issuer expects the CA certificate and key to be stored under the tls.crt and tls.key keys of the secret.)

NOTE

You will need cert-manager for this.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: ca-issuer
  namespace: default
spec:
  ca:
    secretName: hatch-ca  
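
With the issuer in place, a hypothetical Certificate for another service could look like this (the service name is a placeholder; the API version matches the Issuer above):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: my-service-tls
  namespace: default
spec:
  secretName: my-service-tls
  issuerRef:
    name: ca-issuer
    kind: Issuer
  commonName: my-service.default.svc.roham.pinsvc.net
  dnsNames:
  - my-service.default.svc.roham.pinsvc.net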
Create the secret on Kubernetes:
kubectl create secret generic hatch-ca --from-file=./kafka.truststore.jks --from-file=./kafka-0.keystore.jks --from-file=./ca-cert --from-file=ca-key
NOTE:

To customize your deployment you can change anything you want in the chart's values file. To deploy Kafka and Zookeeper, just run this command:

NOTE

1 - I didn't use any persistent storage for Kafka and Zookeeper because of my environment, but to run Kafka on Kubernetes according to best practice you need good, stable storage (see the sketch after this note).
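
If your environment does have a suitable storage class, enabling persistence in the values file would look roughly like this (key names follow the Bitnami charts; the storage class name and size are assumptions):

persistence:
  enabled: true
  storageClass: "standard"
  size: 8Gi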

helm install --name kafka -f  values.yaml .
Config values for TLS auth for Kafka

1 - You need to configure the cluster domain in the values file; for me it is roham.pinsvc.net.

2 - You have to enable TLS in the auth section:

auth:
  clientProtocol: tls
  interBrokerProtocol: tls
  jksSecret: hatch-ca
  ## Password to access the JKS files when they are password-protected.
  jksPassword: qwerrewq 
  tlsEndpointIdentificationAlgorithm: https
Config values for TLS auth for Zookeeper, based on the same certificate:
service:
  type: ClusterIP
  port: 2181
  followerPort: 2888
  electionPort: 3888
  publishNotReadyAddresses: true
  tls:
    client_enable: true
    client_port: 3181
    client_keystore_path: /certs/truststore/kafka-0.keystore.jks
    client_keystore_password: "qwerrewq"
    client_truststore_path: /certs/truststore/kafka.truststore.jks
    client_truststore_password: "qwerrewq"

  extraVolumes:
  - name: zookeeper-truststore
    secret:
      defaultMode: 288
      secretName: hatch-ca
  extraVolumeMounts:
  - name: zookeeper-truststore
    mountPath: /certs/truststore
    readOnly: true
Test our Kafka

After Kafka is deployed, we can test it with SSL authentication:

kubectl exec -it kafka-0 bash 

cd  /opt/bitnami/kafka/bin

cat > client-ssl.properties <<EOL   
bootstrap.servers=kafka-0.kafka-headless.default.svc.roham.pinsvc.net:9093
security.protocol=SSL
ssl.truststore.location=/certs/kafka.truststore.jks
ssl.truststore.password=qwerrewq
ssl.keystore.location=/certs/kafka-0.keystore.jks
ssl.keystore.password=qwerrewq
ssl.key.password=qwerrewq
EOL


./kafka-console-producer.sh --broker-list kafka-0.kafka-headless.default.svc.roham.pinsvc.net:9093 --topic test --producer.config client-ssl.properties 


./kafka-console-consumer.sh --bootstrap-server kafka-0.kafka-headless.default.svc.roham.pinsvc.net:9093 --topic test --from-beginning --consumer.config client-ssl.properties
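
You can also inspect the certificate the broker presents directly (run this from inside the cluster so the DNS name resolves):

openssl s_client -connect kafka-0.kafka-headless.default.svc.roham.pinsvc.net:9093 -showcerts </dev/null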

Security concerns and solutions

1 - It's better to manage certificates and secrets with Vault.

2 - For end-to-end encryption between services in Kubernetes, we can use a service mesh such as Istio.

3 - It's good to use ACLs for authorization: once your clients are authenticated, your Kafka brokers can check them against access control lists (ACLs) to determine whether a particular client is authorized to write to or read from a given topic. Encryption only solves the man-in-the-middle (MITM) problem; ACLs handle authorization (see the sketch below).
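
For example, assuming the brokers are configured with the ZooKeeper-backed authorizer and the principal is the CN of the client certificate used above (the ZooKeeper address here is a placeholder):

./kafka-acls.sh --authorizer-properties zookeeper.connect=kafka-zookeeper:2181 --add --allow-principal "User:CN=kafka-0.kafka-headless.default.svc.roham.pinsvc.net" --operation Read --operation Write --topic test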