PersistentVolumeClaim labels are not properly recreated #1517

Open
Foosvald opened this issue Sep 26, 2024 · 1 comment

@Foosvald

In our ClickHouseInstallation we have volumeClaimTemplates with labels like this:

    volumeClaimTemplates:
      - name: log-volume-template
        metadata:
          labels:
            app.kubernetes.io/name: clickhouse
            app.kubernetes.io/instance: xxxxxx-clickhouse
            app.kubernetes.io/component: database
            app.kubernetes.io/part-of: xxxxxx
            app.kubernetes.io/managed-by: clickhouse-operator

These labels are properly applied to the PVCs when the ClickHouseInstallation is first created. However, if I manually delete the PVCs and the Pod using kubectl, the PVCs and the Pod are recreated, but the recreated PVCs are missing our custom labels.

We're running operator version 0.24.0.
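
For a quick check without dumping full YAML, the operator-managed PVCs can be listed together with their labels (the clickhouse.altinity.com/chi selector matches the operator labels visible in the PVC dumps below):

kubectl get pvc -l clickhouse.altinity.com/chi=xxxxxx --show-labels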

[Screenshot attached: 2024-09-26 16:00:15]

Are we missing some operator configuration, or are the PVCs supposed to retain the labels even after they are recreated?

How to reproduce:

kubectl apply -f clickhouse-installation.yaml
kubectl get pvc log-volume-template-chi-xxxxxx-clickhouse-0-0-0 -o yaml > pvc_before_delete.yaml

# Delete the PVC (this blocks on the kubernetes.io/pvc-protection finalizer until the pod is gone)
kubectl delete pvc log-volume-template-chi-xxxxxx-clickhouse-0-0-0
# In another terminal, delete the pod so the PVC is released and actually deleted
kubectl delete pod chi-xxxxxx-clickhouse-0-0-0

kubectl get pvc log-volume-template-chi-xxxxxx-clickhouse-0-0-0 -o yaml > pvc_after_delete.yaml

with clickhouse-installation.yaml being:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "xxxxxx"
spec:
  configuration:
    profiles:
      readonly/readonly: 2
    clusters:
      - name: "clickhouse"
        layout:
          shardsCount: 1
          replicasCount: 1
    settings:
      logger/level: warning
  defaults:
    templates:
      podTemplate: pod-template
      dataVolumeClaimTemplate: data-volume-template
      logVolumeClaimTemplate: log-volume-template
      serviceTemplate: cluster-service-template
  templates:
    serviceTemplates:
      - name: cluster-service-template
        generateName: "clickhouse-xxxxxx"
        spec:
          ports:
            - name: http
              port: 8123
            - name: tcp
              port: 9000
            - name: backup
              port: 80
          type: ClusterIP
    podTemplates:
      - name: pod-template
        metadata:
          labels:
            app.kubernetes.io/name: clickhouse
            app.kubernetes.io/instance: xxxxxx-clickhouse
            app.kubernetes.io/component: database
            app.kubernetes.io/part-of: xxxxxx
            app.kubernetes.io/managed-by: clickhouse-operator
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:24.3.2.23
            - name: clickhouse-log
              image: clickhouse/clickhouse-server:24.3.2.23
              command:
                - "/bin/sh"
                - "-c"
                - "--"
              args:
                - "while true; do sleep 30; done;"
    volumeClaimTemplates:
      - name: data-volume-template
        metadata:
          labels:
            app.kubernetes.io/name: clickhouse
            app.kubernetes.io/instance: xxxxxx-clickhouse
            app.kubernetes.io/component: database
            app.kubernetes.io/part-of: xxxxxx
            app.kubernetes.io/managed-by: clickhouse-operator
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 40Gi
      - name: log-volume-template
        metadata:
          labels:
            app.kubernetes.io/name: clickhouse
            app.kubernetes.io/instance: xxxxxx-clickhouse
            app.kubernetes.io/component: database
            app.kubernetes.io/part-of: xxxxxx
            app.kubernetes.io/managed-by: clickhouse-operator
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi

pvc_before_delete.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
    volume.kubernetes.io/selected-node: osvald-worker-1
    volume.kubernetes.io/storage-provisioner: rancher.io/local-path
  creationTimestamp: "2024-09-26T13:32:17Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: xxxxxx-clickhouse
    app.kubernetes.io/managed-by: clickhouse-operator
    app.kubernetes.io/name: clickhouse
    app.kubernetes.io/part-of: xxxxxx
    clickhouse.altinity.com/app: chop
    clickhouse.altinity.com/chi: xxxxxx
    clickhouse.altinity.com/cluster: clickhouse
    clickhouse.altinity.com/namespace: edge
    clickhouse.altinity.com/object-version: 71698b3e20441a8043bcfe5213df0e4ce6031fe2
    clickhouse.altinity.com/reclaimPolicy: Delete
    clickhouse.altinity.com/replica: "0"
    clickhouse.altinity.com/shard: "0"
  name: log-volume-template-chi-xxxxxx-clickhouse-0-0-0
  namespace: edge
  resourceVersion: "10325608"
  uid: 04fe6b91-8095-41fa-bb59-3dabdfec0e84
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-path
  volumeMode: Filesystem
  volumeName: pvc-04fe6b91-8095-41fa-bb59-3dabdfec0e84
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

pvc_after_delete.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
    volume.kubernetes.io/selected-node: osvald-worker-1
    volume.kubernetes.io/storage-provisioner: rancher.io/local-path
  creationTimestamp: "2024-09-26T13:38:55Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    clickhouse.altinity.com/app: chop
    clickhouse.altinity.com/chi: xxxxxx
    clickhouse.altinity.com/cluster: clickhouse
    clickhouse.altinity.com/namespace: edge
    clickhouse.altinity.com/replica: "0"
    clickhouse.altinity.com/shard: "0"
  name: log-volume-template-chi-xxxxxx-clickhouse-0-0-0
  namespace: edge
  resourceVersion: "10326925"
  uid: d95b1cc9-8058-44cb-b6f4-3fb6737609ae
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-path
  volumeMode: Filesystem
  volumeName: pvc-d95b1cc9-8058-44cb-b6f4-3fb6737609ae
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound
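
To see the difference at a glance, the two label sets can be diffed directly; a minimal sketch, assuming yq v4 is installed:

# Compare only .metadata.labels of the two PVC dumps
diff <(yq '.metadata.labels' pvc_before_delete.yaml) <(yq '.metadata.labels' pvc_after_delete.yaml)

All of the app.kubernetes.io/* labels, as well as clickhouse.altinity.com/object-version and clickhouse.altinity.com/reclaimPolicy, are missing from the recreated PVC.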
@chengjoey

If you delete the operator pod, the custom labels will be added back to the PVCs, because the operator will reconcile again and re-apply the PVC labels.