
ArgoCD is forever stuck on updating the Cluster resource generated by the cluster chart #426

Open
spantaleev opened this issue Oct 25, 2024 · 5 comments · May be fixed by #441
Assignees
Labels
chart( cluster ) Related to the cluster chart

Comments

@spantaleev

I'm using the following ArgoCD application:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloudnative-pg-cluster-something-postgres
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: something
    name: in-cluster
  source:
    repoURL: 'https://cloudnative-pg.io/charts/'
    targetRevision: 0.1.0
    helm:
      values: |
        fullnameOverride: cluster-postgres

        version:
          postgresql: "17"

        cluster:
          instances: 3

          storage:
            size: 3Gi

          initdb:
            database: something
            owner: something

          enableSuperuserAccess: true
          superuserSecret: cluster-postgres-superuser

          walStorage:
            enabled: true
    chart: cluster
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  sources: []
  project: common

This generates a Cluster resource with pg_hba: [] and pg_ident: []:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  labels:
    app.kubernetes.io/instance: cloudnative-pg-cluster-something-postgres
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cluster
    app.kubernetes.io/part-of: cloudnative-pg
    argocd.argoproj.io/instance: cloudnative-pg-cluster-something-postgres
    helm.sh/chart: cluster-0.1.0
  name: cluster-postgres
  namespace: something
spec:
  affinity:
    topologyKey: topology.kubernetes.io/zone
  bootstrap:
    initdb:
      database: something
      owner: something
      postInitApplicationSQL: null
  enableSuperuserAccess: true
  imageName: ghcr.io/cloudnative-pg/postgresql:17
  imagePullPolicy: IfNotPresent
  instances: 3
  logLevel: info
  managed: null
  monitoring:
    disableDefaultQueries: false
    enablePodMonitor: true
  postgresGID: 26
  postgresUID: 26
  postgresql:
    parameters: {}
    pg_hba: []
    pg_ident: []
    shared_preload_libraries: null
  primaryUpdateMethod: switchover
  primaryUpdateStrategy: unsupervised
  priorityClassName: null
  storage:
    size: 3Gi
    storageClass: null
  superuserSecret:
    name: cluster-postgres-superuser
  walStorage:
    size: 1Gi
    storageClass: null

ArgoCD continuously retries updating this Cluster resource, but the final result lacks the pg_hba and pg_ident keys. I suspect it's because they are empty lists.

Perhaps the template here could benefit from some if statements that omit pg_hba and pg_ident when they would end up empty?

pg_hba:
{{- toYaml .pg_hba | nindent 6 }}
pg_ident:
{{- toYaml .pg_ident | nindent 6 }}

This only became an issue with version 0.1.0 of the cluster chart. Downgrading to 0.0.11 (which does not have these pg_hba and pg_ident keys) fixes the problem.

It seems pg_hba was introduced in #321, while pg_ident was introduced in #377.

@itay-grudev
Collaborator

We can wrap them in a with statement so they are not included.
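A minimal sketch of that approach (hypothetical; the chart's actual template context may differ). with rebinds the dot and skips the block entirely when the value is empty or nil, so empty lists never get rendered:

```yaml
{{- with .pg_hba }}
pg_hba:
  {{- toYaml . | nindent 6 }}
{{- end }}
{{- with .pg_ident }}
pg_ident:
  {{- toYaml . | nindent 6 }}
{{- end }}
```

With empty lists in values, the rendered Cluster spec would then omit both keys rather than emitting pg_hba: [] and pg_ident: [].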

@itay-grudev itay-grudev added the chart( cluster ) Related to the cluster chart label Oct 25, 2024
@itay-grudev itay-grudev self-assigned this Oct 25, 2024
@pschichtel

I'm also seeing this with a bunch of other entries:

[screenshot of the Argo CD diff omitted]

I'm using server-side apply.

@logan-broit

logan-broit commented Oct 26, 2024

We are also running into this issue when using v0.1.0 with Fleet. As a workaround, we set values so that pg_hba and pg_ident are not empty arrays, which seems to have resolved it for now.

Added this to our values.yaml for our cnpg clusters

    postgresql:
      pg_hba:
        - host all all 10.0.0.0/8 trust
      pg_ident:
        - postgres   local   postgres

Thanks to @cterence for a better idea: adding the below to our Fleet deployment for the application also temporarily solves the issue:

diff:
  comparePatches:
    - apiVersion: postgresql.cnpg.io/v1
      kind: Cluster
      jsonPointers:
        - /spec/postgresql/pg_hba
        - /spec/postgresql/pg_ident

@cterence

cterence commented Oct 26, 2024

Adding this ignoreDifferences block to the Application temporarily solves the issue

- group: postgresql.cnpg.io
  kind: Cluster
  jqPathExpressions:
    - .spec.postgresql.pg_hba
    - .spec.postgresql.pg_ident
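For reference, this block goes under spec.ignoreDifferences in the Application manifest (field names per Argo CD's Application spec; the surrounding fields are abbreviated here):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloudnative-pg-cluster-something-postgres
  namespace: argocd
spec:
  # ... source, destination, syncPolicy as in the original Application ...
  ignoreDifferences:
    - group: postgresql.cnpg.io
      kind: Cluster
      jqPathExpressions:
        - .spec.postgresql.pg_hba
        - .spec.postgresql.pg_ident
```

Note this only masks the diff on Argo CD's side; the rendered manifest still contains the empty arrays.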

@bozorgmehr96 bozorgmehr96 linked a pull request Nov 5, 2024 that will close this issue
@bozorgmehr96

I submitted a PR for this, which fixes spec.postgresql.pg_hba, spec.postgresql.pg_ident, and spec.postgresql.parameters.
If you encounter other keys like this, please let me know so I can add a fix to the PR.
