
Issue in unsafeFlags in operator version 1.15.0 #375

Open

mohammed6688 opened this issue Sep 16, 2024 · 4 comments

@mohammed6688
Contributor

I was working with operator version 1.14.0 normally without any issues and was able to enable unsafe configuration options using the old, deprecated `allowUnsafeConfigurations` option. After trying the new version of the operator and CRDs, 1.15.0, enabling `unsafeFlags.pxcSize: true`, and trying to install 2 replicas of the PXC DB, I got the following error:

```
2024-09-16T14:41:26.023Z ERROR Reconciler error {"controller": "pxc-controller", "namespace": "staging", "name": "db-pxc-db", "reconcileID": "c3cd0b29-321b-4232-9fbd-8ee7ca06f0b2", "error": "wrong PXC options: check safe defaults: PXC size must be at least 3. Set spec.unsafeFlags.pxcSize to true to disable this check", "errorVerbose": "PXC size must be at least 3. Set spec.unsafeFlags.pxcSize to true to disable this check\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/apis/pxc/v1.(*PerconaXtraDBCluster).checkSafeDefaults\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/apis/pxc/v1/pxc_types.go:1195\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/apis/pxc/v1.(*PerconaXtraDBCluster).CheckNSetDefaults\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/apis/pxc/v1/pxc_types.go:873\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:214\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:222\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695\ncheck safe defaults\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/apis/pxc/v1.(*PerconaXtraDBCluster).CheckNSetDefaults\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/apis/pxc/v1/pxc_types.go:874\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:214\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:222\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695\nwrong PXC options\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:216\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:222\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:324 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:261 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2 /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:222
```

@jvpasinatto
Contributor

Hello @mohammed6688. I could not reproduce with unsafeFlags.pxcSize=true and pxc.size=2. Could you give more details about your installation and your values.yaml file?
Thanks.

@mohammed6688
Contributor Author

mohammed6688 commented Sep 27, 2024

Hello @jvpasinatto. I tried to deploy the DB with the same values file on a k8s cluster and did not find any issue, but when I deploy the DB on an RKE2 cluster the operator prints this log:

```
2024-09-26T12:26:27.670Z INFO KubeAPIWarningLogger unknown field "spec.unsafeFlags"
```

It seems like the operator can't understand the `unsafeFlags` or `tls` options that were added in the latest version of the operator, v1.15.0.
Here is my values file:
```yaml
secrets:
  passwords:
    root: password
    xtrabackup: backup_password
    monitor: monitory
    clustercheck: clustercheckpassword
    proxyadmin: admin_password
    operator: operatoradmin
    replication: repl_password
    pmmserverkey: eyJrIjoiN0NyQ3lWNHdpNzVaMjAwaUZnWnFtMjVjMk1YV1FicDEiLCJuIjoicGVyY29uYSIsImlkIjoxfQ==

unsafeFlags:
  tls: true
  pxcSize: true

tls:
  enabled: false

pause: false
updateStrategy: SmartUpdate

pxc:
  size: 2
  imagePullPolicy: IfNotPresent
  autoRecovery: true
  expose:
    enabled: true
    type: LoadBalancer
  readinessDelaySec: 15
  livenessDelaySec: 600
  configuration: |
    [mysqld]
    authentication-policy=mysql_native_password
    max_connections=10000
    wsrep_debug=CLIENT
    wsrep_provider_options="gcache.size=1G; gcache.recover=yes"
    wsrep_sync_wait=1
    tmp_table_size=512M
    max_heap_table_size=512M
    [sst]
    xbstream-opts=--decompress
    [xtrabackup]
    compress=lz4
  readinessProbes:
    initialDelaySeconds: 15
    timeoutSeconds: 15
    periodSeconds: 30
    successThreshold: 1
    failureThreshold: 5
  livenessProbes:
    initialDelaySeconds: 300
    timeoutSeconds: 5
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 3
  resources:
    limits:
      memory: 2Gi
      cpu: 800
  labels:
    component: proxysql
  podDisruptionBudget:
    maxUnavailable: 1
  persistence:
    enabled: true
    storageClass: local-path
    accessMode: ReadWriteOnce
    size: 6Gi
  gracePeriod: 600

proxysql:
  enabled: true
  size: 2
  imagePullPolicy: IfNotPresent
  readinessDelaySec: 15
  livenessDelaySec: 600
  resources:
    limits:
      memory: 1Gi
      cpu: 500
  labels:
    component: proxysql
  configuration: |
    datadir="/var/lib/proxysql"

    admin_variables =
    {
      admin_credentials="proxyadmin:admin_password"
      mysql_ifaces="0.0.0.0:6032"
      refresh_interval=2000

      cluster_username="proxyadmin"
      cluster_password="admin_password"
      cluster_check_interval_ms=200
      cluster_check_status_frequency=100
      cluster_mysql_query_rules_save_to_disk=true
      cluster_mysql_servers_save_to_disk=true
      cluster_mysql_users_save_to_disk=true
      cluster_proxysql_servers_save_to_disk=true
      cluster_mysql_query_rules_diffs_before_sync=1
      cluster_mysql_servers_diffs_before_sync=1
      cluster_mysql_users_diffs_before_sync=1
      cluster_proxysql_servers_diffs_before_sync=1
    }

    mysql_variables=
    {
      monitor_password="monitor"
      monitor_galera_healthcheck_interval=1000
      threads=2
      max_connections=10000
      default_query_delay=0
      default_query_timeout=10000
      poll_timeout=2000
      interfaces="0.0.0.0:3306"
      default_schema="information_schema"
      stacksize=1048576
      connect_timeout_server=10000
      monitor_history=60000
      monitor_connect_interval=20000
      monitor_ping_interval=10000
      ping_timeout_server=200
      commands_stats=true
      sessions_sort=true
      have_ssl=false
    }
  persistence:
    enabled: true
    storageClass: local-path
    accessMode: ReadWriteOnce
    size: 2Gi

logcollector:
  enabled: true
  imagePullPolicy: IfNotPresent
  resources:
    limits:
      memory: 100M
      cpu: 80

pmm:
  enabled: false
  imagePullPolicy: IfNotPresent
  serverHost: monitoring-service
  serverUser: admin
  pxcParams: "--disable-tablestats-limit=2000"
  proxysqlParams: "--custom-labels=CUSTOM-LABELS"
  resources:
    limits:
      memory: 150M
      cpu: 100

backup:
  enabled: true
  imagePullPolicy: IfNotPresent
  storages:
    fs-pvc:
      type: filesystem
      volume:
        persistentVolumeClaim:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 6Gi
  schedule:
    - name: "daily-backup"
      schedule: "0 2 * * *"
      keep: 2
      storageName: fs-pvc
```
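The "unknown field" warning suggests the cluster's installed CRDs predate 1.15.0. One way to check this directly (a diagnostic sketch; it assumes the standard CRD name used by the PXC operator and that `kubectl` points at the affected cluster):

```shell
# Print how many times "unsafeFlags" appears in the installed
# PerconaXtraDBCluster CRD. A count of 0 means the cluster is still
# running CRDs from an older operator release that predates the field.
kubectl get crd perconaxtradbclusters.pxc.percona.com -o yaml | grep -c "unsafeFlags"
```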

@mohammed6688
Contributor Author

This is the script I use to install the Percona operator:

```
helm repo add percona https://percona.github.io/percona-helm-charts/
helm install percona-operator percona/pxc-operator --version 1.15.0 -n "${APP_NAME_SPACE}" --kubeconfig "${KUBECONFIG_PATH}" -f $SCRIPTDIR/percona-operator-values.yaml
```

and this one for the Percona DB:

```
helm repo add percona https://percona.github.io/percona-helm-charts/
helm upgrade -i db percona/pxc-db --version 1.15.0 -n "${APP_NAME_SPACE}" --kubeconfig "${KUBECONFIG_PATH}" -f $SCRIPTDIR/percona-values.yaml
```

@mohammed6688
Contributor Author

I found the issue: I had to delete the old CRDs and redeploy the operator to install the new CRDs.
Is there any way to upgrade the CRDs without needing to remove them?
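Helm intentionally installs CRDs only on first install and does not touch them on `helm upgrade`, which is why the old definitions stick around. The CRDs can usually be updated in place with a server-side apply instead of deleting them (a sketch; the manifest URL assumes the operator's GitHub repository layout for the v1.15.0 tag):

```shell
# Update the operator CRDs in place rather than deleting and recreating them.
# --server-side avoids the client-side annotation size limit on large CRDs,
# and --force-conflicts lets this apply take ownership of fields previously
# managed by the old installation.
kubectl apply --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v1.15.0/deploy/crd.yaml
```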
