Helm Chart broken #334

Open
SebUndefined opened this issue Nov 21, 2022 · 14 comments

Comments

@SebUndefined

The deployment Helm chart is broken with version 1.8.0. Pod logs:

Permissions ok: Our pod vernemq-0 belongs to StatefulSet vernemq with 1 replicas
Error generating config with cuttlefish
  run `vernemq config generate -l debug` for more information.

Going back to the previous version (1.6.12), everything works perfectly.

VerneMQ values file:

# Default values for vernemq.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: vernemq/vernemq
  tag: 1.12.6.1-alpine
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

serviceMonitor:
  create: false
  labels: {}

service:
  # Can be disabled if more advanced use cases require more complex setups, e.g., combining LoadBalancer and ClusterIP for internal and external access. See also issue #274.
  enabled: true
  # NodePort - Listen to a port on nodes and forward to the service.
  # ClusterIP - Listen on the service internal to the cluster only.
  # LoadBalancer - Create a LoadBalancer in the cloud provider and forward to the service.
  type: ClusterIP
  #  clusterIP: 10.1.2.4
  #  externalIPs: []
  #  loadBalancerIP: 10.1.2.4
  #  loadBalancerSourceRanges: []
  #  externalTrafficPolicy: Local
  #  sessionAffinity: None
  #  sessionAffinityConfig: {}
  mqtt:
    enabled: true
    port: 1883
    # This is the port used by nodes to expose the service
    nodePort: 1883
  mqtts:
    enabled: false
    port: 8883
    # This is the port used by nodes to expose the service
    nodePort: 8883
  ws:
    enabled: false
    port: 8080
    # This is the port used by nodes to expose the service
    nodePort: 8080
  wss:
    enabled: false
    port: 8443
    # This is the port used by nodes to expose the service
    nodePort: 8443
  api:
    enabled: true
    port: 8888
    nodePort: 38888
  annotations: {}
  labels: {}

## Ingress can optionally be applied when enabling the MQTT websocket service
## This allows for an ingress controller to route web ports and arbitrary hostnames
## and paths to the websocket service as well as allow the controller to handle TLS
## termination for the websocket traffic. Ingress is only possible for traffic exchanged
## over HTTP, so ONLY the websocket service takes advantage of ingress.
ingress:
  className: ""
  enabled: false

  labels: {}

  annotations: {}

  ## Hosts must be provided if ingress is enabled.
  ##
  hosts: []
  # - vernemq.domain.com

  ## Paths to use for ingress rules.
  ##
  paths:
    - path: /
      pathType: ImplementationSpecific


  ## TLS configuration for ingress
  ## Secret must be manually created in the namespace
  ##
  tls: []
  # - secretName: vernemq-tls
  #   hosts:
  #   - vernemq.domain.com

## VerneMQ resources requests and limits
## Ref: http://kubernetes.io/docs/user-guide/compute-resources
resources: {}
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
#  limits:
#    cpu: 1
#    memory: 256Mi
#  requests:
#    cpu: 1
#    memory: 256Mi

## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}

## Node tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
tolerations: []

## Pod affinity
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
podAntiAffinity: soft

securityContext:
  runAsUser: 10000
  runAsGroup: 10000
  fsGroup: 10000

## If RBAC is enabled on the cluster, VerneMQ needs a service account
## with permissions sufficient to list pods
rbac:
  create: true
  serviceAccount:
    create: true
    ## Service account name to be used.
    ## If not set and serviceAccount.create is true a name is generated using the fullname template.
#    name:

persistentVolume:
  ## If true, VerneMQ will create/use a Persistent Volume Claim
  ## If false, use local directory
  enabled: false

  ## VerneMQ data Persistent Volume access modes
  ## Must match those of existing PV or dynamic provisioner
  ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  accessModes:
    - ReadWriteOnce

  ## VerneMQ data Persistent Volume size
  size: 5Gi

  ## VerneMQ data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  #  storageClass: ""

  ## Annotations for Persistent Volume Claim
  annotations: {}

extraVolumeMounts: []
## Additional volumeMounts to the pod.
#  - name: additional-volume-mount
#    mountPath: /var/additional-volume-path

extraVolumes: []
## Additional volumes to the pod.
#  - name: additional-volume
#    emptyDir: {}

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security (tls)
secretMounts: []
#  - name: vernemq-certificates
#    secretName: vernemq-certificates-secret
#    path: /etc/ssl/vernemq

statefulset:
  ## Start and stop pods in Parallel or OrderedReady (one-by-one). Note: cannot be changed after the first release.
  ## Ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  podManagementPolicy: OrderedReady
  ## StatefulSet rolling update strategy
  ## Ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
  updateStrategy: RollingUpdate
  ## Configure how much time VerneMQ takes to move offline queues to other nodes
  ## Ref: https://vernemq.com/docs/clustering/#detailed-cluster-leave-case-a-make-a-live-node-leave
  terminationGracePeriodSeconds: 60
  ## Liveness and Readiness probe values
  ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes
  livenessProbe:
    initialDelaySeconds: 90
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 3
  readinessProbe:
    initialDelaySeconds: 90
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 3
  podAnnotations: {}
  #    prometheus.io/scrape: "true"
  #    prometheus.io/port: "8888"
  annotations: {}
  labels: {}
  podLabels: {}
  lifecycle: {}

pdb:
  enabled: false
  minAvailable: 1
  # maxUnavailable: 1

## VerneMQ settings

additionalEnv:
  - name: DOCKER_VERNEMQ_ACCEPT_EULA
    value: "yes"
  - name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS.vmq_passwd
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS.vmq_acl
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS.vmq_webhooks
    value: "on"


envFrom: []
@ioolkos
Contributor

ioolkos commented Nov 22, 2022

@SebUndefined Thanks. The Helm chart seems to work for me. Can you make extra sure that there are no faulty values injected into the vernemq.conf file (via the ENV vars)?
That's what the error indicates.
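
One rough way to double-check (just a sketch; pod name taken from your log above, add --namespace if needed):

# List the DOCKER_VERNEMQ_* variables the container actually received.
# Each of these is, broadly, turned into a vernemq.conf line by the start
# script, so a typo or unsupported setting here makes cuttlefish fail.
kubectl exec vernemq-0 -- env | grep '^DOCKER_VERNEMQ_'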


👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.

@SebUndefined
Author

Hi @ioolkos, sorry for the late answer. Please find below the additionalEnv section of the YAML file:

...
additionalEnv:
  - name: DOCKER_VERNEMQ_ACCEPT_EULA
    value: "yes"
  - name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS.vmq_passwd
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS.vmq_acl
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS.vmq_webhooks
    value: "on"
    # Session lifecycle
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.auth_on_register.hook
    value: "auth_on_register"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.auth_on_register.endpoint
    value: "http://my_service:3000/v1/session-lifecycle/auth-on-register"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_client_wakeup.hook
    value: "on_client_wakeup"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_client_wakeup.endpoint
    value: "http://my_service:3000/v1/session-lifecycle/on-client-wakeup"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_register.hook
    value: "on_register"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_register.endpoint
    value: "http://my_service:3000/v1/session-lifecycle/on-register"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_client_offline.hook
    value: "on_client_offline"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_client_offline.endpoint
    value: "http://my_service:3000/v1/session-lifecycle/on-client-offline"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_client_gone.hook
    value: "on_client_gone"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_client_gone.endpoint
    value: "http://my_service:3000/v1/session-lifecycle/on-client-gone"
  # Publish Flow
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.auth_on_publish.hook
    value: "auth_on_publish"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.auth_on_publish.endpoint
    value: "http://my_service:3000/v1/publish-flow/auth-on-publish"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_publish.hook
    value: "on_publish"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_publish.endpoint
    value: "http://my_service:3000/v1/publish-flow/on-publish"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_offline_message.hook
    value: "on_offline_message"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_offline_message.endpoint
    value: "http://my_service:3000/v1/publish-flow/on-offline-message"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_deliver.hook
    value: "on_deliver"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS.on_deliver.endpoint
    value: "http://my_service:3000/v1/publish-flow/on-deliver"

@ioolkos
Contributor

ioolkos commented Dec 1, 2022

@SebUndefined can you check that you are not running into this: https://github.com/vernemq/docker-vernemq#remarks
(replace . in config settings with __)
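
Roughly, the mapping from vernemq.conf settings to Docker ENV variables looks like this (an illustrative sketch using two settings from your values file; note the __ in place of the dot):

allow_anonymous = off        ->  DOCKER_VERNEMQ_ALLOW_ANONYMOUS=off
plugins.vmq_webhooks = on    ->  DOCKER_VERNEMQ_PLUGINS__vmq_webhooks=on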


👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.

@SebUndefined
Author

Thanks @ioolkos. I have followed the remarks section, but it still does not work.

What I have tried:

  • Deleted the whole additionalEnv section (keeping only DOCKER_VERNEMQ_ACCEPT_EULA)
  • Upgraded to 1.12.6.2 (Alpine or not)
  • Used the default values.yaml available in the repo (adding the EULA env var, however)

I still get the same message...

@SebUndefined
Author

For info, I tried running the command vernemq config generate -l debug as suggested in the error message, and the result is sh: vernemq config generate -l debug: not found.

Probably because vernemq is not started yet... Any idea, @ioolkos?

@ioolkos
Contributor

ioolkos commented Dec 7, 2022

@SebUndefined what are your exact commands? helm install..?


👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.

@SebUndefined
Author

SebUndefined commented Dec 7, 2022

It is as simple as helm install -f ./vernemq-values_test.yaml vmq-front vernemq/vernemq

It is a test file where the only thing I modified is tag: 1.12.6.2-alpine instead of 1.12.3-alpine. Please note that I updated the additionalEnv with __ instead of . and it works perfectly with the old version.

@ioolkos
Contributor

ioolkos commented Dec 7, 2022

@SebUndefined Thanks, yep, looks right. I'm pondering whether I'm missing something simple here.

Did you run helm repo update?


👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.

@SebUndefined
Author

Yes, I did it again just to be sure:

helm repo update vernemq
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "vernemq" chart repository
Update Complete. ⎈Happy Helming!⎈

Install:

helm install -f ./vernemq-values_test.yaml vmq-front vernemq/vernemq
NAME: vmq-front
LAST DEPLOYED: Wed Dec  7 17:05:54 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Check your VerneMQ cluster status:
  kubectl exec --namespace default vernemq-0 /vernemq/bin/vmq-admin cluster show

2. Get VerneMQ MQTT port
  Subscribe/publish MQTT messages there: 127.0.0.1:1883
  kubectl port-forward svc/vmq-front-vernemq 1883:1883

Logs:

Permissions ok: Our pod vmq-front-vernemq-0 belongs to StatefulSet vmq-front-vernemq with 1 replicas
Error generating config with cuttlefish
  run `vernemq config generate -l debug` for more information.

@SebUndefined
Author

The values.yaml file, just in case:

# Default values for vernemq.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: vernemq/vernemq
  tag: 1.12.6.2-alpine
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

serviceMonitor:
  create: false
  labels: {}

service:
  # Can be disabled if more advanced use cases require more complex setups, e.g., combining LoadBalancer and ClusterIP for internal and external access. See also issue #274.
  enabled: true
  # NodePort - Listen to a port on nodes and forward to the service.
  # ClusterIP - Listen on the service internal to the cluster only.
  # LoadBalancer - Create a LoadBalancer in the cloud provider and forward to the service.
  type: ClusterIP
  #  clusterIP: 10.1.2.4
  #  externalIPs: []
  #  loadBalancerIP: 10.1.2.4
  #  loadBalancerSourceRanges: []
  #  externalTrafficPolicy: Local
  #  sessionAffinity: None
  #  sessionAffinityConfig: {}
  mqtt:
    enabled: true
    port: 1883
    # This is the port used by nodes to expose the service
    nodePort: 1883
  mqtts:
    enabled: false
    port: 8883
    # This is the port used by nodes to expose the service
    nodePort: 8883
  ws:
    enabled: false
    port: 8080
    # This is the port used by nodes to expose the service
    nodePort: 8080
  wss:
    enabled: false
    port: 8443
    # This is the port used by nodes to expose the service
    nodePort: 8443
  api:
    enabled: false
    port: 8888
    nodePort: 38888
  annotations: {}
  labels: {}

## Ingress can optionally be applied when enabling the MQTT websocket service
## This allows for an ingress controller to route web ports and arbitrary hostnames
## and paths to the websocket service as well as allow the controller to handle TLS
## termination for the websocket traffic. Ingress is only possible for traffic exchanged
## over HTTP, so ONLY the websocket service takes advantage of ingress.
ingress:
  className: ""
  enabled: false

  labels: {}

  annotations: {}

  ## Hosts must be provided if ingress is enabled.
  ##
  hosts: []
  # - vernemq.domain.com

  ## Paths to use for ingress rules.
  ##
  paths:
    - path: /
      pathType: ImplementationSpecific


  ## TLS configuration for ingress
  ## Secret must be manually created in the namespace
  ##
  tls: []
  # - secretName: vernemq-tls
  #   hosts:
  #   - vernemq.domain.com

## VerneMQ resources requests and limits
## Ref: http://kubernetes.io/docs/user-guide/compute-resources
resources: {}
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
#  limits:
#    cpu: 1
#    memory: 256Mi
#  requests:
#    cpu: 1
#    memory: 256Mi

## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}

## Node tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
tolerations: []

## Pod affinity
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
podAntiAffinity: soft

securityContext:
  runAsUser: 10000
  runAsGroup: 10000
  fsGroup: 10000

## If RBAC is enabled on the cluster, VerneMQ needs a service account
## with permissions sufficient to list pods
rbac:
  create: true
  serviceAccount:
    create: true
    ## Service account name to be used.
    ## If not set and serviceAccount.create is true a name is generated using the fullname template.
#    name:

persistentVolume:
  ## If true, VerneMQ will create/use a Persistent Volume Claim
  ## If false, use local directory
  enabled: false

  ## VerneMQ data Persistent Volume access modes
  ## Must match those of existing PV or dynamic provisioner
  ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  accessModes:
    - ReadWriteOnce

  ## VerneMQ data Persistent Volume size
  size: 5Gi

  ## VerneMQ data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  #  storageClass: ""

  ## Annotations for Persistent Volume Claim
  annotations: {}

extraVolumeMounts: []
## Additional volumeMounts to the pod.
#  - name: additional-volume-mount
#    mountPath: /var/additional-volume-path

extraVolumes: []
## Additional volumes to the pod.
#  - name: additional-volume
#    emptyDir: {}

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security (tls)
secretMounts: []
#  - name: vernemq-certificates
#    secretName: vernemq-certificates-secret
#    path: /etc/ssl/vernemq

statefulset:
  ## Start and stop pods in Parallel or OrderedReady (one-by-one). Note: cannot be changed after the first release.
  ## Ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  podManagementPolicy: OrderedReady
  ## StatefulSet rolling update strategy
  ## Ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
  updateStrategy: RollingUpdate
  ## Configure how much time VerneMQ takes to move offline queues to other nodes
  ## Ref: https://vernemq.com/docs/clustering/#detailed-cluster-leave-case-a-make-a-live-node-leave
  terminationGracePeriodSeconds: 60
  ## Liveness and Readiness probe values
  ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes
  livenessProbe:
    initialDelaySeconds: 90
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 3
  readinessProbe:
    initialDelaySeconds: 90
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 3
  podAnnotations: {}
  #    prometheus.io/scrape: "true"
  #    prometheus.io/port: "8888"
  annotations: {}
  labels: {}
  podLabels: {}
  lifecycle: {}

pdb:
  enabled: false
  minAvailable: 1
  # maxUnavailable: 1

## VerneMQ settings

additionalEnv:
  - name: DOCKER_VERNEMQ_ACCEPT_EULA
    value: "yes"
  - name: DOCKER_VERNEMQ_LOG__console__level
    value: "debug"
  - name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
    value: "on"
  - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS__vmq_passwd
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS__vmq_acl
    value: "off"
  - name: DOCKER_VERNEMQ_PLUGINS__vmq_webhooks
    value: "on"
    # Session lifecycle
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__auth_on_register__hook
    value: "auth_on_register"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__auth_on_register__endpoint
    value: "http://service:3000/v1/session-lifecycle/auth-on-register"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_client_wakeup__hook
    value: "on_client_wakeup"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_client_wakeup__endpoint
    value: "http://service:3000/v1/session-lifecycle/on-client-wakeup"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_register__hook
    value: "on_register"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_register__endpoint
    value: "http://service:3000/v1/session-lifecycle/on-register"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_client_offline__hook
    value: "on_client_offline"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_client_offline__endpoint
    value: "http://service:3000/v1/session-lifecycle/on-client-offline"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_client_gone__hook
    value: "on_client_gone"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_client_gone__endpoint
    value: "http://service:3000/v1/session-lifecycle/on-client-gone"
  # Publish Flow
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__auth_on_publish__hook
    value: "auth_on_publish"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__auth_on_publish__endpoint
    value: "http://service:3000/v1/publish-flow/auth-on-publish"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_publish__hook
    value: "on_publish"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_publish__endpoint
    value: "http://service:3000/v1/publish-flow/on-publish"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_offline_message__hook
    value: "on_offline_message"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_offline_message__endpoint
    value: "http://service:3000/v1/publish-flow/on-offline-message"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_deliver__hook
    value: "on_deliver"
  - name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__on_deliver__endpoint
    value: "http://service:3000/v1/publish-flow/on-deliver"
#  - name: DOCKER_VERNEMQ_MAX_CLIENT_ID_SIZE
#    value: "100"
#  - name: DOCKER_VERNEMQ_MAX_ONLINE_MESSAGES
#    value: "10000"
#  - name: DOCKER_VERNEMQ_MAX_OFFLINE_MESSAGES
#    value: "-1"
#  - name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
#    value: "/etc/ssl/vernemq/tls.crt"
#  - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
#    value: "/etc/ssl/vernemq/tls.crt"
#  - name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
#    value: "/etc/ssl/vernemq/tls.key"

envFrom: []
# add additional environment variables e.g. from a configmap or secret
# can be useful if you want to use authentication via files
#  - secretRef:
#      name: vernemq-users

@ioolkos
Contributor

ioolkos commented Dec 8, 2022

Your values.yaml seems to work for me (the dreaded "works on my machine"...).


👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.

@mojoscale

@ioolkos can you confirm that the names of the webhook-based env variables above are correct? I tried using them and deployed the cluster, but the webhooks do not fire at all. So I am wondering whether these are the correct names.

@ioolkos
Contributor

ioolkos commented Aug 9, 2023

@mojoscale the webhook ENV variables look all right, yes. It's maybe confusing because of the given names.
The format should be like this (with you choosing a name for the webhook):

vmq_webhooks.mywebhook1.hook = auth_on_register
vmq_webhooks.mywebhook1.endpoint = http://127.0.0.1/myendpoints
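
In Docker ENV form (with the dots replaced by __, as discussed earlier in this thread), those two lines would presumably become:

DOCKER_VERNEMQ_VMQ_WEBHOOKS__mywebhook1__hook=auth_on_register
DOCKER_VERNEMQ_VMQ_WEBHOOKS__mywebhook1__endpoint=http://127.0.0.1/myendpoints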

Docs are here: https://docs.vernemq.com/plugin-development/webhookplugins

The ENV variables get injected into the vernemq.conf file in Docker. You can check the generated block at the end of the /etc/vernemq.conf file to verify (the lines from ##START## to the end).
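
A minimal sketch of that check via kubectl (path as mentioned above; adjust the pod name and namespace to your release):

# Print the tail of the generated config, which contains the ##START## block:
kubectl exec vernemq-0 -- tail -n 50 /etc/vernemq.conf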


👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.

@AntonSmolkov

I edited the k8s StatefulSet, set tail -f /dev/null as the container command, set livenessProbe.failureThreshold: 9999, then ran
vernemq config generate -l debug from the pod's shell (a sketch of this workflow is at the end of this comment), and it pointed me to the wrong setting I was using:

2024-06-21T10:52:36.219261+00:00 [error] You've tried to set ovveride_max_online_messages, but there is no setting with that name.
2024-06-21T10:52:36.219307+00:00 [error] Did you mean one of these?
2024-06-21T10:52:36.289651+00:00 [error] override_max_online_messages
2024-06-21T10:52:36.289707+00:00 [error] max_online_messages
2024-06-21T10:52:36.289782+00:00 [error] max_offline_messages
2024-06-21T10:52:36.293361+00:00 [error] Error generating configuration in phase transform_datatypes
2024-06-21T10:52:36.293415+00:00 [error] Conf file attempted to set unknown variable: ovveride_max_online_messages 

Removed this setting and everything started fine.
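
A rough sketch of that workflow (the StatefulSet, pod, and container names here are assumptions, adjust them to your release; the JSON-patch paths assume the VerneMQ container is the first one in the pod spec):

# Keep the container alive without starting VerneMQ and effectively disable
# the liveness probe:
kubectl patch statefulset vernemq --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["tail", "-f", "/dev/null"]},
  {"op": "add", "path": "/spec/template/spec/containers/0/livenessProbe/failureThreshold", "value": 9999}
]'

# Then open a shell in the pod and run the config generator with debug logging
# (if the binary is not on PATH, try its full path, e.g. /vernemq/bin/vernemq):
kubectl exec -it vernemq-0 -- sh
vernemq config generate -l debug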
