
Merge pull request #59 from percona/psmdb-170
Update for PSMDBO 1.7.0 release
tplavcic authored Mar 15, 2021
2 parents 528c785 + 1638110 commit a07c486
Show file tree
Hide file tree
Showing 15 changed files with 269 additions and 179 deletions.
4 changes: 4 additions & 0 deletions .github/ct.yaml
@@ -0,0 +1,4 @@
+# See https://github.com/helm/chart-testing#configuration
+remote: origin
+target-branch: main
+helm-extra-args: --timeout 600s
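With this config checked in, the CI checks can be reproduced locally. A minimal sketch, assuming the chart-testing (`ct`) and `kind` CLIs are installed:

```bash
# Lint only the charts that changed relative to origin/main
ct lint --config .github/ct.yaml

# Install tests need a cluster; a throwaway kind cluster works
kind create cluster --name ct-psmdb
ct install --config .github/ct.yaml
kind delete cluster --name ct-psmdb
```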
6 changes: 3 additions & 3 deletions .github/workflows/test.yaml
@@ -29,18 +29,18 @@ jobs:
      - name: Run chart-testing (list-changed)
        id: list-changed
        run: |
-          changed=$(ct list-changed)
+          changed=$(ct list-changed --config .github/ct.yaml)
          if [[ -n "$changed" ]]; then
            echo "::set-output name=changed::true"
          fi

      - name: Run chart-testing (lint)
-        run: ct lint
+        run: ct lint --config .github/ct.yaml

      - name: Create kind cluster
        uses: helm/[email protected]
        # Only build a kind cluster if there are chart changes to test.
        if: steps.list-changed.outputs.changed == 'true'

      - name: Run chart-testing (install)
-        run: ct install
+        run: ct install --config .github/ct.yaml
4 changes: 2 additions & 2 deletions charts/psmdb-db/Chart.yaml
@@ -1,9 +1,9 @@
apiVersion: v1
-appVersion: "1.6.0"
+appVersion: "1.7.0"
description: A Helm chart for installing Percona Server MongoDB Cluster Databases using the PSMDB Operator.
name: psmdb-db
home: https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html
-version: 0.1.2
+version: 1.7.0
maintainers:
- name: cap1984
  email: [email protected]
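Since the chart version now tracks the operator release, published versions can be listed directly (Helm 3 syntax; assumes the `percona` repo from the README has been added):

```bash
helm repo update
helm search repo percona/psmdb-db --versions
```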
77 changes: 42 additions & 35 deletions charts/psmdb-db/README.md
@@ -5,9 +5,9 @@ This chart implements Percona Server MongoDB deployment in Kubernetes via Custom

## Pre-requisites
* [PSMDB operator](https://hub.helm.sh/charts/percona/psmdb-operator) running in your K8S cluster
-* Kubernetes 1.11+
+* Kubernetes 1.15+
* PV support on the underlying infrastructure - only if you are provisioning persistent volume(s).
-* At least `v2.4.0` version of helm
+* At least `v2.5.0` version of helm

## Custom Resource Details
* <https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/>
@@ -24,7 +24,7 @@ To install the chart with the `psmdb` release name using a dedicated namespace (

```sh
helm repo add percona https://percona.github.io/percona-helm-charts/
-helm install my-db percona/psmdb-db --version 0.1.1 --namespace my-namespace
+helm install my-db percona/psmdb-db --version 1.7.0 --namespace my-namespace
```

The chart can be customized using the following configurable parameters:
@@ -38,7 +38,7 @@ The chart can be customized using the following configurable parameters:
| `upgradeOptions.apply` | PSMDB image to apply from version service - recommended, latest, actual version like 4.4.2-4 | `recommended` |
| `upgradeOptions.schedule` | Cron formatted time to execute the update | `"0 2 * * *"` |
| `image.repository` | PSMDB Container image repository | `percona/percona-server-mongodb` |
-| `image.tag` | PSMDB Container image tag | `4.4.2-4` |
+| `image.tag` | PSMDB Container image tag | `4.4.3-5` |
| `imagePullSecrets` | PSMDB Container pull secret | `[]` |
| `runUid` | Set UserID | `""` |
| `secrets` | Users secret structure | `{}` |
Expand All @@ -47,35 +47,37 @@ The chart can be customized using the following configurable parameters:
| `pmm.image.tag` | PMM Container image tag | `2.12.0` |
| `pmm.serverHost` | PMM server related K8S service hostname | `monitoring-service` |
||
-| `replset.name` | ReplicaSet name | `rs0` |
-| `replset.size` | ReplicaSet size (pod quantity) | `3` |
-| `replset.antiAffinityTopologyKey` | ReplicaSet Pod affinity | `kubernetes.io/hostname` |
-| `replset.priorityClass` | ReplicaSet Pod priorityClassName | `""` |
-| `replset.annotations` | ReplicaSet Pod annotations | `{}` |
-| `replset.labels` | ReplicaSet Pod labels | `{}` |
-| `replset.nodeSelector` | ReplicaSet Pod nodeSelector labels | `{}` |
-| `replset.livenessProbe` | ReplicaSet Pod livenessProbe structure | `{}` |
-| `replset.podDisruptionBudget.maxUnavailable` | ReplicaSet failed Pods maximum quantity | `1` |
-| `replset.expose.enabled` | Allow access to replicaSet from outside of Kubernetes | `false` |
-| `replset.expose.exposeType` | Network service access point type | `LoadBalancer` |
-| `replset.arbiter.enabled` | Create MongoDB arbiter service | `false` |
-| `replset.arbiter.size` | MongoDB arbiter Pod quantity | `1` |
-| `replset.arbiter.antiAffinityTopologyKey` | MongoDB arbiter Pod affinity | `kubernetes.io/hostname` |
-| `replset.arbiter.priorityClass` | MongoDB arbiter priorityClassName | `""` |
-| `replset.arbiter.annotations` | MongoDB arbiter Pod annotations | `{}` |
-| `replset.arbiter.labels` | MongoDB arbiter Pod labels | `{}` |
-| `replset.arbiter.nodeSelector` | MongoDB arbiter Pod nodeSelector labels | `{}` |
-| `replset.arbiter.livenessProbe` | MongoDB arbiter Pod livenessProbe structure | `{}` |
-| `replset.schedulerName` | ReplicaSet Pod schedulerName | `""` |
-| `replset.resources` | ReplicaSet Pods resource requests and limits | `{}` |
-| `replset.volumeSpec` | ReplicaSet Pods storage resources | `{}` |
-| `replset.volumeSpec.emptyDir` | ReplicaSet Pods emptyDir K8S storage | `{}` |
-| `replset.volumeSpec.hostPath` | ReplicaSet Pods hostPath K8S storage | |
-| `replset.volumeSpec.hostPath.path` | ReplicaSet Pods hostPath K8S storage path | `""` |
-| `replset.volumeSpec.pvc` | ReplicaSet Pods PVC request parameters | |
-| `replset.volumeSpec.pvc.storageClassName` | ReplicaSet Pods PVC target storageClass | `""` |
-| `replset.volumeSpec.pvc.accessModes` | ReplicaSet Pods PVC access policy | `[]` |
-| `replset.volumeSpec.pvc.resources.requests.storage` | ReplicaSet Pods PVC storage size | `3Gi` |
+| `replsets[0].name` | ReplicaSet name | `rs0` |
+| `replsets[0].size` | ReplicaSet size (pod quantity) | `3` |
+| `replsets[0].antiAffinityTopologyKey` | ReplicaSet Pod affinity | `kubernetes.io/hostname` |
+| `replsets[0].priorityClass` | ReplicaSet Pod priorityClassName | `""` |
+| `replsets[0].annotations` | ReplicaSet Pod annotations | `{}` |
+| `replsets[0].labels` | ReplicaSet Pod labels | `{}` |
+| `replsets[0].nodeSelector` | ReplicaSet Pod nodeSelector labels | `{}` |
+| `replsets[0].livenessProbe` | ReplicaSet Pod livenessProbe structure | `{}` |
+| `replsets[0].runtimeClass` | ReplicaSet Pod runtimeClassName | `""` |
+| `replsets[0].sidecars` | ReplicaSet Pod sidecars | `{}` |
+| `replsets[0].podDisruptionBudget.maxUnavailable` | ReplicaSet failed Pods maximum quantity | `1` |
+| `replsets[0].expose.enabled` | Allow access to replicaSet from outside of Kubernetes | `false` |
+| `replsets[0].expose.exposeType` | Network service access point type | `LoadBalancer` |
+| `replsets[0].arbiter.enabled` | Create MongoDB arbiter service | `false` |
+| `replsets[0].arbiter.size` | MongoDB arbiter Pod quantity | `1` |
+| `replsets[0].arbiter.antiAffinityTopologyKey` | MongoDB arbiter Pod affinity | `kubernetes.io/hostname` |
+| `replsets[0].arbiter.priorityClass` | MongoDB arbiter priorityClassName | `""` |
+| `replsets[0].arbiter.annotations` | MongoDB arbiter Pod annotations | `{}` |
+| `replsets[0].arbiter.labels` | MongoDB arbiter Pod labels | `{}` |
+| `replsets[0].arbiter.nodeSelector` | MongoDB arbiter Pod nodeSelector labels | `{}` |
+| `replsets[0].arbiter.livenessProbe` | MongoDB arbiter Pod livenessProbe structure | `{}` |
+| `replsets[0].schedulerName` | ReplicaSet Pod schedulerName | `""` |
+| `replsets[0].resources` | ReplicaSet Pods resource requests and limits | `{}` |
+| `replsets[0].volumeSpec` | ReplicaSet Pods storage resources | `{}` |
+| `replsets[0].volumeSpec.emptyDir` | ReplicaSet Pods emptyDir K8S storage | `{}` |
+| `replsets[0].volumeSpec.hostPath` | ReplicaSet Pods hostPath K8S storage | |
+| `replsets[0].volumeSpec.hostPath.path` | ReplicaSet Pods hostPath K8S storage path | `""` |
+| `replsets[0].volumeSpec.pvc` | ReplicaSet Pods PVC request parameters | |
+| `replsets[0].volumeSpec.pvc.storageClassName` | ReplicaSet Pods PVC target storageClass | `""` |
+| `replsets[0].volumeSpec.pvc.accessModes` | ReplicaSet Pods PVC access policy | `[]` |
+| `replsets[0].volumeSpec.pvc.resources.requests.storage` | ReplicaSet Pods PVC storage size | `3Gi` |
| |
| `sharding.enabled` | Enable sharding setup | `true` |
| `sharding.configrs.size` | Config ReplicaSet size (pod quantity) | `3` |
@@ -84,6 +86,8 @@
| `sharding.configrs.annotations` | Config ReplicaSet Pod annotations | `{}` |
| `sharding.configrs.labels` | Config ReplicaSet Pod labels | `{}` |
| `sharding.configrs.nodeSelector` | Config ReplicaSet Pod nodeSelector labels | `{}` |
+| `sharding.configrs.runtimeClass` | Config ReplicaSet Pod runtimeClassName | `""` |
+| `sharding.configrs.sidecars` | Config ReplicaSet Pod sidecars | `{}` |
| `sharding.configrs.podDisruptionBudget.maxUnavailable` | Config ReplicaSet failed Pods maximum quantity | `1` |
| `sharding.configrs.resources.limits.cpu` | Config ReplicaSet resource limits CPU | `300m` |
| `sharding.configrs.resources.limits.memory` | Config ReplicaSet resource limits memory | `0.5G` |
@@ -102,6 +106,8 @@ The chart can be customized using the following configurable parameters:
| `sharding.mongos.annotations` | Mongos Pods annotations | `{}` |
| `sharding.mongos.labels` | Mongos Pods labels | `{}` |
| `sharding.mongos.nodeSelector` | Mongos Pods nodeSelector labels | `{}` |
+| `sharding.mongos.runtimeClass` | Mongos Pod runtimeClassName | `""` |
+| `sharding.mongos.sidecars` | Mongos Pod sidecars | `{}` |
| `sharding.mongos.podDisruptionBudget.maxUnavailable` | Mongos failed Pods maximum quantity | `1` |
| `sharding.mongos.resources.limits.cpu` | Mongos Pods resource limits CPU | `300m` |
| `sharding.mongos.resources.limits.memory` | Mongos Pods resource limits memory | `0.5G` |
@@ -114,14 +120,15 @@ The chart can be customized using the following configurable parameters:
| `backup.enabled` | Enable backup PBM agent | `true` |
| `backup.restartOnFailure` | Backup Pods restart policy | `true` |
| `backup.image.repository` | PBM Container image repository | `percona/percona-server-mongodb-operator` |
-| `backup.image.tag` | PBM Container image tag | `1.6.0-backup` |
+| `backup.image.tag` | PBM Container image tag | `1.7.0-backup` |
| `backup.serviceAccountName` | Run PBM Container under specified K8S SA | `percona-server-mongodb-operator` |
| `backup.storages` | Local/remote backup storages settings | `{}` |
| `backup.tasks` | Backup working schedule | `{}` |
| `users` | PSMDB essential users | `{}` |


Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
+Note that you can use multiple replica sets only with sharding enabled.
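For illustration, a minimal override with two replica sets might look like the sketch below. Helm replaces list values rather than merging them with chart defaults, so real entries need their full spec (sizes alone are shown for brevity); names and the file path are illustrative:

```bash
cat > /tmp/multi-rs-values.yaml <<'EOF'
sharding:
  enabled: true        # required for more than one replica set
replsets:
- name: rs0
  size: 3
- name: rs1
  size: 3
EOF
helm install my-db percona/psmdb-db --version 1.7.0 \
  --namespace my-namespace -f /tmp/multi-rs-values.yaml
```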

## Examples

@@ -131,6 +138,6 @@ This is great for a dev PSMDB/MongoDB cluster as it doesn't bother with backups

```bash
$ helm install dev --namespace psmdb . \
-    --set runUid=1001 --set replset.volumeSpec.pvc.resources.requests.storage=20Gi \
+    --set runUid=1001 --set "replsets[0].volumeSpec.pvc.resources.requests.storage=20Gi" \
--set backup.enabled=false --set sharding.enabled=false
```
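After an install like this, a quick way to watch the cluster come up (a sketch; `psmdb` is the short name the operator's CRD registers for `PerconaServerMongoDB`):

```bash
kubectl get psmdb -n psmdb    # overall cluster state as seen by the operator
kubectl get pods -n psmdb     # replica set, arbiter, and backup agent pods
```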
3 changes: 3 additions & 0 deletions charts/psmdb-db/crds/crd.yaml
@@ -32,6 +32,9 @@ spec:
    storage: false
    served: true
  - name: v1-6-0
    storage: false
    served: true
+  - name: v1-7-0
+    storage: true
+    served: true
  - name: v1alpha1
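Only one CRD version may carry `storage: true` (it is the schema used to persist objects in etcd), so this hunk makes `v1-7-0` the stored version. To confirm which version a cluster stores after upgrading, a sketch using the CRD name this operator registers:

```bash
kubectl get crd perconaservermongodbs.psmdb.percona.com \
  -o jsonpath='{range .spec.versions[*]}{.name}{"\t"}{.storage}{"\n"}{end}'
```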
114 changes: 69 additions & 45 deletions charts/psmdb-db/production-values.yaml
@@ -6,7 +6,11 @@
# platform: kubernetes

# Cluster DNS Suffix
-# DNSsuffix: .svc.cluster.local
+# DNSsuffix: svc.cluster.local

+finalizers:
+## Set this if you want to delete database persistent volumes on cluster deletion
+# - delete-psmdb-pvc
+
nameOverride: ""
fullnameOverride: ""
@@ -21,7 +25,7 @@ upgradeOptions:

image:
  repository: percona/percona-server-mongodb
-  tag: 4.4.2-4
+  tag: 4.4.3-5

# imagePullSecrets: []
# runUid: 1001
@@ -34,53 +38,59 @@ pmm:
  tag: 2.12.0
  serverHost: monitoring-service

-replset:
-  name: rs0
-  size: 3
-  antiAffinityTopologyKey: "kubernetes.io/hostname"
-  # priorityClass: ""
-  # annotations: {}
-  # labels: {}
-  # nodeSelector: {}
-  livenessProbe:
-    failureThreshold: 4
-    initialDelaySeconds: 60
-    periodSeconds: 30
-    successThreshold: 1
-    timeoutSeconds: 5
-    startupDelaySeconds: 7200
-  podDisruptionBudget:
-    maxUnavailable: 1
-  expose:
-    enabled: false
-    exposeType: LoadBalancer
-  arbiter:
-    enabled: false
-    size: 1
+replsets:
+- name: rs0
+  size: 3
+  antiAffinityTopologyKey: "kubernetes.io/hostname"
+  # priorityClass: ""
+  # annotations: {}
+  # labels: {}
+  # nodeSelector: {}
+  # livenessProbe: {}
+  # schedulerName: ""
+  resources:
+    limits:
+      cpu: "300m"
+      memory: "0.5G"
+    requests:
+      cpu: "300m"
+      memory: "0.5G"
+  volumeSpec:
+    # emptyDir: {}
+    # hostPath:
+    #   path: /data
+    pvc:
+      # storageClassName: standard
+      # accessModes: [ "ReadWriteOnce" ]
+      resources:
+        requests:
+          storage: 3Gi
+  livenessProbe:
+    failureThreshold: 4
+    initialDelaySeconds: 60
+    periodSeconds: 30
+    successThreshold: 1
+    timeoutSeconds: 5
+    startupDelaySeconds: 7200
+  # runtimeClassName: image-rc
+  # sidecars:
+  # - image: busybox
+  #   command: ["/bin/sh"]
+  #   args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
+  #   name: rs-sidecar-1
+  podDisruptionBudget:
+    maxUnavailable: 1
+  expose:
+    enabled: false
+    exposeType: LoadBalancer
+  arbiter:
+    enabled: false
+    size: 1
    antiAffinityTopologyKey: "kubernetes.io/hostname"
    # priorityClass: ""
    # annotations: {}
    # labels: {}
    # nodeSelector: {}
-  # livenessProbe: {}
-  # schedulerName: ""
-  resources:
-    limits:
-      cpu: "300m"
-      memory: "0.5G"
-    requests:
-      cpu: "300m"
-      memory: "0.5G"
-  volumeSpec:
-    # emptyDir: {}
-    # hostPath:
-    #   path: /data
-    pvc:
-      # storageClassName: standard
-      # accessModes: [ "ReadWriteOnce" ]
-      resources:
-        requests:
-          storage: 3Gi

sharding:
  enabled: true
@@ -92,6 +102,12 @@ sharding:
    # annotations: {}
    # labels: {}
    # nodeSelector: {}
+    # runtimeClassName: image-rc
+    # sidecars:
+    # - image: busybox
+    #   command: ["/bin/sh"]
+    #   args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
+    #   name: rs-sidecar-1
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
@@ -120,6 +136,12 @@
    # annotations: {}
    # labels: {}
    # nodeSelector: {}
+    # runtimeClassName: image-rc
+    # sidecars:
+    # - image: busybox
+    #   command: ["/bin/sh"]
+    #   args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
+    #   name: rs-sidecar-1
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
@@ -145,7 +167,7 @@ backup:
  restartOnFailure: true
  image:
    repository: percona/percona-server-mongodb-operator
-    tag: 1.6.0-backup
+    tag: 1.7.0-backup
  serviceAccountName: percona-server-mongodb-operator
  # resources:
  #   limits:
@@ -172,11 +194,13 @@
  # - name: daily-s3-us-west
  #   enabled: true
  #   schedule: "0 0 * * *"
+  #   keep: 3
  #   storageName: s3-us-west
  #   compressionType: gzip
  # - name: weekly-s3-us-west
  #   enabled: false
  #   schedule: "0 0 * * 0"
+  #   keep: 5
  #   storageName: s3-us-west
  #   compressionType: gzip
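To exercise these defaults end to end, the file can be passed straight to `helm install`. A sketch, assuming it is run from a checkout of this repository with the `percona` repo added:

```bash
helm install my-db percona/psmdb-db --version 1.7.0 \
  --namespace my-namespace -f charts/psmdb-db/production-values.yaml
```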
2 changes: 1 addition & 1 deletion charts/psmdb-db/templates/NOTES.txt
@@ -5,7 +5,7 @@ To get a MongoDB prompt inside your new cluster you can run:

And then for replica set:
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:4.4 --restart=Never \
-    -- mongo "mongodb+srv://${ADMIN_USER}:${ADMIN_PASSWORD}@{{ include "psmdb-database.fullname" . }}-{{ .Values.replset.name }}.{{ .Release.Namespace }}.svc.cluster.local/admin?replicaSet=rs0&ssl=false"
+    -- mongo "mongodb+srv://${ADMIN_USER}:${ADMIN_PASSWORD}@{{ include "psmdb-database.fullname" . }}-{{ (index .Values.replsets 0).name }}.{{ .Release.Namespace }}.svc.cluster.local/admin?replicaSet=rs0&ssl=false"

Or for sharding setup:
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:4.4 --restart=Never \
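The `${ADMIN_USER}` and `${ADMIN_PASSWORD}` placeholders in these notes come from the cluster's users Secret. A sketch of filling them in; the Secret name below is illustrative, so list the namespace's Secrets to find the real one:

```bash
NAMESPACE=psmdb
SECRET=my-db-psmdb-db-secrets   # illustrative; check `kubectl -n "$NAMESPACE" get secrets`
ADMIN_USER=$(kubectl -n "$NAMESPACE" get secret "$SECRET" \
  -o jsonpath='{.data.MONGODB_USER_ADMIN_USER}' | base64 -d)
ADMIN_PASSWORD=$(kubectl -n "$NAMESPACE" get secret "$SECRET" \
  -o jsonpath='{.data.MONGODB_USER_ADMIN_PASSWORD}' | base64 -d)
```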