
K8SPSMDB-227: Add topologySpreadConstraints #1280

Merged: 14 commits merged into main from dev/K8SPSMDB-227 on Sep 1, 2023

Conversation

@pooknull (Contributor) commented on Aug 2, 2023

Jira ticket: https://jira.percona.com/browse/K8SPSMDB-227

DESCRIPTION

If the operator uses topologySpreadConstraints with maxSkew, there is no need to set hard affinity rules, and pods are still guaranteed to be distributed among the available nodes.

Solution:
Add topologySpreadConstraints to the PerconaServerMongoDB custom resource.
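For illustration, here is a minimal sketch of how the new field could look in a PerconaServerMongoDB custom resource, assuming it is exposed per replica set like the other scheduling options; the cluster name, replica set name, and label selector below are placeholders, and the exact field placement should be checked against the regenerated CRD in this PR:

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name          # placeholder name
spec:
  replsets:
    - name: rs0
      size: 3
      # Spread the rs0 pods evenly across nodes without requiring hard anti-affinity rules.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: percona-server-mongodb   # placeholder selector
```

With whenUnsatisfiable: ScheduleAnyway the constraint is a soft preference; setting it to DoNotSchedule turns it into a hard scheduling requirement, roughly comparable to required anti-affinity.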

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?
  • Are OpenShift compare files changed for E2E tests (compare/*-oc.yml)?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are the manifests (crd/bundle) regenerated if needed?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support the oldest and newest supported MongoDB versions?
  • Does the change support the oldest and newest supported Kubernetes versions?

The pull-request-size bot added the size/XXL (1000+ lines) label on Aug 2, 2023.
@@ -2854,6 +2867,58 @@ spec:
type: string
type: object
type: array
topologySpreadConstraints:
Review comment from a Contributor on this hunk:
should we update cr.yml too?

@pooknull marked this pull request as ready for review on August 9, 2023 at 13:27.
inelpandzic previously approved these changes on Aug 11, 2023.
nmarukovich previously approved these changes on Aug 15, 2023.
pkg/apis/psmdb/v1/psmdb_defaults.go (outdated review thread, resolved)
@pooknull dismissed stale reviews from nmarukovich and inelpandzic via 80072f5 on August 18, 2023 at 14:48.
@pooknull requested a review from hors on August 18, 2023 at 15:02.
@JNKPercona (Collaborator) commented:

Test name Status
arbiter passed
cross-site-sharded passed
data-at-rest-encryption passed
data-sharded passed
demand-backup passed
demand-backup-eks-credentials passed
demand-backup-physical passed
demand-backup-physical-sharded passed
demand-backup-sharded passed
expose-sharded passed
ignore-labels-annotations passed
init-deploy passed
limits passed
liveness passed
mongod-major-upgrade passed
mongod-major-upgrade-sharded passed
monitoring-2-0 passed
multi-cluster-service passed
non-voting passed
one-pod passed
operator-self-healing-chaos passed
pitr passed
pitr-sharded passed
recover-no-primary passed
rs-shard-migration passed
scaling passed
scheduled-backup passed
security-context passed
self-healing-chaos passed
service-per-pod passed
serviceless-external-nodes passed
smart-update passed
storage passed
upgrade passed
upgrade-consistency passed
upgrade-sharded passed
users passed
version-service passed
We ran 38 out of 38 tests.

commit: a3e5d95
image: perconalab/percona-server-mongodb-operator:PR-1280-a3e5d95e

@hors merged commit ef110e3 into main on Sep 1, 2023 (8 checks passed).
@hors deleted the dev/K8SPSMDB-227 branch on September 1, 2023 at 10:45.
Labels: size/XXL (1000+ lines)
6 participants