
[Bug]: Crossplane provider throws error when trying to reduce number of shards #1622

Open
stevan95 opened this issue Dec 25, 2024 · 2 comments
Labels
bug (Something isn't working), needs:triage

Comments


stevan95 commented Dec 25, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Affected Resource(s)

Scaling up the number of shards using a Global Replication Group works fine.

The issue arises when trying to reduce the number of shards.
[Screenshots attached showing the error output]

Affected resource: GlobalReplicationGroup

Resource MRs required to reproduce the bug


apiVersion: elasticache.aws.upbound.io/v1beta2
kind: ReplicationGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-uw2-rg
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-uw2-rg
  name: test-uw2-rg
spec:
  forProvider:
    applyImmediately: true
    authTokenSecretRef:
      key: token
      name: redis-token
      namespace: default
    description: Managed by Crossplane
    maintenanceWindow: sun:21:00-mon:04:00
    region: us-west-2
    replicasPerNodeGroup: 1
    subnetGroupNameSelector:
      matchLabels:
        managed-resource-name: test-uw2-subnet-group
  managementPolicies:
    - '*'
---
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: ParameterGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-elasticache-parameter-group
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-elasticache-parameter-group
  name: test-ue1-elasticache-parameter-group
spec:
  forProvider:
    description: Managed by Crossplane
    family: redis6.x
    name: test-ue1-pg
    parameter:
      - name: cluster-allow-reads-when-down
        value: "yes"
      - name: min-replicas-max-lag
        value: "60"
      - name: cluster-enabled
        value: "yes"
      - name: maxmemory-policy
        value: allkeys-lru
      - name: maxmemory-samples
        value: "10"
      - name: reserved-memory-percent
        value: "25"
    region: us-east-1
  managementPolicies:
    - '*'
---
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: SubnetGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-subnet-group
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-subnet-group
  name: test-ue1-subnet-group
spec:
  forProvider:
    description: Managed by Crossplane
    region: us-east-1
    subnetIds:
      - subnet-1
      - subnet-2
      - subnet-3
  managementPolicies:
    - '*'
---
apiVersion: elasticache.aws.upbound.io/v1beta2
kind: ReplicationGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-rg
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-rg
  name: test-ue1-rg
spec:
  forProvider:
    applyImmediately: true
    atRestEncryptionEnabled: true
    authTokenSecretRef:
      key: token
      name: redis-token
      namespace: default
    autoGenerateAuthToken: true
    clusterMode: enabled
    description: Managed by Crossplane
    engineVersion: "6.2"
    logDeliveryConfiguration: []
    maintenanceWindow: sun:21:00-mon:04:00
    multiAzEnabled: true
    nodeType: cache.r6g.large
    region: us-east-1
    replicasPerNodeGroup: 1
    subnetGroupNameSelector:
      matchLabels:
        managed-resource-name: test-ue1-subnet-group
    transitEncryptionEnabled: true
  managementPolicies:
    - '*'
---
apiVersion: secretsmanager.aws.upbound.io/v1beta1
kind: Secret
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-secret
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-secret
  name: test-ue1-secret
spec:
  forProvider:
    name: test-ue1-token
    recoveryWindowInDays: 0
    region: us-east-1
  managementPolicies:
    - '*'
---
apiVersion: secretsmanager.aws.upbound.io/v1beta1
kind: SecretVersion
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-sv
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-sv
  name: test-ue1-sv
spec:
  forProvider:
    region: us-east-1
    secretId: test-ue1-token
    secretStringSecretRef:
      key: token
      name: redis-token
      namespace: default
  managementPolicies:
    - '*'
---
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: GlobalReplicationGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-global-rg
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-global-rg
  name: test-global-rg
spec:
  forProvider:
    cacheNodeType: cache.r6g.large
    globalReplicationGroupDescription: ""
    globalReplicationGroupIdSuffix: test
    numNodeGroups: 1
    primaryReplicationGroupId: null
    region: us-east-1
  managementPolicies:
    - '*'
---
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: SubnetGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-uw2-subnet-group
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-uw2-subnet-group
  name: test-uw2-subnet-group
  ownerReferences:
spec:
  forProvider:
    description: Managed by Crossplane
    region: us-west-2
    subnetIds:
      - subnet-1
      - subnet-2
      - subnet-3
  managementPolicies:
    - '*'
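
For illustration, the scale-down that triggers the error corresponds to lowering spec.forProvider.numNodeGroups on the GlobalReplicationGroup MR above. A minimal sketch of the change (the before/after values here are illustrative, not the exact shard counts used):

# Hypothetical scale-down: numNodeGroups lowered from 2 to 1 on the
# GlobalReplicationGroup shown above; applying a change like this is
# what triggers the provider panic described below.
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: GlobalReplicationGroup
metadata:
  name: test-global-rg
spec:
  forProvider:
    numNodeGroups: 1   # previously 2
    region: us-east-1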

Steps to Reproduce

  1. Create a global datastore with a primary and a secondary cluster in two different regions.
  2. Increasing the number of shards works fine; the issue arises when you try to reduce the number of shards of Redis clusters that are part of the global replication group.
  3. The Crossplane provider will throw an error like: crossplane update failed: async update failed: recovered from panic: runtime error: slice bounds out of range [:1] with capacity 0

What happened?

When you try to reduce the number of shards of a Redis cluster that is part of a global replication group, the Crossplane provider throws the following error:
crossplane update failed: async update failed: recovered from panic: runtime error: slice bounds out of range [:1] with capacity 0
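
For reference, the message indicates a reslice like s[:1] on an empty slice somewhere in the provider's update path. The snippet below is not the provider's code; it is a minimal, self-contained Go illustration of how this class of panic occurs and how a bounds check avoids it:

package main

import "fmt"

// Minimal illustration of the reported panic class (not the provider's code):
// reslicing an empty slice to a positive length panics with
// "runtime error: slice bounds out of range [:1] with capacity 0".
func firstN(items []string, n int) ([]string, error) {
	if n > len(items) {
		// Without this guard, items[:n] panics when items is shorter than n,
		// e.g. an empty slice with capacity 0.
		return nil, fmt.Errorf("requested %d items but only %d available", n, len(items))
	}
	return items[:n], nil
}

func main() {
	var nodeGroups []string // empty: len 0, cap 0

	// nodeGroups[:1] here would panic exactly like the reported error.
	kept, err := firstN(nodeGroups, 1)
	if err != nil {
		fmt.Println("handled instead of panicking:", err)
		return
	}
	fmt.Println(kept)
}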

Relevant Error Output Snippet

No response

Crossplane Version

v1.18

Provider Version

v1.17

Kubernetes Version

No response

Kubernetes Distribution

No response

Additional Info

No response

stevan95 added the bug (Something isn't working) and needs:triage labels on Dec 25, 2024
turkenf (Collaborator) commented Dec 26, 2024

Hi @stevan95,

Thank you for bringing this up. Could you please provide us with the MRs and fill in the description of which resource(s) were affected?

stevan95 (Author) commented

Hi @turkenf,

Thank you for your response, I have updated the issue.
