Create a global datastore with a primary and a secondary cluster in two different regions.
Increasing the number of shards works as expected; the issue arises when you try to reduce the number of shards of a Redis cluster that is part of a global replication group.
The Crossplane provider then throws an error like: crossplane update failed: async update failed: recovered from panic: runtime error: slice bounds out of range [:1] with capacity 0
Is there an existing issue for this?
Affected Resource(s)
Scaling up the number of shards using a Global Replication Group works fine.
The issue arises when trying to reduce the number of shards.
Affected resource: GlobalReplicationGroup
Resource MRs required to reproduce the bug
apiVersion: elasticache.aws.upbound.io/v1beta2
kind: ReplicationGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-uw2-rg
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-uw2-rg
  name: test-uw2-rg
spec:
  forProvider:
    applyImmediately: true
    authTokenSecretRef:
      key: token
      name: redis-token
      namespace: default
    description: Managed by Crossplane
    maintenanceWindow: sun:21:00-mon:04:00
    region: us-west-2
    replicasPerNodeGroup: 1
    subnetGroupNameSelector:
      matchLabels:
        managed-resource-name: test-uw2-subnet-group
  managementPolicies:
---
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: ParameterGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-elasticache-parameter-group
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-elasticache-parameter-group
  name: test-ue1-elasticache-parameter-group
spec:
  forProvider:
    description: Managed by Crossplane
    family: redis6.x
    name: test-ue1-pg
    parameter:
      - name: cluster-allow-reads-when-down
        value: "yes"
      - name: min-replicas-max-lag
        value: "60"
      - name: cluster-enabled
        value: "yes"
      - name: maxmemory-policy
        value: allkeys-lru
      - name: maxmemory-samples
        value: "10"
      - name: reserved-memory-percent
        value: "25"
    region: us-east-1
  managementPolicies:
---
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: SubnetGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-subnet-group
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-subnet-group
  name: test-ue1-subnet-group
spec:
  forProvider:
    description: Managed by Crossplane
    region: us-east-1
    subnetIds:
      - subnet-1
      - subnet-2
      - subnet-3
  managementPolicies:
---
apiVersion: elasticache.aws.upbound.io/v1beta2
kind: ReplicationGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-rg
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-rg
  name: test-ue1-rg
spec:
  forProvider:
    applyImmediately: true
    atRestEncryptionEnabled: true
    authTokenSecretRef:
      key: token
      name: redis-token
      namespace: default
    autoGenerateAuthToken: true
    clusterMode: enabled
    description: Managed by Crossplane
    engineVersion: "6.2"
    logDeliveryConfiguration: []
    maintenanceWindow: sun:21:00-mon:04:00
    multiAzEnabled: true
    nodeType: cache.r6g.large
    region: us-east-1
    replicasPerNodeGroup: 1
    subnetGroupNameSelector:
      matchLabels:
        managed-resource-name: test-ue1-subnet-group
    transitEncryptionEnabled: true
  managementPolicies:
---
apiVersion: secretsmanager.aws.upbound.io/v1beta1
kind: Secret
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-secret
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-secret
  name: test-ue1-secret
spec:
  forProvider:
    name: test-ue1-token
    recoveryWindowInDays: 0
    region: us-east-1
  managementPolicies:
---
apiVersion: secretsmanager.aws.upbound.io/v1beta1
kind: SecretVersion
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-ue1-sv
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-ue1-sv
  name: test-ue1-sv
spec:
  forProvider:
    region: us-east-1
    secretId: test-ue1-token
    secretStringSecretRef:
      key: token
      name: redis-token
      namespace: default
  managementPolicies:
---
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: GlobalReplicationGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-global-rg
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-global-rg
  name: test-global-rg
spec:
  forProvider:
    cacheNodeType: cache.r6g.large
    globalReplicationGroupDescription: ""
    globalReplicationGroupIdSuffix: test
    numNodeGroups: 1
    primaryReplicationGroupId: null
    region: us-east-1
  managementPolicies:
---
apiVersion: elasticache.aws.upbound.io/v1beta1
kind: SubnetGroup
metadata:
  annotations:
    crossplane.io/composition-resource-name: test-uw2-subnet-group
  labels:
    crossplane.io/composite: test-cluster
    managed-resource-name: test-uw2-subnet-group
  name: test-uw2-subnet-group
  ownerReferences:
spec:
  forProvider:
    description: Managed by Crossplane
    region: us-west-2
    subnetIds:
      - subnet-1
      - subnet-2
      - subnet-3
  managementPolicies:
Steps to Reproduce
1. Create a global datastore with a primary and a secondary Redis cluster in two different regions (see the manifests above).
2. Increase the number of shards via the GlobalReplicationGroup; this works fine.
3. Reduce the number of shards (numNodeGroups); the update fails with the panic below.
What happened?
When you try to reduce the number of shards of a Redis cluster that is part of a global replication group, the Crossplane provider throws the following error:
crossplane update failed: async update failed: recovered from panic: runtime error: slice bounds out of range [:1] with capacity 0
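For what it's worth, the message matches the standard Go runtime error for slicing past the capacity of an empty slice. The snippet below is only a minimal, self-contained sketch, not the provider's actual code (the function and variable names are hypothetical), showing how a shard decrease computed from an empty list of node group IDs would produce exactly this message:

package main

import "fmt"

// retainNodeGroups is a hypothetical helper: keep the first `want` node groups
// when shrinking a cluster. ids[:want] panics when want exceeds the slice's
// capacity, e.g. when no node group IDs have been read back yet.
func retainNodeGroups(ids []string, want int) []string {
    return ids[:want]
}

func main() {
    defer func() {
        if r := recover(); r != nil {
            // Prints: recovered from panic: runtime error: slice bounds out of range [:1] with capacity 0
            fmt.Println("recovered from panic:", r)
        }
    }()
    var ids []string // empty slice, capacity 0
    retainNodeGroups(ids, 1)
}

If the provider computes the node groups to retain before the member replication group's node group IDs are populated from the global datastore, that could explain the capacity-0 slice, but this is speculation on my part.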
Relevant Error Output Snippet
No response
Crossplane Version
v1.18
Provider Version
v1.17
Kubernetes Version
No response
Kubernetes Distribution
No response
Additional Info
No response