Add documentation for defining HorizontalPodAutoscaler infinispan#2133
ryanemerson committed Sep 30, 2024
1 parent 32d71f3 commit aa6654f
Showing 4 changed files with 62 additions and 0 deletions.
27 changes: 27 additions & 0 deletions documentation/asciidoc/stories/assembly_auto_scaling.adoc
@@ -0,0 +1,27 @@
ifdef::context[:parent-context: {context}]
[id='auto-scaling']
:context: scaling
= Auto Scaling

[role="_abstract"]
Kubernetes includes the `HorizontalPodAutoscaler`, which automatically scales StatefulSets or Deployments up or down
based on specified metrics. The `Infinispan` CR exposes the `.status.scale` sub-resource, which allows
`HorizontalPodAutoscaler` resources to target the `Infinispan` CR.
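
As background, the scale sub-resource is enabled in the CustomResourceDefinition itself. The following is only an
illustrative sketch of what such a declaration typically looks like; the replica paths shown are assumptions, not a copy
of the Operator's actual CRD:

[source,yaml,options="nowrap"]
----
# Illustrative sketch only: how a CRD typically exposes the scale sub-resource
# so that a HorizontalPodAutoscaler can drive it. The replica paths are assumed
# examples, not taken from the Infinispan Operator's actual CRD.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: infinispans.infinispan.org
spec:
  group: infinispan.org
  names:
    kind: Infinispan
    plural: infinispans
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      scale:
        specReplicasPath: .spec.replicas      # desired pod count
        statusReplicasPath: .status.replicas  # observed pod count
----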

Before defining a `HorizontalPodAutoscaler` configuration, consider the types of {brandname} caches that you define. Distributed
and replicated caches have very different scaling requirements, so defining a `HorizontalPodAutoscaler` for servers running
a combination of these cache types may not be advantageous. For example, a `HorizontalPodAutoscaler` that scales when memory
usage reaches a certain percentage increases overall cache capacity for distributed caches, because entries are spread across
pods. However, it does not help replicated caches, because every pod hosts all cache entries. Conversely, a
`HorizontalPodAutoscaler` based on CPU usage is more beneficial for clusters with replicated caches, because every pod
contains all cache entries, so distributing read requests across additional pods allows a greater number of requests to be
processed simultaneously.
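
As an illustration, the following is a minimal sketch of a memory-based `metrics` stanza that could be used instead of the
CPU-based metric in the example later in this section. The `averageUtilization` value is an assumed figure, not a
recommendation, and utilization targets only work when the pods declare resource requests for the corresponding resource:

[source,yaml,options="nowrap"]
----
# Sketch only: a memory-utilization metric for clusters that serve distributed caches.
# The averageUtilization value is an assumed example, not a recommended setting.
metrics:
- type: Resource
  resource:
    name: memory
    target:
      type: Utilization
      averageUtilization: 70
----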

include::{topics}/proc_configuring_auto_scaling.adoc[leveloffset=+1]

IMPORTANT: Remove any `HorizontalPodAutoscaler` before upgrading a {brandname} cluster. Automatic scaling causes the upgrade
process to enter an unexpected state, because the Operator needs to scale the cluster down to 0 pods during the upgrade.

// Restore the parent context.
ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
1 change: 1 addition & 0 deletions documentation/asciidoc/titles/stories.adoc
@@ -13,6 +13,7 @@ include::{stories}/assembly_network_access.adoc[leveloffset=+1]
include::{stories}/assembly_cross_site_replication.adoc[leveloffset=+1]
include::{stories}/assembly_monitoring.adoc[leveloffset=+1]
include::{stories}/assembly_anti_affinity.adoc[leveloffset=+1]
include::{stories}/assembly_auto_scaling.adoc[leveloffset=+1]
include::{stories}/assembly_cache_cr.adoc[leveloffset=+1]
include::{stories}/assembly_batch_cr.adoc[leveloffset=+1]
include::{stories}/assembly_backing_up_restoring.adoc[leveloffset=+1]
16 changes: 16 additions & 0 deletions documentation/asciidoc/topics/proc_configuring_auto_scaling.adoc
@@ -0,0 +1,16 @@
[id='configuring_auto-scaling-{context}']
= Configuring HorizontalPodAutoscaler

[role="_abstract"]
Create a `HorizontalPodAutoscaler` resource that targets your `Infinispan` CR.

.Procedure

. Define a `HorizontalPodAutoscaler` resource in the same namespace as your `Infinispan` CR.
+
[source,options="nowrap",subs=attributes+]
----
include::yaml/horizontal_pod_autoscaler.yaml[]
----
+
<1> The name of your `Infinispan` CR.
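
The `autoscaling/v2` API also supports an optional `spec.behavior` stanza for controlling how quickly pods are added or
removed. The following fragment is only a sketch with assumed values; slowing scale-down may be worth considering because
removing pods triggers rebalancing of distributed cache data:

[source,yaml,options="nowrap"]
----
# Sketch only: optional scale-down tuning with assumed, illustrative values.
# This fragment would sit under `spec` in the HorizontalPodAutoscaler example above.
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300  # wait 5 minutes of lower metrics before scaling down
    policies:
    - type: Pods
      value: 1                       # remove at most one pod
      periodSeconds: 60              # per minute
----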
18 changes: 18 additions & 0 deletions documentation/asciidoc/topics/yaml/horizontal_pod_autoscaler.yaml
@@ -0,0 +1,18 @@
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: infinispan-auto
spec:
  scaleTargetRef:
    apiVersion: infinispan.org/v1
    kind: Infinispan
    name: example # <1>
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
