Commit 1663227

Promote Develop to main for Splunk Operator Release 2.5.0 (#1273)
* cspl-2505: add Pod Security standard to restricted (#1266)

* add Pod Security standard to restricted

* helm chart changes

Signed-off-by: vivekr-splunk <[email protected]>

* helm chart packages for 2.5

* removed secret

---------

Signed-off-by: vivekr-splunk <[email protected]>
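
For reference, the restricted-mode hardening introduced by this change boils down to the container `securityContext` below — a minimal sketch that mirrors the additions to `config/manager/manager.yaml` and the bundle CSV shown in the diff later on this page.

```yaml
# Restricted Pod Security profile settings applied to the operator containers
# (mirrors the securityContext additions in this commit's manifests).
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  capabilities:
    drop:
      - "ALL"
    add:
      - "NET_BIND_SERVICE"
  seccompProfile:
    type: "RuntimeDefault"
```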

* level2: Support for Level-2 Upgrade Strategy in Splunk Operator (#1262)

* CSPL-2094, 2342: Upgrade Strategy for LM and CM (#1181)

* Added changeAnnotation method

* Refined changeClusterManagerAnnotations

* test case for upgrade scenario

* Modified kuttl cases

* Added kuttl tests; Updated LicenseMaster

* Fixed uninstall kuttl test

* Fixed unit test

* Removed changeAnnotation from licenseMaster

* Added branch in int-tests

* Completed code coverage tests

* Added upgradeScenario and related methods for CM

* Added label selectors to get Current Image

* Changed pod.Spec to pod.Status

* Added changeAnnotations for MC

* Added kuttl test cases

* Fixed unit test

* Fixed SmartStore unit test

* Added code coverage test

* using fake client instead of mock

* removed creating statefulset and service

* Corrected LMCurrentImage method

* Completed Coverage tests for CM

* Refined changeClusterManagerAnnotations

* test case for upgrade scenario

* Modified kuttl cases

* Added kuttl tests; Updated LicenseMaster

* Fixed unit test

* Removed changeAnnotation from licenseMaster

* Completed code coverage tests

* Resolved all conflict issues

* Added comments

* Updated upgradeScenario to check if statefulSet exists

* Fixed Unit tests

* Added common APIs, changed upgrade condition

* Added only warning if annotation not found

* Add warning

* Updated upgradeCondition

* updated changeAnnotation to work with no ref

* Fixed unit tests

* Handled not found error

* Removed blank lines; handled errors in changeAnnotation

* Only call changeAnnotation if LM is ready

* Removed redundant checks

* Return if CM list is empty

* removed superfluous nil err check

* Removed branch from workflow

---------

Co-authored-by: vivekr-splunk <[email protected]>

* CSPL-2343: Upgrade Strategy for MC (#1194)

* Added changeAnnotation method

* Refined changeClusterManagerAnnotations

* test case for upgrade scenario

* Modified kuttl cases

* Added kuttl tests; Updated LicenseMaster

* Fixed uninstall kuttl test

* Fixed unit test

* Removed changeAnnotation from licenseMaster

* Added branch in int-tests

* Completed code coverage tests

* Added upgradeScenario and related methods for CM

* Added label selectors to get Current Image

* Changed pod.Spec to pod.Status

* Added changeAnnotations for MC

* Added kuttl test cases

* Fixed unit test

* Fixed SmartStore unit test

* Added code coverage test

* using fake client instead of mock

* removed creating statefulset and service

* Corrected LMCurrentImage method

* Completed Coverage tests for CM

* Refined changeClusterManagerAnnotations

* test case for upgrade scenario

* Modified kuttl cases

* Added kuttl tests; Updated LicenseMaster

* Fixed unit test

* Removed changeAnnotation from licenseMaster

* Completed code coverage tests

* Resolved all conflict issues

* Added comments

* Updated upgradeScenario to check if statefulSet exists

* Fixed Unit tests

* Added common APIs, changed upgrade condition

* Added only warning if annotation not found

* Add warning

* Updated upgradeCondition

* updated changeAnnotation to work with no ref

* Fixed unit tests

* Handled not found error

* Added MC functions

* Removed blank lines; handled errors in changeAnnotation

* Only call changeAnnotation if LM is ready

* Removed redundant checks

* Return if CM list is empty

* removed superfluous nil err check

* Removed branch from workflow

* Added branch to workflow

* Fixed comment

* Fixed unit test

* Improved comment for the upgrade condition

* Removed branch from workflow

---------

Co-authored-by: vivekr-splunk <[email protected]>

* Level-2: Single state machine for Level-2 support (#1216)

* Added SHC functions

* Check error in change annotation

* Added Single Site IDX functions

* Added functional test case

* Removed Change annotation; Added TODO

* Added documentation

* Added multisite func

* Added branch to workflow

* Committing

* Added specific test

* Changed image

* Added cm ref

* Removed cm ref

* Only CM and LM

* Added image output

* Added mc change image

* Added shc change image

* Fixed shc name

* Added idxc

* Check this

* Test with only CM, SHC, IDX

* Test with only CM, IDX

* Test with only LM, CM, SHC, IDX

* Test only with CM, MC, SHC, IDX

* Added cm ref to CM, MC, SHC, IDX

* All the instances

* Test with LM,CM,MC

* Check without cm ref

* Cm Ref + LM,CM,MC,SHC

* CM ref + LM,CM,MC,IDX

* Testing all with no idx update code

* Fixed commit

* All + only single site

* With everything

* Fixed mgr client

* Final

* one stop for all the upgrade scenarios

Signed-off-by: vivekr-splunk <[email protected]>

* added upgradepath to clustermanager

Signed-off-by: vivekr-splunk <[email protected]>

* added upgradepath to all CR

Signed-off-by: vivekr-splunk <[email protected]>

* Made changes in upgrade checks

* some more changes to fix test case for upgrade scenario

Signed-off-by: vivekr-splunk <[email protected]>

* ignore tel app install in unit test

Signed-off-by: vivekr-splunk <[email protected]>

* intermittent changes

Signed-off-by: vivekr-splunk <[email protected]>

* fixed searchhead cluster,mc, lm, cm

Signed-off-by: vivekr-splunk <[email protected]>

* fixed test case

Signed-off-by: vivekr-splunk <[email protected]>

* working test code for upgrade

Signed-off-by: vivekr-splunk <[email protected]>

* unit test cases fixed

Signed-off-by: vivekr-splunk <[email protected]>

* added comments to the new code

Signed-off-by: vivekr-splunk <[email protected]>

* fixed some test cases

Signed-off-by: vivekr-splunk <[email protected]>

* fixed some test cases

Signed-off-by: vivekr-splunk <[email protected]>

* formatting changes

Signed-off-by: vivekr-splunk <[email protected]>

* addressed review comments

Signed-off-by: vivekr-splunk <[email protected]>

* changing go to 1.21

Signed-off-by: vivekr-splunk <[email protected]>

* changing go to 1.21

Signed-off-by: vivekr-splunk <[email protected]>

* changing go to 1.21

Signed-off-by: vivekr-splunk <[email protected]>

* adding this branch for int test pipeline

Signed-off-by: vivekr-splunk <[email protected]>

* test case fix - adding extra timeout

* test case fix - adding extra timeout

* changed splunk version to 9.1.2

* changed order in the test case for level-2 support

* changing timeout so test can pass

Signed-off-by: vivekr-splunk <[email protected]>

* changed order first search and then index

Signed-off-by: vivekr-splunk <[email protected]>

* adding back kind name in controller

* adding more timeout

Signed-off-by: vivekr-splunk <[email protected]>

* increasing to 10 min

Signed-off-by: vivekr-splunk <[email protected]>

* increasing overall test run to 6 hours

Signed-off-by: vivekr-splunk <[email protected]>

* doc changes

Signed-off-by: vivekr-splunk <[email protected]>

* just run m4 tests

Signed-off-by: vivekr-splunk <[email protected]>

* just run c3 test

Signed-off-by: vivekr-splunk <[email protected]>

* enabled all the test

Signed-off-by: vivekr-splunk <[email protected]>

* fixed go libraries

* increasing time to 7h for test

* adding helm test

* removed unused functions

* adding comment

---------

Signed-off-by: vivekr-splunk <[email protected]>
Signed-off-by: vivekr-splunk <[email protected]>
Co-authored-by: Tanya Garg <[email protected]>

* fixed test case

* fixed helm test cases

* fixed helm test case

Signed-off-by: vivekr-splunk <[email protected]>

* adding gp2 to helm test

Signed-off-by: vivekr-splunk <[email protected]>

* fixed topologyspread constraint test case

---------

Signed-off-by: vivekr-splunk <[email protected]>
Signed-off-by: vivekr-splunk <[email protected]>
Co-authored-by: tgarg-splunk <[email protected]>
Co-authored-by: Tanya Garg <[email protected]>
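
The Level-2 upgrade strategy described above is driven by the image set on each custom resource: when a newer Splunk Enterprise image is applied, the operator restarts the affected pods along the recommended upgrade path (for example, a Cluster Manager before the indexer peers connected to it, with the Monitoring Console last). A minimal, hypothetical way to trigger it is sketched below; the API version, resource name, and namespace are illustrative assumptions, not taken from this commit.

```yaml
# Hypothetical example: bumping the Splunk Enterprise image on a ClusterManager CR.
# apiVersion, metadata values, and the exact field layout are assumptions based on
# recent Splunk Operator releases; adjust them to your deployment.
apiVersion: enterprise.splunk.com/v4
kind: ClusterManager
metadata:
  name: cm
  namespace: splunk-operator
spec:
  image: splunk/splunk:9.1.3   # raising the image tag starts the ordered upgrade
```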

* helm test case fix (#1270)

* fixed c3 test case

* adding helm test

* fixed c3 with operator test case

Signed-off-by: vivekr-splunk <[email protected]>

---------

Signed-off-by: vivekr-splunk <[email protected]>

* Splunk Operator 2.5.0 release (#1271)

* [create-pull-request] automated change

* adding helm 2.5.0 packages

* cleanup workflows

* adding env changes

* adding bundle changes

* adding bundle changes

* changing eks version to 1.27

* changing splunk version to 9.1.2

* updated changelog

* updated changelog

---------

Co-authored-by: vivekr-splunk <[email protected]>
Co-authored-by: vivekr-splunk <[email protected]>

* setting splunk version to 9.1.3

* removing unwanted file

* removed unused files (#1276)

* Update helm-test-workflow.yml

---------

Signed-off-by: vivekr-splunk <[email protected]>
Signed-off-by: vivekr-splunk <[email protected]>
Co-authored-by: gaurav-splunk <[email protected]>
Co-authored-by: vivekr-splunk <[email protected]>
Co-authored-by: tgarg-splunk <[email protected]>
Co-authored-by: Tanya Garg <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: vivekr-splunk <[email protected]>
Co-authored-by: vivekr-splunk <[email protected]>
7 people authored Feb 5, 2024
1 parent e70c93e commit 1663227
Showing 97 changed files with 3,124 additions and 824 deletions.
12 changes: 6 additions & 6 deletions .env
@@ -1,9 +1,9 @@
-OPERATOR_SDK_VERSION=v1.28.1
-REVIEWERS=smohan-splunk,sgontla,gaurav-splunk,vivekr-splunk,kumarajeet
-GO_VERSION=1.19.2
+OPERATOR_SDK_VERSION=v1.31.0
+REVIEWERS=vivekr-splunk,akondur
+GO_VERSION=1.21.5
AWSCLI_URL=https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.8.6.zip
-KUBECTL_VERSION=v1.28.0
+KUBECTL_VERSION=v1.29.1
AZ_CLI_VERSION=2.30.0
EKSCTL_VERSION=v0.143.0
-EKS_CLUSTER_K8_VERSION=1.26
-SPLUNK_ENTERPRISE_RELEASE_IMAGE=splunk/splunk:9.1.1
+EKS_CLUSTER_K8_VERSION=1.27
+SPLUNK_ENTERPRISE_RELEASE_IMAGE=splunk/splunk:9.1.3
1 change: 1 addition & 0 deletions .github/workflows/int-test-workflow.yml
@@ -8,6 +8,7 @@ on:
jobs:
build-operator-image:
runs-on: ubuntu-latest
+timeout-minutes: 360
env:
SPLUNK_ENTERPRISE_IMAGE: ${{ secrets.SPLUNK_ENTERPRISE_IMAGE }}
SPLUNK_OPERATOR_IMAGE_NAME: splunk/splunk-operator
6 changes: 3 additions & 3 deletions Makefile
@@ -3,17 +3,17 @@
# To re-generate a bundle for another specific version without changing the standard setup, you can:
# - use the VERSION as arg of the bundle target (e.g make bundle VERSION=0.0.2)
# - use environment variables to overwrite this value (e.g export VERSION=0.0.2)
-VERSION ?= 2.2.1
+VERSION ?= 2.5.0

# SPLUNK_ENTERPRISE_IMAGE defines the splunk docker tag that is used as default image.
SPLUNK_ENTERPRISE_IMAGE ?= "docker.io/splunk/splunk:edge"

# WATCH_NAMESPACE defines if its clusterwide operator or namespace specific
# by default we leave it as clusterwide if it has to be namespace specific,
# add namespace to this
WATCH_NAMESPACE ?= ""

# NAMESPACE defines default namespace where operator will be installed
# NAMESPACE defines default namespace where operator will be installed

# CHANNELS define the bundle channels used in the bundle.
22 changes: 18 additions & 4 deletions bundle/manifests/splunk-operator.clusterserviceversion.yaml
@@ -111,7 +111,7 @@ metadata:
capabilities: Seamless Upgrades
categories: Big Data, Logging & Tracing, Monitoring, Security, AI/Machine Learning
containerImage: splunk/splunk-operator@sha256:c4e0d314622699496f675760aad314520d050a66627fdf33e1e21fa28ca85d50
createdAt: "2023-10-06T22:35:48Z"
createdAt: "2024-01-22T21:05:16Z"
description: The Splunk Operator for Kubernetes enables you to quickly and easily
deploy Splunk Enterprise on your choice of private or public cloud provider.
The Operator simplifies scaling and management of Splunk Enterprise by automating
@@ -788,8 +788,15 @@ spec:
memory: 64Mi
securityContext:
allowPrivilegeEscalation: false
+capabilities:
+  add:
+  - NET_BIND_SERVICE
+  drop:
+  - ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
+seccompProfile:
+  type: RuntimeDefault
- args:
- --leader-elect
- --pprof
@@ -801,14 +808,14 @@
fieldRef:
fieldPath: metadata.annotations['olm.targetNamespaces']
- name: RELATED_IMAGE_SPLUNK_ENTERPRISE
-value: docker.io/splunk/splunk:9.1.1
+value: docker.io/splunk/splunk:9.1.3
- name: OPERATOR_NAME
value: splunk-operator
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
-image: docker.io/splunk/splunk-operator:2.4.0
+image: docker.io/splunk/splunk-operator:2.5.0
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -832,8 +839,15 @@
memory: 2000Mi
securityContext:
allowPrivilegeEscalation: false
+capabilities:
+  add:
+  - NET_BIND_SERVICE
+  drop:
+  - ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
+seccompProfile:
+  type: RuntimeDefault
volumeMounts:
- mountPath: /opt/splunk/appframework/
name: app-staging
@@ -913,7 +927,7 @@ spec:
name: Splunk Inc.
url: www.splunk.com
relatedImages:
-- image: docker.io/splunk/splunk:9.1.1
+- image: docker.io/splunk/splunk:9.1.3
name: splunk-enterprise
replaces: splunk-operator.v2.2.0
version: 2.2.1
2 changes: 1 addition & 1 deletion config/default/kustomization.yaml
@@ -124,7 +124,7 @@ patches:
- name: WATCH_NAMESPACE
value: WATCH_NAMESPACE_VALUE
- name: RELATED_IMAGE_SPLUNK_ENTERPRISE
-value: docker.io/splunk/splunk:9.1.1
+value: docker.io/splunk/splunk:9.1.3
- name: OPERATOR_NAME
value: splunk-operator
- name: POD_NAME
7 changes: 7 additions & 0 deletions config/default/manager_auth_proxy_patch.yaml
@@ -18,6 +18,13 @@ spec:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
+capabilities:
+  drop:
+  - "ALL"
+  add:
+  - "NET_BIND_SERVICE"
+seccompProfile:
+  type: "RuntimeDefault"
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
args:
- "--secure-listen-address=0.0.0.0:8443"
2 changes: 1 addition & 1 deletion config/manager/kustomization.yaml
@@ -17,4 +17,4 @@ kind: Kustomization
images:
- name: controller
newName: docker.io/splunk/splunk-operator
-newTag: 2.4.0
+newTag: 2.5.0
9 changes: 8 additions & 1 deletion config/manager/manager.yaml
@@ -17,7 +17,7 @@ spec:
matchLabels:
control-plane: controller-manager
name: splunk-operator
strategy:
type: Recreate
replicas: 1
template:
@@ -54,6 +54,13 @@ spec:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
+capabilities:
+  drop:
+  - "ALL"
+  add:
+  - "NET_BIND_SERVICE"
+seccompProfile:
+  type: "RuntimeDefault"
livenessProbe:
httpGet:
path: /healthz
2 changes: 1 addition & 1 deletion docs/AppFramework.md
@@ -542,7 +542,7 @@ spec:
serviceAccountName: splunk-operator
containers:
- name: splunk-operator
image: "docker.io/splunk/splunk-operator:2.4.0"
image: "docker.io/splunk/splunk-operator:2.5.0"
volumeMounts:
- mountPath: /opt/splunk/appframework/
name: app-staging
18 changes: 18 additions & 0 deletions docs/ChangeLog.md
@@ -1,5 +1,23 @@
# Splunk Operator for Kubernetes Change Log

## 2.5.0 (2024-01-31)

CSPL-2155: Support for Level-2 Upgrade Strategy in Splunk Operator

CSPL-2505: Pod Security standard set to restricted mode

### Supported Splunk Version
>| Splunk Version|
>| --- |
>| 9.0.8 |
>| 9.1.3 |
### Supported Kubernetes Version
>| Kubernetes Version|
>| --- |
>| 1.25+ |

## 2.4.0 (2023-10-13)

* This is the 2.4.0 release. The Splunk Operator for Kubernetes is a supported platform for deploying Splunk Enterprise with the prerequisites and constraints laid out [here](https://github.com/splunk/splunk-operator/blob/main/docs/README.md#prerequisites-for-the-splunk-operator)
8 changes: 4 additions & 4 deletions docs/Install.md
@@ -7,7 +7,7 @@
If you want to customize the installation of the Splunk Operator, download a copy of the installation YAML locally, and open it in your favorite editor.

```
-wget -O splunk-operator-cluster.yaml https://github.com/splunk/splunk-operator/releases/download/2.4.0/splunk-operator-cluster.yaml
+wget -O splunk-operator-cluster.yaml https://github.com/splunk/splunk-operator/releases/download/2.5.0/splunk-operator-cluster.yaml
```

## Default Installation
@@ -17,7 +17,7 @@ Based on the file used Splunk Operator can be installed cluster-wide or namespac
By installing `splunk-operator-cluster.yaml` Operator will watch all the namespaces of your cluster for splunk enterprise custom resources

```
-wget -O splunk-operator-cluster.yaml https://github.com/splunk/splunk-operator/releases/download/2.4.0/splunk-operator-cluster.yaml
+wget -O splunk-operator-cluster.yaml https://github.com/splunk/splunk-operator/releases/download/2.5.0/splunk-operator-cluster.yaml
kubectl apply -f splunk-operator-cluster.yaml
```

@@ -44,10 +44,10 @@ If Splunk Operator is installed clusterwide and user wants to manage multiple na

## Install operator to watch single namespace with restrictive permission

-In order to install operator with restrictive permission to watch only single namespace use [splunk-operator-namespace.yaml](https://github.com/splunk/splunk-operator/releases/download/2.4.0/splunk-operator-namespace.yaml). This will create Role and Role-Binding to only watch single namespace. By default operator will be installed in `splunk-operator` namespace, user can edit the file to change the namespace
+In order to install operator with restrictive permission to watch only single namespace use [splunk-operator-namespace.yaml](https://github.com/splunk/splunk-operator/releases/download/2.5.0/splunk-operator-namespace.yaml). This will create Role and Role-Binding to only watch single namespace. By default operator will be installed in `splunk-operator` namespace, user can edit the file to change the namespace

```
-wget -O splunk-operator-namespace.yaml https://github.com/splunk/splunk-operator/releases/download/2.4.0/splunk-operator-namespace.yaml
+wget -O splunk-operator-namespace.yaml https://github.com/splunk/splunk-operator/releases/download/2.5.0/splunk-operator-namespace.yaml
kubectl apply -f splunk-operator-namespace.yaml
```

4 changes: 2 additions & 2 deletions docs/README.md
@@ -113,12 +113,12 @@ For production environments, we are requiring the use of Splunk SmartStore. As a

A Kubernetes cluster administrator can install and start the Splunk Operator for specific namespace by running:
```
-kubectl apply -f https://github.com/splunk/splunk-operator/releases/download/2.4.0/splunk-operator-namespace.yaml --server-side --force-conflicts
+kubectl apply -f https://github.com/splunk/splunk-operator/releases/download/2.5.0/splunk-operator-namespace.yaml --server-side --force-conflicts
```

A Kubernetes cluster administrator can install and start the Splunk Operator for cluster-wide by running:
```
-kubectl apply -f https://github.com/splunk/splunk-operator/releases/download/2.4.0/splunk-operator-cluster.yaml --server-side --force-conflicts
+kubectl apply -f https://github.com/splunk/splunk-operator/releases/download/2.5.0/splunk-operator-cluster.yaml --server-side --force-conflicts
```

The [Advanced Installation Instructions](Install.md) page offers guidance for advanced configurations, including the use of private image registries, installation at cluster scope, and installing the Splunk Operator as a user who is not a Kubernetes administrator. Users of Red Hat OpenShift should review the [Red Hat OpenShift](OpenShift.md) page.
Expand Down
25 changes: 15 additions & 10 deletions docs/SplunkOperatorUpgrade.md
@@ -1,6 +1,6 @@
# How to upgrade Splunk Operator and Splunk Enterprise Deployments

To upgrade the Splunk Operator for Kubernetes, you will overwrite the prior Operator release with the latest version. Once the latest version of `splunk-operator-namespace.yaml` ([see below](#upgrading-splunk-operator-and-splunk-operator-deployment)) is applied, the CRDs are updated and the Operator deployment is updated with the newer version of the Splunk Operator image. Any new spec defined by the operator will be applied to the pods managed by Splunk Operator for Kubernetes.
A Splunk Operator for Kubernetes upgrade might include support for a later version of the Splunk Enterprise Docker image. In that scenario, after the Splunk Operator completes its upgrade, the pods managed by Splunk Operator for Kubernetes will be restarted using the latest Splunk Enterprise Docker image.
@@ -10,7 +10,7 @@ A Splunk Operator for Kubernetes upgrade might include support for a later versi
* Before you upgrade, review the Splunk Operator [change log](https://github.com/splunk/splunk-operator/releases) page for information on changes made in the latest release. The Splunk Enterprise Docker image compatibility is noted in each release version.
* If the Splunk Enterprise Docker image changes, review the Splunk Enterprise [Upgrade Readme](https://docs.splunk.com/Documentation/Splunk/latest/Installation/AboutupgradingREADTHISFIRST) page before upgrading.
* For general information about Splunk Enterprise compatibility and the upgrade process, see [How to upgrade Splunk Enterprise](https://docs.splunk.com/Documentation/Splunk/latest/Installation/HowtoupgradeSplunk).
@@ -25,7 +25,7 @@ A Splunk Operator for Kubernetes upgrade might include support for a later versi
1. Download the latest Splunk Operator installation yaml file.
```
-wget -O splunk-operator-namespace.yaml https://github.com/splunk/splunk-operator/releases/download/2.4.0/splunk-operator-namespace.yaml
+wget -O splunk-operator-namespace.yaml https://github.com/splunk/splunk-operator/releases/download/2.5.0/splunk-operator-namespace.yaml
```
2. (Optional) Review the file and update it with your specific customizations used during your install.
@@ -44,9 +44,12 @@ NAME READY STATUS RESTARTS
splunk-operator-controller-manager-75f5d4d85b-8pshn 1/1 Running 0 5s
```
-If a Splunk Operator release includes an updated Splunk Enterprise Docker image, the operator upgrade will also initiate pod restart using the latest Splunk Enterprise Docker image.
If a Splunk Operator release changes the custom resource (CRD) API version, the administrator is responsible for updating their Custom Resource specification to reference the latest CRD API version.

### Upgrading Splunk Enterprise Docker Image with the Operator Upgrade

Splunk Operator follows the upgrade path steps mentioned in [Splunk documentation](https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/HowtoupgradeSplunk). If a Splunk Operator release includes an updated Splunk Enterprise Docker image, the operator upgrade will also initiate a pod restart using the latest Splunk Enterprise Docker image. To follow the best practices described under the [General Process to Upgrade the Splunk Enterprise], a recommended upgrade path is followed while initiating pod restarts of the different Splunk instances. At each step, if a particular CR instance exists, a certain flow is imposed to ensure that each instance is updated in the correct order. After an instance is upgraded, the Operator verifies whether the upgrade was successful and all the components are working as expected. If any unexpected behaviour is detected, the process is terminated.


## Steps to Upgrade from 1.0.5 or older version to latest

@@ -128,7 +131,7 @@ imagePullPolicy: IfNotPresent
```bash
kubectl get pod <splunk_operator_pod> -o yaml | grep -i image
image: docker.io/splunk/splunk-operator:<desired_operator_version>
imagePullPolicy: IfNotPresent
```
To verify that a new Splunk Enterprise Docker image was applied to a pod, you can check the version of the image. Example:
@@ -143,8 +146,10 @@ imagePullPolicy: IfNotPresent
This is an example of the process followed by the Splunk Operator if the operator version is upgraded and a later Splunk Enterprise Docker image is available:
1. A new Splunk Operator pod will be created, and the existing operator pod will be terminated.
2. Any existing License Manager, Search Head, Deployer, ClusterManager, Standalone pods will be terminated to be redeployed with the upgraded spec.
3. After a ClusterManager pod is restarted, the Indexer Cluster pods which are connected to it are terminated and redeployed.
4. After all pods in the Indexer cluster and Search head cluster are redeployed, the Monitoring Console pod is terminated and redeployed.
3. Any existing License Manager, Standalone, Monitoring console, Cluster manager, Search Head, ClusterManager, Indexer pods will be terminated to be redeployed with the upgraded spec.
4. Splunk Operator follows the upgrade path steps mentioned in Splunk documentation. The termination and redeployment of the pods happen in a particular order based on a recommended upgrade path.
5. After a ClusterManager pod is restarted, the Indexer Cluster pods which are connected to it are terminated and redeployed.
6. After all pods in the Indexer cluster and Search head cluster are redeployed, the Monitoring Console pod is terminated and redeployed.
7. Each pod upgrade is verified to ensure the process was successful and everything is working as expected.

* Note: If there are multiple pods per Custom Resource, the pods are terminated and re-deployed in a descending order, with the highest numbered pod going first.