Helm Deployment Details

Kubeturbo Deploy via Helm Charts

This Helm deployment supports both Helm 2 and Helm 3. Helm is a Kubernetes package manager that makes it easier to manage charts, which package all of the resources associated with an application. Helm provides a way to package, deploy, and update using simple commands, and lets you customize or update the parameters of your resources without worrying about YAML formatting. For more info see: Helm: The Kubernetes Package Manager

To use this method, you must already have Helm configured for your environment (the Helm client and, if applicable, a Tiller server) and be familiar with using Helm and chart repositories. Go to the Helm Docs for an introduction and overview of Helm if needed.

The Helm Chart option requires you to clone this project to your local environment (the deployment will not work otherwise) and then create a local repo for the chart in this directory here. Index the local repo to be able to deploy kubeturbo, which will create the following resources in the cluster (a clone-and-index sketch follows the list below):

  1. Namespace or Project (default is turbo)
  2. Service Account and binding to the cluster-admin clusterrole (default is "turbo-user" with a "turbo-all-binding"-{My_Kubeturbo_name}-{My_Namespace} binding using a cluster-admin roleRef)
  3. Updated configMap containing required info for kubeturbo to connect to the Turbonomic Server
  4. Deployment of kubeturbo
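
A minimal sketch of the clone-and-index steps referenced above, assuming this repo's URL and the chart path under deploy/ (adjust paths to your environment):

git clone https://github.com/turbonomic/kubeturbo.git
cd kubeturbo/deploy
# Generate an index.yaml so this directory can act as a local chart repo
helm repo index .
# The chart directory ./kubeturbo can now be passed as {HELM-CHART_LOCATION} in the commands below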

Note:

  • The kubeturbo image tag version depends on your Turbonomic Server version: the kubeturbo tag being deployed should always match the Turbonomic Server version you are running. For more info, see CWOM -> Turbonomic Server -> kubeturbo version and review the Releases.

Helm Install Steps

  1. Review general kubeturbo prerequisites
  2. Helm 2 or Helm 3 installed (needed to deploy via helm)
  3. Git installed (needed to clone kubeturbo repo locally)
  4. Clone Kubeturbo repo (needed to use for deployment)
  5. Kubeturbo needs information to find and register with the Turbonomic Server. These Turbonomic Server details are stored in a configMap resource; you set these parameters via the helm install command detailed in the steps below. See the Values table below for an explanation of each parameter and what is required.
  6. Determine whether you can use the default built-in Cluster Role cluster-admin or whether you need to use a custom Cluster Role.
  7. Choose where and how to store Turbonomic Server username and password for kubeturbo to use (one or the other, not both)

Option 1: Use Kubernetes Secret

  1. Create a Kubernetes Secret for use in the deployment; reference the guide here if needed. If no secret exists, kubeturbo falls back to the username and password provided in the configMap, which are set through the restAPIConfig.opsManagerUserName and restAPIConfig.opsManagerPassword parameters in Option 2 below. If neither exists, or the credentials are invalid, kubeturbo will fail to add itself as a target to your Turbonomic Server.
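
A minimal sketch of creating such a secret, assuming the default secret name turbonomic-credentials and the username/password key names described in the guide linked above (substitute your values for { }):

kubectl create secret generic turbonomic-credentials -n {KUBETURBO_NAMESPACE} --from-literal=username={TURBOSERVER_ADMINUSER} --from-literal=password={TURBOSERVER_ADMINUSER_PWD}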

  2. Helm 3 example command to perform a dry run first, to make sure there are no errors in the command; substitute your environment values where you see { }. Make sure to resolve any errors before proceeding to the next step. An illustrative filled-in example follows the command below.
    NOTE: when using the default secret name of turbonomic-credentials you do not need to specify the --set restAPIConfig.turbonomicCredentialsSecretName parameter in the helm command; use it only if you created a secret with a different name.

helm install --dry-run --debug {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.turbonomicCredentialsSecretName={YOUR_CUSTOM_SECRET_NAME} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}
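
For illustration, the same dry run with hypothetical values filled in (the URL, versions, and names below are placeholders, not defaults; the secret-name parameter is omitted because the default name turbonomic-credentials is assumed):

helm install --dry-run --debug kubeturbo ./kubeturbo --namespace turbo --create-namespace --set serverMeta.turboServer=https://turbonomic.example.com --set serverMeta.version=8.9 --set image.tag=8.9.6 --set targetConfig.targetName=prod-cluster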

  3. Helm 3 example command to run the install; substitute your environment values where you see { }.

helm install {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.turbonomicCredentialsSecretName={YOUR_CUSTOM_SECRET_NAME} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}

  4. Go to the Review and Validate Deployment section.

Option 2: Use Plain Text Username and Password

  1. Helm 3 example command to perform a dry run first, to make sure there are no errors in the command; substitute your environment values where you see { }. Make sure to resolve any errors before proceeding to the next step. (This uses a plain-text username and password.)

helm install --dry-run --debug {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.opsManagerUserName={TURBOSERVER_ADMINUSER} --set restAPIConfig.opsManagerPassword={TURBOSERVER_ADMINUSER_PWD} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}

  2. Helm 3 example command to run the install; substitute your environment values where you see { }. (This uses a plain-text username and password.)

helm install {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.opsManagerUserName={TURBOSERVER_ADMINUSER} --set restAPIConfig.opsManagerPassword={TURBOSERVER_ADMINUSER_PWD} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}
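
Note that both your shell and Helm interpret special characters: single-quote the password to protect it from the shell, and escape any literal commas as \, because Helm treats unescaped commas in --set values as separators. A quoting sketch with a hypothetical password:

--set restAPIConfig.opsManagerPassword='p@ssw0rd\,2023'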

  3. Go to the Review and Validate Deployment section.

Review and Validate Deployment

  1. Review the helm command output. If the install is successful, Helm 3 reports the release name, namespace, and STATUS: deployed.

  2. Check that the kubeturbo pod was deployed and is running 1/1 (assuming you deployed into the turbo namespace):

kubectl get pods -n turbo
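
Illustrative output (your pod name and age will differ):

NAME                         READY   STATUS    RESTARTS   AGE
kubeturbo-5f8b6c7d9b-abcde   1/1     Running   0          2m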


  3. Review the Turbonomic UI under Settings -> Target Configurations -> Cloud Native. A new target should have been added automatically, with your cluster name in the Target Name.

  4. If the target does not show up in the Turbonomic Server after about 5 minutes, there was probably an issue with the deployment that needs to be resolved. Review the kubeturbo logs here, as they will give you specific details about what the issue might be. If you cannot resolve the issue, please open a support ticket with IBM Turbonomic Support here.

Values

The following table shows the more commonly used values, which also appear in the default values.yaml. Alternatively, you can edit the values.yaml file in the cloned repo directly, at kubeturbo/deploy/kubeturbo/values.yaml.

Parameters that are default and/or required are noted.

| Parameter | Default Value | Required / Opt to Change | Parameter Type |
| --- | --- | --- | --- |
| image.repository | icr.io/cpopen/turbonomic/kubeturbo (IBM Cloud Container Registry) | optional | path to repo |
| image.tag | {currentVersion} | optional | kubeturbo tag |
| image.pullPolicy | IfNotPresent | optional | |
| image.busyboxRepository | busybox | optional | Busybox repository. This is overridden by cpufreqgetterRepository |
| image.cpufreqgetterRepository | icr.io/cpopen/turbonomic/cpufreqgetter | optional | Repository used to get node cpufrequency |
| image.imagePullSecret | | optional | Define the secret used to authenticate to the container image registry |
| roleName | cluster-admin | optional | Specify a custom turbo-cluster-reader or turbo-cluster-admin role instead of the default cluster-admin role |
| roleBinding | turbo-all-binding-{My_Kubeturbo_name}-{My_Namespace} | optional | Specify the name of the clusterrolebinding |
| serviceAccountName | turbo-user | optional | Specify the name of the serviceaccount |
| serverMeta.version | 8.1 | required | number x.y that represents your Turbonomic Server version |
| serverMeta.turboServer | | required | https URL to log in to the Server |
| serverMeta.proxy | | optional | Proxy URL: http://username:password@proxyserver:proxyport or http://proxyserver:proxyport |
| restAPIConfig.opsManagerUserName | | required, or use a k8s secret | Turbonomic Server user (local or AD) with admin role |
| restAPIConfig.opsManagerPassword | | required, or use a k8s secret | Turbonomic Server user's password |
| restAPIConfig.turbonomicCredentialsSecretName | turbonomic-credentials | required only if using a secret and not taking the default secret name | secret that contains the Turbonomic Server admin username and password |
| targetConfig.targetName | "Your_k8s_cluster" | optional, but required for multiple clusters | String; how you want to identify your cluster |
| targetConfig.targetType | "Your_k8s_cluster" | optional; to be deprecated | String; to be used only for UI manual setup |
| args.logginglevel | 2 | optional | number |
| args.kubelethttps | true | optional; change to false if k8s 1.10 or older | boolean |
| args.kubeletport | 10250 | optional; change to 10255 if k8s 1.10 or older | number |
| args.stitchuuid | true | optional; change to false if IaaS is VMM or Hyper-V | boolean |
| args.pre16k8sVersion | false | optional | if the Kubernetes version is older than 1.6, add another arg for move/resize actions |
| args.cleanupSccImpersonationResources | true | optional | clean up the resources for SCC impersonation by default |
| args.sccsupport | | required for OCP clusters; see here for more details | |
| HANodeConfig.nodeRoles | "\"master\"" | optional | Used to automate policies that keep nodes of the same role limited to 1 instance per ESX host or AZ (starting with 6.4.3+). Regex used; values in quotes and comma separated: "\"master\"" (default), "\"worker\"", "\"app\"", etc. |
| daemonPodDetectors.daemonPodNamespaces1 and daemonPodNamespaces2 | daemonSet kinds are allowed for node suspension by default; adding this parameter changes that default | optional, but required to identify pods in these namespaces to be ignored for cluster consolidation | regex used; values in quotes and comma separated: "kube-system","kube-service-catalog","openshift-.*" |
| daemonPodDetectors.daemonPodNamePatterns | daemonSet kinds are allowed for node suspension by default; adding this parameter changes that default | optional, but required to identify pods matching this pattern to be ignored for cluster consolidation | regex used: .*ignorepod.* |
| annotationWhitelist | | optional | Regular expressions that allow kubeturbo to collect matching annotations for the specified entity type. By default, no annotations are collected. These regular expressions accept the RE2 syntax (except for \C) as defined here: https://github.com/google/re2/wiki/Syntax |
| logging.level | 2 | optional | Changing the logging level here does not require a pod restart, but takes about 1 minute to take effect |

For more on HANodeConfig, go to Node Role Policies and view the default values.yaml. For more on daemonPodDetectors, go to the YAMLs deploy option wiki page or YAMLS_README.md under kubeturbo/deploy/kubeturbo_yamls/.
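
If the list of --set flags becomes unwieldy, the same overrides can be captured in a file and passed with -f. A minimal sketch, with hypothetical values and only a few of the keys from the table above (the YAML nesting mirrors the dotted parameter names):

cat > my-values.yaml <<'EOF'
serverMeta:
  turboServer: https://turbonomic.example.com
  version: "8.9"
restAPIConfig:
  turbonomicCredentialsSecretName: turbonomic-credentials
targetConfig:
  targetName: prod-cluster
EOF

helm install {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace -f my-values.yaml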

Deprecated parameters

| Parameter | Default Value | Required / Opt to Change | Parameter Type |
| --- | --- | --- | --- |
| masterNodeDetectors.nodeNamePatterns | node name includes .*master.* | Deprecated in kubeturbo v6.4.3+. Used in 6.3-6.4.2 to avoid suspending masters identified by node name; if there is no match, this is ignored | string; regex used, example: .*master.* |
| masterNodeDetectors.nodeLabels | any value for label key node-role.kubernetes.io/master | Deprecated in kubeturbo v6.4.3+. Used in 6.3-6.4.2 to avoid suspending masters identified by a node label key-value pair; if there is no match, this is ignored | regex used; specify the key as masterNodeDetectors.nodeLabelsKey (such as node-role.kubernetes.io/master) and the value as masterNodeDetectors.nodeLabelsValue (such as .*) |

Working with a Private Repo

If you would like to pull required container images into your own repo, refer to this article here.

Kubeturbo Logging

For details on how to collect and configure Kubeturbo Logging go here.

Updating Turbonomic Server Release version and Kubeturbo version

When you update the "Release" of your Turbonomic or CWOM Server, for example from 8.8.6 to 8.9.6, you also need to update the "Release" number in the configMap resource to reflect that change, such as from 8.8 to 8.9. Additionally, you may need to update the kubeturbo pod image: upgrading your Turbonomic Server requires a matching kubeturbo image tag version change, you may be instructed by IBM Turbonomic Support to use a new image, or you may want to refresh the image to pick up a patch or new feature. Determine which new tag version you will be using by going here: CWOM -> Turbonomic Server -> kubeturbo version and reviewing the Releases.

  1. After the update, obtain the new Turbonomic Server version. To get this from the UI, go to Settings -> Updates and use the numeric version such as “8.8.6” or “8.9.6” (Build details not required)

  2. Update the values specific to your environment; substitute your values for { }:

helm upgrade {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.turbonomicCredentialsSecretName={YOUR_CUSTOM_SECRET_NAME} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}

  3. The kubeturbo pod should restart to pick up the new values.

  4. Repeat for every Kubernetes / OpenShift cluster with a kubeturbo pod deployed.
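
A quick way to confirm the pods picked up the new image tag (a sketch; adjust the namespace if you deployed elsewhere):

kubectl get pods -n turbo -o jsonpath='{.items[*].spec.containers[*].image}'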


There's no place like home... go back to the Turbonomic Wiki Home or the Kubeturbo Deployment Options.
