docs(infraex): bump versions and fix links
Langleu authored Oct 7, 2024
1 parent 09b596b commit aeed973
Showing 14 changed files with 108 additions and 160 deletions.
12 changes: 6 additions & 6 deletions docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md
@@ -21,9 +21,9 @@ This guide requires you to have previously completed or reviewed the steps taken
## Prerequisites

- An [AWS account](https://docs.aws.amazon.com/accounts/latest/reference/accounts-welcome.html) to create resources within AWS.
- [Helm (3.x)](https://helm.sh/docs/intro/install/) for installing and upgrading the [Camunda Helm chart](https://github.com/camunda/camunda-platform-helm).
- [Kubectl (1.30.x)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the cluster.
- [Terraform (1.9.x)](https://developer.hashicorp.com/terraform/downloads)
- [Helm (3.16+)](https://helm.sh/docs/intro/install/) for installing and upgrading the [Camunda Helm chart](https://github.com/camunda/camunda-platform-helm).
- [Kubectl (1.30+)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the cluster.
- [Terraform (1.9+)](https://developer.hashicorp.com/terraform/downloads)

## Considerations

@@ -79,7 +79,7 @@ Using the same namespace names on both clusters won't work as CoreDNS won't be a

The leading dot is required so the variables are exported into your current shell rather than into a spawned subshell.

```shell
```shell reference
https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/export_environment_prerequisites.sh
```
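
For illustration, sourcing the script from a clone of the `c8-multi-region` repository would look like the following; the working directory and path are assumptions based on the repository layout linked above:

```shell
# The leading dot sources the script so the exported variables remain in the
# current shell instead of disappearing with a spawned subshell.
. ./aws/dual-region/scripts/export_environment_prerequisites.sh
```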

@@ -113,9 +113,9 @@ Do not store sensitive information (credentials) in your Terraform files.

This file uses [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow abstracting resources into reusable components.

The [Camunda provided module](https://github.com/camunda/camunda-tf-eks-module) is publicly available. It's advisable to review this module before usage.
The [Camunda provided module](https://github.com/camunda/camunda-tf-eks-module/tree/2.5.0/modules/eks-cluster) is publicly available. It's advisable to review this module before usage.

There are various other input options to customize the cluster setup further. See the [module documentation](https://github.com/camunda/camunda-tf-eks-module) for additional details.
There are various other input options to customize the cluster setup further. See the [module documentation](https://github.com/camunda/camunda-tf-eks-module/tree/2.5.0/modules/eks-cluster) for additional details.

This file contains the declaration of the two clusters. One of them has an explicit provider declaration, as otherwise everything would be deployed to the default AWS provider, which is limited to a single region.
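
A rough sketch of that layout is shown below; the region names, cluster names, and provider alias are placeholders rather than the values used by the reference file:

```hcl
# Default provider: used implicitly by the first cluster
provider "aws" {
  region = "eu-west-2" # placeholder primary region
}

# Aliased provider for the second region
provider "aws" {
  alias  = "secondary"
  region = "eu-west-3" # placeholder secondary region
}

module "eks_cluster_region_1" {
  source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/eks-cluster?ref=2.5.0"
  name   = "camunda-cluster-one" # placeholder
  region = "eu-west-2"
}

module "eks_cluster_region_2" {
  source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/eks-cluster?ref=2.5.0"
  name   = "camunda-cluster-two" # placeholder
  region = "eu-west-3"

  # Explicit provider mapping so this cluster is created in the second region
  providers = {
    aws = aws.secondary
  }
}
```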

34 changes: 16 additions & 18 deletions docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md
@@ -13,8 +13,8 @@ Lastly you'll verify that the connection to your Self-Managed Camunda 8 environm
## Prerequisites

- A Kubernetes cluster; see the [eksctl](./eksctl.md) or [terraform](./terraform-setup.md) guide.
- [Helm (3.13+)](https://helm.sh/docs/intro/install/)
- [kubectl (1.28+)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the cluster.
- [Helm (3.16+)](https://helm.sh/docs/intro/install/)
- [kubectl (1.30+)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the cluster.
- (optional) Domain name/[hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-working-with.html) in Route53. This allows you to expose Camunda 8 and connect via [zbctl](/apis-tools/community-clients/cli-client/index.md) or [Camunda Modeler](https://camunda.com/download/modeler/).

## Considerations
@@ -50,13 +50,13 @@ export DOMAIN_NAME=camunda.example.com
# The e-mail to register with Let's Encrypt
export [email protected]
# The Ingress-Nginx Helm Chart version
export INGRESS_HELM_CHART_VERSION="4.10.1"
export INGRESS_HELM_CHART_VERSION="4.11.2"
# The External DNS Helm Chart version
export EXTERNAL_DNS_HELM_CHART_VERSION="1.14.4"
export EXTERNAL_DNS_HELM_CHART_VERSION="1.15.0"
# The Cert-Manager Helm Chart version
export CERT_MANAGER_HELM_CHART_VERSION="1.14.5"
export CERT_MANAGER_HELM_CHART_VERSION="1.15.3"
# The Camunda 8 Helm Chart version
export CAMUNDA_HELM_CHART_VERSION="10.0.5"
export CAMUNDA_HELM_CHART_VERSION="11.0.0"
```

Additionally, follow the guide from either [eksctl](./eks-helm.md) or [Terraform](./terraform-setup.md) to retrieve the following values, which will be required for subsequent steps:
@@ -108,7 +108,7 @@ Make sure to have `EXTERNAL_DNS_IRSA_ARN` exported prior by either having follow
:::warning
If you are already running `external-dns` in a different cluster, ensure each instance has a **unique** `txtOwnerId` for the TXT record. Without unique identifiers, the `external-dns` instances will conflict and inadvertently delete existing DNS records.

In the example below, it's set to `external-dns` and should be changed if this identifier is already in use. Consult the [documentation](https://kubernetes-sigs.github.io/external-dns/v0.14.2/initial-design/#ownership) to learn more about DNS record ownership.
In the example below, it's set to `external-dns` and should be changed if this identifier is already in use. Consult the [documentation](https://kubernetes-sigs.github.io/external-dns/v0.15.0/#note) to learn more about DNS record ownership.
:::
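
As a minimal sketch, the owner ID can be supplied as a chart value when installing `external-dns`; the `txtOwnerId` value, repository URL, and namespace below are assumptions, and the guide's full command (including the IRSA service-account annotation) is what should actually be used:

```shell
helm upgrade --install external-dns external-dns \
  --repo https://kubernetes-sigs.github.io/external-dns/ \
  --version $EXTERNAL_DNS_HELM_CHART_VERSION \
  --set txtOwnerId=external-dns \
  --namespace external-dns \
  --create-namespace
```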

```shell
@@ -224,15 +224,11 @@ helm upgrade --install \
--set tasklist.contextPath="/tasklist" \
--set optimize.contextPath="/optimize" \
--set zeebeGateway.ingress.grpc.enabled=true \
--set zeebeGateway.ingress.grpc.host=zeebe-grpc.$DOMAIN_NAME \
--set zeebeGateway.ingress.grpc.host=zeebe.$DOMAIN_NAME \
--set zeebeGateway.ingress.grpc.tls.enabled=true \
--set zeebeGateway.ingress.grpc.tls.secretName=zeebe-c8-tls-grpc \
--set-string 'zeebeGateway.ingress.grpc.annotations.kubernetes\.io\/tls-acme=true' \
--set zeebeGateway.ingress.rest.enabled=true \
--set zeebeGateway.ingress.rest.host=zeebe-rest.$DOMAIN_NAME \
--set zeebeGateway.ingress.rest.tls.enabled=true \
--set zeebeGateway.ingress.rest.tls.secretName=zeebe-c8-tls-rest \
--set-string 'zeebeGateway.ingress.rest.annotations.kubernetes\.io\/tls-acme=true'
--set zeebeGateway.contextPath="/zeebe"
```

The annotation `kubernetes.io/tls-acme=true` is [interpreted by cert-manager](https://cert-manager.io/docs/usage/ingress/) and automatically results in the creation of the required certificate request, easing the setup.
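
As a sketch of what this means on the rendered resource, the chart ends up producing an Ingress roughly like the following; the names and host are illustrative, and a default issuer is assumed to be configured in cert-manager:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: camunda-zeebe-gateway-grpc # illustrative name
  annotations:
    kubernetes.io/tls-acme: "true" # cert-manager's ingress-shim reacts to this annotation
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - zeebe.camunda.example.com # illustrative host
      secretName: zeebe-c8-tls-grpc # cert-manager stores the issued certificate in this Secret
```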
@@ -276,11 +272,12 @@ After following the installation instructions in the [zbctl docs](/apis-tools/co
Export the following environment variables:

```shell
export ZEEBE_ADDRESS=zeebe-grpc.$DOMAIN_NAME:443
export ZEEBE_ADDRESS=zeebe.$DOMAIN_NAME:443
export ZEEBE_CLIENT_ID='client-id' # retrieve the value from the identity page of your created m2m application
export ZEEBE_CLIENT_SECRET='client-secret' # retrieve the value from the identity page of your created m2m application
export ZEEBE_AUTHORIZATION_SERVER_URL=https://$DOMAIN_NAME/auth/realms/camunda-platform/protocol/openid-connect/token
export ZEEBE_TOKEN_AUDIENCE='zeebe-api'
export ZEEBE_TOKEN_SCOPE='camunda-identity'
```

</TabItem>
@@ -301,6 +298,7 @@ export ZEEBE_CLIENT_ID='client-id' # retrieve the value from the identity page o
export ZEEBE_CLIENT_SECRET='client-secret' # retrieve the value from the identity page of your created m2m application
export ZEEBE_AUTHORIZATION_SERVER_URL=http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token
export ZEEBE_TOKEN_AUDIENCE='zeebe-api'
export ZEEBE_TOKEN_SCOPE='camunda-identity'
```

</TabItem>
@@ -321,20 +319,20 @@ zbctl status --insecure
Cluster size: 3
Partitions count: 3
Replication factor: 3
Gateway version: 8.5.1
Gateway version: 8.6.0
Brokers:
Broker 0 - camunda-zeebe-0.camunda-zeebe.camunda.svc:26501
Version: 8.5.1
Version: 8.6.0
Partition 1 : Follower, Healthy
Partition 2 : Follower, Healthy
Partition 3 : Follower, Healthy
Broker 1 - camunda-zeebe-1.camunda-zeebe.camunda.svc:26501
Version: 8.5.1
Version: 8.6.0
Partition 1 : Leader, Healthy
Partition 2 : Leader, Healthy
Partition 3 : Follower, Healthy
Broker 2 - camunda-zeebe-2.camunda-zeebe.camunda.svc:26501
Version: 8.5.1
Version: 8.6.0
Partition 1 : Follower, Healthy
Partition 2 : Follower, Healthy
Partition 3 : Leader, Healthy
18 changes: 9 additions & 9 deletions docs/self-managed/setup/deploy/amazon/amazon-eks/eksctl.md
@@ -13,9 +13,9 @@ This guide provides a user-friendly approach for setting up and managing Amazon
## Prerequisites

- An [AWS account](https://docs.aws.amazon.com/accounts/latest/reference/accounts-welcome.html) is required to create resources within AWS.
- [AWS CLI (2.11+)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), a CLI tool for creating AWS resources.
- [eksctl (0.163+)](https://eksctl.io/getting-started/), a CLI tool for creating and managing Amazon EKS clusters.
- [kubectl (1.28+)](https://kubernetes.io/docs/tasks/tools/#kubectl), a CLI tool to interact with the cluster.
- [AWS CLI (2.17+)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), a CLI tool for creating AWS resources.
- [eksctl (0.191+)](https://eksctl.io/getting-started/), a CLI tool for creating and managing Amazon EKS clusters.
- [kubectl (1.30+)](https://kubernetes.io/docs/tasks/tools/#kubectl), a CLI tool to interact with the cluster.

## Considerations

@@ -33,9 +33,9 @@ Following this guide will incur costs on your Cloud provider account, namely for

Following this guide results in the following:

- An Amazon EKS 1.28 Kubernetes cluster with four nodes.
- An Amazon EKS 1.30 Kubernetes cluster with four nodes.
- Installed and configured [EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html), which is used by the Camunda 8 Helm chart to create [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
- A [managed Aurora PostgreSQL 15.4](https://aws.amazon.com/rds/aurora/) instance that will be used by the Camunda 8 components.
- A [managed Aurora PostgreSQL 15.x](https://aws.amazon.com/rds/aurora/) instance that will be used by the Camunda 8 components.
- [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) (IRSA) configured.
- This simplifies the setup by not relying on explicit credentials, but instead allows creating a mapping between IAM roles and Kubernetes service accounts based on a trust relationship. A [blog post](https://aws.amazon.com/blogs/containers/diving-into-iam-roles-for-service-accounts/) by AWS visualizes this on a technical level.
- This allows a Kubernetes service account to temporarily impersonate an AWS IAM role to interact with AWS services like S3, RDS, or Route53 without supplying explicit credentials.
@@ -80,7 +80,7 @@ export PG_PASSWORD=camundarocks123
# The default database name created within Postgres. Can directly be consumed by the Helm chart
export DEFAULT_DB_NAME=camunda
# The PostgreSQL version
export POSTGRESQL_VERSION=15.4
export POSTGRESQL_VERSION=15.8

# Optional
# Default node type for the Kubernetes cluster
@@ -119,7 +119,7 @@ apiVersion: eksctl.io/v1alpha5
metadata:
name: ${CLUSTER_NAME:-camunda-cluster} # e.g. camunda-cluster
region: ${REGION:-eu-central-1} # e.g. eu-central-1
version: "1.28"
version: "1.30"
availabilityZones:
- ${REGION:-eu-central-1}c # e.g. eu-central-1c, the minimal is two distinct Availability Zones (AZs) within the region
- ${REGION:-eu-central-1}b
@@ -187,7 +187,7 @@ vpc:
nat:
gateway: HighlyAvailable
secretsEncryption:
keyARN: ${KMS_KEY}
keyARN: ${KMS_ARN}
EOF
```
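
Assuming the generated configuration is written to a file such as `cluster.yaml` (a placeholder name), the cluster is then typically created with:

```shell
# Create the EKS cluster from the generated configuration file
eksctl create cluster --config-file cluster.yaml
```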

@@ -286,7 +286,7 @@ The same can also be achieved by using `kubectl` and manually adding the mapping
kubectl edit configmap aws-auth -n kube-system
```
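
The entry added there follows the standard `mapRoles` structure of the `aws-auth` ConfigMap; a sketch with placeholder values:

```yaml
# Excerpt of the aws-auth ConfigMap (role ARN, username, and groups are placeholders)
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-admin-role
      username: my-admin-role
      groups:
        - system:masters
```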

For detailed examples, review the [documentation provided by AWS](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html/).
For detailed examples, review the [documentation provided by AWS](https://docs.aws.amazon.com/eks/latest/userguide/auth-configmap.html).

</p>
</details>
18 changes: 1 addition & 17 deletions docs/self-managed/setup/deploy/amazon/amazon-eks/irsa.md
@@ -245,7 +245,7 @@ For a Helm-based deployment, you can directly configure these settings using Hel
identityKeycloak:
postgresql:
enabled: false
image: docker.io/camunda/keycloak:23 # use a supported and updated version listed at https://hub.docker.com/r/camunda/keycloak/tags
image: docker.io/camunda/keycloak:25 # use a supported and updated version listed at https://hub.docker.com/r/camunda/keycloak/tags
extraEnvVars:
- name: KEYCLOAK_EXTRA_ARGS
value: "--db-driver=software.amazon.jdbc.Driver --transaction-xa-enabled=false --log-level=INFO,software.amazon.jdbc:INFO"
@@ -600,22 +600,6 @@ Don't forget to set the `serviceAccountName` of the deployment/statefulset to th

## Troubleshooting

### Versions used

This page was created based on the following versions available and may work with newer releases of mentioned software.

| Software | Version |
| ----------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| AWS Aurora PostgreSQL | 13 / 14 / 15 |
| [AWS JDBC Driver Wrapper](https://github.com/awslabs/aws-advanced-jdbc-wrapper) | 2.3.1 |
| AWS OpenSearch | 2.5 |
| [AWS SDK Dependencies](#dependencies) | 2.21.x |
| KeyCloak | 21.x / 22.x |
| [Terraform AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/5.9.0) | 5.29.0 |
| [Terraform Amazon EKS Module](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/19.15.3) | 19.20.0 |
| [Terraform IAM Roles Module](https://registry.terraform.io/modules/terraform-aws-modules/iam/aws/5.28.0/submodules/iam-role-for-service-accounts-eks) | 5.32.0 |
| [Terraform PostgreSQL Provider](https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs) | 1.21.0 |

### Instance Metadata Service (IMDS)

[Instance Metadata Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) is a default fallback for the AWS SDK due to the [default credentials provider chain](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials-chain.html). Within the context of Amazon EKS, it means a pod will automatically assume the role of a node. This can hide many problems, including whether IRSA was set up correctly or not, since it will fall back to IMDS in case of failure and hide the actual error.
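
A quick way to check which identity a pod actually assumed (and therefore whether it silently fell back to IMDS and the node role) is to call STS from inside the pod; this assumes the AWS CLI is available in the container image, and the pod and namespace names are placeholders:

```shell
# If the returned ARN is the node's instance role instead of the expected IRSA role,
# the pod fell back to IMDS and the IRSA setup should be re-checked.
kubectl exec -it my-pod -n my-namespace -- aws sts get-caller-identity
```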
@@ -17,8 +17,8 @@ If you are completely new to Terraform and the idea of IaC, read through the [Te
## Prerequisites

- An [AWS account](https://docs.aws.amazon.com/accounts/latest/reference/accounts-welcome.html) to create any resources within AWS.
- [Terraform (1.7+)](https://developer.hashicorp.com/terraform/downloads)
- [Kubectl (1.28+)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the cluster.
- [Terraform (1.9+)](https://developer.hashicorp.com/terraform/downloads)
- [Kubectl (1.30+)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the cluster.
- [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) (IRSA) configured.
- This simplifies the setup by not relying on explicit credentials and instead creating a mapping between IAM roles and Kubernetes service account based on a trust relationship. A [blog post](https://aws.amazon.com/blogs/containers/diving-into-iam-roles-for-service-accounts/) by AWS visualizes this on a technical level.
- This allows a Kubernetes service account to temporarily impersonate an AWS IAM role to interact with AWS services like S3, RDS, or Route53 without having to supply explicit credentials.
@@ -43,7 +43,7 @@ Following this tutorial and steps will result in:

- An Amazon EKS Kubernetes cluster running the latest Kubernetes version with four nodes ready for Camunda 8 installation.
- The [EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) is installed and configured, which is used by the Camunda 8 Helm chart to create [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
- A [managed Aurora PostgreSQL 15.4](https://aws.amazon.com/rds/postgresql/) instance to be used by the Camunda 8 components.
- A [managed Aurora PostgreSQL 15.8](https://aws.amazon.com/rds/postgresql/) instance to be used by the Camunda 8 components.

## Installing Amazon EKS cluster with Terraform

Expand All @@ -61,7 +61,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.65"
version = "~> 5.69"
}
}
}
@@ -110,14 +110,14 @@ This module creates the basic layout that configures AWS access and Terraform.

The following will use [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow abstracting resources into reusable components.

The [Camunda provided module](https://github.com/camunda/camunda-tf-eks-module) is publicly available. It's advisable to review this module before usage.
The [Camunda provided module](https://github.com/camunda/camunda-tf-eks-module/tree/2.5.0/modules/eks-cluster) is publicly available. It's advisable to review this module before usage.

1. In the folder where your `config.tf` resides, create an additional `cluster.tf`.
2. Paste the following content into the newly created `cluster.tf` file to make use of the provided module:

```hcl
module "eks_cluster" {
source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/eks-cluster?ref=2.1.0"
source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/eks-cluster?ref=2.5.0"
region = "eu-central-1" # change to your AWS region
name = "cluster-name" # change to name of your choosing
@@ -128,7 +128,7 @@ module "eks_cluster" {
}
```

There are various other input options to customize the cluster setup further; see the [module documentation](https://github.com/camunda/camunda-tf-eks-module).
There are various other input options to customize the cluster setup further; see the [module documentation](https://github.com/camunda/camunda-tf-eks-module/tree/2.5.0/modules/eks-cluster).

### PostgreSQL module

@@ -142,8 +142,8 @@ We separated the cluster and PostgreSQL modules from each other to allow more cu

```hcl
module "postgresql" {
source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/aurora?ref=2.1.0"
engine_version = "15.4"
source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/aurora?ref=2.5.0"
engine_version = "15.8"
auto_minor_version_upgrade = false
cluster_name = "cluster-name-postgresql" # change "cluster-name" to your name
default_database_name = "camunda"