diff --git a/docs/self-managed/operational-guides/multi-region/dual-region-ops.md b/docs/self-managed/operational-guides/multi-region/dual-region-ops.md
index c09203d3e4..722de513fb 100644
--- a/docs/self-managed/operational-guides/multi-region/dual-region-ops.md
+++ b/docs/self-managed/operational-guides/multi-region/dual-region-ops.md
@@ -54,7 +54,8 @@ Running a dual-region configuration requires users to detect and manage any regi
- In this guide, we showcase a Kubernetes dual-region installation based on the following tools:
- [Helm (3.x)](https://helm.sh/docs/intro/install/) for installing and upgrading the [Camunda Helm chart](https://github.com/camunda/camunda-platform-helm).
- [Kubectl (1.30.x)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the Kubernetes cluster.
-- [zbctl](/apis-tools/community-clients/cli-client/index.md) to interact with the Zeebe cluster.
+- (deprecated) [zbctl](/apis-tools/community-clients/cli-client/index.md) to interact with the Zeebe cluster.
+- `cURL` or similar to interact with the REST API.
## Terminology
@@ -152,6 +153,151 @@ The following alternatives to port-forwarding are possible:
In our example, we went with port-forwarding to localhost, but other alternatives can also be used.
+
+
+
+1. Use the [REST API](../../../apis-tools/camunda-api-rest/camunda-api-rest-overview.md) to retrieve the list of the remaining brokers
+
+```bash
+kubectl --context $CLUSTER_SURVIVING port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 8080:8080 -n $CAMUNDA_NAMESPACE_SURVIVING
+
+curl -L -X GET 'http://localhost:8080/v2/topology' \
+ -H 'Accept: application/json'
+```
+
+
+ Example output
+
+
+```json
+{
+ "brokers": [
+ {
+ "nodeId": 0,
+ "host": "camunda-zeebe-0.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 2,
+ "host": "camunda-zeebe-1.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 4,
+ "host": "camunda-zeebe-2.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 4,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 6,
+ "host": "camunda-zeebe-3.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 4,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ }
+ ],
+ "clusterSize": 8,
+ "partitionsCount": 8,
+ "replicationFactor": 4,
+ "gatewayVersion": "8.6.0"
+}
+```
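+
+To quickly see which brokers remain without reading the full JSON, the output can be filtered, for example with [jq](https://jqlang.github.io/jq/) (assuming it is installed):
+
+```shell
+# List the surviving brokers as "nodeId host" pairs
+curl -s -L 'http://localhost:8080/v2/topology' -H 'Accept: application/json' \
+  | jq -r '.brokers[] | "\(.nodeId) \(.host)"'
+```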
+
+
+
+
+
+
+
1. Use the [zbctl client](/apis-tools/community-clients/cli-client/index.md) to retrieve the list of remaining brokers
```bash
@@ -197,6 +343,8 @@ Brokers:
+
+
2. Port-forward the service of the Zeebe Gateway to access the [management REST API](../../zeebe-deployment/configuration/gateway.md#managementserver)
@@ -215,6 +363,149 @@ curl -XPOST 'http://localhost:9600/actuator/cluster/brokers?force=true' -H 'Cont
Port-forwarding the Zeebe Gateway via `kubectl` and printing the topology should reveal that the cluster size has decreased to 4, partitions have been redistributed over the remaining brokers, and new leaders have been elected.
+
+
+
+```bash
+kubectl --context $CLUSTER_SURVIVING port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 8080:8080 -n $CAMUNDA_NAMESPACE_SURVIVING
+
+curl -L -X GET 'http://localhost:8080/v2/topology' \
+ -H 'Accept: application/json'
+```
+
+
+ Example output
+
+
+```json
+{
+ "brokers": [
+ {
+ "nodeId": 0,
+ "host": "camunda-zeebe-0.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 2,
+ "host": "camunda-zeebe-1.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 4,
+ "host": "camunda-zeebe-2.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 4,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 6,
+ "host": "camunda-zeebe-3.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 4,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ }
+ ],
+ "clusterSize": 4,
+ "partitionsCount": 8,
+ "replicationFactor": 2,
+ "gatewayVersion": "8.6.0"
+}
+```
+
+
+
+
+
+
+
```bash
kubectl --context $CLUSTER_SURVIVING port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 26500:26500 -n $CAMUNDA_NAMESPACE_SURVIVING
zbctl status --insecure --address localhost:26500
@@ -259,6 +550,9 @@ Brokers:
+
+
+
You can also use the Zeebe Gateway's REST API to verify that scaling has completed. For better output readability, we use [jq](https://jqlang.github.io/jq/).
```bash
@@ -436,6 +730,177 @@ It is expected that the Zeebe broker pods will not reach the "Ready" state since
Port-forwarding the Zeebe Gateway via `kubectl` and printing the topology should reveal that the new Zeebe brokers are recognized but not yet full members of the Zeebe cluster.
+
+
+
+```bash
+kubectl --context $CLUSTER_SURVIVING port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 8080:8080 -n $CAMUNDA_NAMESPACE_SURVIVING
+
+curl -L -X GET 'http://localhost:8080/v2/topology' \
+ -H 'Accept: application/json'
+```
+
+
+ Example output
+
+
+```json
+{
+ "brokers": [
+ {
+ "nodeId": 0,
+ "host": "camunda-zeebe-0.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 1,
+ "host": "camunda-zeebe-0.camunda-zeebe.camunda-paris",
+ "port": 26501,
+ "partitions": [],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 2,
+ "host": "camunda-zeebe-1.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 3,
+ "host": "camunda-zeebe-1.camunda-zeebe.camunda-paris",
+ "port": 26501,
+ "partitions": [],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 4,
+ "host": "camunda-zeebe-2.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 4,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 5,
+ "host": "camunda-zeebe-2.camunda-zeebe.camunda-paris",
+ "port": 26501,
+ "partitions": [],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 6,
+ "host": "camunda-zeebe-3.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 4,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 7,
+ "host": "camunda-zeebe-3.camunda-zeebe.camunda-paris",
+ "port": 26501,
+ "partitions": [],
+ "version": "8.6.0"
+ }
+ ],
+ "clusterSize": 4,
+ "partitionsCount": 8,
+ "replicationFactor": 2,
+ "gatewayVersion": "8.6.0"
+}
+```
+
+
+
+
+
+
+
```bash
kubectl --context $CLUSTER_SURVIVING port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 26500:26500 -n $CAMUNDA_NAMESPACE_SURVIVING
zbctl status --insecure --address localhost:26500
@@ -488,6 +953,9 @@ Brokers:
+
+
+
diff --git a/docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md b/docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md
index 1b53b6bfaf..cebaa707b7 100644
--- a/docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md
+++ b/docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md
@@ -7,6 +7,8 @@ description: "Deploy two Amazon Kubernetes (EKS) clusters with Terraform for a p
import CoreDNSKubeDNS from "./assets/core-dns-kube-dns.svg"
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
:::caution
Review our [dual-region concept documentation](./../../../../concepts/multi-region/dual-region.md) before continuing to understand the current limitations and restrictions of this blueprint setup.
@@ -504,6 +506,267 @@ helm install $HELM_RELEASE_NAME camunda/camunda-platform \
1. Open a terminal and port-forward the Zeebe Gateway via `kubectl` from one of your clusters. Zeebe stretches across both clusters and is `active-active`, meaning it doesn't matter which Zeebe Gateway you use to interact with your Zeebe cluster.
+
+
+
+```shell
+kubectl --context "$CLUSTER_0" -n $CAMUNDA_NAMESPACE_0 port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 8080:8080
+```
+
+2. Open another terminal and use `cURL` (or a similar tool) to print the Zeebe cluster topology:
+
+```shell
+curl -L -X GET 'http://localhost:8080/v2/topology' \
+ -H 'Accept: application/json'
+```
+
+3. Make sure that your output contains all eight brokers from the two regions:
+
+
+ Example output
+
+
+```json
+{
+ "brokers": [
+ {
+ "nodeId": 0,
+ "host": "camunda-zeebe-0.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 1,
+ "host": "camunda-zeebe-0.camunda-zeebe.camunda-paris",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 2,
+ "host": "camunda-zeebe-1.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 3,
+ "host": "camunda-zeebe-1.camunda-zeebe.camunda-paris",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 4,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 4,
+ "host": "camunda-zeebe-2.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 4,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 5,
+ "host": "camunda-zeebe-2.camunda-zeebe.camunda-paris",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 3,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 4,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 6,
+ "host": "camunda-zeebe-3.camunda-zeebe.camunda-london",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 4,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 5,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 7,
+ "host": "camunda-zeebe-3.camunda-zeebe.camunda-paris",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 5,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 6,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 7,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 8,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ }
+ ],
+ "clusterSize": 8,
+ "partitionsCount": 8,
+ "replicationFactor": 4,
+ "gatewayVersion": "8.6.0"
+}
+```
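+
+The broker count from step 3 can also be checked programmatically, for example with [jq](https://jqlang.github.io/jq/) (assuming it is installed):
+
+```shell
+# A healthy dual-region cluster should report 8 brokers
+curl -s -L 'http://localhost:8080/v2/topology' -H 'Accept: application/json' | jq '.brokers | length'
+```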
+
+
+
+
+
+
+
```shell
kubectl --context "$CLUSTER_0" -n $CAMUNDA_NAMESPACE_0 port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 26500:26500
```
@@ -524,52 +787,52 @@ zbctl status --insecure --address localhost:26500
Cluster size: 8
Partitions count: 8
Replication factor: 4
-Gateway version: 8.5.0
+Gateway version: 8.6.0
Brokers:
Broker 0 - camunda-zeebe-0.camunda-zeebe.camunda-london.svc:26501
- Version: 8.5.0
+ Version: 8.6.0
Partition 1 : Follower, Healthy
Partition 6 : Follower, Healthy
Partition 7 : Follower, Healthy
Partition 8 : Follower, Healthy
Broker 1 - camunda-zeebe-0.camunda-zeebe.camunda-paris.svc:26501
- Version: 8.5.0
+ Version: 8.6.0
Partition 1 : Follower, Healthy
Partition 2 : Leader, Healthy
Partition 7 : Follower, Healthy
Partition 8 : Follower, Healthy
Broker 2 - camunda-zeebe-1.camunda-zeebe.camunda-london.svc:26501
- Version: 8.5.0
+ Version: 8.6.0
Partition 1 : Leader, Healthy
Partition 2 : Follower, Healthy
Partition 3 : Leader, Healthy
Partition 8 : Follower, Healthy
Broker 3 - camunda-zeebe-1.camunda-zeebe.camunda-paris.svc:26501
- Version: 8.5.0
+ Version: 8.6.0
Partition 1 : Follower, Healthy
Partition 2 : Follower, Healthy
Partition 3 : Follower, Healthy
Partition 4 : Leader, Healthy
Broker 4 - camunda-zeebe-2.camunda-zeebe.camunda-london.svc:26501
- Version: 8.5.0
+ Version: 8.6.0
Partition 2 : Follower, Healthy
Partition 3 : Follower, Healthy
Partition 4 : Follower, Healthy
Partition 5 : Leader, Healthy
Broker 5 - camunda-zeebe-2.camunda-zeebe.camunda-paris.svc:26501
- Version: 8.5.0
+ Version: 8.6.0
Partition 3 : Follower, Healthy
Partition 4 : Follower, Healthy
Partition 5 : Follower, Healthy
Partition 6 : Follower, Healthy
Broker 6 - camunda-zeebe-3.camunda-zeebe.camunda-london.svc:26501
- Version: 8.5.0
+ Version: 8.6.0
Partition 4 : Follower, Healthy
Partition 5 : Follower, Healthy
Partition 6 : Leader, Healthy
Partition 7 : Leader, Healthy
Broker 7 - camunda-zeebe-3.camunda-zeebe.camunda-paris.svc:26501
- Version: 8.5.0
+ Version: 8.6.0
Partition 5 : Follower, Healthy
Partition 6 : Follower, Healthy
Partition 7 : Follower, Healthy
@@ -578,3 +841,6 @@ Brokers:
+
+
+
diff --git a/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md b/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md
index f6ba071292..d0d28f778d 100644
--- a/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md
+++ b/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md
@@ -13,6 +13,7 @@ Lastly you'll verify that the connection to your Self-Managed Camunda 8 environm
## Prerequisites
- A Kubernetes cluster; see the [eksctl](./eksctl.md) or [terraform](./terraform-setup.md) guide.
+
- [Helm (3.16+)](https://helm.sh/docs/intro/install/)
- [kubectl (1.30+)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the cluster.
- (optional) Domain name/[hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-working-with.html) in Route53. This allows you to expose Camunda 8 and connect via [zbctl](/apis-tools/community-clients/cli-client/index.md) or [Camunda Modeler](https://camunda.com/download/modeler/).
@@ -262,6 +263,153 @@ Instead of creating a confidential application, a machine-to-machine (M2M) appli
This reveals a `client-id` and `client-secret` that can be used to connect to the Camunda 8 cluster.
+
+
+For a detailed guide on generating and using a token, consult the relevant documentation on [authenticating with the REST API](./../../../../../apis-tools/camunda-api-rest/camunda-api-rest-authentication.md?environment=self-managed).
+
+
+
+
+Export the following environment variables:
+
+```shell
+export ZEEBE_ADDRESS=zeebe-rest.$DOMAIN_NAME
+export ZEEBE_CLIENT_ID='client-id' # retrieve the value from the identity page of your created m2m application
+export ZEEBE_CLIENT_SECRET='client-secret' # retrieve the value from the identity page of your created m2m application
+export ZEEBE_AUTHORIZATION_SERVER_URL=https://$DOMAIN_NAME/auth/realms/camunda-platform/protocol/openid-connect/token
+```
+
+
+
+
+This requires port-forwarding the Zeebe Gateway and Keycloak to connect to the cluster.
+
+```shell
+kubectl port-forward services/camunda-zeebe-gateway 8080:8080
+kubectl port-forward services/camunda-keycloak 18080:80
+```
+
+Export the following environment variables:
+
+```shell
+export ZEEBE_ADDRESS=localhost:8080
+export ZEEBE_CLIENT_ID='client-id' # retrieve the value from the identity page of your created m2m application
+export ZEEBE_CLIENT_SECRET='client-secret' # retrieve the value from the identity page of your created m2m application
+export ZEEBE_AUTHORIZATION_SERVER_URL=http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token
+```
+
+
+
+
+
+Generate a temporary token to access the REST API:
+
+```shell
+curl --location --request POST "${ZEEBE_AUTHORIZATION_SERVER_URL}" \
+--header "Content-Type: application/x-www-form-urlencoded" \
+--data-urlencode "client_id=${ZEEBE_CLIENT_ID}" \
+--data-urlencode "client_secret=${ZEEBE_CLIENT_SECRET}" \
+--data-urlencode "grant_type=client_credentials"
+```
+
+Capture the value of the `access_token` property and store it as your token.
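+
+For example, [jq](https://jqlang.github.io/jq/) (assuming it is installed) can request the token and extract `access_token` in one step:
+
+```shell
+# Request a token and store the access_token field in TOKEN
+export TOKEN=$(curl --location --request POST "${ZEEBE_AUTHORIZATION_SERVER_URL}" \
+  --header "Content-Type: application/x-www-form-urlencoded" \
+  --data-urlencode "client_id=${ZEEBE_CLIENT_ID}" \
+  --data-urlencode "client_secret=${ZEEBE_CLIENT_SECRET}" \
+  --data-urlencode "grant_type=client_credentials" | jq -r '.access_token')
+```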
+
+Use the stored token, in our case `TOKEN`, to query the REST API for the cluster topology.
+
+```shell
+curl --header "Authorization: Bearer ${TOKEN}" "${ZEEBE_ADDRESS}/v2/topology"
+```
+
+...and results in the following output:
+
+
+ Example output
+
+
+```json
+{
+ "brokers": [
+ {
+ "nodeId": 0,
+ "host": "camunda-zeebe-0.camunda-zeebe",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 1,
+ "host": "camunda-zeebe-1.camunda-zeebe",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "leader",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "follower",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ },
+ {
+ "nodeId": 2,
+ "host": "camunda-zeebe-2.camunda-zeebe",
+ "port": 26501,
+ "partitions": [
+ {
+ "partitionId": 1,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 2,
+ "role": "follower",
+ "health": "healthy"
+ },
+ {
+ "partitionId": 3,
+ "role": "leader",
+ "health": "healthy"
+ }
+ ],
+ "version": "8.6.0"
+ }
+ ],
+ "clusterSize": 3,
+ "partitionsCount": 3,
+ "replicationFactor": 3,
+ "gatewayVersion": "8.6.0"
+}
+```
+
+
+
+
+
After following the installation instructions in the [zbctl docs](/apis-tools/community-clients/cli-client/index.md), we can configure the required connectivity to check that the Zeebe cluster is reachable.
@@ -315,6 +463,10 @@ zbctl status --insecure
...and results in the following output:
+
+ Example output
+
+
```shell
Cluster size: 3
Partitions count: 3
@@ -338,6 +490,9 @@ Brokers:
Partition 3 : Leader, Healthy
```
+
+
+
For more advanced topics, like deploying a process or registering a worker, consult the [zbctl docs](/apis-tools/community-clients/cli-client/cli-get-started.md).
If you want to access the other services and their UI, you can port-forward those as well: