In this setup we integrate the secrets exercise with GKE on GCP and let pods consume secrets from GCP Secret Manager. If you want to know more about integrating secrets with GKE, check this link. Please make sure that the account in which you run this exercise either has Cloud Audit Logs enabled, or is not linked to your current organization and/or DTAP environment.
Have the following tools installed:
- gcloud CLI - Installation
- Tfenv (Optional) - Installation
- Terraform CLI - Installation
- Wget - Installation
- Helm - Installation
- Kubectl - Installation
- jq - Installation
Make sure you have an active account at GCP for which you have configured the credentials on the system where you will execute the steps below.
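For reference, a minimal sketch of pointing the gcloud CLI at your account and project; the project ID below is a placeholder, not something defined by this setup:

```bash
# Authenticate your user account (opens a browser window)
gcloud auth login

# Select the project you want to use for this exercise
# ("my-wrongsecrets-project" is a placeholder; use your own project ID)
gcloud config set project my-wrongsecrets-project

# Verify the active account and project
gcloud config list
```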
Please note that this setup relies on bash scripts that have been tested on macOS and Linux. We have no intention of supporting vanilla Windows at the moment.
If you want to host a multi-user setup, you will probably want to share the state file so that everyone can try related challenges. We have provided a starter to do so easily using a Terraform GCS backend.
First, create a storage bucket:
- Navigate to the 'shared-state' directory: `cd shared-state`
- Change the `project_id` in the `terraform.tfvars` file to your project id
- Run `terraform init`
- Run `terraform apply`.
The bucket name should be in the output. Please use that to configure the Terraform backend in `main.tf`.
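For illustration, the backend configuration could look like the sketch below, written here with a shell heredoc. The bucket name is a placeholder for the name from the Terraform output, and the `prefix` value is an assumption rather than something this setup prescribes; adjust if `main.tf` already declares a backend block.

```bash
# Sketch only: add a GCS backend block to main.tf.
# Replace the bucket name with the one reported by the shared-state output;
# the prefix shown is an assumed example.
cat >> main.tf <<'EOF'

terraform {
  backend "gcs" {
    bucket = "REPLACE-WITH-YOUR-STATE-BUCKET"
    prefix = "wrongsecrets/state"
  }
}
EOF
```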
Note: Applying the Terraform means you are creating cloud infrastructure which actually costs you money. The authors are not responsible for any cost coming from following the instructions below. If you have a brand new GCP account, you could use the $300 in credits to set up the infrastructure for free.
Note-II: We create resources in `europe-west4` by default. You can set the region by editing `terraform.tfvars`.
Note-III: The cluster you create has its access bound to the public IP of the creator. In other words: if you apply this code locally, access to the cluster is bound to your public IP address.
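If you want to see which public IP address that will be, a quick check such as the one below works; ifconfig.me is just one of several services that echo your address and is not part of this setup.

```bash
# Print the public IP address that will be allowed to reach the cluster
curl -s https://ifconfig.me && echo
```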
- Check whether you have the right project by doing `gcloud config list`. Otherwise configure it by doing `gcloud init`.
- Change the `project_id` in the `terraform.tfvars` file to your project id
- Run `gcloud auth application-default login` to be able to use your account credentials for Terraform.
- Enable the required gcloud services using `gcloud services enable compute.googleapis.com container.googleapis.com secretmanager.googleapis.com`
- Run `terraform init` (if required, use tfenv to select Terraform 0.14.0 or higher)
- Run `terraform plan`
- Run `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the GCP backplane.
- Run `export USE_GKE_GCLOUD_AUTH_PLUGIN=True`
- When creation is done, run `gcloud container clusters get-credentials wrongsecrets-exercise-cluster --region YOUR_REGION`. Note: if it errors on a missing plugin to support `kubectl`, run `gcloud components install gke-gcloud-auth-plugin` and then `gcloud container clusters get-credentials wrongsecrets-exercise-cluster` again.
- Run `./k8s-vault-gcp-start.sh`
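Put together, the commands from the list above look roughly like the sketch below. `YOUR_REGION` is a placeholder, and editing `project_id` in `terraform.tfvars` still has to be done separately.

```bash
# Verify the active project and obtain application-default credentials for Terraform
gcloud config list
gcloud auth application-default login

# Enable the APIs this setup depends on
gcloud services enable compute.googleapis.com container.googleapis.com secretmanager.googleapis.com

# Provision the cluster (expect the apply to take 10 to 20 minutes)
terraform init
terraform plan
terraform apply

# Fetch cluster credentials and start the exercise
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials wrongsecrets-exercise-cluster --region YOUR_REGION
./k8s-vault-gcp-start.sh
```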
By default the deployment uses a NodePort tunneled to localhost. For a larger-audience deployment, the wrongsecrets app can be deployed behind a GKE ingress: run `k8s-vault-gcp-ingress-start.sh`.
Please note that the GKE ingress can take a few minutes to deploy and is publicly available. A connection URL will be returned once the ingress is available. Note that, after the connection URL is returned, the first lookup might still take a minute, after which it is much faster.
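While the ingress is being provisioned, you can keep an eye on it with kubectl; this is only an illustrative check and assumes your kubeconfig already points at the exercise cluster.

```bash
# Watch all ingresses until an external address shows up
kubectl get ingress --all-namespaces --watch
```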
Your GKE cluster should be visible in `europe-west4` by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.
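For example, a region override on the command line instead of editing `terraform.tfvars` could look like this (europe-west1 is just an illustrative choice):

```bash
# Pass a different region directly to plan/apply
terraform plan -var="region=europe-west1"
terraform apply -var="region=europe-west1"
```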
Are you done playing? Please run `terraform destroy` twice to clean up.
Run `./k8s-vault-gcp-start.sh` and connect to http://localhost:8080 when it's ready to accept connections (you'll see the line `Forwarding from 127.0.0.1:8080 -> 8080` in your console). Now challenges 9 and 10 should be available as well.
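Once the forwarding line appears, a quick smoke test that the application answers locally (purely illustrative):

```bash
# Expect an HTTP 200 once the port forward is active and the app has started
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
```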
When you have stopped the `k8s-vault-gcp-start.sh` script and want to resume the port forward, run `./k8s-vault-gcp-resume.sh`. Do not simply run the start script again: it would replace the secret in the vault without updating the secret-challenge application with the new secret.
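In other words, resuming looks like this rather than re-running the start script:

```bash
# Re-establish the port forward without rotating the secret in the vault
./k8s-vault-gcp-resume.sh
```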
When you're done:
- Kill the port forward.
- Run `terraform destroy` to clean up the infrastructure.
- Run `unset KUBECONFIG` to unset the KUBECONFIG env var.
- Run `rm ~/.kube/wrongsecrets` to remove the kubeconfig file.
- Run `rm terraform.tfstate*` to remove local state files.
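As a single sequence, the teardown after killing the port forward looks like this:

```bash
# Tear down the cloud infrastructure (run destroy a second time if anything lingers,
# as suggested earlier), then remove local cluster and state artefacts
terraform destroy
unset KUBECONFIG
rm ~/.kube/wrongsecrets
rm terraform.tfstate*
```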
- Does your worker node now have access as well?
- Can you easily obtain the GCP IAM role of the Node?
- Can you get the secrets in the SSM Parameter Store and Secret Manager easily? Which paths do you see?
- You should see in the configuration details of the cluster that `databaseEncryption` is `DECRYPTED` (`gcloud container clusters describe wrongsecrets-exercise-cluster --region europe-west4`). What does that mean?
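To look at just that field, you can narrow the describe output with a format expression; the filter below is one illustrative way to do it.

```bash
# Show only the database encryption state of the cluster (expect DECRYPTED here)
gcloud container clusters describe wrongsecrets-exercise-cluster \
  --region europe-west4 \
  --format='value(databaseEncryption.state)'
```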
The documentation below is auto-generated to give insight on what's created via Terraform.
Requirements:

| Name | Version |
|------|---------|
| terraform | ~> 1.1 |
| google | ~> 4.75.1 |
| google-beta | ~> 4.75.1 |
| http | ~> 3.4.0 |
| random | ~> 3.5.1 |
Providers:

| Name | Version |
|------|---------|
| google | 4.75.1 |
| google-beta | 4.75.1 |
| http | 3.4.0 |
| random | 3.5.1 |
No modules.
Inputs:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| cluster_name | The GKE cluster name | `string` | `"wrongsecrets-exercise-cluster"` | no |
| cluster_version | The GKE cluster version to use | `string` | `"1.25"` | no |
| project_id | project id | `string` | n/a | yes |
| region | The GCP region to use | `string` | `"eu-west4"` | no |
Outputs:

| Name | Description |
|------|-------------|
| gke_config | config string for the cluster credentials |
| kubernetes_cluster_host | GKE Cluster Host |
| kubernetes_cluster_name | GKE Cluster Name |
| project_id | GCloud Project ID |
| region | GCloud Region |