
Deploy Secure Spring Boot Microservices on Google Kubernetes Engine Using Terraform and Kubernetes

This is an example application accompanying the blog post Deploy Secure Spring Boot Microservices on Google GKE Using Terraform and Kubernetes on the Auth0 developer blog.

Prerequisites

To follow along you will need a Google Cloud account, an Auth0 account, and the following tools installed locally: the gcloud CLI, Terraform, the Auth0 CLI, jq, and kubectl. You will also need a JDK to build the JHipster apps (Gradle is invoked through the wrapper) and access to a container registry, such as Docker Hub, to push the application images.

Create a GKE cluster and an Auth0 application using Terraform

To deploy the stack to Google Kubernetes Engine (GKE), we need a cluster, so let's begin by creating one with Terraform.

Create a cluster

Ensure you have configured your gcloud CLI. If not, run the following command:

gcloud init
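
If Terraform should authenticate to Google Cloud with your local gcloud credentials (rather than a service account key), you may also need application-default credentials; this is an extra, setup-dependent step, shown here only as a sketch:

# Optional: provide application-default credentials for the Google Terraform provider
gcloud auth application-default login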

Edit the file terraform/auth0.tf and update the auth0 provider with your Auth0 domain URI:

# terraform/auth0.tf
provider "auth0" {
  domain        = "https://<your-auth0-domain>"
  debug         = false
}

Set your Google Cloud project ID in the file terraform/terraform.tfvars:

# terraform/terraform.tfvars
project_id = "<google-project-id>"
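
If you are unsure of the project ID, you can look it up with gcloud; a quick sketch:

# List your projects, or print the one currently configured
gcloud projects list
gcloud config get-value project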

Before we can run the scripts, we need to create a machine-to-machine application in Auth0 so that Terraform can communicate with the Auth0 Management API. This can be done using the Auth0 CLI. Note that you also need jq installed to run the commands below. After logging into the CLI with the auth0 login command, run the following commands to create the application:

# Create a machine to machine application on Auth0
export AUTH0_M2M_APP=$(auth0 apps create \
  --name "Auth0 Terraform Provider" \
  --description "Auth0 Terraform Provider M2M" \
  --type m2m \
  --reveal-secrets \
  --json | jq -r '. | {client_id: .client_id, client_secret: .client_secret}')

# Extract the client ID and client secret from the output.
export AUTH0_CLIENT_ID=$(echo $AUTH0_M2M_APP | jq -r '.client_id')
export AUTH0_CLIENT_SECRET=$(echo $AUTH0_M2M_APP | jq -r '.client_secret')

This creates the application and sets environment variables for the client ID and secret. The application then needs to be authorized to use the Auth0 Management API, which can be done with the commands below.

# Get the ID and IDENTIFIER fields of the Auth0 Management API
export AUTH0_MANAGEMENT_API_ID=$(auth0 apis list --json | jq -r 'map(select(.name == "Auth0 Management API"))[0].id')
export AUTH0_MANAGEMENT_API_IDENTIFIER=$(auth0 apis list --json | jq -r 'map(select(.name == "Auth0 Management API"))[0].identifier')
# Get the SCOPES to be authorized
export AUTH0_MANAGEMENT_API_SCOPES=$(auth0 apis scopes list $AUTH0_MANAGEMENT_API_ID --json | jq -r '.[].value' | jq -ncR '[inputs]')

# Authorize the Auth0 Terraform Provider application to use the Auth0 Management API
auth0 api post "client-grants" --data='{"client_id": "'$AUTH0_CLIENT_ID'", "audience": "'$AUTH0_MANAGEMENT_API_IDENTIFIER'", "scope":'$AUTH0_MANAGEMENT_API_SCOPES'}'
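
To verify the grant was created, you can query the Management API through the Auth0 CLI; a quick check (the exact output shape may vary):

# Should list a grant for $AUTH0_CLIENT_ID with the Management API as the audience
auth0 api get "client-grants?client_id=$AUTH0_CLIENT_ID"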

Initialize, plan, and apply the Terraform configuration:

cd terraform
# download modules and providers. Initialize state.
terraform init
# see a preview of what will be done
terraform plan -out main.tfplan
# apply the changes
terraform apply main.tfplan

Provisioning all the resources will take a while. Once the GKE cluster is ready, you will see the output variables printed on the console. Get the cluster credentials with the following command:

gcloud container clusters get-credentials example-autopilot-cluster --location us-east1

Since this is an Autopilot cluster, running kdash or kubectl get nodes won't show any cluster nodes or workloads yet.
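
For reference, this is the command mentioned above; per the note, expect it to show nothing until workloads are deployed:

# Autopilot provisions nodes on demand, so this may return no nodes yet
kubectl get nodes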

Set up OIDC authentication using Auth0

First, get the client ID and secret of the Auth0 web application created by Terraform:

# Client ID
terraform output auth0_webapp_client_id
# Client Secret
terraform output auth0_webapp_client_secret

Update kubernetes/registry-k8s/application-configmap.yml with the OIDC configuration from above, replacing <your-auth0-domain>, <client-id> and <client-secret>.

Deploy the microservice stack to GKE

You need to build and push Docker images for each app. This step is specific to the JHipster applications used in this tutorial. Navigate to each app folder (store, invoice, product) and run the following command:

./gradlew bootJar -Pprod jib -Djib.to.image=<docker-repo-uri-or-name>/<image-name>

Image names would be store, invoice, and product.
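
For example, run from the repository root, the three builds might look like this (the <docker-repo-uri-or-name> placeholder stands for your registry or Docker Hub user name):

# Build and push each image with Jib (adjust the registry/user prefix to yours)
(cd store && ./gradlew bootJar -Pprod jib -Djib.to.image=<docker-repo-uri-or-name>/store)
(cd invoice && ./gradlew bootJar -Pprod jib -Djib.to.image=<docker-repo-uri-or-name>/invoice)
(cd product && ./gradlew bootJar -Pprod jib -Djib.to.image=<docker-repo-uri-or-name>/product)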

Once the images are pushed to the Docker registry, we can deploy the stack using the handy script provided by JHipster. Navigate to the kubernetes folder created by JHipster and run the following commands:

cd kubernetes
./kubectl-apply.sh -f

Note: GKE Autopilot will print warnings if a container spec does not specify a cpu resource request.

Once the deployments are done, we must wait for the pods to reach the Running status.
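
You can watch them come up in the jhipster namespace (the namespace used by the JHipster manifests, as in the ingress command below):

# Watch pod status; press Ctrl+C once everything shows Running
kubectl get pods -n jhipster -w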

As the Ingress requires inbound traffic to use the host store.example.com, you can test the store service by adding an entry to your hosts file that maps store.example.com to the store ingress's public IP. Find that IP with:

kubectl get ingress -n jhipster
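
For example, on macOS/Linux a hosts entry might look like this (203.0.113.10 is a placeholder; use the ADDRESS reported by the command above):

# Map store.example.com to the ingress IP in /etc/hosts (placeholder IP shown)
echo "203.0.113.10 store.example.com" | sudo tee -a /etc/hosts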

Then navigate to http://store.example.com and sign in at Auth0 with the test user/password [email protected]/passpass$12$12.

Cleanup

Once you are done with the tutorial, you can delete the cluster and all the resources created using Terraform by running the following commands:

cd terraform
terraform destroy -auto-approve
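
The machine-to-machine application created earlier with the Auth0 CLI is not managed by Terraform, so terraform destroy does not remove it; if you want to clean it up as well, a sketch:

# Remove the Auth0 Terraform Provider M2M application (the CLI may ask for confirmation)
auth0 apps delete $AUTH0_CLIENT_ID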

Links

This example uses the following open source projects: JHipster, Terraform, the Auth0 CLI, and KDash.

Help

Please post any questions as comments on the blog post, or visit our Auth0 Developer Forums.

License

Apache 2.0, see LICENSE.
