The `tf.sh` script provides a convenient way to manage multiple Terraform configurations for the various components of a system. The primary components managed by this script are `infra`, `cluster`, and `nodes`, each representing a different layer of an infrastructure deployment. Similarly, the `set-mod-version.sh` script sets the source module version on all three components (`infra`, `cluster`, and `nodes`); see its README.
- Ensure that `terraform` is installed and accessible on your `PATH`.
- Ensure that `jq`, a command-line JSON processor, is installed (a quick check is shown below).
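A quick way to confirm both tools are available before running the script:

```bash
# Each of these should print a path; install whichever tool is missing.
command -v terraform jq
```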
The script expects the following directory structure:
```
deploy
├── README.md
├── meta.sh
├── set-mod-version.sh
├── terraform
│   ├── cluster
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── cluster.tfvars
│   ├── infra
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── infra.tfvars
│   ├── nodes
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── nodes.tfvars
└── tf.sh
```
- Each subdirectory under `terraform` (e.g., `infra`, `cluster`, `nodes`) should contain its respective Terraform configuration.
- Each component is expected to have a corresponding `.tfvars` file in the `terraform` directory. For instance, the `infra` component should have a `terraform/infra.tfvars` file.
- Each component's state, and its output when the `output` command is invoked, is saved in the `terraform` directory (an example follows the listing below):
```
└── deploy/terraform
    ├── cluster.outputs
    ├── cluster.tfstate
    ├── infra.outputs
    ├── infra.tfstate
    ├── nodes.outputs
    └── nodes.tfstate
```
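For example, after invoking the `output` command for the cluster, the result is saved next to its state file (the exact format of the `.outputs` file depends on the script):

```bash
./tf.sh cluster output          # prints the outputs and records them
cat terraform/cluster.outputs   # the saved copy
```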
See the README for more details.
To use the script, invoke it with the desired component and command:

```bash
./tf.sh <component> <command>
```
- `component`: The component parameter refers to the specific part of your architecture that you wish to target with a command. Supported components are `infra`, `cluster`, `nodes`, and `all`. Selecting `all` executes the command across `infra`, `cluster`, and `nodes`. The script uses the component parameter to identify the corresponding Terraform directory and to name both the Terraform variables file (`terraform/${component}.tfvars`) and the Terraform state file (`terraform/${component}.tfstate`). If you create a custom folder named `mydir` containing your Terraform configuration, and set up a Terraform variables file (`terraform/mydir.tfvars`) and, if one exists, a state file (`terraform/mydir.tfstate`), you can then use `tf.sh` to run Terraform commands against it, for example `./tf.sh mydir plan` (see the sketch after this list). Note that a custom directory such as `mydir` will not be included when using the `all` value for components.
- `command`: Supported commands include:
  - `init`: Initializes the Terraform configurations.
  - `plan`: Shows the Terraform execution plan.
  - `apply`: Applies the Terraform configurations.
  - `destroy`: Destroys the Terraform resources.
  - `output`: Shows the output values of your configurations.
  - `output <myoutput>`: Shows a specific output value.
  - `refresh`: Refreshes the Terraform state file.
  - `plan_out`: Generates a plan and writes it to `terraform/${component}-terraform.plan`.
  - `apply_plan`: Applies the plan located at `terraform/${component}-terraform.plan`.
  - `roll_nodes`: (Rollout nodes) Runs `apply` one at a time on the `aws_eks_node_group.node_groups` resources.
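As a sketch of the custom-component behaviour described above (`mydir` and its contents are purely illustrative):

```bash
# Hypothetical custom component "mydir": its own configuration directory plus a
# matching variables file, both named after the component.
mkdir -p terraform/mydir        # put the component's *.tf files here
touch terraform/mydir.tfvars    # variables file expected by tf.sh
./tf.sh mydir plan              # tf.sh resolves the directory, tfvars and state
                                # file (terraform/mydir.tfstate) from the name
```

Remember that a custom component like `mydir` is not covered by `./tf.sh all <command>`.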
- To preview the execution plan of the cluster:

  ```bash
  ./tf.sh cluster plan
  ```

- To create all components:

  ```bash
  ./tf.sh all apply
  ```

- To destroy all components:

  ```bash
  ./tf.sh all destroy
  ```

- To perform a plan and write it to a file (the plan file will be stored at `terraform/${component}-terraform.plan`):

  ```bash
  ./tf.sh cluster plan_out
  ```

- To apply a previously generated plan stored at `terraform/${component}-terraform.plan`, in this example `terraform/cluster-terraform.plan` (see the combined flow after this list):

  ```bash
  ./tf.sh cluster apply_plan
  ```
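The two plan-file commands combine into a review-then-apply flow; using the `cluster` component as an example:

```bash
./tf.sh cluster plan_out     # writes terraform/cluster-terraform.plan
# review the generated plan, then apply exactly what was planned:
./tf.sh cluster apply_plan   # applies terraform/cluster-terraform.plan
```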
For some frequently performed operations, follow the steps outlined below.

To bootstrap the module, see the repo's README.

To update the source module version on the components (`infra`, `cluster`, and `nodes`), use the `set-mod-version.sh` script (see its README).
In order to update Kubernetes, we need to update both the `cluster` and the `nodes`:
- Set the `eks.k8s_version` variable to the desired version (it can be at most two minor versions ahead).
- Update the cluster:
  - Plan and review the changes:

    ```bash
    ./tf.sh cluster plan
    ```

  - Apply the changes:

    ```bash
    ./tf.sh cluster apply
    ```

- Update the nodes: given that the nodes source the Kubernetes version from `eks`, we just need to plan and apply (see the combined flow after this list).
  - Plan and review the changes:

    ```bash
    ./tf.sh nodes plan
    ```

  - Apply the changes:

    ```bash
    ./tf.sh nodes apply
    ```
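Put together, the upgrade flow looks like this (assuming `eks.k8s_version` has already been set to the target version in the cluster's variables file):

```bash
./tf.sh cluster plan    # review the version bump
./tf.sh cluster apply
./tf.sh nodes plan      # the nodes source the Kubernetes version from eks
./tf.sh nodes apply
```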
To update node AMIs, given that the `nodes` module looks for the latest AMI, we just need to plan and apply:

- Plan and review the changes:

  ```bash
  ./tf.sh nodes plan
  ```

⚠️ If there is a large number of `node_groups`, or if you just want to update one `node_group` at a time:

```bash
./tf.sh nodes roll_nodes
```

Otherwise, to update all `node_groups` in parallel:

```bash
./tf.sh nodes apply
```
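To check what a node group is actually running (for example, after a rollout), the AWS CLI can report its release version; the cluster and node group names below are placeholders:

```bash
aws eks describe-nodegroup \
  --cluster-name <cluster name> \
  --nodegroup-name <node group name> \
  --query 'nodegroup.releaseVersion' --output text
```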
If Kubernetes has been upgraded without corresponding VPC CNI addon version updates, the addon might require multiple jumps, since VPC CNI only allows a single minor version change per upgrade:

> Updating VPC-CNI can only go up or down 1 minor version at a time

If the next minor version is not supported, it may be necessary to delete the VPC CNI addon while preserving the actual resources:

```bash
aws eks delete-addon --cluster-name <cluster name> --addon-name vpc-cni --preserve
```
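To plan the intermediate jumps, the available `vpc-cni` versions for a given Kubernetes version can be listed with the AWS CLI (the version below is a placeholder):

```bash
aws eks describe-addon-versions \
  --addon-name vpc-cni \
  --kubernetes-version 1.29 \
  --output json | jq -r '.addons[].addonVersions[].addonVersion'
```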
Provision FIPS-compliant resources by utilizing the `use_fips_endpoint` variable in the modules.

By setting `use_fips_endpoint: true`, the modules will:

- Set the Terraform AWS provider `use_fips_endpoint` attribute, ensuring API calls to AWS are made via the FIPS endpoints.
- Set the shell environment variable `AWS_USE_FIPS_ENDPOINT=true` on `local-exec` resources.
- Enable FIPS mode (`fipsMode: enabled`) during the installation of Calico (see documentation).
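A minimal rollout sketch, assuming `use_fips_endpoint` is set per component in the `.tfvars` files (the exact file and syntax depend on the modules):

```bash
# Assumption: each component's tfvars carries the use_fips_endpoint flag.
grep -H 'use_fips_endpoint' terraform/*.tfvars   # confirm the flag is set
./tf.sh all plan                                 # review the endpoint changes
./tf.sh all apply
```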