This is what PurpleTeam-Labs uses to set up systems to attack, and to test that PurpleTeam is working as we think it should. Feel free to run it yourself if you are taking PurpleTeam for a test drive, or just want to attack some SUTs to hone your red teaming skills.
- Set-up email account
As Root Account:
- Change Payment Currency Preference to your currency
- Configure Security Challenge Questions
- Activate IAM User and Role Access to Billing Information
- Enable MFA
- Record account ID and Canonical User ID in password manager
- Disable Security Token Service (STS) for all regions we don't need
- Launch Cost Explorer
- Billing Preferences: Turn on:
- Receive PDF Invoice By Email
- Receive Free Tier Usage Alerts - add email address
- Receive Billing Alerts
- Create a Cost Budget for Monthly with alerts:
- Threshold: 50% of budgeted amount, Trigger: Actual, Email recipients: you
- Threshold: 100% of budgeted amount, Trigger: Forecasted, Email recipients: you, 2IC
- Threshold: 100% of budgeted amount, Trigger: Actual, Email recipients: you, 2IC
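The same budget and alerts can also be scripted with the AWS CLI's `aws budgets create-budget`. The fragments below are a sketch with placeholder name, amount, and email; only the 50% / Actual alert is shown, and the other two alerts follow the same shape with `"Threshold": 100` and a `NotificationType` of `FORECASTED` or `ACTUAL`:

```json
{
  "BudgetName": "monthly-cost-budget",
  "BudgetLimit": { "Amount": "100", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
```

```json
[
  {
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 50,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [
      { "SubscriptionType": "EMAIL", "Address": "you@example.com" }
    ]
  }
]
```

With those saved as `budget.json` and `notifications.json`: `aws budgets create-budget --account-id [your-account-id] --budget file://budget.json --notifications-with-subscribers file://notifications.json`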
- Create User groups
- Create Permissions/Policies - update account IDs in source-controlled policies before applying
- Add policies to respective Groups
- Add IAM user
- Add user to group(s)
- Assign MFA
- Add CLI IAM user
- Add user to group(s)
- Create Access keys for devices that need access
- Apply Access keys to said devices
- Update environment variables in the `.env` of the purpleteam-iac-sut project. The easiest way to do this is to rename `.env.example` to `.env` and replace the dummy values
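A minimal sketch of that rename-and-replace step follows. The `.env.example` here is a stand-in seeded by the sketch itself (the real one ships with the repository), and the variable name is made up for illustration:

```shell
# Seed a stand-in .env.example; the real file ships with purpleteam-iac-sut
# and the variable name below is hypothetical.
printf 'AWS_PROFILE=<your_aws_profile_here>\n' > .env.example
# Rename (or copy) it to .env ...
cp .env.example .env
# ... and replace the angle-bracket dummy value with a real one.
sed -i 's/<your_aws_profile_here>/purpleteam-iac-sut-cli/' .env
cat .env
```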
Make sure this is set to 2000 per region where we require certificates; submit a support ticket if required. Although the documentation says it's already set to this, by default it's actually only 20, which means you'll run out quickly.
- Region: Asia Pacific (Sydney)
  - Limit: Number of ACM certificates
  - New limit value: 2000
- Region: US East (Northern Virginia)
  - Limit: Number of ACM certificates
  - New limit value: 2000
Log in as root user, navigate to ECS -> Account Settings... For each region we operate in:
Setting scope: Set for root (override account default)
Check the following check boxes:
- Container instance
- Service
- Task
Doing the above should also automatically do the following. Confirm.
Setting scope: Set for specific IAM user
IAM user with console only access. Let's call this purpleteam-iac-sut
Check the following check boxes:
- Container instance
- Service
- Task
Setting scope: Set for specific IAM user
IAM user with cli access only. Let's call this purpleteam-iac-sut-cli
Check the following check boxes:
- Container instance
- Service
- Task
- Google search for "Unable to register as a container instance with ECS: InvalidParameterException: Long arn format must be enabled for tagging", which was produced by `terraform apply`
- hashicorp/terraform-provider-aws#7373
- hashicorp/terraform-provider-aws#6481
- Google search for "ecs-agent.log Unable to register as a container instance with ECS: InvalidParameterException: Long arn", which is what the ecs-agent.log was telling us
- Migrating your Amazon ECS deployment to the new ARN and resource ID format
ECS container instances not connected to the cluster
Run `systemctl status ecs` to check the status of the ECS agent.
Install boto3
The `nw` root uses python3 to run `getNlbPrivateIps.py`, and pip3 needs to be installed in order to install boto3.
- Install pip3 on Linux Mint:
- Confirm you have python3 installed by running `python3 --version`
- Install pip3 by running `sudo apt update && sudo apt install python3-pip`
- Run `pip3 --version` to confirm you now have pip3 installed
- Install boto3 with:
pip3 install boto3
The architecture of this Terraform project was inspired by this Terraform talk. The how we organize terraform code at 2nd watch blog post also played a small part.
The implementation of this Terraform project was inspired by @freedomofkeima with the terraform-docker-ecs project.
- Download the Terraform zip file, the checksums, and sig file
- Import the hashicorp public GPG key (first time installing only)
- Verify the checksum file with the sig
- Verify the checksum in the checksums file matches the binary
Steps 2, 3, and 4 are detailed here.
`sudo unzip ~/Downloads/terraform_[version]_linux_amd64.zip -d /opt/`
`sudo ln -s /opt/terraform /usr/local/bin/terraform`
The HashiCorp GPG pub key is on hashicorp, and on keybase.
Or install on Linux via the package sources. Details here.
Install Terragrunt and configure
Using the Manual install, similar to installing Terraform.
In the `roots` directory:
- Locate and rename the `common_vars.example.yaml` file to `common_vars.yaml` and configure the values within
- You will need a domain and its DNS configured in CloudFlare
- For the first (default) SUT we are using (NodeGoat)
- Chetan Karande maintains a hosted version running at https://nodegoat.herokuapp.com/
- Once this project is `apply`ed you should be able to see NodeGoat running at https://nodegoat.sut.<your-domain-name.com>. In the case of purpleteam-labs, that will be https://nodegoat.sut.purpleteam-labs.com. Currently we only have this instance running during our testing
- Add as many or few SUTs as you require
- Locate and rename the `terragrunt.example.hcl` file to `terragrunt.hcl` and configure the values within
In each root directory add and configure the following file if it doesn't exist:
terragrunt.hcl
Optional: Set up the terraform oh-my-zsh plugin. This will give you a prompt that displays which Terraform workspace is selected. Ask Kim about this if unsure.
Assuming the aws cli has been configured
Each Terraform root's aws provider (in the `main.tf` file, or each specific root's `variables.tf`) needs to specify the correct aws `profile`. If this is not correct, you could clobber or destroy the wrong set of infrastructure.
This needs to be configured in a `.env` file in the top-level directory of this repository. Replace the angle-bracket dummy values of the `.env.example` file you renamed to `.env` with values to suit your environment.
The above values are read into all Terraform roots that specify the variables. This can be seen in the `extra_arguments "custom_env_vars_from_file"` block within the `terraform` block of the `terragrunt.hcl` in the `roots` directory.
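That wiring looks roughly like the following Terragrunt sketch; the variable name is hypothetical, not necessarily one the project uses:

```hcl
terraform {
  extra_arguments "custom_env_vars_from_file" {
    commands = ["plan", "apply", "destroy"]
    # Hypothetical variable: pass a value exported from .env through to
    # Terraform as a TF_VAR_, with an empty-string fallback.
    env_vars = {
      TF_VAR_cloudflare_email = get_env("TF_VAR_cloudflare_email", "")
    }
  }
}
```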
The `.env` file is also consumed within the `buildAndDeployCloudImages.sh` file that is executed as an npm script, where the variables from the `.env` file are exported to the current shell.
Each root directory requires a `terraform.tfvars` file to initialise the sibling `variables.tf` variables. Don't worry: if you miss this step, Terraform will inform you.
Create tokens for all devices that need to work with the remote state in Terraform Cloud:
- Create a `~/.terraformrc` file for each device (desktop, laptop)
- Add the specific device's token
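The `~/.terraformrc` uses Terraform's standard CLI-configuration credentials block; the token value below is a placeholder for the device-specific token generated in Terraform Cloud:

```hcl
credentials "app.terraform.io" {
  token = "replace-with-this-devices-terraform-cloud-token"
}
```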
From each root within the Terraform project run ~~`terraform init`~~ `terragrunt init`, or just watch this video and do likewise.
If you run ~~`terraform plan`~~ `terragrunt plan` and receive an error similar to:

`Error: Failed to instantiate provider "aws" to obtain schema: fork/exec .../purpleteam-iac-sut/tf/roots/2_nw/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.24.0_x4: permission denied`

This is probably because the executable bit is not set on `terraform-provider-aws_v2.24.0_x4`.
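A minimal reproduction of the fix, using a dummy file in place of the real provider binary (the directory layout mirrors the error message above):

```shell
# Stand-in for the provider plugin that lost its executable bit.
mkdir -p .terraform/plugins/linux_amd64
touch .terraform/plugins/linux_amd64/terraform-provider-aws_v2.24.0_x4
# The fix: turn the executable bit on.
chmod +x .terraform/plugins/linux_amd64/terraform-provider-aws_v2.24.0_x4
# Confirm.
test -x .terraform/plugins/linux_amd64/terraform-provider-aws_v2.24.0_x4 && echo 'executable bit set'
```

In a real root you would run only the `chmod +x` against the path from the error.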
When creating a new Terraform root (or possibly even just a workspace), make sure the Execution Mode in Terraform Cloud is set to Local rather than Remote in the General Settings of the new workspace in the Web UI.
This is required to push images to ECR.
When we did this, the package wasn't available for our distro, so we just:
- Download the latest binary
- Checksum it
- Rename it to `docker-credential-ecr-login`
- Put it in `/opt/` and symlink it to `/usr/local/bin/docker-credential-ecr-login`
- You'll also need to add the following to `~/.docker/config.json`:

```json
{
  "credHelpers": {
    "your_aws_account_id_here.dkr.ecr.your_aws_region_here.amazonaws.com": "ecr-login"
  }
}
```
The above details and more can be found here. If you have issues authenticating with ECR, follow these steps.
`sudo apt-get install jq`
This is used in the `buildAndDeployCloudImages.sh` script in the top-level directory of this repository.
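As a quick sanity check that jq is installed, and to show the kind of field extraction the script relies on (the JSON payload here is fabricated, not actual script output):

```shell
# Pull a single field out of a fabricated AWS-CLI-style JSON payload.
echo '{"imageDetails": [{"imageTags": ["latest"]}]}' \
  | jq -r '.imageDetails[0].imageTags[0]'
# → latest
```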
The following are the Terraform roots in this project and the order in which they should be applied:
- `static` (IAM roles, policies, ECR repository creation)
  At some point before you `apply` the `contOrc` root, make sure you have built the Docker images you want to host and pushed them to their respective ECR repositories (created in the `static` root). To do this:
  - You will need to have cloned the git repositories that you want hosted (NodeGoat for this example) in Docker containers to the same level of directory that this git repository is cloned to; you can see this location specified in the npm scripts
  - From the top level directory of this repository, run the following command:
    `npm run buildAndDeploySUTCloudImages`
- `nw` (network, VPC, load balancer, api certificates, api subdomain)
- `contOrc` (SSH pub keys, EC2 CloudWatch log groups, ECS, autoscaling)
- `api` (SUT APIs (Api Gateway), CloudWatch log groups, VpcLink, SUT subdomain(s))
Each root's dependencies are defined in its `terragrunt.hcl`.
The roots applied earliest require the least amount of ongoing change, making for faster iterative development of the later roots. For example, the `static` root hardly ever needs re-`apply`ing, and the `nw` root usually only needs re-`apply`ing when a SUT is added/removed, or with nw-related modifications.
When we add or remove a SUT, the `nw` root onwards will need to be re-applied.
We use Terraform Cloud to store our state remotely so each developer can collaborate with a single source of state.
Getting AWS permissions right can be a pain. This is how we do it.
1. Create a policy that `Allow`s all actions of the AWS service (`ec2:*` for example) specific to the Terraform resource you want to create. Now assuming you have all the permissions to do what Terraform needs to do...
2. From the tf root within the project, run `terraform plan` then `terraform apply`
3. Browse to the CloudTrail Event history, and wait for the logs to come through; this can take a while (approx. 15-20 minutes). Filter on a User name of [your-aws-cli-profile]; you can also add another filter of Event source for the resource you want to see. Now you can copy the Event names of each log event, along with the Event source prefix (`ec2` for example), and add them to the `Allow`ed `Action` array in the policy you want to modify. Then remove the wildcard action (`ec2:*` for example)
4. Try step 2 again; you may need to run `terraform destroy` first. If you get an error with an `Encoded authorization failure message`:
   - You will need to add the `sts:DecodeAuthorizationMessage` action to a policy that your user has
   - Then to retrieve the decoded error, run the following command:
     `aws sts decode-authorization-message --encoded-message [encodedmessage] --profile [your-aws-cli-profile]`
     Details on this and interpreting the output can be found under the heading "Using DecodeAuthorizationMessage to troubleshoot permissions" here
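The response from `aws sts decode-authorization-message` wraps the decoded message as a JSON-encoded string in a `DecodedMessage` field; jq's `fromjson` unwraps it. The payload below is fabricated to show the shape:

```shell
# Simulate the CLI response and unwrap the embedded JSON string.
echo '{"DecodedMessage": "{\"allowed\": false}"}' \
  | jq -r '.DecodedMessage | fromjson'
```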
Details for more granular policies:
- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html
- hashicorp/terraform#2834
To make reading logs easier, set the time zone to local.
Please open an issue to discuss the proposed change before submitting a pull request.
Copyright Kim Carter, Licensed under MIT.