PLAT-8798 Option to create CF stack to adjust the fsx fs storage capacity based on usage params (#265)
miguelhar authored Sep 9, 2024
1 parent 216bd85 commit 9632c7c
Showing 13 changed files with 406 additions and 9 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -61,7 +61,7 @@ repos:
         args:
           - "--args=--compact"
           - "--args=--quiet"
-          - "--args=--skip-check CKV_CIRCLECIPIPELINES_2,CKV_CIRCLECIPIPELINES_6,CKV2_AWS_11,CKV2_AWS_12,CKV2_AWS_6,CKV_AWS_109,CKV_AWS_111,CKV_AWS_135,CKV_AWS_144,CKV_AWS_145,CKV_AWS_158,CKV_AWS_18,CKV_AWS_184,CKV_AWS_19,CKV_AWS_21,CKV_AWS_66,CKV_AWS_88,CKV2_GHA_1,CKV_AWS_163,CKV_AWS_39,CKV_AWS_38,CKV2_AWS_61,CKV2_AWS_62,CKV_AWS_136,CKV_AWS_329,CKV_AWS_338,CKV_AWS_339,CKV_AWS_341,CKV_AWS_356,CKV2_AWS_19,CKV2_AWS_5,CKV_AWS_150,CKV_AWS_123,CKV2_AWS_65,CKV2_AWS_67,CKV2_AWS_57,CKV_AWS_149"
+          - "--args=--skip-check CKV_CIRCLECIPIPELINES_2,CKV_CIRCLECIPIPELINES_6,CKV2_AWS_11,CKV2_AWS_12,CKV2_AWS_6,CKV_AWS_109,CKV_AWS_111,CKV_AWS_135,CKV_AWS_144,CKV_AWS_145,CKV_AWS_158,CKV_AWS_18,CKV_AWS_184,CKV_AWS_19,CKV_AWS_21,CKV_AWS_66,CKV_AWS_88,CKV2_GHA_1,CKV_AWS_163,CKV_AWS_39,CKV_AWS_38,CKV2_AWS_61,CKV2_AWS_62,CKV_AWS_136,CKV_AWS_329,CKV_AWS_338,CKV_AWS_339,CKV_AWS_341,CKV_AWS_356,CKV2_AWS_19,CKV2_AWS_5,CKV_AWS_150,CKV_AWS_123,CKV2_AWS_65,CKV2_AWS_67,CKV2_AWS_57,CKV_AWS_149,CKV_AWS_117,CKV_AWS_116,CKV_AWS_173,CKV_AWS_115,CKV_AWS_7,CKV_AWS_124"
       - id: terraform_trivy
         args:
           - "--args=--severity=HIGH,CRITICAL"
16 changes: 13 additions & 3 deletions bin/pre-commit/check-aws-partition.sh
@@ -1,13 +1,24 @@
-#! /usr/bin/env bash
+#!/usr/bin/env bash
 
 exec 1>&2
 
 check_aws_partition() {
   declare -A failed_files
+  exclude_patterns=("policy/AWSLambdaExecute")
 
   for file in "$@"; do
     if grep -q "arn:aws" "${file}"; then
-      failed_files["${file}"]=1
+      skip_file=false
+      for pattern in "${exclude_patterns[@]}"; do
+        if grep -q "$pattern" "${file}"; then
+          skip_file=true
+          break
+        fi
+      done
+
+      if [ "$skip_file" = false ]; then
+        failed_files["${file}"]=1
+      fi
     fi
   done
 
@@ -19,7 +30,6 @@ check_aws_partition() {
   fi
 
   return 0
-
 }
 
 check_aws_partition "$@"
Expand Down
25 changes: 25 additions & 0 deletions examples/tfvars/netapp.tfvars
@@ -0,0 +1,25 @@
+deploy_id        = "plantest0014"
+region           = "us-west-2"
+ssh_pvt_key_path = "domino.pem"
+
+default_node_groups = {
+  compute = {
+    availability_zone_ids = ["usw2-az1", "usw2-az2"]
+  }
+  gpu = {
+    availability_zone_ids = ["usw2-az1", "usw2-az2"]
+  }
+  platform = {
+    "availability_zone_ids" = ["usw2-az1", "usw2-az2"]
+  }
+}
+
+storage = {
+  filesystem_type = "netapp"
+  netapp = {
+    storage_capacity_autosizing = {
+      enabled = true
+    }
+  }
+
+}
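This example enables autosizing with defaults only. Per the storage input documented in modules/infra/README.md below, the autosizing object also exposes a usage threshold, a growth percentage, and a notification address. A hypothetical fuller variant — the email address is illustrative; the other values are the documented defaults:

    storage = {
      filesystem_type = "netapp"
      netapp = {
        storage_capacity_autosizing = {
          enabled                    = true
          threshold                  = 70                # used-capacity % that triggers a resize (default)
          percent_capacity_increase  = 30                # % to grow by; minimum increase is 10% (default)
          notification_email_address = "ops@example.com" # hypothetical address for alarm notifications
        }
      }
    }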
32 changes: 32 additions & 0 deletions modules/iam-bootstrap/bootstrap-2.json
@@ -89,6 +89,38 @@
         "secretsmanager:*"
       ],
       "Resource": ["*"]
     },
+    {
+      "Sid": "CloudformationUngated",
+      "Effect": "Allow",
+      "Action": [
+        "cloudformation:*"
+      ],
+      "Resource": ["*"]
+    },
+    {
+      "Sid": "SNSUngated",
+      "Effect": "Allow",
+      "Action": [
+        "sns:*"
+      ],
+      "Resource": ["*"]
+    },
+    {
+      "Sid": "CloudwatchUngated",
+      "Effect": "Allow",
+      "Action": [
+        "cloudwatch:*"
+      ],
+      "Resource": ["*"]
+    },
+    {
+      "Sid": "S3fsxAutoSizing",
+      "Effect": "Allow",
+      "Action": [
+        "s3:GetObject"
+      ],
+      "Resource": "arn:${partition}:s3:::solution-references-*"
+    }
   ]
 }
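The ${partition} placeholder indicates this policy document is rendered as a template, which is what keeps the partition pre-commit hook above satisfied. A minimal sketch of how such rendering typically looks in Terraform — the data source is real, but the file path and local name are assumptions, not this module's actual code:

    # Hypothetical rendering of the bootstrap policy template.
    data "aws_partition" "current" {}

    locals {
      # Substitutes "aws", "aws-us-gov", or "aws-cn" for ${partition}.
      bootstrap_policy = templatefile("${path.module}/bootstrap-2.json", {
        partition = data.aws_partition.current.partition
      })
    }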
2 changes: 1 addition & 1 deletion modules/infra/README.md
@@ -64,7 +64,7 @@
| <a name="input_network"></a> [network](#input\_network) | vpc = {<br> id = Existing vpc id, it will bypass creation by this module.<br> subnets = {<br> private = Existing private subnets.<br> public = Existing public subnets.<br> pod = Existing pod subnets.<br> }), {})<br> }), {})<br> network\_bits = {<br> public = Number of network bits to allocate to the public subnet. i.e /27 -> 32 IPs.<br> private = Number of network bits to allocate to the private subnet. i.e /19 -> 8,192 IPs.<br> pod = Number of network bits to allocate to the private subnet. i.e /19 -> 8,192 IPs.<br> }<br> cidrs = {<br> vpc = The IPv4 CIDR block for the VPC.<br> pod = The IPv4 CIDR block for the Pod subnets.<br> }<br> use\_pod\_cidr = Use additional pod CIDR range (ie 100.64.0.0/16) for pod networking. | <pre>object({<br> vpc = optional(object({<br> id = optional(string, null)<br> subnets = optional(object({<br> private = optional(list(string), [])<br> public = optional(list(string), [])<br> pod = optional(list(string), [])<br> }), {})<br> }), {})<br> network_bits = optional(object({<br> public = optional(number, 27)<br> private = optional(number, 19)<br> pod = optional(number, 19)<br> }<br> ), {})<br> cidrs = optional(object({<br> vpc = optional(string, "10.0.0.0/16")<br> pod = optional(string, "100.64.0.0/16")<br> }), {})<br> use_pod_cidr = optional(bool, true)<br> })</pre> | `{}` | no |
| <a name="input_region"></a> [region](#input\_region) | AWS region for the deployment | `string` | n/a | yes |
| <a name="input_ssh_pvt_key_path"></a> [ssh\_pvt\_key\_path](#input\_ssh\_pvt\_key\_path) | SSH private key filepath. | `string` | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> filesystem\_type = File system type(netapp\|efs)<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> netapp = {<br> deployment\_type = netapp ontap deployment type,('MULTI\_AZ\_1', 'MULTI\_AZ\_2', 'SINGLE\_AZ\_1', 'SINGLE\_AZ\_2')<br> storage\_capacity = Filesystem Storage capacity<br> throughput\_capacity = Filesystem throughput capacity<br> automatic\_backup\_retention\_days = How many days to keep backups<br> daily\_automatic\_backup\_start\_time = Start time in 'HH:MM' format to initiate backups<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> enable\_remote\_backup = Enable tagging required for cross-account backups<br> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br> }<br> } | <pre>object({<br> filesystem_type = optional(string, "efs")<br> efs = optional(object({<br> access_point_path = optional(string, "/domino")<br> backup_vault = optional(object({<br> create = optional(bool, true)<br> force_destroy = optional(bool, true)<br> backup = optional(object({<br> schedule = optional(string, "0 12 * * ? *")<br> cold_storage_after = optional(number, 35)<br> delete_after = optional(number, 125)<br> }), {})<br> }), {})<br> }), {})<br> netapp = optional(object({<br> deployment_type = optional(string, "SINGLE_AZ_1")<br> storage_capacity = optional(number, 1024)<br> throughput_capacity = optional(number, 128)<br> automatic_backup_retention_days = optional(number, 90)<br> daily_automatic_backup_start_time = optional(string, "00:00")<br> }), {})<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {})<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {}),<br> enable_remote_backup = optional(bool, false)<br> costs_enabled = optional(bool, true)<br> })</pre> | `{}` | no |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> filesystem\_type = File system type(netapp\|efs)<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> netapp = {<br> deployment\_type = netapp ontap deployment type,('MULTI\_AZ\_1', 'MULTI\_AZ\_2', 'SINGLE\_AZ\_1', 'SINGLE\_AZ\_2')<br> storage\_capacity = Filesystem Storage capacity<br> throughput\_capacity = Filesystem throughput capacity<br> automatic\_backup\_retention\_days = How many days to keep backups<br> daily\_automatic\_backup\_start\_time = Start time in 'HH:MM' format to initiate backups<br><br> storage\_capacity\_autosizing = Options for the FXN automatic storage capacity increase, cloudformation template<br> enabled = Enable automatic storage capacity increase.<br> threshold = Used storage capacity threshold.<br> percent\_capacity\_increase = The percentage increase in storage capacity when used storage exceeds<br> LowFreeDataStorageCapacityThreshold. Minimum increase is 10 %.<br> notification\_email\_address = The email address for alarm notification.<br> }))<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> enable\_remote\_backup = Enable tagging required for cross-account backups<br> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br> }<br> } | <pre>object({<br> filesystem_type = optional(string, "efs")<br> efs = optional(object({<br> access_point_path = optional(string, "/domino")<br> backup_vault = optional(object({<br> create = optional(bool, true)<br> force_destroy = optional(bool, true)<br> backup = optional(object({<br> schedule = optional(string, "0 12 * * ? *")<br> cold_storage_after = optional(number, 35)<br> delete_after = optional(number, 125)<br> }), {})<br> }), {})<br> }), {})<br> netapp = optional(object({<br> deployment_type = optional(string, "SINGLE_AZ_1")<br> storage_capacity = optional(number, 1024)<br> throughput_capacity = optional(number, 128)<br> automatic_backup_retention_days = optional(number, 90)<br> daily_automatic_backup_start_time = optional(string, "00:00")<br> storage_capacity_autosizing = optional(object({<br> enabled = optional(bool, false)<br> threshold = optional(number, 70)<br> percent_capacity_increase = optional(number, 30)<br> notification_email_address = optional(string, "")<br> }), {})<br> }), {})<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {})<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {}),<br> enable_remote_backup = optional(bool, false)<br> costs_enabled = optional(bool, true)<br> })</pre> | `{}` | no |
| <a name="input_tags"></a> [tags](#input\_tags) | Deployment tags. | `map(string)` | `{}` | no |
| <a name="input_use_fips_endpoint"></a> [use\_fips\_endpoint](#input\_use\_fips\_endpoint) | Use aws FIPS endpoints | `bool` | `false` | no |
| <a name="input_vpn_connection"></a> [vpn\_connection](#input\_vpn\_connection) | create = Create a VPN connection.<br> shared\_ip = Customer's shared IP Address.<br> cidr\_block = CIDR block for the customer's network. | <pre>object({<br> create = optional(bool, false)<br> shared_ip = optional(string, "")<br> cidr_block = optional(string, "")<br> })</pre> | `{}` | no |