flag for deploy to isolated subnet
Ilia Chibaev committed Aug 31, 2024
1 parent 65129b8 commit 0d08e5b
Showing 10 changed files with 82 additions and 44 deletions.
5 changes: 3 additions & 2 deletions cdk.json
@@ -5,6 +5,7 @@
"eks.default.min-size": 1,
"eks.default.max-size": 2,
"eks.default.desired-size": 1,
"eks.default.private-cluster": "false",
"eks.default.isolated-cluster": "false"
}
}
}
12 changes: 6 additions & 6 deletions docs/cluster-providers/asg-cluster-provider.md
@@ -2,7 +2,7 @@

The `AsgClusterProvider` allows you to provision an EKS cluster which leverages [EC2 Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html) (ASGs) for compute capacity. An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management.

## Usage

```typescript
const props: AsgClusterProviderProps = {
@@ -20,7 +20,7 @@ new blueprints.EksBlueprint(scope, { id: 'blueprint', [], [], clusterProvider })

## Configuration

`AsgClusterProvider` supports the following configuration options.

| Prop | Description |
|-------------------|-------------|
@@ -33,18 +33,19 @@ new blueprints.EksBlueprint(scope, { id: 'blueprint', [], [], clusterProvider })
| machineImageType | Machine Image Type for the Autoscaling Group.
| updatePolicy | Update policy for the Autoscaling Group.
| vpcSubnets | The subnets for the cluster.
| privateCluster | If `true` Kubernetes API server is private.
| tags | Tags to propagate to Cluster.

There should be public and private subnets for EKS cluster to work. For more information see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html).

Configuration can also be supplied via context variables (specify in cdk.json, cdk.context.json, ~/.cdk.json or pass with -c command line option):

- `eks.default.min-size`
- `eks.default.max-size`
- `eks.default.desired-size`
- `eks.default.instance-type`
- `eks.default.private-cluster`
- `eks.default.isolated-cluster`

Configuration of the EC2 parameters through context parameters makes sense if you would like to apply default configuration to multiple clusters without the need to explicitly pass `AsgClusterProviderProps` to each cluster blueprint.
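The precedence between an explicit prop and one of these context keys can be sketched as follows. This is an illustrative helper, not the blueprints library's actual API; note that `cdk.json` stores these boolean flags as strings (`"false"`), so the string form has to be normalized:

```typescript
// Sketch (assumed names, not the library API): combine an explicit prop,
// a context key such as "eks.default.isolated-cluster", and a default.
type Context = Record<string, string | boolean | undefined>;

function resolveFlag(
  prop: boolean | undefined,
  context: Context,
  key: string,
  fallback = false
): boolean {
  if (prop !== undefined) return prop;    // explicit prop wins over context
  const raw = context[key];
  if (raw === undefined) return fallback; // neither set: use the default
  return raw === true || raw === "true";  // normalize "true"/"false" strings
}

const ctx: Context = { "eks.default.isolated-cluster": "false" };
console.log(resolveFlag(undefined, ctx, "eks.default.isolated-cluster")); // false
console.log(resolveFlag(true, ctx, "eks.default.isolated-cluster"));      // true
```

A prop passed in `AsgClusterProviderProps` therefore always overrides the repo-wide context value.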

@@ -67,4 +68,3 @@ const props: AsgClusterProviderProps = {
const clusterProvider = new blueprints.AsgClusterProvider(props);
new blueprints.EksBlueprint(scope, { id: 'blueprint', teams, addOns, clusterProvider });
```

23 changes: 12 additions & 11 deletions docs/cluster-providers/generic-cluster-provider.md
@@ -1,6 +1,6 @@
# Generic Cluster Provider

The `GenericClusterProvider` allows you to provision an EKS cluster which leverages one or more [EKS managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) (MNGs) or one or more [EC2 Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html) for its compute capacity. Users can also configure multiple Fargate profiles along with the EC2-based compute capacity.

Today it is not possible for an Amazon EKS Cluster to propagate tags to EC2 instance worker nodes directly when you create an EKS cluster. You can create a launch template with custom tags on `managedNodeGroups` with `GenericClusterProvider` as shown in `mng2-launchtemplate`. This will allow you to propagate custom tags to your EC2 instance worker nodes.

@@ -15,7 +15,7 @@ Full list of configuration options:
- [Autoscaling Group](../api/interfaces/clusters.AutoscalingNodeGroup.html)
- [Fargate Cluster](../api/interfaces/clusters.FargateClusterProviderProps.html)

## Usage

```typescript
const windowsUserData = ec2.UserData.forWindows();
@@ -49,7 +49,7 @@ const clusterProvider = new blueprints.GenericClusterProvider({
amiType: NodegroupAmiType.AL2_X86_64,
instanceTypes: [new InstanceType('m5.2xlarge')],
desiredSize: 2,
maxSize: 3,
nodeGroupSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
launchTemplate: {
// You can pass Custom Tags to Launch Templates which gets propagated to worker nodes.
@@ -92,7 +92,7 @@ const clusterProvider = new blueprints.GenericClusterProvider({
amiType: NodegroupAmiType.AL2_X86_64,
instanceTypes: [new ec2.InstanceType('m5.4xlarge')],
desiredSize: 0,
minSize: 0,
nodeRole: blueprints.getNamedResource("node-role") as iam.Role,
launchTemplate: {
blockDevices: [
@@ -123,7 +123,7 @@ const clusterProvider = new blueprints.GenericClusterProvider({
fargateProfiles: {
"fp1": {
fargateProfileName: "fp1",
selectors: [{ namespace: "serverless1" }]
}
}
});
@@ -134,7 +134,7 @@ EksBlueprint.builder()
```


The cluster configuration and node group configuration expose a number of options that require supplying an actual CDK resource.
For example, the cluster allows passing `mastersRole`, `securityGroup`, etc., while the managed node group allows specifying `nodeRole`.

All of such cases can be solved with [Resource Providers](../resource-providers/index.md#using-resource-providers-with-cdk-constructs).
@@ -157,7 +157,7 @@ const clusterProvider = new blueprints.GenericClusterProvider({
id: "mng1",
nodeRole: blueprints.getResource(context => {
const role = new iam.Role(context.scope, 'NodeRole', { assumedBy: new iam.ServicePrincipal("ec2.amazonaws.com")});
... add policies such as AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly
return role;
})
}
@@ -173,11 +173,11 @@ EksBlueprint.builder()
.build(app, blueprintID);
```
## Configuration
The `GenericClusterProvider` supports the following configuration options.
| Prop | Description |
|-----------------------|-------------|
@@ -197,10 +197,11 @@ There should be public and private subnets for EKS cluster to work. For more inf
Default configuration for managed and autoscaling node groups can also be supplied via context variables (specify in cdk.json, cdk.context.json, ~/.cdk.json or pass with -c command line option):
- `eks.default.min-size`
- `eks.default.max-size`
- `eks.default.desired-size`
- `eks.default.instance-type`
- `eks.default.private-cluster`
- `eks.default.isolated-cluster`
Configuration of the EC2 parameters through context parameters makes sense if you would like to apply default configuration to multiple clusters without the need to explicitly pass individual `GenericClusterProviderProps` to each cluster blueprint.
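The subnet-selection precedence that the isolated and private flags imply can be sketched as follows. This mirrors the provider's behavior in this commit (explicit `vpcSubnets` win, then isolated, then private), but the names below are simplified stand-ins for the CDK's `ec2.SubnetType` values, not the real API:

```typescript
// Illustrative sketch of subnet-selection precedence (assumed names).
type SubnetType = "PRIVATE_ISOLATED" | "PRIVATE_WITH_EGRESS" | "PUBLIC";

function selectClusterSubnets(
  explicit: SubnetType[] | undefined,
  isolatedCluster: boolean,
  privateCluster: boolean
): SubnetType[] | undefined {
  if (explicit !== undefined) return explicit;        // caller-supplied vpcSubnets
  if (isolatedCluster) return ["PRIVATE_ISOLATED"];   // no route to the internet
  if (privateCluster) return ["PRIVATE_WITH_EGRESS"]; // NAT-backed private subnets
  return undefined;                                   // fall back to CDK defaults
}

console.log(selectClusterSubnets(undefined, true, true));  // ["PRIVATE_ISOLATED"]
console.log(selectClusterSubnets(undefined, false, true)); // ["PRIVATE_WITH_EGRESS"]
```

Note that the isolated flag takes precedence over the private flag when both are set.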
16 changes: 9 additions & 7 deletions docs/cluster-providers/mng-cluster-provider.md
@@ -2,7 +2,7 @@

The `MngClusterProvider` allows you to provision an EKS cluster which leverages [EKS managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) (MNGs) for compute capacity. MNGs automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.

## Usage

```typescript
import * as cdk from 'aws-cdk-lib';
@@ -15,7 +15,7 @@ const app = new cdk.App();
const props: bp.MngClusterProviderProps = {
minSize: 1,
maxSize: 10,
desiredSize: 4,
instanceTypes: [new ec2.InstanceType('m5.large')],
amiType: eks.NodegroupAmiType.AL2023_X86_64_STANDARD,
nodeGroupCapacityType: eks.CapacityType.ON_DEMAND,
@@ -28,7 +28,7 @@ new bp.EksBlueprint(app, { id: 'blueprint-1', addOns:[], teams: [], clusterProvi

## Configuration

The `MngClusterProvider` supports the following configuration options.

| Prop | Description |
|-----------------------|-------------|
@@ -45,18 +45,20 @@ The `MngClusterProvider` supports the following configuration options.
| nodeGroupCapacityType | The capacity type for the node group (on demand or spot).
| vpcSubnets | The subnets for the cluster.
| privateCluster | If `true` Kubernetes API server is private.
| isolatedCluster | If `true` the EKS cluster is configured to deploy into isolated subnets.
| tags | Tags to propagate to Cluster.
| nodeGroupTags | Tags to propagate to Node Group.

There should be public and private subnets for EKS cluster to work. For more information see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html).

Configuration can also be supplied via context variables (specify in cdk.json, cdk.context.json, ~/.cdk.json or pass with -c command line option):

- `eks.default.min-size`
- `eks.default.max-size`
- `eks.default.desired-size`
- `eks.default.instance-type`
- `eks.default.private-cluster`
- `eks.default.isolated-cluster`

Configuration of the EC2 parameters through context parameters makes sense if you would like to apply default configuration to multiple clusters without the need to explicitly pass `MngClusterProviderProps` to each cluster blueprint.
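As the `cdk.json` change earlier in this commit shows, the new flag sits alongside the existing context entries; note that, like `eks.default.private-cluster`, the value is a string:

```json
{
  "context": {
    "eks.default.min-size": 1,
    "eks.default.max-size": 2,
    "eks.default.desired-size": 1,
    "eks.default.private-cluster": "false",
    "eks.default.isolated-cluster": "false"
  }
}
```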

@@ -92,7 +94,7 @@ Note that two attributes in this configuration are relevant for Spot: `nodeGroup

## Creating Clusters with custom AMI for the node group

To create clusters using a custom AMI for the worker nodes, set `customAmi` to your custom image and provide your `userData` for node bootstrapping.

```typescript
const userData = UserData.forLinux();
5 changes: 3 additions & 2 deletions examples/monorepo/cdk.json
@@ -5,6 +5,7 @@
"eks.default.min-size": 1,
"eks.default.max-size": 2,
"eks.default.desired-size": 1,
"eks.default.private-cluster": "false",
"eks.default.isolated-cluster": "false"
}
}
}
8 changes: 7 additions & 1 deletion lib/cluster-providers/asg-cluster-provider.ts
@@ -6,7 +6,7 @@ import { AutoscalingNodeGroup } from "./types";
* Configuration options for the cluster provider.
*/
export interface AsgClusterProviderProps extends Partial<eks.CommonClusterOptions>, AutoscalingNodeGroup {

/**
* The name for the cluster.
*/
@@ -19,6 +19,12 @@ export interface AsgClusterProviderProps extends Partial<eks.CommonClusterOption
*/
privateCluster?: boolean;

/**
* Is the EKS Cluster in isolated subnets?
* @default false
*/
isolatedCluster?: boolean,

/**
* Tags for the cluster
*/
4 changes: 3 additions & 1 deletion lib/cluster-providers/constants.ts
@@ -26,4 +26,6 @@ export const MAX_SIZE_KEY = "eks.default.max-size";

export const DESIRED_SIZE_KEY = "eks.default.desired-size";

export const PRIVATE_CLUSTER = "eks.default.private-cluster";

export const ISOLATED_CLUSTER = "eks.default.isolated-cluster";
8 changes: 7 additions & 1 deletion lib/cluster-providers/fargate-cluster-provider.ts
@@ -32,6 +32,12 @@ export interface FargateClusterProviderProps extends Partial<eks.CommonClusterOp
*/
privateCluster?: boolean;

/**
* Is the EKS Cluster in isolated subnets?
* @default false
*/
isolatedCluster?: boolean,

/**
* Tags for the cluster
*/
@@ -57,5 +63,5 @@ export class FargateClusterProvider extends GenericClusterProvider {
*/
internalCreateCluster(scope: Construct, id: string, clusterOptions: any): eks.Cluster {
return new eks.FargateCluster(scope, id, clusterOptions);
}
}
33 changes: 23 additions & 10 deletions lib/cluster-providers/generic-cluster-provider.ts
@@ -27,7 +27,7 @@ export function clusterBuilder() {
}

/**
 * Function that contains logic to map the correct kubectl layer based on the passed-in version.
 * @param scope in which the kubectl layer must be created
* @param version EKS version
* @returns ILayerVersion or undefined
@@ -50,9 +50,9 @@ export function selectKubectlLayer(scope: Construct, version: eks.KubernetesVers
return new KubectlV29Layer(scope, "kubectllayer29");
case "1.30":
return new KubectlV30Layer(scope, "kubectllayer30");

}

const minor = version.version.split('.')[1];

if(minor && parseInt(minor, 10) > 30) {
@@ -66,6 +66,11 @@
*/
export interface GenericClusterProviderProps extends Partial<eks.ClusterOptions> {

/**
 * Whether the cluster is deployed in isolated subnets (no outbound internet access).
 * @default false
 */
isolatedCluster?: boolean,

/**
* Whether API server is private.
*/
@@ -262,10 +267,11 @@ export class GenericClusterProvider implements ClusterProvider {
const version: eks.KubernetesVersion = kubernetesVersion || this.props.version || eks.KubernetesVersion.V1_30;

const privateCluster = this.props.privateCluster ?? utils.valueFromContext(scope, constants.PRIVATE_CLUSTER, false);
const isolatedCluster = this.props.isolatedCluster ?? utils.valueFromContext(scope, constants.ISOLATED_CLUSTER, false);
const endpointAccess = (privateCluster === true) ? eks.EndpointAccess.PRIVATE : eks.EndpointAccess.PUBLIC_AND_PRIVATE;
const vpcSubnets = this.props.vpcSubnets ?? (privateCluster === true ? [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }] : undefined);
const vpcSubnets = this.props.vpcSubnets ?? (isolatedCluster === true ? [{ subnetType: ec2.SubnetType.PRIVATE_ISOLATED }]: privateCluster === true ? [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }] : undefined);
const mastersRole = this.props.mastersRole ?? new Role(scope, `${clusterName}-AccessRole`, {
assumedBy: new AccountRootPrincipal()
});

const kubectlLayer = this.getKubectlLayer(scope, version);
@@ -286,7 +292,14 @@
defaultCapacity: 0 // we want to manage capacity ourselves
};

const isolatedOptions: Partial<eks.ClusterProps> = isolatedCluster ? {
    placeClusterHandlerInVpc: true,
    clusterHandlerEnvironment: { AWS_STS_REGIONAL_ENDPOINTS: "regional" },
    kubectlEnvironment: { AWS_STS_REGIONAL_ENDPOINTS: "regional" },
} : {};

const clusterOptions = { ...defaultOptions, ...isolatedOptions, ...this.props, version, ipFamily };

// Create an EKS Cluster
const cluster = this.internalCreateCluster(scope, id, clusterOptions);
cluster.node.addDependency(vpc);
@@ -323,10 +336,10 @@
}

/**
 * Can be overridden to provide a custom kubectl layer.
 * @param scope
 * @param version
 * @returns
*/
protected getKubectlLayer(scope: Construct, version: eks.KubernetesVersion) : ILayerVersion | undefined {
return selectKubectlLayer(scope, version);
12 changes: 9 additions & 3 deletions lib/cluster-providers/mng-cluster-provider.ts
@@ -2,7 +2,7 @@ import { aws_autoscaling as asg, aws_eks as eks } from "aws-cdk-lib";
// Cluster
import { ClusterInfo } from "..";
import { defaultOptions, GenericClusterProvider } from "./generic-cluster-provider";
// Constants
import { ManagedNodeGroup } from "./types";


@@ -28,6 +28,12 @@ export interface MngClusterProviderProps extends Partial<eks.CommonClusterOption
*/
privateCluster?: boolean;

/**
* Is the EKS Cluster in isolated subnets?
* @default false
*/
isolatedCluster?: boolean,

/**
* Tags for the Cluster.
*/
@@ -62,9 +68,9 @@ export class MngClusterProvider extends GenericClusterProvider {

/**
* Validates that cluster is backed by EC2 either through a managed node group or through a self-managed autoscaling group.
 * @param clusterInfo
 * @param source Used for error message to identify the source of the check
 * @returns
*/
//TODO: move to clusterInfo
export function assertEC2NodeGroup(clusterInfo: ClusterInfo, source: string): eks.Nodegroup[] | asg.AutoScalingGroup[] {
