Feature/import clusters #696

Merged
18 commits merged on Jun 28, 2023

4 changes: 2 additions & 2 deletions docs/cluster-providers/generic-cluster-provider.md
@@ -104,7 +104,7 @@ const clusterProvider = new blueprints.GenericClusterProvider({
mastersRole: blueprints.getResource(context => {
return new iam.Role(context.scope, 'AdminRole', { assumedBy: new AccountRootPrincipal() });
}),
securityGroup: blueprints.getNamedResource("my-cluster-security-group"), // assumed to be registered as a resource provider under name my-cluster-security-group
securityGroup: blueprints.getNamedResource("my-cluster-security-group") as ec2.ISecurityGroup, // assumed to be registered as a resource provider under name my-cluster-security-group
managedNodeGroups: [
{
id: "mng1",
@@ -119,7 +119,7 @@
EksBlueprint.builder()
.resourceProvider("my-cluster-security-group", {
provide(context: blueprints.ResourceContext) : ec2.ISecurityGroup {
return ec2.SecurityGroup.fromSecurityGroupId(this, 'SG', 'sg-12345', { mutable: false }); // example for look up
return ec2.SecurityGroup.fromSecurityGroupId(context.scope, 'SG', 'sg-12345', { mutable: false }); // example for look up
}
})
.clusterProvider(clusterProvider)
108 changes: 108 additions & 0 deletions docs/cluster-providers/import-cluster-provider.md
@@ -0,0 +1,108 @@
# Import Cluster Provider

The `ImportClusterProvider` allows you to import an existing EKS cluster into your blueprint. At present, importing an existing cluster allows adding certain add-ons and limited team capabilities.

## Usage

The framework provides a couple of convenience methods to instantiate the `ImportClusterProvider` by leveraging the SDK API call to describe the cluster.

### Option 1

The recommended option is to get the cluster information through the `DescribeCluster` API (requires `eks:DescribeCluster` permission at build time) and then use it to instantiate the `ImportClusterProvider` and **(very important)** to set up the blueprint VPC.

Make sure the VPC is set to the VPC of the imported cluster; otherwise the blueprint will create a new VPC by default, which will be redundant and cause problems with some of the add-ons.

**Note:** `blueprints.describeCluster()` is an asynchronous function; you should either use `await` or handle the promise resolution chain.
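For example, if top-level `await` is not available in your entry point, the same call can be handled through the promise chain. A minimal sketch, assuming the standard `@aws-quickstart/eks-blueprints` import and the sample cluster name used below:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

blueprints.describeCluster("quickstart-cluster", "us-east-2")
    .then(sdkCluster => {
        // Build the blueprint here, exactly as shown in the snippet below.
        console.log(`Imported cluster VPC: ${sdkCluster.resourcesVpcConfig?.vpcId}`);
    })
    .catch(err => console.error("Failed to describe cluster", err));
```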

```typescript
const clusterName = "quickstart-cluster";
const region = "us-east-2";

const sdkCluster = await blueprints.describeCluster(clusterName, region); // get cluster information using EKS APIs
Collaborator: Is clusterName a variable? If so, we should also add that and show a sample populated value.

Collaborator (author): The downside of that is people mindlessly copying this to their env just to discover that it does not work, similar to what we had with the update-kubeconfig from the blog post. But for consistency I will add sample data.

Collaborator (author): Addressed.


/**
* Assumes the supplied role is registered in the target cluster for kubectl access.
*/
const importClusterProvider = blueprints.ImportClusterProvider.fromClusterAttributes(
sdkCluster,
blueprints.getResource(context => new blueprints.LookupRoleProvider(kubectlRoleName).provide(context))
);

const vpcId = sdkCluster.resourcesVpcConfig?.vpcId;

blueprints.EksBlueprint.builder()
.clusterProvider(importClusterProvider)
.resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId)) // this is required with import cluster provider

```
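To turn the snippet above into a deployable stack, the builder is completed with a `build` call. A minimal sketch, reusing `importClusterProvider` and `vpcId` from the snippet above; the `app` construct and the stack id `'imported-cluster-blueprint'` are illustrative assumptions:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

// importClusterProvider and vpcId are defined as in the snippet above.
blueprints.EksBlueprint.builder()
    .clusterProvider(importClusterProvider)
    .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId))
    .build(app, 'imported-cluster-blueprint');
```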

### Option 2

This option is convenient if you already know the VPC Id of the target cluster. It also requires `eks:DescribeCluster` permission at build-time:

```typescript
const clusterName = "quickstart-cluster";
const region = "us-east-2";

const kubectlRole: iam.IRole = blueprints.getNamedResource('my-role');
Collaborator: Is clusterName a variable? If so, we should also add that and show a sample populated value.

Collaborator (author): Addressed.


const importClusterProvider2 = await blueprints.ImportClusterProvider.fromClusterLookup(clusterName, region, kubectlRole); // note await here

const vpcId = ...; // you can always get it with blueprints.describeCluster(clusterName, region);

blueprints.EksBlueprint.builder()
.clusterProvider(importClusterProvider2)
.resourceProvider('my-role', new blueprints.LookupRoleProvider('my-role'))
.resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId))
```
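If the VPC id is not known upfront, it can be looked up with the same SDK helper mentioned in the comment above. A short sketch, reusing the sample values and the `blueprints` import from this page:

```typescript
// Requires eks:DescribeCluster permission at build time, like the rest of this option.
const sdkCluster = await blueprints.describeCluster("quickstart-cluster", "us-east-2");
const vpcId = sdkCluster.resourcesVpcConfig?.vpcId; // e.g. "vpc-0123456789abcdef0"
```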

### Option 3

Unlike the other options, this one does not require any special permissions at build time; however, it requires passing all of the required information to the import cluster provider.
The OIDC provider is expected to be passed in as well if you plan to leverage IRSA with your blueprint. It must already be registered in the imported cluster, otherwise IRSA won't work.


```typescript

const importClusterProvider3 = new ImportClusterProvider({
clusterName: 'my-existing-cluster',
version: KubernetesVersion.V1_26,
clusterEndpoint: 'https://B792B88BC60999B1A37D.gr7.us-east-2.eks.amazonaws.com',
openIdConnectProvider: getResource(context =>
new LookupOpenIdConnectProvider('https://oidc.eks.us-east-2.amazonaws.com/id/B792B88BC60999B1A37D').provide(context)),
clusterCertificateAuthorityData: 'S0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCasdd234................',
kubectlRoleArn: 'arn:...',
});

const vpcId = ...;

blueprints.EksBlueprint.builder()
.clusterProvider(importClusterProvider3)
.resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId))
```

## Configuration

The `ImportClusterProvider` supports the following configuration options:

| Prop | Description |
|-----------------------|-------------|
| clusterName | Cluster name. |
| version | EKS version of the target cluster. |
| clusterEndpoint | The API Server endpoint URL. |
| openIdConnectProvider | An OpenID Connect provider for this cluster that can be used to configure service accounts. You can either import an existing provider using `LookupOpenIdConnectProvider`, or create a new one with a custom resource provider that calls `new eks.OpenIdConnectProvider` (see the sketch below this table). |
| clusterCertificateAuthorityData | The certificate-authority-data for your cluster. |
| kubectlRoleArn | An IAM role with cluster administrator and "system:masters" permissions. |
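
As referenced in the `openIdConnectProvider` row above, a new provider can be supplied through a custom resource provider instead of a lookup. A minimal sketch; the resource id `'oidc-provider'` and the issuer URL are illustrative assumptions:

```typescript
import * as eks from 'aws-cdk-lib/aws-eks';
import * as blueprints from '@aws-quickstart/eks-blueprints';

// Creates (rather than looks up) an OIDC provider for the imported cluster.
const openIdConnectProvider = blueprints.getResource(context =>
    new eks.OpenIdConnectProvider(context.scope, 'oidc-provider', {
        url: 'https://oidc.eks.us-east-2.amazonaws.com/id/B792B88BC60999B1A37D'
    })
);
```

The resulting value can then be passed as the `openIdConnectProvider` property in place of the `LookupOpenIdConnectProvider` call shown in Option 3.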


## Known Limitations

The following add-ons will not work with the `ImportClusterProvider` because imported clusters cannot (at present) modify the `aws-auth` ConfigMap or otherwise mutate cluster authentication:
* `ClusterAutoScalerAddOn`
* `AwsBatchAddOn`
* `EmrEksAddOn`
* `KarpenterAddOn`

Teams can be added to the cluster and will provide all of the team functionality except cluster access, due to the same inability to mutate cluster authentication.

At the moment, there are no examples of adding extra capacity (such as node groups) to imported clusters.
3 changes: 2 additions & 1 deletion docs/cluster-providers/index.md
@@ -9,7 +9,8 @@ The framework currently provides support for the following Cluster Providers:
| [`GenericClusterProvider`](./generic-cluster-provider) | Provisions an EKS cluster with one or more managed or Auto Scaling groups as well as Fargate Profiles.
| [`AsgClusterProvider`](./asg-cluster-provider) | Provisions an EKS cluster with an Auto Scaling group used for compute capacity.
| [`MngClusterProvider`](./mng-cluster-provider) | Provisions an EKS cluster with a Managed Node group for compute capacity.
| [`FargateClusterProviders`](./fargate-cluster-provider) | Provisions an EKS cluster which leverages AWS Fargate to run Kubernetes pods.
| [`FargateClusterProvider`](./fargate-cluster-provider) | Provisions an EKS cluster which leverages AWS Fargate to run Kubernetes pods.
| [`ImportClusterProvider`](./import-cluster-provider) | Imports an existing EKS cluster into the blueprint, allowing certain add-ons and limited team capabilities to be added.

By default, the framework will leverage the `MngClusterProvider` which creates a single managed node group.

2 changes: 1 addition & 1 deletion examples/teams/team-troi/index.ts
@@ -30,7 +30,7 @@ export class TeamTroi implements Team {
new cdk.CfnOutput(stack, this.name + '-sa-iam-role', { value: sa.role.roleArn });
}

setupNamespacePolicies(cluster: eks.Cluster) : eks.KubernetesManifest {
setupNamespacePolicies(cluster: eks.ICluster) : eks.KubernetesManifest {
const quotaName = this.name + "-quota";
return cluster.addManifest(quotaName, {
apiVersion: 'v1',
5 changes: 4 additions & 1 deletion lib/addons/aws-batch-on-eks/index.ts
@@ -1,13 +1,16 @@
import assert = require("assert");
import { ClusterAddOn, ClusterInfo } from "../../spi";
import { Stack } from "aws-cdk-lib";
import { Cluster } from "aws-cdk-lib/aws-eks";
import { CfnServiceLinkedRole, IRole, Role } from "aws-cdk-lib/aws-iam";
import { Construct } from "constructs";

const BATCH = 'aws-batch';

export class AwsBatchAddOn implements ClusterAddOn {
deploy(clusterInfo: ClusterInfo): Promise<Construct> {
const cluster = clusterInfo.cluster;
assert(clusterInfo.cluster instanceof Cluster, "AwsBatchAddOn cannot be used with imported clusters");
const cluster: Cluster = clusterInfo.cluster;
const roleNameforBatch = 'AWSServiceRoleForBatch';
const slrCheck = Role.fromRoleName(cluster.stack, 'BatchServiceLinkedRole', roleNameforBatch);

4 changes: 2 additions & 2 deletions lib/addons/aws-node-termination-handler/index.ts
@@ -1,6 +1,6 @@
import { AutoScalingGroup, LifecycleHook, LifecycleTransition } from 'aws-cdk-lib/aws-autoscaling';
import { QueueHook } from 'aws-cdk-lib/aws-autoscaling-hooktargets';
import { Cluster, ServiceAccount } from 'aws-cdk-lib/aws-eks';
import { ICluster, ServiceAccount } from 'aws-cdk-lib/aws-eks';
import { EventPattern, Rule } from 'aws-cdk-lib/aws-events';
import { SqsQueue } from 'aws-cdk-lib/aws-events-targets';
import * as iam from 'aws-cdk-lib/aws-iam';
@@ -122,7 +122,7 @@ export class AwsNodeTerminationHandlerAddOn extends HelmAddOn {
* @param asgCapacity
* @returns Helm values
*/
private configureQueueMode(cluster: Cluster, serviceAccount: ServiceAccount, asgCapacity: AutoScalingGroup[], karpenter: Promise<Construct> | undefined): any {
private configureQueueMode(cluster: ICluster, serviceAccount: ServiceAccount, asgCapacity: AutoScalingGroup[], karpenter: Promise<Construct> | undefined): any {
const queue = new Queue(cluster.stack, "aws-nth-queue", {
retentionPeriod: Duration.minutes(5)
});
2 changes: 1 addition & 1 deletion lib/addons/ebs-csi-driver/index.ts
@@ -7,7 +7,7 @@ import { getEbsDriverPolicyDocument } from "./iam-policy";
/**
* Interface for EBS CSI Driver EKS add-on options
*/
interface EbsCsiDriverAddOnProps {
export interface EbsCsiDriverAddOnProps {
/**
* Version of the driver to deploy
*/
7 changes: 4 additions & 3 deletions lib/addons/emr-on-eks/index.ts
@@ -1,12 +1,14 @@
import assert = require("assert");
import { ClusterAddOn, ClusterInfo } from "../../spi";
import { Stack } from "aws-cdk-lib";
import { Cluster } from "aws-cdk-lib/aws-eks";
import { CfnServiceLinkedRole, IRole, Role } from "aws-cdk-lib/aws-iam";
import { Construct } from "constructs";

export class EmrEksAddOn implements ClusterAddOn {
deploy(clusterInfo: ClusterInfo): Promise<Construct> {
const cluster = clusterInfo.cluster;

assert(clusterInfo.cluster instanceof Cluster, "EmrEksAddOn cannot be used with imported clusters as it requires changes to the cluster authentication.");
const cluster: Cluster = clusterInfo.cluster;

/*
* Create the service role used by EMR on EKS
@@ -35,6 +37,5 @@ export class EmrEksAddOn implements ClusterAddOn {
);

return Promise.resolve(emrOnEksSlr);

}
}
3 changes: 2 additions & 1 deletion lib/addons/karpenter/index.ts
@@ -137,7 +137,8 @@ export class KarpenterAddOn extends HelmAddOn {

@conflictsWith('ClusterAutoScalerAddOn')
deploy(clusterInfo: ClusterInfo): Promise<Construct> {
const cluster = clusterInfo.cluster;
assert(clusterInfo.cluster instanceof Cluster, "KarpenterAddOn cannot be used with imported clusters as it requires changes to the cluster authentication.");
const cluster : Cluster = clusterInfo.cluster;
const endpoint = cluster.clusterEndpoint;
const name = cluster.clusterName;

6 changes: 6 additions & 0 deletions lib/addons/vpc-cni/index.ts
@@ -180,6 +180,12 @@ export interface VpcCniAddOnProps {
*
*/
serviceAccountPolicies?: iam.IManagedPolicy[];

/**
* Version of the add-on to use. Must match the version of the cluster where it
* will be deployed.
*/
version?: string;
}


43 changes: 26 additions & 17 deletions lib/cluster-providers/generic-cluster-provider.ts
Expand Up @@ -21,6 +21,31 @@ export function clusterBuilder() {
return new ClusterBuilder();
}

/**
* Function that contains logic to map the correct kubectl layer based on the passed in version.
* @param scope in which the kubectl layer must be created
* @param version EKS version
* @returns ILayerVersion or undefined
*/
export function selectKubectlLayer(scope: Construct, version: eks.KubernetesVersion): ILayerVersion | undefined {
switch(version) {
case eks.KubernetesVersion.V1_23:
return new KubectlV23Layer(scope, "kubectllayer23");
case eks.KubernetesVersion.V1_24:
return new KubectlV24Layer(scope, "kubectllayer24");
case eks.KubernetesVersion.V1_25:
return new KubectlV25Layer(scope, "kubectllayer25");
case eks.KubernetesVersion.V1_26:
return new KubectlV26Layer(scope, "kubectllayer26");
}

const minor = version.version.split('.')[1];

if(minor && parseInt(minor, 10) > 26) {
return new KubectlV26Layer(scope, "kubectllayer26"); // for all versions above 1.26 use the 1.26 kubectl layer (unless explicitly supported in CDK)
}
return undefined;
}
/**
* Properties for the generic cluster provider, containing definitions of managed node groups,
* auto-scaling groups, fargate profiles.
@@ -282,23 +307,7 @@ export class GenericClusterProvider implements ClusterProvider {
* @returns
*/
protected getKubectlLayer(scope: Construct, version: eks.KubernetesVersion) : ILayerVersion | undefined {
switch(version) {
case eks.KubernetesVersion.V1_23:
return new KubectlV23Layer(scope, "kubectllayer23");
case eks.KubernetesVersion.V1_24:
return new KubectlV24Layer(scope, "kubectllayer24");
case eks.KubernetesVersion.V1_25:
return new KubectlV25Layer(scope, "kubectllayer25");
case eks.KubernetesVersion.V1_26:
return new KubectlV26Layer(scope, "kubectllayer26");
}

const minor = version.version.split('.')[1];

if(minor && parseInt(minor, 10) > 26) {
return new KubectlV26Layer(scope, "kubectllayer26"); // for all versions above 1.25 use 1.25 kubectl (unless explicitly supported in CDK)
}
return undefined;
return selectKubectlLayer(scope, version);
}

/**