This repository has been archived by the owner on Feb 5, 2020. It is now read-only.

Running only master in public subnet #19

Open

rohit-zabbed opened this issue Feb 25, 2018 · 2 comments

rohit-zabbed commented Feb 25, 2018

I'm trying to customise the setup to separate masters and workers into different subnets (public and private, respectively), with the workers reaching the internet through a NAT gateway, using the Terraform script below:

provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_eip" "nat" {
  count = 1
  vpc = true
}

resource "aws_default_security_group" "default" {
  vpc_id = "${module.vpc.vpc_id}"

  ingress {
    from_port   = 8
    to_port     = 0
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  name = "${var.tectonic_cluster_name}"
  cidr = "${var.vpc_cidr}"
  azs = [
    "us-west-1a"]
  public_subnets = [
    "10.10.11.0/24"]
  private_subnets = [
    "10.10.1.0/24"]
  database_subnets = [
    "10.10.21.0/24"]
  elasticache_subnets = [
    "10.10.31.0/24"]
  enable_nat_gateway = true
  single_nat_gateway = true
  reuse_nat_ips = true
  external_nat_ip_ids = [
    "${aws_eip.nat.*.id}"]
  enable_vpn_gateway = false
  create_database_subnet_group = true

  tags = "${var.tags}"

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.tectonic_cluster_name}" = "shared"
    Owner = "rohit"
    Environment = "${var.tectonic_cluster_name}"
    Name = "${var.tectonic_cluster_name}"
  }

  database_subnet_tags = {
    Owner = "rohit"
    Environment = "${var.tectonic_cluster_name}"
    Name = "${var.tectonic_cluster_name}"
  }

  elasticache_subnet_tags = {
    Owner = "rohit"
    Environment = "${var.tectonic_cluster_name}"
    Name = "${var.tectonic_cluster_name}"
  }
}

module "kubernetes" {
  source = "coreos/kubernetes/aws"
  tectonic_aws_assets_s3_bucket_name = "tectonic-cf"

  tectonic_aws_region = "${var.aws_region}"
  tectonic_aws_ssh_key = "itops"
  tectonic_aws_vpc_cidr_block = "${var.vpc_cidr}"
  tectonic_aws_public_endpoints = true
  tectonic_base_domain = "${var.tectonic_base_domain}"
  tectonic_cluster_name = "${var.tectonic_cluster_name}"
  tectonic_container_linux_version = "latest"
  tectonic_license_path = "/Users/rverma/dev/tectonic/tectonic-license.txt"
  tectonic_pull_secret_path = "/Users/rverma/dev/tectonic/config.json"
  tectonic_networking = "flannel"
  tectonic_tls_validity_period = "26280"
  tectonic_vanilla_k8s = false
  tectonic_admin_email = "${var.tectonic_admin_email}"
  tectonic_admin_password = "${var.tectonic_admin_password}"

  tectonic_aws_external_vpc_id = "${module.vpc.vpc_id}"
  tectonic_aws_external_private_zone = "***"
  // tectonic_ca_cert = ""
  // tectonic_ca_key = ""
  // tectonic_ca_key_alg = "RSA"

  tectonic_etcd_count = "0"
  tectonic_aws_etcd_ec2_type = "${var.master_instance_type}"
  tectonic_aws_etcd_root_volume_iops = "100"
  tectonic_aws_etcd_root_volume_size = "30"
  tectonic_aws_etcd_root_volume_type = "gp2"

  tectonic_master_count = "1"
  tectonic_aws_master_ec2_type = "${var.master_instance_type}"
  tectonic_aws_external_master_subnet_ids = "${module.vpc.public_subnets}"
  tectonic_aws_master_root_volume_iops = "100"
  tectonic_aws_master_root_volume_size = "30"
  tectonic_aws_master_root_volume_type = "gp2"

  tectonic_worker_count = "${var.min_worker_count}"
  tectonic_aws_external_worker_subnet_ids = "${module.vpc.private_subnets}"
  tectonic_aws_worker_ec2_type = "${var.worker_instance_type}"
  tectonic_aws_worker_root_volume_iops = "100"
  tectonic_aws_worker_root_volume_size = "30"
  tectonic_aws_worker_root_volume_type = "gp2"
}

I'm getting the following warnings:

Warning: output "etcd_sg_id": must use splat syntax to access aws_security_group.etcd attribute "id", because it has "count" set; use aws_security_group.etcd.*.id to obtain a list of the attributes across all instances
Warning: output "aws_api_external_dns_name": must use splat syntax to access aws_elb.api_external attribute "dns_name", because it has "count" set; use aws_elb.api_external.*.dns_name to obtain a list of the attributes across all instances
Warning: output "aws_elb_api_external_zone_id": must use splat syntax to access aws_elb.api_external attribute "zone_id", because it has "count" set; use aws_elb.api_external.*.zone_id to obtain a list of the attributes across all instances
Warning: output "aws_api_internal_dns_name": must use splat syntax to access aws_elb.api_internal attribute "dns_name", because it has "count" set; use aws_elb.api_internal.*.dns_name to obtain a list of the attributes across all instances
Warning: output "aws_elb_api_internal_zone_id": must use splat syntax to access aws_elb.api_internal attribute "zone_id", because it has "count" set; use aws_elb.api_internal.*.zone_id to obtain a list of the attributes across all instances
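These splat warnings come from output definitions inside the upstream Tectonic modules rather than from the script above. For reference, the pattern Terraform 0.11 asks for looks roughly like this (a sketch of the deprecation fix, using the common join("") idiom to collapse a splat list back to a single string when count is at most 1):

output "etcd_sg_id" {
  # Splat syntax returns a list of ids across all instances of the
  # counted resource; join("") flattens it back to a single string.
  value = "${join("", aws_security_group.etcd.*.id)}"
}

output "aws_api_external_dns_name" {
  value = "${join("", aws_elb.api_external.*.dns_name)}"
}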

And the following errors:

module.kubernetes.module.vpc.data.aws_subnet.external_worker: data.aws_subnet.external_worker: value of 'count' cannot be computed
module.kubernetes.module.vpc.data.aws_subnet.external_master: data.aws_subnet.external_master: value of 'count' cannot be computed
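The "value of 'count' cannot be computed" errors are the real blocker. In Terraform 0.11, count must be resolvable at plan time, but the Tectonic module derives the count of its data.aws_subnet lookups from the length of the subnet-ID lists it is given, and module.vpc.public_subnets / module.vpc.private_subnets are only known after the VPC module has been applied. A common workaround (a sketch, not an official fix) is a two-phase apply, so the subnet IDs already exist in state before the kubernetes module is planned:

# First create only the VPC and its subnets so their IDs become
# known values in the state, then plan/apply everything else.
terraform apply -target=module.vpc
terraform apply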
squat (Contributor) commented Feb 25, 2018

@rohit-zabbed in Tectonic, masters and workers are already separated into public and private subnets, respectively. What exactly do you hope to accomplish?

rohit-zabbed (Author) commented

@squat I checked that, and it's correct, but I still want to create additional private subnets as part of the definition. I'm wondering what's wrong with the script above; ideally I should be able to set up Kubernetes in an existing VPC.
