Fix: Enabled common VRF b/w clusters @build
codinja1188 committed Jul 5, 2024
1 parent 6566581 commit 8d77944
Showing 23 changed files with 325 additions and 17 deletions.
161 changes: 161 additions & 0 deletions examples/nutanix-clusters/README.md
@@ -0,0 +1,161 @@
# Nutanix Clusters Setup and Protection Policy Example

## Overview

This example demonstrates how to create two Nutanix clusters and set up a protection policy between them. It also covers creating a VM in one cluster and migrating it to the other. The setup is partly automated with Terraform and partly manual.

## Prerequisites

- Terraform installed on your local machine
- Equinix Metal account
- SSH key pair for accessing the Nutanix clusters

## Automated Steps

1. **Create two Nutanix clusters**

1.1. Clone the repository:
```sh
git clone git@github.com:equinix-labs/terraform-equinix-metal-nutanix-cluster.git
cd terraform-equinix-metal-nutanix-cluster
cd examples/nutanix-clusters
```

1.2. Create the `terraform.tfvars` file:
```hcl
metal_project_id = "XXXXXXXXXXXXXXXXXXXXXXX"
metal_organization_id = "XXXXXXXXXXXXXXXXXXXXXXX" # The ID of the Metal organization in which to create the project if `create_project` is true.
metal_metro = "sl" # The metro to create the cluster in
create_project = false # (Optional) to use an existing project matching `metal_project_name`, set this to false.
metal_bastion_plan = "m3.small.x86" # Which plan to use for the bastion host.
metal_nutanix_os = "nutanix_lts_6_5" # Which OS to use for the Nutanix nodes.
metal_nutanix_plan = "m3.large.x86" # Which plan to use for the Nutanix nodes (must be Nutanix compatible, see https://deploy.equinix.com/developers/os-compatibility/)
create_vlan = false # Whether to create a new VLAN for this project.
create_vrf = true # Whether to create a new VRF for this project.
# metal_vlan_id=null # ID of the VLAN you wish to use. e.g. 1234
nutanix_node_count = 1 # The number of Nutanix nodes to create.
skip_cluster_creation = false # Skip the creation of the Nutanix cluster.
cluster_subnet = "192.168.96.0/21" # A private subnet large enough for both clusters; this /21 spans the two /22 cluster subnets (192.168.96.0/22 and 192.168.100.0/22).
# nutanix_reservation_ids=[] # Hardware reservation IDs to use for the Nutanix nodes
```
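
    This commit also adds a `terraform.tfvars.example` with the same settings commented out (shown later in this diff); one way to start is to copy it and edit the values:
    ```sh
    cp terraform.tfvars.example terraform.tfvars
    ```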

1.3. Initialize and apply Terraform:
```sh
terraform init
terraform plan
terraform apply
```

1.4. Network Topology:
![Network Topology](assets/NutanixClusterTopology.jpg)

1.5. After a successful apply, you should see output similar to the following (your IPs and key paths will differ):
```
Outputs:

nutanix_cluster1_bastion_public_ip = "145.40.91.33"
nutanix_cluster1_cvim_ip_address = "192.168.97.57"
nutanix_cluster1_iscsi_data_services_ip = "192.168.99.253"
nutanix_cluster1_prism_central_ip_address = "192.168.99.252"
nutanix_cluster1_ssh_forward_command = "ssh -L 9440:192.168.97.57:9440 -L 19440:192.168.99.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-qh0f2 root@145.40.91.33"
nutanix_cluster1_ssh_private_key = "/Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-qh0f2"
nutanix_cluster1_virtual_ip_address = "192.168.99.254"

nutanix_cluster2_bastion_public_ip = "145.40.91.141"
nutanix_cluster2_cvim_ip_address = "192.168.102.176"
nutanix_cluster2_iscsi_data_services_ip = "192.168.103.253"
nutanix_cluster2_prism_central_ip_address = "192.168.103.252"
nutanix_cluster2_ssh_forward_command = "ssh -L 9442:192.168.102.176:9440 -L 19442:192.168.103.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-lha20 root@145.40.91.141"
nutanix_cluster2_ssh_private_key = "/Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-lha20"
nutanix_cluster2_virtual_ip_address = "192.168.103.254"
```
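
    Any of these values can be re-printed later without re-applying; for example, to recover the Cluster 1 port-forward command from the Terraform state:
    ```sh
    # Print a single output value from the Terraform state
    terraform output -raw nutanix_cluster1_ssh_forward_command
    ```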

## Manual Steps

1. **Set up network resources to connect the clusters**

1.1. Access Cluster 1:
```sh
ssh -L 9440:192.168.97.57:9440 -L 19440:192.168.99.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-qh0f2 root@145.40.91.33
```
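
    Once the tunnel is up, the forwarded local ports serve the web UIs: 9440 for Prism Element and 19440 for Prism Central (Cluster 2 uses 9442 and 19442). For example, on macOS (use `xdg-open` on Linux):
    ```sh
    # Open the Prism UIs through the SSH tunnel; the browser will likely
    # warn about a self-signed certificate, which is expected here.
    open https://localhost:9440    # Prism Element
    open https://localhost:19440   # Prism Central
    ```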

1.2. Follow the instructions to change the password of Cluster 1:
[Nutanix Metal Workshop - Access Prism UI](https://equinix-labs.github.io/nutanix-on-equinix-metal-workshop/parts/3-access_prism_ui/)

1.3. Access Cluster 2:
```sh
ssh -L 9442:192.168.102.176:9440 -L 19442:192.168.103.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-lha20 root@145.40.91.141
```

1.4. Follow the instructions to change the password of Cluster 2:
[Nutanix Metal Workshop - Access Prism UI](https://equinix-labs.github.io/nutanix-on-equinix-metal-workshop/parts/3-access_prism_ui/)

1.5. Add static routes on each cluster's CVM to establish connectivity between the two clusters:

1.5.1. On Cluster 1:
```sh
ssh -L 9440:192.168.97.57:9440 -L 19440:192.168.99.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-qh0f2 root@145.40.91.33
ssh nutanix@192.168.97.57  # from the bastion, log in to the CVM (assumes the default "nutanix" CVM user)
sudo ip route add 192.168.100.0/22 via 192.168.96.1
```

1.5.2. On Cluster 2:
```sh
ssh -L 9442:192.168.102.176:9440 -L 19442:192.168.103.252:9440 -i /Users/vasubabu/Equinix/terraform-equinix-metal-nutanix-cluster/examples/nutanix-clusters/ssh-key-lha20 root@145.40.91.141
ssh nutanix@192.168.102.176  # from the bastion, log in to the CVM (assumes the default "nutanix" CVM user)
sudo ip route add 192.168.96.0/22 via 192.168.100.1
```
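
    1.5.3. (Optional) Verify connectivity. Assuming the routes above were added successfully, each CVM should now be able to reach the other cluster's CVM:
    ```sh
    # From Cluster 1's CVM, ping Cluster 2's CVM
    ping -c 3 192.168.102.176
    # From Cluster 2's CVM, ping Cluster 1's CVM
    ping -c 3 192.168.97.57
    ```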

**Note:** It is recommended to open Cluster 1's Prism UI in a normal browser window and Cluster 2's in an incognito window, so the two sessions do not interfere with each other.

2. **Update Cluster Details**

2.1. Update on Cluster 1:
Click the gear icon in the upper-right corner of the Prism UI and choose Cluster Details. Enter `192.168.99.254` for the Virtual IP and `192.168.99.253` for the ISCSI Data Services IP, then click Save.

2.2. Update on Cluster 2:
Click the gear icon in the upper-right corner of the Prism UI and choose Cluster Details. Enter `192.168.103.254` for the Virtual IP and `192.168.103.253` for the ISCSI Data Services IP (the `nutanix_cluster2_iscsi_data_services_ip` output), then click Save.

![Cluster Details](assets/ClusterDetails.jpg)
![Cluster Details Pop](assets/ClusterDetailsPop.jpg)

3. **Set up a Remote Site on both clusters**

![Data Protection](assets/DataProtection.jpg)

Navigate to the top-right, click on `+ Remote Site`, and select `Physical Cluster`.
![Remote Site1](assets/RemoteSite.jpg)
Continue through the pop-up window to finish adding the remote site.
![Remote Site2](assets/RemoteSite1.jpg)
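As an optional cross-check, the configured remote sites can also be listed from a CVM with Nutanix's `ncli`. This is only a sketch; the exact output and fields vary by AOS version:
```sh
# On a CVM of either cluster, list the configured remote sites
ncli remote-site list
```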
4. **Create a virtual machine on one of the clusters**
[Nutanix Metal Workshop - Create A VM](https://equinix-labs.github.io/nutanix-on-equinix-metal-workshop/parts/4-create_a_vm/)
5. **Set up a protection policy between the clusters**
5.1. Log in to Nutanix Prism Central.
![Home](assets/PrisumUIHome.jpg)
5.2. Navigate to the Data Protection section and create a new Protection Domain.
![Navigate to Protection Domain](assets/ProtectionDomain.jpg)
![Create Data Protection](assets/DataProtectionCreate.jpg)
![Add Data Protection](assets/DataProtectionAdd.jpg)
![Next Data Protection](assets/DataProtectionNext.jpg)
![Close Data Protection](assets/DataProtectionClose.jpg)
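Optionally, the new protection domain can be confirmed from a CVM as well (again a sketch; output varies by AOS version):
```sh
# On a CVM, list protection domains and the entities they protect
ncli protection-domain list
```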
6. **Migrate the VM to the other cluster**
6.1. Log in to Nutanix Prism Central.
![Home](assets/PrisumUIHome.jpg)
6.2. Migrate the VM.
![Migrate](assets/Migrate.jpg)
![Migrate Init](assets/MigrateInit.jpg)
After the migration is initiated, it will take some time to complete. You can follow its progress under Recent Tasks.
![Migrate Progress](assets/MigrateProgress.jpg)
![Migrate Success](assets/MigrateSuccess.jpg)
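
## Cleanup

When you are finished with the example, everything Terraform created can be torn down from the `examples/nutanix-clusters` directory:
```sh
terraform destroy
```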
Binary file added examples/nutanix-clusters/assets/Migrate.jpg
Binary file added examples/nutanix-clusters/assets/MigrateInit.jpg
Binary file added examples/nutanix-clusters/assets/RemoteSite.jpg
Binary file added examples/nutanix-clusters/assets/RemoteSite1.jpg
Binary files added for the remaining screenshots under examples/nutanix-clusters/assets/ (GitHub cannot render binary image files in the diff view).
74 changes: 60 additions & 14 deletions examples/nutanix-clusters/main.tf
@@ -33,24 +33,70 @@ provider "equinix" {
   auth_token = var.metal_auth_token
 }
 
+locals {
+  project_id = var.create_project ? element(equinix_metal_project.nutanix[*].id, 0) : element(data.equinix_metal_project.nutanix[*].id, 0)
+  vrf_id     = var.create_vrf ? element(equinix_metal_vrf.nutanix[*].id, 0) : element(data.equinix_metal_vrf.nutanix[*].id, 0)
+}
+
+resource "equinix_metal_project" "nutanix" {
+  count           = var.create_project ? 1 : 0
+  name            = var.metal_project_name
+  organization_id = var.metal_organization_id
+}
+
+data "equinix_metal_project" "nutanix" {
+  count      = var.create_project ? 0 : 1
+  name       = var.metal_project_name != "" ? var.metal_project_name : null
+  project_id = var.metal_project_id != "" ? var.metal_project_id : null
+}
+
+# Common resources shared between both clusters
+resource "random_string" "vrf_name_suffix" {
+  length  = 5
+  special = false
+}
+
+resource "equinix_metal_vrf" "nutanix" {
+  count       = var.create_vrf ? 1 : 0
+  description = "VRF with ASN 65000 and a pool of address space that includes 192.168.96.0/21"
+  name        = "nutanix-vrf-${random_string.vrf_name_suffix.result}"
+  metro       = var.metal_metro
+  local_asn   = "65000"
+  ip_ranges   = [var.cluster_subnet]
+  project_id  = local.project_id
+}
+
+data "equinix_metal_vrf" "nutanix" {
+  count  = var.create_vrf ? 0 : 1
+  vrf_id = var.vrf_id
+}
+
 module "nutanix_cluster1" {
-  source           = "equinix-labs/metal-nutanix-cluster/equinix"
-  version          = "0.1.2"
-  metal_auth_token = var.metal_auth_token
-  metal_metro      = var.metal_metro
-  create_project   = false
-  metal_project_id = var.metal_project_id
-  #metal_subnet = "192.168.100.0/22"
+  source             = "equinix-labs/metal-nutanix-cluster/equinix"
+  version            = "0.3.1"
+  metal_auth_token   = var.metal_auth_token
+  metal_metro        = var.metal_metro
+  create_project     = false
+  nutanix_node_count = var.nutanix_node_count
+  metal_project_id   = local.project_id
+  cluster_subnet     = "192.168.96.0/22"
+  vrf_id             = local.vrf_id
+  create_vrf         = false
+  create_vlan        = true
+  cluster_gateway    = "192.168.96.1"
 }
 
 module "nutanix_cluster2" {
-  source           = "equinix-labs/metal-nutanix-cluster/equinix"
-  version          = "0.1.2"
-  metal_auth_token = var.metal_auth_token
-  metal_metro      = var.metal_metro
-  create_project   = false
-  metal_project_id = var.metal_project_id
-  #metal_subnet = "192.168.104.0/22"
+  source             = "equinix-labs/metal-nutanix-cluster/equinix"
+  version            = "0.3.1"
+  metal_auth_token   = var.metal_auth_token
+  metal_metro        = var.metal_metro
+  create_project     = false
+  nutanix_node_count = var.nutanix_node_count
+  metal_project_id   = local.project_id
+  cluster_subnet     = "192.168.100.0/22"
+  vrf_id             = local.vrf_id
+  create_vrf         = false
+  create_vlan        = true
+  cluster_gateway    = "192.168.100.1"
 }
6 changes: 3 additions & 3 deletions examples/nutanix-clusters/outputs.tf
@@ -28,7 +28,7 @@ output "nutanix_cluster1_ssh_forward_command" {
 
 output "nutanix_cluster2_ssh_forward_command" {
   description = "SSH port forward command to use to connect to the Prism GUI"
-  value       = "ssh -L 9440:${module.nutanix_cluster2.cvim_ip_address}:9440 -L 19440:${module.nutanix_cluster2.prism_central_ip_address}:9440 -i ${module.nutanix_cluster2.ssh_private_key} root@${module.nutanix_cluster2.bastion_public_ip}"
+  value       = "ssh -L 9442:${module.nutanix_cluster2.cvim_ip_address}:9440 -L 19442:${module.nutanix_cluster2.prism_central_ip_address}:9440 -i ${module.nutanix_cluster2.ssh_private_key} root@${module.nutanix_cluster2.bastion_public_ip}"
 }
 
 output "nutanix_cluster1_cvim_ip_address" {
@@ -43,12 +43,12 @@ output "nutanix_cluster2_cvim_ip_address" {
 
 output "nutanix_cluster1_virtual_ip_address" {
   description = "Reserved IP for cluster virtal IP"
-  value = module.nutanix_cluster1.virtual_ip_address
+  value       = module.nutanix_cluster1.virtual_ip_address
 }
 
 output "nutanix_cluster2_virtual_ip_address" {
   description = "Reserved IP for cluster virtal IP"
-  value = module.nutanix_cluster2.virtual_ip_address
+  value       = module.nutanix_cluster2.virtual_ip_address
 }
 
 output "nutanix_cluster1_iscsi_data_services_ip" {
17 changes: 17 additions & 0 deletions examples/nutanix-clusters/terraform.tfvars.example
@@ -0,0 +1,17 @@
# metal_auth_token = "" # Equinix Metal API token
# metal_vlan_description = "ntnx-demo" # Description to add to created VLAN.
# metal_project_id = "" # The ID of the Metal project in which to deploy to cluster if `create_project` is false.
# metal_organization_id = "" # The ID of the Metal organization in which to create the project if `create_project` is true.
# metal_metro = "sl" # The metro to create the cluster in
# create_project = false # (Optional) to use an existing project matching `metal_project_name`, set this to false.
# metal_bastion_plan = "m3.small.x86" # Which plan to use for the bastion host.
# metal_nutanix_os = "nutanix_lts_6_5" # Which OS to use for the Nutanix nodes.
# metal_nutanix_plan = "m3.large.x86" # Which plan to use for the Nutanix nodes (must be Nutanix compatible, see https://deploy.equinix.com/developers/os-compatibility/)
# create_vlan = false # Whether to create a new VLAN for this project.
# create_vrf = true
# metal_vlan_id=null # ID of the VLAN you wish to use. e.g. 1234
# nutanix_node_count = 1 # The number of Nutanix nodes to create.
# skip_cluster_creation = false # Skip the creation of the Nutanix cluster.
# cluster_subnet = "192.168.96.0/21" # A private subnet large enough for both clusters; this /21 spans the two /22 cluster subnets (192.168.96.0/22 and 192.168.100.0/22).
# nutanix_reservation_ids=[] # Hardware reservation IDs to use for the Nutanix nodes

84 changes: 84 additions & 0 deletions examples/nutanix-clusters/variables.tf
@@ -30,3 +30,87 @@ variable "nutanix_node_count" {
default = 2
description = "The number of Nutanix nodes to create."
}

variable "create_vlan" {
type = bool
default = true
description = "Whether to create a new VLAN for this project."
}

variable "metal_vlan_id" {
type = number
default = null
description = "ID of the VLAN you wish to use."
}

variable "metal_project_name" {
type = string
default = ""
description = <<EOT
The name of the Metal project in which to deploy the cluster. If `create_project` is false and
`metal_project_id` is not set, the project will be looked up by name. Exactly one of
`metal_project_name` or `metal_project_id` must be set when `create_project` is false.
Required if `create_project` is true.
EOT
}

variable "metal_organization_id" {
type = string
default = null
description = "The ID of the Metal organization in which to create the project if `create_project` is true."
}

variable "metal_subnet" {
type = string
default = "192.168.96.0/21"
description = "Nutanix cluster subnet."
}

variable "metal_vlan_description" {
type = string
default = "ntnx-demo"
description = "Description to add to created VLAN."
}

variable "create_vrf" {
type = bool
default = true
description = "Whether to create a new VRF for this project."
}

variable "vrf_id" {
type = string
default = null
description = "ID of the VRF you wish to use."
}

variable "metal_nutanix_plan" {
type = string
default = "c3.small.x86"
description = "The plan to use for the Nutanix nodes."
}

variable "skip_cluster_creation" {
type = bool
default = false
description = "Skip the creation of the Nutanix cluster."
}

variable "metal_bastion_plan" {
type = string
default = "t3.small.x86"
description = "The plan to use for the bastion host."
}

variable "metal_nutanix_os" {
type = string
default = "ubuntu_20_04"
description = "The operating system to use for the Nutanix nodes."
}

variable "cluster_subnet" {
type = string
default = "192.168.100.0/22"
description = "nutanix cluster subnet"
}
