Adding a VM to a resource pool is not reflected in tfstate #1117

Open
meerumschlungen opened this issue Sep 30, 2024 · 0 comments

When cloning VM templates, the specified pool is not persisted to the Terraform state. Note how the following main.tf adds the three VMs to the pool "production":

locals {
  hosts = {
    "swarm-w01" = { ipconfig = "ip=10.0.2.90/24,gw=10.0.2.1", hagroup = "pve01-first" },
    "swarm-w02" = { ipconfig = "ip=10.0.2.91/24,gw=10.0.2.1", hagroup = "pve02-first" },
    "swarm-w03" = { ipconfig = "ip=10.0.2.92/24,gw=10.0.2.1", hagroup = "pve03-first" },
  }
}

resource "proxmox_vm_qemu" "vm" {
  for_each = tomap(local.hosts)

  name        = each.key
  target_node = "pve01"
  pool        = "production"
  clone       = "debian12-cloudinit"

  os_type = "cloud-init"
  onboot  = true
  bios    = "ovmf"

  cores   = 4
  sockets = 1
  memory  = 4096

  scsihw = "virtio-scsi-pci"
  disks {
    scsi {
      scsi0 {
        disk {
          storage = "pve-ceph"
          emulatessd = true
          discard = true
          size = "8G"
        }
      }
    }
    ide {
      ide2 {
        cloudinit {
          storage = "pve-ceph"
        }
      }
    }
  }

  network {
    model = "virtio"
    bridge = "vnet002"
  }
  ipconfig0 = each.value.ipconfig
  agent     = 1

  hagroup = each.value.hagroup
  hastate = "started"

  ciuser = "user"
  sshkeys = data.http.ssh_keys.response_body
}

When performing terraform apply, the VMs actually do get added to the resource pool, but this is not reflected in terraform.tfstate:

[screenshot of terraform.tfstate showing the pool attribute is not set]
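
The same can also be checked from the CLI without opening the state file directly, for example with terraform state show against one of the resource addresses from the plan below:

$ terraform state show 'proxmox_vm_qemu.vm["swarm-w01"]' | grep pool

If the attribute was not recorded in state, this returns nothing.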

Thus, without any configuration changes, when terraform plan is executed directly afterwards, it wants to add the pool property again:

$ terraform plan
proxmox_vm_qemu.vm["swarm-w01"]: Refreshing state... [id=pve01/qemu/121]
proxmox_vm_qemu.vm["swarm-w02"]: Refreshing state... [id=pve02/qemu/122]
proxmox_vm_qemu.vm["swarm-w03"]: Refreshing state... [id=pve03/qemu/123]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # proxmox_vm_qemu.vm["swarm-w01"] will be updated in-place
  ~ resource "proxmox_vm_qemu" "vm" {
        id                     = "pve01/qemu/121"
        name                   = "swarm-w01"
      + pool                   = "production"
        tags                   = null
        # (48 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # proxmox_vm_qemu.vm["swarm-w02"] will be updated in-place
  ~ resource "proxmox_vm_qemu" "vm" {
        id                     = "pve02/qemu/122"
        name                   = "swarm-w02"
      + pool                   = "production"
        tags                   = null
        # (48 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

  # proxmox_vm_qemu.vm["swarm-w03"] will be updated in-place
  ~ resource "proxmox_vm_qemu" "vm" {
        id                     = "pve03/qemu/123"
        name                   = "swarm-w03"
      + pool                   = "production"
        tags                   = null
        # (48 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 3 to change, 0 to destroy.
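
As an interim workaround (not a fix for the underlying provider behavior), the spurious diff can be suppressed by telling Terraform to ignore the attribute; a minimal sketch:

resource "proxmox_vm_qemu" "vm" {
  # ... arguments as above ...

  lifecycle {
    # Workaround only: suppress the perpetual "+ pool" diff until the
    # provider persists the pool attribute to state.
    ignore_changes = [pool]
  }
}

The trade-off is that genuine changes to pool in the configuration are then ignored as well until the lifecycle block is removed.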