
Disk Resize Failure: Error “volume ‘Pure-NFS:115/vm-115-disk-0.raw’ does not exist” during VM cloning. #1599

Open
arsensimonyanpicsart opened this issue Oct 19, 2024 · 4 comments
Labels
🐛 bug Something isn't working topic:clone

Comments

@arsensimonyanpicsart

Describe the bug
When attempting to create a Proxmox VM using the OpenTofu provider, the disk resize fails during the creation process. The error indicates that the specified volume does not exist on the datastore, even though the resource is properly defined.

To Reproduce
Steps to reproduce the behavior:

  1. Define a Proxmox VM resource with a clone from an existing VM and specify a disk size.
  2. Run terraform init && terraform apply -auto-approve.
  3. The error appears during VM creation: volume 'x:x/x.raw' does not exist.

Minimal Terraform configuration that reproduces the issue:

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.66.2"
    }
    time = {
      source = "hashicorp/time"
    }
  }
}

provider "proxmox" {
  endpoint  = "https://xxx:8006/"
  api_token = "xxx"
  insecure  = true
  ssh {
    agent    = true
    username = "xxx"
  }
}

resource "proxmox_virtual_environment_vm" "vm1" {
  name        = "va-vm-name1"
  description = "Managed by Terraform"
  tags        = ["infradb", "ubuntu"]
  pool_id     = "infra-DB"
  node_name   = "xxx"

  scsi_hardware = "virtio-scsi-single"
  acpi          = true
  bios          = "seabios"
  machine       = "pc"
  boot_order    = ["scsi0"]

  clone {
    vm_id     = 9000
    full      = true
    node_name = "xxx"
  }

  cpu {
    cores   = 1
    sockets = 1
    type    = "host"
  }

  memory {
    dedicated = 4096
  }

  network_device {
    bridge  = "vmbr0"
    vlan_id = 1045
    model   = "virtio"
  }

  disk {
    datastore_id = "Pure-NFS"
    interface    = "scsi0"
    size         = 10
  }

  vga {
    type   = "std"
    memory = 16
  }

  tablet_device = true

  keyboard_layout = "en-us"

  operating_system {
    type = "l26"
  }

  agent {
    enabled = true
    timeout = "20s"
    trim    = false
    type    = "virtio"
  }

}

Output of tofu apply:

tofu apply -auto-approve

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # proxmox_virtual_environment_vm.vm1 will be created
  + resource "proxmox_virtual_environment_vm" "vm1" {
      + acpi                    = true
      + bios                    = "seabios"
      + boot_order              = [
          + "scsi0",
        ]
      + description             = "Managed by Terraform"
      + id                      = (known after apply)
      + ipv4_addresses          = (known after apply)
      + ipv6_addresses          = (known after apply)
      + keyboard_layout         = "en-us"
      + mac_addresses           = (known after apply)
      + machine                 = "pc"
      + migrate                 = false
      + name                    = "va-vm-name1"
      + network_interface_names = (known after apply)
      + node_name               = "x"
      + on_boot                 = true
      + pool_id                 = "infra-DB"
      + protection              = false
      + reboot                  = false
      + scsi_hardware           = "virtio-scsi-single"
      + started                 = true
      + stop_on_destroy         = false
      + tablet_device           = true
      + tags                    = [
          + "infradb",
          + "ubuntu",
        ]
      + template                = false
      + timeout_clone           = 1800
      + timeout_create          = 1800
      + timeout_migrate         = 1800
      + timeout_move_disk       = 1800
      + timeout_reboot          = 1800
      + timeout_shutdown_vm     = 1800
      + timeout_start_vm        = 1800
      + timeout_stop_vm         = 300
      + vm_id                   = (known after apply)

      + agent {
          + enabled = true
          + timeout = "20s"
          + trim    = false
          + type    = "virtio"
        }

      + clone {
          + full      = true
          + node_name = "xxx"
          + retries   = 1
          + vm_id     = 9000
        }

      + cpu {
          + cores      = 1
          + hotplugged = 0
          + limit      = 0
          + numa       = false
          + sockets    = 1
          + type       = "host"
          + units      = 1024
        }

      + disk {
          + aio               = "io_uring"
          + backup            = true
          + cache             = "none"
          + datastore_id      = "Pure-NFS"
          + discard           = "ignore"
          + file_format       = (known after apply)
          + interface         = "scsi0"
          + iothread          = false
          + path_in_datastore = (known after apply)
          + replicate         = true
          + size              = 10
          + ssd               = false
        }

      + memory {
          + dedicated      = 4096
          + floating       = 0
          + keep_hugepages = false
          + shared         = 0
        }

      + network_device {
          + bridge      = "vmbr0"
          + enabled     = true
          + firewall    = false
          + mac_address = (known after apply)
          + model       = "virtio"
          + mtu         = 0
          + queues      = 0
          + rate_limit  = 0
          + vlan_id     = 1045
        }

      + operating_system {
          + type = "l26"
        }

      + vga {
          + memory = 16
          + type   = "std"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
proxmox_virtual_environment_vm.vm1: Creating...
proxmox_virtual_environment_vm.vm1: Still creating... [10s elapsed]
╷
│ Error: disk resize fails: error waiting for VM disk resize: All attempts fail:
│ #1: task "UPID:xxx:001B1AA1:0942E6DB:67136E48:resize:115:terraform@pve!terraform:" failed to complete with exit code: volume 'Pure-NFS:115/vm-115-disk-0.raw' does not exist
│
│   with proxmox_virtual_environment_vm.vm1,
│   on main.tf line 23, in resource "proxmox_virtual_environment_vm" "vm1":
│   23: resource "proxmox_virtual_environment_vm" "vm1" {
│

Expected behavior
The VM should be created with the defined configuration, including disk size, without any errors related to missing volumes.

Additional context
It works with provider version 0.64.0.

  • Single or clustered Proxmox: Clustered
  • Proxmox version: 8.2.7
  • Provider version (ideally it should be the latest version): 0.66.2
  • Terraform/OpenTofu version: 1.8.7
  • OS (where you run Terraform/OpenTofu from): macOS Sonoma
  • Debug logs (TF_LOG=DEBUG terraform apply):
@arsensimonyanpicsart arsensimonyanpicsart added the 🐛 bug Something isn't working label Oct 19, 2024
@bpg
Owner

bpg commented Oct 25, 2024

Hey @arsensimonyanpicsart

volume 'Pure-NFS:115/vm-115-disk-0.raw' does not exist

That's an odd error. Have you tried creating a VM using different storage (not NFS-backed)?
I'm wondering if this is specific to cloning on NFS. Also, what storage does the source VM use?
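
One way to check what Proxmox actually sees is to inspect the volume from a shell on the node. A rough sketch using stock pvesm/qm commands; the storage name, VM IDs, and volume ID below are copied from this report and are placeholders:

# Show the source template's disk entries (9000 is the template ID from the report)
qm config 9000 | grep -E '^(scsi|virtio|sata|ide)'

# List every volume Proxmox sees on the NFS datastore
pvesm list Pure-NFS

# Resolve the volume ID from the error message to a filesystem path;
# this fails if Proxmox cannot find the volume
pvesm path 'Pure-NFS:115/vm-115-disk-0.raw'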

@bpg bpg added ⌛ pending author's response Requested additional information from the reporter topic:clone labels Oct 25, 2024
@arsensimonyanpicsart
Author

@bpg The source VM (template) uses the same NFS storage. I tried changing the source VM's storage type to local, and it works.
Yes, I also tried Ceph, for example, but the error is the same. So my guess is that if you're creating a VM template (source) for cloning, you need to use local or local-lvm storage types.
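
For reference, a rough sketch of that workaround from the Proxmox shell, assuming the template's disk is attached as scsi0 (VM ID and target storage are placeholders from this thread; qm disk move is the current spelling, qm move_disk the older alias):

# Move the template's disk off the NFS datastore onto local-lvm,
# deleting the source copy afterwards
qm disk move 9000 scsi0 local-lvm --delete 1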

@bpg bpg removed the ⌛ pending author's response Requested additional information from the reporter label Nov 4, 2024
@bpg
Owner

bpg commented Nov 4, 2024

Thanks, there seems to be a regression then. NFS datastore support was definitely working a while back in my prod env, before I switched everything to Ceph 🤔

@BongoEADGC6

Adding to this: when using an NFS-backed template, cloning to "local-lvm" results in a "not found" message.

Additionally, the clone log itself appears to clone to the same NFS datastore rather than to the requested "local-lvm".
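
To help isolate whether this is the provider or Proxmox itself, one could run the same clone directly via the Proxmox CLI and check where the disk lands. A sketch, with VM IDs and storage names mirroring this thread as placeholders:

# Full clone of the template to a new VM ID, explicitly targeting local-lvm
qm clone 9000 115 --full --storage local-lvm

# Check which datastore the new disk actually ended up on
qm config 115 | grep scsi0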
