-
Yes.
That would not work. The Ceph datastore should be accessible from multiple machines. The correct way is to download the image only once and reference the same file from each VM. According to the documentation, the ID of an uploaded file takes the form `<datastore_id>:<content_type>/<file_name>`, so replacing the per-node duplicates with references to a single shared file should work (see the sketch below).
A few possibly useful hints: with the guest agent enabled, make sure to read the (recently added) Qemu guest agent section of the documentation. Proxmox does not support using disks "without a VM", but it is possible to have data disks survive VM re-creation; see attached disks (experimental).
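To illustrate the single-download approach, here is a minimal sketch reusing the attributes from your configuration (most VM settings trimmed; it assumes the uploaded file's ID resolves to the `cephfs:iso/...` form):

```hcl
# Upload the image once, through any single node. Because "cephfs" is a
# shared datastore, every node in the cluster can read the resulting file.
resource "proxmox_virtual_environment_file" "debian_cloud_image" {
  content_type = "iso"
  datastore_id = "cephfs"
  node_name    = "proxmox1"

  source_file {
    path      = "https://cdimage.debian.org/images/cloud/bullseye/20221219-1234/debian-11-genericcloud-amd64-20221219-1234.qcow2"
    file_name = "debian-11-genericcloud-amd64-20221219-1234.img"
  }
}

# A VM on a different node can reference the same upload; the resource id
# should resolve to "cephfs:iso/debian-11-genericcloud-amd64-20221219-1234.img".
resource "proxmox_virtual_environment_vm" "example" {
  name      = "example"
  node_name = "proxmox2" # not the node the file was uploaded through

  # ... cpu, memory, network_device, etc. omitted ...

  disk {
    datastore_id = "local-zfs"
    file_id      = proxmox_virtual_environment_file.debian_cloud_image.id
    interface    = "scsi0"
    size         = 32
  }
}
```

The same applies to the cloud-init snippet: one proxmox_virtual_environment_file resource on the shared datastore, referenced by user_data_file_id from every VM.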
-
Oh wow, that was a quick response! Will test and get back to you. This was excellent info and MUCH appreciated!
-
Ok, so I tried this, and unfortunately it doesn't create a hard disk for the VM. It did correctly create a cloud-init drive though (which is a step forward); the cloud-init drive is the drive that contains the initialization (user-data.yml) files needed to personalize the cloud VM. I'm thinking there's something I'm missing in the hard disk assignment parameters. Terraform spits out an error when I try to run "apply".
-
Ok, so this worked perfectly. Not sure how it is working, because files.tf points only to proxmox1, but it is working as expected. The configuration is split across main.tf, files.tf, virtual_machines.tf, ansible.tf, cloud-init/user-data.yml, ansible/playbook.yml, and ansible/ansible.cfg.
This went through and created the VMs and then started the Ansible playbook to kick off a Kubespray install. Looks like the file_ids worked without pointing them anywhere else since they are on shared storage. Thanks for your help btw! It allowed me to debug this and get it working.
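For reference, the Ansible hand-off itself can be wired up with a null_resource and a local-exec provisioner; this is only a rough sketch, and the inventory and playbook paths are placeholders rather than what ansible.tf above actually contains:

```hcl
# Hypothetical ansible.tf: run the playbook only after the VMs exist.
# Inventory and playbook paths are illustrative placeholders.
resource "null_resource" "kubespray" {
  depends_on = [
    proxmox_virtual_environment_vm.k8s_cp_01,
    proxmox_virtual_environment_vm.k8s_worker_01,
  ]

  provisioner "local-exec" {
    working_dir = "ansible"
    command     = "ansible-playbook -i inventory.ini playbook.yml"
  }
}
```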
-
Hi everyone,
I am hoping someone can help me out. I have a Ceph pool created for a 3-node cluster, used primarily as a snippets/ISO datastore for spinning up cloud-init based Debian 11 VMs. The issue I am running into is that I think I am overcomplicating the downloads for the VMs I'm creating, and I'm hoping someone can take a gander at what I am doing and see if there's an easier way. Note that this is working as expected; I just want to decrease the number of times I fetch and upload an ISO and snippet file to the Ceph pool.
I'm using the latest 0.37.0 release as well (just as an FYI).
Here's my files.tf:
`resource "proxmox_virtual_environment_file" "debian_cloud_image" {
content_type = "iso"
datastore_id = "cephfs"
node_name = "proxmox1"
source_file {
path = "https://cdimage.debian.org/images/cloud/bullseye/20221219-1234/debian-11-genericcloud-amd64-20221219-1234.qcow2"
file_name = "debian-11-genericcloud-amd64-20221219-1234.img"
checksum = "ba0237232247948abf7341a495dec009702809aa7782355a1b35c112e75cee81"
}
}
resource "proxmox_virtual_environment_file" "cloud_config" {
content_type = "snippets"
datastore_id = "cephfs"
node_name = "proxmox1"
source_file {
path = "cloud-init/user-data.yml"
}
}
resource "proxmox_virtual_environment_file" "debian_cloud_image2" {
content_type = "iso"
datastore_id = "cephfs"
node_name = "proxmox2"
source_file {
path = "https://cdimage.debian.org/images/cloud/bullseye/20221219-1234/debian-11-genericcloud-amd64-20221219-1234.qcow2"
file_name = "debian-11-genericcloud-amd64-20221219-1234.img"
checksum = "ba0237232247948abf7341a495dec009702809aa7782355a1b35c112e75cee81"
}
}
resource "proxmox_virtual_environment_file" "cloud_config2" {
content_type = "snippets"
datastore_id = "cephfs"
node_name = "proxmox2"
source_file {
path = "cloud-init/user-data.yml"
}
}
resource "proxmox_virtual_environment_file" "debian_cloud_image3" {
content_type = "iso"
datastore_id = "cephfs"
node_name = "proxmox3"
source_file {
path = "https://cdimage.debian.org/images/cloud/bullseye/20221219-1234/debian-11-genericcloud-amd64-20221219-1234.qcow2"
file_name = "debian-11-genericcloud-amd64-20221219-1234.img"
checksum = "ba0237232247948abf7341a495dec009702809aa7782355a1b35c112e75cee81"
}
}
resource "proxmox_virtual_environment_file" "cloud_config3" {
content_type = "snippets"
datastore_id = "cephfs"
node_name = "proxmox3"
source_file {
path = "cloud-init/user-data.yml"
}
}
```
The above will end up downloading the Debian cloud-init image 3 times instead of just getting it once.
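Even if per-node copies were genuinely needed, the three near-identical blocks could presumably be collapsed with for_each rather than copy-pasted; a quick sketch with the same attributes, just iterated over the node names:

```hcl
# One block, instantiated once per node. This still performs three uploads;
# it only removes the copy-pasted configuration.
resource "proxmox_virtual_environment_file" "debian_cloud_image" {
  for_each = toset(["proxmox1", "proxmox2", "proxmox3"])

  content_type = "iso"
  datastore_id = "cephfs"
  node_name    = each.value

  source_file {
    path      = "https://cdimage.debian.org/images/cloud/bullseye/20221219-1234/debian-11-genericcloud-amd64-20221219-1234.qcow2"
    file_name = "debian-11-genericcloud-amd64-20221219-1234.img"
    checksum  = "ba0237232247948abf7341a495dec009702809aa7782355a1b35c112e75cee81"
  }
}
```

The VMs would then reference proxmox_virtual_environment_file.debian_cloud_image["proxmox1"].id and so on, but that still downloads the image three times, which is exactly what I'd like to avoid.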
Here's my virtual_machines.tf:
`resource "proxmox_virtual_environment_vm" "k8s_cp_01" {
name = "k8s-cp-01"
description = "Managed by Terraform"
tags = ["terraform"]
node_name = "proxmox1"
vm_id = "100"
cpu {
cores = 2
}
memory {
dedicated = "4096"
}
agent {
enabled = true
}
network_device {
bridge = "vmbr0"
mac_address = "3A:53:4E:50:13:F6"
}
disk {
datastore_id = "local-zfs"
file_id = proxmox_virtual_environment_file.debian_cloud_image.id
interface = "scsi0"
size = "32"
}
serial_device {} # The Debian cloud image expects a serial port to be present
operating_system {
type = "l26" # Linux Kernel 2.6 - 5.X.
}
initialization {
datastore_id = "local-zfs"
user_data_file_id = proxmox_virtual_environment_file.cloud_config.id
}
}
resource "proxmox_virtual_environment_vm" "k8s_worker_01" {
name = "k8s-worker-01"
description = "Managed by Terraform"
tags = ["terraform"]
node_name = "proxmox1"
vm_id = "101"
cpu {
cores = 1
}
memory {
dedicated = "4096"
}
agent {
enabled = true
}
network_device {
bridge = "vmbr0"
mac_address = "2A:B4:2E:60:BD:5F"
}
disk {
datastore_id = "local-zfs"
file_id = proxmox_virtual_environment_file.debian_cloud_image.id
interface = "scsi0"
size = "32"
}
serial_device {} # The Debian cloud image expects a serial port to be present
operating_system {
type = "l26" # Linux Kernel 2.6 - 5.X.
}
initialization {
datastore_id = "local-zfs"
user_data_file_id = proxmox_virtual_environment_file.cloud_config.id
}
}
resource "proxmox_virtual_environment_vm" "k8s_worker_02" {
name = "k8s-worker-02"
description = "Managed by Terraform"
tags = ["terraform"]
node_name = "proxmox1"
vm_id = "102"
cpu {
cores = 1
}
memory {
dedicated = "4096"
}
agent {
enabled = true
}
network_device {
bridge = "vmbr0"
mac_address = "CA:1D:F9:41:75:BA"
}
disk {
datastore_id = "local-zfs"
file_id = proxmox_virtual_environment_file.debian_cloud_image.id
interface = "scsi0"
size = "32"
}
serial_device {} # The Debian cloud image expects a serial port to be present
operating_system {
type = "l26" # Linux Kernel 2.6 - 5.X.
}
initialization {
datastore_id = "local-zfs"
user_data_file_id = proxmox_virtual_environment_file.cloud_config.id
}
}
resource "proxmox_virtual_environment_vm" "k8s_worker_03" {
name = "k8s-worker-03"
description = "Managed by Terraform"
tags = ["terraform"]
node_name = "proxmox2"
vm_id = "103"
cpu {
cores = 1
}
memory {
dedicated = "6144"
}
agent {
enabled = true
}
network_device {
bridge = "vmbr0"
mac_address = "6A:EB:AB:27:EA:53"
}
disk {
datastore_id = "local-zfs"
file_id = proxmox_virtual_environment_file.debian_cloud_image2.id
interface = "scsi0"
size = "32"
}
serial_device {} # The Debian cloud image expects a serial port to be present
operating_system {
type = "l26" # Linux Kernel 2.6 - 5.X.
}
initialization {
datastore_id = "local-zfs"
user_data_file_id = proxmox_virtual_environment_file.cloud_config2.id
}
}
resource "proxmox_virtual_environment_vm" "k8s_worker_04" {
name = "k8s-worker-04"
description = "Managed by Terraform"
tags = ["terraform"]
node_name = "proxmox2"
vm_id = "104"
cpu {
cores = 1
}
memory {
dedicated = "6144"
}
agent {
enabled = true
}
network_device {
bridge = "vmbr0"
mac_address = "E6:3B:B6:D1:D4:2E"
}
disk {
datastore_id = "local-zfs"
file_id = proxmox_virtual_environment_file.debian_cloud_image2.id
interface = "scsi0"
size = "32"
}
serial_device {} # The Debian cloud image expects a serial port to be present
operating_system {
type = "l26" # Linux Kernel 2.6 - 5.X.
}
initialization {
datastore_id = "local-zfs"
user_data_file_id = proxmox_virtual_environment_file.cloud_config2.id
}
}
resource "proxmox_virtual_environment_vm" "k8s_worker_05" {
name = "k8s-worker-05"
description = "Managed by Terraform"
tags = ["terraform"]
node_name = "proxmox3"
vm_id = "105"
cpu {
cores = 1
}
memory {
dedicated = "6144"
}
agent {
enabled = true
}
network_device {
bridge = "vmbr0"
mac_address = "22:D1:D2:D0:51:35"
}
disk {
datastore_id = "local-zfs"
file_id = proxmox_virtual_environment_file.debian_cloud_image3.id
interface = "scsi0"
size = "32"
}
serial_device {} # The Debian cloud image expects a serial port to be present
operating_system {
type = "l26" # Linux Kernel 2.6 - 5.X.
}
initialization {
datastore_id = "local-zfs"
user_data_file_id = proxmox_virtual_environment_file.cloud_config3.id
}
}
resource "proxmox_virtual_environment_vm" "k8s_worker_06" {
name = "k8s-worker-06"
description = "Managed by Terraform"
tags = ["terraform"]
node_name = "proxmox3"
vm_id = "106"
cpu {
cores = 1
}
memory {
dedicated = "6144"
}
agent {
enabled = true
}
network_device {
bridge = "vmbr0"
mac_address = "2E:B1:38:11:0E:A6"
}
disk {
datastore_id = "local-zfs"
file_id = proxmox_virtual_environment_file.debian_cloud_image3.id
interface = "scsi0"
size = "32"
}
serial_device {} # The Debian cloud image expects a serial port to be present
operating_system {
type = "l26" # Linux Kernel 2.6 - 5.X.
}
initialization {
datastore_id = "local-zfs"
user_data_file_id = proxmox_virtual_environment_file.cloud_config3.id
}
}
```
What you'll notice is that the disks and cloud-init drives point to their respective node's datastore. That's because the VMs' disks are stored locally on each node of the cluster; I don't have pooled storage for VM disks, so each node has its own local ZFS datastore for them.
Is there a way to reduce the downloads to a single one and still have each VM in virtual_machines.tf define its respective "file_id" and "user_data_file_id" locations on its respective node? I'm looking for something like "node_name" being available inside both the disk {} and initialization {} blocks. If that were the case, I could download once to the Ceph pool and then point both arguments at files located on the specific node.
Hope this makes sense. I'm just goofing around with this in my homelab and am loving it. It's used to kick off an Ansible/Kubespray workflow so I can quickly create and destroy a k8s cluster for development and testing. Keep up the great work!