Migrating VMs from node to node not working #1256
Hi @mmacedo2000! 👋🏼 The migration support was added in #501, so in theory this should work; however, I have not tested it for a while. How did you try migrating the VM? By updating the `node_name`?
Hello, yes, I am trying to migrate the VM by updating the `node_name` and setting `migrate = true`. The VM is created with `proxmox_virtual_environment_vm`, and I am also using cloud-init. This is the VM creation configuration:

```hcl
resource "proxmox_virtual_environment_vm" "vm" {
  name        = var.name
  description = var.description
  tags        = ["terraform"] # local.proxmox_tags
  node_name   = var.host_name
  migrate     = true

  cpu {
    cores = var.vm_cores
    numa  = true
    type  = "host"
  }

  memory {
    dedicated = var.vm_memory
  }

  agent {
    enabled = true
  }

  network_device {
    bridge = "vmbr0"
  }

  disk {
    datastore_id = var.disk_datasource_name
    file_id      = proxmox_virtual_environment_file.ubuntu_cloud_image.id
    interface    = "scsi0"
    size         = var.disk_size
  }

  serial_device {}

  operating_system {
    type = "l26"
  }

  initialization {
    datastore_id      = var.disk_datasource_name
    user_data_file_id = proxmox_virtual_environment_file.cloud_init.id

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }
}
```
Hello, did you have a chance to look at this problem? @bpg Thanks.
Hi @mmaced, I can't reproduce this issue in my lab. My template:

```hcl
resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  name      = "test"
  node_name = "pve2"
  vm_id     = 1000

  agent {
    enabled = true
  }

  cpu {
    cores = 4
  }

  memory {
    dedicated = 4096
    # hugepages = "any"
  }

  boot_order = ["virtio0", "scsi0"]

  disk {
    datastore_id = "local-lvm"
    file_id      = proxmox_virtual_environment_download_file.ubuntu_cloud_image.id
    interface    = "virtio0"
  }

  initialization {
    datastore_id = "local-lvm"

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }

    user_data_file_id = proxmox_virtual_environment_file.cloud_config.id
  }

  network_device {
    bridge = "vmbr0"
  }
}

resource "proxmox_virtual_environment_download_file" "ubuntu_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "pve2"
  url          = "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"

  overwrite_unmanaged = true
}
```
After the initial deployment, I changed `node_name = "pve1"` and added `migrate = true` in the TF template for the VM resource.
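In other words, the migration is triggered by editing just two attributes on the existing VM resource (a minimal sketch; all other attributes stay unchanged):

```hcl
resource "proxmox_virtual_environment_vm" "ubuntu_vm" {
  # ... other attributes unchanged ...
  node_name = "pve1" # was "pve2"; changing this requests the move
  migrate   = true   # migrate in place instead of destroy/recreate
}
```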
[screenshots of the plan and migration output omitted]

Could you post your terraform / tofu output?
Hey @mmaced, could you post your TF output and ideally a debug log, so I can try to debug it?
Hello @bpg, sorry for the late response. So, this is my VM configuration (initially already with `migrate = true`; I don't know if that could be the problem).
Now I will change the variable var.host_name to another host name and run terraform. As you can see, it will destroy and recreate instead of migrating:
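For readers without the screenshots, a destroy-and-recreate plan in Terraform typically looks like the following (illustrative only, not the actual output from this issue):

```text
# proxmox_virtual_environment_vm.vm must be replaced
-/+ resource "proxmox_virtual_environment_vm" "vm" {
      ~ node_name = "host01" -> "host02" # forces replacement
      # ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```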
Ah, that's migration of a `proxmox_virtual_environment_file` resource, not the VM itself. You'd probably need to put your cloud_init file on a shared datastore (Ceph, NFS, CIFS, etc.) to support this scenario. Moving file resources between local datastores on different cluster nodes is not supported by the provider.
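For illustration, a cloud-init snippet uploaded to a shared datastore might look like this (the datastore name `nfs-shared` and the file contents are hypothetical examples; use whatever shared storage is configured in your cluster):

```hcl
resource "proxmox_virtual_environment_file" "cloud_init" {
  content_type = "snippets"
  datastore_id = "nfs-shared" # hypothetical shared (NFS/Ceph/CIFS) datastore
  node_name    = "pve1"       # uploaded via this node, but visible cluster-wide

  source_raw {
    file_name = "cloud-init.yaml"
    data      = <<-EOF
      #cloud-config
      hostname: test-vm
    EOF
  }
}
```

Because the datastore is shared, the file stays reachable under the same `datastore_id` after the VM moves to another node, so the file resource itself never needs to be migrated.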
But this proxmox_virtual_environment_file is already set to use a shared NFS datastore (NAS2). I am also migrating the proxmox_virtual_environment_vm from one node_name to another, and it also tries to destroy and create instead of migrating.
Great, then it doesn't need to be moved from node to node, as the file should be available under the same datastore name on all nodes.
Could you share a terraform output of this try?
Hello,
I set up my Proxmox infra with terraform bpg/proxmox.
Now I wanted to migrate a VM from host01 to host02, but it's not working: it re-creates the VM and I lose my data...
I already tried
migrate = true
but it still didn't work. Is there a way to migrate whenever I want using terraform bpg/proxmox?