
proxmox_virtual_environment_container managing mount_points #1392

Open
GuillaumeSeren opened this issue Jun 14, 2024 · 7 comments
Labels
🐛 bug Something isn't working

@GuillaumeSeren

Hey,
thank you for this great project, I really enjoy it;
I think it is very useful.

Describe the bug
When you try to add anonymous (non-pre-created) mount_points (as documented here)
to a container like so, you get an error:

# Here is an example of a host entry from my variable list
  "mycontainer" = {
    .. basic container params ..
    mount_points = [
      {
        path   = "/mysql3"
        volume = "local"
        size   = "10G"
      }
    ]
  },
# Here the terraform plan / apply
└─> terraform plan
      - mount_point {
...
          - path          = "/mysql3" -> null
...
          - size          = "105G" -> null
          - volume        = "local:105/vm-105-disk-2.raw" -> null
        }

        # (8 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
...
# Here as the plan was fine, let's try to apply
└─> terraform apply
...
Terraform will perform the following actions:
...
      - mount_point {
          - acl           = false -> null
          - backup        = true -> null
          - mount_options = [] -> null
          - path          = "/mysql3" -> null
          - quota         = false -> null
          - read_only     = false -> null
          - replicate     = true -> null
          - shared        = false -> null
          - size          = "105G" -> null
          - volume        = "local:105/vm-105-disk-2.raw" -> null
        }

        # (8 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
...
  Enter a value: yes
...
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

# If we plan again, the same diff as before is still there:
└─> terraform plan
...
  ~ update in-place

Terraform will perform the following actions:
...
      - mount_point {
          - acl           = false -> null
          - backup        = true -> null
          - mount_options = [] -> null
          - path          = "/mysql3" -> null
          - quota         = false -> null
          - read_only     = false -> null
          - replicate     = true -> null
          - shared        = false -> null
          - size          = "105G" -> null
          - volume        = "local:105/vm-105-disk-2.raw" -> null
        }

        # (8 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
...

To Reproduce
Here is the code I use to manage this:

resource "proxmox_virtual_environment_container" "container" {
  for_each      = var.containers
  description   = each.value.description
  node_name     = var.node_name
  start_on_boot = true
  tags          = each.value.tags
  unprivileged  = true

  cpu {
    architecture = "amd64"
    cores        = each.value.cpu_cores
  }

  disk {
    datastore_id = var.ct_datastore_storage_location
    size         = each.value.disk_size
  }

  dynamic "mount_point" {
    for_each = each.value.mount_points != null ? each.value.mount_points : []

    content {
      size      = mount_point.value.size
      volume    = mount_point.value.volume
      path      = mount_point.value.path
      read_only = mount_point.value.read_only
    }
  }

  memory {
    dedicated = each.value.memory_dedicated
    swap      = each.value.memory_swap
  }

  operating_system {
    template_file_id = proxmox_virtual_environment_file.debian_container_template.id
    type             = var.os_type
  }

  initialization {
    hostname = each.value.hostname

    dns {
      domain = var.dns_domain
      server = var.dns_server
    }

    ip_config {
      ipv4 {
        address = each.value.ipv4_address
        gateway = each.value.ipv4_gateway
      }
    }

    user_account {
      keys     = var.ssh_keys
      password = random_password.container_root_password.result
    }
  }

  dynamic "network_interface" {
    for_each = each.value.network_interfaces
    content {
      name       = network_interface.value.name
      bridge     = network_interface.value.bridge
      rate_limit = network_interface.value.rate_limit
      firewall   = network_interface.value.firewall
    }
  }

  features {
    nesting = true
    fuse    = false
  }
}

Additional context

  • Single or clustered Proxmox:
  • Proxmox version:
  • Provider version (ideally it should be the latest version):
  • Terraform/OpenTofu version:
  • OS (where you run Terraform/OpenTofu from):
  • Debug logs (TF_LOG=DEBUG terraform apply):
# Here is something interesting in the debug log
2024-06-14T17:01:56.485+0200 [WARN]  Provider "provider[\"registry.terraform.io/bpg/proxmox\"]" produced an unexpected new value for module.containers.proxmox_virtual_environment_container.container["mycontainer"], but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .mount_point: block count changed from 1 to 2
module.containers.proxmox_virtual_environment_container.container["mycontainer"]: Modifications complete after 0s [id=105]

Best,
Guillaume

@GuillaumeSeren GuillaumeSeren added the 🐛 bug Something isn't working label Jun 14, 2024
@bpg
Owner

bpg commented Jun 18, 2024

Hey @GuillaumeSeren 👋🏼

Just to confirm, you have only one mount point, not two, defined in the mount_points list?

I suspect the list could end up having a valid value and a null value, which then makes the provider unhappy. I suspect it doesn't handle reordering of mount_point blocks well, nor an empty mount_point block.

As a workaround, I'd suggest defining mount points explicitly, without using dynamic blocks, until the issue is fixed.
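
For instance, a static version of the config from the report might look like this (a sketch based on the attribute names already used in this thread; the values are the reporter's own):

```hcl
resource "proxmox_virtual_environment_container" "container" {
  # ... other arguments as in the reproduction above ...

  # Static blocks instead of a dynamic "mount_point" block:
  mount_point {
    path   = "/web/mysql"
    volume = "local:105/vm-105-disk-1.raw"
    size   = "400G"
  }

  mount_point {
    path   = "/web/mysql2"
    volume = "local"
    size   = "10G"
  }
}
```

With static blocks the order is fixed in the config, which avoids the reordering that dynamic blocks can introduce.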

@GuillaumeSeren
Author

GuillaumeSeren commented Jun 19, 2024

Hey @bpg,
Thank you for the response ❤️

Just to confirm, you have only one mount point, not two, defined in the mount_points list?

No, I had 2 mount_points, something like this. The addition works; it is the removal that does not:

# data:
...
    mount_points = [
      {
        size   = "400G"
        path   = "/web/mysql"
        volume = "local:105/vm-105-disk-1.raw"
      },
      {
        size   = "10G"
        path   = "/web/mysql2"
        volume = "local"
      }
    ]
...

# Terraform plan
  ~ resource "proxmox_virtual_environment_container" "container" {
        id            = "105"
        tags          = [
            "ct",
            "pp",
        ]
        # (7 unchanged attributes hidden)

      + mount_point {
          + acl       = false
          + backup    = true
          + path      = "/web/mysql2"
          + quota     = false
          + read_only = false
          + replicate = true
          + shared    = false
          + size      = "10G"
          + volume    = "local:"
        }

        # (8 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

# Terraform apply
      + mount_point {
          + acl       = false
          + backup    = true
          + path      = "/web/mysql2"
          + quota     = false
          + read_only = false
          + replicate = true
          + shared    = false
          + size      = "10G"
          + volume    = "local:105"
        }

        # (8 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.containers.proxmox_virtual_environment_container.container["servername"]: Modifying... [id=105]
module.containers.proxmox_virtual_environment_container.container["servername"]: Modifications complete after 0s [id=105]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Now the disk is added and mounted in the container.
What happens if we try to delete it:

# data:
...
    mount_points = [
      {
        size   = "400G"
        path   = "/web/mysql"
        volume = "local:105/vm-105-disk-1.raw"
      }
    ]
...
# terraform apply 
...
  # module.containers.proxmox_virtual_environment_container.container["lx118-db-pp"] will be updated in-place
  ~ resource "proxmox_virtual_environment_container" "container" {
        id            = "105"                                                                 
...                                                                               
      - mount_point {
          - acl           = false -> null
          - backup        = true -> null
          - mount_options = [] -> null
          - path          = "/web/mysql2" -> null
          - quota         = false -> null
          - read_only     = false -> null
          - replicate     = true -> null
          - shared        = false -> null
          - size          = "105G" -> null
          - volume        = "local:105/vm-105-disk-2.raw" -> null
        }

        # (8 unchanged blocks hidden)
    }
...
module.containers.proxmox_virtual_environment_container.container["servername"]: Modifying... [id=105]
module.containers.proxmox_virtual_environment_container.container["servername"]: Modifications complete after 1s [id=105]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

# Here the mount_point is not deleted; planning again still shows the same diff:
...
      - mount_point {
          - acl           = false -> null
          - backup        = true -> null
          - mount_options = [] -> null
          - path          = "/web/mysql2" -> null
          - quota         = false -> null
          - read_only     = false -> null
          - replicate     = true -> null
          - shared        = false -> null
          - size          = "105G" -> null
          - volume        = "local:105/vm-105-disk-2.raw" -> null
        }

        # (8 unchanged blocks hidden)
    }
...
  Enter a value: yes

module.containers.proxmox_virtual_environment_container.container["servername"]: Modifying... [id=105]
module.containers.proxmox_virtual_environment_container.container["servername"]: Modifications complete after 1s [id=105]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

As a workaround, I'd suggest defining mount points explicitly, without using dynamic blocks, until the issue is fixed.

I am not sure I got it right, but what I did was create the /mysql/web mount_point manually
and then add it to the data list; please show me an example of your suggestion.

Best,
Guillaume

@bpg
Owner

bpg commented Jun 19, 2024

As a workaround, I'd suggest defining mount points explicitly, without using dynamic
blocks, until the issue is fixed.

I am not sure I got it right, but what I did was create the /mysql/web mount_point manually and then add it to the data list; please show me an example of your suggestion.

I was under the impression that the resulting config was something like

...
      mount_point {
        size   = "400G"
        path   = "/web/mysql"
        volume = "local:105/vm-105-disk-1.raw"
      }

      mount_point {}
...

But I see now that the problem is the actual delete.

@bpg
Owner

bpg commented Jun 23, 2024

Well, the mount point implementation is broken :( The schema does not have an attribute for the actual mount point name (i.e., mp0, mp1), so the provider does not know for sure which mount point each list item belongs to. For example, if the first of two mount points gets deleted from the config, the remaining one will get index 0 in the list. The provider will then try to update mp0 and delete mp1, which is the opposite of what's needed 🤦🏼
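
Concretely, using the mount points from this thread (a sketch; the mp0/mp1 mapping is how Proxmox names mount points, the matching logic in the comments is the failure mode described above):

```hcl
# State on the Proxmox side:
#   mp0 -> /web/mysql  (local:105/vm-105-disk-1.raw)
#   mp1 -> /web/mysql2 (local:105/vm-105-disk-2.raw)

# Config after deleting the first entry:
mount_points = [
  {
    size   = "10G"
    path   = "/web/mysql2"
    volume = "local:105/vm-105-disk-2.raw"
  }
]

# Without a stored mount point name, the provider matches by position:
#   list index 0 -> mp0  => "update" mp0 to /web/mysql2
#   (nothing at index 1) => "delete" mp1
# i.e., it rewrites the mount point that should have been kept and
# deletes the wrong one.
```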

I'm not sure yet if this can be fixed in a backward-compatible way. If not, I'd rather postpone this bug to v1+. Working with sets is much more convenient in the plugin framework than in the legacy SDK.

@bpg bpg added this to the v2.0 milestone Jun 25, 2024
@GuillaumeSeren
Author

Hey @bpg ,
Thank you for the reply.

I guess for my use case the easy choice is to not support those extra mount_points
and to recreate the container with the correct size, at least for now.

I would be interested in helping test/develop this new feature if you have any pointers to start from.

@bpg
Owner

bpg commented Jun 28, 2024

Working with lists/sets of blocks is inherently difficult in the current implementation of the provider. This is partly due to some choices made in the legacy code at the very beginning and partly due to limitations of the SDKv2 API. In an ideal world, these types of attributes should be represented as a map in the config:

...
  "mount_points" {
    "mp0" = {
       size   = "400G"
       path   = "/web/mysql"
       volume = "local:105/vm-105-disk-1.raw"
    }
    "mp1" = { ... }
  }
...

Then there would be no ambiguity, no issues with ordering, etc. But that's not possible with SDKv2, and this deficiency is my main motivation for moving to the framework-based implementation in #1231. Since there is a better long-term solution for these problems, I don't want to spend time on fixes and workarounds in the SDK-based code.
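
To illustrate why keyed entries remove the ambiguity (a sketch in hypothetical syntax; as noted above, the current SDKv2-based provider cannot express this):

```hcl
# Hypothetical map-keyed config (not supported today):
mount_points = {
  mp1 = {
    size   = "10G"
    path   = "/web/mysql2"
    volume = "local:105/vm-105-disk-2.raw"
  }
}

# Removing the "mp0" key from the map is unambiguous: the provider
# deletes mp0 and leaves mp1 untouched. No positional re-matching
# can occur, because each entry carries its own identity.
```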

@GuillaumeSeren, I do really appreciate your offer, but I need to come up with some conventions and guidelines on how to implement various types of attributes and blocks so we have consistency in the code. I'm making some progress in the VM resource and hopefully can come up with some contribution guidelines in the near future.

@GuillaumeSeren
Author

@GuillaumeSeren, I do really appreciate your offer, but I need to come up with some conventions and guidelines on how to implement various types of attributes and blocks so we have consistency in the code. I'm making some progress in the VM resource and hopefully can come up with some contribution guidelines in the near future.

Sure, thank you for all the nice responses.

Best.
