When the charm base is changed, the provider plans a change which then applies successfully, but the base isn't actually changed on the deployed application. This causes a loop in our Flux CI/CD system, which constantly re-applies the same change.
| Terraform used the selected providers to generate the following execution
| plan. Resource actions are indicated with the following symbols:
| ~ update in-place
|
| Terraform will perform the following actions:
|
| # module.canonical_k8s.juju_application.k8s_control will be updated in-place
| ~ resource "juju_application" "k8s_control" {
| id = "stg-mcarvalhor-1:k8s-control"
| name = "k8s-control"
| + principal = (known after apply)
| + storage = (known after apply)
| # (6 unchanged attributes hidden)
|
| ~ charm {
| ~ base = "[email protected]" -> "[email protected]"
| name = "k8s"
| # (3 unchanged attributes hidden)
| }
|
| # (1 unchanged block hidden)
| }
|
| # module.canonical_k8s.juju_application.k8s_worker will be updated in-place
| ~ resource "juju_application" "k8s_worker" {
| id = "stg-mcarvalhor-1:k8s-worker"
| name = "k8s-worker"
| + principal = (known after apply)
| + storage = (known after apply)
| # (6 unchanged attributes hidden)
|
| ~ charm {
| ~ base = "[email protected]" -> "[email protected]"
| name = "k8s-worker"
| # (3 unchanged attributes hidden)
| }
| }
|
| Plan: 0 to add, 2 to change, 0 to destroy.
Changing a charm's base should perhaps trigger a replacement instead?
Triggering a replacement for a base (or series) change is also problematic as things stand. A machine charm's application must be redeployed to change its base. A Kubernetes charm, however, is fine: an upgrade uses a new OCI image, which can effectively "upgrade" the base with no problems.
We can handle both scenarios by using `RequiresReplaceIf` rather than `RequiresReplace`.
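To make the idea concrete, here is a self-contained sketch of the predicate such a `RequiresReplaceIf` plan modifier could run. The `isMachineDeployment` flag stands in for however the provider would actually detect the model type (that detection, the function name, and the example base values are all assumptions for illustration, not the provider's real code):

```go
package main

import "fmt"

// baseChangeRequiresReplace mirrors the decision a RequiresReplaceIf
// plan modifier would make: a base change on a machine charm needs a
// redeploy, while a Kubernetes charm can upgrade in place via a new
// OCI image.
func baseChangeRequiresReplace(oldBase, newBase string, isMachineDeployment bool) bool {
	if oldBase == newBase {
		return false // no base change, nothing to replace
	}
	// Only machine deployments need the application replaced.
	return isMachineDeployment
}

func main() {
	fmt.Println(baseChangeRequiresReplace("ubuntu@22.04", "ubuntu@24.04", true))  // machine charm: replace
	fmt.Println(baseChangeRequiresReplace("ubuntu@22.04", "ubuntu@24.04", false)) // k8s charm: update in-place
	fmt.Println(baseChangeRequiresReplace("ubuntu@24.04", "ubuntu@24.04", true))  // unchanged base: no-op
}
```

In the real provider this logic would live in the closure passed to `stringplanmodifier.RequiresReplaceIf` on the `base` attribute's plan modifiers.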
Even with this change, a clean run can still hit problems, depending on how the plan is written and because of a bug in the provider.
The bug is #521. If you hit it, run `terraform apply` a second time and it should succeed.
Issues depending on how the Terraform plan is written:
If a machine resource is used for the application and the machine isn't also replaced, the new deploy will fail.
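One way to avoid that failure mode in a plan is to drive the machine and the application from a single value, so a base change touches both resources together. A hedged sketch only — the attribute names (`base`, `machine_id`, `placement`) and whether a base change replaces the machine should be checked against the provider version in use:

```hcl
variable "model_name" {
  type = string
}

# Shared base so the machine and the application change in lock-step.
variable "base" {
  type    = string
  default = "ubuntu@24.04"
}

resource "juju_machine" "k8s_host" {
  model = var.model_name
  base  = var.base
}

resource "juju_application" "k8s_control" {
  model     = var.model_name
  placement = juju_machine.k8s_host.machine_id

  charm {
    name = "k8s"
    base = var.base
  }
}
```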
Urgency
Casually reporting
Terraform Juju Provider version
0.15.0
Terraform version
v1.9.8-dev
Juju version
3.5.4
Terraform Configuration(s)
Reproduce / Test
Debug/Panic Output
No response
Notes & References
No response