keepTLSSecret value not being honoured #2312
@lcaproni-pp Can you try to use … |
@lcaproni-pp we use the lookup function. Helm is not supposed to contact the Kubernetes API server during a helm template or a helm install|upgrade|delete|rollback --dry-run, so the lookup function will return an empty list (i.e. dict) in such a case. |
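For illustration, a minimal sketch of the lookup pattern being described (the secret name here is a placeholder, not the chart's actual template):

{{- /* During a real install/upgrade this returns the Secret object;
       during `helm template` or `--dry-run` it returns an empty dict. */ -}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "example-webhook-tls" -}}
{{- if $existing }}
# Cluster was reachable: reuse the stored certificate data
tls.crt: {{ index $existing.data "tls.crt" }}
{{- else }}
# No API access (template/dry-run): the chart falls back to generating new certs
{{- end }}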
We use https://github.com/hashicorp/terraform-provider-helm to install charts into clusters and face the same issue.
Each time we run … Environment: … |
@oliviassss Thanks for this, I tried doing …
I had to manually compare the certs with the ones already stored on the cluster, and they didn't match up, at least from what I saw, so my deduction was that they would have been changing from what was already stored on the cluster. I will also say that our pipelines are doing a … Values file contents, just for reference: … |
Can confirm this does not work when running … |
Not working with the Pulumi Helm module either. |
With the recent chart version v1.3.x, we've also added support for directly specifying the TLS certificates/keys during chart install. Here is the snippet from the values.yaml:
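The snippet itself didn't survive in this thread, but based on the webhookTLS.* keys used later in the thread, it would look roughly like this (PEM contents elided):

webhookTLS:
  # PEM-encoded CA certificate, server certificate and private key,
  # generated out-of-band and supplied directly at install time
  caCert: ""
  cert: ""
  key: ""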
This method doesn't depend on querying the cluster for an existing secret, and provides consistent results with the … |
Is this the required way to do things, or is it actually a bug to still see a diff when … |
The keepTLSSecret option doesn't work in the 2.3 release with the newest Helm chart either; we use Argo CD for deployments. |
Siding with @ypicard on this one: if this is the recommended way to go, then the keepTLSSecret feature should be removed and the docs updated so users know to do this. I know it's a feature, but personally I didn't want to go through the hassle of doing this X times, and then again every time I deploy this Helm chart into a new cluster. If that's how it has to be then fair enough, but since it wasn't previously, I hoped it would stay the same. |
@jdomag I haven't tried it yet (I use helmfile, not ArgoCD, for deploying this project, nor have I tried the 2.3 chart myself yet) but there supposedly is an ArgoCD workaround. See this comment for how to let ArgoCD ignore changes. Please let us know if this still works or whether the config needs updating for the latest Helm chart. Feedback on 2.3 - using … |
@tyrken … |
Hey @oliviassss @kishorj, is there any update on this at all? Is the official recommended way to generate our own secrets and store them inside the cluster, or is there a bug that needs addressing? Happy to help out in some way if possible :) |
@kishorj or anyone: urgent help needed! (app.kubernetes.io/version=v2.3.1) If I have to assign the initially created TLS values to the webhookTLS config params, is there any way to do that? |
We are using cert-manager and are not facing this issue anymore. |
Sorry, I am new to the Kubernetes world. @tamirhad, if we enable cert-manager, are the certs auto-generated and maintained, or is it the app team's responsibility to load them somewhere (Secrets or AWS ACM) and reference them? Also, is there any option to not use TLS? Our plan is to terminate TLS at the load balancer level. |
cert-manager will create the certificates for you. |
@tamirhad thank you, that did the trick! I had to add the config enableCertManager: true on the aws-load-balancer-controller chart. |
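For anyone else landing here, the setting mentioned above as a values-file entry (this assumes cert-manager is already installed in the cluster; the chart does not install it for you):

# values.yaml for the aws-load-balancer-controller chart:
# delegate webhook certificate management to cert-manager
enableCertManager: true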
This is a workaround, correct? We should still expect this to work at some point? |
Using cert-manager is not a workaround. |
I'm siding with the workaround theory. I don't want to have to enable cert-manager, as it's one more thing to think about, and this used to just work. It was a recent upgrade at some point that broke it; I know this because we have an older cluster with an old version of this controller installed and it doesn't hit these issues. |
You are right, it should be fixed in any case. We are using cert-manager anyway, so it's not quite a workaround for us. |
Good afternoon. We are also running aws-load-balancer-controller (chart version 1.3.3 / app version v2.3.1) through Argo CD (version 2.1.7) and ran into this. From what we have understood, we should deploy cert-manager and enable it on the aws-load-balancer-controller chart. We have done that, but now both the ValidatingWebhookConfiguration and the MutatingWebhookConfiguration are being re-generated in a loop; it seems like a new one is generated every second. I attach logs from the cert-manager CA injector: …
We think this is because Argo is trying to auto-sync all the time. Are we missing something? If we are using Argo CD, do we still have to apply the following "workaround"? Thank you so much in advance 🙇 |
We're also hitting this, but oddly only in one cluster. We're using Terraform's Helm provider. The only thing that has "worked" is the heavy hammer of ignoring drift in Terraform 🤮:

resource "helm_release" "aws-load-balancer-controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  version    = "1.3.3"
  namespace  = "kube-system"

  values = [
    yamlencode(
      {
        serviceAccount = {
          annotations = {
            "eks.amazonaws.com/role-arn" = "${var.aws_load_balancer_controller_role_arn}"
          }
        }
      }
    )
  ]

  set {
    name  = "clusterName"
    value = var.name
  }

  set {
    name  = "replicaCount"
    value = 1
  }

  set {
    name  = "keepTLSSecret"
    value = true
  }

  lifecycle {
    ignore_changes = [
      manifest
    ]
  }
}

Is there any way to get things to converge, or to ignore only the TLS changes? As an aside, this issue only impacts Terraform if manifest rendering is enabled for drift detection in the provider:

provider "helm" {
  kubernetes {
    ...
  }

  experiments {
    // Render the Helm manifest so we get full diffs
    manifest = true
  }
}

When manifest rendering is off, things immediately converge, as the underlying diffs are not presented to Terraform. We ended up disabling manifest rendering and were able to work around this issue. |
My current workaround is to read the values from the existing TLS secret:

data "kubernetes_secret" "alb-tls" {
  metadata {
    name      = "aws-load-balancer-tls"
    namespace = "kube-system"
  }

  binary_data = {
    "ca.crt"  = ""
    "tls.crt" = ""
    "tls.key" = ""
  }
}

resource "helm_release" "aws-alb-ingress-controller" {
  name       = "aws-load-balancer-controller"
  namespace  = "kube-system"
  chart      = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  version    = "1.4.0"

  set {
    name  = "webhookTLS.caCert"
    value = base64decode(lookup(data.kubernetes_secret.alb-tls.binary_data, "ca.crt"))
  }

  set {
    name  = "webhookTLS.cert"
    value = base64decode(lookup(data.kubernetes_secret.alb-tls.binary_data, "tls.crt"))
  }

  set {
    name  = "webhookTLS.key"
    value = base64decode(lookup(data.kubernetes_secret.alb-tls.binary_data, "tls.key"))
  }

  ...

Of course, this only works once the chart has been installed, since the secret needs to exist already. A slightly better workaround would be to also generate the TLS secrets with Terraform. |
@gitmaniak @tamirhad when you set … |
@ajmal-bash when using … (see aws-load-balancer-controller/helm/aws-load-balancer-controller/templates/webhook.yaml, line 15 at c1b9579) |
The workaround of using cert-manager isn't working 100% with ArgoCD, as it shows the two webhooks permanently out of sync. The problem, I think, is that the Helm chart is templating in the "empty" … I think the fix here is to not template in any CA bundle at all if using cert-manager, i.e.:

{{ if not $.Values.enableCertManager }}
caBundle: {{ $tls.caCert }}
{{ end }}

That way ArgoCD shouldn't care when the live version of the manifest has that field populated; this is based on another in-house webhook which also uses cert-manager but doesn't show this permanent diff. |
I'll delete my fork of the repo and keep an eye on your PR then 😄 |
@bodgit In the past we just ignored differences on webhooks in ArgoCD for that reason. I recently hit this issue after applying ArgoCD to our aws-load-balancer-controller apps. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
We have been hit with the same error in multiple clusters that were provisioned recently. Is there a fix planned for this? |
Same here - helmfile shows differences every time... :( |
Encountering the same issue. It would be awesome if someone who got it working by manually generating TLS certificates and specifying them in the Helm chart could share how they did that (and some code). Fingers crossed! 🤞 |
We have the same issue. I think it's this part of the code in _helpers.tpl: … |
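The chart code itself didn't survive in this thread, but the logic described here (look up the existing secret, reuse it when keepTLSSecret is set, otherwise generate a fresh CA and a signed cert via the sprig helpers) would look roughly like the sketch below. All names are illustrative, not the chart's real helper:

{{- define "example.webhookTLS" -}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "aws-load-balancer-tls" -}}
{{- if and .Values.keepTLSSecret $existing }}
{{- /* Reuse the stored certs. Under `helm template`/`--dry-run`, lookup
       returns an empty dict, so this branch never runs and a diff appears. */}}
caCert: {{ index $existing.data "ca.crt" }}
cert: {{ index $existing.data "tls.crt" }}
key: {{ index $existing.data "tls.key" }}
{{- else }}
{{- /* Generate a fresh CA and a cert signed by it (sprig functions). */}}
{{- $ca := genCA "example-ca" 3650 }}
{{- $cert := genSignedCert "example.kube-system.svc" nil (list "example.kube-system.svc") 3650 $ca }}
caCert: {{ $ca.Cert | b64enc }}
cert: {{ $cert.Cert | b64enc }}
key: {{ $cert.Key | b64enc }}
{{- end }}
{{- end -}}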
Any update/progress on this one? |
@atamgp that's awesome. Would you be able to submit a PR? That'll maybe make it easier for the team to prioritize. |
As a workaround for this problem, I added the … You can ignore helmfile diff differences by resource, as shown in the example below, in helmfile.yaml:

helmDefaults:
  diffArgs:
    - --suppress-secrets
    - --suppress
    - MutatingWebhookConfiguration
    - --suppress
    - ValidatingWebhookConfiguration

Preferably, I'd like to suppress … |
Thanks @yktakaha4!!! I can confirm this workaround works for helmfile users. |
This thread is only workarounds. Do we know if anyone is actually working on a solution for … |
For anyone who stumbles on this and is using Argo CD (and only cares about it there), here was my workaround. It goes under spec: (for your …): … |
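The actual snippet was lost above, but Argo CD's standard mechanism for this is an ignoreDifferences block under an Application's spec:. A sketch targeting the fields this thread says keep drifting (the jsonPointers assume a single webhook per configuration; adjust the indices if there are more):

ignoreDifferences:
  - group: admissionregistration.k8s.io
    kind: MutatingWebhookConfiguration
    jsonPointers:
      - /webhooks/0/clientConfig/caBundle
  - group: admissionregistration.k8s.io
    kind: ValidatingWebhookConfiguration
    jsonPointers:
      - /webhooks/0/clientConfig/caBundle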
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Posting two better workarounds here for anyone using helmfile.
The Helm chart fetches the current TLS values by running … |
This is the exact cause of the issue. ArgoCD has the same problem, since the lookup function is not supported when it renders the chart. Like other Helm charts, it seems best to reference a pre-declared Secret using a parameter like … |
Describe the bug
There was an update here (#2264) to allow users of the Helm chart to keep the TLS secrets as they are and to stop non-empty diffs appearing on every plan/apply. However, upon setting the value, the chart still generates diffs for the TLS certs.
Steps to reproduce
1 - Install the Helm chart and set the keepTLSSecret value to true
2 - Run a helm diff after installation
3 - Observe the output and notice that diffs are still being generated
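For clarity, the value in step 1 as a values-file entry (equivalent to passing --set keepTLSSecret=true at install time):

# values.yaml
keepTLSSecret: true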
Expected outcome
The webhook secret that already contains the TLS certs should be reused when the keepTLSSecret value is set in the Helm chart.
It could be that I'm missing some configuration or something, but nothing seems obvious to me.
Environment