Provider version 2.30.0 failed provider configuration on terraform plan #2507

Open
jtgorny opened this issue May 28, 2024 · 2 comments

jtgorny commented May 28, 2024

Attempted to upgrade from 2.29.0 to 2.30.0 and suddenly the provider is throwing errors on a terraform plan. Looking at the release notes here, I'm not seeing anything relevant to our current configuration that would produce this error. Simply downgrading back to 2.29.0 is sufficient as a workaround. Also worth noting, this only takes place during initial cluster creation. After a successful run on 2.29.0, I can upgrade to 2.30.0 and everything works fine.

│ Error: Provider configuration: cannot load Kubernetes client config
│ 
│   with provider["registry.terraform.io/hashicorp/kubernetes"],
│   on main.tf line 17, in provider "kubernetes":
│   17: provider "kubernetes" {
│ 
│ invalid configuration: default cluster has no server defined

The issue is only present on new builds; if I run a terraform plan against an environment that already has a cluster, I get no error.
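
For anyone hitting the same thing, a minimal sketch of the downgrade workaround, assuming the provider is pinned via a required_providers block (the surrounding terraform block here is illustrative, not copied from our configuration):

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      # Pinning back to 2.29.0 avoids the plan-time error above
      version = "2.29.0"
    }
  }
}

After changing the pin, running terraform init -upgrade re-selects a provider version that matches the new constraint.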

Terraform Version, Provider Version and Kubernetes Version

Terraform version: `1.8.4` (also tried `1.4.7`)
Kubernetes provider version: `2.30.0`
Kubernetes version: `1.29`

Affected Resource(s)

terraform plan fails with an "invalid configuration" error raised by the kubernetes provider block.

Terraform Configuration Files

  • Provider definition
provider "kubernetes" {
  host                              = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1"
    command    = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.default.name, "--region", var.region]
    env = {
      AWS_PROFILE = var.profile
    }
  }
}
  • Kube Config Template file (see the rendering sketch after these configuration blocks)
apiVersion: v1
clusters:
- cluster:
    server: ${EKS_SERVICE_ENDPOINT}
    certificate-authority-data: ${EKS_CA_DATA}
  name: ${APPLICATION}-${ENVIRONMENT}

contexts:
- context:
    cluster: ${APPLICATION}-${ENVIRONMENT}
    user: ${APPLICATION}-${ENVIRONMENT}-admin
  name: ${APPLICATION}-${ENVIRONMENT}-system

current-context: ${APPLICATION}-${ENVIRONMENT}-system
kind: Config
preferences: {}

users:
- name: ${APPLICATION}-${ENVIRONMENT}-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      interactiveMode: IfAvailable
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - "${K8_CLUSTER_NAME}"
      env:
        - name: AWS_PROFILE
          value: ${AWS_PROFILE}
  • Cluster Data definition
data "aws_eks_cluster" "default" {
  depends_on = [ module.EKS.EKS-Server-Endpoint ]
  name             = var.EKSClusterName
}
  • EKS Module definition
module "EKS" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.30.0"
  ...blah...
  ...blah...
  ...blah...
}
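
For context, the ${...} placeholders in the kube config template above follow Terraform's templatefile() syntax; a minimal sketch of how such a template could be rendered to disk (the local_file resource, file names, and var.application / var.environment are assumptions for illustration, not part of this issue):

# Requires the hashicorp/local provider for local_file
resource "local_file" "kubeconfig" {
  filename = "${path.module}/kubeconfig.yaml"
  content  = templatefile("${path.module}/kubeconfig.tpl", {
    EKS_SERVICE_ENDPOINT = data.aws_eks_cluster.default.endpoint
    EKS_CA_DATA          = data.aws_eks_cluster.default.certificate_authority[0].data
    APPLICATION          = var.application
    ENVIRONMENT          = var.environment
    K8_CLUSTER_NAME      = data.aws_eks_cluster.default.name
    AWS_PROFILE          = var.profile
  })
}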

Expected Behavior

What should have happened?
A successful plan to create an EKS cluster.

Actual Behavior

What actually happened?

│ Error: Provider configuration: cannot load Kubernetes client config
│ 
│   with provider["registry.terraform.io/hashicorp/kubernetes"],
│   on main.tf line 17, in provider "kubernetes":
│   17: provider "kubernetes" {
│ 
│ invalid configuration: default cluster has no server defined

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
jtgorny added the bug label May 28, 2024
sheneska (Contributor) commented

Hi @jtgorny, thanks for opening this issue. To confirm, could you apply this config on 2.29.0 and let us know if it works?

jtgorny (Author) commented Jun 12, 2024

@sheneska - I can confirm that this configuration on 2.29.0 works as expected. Thanks and apologies for the delay.
