Better behavior during destroy (cloudposse#169)
Nuru authored Oct 17, 2022
1 parent 77f8aa0 commit 1c44f2c
Showing 14 changed files with 95 additions and 88 deletions.
1 change: 1 addition & 0 deletions .github/workflows/validate-codeowners.yml
@@ -10,6 +10,7 @@ jobs:
steps:
- name: "Checkout source code at current commit"
uses: actions/checkout@v2
+# Leave pinned at 0.7.1 until https://github.com/mszostok/codeowners-validator/issues/173 is resolved
- uses: mszostok/codeowners-validator@v0.7.1
if: github.event.pull_request.head.repo.full_name == github.repository
name: "Full check of CODEOWNERS"
19 changes: 9 additions & 10 deletions README.md
@@ -197,8 +197,7 @@ For automated tests of the complete example using [bats](https://github.com/bats

Other examples:

-- [terraform-root-modules/eks](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures
-- [terraform-root-modules/eks-backing-services-peering](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks-backing-services-peering) - example of VPC peering between the EKS VPC and backing services VPC
+- [terraform-aws-components/eks/cluster](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/eks/cluster) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures

```hcl
provider "aws" {
@@ -316,7 +315,7 @@ Module usage with two unmanaged worker groups:
cluster_name = module.label.id
cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
-cluster_security_group_id = module.eks_cluster.security_group_id
+cluster_security_group_id = module.eks_cluster.eks_cluster_managed_security_group_id
# Auto-scaling policies and CloudWatch metric alarms
autoscaling_policies_enabled = var.autoscaling_policies_enabled
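The rename above (repeated in the second worker-group example and in README.yaml below) means downstream modules should consume the EKS-managed security group output rather than the module-created one. A minimal sketch of the wiring, assuming a hypothetical `module.eks_workers` with a `security_group_id` output:

```hcl
# Hypothetical downstream rule: let the EKS control plane reach the workers
# via the security group that EKS itself manages.
resource "aws_security_group_rule" "workers_ingress_from_cluster" {
  description              = "Allow the EKS control plane to reach worker nodes"
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "-1"
  security_group_id        = module.eks_workers.security_group_id # assumed worker SG output
  source_security_group_id = module.eks_cluster.eks_cluster_managed_security_group_id
}
```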
@@ -343,7 +342,7 @@ Module usage with two unmanaged worker groups:
cluster_name = module.label.id
cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
-cluster_security_group_id = module.eks_cluster.security_group_id
+cluster_security_group_id = module.eks_cluster.eks_cluster_managed_security_group_id
# Auto-scaling policies and CloudWatch metric alarms
autoscaling_policies_enabled = var.autoscaling_policies_enabled
@@ -393,7 +392,7 @@ Available targets:

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 0.14.0 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 3.38 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.7.1 |
| <a name="requirement_null"></a> [null](#requirement\_null) | >= 2.0 |
@@ -515,7 +514,7 @@ Available targets:
| <a name="input_tags"></a> [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).<br>Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
| <a name="input_tenant"></a> [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
| <a name="input_vpc_id"></a> [vpc\_id](#input\_vpc\_id) | VPC ID for the EKS cluster | `string` | n/a | yes |
| <a name="input_wait_for_cluster_command"></a> [wait\_for\_cluster\_command](#input\_wait\_for\_cluster\_command) | `local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint URL is available as environment variable `ENDPOINT` | `string` | `"curl --silent --fail --retry 30 --retry-delay 10 --retry-connrefused --max-time 11 --insecure --output /dev/null $ENDPOINT/healthz"` | no |
| <a name="input_wait_for_cluster_command"></a> [wait\_for\_cluster\_command](#input\_wait\_for\_cluster\_command) | `local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint URL is available as environment variable `ENDPOINT` | `string` | `"if test -n \"$ENDPOINT\"; then curl --silent --fail --retry 30 --retry-delay 10 --retry-connrefused --max-time 11 --insecure --output /dev/null $ENDPOINT/healthz; fi"` | no |
| <a name="input_workers_role_arns"></a> [workers\_role\_arns](#input\_workers\_role\_arns) | List of Role ARNs of the worker nodes | `list(string)` | `[]` | no |
| <a name="input_workers_security_group_ids"></a> [workers\_security\_group\_ids](#input\_workers\_security\_group\_ids) | DEPRECATED: Use `allowed_security_group_ids` instead.<br>Historical description: Security Group IDs of the worker nodes.<br>Historical default: `[]` | `list(string)` | `[]` | no |

@@ -535,13 +534,13 @@ Available targets:
| <a name="output_eks_cluster_id"></a> [eks\_cluster\_id](#output\_eks\_cluster\_id) | The name of the cluster |
| <a name="output_eks_cluster_identity_oidc_issuer"></a> [eks\_cluster\_identity\_oidc\_issuer](#output\_eks\_cluster\_identity\_oidc\_issuer) | The OIDC Identity issuer for the cluster |
| <a name="output_eks_cluster_identity_oidc_issuer_arn"></a> [eks\_cluster\_identity\_oidc\_issuer\_arn](#output\_eks\_cluster\_identity\_oidc\_issuer\_arn) | The OIDC Identity issuer ARN for the cluster that can be used to associate IAM roles with a service account |
| <a name="output_eks_cluster_managed_security_group_id"></a> [eks\_cluster\_managed\_security\_group\_id](#output\_eks\_cluster\_managed\_security\_group\_id) | Security Group ID that was created by EKS for the cluster. EKS creates a Security Group and applies it to ENI that is attached to EKS Control Plane master nodes and to any managed workloads |
| <a name="output_eks_cluster_managed_security_group_id"></a> [eks\_cluster\_managed\_security\_group\_id](#output\_eks\_cluster\_managed\_security\_group\_id) | Security Group ID that was created by EKS for the cluster.<br>EKS creates a Security Group and applies it to the ENI that are attached to EKS Control Plane master nodes and to any managed workloads. |
| <a name="output_eks_cluster_role_arn"></a> [eks\_cluster\_role\_arn](#output\_eks\_cluster\_role\_arn) | ARN of the EKS cluster IAM role |
| <a name="output_eks_cluster_version"></a> [eks\_cluster\_version](#output\_eks\_cluster\_version) | The Kubernetes server version of the cluster |
| <a name="output_kubernetes_config_map_id"></a> [kubernetes\_config\_map\_id](#output\_kubernetes\_config\_map\_id) | ID of `aws-auth` Kubernetes ConfigMap |
| <a name="output_security_group_arn"></a> [security\_group\_arn](#output\_security\_group\_arn) | ARN of the created Security Group for the EKS cluster |
| <a name="output_security_group_id"></a> [security\_group\_id](#output\_security\_group\_id) | ID of the created Security Group for the EKS cluster |
| <a name="output_security_group_name"></a> [security\_group\_name](#output\_security\_group\_name) | Name of the created Security Group for the EKS cluster |
| <a name="output_security_group_arn"></a> [security\_group\_arn](#output\_security\_group\_arn) | (Deprecated) ARN of the optionally created additional Security Group for the EKS cluster |
| <a name="output_security_group_id"></a> [security\_group\_id](#output\_security\_group\_id) | (Deprecated) ID of the optionally created additional Security Group for the EKS cluster |
| <a name="output_security_group_name"></a> [security\_group\_name](#output\_security\_group\_name) | Name of the optionally created additional Security Group for the EKS cluster |
<!-- markdownlint-restore -->


7 changes: 3 additions & 4 deletions README.yaml
@@ -157,8 +157,7 @@ usage: |2-
Other examples:
-- [terraform-root-modules/eks](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures
-- [terraform-root-modules/eks-backing-services-peering](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks-backing-services-peering) - example of VPC peering between the EKS VPC and backing services VPC
+- [terraform-aws-components/eks/cluster](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/eks/cluster) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures
```hcl
provider "aws" {
@@ -276,7 +275,7 @@ usage: |2-
cluster_name = module.label.id
cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
-cluster_security_group_id = module.eks_cluster.security_group_id
+cluster_security_group_id = module.eks_cluster.eks_cluster_managed_security_group_id
# Auto-scaling policies and CloudWatch metric alarms
autoscaling_policies_enabled = var.autoscaling_policies_enabled
@@ -303,7 +302,7 @@ usage: |2-
cluster_name = module.label.id
cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
-cluster_security_group_id = module.eks_cluster.security_group_id
+cluster_security_group_id = module.eks_cluster.eks_cluster_managed_security_group_id
# Auto-scaling policies and CloudWatch metric alarms
autoscaling_policies_enabled = var.autoscaling_policies_enabled
26 changes: 13 additions & 13 deletions auth.tf
@@ -38,10 +38,10 @@ locals {
exec_profile = local.kube_exec_auth_enabled && var.kube_exec_auth_aws_profile_enabled ? ["--profile", var.kube_exec_auth_aws_profile] : []
exec_role = local.kube_exec_auth_enabled && var.kube_exec_auth_role_arn_enabled ? ["--role-arn", var.kube_exec_auth_role_arn] : []

-cluster_endpoint_data = join("", aws_eks_cluster.default.*.endpoint)
+cluster_endpoint_data = join("", aws_eks_cluster.default[*].endpoint) # use `join` instead of `one` to keep the value a string
cluster_auth_map_endpoint = var.apply_config_map_aws_auth ? local.cluster_endpoint_data : var.dummy_kubeapi_server

-certificate_authority_data_list = coalescelist(aws_eks_cluster.default.*.certificate_authority, [[{ data : "" }]])
+certificate_authority_data_list = coalescelist(aws_eks_cluster.default[*].certificate_authority, [[{ data : "" }]])
certificate_authority_data_list_internal = local.certificate_authority_data_list[0]
certificate_authority_data_map = local.certificate_authority_data_list_internal[0]
certificate_authority_data = local.certificate_authority_data_map["data"]
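The inline comment marks a subtle choice: with `count = 0` (for example, mid-destroy or when the module is disabled), `one()` yields `null`, while `join("", ...)` collapses the empty tuple to `""`, which string-oriented logic downstream can test safely. A standalone illustration with stand-in values:

```hcl
locals {
  # Stand-in for aws_eks_cluster.default[*].endpoint when count = 0
  endpoints = []

  as_string  = join("", local.endpoints) # ""   — always a string; length() and comparisons work
  maybe_null = one(local.endpoints)      # null — can break expressions that expect a string
}
```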
@@ -50,9 +50,9 @@ locals {
# Note that we don't need to do this for managed Node Groups since EKS adds their roles to the ConfigMap automatically
map_worker_roles = [
for role_arn in var.workers_role_arns : {
-rolearn : role_arn
-username : "system:node:{{EC2PrivateDNSName}}"
-groups : [
+rolearn = role_arn
+username = "system:node:{{EC2PrivateDNSName}}"
+groups = [
"system:bootstrappers",
"system:nodes"
]
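Both `:` and `=` are valid in HCL object constructors; the change standardizes on the `=` form. For context, one element of `map_worker_roles` would serialize for the `aws-auth` ConfigMap roughly like this (the `yamlencode` call and the ARN are illustrative assumptions, not module code):

```hcl
# Hypothetical rendering of a single worker-role mapping.
output "map_worker_roles_yaml_example" {
  value = yamlencode([{
    rolearn  = "arn:aws:iam::111111111111:role/example-eks-workers" # assumed ARN
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  }])
}
```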
@@ -62,13 +62,13 @@

resource "null_resource" "wait_for_cluster" {
count = local.enabled && var.apply_config_map_aws_auth ? 1 : 0
-depends_on = [aws_eks_cluster.default[0]]
+depends_on = [aws_eks_cluster.default]

provisioner "local-exec" {
command = var.wait_for_cluster_command
interpreter = var.local_exec_interpreter
environment = {
-ENDPOINT = aws_eks_cluster.default[0].endpoint
+ENDPOINT = local.cluster_endpoint_data
}
}
}
@@ -84,7 +84,7 @@ resource "null_resource" "wait_for_cluster" {
#
data "aws_eks_cluster_auth" "eks" {
count = local.kube_data_auth_enabled ? 1 : 0
-name = join("", aws_eks_cluster.default.*.id)
+name = one(aws_eks_cluster.default[*].id)
}


@@ -99,25 +99,25 @@ provider "kubernetes" {
# If this solution bothers you, you can disable it by setting var.dummy_kubeapi_server = null
host = local.cluster_auth_map_endpoint
cluster_ca_certificate = local.enabled && !local.kubeconfig_path_enabled ? base64decode(local.certificate_authority_data) : null
-token = local.kube_data_auth_enabled ? data.aws_eks_cluster_auth.eks[0].token : null
+token = local.kube_data_auth_enabled ? one(data.aws_eks_cluster_auth.eks[*].token) : null
# The Kubernetes provider will use information from KUBECONFIG if it exists, but if the default cluster
# in KUBECONFIG is some other cluster, this will cause problems, so we override it always.
config_path = local.kubeconfig_path_enabled ? var.kubeconfig_path : ""
config_context = var.kubeconfig_context

dynamic "exec" {
-for_each = local.kube_exec_auth_enabled ? ["exec"] : []
+for_each = local.kube_exec_auth_enabled && length(local.cluster_endpoint_data) > 0 ? ["exec"] : []
content {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
-args = concat(local.exec_profile, ["eks", "get-token", "--cluster-name", aws_eks_cluster.default[0].id], local.exec_role)
+args = concat(local.exec_profile, ["eks", "get-token", "--cluster-name", try(aws_eks_cluster.default[0].id, "deleted")], local.exec_role)
}
}
}
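Two destroy-safety changes appear in the `exec` block above: the block is generated only while the cluster endpoint is known, and `try()` substitutes a placeholder cluster name once `aws_eks_cluster.default` has no instances. The `try()` behavior in isolation, with stand-in values:

```hcl
locals {
  # Stand-in for aws_eks_cluster.default[*].id after the cluster is destroyed
  cluster_ids = []

  # Indexing an empty tuple is an error at plan time; try() catches it and
  # falls back to a harmless placeholder, so the provider configuration
  # still evaluates during destroy.
  cluster_name = try(local.cluster_ids[0], "deleted") # => "deleted"
}
```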

resource "kubernetes_config_map" "aws_auth_ignore_changes" {
count = local.enabled && var.apply_config_map_aws_auth && var.kubernetes_config_map_ignore_role_changes ? 1 : 0
-depends_on = [null_resource.wait_for_cluster[0]]
+depends_on = [null_resource.wait_for_cluster]

metadata {
name = "aws-auth"
@@ -137,7 +137,7 @@ resource "kubernetes_config_map" "aws_auth_ignore_changes" {

resource "kubernetes_config_map" "aws_auth" {
count = local.enabled && var.apply_config_map_aws_auth && var.kubernetes_config_map_ignore_role_changes == false ? 1 : 0
-depends_on = [null_resource.wait_for_cluster[0]]
+depends_on = [null_resource.wait_for_cluster]

metadata {
name = "aws-auth"
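The `depends_on` edits in both ConfigMap resources (and in `null_resource.wait_for_cluster` above) follow one pattern: depend on the whole counted resource rather than a specific instance, since an indexed reference points at an instance that may not exist when `count = 0`. A sketch under that assumption:

```hcl
resource "kubernetes_config_map" "aws_auth_example" {
  count = 0 # hypothetical: module disabled or mid-destroy

  # Whole-resource reference: remains valid when wait_for_cluster also has
  # count = 0; null_resource.wait_for_cluster[0] would reference a possibly
  # nonexistent instance.
  depends_on = [null_resource.wait_for_cluster]

  metadata {
    name = "aws-auth-example"
  }
}
```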
12 changes: 6 additions & 6 deletions docs/terraform.md
@@ -3,7 +3,7 @@

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 0.14.0 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 3.38 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.7.1 |
| <a name="requirement_null"></a> [null](#requirement\_null) | >= 2.0 |
@@ -125,7 +125,7 @@
| <a name="input_tags"></a> [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).<br>Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
| <a name="input_tenant"></a> [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
| <a name="input_vpc_id"></a> [vpc\_id](#input\_vpc\_id) | VPC ID for the EKS cluster | `string` | n/a | yes |
| <a name="input_wait_for_cluster_command"></a> [wait\_for\_cluster\_command](#input\_wait\_for\_cluster\_command) | `local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint URL is available as environment variable `ENDPOINT` | `string` | `"curl --silent --fail --retry 30 --retry-delay 10 --retry-connrefused --max-time 11 --insecure --output /dev/null $ENDPOINT/healthz"` | no |
| <a name="input_wait_for_cluster_command"></a> [wait\_for\_cluster\_command](#input\_wait\_for\_cluster\_command) | `local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint URL is available as environment variable `ENDPOINT` | `string` | `"if test -n \"$ENDPOINT\"; then curl --silent --fail --retry 30 --retry-delay 10 --retry-connrefused --max-time 11 --insecure --output /dev/null $ENDPOINT/healthz; fi"` | no |
| <a name="input_workers_role_arns"></a> [workers\_role\_arns](#input\_workers\_role\_arns) | List of Role ARNs of the worker nodes | `list(string)` | `[]` | no |
| <a name="input_workers_security_group_ids"></a> [workers\_security\_group\_ids](#input\_workers\_security\_group\_ids) | DEPRECATED: Use `allowed_security_group_ids` instead.<br>Historical description: Security Group IDs of the worker nodes.<br>Historical default: `[]` | `list(string)` | `[]` | no |

@@ -145,11 +145,11 @@
| <a name="output_eks_cluster_id"></a> [eks\_cluster\_id](#output\_eks\_cluster\_id) | The name of the cluster |
| <a name="output_eks_cluster_identity_oidc_issuer"></a> [eks\_cluster\_identity\_oidc\_issuer](#output\_eks\_cluster\_identity\_oidc\_issuer) | The OIDC Identity issuer for the cluster |
| <a name="output_eks_cluster_identity_oidc_issuer_arn"></a> [eks\_cluster\_identity\_oidc\_issuer\_arn](#output\_eks\_cluster\_identity\_oidc\_issuer\_arn) | The OIDC Identity issuer ARN for the cluster that can be used to associate IAM roles with a service account |
| <a name="output_eks_cluster_managed_security_group_id"></a> [eks\_cluster\_managed\_security\_group\_id](#output\_eks\_cluster\_managed\_security\_group\_id) | Security Group ID that was created by EKS for the cluster. EKS creates a Security Group and applies it to ENI that is attached to EKS Control Plane master nodes and to any managed workloads |
| <a name="output_eks_cluster_managed_security_group_id"></a> [eks\_cluster\_managed\_security\_group\_id](#output\_eks\_cluster\_managed\_security\_group\_id) | Security Group ID that was created by EKS for the cluster.<br>EKS creates a Security Group and applies it to the ENI that are attached to EKS Control Plane master nodes and to any managed workloads. |
| <a name="output_eks_cluster_role_arn"></a> [eks\_cluster\_role\_arn](#output\_eks\_cluster\_role\_arn) | ARN of the EKS cluster IAM role |
| <a name="output_eks_cluster_version"></a> [eks\_cluster\_version](#output\_eks\_cluster\_version) | The Kubernetes server version of the cluster |
| <a name="output_kubernetes_config_map_id"></a> [kubernetes\_config\_map\_id](#output\_kubernetes\_config\_map\_id) | ID of `aws-auth` Kubernetes ConfigMap |
| <a name="output_security_group_arn"></a> [security\_group\_arn](#output\_security\_group\_arn) | ARN of the created Security Group for the EKS cluster |
| <a name="output_security_group_id"></a> [security\_group\_id](#output\_security\_group\_id) | ID of the created Security Group for the EKS cluster |
| <a name="output_security_group_name"></a> [security\_group\_name](#output\_security\_group\_name) | Name of the created Security Group for the EKS cluster |
| <a name="output_security_group_arn"></a> [security\_group\_arn](#output\_security\_group\_arn) | (Deprecated) ARN of the optionally created additional Security Group for the EKS cluster |
| <a name="output_security_group_id"></a> [security\_group\_id](#output\_security\_group\_id) | (Deprecated) ID of the optionally created additional Security Group for the EKS cluster |
| <a name="output_security_group_name"></a> [security\_group\_name](#output\_security\_group\_name) | Name of the optionally created additional Security Group for the EKS cluster |
<!-- markdownlint-restore -->
2 changes: 1 addition & 1 deletion examples/complete/fixtures.us-east-2.tfvars
@@ -9,7 +9,7 @@ stage = "test"
name = "eks"

# When updating the Kubernetes version, also update the API and client-go version in test/src/go.mod
kubernetes_version = "1.21"
kubernetes_version = "1.22"

oidc_provider_enabled = true

(Diff for the remaining 8 of the 14 changed files was not loaded.)
