diff --git a/Makefile b/Makefile
index 97d80875..1cfc68f2 100644
--- a/Makefile
+++ b/Makefile
@@ -3,7 +3,7 @@ SHELL := /bin/bash
# List of targets the `readme` target should call before generating the readme
export README_DEPS ?= docs/targets.md docs/terraform.md
--include $(shell curl -sSL -o .build-harness "https://git.io/build-harness"; echo .build-harness)
+-include $(shell curl -sSL -o .build-harness "https://cloudposse.tools/build-harness"; echo .build-harness)
## Lint terraform code
lint:
diff --git a/README.md b/README.md
index 347ff182..603681aa 100644
--- a/README.md
+++ b/README.md
@@ -68,13 +68,13 @@ The module provisions the following resources:
[terraform-aws-eks-fargate-profile](https://github.com/cloudposse/terraform-aws-eks-fargate-profile)
modules to create a full-blown cluster
- IAM Role to allow the cluster to access other AWS services
-- Security Group which is used by EKS workers to connect to the cluster and kubelets and pods to receive communication from the cluster control plane
-- The module creates and automatically applies an authentication ConfigMap to allow the workers nodes to join the cluster and to add additional users/roles/accounts
+- Optionally, the module creates and automatically applies an authentication ConfigMap (`aws-auth`) to allow the
+ worker nodes to join the cluster and to add additional users/roles/accounts. (This option is enabled
+ by default, but has some caveats noted below. Set `apply_config_map_aws_auth` to `false` to avoid these issues.)
-__NOTE:__ The module works with [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html).
-
-__NOTE:__ Release `0.45.0` contains some changes that could result in the destruction of your existing EKS cluster.
-To circumvent this, follow the instructions in the [0.45.x+ migration path](./docs/migration-0.45.x+.md).
+__NOTE:__ Release `2.0.0` (previously released as version `0.45.0`) contains some changes that
+could result in the destruction of your existing EKS cluster.
+To circumvent this, follow the instructions in the [v1 to v2 migration path](./docs/migration-v1-v2.md).
__NOTE:__ Every Terraform module that provisions an EKS cluster has faced the challenge that access to the cluster
is partly controlled by a resource inside the cluster, a ConfigMap called `aws-auth`. You need to be able to access
@@ -84,25 +84,42 @@ a problem: how do you authenticate to an API endpoint that you have not yet crea
We use the Terraform Kubernetes provider to access the cluster, and it uses the same underlying library
that `kubectl` uses, so configuration is very similar. However, every kind of configuration we have tried
has failed at some point.
-- After creating the EKS cluster, we can generate a `kubeconfig` file that configures access to it.
+- An authentication token can be retrieved using the `aws_eks_cluster_auth` data source. This works, as
+long as the token does not expire while Terraform is running, and the token is refreshed during the "plan"
+phase before trying to refresh the state. Unfortunately, failures of both types have been seen. Nevertheless,
+this is the only method that is compatible with Terraform Cloud, so it is the default.
+- After creating the EKS cluster, you can generate a `KUBECONFIG` file that configures access to it.
This works most of the time, but if the file was present and used as part of the configuration to create
the cluster, and then the file is deleted (as would happen in a CI system like Terraform Cloud), Terraform
would not cause the file to be regenerated in time to use it to refresh Terraform's state and the "plan" phase will fail.
-- An authentication token can be retrieved using the `aws_eks_cluster_auth` data source. Again, this works, as
-long as the token does not expire while Terraform is running, and the token is refreshed during the "plan"
-phase before trying to refresh the state. Unfortunately, failures of both types have been seen.
- An authentication token can be retrieved on demand by using the `exec` feature of the Kubernetes provider
to call `aws eks get-token`. This requires that the `aws` CLI be installed and available to Terraform and that it
-has access to sufficient credentials to perform the authentication and is configured to use them.
+has access to sufficient credentials to perform the authentication and is configured to use them. When those
+conditions are met, this is the most reliable method, and the one Cloud Posse prefers to use. However, since
+it has these requirements that are not always easily met, it is not the default method and it is not
+fully supported. (An illustrative provider configuration using this method follows this list.)
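+
+A minimal sketch of what this `exec`-style configuration looks like when you set up a Kubernetes provider by hand
+(illustrative only: this module configures its own Kubernetes provider internally, driven by its `kube_exec_auth_*`
+variables, and the cluster name below is a placeholder):
+
+```hcl
+# Illustrative only; not part of this module
+data "aws_eks_cluster" "this" {
+  name = "example-eks-cluster"
+}
+
+provider "kubernetes" {
+  host                   = data.aws_eks_cluster.this.endpoint
+  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
+
+  exec {
+    api_version = "client.authentication.k8s.io/v1beta1"
+    command     = "aws"
+    # Requires the `aws` CLI to be installed, on the PATH, and configured with sufficient credentials
+    args = ["eks", "get-token", "--cluster-name", "example-eks-cluster"]
+  }
+}
+```
+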
All of the above methods can face additional challenges when using `terraform import` to import
-resources into the Terraform state. The KUBECONFG file is the most reliable, and probably what you
-would want to use when importing objects if your usual method does not work. You will need to create
-the file, of course, but that is easily done with `aws eks update-kubeconfig`.
+resources into the Terraform state. The `KUBECONFIG` file method is the only sure way to `import` resources, due to
+[Terraform limitations](https://github.com/hashicorp/terraform/issues/27934) on providers. You will need to create
+the file, of course, but that is easily done with `aws eks update-kubeconfig`. Depending on the situation,
+you may also be able to import resources by setting `-var apply_config_map_aws_auth=false` during import.
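+
+As a rough sketch, assuming you have generated a kubeconfig file locally (for example with `aws eks update-kubeconfig`;
+the path below is a placeholder), you can point the module at that file for the duration of the import using the
+module's `kubeconfig_path` and `kubeconfig_path_enabled` variables:
+
+```hcl
+module "eks_cluster" {
+  source = "cloudposse/eks-cluster/aws"
+
+  # Use a locally generated kubeconfig file while running `terraform import`
+  kubeconfig_path_enabled = true
+  kubeconfig_path         = "/path/to/kubeconfig"
+
+  # ... the rest of your existing configuration ...
+}
+```
+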
At the moment, the `exec` option appears to be the most reliable method, so we recommend using it if possible,
but because of the extra requirements it has, we use the data source as the default authentication method.
+__Additional Note:__ All of the above methods require network connectivity between the host running the
+`terraform` command and the EKS endpoint. If your EKS cluster does not have public access enabled, this means
+you need to take extra steps, such as using a VPN to provide access to the private endpoint, or running
+`terraform` on a host in the same VPC as the EKS cluster.
+
+__Failure during `destroy`:__ If the cluster is destroyed (via Terraform or otherwise) before the Terraform resource
+responsible for the `aws-auth` ConfigMap is destroyed, Terraform will get stuck trying to delete the ConfigMap,
+because it cannot contact the now-destroyed cluster. This can show up as a `connection refused` error (usually
+to `https://localhost/`). The easiest way to handle this is either to add `-var apply_config_map_aws_auth=false`
+to the `destroy` command or to remove the ConfigMap (`...kubernetes_config_map.aws_auth[0]`) from the Terraform
+state with `terraform state rm`.
+
__NOTE:__ We give you the `kubernetes_config_map_ignore_role_changes` option and default it to `true` for the following reasons:
- We provision the EKS cluster
- Then we wait for the cluster to become available (see `null_resource.wait_for_cluster` in [auth.tf](auth.tf)
@@ -115,7 +132,8 @@ to provision a managed Node Group
However, it is possible to get the worker node roles from the terraform-aws-eks-node-group via Terraform "remote state"
and include them with any other roles you want to add (example code to be published later), so we make
-ignoring the role changes optional. If you do not ignore changes then you will have no problem with making future intentional changes.
+ignoring the role changes optional. (This is what we do for Cloud Posse clients.)
+If you do not ignore changes then you will have no problem with making future intentional changes.
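+
+The "remote state" approach mentioned above might look roughly like the following sketch. This is not the example
+code referred to above; the backend settings and the `eks_node_group_role_arn` output name are hypothetical and
+will differ in your configuration.
+
+```hcl
+# Hypothetical sketch: read the worker node role ARN from the node group's own Terraform state
+data "terraform_remote_state" "eks_node_group" {
+  backend = "s3"
+
+  config = {
+    bucket = "example-terraform-state-bucket"
+    key    = "eks-node-group/terraform.tfstate"
+    region = "us-east-1"
+  }
+}
+
+module "eks_cluster" {
+  source = "cloudposse/eks-cluster/aws"
+
+  # Manage the role mappings explicitly instead of ignoring changes to them
+  kubernetes_config_map_ignore_role_changes = false
+
+  workers_role_arns = [
+    data.terraform_remote_state.eks_node_group.outputs.eks_node_group_role_arn,
+  ]
+
+  # ... the rest of your configuration ...
+}
+```
+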
The downside of having `kubernetes_config_map_ignore_role_changes` set to true is that if you later want to make changes,
such as adding other IAM roles to Kubernetes groups, you cannot do so via Terraform, because the role changes are ignored.
@@ -437,6 +455,7 @@ Available targets:
| [associated\_security\_group\_ids](#input\_associated\_security\_group\_ids) | A list of IDs of Security Groups to associate the cluster with.
These security groups will not be modified. | `list(string)` | `[]` | no |
| [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element. | `list(string)` | `[]` | no |
| [aws\_auth\_yaml\_strip\_quotes](#input\_aws\_auth\_yaml\_strip\_quotes) | If true, remove double quotes from the generated aws-auth ConfigMap YAML to reduce spurious diffs in plans | `bool` | `true` | no |
+| [cloudwatch\_log\_group\_kms\_key\_id](#input\_cloudwatch\_log\_group\_kms\_key\_id) | If provided, the KMS Key ID to use to encrypt AWS CloudWatch logs | `string` | `null` | no |
| [cluster\_encryption\_config\_enabled](#input\_cluster\_encryption\_config\_enabled) | Set to `true` to enable Cluster Encryption Configuration | `bool` | `true` | no |
| [cluster\_encryption\_config\_kms\_key\_deletion\_window\_in\_days](#input\_cluster\_encryption\_config\_kms\_key\_deletion\_window\_in\_days) | Cluster Encryption Config KMS Key Resource argument - key deletion windows in days post destruction | `number` | `10` | no |
| [cluster\_encryption\_config\_kms\_key\_enable\_key\_rotation](#input\_cluster\_encryption\_config\_kms\_key\_enable\_key\_rotation) | Cluster Encryption Config KMS Key Resource argument - enable kms key rotation | `bool` | `true` | no |
@@ -489,7 +508,7 @@ Available targets:
| [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
| [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
| [vpc\_id](#input\_vpc\_id) | VPC ID for the EKS cluster | `string` | n/a | yes |
-| [wait\_for\_cluster\_command](#input\_wait\_for\_cluster\_command) | `local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint are available as environment variable `ENDPOINT` | `string` | `"curl --silent --fail --retry 60 --retry-delay 5 --retry-connrefused --insecure --output /dev/null $ENDPOINT/healthz"` | no |
+| [wait\_for\_cluster\_command](#input\_wait\_for\_cluster\_command) | `local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint URL is available as environment variable `ENDPOINT` | `string` | `"curl --silent --fail --retry 30 --retry-delay 10 --retry-connrefused --max-time 11 --insecure --output /dev/null $ENDPOINT/healthz"` | no |
| [workers\_role\_arns](#input\_workers\_role\_arns) | List of Role ARNs of the worker nodes | `list(string)` | `[]` | no |
| [workers\_security\_group\_ids](#input\_workers\_security\_group\_ids) | DEPRECATED: Use `allowed_security_group_ids` instead.
Historical description: Security Group IDs of the worker nodes.
Historical default: `[]` | `list(string)` | `[]` | no |
@@ -497,6 +516,7 @@ Available targets:
| Name | Description |
|------|-------------|
+| [cloudwatch\_log\_group\_kms\_key\_id](#output\_cloudwatch\_log\_group\_kms\_key\_id) | KMS Key ID to encrypt AWS CloudWatch logs |
| [cloudwatch\_log\_group\_name](#output\_cloudwatch\_log\_group\_name) | The name of the log group created in cloudwatch where cluster logs are forwarded to if enabled |
| [cluster\_encryption\_config\_enabled](#output\_cluster\_encryption\_config\_enabled) | If true, Cluster Encryption Configuration is enabled |
| [cluster\_encryption\_config\_provider\_key\_alias](#output\_cluster\_encryption\_config\_provider\_key\_alias) | Cluster Encryption Config KMS Key Alias ARN |
@@ -671,25 +691,27 @@ Check out [our other projects][github], [follow us on twitter][twitter], [apply
### Contributors
-| [![Erik Osterman][osterman_avatar]][osterman_homepage]<br/>[Erik Osterman][osterman_homepage] | [![Andriy Knysh][aknysh_avatar]][aknysh_homepage]<br/>[Andriy Knysh][aknysh_homepage] | [![Igor Rodionov][goruha_avatar]][goruha_homepage]<br/>[Igor Rodionov][goruha_homepage] | [![Oscar][osulli_avatar]][osulli_homepage]<br/>[Oscar][osulli_homepage] |
-|---|---|---|---|
+| [![Erik Osterman][osterman_avatar]][osterman_homepage]<br/>[Erik Osterman][osterman_homepage] | [![Andriy Knysh][aknysh_avatar]][aknysh_homepage]<br/>[Andriy Knysh][aknysh_homepage] | [![Igor Rodionov][goruha_avatar]][goruha_homepage]<br/>[Igor Rodionov][goruha_homepage] | [![Nuru][Nuru_avatar]][Nuru_homepage]<br/>[Nuru][Nuru_homepage] | [![Oscar][osulli_avatar]][osulli_homepage]<br/>[Oscar][osulli_homepage] |
+|---|---|---|---|---|
[osterman_homepage]: https://github.com/osterman
- [osterman_avatar]: https://s.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb?s=144
+ [osterman_avatar]: https://s.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb?s=150
[aknysh_homepage]: https://github.com/aknysh/
- [aknysh_avatar]: https://avatars0.githubusercontent.com/u/7356997?v=4&u=ed9ce1c9151d552d985bdf5546772e14ef7ab617&s=144
+ [aknysh_avatar]: https://avatars0.githubusercontent.com/u/7356997?v=4&u=ed9ce1c9151d552d985bdf5546772e14ef7ab617&s=150
[goruha_homepage]: https://github.com/goruha/
- [goruha_avatar]: https://s.gravatar.com/avatar/bc70834d32ed4517568a1feb0b9be7e2?s=144
+ [goruha_avatar]: https://s.gravatar.com/avatar/bc70834d32ed4517568a1feb0b9be7e2?s=150
+ [Nuru_homepage]: https://github.com/Nuru
+ [Nuru_avatar]: https://img.cloudposse.com/150x150/https://github.com/Nuru.png
[osulli_homepage]: https://github.com/osulli/
- [osulli_avatar]: https://avatars1.githubusercontent.com/u/46930728?v=4&s=144
+ [osulli_avatar]: https://avatars1.githubusercontent.com/u/46930728?v=4&s=150
[![README Footer][readme_footer_img]][readme_footer_link]
diff --git a/README.yaml b/README.yaml
index f0667386..c4f3cb65 100644
--- a/README.yaml
+++ b/README.yaml
@@ -61,13 +61,13 @@ introduction: |-
[terraform-aws-eks-fargate-profile](https://github.com/cloudposse/terraform-aws-eks-fargate-profile)
modules to create a full-blown cluster
- IAM Role to allow the cluster to access other AWS services
- - Security Group which is used by EKS workers to connect to the cluster and kubelets and pods to receive communication from the cluster control plane
- - The module creates and automatically applies an authentication ConfigMap to allow the workers nodes to join the cluster and to add additional users/roles/accounts
+ - Optionally, the module creates and automatically applies an authentication ConfigMap (`aws-auth`) to allow the
+ worker nodes to join the cluster and to add additional users/roles/accounts. (This option is enabled
+ by default, but has some caveats noted below. Set `apply_config_map_aws_auth` to `false` to avoid these issues.)
- __NOTE:__ The module works with [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html).
-
- __NOTE:__ Release `0.45.0` contains some changes that could result in the destruction of your existing EKS cluster.
- To circumvent this, follow the instructions in the [0.45.x+ migration path](./docs/migration-0.45.x+.md).
+ __NOTE:__ Release `2.0.0` (previously released as version `0.45.0`) contains some changes that
+ could result in the destruction of your existing EKS cluster.
+ To circumvent this, follow the instructions in the [v1 to v2 migration path](./docs/migration-v1-v2.md).
__NOTE:__ Every Terraform module that provisions an EKS cluster has faced the challenge that access to the cluster
is partly controlled by a resource inside the cluster, a ConfigMap called `aws-auth`. You need to be able to access
@@ -77,24 +77,41 @@ introduction: |-
We use the Terraform Kubernetes provider to access the cluster, and it uses the same underlying library
that `kubectl` uses, so configuration is very similar. However, every kind of configuration we have tried
has failed at some point.
- - After creating the EKS cluster, we can generate a `kubeconfig` file that configures access to it.
+ - An authentication token can be retrieved using the `aws_eks_cluster_auth` data source. This works, as
+ long as the token does not expire while Terraform is running, and the token is refreshed during the "plan"
+ phase before trying to refresh the state. Unfortunately, failures of both types have been seen. Nevertheless,
+ this is the only method that is compatible with Terraform Cloud, so it is the default.
+ - After creating the EKS cluster, you can generate a `KUBECONFIG` file that configures access to it.
This works most of the time, but if the file was present and used as part of the configuration to create
the cluster, and then the file is deleted (as would happen in a CI system like Terraform Cloud), Terraform
would not cause the file to be regenerated in time to use it to refresh Terraform's state and the "plan" phase will fail.
- - An authentication token can be retrieved using the `aws_eks_cluster_auth` data source. Again, this works, as
- long as the token does not expire while Terraform is running, and the token is refreshed during the "plan"
- phase before trying to refresh the state. Unfortunately, failures of both types have been seen.
- An authentication token can be retrieved on demand by using the `exec` feature of the Kubernetes provider
to call `aws eks get-token`. This requires that the `aws` CLI be installed and available to Terraform and that it
- has access to sufficient credentials to perform the authentication and is configured to use them.
+ has access to sufficient credentials to perform the authentication and is configured to use them. When those
+ conditions are met, this is the most reliable method, and the one Cloud Posse prefers to use. However, since
+ it has these requirements that are not always easily met, it is not the default method and it is not
+ fully supported.
All of the above methods can face additional challenges when using `terraform import` to import
- resources into the Terraform state. The KUBECONFG file is the most reliable, and probably what you
- would want to use when importing objects if your usual method does not work. You will need to create
- the file, of course, but that is easily done with `aws eks update-kubeconfig`.
+ resources into the Terraform state. The `KUBECONFIG` file method is the only sure way to `import` resources, due to
+ [Terraform limitations](https://github.com/hashicorp/terraform/issues/27934) on providers. You will need to create
+ the file, of course, but that is easily done with `aws eks update-kubeconfig`. Depending on the situation,
+ you may also be able to import resources by setting `-var apply_config_map_aws_auth=false` during import.
At the moment, the `exec` option appears to be the most reliable method, so we recommend using it if possible,
but because of the extra requirements it has, we use the data source as the default authentication method.
+
+ __Additional Note:__ All of the above methods require network connectivity between the host running the
+ `terraform` command and the EKS endpoint. If your EKS cluster does not have public access enabled, this means
+ you need to take extra steps, such as using a VPN to provide access to the private endpoint, or running
+ `terraform` on a host in the same VPC as the EKS cluster.
+
+ __Failure during `destroy`:__ If the cluster is destroyed (via Terraform or otherwise) before the Terraform resource
+ responsible for the `aws-auth` ConfigMap is destroyed, Terraform will get stuck trying to delete the ConfigMap,
+ because it cannot contact the now-destroyed cluster. This can show up as a `connection refused` error (usually
+ to `https://localhost/`). The easiest way to handle this is either to add `-var apply_config_map_aws_auth=false`
+ to the `destroy` command or to remove the ConfigMap (`...kubernetes_config_map.aws_auth[0]`) from the Terraform
+ state with `terraform state rm`.
__NOTE:__ We give you the `kubernetes_config_map_ignore_role_changes` option and default it to `true` for the following reasons:
- We provision the EKS cluster
@@ -108,7 +125,8 @@ introduction: |-
However, it is possible to get the worker node roles from the terraform-aws-eks-node-group via Terraform "remote state"
and include them with any other roles you want to add (example code to be published later), so we make
- ignoring the role changes optional. If you do not ignore changes then you will have no problem with making future intentional changes.
+ ignoring the role changes optional. (This is what we do for Cloud Posse clients.)
+ If you do not ignore changes then you will have no problem with making future intentional changes.
The downside of having `kubernetes_config_map_ignore_role_changes` set to true is that if you later want to make changes,
such as adding other IAM roles to Kubernetes groups, you cannot do so via Terraform, because the role changes are ignored.
@@ -316,17 +334,19 @@ include:
contributors:
- name: Erik Osterman
homepage: https://github.com/osterman
- avatar: https://s.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb?s=144
+ avatar: https://s.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb?s=150
github: osterman
- name: Andriy Knysh
homepage: https://github.com/aknysh/
- avatar: https://avatars0.githubusercontent.com/u/7356997?v=4&u=ed9ce1c9151d552d985bdf5546772e14ef7ab617&s=144
+ avatar: https://avatars0.githubusercontent.com/u/7356997?v=4&u=ed9ce1c9151d552d985bdf5546772e14ef7ab617&s=150
github: aknysh
- name: Igor Rodionov
homepage: https://github.com/goruha/
- avatar: https://s.gravatar.com/avatar/bc70834d32ed4517568a1feb0b9be7e2?s=144
+ avatar: https://s.gravatar.com/avatar/bc70834d32ed4517568a1feb0b9be7e2?s=150
github: goruha
+ - name: "Nuru"
+ github: "Nuru"
- name: Oscar
homepage: https://github.com/osulli/
- avatar: https://avatars1.githubusercontent.com/u/46930728?v=4&s=144
+ avatar: https://avatars1.githubusercontent.com/u/46930728?v=4&s=150
github: osulli
diff --git a/auth.tf b/auth.tf
index 65d2cadd..9b11f8ca 100644
--- a/auth.tf
+++ b/auth.tf
@@ -38,6 +38,9 @@ locals {
exec_profile = local.kube_exec_auth_enabled && var.kube_exec_auth_aws_profile_enabled ? ["--profile", var.kube_exec_auth_aws_profile] : []
exec_role = local.kube_exec_auth_enabled && var.kube_exec_auth_role_arn_enabled ? ["--role-arn", var.kube_exec_auth_role_arn] : []
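+
+  # Endpoint for the Kubernetes provider: the real cluster endpoint when we are managing the aws-auth ConfigMap, otherwise the dummy API server (see the comments on the provider block below)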
+ cluster_endpoint_data = join("", aws_eks_cluster.default.*.endpoint)
+ cluster_auth_map_endpoint = var.apply_config_map_aws_auth ? local.cluster_endpoint_data : var.dummy_kubeapi_server
+
certificate_authority_data_list = coalescelist(aws_eks_cluster.default.*.certificate_authority, [[{ data : "" }]])
certificate_authority_data_list_internal = local.certificate_authority_data_list[0]
certificate_authority_data_map = local.certificate_authority_data_list_internal[0]
@@ -94,8 +97,8 @@ provider "kubernetes" {
# so we can proceed with the task of creating or destroying the cluster.
#
# If this solution bothers you, you can disable it by setting var.dummy_kubeapi_server = null
- host = local.enabled ? coalesce(aws_eks_cluster.default[0].endpoint, var.dummy_kubeapi_server) : var.dummy_kubeapi_server
- cluster_ca_certificate = local.enabled ? base64decode(local.certificate_authority_data) : null
+ host = local.cluster_auth_map_endpoint
+ cluster_ca_certificate = local.enabled && !local.kubeconfig_path_enabled ? base64decode(local.certificate_authority_data) : null
token = local.kube_data_auth_enabled ? data.aws_eks_cluster_auth.eks[0].token : null
# The Kubernetes provider will use information from KUBECONFIG if it exists, but if the default cluster
# in KUBECONFIG is some other cluster, this will cause problems, so we override it always.
diff --git a/docs/migration-0.45.x+.md b/docs/migration-0.45.x+.md
index a1cd2962..e94f2014 100644
--- a/docs/migration-0.45.x+.md
+++ b/docs/migration-0.45.x+.md
@@ -1,47 +1,3 @@
# Migration to 0.45.x+
-Version `0.45.0` of this module introduces potential breaking changes that, without taking additional precautions, could cause the EKS cluster to be recreated.
-
-## Background
-
-This module creates an EKS cluster, which automatically creates an EKS-managed Security Group in which all managed nodes are placed automatically by EKS, and unmanaged nodes could be placed
-by the user, to ensure the nodes and control plane can communicate.
-
-Before version `0.45.0`, this module, by default, created an additional Security Group. Prior to version `0.19.0` of this module, that additional Security Group was the only one exposed by
-this module (because EKS at the time did not create the managed Security Group for the cluster), and it was intended that all worker nodes (managed and unmanaged) be placed in this
-additional Security Group. With version `0.19.0`, this module exposed the managed Security Group created by the EKS cluster, in which all managed node groups are placed by default. We now
-recommend placing non-managed node groups in the EKS-created Security Group as well by using the `allowed_security_group_ids` variable, and not create an additional Security Group.
-
-See https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html for more details.
-
-## Migration process
-
-If you are deploying a new EKS cluster with this module, no special steps need to be taken. Just keep the variable `create_security_group` set to `false` to not create an additional Security
-Group. Don't use the deprecated variables (see `variables-deprecated.tf`).
-
-If you are updating this module to the latest version on existing (already deployed) EKS clusters, set the variable `create_security_group` to `true` to enable the additional Security Group
-and all the rules (which were enabled by default in the previous releases of this module).
-
-## Deprecated variables
-
-Some variables have been deprecated (see `variables-deprecated.tf`), don't use them when creating new EKS clusters.
-
-- Use `allowed_security_group_ids` instead of `allowed_security_groups` and `workers_security_group_ids`
-
-- When using unmanaged worker nodes (e.g. with https://github.com/cloudposse/terraform-aws-eks-workers module), provide the worker nodes Security Groups to the cluster using
- the `allowed_security_group_ids` variable, for example:
-
- ```hcl
- module "eks_workers" {
- source = "cloudposse/eks-workers/aws"
- }
-
- module "eks_workers_2" {
- source = "cloudposse/eks-workers/aws"
- }
-
- module "eks_cluster" {
- source = "cloudposse/eks-cluster/aws"
- allowed_security_group_ids = [module.eks_workers.security_group_id, module.eks_workers_2.security_group_id]
- }
- ```
+Version `0.45.0` has been re-released as version `2.0.0`, and the migration documentation has moved to [migration-v1-v2.md](migration-v1-v2.md).
diff --git a/docs/migration-v1-v2.md b/docs/migration-v1-v2.md
new file mode 100644
index 00000000..7f4cbd97
--- /dev/null
+++ b/docs/migration-v1-v2.md
@@ -0,0 +1,47 @@
+# Migration From Version 1 to Version 2
+
+Version 2 (a.k.a. version `0.45.0`) of this module introduces potential breaking changes that, without taking additional precautions, could cause the EKS cluster to be recreated.
+
+## Background
+
+This module creates an EKS cluster, which automatically creates an EKS-managed Security Group. All managed nodes are placed in that Security Group automatically by EKS, and unmanaged nodes can be placed
+in it by the user, to ensure the nodes and control plane can communicate.
+
+Before version 2, this module, by default, created an additional Security Group. Prior to version `0.19.0` of this module, that additional Security Group was the only one exposed by
+this module (because EKS at the time did not create the managed Security Group for the cluster), and it was intended that all worker nodes (managed and unmanaged) be placed in this
+additional Security Group. With version `0.19.0`, this module exposed the managed Security Group created by the EKS cluster, in which all managed node groups are placed by default. We now
+recommend placing non-managed node groups in the EKS-created Security Group as well by using the `allowed_security_group_ids` variable, rather than creating an additional Security Group.
+
+See https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html for more details.
+
+## Migration process
+
+If you are deploying a new EKS cluster with this module, no special steps need to be taken. Just keep the variable `create_security_group` set to `false` to not create an additional Security
+Group. Don't use the deprecated variables (see `variables-deprecated.tf`).
+
+If you are updating this module to the latest version on existing (already deployed) EKS clusters, set the variable `create_security_group` to `true` to enable the additional Security Group
+and all the rules (which were enabled by default in the previous releases of this module).
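+
+For example (a minimal sketch; all other settings omitted):
+
+```hcl
+module "eks_cluster" {
+  source = "cloudposse/eks-cluster/aws"
+
+  # Keep the additional Security Group that earlier releases created by default
+  create_security_group = true
+
+  # ... the rest of your existing configuration ...
+}
+```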
+
+## Deprecated variables
+
+Some variables have been deprecated (see `variables-deprecated.tf`); don't use them when creating new EKS clusters.
+
+- Use `allowed_security_group_ids` instead of `allowed_security_groups` and `workers_security_group_ids`
+
+- When using unmanaged worker nodes (e.g. with https://github.com/cloudposse/terraform-aws-eks-workers module), provide the worker nodes Security Groups to the cluster using
+ the `allowed_security_group_ids` variable, for example:
+
+ ```hcl
+ module "eks_workers" {
+ source = "cloudposse/eks-workers/aws"
+ }
+
+ module "eks_workers_2" {
+ source = "cloudposse/eks-workers/aws"
+ }
+
+ module "eks_cluster" {
+ source = "cloudposse/eks-cluster/aws"
+ allowed_security_group_ids = [module.eks_workers.security_group_id, module.eks_workers_2.security_group_id]
+ }
+ ```
diff --git a/docs/terraform.md b/docs/terraform.md
index 6c47d622..d3d5168a 100644
--- a/docs/terraform.md
+++ b/docs/terraform.md
@@ -69,6 +69,7 @@
| [associated\_security\_group\_ids](#input\_associated\_security\_group\_ids) | A list of IDs of Security Groups to associate the cluster with.
These security groups will not be modified. | `list(string)` | `[]` | no |
| [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the `delimiter`
and treated as a single ID element. | `list(string)` | `[]` | no |
| [aws\_auth\_yaml\_strip\_quotes](#input\_aws\_auth\_yaml\_strip\_quotes) | If true, remove double quotes from the generated aws-auth ConfigMap YAML to reduce spurious diffs in plans | `bool` | `true` | no |
+| [cloudwatch\_log\_group\_kms\_key\_id](#input\_cloudwatch\_log\_group\_kms\_key\_id) | If provided, the KMS Key ID to use to encrypt AWS CloudWatch logs | `string` | `null` | no |
| [cluster\_encryption\_config\_enabled](#input\_cluster\_encryption\_config\_enabled) | Set to `true` to enable Cluster Encryption Configuration | `bool` | `true` | no |
| [cluster\_encryption\_config\_kms\_key\_deletion\_window\_in\_days](#input\_cluster\_encryption\_config\_kms\_key\_deletion\_window\_in\_days) | Cluster Encryption Config KMS Key Resource argument - key deletion windows in days post destruction | `number` | `10` | no |
| [cluster\_encryption\_config\_kms\_key\_enable\_key\_rotation](#input\_cluster\_encryption\_config\_kms\_key\_enable\_key\_rotation) | Cluster Encryption Config KMS Key Resource argument - enable kms key rotation | `bool` | `true` | no |
@@ -121,7 +122,7 @@
| [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).
Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
| [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
| [vpc\_id](#input\_vpc\_id) | VPC ID for the EKS cluster | `string` | n/a | yes |
-| [wait\_for\_cluster\_command](#input\_wait\_for\_cluster\_command) | `local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint are available as environment variable `ENDPOINT` | `string` | `"curl --silent --fail --retry 60 --retry-delay 5 --retry-connrefused --insecure --output /dev/null $ENDPOINT/healthz"` | no |
+| [wait\_for\_cluster\_command](#input\_wait\_for\_cluster\_command) | `local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint URL is available as environment variable `ENDPOINT` | `string` | `"curl --silent --fail --retry 30 --retry-delay 10 --retry-connrefused --max-time 11 --insecure --output /dev/null $ENDPOINT/healthz"` | no |
| [workers\_role\_arns](#input\_workers\_role\_arns) | List of Role ARNs of the worker nodes | `list(string)` | `[]` | no |
| [workers\_security\_group\_ids](#input\_workers\_security\_group\_ids) | DEPRECATED: Use `allowed_security_group_ids` instead.
Historical description: Security Group IDs of the worker nodes.
Historical default: `[]` | `list(string)` | `[]` | no |
@@ -129,6 +130,7 @@
| Name | Description |
|------|-------------|
+| [cloudwatch\_log\_group\_kms\_key\_id](#output\_cloudwatch\_log\_group\_kms\_key\_id) | KMS Key ID to encrypt AWS CloudWatch logs |
| [cloudwatch\_log\_group\_name](#output\_cloudwatch\_log\_group\_name) | The name of the log group created in cloudwatch where cluster logs are forwarded to if enabled |
| [cluster\_encryption\_config\_enabled](#output\_cluster\_encryption\_config\_enabled) | If true, Cluster Encryption Configuration is enabled |
| [cluster\_encryption\_config\_provider\_key\_alias](#output\_cluster\_encryption\_config\_provider\_key\_alias) | Cluster Encryption Config KMS Key Alias ARN |
diff --git a/main.tf b/main.tf
index d380d048..a8055f13 100644
--- a/main.tf
+++ b/main.tf
@@ -29,6 +29,7 @@ resource "aws_cloudwatch_log_group" "default" {
count = local.enabled && length(var.enabled_cluster_log_types) > 0 ? 1 : 0
name = local.cloudwatch_log_group_name
retention_in_days = var.cluster_log_retention_period
+ kms_key_id = var.cloudwatch_log_group_kms_key_id
tags = module.label.tags
}
diff --git a/outputs.tf b/outputs.tf
index e2d573cc..29d575af 100644
--- a/outputs.tf
+++ b/outputs.tf
@@ -87,3 +87,8 @@ output "cloudwatch_log_group_name" {
description = "The name of the log group created in cloudwatch where cluster logs are forwarded to if enabled"
value = local.cloudwatch_log_group_name
}
+
+output "cloudwatch_log_group_kms_key_id" {
+ description = "KMS Key ID to encrypt AWS CloudWatch logs"
+ value = var.cloudwatch_log_group_kms_key_id
+}
diff --git a/variables.tf b/variables.tf
index bff5ecd1..2bbdba32 100644
--- a/variables.tf
+++ b/variables.tf
@@ -129,9 +129,11 @@ variable "local_exec_interpreter" {
}
variable "wait_for_cluster_command" {
- type = string
- default = "curl --silent --fail --retry 60 --retry-delay 5 --retry-connrefused --insecure --output /dev/null $ENDPOINT/healthz"
- description = "`local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint are available as environment variable `ENDPOINT`"
+ type = string
+ ## --max-time is per attempt, --retry is the number of attempts
+ ## Approx. total time limit is (max-time + retry-delay) * retry seconds
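+  ## With the defaults below, that is (11 + 10) * 30 = 630 seconds, i.e. about 10.5 minutes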
+ default = "curl --silent --fail --retry 30 --retry-delay 10 --retry-connrefused --max-time 11 --insecure --output /dev/null $ENDPOINT/healthz"
+ description = "`local-exec` command to execute to determine if the EKS cluster is healthy. Cluster endpoint URL is available as environment variable `ENDPOINT`"
}
variable "kubernetes_config_map_ignore_role_changes" {
@@ -182,6 +184,23 @@ variable "permissions_boundary" {
description = "If provided, all IAM roles will be created with this permissions boundary attached."
}
+variable "cloudwatch_log_group_kms_key_id" {
+ type = string
+ default = null
+ description = "If provided, the KMS Key ID to use to encrypt AWS CloudWatch logs"
+}
+
+variable "addons" {
+ type = list(object({
+ addon_name = string
+ addon_version = string
+ resolve_conflicts = string
+ service_account_role_arn = string
+ }))
+ default = []
+ description = "Manages [`aws_eks_addon`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_addon) resources."
+}
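+
+# Example (illustrative only; the addon name and version below are placeholders, and valid values for
+# `resolve_conflicts` are documented with the `aws_eks_addon` resource):
+#
+#   addons = [
+#     {
+#       addon_name               = "vpc-cni"
+#       addon_version            = "v1.10.1-eksbuild.1"
+#       resolve_conflicts        = "OVERWRITE"
+#       service_account_role_arn = null
+#     }
+#   ]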
+
##################
# All the following variables are just about configuring the Kubernetes provider
# to be able to modify the aws-auth ConfigMap. Once EKS provides a normal
@@ -274,14 +293,3 @@ variable "dummy_kubeapi_server" {
via `kubeconfig_path` and set `kubeconfig_path_enabled` to `true`.
EOT
}
-
-variable "addons" {
- type = list(object({
- addon_name = string
- addon_version = string
- resolve_conflicts = string
- service_account_role_arn = string
- }))
- default = []
- description = "Manages [`aws_eks_addon`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_addon) resources."
-}