From 719cd7e8e89ee8ca2802dd1b7a34fa38cb717bdb Mon Sep 17 00:00:00 2001
From: Andriy Knysh
Date: Wed, 13 Nov 2019 17:58:33 -0500
Subject: [PATCH] Allow installing external packages. Allow assuming IAM roles (#33)

* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Install AWS CLI and `kubectl` if needed
* Add `set -e`
* Update README
---
 README.md                      | 104 +++++++++++++++++++++++++++++----
 README.yaml                    |  96 ++++++++++++++++++++++++++----
 auth.tf                        |  69 +++++++++++++++++++---
 docs/terraform.md              |  10 +++-
 examples/complete/main.tf      |  43 +++++++++-----
 examples/complete/variables.tf |  66 +++++++++++++++++++++
 main.tf                        |   2 +-
 variables.tf                   |  50 +++++++++++++++-
 8 files changed, 394 insertions(+), 46 deletions(-)

diff --git a/README.md b/README.md
index dee08fb9..2436d605 100644
--- a/README.md
+++ b/README.md
@@ -85,7 +85,44 @@ The module provisions the following resources:
 - EKS cluster of master nodes that can be used together with the [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers) module to create a full-blown cluster
 - IAM Role to allow the cluster to access other AWS services
 - Security Group which is used by EKS workers to connect to the cluster and kubelets and pods to receive communication from the cluster control plane (see [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers))
-- The module generates `kubeconfig` configuration to connect to the cluster using `kubectl`
+- The module creates and automatically applies (via `kubectl apply`) an authentication ConfigMap to allow the worker nodes to join the cluster and to add additional users/roles/accounts
+
+### Works with [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html)
+
+To run on Terraform Cloud, set the following variables:
+
+  ```hcl
+  install_aws_cli = true
+  install_kubectl = true
+  external_packages_install_path = "~/.terraform/bin"
+  kubeconfig_path = "~/.kube/config"
+  configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml"
+
+  # Optional
+  aws_eks_update_kubeconfig_additional_arguments = "--verbose"
+  aws_cli_assume_role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
+  aws_cli_assume_role_session_name = "eks_cluster_example_session"
+  ```
+
+  Terraform Cloud executes `terraform plan/apply` on workers running Ubuntu.
+  For the module to provision the authentication ConfigMap (to allow the EKS worker nodes to join the EKS cluster and to add additional users/roles/accounts),
+  AWS CLI and `kubectl` need to be installed on Terraform Cloud workers.
+
+  To install the required external packages, set the variables `install_aws_cli` and `install_kubectl` to `true` and specify `external_packages_install_path`, `kubeconfig_path` and `configmap_auth_file`.
+
+  See [auth.tf](auth.tf) and [Installing Software in the Run Environment](https://www.terraform.io/docs/cloud/run/install-software.html) for more details.
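
+
+  For illustration, a user-space install of these packages on a Terraform Cloud worker looks roughly like the following (a sketch only; [auth.tf](auth.tf) performs the equivalent steps, and the download locations follow the standard `kubectl` release and AWS CLI bundled-installer conventions):
+
+  ```sh
+  mkdir -p ~/.terraform/bin
+  export PATH="$PATH:$HOME/.terraform/bin"
+
+  # kubectl: a static binary downloaded into a user-writable path
+  kubectl_version=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
+  curl -LO "https://storage.googleapis.com/kubernetes-release/release/${kubectl_version}/bin/linux/amd64/kubectl"
+  chmod +x kubectl && mv kubectl ~/.terraform/bin/
+
+  # AWS CLI v1: the bundled installer does not require root permissions
+  curl -s -o awscli-bundle.zip https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
+  unzip -q awscli-bundle.zip
+  ./awscli-bundle/install -i ~/.terraform/aws-cli -b ~/.terraform/bin/aws
+  ```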
+ + In a multi-account architecture, we might have a separate identity account where we provision all IAM users, and other accounts (e.g. `prod`, `staging`, `dev`, `audit`, `testing`) + where all other AWS resources are provisioned. The IAM Users from the identity account can assume IAM roles to access the other accounts. + + In this case, we provide Terraform Cloud with access keys (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) for an IAM User from the identity account + and allow it to assume an IAM Role into the AWS account where the module gets provisioned. + + To support this, the module can assume an IAM role before executing the command `aws eks update-kubeconfig` when applying the auth ConfigMap. + + Set variable `aws_cli_assume_role_arn` to the Amazon Resource Name (ARN) of the role to assume and variable `aws_cli_assume_role_session_name` to the identifier for the assumed role session. + + See [auth.tf](auth.tf) and [assume-role](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html) for more details. ## Usage @@ -95,9 +132,12 @@ Instead pin to the release tag (e.g. `?ref=tags/x.y.z`) of one of our [latest re -Module usage examples: +For a complete example, see [examples/complete](examples/complete). + +For automated tests of the complete example using [bats](https://github.com/bats-core/bats-core) and [Terratest](https://github.com/gruntwork-io/terratest) (which tests and deploys the example on AWS), see [test](test). + +Other examples: -- [examples/complete](examples/complete) - complete example - [terraform-root-modules/eks](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures - [terraform-root-modules/eks-backing-services-peering](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks-backing-services-peering) - example of VPC peering between the EKS VPC and backing services VPC @@ -123,9 +163,9 @@ Module usage examples: tags = merge(var.tags, map("kubernetes.io/cluster/${module.label.id}", "shared")) # Unfortunately, most_recent (https://github.com/cloudposse/terraform-aws-eks-workers/blob/34a43c25624a6efb3ba5d2770a601d7cb3c0d391/main.tf#L141) - # variable does not work as expected, if you are not going to use custom ami you should + # variable does not work as expected, if you are not going to use custom AMI you should # enforce usage of eks_worker_ami_name_filter variable to set the right kubernetes version for EKS workers, - # otherwise will be used the first version of Kubernetes supported by AWS (v1.11) for EKS workers but + # otherwise the first version of Kubernetes supported by AWS (v1.11) for EKS workers will be used, but # EKS control plane will use the version specified by kubernetes_version variable. eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*" } @@ -202,9 +242,6 @@ Module usage examples: Module usage with two worker groups: ```hcl - { - ... 
- module "eks_workers" { source = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=master" namespace = var.namespace @@ -270,7 +307,46 @@ Module usage with two worker groups: workers_role_arns = [module.eks_workers.workers_role_arn, module.eks_workers_2.workers_role_arn] workers_security_group_ids = [module.eks_workers.security_group_id, module.eks_workers_2.security_group_id] - + } +``` + +Module usage on [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html): + +```hcl + provider "aws" { + region = "us-east-2" + + assume_role { + role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole" + } + } + + module "eks_cluster" { + source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master" + namespace = var.namespace + stage = var.stage + name = var.name + attributes = var.attributes + tags = var.tags + region = "us-east-2" + vpc_id = module.vpc.vpc_id + subnet_ids = module.subnets.public_subnet_ids + + local_exec_interpreter = "/bin/bash" + kubernetes_version = "1.14" + + workers_role_arns = [module.eks_workers.workers_role_arn] + workers_security_group_ids = [module.eks_workers.security_group_id] + + # Terraform Cloud configurations + kubeconfig_path = "~/.kube/config" + configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml" + install_aws_cli = true + install_kubectl = true + external_packages_install_path = "~/.terraform/bin" + aws_eks_update_kubeconfig_additional_arguments = "--verbose" + aws_cli_assume_role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole" + aws_cli_assume_role_session_name = "eks_cluster_example_session" } ``` @@ -298,6 +374,9 @@ Available targets: | apply_config_map_aws_auth | Whether to generate local files from `kubeconfig` and `config-map-aws-auth` templates and perform `kubectl apply` to apply the ConfigMap to allow worker nodes to join the EKS cluster | bool | `true` | no | | associate_public_ip_address | Associate a public IP address with an instance in a VPC | bool | `true` | no | | attributes | Additional attributes (e.g. `1`) | list(string) | `` | no | +| aws_cli_assume_role_arn | IAM Role ARN for AWS CLI to assume before calling `aws eks` to update `kubeconfig` | string | `` | no | +| aws_cli_assume_role_session_name | An identifier for the assumed role session when assuming the IAM Role for AWS CLI before calling `aws eks` to update `kubeconfig` | string | `` | no | +| aws_eks_update_kubeconfig_additional_arguments | Additional arguments for `aws eks update-kubeconfig` command, e.g. `--role-arn xxxxxxxxx`. For more info, see https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html | string | `` | no | | configmap_auth_file | Path to `configmap_auth_file` | string | `` | no | | configmap_auth_template_file | Path to `config_auth_template_file` | string | `` | no | | delimiter | Delimiter to be used between `name`, `namespace`, `stage`, etc. | string | `-` | no | @@ -305,9 +384,14 @@ Available targets: | enabled_cluster_log_types | A list of the desired control plane logging to enable. For more information, see https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html. Possible values [`api`, `audit`, `authenticator`, `controllerManager`, `scheduler`] | list(string) | `` | no | | endpoint_private_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. 
Default to AWS EKS resource and it is false | bool | `false` | no |
 | endpoint_public_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. Default to AWS EKS resource and it is true | bool | `true` | no |
+| external_packages_install_path | Path to install external packages, e.g. AWS CLI and `kubectl`. Used when the module is provisioned on workstations where the external packages are not installed by default, e.g. Terraform Cloud workers | string | `` | no |
+| install_aws_cli | Set to `true` to install AWS CLI if the module is provisioned on workstations where AWS CLI is not installed by default, e.g. Terraform Cloud workers | bool | `false` | no |
+| install_kubectl | Set to `true` to install `kubectl` if the module is provisioned on workstations where `kubectl` is not installed by default, e.g. Terraform Cloud workers | bool | `false` | no |
+| jq_version | Version of `jq` to download to extract temporary credentials after running `aws sts assume-role` if AWS CLI needs to assume role to access the cluster (if variable `aws_cli_assume_role_arn` is set) | string | `1.6` | no |
 | kubeconfig_path | The path to `kubeconfig` file | string | `~/.kube/config` | no |
+| kubectl_version | `kubectl` version to install. If not specified, the latest version will be used | string | `` | no |
 | kubernetes_version | Desired Kubernetes master version. If you do not specify a value, the latest available version is used | string | `1.14` | no |
-| local_exec_interpreter | shell to use for local exec | string | `/bin/sh` | no |
+| local_exec_interpreter | shell to use for local exec | string | `/bin/bash` | no |
 | map_additional_aws_accounts | Additional AWS account numbers to add to `config-map-aws-auth` ConfigMap | list(string) | `` | no |
 | map_additional_iam_roles | Additional IAM roles to add to `config-map-aws-auth` ConfigMap | object | `` | no |
 | map_additional_iam_users | Additional IAM users to add to `config-map-aws-auth` ConfigMap | object | `` | no |
diff --git a/README.yaml b/README.yaml
index 7a41cc54..08ecefb7 100644
--- a/README.yaml
+++ b/README.yaml
@@ -70,14 +70,54 @@ introduction: |-
   - EKS cluster of master nodes that can be used together with the [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers) module to create a full-blown cluster
   - IAM Role to allow the cluster to access other AWS services
   - Security Group which is used by EKS workers to connect to the cluster and kubelets and pods to receive communication from the cluster control plane (see [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers))
-  - The module generates `kubeconfig` configuration to connect to the cluster using `kubectl`
+  - The module creates and automatically applies (via `kubectl apply`) an authentication ConfigMap to allow the worker nodes to join the cluster and to add additional users/roles/accounts
+
+  ### Works with [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html)
+
+  To run on Terraform Cloud, set the following variables:
+
+    ```hcl
+    install_aws_cli = true
+    install_kubectl = true
+    external_packages_install_path = "~/.terraform/bin"
+    kubeconfig_path = "~/.kube/config"
+    configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml"
+
+    # Optional
+    aws_eks_update_kubeconfig_additional_arguments = "--verbose"
+    aws_cli_assume_role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
+    aws_cli_assume_role_session_name = "eks_cluster_example_session"
+    ```
+
+    
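With these variables set, the module's `local-exec` step on the Terraform Cloud worker boils down to roughly the following (a simplified sketch; the exact commands live in [auth.tf](auth.tf), and `<cluster_name>` stands for the provisioned EKS cluster name):
+
+    ```sh
+    # Assume the IAM role first (when `aws_cli_assume_role_arn` is set); `jq` extracts the temporary credentials
+    creds=$(aws sts assume-role \
+      --role-arn "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole" \
+      --role-session-name "eks_cluster_example_session")
+    export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.Credentials.AccessKeyId')
+    export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.Credentials.SecretAccessKey')
+    export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Credentials.SessionToken')
+
+    # Write the kubeconfig and apply the auth ConfigMap
+    aws eks update-kubeconfig --name <cluster_name> --kubeconfig ~/.kube/config --verbose
+    kubectl apply -f /home/terraform/.terraform/configmap-auth.yaml --kubeconfig ~/.kube/config
+    ```
+
+    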
Terraform Cloud executes `terraform plan/apply` on workers running Ubuntu. + For the module to provision the authentication ConfigMap (to allow the EKS worker nodes to join the EKS cluster and to add additional users/roles/accounts), + AWS CLI and `kubectl` need to be installed on Terraform Cloud workers. + + To install the required external packages, set the variables `install_aws_cli` and `install_kubectl` to `true` and specify `external_packages_install_path`, `kubeconfig_path` and `configmap_auth_file`. + + See [auth.tf](auth.tf) and [Installing Software in the Run Environment](https://www.terraform.io/docs/cloud/run/install-software.html) for more details. + + In a multi-account architecture, we might have a separate identity account where we provision all IAM users, and other accounts (e.g. `prod`, `staging`, `dev`, `audit`, `testing`) + where all other AWS resources are provisioned. The IAM Users from the identity account can assume IAM roles to access the other accounts. + + In this case, we provide Terraform Cloud with access keys (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) for an IAM User from the identity account + and allow it to assume an IAM Role into the AWS account where the module gets provisioned. + + To support this, the module can assume an IAM role before executing the command `aws eks update-kubeconfig` when applying the auth ConfigMap. + + Set variable `aws_cli_assume_role_arn` to the Amazon Resource Name (ARN) of the role to assume and variable `aws_cli_assume_role_session_name` to the identifier for the assumed role session. + + See [auth.tf](auth.tf) and [assume-role](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html) for more details. # How to use this project usage: |- - Module usage examples: + For a complete example, see [examples/complete](examples/complete). + + For automated tests of the complete example using [bats](https://github.com/bats-core/bats-core) and [Terratest](https://github.com/gruntwork-io/terratest) (which tests and deploys the example on AWS), see [test](test). 
+ + Other examples: - - [examples/complete](examples/complete) - complete example - [terraform-root-modules/eks](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures - [terraform-root-modules/eks-backing-services-peering](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks-backing-services-peering) - example of VPC peering between the EKS VPC and backing services VPC @@ -101,11 +141,11 @@ usage: |- # for EKS and Kubernetes to discover and manage networking resources # https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#base-vpc-networking tags = merge(var.tags, map("kubernetes.io/cluster/${module.label.id}", "shared")) - + # Unfortunately, most_recent (https://github.com/cloudposse/terraform-aws-eks-workers/blob/34a43c25624a6efb3ba5d2770a601d7cb3c0d391/main.tf#L141) - # variable does not work as expected, if you are not going to use custom ami you should + # variable does not work as expected, if you are not going to use custom AMI you should # enforce usage of eks_worker_ami_name_filter variable to set the right kubernetes version for EKS workers, - # otherwise will be used the first version of Kubernetes supported by AWS (v1.11) for EKS workers but + # otherwise the first version of Kubernetes supported by AWS (v1.11) for EKS workers will be used, but # EKS control plane will use the version specified by kubernetes_version variable. eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*" } @@ -182,9 +222,6 @@ usage: |- Module usage with two worker groups: ```hcl - { - ... - module "eks_workers" { source = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=master" namespace = var.namespace @@ -250,7 +287,46 @@ usage: |- workers_role_arns = [module.eks_workers.workers_role_arn, module.eks_workers_2.workers_role_arn] workers_security_group_ids = [module.eks_workers.security_group_id, module.eks_workers_2.security_group_id] - + } + ``` + + Module usage on [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html): + + ```hcl + provider "aws" { + region = "us-east-2" + + assume_role { + role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole" + } + } + + module "eks_cluster" { + source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master" + namespace = var.namespace + stage = var.stage + name = var.name + attributes = var.attributes + tags = var.tags + region = "us-east-2" + vpc_id = module.vpc.vpc_id + subnet_ids = module.subnets.public_subnet_ids + + local_exec_interpreter = "/bin/bash" + kubernetes_version = "1.14" + + workers_role_arns = [module.eks_workers.workers_role_arn] + workers_security_group_ids = [module.eks_workers.security_group_id] + + # Terraform Cloud configurations + kubeconfig_path = "~/.kube/config" + configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml" + install_aws_cli = true + install_kubectl = true + external_packages_install_path = "~/.terraform/bin" + aws_eks_update_kubeconfig_additional_arguments = "--verbose" + aws_cli_assume_role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole" + aws_cli_assume_role_session_name = "eks_cluster_example_session" } ``` diff --git a/auth.tf b/auth.tf index 39a1cb36..d925ac55 100644 --- a/auth.tf +++ b/auth.tf @@ -19,6 +19,11 @@ # https://cloud.google.com/kubernetes-engine/docs/concepts/configmap # http://yaml-multiline.info # 
https://github.com/terraform-providers/terraform-provider-kubernetes/issues/216
+# https://www.terraform.io/docs/cloud/run/install-software.html
+# https://stackoverflow.com/questions/26123740/is-it-possible-to-install-aws-cli-package-without-root-permission
+# https://stackoverflow.com/questions/58232731/kubectl-missing-form-terraform-cloud
+# https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html
+# https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html
 
 
 locals {
@@ -30,6 +35,9 @@ locals {
   configmap_auth_template_file = var.configmap_auth_template_file == "" ? join("/", [path.module, "configmap-auth.yaml.tpl"]) : var.configmap_auth_template_file
   configmap_auth_file = var.configmap_auth_file == "" ? join("/", [path.module, "configmap-auth.yaml"]) : var.configmap_auth_file
 
+  external_packages_install_path = var.external_packages_install_path == "" ? join("/", [path.module, ".terraform/bin"]) : var.external_packages_install_path
+  kubectl_version = var.kubectl_version == "" ? "$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)" : var.kubectl_version
+
   cluster_name = join("", aws_eks_cluster.default.*.id)
 
   # Add worker nodes role ARNs (could be from many worker groups) to the ConfigMap
@@ -72,21 +80,66 @@ resource "null_resource" "apply_configmap_auth" {
   count = var.enabled && var.apply_config_map_aws_auth ? 1 : 0
 
   triggers = {
-    cluster_updated                 = join("", aws_eks_cluster.default.*.id)
-    worker_roles_updated            = local.map_worker_roles_yaml
-    additional_roles_updated        = local.map_additional_iam_roles_yaml
-    additional_users_updated        = local.map_additional_iam_users_yaml
-    additional_aws_accounts_updated = local.map_additional_aws_accounts_yaml
+    cluster_updated                     = join("", aws_eks_cluster.default.*.id)
+    worker_roles_updated                = local.map_worker_roles_yaml
+    additional_roles_updated            = local.map_additional_iam_roles_yaml
+    additional_users_updated            = local.map_additional_iam_users_yaml
+    additional_aws_accounts_updated     = local.map_additional_aws_accounts_yaml
+    configmap_auth_file_content_changed = join("", local_file.configmap_auth.*.content)
+    configmap_auth_file_id_changed      = join("", local_file.configmap_auth.*.id)
   }
 
   depends_on = [aws_eks_cluster.default, local_file.configmap_auth]
 
   provisioner "local-exec" {
     interpreter = [var.local_exec_interpreter, "-c"]
-    command     = <<EOT
-      ...
-    EOT
+    command     = <<EOT
+      ...
+    EOT
   }
 }
diff --git a/docs/terraform.md b/docs/terraform.md
--- a/docs/terraform.md
+++ b/docs/terraform.md
 | attributes | Additional attributes (e.g. `1`) | list(string) | `` | no |
+| aws_cli_assume_role_arn | IAM Role ARN for AWS CLI to assume before calling `aws eks` to update `kubeconfig` | string | `` | no |
+| aws_cli_assume_role_session_name | An identifier for the assumed role session when assuming the IAM Role for AWS CLI before calling `aws eks` to update `kubeconfig` | string | `` | no |
+| aws_eks_update_kubeconfig_additional_arguments | Additional arguments for `aws eks update-kubeconfig` command, e.g. `--role-arn xxxxxxxxx`. For more info, see https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html | string | `` | no |
 | configmap_auth_file | Path to `configmap_auth_file` | string | `` | no |
 | configmap_auth_template_file | Path to `config_auth_template_file` | string | `` | no |
 | delimiter | Delimiter to be used between `name`, `namespace`, `stage`, etc. | string | `-` | no |
@@ -14,9 +17,14 @@
 | enabled_cluster_log_types | A list of the desired control plane logging to enable. For more information, see https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html.
 Possible values [`api`, `audit`, `authenticator`, `controllerManager`, `scheduler`] | list(string) | `` | no |
 | endpoint_private_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. Default to AWS EKS resource and it is false | bool | `false` | no |
 | endpoint_public_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. Default to AWS EKS resource and it is true | bool | `true` | no |
+| external_packages_install_path | Path to install external packages, e.g. AWS CLI and `kubectl`. Used when the module is provisioned on workstations where the external packages are not installed by default, e.g. Terraform Cloud workers | string | `` | no |
+| install_aws_cli | Set to `true` to install AWS CLI if the module is provisioned on workstations where AWS CLI is not installed by default, e.g. Terraform Cloud workers | bool | `false` | no |
+| install_kubectl | Set to `true` to install `kubectl` if the module is provisioned on workstations where `kubectl` is not installed by default, e.g. Terraform Cloud workers | bool | `false` | no |
+| jq_version | Version of `jq` to download to extract temporary credentials after running `aws sts assume-role` if AWS CLI needs to assume role to access the cluster (if variable `aws_cli_assume_role_arn` is set) | string | `1.6` | no |
 | kubeconfig_path | The path to `kubeconfig` file | string | `~/.kube/config` | no |
+| kubectl_version | `kubectl` version to install. If not specified, the latest version will be used | string | `` | no |
 | kubernetes_version | Desired Kubernetes master version. If you do not specify a value, the latest available version is used | string | `1.14` | no |
-| local_exec_interpreter | shell to use for local exec | string | `/bin/sh` | no |
+| local_exec_interpreter | shell to use for local exec | string | `/bin/bash` | no |
 | map_additional_aws_accounts | Additional AWS account numbers to add to `config-map-aws-auth` ConfigMap | list(string) | `` | no |
 | map_additional_iam_roles | Additional IAM roles to add to `config-map-aws-auth` ConfigMap | object | `` | no |
 | map_additional_iam_users | Additional IAM users to add to `config-map-aws-auth` ConfigMap | object | `` | no |
diff --git a/examples/complete/main.tf b/examples/complete/main.tf
index 9b546d8a..2a047957 100644
--- a/examples/complete/main.tf
+++ b/examples/complete/main.tf
@@ -3,7 +3,7 @@ provider "aws" {
 }
 
 module "label" {
-  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.15.0"
+  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
   namespace = var.namespace
   name = var.name
   stage = var.stage
@@ -27,7 +27,7 @@ locals {
 }
 
 module "vpc" {
-  source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.8.0"
+  source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.8.1"
   namespace = var.namespace
   stage = var.stage
   name = var.name
@@ -37,7 +37,7 @@ module "vpc" {
 }
 
 module "subnets" {
-  source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.16.0"
+  source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.16.1"
   availability_zones = var.availability_zones
   namespace = var.namespace
   stage = var.stage
@@ -52,7 +52,7 @@ module "subnets" {
 }
 
 module "eks_workers" {
-  source = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.10.0"
+  source = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.11.0"
  
namespace = var.namespace stage = var.stage name = var.name @@ -79,17 +79,30 @@ module "eks_workers" { } module "eks_cluster" { - source = "../../" - namespace = var.namespace - stage = var.stage - name = var.name - attributes = var.attributes - tags = var.tags - region = var.region - vpc_id = module.vpc.vpc_id - subnet_ids = module.subnets.public_subnet_ids - kubernetes_version = var.kubernetes_version - kubeconfig_path = var.kubeconfig_path + source = "../../" + namespace = var.namespace + stage = var.stage + name = var.name + attributes = var.attributes + tags = var.tags + region = var.region + vpc_id = module.vpc.vpc_id + subnet_ids = module.subnets.public_subnet_ids + kubernetes_version = var.kubernetes_version + kubeconfig_path = var.kubeconfig_path + local_exec_interpreter = var.local_exec_interpreter + + configmap_auth_template_file = var.configmap_auth_template_file + configmap_auth_file = var.configmap_auth_file + + install_aws_cli = var.install_aws_cli + install_kubectl = var.install_kubectl + kubectl_version = var.kubectl_version + jq_version = var.jq_version + external_packages_install_path = var.external_packages_install_path + aws_eks_update_kubeconfig_additional_arguments = var.aws_eks_update_kubeconfig_additional_arguments + aws_cli_assume_role_arn = var.aws_cli_assume_role_arn + aws_cli_assume_role_session_name = var.aws_cli_assume_role_session_name workers_role_arns = [module.eks_workers.workers_role_arn] workers_security_group_ids = [module.eks_workers.security_group_id] diff --git a/examples/complete/variables.tf b/examples/complete/variables.tf index 25eea0ed..492e5678 100644 --- a/examples/complete/variables.tf +++ b/examples/complete/variables.tf @@ -126,3 +126,69 @@ variable "kubeconfig_path" { type = string description = "The path to `kubeconfig` file" } + +variable "local_exec_interpreter" { + type = string + default = "/bin/bash" + description = "shell to use for local exec" +} + +variable "configmap_auth_template_file" { + type = string + default = "" + description = "Path to `config_auth_template_file`" +} + +variable "configmap_auth_file" { + type = string + default = "" + description = "Path to `configmap_auth_file`" +} + +variable "install_aws_cli" { + type = bool + default = false + description = "Set to `true` to install AWS CLI if the module is provisioned on workstations where AWS CLI is not installed by default, e.g. Terraform Cloud workers" +} + +variable "install_kubectl" { + type = bool + default = false + description = "Set to `true` to install `kubectl` if the module is provisioned on workstations where `kubectl` is not installed by default, e.g. Terraform Cloud workers" +} + +variable "kubectl_version" { + type = string + default = "" + description = "`kubectl` version to install. If not specified, the latest version will be used" +} + +variable "external_packages_install_path" { + type = string + default = "" + description = "Path to install external packages, e.g. AWS CLI and `kubectl`. Used when the module is provisioned on workstations where the external packages are not installed by default, e.g. Terraform Cloud workers" +} + +variable "aws_eks_update_kubeconfig_additional_arguments" { + type = string + default = "" + description = "Additional arguments for `aws eks update-kubeconfig` command, e.g. `--role-arn xxxxxxxxx`. 
For more info, see https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html"
+}
+
+variable "aws_cli_assume_role_arn" {
+  type = string
+  default = ""
+  description = "IAM Role ARN for AWS CLI to assume before calling `aws eks` to update `kubeconfig`"
+}
+
+variable "aws_cli_assume_role_session_name" {
+  type = string
+  default = ""
+  description = "An identifier for the assumed role session when assuming the IAM Role for AWS CLI before calling `aws eks` to update `kubeconfig`"
+}
+
+variable "jq_version" {
+  type = string
+  default = "1.6"
+  description = "Version of `jq` to download to extract temporary credentials after running `aws sts assume-role` if AWS CLI needs to assume role to access the cluster (if variable `aws_cli_assume_role_arn` is set)"
+}
diff --git a/main.tf b/main.tf
index 15a9428e..6e70e8d6 100644
--- a/main.tf
+++ b/main.tf
@@ -1,5 +1,5 @@
 module "label" {
-  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.15.0"
+  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
   namespace = var.namespace
   stage = var.stage
   name = var.name
diff --git a/variables.tf b/variables.tf
index ee99443a..a8e166ad 100644
--- a/variables.tf
+++ b/variables.tf
@@ -150,7 +150,7 @@ variable "kubeconfig_path" {
 
 variable "local_exec_interpreter" {
   type = string
-  default = "/bin/sh"
+  default = "/bin/bash"
   description = "shell to use for local exec"
 }
 
@@ -165,3 +165,51 @@ variable "configmap_auth_file" {
   default = ""
   description = "Path to `configmap_auth_file`"
 }
+
+variable "install_aws_cli" {
+  type = bool
+  default = false
+  description = "Set to `true` to install AWS CLI if the module is provisioned on workstations where AWS CLI is not installed by default, e.g. Terraform Cloud workers"
+}
+
+variable "install_kubectl" {
+  type = bool
+  default = false
+  description = "Set to `true` to install `kubectl` if the module is provisioned on workstations where `kubectl` is not installed by default, e.g. Terraform Cloud workers"
+}
+
+variable "kubectl_version" {
+  type = string
+  default = ""
+  description = "`kubectl` version to install. If not specified, the latest version will be used"
+}
+
+variable "external_packages_install_path" {
+  type = string
+  default = ""
+  description = "Path to install external packages, e.g. AWS CLI and `kubectl`. Used when the module is provisioned on workstations where the external packages are not installed by default, e.g. Terraform Cloud workers"
+}
+
+variable "aws_eks_update_kubeconfig_additional_arguments" {
+  type = string
+  default = ""
+  description = "Additional arguments for `aws eks update-kubeconfig` command, e.g. `--role-arn xxxxxxxxx`. For more info, see https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html"
+}
+
+variable "aws_cli_assume_role_arn" {
+  type = string
+  default = ""
+  description = "IAM Role ARN for AWS CLI to assume before calling `aws eks` to update `kubeconfig`"
+}
+
+variable "aws_cli_assume_role_session_name" {
+  type = string
+  default = ""
+  description = "An identifier for the assumed role session when assuming the IAM Role for AWS CLI before calling `aws eks` to update `kubeconfig`"
+}
+
+variable "jq_version" {
+  type = string
+  default = "1.6"
+  description = "Version of `jq` to download to extract temporary credentials after running `aws sts assume-role` if AWS CLI needs to assume role to access the cluster (if variable `aws_cli_assume_role_arn` is set)"
+}