Allow installing external packages. Allow assuming IAM roles (cloudposse#33)

* Install AWS CLI and `kubectl` if needed

* Add `set -e`

* Update README
aknysh authored Nov 13, 2019
1 parent 992c667 commit 719cd7e
Showing 8 changed files with 394 additions and 46 deletions.
104 changes: 94 additions & 10 deletions README.md
@@ -85,7 +85,44 @@ The module provisions the following resources:
- EKS cluster of master nodes that can be used together with the [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers) module to create a full-blown cluster
- IAM Role to allow the cluster to access other AWS services
- Security Group which is used by EKS workers to connect to the cluster and kubelets and pods to receive communication from the cluster control plane (see [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers))
- The module generates `kubeconfig` configuration to connect to the cluster using `kubectl`
- The module creates and automatically applies (via `kubectl apply`) an authentication ConfigMap to allow the worker nodes to join the cluster and to add additional users/roles/accounts

### Works with [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html)

To run on Terraform Cloud, set the following variables:

```hcl
install_aws_cli = true
install_kubectl = true
external_packages_install_path = "~/.terraform/bin"
kubeconfig_path = "~/.kube/config"
configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml"
# Optional
aws_eks_update_kubeconfig_additional_arguments = "--verbose"
aws_cli_assume_role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
aws_cli_assume_role_session_name = "eks_cluster_example_session"
```

Terraform Cloud executes `terraform plan/apply` on workers running Ubuntu.
For the module to provision the authentication ConfigMap (to allow the EKS worker nodes to join the EKS cluster and to add additional users/roles/accounts),
AWS CLI and `kubectl` need to be installed on Terraform Cloud workers.

To install the required external packages, set the variables `install_aws_cli` and `install_kubectl` to `true` and specify `external_packages_install_path`, `kubeconfig_path` and `configmap_auth_file`.

See [auth.tf](auth.tf) and [Installing Software in the Run Environment](https://www.terraform.io/docs/cloud/run/install-software.html) for more details.

In a multi-account architecture, we might have a separate identity account where we provision all IAM users, and other accounts (e.g. `prod`, `staging`, `dev`, `audit`, `testing`)
where all other AWS resources are provisioned. The IAM Users from the identity account can assume IAM roles to access the other accounts.

In this case, we provide Terraform Cloud with access keys (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) for an IAM User from the identity account
and allow it to assume an IAM Role into the AWS account where the module gets provisioned.

To support this, the module can assume an IAM role before executing the command `aws eks update-kubeconfig` when applying the auth ConfigMap.

Set variable `aws_cli_assume_role_arn` to the Amazon Resource Name (ARN) of the role to assume and variable `aws_cli_assume_role_session_name` to the identifier for the assumed role session.

See [auth.tf](auth.tf) and [assume-role](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html) for more details.
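The hand-off described above can be sketched as follows. This is a minimal Python illustration of the credential flow, not the module's actual implementation (`auth.tf` performs the same steps with the AWS CLI and `jq`); the sample response values and helper name are made up for the example:

```python
import json
import os

def env_from_sts_response(response_json: str) -> dict:
    """Map an `aws sts assume-role` JSON response to the environment
    variables the AWS CLI reads for subsequent calls."""
    creds = json.loads(response_json)["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }

# Hypothetical sample response, matching the documented assume-role output shape
sample = json.dumps({
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "example-secret",
        "SessionToken": "example-token",
        "Expiration": "2019-11-13T00:00:00Z",
    }
})

env = env_from_sts_response(sample)
# With these variables exported, a subsequent `aws eks update-kubeconfig`
# would run under the assumed role's temporary credentials.
os.environ.update(env)
```

In the module itself, this happens inside the `local-exec` provisioner, so the temporary credentials are scoped to that single command rather than the whole Terraform run.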

## Usage

@@ -95,9 +132,12 @@ Instead pin to the release tag (e.g. `?ref=tags/x.y.z`) of one of our [latest re



Module usage examples:
For a complete example, see [examples/complete](examples/complete).

For automated tests of the complete example using [bats](https://github.com/bats-core/bats-core) and [Terratest](https://github.com/gruntwork-io/terratest) (which tests and deploys the example on AWS), see [test](test).

Other examples:

- [examples/complete](examples/complete) - complete example
- [terraform-root-modules/eks](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures
- [terraform-root-modules/eks-backing-services-peering](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks-backing-services-peering) - example of VPC peering between the EKS VPC and backing services VPC

@@ -123,9 +163,9 @@ Module usage examples:
tags = merge(var.tags, map("kubernetes.io/cluster/${module.label.id}", "shared"))
# Unfortunately, most_recent (https://github.com/cloudposse/terraform-aws-eks-workers/blob/34a43c25624a6efb3ba5d2770a601d7cb3c0d391/main.tf#L141)
# variable does not work as expected, if you are not going to use custom ami you should
# variable does not work as expected, if you are not going to use custom AMI you should
# enforce usage of eks_worker_ami_name_filter variable to set the right kubernetes version for EKS workers,
# otherwise will be used the first version of Kubernetes supported by AWS (v1.11) for EKS workers but
# otherwise the first version of Kubernetes supported by AWS (v1.11) for EKS workers will be used, but
# EKS control plane will use the version specified by kubernetes_version variable.
eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*"
}
@@ -202,9 +242,6 @@ Module usage examples:
Module usage with two worker groups:

```hcl
{
...
module "eks_workers" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=master"
namespace = var.namespace
@@ -270,7 +307,46 @@ Module usage with two worker groups:
workers_role_arns = [module.eks_workers.workers_role_arn, module.eks_workers_2.workers_role_arn]
workers_security_group_ids = [module.eks_workers.security_group_id, module.eks_workers_2.security_group_id]
}
```

Module usage on [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html):

```hcl
provider "aws" {
region = "us-east-2"
assume_role {
role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
}
}
module "eks_cluster" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
tags = var.tags
region = "us-east-2"
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
local_exec_interpreter = "/bin/bash"
kubernetes_version = "1.14"
workers_role_arns = [module.eks_workers.workers_role_arn]
workers_security_group_ids = [module.eks_workers.security_group_id]
# Terraform Cloud configurations
kubeconfig_path = "~/.kube/config"
configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml"
install_aws_cli = true
install_kubectl = true
external_packages_install_path = "~/.terraform/bin"
aws_eks_update_kubeconfig_additional_arguments = "--verbose"
aws_cli_assume_role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
aws_cli_assume_role_session_name = "eks_cluster_example_session"
}
```

@@ -298,16 +374,24 @@ Available targets:
| apply_config_map_aws_auth | Whether to generate local files from `kubeconfig` and `config-map-aws-auth` templates and perform `kubectl apply` to apply the ConfigMap to allow worker nodes to join the EKS cluster | bool | `true` | no |
| associate_public_ip_address | Associate a public IP address with an instance in a VPC | bool | `true` | no |
| attributes | Additional attributes (e.g. `1`) | list(string) | `<list>` | no |
| aws_cli_assume_role_arn | IAM Role ARN for AWS CLI to assume before calling `aws eks` to update `kubeconfig` | string | `` | no |
| aws_cli_assume_role_session_name | An identifier for the assumed role session when assuming the IAM Role for AWS CLI before calling `aws eks` to update `kubeconfig` | string | `` | no |
| aws_eks_update_kubeconfig_additional_arguments | Additional arguments for `aws eks update-kubeconfig` command, e.g. `--role-arn xxxxxxxxx`. For more info, see https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html | string | `` | no |
| configmap_auth_file | Path to `configmap_auth_file` | string | `` | no |
| configmap_auth_template_file | Path to `config_auth_template_file` | string | `` | no |
| delimiter | Delimiter to be used between `name`, `namespace`, `stage`, etc. | string | `-` | no |
| enabled | Whether to create the resources. Set to `false` to prevent the module from creating any resources | bool | `true` | no |
| enabled_cluster_log_types | A list of the desired control plane logging to enable. For more information, see https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html. Possible values [`api`, `audit`, `authenticator`, `controllerManager`, `scheduler`] | list(string) | `<list>` | no |
| endpoint_private_access | Indicates whether the Amazon EKS private API server endpoint is enabled. Defaults to `false`, matching the AWS EKS resource default | bool | `false` | no |
| endpoint_public_access | Indicates whether the Amazon EKS public API server endpoint is enabled. Defaults to `true`, matching the AWS EKS resource default | bool | `true` | no |
| external_packages_install_path | Path to install external packages, e.g. AWS CLI and `kubectl`. Used when the module is provisioned on workstations where the external packages are not installed by default, e.g. Terraform Cloud workers | string | `` | no |
| install_aws_cli | Set to `true` to install AWS CLI if the module is provisioned on workstations where AWS CLI is not installed by default, e.g. Terraform Cloud workers | bool | `false` | no |
| install_kubectl | Set to `true` to install `kubectl` if the module is provisioned on workstations where `kubectl` is not installed by default, e.g. Terraform Cloud workers | bool | `false` | no |
| jq_version | Version of `jq` to download to extract temporary credentials after running `aws sts assume-role`, used when AWS CLI needs to assume a role to access the cluster (i.e. when `aws_cli_assume_role_arn` is set) | string | `1.6` | no |
| kubeconfig_path | The path to `kubeconfig` file | string | `~/.kube/config` | no |
| kubectl_version | `kubectl` version to install. If not specified, the latest version will be used | string | `` | no |
| kubernetes_version | Desired Kubernetes master version. If you do not specify a value, the latest available version is used | string | `1.14` | no |
| local_exec_interpreter | shell to use for local exec | string | `/bin/sh` | no |
| local_exec_interpreter | shell to use for local exec | string | `/bin/bash` | no |
| map_additional_aws_accounts | Additional AWS account numbers to add to `config-map-aws-auth` ConfigMap | list(string) | `<list>` | no |
| map_additional_iam_roles | Additional IAM roles to add to `config-map-aws-auth` ConfigMap | object | `<list>` | no |
| map_additional_iam_users | Additional IAM users to add to `config-map-aws-auth` ConfigMap | object | `<list>` | no |
96 changes: 86 additions & 10 deletions README.yaml
@@ -70,14 +70,54 @@ introduction: |-
- EKS cluster of master nodes that can be used together with the [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers) module to create a full-blown cluster
- IAM Role to allow the cluster to access other AWS services
- Security Group which is used by EKS workers to connect to the cluster and kubelets and pods to receive communication from the cluster control plane (see [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers))
- The module generates `kubeconfig` configuration to connect to the cluster using `kubectl`
- The module creates and automatically applies (via `kubectl apply`) an authentication ConfigMap to allow the worker nodes to join the cluster and to add additional users/roles/accounts
### Works with [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html)
To run on Terraform Cloud, set the following variables:
```hcl
install_aws_cli = true
install_kubectl = true
external_packages_install_path = "~/.terraform/bin"
kubeconfig_path = "~/.kube/config"
configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml"
# Optional
aws_eks_update_kubeconfig_additional_arguments = "--verbose"
aws_cli_assume_role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
aws_cli_assume_role_session_name = "eks_cluster_example_session"
```
Terraform Cloud executes `terraform plan/apply` on workers running Ubuntu.
For the module to provision the authentication ConfigMap (to allow the EKS worker nodes to join the EKS cluster and to add additional users/roles/accounts),
AWS CLI and `kubectl` need to be installed on Terraform Cloud workers.
To install the required external packages, set the variables `install_aws_cli` and `install_kubectl` to `true` and specify `external_packages_install_path`, `kubeconfig_path` and `configmap_auth_file`.
See [auth.tf](auth.tf) and [Installing Software in the Run Environment](https://www.terraform.io/docs/cloud/run/install-software.html) for more details.
In a multi-account architecture, we might have a separate identity account where we provision all IAM users, and other accounts (e.g. `prod`, `staging`, `dev`, `audit`, `testing`)
where all other AWS resources are provisioned. The IAM Users from the identity account can assume IAM roles to access the other accounts.
In this case, we provide Terraform Cloud with access keys (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) for an IAM User from the identity account
and allow it to assume an IAM Role into the AWS account where the module gets provisioned.
To support this, the module can assume an IAM role before executing the command `aws eks update-kubeconfig` when applying the auth ConfigMap.
Set variable `aws_cli_assume_role_arn` to the Amazon Resource Name (ARN) of the role to assume and variable `aws_cli_assume_role_session_name` to the identifier for the assumed role session.
See [auth.tf](auth.tf) and [assume-role](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html) for more details.
# How to use this project
usage: |-
Module usage examples:
For a complete example, see [examples/complete](examples/complete).
For automated tests of the complete example using [bats](https://github.com/bats-core/bats-core) and [Terratest](https://github.com/gruntwork-io/terratest) (which tests and deploys the example on AWS), see [test](test).
Other examples:
- [examples/complete](examples/complete) - complete example
- [terraform-root-modules/eks](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures
- [terraform-root-modules/eks-backing-services-peering](https://github.com/cloudposse/terraform-root-modules/tree/master/aws/eks-backing-services-peering) - example of VPC peering between the EKS VPC and backing services VPC
@@ -101,11 +141,11 @@ usage: |-
# for EKS and Kubernetes to discover and manage networking resources
# https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#base-vpc-networking
tags = merge(var.tags, map("kubernetes.io/cluster/${module.label.id}", "shared"))
# Unfortunately, most_recent (https://github.com/cloudposse/terraform-aws-eks-workers/blob/34a43c25624a6efb3ba5d2770a601d7cb3c0d391/main.tf#L141)
# variable does not work as expected, if you are not going to use custom ami you should
# variable does not work as expected, if you are not going to use custom AMI you should
# enforce usage of eks_worker_ami_name_filter variable to set the right kubernetes version for EKS workers,
# otherwise will be used the first version of Kubernetes supported by AWS (v1.11) for EKS workers but
# otherwise the first version of Kubernetes supported by AWS (v1.11) for EKS workers will be used, but
# EKS control plane will use the version specified by kubernetes_version variable.
eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*"
}
@@ -182,9 +222,6 @@ usage: |-
Module usage with two worker groups:
```hcl
{
...
module "eks_workers" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=master"
namespace = var.namespace
@@ -250,7 +287,46 @@ usage: |-
workers_role_arns = [module.eks_workers.workers_role_arn, module.eks_workers_2.workers_role_arn]
workers_security_group_ids = [module.eks_workers.security_group_id, module.eks_workers_2.security_group_id]
}
```
Module usage on [Terraform Cloud](https://www.terraform.io/docs/cloud/index.html):
```hcl
provider "aws" {
region = "us-east-2"
assume_role {
role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
}
}
module "eks_cluster" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=master"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
tags = var.tags
region = "us-east-2"
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
local_exec_interpreter = "/bin/bash"
kubernetes_version = "1.14"
workers_role_arns = [module.eks_workers.workers_role_arn]
workers_security_group_ids = [module.eks_workers.security_group_id]
# Terraform Cloud configurations
kubeconfig_path = "~/.kube/config"
configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml"
install_aws_cli = true
install_kubectl = true
external_packages_install_path = "~/.terraform/bin"
aws_eks_update_kubeconfig_additional_arguments = "--verbose"
aws_cli_assume_role_arn = "arn:aws:iam::xxxxxxxxxxx:role/OrganizationAccountAccessRole"
aws_cli_assume_role_session_name = "eks_cluster_example_session"
}
```
