This is a project containing Terraform IaC to get a scalable Kubernetes cluster up and running in AWS with ArgoCD deployed to it.
- Terraform CLI
- An AWS account with Admin Permissions
- Your AWS credentials configured via environment variables (see the example below) or the `~/.aws/credentials` file
- Kubectl CLI
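If you go the environment-variable route, the standard AWS variables look like this (the values are placeholders; use whichever region you deploy to):

```sh
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_DEFAULT_REGION="us-east-1"
```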
Right now, we only support a K3S deployment model using RDS as a backend store. Eventually we'll expand to EKS.
- Navigate to the `k3s` directory:

  ```sh
  cd k3s
  ```
- Create an S3 bucket in the AWS console to persist Terraform state. This gives you a highly reliable way to maintain your state file.
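
  If you'd rather use the CLI than the console, something like this creates the bucket (the name and region are placeholders, and bucket names must be globally unique):

  ```sh
  aws s3 mb s3://your-terraform-state-bucket --region us-east-1
  ```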
- Update the `bucket` entry in both the `backends/s3.tfvars` and `main.tf` files with the name of your bucket from the previous step.
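
  For reference, the relevant line in `backends/s3.tfvars` would end up looking something like this (the bucket name is a placeholder):

  ```hcl
  bucket = "your-terraform-state-bucket"
  ```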
- (Optional) If you want to maintain multiple Terraform states, you can create/select separate workspaces. This will create separate files within your S3 bucket, so you can maintain multiple environments at once:

  ```sh
  # Create a new workspace
  terraform workspace new staging

  # Or select and switch to an existing workspace
  terraform workspace select staging
  ```
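
  With the S3 backend's default `workspace_key_prefix`, each non-default workspace stores its state under an `env:/` prefix in the bucket, so you can list them with something like:

  ```sh
  aws s3 ls s3://your-terraform-state-bucket/env:/ --recursive
  ```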
- Update `example.tfvars`:
  - `db_username`: The master username for the RDS cluster.
  - `db_password`: The master password for the RDS cluster. (Ideally you wouldn't store this in a file at all and would instead enter it when you apply your Terraform; see the note under the apply step below. It's kept here for simplicity's sake.)
  - `public_ssh_key`: Set this to the public SSH key you're going to use to SSH to boxes. It is usually in `~/.ssh/id_rsa.pub` on your system.
  - `keypair_name`: The name of the keypair to store your public SSH key.
  - `key_s3_bucket_name`: The S3 bucket to store the K3S kubeconfig file. (NOTE: This needs to be GLOBALLY UNIQUE across AWS.)
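
  Put together, a filled-in `example.tfvars` might look roughly like this (all values are placeholders):

  ```hcl
  db_username        = "k3sadmin"
  db_password        = "change-me"
  public_ssh_key     = "ssh-rsa AAAA... you@your-machine"
  keypair_name       = "k3s-keypair"
  key_s3_bucket_name = "your-globally-unique-kubeconfig-bucket"
  ```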
- Initialize Terraform with the S3 backend:

  ```sh
  terraform init -backend-config=backends/s3.tfvars
  ```
- Apply the Terraform (you'll need to type 'yes' when prompted):

  ```sh
  terraform apply -var-file=example.tfvars
  ```
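
  If you'd rather keep `db_password` out of `example.tfvars` (as suggested above), remove it from the file and pass it at apply time instead, either inline or via an environment variable:

  ```sh
  # Pass the password inline...
  terraform apply -var-file=example.tfvars -var="db_password=YOUR_PASSWORD"

  # ...or export it so Terraform picks it up automatically
  export TF_VAR_db_password="YOUR_PASSWORD"
  terraform apply -var-file=example.tfvars
  ```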
- Wait until Terraform successfully deploys your cluster, give it a few more minutes, then run the following to get your kubeconfig file from S3:

  ```sh
  aws s3 cp s3://YOUR_BUCKET_NAME/k3s.yaml ~/.kube/config
  ```
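
  If you already have a `~/.kube/config` you'd rather not overwrite, you can download the file somewhere else and point `KUBECONFIG` at it instead:

  ```sh
  aws s3 cp s3://YOUR_BUCKET_NAME/k3s.yaml ~/.kube/k3s.yaml
  export KUBECONFIG=~/.kube/k3s.yaml
  ```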
- You should now be able to interact with your cluster:

  ```sh
  kubectl get nodes
  ```

  You should see 6 healthy nodes running (unless you've specified different agent/server counts).
- Lastly, let's check to make sure your ArgoCD pods are running:

  ```sh
  kubectl get deployments -n kube-system | grep argocd
  ```

  You should see all ArgoCD deployments as `1/1`.
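Once the deployments are up, you can likely reach the ArgoCD UI by port-forwarding its server service; the service and secret names below are upstream ArgoCD defaults (installed into `kube-system` here), not something this repo guarantees:

```sh
# Forward the ArgoCD API/UI to https://localhost:8080
kubectl port-forward svc/argocd-server -n kube-system 8080:443

# In another terminal: print the initial admin password (username is "admin")
kubectl -n kube-system get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```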
To destroy a cluster, you first need to go to the EC2 service in your AWS console and click on Load Balancers. There will be an ELB that the Kubernetes cloud provider created but that isn't managed by Terraform, which you need to clean up. You also need to delete the Security Group that ELB is using.
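If you prefer to do this cleanup from the CLI, something like the following should work, assuming the cloud provider created a Classic ELB (use `aws elbv2` for an ALB/NLB). The name and security group ID are placeholders, so double-check which load balancer belongs to this cluster before deleting anything:

```sh
# Find the ELB the cloud provider created
aws elb describe-load-balancers --query "LoadBalancerDescriptions[].LoadBalancerName"

# Delete it, then delete the Security Group it was using
aws elb delete-load-balancer --load-balancer-name YOUR_ELB_NAME
aws ec2 delete-security-group --group-id sg-0123456789abcdef0
```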
After you've cleaned the ELB up, run the following and type "yes" when prompted:

```sh
terraform destroy -var-file=example.tfvars
```
If you're looking to really get into GitOps via ArgoCD, check out the demo-app for adding a ton of cool tools to this cluster.