Infra As Code — Terraform (3) Add Worker Nodes to an AWS EKS Cluster

Tony
2 min readMay 9, 2020


In my last article, “Infra As Code — Create AWS EKS Cluster Using Terraform”, I showed you how to create an EKS control plane stack using Terraform. An EKS control plane without any worker nodes is not very useful, so let’s add some worker nodes to it.

Since most of the heavy lifting was done when creating the EKS control plane, adding worker nodes to the cluster is now quite easy.

Things to Pay Attention To

  • You need to tag your VPC with the following key/value pair:
# testapp-dev-eks is the EKS cluster name
Key: kubernetes.io/cluster/testapp-dev-eks
Value: shared
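In Terraform, this tag can be set directly on the VPC resource. A minimal sketch, assuming your VPC is declared as `aws_vpc.eks_vpc` and the cluster name comes from `var.cluster_name` (both names are illustrative, not from the article’s code):

```hcl
resource "aws_vpc" "eks_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    # Required so EKS can discover and share this VPC with the cluster
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}
```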
  • Remember to restrict your EKS cluster’s public API access to your own IP first, for security reasons. You can set this in aws_eks_cluster -> vpc_config :
resource "aws_eks_cluster" "eks_cluster" {
  ...

  vpc_config {
    security_group_ids     = [aws_security_group.eks_cluster_security_group.id]
    subnet_ids             = module.aws_vpc.public_subnet[*].id
    endpoint_public_access = true
    # public_access_cidrs expects CIDR notation, e.g. "203.0.113.7/32"
    public_access_cidrs    = [var.my_ip]
  }
}
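The `my_ip` value is best kept out of version control and supplied as a variable. A small sketch of how it might be declared (the variable name follows the snippet above; the description text is an assumption):

```hcl
variable "my_ip" {
  description = "Your workstation's public IP in CIDR notation, e.g. 203.0.113.7/32"
  type        = string
}
```

You could then pass it at apply time, e.g. `terraform apply -var 'my_ip=203.0.113.7/32'`.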

The worker node Terraform code:

  • eks-nodes.tf
#
# EKS Worker Nodes Resources
#  * IAM role allowing Kubernetes actions to access other AWS services
#  * EKS Node Group to launch worker nodes
#

resource "aws_iam_role" "eks_worknode" {
  name = "${var.cluster_name}-worknode"

  # Allow EC2 instances (the worker nodes) to assume this role
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "worknode-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_worknode.name
}

resource "aws_iam_role_policy_attachment" "worknode-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_worknode.name
}

resource "aws_iam_role_policy_attachment" "worknode-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_worknode.name
}

resource "aws_eks_node_group" "eks-worknode-group" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "${var.cluster_name}-worknode-group"
  node_role_arn   = aws_iam_role.eks_worknode.arn
  subnet_ids      = aws_subnet.eks_vpc_public_subnet[*].id

  remote_access {
    ec2_ssh_key = var.ssh_key_name
  }

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  # Ensure the IAM permissions are created before (and deleted after)
  # the node group, so EKS can manage the nodes throughout their lifecycle.
  depends_on = [
    aws_iam_role_policy_attachment.worknode-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.worknode-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.worknode-AmazonEC2ContainerRegistryReadOnly,
  ]
}
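After the apply succeeds, it can be handy to surface the node group’s state as a Terraform output to confirm the nodes came up. A small sketch (the output name is illustrative; `status` is an exported attribute of `aws_eks_node_group`):

```hcl
output "worknode_group_status" {
  description = "Status of the EKS managed node group (e.g. CREATING, ACTIVE)"
  value       = aws_eks_node_group.eks-worknode-group.status
}
```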

Add eks-nodes.tf to your aws-eks Terraform module, then run:

  • terraform fmt
  • terraform validate
  • terraform apply

Congratulations! Now you have built an end-to-end EKS cluster using Terraform! In my next article, I will show you how to configure your kubectl client and deploy a small application to your EKS cluster.
