Infra As Code — Terraform (2) Create AWS EKS Cluster

Tony
4 min read · May 2, 2020


In this article I will show you how to create an EKS control plane stack using Terraform. The resources we will be creating in this article:

  • One AWS VPC with two public subnets and two private subnets
  • Two security groups
  • One EKS cluster IAM role and one EKS service-linked role
  • One EKS cluster with public endpoint access limited to your IP only

You can find more details about the VPC part in my previous article: “Infra As Code — Create AWS VPC and Subnets Using Terraform and Best Practices”

Note: It is recommended to use a separate VPC for each EKS cluster for better security and network isolation.
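The “eks.tf” file later in this article references a VPC resource named eks_vpc and public subnets named eks_vpc_public_subnet from that article. If you do not have them yet, the following is a minimal sketch of what they look like (the CIDRs and tags are examples only; see the previous article for the full version, including the private subnets, internet gateway, and route tables):

# Minimal sketch of the VPC resources assumed by eks.tf (example values only)

resource "aws_vpc" "eks_vpc" {
  cidr_block           = "10.0.0.0/16" # example CIDR
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "${var.cluster_name}-vpc"
  }
}

data "aws_availability_zones" "available" {}

resource "aws_subnet" "eks_vpc_public_subnet" {
  count = 2

  vpc_id                  = aws_vpc.eks_vpc.id
  cidr_block              = cidrsubnet(aws_vpc.eks_vpc.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name                                        = "${var.cluster_name}-public-subnet-${count.index}"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}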

Before You Start

I strongly recommend that you first create an AWS EKS cluster manually from the management console, to get a good understanding of what is required for creating an EKS cluster. Terraform is just a tool that helps you create it in a programmable way; without that basic understanding, it doesn't help much. Pay close attention to the required settings in the creation wizard: the cluster IAM role, the VPC and subnets, the security groups, and the endpoint access configuration.

EKS Cluster Primary Components

An EKS cluster consists of two primary components:

  • The EKS control plane
  • The EKS worker nodes that are registered with the control plane

The EKS control plane consists of control plane nodes that run the k8s software. It runs in an account managed by AWS, and the k8s API is exposed via the EKS endpoint associated with your cluster.

In this article, we will create the EKS control plane only, and I will show you how to register worker nodes to your EKS control plane in my next article.

Terraform Code Structure

The Terraform code structure looks like the following:

  1. “providers.tf” file:
# Provider configuration

provider "aws" {
region = var.region
version = "~> 2.8"
profile = "testapp"
}
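Note that pinning the provider version inside the provider block, as above, is the Terraform 0.12 style. If you are on Terraform 0.13 or later, the version constraint is normally declared in a required_providers block instead; a minimal sketch:

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.8"
    }
  }
}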

2. “variables.tf” file:

#
# Variables Configuration
#

variable "region" {
default = "us-east-1"
}

variable "cluster-name" {
default = "testapp-eks-cluster"
type = string
}

variable "eks-version" {
default = "1.16"
}
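The my_ip variable above intentionally has no default; supply it (and override the other defaults if needed) through a terraform.tfvars file, for example (placeholder values):

# terraform.tfvars (placeholder values, adjust for your environment)
region       = "us-east-1"
cluster_name = "testapp-eks-cluster"
eks_version  = "1.16"
my_ip        = "203.0.113.25/32" # your workstation public IP in CIDR notation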

3. “eks.tf” file (for code reusability, we reuse the aws-vpc resources defined in the previous article):

#
# EKS Cluster Resources
# * IAM Role to allow EKS service to manage other AWS services
# * EC2 Security Group to allow networking traffic with EKS cluster
# * EKS Cluster
#

resource "aws_iam_role" "eks_cluster_iam_role" {
name = var.cluster_name

assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "cluster-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_iam_role.name
}

resource "aws_iam_role_policy_attachment" "cluster-AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.eks_cluster_iam_role.name
}

resource "aws_security_group" "eks_cluster_sg" {
name = "${var.cluster_name}-security-group"
description = "Cluster communication with worker nodes"
vpc_id = aws_vpc.eks_vpc.id

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = {
Name = "${var.cluster_name}-security-group"
}
}

resource "aws_security_group_rule" "cluster-ingress-workstation-https" {
cidr_blocks = [local.workstation-external-cidr]
description = "Allow workstation to communicate with the cluster API Server"
from_port = 443
protocol = "tcp"
security_group_id = aws_security_group.eks_cluster_sg.id
to_port = 443
type = "ingress"
}

resource "aws_eks_cluster" "eks_cluster" {
name = var.cluster_name
role_arn = aws_iam_role.eks_cluster_iam_role.arn

vpc_config {
security_group_ids = [aws_security_group.eks_cluster_sg.id]
subnet_ids = aws_subnet.eks_vpc_public_subnet[*].id
endpoint_public_access = true
endpoint_private_access = true
public_access_cidrs = [var.my_ip]
}

depends_on = [
aws_iam_role_policy_attachment.cluster-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.cluster-AmazonEKSServicePolicy,
]
}
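Optionally, you can also add an “outputs.tf” file so that Terraform prints the values you will need later, for example when registering worker nodes in the next article or when configuring kubectl; a minimal sketch:

#
# Outputs
#

output "cluster_endpoint" {
  value = aws_eks_cluster.eks_cluster.endpoint
}

output "cluster_certificate_authority" {
  value = aws_eks_cluster.eks_cluster.certificate_authority[0].data
}

output "cluster_security_group_id" {
  value = aws_security_group.eks_cluster_sg.id
}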

After you run the terraform apply command, you should have the following AWS resources in your account: the VPC with its public and private subnets, the security groups, and the EKS cluster, with the networking between them configured as described above.

You should be able to access your newly created EKS cluster by typing the API server endpoint in your browser, and you should see something like this:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

Congratulations! You have created your EKS control plane in AWS successfully!
