Getting Traffic to EKS: Using ALB Ingress Controller with Amazon EKS on Fargate

Illustration of a person wearing an orange safety vest and scarf, holding wands for directing traffic. They are blowing a whistle and surrounded by light clouds or smoke. Lines suggest movement or direction in the background.

What we are going to see today is fascinating. We are going to connect Kubernetes with AWS, automatically creating Application Load Balancers as soon as we create an ingress object in our EKS Fargate cluster.

But what is that? The idea is that our EKS cluster runs inside a VPC with six subnets; in our case, three are private and three are public. When we create a service, it normally lives in the private subnets, and the newly created endpoint uses one of them. So, how can we reach the service from outside?

As you can see here, what we are going to do is create an ALB connected to a target group. This target group connects the service and its port inside our Kubernetes cluster with the outside world through the Application Load Balancer.

But how do we create all those components? How does this magic work? If you remember from one of our videos about EKS Fargate, we created our cluster with a simple command:

eksctl create cluster
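
For reference, for a Fargate cluster like the one used here the full command looks roughly like this (the cluster name pathfinder comes from the commands below; the region is an assumption):

# minimal sketch - the cluster name matches the later commands, the region is an assumption
eksctl create cluster --name pathfinder --region eu-west-1 --fargate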

As soon as we do that, we can create a simple deployment object as we did last week to start our application:

kubectl apply -f deployment.yml
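
That deployment.yml is not reproduced here, but a minimal sketch could look like this (the application name, image, and port are assumptions; use whatever your pod actually runs):

# deployment.yml - illustrative sketch; name, image, and port are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.19
          ports:
            - containerPort: 80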

Now, we create our service to point to the port that the application in the pod is opening:

kubectl apply -f service.yml
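
A matching service.yml could be as simple as this (again a sketch; the selector and ports are assumptions that must match the deployment above, and a ClusterIP service is enough because the ingress will use target-type ip):

# service.yml - illustrative sketch; selector and ports must match the deployment
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80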

After those steps, we need to start working. First, we associate an IAM OIDC provider with the cluster:

eksctl utils associate-iam-oidc-provider --cluster pathfinder --approve

Now, we download from the Kubernetes GitHub repository the IAM policy document that the controller needs, and create the policy from it:

wget -O alb-ingress-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/master/docs/examples/iam-policy.json
aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://alb-ingress-iam-policy.json

The next step is to create the ClusterRole and the ClusterRoleBinding with the YAML file that you have on the screen by executing:

kubectl apply -f role.yaml
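
That role.yaml is only shown on screen, but it is essentially the upstream rbac-role example: a ClusterRole with the permissions the controller needs and a ClusterRoleBinding that ties it to a service account called alb-ingress-controller in kube-system. A sketch:

# role.yaml - sketch based on the upstream rbac-role example
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: alb-ingress-controller
rules:
  - apiGroups: ["", "extensions"]
    resources: ["configmaps", "endpoints", "events", "ingresses", "ingresses/status", "services"]
    verbs: ["create", "get", "list", "update", "watch", "patch"]
  - apiGroups: ["", "extensions"]
    resources: ["nodes", "pods", "secrets", "services", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system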

We will use aws sts get-caller-identity to get our AWS account ID, and after that, create our IAM service account with:

eksctl create iamserviceaccount
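
That command on its own is incomplete; the full call ties together the account ID we just obtained, the policy we created, and the service account name from role.yaml. Roughly:

# minimal sketch - <ACCOUNT_ID> comes from aws sts get-caller-identity
eksctl create iamserviceaccount \
  --cluster pathfinder \
  --namespace kube-system \
  --name alb-ingress-controller \
  --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/ALBIngressControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve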

The last step creates the magic controller, the alb-ingress-controller. As you can see in the YAML code (a sketch of it follows the pod listing below), this is a deployment. This deployment will create a pod called alb-ingress-controller in the kube-system namespace:

[root@ip-192-168-47-226 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
alb-ingress-controller-55646dccdc-q55f7   1/1     Running   1          45m
coredns-654c94cbd5-fnlsg                  1/1     Running   0          78m
coredns-654c94cbd5-fw26k                  1/1     Running   0          78m
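
The manifest behind that deployment is not reproduced here, but it is roughly the upstream alb-ingress-controller example; on Fargate the cluster name, VPC ID, and region are passed explicitly because the pod cannot discover them from the EC2 metadata service. A sketch:

# alb-ingress-controller.yaml - sketch based on the upstream example
# the image tag is an example; <VPC_ID> and <REGION> are placeholders for your own values
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alb-ingress-controller
  namespace: kube-system
  labels:
    app.kubernetes.io/name: alb-ingress-controller
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      serviceAccountName: alb-ingress-controller
      containers:
        - name: alb-ingress-controller
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.8
          args:
            - --ingress-class=alb
            - --cluster-name=pathfinder
            - --aws-vpc-id=<VPC_ID>
            - --aws-region=<REGION>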

And this is the door to our logs:

kubectl logs pods/alb-ingress-controller-55646dccdc-q55f7 -n kube-system -f

Here, we can spot any errors in our configuration.

And the magic trick is here. We need to create our ingress object in our Kubernetes cluster. As you can see on the screen, this is what we are going to execute, and the trick is in the annotations. We have kubernetes.io/ingress.class: 'alb' to indicate that we are going to use an Application Load Balancer and not a Classic Load Balancer. We have alb.ingress.kubernetes.io/target-type set to 'ip' because we are using Fargate; if you are not using Fargate, you can choose 'instance'.

There is another annotation called alb.ingress.kubernetes.io/scheme where we can choose between 'internet-facing' and 'internal', and that determines which subnets will be used.

Other annotations are healthcheck-port, listen-ports, and healthcheck-path, which will be used to create our Target Group.

After that comes the spec section, which Kubernetes needs to create the ingress object and which connects it with the AWS ALB.
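
Putting those annotations together, the ingress manifest looks roughly like this (a sketch; the service name and port are assumptions that must match your service, and depending on your Kubernetes version the apiVersion may be extensions/v1beta1):

# ingress.yaml - sketch; serviceName and servicePort must match your service
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-app
              servicePort: 80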

And now, if we go to our AWS account, in EC2, and choose Load Balancers, we will see that there is a new Load Balancer with a listener pointing to a target group. If we go to view/edit rules and click on the target group, we can see how the target is connected to the endpoint that was created by the Kubernetes service and how it is listening. Wow, amazing. Now we can even go back to the load balancer and use the DNS name it provides to connect to our application.

I love magic.

Today, we have seen how EKS is magically connected with an AWS ALB. Next week, we will move on from Amazon and go to see GCP. Something magical there too: Cloud Run.

