GKE Autopilot: Forget About Your Kubernetes Nodes

An illustration of a person napping in the cockpit of a vintage biplane, wearing an orange scarf and goggles, with fluffy clouds in the background.

Imagine for a moment that you want to deploy a single application in a Kubernetes environment that doesn't exist yet. At this point you start to wonder whether Kubernetes is a good idea for just one deployment. You are going to create at least 3 masters and a minimum of 3 workers, plus network setup, maintenance, node pools, troubleshooting, etc.

You could go to GKE or EKS, but then you would need to deploy a complete infrastructure for a simple application, and you would have to talk with your infrastructure team and your SRE team about it. At this point, most people already in the cloud would start to look at Cloud Run or ECS, but Cloud Run only works with HTTP applications and ECS is AWS-specific, so neither of those is the right solution for our problem.

But Google has a solution for this problem, and it is called GKE Autopilot. In standard GKE, Google takes care only of your master nodes, which run inside a private VPC; in GKE Autopilot, Google takes care of your workers too, so all you are left with is an endpoint to a Kubernetes API where you deploy your applications.


As you will see, the steps are few and simple. Go to GKE in your Google Cloud console and create a cluster, choosing Autopilot mode. Set up your public or private cluster with the VPC you want to use, the IP ranges for services and pods, and the Kubernetes version. And that's all, nothing else. Now we just need to connect to the cluster with the gcloud CLI, and we can start applying our templates.
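For those who prefer the CLI, the same steps can be sketched with gcloud. The cluster name, region, and network names below are placeholders, not values from the article:

```shell
# Create an Autopilot cluster; Google manages masters and workers alike.
# (Cluster name, region, and VPC/subnet names are illustrative placeholders.)
gcloud container clusters create-auto my-autopilot-cluster \
  --region=europe-west1 \
  --network=my-vpc \
  --subnetwork=my-subnet

# Fetch credentials so kubectl talks to the new cluster.
gcloud container clusters get-credentials my-autopilot-cluster \
  --region=europe-west1

# Verify the connection. There are no nodes for you to manage,
# but you can still see the ones Autopilot provisions for your pods.
kubectl get nodes
```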

And now, when you execute kubectl apply, you are not only deploying your application: Workload Identity, autoscaling, and the security setup are handled for you. If you run kubectl get pods -w, you will see that it takes roughly a minute for the pods to come up, since Autopilot provisions capacity on demand.
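As an illustration, here is what such a deployment might look like. The names are sample values (the image is Google's public hello-app sample, not something from the article), and note that resource requests matter more than usual on Autopilot, since that is what you are billed on:

```shell
# A minimal sample Deployment; names and image are illustrative only.
cat <<'EOF' > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        resources:
          requests:          # Autopilot bills on pod resource requests,
            cpu: "250m"      # so it pays to set them explicitly.
            memory: "512Mi"
EOF

kubectl apply -f deployment.yaml

# Watch the pods come up; expect around a minute while Autopilot
# adds capacity behind the scenes.
kubectl get pods -w
```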

So bye-bye control plane and workers. It's true that GKE Autopilot has some limitations: private clusters have no external IP, which prevents direct inbound connections; there is no SSH access to anything, because you don't control the nodes; external monitoring agents that need node access won't work; etc. But in the end, this solution is valid in many cases, and the good thing is that billing is based on your pods' resource requests, not on your worker nodes anymore. You have one more option for your next Kubernetes cluster.


Here's the same article in video form for your convenience: