Hey There | ✉️ #1
Hey 👋
Earlier this month, I was working with AWS Aurora quite a lot. The benefits of Aurora over traditional RDS are clear: it's easier to scale out, you don't need to worry about storage and IOPS throttling, and it promises way better overall performance.
But what can easily be missed is that you pay for the number of I/O operations performed. This means that how much you pay for Aurora is directly tied to which SQL queries you run over your data, and to how exactly that data is structured.
One example of a rather unfortunate situation is PostgreSQL Autovacuum not keeping up with the volume of changes in your tables. If Autovacuum can't keep up, your tables accumulate a lot of bloat. The more bloat a table has, the more I/Os are required to read from it, and more I/Os mean a higher price. Another example is PostgreSQL's use of TOAST tables: that's where oversized values that don't fit comfortably into an 8 KB page end up, with JSONB columns being the most typical case. The more data you have in TOAST tables, the more read operations Aurora needs to fetch your data, and more I/Os mean... you get the point.
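If you want to check whether this applies to your database, a couple of catalog queries go a long way. Here's a rough sketch (my_table is just a placeholder, and the second query only approximates the TOAST size, since the remainder also includes the table's small auxiliary forks):

-- How many dead tuples has Autovacuum not cleaned up yet?
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- How much of my_table lives out-of-line in TOAST?
SELECT pg_size_pretty(pg_relation_size('my_table')) AS main_table,
       pg_size_pretty(pg_total_relation_size('my_table')
                      - pg_relation_size('my_table')
                      - pg_indexes_size('my_table')) AS toast_and_friends;

If the dead tuple counts keep growing, or the TOAST part dwarfs the main table, your I/O bill is probably paying for it.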
While Aurora removes some boring database scaling chores, like adding bigger or faster disks, it adds a whole new set of more complex issues. Misconfigured tables and indexes, as well as badly written queries and unoptimized database parameters, can have a direct impact on your AWS bill. If you just throw your data at Aurora without giving it much thought, you will quite soon realize that your application's architecture and data setup have a direct influence on your cloud costs.
What we've shared
Our series of articles about Kubernetes Capacity Management is complete!
Part I starts with a tour of how infrastructure evolved. We'll witness the jump from bare-metal machines to the virtual world, to software, to containers. This will help us later understand what Kubernetes really is.
Part II dives deep into the concept of a pod, resource configuration (requests and limits), Quality of Service classes, scaling cluster resources, multi-tenant clusters, and pretty much everything you need to know to win the Kubernetes Capacity Management Game.
And, finally, in Part III we summarize what we've learned so far, find out whether Kubernetes makes capacity management any better, talk about GCP GKE Autopilot and AWS EKS Fargate, and bring everything to a close.
Meanwhile, on our YouTube channel, it's Pablo Time!
Pablo Inigo Sanchez talks about how to make money with Open Source projects. It's hard to see how to do that when the code is out in the open for everyone, but here are 6 ways to do exactly that.
And this week he answers the question: should you use Terraform or Pulumi? When do you choose one, and when the other? In less than 8 minutes we learn how to use these Infrastructure as Code tools.
What we've discovered
Diagrams as Code: A Python library to generate infra diagrams from code. Not exactly visually attractive, but highly automatable.
A guide to kubectl scale command: the kubectl scale command lets you quickly scale deployments up and down. It's especially useful for automating the shutdown of environments for the night or the weekend (see the one-liner after this list).
Debugging containerd: With containerd becoming the default container runtime in Kubernetes, it's worth reading this article to learn how to debug ongoing issues.
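Speaking of that shutdown trick, here's roughly what it looks like in practice. This is just an illustration, and the staging namespace is a made-up example:

# Scale every deployment in the staging namespace down to zero for the night
kubectl scale deployment --all --replicas=0 -n staging

# And back up in the morning, with whatever replica count you actually need
kubectl scale deployment --all --replicas=1 -n staging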
A random reminder
Terraform Lightning Course is a free mkdev video course that explains the basics of Terraform to you in 45 minutes.
The 2nd mkdev dispatch will arrive on Friday, September 30th. See you next time!