DeepSeek and the $5 Million AI Revolution: Are We All Just Stupid? | ✉️ #62
Hey! 👋
Over the past few days, we've all read about DeepSeek and how, in just two weeks, China has shaken up the AI market. The narrative is everywhere: how Goliath has lost another battle, how a "decent" AI model can be built with just $5 million, and how the established giants might not be as invincible as they seemed.
You'll hear claims that DeepSeek used OpenAI models to cut training costs, that they're using different chips than they publicly admit, or that they spent far more than the declared $5 million. But regardless of the speculation, one thing is painfully clear: we’ve been fools.
The AI Illusion: Are We Betting on the Wrong Things?
The entire industry seems to believe that progress in AI requires burning trillions of dollars—yet we’re still stuck with models that hallucinate daily, struggle to truly understand human intent, and merely mimic a fraction of the human brain.
Why, then, are companies shaping their entire IT policies around how many billions they'll pour into AI next year? Why are we treating primitive AI models as if they are final, untouchable solutions when we are still in the infancy of AI?
And then, there’s NVIDIA—losing 17% of its market value simply because another company built an AI model. Why? Because we all bought into the belief that you need to spend every last dollar on Earth to train a competitive AI. That you need to consume endless energy to keep up. That you need to sacrifice everything to stay ahead. And for what?
DeepSeek's Lesson: Stop Being Stupid
DeepSeek is here to prove us wrong. To show us that the AI arms race isn’t about throwing insane amounts of money at a problem but about being smart.
Will we learn anything from this? Probably not. Because, after all, we’re human.
What We've Shared
DevOps Accents #54: Humanizing Customer Support with Caolan Melvin from VoxMail. Is customer support fundamentally broken in our day and age? What should you focus on when choosing a customer support solution? Is there something new we can try to improve this process? For episode 54 of DevOps Accents, Leo and Pablo talk to Caolan Melvin from VoxMail, a company that transcribes and analyses audio messages from your customers and provides new ways to support them.
How to Use AI in Collaborative and Creative Tools? A segment from the previous episode of DevOps Accents in video form:
And on the website we have two new articles:
Getting Traffic to EKS: Using ALB Ingress Controller with Amazon EKS on Fargate
Serverless Kubernetes with AWS EKS and Fargate: You don't need servers to run a Kubernetes Cluster
What We've Discovered
Designing chat architecture for reliable message ordering at scale: Many how-to articles might convince you that building a chat system is a simple task. In a real, distributed and complex world, it's anything but simple. Check out this article to learn about the many different scenarios that need to be considered when building a scalable chat system (and see the short ordering sketch after this list).
What Karpenter v1.0.0 means for Kubernetes autoscaling: You might have missed Karpenter reaching v1. We would almost always recommend Karpenter over the good old Cluster Autoscaler. Now is a perfect time to get acquainted with this tool!
Troubleshooting Amazon EKS networking issues at scale in an Enterprise scenario: We're not sure why the emphasis is on "enterprise" - EBS throttling while connecting from a Pod IP can happen to an organization of any size. Lots of good advanced tips in here.
Automating Centralized NAT Gateways in AWS VPCs and Region with Terraform: We like the idea of cutting NAT Gateway costs by restructuring the network routing a bit. The provided Terraform examples are a nice bonus, in case you want to explore this idea further.
The Terralith: Monolithic Architecture of Terraform & Infrastructure as Code. While we agree that monolithic Terraform setups are not ideal in many cases, we'd argue that a small company with a handful of resources would benefit from a Terralith - as long as the groundwork is laid to later split it up into multiple states.
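To make "reliable message ordering" from the first link a bit more concrete, here is a minimal, hypothetical Python sketch of one common building block (our own illustration, not code from the article): the server assigns a per-conversation sequence number, and each client buffers out-of-order deliveries until the gap is filled. Real systems layer persistence, idempotency and cross-shard concerns on top of this.

```python
# Hypothetical illustration: per-conversation sequence numbers for message ordering.
import itertools
from dataclasses import dataclass


@dataclass(frozen=True)
class Message:
    seq: int    # per-conversation, monotonically increasing sequence number
    body: str


class Conversation:
    """Server side: assigns the next sequence number when a message is stored."""

    def __init__(self) -> None:
        self._counter = itertools.count(1)
        self._log: list[Message] = []

    def append(self, body: str) -> Message:
        msg = Message(seq=next(self._counter), body=body)
        self._log.append(msg)
        return msg


class ClientView:
    """Client side: buffers out-of-order deliveries and releases them in seq order."""

    def __init__(self) -> None:
        self._expected = 1
        self._buffer: dict[int, Message] = {}

    def receive(self, msg: Message) -> list[Message]:
        self._buffer[msg.seq] = msg
        ready: list[Message] = []
        while self._expected in self._buffer:
            ready.append(self._buffer.pop(self._expected))
            self._expected += 1
        return ready


if __name__ == "__main__":
    chat = Conversation()
    m1, m2, m3 = (chat.append(text) for text in ("hi", "how are you?", "bye"))

    view = ClientView()
    assert view.receive(m2) == []          # m2 arrives early and is buffered
    assert view.receive(m1) == [m1, m2]    # the gap is filled, both are released
    assert view.receive(m3) == [m3]
    print("messages delivered in order")
```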
The 63rd mkdev dispatch will arrive on Friday, February 14th. See you next time!