We Are Happy | ✉️ #73

Hey! 👋
A few days ago, we took a big step that we're truly excited to share with all of you: we’ve just launched our very first open source Terraform provider! And not just any provider — this one lets you manage OpenAI resources as part of your infrastructure as code. You can check it out on the official Terraform Registry and the GitHub repo.
We’ve been working hard on this over the past few weeks — and we’ve poured a lot of love into it — because we genuinely believe in what it enables: more structure, better traceability, and less chaos when it comes to configuring organizations, projects, permissions, or API keys in OpenAI. If you’ve ever had to manage all of that manually (hello, ClickOps), you’ll know exactly what we mean.
This provider makes it possible to automate the entire administrative side of OpenAI — something that until now wasn’t really covered by modern infrastructure as code practices. We're talking about the ability to create projects, invite members, generate and revoke API keys, set usage limits, and more… all from your .tf files. No more scattered clicks in the OpenAI UI, hoping you didn't forget a setting or make a mistake.
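As a rough sketch of what that workflow could look like in practice (resource and attribute names below are illustrative, not necessarily the provider's exact schema, and the registry source address is assumed; check the Terraform Registry docs for the real ones):

```hcl
# Hypothetical sketch: resource names, attributes, and the provider
# source address are assumptions for illustration; consult the
# provider's registry documentation for the actual schema.
terraform {
  required_providers {
    openai = {
      source = "mkdev-me/openai" # assumed registry address
    }
  }
}

# Create a project instead of clicking through the OpenAI UI
resource "openai_project" "analytics" {
  name = "analytics"
}

# Issue a project-scoped API key, tracked in state and revocable
# by simply removing this block and applying again
resource "openai_project_api_key" "ci" {
  project_id = openai_project.analytics.id
  name       = "ci-pipeline"
}
```

With something like this, `terraform plan` shows you exactly what would change before it happens, and keys or projects that are no longer in code get cleaned up instead of forgotten.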
Why does this matter? Because when you're managing multiple environments, multiple organizations, or simply care about control and reproducibility, every click matters. And what isn’t codified... gets forgotten. Lost. Broken. With this new provider, we want to help you avoid that.
And this is just the beginning. Our goal is to keep expanding it, increase coverage, and — most importantly — listen to the community to see what you would like to see next. We've already received some great feedback and we’re looking forward to more people trying it out in real environments and sharing their experiences.
If you like the project, find it useful, or just want to support us — swing by the repo, leave a star, and feel free to open an issue or a PR if you want to contribute. And if you have colleagues starting to integrate OpenAI into their workflows, spread the word! The more, the merrier.
This is a small but meaningful milestone for us, and we truly hope you enjoy using it as much as we enjoyed building it.
What We've Shared
- You can read more about our Open Source Terraform Provider for OpenAI in the article!
What We've Discovered
The pros and cons of Lambdalith: A much-needed breakdown of why a single Lambda serving all of your API endpoints is, or isn't, a good idea. Despite the outlined cons, in most cases a Lambdalith is the simplest way to start and grow. You can also address the observability drawback by introducing tracing.
Introduction to observing machine learning workloads on Amazon EKS: Before it gets to the EKS-specific parts, the article explains the differences between monitoring ML and non-ML workloads. In general, the text is useful for any Kubernetes cluster, regardless of the cloud flavor.
Anomaly Detection in Time Series Using Statistical Analysis: Booking.com engineer Ivan Shubin shares how they use statistical analysis for anomaly detection and alerting. The article walks through the simpler approaches the team started with and how they evolved based on new findings, all with great visualisations that make everything easy to understand!
Achieving relentless Kafka reliability at scale with the Streaming Platform: Datadog is one of the companies you should learn from when it comes to proper high-load, large-scale engineering, as well as how their stellar observability offering's pricing can bankrupt you.
The Lost Fourth Pillar of Observability - Config Data Monitoring: Configuration changes are among the most frequent causes of incidents. We should talk about them more and integrate them into our monitoring, just as you would integrate deployment events into your application's metrics. Kudos to Yevgeny Pats for bringing this topic up!
The 74th mkdev dispatch will arrive on Friday, July 25th. See you next time!