The Current Reality of AI Coding Assistants | 🎙️#68
AI now codes faster than most engineers, but it also confidently makes stuff up. The “AI made me a 10x dev” story sounds great online, yet many AI-driven projects quietly collapse, and plenty of teams hide the fact that they use AI at all. The real value isn’t writing code anymore; it’s steering the AI, setting boundaries, and catching its mistakes. Without strong code review, AI becomes a liability. With it, it’s a massive accelerator. And the gap is widening fast: teams with a structured approach to AI are pulling ahead, while resistant ones are stuck. Today we’re breaking down the real wins, the failures, and what coding actually looks like in 2026. In this episode:
- Our experiences with AI coding assistants;
- How to do it wrong and how to do it right;
- Context engineering: beyond prompt engineering;
- The importance of tooling around the model;
- How to implement this for your team and not break everything.
You can listen to episode 68 of DevOps Accents on Spotify, or right now:
AI-powered coding tools have quietly crossed an important line. They are no longer just autocomplete engines or chat windows on the side. As Leo points out early on, these tools now live inside editors, run code searches, propose refactors, and even work autonomously for long stretches of time. What used to be a quick experiment has turned into something that actively reshapes how development work gets done.
Leo’s own experience shows both sides of that shift. A seemingly harmless attempt to improve a small HTML and JavaScript project ended with broken scrolling, a dead burger menu, and a forced rollback. The lesson wasn’t that AI tools are useless—but that they demand new habits, clearer constraints, and better oversight than many developers are used to.
Kirill: From Autocomplete to “Agentic” Development
Kirill frames the real breakthrough not as better language models, but as better wrappers around them. The early leap came with inline suggestions and simple scripts that cut hours of work down to minutes. But the biggest shift arrived with so-called agentic coding.
Instead of responding to a single prompt, these agents read multiple files, explore repositories, modify code, and iterate until a task is complete. For Kirill, that moment felt like suddenly having “an army of junior developers” working in parallel. Tasks that previously weren’t worth starting—because they would consume a weekend—now get done in under an hour.
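To make that loop concrete, here is a minimal conceptual sketch of what an agentic coding session does under the hood. The `llm` and `tools` objects are hypothetical stand-ins, not any specific product’s API; real agents differ in the details, but the act-observe-iterate shape is the same.

```python
# Conceptual sketch of an agentic coding loop. `llm` and `tools` are
# hypothetical stand-ins, not a real product's API.

def run_agent(task: str, llm, tools, max_steps: int = 25) -> str:
    """Let the model act, observe the result, and iterate until done."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # The model picks the next action: read a file, edit code, run tests...
        action = llm.next_action(history)
        if action.name == "finish":
            return action.summary
        # Execute the action against the repository and feed the outcome
        # back in, so the next step is grounded in what actually happened.
        result = tools.execute(action)
        history.append({"role": "tool", "content": str(result)})
    raise RuntimeError("step limit reached before the task was finished")
```

The step limit matters: without it, an agent that keeps failing the same way will happily burn tokens forever.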
A concrete example: improving mobile navigation on the mkdev site. By giving the agent clear architectural constraints (no new frameworks, stick to existing Rails and Stimulus patterns), Kirill shipped a meaningful UI improvement in about 20 minutes. The key wasn’t magic—it was knowing exactly what to ask for and how the system should behave.
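The prompt itself doesn’t need to be elaborate. Something along these lines captures the idea (a reconstructed illustration, not Kirill’s actual prompt):

```text
Improve the mobile navigation on the site.
Constraints:
- Do not add new frameworks, libraries, or build steps.
- Reuse the existing Rails views and Stimulus controllers.
- Keep the current markup structure.
- Explain every change you make before finishing.
```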
Pablo: You Don’t Need to Know the Language, But You Do Need the Architecture
Pablo highlights a different kind of productivity gain. For him, the biggest win is not needing deep expertise in every language. This year alone, he’s built multiple tools in JavaScript despite barely using it before. AI closes the syntax gap—but it does not replace architectural thinking.
That shift turns developers into architects first. You need a clear vision, precise boundaries, and a sense of direction. Without that, the models will happily wander off, implementing something that technically works but solves the wrong problem.
Pablo also stresses the importance of context size. With modern tools capable of handling massive repositories, developers can now point agents at entire folders instead of single files. That’s a game changer—but it comes with a catch. Context is still finite. Models forget. If you assume the system remembers everything you discussed hours ago, you’ll eventually be surprised by repeated mistakes.
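A rough back-of-the-envelope check makes the limit tangible. The numbers below are illustrative (a common heuristic is about four characters per token, and context windows vary by model), but the point stands: a whole repository can easily exceed what the model can actually hold at once.

```python
from pathlib import Path

# Very rough context-budget check before pointing an agent at a folder.
# Both the 4-chars-per-token heuristic and the window size are illustrative.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 200_000

def estimate_tokens(folder: str, exts=(".rb", ".js", ".html", ".css")) -> int:
    """Approximate the token count of all matching source files in a folder."""
    total_chars = sum(
        len(p.read_text(errors="ignore"))
        for p in Path(folder).rglob("*")
        if p.is_file() and p.suffix in exts
    )
    return total_chars // CHARS_PER_TOKEN

tokens = estimate_tokens("app")  # any source folder you plan to hand over
print(f"~{tokens:,} tokens; window is {CONTEXT_WINDOW_TOKENS:,}")
if tokens > CONTEXT_WINDOW_TOKENS:
    print("Won't fit at once: the agent will summarize, truncate, or forget.")
```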
New Habits: Changelogs, Documentation, and Context Engineering
Both Kirill and Pablo agree that success with coding agents depends on discipline. Kirill describes simple but powerful habits: always ask the agent to maintain a changelog, generate documentation, and explain its own changes. These practices aren’t new—they mirror how good teams already work with humans—but they become critical when your “coworker” is an AI.
This is where the conversation shifts from prompt engineering to context engineering. Instead of crafting clever one-liners, developers need to curate durable context: comments, docs, summaries, and guardrails that guide future changes.
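In practice, much of this durable context lives in a repo-level instructions file that agentic tools read on every run (the filename varies by tool; CLAUDE.md and AGENTS.md are common conventions). A sketch of what such a file might contain, with illustrative rules rather than mkdev’s actual setup:

```markdown
## Architecture
- Rails + Stimulus only. Do not introduce new frameworks or build steps.
- Follow the existing patterns in app/javascript/controllers/.

## Working agreements
- Append a short summary of every change to CHANGELOG.md.
- Update the relevant docs whenever behavior changes.
- Before finishing, explain what you changed and why.
```

Because the file is checked into the repository, the guardrails survive across sessions, which is exactly what a finite, forgetful context window cannot do on its own.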
Pablo adds an important warning: the model may confidently fill gaps with incorrect assumptions. That’s why trust must always be paired with review. AI may generate fewer errors overall, but unchecked iteration can lock the same mistake into every subsequent attempt.
Teams, Trust, and the Reality of Adoption
When it comes to teams, Kirill is blunt: it doesn’t matter whether code was written by a human or an AI. The pull request still has your name on it. The same rules apply—tests, reviews, and accountability.
Trying to ban AI tools outright only pushes usage underground and slows companies down. Kirill argues that organizations should do the opposite: provide licenses, set expectations, and train people properly. Otherwise, a few developers will quietly become ten times more productive than the rest, and the gap will show up in performance reviews anyway.
Pablo notes that, for now, teams rarely discuss prompts the way they discuss code. But that will change. In the future, collaboration may involve debating how to ask the agent for a solution, not just what the solution looks like.
The Bottom Line
AI coding agents don’t eliminate the need for skill—they amplify it. As Leo’s early frustration shows, careless use can break things fast. But as Kirill and Pablo demonstrate, with clear intent, architectural thinking, and solid engineering habits, these tools unlock work that simply wasn’t feasible before.
The real shift isn’t automation. It’s leverage.
Podcast editing: Mila Jones, milajonesproduction@gmail.com