Vibe Coding — a Downfall or a Breakthrough? with Tammuz Dubnov from AutonomyAI |🎙️#61


If you haven’t been living under a rock, you’ve definitely heard a lot about Vibe Coding. Some drag it through the mud, others meme it to death, some treat it like a guilty pleasure, and then there are those who embrace it openly, seeing it as the future. We decided to dig into the phenomenon and start with the perspective of someone who’s building a product that’s taking Vibe Coding to the next level. Our guest for episode 61 of DevOps Accents is Tammuz Dubnov from AutonomyAI. In this episode:
- How the arrival of LLMs changed Machine Learning careers;
- The pressure of being an AI company;
- Vibe Coding: what to know and what to keep in mind;
- Code quality in LLM training data;
- Preserving context and creating a knowledge base;
- Coding for future developers.
You can listen to episode 61 of DevOps Accents on Spotify, or right now:
With over a decade of experience in machine learning and startups, Tammuz Dubnov, founder and CTO of AutonomyAI, offers a grounded perspective on how large language models (LLMs) are reshaping software development. In a thoughtful discussion with Leo, co-founder of mkdev, they unpack the accelerating demands on AI companies, the shift from coding to engineering with LLMs, and the importance of context in autonomous development.
The LLM Era: Reshaping Career Paths in Machine Learning
Tammuz described how the arrival of LLMs has transformed the role of technical leaders like himself. In earlier startup phases, he would be deeply involved in the code. But as teams grow, staying close to the evolving codebase becomes harder. The emergence of LLMs provided a critical bridge—tools that could observe and interpret changes, giving technical leads real-time insight without requiring them to review every pull request. This new relationship with code prompted him to start developing autonomous agents, even before the term “agent” became mainstream.
For many in the machine learning space, the shift brought both opportunity and existential pressure. Startups now race to implement capabilities enabled by LLMs while competing with rivals who may be just months behind. “It’s not a marathon,” Tammuz said. “It’s a really long sprint.”
Code Quality, Context, and the Risks of Speed
As teams embrace auto-generated code, there's a growing tension between velocity and stability. Leo raised the concern of “vibe coding”—giving a model a natural language prompt and trusting it to write working code. While effective in the short term, this approach risks introducing silent bugs, bloated code, and poor long-term maintainability.
Tammuz acknowledged these risks but emphasized that high-quality results are still possible if the right practices are followed. His team coined the term "Vibe Engineering" to capture a more rigorous approach: one where the developer collaborates with the LLM using rich prompts, test-driven development, and architectural alignment before generating any code. It’s not about skipping the engineering process but enhancing it through better tooling and structured human-AI interactions.
He pointed to data showing increased bug rates and code churn when developers rely heavily on LLMs without critical oversight—highlighting the need for strong feedback loops and thoughtful prompt design.
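As a minimal sketch of what such a test-driven loop can look like in practice (our illustration, not a workflow prescribed in the episode; `discounts.py` and `apply_discount` are hypothetical names), the human writes the tests first, and the model's generated implementation is only accepted once they pass:

```python
# test_discounts.py -- written by the human BEFORE any code is generated.
# The tests pin down the intended behaviour; the LLM is then prompted to
# write discounts.py (a hypothetical module), and its output is only
# merged once this suite is green.
import pytest

from discounts import apply_discount  # the function the LLM is asked to write


def test_regular_discount():
    # 10% off a 100.00 order leaves 90.00.
    assert apply_discount(100.00, rate=0.10) == pytest.approx(90.00)


def test_full_discount_floors_at_zero():
    # A 100% discount yields exactly zero, never a negative total.
    assert apply_discount(49.99, rate=1.0) == 0.0


def test_rejects_invalid_rate():
    # Rates outside [0, 1] are a caller error, not silent behaviour.
    with pytest.raises(ValueError):
        apply_discount(10.00, rate=1.2)
```

The prompt to the model then includes these tests alongside the architectural constraints, so the passing suite, not the developer's gut feeling, becomes the feedback loop.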
The distribution of high-quality to low-quality code in open source is pretty poor already. Most of it is not great. Some of it is amazing. But it's not true just for open source — we're seeing it. Our agents work in codebases for huge organizations, and a part of the struggle that we face is: okay, well, some of those organizations have areas of the code that are really poorly written, and some areas that are really great.
And it goes back to, kind of, the developers in the organization. Some are amazing; some are, let's say, less polished in their work — and we need to deal with all of it. At least our agents do.
And the beauty with LLMs that learn on everything is that, if you point them correctly, they can give you that 1% golden code output. So now, it becomes more of a… I’d say, a knowledge gap on how to use LLMs to give you high-quality code. — Tammuz Dubnov
Agents Need Context, Not Just Prompts
One of the most powerful ideas from the conversation was how AutonomyAI's agentic context engine (ACE) preserves and applies organizational context. Rather than relying solely on chat history or manually fed documents, ACE builds a living knowledge base by continuously analyzing the codebase, tracking changes, and adapting to standards and best practices over time. This lets the agents produce not just functional but consistent and idiomatic code, even across large, fragmented teams.
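ACE itself is proprietary, but the general pattern of a continuously refreshed codebase context can be sketched in a few lines. The snippet below is our illustration only, with hypothetical names and a deliberately naive keyword ranking in place of real retrieval; it is not how AutonomyAI implements ACE:

```python
# Minimal sketch of a "living" code-context store: index the repo, then
# retrieve the most relevant files for each task prompt. Purely
# illustrative -- not AutonomyAI's ACE implementation.
from pathlib import Path


def build_index(repo_root: str) -> dict[str, str]:
    """Map every source file to its text; re-run on each change so the
    knowledge base stays current instead of drifting like static docs."""
    return {
        str(path): path.read_text(encoding="utf-8", errors="ignore")
        for path in Path(repo_root).rglob("*.py")
    }


def relevant_context(index: dict[str, str], task: str, top_n: int = 3) -> str:
    """Pick the files that overlap most with the task description.
    Real systems would use embeddings; word counting keeps the idea visible."""
    words = set(task.lower().split())
    ranked = sorted(
        index.items(),
        key=lambda item: sum(item[1].lower().count(w) for w in words),
        reverse=True,
    )
    return "\n\n".join(text for _, text in ranked[:top_n])


# An agent's prompt is then assembled as task description + retrieved
# context, so generated code follows the conventions already in the repo.
```

The point of the pattern is that context lives next to the code and is rebuilt from it, rather than sitting in a document someone forgets to update.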
Leo challenged the notion that more open-ended prompts are better, and Tammuz agreed: the agent’s performance improves dramatically when it has detailed product-level context. For example, a vague prompt might yield runnable code, but a narrative-rich prompt—focused on goals, stakeholders, and edge cases—provides the kind of clarity needed for sustainable software development.
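To make that contrast concrete, here are two versions of the same request (an invented example, not one from the episode). The first is the kind of open-ended prompt Leo questioned; the second carries the product-level context Tammuz argues for:

```python
# Two prompts for the same task -- invented example for illustration.

vague_prompt = "Write a signup form component."

rich_prompt = """\
Goal: a signup form for the checkout flow of our B2B dashboard.
Stakeholders: sales needs the company-size field for lead scoring;
legal requires explicit consent before we store the email address.
Constraints: reuse the existing TextField and ConsentCheckbox
components; validation errors must be announced to screen readers.
Edge cases: duplicate email -> show "account exists" with a login link;
network failure -> keep the user's input and offer a retry.
"""

# The rich prompt states the why and the boundaries, which is what turns
# merely runnable code into maintainable code.
```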
There was one company that used us — still uses us — and the team lead reached out the next day, ecstatic that a new developer who had just started yesterday already did the work, picked the right components because the agent picked the right components, and the code quality output was amazing.
So, what this means is, effectively, you can skip onboarding — because the agent has all that kind of tribal knowledge: which components exist, how the codebase behaves. All that knowledge exists within the ACE and the agent.
And therefore, in theory — and for us, in practice — we see that the human developer doesn't have to do that onboarding. They can just jump straight into the task.
And so that's one part. It's really interesting that the whole onboarding experience for developers is changing — I think, drastically changing. — Tammuz Dubnov
Building for the Future Developer
As development workflows change, so does onboarding. Tammuz noted that one company using ACE onboarded a new developer who was productive on day one, thanks to the agent's ability to guide decisions and align with existing standards. Documentation is being reimagined too. Instead of static files, AutonomyAI offers tools like Visualizer, which can explain code in real time with diagrams and context-sensitive analysis.
This hints at a broader shift. Developers will spend less time writing boilerplate and more time shaping outcomes and ensuring governance. The human role won’t disappear—it will move upstream, to setting architecture, security policies, and ethical guardrails. “We’re becoming much more like proper engineers,” Tammuz said, as abstraction levels continue to rise.
Leo closed the conversation with a reminder: even if LLMs accelerate development, responsibility remains a human concern. From security to licensing to architecture, some decisions must be consciously made by people. That’s not a limitation—it’s a safeguard.
I think it's just a very natural evolution. Like, we already went through this phase a few times — just with coding over the decades, right? You're no longer… you're not thinking binary in any way. And we're jumping to languages that are more and more abstract. And I think this is just the next level of abstraction, right?
Now we're at the point where we're writing code, and as we're writing code, we're thinking about how we implement it. But behind the scenes, the interesting thought process is: well, architecturally and in terms of business logic, what do I mean for the code to do?
So, LLMs just enable us to make that next jump of abstraction. So now, the developer — or the future developer — will no longer need to think about the nuances of the actual code and can think abstractly: okay, well, behavior-wise and logic-wise and architecture-wise, I want it to be this way. And now you do it — but I'm still gonna be the engineer behind it that drives how I want it to act.
And so, I think we're becoming much more like proper engineers, moving to that level of abstraction. — Tammuz Dubnov
Show Notes
As you can see, adopting AI into your business isn't always straightforward. Processes shift, impact isn't always obvious, and change can feel messy. But one thing's for sure: you can't afford to ignore it. We help you make sense of it all. Start by talking to our Data & AI experts; they'll help you figure out where to begin and how to scale smart.
Also, don’t miss our AI Strategy Guide: How to Scale AI Across Your Business, a must-read for anyone making AI decisions.
- Follow Tammuz on LinkedIn.
- Make sure to check out AutonomyAI's tool; public access will be available very soon.
Podcast editing: Mila Jones, milajonesproduction@gmail.com