2025 Year In Review | 🎙️#67
In this episode of DevOps Accents, Leo, Pablo, and Kirill look back at the predictions for 2025 they made at the end of 2024 and compare them to what actually happened. In this episode:
- The fear and secrecy of using AI tools;
- AI agents in 2025;
- The progress in vibe coding;
- The AI bubble;
- Quantum computing and security concerns;
- What else can we expect from 2026?
You can listen to episode 67 of DevOps Accents on Spotify, or right now:
At the end of 2024, many people in tech felt confident about what 2025 would bring. The narrative sounded clear: artificial intelligence would move from experimentation to real business value, operations would become more autonomous, and “agentic AI” would finally take over repetitive work. A year later, the reality looks both more interesting and more chaotic.
What stands out most is not that predictions were completely wrong, but that they underestimated where the real impact would show up.
AI Everywhere, but Not Where We Expected
Leo frames one of the big surprises clearly: 2025 was supposed to be about serious enterprise reinvention with AI, yet much of the visible impact landed in content generation and everyday workflows instead. Data centers are now competing for memory, not because of breakthrough enterprise automation, but because generative models are powering image, video, and code generation at massive scale.
Pablo builds on this by pointing to creative tools. Image and video generation have improved so quickly that long-established software markets feel genuinely threatened. Instead of paying for heavyweight licenses, people increasingly prefer to write a prompt and get usable results in minutes. This shift was not just faster than expected; it also happened in places many did not anticipate: design, marketing, and simple content production.
Kirill adds a more technical angle: the real leap was not “AI agents” as a concept, but the models themselves becoming strong enough to execute complex tasks directly. The promised future of multi-node, autonomous agents quietly faded as large models learned to use tools, APIs, and workflows on their own.
From “Helpers” to Multipliers in Daily Work
All three agree on one thing: AI has moved from being a nice assistant to a force multiplier. Kirill describes how coding changed over the year—from generating small scripts to producing entire applications, infrastructure pieces, and even polished app-store assets. For front-end work, code quality often matters less than speed and usability; for back-end systems, understanding architecture is still essential, but AI dramatically accelerates execution.
Pablo highlights the economic consequences. If a project that once took six months and ten people can now be done in weeks by a small team, traditional pricing models start to break. Customers may not fully realize what changed yet, but the pressure is building. The value is no longer in hours worked, but in knowing what to build and how to guide the tools.
Leo brings a complementary perspective from creative work. AI can drastically reduce costs—such as replacing expensive photo shoots—but it still requires human judgment. Even when results are “good enough,” teams often rely on traditional tools to refine and correct what AI produces. The workflow changes, but responsibility does not disappear.
The Disappearing Fear—and the New One
Another unexpected shift is cultural. Pablo notes that many people already use AI daily but are still hesitant to admit it at work. The fear of replacement lingers, even as productivity gains become obvious. Kirill is blunt here: the real risk lies in not using these tools at all. The gap between people who embrace AI and those who avoid it is now clearly visible.
At the same time, Leo observes that older concerns—like constant talk about hallucinations—have mostly faded. Models are more reliable, often connected to live data, and better at correcting themselves. Instead of worrying about whether AI is “safe to use,” people worry about how fast it is reshaping expectations.
Is This a Bubble, or Just a Shift?
The question of an AI bubble comes up repeatedly. Pablo points out how deeply interconnected the ecosystem has become, with companies like AWS, Google, and Cloudflare investing billions into infrastructure and models. This makes a total collapse unlikely, but it also raises the stakes if something goes wrong.
Leo suggests that if there is a bubble, it is more likely around AI startups that simply wrap existing models without adding real value. When end users can go directly to platforms like OpenAI or Google’s models, middle layers become harder to justify.
Kirill adds that unused capacity today does not mean wasted investment. As more people switch to AI-driven coding, design, and content workflows, demand is likely to catch up quickly.
Beyond AI: A Quiet Signal from Quantum Computing
One topic breaks the AI-only focus: quantum computing. Kirill highlights how services like Signal are already preparing for a future where today’s encryption may no longer be safe. The idea of “harvest now, decrypt later” attacks—where encrypted data is stolen and stored until quantum computers can break it—forces organizations to think far ahead, even if practical quantum machines are still rare.
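The defense against "harvest now, decrypt later" is the hybrid approach that protocols like Signal's PQXDH take: mix a classical shared secret with a post-quantum one, so the session key survives as long as either input does. The sketch below is a toy illustration of that mixing step only, using a stdlib HKDF-style construction; the function name and the hard-coded "secrets" are hypothetical, and this is not Signal's actual protocol.

```python
# Toy sketch of hybrid key derivation, the general idea behind
# post-quantum-ready protocols such as Signal's PQXDH.
# Illustrative only -- this is NOT a real key exchange.
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       info: bytes, length: int = 32) -> bytes:
    """Mix a classical and a post-quantum shared secret into one key.

    If either input stays secret -- e.g. the classical one falls to a
    future quantum computer but the post-quantum one holds -- the
    derived key remains safe. That is the point of going hybrid.
    """
    ikm = classical_secret + pq_secret
    # HKDF-style extract then expand (RFC 5869 pattern, single block).
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
    return okm[:length]

# Placeholder values standing in for outputs of X25519 and a
# post-quantum KEM such as ML-KEM (Kyber).
classical = b"\x11" * 32
post_quantum = b"\x22" * 32
key = hybrid_session_key(classical, post_quantum, b"demo session")
print(key.hex())
```

Data encrypted today with only the classical half can be recorded and broken later, which is exactly why services are adding the second input now rather than waiting for practical quantum machines.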
Leo adds that recent breakthroughs hint at quantum systems moving closer to real-world environments, not just labs. It is not a prediction of immediate disruption, but a reminder that some long-term risks are already becoming short-term design considerations.
No Grand Predictions, Just Adaptation
After revisiting a year of missed forecasts, there is little appetite for bold predictions about 2026. The shared conclusion is simpler: the pace of change makes precise forecasts unreliable. What matters is adaptability.
People with clear ideas and the ability to use these tools effectively can now build things that once required large teams and large budgets. Whether in software, design, or entirely new domains, the barrier to creation has dropped sharply.
Instead of trying to guess the next headline, the more realistic strategy is to learn continuously, experiment openly, and accept that the biggest shifts may come from directions nobody is watching yet.
Podcast editing: Mila Jones, milajonesproduction@gmail.com