10 Years of Kubernetes | 🎙️#41

Illustration promoting 'DevOps Accents Episode 41' featuring a vintage microphone alongside a laptop with '10 Years of Kubernetes' text and a heart symbol.

Celebrate 10 years of Kubernetes with Pablo, Kirill and Leo and join us for a discussion of its history, impact and possible future.

  • What was it like to experience the rise of Kubernetes?
  • A big break for cloud and infra technologies 10 years ago;
  • Where will Kubernetes be in another 10 years?
  • Alternatives to Kubernetes;
  • How big is the impact of Kubernetes on the industry?
  • Kubernetes issues;
  • AI hype bubble.

You can listen to episode 41 of DevOps Accents on Spotify, or right now:


Kubernetes, often abbreviated as K8s, celebrated its 10th anniversary recently, marking a significant milestone in the evolution of containerized application management. Launched by Google in June 2014, Kubernetes has revolutionized the industry by providing a robust system for automating the deployment, scaling, and management of containerized applications. Its roots in Google's Borg system have helped it evolve into the powerhouse it is today, transforming how developers and businesses approach infrastructure and operations.


We're talking about 2014. Most people and most companies were using monolithic systems. At most, they were using VMware for different kinds of virtualization, private clouds, and things like that. Some companies were using AWS, but it wasn't something that was on everyone's mind. Docker was there, along with many other technologies. It was a time when a movement began in architecture from monolithic environments to microservices environments. That's when we started hearing about microservices.

We needed an architectural overhaul to accommodate all these components and the idea of containers, where an application or component would perform consistently every time it started. This concept of repeatability was crucial. In 2014, there were few positions for DevOps, marking a global change in the IT mindset. Kubernetes appeared, along with new architectures, job positions, and new ways of working with tools like Jira and dynamic work methods. It wasn't just Kubernetes; many other tools, such as monitoring tools, also emerged during this period. — Pablo Inigo Sanchez


It's funny because, for example, Scrum went from something that existed to something that became like a religion. Everyone started introducing Scrum, converting all their processes to Scrum, and hiring Scrum consultants. I witnessed this firsthand in a couple of companies that suddenly had to redo everything to adopt Scrum.

I think what triggered Kubernetes originally was the release of Docker in 2014, or possibly 2013. Once Docker appeared, we realized that containers were such a powerful abstraction for developers. Google then took the best parts of Borg and built Kubernetes as a new system on top of Docker. However, it took a while for Kubernetes to become something even close to Borg. For example, Kubernetes could not scale well beyond a certain number of nodes initially. In 2014, Borg was already powering all of Google's data centers, which were operating at an enormous scale. Kubernetes was nowhere near capable of handling this level of infrastructure, and even today, managing one of Google's data centers entirely with Kubernetes might not be possible. They still use Borg for that. As Pablo mentioned, everything in Google Cloud is powered by Borg. Every virtual machine in Google Cloud is a container inside Borg, which is pretty impressive. — Kirill Shirinkin


The Birth of Essential Technologies in 2013-2014

The years 2013 and 2014 were a period of remarkable innovation in the tech industry. This era saw the emergence of several foundational technologies that have become integral to modern infrastructure and operations. Docker, a containerization platform, was released in 2013, providing a new way to package and distribute applications. Kubernetes followed in 2014, offering a scalable way to manage these containers.

During this time, tools like Terraform also appeared, and platforms such as OpenShift were rebuilt around containers, each contributing to the shift towards more flexible and scalable infrastructure management. These tools collectively laid the groundwork for the DevOps practices that are now standard in the industry, enabling faster development cycles and more efficient management of applications.


We need to consider a few things because when Kubernetes appeared in 2014, it was on the heels of Docker in 2013. In 2011, we first heard about microservices, and Jenkins also appeared around that time. During this period, infrastructure as a dedicated field didn't really exist. Infrastructure was present, but not as a distinct concept. Everything was related to the software. We had software that we wanted to execute somewhere, but the way we think about it today is completely different. Now, we have teams dedicated solely to infrastructure, teams that deploy and manage it exclusively.

Before around 2011, most people were not working on the cloud but on traditional systems. When Kubernetes appeared, as Kirill mentioned, it didn't explode onto the scene. It was a tool that could be used, but for it to be utilized the way it is today, the cloud had to grow significantly. Microservices needed to become widespread. Pipelines, like those Jenkins facilitated, needed to be used for all applications. People had to transition from monolithic systems to various components.

It took many years for people to start thinking about how to change their architecture. During this time, many worked on this transformation. This is when Kubernetes became viable and evolved into what it is today. — Pablo Inigo Sanchez


The Impact of Kubernetes on the Industry

Kubernetes has had a profound impact on the tech industry. By enabling the efficient management of containerized applications, it has streamlined operations and reduced the complexity of deploying applications at scale. Its open-source nature has fostered a vibrant community, driving innovation and ensuring continuous improvement.

Companies have been able to adopt microservices architectures more easily, breaking down monolithic applications into smaller, more manageable components. This has improved scalability and resilience, allowing for more rapid deployment of new features and faster response times to issues.

Moreover, Kubernetes has influenced the development of other essential tools in the ecosystem. Helm, a package manager for Kubernetes, and Argo CD, a declarative continuous delivery tool, have emerged to simplify the management and deployment of applications. Prometheus for monitoring and Istio for service mesh management are other examples of tools that have been developed to complement Kubernetes, enhancing its capabilities and making it even more powerful.
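As an illustration of that declarative style, below is a minimal sketch of an Argo CD Application manifest; the repository URL, path, and names are hypothetical placeholders, not anything mentioned in the episode. Argo CD then keeps the cluster continuously in sync with whatever manifests live in that Git repository.

```yaml
# Hypothetical Argo CD Application: repo URL, path, and names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/demo/manifests.git  # placeholder Git repository
    targetRevision: main
    path: k8s                                        # directory holding the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc           # the cluster Argo CD runs in
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # remove resources that disappear from Git
      selfHeal: true   # revert manual drift back to the Git state
```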


10 to 15 years ago, there were people working in networking, handling switches and related equipment. There were also guys working with Linux operating systems or Unix variants like HP-UX and AIX, but there was no relation to anything like containers. It was Unix guys, storage guys, and database guys.

These days, when you look for job positions, you find roles like DevOps engineers, infrastructure engineers, and open-source specialists. These people work with tools that generate virtual infrastructure rather than with the physical infrastructure itself. There has been a paradigm shift: the workers who were once physically in front of the machines are no longer there. — Pablo Inigo Sanchez


The Challenges and Future of Kubernetes

Despite its many advantages, Kubernetes is not without its challenges. Its complexity can be daunting for newcomers, and managing a Kubernetes cluster requires a significant amount of expertise. The need for precise configuration and resource management can lead to steep learning curves and potential pitfalls.
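To make that configuration burden concrete, here is a minimal sketch of a Deployment with explicit CPU and memory requests and limits; the name and image are placeholders. Undersizing these values gets pods throttled or evicted, while oversizing them wastes cluster capacity, which is exactly the kind of pitfall newcomers run into.

```yaml
# Minimal sketch: a Deployment with explicit resource requests and limits (placeholder values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27        # placeholder image
          ports:
            - containerPort: 80
          resources:
            requests:              # what the scheduler reserves on a node
              cpu: 100m
              memory: 128Mi
            limits:                # hard caps enforced at runtime
              cpu: 500m
              memory: 256Mi
```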


This is what I mean by Kubernetes essentially forcing you to think about resource requests like CPU and memory, similar to working with virtual machines and VMware. Over time, this aspect of Kubernetes in the cloud environment has become less relevant. Now, we have this trend of moving back from the cloud to on-premises to cut costs. Kubernetes is a great tool for on-premises deployment as it handles many things and is the most popular open-source and standard tool.

In the cloud, what I think is a good place for Kubernetes today is what Azure and Google Cloud are doing. You don't really use Kubernetes directly but use the abstractions that Kubernetes provides. For example, Cloud Run started out as basically just Knative. You can manage a Cloud Run application with a YAML file, which is a Kubernetes resource definition. The same goes for Microsoft Azure Container Apps. You use container apps without thinking about any virtual machines or servers, but you work with the Kubernetes custom resource definition that defines the application, such as which container image to use and which port to expose.

In the last three to four years, this has been the most powerful trait of Kubernetes: allowing the creation of powerful abstractions. The resource management part has become less important in the modern environment unless you really have to manage your own data center. In the cloud, you don't want to manage these two layers: the virtual machines layer with all the CPU, memory, and capacity management, and the Kubernetes layer with all the pod CPU and memory management. It just makes no sense if you're going for a cloud-native and serverless setup. — Kirill Shirinkin
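For context on the "YAML file, which is a Kubernetes resource definition" point above: Cloud Run's declarative format follows the Knative Service schema, so the sketch below (service name and image are placeholders) is roughly what you work with, without ever thinking about nodes or virtual machines.

```yaml
# Minimal sketch of a Knative Service, the schema behind Cloud Run's YAML (placeholder values).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example/hello:latest  # placeholder container image
          ports:
            - containerPort: 8080             # port the application listens on
          env:
            - name: TARGET
              value: "world"
```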


Looking ahead, the role of Kubernetes in the next decade is a topic of much speculation. Some experts believe that Kubernetes will continue to be a cornerstone of cloud-native infrastructure, evolving to meet new demands and integrate with emerging technologies. Others argue that the complexity of Kubernetes may drive the development of simpler, more streamlined solutions that abstract away much of the underlying infrastructure management.

There is also the possibility that advancements in AI and machine learning could automate many aspects of Kubernetes management, making it more accessible to a broader audience. Tools that can intelligently manage resources, optimize performance, and ensure security could further reduce the barriers to entry and make Kubernetes even more integral to modern infrastructure.


The problem these days is that I cannot imagine the next generation of changes happening because of software. I think they're going to happen because of infrastructure. It's the infrastructure that is going to generate a new massive change, unlike the previous era where software drove innovation.

To identify solutions, you need to find the problems and the pain points. Solutions arise from addressing these pains. Currently, in coding, the pain points are not as significant as they were before, so there isn't a pressing need for new solutions in software. However, there are many pain points in infrastructure today. Therefore, I believe the changes will come from infrastructure, not from software as it was 10 years ago. — Pablo Inigo Sanchez


The AI Hype Bubble

In parallel to discussions about Kubernetes, the AI landscape is also undergoing significant scrutiny. Recently, concerns have been raised about the hype surrounding generative AI and large language models (LLMs). Critics argue that the current state of AI, particularly in the realm of LLMs, is reminiscent of the dot-com bubble, with significant investments being made despite uncertain paths to profitability.

Issues such as AI hallucinations, where models generate incorrect or nonsensical information, highlight the limitations of current AI technologies. While AI has undoubtedly brought advancements in various fields, there is a growing awareness that it may not be the panacea it is often touted to be. The challenges of making AI reliable and trustworthy are significant, and there is skepticism about whether the current trajectory of AI development can meet the high expectations set by its proponents.

In conclusion, while Kubernetes and AI both represent significant technological advancements, they also come with their own sets of challenges and uncertainties. The future will likely see continued evolution in both areas, driven by ongoing innovation and the need to address existing limitations. Whether through simplification, integration with new technologies, or better management tools, the landscape of infrastructure and AI is set to remain dynamic and exciting.



Podcast editing: Mila Jones / milajonesproduction@gmail.com
