by Sergey Pronin

How Kubernetes succeeded

Jun 03, 2024

Kubernetes started as one of many tools for container orchestration. Ten years later it’s the leading platform for cloud-native applications.


June 6 marks the official 10th anniversary of the launch of Kubernetes. Kubernetes was built as a container management and orchestration platform that would make it easier to manage all of the software containers that make up microservices applications. Based on Borg, Google’s internal container management service that handled thousands of instances, Kubernetes was eventually released as open source so that others could use it to run their own containers.

It’s worth thinking back to 2014, when Kubernetes was just one of many approaches to managing containers being launched. Bigger open-source projects like Apache Mesos already existed, and Docker, the company that kick-started the container boom, offered a strong option in Docker Swarm. Companies were also looking at cloud-specific services such as AWS ECS for managing their containers.

So why did Kubernetes win out? Was it always likely that we would end up with Kubernetes as the platform for cloud-native applications? Or were there hurdles in the way?

From stateless to stateful workloads

First off, it is important to say that Kubernetes started small. While it was modeled on a tool Google used to manage huge numbers of workloads and processes, it was not initially ready to take on that role in other organizations. It was great for managing stateless application containers and orchestrating how those containers were created, used, and then torn down when they were no longer needed. But at the start it was focused solely on application components.

This did not fit with all the other elements that make up an application’s infrastructure. While your application might run in the cloud and carry out processing, it also creates data that has to be stored over time. It has to interact with data sources that already exist. And it has to operate securely, so that information does not leak and attackers cannot access those components. These elements were not supported in the initial launch of Kubernetes. In fact, it took a further two years for StatefulSets support and the launch of Kubernetes Operators to arrive before those workloads could even be considered.

StatefulSets provided support for stable, unique network identifiers and for stable, persistent storage. They also made it possible to carry out ordered, graceful deployment and scaling, and ordered, automated rolling updates. Alongside this, the launch of Kubernetes Operators allowed developers to hide the complexity of composing Kubernetes primitives for specific applications behind a higher-level interface. Without these two additions, running stateful workloads in Kubernetes required some serious hacking of Kubernetes core to make things work.
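To make the difference concrete, here is a minimal sketch, using the official Kubernetes Python client, of the kind of StatefulSet this made possible. The names, namespace, image, and storage size are illustrative assumptions rather than anything from the article.

# A minimal sketch (not from the article) of creating a StatefulSet with the
# official Kubernetes Python client. Names, image, namespace, and storage size
# are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
apps = client.AppsV1Api()

statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "mysql"},
    "spec": {
        "serviceName": "mysql",  # headless Service that gives each pod a stable DNS name
        "replicas": 3,
        "selector": {"matchLabels": {"app": "mysql"}},
        "template": {
            "metadata": {"labels": {"app": "mysql"}},
            "spec": {
                "containers": [{
                    "name": "mysql",
                    "image": "mysql:8.0",
                    # Illustration only; a real deployment would pull credentials from a Secret.
                    "env": [{"name": "MYSQL_ROOT_PASSWORD", "value": "example"}],
                    "volumeMounts": [{"name": "data", "mountPath": "/var/lib/mysql"}],
                }],
            },
        },
        # Each replica gets its own PersistentVolumeClaim that survives pod restarts.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}

apps.create_namespaced_stateful_set(namespace="default", body=statefulset)

Pods created from this spec get stable, ordered names (mysql-0, mysql-1, mysql-2), stable DNS entries through the headless Service, and a persistent volume that follows each replica, and they are deployed, scaled, and updated in order. A plain Deployment would give each replica a random name and no storage of its own, which is exactly what made early stateful workloads so awkward.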

Alongside this, there was a community push to make stateful workloads work effectively on Kubernetes. While the conversations around running databases like MySQL and PostgreSQL started on the likes of Reddit and Stack Overflow, more formal collaboration was needed to turn these nice ideas into real, sustainable projects. Organizations like the Data on Kubernetes Community came together to provide the right framework for that collaboration, making it easier for companies and individuals to contribute.

This work was essential, as there was a lot of pushback around running databases on Kubernetes to start off with. For those familiar with the twelve-factor app approach to designing applications, backing services should be treated as attached resources. At the time, this was problematic for developers who wanted to run their applications in containers but then had to manage interactions with databases or storage systems hosted in entirely different environments. The ideal approach, and what we now have today, is that databases run in clusters in exactly the same way that application components do, as this makes it easier to control and manage infrastructure across the whole service from one point.

The role of open source

One of the major reasons Kubernetes succeeded was that it was open source. Kubernetes was donated to the Cloud Native Computing Foundation so that it could be supported by a wider organization rather than by one controlling vendor. This helped spread the load of contributions and increased acceptance. For anyone deciding where to place a bet on a platform for cloud computing, picking one that was not tied to a specific cloud provider, and that could run containers on any of them, looked like the smarter choice.

This required a community that would be willing to support Kubernetes as a project and that would be invested in its success. To build that community, Kubernetes had to be open source, as Kubernetes co-creator Brendan Burns explained on the Dev Interrupted podcast. Without being open source, there would have been much less incentive for developers to contribute to Kubernetes or choose it as their container management tool.

Over time, Kubernetes has gone from being one of many tools for container orchestration to becoming the platform for cloud-native applications. It enables developers to build and run their applications on any cloud platform or in their own data centers, and then move those workloads to whatever platform they want to use in the future. As part of this, Kubernetes has evolved from focusing on application components to supporting everything an application needs in the cloud.

Kubernetes is not perfect. For example, it still needs more work on autoscaling and on managing resources like data and storage so that companies can control their costs more effectively. But that work is underway, with support from multiple companies and communities, so everyone can benefit in the future.

Sergey Pronin is group product manager at Percona.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.