Kubernetes Augments Container Management Features
Google-backed Kubernetes 1.1, released Monday, provides automated "horizontal scaling" of Linux containers, where new pods are created as needed.
Kubernetes, in its 1.1 version released Monday, is accelerating its ability to serve as an open source container management platform, thanks to the sizeable developer community that is now contributing code to it, said Brendan Burns, senior staff software engineer at Google and cofounder of the Kubernetes open source project.
Burns, who has seldom ventured into the public eye, was the kickoff speaker at the first KubeCon conference for Kubernetes users and developers in San Francisco on Nov. 9.
"He's the cofounder and a primary engineering lead of the project, like Linus Torvalds," said Alex Polvi, CEO of CoreOS, supplier of the container host Linux system of the same name. Polvi said Kubernetes has become one of the largest ongoing open source projects, with a rapidly expanding code base and a large number of contributors.
With containers increasingly being considered as a production deployment mechanism, a system like Kubernetes garners unusual interest because it comes from Google, a large-scale user of containers for its internal systems. It's a set of software building blocks, or primitives, that can deploy, maintain, and scale containers on a cluster. The primitives are loosely coupled, so they can be extended with specialized code or otherwise adjusted for deploying different kinds of applications.
[Want to learn more about how Kubernetes development is managed? See Google Donates Kubernetes to New Foundation.]
Burns discussed what Kubernetes has accomplished recently and where it's headed in his opening keynote talk at KubeCon, which runs through Wednesday, Nov. 11. The open source project has attracted support from developers at IBM, Intel, Red Hat, VMware, Mesosphere, CoreOS, and Windows Azure, among others.
Kubernetes, roughly translated, is Greek for helmsman. It was developed at Google for launching a cluster of servers designed to run hundreds or thousands of containers. Google contributed its core system as open source code in June 2014, and offers its own version as a service, Google Container Engine. Google is known for its heavy internal use of Linux containers, and spokesmen say it launches 2 billion containers each week.
Burns said Kubernetes, as a project, wants to enable users to quickly launch hundreds of servers to host thousands of containers. He gave a demonstration in which a cluster was directed to scale up to a hundred servers able to handle 1 million requests per second.
In the demo, he bumped the number of requests by 100,000 every few seconds until the system was handling a million a second. Then he applied an update to each server in the system while it continued handling requests, and Kubernetes carried that out as well. "What's really exciting is being able to do a rolling update to servers, while doing a million requests per second," he said.
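The rolling update Burns demonstrated replaces replicas a few at a time, so the rest of the fleet keeps answering requests throughout. As a rough illustration of that idea only (the function and names below are hypothetical, not Kubernetes' actual code or API), a sketch in Go might look like this:

package main

import "fmt"

// rollingUpdate sketches the idea behind the demo: replicas are updated
// in small batches so the remaining ones keep serving traffic throughout.
// Illustrative only; not the Kubernetes implementation.
func rollingUpdate(replicas []string, newVersion string, batch int) {
    for i := 0; i < len(replicas); i += batch {
        end := i + batch
        if end > len(replicas) {
            end = len(replicas)
        }
        for _, r := range replicas[i:end] {
            fmt.Printf("updating %s to %s while the others serve requests\n", r, newVersion)
        }
    }
}

func main() {
    rollingUpdate([]string{"pod-a", "pod-b", "pod-c", "pod-d"}, "v1.1", 2)
}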
Kubernetes can orchestrate other tasks alongside a high-traffic request workload, such as load-testing a new Kubernetes feature while depending on the existing version to carry a regular container workload. "We want to be able to treat the software like it's something that's alive," he noted.
Burns complimented "the team in Warsaw," the volunteer developers who built the code that provided horizontal pod auto-scaling in the 1.1 release. A pod is a set of containers colocated on the same server, with containers in the pod using some of the same resources on the host. As the server CPU utilization (or some other resource use) reaches a certain level, an additional server is commissioned with another set of containers that duplicate the pod to spread out the load, and the process is repeated as necessary.
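To make that scaling rule concrete: a horizontal autoscaler compares observed resource utilization with a target and grows the replica count proportionally. The following Go sketch is illustrative only; the function name and parameters are assumptions, not the code the Warsaw team shipped in 1.1.

package main

import (
    "fmt"
    "math"
)

// desiredReplicas sketches the kind of calculation a horizontal autoscaler
// makes: scale the current replica count by the ratio of observed CPU
// utilization to the target utilization, rounding up.
// Illustrative sketch, not the project's implementation.
func desiredReplicas(current int, observedCPUPercent, targetCPUPercent float64) int {
    if current == 0 || targetCPUPercent == 0 {
        return current
    }
    scaled := float64(current) * (observedCPUPercent / targetCPUPercent)
    return int(math.Ceil(scaled))
}

func main() {
    // Three pods averaging 90% CPU against a 50% target would grow to six.
    fmt.Println(desiredReplicas(3, 90, 50))
}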
Burns said the project hopes to work on vertical as well as horizontal scaling in the future. Vertical scaling would increase the resources available to an existing pod instead of spreading the load over additional pods.
Release 1.1 also includes the ability to schedule and run batch jobs.
It includes a feature called resource overcommit, where a commissioned container can be guaranteed a certain amount of resources, can be given a "burstable" level of commitment (where for short periods it may use more than its steady-state resources), or can be given a "best effort" level of commitment (where existing resources are divided among existing workloads).
The user decides which level of commitment he wants assigned to his container workload and Kubernetes 1.1 enacts it.
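A minimal sketch of how those three commitment levels might be distinguished, keyed off whether a workload declares its resources up front. The field and type names below are simplified stand-ins for illustration, not the Kubernetes API:

package main

import "fmt"

// workload is a simplified stand-in for a container's declared resources:
// a request it asks for up front and a limit it may burst to (0 = unset).
type workload struct {
    requestMilliCPU int
    limitMilliCPU   int
}

// commitmentLevel maps declared resources onto the three levels Burns
// described. Illustrative only; not the Kubernetes classification code.
func commitmentLevel(w workload) string {
    switch {
    case w.requestMilliCPU > 0 && w.requestMilliCPU == w.limitMilliCPU:
        return "guaranteed" // reserved amount equals the ceiling
    case w.requestMilliCPU > 0:
        return "burstable" // reserved baseline, may briefly exceed it
    default:
        return "best effort" // no reservation; shares whatever is left
    }
}

func main() {
    fmt.Println(commitmentLevel(workload{requestMilliCPU: 500, limitMilliCPU: 500}))
    fmt.Println(commitmentLevel(workload{requestMilliCPU: 200, limitMilliCPU: 1000}))
    fmt.Println(commitmentLevel(workload{}))
}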
In the future, the project will be working on Ubernetes, a feature that can manage applications as a set of containers distributed over multiple clusters. The project also wants to simplify configuration steps in the upcoming Kubernetes 1.2 release, Burns said.