
Kubernetes container orchestration

Kubernetes is an open source system for managing and orchestrating containers such as those created with Docker. Containers give developers a consistent and portable way to build microservices. They are widely used by enterprise developers because they provide an abstraction that lets teams build modules of code that can be deployed consistently across many platforms and environments. After the rise of containerization, enterprises adopted orchestration systems like Kubernetes to manage, orchestrate, and automate their containers.

 

Kubernetes clusters

The most important concept in Kubernetes is the cluster, which is a way of grouping a set of containers into a single deployment unit. Think of a cluster as a single microservice. Kubernetes features include shared networking, security, and capacity management for container environments. Kubernetes enables developers to define how applications should run and how they interact with other applications. Clusters in Kubernetes have five key components:
 

Kubernetes component No. 1: Pods

A pod is a group of containers on the same node that are created, scheduled, and deployed together.
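
To make this concrete, here is a minimal sketch of a pod manifest. The pod name, label, and container images are illustrative assumptions, not details from this article:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-pod            # hypothetical pod name
      labels:
        app: web               # labels are described in the next section
    spec:
      containers:
        - name: app            # main application container
          image: nginx:1.25    # example image
        - name: log-agent      # a second container in the same pod
          image: busybox:1.36
          command: ["sh", "-c", "tail -f /dev/null"]

Submitting this manifest (for example with kubectl apply -f) creates both containers together, and Kubernetes always schedules them onto the same node.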

Kubernetes component No. 2: Labels

Labels are key-value tags assigned to cluster objects such as pods, services, and replication controllers so those objects can be identified and selected.
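
As a sketch, labels live under the metadata section of an object's manifest; the keys and values below are arbitrary examples:

    metadata:
      labels:
        app: web              # which application the object belongs to
        tier: frontend        # role within the application
        environment: staging  # deployment environment

Other objects and commands can then select by label; for example, kubectl get pods -l app=web lists only the pods carrying that label.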

Kubernetes component No. 3: Services

Services give a stable name to a group of pods. As a result, they can act as load balancers that direct traffic to the running containers.
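
A minimal service manifest might look like the sketch below, assuming the target pods carry the illustrative app: web label from the earlier examples:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-service       # hypothetical service name
    spec:
      selector:
        app: web              # traffic goes to pods with this label
      ports:
        - port: 80            # port the service exposes
          targetPort: 8080    # port the pods' containers listen on

Inside the cluster, requests to web-service on port 80 are then load-balanced across all running pods that match the selector.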

Kubernetes component No. 4: Replication controllers

Replication controllers are designed specifically to ensure that a specified number of pod replicas are scheduled and running at any given moment.
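
As an illustrative sketch (the name, label, and image are assumptions), a replication controller that keeps three replicas of a pod running could be defined like this; newer Kubernetes releases usually express the same idea with Deployments and ReplicaSets:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc            # hypothetical controller name
    spec:
      replicas: 3             # desired number of pod copies
      selector:
        app: web              # pods managed by this controller
      template:               # pod template used to create missing replicas
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: app
              image: nginx:1.25

If a pod matching the selector disappears, the controller creates a replacement from the template so the replica count stays at three.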

Kubernetes component No. 5: Kubernetes API

The Kubernetes API is a resource-based (RESTful) programmatic interface provided through HTTP.
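
For example, every resource type is exposed under a versioned URL path. The requests below are illustrative, reusing the hypothetical pod name from the earlier sketch:

    GET  /api/v1/namespaces/default/pods          (list pods in the "default" namespace)
    GET  /api/v1/namespaces/default/pods/web-pod  (read a single pod by name)
    POST /api/v1/namespaces/default/services      (create a service from a submitted manifest)

Tools such as kubectl are simply clients of this same HTTP API.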
 

Kubernetes, containers, and the rise of microservices

Containers like Docker can be assembled to form distributed applications. This approach has benefits over the old monolithic style, which requires developers to compile all of their code into one hard-to-manage, hard-to-deploy chunk of executable code. IT operations managers also like containers because they can deploy these small units without having to know all the details of what's inside.

Once people started breaking things down into containers and distributing them around cloud infrastructure, which is very dynamic, they realized that it took a lot of work to manage them. How many containers would you need? How would they talk to each other? Which containers are running right now, and where? Additionally, what we consider to be a full "microservice" is also likely to be made up of more than one container: for example, one container for core logic, one for persisting data, and others for handling security and logging.

Just having containers wasn't enough. This is where Kubernetes came in, and it became one of the most important technologies driving the microservices movement. Google had been using containers for quite a while; they are how the company was able to run its massively scaled systems on highly distributed commodity hardware. Google built an internal system called "Borg" to manage its containers. Recognizing that the rest of the industry was now facing the problems it had already solved, Google drew on its experience with Borg to create the open source Kubernetes project.

When Google released Kubernetes as an open source container management system in 2014, it found itself competing against other container orchestration systems, namely Docker Swarm and Apache Mesos.

Get the latest on Kubernetes in this post, K8s: 8 questions about Kubernetes.