How Kubernetes exemplifies a truly API-driven application

Figure 1: In Kubernetes, a pod contains the logic that is represented by an associated service

Listing 3: The manifest file to create a pod that has the container, pinger
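
A minimal pod manifest along the lines of Listing 3 might look like the following sketch. The labels match those referenced in Listing 4; the container image and port are illustrative assumptions, not values taken from the original listing.

apiVersion: v1
kind: Pod
metadata:
  name: pinger
  labels:
    app: pinger
    purpose: demo
spec:
  containers:
    - name: pinger
      image: example.io/pinger:latest   # placeholder image, not from the original listing
      ports:
        - containerPort: 8080           # assumed port, matched by the Service sketch below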

Listing 4: The manifest file that defines a service that is bound to pods that have the labels, app: pinger and purpose: demo
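
A service manifest matching that description might look like the sketch below. The selector labels come from the caption; the service name and port numbers are assumptions.

apiVersion: v1
kind: Service
metadata:
  name: pinger-service                  # hypothetical name
spec:
  selector:
    app: pinger
    purpose: demo
  ports:
    - protocol: TCP
      port: 80                          # assumed port the Service exposes
      targetPort: 8080                  # assumed containerPort on the pinger pods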

Figure 2: The basic architecture of a Kubernetes Cluster

Components in the controller node and worker nodes (each entry lists the component, its location, and its purpose):

API Server (Controller Node): The API Server is the primary interface into a Kubernetes cluster, both for administrators and for components within the cluster. It exposes a set of REST operations for creating, updating, and deleting Kubernetes resources in the cluster, and it publishes a set of endpoints that allow components, services, and administrators to "watch" cluster activity asynchronously.

etcd (Controller Node): etcd is the internal database technology Kubernetes uses to store information about all resources and components that are operational within the cluster.

Scheduler (Controller Node): The Scheduler identifies the node that will host a pod within the cluster. The Scheduler does NOT create the containers associated with a pod; it notifies the API Server that a host node has been identified, and the kubelet on that worker node does the work of creating the pod's container(s).

Controller Manager (Controller Node): The Controller Manager is a high-level component that runs the constituent controllers operating in a Kubernetes cluster. Examples of controllers subordinate to the Controller Manager are the replication controller, the endpoints controller (which binds services to pods), the namespace controller, and the serviceaccounts controller.

kubelet (Worker Node): kubelet interacts with the API Server in the controller node to create and maintain the state of the pods on the node where it is installed. Every node in a Kubernetes cluster runs an instance of kubelet.

kube-proxy (Worker Node): kube-proxy handles Kubernetes network management on the node where it is installed, providing service discovery, routing, and load balancing between network requests and container endpoints. Every node in a Kubernetes cluster runs an instance of kube-proxy.

Container Runtime Interface (Worker Node): The Container Runtime Interface (CRI) works with kubelet to create and destroy containers on the node. Kubernetes is agnostic about the technology used to realize containers; the CRI provides the abstraction layer that allows kubelet to work with any container runtime operational on the node.

Container Runtime (Worker Node): The Container Runtime is the actual container daemon technology in force on the node; it does the work of creating and destroying containers. Examples of Container Runtime technologies are Docker, containerd, and CRI-O, to name the most popular.
Figure 3: The process for creating pods in a Kubernetes Deployment using kubectl
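
To make the flow in Figure 3 concrete, the sketch below shows a minimal Deployment manifest for the pinger workload; the Deployment name, replica count, image, and port are illustrative assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinger-deployment               # hypothetical name
spec:
  replicas: 3                           # assumed replica count
  selector:
    matchLabels:
      app: pinger
  template:
    metadata:
      labels:
        app: pinger
        purpose: demo
    spec:
      containers:
        - name: pinger
          image: example.io/pinger:latest   # placeholder image
          ports:
            - containerPort: 8080           # assumed port

Submitting such a file with kubectl apply sends the Deployment definition to the API Server; the Scheduler then identifies a host node for each pod, and the kubelet on that node creates the pod's containers.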
