Machine virtualisation enabled dynamic provisioning of resources and, as a consequence, a service model charged according to usage. System-level virtualisation through containers has made this provisioning capability even more dynamic.

The architecture of Kubernetes, a container orchestrator, aims to give applications the ability to replicate when necessary.

The use of virtual machines made dynamic provisioning of resources practical. As a result, it became possible to implement the on-demand cost model, and allocating and releasing resources became even faster.

How it works

Kubernetes can replicate containers in order to increase the availability of the application hosted in them. When a container fails, Kubernetes removes the failed container and instantiates another one from its image. In this process, the state of the application hosted in the container is lost. Applications can use external volumes to persist their state, but those volumes must themselves be protected against failures. In addition, when application state is replicated, access to the volume must be coordinated by the application to deal with concurrency issues.
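As a sketch of the external-volume idea, a pod can mount a PersistentVolumeClaim so the application's state survives container restarts. All names, paths and sizes below are illustrative assumptions, not part of any real deployment:

```yaml
# Hypothetical claim for external storage; names and sizes are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-state
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# A pod whose container mounts the claim; if the container is recreated,
# the data under /var/lib/app is preserved on the volume.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest        # illustrative image name
      volumeMounts:
        - name: state
          mountPath: /var/lib/app # where the application writes its state
  volumes:
    - name: state
      persistentVolumeClaim:
        claimName: app-state
```

Note that this protects state against container failures, not against failures of the volume itself, and concurrent access by replicas still has to be handled by the application, as discussed above.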

In practice, the ZooKeeper system and the Raft algorithm can be used, for example, to build replicated state machines. These solutions can be applied at the application level in Kubernetes, but they take up space in every container participating in the replication and burden the application with the coordination effort.

To give Kubernetes the ability to deal with replicated state, the integration approach is appropriate, as it is in line with the collaborative nature of Kubernetes development. Integrating means building on or modifying an existing system, adding components to complement its functionality. Other approaches are possible, such as interception or a separate service, but integration is transparent to the user and is also the approach that performs best, according to experiments carried out in the context of CORBA. But we will talk about that another time ;D

Well, as we know, Kubernetes was developed by Google. Google has been using containers for years and launched Kubernetes to orchestrate Docker containers in clusters. Kubernetes is open source, has been developed collaboratively and carries the knowledge of the engineers behind Borg, Google's internal container manager. It is currently under the Apache 2.0 license and is maintained by the Cloud Native Computing Foundation.

Production applications span multiple containers, which must be deployed across multiple server hosts; container security has multiple layers and can be complex. That's where Kubernetes comes into the picture. It offers the orchestration and management capabilities needed to deploy containers at scale for these workloads. With Kubernetes orchestration, you can create application services that span multiple containers, schedule them across the cluster, scale them and manage their health over time.

Architecture

Kubernetes fits a microservice approach amazingly well. Its scalability makes it possible to keep a service highly available through horizontal scaling, replicating it N times.
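This N-times replication can even be automated. As a sketch, a HorizontalPodAutoscaler can grow and shrink the number of replicas based on load; the deployment name and thresholds below are assumptions for illustration:

```yaml
# Hypothetical autoscaler: keeps between 2 and 10 replicas of a deployment,
# scaling out when average CPU usage passes 80%.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service          # illustrative deployment name
  minReplicas: 2              # at least two replicas for availability
  maxReplicas: 10             # horizontal scale up to N = 10
  targetCPUUtilizationPercentage: 80
```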

[Image: a simple representation of state sharing with volume persistence and pod clusters]

The architecture of Kubernetes can be complex and has its own language and terms; the main ones are:

  • Master: the machine that controls the Kubernetes nodes. This is where all task assignments originate.
  • Node: a machine that performs the requested and assigned tasks. The Kubernetes master machine controls the nodes.
  • Pod: a group of one or more containers deployed on a single node. All containers in a pod share the same IP address, IPC, host name and other resources. Pods abstract network and storage away from the underlying container, which makes it easier to move containers around the cluster.
  • Replication controller: controls how many identical copies of a pod should be kept running somewhere in the cluster.
  • Service: decouples work definitions from the pods. Kubernetes service proxies automatically route service requests to the correct pod, regardless of where it is in the cluster or whether it has been replaced.
  • Kubelet: a service run on the nodes that reads the container manifests and ensures that the defined containers have been started and are running.
  • kubectl: the Kubernetes command line configuration tool.
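To tie some of these terms together, here is a minimal illustrative manifest: a deployment that keeps three identical pod replicas running, plus a service that routes traffic to whichever replicas are alive. All names and the image are assumptions for the sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                  # keep 3 identical copies of the pod running
  selector:
    matchLabels:
      app: hello
  template:                    # the pod template: one container per pod here
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello                 # the service proxies to any pod with this label
  ports:
    - port: 80
      targetPort: 80
```

The service keeps working even as individual pods fail and are replaced, which is exactly the decoupling described above.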

[Image: Kubernetes architecture diagram]

Under the hood

Kubernetes runs on an operating system and interacts with pods of containers running on the nodes. The Kubernetes master accepts commands from an administrator and relays those instructions to the nodes. This relaying works in conjunction with various services to automatically decide which node is best suited for a given task; resources are then allocated, and the pods on that node are assigned to fulfil the requested task.

So from an infrastructure standpoint, little changes compared to how you already manage containers. Control over containers simply happens at a higher level, which makes it more refined, without the need to micromanage each container or node separately. You will need to do some work, but for the most part it is just a matter of assigning a Kubernetes master and defining the nodes and pods.

Demo time!

Below, a brief demonstration of how to create a simple cluster using kubeadm.
(Disclaimer: The purpose of this is not to be a tutorial)

Here we go.

As previously said, Kubernetes has a few plugins, extensions and CLIs. In this example, the main agent for creating the cluster will be kubeadm, which abstracts away the difficulty of creating clusters and nodes on top of kubelet. kubectl then performs the other administration and deployment actions.

Kickoff

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

Starting the cluster and connecting to the cluster's master node:
kubeadm init

When the command finishes, it displays some commands to be executed to complete the configuration:
Your Kubernetes master has initialized successfully!

Use your Cluster

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run kubectl apply -f [podnetwork].yaml with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<discovery-token>

Deploy

Deploy Project Calico to manage the cluster network:

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
watch kubectl get pods -n calico-system
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes -o wide

Starting the Nodes
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<discovery-token>

Now we can deploy by creating our template and running the command below.

Command to run the deploy
kubectl apply -f deploy.yml
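The contents of deploy.yml are not shown above; as a sketch, it could be a simple deployment like the following (name, image and replica count are assumptions):

```yaml
# Hypothetical deploy.yml: a small replicated web deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                  # two identical pods, scheduled across the nodes
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
```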

List the pods in the cluster
kubectl get pods