How containers are redefining the standards: the Kubernetes orchestrator
Containers make it possible for an application to run consistently and reliably, regardless of the operating system or infrastructure environment. They do this by bundling up everything a service needs to run (i.e., code, runtime, system tools, libraries, and settings), creating a portable, standalone, and executable package.
Building a container consists of pulling out just the application (or service) you need to run, along with its dependencies and configurations, and abstracting it from the operating system. The result is a “container image” which can be run on any container platform. As a consequence, multiple containers can run on the same host and share the same operating system, each running isolated processes within its own secured space. Because containers share the base OS, each container can run using significantly fewer resources than if it were a separate virtual (or physical) machine.
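As a quick illustration, the sketch below uses the Docker SDK for Python to start two containers side by side on one host. It assumes Docker and the `docker` Python package are installed; the image names are ordinary public images chosen purely for illustration.

```python
# Minimal sketch: two isolated containers sharing one host's OS kernel,
# started through the Docker SDK for Python (pip install docker).
# Image names are illustrative; any container images would do.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Each call pulls the image if needed and starts an isolated process.
web = client.containers.run("nginx:alpine", detach=True, name="web")
cache = client.containers.run("redis:alpine", detach=True, name="cache")

# Both containers now run side by side on the same operating system.
for c in client.containers.list():
    print(c.name, c.status)

# Stop and remove them when done.
for c in (web, cache):
    c.stop()
    c.remove()
```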
Why are organisations so eager to adopt containers?
Most organisations have multiple reasons for adopting containers, and while each organisation’s specific motivations differ, there are commonalities. These include:
- Reducing IT costs. Containers can be much more efficient than VM-based or other legacy application architectures. They can be packed more densely on instances, reducing the resources needed to run the same application, whether it’s in a data centre or in the cloud. Because containers share operating systems, they are more lightweight than virtual machines and require less processing power, less memory, and less storage. Across an entire organisation, the cost savings can be substantial.
- Improving developer productivity. Containers lend themselves to DevOps approaches and overall higher development velocity. This means organisations are able to develop, test, and deploy applications faster. They have more agility to respond to changes in the market ecosystem and changes in customer behaviour, as well as the ability to test how different applications deliver on business goals. Each developer is able to accomplish more, and do so more quickly, with containers.
- Faster time to market. Along with the increased productivity is a shorter time to market for each application, which translates to organisations being able to use software to create a competitive advantage and stay ahead of other market actors.
- Cross-environment portability. Containers work the same regardless of the environment. This means that developers don’t have to worry that an application that worked properly on the local machine won’t work in another environment, contributing to increased productivity and faster time to market. At the same time, containers can also make it easier for organisations to move all or part of their infrastructure to the public cloud, making multi-cloud and hybrid cloud strategies possible.
- Simpler operations. Containers can also be easier to operate. They are easier and more cost-efficient to scale: not only can each container scale independently, but the individual components of an app can scale independently of each other too. They are also easier to upgrade and facilitate advanced upgrade techniques like rolling upgrades. In addition, failures are less likely to bring down the entire app: a problem with one container is more likely to stay isolated within that container. Containers are also more portable than monoliths and run the same regardless of the environment, so they are less likely to have environment-related issues in production that don’t show up during development and testing.
These benefits, in terms of performance savings, have enabled containers to spread amongst non-professional devices as well. SBCs (single-board computers) such as the Raspberry Pi are increasingly used for new purposes because it is no longer necessary to run a complete virtual machine on them to perform some tasks. Consider this: if we wanted a NAS (Network Attached Storage) in our house to store videos and photos and access them whenever we want, before containers became widespread we needed a dedicated Raspberry Pi to do that. And if we were figuring out how to turn our home into a smart home, controlling the lights or the TVs with a Home Assistant dashboard, we probably had to buy another Raspberry Pi to set up the dedicated environment. Why? Because both the NAS environment and the home-automation one needed a lot of resources in terms of RAM, CPU, and GPU: a whole machine (virtual or physical) had to be reserved for each of them, including many “wasted” resources that, in fact, would never be used. With the adoption of containers, the approach changes radically. On a single $60.00 Raspberry Pi running Ubuntu with 4GB of memory and 32GB of storage, you can easily set up and run two containers, one for the NAS and one for Home Assistant. The advantage is that each container will access, and use, only the resources it needs: when a resource is no longer required, it is released so the OS can benefit from it.
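As a hedged sketch of that setup, the snippet below starts both services as memory-capped containers on one board via the Docker SDK for Python. The images, host paths, and limits are illustrative assumptions, not a tested recipe.

```python
# Illustrative sketch of the Raspberry Pi scenario: a NAS service and
# Home Assistant sharing one 4GB board, each memory-capped so neither
# starves the other. Images, paths, and limits are example values only.
import docker

client = docker.from_env()

nas = client.containers.run(
    "dperson/samba",                      # example Samba (NAS) image
    detach=True,
    name="nas",
    mem_limit="1g",                       # hard cap; unused memory stays free for the OS
    volumes={"/srv/media": {"bind": "/share", "mode": "rw"}},
)

home_assistant = client.containers.run(
    "ghcr.io/home-assistant/home-assistant:stable",
    detach=True,
    name="home-assistant",
    mem_limit="1g",
    network_mode="host",                  # Home Assistant typically needs LAN access
)
```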
Both containers and their orchestration ecosystem have been growing exponentially. 451 Research expects the container market to grow 30% year-over-year, while a survey of over 500 IT professionals conducted by Portworx and Aqua Security in 2019 showed that 87% were using containers, nearly all of them for production workloads. Meanwhile, membership in the Cloud Native Computing Foundation grew by 50% from 2018 to 2019.[2]
Kubernetes: a fully equipped container orchestrator
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It runs containers in a cluster and aims to provide a “platform for automating deployment, scaling, and operations of container workloads”.
Kubernetes was originally designed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). CNCF is a Linux Foundation project with the aim of helping advance container technology. Founding members include Google, IBM, Red Hat, Cisco, Intel, and other major tech companies.
The name Kubernetes is a reference to the Greek word for helmsman or governor.
Why Kubernetes
In the past, the common way for organisations to run multiple applications was to execute them on a physical server, with the applications sharing resources such as memory, CPU, disk, and operating system. The applications had to compete for the shared resources, which often caused resource-allocation issues where, for example, one application could consume most of the CPU at the expense of the other applications.
The evolution from the physical server was the virtual-server architecture, where players like VMware provided a way to run multiple virtual machines, each with its own operating system and a pool of pre-defined resources, on top of a single physical server. The advantage of the virtual-server architecture lies in its flexibility: each application or group of applications can have a predefined pool of resources, so the competition for resources is limited and managed. Still, this architecture lacks efficiency, since every virtual machine has to run its own operating system.
The concept of the container was developed to increase the efficiency of the virtual-server architecture. Containers are similar to virtual machines in that they make it possible to pre-define and manage the resources each application can access, but containers share a single instance of the operating system, reducing the need to have multiple OS instances constantly running. As the name suggests, a container is a self-contained bundle of filesystem, CPU share, memory, and services, dedicated to running an application.
Containers have many advantages: agile creation and deployment of applications compared to physical-server or virtual-server architectures; simplified maintenance of the company architecture; loose coupling of the different applications; resource isolation and management; and portability across operating systems and cloud solutions.
Kubernetes, or K8s as it is sometimes known (using numeronym notation), improves and builds on top of the container solution by providing an infrastructure to manage an organisation’s containers. Picture a medium-to-large modern organisation, with hundreds of containerized applications that need to work together smoothly.
Kubernetes provides solutions to orchestrate the applications; it is often compared to a conductor who allows many different players to perform elaborate symphonies together. The main services Kubernetes provides are container deployment, scaling, networking, observability, and security.
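To make the deployment and scaling services concrete, here is a minimal sketch using the official Kubernetes Python client; the deployment name, labels, and image are placeholders for illustration.

```python
# Minimal sketch: declare a three-replica deployment with the official
# Kubernetes Python client (pip install kubernetes). The name, labels,
# and image are placeholders.
from kubernetes import client, config

config.load_kube_config()      # reads the local ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=3,            # Kubernetes keeps three copies running at all times
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The declarative style is the design point: you state the desired number of replicas, and the orchestrator continuously works to keep the cluster in that state.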
Kubernetes details
A Kubernetes infrastructure is a combination of resources such as servers, physical or virtual machines, cloud platforms, and others.
Kubernetes’ infrastructure and architecture are based on the concept of a cluster: a set of machines, called nodes, responsible for running your containerized workloads. Every Kubernetes cluster has a master node and at least one worker node; typically, a cluster will have several worker nodes.
The pod is defined as the smallest deployable unit in Kubernetes, and pods run on the nodes. Pods represent the various components of your application; usually a pod runs a single container, though it can run multiple containers in certain circumstances.
A fundamental element is the control plane, which includes the API server and the other components that manage your nodes.
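These pieces can be inspected by querying the control plane’s API server. The short sketch below lists the cluster’s nodes and pods with the Kubernetes Python client, assuming a reachable cluster and a configured kubeconfig.

```python
# Sketch: asking the control plane's API server for the cluster's nodes
# and pods (assumes a reachable cluster and a configured kubeconfig).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    print("node:", node.metadata.name)

for pod in v1.list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```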
Today, the biggest cloud hosting providers offer a managed Kubernetes service: Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and DigitalOcean are among the top six environments enterprises deploy Kubernetes workloads on.
A 2021 CNCF survey shows that the most popular certified Kubernetes cloud providers are Amazon Elastic Container Service for Kubernetes (EKS, 39%), Azure Kubernetes Service (AKS, 23%), AKS Engine (17%), and Google Kubernetes Engine (GKE).
Who is using it
According to the 2020 Cloud Native Computing Foundation survey, 92% of organisations are using containers in production, and 83% of these use Kubernetes as their preferred container management solution.
And according to the already mentioned 2021 State of Cloud Native Development Report, Kubernetes has demonstrated impressive growth over the past 12 months with 5.6 million developers using Kubernetes today.
Many companies already use Kubernetes in their tech stacks. Some of them are mentioned below:
Google, DigitalOcean, Adobe, SAP, BlaBlaCar, Trivago, Deliveroo, 9GAG, Airbnb, Pinterest, Udemy, Uber, Tinder, and many others.
Why use Kubernetes
There are multiple benefits that provide reasons to move to K8s.
For example, it can save time and effort. As a container orchestrator, its main purpose is to save IT teams time by letting them bundle an application together with its dependencies and deploy that bundle to any environment without writing extensive custom deployment code. This is the reason why containerization has gained more popularity than the traditional way of deploying applications.
As discussed above, with containerized applications you can easily handle OS updates or upgrades, as the applications are bundled separately and do not depend on the underlying OS. This prevents stability and security issues and allows you to orchestrate different container versions based on requirements.
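For instance, switching an application to a newer container version can be done by patching its deployment, which triggers a rolling update. The sketch below reuses the placeholder `demo` deployment from the earlier example, with an illustrative image tag.

```python
# Sketch: rolling out a new container version without touching the host OS.
# Patching the deployment's pod template triggers Kubernetes' rolling update.
# "demo" and the image tags reuse the placeholders from the earlier sketch.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment(
    name="demo",
    namespace="default",
    body={"spec": {"template": {"spec": {
        "containers": [{"name": "demo", "image": "nginx:1.26"}]
    }}}},
)
# Pods are replaced gradually, keeping the application available throughout.
```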
Using containers, you can divide your large applications into small parts (following a microservice logic) that can be set up and run separately, each with its dependencies bundled in a package. This modular approach helps you develop applications efficiently in smaller parts and assign dedicated teams to specific containers, with the help of pods, which group related containers so they are controlled as a single application.
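A pod grouping a service with a helper container might look like the hypothetical sketch below; both containers share the pod’s network and lifecycle and are managed as one unit. The names and images are illustrative only.

```python
# Hypothetical sketch: a pod grouping a microservice with a sidecar
# container; names and images are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="service-with-sidecar"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="service", image="nginx:1.25"),
            # Sidecar placeholder: in practice this might ship logs or proxy traffic.
            client.V1Container(
                name="sidecar",
                image="busybox:1.36",
                command=["sh", "-c", "tail -f /dev/null"],
            ),
        ]
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```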
To prevent performance issues, Kubernetes offers various autoscaler tools able to increase or decrease the number of application replicas within your infrastructure. In this way, you can handle increasing traffic and maintain the application’s performance even at peak times, because the autoscaler ensures that your system works efficiently and consistently without outages or resource shortages.
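A minimal autoscaling sketch, again with the Python client: a HorizontalPodAutoscaler, one of Kubernetes’ built-in autoscalers, that keeps the placeholder `demo` deployment between two and ten replicas based on CPU utilisation. The thresholds are illustrative.

```python
# Sketch: a HorizontalPodAutoscaler scaling the "demo" deployment between
# 2 and 10 replicas based on average CPU use (all numbers illustrative).
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="demo-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="demo"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add replicas above 70% average CPU
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```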
Kubernetes advantages
The Kubernetes solution, with uniform deployment, management, scaling, and availability services for containerized applications, offers advantages on every side of the IT organisation. Developers and the IT team feel more productive and happier because there are fewer impediments to development and deployment, which become faster and less constrained. The ticket-based infrastructure of conventional IT is replaced by self-service infrastructure that allows developers to access the resources they need when they need them. The result is more frequent code deployment, reduced time to production, and faster patches and bug fixes. At a higher level, Kubernetes allows delivering new software and features more quickly, with a faster time to market, and enables multi-cloud operations, giving greater agility and the freedom to choose the most suitable cloud offer and switch to another cloud vendor if necessary.
On the other side, the one challenge usually mentioned regarding the implementation of Kubernetes is the steep initial learning curve for the technical team and the initial adoption curve for the organisation: the technical team needs to learn how to deliver and deploy in the new, fairly complex Kubernetes architecture, and the company as a whole needs to dedicate time and effort to getting the existing architecture ready for Kubernetes. The time and effort required depend on the starting point: Is the company already using containerized cloud applications? Which programming languages are currently used? And so on.
As with any new technology, Kubernetes is not a magic wand that solves the organisation’s problems, and, as with any new technology project, the key to success is focusing on the business problem that needs solving. Kubernetes has its sweet spot in managing complex, containerized cloud applications. If you need a simple website to share some information, Kubernetes is not for you. On the other hand, if you need a hyper-scalable, globally available, complex cloud application, then you should evaluate Kubernetes and see if it can help manage your application.