Kubernetes: The Orchestration Tool

Case Study: How Kubernetes is being used across the industry

Prateek Gupta
11 min read · Mar 14, 2021

Kubernetes was originally developed by Google. It has been open source since its launch and is maintained by a large community of contributors. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers.

Kubernetes, also known as K8s or kube, is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications. In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters. Kubernetes clusters can span hosts across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling, like real-time data streaming through Apache Kafka.

What is a Container?

Container technology was born in 1979 with Version 7 Unix and the chroot system call. A container provides operating-system-level virtualization by abstracting the “user space”.

Containers and VMs are similar in their goals: to isolate an application and its dependencies into a self-contained unit that can run anywhere. Moreover, containers and VMs remove the need for dedicated physical hardware, allowing for more efficient use of computing resources, in terms of both energy consumption and cost-effectiveness. The main difference between containers and VMs is in their architectural approach.

Containerization is a modern virtualization method that accesses a single OS kernel to power multiple distributed applications that are each developed and run in their own container.
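
To make that chroot lineage concrete, here is a toy Python sketch (an illustration only, not how modern runtimes are built): it confines a process’s view of the filesystem to a prepared directory. The ./rootfs path is a hypothetical minimal root filesystem containing /bin/sh, and the script must run as root; real container runtimes add namespaces and cgroups on top of this idea.

```python
import os

# Confine this process's filesystem view to ./rootfs (requires root,
# and ./rootfs must be a prepared minimal root filesystem).
os.chroot("./rootfs")
os.chdir("/")

# The shell started here sees ./rootfs as '/', which is the core of
# the "abstracting user space" idea that containers extend.
os.execv("/bin/sh", ["/bin/sh"])
```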

What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
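
To make “declarative configuration” concrete, here is a minimal hedged sketch using the official Kubernetes Python client (pip install kubernetes; assumes a running cluster and a valid kubeconfig). You declare a desired state, three replicas of an nginx container here, and the control plane works continuously to make the cluster match it. The hello-web name and image tag are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()  # read credentials from ~/.kube/config

# Declare the desired state: three replicas of an nginx container.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)

# Submit the declaration; Kubernetes reconciles reality toward it.
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment)
```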

The History of Kubernetes

Kubernetes was founded by Joe Beda, Brendan Burns, and Craig McLuckie, who were quickly joined by other Google engineers including Brian Grant and Tim Hockin, and was first announced by Google in mid-2014. Its development and design are heavily influenced by Google’s Borg system, and many of the top contributors to the project previously worked on Borg. The original codename for Kubernetes within Google was Project 7, a reference to the Star Trek ex-Borg character Seven of Nine. The seven spokes on the wheel of the Kubernetes logo are a reference to that codename. The original Borg project was written entirely in C++, but the rewritten Kubernetes system is implemented in Go.

How Kubernetes Came into Play

2003–2004: birth of the Borg system

  • Google introduced the Borg System around 2003–2004. It started off as a small-scale project, with about 3–4 people initially, in collaboration with a new version of Google’s search engine. Borg was a large-scale internal cluster management system that ran hundreds of thousands of jobs from many thousands of different applications across many clusters, each with up to tens of thousands of machines.

2013: From Borg to Omega

  • Following Borg, Google introduced the Omega cluster management system, a flexible, scalable scheduler for large compute clusters.

2014: Google Introduces Kubernetes

  • mid-2014: Google introduced Kubernetes as an open-source version of Borg
  • June 7: Initial release, the first GitHub commit for Kubernetes
  • July 10: Microsoft, Red Hat, IBM, and Docker join the Kubernetes community.

2015: The year of Kube v1.0 & CNCF

  • In July, Kubernetes v1.0 was released. Along with the release, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF). The CNCF aims to build sustainable ecosystems and to foster a community around a constellation of high-quality projects that orchestrate containers as part of a microservices architecture.

And as you know by now, this tool has come a long way in the field of container management.

Kubernetes Architecture

The following picture gives an idea of the Kubernetes architecture.

Architecture of Kubernetes

Master Node

The master node is the first and most vital component, responsible for managing the Kubernetes cluster. It is the entry point for all kinds of administrative tasks. There may be more than one master node in a cluster to provide fault tolerance.

The master node has various components: the API Server, the Controller Manager (which runs the control loops that continuously drive the cluster’s actual state toward the desired state), the Scheduler, and etcd. Let’s look at each of them.

API Server: The API server acts as an entry point for all the REST commands used for controlling the cluster.
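
Every client, including kubectl and dashboards, goes through the API server. As a hedged sketch with the official Python client (same pip install kubernetes assumption as above), each call below is translated into an authenticated REST request, such as GET /api/v1/pods, against the API server:

```python
from kubernetes import client, config

config.load_kube_config()  # same kubeconfig that kubectl uses

v1 = client.CoreV1Api()

# Issues GET /api/v1/pods against the API server and prints the result.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```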

Scheduler

The scheduler assigns newly created pods to worker nodes. It tracks resource usage on every node and is responsible for distributing the workload: it places workloads only on nodes with enough free resources to accept them, which also keeps the load balanced across the cluster. The sketch below shows the placement hints it reads from a pod spec.
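
A hedged sketch of those hints: the resource requests say how much capacity the pod needs, and the node selector restricts the candidate nodes. All names, labels, and values here are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="scheduling-demo"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="worker",
            image="busybox:1.36",
            command=["sleep", "3600"],
            resources=client.V1ResourceRequirements(
                # The scheduler fits pods onto nodes by these requests.
                requests={"cpu": "250m", "memory": "128Mi"},
                limits={"cpu": "500m", "memory": "256Mi"},
            ),
        )],
        # Only nodes labeled disktype=ssd are scheduling candidates.
        node_selector={"disktype": "ssd"},
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```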

etcd

etcd is a consistent, distributed key-value store that holds the cluster’s configuration data and state: node membership, pod placements, Secrets, and so on. The API server is the only component that communicates with etcd directly; every other component reads and writes cluster state through the API server.
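
Although in a healthy cluster only the API server talks to etcd, the key-value model itself is easy to demonstrate. A hedged sketch using the community python-etcd3 package against a local etcd on its default port (both are assumptions, not part of a standard Kubernetes workflow; Kubernetes keeps its own state under the /registry prefix):

```python
import etcd3  # community client: pip install etcd3

# Connect to a local etcd on its default client port.
etcd = etcd3.client(host="localhost", port=2379)

# Store and read back a piece of configuration, the same kind of
# key-value state Kubernetes persists for the cluster.
etcd.put("/demo/config/replicas", "3")
value, metadata = etcd.get("/demo/config/replicas")
print(value.decode())  # prints: 3
```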

Worker/Slave nodes

Worker nodes are the other essential component. They contain all the services required to manage networking between the containers, communicate with the master node, and assign resources to the scheduled containers.

  • Kubelet: gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
  • Docker Container: the Docker runtime runs on each worker node and executes the containers of the configured pods.
  • Kube-proxy: acts as a network proxy and load balancer on each worker node, maintaining the network rules that route Service traffic to the right pods.
  • Pods: a pod is a group of one or more containers that are scheduled together and run together on a node (see the sketch after this list).
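
As a small illustration of that last point, here is a hedged sketch (official Python client, illustrative names and images) of a two-container pod. The containers share the pod’s network namespace, so the sidecar can reach the web server on localhost:

```python
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar"),
    spec=client.V1PodSpec(containers=[
        # Main application container.
        client.V1Container(
            name="web",
            image="nginx:1.25",
            ports=[client.V1ContainerPort(container_port=80)],
        ),
        # Sidecar sharing the pod's network namespace: it polls the
        # web container over localhost once a minute.
        client.V1Container(
            name="sidecar",
            image="busybox:1.36",
            command=["sh", "-c",
                     "while true; do wget -qO- http://localhost; sleep 60; done"],
        ),
    ]),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```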

Kubernetes Features

  • Self-healing: Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve (the sketch after this list shows a liveness probe, a Secret, and an autoscaler).
  • Automated rollouts and rollbacks: Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you.
  • Storage Orchestration: Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
  • Horizontal Scaling: The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization. Note that Horizontal Pod Autoscaling does not apply to objects that can’t be scaled, for example DaemonSets. You can scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
  • Secret and Configuration Management: Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image. There are multiple types of Secrets; you can specify a Secret’s type using the type field of the Secret resource or the equivalent kubectl command-line flags.
  • Service discovery and load balancing: No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
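
To ground a few of these features, here is a hedged sketch using the official Python client: a liveness probe that drives self-healing, an Opaque Secret, and a Horizontal Pod Autoscaler targeting the hello-web Deployment declared earlier. The /healthz path, names, and thresholds are all illustrative.

```python
from kubernetes import client, config

config.load_kube_config()

# Self-healing: if GET /healthz keeps failing, the kubelet restarts
# the container. This probe would be attached to a container in a pod
# template, e.g. V1Container(..., liveness_probe=probe).
probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    initial_delay_seconds=5,
    period_seconds=10,
)

# Secret management: an Opaque Secret instead of hard-coded credentials.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials"),
    string_data={"username": "app", "password": "change-me"},  # placeholders
    type="Opaque",
)
client.CoreV1Api().create_namespaced_secret(namespace="default", body=secret)

# Horizontal scaling: keep average CPU near 80% with 2 to 10 replicas.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="hello-web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="hello-web"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=80,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```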

Why do people use Kubernetes?

Kubernetes is an important piece of the cloud-native puzzle, but it’s important to understand that its broader ecosystem provides even more value to IT organizations.

As Red Hat’s Haff notes, “The power of the open source cloud-native ecosystem comes only in part from individual projects such as Kubernetes. It derives, perhaps even more, from the breadth of complementary projects that come together to create a true cloud-native platform.”

Kubernetes eases the burden of configuring, deploying, managing, and monitoring even the largest-scale containerized applications. It also helps IT pros manage container lifecycles and related application lifecycles, and issues including high availability and load balancing.

CASE STUDY: THE NEW YORK TIMES

  • Challenge: When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. “We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center,” says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would “design for the abstractions that cloud providers offer us.”
  • Solution: The team decided to use Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
  • Impact: Speed of delivery increased. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was “just a few seconds to a couple of minutes,” says Engineering Manager Brian Balser. Adds Li: “Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary.” Adopting Cloud Native Computing Foundation technologies allows for a more unified approach to deployment across the engineering staff, and portability for the company.

I think once you get over the initial hump, things get a lot easier and actually a lot faster.

— DEEP KAPADIA, EXECUTIVE DIRECTOR, ENGINEERING AT THE NEW YORK TIMES

CASE STUDY: BOSE

  • Challenge: A household name in high-quality audio equipment, Bose has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. “We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast,” says Lead Cloud Engineer Josh West. In 2016, the company decided to start building a platform from scratch. The primary goal: “To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale,” says Cloud Architecture Manager Dylan O’Mahony.
  • Solution: From the beginning, the team knew it wanted a microservices architecture. After evaluating and prototyping a couple of orchestration solutions, the team decided to adopt Kubernetes for its scaled IoT Platform-as-a-Service running on AWS. The platform, which also incorporated Prometheus monitoring, launched in production in 2017, serving over 3 million connected products from the get-go. Bose has since adopted a number of other CNCF technologies, including Fluentd, CoreDNS, Jaeger, and OpenTracing.
  • Impact: With about 100 engineers onboarded, the platform is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. Just one production cluster holds 1,800 namespaces and 340 worker nodes. “We had a brand new service taken from concept through coding and deployment all the way to production, including hardening, security testing and so forth, in less than two and a half weeks,” says O’Mahony.

At Bose we’re building an IoT platform that has enabled our physical products. If it weren’t for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch on schedule.

— JOSH WEST, LEAD CLOUD ENGINEER, BOSE

CASE STUDY: HUAWEI

  • Challenge: A multinational company that’s the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, Huawei has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. “It’s very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge,” says Peixin Hou, the company’s Chief Software Architect and Community Director for Open Source. “We wanted to move into a more agile and decent practice.”
  • Solution: After deciding to use container technology, Huawei began moving the internal I.T. department’s applications to run on Kubernetes. So far, about 30 percent of these applications have been transferred to cloud native.
  • Impact: “By the end of 2016, Huawei’s internal I.T. department managed more than 4,000 nodes with tens of thousands containers using a Kubernetes-based Platform as a Service (PaaS) solution,” says Hou. “The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold.” For the bottom line, he says, “We also see significant operating expense spending cut, in some circumstances 20–30 percent, which we think is very helpful for our business.” Given the results Huawei has had internally — and the demand it is seeing externally — the company has also built the technologies into FusionStage, the PaaS solution it offers its customers.

If you’re a vendor, in order to convince your customer, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology.

— PEIXIN HOU, CHIEF SOFTWARE ARCHITECT AND COMMUNITY DIRECTOR FOR OPEN SOURCE

Future of Kubernetes

We look forward to seeing where Kubernetes is heading. Nowadays, there is growing excitement about ‘serverless’ technologies, and in some ways Kubernetes moves in the opposite direction. However, Kubernetes has its place in our ‘increasingly serverless’ world. Tools like Kubeless and Fission provide functions-as-a-service equivalents that run within Kubernetes. These won’t replace the power of Lambda, but they show us that there are solutions along the spectrum between serverless and clusters of servers.

I hope you enjoyed this article. Thanks for reading!
