Priya Soni
9 min read · Dec 26, 2020



Kubernetes, also known as K8s, is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.


Kubernetes can speed up the development process by making deployments and updates (rolling updates) easy and automated, and by managing our apps and services with almost zero downtime. Originally developed by Google, it has been open source since its launch and is maintained by a large community of contributors.

2003–2004: Birth of the Borg System

  • Google introduced the Borg system around 2003–2004. It started off as a small-scale project, with about 3–4 people initially, in collaboration with a new version of Google’s search engine. Borg was a large-scale internal cluster management system that ran hundreds of thousands of jobs, from many thousands of different applications, across many clusters, each with up to tens of thousands of machines.

2013: From Borg to Omega

  • Following Borg, Google introduced the Omega cluster management system, a flexible, scalable scheduler for large compute clusters.

2014: Google Introduces Kubernetes


We look forward to seeing where Kubernetes is heading. Nowadays there is growing excitement about ‘serverless’ technologies, and Kubernetes seems to be going in the opposite direction. Nevertheless, Kubernetes has its place in our increasingly serverless world.

Tools like Kubeless and Fission provide equivalents to functions-as-a-service, but running within Kubernetes. They won’t replace the power of AWS Lambda, but they show that there are solutions on the spectrum between serverless and clusters of servers.


Kubernetes grew out of the cluster management systems Google built and ran internally for roughly a decade, and that pedigree is one of its key selling points. Google released Kubernetes as an open-source project in 2014.

Kubernetes is a cluster and container management tool. It lets you deploy containers to clusters, that is, to networks of virtual machines. It works with different container runtimes, not just Docker.

Docker is by far the most popular container format, and it was built on Linux. Microsoft has also added container support to Windows, because containers have become so popular.

The best way to illustrate why this is useful and important is to give an example.

Suppose you want to install the nginx web server on a Linux server. You have several ways to do that. First, you could install it directly on the physical server’s OS. But most people use virtual machines now, so you would probably install it there.

But setting up a virtual machine requires administrative effort and cost as well. And a machine will be underutilized if you dedicate it to just one task, which is how people typically use VMs. It would be better to load that one machine up with nginx, messaging software, a DNS server, and so on.


In the container world, making the container small is not the only advantage. A container can be deployed just like a VM template: an application that is ready to go and requires little or no configuration.

There are thousands of preconfigured Docker images in the Docker Hub public registry. There, people have taken the time to assemble open-source software configurations that might take someone else hours or days to put together. Everyone benefits, because they can install nginx, or far more complicated stacks, simply by pulling them from the registry.


There is an inherent problem with containers, just as there is with virtual machines: the need to keep track of them. When public cloud companies bill you for CPU time or storage, you need to make sure you do not have orphaned machines spinning out there doing nothing.

Plus there is the need to automatically spin up more instances when an application needs more memory, CPU, or storage, and to shut them down when the load lightens. Orchestration tackles these problems. This is where Kubernetes comes in.


The basic idea of Kubernetes is to further abstract machines, storage, and networks away from their physical implementation.

So it provides a single interface to deploy containers to all kinds of clouds, virtual machines, and physical machines. Here are a few Kubernetes concepts to help understand what it does.


A node is a physical or virtual machine. It is not created by Kubernetes. You create those with a cloud operating system, like OpenStack or Amazon EC2, or manually install them.

So you need to lay down your basic infrastructure before you use Kubernetes to deploy your apps. But from that point on, Kubernetes can define virtual networks, storage, and so on. For example, you could use OpenStack Neutron or Romana to define networks and push them out from Kubernetes.


A pod is one or more containers that logically belong together. Pods run on nodes, and the containers in a pod run together as a logical unit: they share the same IP address, can reach each other via localhost, and can share storage. The containers in a single pod always run on the same machine, but an application’s pods can be spread across many machines, and one node can run multiple pods.
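A minimal Pod manifest makes the idea concrete. This is only a sketch; the names, image tags, and the sidecar’s probe loop are illustrative:

```yaml
# A hypothetical pod with two containers. They share the pod's network
# namespace, so the sidecar can reach nginx at localhost:80.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # placeholder name
spec:
  containers:
    - name: nginx
      image: nginx:1.25         # public image pulled from Docker Hub
      ports:
        - containerPort: 80
    - name: probe-sidecar       # illustrative second container
      image: busybox:1.36
      command: ["sh", "-c",
        "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 60; done"]
```

Both containers are scheduled onto the same node and live and die together as one unit.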

Pods are cloud-aware. For example, you could spin up two nginx instances and assign them a public IP address on Google Compute Engine (GCE). To do that you would start the Kubernetes cluster, configure the connection to GCE, and then type something like:

kubectl expose deployment my-nginx --port=80 --type=LoadBalancer
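The same exposure can also be written declaratively. A sketch of the Service object that a command like this generates, assuming the deployment’s pods carry the label app: my-nginx:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer      # asks the cloud provider (e.g. GCE) for a public IP
  selector:
    app: my-nginx         # routes traffic to pods with this label
  ports:
    - port: 80            # port exposed by the load balancer
      targetPort: 80      # port the nginx containers listen on
```

Applying this manifest with kubectl apply has the same effect as the imperative command, but the file can be versioned alongside the application.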


A deployment is a set of identical pods. A deployment ensures that a sufficient number of pod replicas are running at one time to service the app and shuts down pods that are not needed. Scaling on a signal such as CPU utilization is handled by the Horizontal Pod Autoscaler, which adjusts a deployment’s replica count up and down.
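A sketch of a Deployment that keeps three nginx replicas running; the names and image tag are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3               # Kubernetes keeps exactly this many pods running
  selector:
    matchLabels:
      app: my-nginx
  template:                 # pod template: what each replica looks like
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          resources:
            requests:
              cpu: 100m     # a CPU request is needed for CPU-based autoscaling
```

If one of the three pods dies, the deployment controller starts a replacement. A command like `kubectl autoscale deployment my-nginx --min=3 --max=10 --cpu-percent=80` would then layer CPU-based autoscaling on top.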


1. Load balancing and Service Discovery

2. Automatic Bin Packing

3. Storage Orchestration

4. Self-Healing

5. Batch Execution

6. Horizontal Scaling

7. Secret and Configuration Management

8. Automated Rollouts and Rollbacks

9. Helps you to Move Faster

10. Kubernetes is Cost Efficient
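To illustrate item 7, here is a sketch of how settings and credentials are kept out of the container image; all names and values are made up:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"             # plain, non-sensitive settings
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "example-only"   # made-up value; stored base64-encoded
```

A pod can consume both as environment variables (for example via envFrom), so the same image runs unchanged across environments.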


1. Kubernetes can be overkill for simple applications.

2. Kubernetes is very complex and can reduce productivity.

3. The transition to Kubernetes can be cumbersome.

4. Kubernetes can be more expensive than its alternatives.


Launching and Scaling Up Experiments at OpenAI:


OpenAI, an artificial intelligence research lab, needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to scale easily. Portability, speed, and cost were the main drivers.


OpenAI began running Kubernetes on top of AWS in 2016, and in early 2017 migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. “We use Kubernetes mainly as a batch scheduling system and rely on our autoscaler to dynamically scale up and down our cluster,” says Christopher Berner, Head of Infrastructure. “This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration.”

“Research teams can now take advantage of the frameworks we’ve built on top of Kubernetes, which make it easy to launch experiments, scale them by 10x or 50x, and take little effort to manage.”



The company has benefited from greater portability: “Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters,” says Berner. Being able to use its own data centers when appropriate is “lowering costs and providing us access to hardware that we wouldn’t necessarily have access to in the cloud,” he adds. “As long as the utilization is high, the costs are much lower there.”

Launching experiments also takes far less time: “One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days. In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work.”



One of the things Berner continues to focus on is Kubernetes’ ability to scale, which is essential to deep learning experiments. OpenAI has been able to push one of its Kubernetes clusters on Azure up to more than 2,500 nodes. “I think we’ll probably hit the 5,000-machine number that Kubernetes has been tested at before too long,” says Berner, adding, “We’re definitely hiring if you’re excited about working on these things!”



In recent years, the adidas team was happy with its software choices from a technology perspective — but accessing all of the tools was a problem. For instance, “just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who’s responsible, give the internal cost center a call so that they can do recharges,” says Daniel Eichten, Senior Director of Platform Engineering. “The best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week.”


To improve the process, “we started from the developer point of view,” and looked for ways to shorten the time it took to get a project up and running and into the adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago. They found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus.


Just six months after the project began, 100% of the adidas e-commerce site was running on Kubernetes. Load time for the e-commerce site was reduced by half. Releases went from every 4–6 weeks to 3–4 times a day. With 4,000 pods, 200 nodes, and 80,000 builds per month, adidas is now running 40% of its most critical, impactful systems on its cloud native platform.

“I call our cloud native platform the field of dreams. We built it, and we never anticipated that people would come and just love it.”



In early 2017, adidas chose Giant Swarm to consult, install, configure, and run all of its Kubernetes clusters in AWS and on premises. “There is no competitive edge over our competitors like Puma or Nike in running and operating a Kubernetes cluster,” says Eichten. “Our competitive edge is that we teach our internal engineers how to build cool e-comm stores that are fast, that are resilient, that are running perfectly.”