Getting Started with Kubernetes: A Brief History of Cloud Hosting

A history lesson for a better understanding of why web infrastructure hosting is the way it is

Oftentimes it is hard to understand why something is the way it is unless you understand its history. To start with, I want to present a quick overview of the history of web infrastructure hosting to give you a better feel for what sorts of problems cloud native development solves.

The Old Way

Way back in the early days of the Internet, web applications were hosted on specific server machines. That is, when you wanted to host a web application, you had to purchase a physical machine, install Linux or some other operating system on it, and then pay an Internet Service Provider to put your machine on their network. This process was both time-consuming and expensive, often costing hundreds of dollars a month just to rent the rack space for a server you had already bought yourself.

If you needed more capacity, this process was equally painful. You had to buy another server, install another copy of Linux on it, make sure it was configured exactly like your other server, and then pay to host that one as well.

Additionally, you had to do quite a bit of work to share the load between servers. You either had to implement a DNS trick, such as round-robin DNS, to get clients to split their time between machines, or you had to buy another piece of equipment, a load balancer, which took incoming traffic and balanced it between machines.
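To make the load-balancing idea concrete, here is a minimal Python sketch of the round-robin approach, rotating requests across a pool of servers (the addresses are made up for illustration):

```python
import itertools

# Rotate through a fixed pool of backend servers, handing each
# incoming request to the next server in the cycle.
servers = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
rotation = itertools.cycle(servers)

for request_id in range(6):
    print(f"request {request_id} -> {next(rotation)}")
```

A hardware load balancer does essentially this (plus health checks and connection handling) at the network level, while round-robin DNS achieves a similar spread by rotating the order of the addresses it returns to clients.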

If one machine had faulty hardware, well, you were back to the drawing board. If you needed to upgrade the operating system, you had to log in to each machine, take it out of the lineup, and manually upgrade components. This was ridiculously hard work, and the systems administrators who performed these tasks also cost a lot of money. Systems were eventually built to make the task of synchronizing such systems easier, but those had their own complexities to learn.

The Virtual Private Server

The first move to “the cloud” was the Virtual Private Server, or VPS. Advances in operating systems allowed administrators to run virtual machines (VMs) on a host machine at essentially the same speed as a non-virtual machine. A virtual machine is simply a “fake” machine running under a real machine. For instance, you can run a Windows operating system within a Macintosh operating system using a virtual machine. The Windows operating system “thinks” it is a real computer, when actually it is just acting like a sub-computer of the Macintosh.

Virtual machines have been around since the 1970s, but it was not until much later that virtual machines on standard processors were able to function at a speed close to “bare metal” and simultaneously be sufficiently secure that you didn’t have to worry about two VMs on the same host spying on each other. The way a virtual machine worked is that the actual hardware ran a slimmed-down operating system called a hypervisor. The hypervisor was in charge of creating and running virtual machines under it, and making sure the resources of the computer (RAM, CPU, and input/output operations) were properly distributed among the various VMs running under it.

Virtual machines paved the way for all sorts of innovations. To start with, systems administrators no longer had to worry about getting the right size machine for the task. They could just purchase a high-powered machine, and then later decide how to divide the computing power up. If they bought a machine with 8 CPUs, 64GB of memory, and 10TB of drive space, they could decide to give 2 CPUs, 1GB of memory, and 3TB of drive space to one VM; 3 CPUs, 2GB of memory, and 1TB of drive space to another VM; and so forth. CPUs can even be split, with a VM being given only a portion of a CPU’s time.
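As a toy illustration of that carving-up process, the following Python sketch checks that a set of VM allocations (using the numbers from the paragraph above) fits within the physical host. A real hypervisor enforces these limits; this just does the bookkeeping:

```python
# Physical host capacity and the VM allocations described above.
host = {"cpus": 8, "ram_gb": 64, "disk_tb": 10}
vms = [
    {"name": "vm1", "cpus": 2, "ram_gb": 1, "disk_tb": 3},
    {"name": "vm2", "cpus": 3, "ram_gb": 2, "disk_tb": 1},
]

# Total up what the VMs request and compare against the host's capacity.
for resource, capacity in host.items():
    used = sum(vm[resource] for vm in vms)
    print(f"{resource}: {used} of {capacity} allocated "
          f"({capacity - used} left for more VMs)")
```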

Since the actual running operating system was virtualized, this made installation and maintenance a lot easier. If one physical host was having hardware problems, you could pause the VMs, move them to another physical host, and start them back up. It was easy to replicate VMs because their hard drives were all visible to the hypervisor. Thus, management of the servers was greatly simplified.

Hosting companies were able to take great advantage of these advances as well. They would offer users the ability to buy VMs with just a click of a button. You just selected which system image you wanted to install and what size of computer you wanted, clicked a button, and the hosting platform would find a machine with enough capacity for your needs, allocate a VM, and send you the IP address. These were known as virtual private servers, or VPSs. Additionally, you could request the allocation of a load balancer to distribute traffic to all of your different VPSs.

From Virtual Private Servers to Containers

The problem with VPSs is that they actually have quite a bit of needless overhead:

  • Each VPS has a complete copy of the operating system stored on-disk.
  • Each VPS is running a complete copy of the operating system in its memory.

This all seems like overkill, especially considering that what we usually want to run is just a single process, such as a web server or database. What if we didn’t really need all of that overhead? What if there were a way to give developers something similar to full access to a server, but without the overhead of a full machine?

Enter the container. Containers are similar to virtual machines in that they give a developer something that feels like an isolated machine. However, a container actually shares its operating system kernel, and usually much of its filesystem, with the other containers on the same host, while remaining isolated from them. That is, each container thinks and acts like it is an independent machine, even though it is usually sharing both the operating system and the filesystem with other containers. The most popular container system is known as Docker, and you can learn more about building and running Docker containers in the series linked at the end of this article.
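One way to see this sharing in action, assuming Docker is installed on a Linux host, is to compare the kernel version reported on the host with the one reported inside a container. They match, because the container uses the host’s kernel rather than booting its own:

```python
import subprocess

# Ask the host and a throwaway Debian container for their kernel versions.
# Both calls use standard tools: `uname -r` and `docker run --rm`.
host = subprocess.run(["uname", "-r"], capture_output=True, text=True)
container = subprocess.run(
    ["docker", "run", "--rm", "debian:stable-slim", "uname", "-r"],
    capture_output=True, text=True,
)

print("host kernel:     ", host.stdout.strip())
print("container kernel:", container.stdout.strip())  # same kernel, shared
```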

VMs and VPSs, while flexible, had quite a bit of overhead, as each instance required a complete, independent operating system. Now, with containers, all of this overhead is shared by all or most of the containers running on a physical machine. Each container only adds the memory and disk space used by the specific process or processes that it was launched to run.

So, a typical Linux server usually has an installed footprint of 10 to 20 gigabytes of disk space. Such a server is usually running a bunch of kernel (operating system) processes, the init program, a system logger, a terminal program, an SSH service, and possibly other system services. These services typically use up about half a gigabyte of working memory (including disk cache). With containers, this price only has to be paid once per physical machine, rather than once per VPS, while still giving developers the option to make any modification they want on their own running instance.
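The arithmetic behind that savings is simple. Here is a rough, illustrative calculation, using the half-gigabyte estimate from the paragraph above:

```python
# Compare base-OS memory overhead for 20 isolated environments:
# one full OS per VPS versus one shared OS per physical host.
instances = 20
os_overhead_gb = 0.5  # rough working-memory cost of base OS services

vps_total = instances * os_overhead_gb  # every VPS pays the price
container_total = 1 * os_overhead_gb    # containers share one OS

print(f"VPS overhead:       {vps_total:.1f} GB")
print(f"Container overhead: {container_total:.1f} GB")
print(f"Memory reclaimed:   {vps_total - container_total:.1f} GB")
```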

Additionally, a set of standards developed around containers which allows them to be created, manipulated, stored, and retrieved by tools from a variety of different projects and vendors. So, there might be a standard container image for a base install of Debian; I can pull down that image, install my application on top of it, create a new image out of the result, and store it back in a registry.
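Using Docker’s standard command-line tools, that pull-modify-store cycle looks roughly like the following sketch (the image name myrepo/myapp is hypothetical, and pushing to a registry would additionally require credentials):

```python
import subprocess

def run(*cmd):
    # Small helper: run a command and stop if it fails.
    subprocess.run(cmd, check=True)

# 1. Pull a standard Debian base image from the public registry.
run("docker", "pull", "debian:stable-slim")

# 2. "Install" an application on top of it (here, just dropping in a file).
run("docker", "run", "--name", "build-demo", "debian:stable-slim",
    "sh", "-c", "echo 'my application' > /app.txt")

# 3. Save the modified container as a new image.
run("docker", "commit", "build-demo", "myrepo/myapp:1.0")

# 4. `docker push myrepo/myapp:1.0` would then store it in a registry.
```

In day-to-day practice this layering is usually expressed in a Dockerfile and built with docker build, but the commit-based version above shows the underlying idea.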

Additionally, because these containers required so little overhead, it was possible for developers to run multiple containers on their own personal computers. In fact, they could set up containers that were exact replicas of the setup used on the servers, with minimal impact on the performance of their own computers.

This almost eliminates an entire class of errors. Historically, the configuration of a developer’s personal computer rarely matched exactly what was on the server. Developers might be running a different version of the programming language, database, or other components of the environment. With containers, developers can specify, run, and validate on their local computers the precise operating environment that will be running on the server. The entire operating system image itself can be the subject of development and testing.

Cloud Native Infrastructure

The move to containerized computing then paved the way for what is known as cloud native infrastructure. In cloud native infrastructure, entire application infrastructures are specified, created, and deployed using code. That is, you don’t have to go and ask for a bunch of servers, then install the image you want on them, and then add them to a load balancer. Instead, you tell a control service that you want to build a load-balanced system using some particular image and run between 3 and 10 copies of it, depending on the load. The control service then launches everything you need, and keeps everything in sync.
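As a flavor of what such a specification can look like, here is a hypothetical desired-state description, modeled loosely on the kind of declaration you would hand to a system like Kubernetes (the field names and image name are made up for illustration, not a real API):

```python
# Declarative description: what to run, not how to set it up.
desired_state = {
    "image": "myrepo/myapp:1.0",  # the container image to run (hypothetical)
    "min_replicas": 3,            # never run fewer than this many copies
    "max_replicas": 10,           # never run more than this many copies
    "load_balanced": True,        # put every copy behind a load balancer
}
```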

If a container dies, it gets cleaned up and restarted by the control service. If you start to get a higher load, the control service will launch more copies of your container. The control service will handle registering each of these containers with the load balancer, as well as finding which machines it should run the containers on.
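The heart of such a control service is a reconciliation loop: repeatedly compare the actual state of the cluster against the desired state, and correct any difference. The following self-contained Python sketch mimics that loop with a toy in-memory “cluster”; everything here is illustrative rather than any real Kubernetes API:

```python
# Desired state, as declared above: 3 to 10 copies of one image.
desired = {"image": "myrepo/myapp:1.0", "min_replicas": 3, "max_replicas": 10}

containers = []  # toy stand-in for the containers actually running

def reconcile(current_load):
    # One container per 100 requests/sec, clamped to the declared range.
    target = max(desired["min_replicas"],
                 min(desired["max_replicas"], current_load // 100 + 1))
    while len(containers) < target:       # crashed or missing? start more
        containers.append(desired["image"])
    while len(containers) > target:       # load dropped? scale back down
        containers.pop()

for load in (50, 450, 1200, 200):         # simulated request rates
    reconcile(load)
    print(f"load={load:4d} req/s -> running {len(containers)} containers")
```

A real control service also registers and deregisters each container with the load balancer and decides which physical machines should host them, but the compare-and-correct loop is the core idea.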

Cloud native applications essentially turn infrastructure into code. You write code which specifies what your network infrastructure should look like, and the cloud makes it happen.

Today, there are a lot of choices for building cloud native applications. However, there is one system that runs on practically every hosting platform: Kubernetes, also known as K8s. Kubernetes is based on a system originally developed by Google for managing its own infrastructure. However, Kubernetes is an open-source product and open standard which is used by numerous different cloud providers, including the big guys like Google, Microsoft, AWS, and IBM, as well as smaller players such as Linode and DigitalOcean. Kubernetes essentially gives you the ability to build a scalable cloud infrastructure that doesn’t tie you down to any specific provider.

In the next installment, we will walk through the process of setting up a simple Kubernetes cluster on the Internet.


Jonathan Bartlett’s Docker series:

How the Docker Revolution Will Change Your Programming, Part 1. Since 2013, Docker (an operating system inside your current operating system) has grown rapidly in popularity. Docker is a “container” system that wraps the application and the operating system into a single bundle that can be easily deployed anywhere.

Part 2: A Peek Under the Covers at the New Docker Technology. Many advances enable Docker to significantly reduce a system’s overhead. Docker, over and above the basic container technology, also provides a well-defined system of container management.

Part 3: Working with Docker: An Interactive Tutorial. Docker gives development teams more reliable, repeatable, and testable systems, deployed at massive scale with the click of a button. In this installment (Part 3), we look at the commands needed to start and run Docker, beginning with containers.

Part 4: Docker – An Introduction to Container Orchestration. This tutorial will focus on Docker’s swarm because it comes installed with Docker and uses the same standard Docker files. By splitting the app into different containers for each service, we can choose how our app scales and even scale different parts in differing amounts.


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software R&D engineer at Specialized Bicycle Components, where he focuses on solving problems that span multiple software teams. Previously he was a senior developer at ITX, where he developed applications for companies across the US. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
