
Serverless Computing: What Is It?

A serverless system makes for a more convenient and efficient experience

A recent trend in cloud programming is known as “serverless” programming. The term is a bit confusing, because it does not mean that your code isn’t running on a server. What it does mean is that you don’t have to manage the server(s).

The Physical Server

In the early days of the Internet, nearly all communication was directly between the “client” (the person using a web browser or other application) and the “server” (the physical device you were communicating with). Of course, there is a limit to the number of connections that a single physical machine can process. Early on, several mechanisms were developed that allowed companies to grow their services beyond what a single machine could handle, splitting the load among multiple machines. A load balancer, for instance, is a machine that takes requests and divvies them up among a multitude of other machines.
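
As a rough sketch (not any particular product’s implementation), the heart of a load balancer can be pictured as a round-robin dispatcher that hands each incoming request to the next server in a pool. The server names below are hypothetical.

```python
import itertools

# Hypothetical pool of backend servers sitting behind the load balancer.
BACKENDS = [
    "app1.example.com",
    "app2.example.com",
    "app3.example.com",
]

# Cycle through the pool so each new request goes to the next machine in turn.
_next_backend = itertools.cycle(BACKENDS)

def route_request(path: str) -> str:
    """Pick a backend for this request using simple round-robin."""
    backend = next(_next_backend)
    print(f"Forwarding {path} to {backend}")
    return backend

# Ten requests get spread evenly across the three servers.
for i in range(10):
    route_request(f"/page/{i}")
```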

During that time, almost everyone ran their services on physical hardware that they owned. Sometimes this hardware sat on-site at a company’s place of business, and sometimes it was co-located at a hosting company. Some hosting companies also allowed you to rent servers for hundreds of dollars a month.

The Virtual Server

Eventually, hosting companies developed the ability to deploy “virtual” servers. Essentially, they would buy very large machines (lots of memory and processors) and, using special virtualization software, divide these machines up into “virtual” servers. This was not only more efficient (more servers in less space), but it also made managing the servers easier. Instead of systems administrators having to manually install operating systems and software on each computer, each physical machine now runs a special “hypervisor” program that does the work of provisioning and deploying the virtual servers.
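
To give a concrete feel for what “talking to a hypervisor” looks like, here is a small sketch using the libvirt Python bindings, assuming a local QEMU/KVM hypervisor. It simply asks the hypervisor which virtual servers it is currently managing.

```python
import libvirt  # Python bindings for the libvirt virtualization API

# Connect to the local hypervisor (QEMU/KVM here; the connection URI is an assumption).
conn = libvirt.open("qemu:///system")

# List the virtual servers ("domains") the hypervisor is managing.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {state}")

conn.close()
```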

Now, instead of waiting days or weeks for a systems administrator to build and deploy a new server for you, you simply click a button and a few seconds later the server is ready for you. When you are done with it, one more click and it goes away. Instead of being billed by the month, you are billed by the hour, and only for the hours you actually use.

This made it very easy to scale applications. If you know your busiest time is on a specific day, you can simply order more machines the day before, add them to your load balancer, and you are ready to go. After your big day, you can simply turn them off and you no longer have to pay for them. This type of cloud computing, known as Infrastructure as a Service (IaaS), made deploying all sorts of network infrastructure extremely fast and easy.
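
To sketch what “ordering more machines” looks like in an IaaS world, the snippet below uses AWS’s boto3 SDK to launch two extra virtual servers before a busy day and shut them down afterwards. The AMI ID, instance type, and region are placeholders, and registering the new machines with a load balancer is left out.

```python
import boto3  # Official AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# The day before the rush: launch two extra virtual servers.
# The ImageId below is a placeholder for your own machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.small",
    MinCount=2,
    MaxCount=2,
)
instance_ids = [inst["InstanceId"] for inst in response["Instances"]]
print("Launched:", instance_ids)

# ...add the new instances to your load balancer and serve the traffic...

# After the big day: terminate them so you stop paying for them.
ec2.terminate_instances(InstanceIds=instance_ids)
```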

However, this still relies on systems administrators to monitor their traffic and know when they need to add more servers, or take them back offline. Wouldn’t it be nice if the system just “knew” how many servers to launch based on the traffic, and just billed you for how much processing time you used?

Serverless environments do just that.

The Serverless System

In a serverless system, you don’t allocate any machines, whether real or virtual. You don’t tell the system how many processes you want running. You don’t tell the system when to boot up or shut down. Instead, the system simply waits for traffic to your application to appear. When it does, it boots up an instance of your service and starts processing requests. When the requests are completed, the system waits a little while for more traffic, and if no more requests come in, it shuts down. If more requests come in than that one instance can handle, the system automatically boots up another one. You are simply billed for the number of requests processed.

While there are many serverless options available, the primary one is Amazon’s AWS Lambda. Lambda allows for all sorts of serverless functions, not just web requests. For instance, Lambda services can be triggered by all sorts of events within Amazon’s AWS infrastructure – processing an uploaded file, receiving an email, handling an AI interaction, processing a new record in a database, receiving a message or event from another application, or some other system-level event. You don’t have to allocate any servers to run these functions – they are auto-created and auto-destroyed as needed. Amazon bills you by the millisecond.
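
To make one of these triggers concrete, here is a minimal sketch of a Python Lambda handler for the “processing an uploaded file” case, reacting to an S3 upload event. The processing step is just a placeholder, and in a real deployment the function would be packaged and wired to the trigger through AWS’s own tooling.

```python
# Minimal AWS Lambda handler (Python runtime) for an S3 "object created" event.
# Lambda calls this function for each event; you never provision a server yourself.

def lambda_handler(event, context):
    # An S3 trigger delivers one or more records describing the uploaded objects.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New upload: s3://{bucket}/{key}")
        # ...process the file here (resize an image, parse a CSV, etc.)...

    # The return value is reported back to the invoking service.
    return {"statusCode": 200, "body": "processed"}
```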

While AWS was the first to popularize the concept of serverless programming with their Lambda service, other popular serverless environments have sprung up. The top competitors to AWS Lambda today include Microsoft Azure Functions, Google Cloud Functions, and CloudFlare Workers.

It may seem paradoxical, but you can actually run serverless applications on your own servers with the open-source Apache OpenWhisk framework. This allows you to separate server management from application deployment. You may be running your own servers, but your application developers don’t have to care how many servers they need to deploy to, and the servers don’t have to be individually provisioned for any particular application ahead of time. The job of the systems administrators is simply to make sure there is enough capacity for “everything”; they don’t have to worry about the capacity for any particular service – the serverless framework takes care of distributing tasks to servers as needed.
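
For comparison, an OpenWhisk “action” in Python is just a main function that takes a dictionary of parameters and returns a dictionary. Here is a minimal sketch (the file and action names are made up):

```python
# hello.py -- a minimal Apache OpenWhisk action in Python.
# OpenWhisk calls main() with the invocation parameters as a dict
# and expects a JSON-serializable dict back.

def main(params):
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

Deploying it is then a matter of something like wsk action create hello hello.py with OpenWhisk’s wsk command-line tool, and invoking it with wsk action invoke hello --result --param name Alice; the framework decides which of your servers actually runs it.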

In the next installment, we will look at how to write and deploy a simple serverless application.


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software R&D engineer at Specialized Bicycle Components, where he focuses on solving problems that span multiple software teams. Previously he was a senior developer at ITX, where he developed applications for companies across the US. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
