Haven’t we seen serverless before?
This post is about understanding serverless in the context of CGI. It covers whether serverless is actually a new innovation, how serverless is different from CGI of the 90s, and what you should understand about serverless in the ’20s to take advantage of its potential.
When you finish this post you will understand the serverless of today in 2020 and why you will want to use it. This post will be most interesting to IT decision-makers, architects, developers, and other old CGI programmers like myself.
Haven’t we seen serverless before?
In my work with OpenShift and Kubernetes I am always looking at new innovations, and one of them is serverless. Now, at first I didn’t get it at all. I didn’t get it because I could not understand what all the fuss was about. People were going on about this amazing new technology that enabled code to be executed on demand. Huh? What’s new? I used to do that in the 90s!
In the 90s, CGI or common gateway interface (not computer-generated imagery) was an innovation in web programming. It enabled web servers to execute command-line programs, usually scripts, in order to generate dynamic web content. Prior to the definition of a standard, each web server had a different way of communicating with the external script it executed. Web servers were quick to adopt the new standard, and the general way of indicating that a file was an executable script was to put it in a directory called cgi-bin on the web server.
Essentially, a file in cgi-bin was a program executed on demand, triggered by an incoming HTTP GET or POST request. At last a web developer could write dynamic code to process forms or generate pages dynamically, and they didn’t have to understand how the web server talked to the operating system: all they needed to do was write the code and FTP the file into the correct directory. I wrote most of my CGI scripts in Perl and PHP.
So yes, we have seen this ability to execute code on demand before. However, there the similarity ends. Getting hung up on the idea of ‘on-demand’ execution, as I did, blinkers your vision to the potential of serverless.
In the 90s, the common gateway interface came about to make it easier for developers to create dynamic web pages and process forms. That is not a problem for us today in the 2020s.
Solving a different problem: resource optimization
When trying to understand what serverless is, most people get stuck on the word ‘serverless’; after all, it still needs a physical server to run on. If this is you, think of ‘server’ as in the client-server application model, where a ‘server’ is an application that runs continuously, waiting to process requests. It runs all the time, even when it is doing nothing. Serverless says: the application shall only run when it needs to serve.
And therein lies the problem serverless solves: optimisation of resource consumption.
In this way, one could think of serverless as another iteration of virtualisation, where containers can now occupy the compute resources of another container that is not using them, without developers having to worry about how to make that happen. If that also sounds familiar, you are right: it was called time-sharing, and it debuted in 1961. So on top of all the advantages Kubernetes gives applications running in containers…
Serverless gives on-demand and time-sharing capability to containers.
Functions-as-a-Service (FaaS) could be thought of as Serverless 1.0. When AWS released Lambda in 2014, it was not the only FaaS on the market, but it soon became the most popular. It provided a way for developers to write code, put it into a ‘directory’ on the Lambda service and have it execute on demand.
Knowing what I now know, Lambda is actually more akin to 90s CGI, or to its later update FastCGI, an extension developed so that time-consuming work (like opening a database connection) could be done only once, rather than every time the script is invoked.
Lambda gives you on-demand execution with no need to understand the underlying machine, but it comes with constraints: you have to keep the function light so it does not take too long to start from cold; you can run only one process per instantiation; programmers are limited in their choice of language; and Lambda and other FaaS options carry a high degree of vendor lock-in.
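The CGI parallel is easy to see in the shape of a Lambda function: you write a single handler, the platform invokes it per request, and there is no server loop anywhere in your code. A minimal Python sketch, assuming the event shape that API Gateway’s proxy integration passes in (the `name` parameter is my own example):

```python
import json

def handler(event, context):
    """A Lambda-style function: invoked on demand by the platform,
    which passes the request in `event`. No server loop is written."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }
```

Anything done at module level outside `handler` (like opening a database connection) survives across invocations of a warm instance, which is the FastCGI-like behaviour described above.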
Let us come back to that ‘server’ in serverless again. Like our CGI script, a serverless program still needs a platform to run on: a platform that is always running, waiting to receive on-demand application requests. For CGI that was the web server; for serverless applications in 2020 it is a serverless platform.
Serverless 2.0: The serverless platform
By 2018 there were several projects in the open-source community seeking to advance us from serverless 1.0 to 2.0, including OpenFaaS, OpenWhisk, Kubeless and Knative. In 2020, Knative is looking to be the standout platform of choice for serverless applications.
As Joao Cardoso describes in his blog from iamondemand.com “Knative’s differentiator was its aim to provide the building blocks needed to make serverless applications a reality on Kubernetes and not necessarily be a full-blown solution by itself.”
Like Kubernetes, Knative was originally released by Google, along with other companies including Red Hat. If you would like to know more, here is a comparison between Knative and other serverless frameworks from serverless observability vendor Epsagon.
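To give a flavour of what those building blocks look like in practice, a Knative Service is declared much like any other Kubernetes resource; Knative then handles routing, revisioning and scale-to-zero for you. A sketch of such a manifest (the service name and image reference are placeholders, not a real deployment):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # placeholder name
spec:
  template:
    spec:
      containers:
        - image: example.registry.local/hello:latest  # placeholder image
```

With nothing more than this, the container runs only while requests are arriving and scales back to zero when idle: the ‘application shall only run when it needs to serve’ idea expressed as configuration.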
So why is this on-demand execution of applications needed today when we have almost unending access to commodity compute?
The most obvious reason is that you only pay for computing resources when you are using them; for those of you responsible for an IT budget, that alone is significant. That is not all, though. Think about being able to pack more processing potential into a smaller space. As you reach out to the edge, it is not your financial budget that matters here but your limited footprint. Imagine how many more demand-driven, container-based applications you can put on a compute node at the edge of a telco network, or on a satellite, a plane or a smart car.
Serverless could be as revolutionary to cloud computing in the 2020s as time-sharing was to computing in the 70s.
If you would like to read more about Knative, or see some examples of running Knative applications, have a look at some of the blogs from my colleague Michael Calizo on Medium and openshift.com.
If you’re looking for something a little more hands-on, check out the Getting Started with OpenShift Serverless module included in the free Developing on OpenShift training.