Cloud computing has changed the way modern applications are designed, deployed and maintained. One of the driving forces behind cloud adoption is serverless computing.
We can think of software as a collection of smaller pieces of code working together, each one solving a particular task. We refer to these small pieces of code as functions. These can run independently of each other, but when working in unison, they form the application itself. Serverless architecture exploits this trait. An application is broken down into its constituent functions, and each is deployed and run individually as opposed to being bundled together. Running these individual functions separately in the cloud is known as Function as a Service (FaaS), and it provides a plethora of benefits when designing modern software.
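As a sketch, a FaaS function is typically just a handler that receives an event and returns a result. The handler signature below follows the AWS Lambda convention (`event`, `context`), but the event payload shape and the order-total task are illustrative assumptions, not a fixed platform contract:

```python
import json

def handler(event, context=None):
    """Hypothetical FaaS handler: computes an order total from an event payload.

    The event shape (a JSON body containing a list of items) is an
    illustrative assumption rather than any provider's exact schema.
    """
    body = json.loads(event["body"])
    total = sum(item["price"] * item["quantity"] for item in body["items"])
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

The platform invokes the handler once per event; no server process belonging to the application runs between invocations.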
Serverless architectures are ideal for use cases where an application revolves around responding to events with code. Some common examples include data processing pipelines, application backends, APIs, chatbots, voicebots, and workflow automation.
In traditional architecture, developers would run code on a server to complete a specific task. The server could either be owned – usually hardware residing on the customer's premises – or rented from a cloud provider. Either way, the server would be available around the clock, waiting for requests to service. These servers would often sit idle, and even when processing requests, they would not use all their resources in terms of CPU or allocated memory.
Rather than running 24/7 and constantly waiting for requests to come in, serverless computing adopts a usage-based model, spinning up resources when requests are submitted. Once those requests are completed, the servers are powered down. As such, instead of paying for computing resources such as virtual machines every month regardless of their usage, serverless computing bills per function invocation. You only pay for the resources you use, when you use them.
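To make the billing difference concrete, here is a back-of-envelope comparison. All prices below are illustrative assumptions, not any provider's actual rates:

```python
# Illustrative, assumed prices -- not any provider's actual rates.
VM_HOURLY_RATE = 0.05        # always-on virtual machine, $/hour
PER_INVOCATION = 0.0000002   # $ per function invocation
PER_GB_SECOND = 0.0000167    # $ per GB-second of function runtime

def monthly_vm_cost(hours=730):
    """Cost of an always-on server, billed whether or not it serves requests."""
    return VM_HOURLY_RATE * hours

def monthly_faas_cost(invocations, avg_seconds, memory_gb):
    """Usage-based cost: pay per invocation plus per unit of runtime used."""
    return invocations * (PER_INVOCATION + avg_seconds * memory_gb * PER_GB_SECOND)
```

At, say, 100,000 invocations a month, each running 200 ms at 128 MB, the usage-based bill comes to cents, while the always-on machine costs the same tens of dollars regardless of traffic.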
A key differentiator between serverless-based applications and monolithic applications is the way in which code is executed. In serverless architectures, functions are only run as and when they are needed. When a request is received, an event is triggered and the cloud provider runs the relevant function automatically, reducing the amount of time servers are idle.
In an Infrastructure as a Service (IaaS) environment - which better suits traditional software - applications would need to run continuously, always listening for incoming requests to process and output data as required. If the hosting infrastructure or application is not running when a request is sent, the request will not be processed. This always-on approach results in wasted resources as the code will need to be run on a server even when there are no requests to service. Let’s take as an example a web API designed to process orders from an online store. With an always-on approach, if this web API receives no requests during the night, the resources used overnight are wasted. FaaS allows the web API to always be available, but only actually runs the code as and when it is required.
Software written to be run in a serverless environment must be decentralised, with each component running individually in the cloud as a function. These components can still communicate with one another in order to share and process data.
However, they do not require the entire application to be running when only a subset of its components are actually servicing user requests. Any code that facilitates the creation of a user interface or other user-facing content is deployed to the users' devices, and other functions, such as storing details in a database, are enacted by triggering the exact function needed on the FaaS platform. This decentralisation of code also allows an application to be scaled more easily in order to service a larger than usual number of user requests.
This contrasts with systems deployed in an IaaS environment, where large applications are deployed as a singular unit, serving all of the functions of the software. Components of the software responsible for rendering user interfaces, storing data in a database and sending requests to other services such as Stripe to manage payments are all deployed together.
Serverless computing removes the need for application developers and businesses to own or rent their own servers and infrastructure. Instead, code is stored and managed by a cloud provider, and the supplied functions are run in the correct environment with the relevant accompanying software as and when required. Application developers no longer need to manage how their code will be run. When a function is triggered, the serverless computing service automatically finds a server to run the code on, runs the code and outputs the result to the relevant destination.
In traditional environments, developers would need to update and maintain the server they use to run their code. In the past, having this control was paramount to ensuring the performance and stability of the software. Thanks to modern standards and the environments provided in FaaS offerings, controlling and managing your own infrastructure is no longer as essential as it once was.
Having discussed differences between serverless and traditional architectures, we will now describe the components that make up the serverless model and how they play a role in achieving flexibility and scalability.
API gateways provide the entry point for clients to send requests to a service and retrieve data. They receive requests from all clients and trigger the relevant function, passing data to the relevant component and returning the resulting data back to the client. API gateways provide an easy way to change which functions are called for a given request without ever needing to update client code.
API gateways can be used to aggregate multiple function invocations together to achieve composite functionality. A client may send a single request, but that one request can trigger the execution of more than one function. These functions produce a single response that is returned to the user. This approach helps reduce the number of requests the client needs to make, improving performance and the client experience. To exemplify, consider a user registering an account: a single client request is sent to the API gateway, but multiple functions are executed to store user details, send a welcome email and generate a 'welcome' coupon code.
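That aggregation step can be sketched as follows. The function names (`store_user`, `send_welcome_email`, `generate_coupon`) are hypothetical stand-ins for separately deployed functions, and the fixed return values simulate their side effects:

```python
def store_user(user):
    # Hypothetical stand-in for a function that writes to a user database.
    return {"user_id": 123, "email": user["email"]}

def send_welcome_email(email):
    # Hypothetical stand-in for a function that enqueues a welcome email.
    return {"email_sent": True}

def generate_coupon(user_id):
    # Hypothetical stand-in for a function that creates a coupon code.
    return {"coupon": f"WELCOME-{user_id}"}

def register_account(request):
    """Gateway-style aggregation: one client request fans out to several
    functions, whose results are combined into a single response."""
    user = store_user(request)
    email = send_welcome_email(user["email"])
    coupon = generate_coupon(user["user_id"])
    return {**user, **email, **coupon}
```

The client makes one call and receives one combined payload, even though three functions ran behind the gateway.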
The business logic of a serverless application can be found in the functions developers write. These functions are executed on a FaaS platform provided by a cloud provider. They typically perform small, focused tasks and execute only for a short period of time; platforms impose an upper time limit before the function times out, commonly in the range of five to fifteen minutes depending on the provider. This maximum runtime is one of the reasons why tasks are broken down into many smaller, interlinked functions that together solve a wider problem.
For a function to run in isolation from others that execute on the same physical servers, it needs to execute within a container. Containers are small, lightweight virtual environments that, in a similar manner to virtual machines, abstract physical resources and dedicate them to the applications running within them. They are very inexpensive to spin up and run, and allow functions to execute very quickly after an event is detected.
Functions initialise when they get triggered by an event. Imagine a user updating their contact details in a mobile application. The API gateway receives the request, invokes the relevant functions, and returns the result once the function has completed its task. Functions can also be triggered by passive events such as on a time schedule or when a file is updated in a file storage service.
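The trigger types above can be sketched as a dispatch on the event's source. The event field names (`source`, `detail`) and the routing table are illustrative assumptions rather than any provider's exact schema:

```python
def on_http_request(detail):
    # Triggered by a client request arriving through the API gateway.
    return f"updated contact details for user {detail['user_id']}"

def on_schedule(detail):
    # Triggered passively on a time schedule.
    return "ran nightly cleanup"

def on_file_updated(detail):
    # Triggered passively when a file changes in a storage service.
    return f"reprocessed {detail['key']}"

# Hypothetical mapping from event source to the function it triggers.
ROUTES = {
    "http": on_http_request,
    "schedule": on_schedule,
    "storage": on_file_updated,
}

def dispatch(event):
    """Invoke the function registered for this event's source."""
    return ROUTES[event["source"]](event.get("detail", {}))
```

Each function initialises only when its event arrives; nothing in this table runs while no events are pending.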
Serverless environments are stateless. Functions are not necessarily aware of previous executions; each will typically receive a piece of data as a parameter, operate on it, and then either store the result somewhere or return it to the relevant function or client.
Persisting data such as user contact details, product information and more can be achieved using a database Backend as a Service (BaaS) solution. Functions running in a serverless application need to persist data somewhere for later usage by the same functions or for other components in the wider application. Many cloud providers offer some form of database management service whereby the database is maintained and deployed by the cloud provider, and functions simply leverage the service. In addition to persisting data, some databases can be used for caching data where writing and reading from a database is inefficient.
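A sketch of that pattern: a stateless function persists its result through a thin data-access layer so that a managed database service can sit behind it. The in-memory dict below is a stand-in for the provider's database service, and the function and field names are assumptions:

```python
# Stand-in for a managed database service (e.g. a key-value BaaS offering).
# In a real deployment this would be a client for the provider's database,
# not process-local state, since function instances are ephemeral.
_db = {}

def save_contact_details(user_id, details):
    """Persist a user's contact details for later use by other functions."""
    _db[user_id] = details
    return {"saved": True}

def get_contact_details(user_id):
    """A separate function (or later invocation) reads what was persisted."""
    return _db.get(user_id)
```

Because the functions themselves hold no state between invocations, anything that must survive a single execution goes through the database service.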
By design, serverless architecture enables the wider business to inherit technical benefits around efficiency and scalability. Below we explore the benefits as they are applicable to each type of stakeholder.
As with every new technology, serverless comes with a handful of challenges. Most of these can be mitigated with careful planning and preparation.
Serverless computing is solving some critical challenges in application development. It allows engineers to develop their applications as small, individual functions that work together to form a powerful, naturally scalable solution that can serve large numbers of users concurrently. Infrastructure management is handled by the cloud provider, simplifying and streamlining operations.
The caveat to all these benefits, as with all new technologies, is that developers need training and relevant knowledge to design and maintain serverless applications using best practices.