What is Serverless Architecture and what are its advantages?


Whether you are an industry veteran or just interested in modern approaches to architecture, you have probably heard about a new way to run applications in a cloud environment called ‘Serverless’. In this article we’re going to talk about what it actually is, whether it’s really worth it, and why conventional servers fell out of favor so abruptly.


Definition

Here’s a Serverless definition from Wikipedia.

Serverless computing is a cloud-computing execution model in which the cloud provider runs the server, dynamically managing the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity.

This might not be entirely clear at first, but don’t worry, you’ll figure it out soon. Before that, let’s read another Wikipedia definition. This one is about FaaS, a concept closely connected to Serverless.

Function as a service (FaaS) is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage application functionalities without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app.

Together, these definitions point to four features of anything that can be called Serverless:

  1. Abstraction. You do not manage the server on which your program runs; you know absolutely nothing about it. All the details of the OS, updates and network settings are hidden, so that you can focus on developing, not administration (a minimal handler sketch follows this list).

  2. Elasticity. The Serverless provider automatically gives you more or fewer computational resources depending on how heavy the load on your app is.

  3. Cost efficiency. The price depends on whether your app is being used or not. If it isn’t, you don’t pay anything, because you don’t consume any computational resources. You pay only for the time your app is actually being used.

  4. Limited lifecycle. Your application runs in a container, and after some time (from several minutes to many hours) the service stops it automatically. If your app needs to be called again, a new container will, of course, be launched.
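
To make these four points more concrete, here is a minimal sketch of what such a function looks like, using the AWS Lambda handler convention for Python; the function name, event fields and response shape are illustrative.

```python
import json

# You write only this function. The provider decides which machine and OS
# it runs on, scales the number of parallel containers with the load, and
# bills you per invocation.
def handler(event, context):
    # 'event' carries the input (an HTTP request, a queue message, etc.);
    # 'context' exposes runtime metadata such as the remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```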

Application area

The Serverless model can be used almost anywhere, with some exceptions. However, some cases are the easiest and safest for a first try, and we recommend starting with them.

Such cases might be, for example, background tasks like:

  • creating additional copies of an image after it’s been uploaded to a website;
  • scheduled creation of a backup;
  • sending asynchronous notifications to users (push, email, SMS);
  • different export and import tasks.

All these tasks are either scheduled or do not imply that the user gets an instant response. This is because applications (functions) in Serverless do not run constantly: they are launched when needed and then shut down automatically. Each launch therefore takes some time, sometimes up to several seconds.
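
As an illustration of the first bullet above, here is a rough sketch of such a background task on AWS Lambda, triggered by an S3 upload event. The bucket name, thumbnail size and the Pillow dependency are assumptions made for the example.

```python
import io

import boto3
from PIL import Image  # Pillow has to be packaged together with the function

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = "my-site-thumbnails"  # illustrative bucket name


def handler(event, context):
    for record in event["Records"]:              # standard S3 event shape
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original image into memory
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Resize it
        image = Image.open(io.BytesIO(original))
        image.thumbnail((256, 256))
        buffer = io.BytesIO()
        image.save(buffer, format="PNG")
        buffer.seek(0)

        # Store the additional copy in a separate bucket
        s3.put_object(Bucket=THUMBNAIL_BUCKET, Key=f"thumb-{key}", Body=buffer)
```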

However, it doesn’t mean that you cannot use Serverless in the parts of an app that users interact with or when the response time is important. Quite the opposite! Serverless functions are widely used for:

  • chatbots;
  • backend for IoT apps;
  • management of requests to your main backend (e.g. to identify the user by User-Agent, IP and other data, or to look up the user’s location from their IP), as the sketch after this list shows;
  • completely independent API endpoints. These use cases require a deeper understanding of the model from the developer, though, so if I were you, I would start with background tasks.
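
For the request-management case, a sketch of a function behind API Gateway (REST, proxy integration) that inspects the client’s User-Agent and source IP might look like the following; in a real app you would pass this information on to your main backend rather than just returning it.

```python
import json


def handler(event, context):
    # Shape of an API Gateway (REST, proxy integration) event
    headers = event.get("headers") or {}
    user_agent = headers.get("User-Agent", "unknown")
    source_ip = event["requestContext"]["identity"]["sourceIp"]

    # In a real system you might look the IP up in a GeoIP database here;
    # this sketch simply echoes what it learned about the client.
    client_info = {"userAgent": user_agent, "ip": source_ip}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(client_info),
    }
```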

Service providers

Nowadays one of the market leaders in FaaS is AWS with AWS Lambda. It supports many programming languages (including Ruby, Python, Go, NodeJS, C# and Java) and a huge number of services which let you use not only Serverless computing, but also Database as a Service, message queues, API Gateway and other things that make working with the model much easier.

Google Cloud and Microsoft Azure, with their Cloud Functions and Azure Functions, are also worth mentioning. I don’t have much experience with them, but, as far as I can tell, they’re not as mature as AWS and leave much to be desired. For example, as of now Google supports NodeJS and Python only. Azure supports way more languages, but most of them have only experimental support for now.

Moreover, you can implement Serverless not only with the services of public cloud providers, but also in your own datacenter. If this has caught your interest and you want to know more, have a look at Knative, which allows you to build and run serverless applications on Kubernetes, or OpenWhisk. OpenWhisk was the first product that made the community aware of the possibility of running Serverless on their own servers.

Advantages in a nutshell

Before we finish, let’s have another look at all the advantages of the Serverless Architecture implementation.

  1. Elasticity. From zero to thousands of functions working in parallel.

  2. Full abstraction from the operating system and any other system-level software. It doesn’t matter where your Serverless apps are launched, be it Linux, Windows or a custom OS. The only thing that matters to you is the platform’s ability to execute Python/Java/Ruby/YouNameIt code and its libraries.

  3. With proper function design, it’s easier to build a loosely coupled architecture in which an error in a single function does not affect the work of the entire app.

  4. The entry barrier is relatively low for beginners. For a new developer in a team it’s way easier to grasp the 100-500 lines of a ‘nano’ service than the millions of lines and multitude of entanglements in the legacy code of an old project.

What about disadvantages?

Unfortunately (or fortunately), our world isn’t just black and white, and no technology or approach is unambiguously good or bad. This means that the Serverless approach also has its disadvantages, and there are difficulties you might face. Most of them are the same as for any other distributed system.

  1. Since other functions or services may depend on your interface or business logic, you always need to maintain backward compatibility.

  2. The integration scheme of a classic monolithic application differs a lot from that of a distributed system. You need to keep asynchronous interaction and possible delays in mind and monitor the separate parts of the app.

  3. Even though the functions are isolated, the wrong architecture might still lead to a cascading failure, where the failure of one part triggers the failure of others.

  4. The price you pay for great scalability is that your function is not running when it isn’t being called. When it does need to run, starting it might take up to a few seconds, which can be critical for your business.

  5. If there’s a problem, it’s difficult to identify the cause of a bug when a request from a client passes through a dozen functions.

  6. So-called vendor lock-in. Functions developed exclusively for AWS might be very difficult to port to, let’s say, Google Cloud. And not because of the functions themselves (JS is the same everywhere), but mostly because Serverless functions are rarely isolated: besides them, you will use databases, message queues, logging systems and other things which always differ from provider to provider. However, if you are eager enough, you can keep your code provider-independent, as the sketch after this list shows.
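
One way to keep that independence, sketched below for the object-storage case only: hide the provider-specific service behind a small interface, so that the business logic never imports a provider SDK directly. The class names, bucket and key are illustrative.

```python
from abc import ABC, abstractmethod


class ObjectStorage(ABC):
    @abstractmethod
    def save(self, key: str, data: bytes) -> None: ...


class S3Storage(ObjectStorage):
    def __init__(self, bucket: str):
        import boto3                      # provider-specific import stays here
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def save(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


def export_report(storage: ObjectStorage, report: bytes) -> None:
    # The business logic only sees the interface, so moving from S3 to
    # another provider's storage means writing one new adapter class.
    storage.save("reports/latest.csv", report)
```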

To sum up

Even though we’ve found more disadvantages than advantages, it doesn’t mean that Function as a Service is a bad approach and you need to forget it for good after reading this article. Rather, most of the risks can be either minimized or simply accepted. For instance, you can preinitialize the functions so that users don’t have to wait for them to launch. There are also approaches to debugging which make it less painful. Vendor lock-in shouldn’t be a problem for most businesses either.
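
A common way to preinitialize is to invoke the function on a schedule (for example via a CloudWatch/EventBridge rule) so that a warm container is usually available. The sketch below assumes the scheduled event carries a "warmup" marker field; both the marker and the connect_to_database helper are hypothetical.

```python
# Expensive initialisation lives outside the handler, so it runs once per
# container and is reused by every subsequent warm invocation.
heavy_client = None


def handler(event, context):
    global heavy_client

    # Illustrative marker set by the scheduled "keep warm" event:
    # return immediately without doing any real work.
    if event.get("warmup"):
        return {"warmed": True}

    if heavy_client is None:
        heavy_client = connect_to_database()   # hypothetical helper

    return heavy_client.run_real_work(event)
```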

Moreover, the Serverless approach doesn’t mean that you need to shift the entire app to this model in one go. An evolutionary approach, where you start with the parts of the app which are non-critical and not customer-facing, works better here.

Do I really think that the future belongs to the Serverless model (at least partly)? Yes, indeed. Do I really think that it’s good for all companies? Definitely not. Do I really think that everybody should spend a little of their time and try it? Yes, without a doubt.

I’m sure this is a skill that might prove useful later. To try out AWS Serverless yourself, you can follow the steps I listed in the article on my personal blog or wait for it to be published on mkdev.

Thank you for reading this article. If you are interested in the Serverless and Microservice approaches, Ruby and JS as well as Cloud Infrastructure, I’ll be more than happy to become your mentor. If you have any constructive criticism or suggestions, I’m all ears.

Good luck with your work and studies, my dear readers!

Supplementary reading

  1. Amazon Lambda + API Gateway introduction on my blog

  2. Building an Amazon Lambda function to write to the DynamoDB on my blog

  3. HOWTO: Create and integrate AWS Lambda function using Terraform on my blog

  4. Virtualization basics by Kirill Shirinkin