Cloud Structures: Kubernetes, Container Instances, Serverless

Cloud Infrastructure

When it comes to cloud services and infrastructure, there are many routes to choose from.

You can choose a serverless structure, where FaaS (Function as a Service) providers perform simpler tasks with little hassle on the developer's part. If you have more complex operations requiring multiple microservices with different dependencies, you can deploy containers. If you don't want to spend much time managing servers, you can use a container instance. If you want more control and easy scalability with self-healing, you can use a container orchestration system.

Before delving deeper into each of these options, some basic terminology is needed.

Background

What are containers and how did they gain popularity? First, there were single-purpose servers, each serving only one application. To serve an application that required a collection of collaborating services with conflicting dependencies, you needed multiple servers, and managing those servers became a real problem.

Then came virtual machines. Each virtual machine runs its own operating system and is isolated from other virtual machines running on the same physical server. This was a huge step towards self-contained architectures. However, virtual machines also brought heavy memory and operational costs, as each one carried its own copy of an operating system. Booting a virtual machine was also slow, taking up to a couple of minutes for some applications.

Now, it's the age of containers. Containers bundle up each application with its dependencies and can be semi-isolated from other containers. Containers on a single physical machine share the same operating system kernel. Containerization brought faster boot times, along with lower memory and CPU usage.

Containers vs. Virtual Machines


While virtual machines certainly still exist and are widely used, containers are the newer, lighter-weight way of serving an application composed of numerous microservices. However, a microservice-oriented architecture built on containers is not a one-size-fits-all solution, and it brings certain organizational problems with it.

With this evolution of cloud architectures in mind, let’s take a look at the three methods that were mentioned at the beginning: FaaS, serverless containers such as Azure Container Instances, and orchestration tools such as Kubernetes.

Kubernetes:


An application might require a number of services, and thus multiple containers. These containers need to be managed, replicated when scaling is needed, moved around when a failure occurs, monitored for usage and problems, and have incoming traffic distributed and redirected among them.

Kubernetes comes in as an orchestration tool that lets developers take care of these issues and more. Kubernetes can be seen as an operating system for the cloud, managing containers instead of processes.
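
To make the idea concrete, here is a minimal sketch using the official Kubernetes Python client to declare a Deployment with three replicas. The names and image are hypothetical placeholders, not anything from this article: you describe the desired state, and Kubernetes keeps the cluster converged to it, replacing failed containers and scaling replicas as requested.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (e.g. one produced by a managed service such as AKS).
    config.load_kube_config()

    # Describe the desired state: three replicas of a (hypothetical) bookstore-api container.
    container = client.V1Container(
        name="bookstore-api",
        image="example.azurecr.io/bookstore-api:1.0",
        ports=[client.V1ContainerPort(container_port=80)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "bookstore-api"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="bookstore-api"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "bookstore-api"}),
            template=template,
        ),
    )

    # Kubernetes reconciles the actual state towards this declaration:
    # failed pods are replaced, and replicas can later be scaled up or down.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)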

Serverless:

For serverless, the name itself is a misnomer. There certainly is a server, somewhere, that is running your code. However, you do not know what server that is, or what its details are, and you do not have to!

In a serverless structure, developers write the code they want to run and deploy it to the serverless platform, and the code runs whenever a trigger event happens, without anyone having to think about resource management or scaling. The best-known FaaS providers are AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions.
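
As a minimal sketch of what such a function can look like, here is an HTTP-triggered handler written against the Azure Functions Python (v1) programming model. The platform decides where and when it runs; the trigger binding itself lives in a separate function.json file that is not shown here.

    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # The platform invokes this only when the trigger fires (an incoming HTTP request).
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}!", status_code=200)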

Container Instances:

Container instances are a newer alternative for running your containers without managing servers.

As mentioned in our episode with Gabe Monroy, container instances bring two of the three core elements of serverless, a micro-billing model and an invisible infrastructure, without the assumption of an event-based model.

They have the ease of use of serverless and, added to it, the availability and portability of containers. A container instance is a container deployed in the cloud that can scale up and down as needed, and it rids the user of the responsibility of managing the underlying servers.
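
To give a feel for the model, here is a rough sketch of starting a single container with the Azure SDK for Python (azure-mgmt-containerinstance). The subscription, resource group, region, and image are placeholders, and the exact model names can vary slightly between SDK versions.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.containerinstance import ContainerInstanceManagementClient
    from azure.mgmt.containerinstance.models import (
        Container, ContainerGroup, ResourceRequests, ResourceRequirements,
    )

    # Placeholders for illustration: subscription, resource group, and image are assumptions.
    aci_client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

    group = ContainerGroup(
        location="westeurope",
        os_type="Linux",
        containers=[
            Container(
                name="bookstore-worker",
                image="example.azurecr.io/bookstore-worker:1.0",
                resources=ResourceRequirements(
                    requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
                ),
            )
        ],
    )

    # You describe the container and its resources; the platform finds a machine to run it on.
    aci_client.container_groups.begin_create_or_update(
        "my-resource-group", "bookstore-worker", group
    ).result()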

This is not to say that container instances and Kubernetes are mutually exclusive. Container instances can be managed using Kubernetes, and with connectors such as Microsoft's ACI Connector for Kubernetes, Kubernetes clusters can deploy Azure Container Instances.

When to use which?

Consider the following use-cases:

1. The website or mobile application of a local book vendor.

You have a business buying and selling books, and you want to go digital to reach a wider audience. What you need in such a situation is a database, a website for your clients to browse and buy books from, and perhaps a mobile app with the same functionality.

Your application, in the broad sense, does not have a very high number of underlying services. You should be able to respond to user actions such as buying and contacting you, and perhaps alert users when a book they are following is added to your stock. In this case, spinning up a container cluster would be overkill: it would bring unnecessary hosting costs and raise organizational overhead.

The use cases in your application look like single, self-contained functions, and there are only a few of these separate functions. In this case, you might be better off using a serverless structure. FaaS is great at handling user input that triggers an event, such as buying and selling; sending notifications or emails when something happens, such as new or sought-after items coming in; and processing users adding new items to your database for sale.
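
For instance, the "alert users when a followed book arrives" case maps naturally onto a queue-triggered function. A minimal sketch with the Azure Functions Python (v1) model might look like the following, where the queue binding is configured in function.json and send_notification stands in for whatever email or push service you use (both the message shape and the helper are assumptions for illustration).

    import json
    import azure.functions as func

    def main(msg: func.QueueMessage) -> None:
        # Fires once for every "book back in stock" message placed on the queue.
        book = json.loads(msg.get_body().decode("utf-8"))
        for user in book.get("followers", []):
            send_notification(user, f"'{book['title']}' is back in stock!")

    def send_notification(user: str, text: str) -> None:
        # Placeholder: in practice this would call an email or push-notification service.
        print(f"notify {user}: {text}")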

This brings you a couple of advantages:

  • You only pay per use. Instead of having a constantly running server from a cloud provider, you pay only when a client triggers an event. This lowers your hosting costs.
  • You do not need to worry about scaling. When your traffic increases, your FaaS provider handles the scaling for your application.
  • You focus on the application code, and do not worry about management. This saves time, and consequently money.  

Some disadvantages you might face:

  • You have to write your code in a small subset of programming languages that the FaaS provider supports.
  • Your function and its dependencies must be smaller in size than a threshold set by the provider.
  • The cold start problem. If your functions are not called frequently enough, e.g. at least once every 10 minutes or so depending on the FaaS provider, your function will go through a cold start the next time it is called. FaaS providers shelve inactive functions to free up resources for frequently used ones, and restarting a shelved function adds latency. If your traffic does not reach a certain volume and frequency, you might be faced with a slow, unresponsive-feeling application.
  • Your function execution cannot run longer than a time limit specified by the provider, and your functions must be stateless. However, Microsoft Azure's Durable Functions provide stateful functions that can be used for patterns such as function chaining and async HTTP APIs (see the sketch after this list).
  • Testing is harder in applications leveraging FaaS. While some tools help with the process, and Azure Functions and AWS Lambda can both be run locally, integration tests require numerous calls to the cloud. This can create unnecessary logs and monitoring noise, and results in additional spending as the number of function calls grows.
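
As a hedged sketch of the function-chaining pattern mentioned above, an orchestrator using the azure-functions-durable Python library might look like the following. The activity names ("ReserveBook", "ChargePayment") are hypothetical; each would be a separate activity function with its own bindings.

    import azure.durable_functions as df

    def orchestrator_function(context: df.DurableOrchestrationContext):
        # The Durable Functions runtime persists state between steps,
        # working around the usual statelessness and time-limit constraints.
        order = context.get_input()
        reservation = yield context.call_activity("ReserveBook", order)
        receipt = yield context.call_activity("ChargePayment", reservation)
        return receipt

    main = df.Orchestrator.create(orchestrator_function)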

In the end, if you require a small, event-driven application with a few entry points, a serverless structure can be of great benefit.

Azure Functions Web Commerce Use Case

Using Azure Functions for an e-commerce website. Every checkout triggers an initial function, and the corresponding microservices are handled by different functions.

2. An international trading platform.

Now, you have extended your business. The company is now international, and you sell more products, not just books. There are many services that you need – user authentication, buying and selling, messaging, personal recommendations for users, user and product ratings, perhaps bidding. On top of these, your platform is gaining a lot of traffic.

While you can still divide your services into smaller parts, this process yields numerous microservices. In this situation, using containers is the better option, and Kubernetes is the natural choice to orchestrate the deployment, scaling, and general management of those containers.

Some advantages containerization and orchestration bring:

  • Kubernetes gives you much more control, and eases the management process, compared to a container cluster without an orchestration tool.
  • Testing and debugging with a container cluster are easier.
  • You are not restricted by the provider in terms of programming languages, package size, or execution time.
  • Since you are getting a lot of traffic, containers can be more cost efficient.
  • The Kubernetes environment is changing rapidly, and with approaches like serverless Kubernetes, infrastructure management is getting easier. IaaS is evolving towards easier-to-use systems that hide the underlying infrastructure.

Disadvantages, on the other hand, are:

  • With control comes responsibility. You have to monitor the traffic and load on your network, and decide whether to scale or not.
  • In a similar vein, you have more organizational costs. A dedicated engineer might be required to manage your Kubernetes cluster. However, this cost can be minimized by using a managed Kubernetes service such as Azure Kubernetes Service, which simplifies deploying and managing Kubernetes clusters.

3. In both scenarios.

In both of these applications, you can make use of container instances with tools such as ACI.

While the concept is still developing and has its limitations, container instances can bridge the gap between serverless structures and containers. Container instances offer a pricing model similar to a serverless structure, with payments based on the container's running time rather than the number of calls. However, you get 24/7 availability with container instances, as opposed to the cold start and resource problems you might observe with FaaS.

Using the Kubernetes Virtual Kubelet with container instances such as Azure Container Instances is the fastest way to scale a Kubernetes cluster. However, keeping container instances around for the long run is probably not ideal, considering the associated costs.

Perhaps the most suitable use of container instances is as a gateway between the smaller-scale and the larger-scale application. While your business is not yet an international trading platform, but has still expanded beyond the simple book-selling store, container instances can be a way to manage the growing number of microservices before committing to a larger number of containers and full orchestration.

There is nothing in the way of creating hybrid applications that use FaaS for some microservices and containers orchestrated with Kubernetes for others, supported by container instances for scaling up. In fact, these approaches can be connected on a deeper level, with projects such as Kubeless that deploy FaaS onto a Kubernetes cluster.

In the end, the choice among these cloud services depends on the type of your application and its scale.

A post on Martin Fowler's blog gives a pretty concise summary:

“As we see the gap of management and scaling between Serverless FaaS and hosted containers narrow, the choice between them may just come down to style and type of application. For example, it may be that FaaS is seen as a better choice for an event-driven style with few event types per application component, and containers are seen as a better choice for synchronous-request–driven components with many entry points.”

Gokhan Simsek

Eindhoven, The Netherlands

Gokhan is a computer science graduate, currently pursuing an MSc degree in Data Science at Eindhoven University of Technology. He's interested in big data, NLP, and machine learning.
