Serverless Scheduling with Rodric Rabbah

Functions as a service are deployable functions that run without an addressable server.

Functions as a service scale without any work by the developer. When you deploy a function as a service to a cloud provider, the cloud provider will take care of running that function whenever it is called.

You don’t have to worry about spinning up a new machine and monitoring that machine, and spinning the machine down once it becomes idle. You just tell the cloud provider that you want to run a function, and the cloud provider executes it and returns the result.

Functions as a service can be more cost effective than running virtual machines or containerized infrastructure, because you are letting the cloud provider decide where to schedule your function, and you are giving the cloud provider flexibility on when to schedule the function.

The developer experience for deploying a serverless function can feel mysterious. You send a blob of code into the cloud. Later on, you send a request to call that code in the cloud. The result of the execution of that code gets sent back down to you. What is happening in between?
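That blob of code can be very small. In Apache OpenWhisk, the open source platform discussed in this episode, a JavaScript action is just a file that defines a `main` function; the platform calls `main` with the request parameters and serializes the returned object as the response. A minimal sketch (the `greeting` field and `name` parameter are illustrative, not required by the platform):

```javascript
// A minimal OpenWhisk-style action. The platform invokes main() with a
// parameter object and returns the resulting object to the caller as JSON.
function main(params) {
  const name = params.name || "world";
  return { greeting: "Hello, " + name };
}
```

With the OpenWhisk CLI, this file could be deployed with `wsk action create greet greet.js` and called with `wsk action invoke greet --result --param name Rodric` — everything in between those two commands is what the rest of this episode is about.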

Rodric Rabbah is the principal researcher and technical lead in serverless computing at IBM. He helped design Apache OpenWhisk, the open source functions-as-a-service platform that IBM has deployed and operationalized as IBM Cloud Functions. Rodric joins the show to explain how to build a platform for functions as a service.

When a user deploys a function to IBM Cloud Functions, that function gets stored in a database as a blob of text, waiting to be called. When the user makes a call to the function, IBM Cloud Functions takes it from the database and queues the function in Kafka, and eventually schedules the function onto a container for execution. Once the function has executed, IBM Cloud Functions stores the result in a database and sends that result to the user.
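The request path above can be sketched in a few lines. This is a deliberately simplified, in-memory model — a plain object stands in for the database and an array stands in for the Kafka queue, and the class and method names are invented for illustration:

```javascript
// Toy sketch of the invoke path: store code as text, queue the call,
// "schedule" it onto an executor, then persist and return the result.
class TinyFaaS {
  constructor() {
    this.db = {};      // stands in for the function/result database
    this.queue = [];   // stands in for the Kafka topic
  }

  deploy(name, source) {
    // The function is stored as a blob of text, waiting to be called.
    this.db[name] = source;
  }

  invoke(name, params) {
    // Queue the invocation rather than running it inline.
    this.queue.push({ name, params });
    return this.drain();
  }

  drain() {
    const { name, params } = this.queue.shift();
    // "Schedule onto a container": here, just evaluate the stored source
    // to obtain a main() function and call it.
    const main = new Function("params", this.db[name]);
    const result = main(params);
    this.db["result:" + name] = result; // persist the activation result
    return result;
  }
}

const faas = new TinyFaaS();
faas.deploy("greet", "return { greeting: 'Hello, ' + params.name };");
faas.invoke("greet", { name: "Rodric" }); // { greeting: 'Hello, Rodric' }
```

The real system differs in every detail that matters at scale — the queue decouples load balancing from execution, and containers are pooled and reused — but the deploy, queue, execute, persist sequence is the same shape.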

When you invoke a function and no warm container is available, the time spent scheduling the function and loading it onto a fresh container is known as the “cold start problem.” The steps of executing a serverless function take time, but the resource savings are significant: your code is just stored as a blob of text in a database, rather than sitting in memory on a server, waiting to execute.

In his research for building IBM Cloud Functions, Rodric wrote about some of the tradeoffs for users who build applications with serverless functions. The tradeoffs exist along what Rodric calls “the serverless trilemma.”

In today’s episode, we discuss why people are using functions-as-a-service, the architecture of IBM Cloud Functions, and the unsolved challenges of building a serverless platform. Full disclosure: IBM is a sponsor of Software Engineering Daily.

Show Notes

IBM Cloud Functions

Apache OpenWhisk

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.


Software Daily

Subscribe to Software Daily, a curated newsletter featuring the best and newest from the software engineering community.