Serverless at the Edge with Kenton Varda

Over the last decade, computation and storage have moved from on-premise hardware into cloud data centers. Instead of keeping large servers on premises, companies started to outsource their server workloads to cloud service providers.

At the same time, there has been a proliferation of devices at the “edge.” The most common edge device is your smartphone, but many other smart devices are growing in number: drones, smart cars, Nest thermostats, smart refrigerators, IoT sensors, and next-generation centrifuges. Each of these devices contains computational hardware.

Another class of edge device is the edge server. Edge servers deliver faster response times than your core application could provide on its own. For example, Software Engineering Daily uses a content delivery network for audio files. These audio files are replicated on edge servers throughout the world. The core application logic of Software Engineering Daily runs on a WordPress site, and that WordPress application is deployed to far fewer servers than our audio files are.
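To make that split concrete, here is a minimal sketch, not Software Engineering Daily's actual configuration, of how an origin server can mark static audio files as long-lived so that edge servers cache them close to listeners, while dynamic pages still come from the core application:

```javascript
// Minimal sketch (hypothetical, not SE Daily's real setup): the origin
// marks audio files as cacheable for a year, so CDN edge servers can
// serve them without contacting the origin, while HTML stays fresh.
const http = require('http');

http.createServer((req, res) => {
  if (req.url.endsWith('.mp3')) {
    // Edge servers may cache and re-serve this response for up to a year.
    res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
    res.end('...audio bytes...');
  } else {
    // Dynamic pages are served fresh from the core application.
    res.setHeader('Cache-Control', 'no-cache');
    res.end('<html>...</html>');
  }
}).listen(8080);
```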

“Cloud computing” and “edge computing” both refer to computers that can serve requests. The “edge” commonly refers to devices that are closer to the user, so they can deliver faster responses. The “cloud” refers to big, bulky servers that can handle heavy-duty processing workloads, such as training machine learning models or running a large distributed MapReduce query.

As the volume of computation and data increases, we look for better ways to utilize our resources, and we are realizing that the devices at the edge are underutilized.

In today’s episode, Kenton Varda explains how and why to deploy application logic to the edge. He works at Cloudflare on Cloudflare Workers, a way to deploy JavaScript to edge servers in the hundreds of data centers around the world that Cloudflare uses for caching.
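For a flavor of what that looks like, here is a minimal sketch of a Worker using the service-worker-style API that Workers expose; the /ping route is a hypothetical example, not part of Cloudflare's API:

```javascript
// A Cloudflare Worker runs in the edge data center nearest the user
// and intercepts requests before they ever reach the origin server.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  // A hypothetical route answered entirely at the edge.
  if (url.pathname === '/ping') {
    return new Response('pong from the edge');
  }
  // Everything else is passed through to the cache or the origin.
  return fetch(request);
}
```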

Kenton was previously on the show to discuss Protocol Buffers, a project he led while he was at Google. To find that episode, and many other episodes about serverless, download the Software Engineering Daily app for iOS or Android. These apps have all 650 of our episodes in a searchable format, with recommendations, categories, related links, and discussions around the episodes. It’s all free and also open source. If you are interested in getting involved in our open source community, we have lots of people working on the project, and we do our best to be friendly and inviting to new people looking for their first open source project. You can find that project at Github.com/softwareengineeringdaily.

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.

Sponsors


Digital Ocean is a reliable, easy-to-use cloud provider. More and more people are finding out about Digital Ocean and realizing that Digital Ocean is perfect for their application workloads. This year, Digital Ocean is making that even easier with new node types: a $15 flexible droplet that lets you mix and match different configurations of CPU and RAM to get the perfect amount of resources for your application. There are also CPU-optimized droplets, perfect for highly active frontend servers or CI/CD workloads. Running in the cloud can get expensive, which is why Digital Ocean makes it easy to choose the right-size instance. The prices on standard instances have gone down, too. You can check out all their new deals by going to do.co/sedaily, and as a bonus to our listeners you will get $100 in credit over 60 days. Use the credit for hosting or infrastructure, including load balancers, object storage, and computation. Get your free $100 credit at do.co/sedaily. Thanks to Digital Ocean for being a sponsor of Software Engineering Daily.


Azure Container Service simplifies the deployment, management and operations of Kubernetes. Eliminate the complicated planning and deployment of fully orchestrated containerized applications with Kubernetes. You can quickly provision clusters to be up and running in no time, while simplifying your monitoring and cluster management through auto upgrades and a built-in operations console. Avoid being locked into any one vendor or resource. You can continue to work with the tools you already know, such as Helm, and move applications to any Kubernetes deployment. Integrate with your choice of container registry, including Azure Container Registry. Also, quickly and efficiently scale to maximize your resource utilization without having to take your applications offline. Isolate your application from infrastructure failures and transparently scale the underlying infrastructure to meet growing demands—all while increasing the security, reliability, and availability of critical business workloads with Azure. Check out the Azure Container Service at aka.ms/sedaily.


Simplify continuous delivery with GoCD, the on-premise, open source, continuous delivery tool by ThoughtWorks. With GoCD, you can easily model complex deployment workflows using pipelines and visualize them end-to-end with the Value Stream Map. You get complete visibility into and control of your company’s deployments. At gocd.org/sedaily, find out how to bring continuous delivery to your teams. Say goodbye to deployment panic and hello to consistent, predictable deliveries. Visit gocd.org/sedaily to learn more about GoCD. Commercial support and enterprise add-ons, including disaster recovery, are available.

Software Weekly

Subscribe to Software Weekly, a curated weekly newsletter featuring the best and newest from the software engineering community.