Load Balancing at Scale with Vivek Panyam

Facebook serves interactive content to billions of users. Google serves query requests on the world’s biggest search engine. Uber handles a significant percentage of the transportation within the United States. These services handle radically different types of traffic, but many of the techniques they use to balance load are similar.

Vivek Panyam is an engineer with Uber, and he previously interned at Google and Facebook. In a popular blog post about load balancing at scale, he described how a large company scales up a popular service. The methods for scaling up load balancing are simple but effective, and they help illustrate how load balancing works at different layers of the networking stack.

Let’s say you have a simple service: a user makes a request, and your service sends back a response with a cat picture. Your service starts to get popular, and it begins timing out and failing to send responses to users.
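As a concrete illustration (not code from the episode), here is a minimal sketch of such a service in Go; the port and the cat.jpg filename are assumptions made for the example:

```go
// A minimal sketch of the cat-picture service. The port and the
// image filename are illustrative assumptions.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/cat", func(w http.ResponseWriter, r *http.Request) {
		// Every request gets the same picture back.
		http.ServeFile(w, r, "cat.jpg")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```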

When your service starts to get overwhelmed, you can add capacity by creating another service instance that is a copy of your cat picture service. Now you have two service instances, and you can use a layer 7 (L7) load balancer to route traffic evenly between them. As the load grows, you can keep adding service instances and let the load balancer distribute requests among them.
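Because an L7 load balancer operates at the application layer, it parses each HTTP request before choosing a backend. Here is a rough sketch of that idea as a small Go reverse proxy with round-robin selection; the backend addresses are hypothetical, and a real deployment would more likely use something like NGINX or HAProxy:

```go
// A sketch of a layer 7 (HTTP) load balancer: it terminates each HTTP
// request and forwards it to one backend, round-robin. The backend
// addresses are hypothetical.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	backends := []*url.URL{
		mustParse("http://10.0.0.1:8080"),
		mustParse("http://10.0.0.2:8080"),
	}
	var next uint64
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			// Pick the next backend in round-robin order.
			b := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			r.URL.Scheme = b.Scheme
			r.URL.Host = b.Host
		},
	}
	log.Fatal(http.ListenAndServe(":80", proxy))
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}
```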

Eventually, your L7 load balancer is handling so much traffic itself that you can’t put any more service instances behind it. So you have to set up another L7 load balancer, and put an L4 load balancer in front of those L7 load balancers. You can scale out that tier of L7 load balancers, each of which balances traffic across its own set of service instances. But eventually, even your L4 load balancer gets overwhelmed with requests for cat pictures. You have to set up another tier, this time with L3 load balancing…
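The key difference at layer 4 is that the balancer never parses HTTP; it just forwards raw TCP bytes, which is cheaper per connection. A minimal sketch of that idea in Go, with hypothetical addresses standing in for the L7 balancers:

```go
// A sketch of a layer 4 (TCP) load balancer: it accepts connections
// and shuttles raw bytes to one of the L7 balancers, round-robin,
// without ever inspecting the HTTP payload. Addresses are hypothetical.
package main

import (
	"io"
	"log"
	"net"
	"sync/atomic"
)

func main() {
	l7Balancers := []string{"10.0.1.1:80", "10.0.1.2:80"}
	var next uint64

	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		// Choose the next L7 balancer without looking at the payload.
		target := l7Balancers[atomic.AddUint64(&next, 1)%uint64(len(l7Balancers))]
		go func() {
			defer client.Close()
			backend, err := net.Dial("tcp", target)
			if err != nil {
				log.Print(err)
				return
			}
			defer backend.Close()
			// Copy bytes in both directions until either side closes.
			go io.Copy(backend, client)
			io.Copy(client, backend)
		}()
	}
}
```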

In this episode, Vivek gives a clear description of how load balancing works. We also review the seven networking layers before discussing why different types of load balancers are associated with different layers of the stack.

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.

Sponsors


Today’s episode is sponsored by Datadog, a platform for monitoring your infrastructure and application performance. Datadog provides seamless integrations with more than 200 technologies, including AWS, NGINX, and Docker, so you can start collecting and visualizing performance metrics quickly. Access out-of-the-box dashboards for HAProxy, Amazon ELB, ALB, and more, and correlate metrics from your load balancers with application performance data to get full visibility into your web apps. Start monitoring and optimizing performance today with a free trial! Listeners of this podcast will get a super-soft Datadog T-shirt too. Visit softwareengineeringdaily.com/datadog to get started.


The octopus: a sea creature known for its intelligence and flexibility. Octopus Deploy: a friendly deployment automation tool for deploying applications like .NET apps, Java apps and more. Ask any developer and they’ll tell you it’s never fun pushing code at 5pm on a Friday and then crossing your fingers hoping for the best. That’s where Octopus Deploy comes into the picture. Octopus Deploy is a friendly deployment automation tool, taking over where your build/CI server ends. Use Octopus to promote releases on-prem or to the cloud. Octopus integrates with your existing build pipeline: TFS and VSTS, Bamboo, TeamCity, and Jenkins. It integrates with AWS, Azure, and on-prem environments. Reliably and repeatedly deploy your .NET and Java apps and more. If you can package it, Octopus can deploy it! It’s quick and easy to install. Go to Octopus.com to trial Octopus free for 45 days. That’s Octopus.com.


You want to work with Kubernetes, but you wish the process were simpler. The folks who brought you Kubernetes now want to make it easier to use. Heptio is a company founded by creators of the Kubernetes project, built to support and advance the open Kubernetes ecosystem. They build products, open source tools, and services that bring people closer to ‘upstream’ Kubernetes. Heptio offers instructor-led Kubernetes training, professional help from expert Kubernetes solutions engineers, and expert support of upstream Kubernetes configurations. Find out more at softwareengineeringdaily.com/heptio. Heptio is committed to making Kubernetes easier for all developers to use through their contributions to Kubernetes, Heptio open source projects, and other community efforts. Check out Heptio to make your life with Kubernetes easier at softwareengineeringdaily.com/heptio.