Oliver Gould worked at Twitter from 2010 to 2014. Twitter’s popularity was taking off, and the engineering team was learning how to scale the product.
During that time, Twitter adopted Apache Mesos, and began breaking up its monolithic architecture into different services. As more and more services were deployed, engineers at Twitter decided to standardize communications between those services with a tool called a service proxy.
A service proxy provides each service with features that every service needs: load balancing, routing, service discovery, retries, and visibility. It turns out that many other companies wanted this service proxy technology as well, which is why Oliver left Twitter to start Buoyant, a company focused on developing software around the service proxy, and eventually the service mesh.
If you are unfamiliar with service proxies and service meshes, check out our previous shows on Linkerd, Envoy, and Istio.
Kubernetes is often deployed with a service mesh. A service mesh consists of two parts: the data plane and the control plane.
The “data plane” refers to the sidecar containers deployed into each of your Kubernetes application pods. Each sidecar contains a service proxy. The “control plane” refers to a central service that aggregates data from across the data plane and can push configuration out to the service proxies that make up that data plane.
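As a rough sketch of the sidecar pattern (the pod, container, and image names here are hypothetical, and in practice a mesh usually injects the sidecar for you rather than requiring you to write it by hand), a Kubernetes pod in the data plane might look like:

```yaml
# Hypothetical pod spec: the application container plus a
# service-proxy sidecar that handles the pod's network traffic.
apiVersion: v1
kind: Pod
metadata:
  name: my-app                        # hypothetical name
spec:
  containers:
  - name: my-app
    image: example/my-app:1.0         # hypothetical application image
    ports:
    - containerPort: 8080
  - name: proxy-sidecar
    image: example/service-proxy:1.0  # hypothetical proxy image
    ports:
    - containerPort: 4143             # illustrative proxy port
```

Meshes like Istio and Linkerd typically automate this step, injecting the proxy container into each pod at deploy time so application manifests stay unchanged.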
The Linkerd service mesh was built in Scala, and the project started before Kubernetes had become the standard for container orchestration. More recently, Buoyant built Conduit, a newer service mesh with a data plane written in Rust and a control plane written in Go.
In this episode, we explore how to design a service mesh and what Oliver learned in his experience building Linkerd and Conduit.
Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.
Azure Container Service simplifies the deployment, management and operations of Kubernetes. Eliminate the complicated planning and deployment of fully orchestrated containerized applications with Kubernetes. You can quickly provision clusters to be up and running in no time, while simplifying your monitoring and cluster management through auto upgrades and a built-in operations console. Avoid being locked into any one vendor or resource. You can continue to work with the tools you already know, such as Helm, and move applications to any Kubernetes deployment. Integrate with your choice of container registry, including Azure Container Registry. Also, quickly and efficiently scale to maximize your resource utilization without having to take your applications offline. Isolate your application from infrastructure failures and transparently scale the underlying infrastructure to meet growing demands—all while increasing the security, reliability, and availability of critical business workloads with Azure. Check out the Azure Container Service at aka.ms/sedaily.
Digital Ocean Spaces gives you simple object storage with a beautiful user interface. You need an easy way to host objects like images and videos. Your users need to upload objects like PDFs and music files. To try Digital Ocean Spaces, go to do.co/sedaily and get two months of Spaces plus a $10 credit to use on any other Digital Ocean products, and you get this credit even if you have been with Digital Ocean for a while. It’s a nice added bonus just for trying out Spaces. If you become a customer, the pricing is simple: $5 per month, which includes 250GB of storage and 1TB of outbound bandwidth. There are no costs per request, and additional storage is priced at the lowest rate available: $0.01 per GB transferred and $0.02 per GB stored. There won’t be any surprises on your bill. Digital Ocean simplifies the cloud: they look for every opportunity to remove friction from a developer’s experience. I love it, and I think you will too. Check it out at do.co/sedaily.
Simplify continuous delivery with GoCD, the on-premise, open source continuous delivery tool by ThoughtWorks. With GoCD, you can easily model complex deployment workflows using pipelines and visualize them end-to-end with the Value Stream Map. You get complete visibility into and control of your company’s deployments. At gocd.org/sedaily, find out how to bring continuous delivery to your teams. Say goodbye to deployment panic and hello to consistent, predictable deliveries. Visit gocd.org/sedaily to learn more about GoCD. Commercial support and enterprise add-ons, including disaster recovery, are available.