Istio Motivations with Louis Ryan

A single user request hits Google’s servers: a user is looking for search results.

To deliver those search results, the request has to pass through several different internal services before a response can be returned.

These services work together to satisfy the user request. All of them need to communicate efficiently, they need to scale, and they need to be secure. They also need a consistent way of being “observable,” so that they can be logged and monitored.

Since every service needs the same set of features (communication, load balancing, security), it makes sense to build those features into a common system that can be deployed to every server.

Louis Ryan has spent years at Google working on service infrastructure. During that time, he has seen massive changes in the way traffic flows through Google. First came the rise of Android and all of the user traffic from mobile phones. Then came the rise of Google Cloud Platform, which meant that Google was now responsible for nodes deployed by users outside of Google.

These two changes, mobile and cloud, increased both the volume and the variety of traffic. All of this traffic means more internal services communicating with each other. How does service networking change in such an environment?

Google’s answer to these new networking conditions is the “service mesh.” A service mesh is a network for services. It provides observability, resiliency, traffic control, and other features to every service that plugs into it.

Each service needs to plug into the service mesh. In Kubernetes, services connect to the mesh through a sidecar. Let me explain the term “sidecar.”

Kubernetes manages its resources in pods, and each pod contains a set of containers. You might have a pod that is dedicated to responding to any user who requests a picture of a cat. Within that pod, you not only have the container that serves the cat picture; you also have other “sidecar” containers that help out the application container.

You could have a sidecar deployed next to your application container that handles logging, a sidecar that helps with monitoring, or one that handles network communication.
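
To make the sidecar idea concrete, here is a minimal sketch of what such a pod could look like, written with the Kubernetes Go API types. The container names and image tags are hypothetical, and in a real Istio installation the proxy container is usually injected automatically rather than written by hand.

```go
// A minimal sketch of a pod with an application container plus a proxy
// sidecar. Names and images are hypothetical; this is not Istio's own config.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cat-pictures"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// The application container that actually serves cat pictures.
					Name:  "cat-server",
					Image: "example.com/cat-server:1.0", // hypothetical image
					Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
				},
				{
					// The sidecar: a service proxy that handles networking
					// concerns (routing, telemetry, TLS) on the app's behalf.
					Name:  "istio-proxy",
					Image: "docker.io/istio/proxyv2:1.0.0", // hypothetical tag
				},
			},
		},
	}

	// Print the pod spec so you can see the two containers side by side.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```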

If you are using the Istio service mesh, you are using a sidecar called Envoy. Envoy is a “service proxy” that runs as a sidecar and provides configuration updates, load balancing, proxying, and lots of other benefits. If we get all of that out of Envoy, why do we need the separate abstraction of a “service mesh”? Because it helps to have a tool that aggregates and centralizes all of the different communications among these proxies.

Every service gets a service proxy as its sidecar, and every service proxy communicates with the centralized service mesh.
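
As a rough illustration of what a service proxy does, here is a conceptual Go sketch; it is not Envoy, and the addresses and port numbers are arbitrary choices for the example. The proxy receives every request bound for the service, forwards it to the application listening on localhost, and emits the same telemetry for every service that runs it.

```go
// A conceptual sketch of the sidecar proxy idea, not Envoy's implementation:
// a tiny proxy that sits in front of the application, forwards each request
// to the app on localhost, and records basic telemetry on the way out.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The application container is assumed to listen on localhost:8080;
	// the proxy listens on the port that the rest of the mesh talks to.
	app, _ := url.Parse("http://127.0.0.1:8080")
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		// Uniform observability: every service that runs this sidecar gets
		// the same request logging without changing application code.
		log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
	})

	log.Fatal(http.ListenAndServe(":15001", handler))
}
```

Because the proxy, rather than the application, handles these concerns, the mesh can change routing, retries, or security settings for every service by reconfiguring the proxies from one central place.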

Louis Ryan joins this episode to explain the motivations for building the Istio service mesh, and the problems it solves for Kubernetes developers.

For the next two weeks, we are covering exclusively the world of Kubernetes. Kubernetes is a project that is likely to have as much impact as Linux, and it is still very early days. Whether you are an expert in Kubernetes or you are just starting out, we have lots of episodes to fit your learning curve. To find all of our old episodes about Kubernetes (including a previous show about Istio), download the Software Engineering Daily app for iOS or for Android. In other podcast players, only the 100 most recent episodes are available, but in our apps you can find all 650 episodes, and there is also plenty of content that is totally unrelated to Kubernetes!

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.

Sponsors


Azure Container Service simplifies the deployment, management and operations of Kubernetes. Eliminate the complicated planning and deployment of fully orchestrated containerized applications with Kubernetes. You can quickly provision clusters to be up and running in no time, while simplifying your monitoring and cluster management through auto upgrades and a built-in operations console. Avoid being locked into any one vendor or resource. You can continue to work with the tools you already know, such as Helm, and move applications to any Kubernetes deployment. Integrate with your choice of container registry, including Azure Container Registry. Also, quickly and efficiently scale to maximize your resource utilization without having to take your applications offline. Isolate your application from infrastructure failures and transparently scale the underlying infrastructure to meet growing demands—all while increasing the security, reliability, and availability of critical business workloads with Azure. Check out the Azure Container Service at aka.ms/sedaily.


This episode of Software Engineering Daily is sponsored by Datadog. Datadog integrates seamlessly with container technologies like Docker and Kubernetes, so you can monitor your entire container cluster in real time. See across all your servers, containers, apps, and services in one place, with powerful visualizations, sophisticated alerting, distributed tracing and APM. Start monitoring your microservices today with a free trial! As a bonus, Datadog will send you a free T-shirt. Visit softwareengineeringdaily.com/datadog to get started.


Simplify continuous delivery with GoCD, the on-premise, open source, continuous delivery tool by ThoughtWorks. With GoCD, you can easily model complex deployment workflows using pipelines and visualize them end-to-end with the Value Stream Map. You get complete visibility into and control of your company’s deployments. At gocd.org/sedaily, find out how to bring continuous delivery to your teams. Say goodbye to deployment panic and hello to consistent, predictable deliveries. Visit gocd.org/sedaily to learn more about GoCD. Commercial support and enterprise add-ons, including disaster recovery, are available.

Software Weekly


Subscribe to Software Weekly, a curated weekly newsletter featuring the best and newest from the Software Engineering community.