Service Mesh Wars with William Morgan

A service mesh is an abstraction that provides traffic routing, policy management, and telemetry for a distributed application.

A service mesh consists of a data plane and a control plane. In the data plane, a proxy runs alongside each service, with every request from a service being routed through the proxy. In the control plane, an application owner can control the behavior of the proxies distributed throughout the application.
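To make the data plane concrete, here is a minimal sketch of a sidecar proxy in Go. It assumes the application container listens on localhost:8080 and the proxy listens on port 9090; the port numbers and logging are illustrative only, and production proxies such as Envoy or Linkerd's linkerd2-proxy add mTLS, load balancing, retries, and telemetry on top of this basic forwarding.

```go
// A minimal sketch of a data-plane sidecar: every request to the pod
// enters through the proxy, which forwards it to the co-located service.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The application container, assumed to listen on localhost:8080.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// In a real mesh, the control plane configures behavior here:
		// routing rules, retry budgets, and which metrics to export.
		log.Printf("proxying %s %s", r.Method, r.URL.Path)
		proxy.ServeHTTP(w, r)
	})

	// The proxy's inbound port; traffic is redirected here by the mesh.
	log.Fatal(http.ListenAndServe(":9090", handler))
}
```

The control plane's job, then, is to push configuration (routing rules, policies, and telemetry targets) down to a fleet of proxies like this one.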

As the Kubernetes ecosystem has developed, the service mesh abstraction has become an increasingly desirable component of a “cloud native” application stack.

As companies enthusiastically adopt Kubernetes, they eventually find themselves with a large distributed system that is difficult to operate. A service mesh eases some of these operational difficulties, as we have explored in numerous previous episodes.

The Kubernetes community has grown to include lots of enterprises, and those enterprises want to adopt service mesh. But today, many of them are afraid to adopt the technology because there are multiple competing products, and it is unclear which one the community will centralize around–or if the community will end up supporting multiple products.

Over the next few weeks, we will be airing interviews from KubeCon EU 2019 in Barcelona. These interviews are a window into the Kubernetes and cloud-native ecosystem, which is transforming the world of infrastructure software.

The most prominent theme across these shows was that of service mesh. Why was service mesh such an important topic? Because the battle for service mesh supremacy is a classic technology competition between a giant incumbent and a startup with far fewer resources.

The Kubernetes ecosystem is beautifully engineered to allow for a marketplace of warring ideas where the most worthy competitor wins out–but in some cases there is room for multiple products to occupy different subsections of the market.

Across these episodes, one theme we will explore is the governance and the diplomacy of these competing solutions, and how the Kubernetes ecosystem is structured to allow for harmonious resolution to technology battles.

It is tempting to look at this competition between service meshes as winner-take-all. But as of late May 2019, we do not yet know if it will be winner-take-all. In order to predict how the service mesh wars will play out, the best we can do is to look at historical examples.

The container orchestration wars were a winner-take-all market. Container orchestration was a problem of such depth, such technical complexity and integration, that there had to be a single winner for the ecosystem to rally around.

During the container orchestration wars, as Mesos and Docker Swarm and HashiCorp Nomad and Kubernetes fought for supremacy, many large enterprises made bets on container orchestration systems that were not Kubernetes. When the dust settled, Kubernetes was the victor, and the enterprises that had bet on other orchestration systems begrudgingly began thinking about how to migrate to Kubernetes.

But during the orchestration wars, many more enterprises were sitting out altogether. They did not choose Kubernetes or Mesos or Swarm. They chose to wait.

Enterprise technologists are smart, and they can tell when a technology is immature. Although many enterprises wanted an orchestration system to manage their Docker containers, they did not want to insert a heavy abstraction that they would have to tear out later on.

Once Kubernetes won the orchestration wars, enterprise dollars piled into the space. The cloud native community has grown faster than anyone expected, because we solved the collective action problem of centralizing on a container orchestrator.

From enterprises to cloud providers to ISVs to podcasters, we share the same vision for Kubernetes: it is the Linux for distributed systems.

Within the Kubernetes ecosystem, the thought leadership tries not to pick winners. It is better for everyone if the winners are decided through competition. In order to foster competition, interfaces into Kubernetes can provide a layer of standardization along which different products can compete. Enterprises can buy into an interface without buying into any particular product.

Examples include the Container Network Interface (CNI) and the Container Storage Interface (CSI). Every Kubernetes application wants storage and networking, but does not want to be locked into a particular provider. Because there is a standardized interface for each, an application can swap out one storage provider for another, or one networking provider for another. A rough illustration of the pattern follows below.
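This sketch is not the actual CNI or CSI specification (both are plugin- and gRPC-based); the interface and provider names below are hypothetical. It only illustrates how coding against a standard interface lets a cluster swap providers without changing anything that depends on it.

```go
// Illustrative only: a simplified stand-in for a standardized storage
// interface, with two hypothetical providers implementing it.
package main

import "fmt"

// BlockStorage is a hypothetical, minimal storage interface.
type BlockStorage interface {
	CreateVolume(name string, sizeGB int) (volumeID string, err error)
	AttachVolume(volumeID, nodeID string) error
}

// vendorA and vendorB are hypothetical providers behind the interface.
type vendorA struct{}

func (vendorA) CreateVolume(name string, sizeGB int) (string, error) { return "a-" + name, nil }
func (vendorA) AttachVolume(volumeID, nodeID string) error           { return nil }

type vendorB struct{}

func (vendorB) CreateVolume(name string, sizeGB int) (string, error) { return "b-" + name, nil }
func (vendorB) AttachVolume(volumeID, nodeID string) error           { return nil }

// provision only knows about the interface, never a specific vendor.
func provision(s BlockStorage) {
	id, _ := s.CreateVolume("data", 10)
	_ = s.AttachVolume(id, "node-1")
	fmt.Println("provisioned", id)
}

func main() {
	// Swap vendorA{} for vendorB{} and nothing else has to change.
	provision(vendorA{})
}
```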

How does this relate to service mesh?

In the service mesh market, Buoyant was first to market with its open source project Linkerd. Today’s guest William Morgan is the CEO of Buoyant. Over the last four years, Linkerd has slowly grown a following of dedicated users who run the open source service mesh in production.

In that time, Linkerd has evolved from its original JVM-based proxy, built on the Finagle library developed at Twitter, to a Rust-based sidecar data plane and a Go-based control plane. Buoyant’s dedicated focus on the service mesh space has won over much of the community, as evidenced by Linkerd becoming the predominant apparel brand at KubeCon EU 2019: Linkerd hats and t-shirts were everywhere at the conference.

Why did Linkerd become trendy? Ironically, because of a competing service mesh whose launch strategy was widely seen as an affront to the spirit of the cloud native community.

Istio was created within Google, and launched with a set of brittle partnerships with IBM and other companies. Istio careened into the Kubernetes ecosystem with violent fanfare, trumpeting itself as the cloud native service mesh du jour through endless banner ads, marketing email campaigns, and KubeCon programming.

Any listener to this podcast knows I am as gullible as any technologist. I’m an idealist–and I wanted to believe that Istio represented the service mesh equivalent of Kubernetes. It’s from Google, it launched with a bunch of impressive logos, and it has an inspiring vision. Looks cloud native, smells cloud native, must be cloud native, right?

Unfortunately, Istio’s early marketing aggrandizements were disconnected from the nascent realities of the project. Istio was buggy and difficult to set up. Istio quickly developed a reputation as Google-manufactured vaporware: nice idea, not nearly ready for production.

For Linkerd, the timing could not have been better.

Istio’s romantic vision of an operating plane for routing traffic, managing security policy, and measuring network telemetry had seduced the enterprise masses. With their cravings unmet by Istio, these enterprises surveyed the market and quickly found their way to Linkerd, the humble service mesh next door, who had been waiting patiently all along.

The tide has turned against Istio, and towards Linkerd. But the service mesh wars have just begun. And as easy as it is to criticize Istio, the project is not merely vaporware. Istio has a vision for a detailed operating plane that will evolve together with Envoy, the service proxy developed at Lyft that Istio uses as its sidecar.

Perhaps Istio’s early embers had too much marketing gasoline poured on them, but the project could still succeed. Google is one of the most sophisticated, well-resourced companies in the world, and judging from its strategic messaging around Anthos and other initiatives, the company has already decided that Istio will be around for the long haul.

As a community, we should be grateful to witness the folly of Istio’s carpet-bomb marketing strategy. It is validation of the earnest resilience of the cloud native community that even under the omnipresent duress of Google marketing, the community was able to collectively reject the Istio Kool-Aid.

This should come as no surprise. The Cloud Native Computing Foundation resides within the Linux Foundation, and the Kubernetes ecosystem has been ordained with the ardent technical purity of Linus Torvalds.

The CNCF was formed under the looming shadow of AWS. The CNCF was seeded with the donation of Kubernetes by Google. Much like the Linux community was positioned as a rebellious movement in reaction to Microsoft’s dominance, the Kubernetes community represents a fervent desire to open up the market to cloud providers beyond the tight-lipped, proprietary dominion of Amazon.

With such a deep spirit of insubordination, it is no surprise that the community has rejected Istio like a set of loosely coupled organs rejecting a foreign skin attempting to layer itself across them. Even though Google helped found the CNCF, the community was formed in spite of big centralized clouds, not as a marketing vessel for their products, which may or may not be open source.

Microsoft seems to understand this fact better than Google, at least in the domain of service mesh.

The day after this interview with William, Microsoft announced the Service Mesh Interface (SMI), a project created in partnership with Buoyant and other companies to define a minimal spec for what a service mesh should offer a Kubernetes deployment. SMI presents a safe buy-in point for enterprises that want a service mesh but do not want to get caught in the evangelistic crossfire between Istio and Linkerd.
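As a rough sketch of what such a minimal spec standardizes, here is a Go approximation of a weighted traffic-split resource, modeled loosely on SMI’s TrafficSplit; the field names and types are simplified and should not be read as the authoritative spec.

```go
// A rough sketch of the kind of object a minimal service mesh spec
// standardizes, loosely modeled on SMI's TrafficSplit resource.
// Field names and types are approximate, not the authoritative spec.
package main

import "fmt"

type Backend struct {
	Service string // backing Kubernetes service for one version
	Weight  int    // share of traffic sent to this backend
}

type TrafficSplit struct {
	Name     string
	Service  string    // the root service that clients address
	Backends []Backend // weighted split across service versions
}

func main() {
	split := TrafficSplit{
		Name:    "checkout-canary",
		Service: "checkout",
		Backends: []Backend{
			{Service: "checkout-v1", Weight: 90},
			{Service: "checkout-v2", Weight: 10},
		},
	}
	// Any conformant mesh could implement this split; the application
	// owner only writes the resource, not mesh-specific configuration.
	fmt.Printf("%+v\n", split)
}
```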

It is in this environment that we begin our next series of shows on the current cloud native ecosystem.

Thanks to the Cloud Native Computing Foundation for putting together an amazing podcasting zone at KubeCon, and allowing me to conduct these interviews.

ANNOUNCEMENTS

We are hiring two interns for software engineering and business development! If you are interested in either position, send an email with your resume to jeff@softwareengineeringdaily.com with “Internship” in the subject line.

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.

