EPISODE 1683 [INTRODUCTION] [0:00:00] ANNOUNCER: Kong is a software company that provides open-source platforms and cloud services for managing, monitoring, and scaling APIs and microservices. Marco Palladino is the CTO of Kong, and he joins the podcast to talk about the platform and APIs as the building blocks of the digital world. This episode is hosted by Lee Atchison. Lee Atchison is a software architect, author, and thought leader on cloud computing and application modernization. His best-selling book, Architecting for Scale, is an essential resource for technical teams looking to maintain high availability and manage risk in their cloud environments. Lee is the host of the podcast Modern Digital Business, produced for people looking to build and grow their digital business. Listen at mdb.fm. Follow Lee at softwarearchitectureinsights.com and see all his content at leeatchison.com. [INTERVIEW] [0:01:08] LA: Marco, welcome to Software Engineering Daily. [0:01:11] MP: Thanks for having me here. [0:01:13] LA: Great. APIs are the building blocks of modern applications, right? I mean, I think everyone here can take that as a given. But can you give me your thoughts on that statement? [0:01:23] MP: Well, APIs are the backbone of every digital product and every digital experience in the world. Everything we do in our daily lives, from booking tickets to getting paid, to traveling, to going to a concert, all of this is powered by an API. That is why APIs are the new Internet. 85% of Internet traffic is API traffic. When 85% of the Internet is APIs, what we're really saying is that the Internet is APIs. The Internet as we knew it, made of websites, made of blog posts, made of images, got replaced by API traffic in front of our eyes. [0:02:06] LA: We didn't even notice that it happened. [0:02:07] MP: And we didn't notice. [0:02:09] LA: Yeah. Yeah. What technology change has caused that revolution to occur? [0:02:15] MP: Well, API traffic is being driven by digital use cases. The first digital use case that drove the consumption of APIs was, back in the day, if you remember, SOA; that was the beginning of an API-driven world. Then after SOA, Steve Jobs went on stage and announced the iPhone, and suddenly everybody was building mobile applications to capture this new platform, the mobile platform. When you're building mobile applications, you need an API to connect your apps to the monoliths that are powering your services. Then in 2013 and 2014, Docker came out, Kubernetes came out, and the way we build applications became fundamentally different. We are not building APIs as an add-on to our monoliths; with microservices, APIs are there from day one. We have a lot more API traffic, a lot more back and forth, and of course, the API use case keeps growing. Then COVID happened. All of a sudden, our world, which was already in large part digital, became entirely digital, as we all know. All of that drove more and more consumption of APIs. Finally, AI, artificial intelligence, is the latest use case driving adoption of APIs, because AI is fundamentally driven by APIs. We can do three things with AI: we can use AI, we can train AI, or we can have AI interact with the world. Each one of these use cases means more APIs. The more AI, the more APIs, and APIs even more so become the backbone of everything that we do. [0:03:58] LA: Yeah. I think you hit really the major drivers of APIs. But there's one other that I think you might have just missed, and I think it's also a major driver.
That is the smart web application, the React-driven, JavaScript-driven application that makes a web page much more interactive. That also has a big API-driven backend associated with it. That was similar to what was going on with mobile phones, but still separate and distinct from them. [0:04:29] MP: 100%. The reason why APIs as we know them, RESTful APIs primarily, became very popular is that frontend engineers and mobile engineers didn't really have good support for SOAP. If you remember, SOAP was XML-based, hard to build, and hard to use. Then all of a sudden, SOAP goes away and RESTful APIs become the main way to build APIs in the world. The reason for that is that RESTful APIs, in JSON specifically, already had widespread adoption in frontend frameworks and mobile frameworks, and they were just much easier to build and much easier to use than SOAP. Part of the reason why APIs emerged the way they did is those frontend engineers and mobile engineers. [0:05:18] LA: Right, right. Yeah. Certainly, when REST first came out, or maybe not when it first came out, but when it was starting to become popular, let's put it that way, I know there was a lot of criticism of REST: it was too verbose, it was too restrictive, it wasn't typed well enough, lots of issues like that. What is it that ultimately drove REST to be the API of choice for applications? [0:05:43] MP: It was bottom-up, developer-driven adoption of REST. Developers already had very good JSON parsers, and they really liked JSON as a format. When we look at HTTP, HTTP already gives us the primitives that we need to be able to drive API behavior. You combine HTTP primitives with JSON, and then you have your typical RESTful API. Now, the thing about REST is that REST is a set of best practices. It's not really a strict spec. [0:06:12] LA: Right. [0:06:13] MP: While on one end, that makes it easier to build and consume APIs, on the other end, it also creates a little bit of fragmentation, which we were seeing across the board especially in the early days of REST. Developers didn't know that in order to change state, you were not supposed to use a GET request; you were supposed to use a POST request, and all of that. Then eventually, as the industry became more and more mature, we started to learn how to build proper RESTful APIs. [0:06:40] LA: Yeah. It caused a lot of APIs to be rewritten multiple times before they finally got it right. [0:06:46] MP: Absolutely. [0:06:47] LA: Yeah. Yeah. What is an API gateway? What is that term? What does that mean? [0:06:53] MP: Building an API is only half of the work. Once we build the request and response handling of the API, we need to be able to build security. We need to be able to build governance. We may want to offer our API in a tiered way, so that different tiers of consumers can do different operations on the API, with different rate limits and different access control. Building the underlying infrastructure that allows us to productize an API, well, that's a lot of work. An API gateway takes that productization of APIs away from the developer and pushes it into the infrastructure, in such a way that developers can focus on the business logic and the outcomes of the API, while the API gateway provides the underlying infrastructure to run, offer, and expose that API.
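To make that separation concrete, here is a minimal sketch of pushing those concerns into the gateway using Kong's Admin API, assuming a local open-source Kong node with the Admin API on its default port 8001; the service name, upstream URL, and limits are hypothetical:

```python
import requests

ADMIN = "http://localhost:8001"  # Kong Admin API (default port; assumed local node)

# Register the upstream service behind the gateway (name and URL are hypothetical).
requests.post(f"{ADMIN}/services",
              json={"name": "orders", "url": "http://orders.internal:8080"}).raise_for_status()

# Expose it to consumers on a public path.
requests.post(f"{ADMIN}/services/orders/routes",
              json={"name": "orders-route", "paths": ["/orders"]}).raise_for_status()

# Security and rate limiting become gateway configuration, not service code.
requests.post(f"{ADMIN}/services/orders/plugins",
              json={"name": "key-auth"}).raise_for_status()
requests.post(f"{ADMIN}/services/orders/plugins",
              json={"name": "rate-limiting",
                    "config": {"minute": 60, "policy": "local"}}).raise_for_status()
```

The orders service itself never sees any of this: consumers now need an API key and are held to 60 requests per minute, entirely at the infrastructure layer.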
[0:07:44] LA: It takes things like security, governance, rate limiting, those sorts of things, and makes them infrastructure issues versus product issues. [0:07:52] MP: Correct. The more APIs we have, the bigger these requirements become, because essentially, these requirements become cross-cutting requirements of every API that anybody's building in the team, or in the organization. The more APIs we have, the more important it is to have a unified way to manage that underlying infrastructure. The gateway really provides that abstraction layer that allows us to separate infra from the actual API business logic that the developers are building. [0:08:23] LA: Yeah. I think a keyword you put into that sentence was unified, right? I mean, doing this in a unified manner, when you have hundreds, maybe thousands of APIs in an application or a system, handling APIs in a consistent way is really critical for security, as well as for governance and other things like that. Would you agree with that? [0:08:44] MP: Yeah, especially when APIs - we spoke about REST and how REST became the predominant way to build APIs. Then over time, that has also changed. APIs started to use different protocols, or different types of technologies, like gRPC and GraphQL, other than REST. The more APIs we have, the more important it is to use infrastructure that can standardize how we think about security, compliance, traffic control, all of those cross-cutting requirements, but that can also support all of the protocols that we use in our APIs. We want to use the best tool for the job, and we must rely on infrastructure that can support us in doing that. [0:09:28] LA: Yeah, it makes sense. It makes sense. Let's start talking about Kong a little bit. Now, when I envision Kong in my mind - I'm not a user of Kong. I've never used Kong before, other than seeing what you've done over the years at various shows. In fact, we met at some point in the past, a year or so ago, probably at one of the trade shows that we were both at. When I think of Kong, what I envision is, essentially, a single pane of glass for monitoring the status of all your APIs via a series of Kong API gateways. Is that a good description of what Kong does? [0:10:05] MP: Let me expand that description. [0:10:07] LA: Please. [0:10:08] MP: Today, when we build APIs, we build APIs for different use cases. We may build an API because we want to turn our application into a platform, so we want to attract developers outside of the organization to build on top of our APIs. We may be building APIs internally, because we want to allow other teams to compose APIs together and create new products faster. We may be using APIs as a way to implement microservices and as a way to consume AI. There are different use cases for APIs, and based on the use case, we may need different infrastructure. We need an edge gateway for external consumption. We need an internal service mesh for microservices consumption. We need an AI gateway for connecting together multiple LLMs and orchestrating them.
What Kong provides is a unified control plane that allows us to deploy the gateway in all of these different capacities, as an edge gateway, as a service mesh, as an AI gateway, and then gives us a unified control plane to see all the APIs that are emerging across these different use cases, and to provide a standardized way to manage them, consume them, monitor them, and expose them, whether they're external or internal, in pretty much any capacity. [0:11:29] LA: You create API gateways for the variety of different use cases that you mentioned. We'll go into those in a little more detail in a bit. And you provide the coordination for a single pane of glass approach to managing those gateways, which includes not only controlling them, but also analytics and all of that. [0:11:50] MP: Absolutely. I mean, there is a lot more being done. The point is this. APIs, we said earlier, used to be an add-on. If you look at APIs today, they are the backbone of what every organization does. APIs are being used to access every piece of data and every service that the organization provides to its customers. When you look at the APIs, you're really looking at the interface of the business. How can any business innovate, modernize, and execute on a vision without understanding what APIs the organization has? The APIs are the system of record of everything the organization does. A platform like Kong gives developers the tooling and the infrastructure technology to cater to their technology use cases, but it also gives the organization a platform to drive business outcomes, like creating new products faster, shipping bug fixes more quickly, entering new markets, stuff like that. [0:12:53] LA: When I think about the types of things that an API gateway is going to help with, we've talked about governance, security, and analytics, and I want you to go into each of those three in a little more detail in a second here. But is documentation the fourth one? [0:13:10] MP: Well, documentation is an important part of providing an API platform. The consumer of an API at the end of the day is a developer. In the future, it will be AI. But today, it is a developer. A developer needs to know how to use an API via documentation. I want to clarify that the gateway is one of the many API use cases that we can deliver. There is an API gateway, there is a service mesh, there are ingress controllers. The gateway is what Kong started with, but it's part of a broader end-to-end platform that covers other use cases as well. [0:13:51] LA: You're actually right. I've been using the term API gateway very loosely as an overall theme. But in fact, there are several use cases. API gateways are really more for external APIs, while service mesh is more for internal ones, plus the other use cases you're talking about. What about outbound APIs? An application's use of business SaaS applications and things like that. [0:14:19] MP: When we look at the gateway in general, it provides an ingress point to be able to consume internal APIs. A gateway can also be deployed as an egress gateway, in such a way that the APIs that we consume are external APIs, which could be a SaaS product, to your point. Then we can use that gateway as an egress point for all traffic. On top of that egress traffic, we can then manage credentials, we can manage security, we can manage observability, the same way we would for our own APIs that we're exposing at the ingress.
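As a rough sketch of that egress pattern: the application calls a route on the gateway instead of the vendor directly, and the gateway attaches the credential and records the traffic. The vendor URL, route path, and token below are hypothetical; the request-transformer plugin shown is one real Kong plugin that can add headers on the way out:

```python
import requests

ADMIN = "http://localhost:8001"  # Kong Admin API (assumed local node)

# Egress: model the external SaaS API as a service behind the gateway
# (the vendor URL and route path are hypothetical).
requests.post(f"{ADMIN}/services",
              json={"name": "billing-saas",
                    "url": "https://api.billing-vendor.example"}).raise_for_status()
requests.post(f"{ADMIN}/services/billing-saas/routes",
              json={"paths": ["/egress/billing"]}).raise_for_status()

# The gateway holds the vendor credential; application code never sees it.
requests.post(f"{ADMIN}/services/billing-saas/plugins",
              json={"name": "request-transformer",
                    "config": {"add": {"headers": ["Authorization:Bearer VENDOR_TOKEN"]}}}
              ).raise_for_status()

# The app calls the gateway's proxy port (default 8000) instead of the vendor,
# so credentials, security, and observability are managed in one place.
invoices = requests.get("http://localhost:8000/egress/billing/invoices")
```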
[0:14:52] LA: Got it. That makes sense. Let's talk about those three main use cases here that we're talking about. Governance, security, and analytics, I think. Am I missing anything besides those three? [0:15:04] MP: There are others. There is traffic control. There is support for serverless APIs. But we can talk about some of these capabilities. To your point, let's go one level deeper. [0:15:13] LA: Yeah, so let's start with governance. [0:15:16] MP: Yup. We're building APIs, and now these APIs have to be exposed for other developers to consume them. Now, the biggest mistake developers make is not thinking of APIs as products. Products have a lifecycle. We build products, we ship new products, we version products, we decommission products. Likewise, APIs are products. APIs are products that need the same lifecycle that regular products, mobile applications for example, also have. Managing that lifecycle, and the governance of that lifecycle, is essential for an organization to be successful with its API strategy. Governance is about governing who can create APIs, who can publish APIs, who can version APIs, who can access those APIs, and about creating rules, permissions, and tiers in such a way that we have good guardrails around how our APIs are consumed and published. Then once we do that, we need to be able to secure APIs. APIs cannot just be open to the Internet. We need to be able to provide authentication and authorization, and enforce entitlements on the API operations that we execute. We need to be able to rate limit and throttle our APIs in such a way that we prevent abuse. We need to be able to enforce the traffic security and traffic control that we want for that API to be reliable and successful in the organization. These are all aspects of API infrastructure that the gateway, the service mesh, and the ingress controller can provide to our APIs. Then finally, once we have our APIs secured and governed, we need to be able to analyze their traffic. We want to know how many requests per second we're receiving, how many errors we're getting, and what the latency is. You see, when APIs go down, our applications go down. APIs are on the critical execution path of every user experience that we build. Everybody listening to this podcast has horror stories of using products that don't work: the connection drops, or there is an error. Chances are that all of that is happening because there is an API somewhere in the product that's not working properly. Being able to monitor that traffic and ensure reliability is important for our products and for the end user. [0:17:46] LA: You talk about analytics as a way of making sure the API is still functioning correctly. Rate limiting is a good part of that as well. Let's talk a little bit about the rate limiting aspect. Most people know what rate limiting is and that applications do rate limit, but why is rate limiting such a critical aspect of maintaining the health of an application? [0:18:10] MP: For a couple of reasons. First and foremost, we can use rate limiting as a packaging capability. We may have different tiers of consumption that get access to different levels of usage.
We may have partners that are whitelisted to make more requests, because they're premium partners or developers, versus others. That's a packaging conversation. Then internally, we may want to use rate limiting to avoid one API going down, or one API receiving too many requests and, by doing so, overloading other APIs that it may be using. The thing about microservices is that APIs don't live in a silo. APIs themselves are going to be communicating with other services, like a database, a cache, or maybe other APIs. Being able to rate limit the traffic ensures the good health of every other API, not only the API that's the entry point for the traffic; it's really about the entire infrastructure. Without rate limiting or throttling, if one API gets overloaded, then another system one, two, or three layers down will also get overloaded, and that starts cascading failures that are very hard to recover from. Being able to ensure good and correct access to our APIs is important for that reliability aspect. [0:19:34] LA: Right. Rate limits are important not only externally, to prevent customers or bad actors from doing bad things to you; they're also important internally. When you have a service that goes crazy, or is acting poorly, it can send too much traffic to another internal service that is otherwise working fine, causing that service to no longer work fine, which can affect services three, or four, or five, or 20 levels deep that depend on the services that are acting poorly. Rate limiting can keep that from happening by preventing the badly behaving service from overloading another service. Is that a good summary? [0:20:19] MP: Correct. When we use rate limiting in combination with governance, we can be more sophisticated in the way we enforce these limits. Let's say that there is one API that's being used by, let's say, 20 internal systems, but some of these systems are more critical than others. If our API is under load, we want to give it enough time to scale up, in such a way that we don't overload the infrastructure. As we do so, we may want to restrict the consuming services that are not as important, and instead give a higher limit to the ones that are more critical, in such a way that if we need to rate limit our systems, we still prioritize the functionality of the more important ones over the others. The governance aspect of being able to have different limits per client and per consumer now becomes very important.
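To make those mechanics concrete, here is a minimal, illustrative sketch of per-consumer rate limiting with different limits per tier, the kind of policy a gateway enforces. This is a plain token bucket, not Kong's actual implementation, and the tier names and limits are hypothetical:

```python
import time

# Hypothetical tiers: allowed requests per second, per consumer.
TIER_LIMITS = {"premium-partner": 100.0, "internal-critical": 50.0, "default": 5.0}

class TokenBucket:
    """Illustrative token bucket: refills continuously, rejects when empty."""

    def __init__(self, rate_per_sec: float):
        self.rate = rate_per_sec
        self.tokens = rate_per_sec  # start full: one second of burst allowance
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.rate, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # the gateway would answer 429 Too Many Requests

buckets: dict[str, TokenBucket] = {}

def check_rate_limit(consumer_id: str, tier: str) -> bool:
    # One bucket per consumer, sized by that consumer's tier.
    rate = TIER_LIMITS.get(tier, TIER_LIMITS["default"])
    bucket = buckets.setdefault(consumer_id, TokenBucket(rate))
    return bucket.allow()
```

Under load, a less critical consumer drains its small bucket first and starts getting rejected, while a critical consumer keeps its budget, which is exactly the prioritization being described.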
[0:21:08] LA: Right, right. That makes a lot of sense. Okay, great. What other use cases are there for the analytics side of, I keep using the term API gateway, but I mean the entire product set we're talking about here? What are the uses of analytics, other than monitoring things like rate limiting and detecting problems? What are some of the other uses? [0:21:29] MP: Well, analytics on APIs really becomes analytics on how the business is performing, right? Because every operation is an API operation. By tracking those API requests, we can see how many tickets were purchased. We can see how many retries the user made to book that flight, or book that hotel. Essentially, what we can see is the business intelligence of the organization: not only APIs as the underlying technology, but APIs as the underlying fabric of the business. The analytics are important for solving problems in our infrastructure, to determine whether the traffic is healthy or not, but they are also business intelligence for the business itself: to understand where to invest, and which areas of the organization, which teams, which lines of business depend on each other, based on that API traffic. Really, it paints a picture that is technical, but also business-driven. [0:22:29] LA: Makes sense. When we talk about analytics, not all analytics are created equal. Now, for APIs, logging is an important part of the analytics, but there are other things that are important besides logging in the analytics of an API. Do you want to go a little deeper and talk about the different types of analytics and why they're important for an API? [0:22:51] MP: Absolutely. You mentioned it: logs are quite important. You see, when we look at the most recent cyber-attacks in the world, chances are that they're leveraging a weak link in our API infrastructure to enter the organization and then take advantage of it. Being able to measure traffic logs and detect anomalies in them not only allows us to solve problems faster, but also allows us to prevent malicious access to our systems. Recently, in Australia, there was a huge cybersecurity attack that affected millions of customers, because one of the organization's very critical APIs was, surprisingly, left open to the public. It was very easy to exploit. Being able to measure access to that API with proper logging and proper monitoring and proper metrics would have helped the organization understand where the problem was. So: logs for traffic anomalies and detection; metrics to capture the vitals of the API, how many requests, how many errors, what the latency is; and also being able to prevent problems before they happen. [0:24:07] LA: Post-problem analysis, too. When you have a cyber-attack, you can use logs, and specifically API logs, to detect how deep the problem went. Once a service was compromised, what other services became compromised as well, or what information were the attackers able to get from other services through the compromised service? [0:24:28] MP: A few years ago, we were building websites on the Internet, and we were using firewalls, web application firewalls, to block malicious traffic. In an API world, and given that APIs are the new Internet, we're using API infrastructure to collect metrics and logs and to provide API security, which is very different from website security, to protect this new Internet. In this new API-driven world, the technologies we used to protect our blogs and our websites are not working so well anymore. We need technology that's specific to APIs, and this is what Kong and, in general, API infrastructure provides.
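As a hedged sketch of that kind of log-driven detection, assuming the gateway emits one JSON access record per request (the field names here are hypothetical, not Kong's exact log schema):

```python
import json
from collections import Counter

# Hypothetical JSON-lines access log, one record per proxied request.
# Assumed fields: route, status, consumer (None if unauthenticated).
def scan_access_log(path: str, error_rate_threshold: float = 0.05) -> None:
    totals, errors, anonymous = Counter(), Counter(), Counter()
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            route = rec["route"]
            totals[route] += 1
            if rec["status"] >= 500:
                errors[route] += 1
            if rec.get("consumer") is None:
                anonymous[route] += 1

    for route, total in totals.items():
        rate = errors[route] / total
        if rate > error_rate_threshold:
            print(f"ALERT {route}: {rate:.1%} server errors over {total} requests")
        if anonymous[route] == total:
            # Every request unauthenticated: possibly an API left open by mistake.
            print(f"ALERT {route}: all {total} requests are unauthenticated")
```

The same records double as the business telemetry Marco mentioned: counting successful responses per route is counting tickets sold.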
[0:25:13] LA: I think some people can sit here and say, "Yeah, this all makes sense for enterprises and all that good stuff. But I'm the CEO of an SMB. Why is this important to me?" Can you talk about the difference between SMBs and enterprises, and how their needs for a product like Kong are the same, as well as how they're different? [0:25:35] MP: Well, the outcomes are very similar, whether we are working with a 10,000-person company or a 25-person company. At the end of the day, it boils down to focus and proper allocation of resources. You see, building API infrastructure, as we've seen going deeper in this conversation, can be very expensive. This has nothing to do with the actual products and the customers that your organization is serving. By leveraging infrastructure that's built with great technology, that's fast and performant, and that covers all the bases, we can have our teams focus on building great products, finding product-market fit, and being more competitive in the market with the actual capabilities we're building for the business. Arguably, it's even more important for a smaller organization to have focus, because a smaller organization has even fewer resources than a larger one. And so, it becomes even more important for them to not do all of these things themselves and instead leverage technology that does them. [0:26:40] LA: That makes sense. That makes sense. Let's talk about the different offerings you have. You mentioned there are different types of API management needed, whether you're talking about external APIs, or about microservices and mesh technology, whatever. What products do you have that fit into each of those places? [0:26:58] MP: Today, Kong provides an end-to-end API platform that includes a gateway. It includes a service mesh. We may want to use the gateway to enable communication at the edge, or between applications. Some of these applications are going to be microservices. When they are microservices, we need a secure network overlay that enables, at L4, zero-trust security, observability, and traffic control at a lower layer of the network, and we use a service mesh for doing that. Then when we build our services and our APIs, there is a whole development life cycle to design the APIs, mock the APIs, and test the APIs. Kong provides a product called Insomnia that does all of that from a developer life cycle standpoint. Essentially, we cover all bases, from building and designing an API, to then pushing it to a gateway, a service mesh, or an ingress controller if you're using Kubernetes. Then on top of all of this, we provide a unified control plane that allows us to manage this whole automation, this whole life cycle, the whole catalog of APIs that we're building, in a way that we can then secure them, expose them, document them, provide analytics for them, and so on and so forth. One of the latest additions to our platform is the AI gateway to consume one or more LLM providers, whether they are in the cloud or self-hosted. Then the most recent release of our platform is our dedicated cloud gateways announcement, which allows us, essentially, to provision our gateways in the cloud in one click, across multiple regions and multiple cloud vendors, without managing scaling, but still running them on dedicated infrastructure. It's a very revolutionary offering for running all of this infrastructure in one click, so that our API infrastructure becomes like electricity: always on, always enabled, always reliable, ready to use. [0:28:55] LA: Oh, cool. I guess the last one is something I was not aware of; that's a brand-new offering. Essentially, you're replacing the cloud provider's API gateways, and I'm using the term API gateway generically and loosely here now, with a service that is centrally managed and centrally controlled, versus individual cloud-based resources that are unrelated and unconnected.
[0:29:22] MP: We're doing that with half the latency and twice the performance, at least. Which means that for every API use case that we drive in our organization, every mobile interaction, every website interaction, we can deliver that user experience twice as fast, and with lower latency, than you could with the native solutions. There is something to be said about our technology: it is truly phenomenal from a performance and extensibility standpoint, which is why Kong is very popular today. [0:29:56] LA: Do you call yourselves a SaaS service, or would SaaS not be a good description for what you do? [0:30:03] MP: The entire platform, the gateway, the service mesh, the AI gateway, all of the things I mentioned today, Insomnia, the ingress controller, can be deployed on-prem in a self-hosted way, or they can be used via our cloud control plane. The cloud control plane allows the customer to run the data plane, the gateway that's processing the actual traffic, still in a self-hosted way: the control plane is in the cloud, but the gateways are self-hosted, so that API traffic remains local. Now with the dedicated cloud gateways offering, the data plane itself can be in the cloud. It can be entirely in the cloud and available in one click across all the cloud vendors and all the regions that we support, for customers that want to, essentially, not worry about ever having to deploy, scale, or upgrade modern API infrastructure. [0:30:56] LA: Makes sense. Makes sense. Now, I'm guessing at this point that most of your data plane components have to be single-tenant, and there are probably some exceptions there, etc. But is your control plane single-tenant, or is that a multi-tenant solution? [0:31:12] MP: Well, the control plane allows a customer to create multiple virtual control planes in their account. Each one of them can have its own data plane clusters that are compartmentalized from one another. Of course, this is one option; otherwise, the customer can also run everything in one virtual control plane. It's up to them to decide how they want to slice and dice it. The point is that we work with the top Fortune 500, the top Global 5,000, and of course, the broader developer community, and they want to have that compartmentalization, because maybe they want to separate their edge gateways from their internal gateways, and they want those gateways to be part of different virtual control planes, and yet have the benefit of a unified solution that can catalog, in a unified way, all of these control planes and all the services in them. The governance capabilities of Konnect, that's what our platform in the cloud is called, Konnect with a K, Kong Konnect, can be quite extensive. It really depends on what the customer wants to do. [0:32:15] LA: What about conformance, or governance issues, like HIPAA compliance, or GDPR compliance, where you need to localize your data, localize your resources, and separate components in various ways? I imagine this multiple virtual control plane approach can help with that. [0:32:35] MP: Well, for the platform itself, we have compliance certifications that accelerate those security assessments. We ourselves are compliant with PCI, SOC 2, and so on.
Then if the organization still wants to retain the API traffic, but does not want to manage the rest of the infrastructure, they can run headless, stateless gateways in their own infra that will communicate with the control plane in the cloud, but the traffic never goes to the cloud. The traffic still runs through these stateless gateways, which are very easy to scale, so they can just add more or remove them based on traffic, and yet the control plane in the cloud, and all the benefits of that unified control plane, are still there. Because the traffic never goes to the cloud, they can, essentially, use their own compliance certifications to process their traffic, instead of relying on ours. [0:33:28] LA: Right. Got it. The analytics, at least the analytics they're allowed to take out of the data plane, and the management of the APIs can still be centrally controlled, while still complying with whatever regional or security requirements their industry imposes. [0:33:47] MP: Yeah. We call this our hybrid offering. Hybrid, because the data plane still runs in a self-managed way, but everything else runs in the cloud. Today, actually, when you look at our cloud adoption, we are much farther ahead than our peer companies were at our size. Kong just announced we crossed a hundred million dollars of revenue, or north of that. We're growing very fast. We're building a long-lasting organization that's going to be here to stay. I do believe that Kong can be the next Cisco of L4 and L7. Back in the day, we were using switches and routers to connect our services in the data center. But in a microservices world, we're using gateways and service meshes and ingress controllers to connect our services. We have the opportunity here to build a long-lasting organization. Today, 20% of our revenue comes from the cloud, from our platform, primarily deployed in the hybrid way we've just described. We're working with the top Fortune 500 and the top Global 5,000 to power their infrastructure. Chances are that, without knowing it, if you are using banking solutions, or booking flight tickets anywhere in the world, you're actually using Kong. You just don't know it, because it's powering the underlying infrastructure, but it's there. [0:35:08] LA: Cool, cool. You've actually open-sourced some amount of your software. Do you want to talk about which parts of your software are open source and why you decided to go that way? [0:35:20] MP: The core of our platform, so the core gateway technologies, the core mesh technologies, and the core API development technologies, Insomnia, are open source, because Kong, since day one, has been a strong proponent of open-source software and a strong contributor in the open-source ecosystem. Why? Because we believe that open-source software is better than closed-source software. It is better for the end user, who can look into the product and submit fixes and patches. It's not a black box; they know what it is and how it's running. It's better for us as well, because with open source, you have a much quicker feedback loop from the community to understand what they want and how the roadmap should be driven from our standpoint in order to cater to them. It's a much quicker feedback loop, which ensures much better success for the product. It also aligns our incentives when it comes to ease of use.
Open-source software, to become popular, has to be easy, because if it's not easy, nobody is going to use it. It's a chicken-and-egg type of situation. By open-sourcing technologies, you really have to think hard about how you can make them easier to use, because being powerful and yet easy to use is important for every product, and open source aligns that incentive very well. [0:36:39] LA: Switching gears a little bit here, let's talk about AI, which of course is the buzzword that everyone cares about and is using nowadays. The growth of AI is tremendous, much like the growth of the cloud not that long ago. AI is really the technology that is driving a lot of application innovation nowadays. Now, I get the impression from things you've said that API management for AI is a little different than API management for other types of services. Can you talk about that in a little more detail? What do you mean by that? [0:37:12] MP: I believe that AI is revolutionary. I believe that AI can have an impact on the world, and on how we build software, as big as the Internet itself, and as big as Kubernetes itself. AI can be the next-generation platform that powers our applications, where we deploy AI agents that implement business logic, access our data, and provide a service to our users, and all of that could run on an AI platform, instead of microservices running on Kubernetes. I believe that there is big potential for AI. But today, we're seeing usage of AI in a much smaller capacity, compared to the vision I just described. We're seeing AI as a way to improve productivity, to provide better support, to provide intelligence on top of our APIs. When you look at AI, it is fundamentally an API use case, because everything we do with AI, especially with GenAI and LLMs, is accessed through an API. When we consume AI, there are three things that we want. Number one, we want to improve developer productivity when consuming one or more LLMs. The future is multi-LLM. We are going to be orchestrating multiple LLMs, because each one of them may be fine-tuned with specific information and data that provide a different level of intelligence. Number two, we want to ensure governance and compliance over our usage of AI, and make sure that, for example, we're not asking anything offensive or illegal, and we're not propagating personal information into AI. So, the whole security and compliance of AI. Third, we want to optimize the costs of AI, especially with the cloud models; they can become very expensive very quickly. How can we fine-tune our own self-hosted models and only use, for example, the cloud models as a fallback, or as a source of truth to train our self-hosted models, and then use more of our self-hosted ones than the cloud ones to optimize for cost? With our AI gateway, we cover all three bases. We improve developer productivity by providing one API that allows us to consume multiple LLMs. You build your code once, and you consume as many LLMs as you want. Then we provide advanced prompt engineering, security, and compliance to see how developers are using AI, and we can put guardrails in place to enforce responsible usage of AI. Then finally, we can monitor L7 AI traffic, the tokens, the providers, the models that we're using, to gather intelligence on our AI consumption and monitor costs, as well as implement traffic control capabilities to orchestrate between one LLM and another. The AI gateway that Kong has built, by the way, is also open source. It provides advanced AI capabilities that are complementary to the API management capabilities, but are very focused on AI as its own L7 protocol. We provide deep support for that.
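As a rough sketch of the build-once, consume-many idea: the application sends one request shape to a gateway route, and the gateway's configuration, not the application, decides which provider and model serve it. The route path and the behavior described in the comments are hypothetical; the OpenAI-style request body is just one common shape an AI gateway can accept:

```python
import requests

GATEWAY = "http://localhost:8000"  # Kong proxy port (assumed local node)

def ask(prompt: str) -> str:
    # One request shape, regardless of which LLM the gateway routes to.
    # Provider credentials, model choice, guardrails, and token accounting
    # live in the gateway configuration, not here.
    resp = requests.post(
        f"{GATEWAY}/ai/chat",  # hypothetical route handled by the AI gateway
        json={"messages": [{"role": "user", "content": prompt}]},
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The same call might be served by a self-hosted model by default and a
# cloud model as a fallback; the application code does not change.
print(ask("Summarize yesterday's failed payment attempts."))
```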
[0:40:29] LA: My guest today has been Marco Palladino, the CTO of Kong. Marco, thank you for joining me today on Software Engineering Daily. [0:40:36] MP: Thank you for having me. Fun conversation. [END]