EPISODE 1605 [INTRODUCTION] [0:00:00] ANNOUNCER: This episode of Software Engineering Daily is part of our on-site coverage of KubeCon 2023, which took place from November 6th through the 9th in Chicago. In today's interview, host Jordi Mon Companys speaks with Justin Cormack, who is the CTO at Docker. This episode of Software Engineering Daily is hosted by Jordi Mon Companys. Check the show notes for more information on Jordi's work and where to find him. [INTERVIEW] [0:00:37] JMC: Hi, Justin. Welcome to Software Engineering Daily. [0:00:39] JC: Good to be here. [0:00:40] JMC: My first question to you is actually about KubeCon itself, and open source in general. I am a bit fearful about this magic spell that bound together two great elements of life and business: open source as an ecosystem that is fantastic for ideas, for new solutions, for new businesses, for new collaboration, new projects, and, hard to say free money, but relatively easy access to financing. Specifically, venture capital, right? Which probably has fewer, or less stringent, requirements than banks and more traditional financing. The open source ecosystem has not changed. There are a few challenges that we will talk about a bit after this question. That nature of open source as something blossoming in every sense has not changed too much, I think. But the money has definitely gone away. It's way tougher to get financing and so forth. I'm not sure that is coming back. Do you see things changing in the way that I just described? Do you think that is a permanent change? Or not so much? [0:01:50] JC: There's definitely a change. I mean, the financing situation is definitely harder. I think there's still a reasonable amount of early-stage funding available. I think that you still see quite a lot of activity with pre-seed, seed, angel funding.
I think that people are being pushed towards getting to a real business earlier if they want to continue getting funding, rather than just being able to carry on. [0:02:15] JMC: Is that possible, what you just said? [0:02:17] JC: Well, I mean, I think that one of the things is that finding business models around open source has always been an interesting thing. A lot of people find it hard. I mean, I've been at Docker for a long time. We found it hard the first time. It took us a lot of iteration and restructuring to get around to finding a successful business model. I think that it's definitely difficult. I think we have more patterns of success now. We have examples. We have companies who have CNCF projects and business models. The two things are sufficiently separated that they're not competing with the open-source project. I think that's because they've planned the distinction between which bits are open source and which are commercial in a way that makes sense and monetizes. It does really help if you plan that upfront, when you're thinking about an open-source project. I mean, obviously, with a lot of open-source projects, you're just scratching an itch and trying to experiment with something. But think about, what does this look like as a business, what does this look like as a project, and how am I going to get contributors? I mean, one of the roles of CNCF is to be that neutral ground where people can work together and collaborate on things. It's also true that many of the projects are things that are just not businesses as is. They're components of something else. I mean, all the businesses are different. Something like containerd is not a business, but it's very, very widely used to enable all sorts of businesses, everything from Amazon to Docker. We're all using containerd to enable very different businesses. Then the business is not about selling containerd.
It's about selling a solution to a problem that just needs that infrastructure to be built. Then there's infrastructure projects, like Kubernetes itself. Those infrastructure projects are not so affected by the funding cycle, but they're more affected by the degree to which people want to collaborate. People have realized that most of the software they use is open source as well, which is another key change that's happened in the last decade or so. [0:04:37] JMC: It's pervasive. It's everywhere. Yeah, and I do actually like a lot what you mentioned, the role of the CNCF in being that neutral ground for everything to compete, collaborate, sprout, decline, the natural cycle of things. Also, to care about equal competition and financing, or funding rather. I mean, not that the CNCF is involved in that, but it does actually not oppose it at all, and it's welcoming. [0:05:04] JC: Well, I mean, in the charter, we specifically called out that we don't do kingmaking. We're not going to choose which project is going to win. Occasionally, we do things like sit projects down and try and persuade them to merge. OpenTelemetry is a great example, where there was a deliberate process of trying to merge to one standard, because the customers really needed the one standard, and it's been remarkably successful. In general, we have competing projects that do the same thing, which was also a reaction to the way OpenStack worked, and that kind of building one stack doesn't give you a foundation when things change and when different things turn out to be important, and so on. It also means we have people ruthlessly competing to try to make their project better than the other one, the other alternative in their space. [0:05:58] JMC: I'll quote James Governor, founder and one of the main analysts at RedMonk. He says in a tweet, I think this morning, "I realize some folks have beef with the Linux Foundation for being a commercial organization.
But if this is commercial, then I am here for it." The focus on inclusion, access, community, learning and sustainability at the heart of the Cloud Native Computing Foundation is just so welcome. I probably misread that a bit, but the general gist is so correct. I think that this definition of it is a fitting one, too. [0:06:29] JC: Open source is about community, and CNCF has built this very tight-knit community, where people come to KubeCon because their people keep coming. There's a lot of collaboration that goes on between companies that are also competing. We work together because we recognize that value, even if we're competing against each other. [0:06:55] JMC: To the point of funding, by the way, and to close this chapter, just, I think yesterday, Chainguard announced a B round of 60-something million. TestifySec, another company in the same sector of supply chain security, announced this morning, I think, a seed round of 7 million, which is quite big for a seed round. Yeah. This thing is still moving, and there might be headwinds, economic headwinds particularly, and I think the open source ethos and the momentum behind it is equally unparalleled and still fast and thriving. In that sense, I'm not concerned. [0:07:31] JC: Funding is just becoming more selective. Early stage, there's still quite a lot of it. Obviously, there's still those areas, like AI, which is obviously getting lots of funding. Then, later-stage funding is just being more selective. It's applied to the companies that are showing the right commercial traction. It used to be that everyone would get a series A. The gates for series A were a little bit fuzzy for a while. You could get money from someone. But now, it's more that the strong companies are still getting funding. It's the weaker ones - [0:08:11] JMC: To a certain degree, this is, in a way, nature healing itself.
I mean, it is true that free money, or very easily accessible funding, although it sounds counterintuitive, is not healthy for an ecosystem. There needs to be some sort of, not only profitability criteria, but other tighter requirements. Anyway, let's shift gears into Docker itself. What was wrong with shipping a binary? I know this is a complete shift of gears, turning the car around itself, and we will get into the second wave of DevOps, but what did Docker fundamentally change in what Adam Jacob is calling the first wave of DevOps? Which, arguably, Docker led, among others. What was wrong, again, with shipping a binary that had everything packaged in it? There you go. That's all for you. [0:09:05] JC: Yeah. I mean, I think that the answer was really a lack of consistency. Because yes, you can do that with, say, Java and maybe Go, but shipping a Python binary is very difficult and different. There's a whole lot of tools that go with it. Actually, we build quite different kinds of software; they look quite different and are packaged in quite different ways. Docker just provided one way that was basically consistent. It didn't force you to change how you built things. It didn't say, "Oh, the only way you can deploy your things is if you can create one single binary that fits into the JVM." It's like, no. Actually, you can just make your software work and then you can ship it to production working. I think there's a number of things that we changed. I mean, you get a change budget when you're making something new. For example, the workflow where you build and test stuff and then you don't update it in production, you ship a new version. That workflow was a big change. Weirdly, people don't actually know, but you can actually update containers in production if you want to. But don't tell anyone.
Because we might as well persuade them not to, and it's actually incredibly valuable, because it means it fits with the model: the way you build it, you test it. You know that the actual version you've tested is the version in production. I think that kind of thing affects things like supply chain, which we could talk about later. It's like, knowing the exact version that you had in production last Thursday when there was a security incident, or performance incident, or something, enables you to really understand your system that much better. [0:10:47] JMC: Then, because I wasn't present - I was actually more on the other side. I was in the C++ industry when this happened. I did miss a bit of the cloud native revolution, in a way, the first wave of DevOps. Let's call it that. Let's keep the same framing that Adam is using. This appealed to a smaller portion of the software engineering developer population, right? Because C++, C, those compiled languages were happy with shipping their binaries. Java, the same probably, but Python was not that big then, right? [0:11:19] JC: Python has been big for a long time. [0:11:21] JMC: But not compared to the combination of C++, C and Java, for example. It would be a smaller amount of developers, right? [0:11:30] JC: No, not at all. Docker was actually originally a Python-based company. We moved from Python to Go quite early on, but Docker was launched at PyCon, and that was the community that Docker came from. That was back when things like Node.js were becoming prevalent in the enterprise quite rapidly. There was a ton of innovation around thinking about developer productivity, like how do we actually make it easier to develop things? Then there was the Ruby on Rails thing that happened a little bit before cloud native, but that was a story about developer productivity. How can we ship applications faster?
That's where a lot of the drive for cloud native came from, those ideas like, how can we ship things faster, more effectively, more often? The Continuous Delivery book is 15, 20 years old now, but I remember reading it and thinking, this is going to make a big difference to how we do things. It's taken time, but it's really those ideas that have driven that cloud native culture. [0:12:33] JMC: The underlying reasons that have driven the industry from monolithic applications, probably in compiled languages, but not necessarily, to microservices, to what seems to never become mainstream, but is still there, which would be functions, serverless. Would you know which ones those are? Although evolution is a biased word, meaning that the end is probably better than the beginning, and I don't think that's necessarily true. Do they map to the reasons that explain physical hardware, VMs, and container runtimes? [0:13:16] JC: Microservices and containers grew up together. I mean, there's a number of reasons for that. I think the real driver for microservices was actually organizational change. Organizations suddenly had a lot more developers, because software was more important, and having hundreds of developers working on one application turned out to be really hard. [0:13:38] JMC: Except, if you're Google. [0:13:40] JC: Yeah. I mean, some organizations do it, but it's difficult. I mean, Facebook does it, too. [0:13:45] JMC: Yeah, because they've got incredible engineers, right? [0:13:47] JC: They have built a lot of custom tooling for doing that as well. Most organizations found that having a team of six people working on an application was more effective, and so they split things into these units.
Now, there's an argument that the unit maybe shouldn't be the unit of delivery, and that maybe a network boundary isn't always the right thing, especially as things get more and more fine-grained. I think that some of the work around WebAssembly is exploring some of those ideas. But mostly, you want to get that human-understandable context, where I can understand the whole of the application. The unit that I'm working on should be that size; whether that should necessarily correspond to exactly how it gets delivered is maybe less clear. [0:14:38] JMC: You don't see applications being delivered in the shape of functions as the main way in which applications are being developed in, I don't know, 10 years? [0:14:47] JC: I think functions are interesting, because almost everyone is using functions for stuff, but not for the whole application. They're using them for parts of an application, mostly, or lots of glue. I think there's a few things. I mean, part of the reason why containers were successful is you didn't have to change the way you write applications much at all. You could just run the same application, but in the container. That made adoption really easy. Functions tend to ask you to change more stuff, and there's potentially more of an architectural change that you have to think about. I think that if you look at some of the work on functions, it's actually interesting thinking about how you glue them back together again. There's things like Temporal, which effectively delivers long-running functions, and AWS Step Functions, which you glue and pack together. There's a bunch of work Microsoft's done on turning things into functions automatically and putting them back together and working out where the boundaries are. I mean, one of the things about going more fine-grained than standard microservices is that you get a lot of pieces. They're very fine-grained.
Again, that's where - do you want a network call when it's - you end up with performance issues, because it's too fine-grained, and things like that. Again, I mean, it's a useful way of thinking of things, but it's not clear if it's architecturally always the right thing. I mean, performance is a really interesting area as well. [0:16:25] JMC: Exactly. I've been talking to people about container images in general, and two things crop up in these conversations. One is a trust element, of which we will talk later on as part of my questions about supply chain security. The other one is heavy weight, or weight. Are they, in general, you reckon, or will the industry evolve into lighter-weight container images? Is that something that is a concern? [0:16:50] JC: I mean, I think there are different reasons why container images are large. It's a bit nuanced, because some container images are large because they have things in that you don't actually need, but you don't know. You either don't know that you don't need them, or it's difficult to not have them in there. A lot of container images are large. [0:17:11] JMC: Could you give me an example of things that are included that you don't know about? [0:17:15] JC: Lots of people do not want a shell in their container for security reasons, so that certain people can't execute things in your shell. Now, it's also extremely difficult to develop, as a developer, when a container doesn't have a shell. We're actually building out a set of tooling called Docker Debug, which we're shipping soon. It's an extension now, which is actually great, because it basically gives you a set of tools you can add to a container without changing the actual image; it mounts on top. You can add a development environment, even to a container that has nothing in, which is actually really useful when dealing with those things.
A lot of customers just have really big images for good reasons. We have customers whose container images are 9 gig. [0:18:02] JMC: That's huge. [0:18:03] JC: It is huge. Household names use these large containers, and it's like, that's the way that their organization develops code. A lot of the AI stuff is really big. I mean, I think the data and models are big and - [0:18:18] JMC: That's going to bring way more weight to any given image. [0:18:22] JC: Yeah, exactly. I think that sometimes the people who advocate for lightweight things are just running Hello World and not really understanding how people really build applications. I think you need to understand the reasons why people's code is the way it actually is, and what they're actually doing. [0:18:43] JMC: What about software supply chain in general? Is there any aspect of it? It's a huge, broad topic that I'm laying out here, so feel free to pick the area in which you have focused, because it's impossible to have everything in mind. Yeah, what area of supply chain security and transparency is the one that most concerns you, that you've been most focused on lately? [0:19:07] JC: I gave a talk at Rejekts on Saturday about attestations and verifiability. I think it really is one of the key areas. I mean, so - [0:19:16] JMC: Could you define what attestations are? [0:19:16] JC: Yeah. Let me start from the others. [0:19:19] JMC: Oh, yeah. [0:19:20] JC: There's lots of talk about bills of materials and SBOMs, and I want to know what's in the image. One of the things that we've discovered is that everyone's tool for generating SBOMs gives you a different answer. This is not a great situation, really, to be in. One of the questions we asked a while back was, well, if you give me an SBOM, can I check if it's actually true? [0:19:48] JMC: What do you mean, by the way, if the SBOM is true?
[0:19:51] JC: For example, an attacker could give me a container with an SBOM saying, this container's got Python, this version, whatever. You could omit to tell me it's also got this program I wrote that hacks you. It's basically trying to answer the question of, if you say it's got Python 3.1.2 from Debian in it, is that true? Can I go and take the Debian package and find it? Can I check off every file in this container and see if it came from something you've listed in the SBOM? Attestations are basically statements like this, like a component of the SBOM. Like, this image has this version of this package from Debian in it, for example, would be an attestation. Then the verifiability piece is, can I actually go and check the statement, the attestation that you've added to this? Because a lot of the time, the attacker could just give you a load of false information about it. [0:20:56] JMC: Mix some things. [0:20:59] JC: There's a lot of stuff on signing, where someone says, that's okay. But does the person really know? Because a lot of this stuff is built in an automated process. It's hard to really - we need tooling that helps us inspect these processes that go on and helps us, ex-post, check that they were valid. We've been doing a whole bunch of work around this. With BastionZero, we launched OpenPubkey, which is basically designed to give you attestations on OIDC pieces. For example, if you've built something in GitHub Actions, you can basically attach the JWT, modulo various things. Anyway, you can effectively implicitly attach it and say, this was definitely run in GitHub on this Git commit. I ran this build action on it, and GitHub is standing behind the fact that this is true, and I can go and check this later. I think there's a whole lot of work, really, about how can I really verify this? It really fits into the whole concept of zero trust.
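The check Justin describes, walking every file in a container and seeing whether the SBOM truthfully accounts for it, can be sketched in a few lines of Python. Everything here is hypothetical for illustration: the file contents, package names, and SBOM shape are invented, and real tooling works against actual image layers and upstream package databases rather than in-memory dicts.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_sbom(files: dict, sbom: dict) -> list:
    """Return the paths in the 'container' that no SBOM entry truthfully covers."""
    # Flatten the SBOM into a map of path -> claimed digest.
    claimed = {path: digest
               for pkg in sbom["packages"]
               for path, digest in pkg["files"].items()}
    unverified = []
    for path, data in files.items():
        # A file is verified only if the SBOM lists it AND the digest matches.
        if claimed.get(path) != sha256(data):
            unverified.append(path)
    return unverified

# A toy "container": path -> file bytes. The backdoor is deliberately
# missing from the SBOM, mimicking the attack described in the interview.
files = {
    "/usr/bin/python3": b"python interpreter bytes",
    "/opt/backdoor": b"program that hacks you",
}
sbom = {
    "packages": [
        {"name": "python3", "version": "3.1.2",
         "files": {"/usr/bin/python3": sha256(b"python interpreter bytes")}},
    ],
}
print(verify_sbom(files, sbom))  # the backdoor is flagged as unaccounted for
```

The key point of the sketch is that an SBOM that merely lists packages can be checked against reality only if it carries verifiable statements (digests here, signed attestations in practice) for everything in the image.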
It's like, you don't trust something because of where it is. You trust it because it has enough information and metadata on it that you can verify, so that you can have confidence. [0:22:14] JMC: Stepping back a bit, I know this might be a bit of a controversial topic and I don't mean to force you into a hot take, but what is the main problem with SBOMs? With SBOM adoption, is it the tooling that is just not yet there to generate a - [0:22:32] JC: I mean, basically, generating an SBOM ex-post by looking at a bunch of binaries is hard. There's bits of it that are relatively easy. First of all, assuming you've left the package metadata there and haven't deleted it, you can look at the package metadata from the various packaging systems that you've used. Then, there's things like, when we build a jar, it's got a whole lot of libraries in it, and they're packed in various ways that turn out to be somewhat inscrutable. When Log4Shell came out, most of the tools couldn't detect it and had to be patched in order to detect it, because it turned out there were six different weird ways you can build jars, and the tools could generally detect one or two of them. If you're compiling C++ code and you statically link things, it's really hard to see. One of the areas we're really working on is trying to get to the point where we generate SBOMs at build time using BuildKit, because at build time, you actually have the information. We've been doing some experiments around things like Nix, which has build descriptions that are guaranteed to be complete, by construction. You can actually generate really nice SBOMs from them. I mean, we've had this term of executable SBOMs, where the SBOM gives you enough information to construct the entire thing. You could just pull the bits and run it effectively from the SBOM, in principle.
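The build-time idea can be illustrated with a toy: instead of scanning a finished artifact, the "build" below records every input it consumes as it goes, so the resulting SBOM is complete by construction. The package names, versions, and manifest shape are invented for this sketch; BuildKit's real provenance and SBOM attestation formats differ.

```python
import hashlib
import json

def build(inputs: dict) -> tuple:
    """'Build' an artifact by concatenating inputs, recording an SBOM as we go.

    inputs maps a (hypothetical) package name to (version, bytes).
    Because every input passes through this function, nothing can end up in
    the artifact without appearing in the SBOM.
    """
    sbom = {"inputs": [], "artifacts": []}
    blob = b""
    for name, (version, data) in sorted(inputs.items()):
        blob += data  # the "compile/link" step
        sbom["inputs"].append({
            "name": name,
            "version": version,
            "sha256": hashlib.sha256(data).hexdigest(),
        })
    sbom["artifacts"].append({"sha256": hashlib.sha256(blob).hexdigest()})
    return blob, sbom

artifact, sbom = build({
    "libfoo": ("1.2.3", b"libfoo bytes"),
    "app": ("0.1.0", b"app bytes"),
})
print(json.dumps(sbom, indent=2))
```

In the same spirit as the "executable SBOM" idea, this manifest pins enough (names, versions, digests) that a consumer could re-fetch the inputs and reproduce the artifact, rather than guessing at contents after the fact.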
There's some experiments reaching toward that. I think that constructing things with SBOMs in mind, rather than trying to scan and understand them after the fact, is really where we're trying to move. It's early work. But the way we built Docker Scout is designed with that in mind. I see scanning and examining images as a polyfill for really doing it properly at build. And so, we're looking at building all that stuff into BuildKit and thinking about how we make it more accurate at build time and what the ecosystem needs to look like to make that happen. I mean, it's definitely difficult, but it's a really important area. [0:24:52] JMC: Yeah, that would indeed help build more trust and transparency into, in particular, the Docker supply chain, the bit that is built with Docker. But, I guess, unless everyone adopts it, it wouldn't help across the board. Well, again, I think, yeah, what we were talking about before, the whole story about immutability, it makes supply chain easier. It's like, there's a set of properties of software delivery that makes things work better from a supply chain point of view. If you care, you probably want to do that. [0:25:25] JC: That's absolutely true. That is a fantastic feature of containerizing your applications. Yeah, you're absolutely right. [0:25:32] JMC: DockerCon happened when? Two months ago? A month ago, maybe? [0:25:37] JC: I think it was a month ago, yes. [0:25:38] JMC: You had a blast over there, announcing what I'm going to call your new AI offering. You announced plenty of things, and there were plenty of customers there, IKEA and so forth, talking about the way they ship code with Docker. Within the realm of announcements, one that caught my attention was this new AI offering, as I'm calling it, which is basically a partnership with Neo4j, Ollama, and LangChain. I wonder, how did you go about this wrapping, this partnership, and what was the reasoning behind it? Why these partners?
[0:26:17] JC: I mean, it was really that people are excited by gen AI and want to know how to get started and how to build things. Lots of new people are coming who've never built something in this ecosystem, because it's new and exciting, especially with the Facebook Llama local models, which we particularly wanted to support, because people are really interested in, what can I run locally on my machine to test this stuff out? How does it work, and what do I want to build? The aim was really to make something that was just docker compose up, where I can run it. That was our aim. I mean, we chose the partners because they were the pieces we felt were widely used. They were existing people we knew and worked with. It was a really nice community to get working together. [0:27:12] JMC: Basically, what do you get with that command? [0:27:14] JC: You basically get a RAG stack. That's retrieval-augmented generation. Basically, you can feed in your data. One of our examples was just, feed in Stack Overflow questions about something, and you can see that you get better answers about things than the LLM knows on its own, because these are things that are not in its data set, and you can get it to answer specific questions around them. We have examples where you can, for instance, put in a PDF and ask questions about it, and things like that. [0:27:45] JMC: Nice. That's quite nice. Yeah. I do like maintaining the Docker experience of just building this thing really easily, and then fostering - leaving it to the developer to be creative with whatever he or she wants to build. Yeah, the demos were amazing. I mean, it's brilliant and easy to spin this thing up. The other thing that was AI related was Docker AI, which in this case is an AI companion, I would argue, an AI assistant, that is meant to help you work with Docker, to build stuff with Docker. This has the form of a VS Code extension.
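For readers new to the RAG pattern Justin describes above, the flow is: retrieve relevant pieces of your own data, then prompt the model with them. A deliberately naive Python sketch of just that flow follows. A real stack uses embeddings, a vector store (Neo4j in the partnership above), and an LLM served by Ollama; here retrieval is plain keyword overlap and "generation" stops at assembling the prompt.

```python
def tokens(text: str) -> set:
    """Lowercase word set, with basic punctuation stripped."""
    cleaned = text.lower().replace("?", " ").replace(".", " ").replace(",", " ")
    return set(cleaned.split())

def retrieve(question: str, documents: list, k: int = 2) -> list:
    """Rank documents by word overlap with the question; keep the top k."""
    q_words = tokens(question)
    scored = sorted(documents,
                    key=lambda d: len(q_words & tokens(d)),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str, documents: list) -> str:
    """Assemble the retrieved context and the question into one prompt."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A toy corpus standing in for your own data (Stack Overflow dumps, PDFs, etc.).
docs = [
    "Docker Compose lets you define multi-container applications in YAML.",
    "Llama is a family of open-weight language models.",
    "Stack Overflow is a question and answer site for programmers.",
]
prompt = build_prompt("What is Docker Compose?", docs)
print(prompt)
```

The point of the pattern is the one made in the interview: the model answers from context it was never trained on, because the retrieval step injects your data into the prompt at question time.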
I think the name is extension, plugin, whatever. It has the form of a notebook. I'm intrigued about that decision, because I think I'm going to question it. I don't have the skills to question anything, and it's not the point, but in my own experience, developers are way more familiarized with two ways of interacting with assistants within the realm of an IDE. That would be slash commands, with comments, and requesting things through that. Or lately, more so in a chat format, an open chat format, like WhatsApp, or any other similar interface. You went for a third tangent there. [0:29:00] JC: Well, we do actually have a suggestions interface as well. We actually wanted something that would basically be like a terminal experience, because we wanted to be able to - you could run Docker commands. It could suggest you run Docker commands. It could then look at the output and see what happened, whether there was an error, what the error was, and so on. We needed to have the context of what your project was, so you could see the files. You can see, oh, have you got a Dockerfile? If you haven't got a Dockerfile, maybe the first thing is you want to create a Dockerfile, and things like that. We wanted that context. It's difficult to do that contextual thing in a terminal. If you've seen the Warp terminal - [0:29:44] JMC: No, no, but I agree. [0:29:45] JC: - they've got an AI assistant. But it's actually a nice interface. It's actually similar in a way to the notebook interface, but different. It feels slightly similar. Their AI at the moment doesn't have the context of your files. It can do some kinds of things. I think that one of the interesting things about AI in general is that a lot of what we're going to have to explore is what these interfaces look like for different tasks. I don't think that we know yet at all what they're going to look like.
I think it all depends on what the AI can do and how you're going to work with it. It's like, do you want to have a copilot pair programmer? Or are you going to think of your AI as a team? If you have a team of programmers, you don't sit and pair program with all of them. If you have six of them, you write them a ticket, and then maybe do code review on their pull request, and things like that. Maybe that's more the workflow we'll have with AI. I think there's a lot of different kinds of ways that we could interact with it. We've chosen some of the ones that are closer to how we work now, but that's not necessarily where it'll go in the medium term. [0:31:03] JMC: I haven't tested it, but now that I think about it twice, I've watched the demo. Anyone listening to this can go and watch it on the Docker YouTube channel, the recordings from DockerCon. Yeah, because in my mind, it felt awkward initially, because notebooks were, I think, designed for data science, right? They do have this layout that feels very similar to docs, to well-written docs with examples and stuff like that. It might be a fantastic way to explore Docker documentation, right? Not necessarily Docker documentation, but that feeling that the docs have been brought to you, and the specific chunk of the docs that you're looking for, or you're missing, can be brought with examples, right? It lends itself very well to that display of information, doesn't it? I think. [0:31:54] JC: Yeah, I agree. Because again, we wanted it to be runnable things. Not just copy and paste things. The notebook gives you that flexibility to actually run the thing. Then you can go back and edit it. We're trying to get that feel of it being very interactive. This form factor being an unfamiliar thing is definitely something we're very aware of, and we've spent a lot of time talking to people about this.
I mean, it's also fascinating how notebooks have just been siloed to data scientists. [0:32:30] JMC: They're way more versatile than I thought, right? [0:32:32] JC: Yeah. I mean, I think software engineers find them weird, because there's all sorts of issues, like checking them into version control, which is historically difficult, and things like that. There's this kind of, how do you productionize them? When you're doing research about something, when you're trying to understand a problem, the laboratory notebook model is where they mentally come from, where you write down your experiments and what you did and what you've learnt. I like that. I was very influenced by [inaudible 0:33:05] literate programming book, which you might have read, many, many years ago. That model of writing about code, where the writing is perhaps more important than the actual code, because it's the explanation of it. I think that code bases that are written for readability are really important for understanding. I mean, we're also still trying to understand how AI is going to help us and what context windows are going to be. Again, what we were talking about earlier, about microservices and the amount that you can keep in your head: the amount that the AI can keep in its head might be even less than you, or the same as you. I think those questions are interesting, because those affect how AI will affect the way we code as well. [0:33:57] JMC: To wrap up, I mean, anyone that is interested in what these things look like and more - I mean, you've mentioned Scout. I can't remember if you announced anything, probably the general availability of Scout at DockerCon? [0:34:09] JC: Yes, that's right. Yes. [0:34:09] JMC: Okay. I mean, anyone interested in just messing around with these products, just go to Docker's website, but also to the recordings from DockerCon. You gave enough scoops and revealed enough things.
I'd like to wrap up this conversation with not necessarily what's coming next, because everyone's infusing - everyone's using this word of infusing AI into everything, right? I interviewed yesterday here at KubeCon North America, GitLab CPO, David DeSanto. He was - well, not him personally, but GitLab is infusing AI into many things - it makes a lot of sense to summarize merge requests, find the best code reviewers. Is Docker planning to infuse AI elsewhere? Again, not asking you about the specifics, but what's your mind on that? [0:34:55] JC: Yeah. I think that there's definitely an infusion thing that you can do. There's also a question of - [0:35:01] JMC: You can also sprinkle and - [0:35:02] JC: You can sprinkle. Yeah. There's also a question of - I think the question is, infusion seems the obvious route. But sometimes, something like AI is actually going to change a lot. Infusion into existing things might be the wrong answer, and the right answer might be to do different things. I think that really trying to understand that balance of what's going to change, by how much, and which things are we going to stop doing? I mean, maybe, for example, if the AI writes code for us, do we need an editor anymore? If it does all the editing, do we need an editor? Or do we just need somewhere to read the code? It's like, so yes, you can infuse it into the editor. That's been very successful as an incremental move, but maybe that's not the long-term change. In the long term, maybe we don't really care about editors anymore and we do our work somewhere else. Those issues are quite important to think about, because if you're spending too much time infusing in a place that's irrelevant, then you're wasting your time. [0:36:08] JMC: By the way, we mentioned the first wave of DevOps, which happened when Docker was founded, around that time, and many other things were happening. There's a second one, according to Adam. Let's see how that pans out.
Although I was not part of that movement in a way - I was in the software industry, a less fast-moving bit of it, but the same industry after all. One thing that feels overwhelming, to me at least, is the pace of innovation with AI. I mean, I know there's a huge difference between marketing announcements and what really is happening, but it still feels that what really is happening is incredibly fast. Is it comparable to what happened, again, when Docker revolutionized software delivery? Or is this thing happening at a really, really fast pace, even faster than that one? Do you feel particularly - [0:36:56] JC: I mean, I think that things do feel - things are moving fast, yes. I mean, there's a nice graph somewhere of the fastest adopted consumer products and their time to a hundred million users, or something. [0:37:11] JMC: ChatGPT, by OpenAI, beat them all. [0:37:14] JC: Yeah, absolutely. Like, ChatGPT was the fastest-growing consumer product ever. If you compare it to the adoption of the mobile phone, or Facebook, or the others, it's - I mean, part of that is that it's quite easy. It's easy to consume, at least in some sense. [0:37:35] JMC: I'm thinking of the developer angle, for developers, because they - [0:37:37] JC: Yeah. I think, yeah, that was ChatGPT in general. Developers have adopted AI pretty rapidly. I mean, developers are sometimes quite conservative about tools, but Copilot saw extremely fast adoption. [0:37:54] JMC: Well, today, by the way, GitHub Universe is happening, today, literally, and tomorrow. The CEO of GitHub just announced that they are rebranding GitHub itself as an AI company, in a way. They are doing Copilot for everything, for GitHub itself, too. Not only features within GitHub, but GitHub itself is going to be a copilot. I'm not sure if I got that right - it's incredible. [0:38:19] JC: Yes. Developer adoption has been really rapid.
I think part of the reason is, as well, that software has this ecosystem where we can actually use tools that are not perfectly accurate more effectively, because we have testing, for example. If you're getting ChatGPT to generate your legal opinions, it's somewhat - you don't have a way of testing if they're correct. Whereas, if you get it to generate software, you can ask, right, does it pass a test? There's actually a feedback loop that controls for its potential limitations more easily. I think that thinking about the controls we've built into the software process and how you would - you can really potentially build a lot of reinforcement around making sure it's working well in software, in a way that you can't necessarily in other domains. I think it's actually really promising. Getting your AI to do TDD, I think, is actually a sane strategy. I mean, thinking about it, why would you ask your AI to generate untyped code when you could use strongly typed code? It's like, it doesn't care. It doesn't have a learning path to it. You can make it do all the difficult things, so that you get more assurance about the quality of its output. I think that's a really interesting thing that comes out of the ecosystem that, again, we've built. [0:39:43] JMC: I'd like to finalize this with a bit of legacy news and build on nostalgia, but today, I think, Mozilla announced that they're finally ditching Mercurial. Which is funny, because we were talking at the beginning about how only a few companies are able to build their own super strong dev tools. Google being one of them. They're famous for their monorepo and the builds that allow them to build it. Facebook - you mentioned Facebook, or Meta. I think Meta is the only big company right now using Mercurial for their source control and version control. Yeah, this is the end of an era. If they ditch it, no one else - Git will be 100% of the market. [0:40:22] JC: Well, yeah.
I mean, I think that there's very few - I mean, a lot of work has gone into Git, because it's so popular. Like into scaling it and working with people on their use cases. There's also definitely an ecosystem benefit of using the same version control software and things. I think there's some of that in it. I think one of the things about companies building their own developer tools is that they can actually end up way worse off than using off-the-shelf common developer tools. First of all, you have to onboard your developers to these weird tools they don't know about. Secondly, they can get quite ossified, and you end up in a world where, unless you keep investing - because you're building it just for yourself, and you haven't got the community building it with you, you have to do a lot more work. You have to be very confident that your use cases and the value are very strong for you not to use something off-the-shelf. Being in the developer tools business at Docker, it's interesting to watch the developer tooling market and people coming to the realization that, first of all, buying developer tools is a good idea. Having developer tools is a great idea. Developer productivity is something that can be improved with tooling. I think that things like AI-based tooling have actually helped convince some people that, okay, actually, there's a return on this investment. Historically, people spent surprisingly little money on developer tools, versus production tools. I think that is starting to shift, and people realize that developer productivity is really, really, really important. Developers are expensive, and if they can do more, that's great, and if you can get out of their way. But also, if you can give them tools that they already know how to use and they're familiar with and they understand the workflows and things, it does help. I think that if you go to somewhere and they don't use Git, how much value are you really getting from not using Git?
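The feedback loop Justin describes earlier - accepting AI-generated code only when it passes tests - can be sketched minimally. This is an illustrative example, not Docker tooling or any specific product: the `top_k` function stands in for hypothetical AI-generated code, and the checks around it play the role of the TDD-style gate.

```python
from typing import List


def top_k(values: List[int], k: int) -> List[int]:
    """Stand-in for AI-generated code: return the k largest values,
    in descending order. Type hints make the contract explicit, which
    is part of the point - ask the AI for strongly typed code."""
    return sorted(values, reverse=True)[:k]


def accept_generated_code() -> bool:
    """The tests are the feedback loop: the generated implementation
    is only accepted if every check passes."""
    checks = [
        top_k([3, 1, 4, 1, 5], 2) == [5, 4],  # ordinary case
        top_k([], 3) == [],                   # empty input
        top_k([7], 5) == [7],                 # k larger than input
    ]
    return all(checks)


if __name__ == "__main__":
    print("accept" if accept_generated_code() else "reject and regenerate")
```

In a legal opinion there is no equivalent of `accept_generated_code()`; in software, the tests give you a mechanical way to control for the model's limitations.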
[0:42:28] JMC: Honestly, I'm fascinated by the case. I've never interviewed someone from Meta, or Facebook. But, yeah, they are clearly an outlier. As far as I know - Google and many other companies are quite secretive about their own tooling, but I believe they've got the - they might, by the way, be using Git and Mercurial. But as far as I know, they use their own version of Mercurial, so a fork maintained by them, and they do the same for PHP, I believe. They've got their own flavor of PHP. Yeah, they need to onboard their new developers to at least those two things that are completely unique in the world, I guess. [0:43:04] JC: They do publish papers on some of this stuff. [0:43:07] JMC: Yeah, they released Buck2, their build system. [0:43:09] JC: Yes. They do a fair amount of open source, and the people who work on their tools are actually quite - I can probably introduce you to some of them as well. They released a paper on their serverless framework a few weeks ago, which was really interesting. But their use cases, again, are quite different, and the choices they've made are quite unique. It's interesting to compare it to what other people are doing and look at the why and things. [0:43:36] JMC: Look, if Borg inspired Kubernetes, and you and I are here because of Kubernetes, in a sense, who knows - if Meta or any other company releases something that has a very unique use case, but then eventually is open-sourced, then it sprouts a new CNCF project, in a way, and then you - [0:43:56] JC: Quite a few of the CNCF projects have come out of companies doing exactly that. I mean, things like Backstage out of Spotify, and Istio. Actually, no. Not Istio. The project from Lyft. [0:44:09] JMC: Envoy? [0:44:10] JC: Envoy, yeah. It was a Lyft project, and Argo came out of Intuit. The other route is, if you have one of these sets of problems, then bring your software to a larger community, make it open source, and get that community leverage around it to build it.
[0:44:27] JMC: Well, Justin, thanks so much. If anyone is interested in reaching out to you about anything that we've talked about, where can they find you? [0:44:34] JC: They can find me on - I'm still on X, Twitter, still. [0:44:38] JMC: Well, the artist formerly known as Twitter. X. [0:44:42] JC: You can just email me, justin@docker.com. Yeah. Well, LinkedIn. [0:44:47] JMC: Lovely. Thanks so much for being with us, Justin. Take care. [0:44:50] JC: Thank you. [END]