EPISODE 1578 [INTRODUCTION] [0:00:00] ANNOUNCER: If you're a developer, there's a good chance you've experimented with coding assistants like GitHub Copilot. Many developers have even fully integrated these tools into their workflows. One way these tools accelerate development is by auto-completing entire blocks of code. The AI achieves this by having awareness of the surrounding code. It understands context. However, in many cases, the context available to an AI is limited. This restricts the AI's ability to suggest more sweeping changes to a codebase, or even to refactor an entire application. Quinn Slack is the CEO of Sourcegraph. He is now hard at work on the challenge of giving more context to AI, to make it aware of entire codebases, dependencies, error logs, and other data. Quinn joins the show today to talk about what it takes to move beyond code autocomplete, how to develop the next generation of coding AI, and what the future looks like for software engineers and programming languages. This episode is hosted by Josh Goldberg, an independent full-time open-source developer. Josh works on projects in the TypeScript ecosystem, most notably typescript-eslint, the tooling that enables ESLint and Prettier to run on TypeScript code. Josh is also the author of the O'Reilly book Learning TypeScript, a Microsoft MVP for developer technologies, and a live code streamer on Twitch. Find Josh on Bluesky, Mastodon, Twitter, Twitch, YouTube, and .com as JoshuaKGoldberg. [INTERVIEW] [0:01:41] JG: Hey, everyone. I'm Josh Goldberg, Software Engineering Daily host. With me today is the illustrious Quinn Slack, Co-Founder and CEO of Sourcegraph. Quinn, how's it going? [0:01:51] QS: Good. Thanks for having me on. How are you doing, Josh? [0:01:53] JG: Oh, it's my pleasure. I'm very excited about this. We were chatting before.
I've watched your company grow from a browser extension into a major, I believe, unicorn, so it's really exciting to see how you've grown and evolved over time. For those who haven't experienced your introduction yet, could you tell us a little bit about yourself? Who you are and what you've done so far? [0:02:11] QS: Yeah. Well, today, it's all about code search and code AI. My personal journey here, it's been, I guess, several decades now. I've been a coder all my life. I love coding. When I was a kid, I loved that spark of creation you could get, where you could write some code and you could build something that people all around the world would be able to use. For me, when I was much younger, I loved that nobody knew how old I was, that I could create Perl scripts and PHP applications. People, I think, thought that this was some professional. The Internet is this great equalizer, where if you can create code, then you can operate with people all around the world, and they don't need to know where you came from, what your background is. As long as the code is good, that's all that matters. I loved that. Fast forward to college, I was working in some big open-source projects. I was making patches to cURL and GnuTLS, OpenSSL, and other things like that. I got to feel what it was like working in these massive code bases, not these tiny Perl scripts that I was writing before, but massive code bases. Naturally, I learned so much from reading that code, and I would also set up these code search tools to help me understand massive code bases, like the Chromium code base, one of the biggest code bases in the world. That's when I got my first taste of working in real professional software development. It was very different. Again, these code search tools helped a lot. After college, I was working at a company where we were working inside of two really big banks. Like the Chromium code base, they had tons and tons of code.
Unlike the Chromium code base, there was no code search set up. My co-founder at Sourcegraph, Beyang Liu, and I were working together there, and in lieu of having code search, we had to go and set up a bunch of meetings. We had to ask people, "How does this code work? If we change this, what's going to break?" We were sometimes emailed zip files of code, and it was just a mess. We started to realize that most of the world doesn't write code in environments where you have code search and a tool to help you understand the code base. Most of the world was writing code in an environment where you had no idea what the other code was, where you had no way to understand it, to ask and answer all these questions that we had just taken for granted. We said, we've got to fix that. Because in our vision for software, it needed to be way easier to build. We needed to move the industry from this artisan cottage industry, where all coding is this manual rote work that's really painful, to something where developers could operate at a higher level. We're not the first ones to think this way. That's how so much progress in programming has worked. That's how we went from vacuum tubes to punch cards to writing code, to writing higher-level code and having a compiler translate it. We viewed this as the next step. To take that problem that we saw and give a name to it, we called it Big Code. It's this idea that the amount of code was starting to grow massively. This was around 2011 to 2013, when you saw GitHub really starting to take off, and open source and library dependencies really started to take off. It was when people started to make those first memes of npm install installing thousands of packages. Coding was changing. Where in the past, you could have a shrink-wrapped application that might only use a few libraries, now even a medium-sized application would use tons and tons of library dependencies. The amount of code you had to contend with was getting much, much greater.
It wasn't just those really big companies, or the really big code bases like Chromium, that had a ton of code. Everyone was starting to feel that problem. Code search was not just a solution for these extreme cases. It was going to be a solution for a lot of projects. That's how we started Sourcegraph. We wanted to get code search to every dev. We built code search that now tons and tons of devs are using. We got awesome customers. We got four to five of the FAANG companies. We got four of the top 10 US banks. We got companies like Uber and Dropbox and Plaid and Lyft and Databricks, and all these devs using code search. That brings us to today, where coding again is changing very rapidly, and it's because of code AI. Right now, we've obviously seen AI advance so much in the last year in terms of what's possible. We've been keeping a close eye on that, because it's causing a bunch of new problems with code, but it also holds the promise of a lot of new solutions. The problem it's causing, of course, is that it's making this big code problem even worse, because AI is writing so much code now, and much of it is not very good code. In the future, AI will be writing 99.9% of the code. How will humans contend with that amount of code? Now, even a button in your application might have as much code as the Chromium code base did. But AI also has the potential to do a much better job than humans of writing code. It's that opportunity to be the next way that humans get higher level; just as compilers did, just as code search did, now code AI can do that. That gets us really excited. It's an awesome time for anyone who's a software developer. I think the task of coding has changed more in the last year than in any other year that I can remember, and we're really just getting started with code AI. That's obviously a huge focus for us at Sourcegraph.
How do we take this deep understanding of the code that we built into code search and start feeding that into AI, so that instead of going to feeble human brains, it can start to power AI to write and maintain much better code? That's what we're building as well. It's code search and Cody, which is our code AI product. [0:08:02] JG: There are two thrusts that I got from there. One is the current code search area that Sourcegraph has built its bread and butter on so far. Then there's the long-term play of taking that code search and feeding it into the AIs. In the interim, for the feeble human brains that we are coding with now, how do you see the combination of code search and AI playing out? [0:08:22] QS: Well, code AI is going to be very gradual. Even if all the technology was there for AI to write code in the snap of a finger, humans would not be ready for that. But that technology is not even ready. We see it progressing along some code AI levels. We've actually got an idea that we're going to share with the community around, what are the levels of code AI? How do we see it progressing? Like how, for autonomous vehicles, there's an internationally accepted set of levels of autonomous driving, which makes it so that when an automaker says you can stop paying attention to the road for 60 seconds, that's not just a marketing claim by an automaker; there's actually a lot of compliance and legal and regulatory and safety work that goes into being able to make such a claim and have it accepted by all the various jurisdictions. We want to do some of the same for code AI, so that in this incredibly exciting, fast-moving area, there are some actual definitions. We've seen some claims of people saying, "Hey, there's this new code AI agent. You no longer need any junior engineers." None of that's actually true in practice. The stage that many people are at today in code AI is autocomplete. That's something where generally, you're using one or two files as context.
It doesn't require a deep understanding of code. You'll get the next line, or maybe two lines, or a few lines of code completed. But where code search and this deep understanding of code really begins to be valuable is when you have the AI writing more code at once. If the AI is going to be writing tests for you, then it benefits greatly from knowing, what are all the other tests that you've written? What are the libraries you use for testing? What are the conventions you use? Also, how do we need to test this function? What are all of the calls to this function in your entire code base? These are some of the ways in which we're taking this understanding of code that we've built and feeding that to the AI to make it better at writing code for you. Because code completion, write the next line or two, is already hitting a ceiling in terms of impact on dev productivity, and we think AI can be doing so much more. [0:10:32] JG: You bring up a really interesting parallel that I want to home in on. That was self-driving cars. Much of the marketing in the industry has been dominated by the very big, splashy entries of, "Oh, yeah. Soon, we'll have you self-driving from California to New York completely autonomously." Now, as you've described, we're going much more iteratively. Car makers are increasing, say, the small amounts of time that we're able to let the car drive on the road, in slightly less and less stable circumstances. What do you see as the next steps? Say, we have autocomplete and then tests. Do you have a progression in mind for how AI might advance there? [0:11:04] QS: Yeah. After code completion, it's code creation and maintenance. This is still under the umbrella of human-initiated, where the human is asking the AI specifically to do something and then reviewing the results. That has a huge impact. But ultimately, it's bottlenecked on the human.
Where it starts to get really interesting is when the AI is starting to automate more of the process, when the AI is given more of a higher-level goal, and it goes and pursues that through many invocations of an underlying LLM, for example, with an optimization function to know, out of a thousand invocations and responses from the LLM, which one is the best. We see at least probably two distinct levels there. One where the human is still supervising, still looking at the output, at the overall code, like a code reviewer. Not every single line, but reviewing it at a high level. Think of that as the stage where everyone has a million AI software engineering interns. Then there's a stage where the human is really just looking at the end results. If you're the CEO of a company, you don't review the senior engineers' code, but you do make sure that it's hitting the right intermediate metrics, that it's increasing the retention rate here, for example. That's the second stage, where it's like, everyone has a million senior software engineers in AI. After that, then this is when we get to the singularity, and it's a little bit hard to know exactly what will happen, but we do see a world where AI is autonomous, where it's pursuing its own goals that are not necessarily aligned with human goals. We hope that they are good goals, but it's certainly something I know that a lot of people have raised concern about. With these levels of code AI, we're not trying to say what we think should happen. We're trying to say what we think will happen based on how we see this technology evolving. [0:13:01] JG: What is the difference then between what you think should happen and what you think will happen? Is there anything we can do to reduce that difference? [0:13:08] QS: I think that's a really interesting topic.
I think the most important thing that we can do here is to make it so people know how code AI is progressing, because it is the most significant technology of the next, I don't know, 30 years; it's going to create the rest of the technology out there. I can tell you for sure that humans will not adapt and react to it well if nobody knows what the hell is going on with it, if everyone is confused, if it's so hard to separate hype-ridden claims from substance. As long as we have a clear and consistent view of how we are progressing along code AI, I mean we as in the entire world, and what it is truly capable of, then I think we'll be able to make the best decisions about it. We want to do this in a transparent way. We don't want opacity here. We don't want marketing claims from some vendors eroding credibility, so that there's a boy-who-cried-wolf situation. [0:14:06] JG: That transparency has been a key part of the Sourcegraph documentation, for those among the viewers who haven't dove into your guidebooks. You folks have an open guide to how you've been marketing product areas such as Cody. One of the big tenets that I was pleased to see was, don't oversell it at first. Undersell it and let it exceed expectations. Can you talk a little bit about how you're so far trying to make sure people don't get into these wild and wacky misunderstandings about your AI? [0:14:33] QS: Yeah. Just in the last almost year now, since ChatGPT came out, there have been many ups and downs in people's perception of AI and the rate of progress. It feels like we've already been through 10 hype cycles since then. I am a developer. I am naturally skeptical of any very hype-ridden claims. There have been plenty of hype cycles over the last 10, 20 years that I've ridden through. Here, LLMs truly are a new primitive that any application can use, and there is so much excitement there.
The last thing I want is for some people to make hyped-up claims and then have that remove the credibility from everyone else who's building truly awesome things on this. The reality is, everything always takes longer than you think to build. Again, to that self-driving car analogy, we've been hearing about self-driving cars for, I don't know, 15 years now or more. There are some people that are so jaded at this point that they think, "Oh, well, I've been hearing this, and so it's never going to happen." Yet, in San Francisco, you now have several companies that are providing fully autonomous taxi rides. I have two cars. I don't have a Tesla. I think Teslas are great, but I have two cars that are non-Tesla that have great driving automation. The tech is real. It just takes longer than we expect. That is, I think, the right way to talk about code AI. There are so many problems with it. It is very flawed. It is imperfect. It is not going to replace an engineer today, but it's going to be an incredible accelerant to an engineer. Now, you could also say, there's no human engineer that always writes 100% correct code. Any company that starts to evaluate code AI by printing out a hundred lines of code that it writes and then grading it like a teacher is going to be misguided, because if you did that to a human, they would fail as well. You've got to be really transparent with what it can and can't do. I know from having been on both sides that the best thing is when somebody who's building software is really open about the limitations. If they're not, then you know that they're just lying about it. It's this nice case where it's both completely true and it's the most effective way to get people comfortable with adopting a new product. [0:16:40] JG: Let's take a little bit of a step back, because although Sourcegraph is working on AI and Cody and such now, you've had quite a journey as a company.
Do you want to talk a little bit about how you went from big code and code search towards AI from a tech perspective? How one built into the other? [0:16:55] QS: Yeah. Code search is a product that we think every developer will be using and should be using, and devs love it. A lot of devs say, well, I don't know why I would search. I don't know when I would need to look across a lot of code. What we found is that we could win that dev over if someone else in their company brought in code search, because it was so easy for them to just go and try a few searches over their code. Sometimes, even if you locked me in an elevator with a given dev, it would be hard for me to convince them to use it; they'd have those questions, and they would just have to try it to actually see the value. With code AI, the hype has been helpful in that people do have the sense that it's the future, and there's a great willingness to try a new tool, which developers are generally pretty reluctant to do. We found that when a lot of devs would try a code AI tool, like Copilot, their complaint would be, it doesn't actually write very good code. That was the state, say, last December. That was where there was clearly a lot of promise here, but people were still trying to figure out, how do we make it so code AI actually writes good code? We did this experiment. It was pretty easy to go and use ChatGPT, and you can try two things. First, you ask it a question about your code base, you ask it to write some code for you. A lot of times, it would say something like, "I don't know your code base. I can't write that code." Or it would hallucinate and give you something that maybe looks correct, but would not actually compile, because it doesn't use your APIs. That's one. The second thing is, ask it the same question, but in that little chat box in ChatGPT, paste in a bunch of relevant code files. It turns out, if you do that, then ChatGPT gives you pretty darn good answers.
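The manual experiment Quinn describes here, pasting relevant files above your question, is the essence of retrieval-augmented generation. A minimal sketch of automating it in Python, using a naive keyword-overlap retriever; all function names are illustrative, not Cody's actual implementation:

```python
import os
import re

def keyword_score(question: str, text: str) -> int:
    """Count distinct keywords from the question that appear in the text."""
    words = {w.lower() for w in re.findall(r"[A-Za-z_]{3,}", question)}
    text_lower = text.lower()
    return sum(1 for w in words if w in text_lower)

def build_prompt(question: str, repo_dir: str, top_k: int = 3) -> str:
    """Automate the 'paste relevant files above your question' trick:
    rank repo files by keyword overlap and prepend the best matches."""
    scored = []
    for root, _dirs, files in os.walk(repo_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (OSError, UnicodeDecodeError):
                continue  # skip unreadable or binary files
            scored.append((keyword_score(question, text), path, text))
    scored.sort(key=lambda item: item[0], reverse=True)
    context = "\n\n".join(
        f"# File: {path}\n{text}" for _score, path, text in scored[:top_k]
    )
    return f"{context}\n\nQuestion: {question}"
```

A real retriever would re-rank with embeddings or the code's call graph, but even this keyword pass illustrates why having a search engine matters for code AI.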
We wanted to push code AI along by essentially automating that process, to simplify code AI into the dumbest thing possible. It's just automating the copying and pasting of relevant files from your code base right above your question in ChatGPT. It turns out, that approach can get you really far. There's so much more you can do to select just the right relevant files, just the right parts of them, to know how to re-rank, and so on. At a high level, that's exactly what Cody does. If you're thinking, well, finding the relevant files, that sounds like search, you're exactly right. That is search. It turns out that anyone who had a search engine was really well-positioned to make a really great AI product. Now, the technique that I talked about has been called retrieval-augmented generation, or RAG. There's a lot more talk of it now. At the base, it requires a search engine. A lot of people looked to embeddings to do this, but embeddings are not the only way that you can build a search engine. There are a lot of other ways; just keyword search is also quite valuable. Then, when there's structure to the data, for example with code, you have the call graph, definitions, references, and that's another great way to find context. For example, if you're asking a question, or trying to write a test for a certain function, it's really valuable to find what are all the calls to that function. That's the data that Sourcegraph's search engine already had. We've been able to bring a lot of these techniques that code search enabled to Cody and do a really good job of retrieval-augmented generation. It's really hard to build up a code search engine from scratch, but we were well-positioned. [0:20:29] JG: This is all referring to static content, such as the code or code comments. Do you see a future in which dynamic content, such as integrations with runtime crashes, or how the app actually runs in production, could feed into this information? [0:20:43] QS: Yeah, absolutely.
This is super interesting to us. Think: if you want the code AI to fix the damn bug in your application that's in production right now, well, if you go and ask ChatGPT, "fix the damn bug in my application," it's going to say, "I don't know what the bug is. I don't know what your code is." You need to give it the logs, the error message, somehow. That's another kind of context that we can bring to the LLM. We want to let it tap the logs and then all kinds of other information. It should be able to tap code reviews, your Jira tickets, your Confluence wiki pages, your Google Docs design docs. It should be able to look at performance and runtime data. If you say, "Hey, can you make this function faster?" it's going to do a lot better if there's some profiling data associated with it. All these other different kinds of information that are out there, hidden in dozens and dozens of different tools that you use, we want to bring all of that to the LLM as needed. That is this really awesome platform that we want to build over the next year and a half, so that Cody is not just able to tap your code and understand your code better, but understand all the stuff going on with your system. If you think about it, that's exactly what a human needs in order to do a good job. If there's a human who joins a company on their very first day, and all they have is access to the code base, and they can't get into Datadog, they can't get into Splunk, they can't get into any of these, they're not going to be able to do their job. The same principle applies to the LLM. That's why we want to integrate more of that into Cody. We want to do so in a way that lets you use the best tools in each of these categories. We don't want you to be locked into the Sourcegraph log application. We don't have a log application. Or the Sourcegraph cloud. We don't have a public cloud, like Azure. There are some other vendors that are looking at their AI product as a way to lock you in.
It's going to work really well if you're, for example, using all of their applications. Ultimately, the great thing about devs is they have a lot of choice, and the dev tool ecosystem is evolving so quickly. We want devs to be able to use the very best tools for the job and still have an AI that understands the information locked in each of those. [0:22:53] JG: Do you worry at all about other companies creating code AI bots called Cody that compete with you directly or indirectly? [0:23:00] QS: There's a lot of people building code AI. We love that. As far as if they're calling it Cody, I think that would just be confusing to people. I know that Google has, I think it's an LLM, I'm not exactly sure what it is, but there's something called Codey. We've not seen that create that much confusion in practice. We do love that there are a lot of people building code AI, hacking on code AI. We get a lot of open-source contributions to Cody. The Cody client is open source. We love it. I mean, there's just an excitement about code AI from so many devs out there that I've never seen for any single dev tool. I mean, look at GitHub, for example; code hosts are something that every dev uses, but only a tiny fraction of devs have ever built their own code host. It's not something you see people talking about, or hacking on at nights and on weekends. With code AI, you do. I think that's just so exciting. It's the big reason why this is moving so fast. [0:23:56] JG: Cody has quite a few controls for code graph generation and the code graph context around how it pulls in from the repositories. Have you seen a lot of concerns or needs from companies around keeping privacy and locking down what the AI is, or isn't, able to access? [0:24:11] QS: Yeah. Companies have all kinds of concerns and questions around code AI and security, both in the security of the tool itself and in the security of the code that it is generating. I think these are all valid.
We've seen a lot of different changes over the last year in how companies are viewing this. I think what I generally still see is, companies do understand this is the future and they cannot just say no to everything. Even if they do say no and they try to ban their devs from using any of these, devs are going to find a way around it. It's much better to adopt it in a managed way. Where we see the industry is, with code AI, there are a lot of tiny projects that have interesting code AI experiments. None of those are really being adopted in the enterprise. What we generally see is, there's a small number of companies that have achieved the scale and trust that do have a code AI product. Sourcegraph is one of them, of course. Microsoft with GitHub Copilot is another. With code search, we already had to earn that trust. Code search is a product that does connect to all the company's code bases and enforces permissions, and so on. We've got a lot of customers that had to vet us quite intensively. We've passed that hurdle in establishing the trust of our customer base. Has Microsoft? Obviously. I mean, they're a big company. That's not a concern we see much. What we do see is a lot of concern around the security of the generated code. How do we know that it's following good security practices? That's, again, something where a deep understanding of the code can help. Organizations want to follow a lot of the generally accepted security practices, the OWASP Top 10, for example; they want to run their security static analyzers, like SonarQube, on the fly. That's the thing that fits in exactly to this universal code AI platform that we're building, that lets you plug in all these different tools to both inform the AI and to vet its output. We definitely see a big opportunity there. It's still very early days.
We're still at the stage where humans are really looking at one, or two, or maybe four lines of code generated at a time, and we can still rely on the human to vet the security of that with a reasonable amount of success. As we have the AI writing more and more lines of code at once, and the humans not reviewing it quite as carefully as perhaps they should in an ideal world, then we're definitely going to need that automated way of vetting the security of code. [0:26:42] JG: Sure. That's very true. In the meantime, you touched earlier on the hallucinations, or lies as others may or may not call them, which are a big part of AIs now, where AIs will often answer very confidently and very incorrectly. Do you have any thoughts or suggestions on how Cody, or other AIs, might improve in that regard, to stop giving such confident and incorrect answers? [0:27:03] QS: Yeah. You handle it on the input side and on the output side. On the input side, grounding it with better context, not including irrelevant parts of your code. That's going to help it do better. In that example I gave, where you paste the relevant code into ChatGPT, it's going to do a better job of using your own APIs, instead of stuff that it makes up. Even there, you still have no guarantee. What you really need is on the output side: when it outputs code, ideally, you want to vet that code. The simplest way you can vet it is, is it syntactically correct? Then, does it type check? Does it compile? Do you have a fast unit test suite that you could run in a sandbox to see if it's correct? Then, does it pass the integration tests? Does it pass the end-to-end tests? Then in the future, I mean, you can think, if you ask the AI to write a new feature for your e-commerce website, it should deploy that, run an A/B test, have a lot of humans try it out, or have 100,000 simulated AI bots try it out to see, does that actually increase revenue?
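The vetting ladder Quinn lays out, syntax, then compilation, then tests in a sandbox, can be sketched for a generated Python snippet. This is an illustration under simple assumptions, not how Cody actually vets output, and the function name is made up:

```python
import ast
import os
import subprocess
import sys
import tempfile

def vet_generated_code(code: str, test_code: str) -> str:
    """Walk the vetting ladder for a generated Python snippet:
    1) is it syntactically valid?  2) does it byte-compile?
    3) does it pass a fast unit test run in a subprocess?
    Returns the first rung it fails, or 'ok'."""
    # Rung 1: syntax.
    try:
        ast.parse(code)
    except SyntaxError:
        return "syntax error"
    # Rung 2: byte-compilation (catches e.g. 'return' at module top level,
    # which ast.parse accepts but compile rejects).
    try:
        compile(code, "<generated>", "exec")
    except SyntaxError:
        return "compile error"
    # Rung 3: run the snippet plus its unit test, sandboxed in a subprocess.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=30)
    except subprocess.TimeoutExpired:
        return "timeout"
    finally:
        os.unlink(path)
    return "ok" if result.returncode == 0 else "test failure"
```

In a statically typed language, rung 2 would be the compiler's type check, which is Quinn's later point about strict compilers making the fitness function cheaper to evaluate.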
All this comes down to, is there a really fast way that AI can evaluate the code that it wrote for fitness and correctness? That is the real solution. That's what we are working toward, and that, again, requires integration with so many different tools that you use in order to type check, to run tests, run integration tests, end-to-end tests, deploy, simulate, and so on. There's a lot of work there to be done. But once we can do that, then we can have the AI running in the background, and we can kick back and enjoy our lives a lot more. [0:28:40] JG: Do you think that code bases that use languages such as Rust, which are a little more difficult for humans to write, and a little more type-checked or type-safe, fare better in an AI world than more dynamic, fluid languages, like non-typed Python, or non-typed Ruby? [0:28:55] QS: That's a really good question. There are a lot of ways in which that could be true, or could not be true. I think the answer is, we don't know yet. There's certainly a lot more Python code out there. When we look at Python code, we see, oh, there are no types. But when the AI looks at Python code, it's probably able to do a pretty good job of type inference. It's unclear to us how much the lack of explicit typing in Python is actually hampering the AI. Now, when it comes to building that fitness or optimization function, it would be much easier to have that type check pass in a language that has a strict compiler. In that sense, it does make it easier to have a really fast and simple fitness function that cuts out a lot of code that doesn't work. I think the reality is we probably won't know that for at least a year, because we've got to have, and by we, I mean anyone, something out there that has that fitness function, and let it sit for a year, let this amazing community of open-source code AI builders build on that and build new tools, create new synthetic data sets, and so on.
I think that's one of these really exciting open questions. Then you start to ask, well, what programming languages will be better in a world where code AI is a thing? Some people think that it's going to be very bifurcated, where humans will be just specifying plain-English descriptions and then AI will be directly generating assembly code. I don't think so. I do think that you'll probably see a pause in the creation and adoption of brand-new programming languages, where we wait for things to settle and we see what language is best for programs where AI is doing most of the implementation, but humans do need to understand some of the APIs at a high level. I think you could see a language that has a first-class notion of, here's what the humans should review, and then here's the implementation that humans should almost never need to review. I don't know exactly what that looks like, but again, it's the thing that just gets me so excited about this whole space. It makes me wish I could just take 10 years off and do a bunch of research in this space while everything would stop. Of course, that's not the way it works, because things are moving fast here. [0:31:03] JG: Well, this is a good time, actually, to bring it into the more personal realms. Quinn, you've been at Sourcegraph for over a decade, speaking of 10 years, but you have quite a few different things you've done in the past. You've done research back when you were at Stanford, you're on the board of Hack Club, and you're, of course, at Sourcegraph. Do you ever get bored? Do you ever feel that restlessness, "I want to go do something wild and wacky and completely new"? [0:31:24] QS: Well, the beauty of being CEO and co-founder of Sourcegraph is we're all about code. Pretty much anything that I hack on is somehow related to work. It helps me understand code AI, or coding, more. Just this weekend, I spent a bunch of time helping us migrate from Webpack to esbuild in our code base and cleaning up some tech debt.
This stuff I just love. When I do things like that, that's what most dev work consists of. I don't want to be a CEO who's just doing the really cool, exciting stuff, because that's not going to give me a good sense of what it's actually like to write code in 2023 with all the products that are available today. I want to be doing the work that's representative of what most devs are doing, so that we can build products that best address those kinds of pains. I think a lot of your audience will understand, but I love when I'm just refactoring tech debt, cleaning up code. Nothing makes me happier than deleting a bunch of messy code. Now, I don't really get restless in a way where it's like, "Oh, I wonder what else is possible?" Because I view what we're doing, building code AI, as the most important thing in the world and the most exciting thing in software development. You can look at all the open-source hackers that are working on code AI. I think there's a lot of people out there that would agree with that. I feel really lucky to be doing what we're doing right now.

[0:32:42] JG: For a second, I bristled at the suggestion that converting from Webpack to esbuild was anything but a beautiful leisure activity for fun. I'm glad you pulled that back towards, like, an aside at the end.

[0:32:54] QS: Yeah, I would say, we also want to be friendly toward all different build systems, so Webpack's done a lot of great work. Very excited about Bun, and I was using Bun on a small project. You hit enter at the command line and you're like, "How's it done already?" I'm really excited to dig more into that, too. We're also using Bazel's service and Bazel build files for the first time and, yeah, there's so much cool stuff out there, and yes, again, this is the show for people who think that stuff is really fun. I'm right there with you.

[0:33:24] JG: Are there any projects you started recently that you thought, "You know, this might be fun for someone, but not so much for me"?
[0:33:30] QS: I tried building one of those AI code agents using Cody. One of those things where you could say at a high level, on the command line, what is the change you want, and then it would go and make the diff in however many files. I've seen a lot of those on Twitter. Some of those seem really cool, and I think that's where we will get to. That's stage three in the code AI levels, when you can start to treat code AI as a million interns. I tried a bunch of things out there, and I tried building our own using Cody, and I just don't think that those work well enough yet. Even to make little screencasts to share with our team, I had to run it 50 different times and cherry-pick an example, and I was very transparent with the team that I had to do that. That was a project that still lives in a branch, but has not quite been shipped yet. That's the tension, where that is the thing that gets so much hype. Where developers are is, they just want the next few lines of code that AI writes to be good, to type check, to compile. They want it to write a good unit test for them. If it could do that, then that's great. They're not ready to trust it to make a bunch of arbitrary changes. Yeah, that just doesn't work well enough, except in a few very limited cases.

[0:34:42] JG: Let's talk about the new developers. For those who haven't played with it, Hack Club is – and well, why would I give the explanation? Can you tell us what Hack Club is?

[0:34:50] QS: Hack Club is a worldwide community of teenagers who are learning to create things, mainly with code. I love Hack Club, because when I was a kid, when I was coding, when I was in middle school and high school, I would go home and go to my room and code all by myself. Yes, I met some people on the internet, but I had never even said the word Perl or PHP. It was embarrassing for me to even say that. Was I even pronouncing it right? I didn't know anyone in real life who coded.
Yet, if you're in high school and you play in a band, well, you've got a built-in group of people to do that with. If you do improv, or sports, that's all in person. Hack Club started in a world where we were bringing after-school coding clubs to high schools all around the world, and students just built the most amazing things. When COVID happened, of course, so much of that community also then went online, and Hack Club is now running hackathons that bring together students from all around the world to get together in person for a day, or a few days, and build amazing things. It is so amazing what students can create. They can create websites that are way better than any company website that I've seen. They can create games that you can tell someone so deeply cares about, and they're so welcoming to each other, and it's the community that I wish existed when I was learning to code. I think it's exactly the thing that's going to make it so that everyone in the world codes, which ultimately is what I want to see happen.

[0:36:20] JG: Teenagers can sometimes be susceptible to hype, as are we all. It can be worrisome for someone who's considering a career in tech that, "Hey, all this AI stuff is coming out and the CEO of that company is saying that we're all going to take a back seat in two to three months." What would you say to someone who has misinterpreted your statements, or just is generally worried about AI removing the role of a software developer in the next decade?

[0:36:44] QS: Well, I didn't say two to three months. I think we are all along for this ride. The future of technology is going to be really interesting. What I am certain of is that understanding the concepts that you gain when you learn how to code, that will be valuable no matter what. Being able to explicitly specify the behavior of what you want. Being able to understand, at what step did it go wrong, and how does this interact in a complex way with all the other systems out there?
That's really what you learn when you code. Today, coding is the way of specifying that to the computer. If that changes, well, so be it. Our brains will adapt. I mean, in the same way that 40 years ago, you were doing punch cards and you were writing assembler. Now, you write high-level languages. I think humans can adapt. There is a point at which I think nobody can predict what the world is going to look like, and that doesn't just pertain to code AI's transformation of the world. I think it's all of AI transforming the world. But certainly, in the meantime, you can be a part of that transformation by learning how to code. I see, actually, it's junior engineers, new engineers, that are adopting code AI in the most exciting and transformative ways. Often, we see it's the senior engineers that are a little bit jaded and doubt that this could actually work. It's the junior engineers that are using this all the time, that are gaining so much from it. Full speed ahead in learning to code. If you're a new engineer, use code AI to its fullest extent. Don't be worried about it robbing you of learning. Use it as a tool as you learn. It can help you advance so much faster. We're all part of this together. I don't know what the future is going to look like exactly. But again, I'm certain that these coding skills will help you push the world forward and be better equipped no matter what happens.

[0:38:28] JG: I remember working at a certain Fortune 500, where many of the developers on the team were not just irritable about, but scared of, using something like an auto-formatter – in the JavaScript world, prettier – on a code base. One shudders to think what opinion those engineers would have had of code AI today.

[0:38:45] QS: Yeah, exactly. Same thing with gofmt in the Go community, where there's one format that everyone uses. I think that is an idea that fought in the marketplace of ideas and definitely won out.
I cannot imagine working in a TypeScript code base without prettier anymore.

[0:39:01] JG: It's a bad time. We're now going to enter the wild and wacky part of the interview, where I ask you a bunch of questions that have no bearing on what we've talked about so far. Are you ready, and does that sound okay to you?

[0:39:11] QS: I am ready.

[0:39:12] JG: Great. We've talked about the future a lot. But on your personal side, you have a blog and you have a book reading list, but I see there hasn't been a book entered there in a little bit. Have you read any particularly good books recently you'd like to share?

[0:39:25] QS: Yeah, I've got to go update that. I read all the time. I'm reading The Prize, about the history of oil. It goes from 1850 to, I think, about 2005. Man, that is a crazy, crazy industry. It reminds me a lot of the tech industry and how much it's changed and how much of an influence it had on the world, and also, how much has changed there since 2005.

[0:39:48] JG: There's a great quote out there. I believe it's something like, those who do not watch the History Channel are doomed to repeat the History Channel. Do you see any parallels in particular between the tech industry now and the oil industry back then?

[0:40:00] QS: I think it's too soon to say if this is going to be correct, but in the oil industry, it always felt like, as soon as something became stable, then something else completely changed. I think there's a sense in the tech industry that things are stabilizing, that you could look at ARR as a way to evaluate company performance, that there are a few big tech companies that will be around forever. But then you realize that people thought that about oil companies whose names you've never heard of.

[0:40:32] JG: You've gone on the record saying that Rao's sauce is superior to most other sauces. Can you defend that, or at least explain it?

[0:40:39] QS: Rao's Arrabbiata Sauce is the best tomato sauce ever. I've tried making my own tomato sauce.
I've tried all kinds of other tomato sauce, and I've got three kids, so I can do plenty of blind tests on them. They like Rao's. They don't really like anything else. That's my main fitness function for tomato sauce, and it definitely – Rao's far exceeds all the others. You'd have to interview my kids to get more of their decision criteria.

[0:41:08] JG: My apologies to the Rao's company for mispronouncing their name. I bet, in one year, we'll have your kids on, and we'll talk to them and you about the future of AI and coding.

[0:41:17] QS: Cool.

[0:41:17] JG: Well, that's all I really wanted to ask you. Thanks so much. Were there any other toppings you wanted to bring up in our last couple minutes together?

[0:41:23] QS: No, that's it. Everyone else, happy coding.

[0:41:26] JG: Well, super. Thanks so much again for hanging out, Quinn. This has been excellent, and I'm legitimately very excited to see where Sourcegraph and other AI ventures go over the next few years.

[0:41:35] QS: Yeah. Same here. Thank you.

[END]