EPISODE 1628 [INTRODUCTION] [0:00:01] INTRODUCTION: Vercel provides a cloud platform to rapidly deploy web projects, and they develop the highly successful Next.js framework. The company recently made headlines when they announced v0, which is a generative AI tool to create React code from text prompts. The generated code uses open-source tools like Tailwind CSS and shadcn/ui.  Lee Robinson is the VP of Product at Vercel. He helps lead the product teams and focuses on developer experience on the platform. He joins the show to talk about Vercel, their AI SDK to easily connect front-end code with LLMs, the v0 AI tool and more.  This episode of Software Engineering Daily is hosted by Sean Falconer. Check the show notes for more information on Sean's work and where to find him.  [INTERVIEW] [0:00:58] SF: Lee, welcome to the show.  [0:00:59] LR: Hey, thanks for having me. I'm excited to be here. [0:01:01] SF: Yeah, thanks for being here. I've been to a few Vercel meetups over the past year or so. I've been in the same room as you, but we've never spoken until now, which is kind of funny, because I've worked in the developer experience, developer relations world for quite some time. It's kind of a small world in a lot of ways. I'm surprised we haven't ended up interacting more. But it's great to finally have this interaction now in this format so I get to ask you lots of questions and kind of dive into the details here. [0:01:26] LR: Yeah. And thanks for coming to the events. That's awesome.  [0:01:28] SF: Yeah. Maybe a good place to start for the audience is with some basics. Who are you? What do you do at Vercel? [0:01:36] LR: Yeah. I'm Lee. I am now the VP of Product at Vercel. I help lead our product teams here as well as think about how we make a great experience for developers on our platform. Whether that's going and talking to customers. Talking to developers. 
Working on our documentation and making sure that it's technically accurate and compelling for developers. And just trying to build products that developers love. [0:01:59] SF: Awesome. Yeah. And I think as a Vercel and also Next.js user, I think I feel that from - and I think this is some of the things that we'll talk about today. You've been there around, I think, four years. Kind of had a number of different roles there. Starting in DevRel, then VP of DevEx. And I guess now, VP of Product.  I guess what are some of the biggest changes that you've seen since you started? It seems like Vercel has really sort of blown up at least from my outside perspective over the last couple of years. I'm sure things have changed massively. [0:02:27] LR: Yeah. I was just reflecting on the growth of Next.js over the past four years. And it's been quite a journey. Since I joined, it's grown about a thousand percent in terms of monthly active developers. We now have almost 900,000 monthly active developers, which is pretty exciting to see really some incredible companies choosing to build their web experience with React and with Next.js.  The growth of Next.js has been one part that's been fun to watch, fun to be part of and also just inspiring to see what our customers are building. With Vercel, I think when I joined Vercel, we were a much smaller company. And we were really trying to figure out how to take the early success that we had had and help bring it to more of the largest websites on the web.  And along the way I think we found the right way to think about the value that Vercel provides to the world and maybe how we're a little bit different than other products or other companies that exist. And the key terminology, the key framing here is that we're a front-end cloud.  If you think about AWS, Google Cloud, Azure, these back-end cloud-type solutions, you can go there and purchase a bunch of Legos and you can put them together and build your own solutions. 
And those are really good Legos. They're custom-built for specific purposes. Like a queue, or a database, or some specific need that you have.  But we really noticed there was this whole category of services and problems to be solved around just the front-end specifically that wasn't really getting the amount of love that we thought it should get. We've really focused in on how to create just the best front-end experience and a product specifically for the front-end developer. And that requires both the framework, which is why we created and maintain Next.js. But also, the infrastructure that gets created by using the framework as well too. We have basically two products that we sell at Vercel. We have a platform for developers. A developer experience platform. And then we also have managed infrastructure. You come to Vercel's front-end cloud and you take advantage of deploying any of your favorite frameworks and just having it basically scale automatically with our managed infrastructure.  [0:04:49] SF: I think historically we've kind of paid more attention to the back-end and the infrastructure. I think even in the way that we think about where hard engineering problems exist. But maybe over time, because we can do a lot more on the front-end than we could do even five or 10 years ago. And I think what we're essentially asking our applications to do, our front-end applications to do, is now much more sophisticated than it was a few years ago. Do you think that is partly why this gap existed and where you've been able essentially to carve ownership or sort of claim ownership over the front-end cloud? Because this gap did exist and no one was really focused on sort of servicing the needs of the front-end.  [0:05:30] LR: Yeah. I think that's exactly it. 
When we look back, I know that when I was getting started in my career, the types of websites and web applications that companies were building were much smaller in comparison to what the web looks like today. And largely this was because the tooling at the time was a little bit more difficult to use.  For a lot of developers, they looked at the front-end as this kind of toy thing. The back-end was where the real work happened. And the server and all of their back-end logic, that was where the tough engineering problems were. And what the early team at Vercel, and what Guillermo and what a lot of the first engineers here saw was that there's some really hard problems to be solved in the front-end both in how you write your JavaScript code or how you build your user interfaces, but also how you make the center of your digital storefront or the center of your digital property, your .com, how you make that available for customers everywhere in the world and you make it fast for customers everywhere in the world.  And that last mile of, "Okay, we've got an amazing back-end. Superfast. Our APIs are incredibly fast. They scale. Perfect database architecture. Amazing." But now, how do we actually make a product that customers love and want to use? And that comes down to the front-end, the UI, the user experience, the design. And that's really where we try to meet these folks at.  Because what we found when we talked to a lot of developers and we looked at the market was the front-end developers didn't really feel like they had a company that was sticking up for them. That was building products for them. That was helping solve their needs and essentially allowing them to focus on the product experience, focus on the user experience and let someone else handle all of those bits.  
And especially as we move forward into this kind of AI-first, AI-forward world, these tools enable you to spend so much more time just building product code and not having to think about the connective bits between all these pieces. You're absolutely right that the front-end has - maybe 10 years ago was looked at as maybe less than the back-end. But I think today, a lot of developers are realizing not only the opportunities for optimizing your front-end. Whether that's improving the conversions on your e-commerce site or decreasing page load times, which can have major impacts on your business and on SEO. A lot more developers are realizing that. And to achieve that, they have to become proficient in the front-end. And really, that's where we come in. [0:08:00] SF: Yeah. I would think that also consumer expectation has shifted as well. As a consumer, what I'm expecting from my application is different than it was a few years ago. I'm expecting lightning fast. I get frustrated if it isn't immediately responsive. And actually, what's happening, it's pretty amazing.  If I think back to like when I was in high school on a 56k modem praying for an image to load or something like that to where we are now. Our expectations have shifted. And that's also required a significant amount of technology innovation. And also, the scale of what we're doing on the front-end has changed not only with the number of people that we're serving globally, but also in terms of just the amount of code that you're sort of writing for the front-end. That makes it harder for teams to essentially build and scale what they're doing operationally from an engineering team perspective versus something that's like a much smaller application, essentially, that's serving a much more constrained set of users. 
And I think that's where essentially the market has to respond with innovation and also respond with, I think, really good engineers starting to understand that, "Hey, there's like huge opportunities. There's hard problems to solve in the space. And I should take a look at this," rather than just sort of focusing on the back-end or the downstream services. [0:09:22] LR: Yeah, absolutely.  [0:09:23] SF: You talked a little bit there in your introduction about sort of your focus at Vercel on sort of developer experience. Making sure that's really good. And it's clear, I think, that there's strategic weight put into the value of developer experience from Vercel and in terms of Next.js. But why was that an important investment for Vercel?  There's a lot of go-to-market product strategies. But why was DevEx, at least from my perception, like a P0 at Vercel versus something that I think a lot of companies maybe discount or maybe they look at down the road?  [0:09:57] LR: Yeah. Well, first and foremost, our team are tool builders. They're people who care about the craft. They build, and improve, and sharpen their own tools. And that helps them create better products. We have this cycle of caring about the craft of building a great front-end. Building a great product. Having good tools that enable us to actually do that. And making tools that satisfy that demand for ourselves.  We care about the developer experience on a really fundamental level. Because not only is it helping scratch the itch of us building great tools to help us build better products. But interestingly enough, that's the same thing that our customers want as well too. Especially when we can validate that these tools actually enabled us to build really incredible things as well too.  Developer experience from the start has always been a foundation of how Vercel thinks about the world. How Vercel thinks about building great products. 
And we've had the opportunity to inject that into basically everything that we do both at the framework and Next.js level. Trying to make sure that this is a smooth experience for customers. But then also, the little bits of polish and user experience and developer experience wins that you can have when trying to make a platform that kind of takes away all of the hard parts about getting your website or web application online.  I'll give you one example of how we think about implementing some great developer experience. We have a philosophy that we call framework-defined infrastructure. And if you think about how the majority of front-end teams have been building their applications kind of up until now, let's say, they spend a lot of time taking the framework code that they write and then also writing this intermediate layer of infrastructure as code, which is even an improvement over the past, when there was no infrastructure as code and you were manually building out all of your infrastructure in some settings pages inside of AWS, right?  Then there was the Terraforms and the AWS CDKs. And there were more tools that allowed you to at least check this config into version control and have a review process on it, right? That's helpful. That's definitely an improvement over the previous status quo. But revisiting that experience kind of from first principles: what's the best experience for developers? Well, when I'm trying to build a great product. And I've got this powerful framework. I actually don't want to have to think about the wiring of the bits of, "Okay, I'm rendering my homepage. I'm creating a homepage. How does that actually turn into a file that gets cached on an edge network somewhere? How does that actually turn into compute that is able to serve up a dynamic personalized version of my page?"  
This idea of framework-defined infrastructure is you write code in whatever front-end framework you want, whether it's Next.js or any of the other 30-plus frameworks that we support. And you deploy to Vercel. And the developer experience part here is we took the hard part of figuring out how to map those framework outputs into global infrastructure, into managed infrastructure. And we just built that part for you. We maintain the layer of adapters essentially that convert that code for you. So, you just write your code. You push it up and we'll take care of that part. [0:13:25] SF: How far do you think you can go with this sort of approach of framework-defined infrastructure? Does a company reach a point where this no longer scales and they kind of have to go in and actually have their own infrastructure team to go and sort of turn the right knobs and so forth? Or can this essentially - can I go from the very beginning ideation stage to global scale with massive reach?  [0:13:51] LR: Developer experience can't come at the cost of observability. While framework-defined infrastructure is great and it's awesome that you can build small to large applications with this, that doesn't mean that you should sacrifice being able to understand what the generated outputs of the managed infrastructure are.  Because at the end of the day, you're still responsible for ensuring that your product is online and that your customers are seeing a great experience. Of course, we're going to manage that infrastructure for you. But you want to ensure that we have that relationship where you understand the observability of your system. Whether it's user code that you introduce that could have been a regression, for example. We give you tools kind of to deal with that.  I guess going back to your question, your framing, is there a point at which this framework-defined infrastructure doesn't work? 
And what we've seen is that for hobby projects, for small to medium-sized businesses and even to some larger enterprises, this approach can work really well because you're effectively building what I like to call a majestic monolith.  You write your code maybe in a monorepo, maybe not in a monorepo, where you have this colocation of what traditionally was kind of a separation of your front-end and your back-end connecting code. Because as front-end has evolved, a lot more of what was traditionally like this layer of code in between to connect your front-end and the back-end has actually moved closer towards the front-end. And the back-end is more of the queues, and the databases and the separate parts like that.  In this world, in your majestic monolith that you're deploying to Vercel, you write code like it's a monolith. But then the framework-defined infrastructure takes the outputs and gets to optimize each of the parts individually. It can determine, "Okay, this page, we're not doing any user personalization or customization. Let's put that behind an edge network. Let's add the correct caching headers. Let's just make this thing really fast and put it close to users all over the world."  This page or this component on a page is doing something dynamic. It's looking at the incoming headers. It's looking at the cookies. We're trying to do some kind of personalized banner for users returning to an e-commerce experience. For that, we're going to need to run some compute. So, we're going to have some compute that we can put close to our database and make it really fast to essentially have this personalized experience on our site as well too.  We've talked to a lot of customers that this approach works really well for. But for the largest, largest customers that we work with, they still have completely separate teams of folks who are building their traditional back-end cloud, back-end infrastructure. 
Usually because they also have a mobile application or some other properties that are also using kind of the same shared API, right? Maybe it's a GraphQL API, maybe it's a REST API.  For their front-end, as it's a decoupled composable architecture, for that bit, they're still connecting and talking to this API. They're not just going direct to the database like maybe a smaller project or a self-serve business might do. For them, they still have this security layer and this API layer in between their back-end, their database. [0:17:04] SF: And then how does that sort of mapping or translation happen? Is this more of like a heuristic-based translation that's happening? Or is there some sort of machine learning component to figure out how we actually optimize this based on what someone's trying to do on the front-end?  [0:17:21] LR: Yeah. How the translation happens for framework-defined infrastructure is that we publish a specification that is essentially like an API for frameworks. The specification defines what the output format is. And if code is deployed in this output format, then Vercel knows how to convert that into infrastructure basically.  And the cool part about this specification being public and published is that while we have adapters for the frameworks to convert into this output, you can technically build your own framework or take your custom internal company-rolled framework and convert it into this output through a script. And then it works the same way as the rest of all these other frameworks. It can take advantage of the same cloud infrastructure primitives, the same managed infrastructure, the same workflow on Vercel's developer experience platform. It's all integrated into that same thing. That's how we think about the translation. [0:18:18] SF: Mm-hmm. Okay. Someone can sort of like override some of the default experience and create something that's maybe a little bit more customized for whatever their application needs are.  
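The specification Lee describes is published as Vercel's Build Output API. As a rough sketch (the directory and field names below follow the public spec at the time of this episode, but treat the details as illustrative rather than authoritative), a framework adapter, or a custom script, emits a file structure like this:

```
.vercel/output/
├── config.json              # routing rules, headers, caching behavior
├── static/                  # prerendered pages and assets, served from the edge network
│   └── index.html
└── functions/
    └── api/hello.func/
        ├── .vc-config.json  # e.g. {"runtime": "nodejs18.x", "memory": 1024, "maxDuration": 10}
        └── index.js         # compute for dynamic, personalized responses
```

Any framework, including a custom internal one, that writes its build output in this layout can deploy to the same managed infrastructure, which is what makes the spec an "API for frameworks" rather than something tied to Next.js.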
[0:18:29] LR: Yeah. And at the framework level, we also give developers, specifically in Next.js, the tools to control how their infrastructure behaves. For example, you can control the maximum duration or the memory of the deployed compute for your Vercel function.  [0:18:50] SF: I want to get your thoughts on how AI sort of fits into this world both on like the DevEx side and some of the investments that Vercel is making. With all the development happening in the GenAI space right now improving developer productivity - GitHub Copilot was of course a big one from a little over a year ago - what are your thoughts on how generative AI is going to change what it means to have a great developer experience?  [0:19:17] LR: We started seeing a large increase in the number of developers deploying AI applications to Vercel, really I would say at the beginning of this year, at the beginning of 2023. And when we started to look, a lot of this corresponded with the growth of OpenAI and ChatGPT I think. But also, the amazing number of open-source models that are being created and generated and developers looking for a way to run compute that then talks to these models.  And when we looked and we talked to these developers who are wanting to build with AI, we learned a couple of things. One is that, for a lot of existing workloads, existing applications, there are ways today for developers to sprinkle in a little bit of AI-enhanced tooling to drastically increase their product experience in ways that previously would have been extremely difficult to do.  And this is really bringing this whole new revolution to the front-end especially when you think about natural language interfaces. I think the most common one that we've seen from talking to customers is similar to how ChatGPT got so popular with a chat interface is taking that same idea of talking in natural language to every other part of the front-end. Every other part of your business. 
Maybe that's how can I write natural language and query my database? Maybe that's how can I write natural language and look at all the documentation and support articles on my site and get really good responses for how to debug an issue? Or how to get help rather than having to wait to talk to a human, for example?  And for a lot of these customers, they're coming to Vercel because we're able to help accelerate how fast they can take advantage of bleeding edge technology, the latest technology. It's been interesting. Because as technology waves have happened in the past, there was a lot of innovation and a lot of excitement around crypto.  And we saw a lot of those customers also explore what does this mean for us on Vercel? And we're still seeing some folks doing that as well too. It's a much larger wave I think with AI as well, where customers are seeing more clear opportunities of how using large language models, AI tools, can help make their product better. That's step one. That's what we've seen a lot of people do today.  Now what we're starting to see is a lot of newer customers - startups, or teams that operate inside larger companies - take a look at their product and reimagine what would this product look like designed in an AI-first world? How would we change the front-end? How would we innovate on the front-end to provide a product experience that previously was impossible to do?  And this spans across basically every single industry. Even Vercel's own product. We're thinking about: are there ways, in a world where you have this powerful large language model at your side? Whether it's an open-source model or some larger foundational model. How do you take these and enrich the entire experience?  And when you start to go down that path, the design and the user experience of the front-end looks drastically different than maybe the last generation of websites. These customers are coming to Vercel. 
They're using Vercel's front-end cloud and we're helping them accelerate towards getting ahead of their competitors, building out this AI-first advanced version of their site. This AI-native experience. [0:22:56] SF: Yeah. I mean, I think what you're saying around like chat and sort of changing interfaces to be more sort of natural language to humans completely makes sense as sort of the baseline starting point. Because we've kind of gotten good at training ourselves how to unnaturally interact with technology. If you want to find a restaurant in New York City, you go to Google Search and you type in restaurants in New York City. But I would never come up to you and be like, "Restaurants in New York City." And you'd be like, "Oh, okay. Here's my favorite top 10 list." But I can now actually go to ChatGPT or some other type of LLM-based system and actually ask it a question like I would ask a friend and actually generate a response as if it's my friend to some degree. And then I can ask follow-ups and have an actual like human conversation. I think that's transformative. That's kind of like the baseline.  And I think where another opportunity in the space is - and I'm curious to hear if you're seeing something similar - is around actual adaptive UI. I think especially complicated products. If you think about interacting with AWS's console and the like thousand products that they have in there, you're kind of faced sometimes with this choice of am I in beginner mode or am I in expert mode? But if we are using actual generative AI, we can actually create adaptive UI that kind of learns what the right set of options to provide is.  I think Microsoft, they tried like a very rudimentary version of this in Office years ago with an adaptive ribbon. Didn't really work out. But I could see essentially sort of all UIs becoming a lot more personalized based on how people actually interact with the tool and what their needs are. [0:24:36] LR: Yeah. 
Our vision for the web is a web that's very dynamic, and personalized and fast everywhere. And this has been the journey for the last eight years of Vercel and will be the journey for the next eight years as well too. Just how do we continue making this better, and better, and better, faster, more personalized, more dynamic?  But to step back a little bit, I love what you're talking about around adaptive interfaces. I want to talk a little bit about how we've been thinking about this and what we've been building towards here. I talked about at the beginning of 2023, we started to see a lot of developers wanting to build AI applications on Vercel.  At the same time, what we decided to do, going back to our principles of developer experience, was how can we build a set of tools or a framework that helps developers use AI models, use large language models, use AI tools on Vercel more easily? And we built what we've called the AI SDK. And it essentially simplifies how you can add in a chat-based interface. Or how you can connect to OpenAI, or Anthropic, or Hugging Face, or all of these popular services for using an AI model, in just a few lines of code in your JavaScript, or TypeScript, or React, or Next.js application.  And what we learned from this first experience was, one, that developers really appreciated having an easier way to integrate this technology and just write a few lines of code and connect to their large language model. But we also learned that the transport format was primarily just text-based and it left an opportunity for improvement. You talk to your large language model. You send it some text. It returns back some text.  Well, the great thing about a Google search experience when I search for what's the weather in this location or what restaurants are in New York City is they can give you these rich interactive widgets when you search for plane tickets. 
And you can actually do the whole process in the widget and then click one button and it takes you out to United, or American, or something, for example.  And that experience hasn't really been democratized yet for large language models. The amazing incredible thing about Google search is they've built such a dynamic personalized application with all of these widgets and it's still so fast. There's just no way they could have possibly known about every single possible widget combination that you needed to build. It has to be very personalized based on the user's location. What they've searched. Et cetera.  We're trying to take that idea and democratize it to everyone. And the first step in doing that was actually taking our AI SDK and upgrading it so that it could output actual interactive components or widgets as well too. You ask a large language model, like OpenAI's GPT - you ask it how can I build out a website that looks like Stack Overflow, or it looks like Skechers, or something?  And it would be great if what got spit out were actual components. Whether you're actually trying to build this UI or you're trying to build kind of an interactive widget for maybe answering support tickets, and you want to render what the flights are for that support ticket. And in doing this, we realized, "Huh, there's a really interesting opportunity here to simplify how developers are building UI as well as giving them this underlying primitive of generating components."  This kind of put a fork in the road. On one hand, we started building a product that we've now released, which is called v0. And this product actually uses all of this tech that I just talked about. You go to v0 and you type in. It is an AI-first user experience. There's just a search bar. There's just a box. You type into the box, "I'd like to create a dashboard for my internal application that has a table on the right that shows the customers. 
There's some buttons for how I can edit or modify their invoices. On the left, have a list of items, have a search bar." Whatever product requirements you have. You hit enter and it's going to use this AI SDK. It's going to talk to our model and it's going to stream back. It's going to progressively show the UI components that it generates.  And then when it's done, you can just click a button and copy-paste and use that in your application to quite literally create the v0. To create the first version. That's like one fork in the road. We can talk a little bit more about that. But then second is now that we built this functionality into the AI SDK. As well as some other things like OpenAI's function calling and some of the more advanced things like the Assistants API. It's really interesting to see how developers are taking that and then using it to build these types of experiences that previously were super hard to do.  When you click on the chat widget in the bottom right on Expedia and you want to talk to customer support because you want to change the duration of your trip, you want to modify one part of your hotel. And it can actually spit back out this little widget that shows your reservation. It shows the dates. And you can actually interact with a real UI. That's more like this adaptive UI future that you were talking a little bit about versus always doing everything text-based, right?  [0:29:46] SF: Yeah. I mean, sometimes the best answer to a question is a picture. And so, that's something that we can do in the digital world. We can go beyond just sort of a verbal or textual response. And I mean, there's a reason why people say a picture is worth a thousand words, or a video, or something like that. Or I think like one of the worst experiences you can have is when you ask somebody for directions and the way they describe complicated directions. It's like, "Oh." Then mentioning some reference that you never heard of. You turn right there. 
And so, humans are kind of bad at giving directions. But a map with a drawn-out set of directions is super useful.  If you can essentially go from the sort of crappy human version to actually bridging the gap or using sort of the way like humans naturally interact when that interaction makes sense and is a better experience, then you can leverage that. But if a picture, a widget or whatever is a better experience, then you can sort of optimize for that and create this adaptive personalized experience. [0:30:43] LR: I've been trying to tell my dad this when he explains directions to me, my whole life. I'm from a small town. He's like, "Oh, yeah. You just go down that gravel road. Then you go north. And then after the third house on the right that's got the red barn, then you take a left. And then that's old Bob's place." It's like, "Come on."  [0:31:01] SF: Exactly. It's like too much detail. Just give me the facts. Yeah. Exactly. You mentioned v0. There's this growing AI stack. We have vector databases, foundation models, orchestration tools like LangChain, your AI SDK. Where do your investments kind of sit in this now emerging LLM tech stack?  [0:31:24] LR: Yeah. We've always cared a lot about the actual framework that's enabling the developers, which is why the first bits that you saw us work on were the AI SDK. Similar to how in the front-end framework space, we work on Next.js. The AI SDK is like our investment into that tooling for developers.  On the product side with v0, it's trying to build a product that makes it very easy to get that first version of your UI. But in terms of the entire landscape and how we want to work with the growing number of providers that pop up every single day, there are new open-source models - it seems like basically every week there's a new breakthrough, which is super exciting. We've tried to build the SDK in a way that it can integrate with basically any of these providers.  
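Lee's point about the SDK integrating with "basically any" provider comes down to a small abstraction over token streaming. The TypeScript below is not the real AI SDK API; it's a hypothetical toy sketch of the idea: a provider only has to yield tokens, and the consuming layer (a chat UI, say) renders them progressively while accumulating the full response.

```typescript
// A provider-agnostic interface: any model backend (OpenAI, Anthropic, a
// self-hosted open-source model, etc.) just needs to yield completion tokens.
interface ChatProvider {
  complete(prompt: string): Iterable<string>;
}

// A fake provider standing in for a real model API (hypothetical):
// it simply echoes the prompt back one word at a time.
class EchoProvider implements ChatProvider {
  *complete(prompt: string): Iterable<string> {
    for (const word of prompt.split(' ')) {
      yield word + ' ';
    }
  }
}

// Consume the stream, invoking onToken for each chunk (what a chat UI would
// do to progressively render the response), and return the full text.
function streamToText(
  provider: ChatProvider,
  prompt: string,
  onToken: (token: string) => void,
): string {
  let text = '';
  for (const token of provider.complete(prompt)) {
    onToken(token);
    text += token;
  }
  return text.trimEnd();
}
```

Because the UI layer only depends on the `ChatProvider` interface, swapping one model backend for another is a one-line change, which is roughly the design property that lets an SDK like this stay provider-neutral as new models appear.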
And then, also, as we're continuing to make the product better for V0, as there's new technology, new breakthroughs that are released, we're able to integrate that back into the product.  For example, I don't know if it'll be out by the time this is posted. Maybe it'll be shortly after. But you'll have the ability to upload images to V0. And then rather than typing things, you can just use an image, which is even easier sometimes. And then I guess the last bit here is just Vercel as a whole, as a front-end cloud for all of our customers, how are we helping them accelerate their AI adoption journey?  And the great thing about having this kind of headless composable architecture is that because we're just focusing on the front-end first, the experience first, we can integrate with whichever tool these customers want. Sometimes it's their own kind of private version of an open-source model that they've trained on their own data that they've put behind their private infrastructure behind an API. But they still want to integrate it into their product's UI and front-end experience. And we can still help them accomplish that. [0:33:12] SF: What was the reception when you first teased V0? I think it was originally launched as sort of like a private beta.  [0:33:20] LR: Yeah. The initial reception was really interesting. Because a lot of developers I think struggle with a problem that I've also struggled with, which is it's almost like writer's block for building out your user interface. You kind of start to learn what good looks like after doing front-end development for a while. But going from zero to one is actually harder than it seems sometimes, especially with CSS.  CSS is a very deep, deep subject. There's a lot to learn to be very proficient and masterful with CSS. 
And for some of our best engineers at Vercel, some of the people that I've worked with who are just incredible engineers both on front-end as well as on back-end, some of them still aren't the best CSS experts in making something look really beautiful, really polished.  And the cool thing that we think V0 can help accelerate for them is getting a version of their UI that's actually a good place for them to start aesthetically. It's actually pleasing to look at. It's got the right breakpoints for mobile, which can sometimes take quite a bit of time to do, and it gives them a way to actually visualize what the configuration for putting all these pieces together would be.  The initial reaction I think was really focused on that. And also, how they could then use this technology to speed up their workflows that they do every day. For us internally at Vercel, it's been really useful to use V0 to especially build parts of our tooling internally that aren't as much our main customer-facing products. It's actually a lot of the internal tooling.  Because you're trying to get a very specific need done. And you want a UI that looks good. That's easy to use for all of the internal employees building something. And typically with internal tools, let's throw it in the bucket of like tech debt. When you're trying to make updates to tech debt-type things, to continuous improvement-type things, to like some little configuration to change a support value, you're probably not going to go that extra mile on really getting a great UI. At least for me in my career, most of the internal dashboards or tools that you see like aren't the best-designed things.  [0:35:37] SF: Yeah. You'll be like, "Yeah, this was definitely designed by an engineer." Yeah. It's like minimum scaffolding to kind of hold this thing up. [0:35:43] LR: It's a plain input with no CSS, right? It's just the most simple thing ever, which sometimes that's fine, right? 
But the cool thing about V0 is that you're able to chat with this model that allows you to get a better-looking version of these tools which can have some serious impacts on the productivity of your engineering and your support teams when they're able to more quickly build internal tools and refactor internal tools. And still own all of that code and that experience internally if you need to connect to private APIs or you need to add in some custom logic. It's been a powerful combination for us. [0:36:19] SF: Do you think tools like V0, GitHub Copilot, these other tools that are making it easier and easier for people to kind of put in a prompt and spit out code that you can actually copy and paste and like use? Is that changing what it means to be an engineer and developer in any way? Or maybe it isn't right now. But if we sort of fast forward a year or two, where if we continue at this sort of exponential growth curve in AI, is it going to change what it means to be an engineer?  [0:36:46] LR: Yeah. I think it's already happening. I know for me, AI tooling has accelerated the rise of copy-paste. And copy-paste as a good thing. Not as a bad thing. In maybe traditional software engineering design wisdom, a lot of folks have really stressed the importance of DRY. Don't repeat yourself. Try to create these perfect abstractions.  And in an AI-first world, abstractions are still good. But there's also a lot of value in being a little bit more explicit and making it very easy to copy-paste and distribute code throughout your application. For example, at least from my experience now becoming more of an AI engineer, using more AI tools in my day-to-day workflow, I find myself spending a lot of time in a workflow that looks something like this. I have an idea for an improvement I want to make to my site or a new product that I want to build. I'm kind of pairing. I'm pair programming with V0 and ChatGPT or my favorite large language model. 
I'm able to use V0 to help scaffold out that first version of the UI.  I recently built like an email client and I just kind of typed in the first version of what I wanted this thing to look like. It gave me the code. Copy-pasted it. Okay. Now I'm on to the logic. Okay. Now I need to - when you click on the button, I want to have a form. That form is going to add a new row into my Postgres database. Now I'm pair programming with ChatGPT. Okay. Spit out a SQL query that's going to allow me to do this, but also sort by this and limit to 100. And also, do two joins on these other tables. Does this schema for my database look correct? Okay. Awesome. Just checking. I just want to make sure.  That's what the workflow has kind of looked like for me, which is like having this very, very experienced programmer who's way better at database design than I am sit beside me and help me build a great product.  [0:38:39] SF: Yeah. And I think one of the other nice things about this type of tooling is there's kind of like no judgment involved. If you're trying to learn something new, you might be a little bit nervous or intimidated to go and ask like an expert. Like, "Am I wasting this person's time?" Or depending on who that person is, maybe they are better or worse at like coaching and helping people. This is like you're probably going to - I guess like the bedside manner of the chatbot experience is going to be something that you can probably deal with.  [0:39:08] LR: Unless you add in custom instructions into GPT. And you can say, "You know what? Actually, just mess with me a little bit." And it'll respond, "Here's the result. I'm surprised you didn't know that." But it's like, "Okay."  [0:39:19] SF: Take on the personality of an arrogant professor that's been in the space for 40 years. Something like that. [0:39:25] LR: Yeah.  [0:39:28] SF: In terms of V0, can you talk a little bit about what's going on under the hood? How is the model built? Are you using existing open-source AI tools? 
How are you sort of also translating presumably what is a textual response into these like widgets?  [0:39:45] LR: Yeah. I talked a little bit earlier about the SDK and how that's kind of been the foundational part of what makes V0 successful. The V0 tech stack is actually end-to-end Vercel. It's using the AI SDK. It's using Vercel's DX platform for like how you actually build the project. It's using our managed infrastructure to scale it. It's using Vercel KV for a Redis database. It's using pretty much every single part of the products that we sell to customers. It's the ultimate dogfooding of our tooling.  A customer goes to V0. They have an idea for the next big thing. Great. They type in the search box or they soon upload an image. It talks to our model, which is a combination of tools that then can spit out essentially a first version of their UI that is using Tailwind and shadcn/ui, which is this open-source library that we think is great, built on top of Tailwind CSS and an accessible set of primitives called Radix. As well as React code too.  It spits out this code where you can copy-paste these accessible React components. But then also just vanilla HTML that uses Tailwind if you want to. The great thing about Tailwind here, by the way, is that there's a collocation of the styles in the HTML. It makes it very copy-paste friendly. You get that first version and then we made it in such a way where you can literally copy-paste or you can run an npx command to install the components and then put that code into your application.  [0:41:16] SF: And then what about customization? If I have my own sort of brand guidelines, can I take into account like my company's colors and maybe an existing stylesheet?  [0:41:24] LR: Yeah. We haven't built this yet. But it's definitely something we want to do. The ability to customize the output based on your own - bring your own style guide. Bring your own design system. 
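To illustrate the copy-paste friendliness Lee attributes to Tailwind's collocated styles: because utility classes live directly in the markup, a generated fragment carries its own styling. The snippet below is a hypothetical v0-style fragment held in a TypeScript string, invented for illustration, not real v0 output.

```typescript
// Hypothetical v0-style output fragment (illustrative only).
// Tailwind utility classes are collocated with the markup, so
// the fragment is self-describing: it can be pasted into any
// page where Tailwind is set up, with no separate stylesheet
// to carry along.
const invoiceCard: string = `
<div class="rounded-lg border p-4 shadow-sm">
  <h2 class="text-lg font-semibold">Invoice #1042</h2>
  <p class="text-sm text-gray-500">Due March 1</p>
  <button class="mt-2 rounded bg-black px-3 py-1.5 text-sm text-white">
    Edit invoice
  </button>
</div>
`.trim();

// Every style decision is visible right in the markup itself.
console.log(invoiceCard.startsWith("<div")); // prints true
```

Contrast this with a fragment styled through external CSS selectors, which only renders correctly if the matching stylesheet travels with it.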
The way it works today is that the primitives that we're building on, the UI component library that's integrated with the output, it does give you some levers around I want to change my spacing style, and my typography style, and my color palette and all of these different things to give you a pretty unique look and feel.  There's some good examples of like how this works. And Tailwind also is like pretty customizable in terms of all of the different scales in your design system. But I think what will make this even better is if you can actually literally bring the components that are already in your existing potentially private design system as well too and then output those.  For example, we use V0 as well for internal components that are part of our design system, our component library to build Vercel as well too. Yeah, I think that will be a very exciting addition. [0:42:23] SF: Is this primarily focused on or limited to component design? Like, I want a better checkout page. Or can I go and actually create like a multi-page experience with this?  [0:42:35] LR: Yeah. Today, it's kind of focused on a single route, a single page. And inside of that page there may be multiple components. It's more than a singular component. It can do a full page. I would say it's the best at a singular - like a focused component today. When it starts to do a full route, it's a little bit - there are more things that need to be stitched together. And it might take a few more follow-up iterations or additional prompts.  Typically, when I'm building a full page, like when I was building this full email client, I think it took like five prompts for me to get something I was really happy with. First prompt was pretty good. But then I was like, "Actually, move this sidebar up a little bit. Add an icon in the top right. This other thing here." It's okay to have a few more iterations on top of there. In the future, we might try to make it easier to design that multi-page experience. 
But we're kind of waiting to see how things shake out after early customer feedback.  [0:43:29] SF: Yeah. And I imagine more things can go wrong sort of the bigger the scope of it, right? You're essentially creating a bigger surface area for potential errors or mistakes.  [0:43:39] LR: Yeah. The best way to get great results is to be as specific as possible, which is kind of like how a good workflow for using other large language model tools works. Once you have a very specific prompt, you can actually ask the large language model to critique your own prompt. How can I make this prompt a little bit better? Is there anything missing?  There could be ways for us to essentially build that loop into V0 itself, especially when you're considering uploading an image of how you want something to look. It can then kind of continuously re-evaluate, "Am I close to this?", looking at the output versus what the initially uploaded image was.  [0:44:19] SF: What's next? You kind of touched on some of these things. But are there other investments in the AI space that you're working on that you can share?  [0:44:25] LR: Yeah. I think there's still a ton of opportunity for us to continue making the output of V0 as useful as possible. I think really fine-tuning how we can create an experience where, in as few prompts as possible, you're going to get to something that you're ready to copy-paste and bring into your application. There's still so much that we can do there.  We're just going to keep seeing how we can make life a little bit easier for our first customers of V0 here, who have been using it to build some pretty awesome products and some pretty awesome UIs. That's kind of on the V0 side.  On the AI SDK side, we're just starting to explore the new OpenAI Assistants API. I believe we just launched some experimental support for actually using that through the SDK. Lots of cool stuff happening there. 
A lot of cool stuff happening in the open-source space and open-source models that you can bring in and kind of self-host or host on your own infrastructure. We want to make the AI SDK really the place where you can integrate with all of these tools. That's that.  And then on Vercel, we're just trying to make it as easy as possible for you to accelerate your product development, accelerate bringing AI into your product with as little friction as possible. Giving you the frameworks and the tools, whether it's the AI SDK or V0. But then, also, if you want to bring your own back-end that's using your own AI model that has some specific set of constraints, we still want to support that as well too. [0:45:51] SF: Okay. And then if people want to learn more about this or get in contact with you, what's the best place to do that? Or how should they do that?  [0:45:59] LR: Yeah. The AI SDK and V0, if anyone has feedback about those, I would love to talk more about them. If you're using V0. If you're using the SDK. V0.dev if you want to go try things out and sign up. Happy to help anyone there who has questions.  On Vercel in general, if you go to vercel.com, we've recently updated our homepage. You should check that out. I'm really happy with that. And you can also reach out to me directly on your favorite social media platform or lee@vercel.com. I'm happy to talk with you as well too. [0:46:29] SF: Awesome. Well, Lee, thanks so much for being here. I'm glad that we're finally able to connect in some fashion. I've been a big fan of Next.js for years. I use it for pretty much all my sort of demos and side projects. And use the Vercel platform. And I'm really excited about a lot of the work that you're doing in the AI space. I think it's different than a lot of what other people are necessarily focused on. And I think that this kind of laser focus on the front-end, making AI accessible to any developer is something that is needed. 
And it's kind of the way of the future for how we're going to build a lot of these types of applications. [0:47:00] LR: Yeah. Thanks, Sean. I appreciate that. Yeah, it's been fun. [0:47:03] SF: All right. Thank you. And cheers. [0:47:05] LR: Yeah. Thanks for having me.  [END]