EPISODE 1773

[INTRODUCTION]

[0:00:00] ANNOUNCER: Next.js is an open-source JavaScript framework developed by Vercel. It's built on top of React and is designed to streamline web application development using server-side rendering and static site generation. The framework's handling of both front-end and backend tasks, along with features like API routes and file-based routing, has made it an increasingly popular choice in the web dev community. Next.js 15 was just released in October of 2024 and introduces significant upgrades, including enhanced integration of Turbopack and support for React 19. Jimmy Lai is a Software Engineering Manager for Next.js, and Tim Neutkens is the Tech Lead for Next.js and Turbopack. They join the show to talk about Next.js and what's new in version 15. Kevin Ball, or KBall, is the Vice President of Engineering at Mento and an independent coach for engineers and engineering leaders. He co-founded and served as CTO for two companies, founded the San Diego JavaScript Meetup, and organizes the AI in Action discussion group through Latent Space. Check out the show notes to follow KBall on Twitter or LinkedIn, or visit his website, kball.llc.

[INTERVIEW]

[0:01:18] KB: Hey, guys. Welcome to the show.

[0:01:20] JL: Hey.

[0:01:21] TN: Thanks for having us.

[0:01:22] KB: Yeah, good to see you. Let's start out a little bit with some quick introductions. Actually, I'll throw it to you first, Jimmy. Jimmy, do you want to introduce yourself and your background and how you got involved with Next?

[0:01:33] JL: Yeah. My name is Jimmy. I'm a French software engineer. Before Vercel, I used to work at Meta in London. I worked on React Native and on some internal products, and I worked a lot on web performance and everything related to that, like product infrastructure in general. I decided to join Vercel because I wanted to work with a company that focused on, well, actually on performance, on the web. The mission, Guillermo's mission, really struck me there, in terms of bringing the amazing technologies we had, thanks to our companies, to everyone. I didn't start working on Next when I joined. I used to work on the Future Flex, but I quickly joined the team back in, was it end of 2022? Ever since, I've been working mostly on the app router. A year ago, I started managing the team, and we've handled mostly the 15 release and a lot of the great work the team completed in versions 14.1 and 14.2. Yeah, that's it on me.

[0:02:40] KB: Awesome. How about you, Tim?

[0:02:42] TN: Hi, I'm Tim. I've been working on Next.js for a while, since 2016 when it first came out, as a contributor, and eventually joined Vercel in 2017. Since then, I've been building quite a lot of different things, but mostly working on Next.js across all of them and building out the team. Now, I'm tech lead for Next.js and Turbopack. I'm mostly focused on Turbopack nowadays, trying to get that over the line and into the hands of everyone.

[0:03:09] KB: Yeah. The impetus for this is y'all just had a big release. Do you want to tell us what that was and what's in the box?

[0:03:18] JL: Yeah. This release has been a long time coming, actually. For those not familiar with the release schedule, we used to ship really regularly in past years. After we shipped the app router with Next 13, we were really following up on it every month or so. With 15, we decided to take a slightly different approach. We wanted to take a bit more time to make sure it's really polished. We released a release candidate back in May.
Took us quite some time to ship it, because we're now basically in October, so it's been six months. We really took the opportunity to bundle as many really nice changes as possible, so that we could set it as the new baseline for the app router. We added an insane amount of stability improvements, and sprinkled in some features, like the Next.js form component, or the after hook, which allows you to tap into the request lifecycle. We also improved so much on the Turbopack development story, which to me, I think, is actually maybe the biggest headline for Next 15. Turbopack is now stable for development. Maybe Tim can say more about it as well.

[0:04:34] TN: Yeah. There's definitely been other changes as well. Next to features, there's been a lot of work on polishing things that people run into every day. It's definitely been a strong shift in focus towards stability improvements, to just make your day-to-day better, in short. If you've ever used Next.js, or any React framework that does server-side rendering, you've seen these hydration errors, because you add a date somewhere and that date changes once it gets to the browser, that kind of thing. We just saw that everyone was struggling with those errors, and we were struggling with them ourselves inside of Vercel as well. It was just not clear where it was coming from and what was causing it, right? What code, and what was even changing on the page that caused the error? What we did is we worked with the React team to make React better in that regard, so that React can actually show a diff, basically, for where this thing is mismatching, and which component was causing it. What's really nice now is that you get all these small tweaks that may seem like very small stuff, but in the end it's affecting a million developers every day, because it just makes it easier to solve hydration errors, or some other errors that didn't have correct source mapping, or things like that. That's just on the Next.js 15 side of things.
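The kind of mismatch Tim is describing, and the usual workaround, look roughly like this. This is a hypothetical component, not code from the episode; the point is just that the server render and the first client render disagree, which is what the improved error overlay now pinpoints with a diff.

```tsx
'use client';
// Hypothetical components illustrating a hydration mismatch and one common fix.
import { useEffect, useState } from 'react';

// Broken: new Date() is evaluated once during server rendering and again in
// the browser during hydration, so the two trees disagree and React warns.
export function LastUpdatedBroken() {
  return <p>Rendered at {new Date().toLocaleTimeString()}</p>;
}

// Fix: render a stable placeholder first, then fill in the client-only value
// after hydration, so the server markup and the first client render match.
export function LastUpdatedFixed() {
  const [now, setNow] = useState<string | null>(null);
  useEffect(() => {
    setNow(new Date().toLocaleTimeString());
  }, []);
  return <p>Rendered at {now ?? 'loading'}</p>;
}
```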
Then as part of Next.js 15, we're also shipping Turbopack for development. Turbopack is the new underlying compiler and bundler for Next.js, and we're planning to make it more of a generic bundler in the future. Right now, we're just focusing on Next.js, because that's the largest surface area, and once it works well for Next.js and it can build all the dependencies that we see people use every day, then it will be a really good generic solution as well. We're focusing first on development, because that's the thing that most people were running into and had complaints about: things were too slow, it took too long to open a page, or when you make a change, it would take seconds sometimes before you can see it on the screen, be it CSS changes, or code changes. Basically, we set out to build a faster solution than what Next.js had up to that point. We built this new architecture to scale to the large amount of code that we see nowadays. When I started working on it, actually eight years ago, JavaScript apps were certainly not small, and that node modules meme has always been true a little bit. I would say, the bottomless pit has gotten a lot more bottomless in recent years. What we basically see is that there's more consolidation of libraries, and it's not bad at all, really. It is really nice. You see more icon libraries, more design systems that, out of the box, have everything that you need.

Previously, you would have to go and write every single component yourself. Eight years ago, for example, you just had to write your own button component, write your own menus, write the dropdowns, everything yourself. Now, you just have out-of-the-box toolkits that have everything. With that comes them shipping a lot of components by default, which means there's more for us to compile. That's not inherently bad. It just means that our tools now need to scale up with that overall demand. Basically, what that meant for us in practice is that we'd see smaller apps get over 10,000 modules, where previously, that was not the case. Or in some exotic cases, where you accidentally import five different icon libraries that all export 10,000 modules, you'd see 30,000-plus modules for something that seems to be a simple case. When I say modules, I don't mean -

[0:08:19] KB: It's the great NPM inflation, basically.

[0:08:22] TN: Yeah. There's some libraries that ship icons that have multiple icon libraries in them. You can pick and choose between different icon libraries and use different icons. From a design perspective, maybe not the best idea, but it's very convenient, and that's why you see it a lot. It's not a bad thing, like I said. It just means that there's more code to be compiled. It doesn't mean that we ship more code to the browser per se, because you have stuff like tree shaking and all that. At the compiler and bundler level, though, we first need to know about everything that exists before we can actually shake it out. That causes the compiler to take longer, even if you only use one icon from the icon library. That's why we set out to build a new compiler and bundler that can scale up with the high demands of larger apps. Then besides that, Vercel's own internal app for vercel.com, for example, started growing quite a lot as well. We added hundreds of engineers at Vercel. It's just more people working on it day to day as well. The code base itself is growing way more quickly than it used to. In order to keep up with the scaling of that, we just had to create a better solution. That turned into Turbopack, eventually. We basically investigated all different kinds of solutions, but found that they didn't really fit with the way that Next.js works, or the way that we wanted to do Node.js and browser compilation, and a bunch of other things. In the end, we ended up building a new bundler that should set us up for the next 10 years at least. We can still optimize further as well. Where we're at today is just a start, right? We're at a certain performance level that's much better than Webpack, the previous compiler. But the current performance of the new compiler is still only at a certain point. We're happy with where we're at, and it's much better than where it used to be, but it can still be so much better from here. That's why we're working on caching, with some extra caching layers, to make things even faster across rebuilds.

[0:10:19] KB: Just to make sure I understand, this is replacing what you were using Webpack for, and what other frameworks might use, like some combination of Vite and Rollup, or something like that?

[0:10:28] TN: Exactly. Yeah.

[0:10:30] JL: I think what we found, yeah, like Tim said, building on Webpack, is just that we were architecturally limited. I actually don't remember how old Webpack is. It's probably around 10 years old, Tim?

[0:10:45] TN: Over 10 years. Yeah.

[0:10:46] JL: Yeah.
The whole structure, the whole amount of legacy it had to support, the whole host of weird options and queries that you could configure via Webpack, was limiting us. We sat down and we were thinking, what if we could start from the ground up? Obviously, we knew that was going to be a big undertaking. We considered using Vite and the Rollup option as well. I think we took a really big bet here a few years ago, right? We believed we had the solution to scale it properly. What's really exciting now is that this is starting to pay off. We spent the last few years iterating on just the basics of making a bundler work. The great thing to me, and I was really impressed talking with Tobias about it at the last conf, is that we built the bundler with the idea in mind that you can separate each of the tasks that it does and cache them individually at the function level, instead of at the module level. This allows us to avoid repeating any work that we don't need to do. The first place we can see that is the HMR performance boost, which is mind-blowing. You hit command-save on a file and it feels like magic to me.

[0:12:05] TN: Yeah, we found that it's 95% faster than what it was before. One example is a page on Vercel's own app. It used to take 900 milliseconds, and it went down to, I think, 45 milliseconds for the exact same change, right? Changing some CSS, or a global style.

[0:12:24] JL: Exactly.

[0:12:25] TN: Yeah, one of the problems that Webpack had, or still has in general, is that the moment you start to add more modules, some modules are JavaScript files, or TypeScript files, or CSS, or anything else that you had loaders for, for example. The moment you have over 10,000 to 30,000 modules, there's just an inherent overhead on processing all the hot module replacement updates, the fast refresh updates. What that means is that anytime you make a change, it doesn't matter what change it is. If it's a JavaScript file change, or a CSS file change, you might expect one to be faster than the other, but actually, it's not. That CSS file change will still take 900-plus milliseconds, because of just the overhead of having to crawl the entire list of modules. With Turbopack, we actually made it so that Turbopack only has to redo the work that is affected by the change. That means, if you're writing CSS and you don't have any customization, so you don't add PostCSS, or Tailwind, or anything like that, we only have to recompile that single file, instead of recompiling the entire module graph, or the entire chunks, the JavaScript files it outputs, for example, or the CSS files. We don't have to recalculate those; we only have to recalculate the parts that are affected by that change, basically.

[0:13:42] KB: Well, and that amount of timing change is a real difference for your dev cycle, right? 900 milliseconds is still not massively long, but that's, I make a change, I save it, I go see it reflected. Whereas 45 is like, I'm tinkering with this, and it's live updating with me, and I can iterate on this. Is that right? It's like using dev tools, essentially, except you're using your code base.

[0:14:02] JL: Yeah.

[0:14:03] TN: Yeah, exactly.

[0:14:05] JL: What I was getting at is that since this is now the baseline for us, it allowed us to, basically, really quickly, well, I say quickly, but it's been a year in the making, add a persistent caching layer on top of that.
For HMR, it's all in memory, so we do this instantly, but you still have to hit the cost of actually starting up and computing the tasks. The amazing thing we showed last Thursday at the conf was, what if we could just persist all of that work to a disk cache? What if, instead of saving it just for your session, we could save it forever, across all of your sessions? You stop the server, you go to sleep, you wake up the day after tomorrow, and you can pick up exactly where you left off in hundreds of milliseconds. That's a lot of time saved.

[0:14:52] KB: That is a lot. All right, so part of what's going into Next 15 is that Turbopack is stable, and you're shipping it with this. Was there a reason to couple the two, or did it just happen that way?

[0:15:05] TN: Yeah. We had Turbopack in release candidate for a really long time. When Jimmy mentioned the release candidate for Next.js shipped six months ago, I think we had Turbopack in release candidate for even longer than that. Really, the benchmark here for Turbopack was that we pass all development tests, because we only ship it for development so far. It's coming for builds as well, that's worth noting here. In the end, you'll be able to run next build with Turbopack as well and have the same performance improvement, but it's still a work in progress, because we have to add production optimizations and things like that. Yeah, on the coupling of the releases, we shipped a release candidate for Turbopack, and that was the first time people actually started to try it out in their own apps as well. Up until that point, we had been using it for Vercel, vercel.com and our internal apps and things like that, since October last year. We already had it in production use, in development mode that is, and it was working great for us, right? The big thing with Turbopack is that since it's a bundler, it's going to touch all your code. That's all your node modules that you're importing, all your first-party code that you wrote yourself, and it needs to be able to process every single edge case thing that you're using as well: a feature of the platform, or a feature of Node, or a specific resolving thing that TypeScript supports, or things like that. We basically spent the last eight months on that. The bundler itself had already been done for one and a half years, I think, at this point. It was really stable. The big thing here was getting all the tests to pass for Next.js. We finished that in April. Then after that, we just spent time on bug reports, testing out the top 300 packages that are used with Next.js, for example, trying out more open-source apps, and doing all the due diligence on making sure that we could actually confidently say, this thing is going to work for your app, if you don't customize your Webpack config. The important thing to note here is that we allow you to customize your Webpack config, and that means that you can basically override any setting that is set for Webpack in Next.js internally, but we can't support that with Turbopack, because Turbopack is not Webpack. It might sound like a no-brainer, but actually, it's not that simple. The easy explanation here is that we do support Webpack loaders in Turbopack, but we don't support Webpack plugins, for example. If you have Webpack plugins, then you can't add those to Turbopack, because we don't have the same low-level hooks and things like that. We do support loaders.
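As a rough sketch of what that loader support looks like in configuration: around the Next.js 15 release this lived under the experimental turbo settings, so treat the exact key names as illustrative and check the current docs for your version.

```ts
// next.config.ts (illustrative; this config surface was still marked
// experimental around the Next.js 15 release)
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  experimental: {
    turbo: {
      // Reuse an existing Webpack *loader* under Turbopack: every imported
      // .svg file is run through @svgr/webpack and treated as a JS module,
      // so `import Logo from './logo.svg'` gives you a React component.
      rules: {
        '*.svg': {
          loaders: ['@svgr/webpack'],
          as: '*.js',
        },
      },
    },
  },
};

export default nextConfig;
```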
If you just have a Webpack loader, like SVGR, I'm not sure what the right way to pronounce it is, but if you want to import SVGs as components, for example, you get the loader, and that works with Turbopack as well. You can just add a Turbopack config for Webpack loaders. To answer your question around the timing. In the end, we spent a lot of time working towards a stable release. Then, I think, a month or two months ago, we finished all that work. Turbopack for development was basically ready. We fixed all the Linear issues that we had about it and all that. Then, Next.js itself depends on Turbopack, right? It actually has Turbopack as a dependency, in a way that it compiles it in as a Rust binary. In order to release it, we had to ship it as part of Next.js itself. Next.js itself was in a release cycle, right? It was already in release candidate and was getting out as Next.js 15 a month, or one and a half months later. In the end, the timing is basically coincidental. It could have been an earlier version as well, or a later version, depending on when these went out. Then the other thing to know here is that Turbopack in Next.js is actually not just Turbopack, the bundler. There's the bundler itself, which is what we call Turbopack. Then the other part is the Rust bindings that we integrate with Next.js. We add all the Next.js-specific ways that layouts and pages are resolved, and custom transforms that we do for Next.js specifically, things like that. We call that Next.rs internally, because we need to have some code name for it. That's basically all the bindings into the bundler and how we add entry points, routes, basically, to the bundler, and things like that. Yeah.

[0:19:27] KB: This gets to an interesting topic around when you own your own build chain, which you now do as you're doing this. You can use it to make standard things faster, because you happen to use them, or you can even use it to start extending the language. Frameworks like Svelte extend the language, but they own the compile chain, or you get frameworks like Qwik, which also sets things up to be magical for you, because it knows end-to-end what it's doing. Next, as I understand it, and particularly with things like OpenNext, it's still just JavaScript and React. Are you looking at extending it further now that you own your whole build chain?

[0:20:07] JL: Interesting. I feel like one might say that Next.js is in the same category as the other frameworks you mentioned. If you think about it from our perspective, actually, Next.js is mostly all compiler-based, especially with the new server components we introduced with the app router. It's now sort of like its own sub-language. Well, it's its own language, and React is one, with use client and use server, which sort of introduce new paradigms, and we introduced use cache on Thursday at the conf. It's React, plus those things, which to me are really an extension of the language already.

[0:20:46] TN: It's not the same as Svelte, or Qwik, in that way. It's not like it's adding a language extension, where you have specific directives that are special. Besides the directives, like use cache and use client and use server. What is interesting there is that those are not JavaScript directives in a way. They're not actually directives that are saying, this is different syntax that allows you to do a certain thing. They're more like boundaries between the server and the client, and they're bundler markers. They're more like, hey, bundler, now move to this different environment. You can switch between environments using those directives. You can say, use client, now this is a browser/server-side rendered component. Then use server, this is now something that runs on the server as well. All of that is deep integration into bundlers already. We already had to do this with Webpack. We support it with Webpack as well. The main difference now is that with Webpack, we had to do manual bookkeeping between three different Webpack instances, where there's basically three compilers running at the same time. Whereas now, it's one compiler that can reason about the entire module graph of all the different environments as well.
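A minimal sketch of the boundaries those directives mark, shown as three hypothetical files in one block; this is illustrative code, not from the episode, using the React directives being discussed.

```tsx
// app/page.tsx: a server component by default, no directive needed.
import { LikeButton } from './like-button';
import { saveLike } from './actions';

export default async function Page() {
  const post = await getPost(); // hypothetical data fetch on the server
  return (
    <article>
      <h1>{post.title}</h1>
      {/* The import below crosses into the browser environment. */}
      <LikeButton onLike={saveLike} />
    </article>
  );
}

async function getPost() {
  return { title: 'Hello from the server' };
}

// app/like-button.tsx
'use client'; // everything in this module is bundled for the browser
import { useState } from 'react';

export function LikeButton({ onLike }: { onLike: () => Promise<void> }) {
  const [liked, setLiked] = useState(false);
  return (
    <button onClick={async () => { await onLike(); setLiked(true); }}>
      {liked ? 'Liked' : 'Like'}
    </button>
  );
}

// app/actions.ts
'use server'; // these functions stay on the server and are callable from the client
export async function saveLike() {
  // write to a database here
}
```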
[0:22:06] JL: To go back to the compiler work, I think maybe, yeah, the difference in philosophy is that we try to still just be React and JavaScript. I don't think we're looking to go anywhere beyond that. But if React went for it, if they introduced their own .react file extension, and then they had their own language where you would need to declare a use anywhere, and it could have its own syntax, etc., we would follow it for sure. But we don't have any other ambitions besides that.

[0:22:37] KB: To be fair, they already did that with JSX, but it didn't introduce new semantics. It was more sugar and ease of use. Okay.

[0:22:47] JL: Yeah. It could be interesting though, if React did it. They could introduce their own flavor on it and make it so that you could use conditional hooks, all those things. That'd be great.

[0:22:58] KB: Let's maybe talk about some of the other functionality changes that - I mean, you mentioned you'd been making all of these improvements. When we talked initially, a lot of what you mentioned was stability improvements, build improvements, things like that. This is a major release, so there's got to be some breaking changes in there. Looking at it, the one that stood out to me in the release was the async request APIs. Do you want to talk a little bit about that? What's the motivation? What are the implications of introducing that?

[0:23:24] JL: Yeah. That one was pretty funny. It was fairly risky on our end, and we were very wary of such a big change. For context, what we had before was, through the app router, we exposed information about the current request through methods like cookies and headers, or we would inject params or searchParams as props to the server component that you would render. In 15, we decided to change those methods and functions to be accessible in the same way, but via promises instead. Calling headers would now return a promise. Calling cookies would now return a promise, and you need to await it in order to read the content there. The 15 blog post goes a little bit into why we did that change, but it's vague, basically. We didn't really say why we did it. We uncovered that at the conf. What we've been looking to do with this change is to actually prepare for this other big change coming up in Next soon, which we call Dynamic IO internally. There's been a lot of talk, basically, around Next.js complexity in the past, around how the semantics around caching and the staticness, or dynamicness, of Next make it hard for people to reason about. For context, what we used to do, and what we still do, is pre-render all pages by default.
You'd write a page, and if there was a fetch call in there, or anything really, we tried to pre-render it at build time, so that we could optimize it and serve it in a static form. However, all this heuristic was a bit too strong sometimes, and you would end up with people deploying their website and asking themselves why their website content was not changing, if they had made a fetch call to a third-party API to display some content. We had those semantic changes and, basically, we ended up having to add a lot of configuration as well, because some people wanted control over whether a page always needed to be dynamic, or always wanted to be static, or actually, a mix of both. It all made for a pretty hard learning experience, in my opinion. I guess we're probably the only framework to do these kinds of optimizations. Anyway, we went back to the drawing board and we came up with this concept of Dynamic IO, where in order to simplify the learning experience, we wanted to come up with a single concept through which users could determine if their code was static, or dynamic. The gist of Dynamic IO is, if your user code uses promises, if you actually await some asynchronous work, then Next can generally reason about it and say that this page should probably be dynamic. You don't have any problem anymore if you're reading from the file system, or if you're accessing your database, because 99% of the cases are probably dynamic here. Now you use Next.js as you would. If you write a simple blog post and you're just reading content, it's going to be static. If you have a dashboard and you're fetching from your database, it's probably going to be dynamic. Next can now reason more intelligently about it. Which leads us to the cookies and headers changes. If you think about it, reading from the cookies, or the headers, actually makes the request dynamic, because it's about the incoming request that comes in, so you want to read it so that you can personalize the page according to the user info. Is the user logged in or not? Implicitly, that's dynamic behavior. The real reason we made that change is just so we could adapt it to this new Dynamic IO behavior. Now, it works the same. You await it. Now, you're telling your page, it's dynamic.

[0:27:28] KB: I think it makes sense. If I were to rephrase back to you, this actually gets back to the previous question around things you're doing with the build tools, right? You're doing a build-time step, where you are optimizing things that can be generated statically to pre-generate them statically, so they go up there. You're trying to do that determination "automagically," without having the developer have to tell you things. The simplest way to do that is to say, is there anything async going on here? In order to do that, you had to take these things that maybe were using a synchronous API previously, but actually, technically, should be asynchronous, because they do depend on something dynamic, something in the user request, and change them to be async. Now, your initial build-time static analysis works across the board. Is that a fair summary?

[0:28:23] JL: Yup. That's perfect.

[0:28:25] TN: The only thing there is that it's not static analysis, or [inaudible 0:28:27] per se. It's more like, we run the code and find out. This is where it gets complicated: at build time, during next build, we run the code. If the code is then doing anything async, we mark it as, this thing is not static.
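In application code, the change looks roughly like this. This is a hypothetical page, not from the episode; in Next.js 15 the request APIs from next/headers return promises, and reading them is exactly the implicit dynamic signal Jimmy describes.

```tsx
// app/dashboard/page.tsx (illustrative)
import { cookies, headers } from 'next/headers';

export default async function DashboardPage() {
  // Next.js 14: const theme = cookies().get('theme')  (synchronous)
  // Next.js 15: the request APIs are async, so you await them. Touching the
  // incoming request this way marks the page as dynamic.
  const cookieStore = await cookies();
  const requestHeaders = await headers();

  const theme = cookieStore.get('theme')?.value ?? 'light';
  const userAgent = requestHeaders.get('user-agent') ?? 'unknown';

  return (
    <main data-theme={theme}>
      <p>Personalized for: {userAgent}</p>
    </main>
  );
}
```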
[0:28:44] JL: It's dynamic analysis, maybe.

[0:28:47] TN: Yeah.

[0:28:48] JL: Internally, we actually call that static part the build phase. It's just pre-processing for doing runtime optimizations.

[0:28:58] KB: It's not static analysis in terms of analyzing the written code, but it's pre-processed, pre-run code. Is that right?

[0:29:06] TN: We tried to call it pre-render for the most part. We try to pre-render during build. If it turns out that it's doing anything async, then we basically bail out of doing the full pre-render. There are some implications on partial pre-rendering and all that as well. I mean, maybe we can talk about that now; we could talk about it for hours, probably. Yeah, it's that pre-render we try to generate. That's the same in Next 14, by the way. We do this pre-render, but the mechanism is different. When you call cookies, it's a throwing mechanism, instead of finding promises.

[0:29:40] KB: Makes sense. All right. Other changes that are in Next 15, you mentioned the Next Form component. Do you want to talk a little bit about that?

[0:29:51] JL: Yeah, Next Form, really simple. It's a drop-in for just a normal form tag, but it adds some additional features. That's prefetching. It's client-side navigation. It allows you to do the things that you're very often already doing anyway, but that are quite cumbersome to handle manually. Think of the Link component, for example. Link does a bunch of features for you automatically that you could totally write yourself. You could write it as, this thing is in the viewport, then router.prefetch, or something like that, but you really don't want to be spending time on that per se. It's similar for Next Form, where it will automatically do the prefetching for you, if it's a GET route, for example. It just integrates better with server functions and redirects.

[0:30:42] JL: Yeah. The idea behind the component is that we wanted to make it as similar to the vanilla form as possible, and we just wanted to add a really thin layer that connects it to the Next.js router on its own. We're not looking to do anything fancy there. We're not integrating with form validation, or anything you might expect from some other library. It's just supposed to be a really raw primitive, so that you can get instant loading states when you're doing a GET form to another page, that kind of thing.

[0:31:14] TN: It's like your search forms and things like that. Much easier to write those. Whereas today, you might have to manually manage the Suspense and add transitions and a bunch of things that are slightly newer React as well, so not everyone even knows about them. This just makes that whole setup a bit easier.
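Roughly what that looks like: an illustrative search form, not code from the episode, using the next/form component that shipped with Next.js 15.

```tsx
// app/search-form.tsx (illustrative)
import Form from 'next/form';

export function SearchForm() {
  return (
    // Submitting performs a client-side navigation to /search?query=...,
    // and the target route gets prefetched, without any manual router code.
    <Form action="/search">
      <input name="query" placeholder="Search posts" />
      <button type="submit">Search</button>
    </Form>
  );
}
```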
[0:31:36] KB: That gets into another thing that I wanted to talk about with you guys, which is the relationship with React. In particular, I saw that you're releasing against an RC of React, not even a stable released version. What's the thinking behind that? Were there particular things you needed to get from that? How is that all working?

[0:31:56] JL: Yeah. It's a pretty interesting question. For context, we've been working really closely with the React team at Meta. We also have a few members of the core team inside of our team as well. Generally, the roadmap, the decisions around releasing React 19, are usually led by those members. Originally, what happened is that back in May, React also released their release candidate for React 19. Basically, we wanted to ship Next 15 as part of that as well. The idea was that we would release the RC and then fast-follow on it. We made all the breaking changes that we needed. We bumped the peer dependency and forced users on React 18, who were using the pages router, for example, to also upgrade to React 19. I remember, what happened is that a month later, there were some discussions around one change in particular, regarding the suspense siblings rendering behavior in React 19. That was a big change for a lot of community users, and the React team decided to hold the React 19 release on this, which is why we're still on the RC. We ended up waiting on it for a while, but then we actually discussed it with the React team internally, and we decided to opt for this strategy of releasing our stable without blocking on the RC and this behavior change. I think, with the caveat that we would add backwards compatibility with React 18 for the pages router, so that separates the concerns there. However, yeah, we got into a slightly more complex situation with the app router, because one thing to know about the app router is that we're building it off of a vendored version of React, which is the thing called Canary, Tim? Yeah. The app router always came with the actual latest version of React Canary, which is a version built for frameworks like us, like meta-frameworks, so that we could build on top of it, so that we could integrate with the latest features before they actually hit React stable. The reasoning here is that we've been on React 19 for basically a year or so already, if you're using the app router. The siblings change, the suspense siblings change as I'll shorten it, has actually been present for over a year now for us. We decided to not consider it a breaking change, and we decided to move forward with it. Because per the React team itself, that's really the only change that's going to be shipped whenever React 19 really ships as GA.

[0:34:50] KB: Does Next depend on that particular part of React 19? Or is that just something separate, so that if they ship a change to that, it just doesn't bother you at all?

[0:35:00] JL: Yeah. It doesn't actually affect us. Not to go into too much detail, but it affects client-side Suspense usage, if you are doing fetching in render, from the top of my mind. That basically means, if you have two components that are in the same Suspense boundary, what would happen previously is it would kick off the two components at the same time. Like, call render on both component A and component B, if they're in the same Suspense boundary. Now, it actually will call component A, and once it suspends, it will not render component B. That's a problem if you're using a library that is heavily relying on this. As it turns out, there's quite a few of those in the whole React community. In our case, the Next.js router in the app router is not using that pattern in any way. The only place where it might affect you is some ways of using lazy loading, or things like that, but that's not super common per se. Yeah. In practice, we don't run into the same problem here, because the fetching mechanism is different. If you're using server components, for example, they don't run in the browser, so you don't hit the same limitation.
[0:36:17] TN: Basically, at the worst, it doesn't change anything for Next.js app router users, since they always had it. Whenever that gets fixed, it's just going to be a minor performance optimization.

[0:36:29] JL: Yeah. It will only get better, basically. That's the -

[0:36:31] KB: Makes sense. This conversation brings me to another thing. I know there's been stuff out in the web community, questions around the deep interlocking relationship between the React team and the Next team now, and thoughts about, oh, a server component is just for Next, or how does that work, and things like that. Kind of curious. How do you all think about the relationship of Next and React? Philosophically, what do you think, like, what belongs in Next versus what needs to be on the React side? Then, are there things even further out that shouldn't be in either of them and should be in a third-party library? How do you think about those lines?

[0:37:11] TN: There's a surprising amount of things that people think are Next-specific. They're actually React. A good example is use client and use server. Those come from the React RFCs. Actually, the biggest misconception is that we invented use client and use server. That was actually not the case. That was based on feedback from other early adopters of server components, actually. For example, Hydrogen at Shopify was one of the first frameworks to implement React server components, even before we had a full working implementation. That was even before we built the app router. They started migrating apps, and then they found that they would run into problems with the extension approach, where you had to add .client.jsx, or something like that. It was common feedback. That was actually something that the Hydrogen team found was a very big problem for getting overall community package adoption, for example. Because it meant every single React library out there would have to change their code in some way. There would be no way for you as a user to say, this is now a client component, or something like that. Yeah. I'm sure that Jimmy has a take on the broader Next.js and React overlap. My personal take here is that we're trying to make - in essence, for me, I've been working on Next.js for so long, a lot of what we're doing now is actually bringing a lot of the learnings that we had from Next.js into the overall ecosystem. A really good example of that is head management, for example. In the very first release of Next.js, we had to work around this limitation of React, which is that you couldn't just inject tags into the head. We had to create this next/head and basically build our own React-ish thing that loops over JSX and tries to magically inject it into the head. Over the last year, or the year before, Josh on the React team at Vercel spent so much time figuring out, can we bring something like this next/head thing into React itself, and bring it to all frameworks and all users of React? What this means is that with React 19, you can just write a meta tag in any component and it will just magically send it to the head for you automatically, or write a title tag and it does the same thing. It makes our lives easier, because now we don't have to maintain this brittle logic of trying to inject stuff into the head that React doesn't know about. It makes everyone else's life better, including Next.js users, as well as everyone else, by being able to inject link tags, meta tags, title tags into the head, that kind of thing, as well as integrating those deeper into React, which means link tags can now integrate with Suspense, and we can show a loading spinner until the link tag is loaded, and things like that. Stuff that I would never have been able to, that we as a team would never have been able to, add to Next.js even, because we don't have full control over rendering, which they do, right? That's one of the examples.
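A quick sketch of what that looks like with React 19's built-in metadata handling. This is a hypothetical component, not from the episode; in an app router project you would often reach for the Next.js Metadata API instead, but this is the underlying React feature Tim is describing.

```tsx
// Any component, server or client: React 19 hoists these tags into <head>.
export function BlogPost({
  post,
}: {
  post: { slug: string; title: string; summary: string };
}) {
  return (
    <article>
      {/* No next/head wrapper needed; React moves these tags for you. */}
      <title>{post.title}</title>
      <meta name="description" content={post.summary} />
      <link rel="canonical" href={`https://example.com/blog/${post.slug}`} />
      <h1>{post.title}</h1>
      <p>{post.summary}</p>
    </article>
  );
}
```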
I think one of the other examples is just the overall React server components work, proving them out. Like I said, there were other teams, like Hydrogen, and some people building other frameworks on top of the React server component spec. But it's really helped to bring our expertise in how we were building server-side apps into React, and give people all the things that you would ever need, right? An example here is if you want to pass some data from the server. A limitation that Next has in getServerSideProps is that you were never able to return a promise, or return a Date, or a JSON object that would have been recursive, for example, or things like that. React now has a serialization format that allows you to just return a JavaScript Map and pass that to the browser from the server. It can serialize that and create a new Map in the browser, and avoid hydration errors. It also is much more reliable when you have it in React itself. There's a lot of benefits from being able to work with the React team directly. Hydration errors are a good example as well. There is some integration in Next, where we have to show the error overlay and things like that, but really, everyone's getting better at hydration errors, even if you're using other frameworks and other libraries as well.
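Roughly the kind of thing that serialization enables. These are hypothetical components, not from the episode, shown as two files in one block; the Map and Date cross the server/client boundary and are recreated in the browser.

```tsx
// app/page.tsx: a server component handing rich values to a client component.
import { TagCloud } from './tag-cloud';

export default async function Page() {
  const tagCounts = new Map<string, number>([
    ['nextjs', 42],
    ['turbopack', 17],
  ]);
  // With React 19's serialization, Maps, Sets, Dates, and even promises can
  // be passed as props across the server/client boundary.
  return <TagCloud counts={tagCounts} lastUpdated={new Date()} />;
}

// app/tag-cloud.tsx
'use client';

export function TagCloud({
  counts,
  lastUpdated,
}: {
  counts: Map<string, number>;
  lastUpdated: Date;
}) {
  return (
    <ul>
      {[...counts].map(([tag, n]) => (
        <li key={tag}>
          {tag}: {n}
        </li>
      ))}
      <li>Updated {lastUpdated.toISOString()}</li>
    </ul>
  );
}
```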
[0:41:35] JL: One thing I want to touch on in particular is that we don't think about it just as Next.js when we design components, etc. It has to go back to React itself, in the minds of most of the people on the team. Let's say, it's rather that React pushes the Next.js direction. Whenever we design some changes, for example, we could have easily built our own Next.js dev tools that allow you to tap into server components and see what they're made of, and kept that for ourselves. Instead, we did the work to integrate into the React dev tools, so that any framework that wants to use server components will be able to tap into it. I think the awkward part, maybe, is that it's been an insane investment of time on the Next.js team to realize the vision of server components to its fullest. That's why others haven't quite caught up recently. We have a little bit of a head start there, but I'm really confident. Frameworks like Redwood have started exploring it. Remix has also been looking into it. I'm very much looking forward to seeing what their spin on server components is.

[0:42:55] KB: As we talk about relationships with other community projects, and you said, you're never designing it just for Next, you're pushing for React, which improves others, it leads me to another question I had, which is around the relationship between Next and Vercel. I know there's historically even been a sense of, "Oh, we need a new project, OpenNext, in order to be able to build Next outside of Vercel." Jimmy, you mentioned before we got on the air that you're doing some work in that space. Do you want to share about it?

[0:43:23] JL: Yeah, yeah. We're really excited about this. I think as we were building up Next in the past few years, we were really focusing on making the best framework end-to-end, as much as possible, in terms of something that works really well in dev, but also works the best as you deploy it. We want to push for the best ways to build websites. That doesn't just stop when you build it. It also matters how you deploy it, how you best use static content, or how you organize your middleware. OpenNext allows you to deploy Next.js easily on serverless platforms. I do want to say that Next.js on its own has always been pretty easy to self-host. Tim can talk more on that, like the containerized mode where you can just run next start, and that has always been great. That's just limited on its own, because it will just allow you to have a simple Node server that will respond to incoming requests, and you run it in your own instance, or on your own $5 VPS. That has always worked. What has not worked well out of the box is really the Next.js story as infrastructure, basically.

[0:44:37] TN: The framework-defined infrastructure, as in, Next.js is telling the provider, and it doesn't matter if it's Vercel, or some other provider: this is the serverless function I want you to create. These are the route rules I want you to create. This is where the static files are, but the static files also need to have some headers, for example. All of that is baked into next start. That's the Node.js production server, or a custom server if you're using that. Basically, all those rules are there, right? It has the right static caching headers and things like that. If you're building a serverless platform, then you would have to figure that out manually, basically, because all these platforms have different formats. There's not just one standardized output for all of these. That's where things like OpenNext, for example, and serverless-next.js, I think, is the name of one of the other packages, come in. Essentially, they are trying to create, these are the serverless functions, these are the route rules, and then generate those for a specific provider. That could be AWS, or Azure, or GCP, or anything like that. Then, if you're using next start, for example, you do have to - once you're starting to scale, it goes beyond one instance of the Next server. Most people run thousands, if not hundreds of thousands of those, depending on the amount of containers that you're generating, basically. The thing is, we see very large websites self-hosting on next start, on a Node.js server, as well. It's not that you have to use serverless per se. Next.js runs totally fine on a server, like Jimmy said. It requires some extra setup. That setup was there, it just was never explicitly documented in a, this is exactly how you do it, type of way. That's what's changing. I guess Jimmy can talk a bit more about that.

[0:46:38] JL: Yeah. On one hand, we're making sure the documentation gets better around that side. We're going to update the docs soon. With the examples we talked about, we're going to show you the really simple steps of how you could deploy on, I think, literally all of the providers you could think of. What we're doing as well is working with the OpenNext maintainers, who are great, by the way,
in order to change Next.js's architecture itself, so that we can avoid, in theory, having to have something like OpenNext exist, by taking their learnings and adapting them into our codebase, so that other providers, Netlify, Cloudflare, AWS, can consume its outputs and shape that framework-defined infrastructure as easily as we can. I think the tension was around, if you want to do that right now, the OpenNext maintainers have had to reverse engineer our codebase quite a bit. It's all in the open, but also, the contracts are a bit unclear. The outputs are subject to change. Yeah, we can do a better job at documenting and at creating and enforcing a standard behavior there. Yeah, I'm really excited about this work. I think we want Next.js to be as good as possible on every platform that we can. We're investing a lot of time into creating a set of community maintainers there. We want to make sure we support everyone in the community in that regard.

[0:48:09] TN: We just want to make sure that when you're self-hosting, that's not a bad thing, right? Obviously, we would love for you to use Vercel. There are many reasons to use Vercel, but that it would be the only place to host Next.js is definitely not one of our goals. It's more about making sure that everyone can succeed with Next.js. Day-to-day, if you're using Vercel, great. If you're not using Vercel, great as well. There are many other reasons to use Vercel, in my opinion, like preview deployments, things like that. It'll be exciting to see how this whole effort turns out, because we just launched the new GitHub org that has all these sorts of templates as well, for various different providers, serverless providers as well. Some of them only support static, for example. Say you don't even have a server, you don't want to use next start, you want to use next export, for example, or the output export config, then we have a starter kit for that as well.

[0:49:07] KB: Awesome. Well, I think we have run through our time here. Thank you, gentlemen. This has been great. Any last thing you want to leave our listeners with?

[0:49:18] TN: If you upgraded to Next.js 15, you're not done yet. Try running Turbopack as well. It's still opt-in. The reason for that is that we don't have builds yet. But from what we've seen in our own apps, and from people reaching out to us, it's definitely going to give you a big performance boost for development. Like you said, just faster iteration velocity, basically, for everyone.

[END]