EPISODE 1829 [INTRODUCTION] [0:00:00] ANNOUNCER: TanStack is an open-source collection of high-performance libraries for JavaScript and TypeScript applications primarily focused on state management, data fetching, and table utilities. It includes popular libraries like TanStack Query, TanStack Table, and TanStack Router. These libraries emphasize declarative APIs, optimized performance, and developer-friendly features, and they are increasingly popular for modern front-end development. Tanner Linsley is the creator of TanStack, and he joins the podcast with Nick Nisi to talk about the project, SSG, type safety, the TanStack Start full-stack React framework, and much more. Nick Nisi is a conference organizer, speaker, and developer focused on tools across the web ecosystem. He has organized and emceed several conferences and has led NebraskaJS for more than a decade. Nick currently works as a developer experience engineer at WorkOS. [INTERVIEW] [0:01:10] NN: Tanner Linsley, welcome to Software Engineering Daily. How's it going? [0:01:14] TL: It's going great. How are you? [0:01:16] NN: Oh, I'm doing fantastic. Why don't you tell us a little bit about yourself? [0:01:19] TL: I make software, primarily open-source software now, as of almost a year ago, which marks when I went full-time on TanStack. It's going really well. Before that, I was running a startup with some friends called Nozzle. I spent 10 years there solving difficult front-end problems, which explains why a lot of the tools I have today exist. [0:01:48] NN: Yeah, that totally makes sense. [0:01:51] TL: Before that, I was an Angular/Ionic junkie who was making money off of WordPress. Actually, it's funny. I learned how to write JavaScript kind of through Angular 1.x. It was a weird experience. And then afterwards, I was like, "Oh, I need to learn JavaScript." That's kind of where I got my start in JS.
But now, yeah, I wake up in the morning and I eat, sleep, and breathe TanStack open-source and make sure that it's helping people. [0:02:30] NN: Nice. Well, anecdotally, I can say that it is because it has helped me a ton, and many, many others. My first introduction to the TanStack would have been through React Query, now TanStack Query. Where did that lie? Was that one of your first projects? Was it the first project in the TanStack? [0:02:48] TL: No, the first project actually was TanStack Table. Back then, it was called React Table. That was one of the very first ones that I wrote that is still around today. Yeah, so I wrote React Table way back. We needed it at Nozzle. I have a video from React Summit in 2020 something. I can't remember. But I talked about React Table. That's a good one if you want to go check that out. But from there, after that, I wrote React Static, which I don't maintain anymore. It's unmaintained, I think. But this was back when SSG was really, really hot. And at the time, Gatsby and Next.js were really up and coming. And they were doing really cool things. And I put my hat in the ring and I started building React Static, which was this framework, essentially, before server frameworks really took off. And it was good. It was fast. Back then, everybody was like, "How fast can you build your static site?" And for a while, I was killing Gatsby and Next because I was doing - I think I was one of the first ones to do multi-threaded builds for SSG. If you had multiple cores, you could just zoom. And I got pretty good at React Static stuff. And then we realized that I wasn't using React Static at Nozzle very much. We didn't use it. It was just kind of fun. I was like, "You know, maybe I shouldn't put all my time into this." And then Gatsby raised $30 million. And Next.js raised - I don't know what their first round or two was, but the next round they raised was several tens of millions of dollars.
And they're like, "Yeah, and we're going to be dumping everything we have into Gatsby and Next." And I was just like, "Oh my gosh. Well, I don't really have the time or the money to compete with this right now." I decided to throw in the towel. I gave React Static to a company, a group of people that were going to keep maintaining it. And then after a while, I just switched my SSG to Next. And then after a while, I just stopped doing SSG. Not really. I mean, we kept using it through Next, but then everything started to become more hybrid. And then it just became a lot of caching, a lot of CDN caching type stuff. It's all server-driven now. I'm starting to get back to SSG a little bit. TanStack Start is going to have - already has some static site generation features to it that are pretty cool in my opinion. It's kind of like bringing back React Static for me because it was a lot of fun. [0:05:42] NN: Nice. Well, yeah, let's dive into that and talk about TanStack Start. It is more of a full framework on the level of Next and Remix, would you say? [0:05:53] TL: Absolutely. Yeah. It's a full-stack framework, which just means it has a server-first technical mentality to it. So just like Next.js and Remix, it's doing full SSR and hydration. We're running that server on deployed servers somewhere, whether that's serverless, or long-running, or whatever. And then it still becomes an SPA, just like many other full-stack frameworks. Once you've hydrated and streamed the response down, it becomes an SPA. And most of it is just based on TanStack Router. I would say 90% of the APIs that you use or are going to use when you build a TanStack Start app, you're just using TanStack Router. Start is actually pretty unrelated to a lot of the public API that you touch.
There's obviously a lot of dependency between TanStack Router and Start to do the hydration and the streaming and kind of take care of the black box stuff that nobody really ever wants to touch the implementation details of. But then, in the design for how you interact with it, all of the routing and all of the application stuff is very separate. If you're doing something with server functions, that comes from Start. If you're doing something with SSR, you don't have to worry about that. You just use the Router. Most of the time, you're just writing an application as if you're writing a good old SPA. And the reason that works is because it's isomorphic. By default, and this is how Remix is too, we're rendering everything on the client. We're also rendering that on the server during SSR unless you explicitly say like, "Hey, don't try and render this because we're using local storage," or something like that. But otherwise, everything runs during SSR and on the client by default. And that's a pretty big departure from the Next.js App Router, where they're pushing everything kind of like, "Hey, not everything runs." Well, I mean, I guess client components. What a bad name, client components. Because, really, they render on the server too. They're isomorphic, right? The difference is really just in the developer experience mentality. With a TanStack Router and Start app, it's much more like Remix where you feel you're writing an SPA and you're kind of opting into server-specific features and server-specific things as you need them instead of kind of having those server features thrust upon you. Yeah, a good example of that is loaders. I mean, even Remix in their loader pattern, loaders are server-only. They only run on the server, right? For us, that's not the case. Loaders run everywhere. They load on the server. They load on the client before you navigate. But if you do want to run something only on the server, that's where you reach for a server function.
You can create a server function, and then we will guarantee that it will only run on the server. During SSR, or if you're on the client, it will make an RPC call back to the server to make sure that it runs that logic there. [0:09:27] NN: Nice. And how do you define a server function? Is it in a somewhat familiar way, like use server, like a pragma like that, or how do you do it? [0:09:39] TL: Yeah, we have kind of two flavors. The first base layer of support uses the use server directive that you're going to see almost everywhere. You'll see that in React because they're trying to make it a standard thing. It's a bundler feature, like use server. We support that. So if you want to make a function and just put use server inside of it at the top, little string literal directive, that will work. We'll extract it and put it on the server, do a little RPC. That works. You can do that. But that's not what we recommend, mostly because it's lacking. It lacks a lot of features, a lot of things, in my opinion. If you've ever tried to do validation, or middleware, or maybe you have a server function but you want to wrap it with some client-side functionality too, it just becomes unwieldy because it's this function that gets extracted and the lines between client and server get really blurry sometimes. And the implementation gets a little blurry too. Like, "Okay, so when I call this function on the client, what is exactly happening?" Okay, it's creating a fetch. It's doing a fetch call to my server. What kind of fetch call? Are there headers involved? Can I modify those? Is it a GET or a POST? Is it just sending raw responses back and forth? Is it doing serialization? Because there's a lot of questions around like, "Well, can I send maps, and sets, and dates, and things like that? And could I use superjson with this?" Right? And when you talk about those features, the use server directive is kind of unwieldy. It's not fun to use.
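The directive form described here can be sketched as follows. This is a minimal illustration, not TanStack code: under a bundler that implements the use server directive, the function body would be extracted to the server and the client copy replaced with an RPC stub; outside such a bundler, the directive is just an inert string literal, so the file still runs as ordinary code.

```typescript
// Sketch of the `use server` directive form (hypothetical function name).
// A directive-aware bundler would move this body to the server and turn
// client-side calls into fetch-based RPC; here it simply runs in-process.
async function getServerTime(): Promise<string> {
  "use server";
  return new Date().toISOString();
}

getServerTime().then((iso) => console.log(iso));
```

As Tanner notes, this form answers none of the questions about headers, methods, or serialization, which is what motivates the richer primitive discussed next.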
We created a primitive for TanStack Start called createServerFn. And if you've ever used tRPC, it probably will feel a lot like creating a tRPC procedure, or mutation, or query. You call createServerFn and, right inside of there, you can start customizing to say, "This server function should use the method GET or method POST." You can start customizing things about how this is going to go back to the back-end. You can also start adding middleware to that. So you can chain off of it with .middleware and pass an array of type-safe middleware functions. And middleware can not only change and read the payload and the result that you're getting back with a server function, but there are secondary channels on top of that network I/O for context. Middleware can even send context between the client execution and the server execution and back again to the client without you needing to worry about passing any of that information at the call site. For instance, we helped Sentry create a middleware for server functions that does full observability from client to server and back to client again. And all you do is add a global middleware. And every single server function now has observability in it, which is way cool. You can use it for authentication and things like that. And then there's also validation, which is a really big one. Like tRPC, we're very type-safe first, right? And as soon as you cross the network, type safety is kind of fake unless you control it end-to-end. Which, I mean, if you're doing full stack, you could pretty much guarantee that most of the time it's going to be - if you just share the types, you're going to be okay. But we wanted some extra security around things. And so we added first-class validation support for server function payloads. What you can do is say .validator and you can pass any Standard Schema-compliant validator. So Zod, ArkType, Valibot, or you can just write your own if you want.
It's just a function that takes an input and returns an output, with types. And you can actually do runtime validation. And you can also say - validation by default only runs on the server. But if you wanted to, you could turn it on for the client too and get early errors from the client. [0:14:24] NN: That's what I was going to ask, is if it was just on the server or if it could be on both. And that sounds amazing. A question I have on that with the validator. If you have something, you're using the Zod validator and passing in your schema, I assume it's giving you some kind of standard - it's going to throw some kind of standard error then and then you handle that in a pretty standard way across everything? [0:14:45] TL: Yeah, if you use Zod and it throws. If it's server-only, we have a serialization utility behind the scenes that's making it so that we can serialize basically anything from the server back to the client. When it throws on the server, we'll take that error, we'll package it up, we ship it back to the client, and we'll re-throw it on the client, and you get to respond to that Zod error however you want. [0:15:08] NN: Okay, nice. And you can respond on both sides too. [0:15:12] TL: Yeah, if you wanted to. On the server side, the base validator can throw, and you can kind of wrap that in a catch if you want and say, "Oh, we'll do some extra server-side logic here if we want." The default is that if it doesn't validate correctly on the server, it will just go back to the client. And what's cool about that is validators, by default, are server-only. If you use Zod in a validator or something like that, we actually rip Zod out of the client bundle. So even though you're defining these functions right inside of your SPA kind of isomorphic code, the packages that you use inside of the server handler or the validator, they get ripped out of the client so that you're not shipping Zod to the client unless you want to.
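The chaining shape described above can be sketched without TanStack installed. This is a toy that runs everything in-process and uses hypothetical internals; the names (createServerFn, .validator, .handler) mirror the API as described, but in the real thing the handler is extracted to the server, the call site becomes an RPC, and validation errors are serialized, shipped back, and re-thrown on the client.

```typescript
// Dependency-free sketch of a createServerFn-style builder (toy, in-process).
type Validate<In, Out> = (input: In) => Out;

function createServerFn(opts: { method: "GET" | "POST" }) {
  // `opts.method` would control the underlying fetch in a real implementation.
  return {
    validator<In, Out>(validate: Validate<In, Out>) {
      return {
        handler<R>(fn: (ctx: { data: Out }) => R) {
          // Real version: client call sites become fetch RPCs; a throwing
          // validator is re-thrown on the client after serialization.
          return (input: In): R => fn({ data: validate(input) });
        },
      };
    },
  };
}

// A hand-written, Standard Schema-style validator: just input -> output,
// throwing on bad payloads (no Zod required).
const getUser = createServerFn({ method: "GET" })
  .validator((input: unknown) => {
    if (
      typeof input !== "object" ||
      input === null ||
      typeof (input as { id?: unknown }).id !== "string"
    ) {
      throw new Error("invalid payload");
    }
    return input as { id: string };
  })
  .handler(({ data }) => `user:${data.id}`);

console.log(getUser({ id: "123" })); // → "user:123"
```

Middleware chaining and context channels are omitted here; the point is just the builder shape where each step narrows the types the next step sees.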
You can just turn on client-side validation too and then we'll ship it. [0:16:07] NN: Got it. But then on the client, it's giving you those Zod errors? Or how is that? [0:16:13] TL: Yeah. On the client, what happens then is we will run your validator client-side before we send the fetch request out on your payload. [0:16:20] NN: Okay. It would be a different validator? [0:16:22] TL: Yeah. Well, it's the same validator for now. We have a to-do item to make it so that you can customize and say, "Here's a client validator if you want to do that." We haven't had anybody ask for it yet, but it would be a pretty simple change. But yeah, that's the idea. We want it to be like, "Anything you can catch on the client, do simply now. Skip the network I/O. Otherwise, just send it to the server." [0:16:50] NN: Now, speaking of type safety, one of the big features that I see come across - and this might be more of like a TanStack Router thing and a TanStack Start thing. And correct me if I'm wrong, but the big difference, obviously, is the server-side functionality of TanStack Start, but then also there's file-based routing within that. Is that true? [0:17:07] TL: The Router itself has file-based routing even if you don't use Start. [0:17:12] NN: Oh, okay. And that's just part of TanStack Router. [0:17:15] TL: Yeah, it's just part of TanStack Router. The router itself has nothing to do with anything server-side, but it has a Vite plugin and a CLI. And in fact, it even has an Rspack and Webpack plugin as well. So you can run these plugins. And we support file-based routing. And those plugins are there to give you the full breadth of type safety that we can offer. File-based routing is actually the best way to get that. Otherwise, you end up wiring a lot of things together if you use code-based routing. [0:17:46] NN: Mm-hmm. Yeah, for sure. Now, I want to dig into that a little bit. What does it mean to be a type-safe router?
Because I see that touted as like a huge feature. And to be honest, I haven't dug into it enough yet to fully understand that. Could you explain that to me? [0:18:03] TL: I think the best illustration of what we mean by that is, if you go to any example, or pretty much any application that's built with TanStack Router, and go look at the route definitions, go look at creating a route and using route APIs, and you tell me how much TypeScript you see in those files. And I'll answer that for you. It's probably none, or maybe just a little bit if you have decided to abstract some things on your own, where you've got to make your own function signatures or whatever. But for the most part, you can just write with TanStack Router and never need to cast anything, or write type code at all, or annotations. You never have to narrow manually. It honestly looks like you're just using JavaScript, but it is 100% type safe behind the scenes because everything is inferred. And what we mean by that is there's a big difference with libraries like Next.js and React Router, in that they're written with TypeScript, and they do have types, but most of the time you need to remember to put those in there. There's some step involved where you need to get involved at some level to make sure that things are going to be type safe. [0:19:40] NN: Like providing something to a generic. [0:19:42] TL: Yeah, providing a generic. Or even with React Router's new stuff, you have to remember to import the types and then grab the right types off of, you know, that file and put them where they need to go. Or even Next.js and Remix, they both have, like, a build utility where you have to remember to use the utility. And the bottom line is that these other routers that aren't type safe, they will allow you to write unsafe code. I mean, that's fine. They've been around longer than us. They need to support that. We could have done that too.
I wrote a router called React Location that allowed you to write unsafe code, but I didn't want that. Not only do we make it really, really easy to just write type safe code, but we also make it somewhat difficult to write code that's not safe because it's just inherently built into the entire architecture of the router. From the minute that you define your router and your routes and start going down into components, and loaders, and things like that, search parameters, everything is fully inferred. And all of those generics - I mean, if you just want to see how many generics we have in TanStack Router, go look at the source code for TanStack Router and you'll see some of our functions have 20 or 30 generics being passed around, which is fine. We're taking on that complexity so that you don't have to. And what you get is a system where even a junior developer or somebody who's new can come in, get the docs, use autocomplete, and write code that essentially guides you to the happy path and discourages you from making mistakes without you needing to even remember, "Oh, I need to make sure that I cast this as type safe. Or I need to make sure I remember this generic. Or I hope I'm importing the right file here or something like that." I gave a talk at UtahJS last year that was around TanStack Router, and it kind of went over all of the different ways, a long-form answer to what you asked: what is the difference between writing something with TypeScript and writing a type-safe system? And it shows you firsthand like, "Oh, this looks type safe, but it's actually not, and let's show you why." I would recommend going and watching that video if you're interested in that topic some more. [0:22:31] NN: Yeah, for sure. And this is why the type system in TypeScript can be so complex: so that you can hide away a lot of that advanced type safety from end users, and they can just benefit from it. [0:22:45] TL: And I won't lie, it's really grueling work.
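The contrast drawn above can be illustrated with a generic, non-TanStack example (hypothetical names throughout): a "typed but unsafe" API asks the caller to supply a generic it never checks, while an inference-first API derives parameter types from the route definitions themselves.

```typescript
// "Typed but unsafe": the caller supplies T, and nothing verifies it.
function getParamUnsafe<T>(key: string): T {
  // The cast below is exactly the kind of lie inference-first APIs avoid.
  return undefined as unknown as T;
}
const id = getParamUnsafe<number>("postId"); // compiles even though it's wrong

// Inference-first: types flow from the definitions, no generics at call sites.
const routes = {
  "/posts/$postId": (params: { postId: string }) => params.postId,
} as const;

function navigate<P extends keyof typeof routes>(
  path: P,
  params: Parameters<(typeof routes)[P]>[0],
) {
  return routes[path](params);
}

// A typo in the path or a wrong params shape is a compile error here.
console.log(navigate("/posts/$postId", { postId: "42" }));
```

TanStack Router applies this idea at much larger scale (search params, loaders, nested routes), which is where the 20-to-30-generic function signatures Tanner mentions come from.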
I started the adventure of type-safe routing four years ago now. In the first two years, I didn't even write any runtime code. I was just messing with types and trying to figure out how could I even architect this in a way that wouldn't require crazy, crazy things. At the end of the day, I found a way. And after we had exhausted every possible avenue of doing this without language-service plugins or massive amounts of code generation, we had done everything we could with the native TypeScript features. Then we went in and said, "Okay, how can we make it a little bit better now?" And that's when we added one file that does some code generation, right? And that's it. That's what the plugin does. It generates one file that's just creating some shortcuts for TypeScript. One of the most interesting things about type safety is that TypeScript has no idea what a file is in a file system. It has no idea about what file you're in or the hierarchy of your file system. And that's a very important thing if you're doing file-based routing. [0:24:04] NN: Yeah, for like sub-routes and things? [0:24:06] TL: Yeah, nested routing. And so we had to come up with a way to teach TypeScript about the file system that was lazily evaluated and performant enough to scale to tens of thousands of routes without crashing the TypeScript language service. And we did it. Christopher Horobin is really the TypeScript junkie who's behind a lot of those performance hacks. He's a very, very smart person. We got really far, but he's the one who came in and has really put the final polish on the type system for TanStack Router. I proof-of-concepted it, and I made it work, and I made it work right. He's making it work fast. [0:25:01] NN: Nice. Nice. You mentioned working on just pure types for a long time without any runtime code. I'm curious, did you use any way of doing automated testing to ensure those types were correct?
Or how do you approach that when you're not actually writing runnable code? [0:25:20] TL: Well, I mean, if you're just writing pure types, you can go really, really far without needing testing because something will just not compile or break if you're doing it wrong. [0:25:33] NN: Yeah. TSC is your test? [0:25:36] TL: Yeah. And at some point, though, it gets to be big enough to where you need to guard against regression. So when we were just building fresh, it was just like, yeah, TSC is good enough to check ourselves. But then when we said, "Okay, we figured it out, solidified it," we needed tests, more for regression catching. We just use Vitest and we write our own .d.ts files and we use Vitest's TypeScript stuff to say, "Expect this type to equal this type." And, not just for the most part, it works great. Yeah. We'll go through and we'll change the types or fix bugs or whatever and it will say, "Hey, you have a public type contract that you're breaking." We don't do that for private internal types. We only test public types so that we can go in and mess things around and re-architect the types if we need to. As long as we have those outer contracts, we're good. [0:26:41] NN: Yeah. And that's exactly why I was asking about that, is I'm in the mindset of thinking about the developer experience and specifically, like you said, not regressing it. And so having a way to ensure that this is not just going to give you some weird type or some unknown or any type. It's going to be what it always was. [0:27:00] TL: Yeah, we have extensive type testing as well. Just for Router, we have over 200 test suites that run across all the packages in Router. It's a lot, which is a stark contrast from about a year and a half ago, when we had zero. Big props to my team. They're much better at writing tests than I am.
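The style of type-level regression test described here can be sketched without any dependencies. Vitest's expectTypeOf and assertType utilities give the same idea with a richer API; in this bare version, a drifted public type contract simply fails to compile, which is enough to break CI.

```typescript
// Dependency-free type-level assertions (the type names are illustrative).
// `Equal` is a well-known trick: two types are equal iff the two generic
// function signatures below are mutually assignable.
type Equal<A, B> =
  (<T>() => T extends A ? 1 : 2) extends (<T>() => T extends B ? 1 : 2)
    ? true
    : false;
type Expect<T extends true> = T;

// A hypothetical public type contract we want to freeze:
type LoaderResult = { posts: Array<{ id: string; title: string }> };

// If a refactor changes LoaderResult, tsc fails right here - a regression
// is caught at compile time, with no runtime test needed.
type _contract = Expect<
  Equal<LoaderResult, { posts: Array<{ id: string; title: string }> }>
>;

console.log("type contracts hold");
```

Testing only public contracts, as Tanner describes, is what leaves the team free to re-architect private internals without churn in the test suite.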
Sean Cassiere, and Manuel Schiller, and Christopher Horobin on the type side. They're all much better developers than I am. [0:27:41] NN: Another question I have around Router and TanStack Start, those are both in the React ecosystem, right? Are there plans for them to kind of follow other TanStack projects and kind of abstract themselves from React and be more multi-framework supported? [0:27:59] TL: I was actually on stream about a week ago with Ryan Carniato. We officially announced TanStack Solid Router. [0:28:09] NN: Oh, wow, cool. [0:28:11] TL: That's already out. Fully tested. Passes all the tests. Got the seal of approval from a lot of the Solid team. In fact, I think Burke, and - his name's Brenley; I always knew his screen name, Brenelz. But Burke and Brenley from the SolidJS team, they really like TanStack Router and Start. And they only started on it three and a half weeks ago. They're like, "Hey, let's write an adapter for Solid." They threw in the test suite, and they just started cranking away, passing tests, and then they're like, "We're done." I was like, "What the heck?" We launched TanStack Router for Solid last week and it works great. And then I just got a message this morning from - let's see. I want to double check who it was. It was from Brenley. He's like, "Hey, so just so you know, TanStack Start for Solid is pretty much working." [0:29:05] NN: That's amazing. [0:29:05] TL: Are you joking? I mean, there's still some things to polish up. But it's incredible. Actually, we named it TanStack Start because I knew that Router was probably going to go to other frameworks, but I didn't know if Start would. Well, just a couple of days ago, Manuel Schiller on my team, he's like, "Hey, by the way, we're renaming TanStack Start, the package, to TanStack React Start." And I was like, "Oh. Oh. Oh." And he's like, "Yeah."
There's going to be - there already is an internal package at TanStack/Solid Start, which can be confusing because there's also Solid Start. But it's a work in progress in terms of how we're managing those two projects, Solid Start and TanStack Start. We are working very, very closely with the Solid team. They are in the TanStack org, we're in their org. We are cranking on some really cool stuff right now. Solid Start is actually already using a lot of the new plugins that we built for things like server functions. The stuff that I told you about server functions 10 or 15 minutes ago, we built our own plugins to do that in a way that's framework-agnostic. So you could do it across React, or Solid, or whatever. [0:30:33] NN: It's validating that abstraction on top of server components and things that are more React specific? [0:30:39] TL: Yes. The use server directive. There's a package called, like, TanStack Directive Functions Plugin. It has nothing to do with React. It's just like set it up. You can inject your own runtimes into it that can call into your own code. And, in TanStack fashion, I made it extremely inversion-of-control friendly. We put it in, and then the Solid team, Burke and Brenley, were like, "Oh, let's replace the one in Solid Start with this." They swapped it out. And then even Brandon who made AnalogJS, he was like, "Oh, I'm going to swap mine out too." And so he swapped his out to use this server function plugin. And then Dev Agrawal, he's like, "Oh, I'm doing something for Signals. I'm going to use this for Signals, too." He swapped it out for Signals, because it's a directive plugin, not a use server plugin. You can actually support other function directives if you want to extract them out, which is kind of nuts. I think he's playing with something like useSocket, where it extracts it out and you can do custom socket logic on client and server. It's cool stuff.
Needless to say, we're working together very closely. I don't know if Solid Start and TanStack Solid Start are going to merge someday. I'd say it's a possibility, but it's more likely that they just kind of take on different roles, where Solid Start might be just kind of the example framework to say, "Hey, this is how you can do a router-agnostic full-stack framework on top of Solid. Check it out." Right? And it's scoped down a little more, kind of like, "This is a good way to learn about it or just do something simple." And TanStack Solid Start will be more of a full-fledged product where you'll say, "Okay, we really want to use Solid and we really want a full-fledged meta framework that's going to benefit from extra stuff. Maybe we'll use TanStack Solid Start." That's probably where it's going. But we'll just have to see. We're all just kind of playing it by ear where it's just like, "Let's just go out there and build cool stuff that we can share and see what happens." So far so good. [0:33:14] NN: I'll say that's amazing. You've had this ecosystem of TanStack products for a while, right? Query, Form, Table, and now a router and a whole framework. Do you see them working together as an ecosystem where developers could maybe pick up these individual pieces and build on top of them to then support these more top-level Solid or React framework-level things? [0:33:40] TL: I mean, I'd say that's already happening. It's already happened. Yeah. Because we have framework adapters for all of the other libraries already. Some of them are more mature than others. And a lot of that is just based on the popularity of the framework and how many people use it, right? The Svelte adapter for Table needs some love. But there's not a lot of Svelte devs out there who are also like, "Oh, I'm going to use TanStack Table and then let's make it better," right? I mean, some of it is just talent there, but the adapters are there and they work.
You can wire them together if you want or you don't have to. Definitely, our goal is to stay away from some kind of monolithic structure where everything only works well together and they have better support for each other than they do for other things. We want to make sure that it's more like Unix-style composable blocks that work well with everything. If the right APIs are there, designed with good inversion of control, they should work well with everything. Actually, two weeks ago, Jack Herrington built a new tool called create-tsrouter-app, and it's a drop-in replacement for Create React App. [0:34:59] NN: Really? [0:35:00] TL: Yeah. Other than some differences between Webpack and modern Vite. It doesn't support like old-school ES5 output and stuff like that. But for all intents and purposes, it is a drop-in replacement for, "Oh, I was going to use CRA. Oh, I'm going to use CTA." What's cool is you can just say npx create-tsrouter-app, and it drops in and it looks exactly like CRA. But then behind the scenes, you go to, like, that main app file, it looks exactly the same. But if you go up a level to the main entry, you'll see that we've already set up TanStack Router for you just with a single app or a single route that's just going to this one component. And you're like, "Well, what if I didn't want code-based routing? What if I wanted file-based routing?" Well, then you can add a --file-router flag to the create-tsrouter-app command, and it will give you file-based routing. And then you're like, "Oh, what if I want to use Solid?" You can do --solid now, and it will do Create React App essentially but with Solid instead, using TanStack Router for Solid. And then going even beyond that, we have add-ons where you can say --add-ons and it brings up this select list where you're like, "Oh, I want to add Tailwind, Shadcn, Sentry. I want to add Netlify stuff, like a demo for Netlify things."
You can just check off a bunch of stuff and we'll install them, give you demo pages for them, and wire it all up for you. And then we also have templates, too, which Create React App had as well. But, like, we have a template called TanChat that's coming out. I think it's an Anthropic one, "Hey, here's just a really fun demo TanChat thing." Sentry has an example where you can manually trigger errors in a couple of different places and watch them come into your Sentry dashboard live with full-stack observability. It's really cool. [0:37:18] NN: Not that it matters. I'm just kind of thinking. But I know that with Create React App, it was kind of doing this weird managed thing where it wasn't exposing you directly to Webpack, right? It had a minimal configuration. [0:37:30] TL: Yeah, they had their own package called - [0:37:32] NN: React Scripts, I think. [0:37:33] TL: React Scripts. Yeah. No, we're not doing that. I think that's dumb. I mentioned to somebody that it's almost like pre-ejected in a way, but not in the way that Webpack was, where it's like now you have this huge Webpack config to handle. Really, there's a vite.config.js file sitting there, and you're like, "Oh, it just works because we're using the React Vite plugin and it works great." And if you want to go in and add anything or do anything with Vite, you just add the plugin, you do whatever, you know? And it's like you can still upgrade Vite, and upgrade your plugins, and upgrade the tech even though you're customizing things, right? It gets away from that feeling of being locked into this, like, "We're going to fully manage everything for you because we don't trust you." I mean, with Webpack, there was good reason around that. It's like, "I don't trust a lot of people to do that either." But with Vite nowadays, it's like, "I trust people with Vite. Go ahead." There's only a few things you could probably do that will mess things up terribly, and you can always just roll it back or whatever.
[0:38:41] NN: No, that definitely sounds like a better approach. And that's kind of why I was asking, because there's such a great ecosystem of tools within Vite itself that hiding that away or making it more difficult to adopt anything else is almost a detriment. But it sounds like you're obviously doing the right thing. That's great. [0:38:55] TL: Well, what's cool too is it works with Start. Start is currently in beta, right? And for now, for today still, we use Vinxi as a little runner. You say, "Vinxi start, Vinxi build," or whatever. I'm actually working on dvin. Internally, we call it DaVinci. But we're working on just using Nitro and Vite directly, like a Vite plugin. We wanted to do that from the beginning, but we just needed to move fast. Vinxi let us do that. But now we're to the point where we're just getting rid of superfluous stuff. And we could talk about that, but we don't need to. But anyways, if you add --start to that create-tsrouter-app command, you'll get a Start app. And it will actually look exactly the same as Create React App, but it's server-side rendered and has routing installed already. It's pretty cool. Now I'm obviously biased, but I think it's the best way to start a new app these days. [0:40:00] NN: Nice. Yeah, I'll definitely have to check that out, as I'm constantly creating new apps. You did mention TanChat, and that got me thinking about AI. And so I wanted to ask you what AI means to you kind of day-to-day. Are you using it in your day-to-day development? Or what does it mean to you? [0:40:20] TL: Yes. I pay for ChatGPT and I use that all the time just for personal stuff. I mean, I basically use it instead of Google now. And, actually, I can't remember the last time that I used Google to research something. I use Google all the time to search for a site that I need to go to, you know? It's become more like AOL. What was it? AOL keywords or something like that? You remember those? 
[0:40:50] NN: Unfortunately. [0:40:51] TL: Yeah, but Google now is less of a research tool for me, or a question tool. I just send all that to OpenAI, ChatGPT. And then for programming, sometimes I'll use GPT because it's just option-space and it's just kind of there. But I'm using way more Cursor lately. Really, I still think Cursor is better today than what Copilot has and what Windsurf has. They're all really good, but Cursor is just amazing. And I don't even use a lot of the agent stuff. For the kind of code that I'm writing, there's not a lot of prior art out there. I don't really trust agents to go and write the kind of library code that I'm writing. When I'm building an app, or a site, or templating some Tailwind, or something like that, I'm like, "Oh, yeah. Stick an agent on that and let it write kind of the grunt-work stuff." But for me, it's way more of a utility to have Cursor sit on top of the TanStack Router repo and see and index all of the patterns, and type utilities, and things that are inside of our entire codebase. And then when I go and try to write a new feature, it's helping me remember, like, "Oh, yeah, you have this API. I'm going to use it. You use this pattern here. Let's use the same pattern here." And it's frighteningly smart at autocomplete and just taking my ideas of, like, "Oh, this is how I want to architect this thing." And it just kind of tries to read my mind. And because it has enough context in my projects, it usually does. And I wouldn't say it's necessarily coming up with novel ideas for me. Instead, the two places where I think it does help are, first, it just helps me go faster. Gets my ideas out faster. I don't have to type everything. I mean, even Copilot was great at that, even if it was one line at a time. It just helps me go faster. And also, it helps me when I brush up against areas of programming or other APIs or services that I'm not super familiar with. 
I'll kind of be like, "Hey, I think this is what I need." And then it will kind of autocomplete. And I'll be like, "But that doesn't work." It's almost like a learning tool as well, to learn fast, to say, "Hey, you know what? I don't know how this Node streaming API thing works." I'll just be like, "Command-K. Write me this logic, but I need you to use Node streams. Then I need you to convert them to web streams." And then I'm watching it go, and I'm learning as I use it too. [0:43:45] NN: Yeah. I mean, that's a fantastic way to put it. We're no longer really in the world where we're staring at a blank file and figuring out how to get started. You can just throw something out there, and you're good enough to know whether this is close or not close at all, and kind of refine it from there, either with AI's help or manually. That's how I'm approaching it as well in my day-to-day work. It's a lot of managing context. [0:44:11] TL: That's a great way to approach it. And I would also say, on top of that, there's always going to be discussion about, "Is AI going to replace me?" And I get asked that a lot in private DMs or whatever. It's like, "Are you worried about this?" And to that, I would say look at how much code you're writing that the AI is currently unable to do correctly. If that is close to 0%, then you're going to be replaced soon. If it's 50%, man, you're fine. If 50% of the code you're writing is impossible for an AI to figure out how to do correctly, you've got good job security, you know? For me, if I ever get to the point where it's like, "Oh, AI could design me a type-safe router repo to be used by everyone and everything and their cat and dog or whatever," as soon as it can do that, I've got to evolve. That's usually a good measuring stick for me. 
And I think when you use that measuring stick, there are actually very few people who are truly just YOLOing some AI code out there without even being able to say, "Oh, I've got to test it, and try it, and debug it." Right? But it's getting there. It's pretty scary how good some stuff is that can one-shot things. [0:45:37] NN: And you mentioned prior art and how it's not super useful for that, that there's not a lot of prior art in what you're doing. And I think, coming at it from another perspective, you are in a unique position in that you're releasing a framework within the last year, where there's not a lot of prior art for these models to be trained on specifically. I'm just curious. Next, and Remix, and all of them do have that prior art. The AIs can have a little bit of context to help. They won't know a lot of the new features, but that's a feature-level thing. If I ask an AI about TanStack Start - I haven't, by the way - but if I did, it might not know much of anything. And just as an author of a new framework and paradigm, I'm curious if that's something that's on your mind, how to approach that or how to help developers get over that hump of, "ChatGPT is not going to be able to help me create a server function because it doesn't know what that is." [0:46:30] TL: Yeah. Some of that is just time. Time will heal that to some extent. I also think that there's entropy that needs to happen for us. And I'm willing to be patient for that. There's also atrophy for outdated and old patterns from other frameworks as well. Right now, you can go to Next and say, "Hey, write me a Next app," or, "Write me a React Router 7 app, whatever," and you're going to get Pages Router, and you're going to get React Router 6, or Remix, or something like that. There are challenges all around. I would rather have the challenge of, "Hey, you know, they haven't indexed us yet. They haven't been trained on us yet." And you can use tools like Cursor to point your stuff towards our documentation. 
Or we also - I can't even remember what it was called at first because that's how new it was. But it's like a protocol, an AI tool protocol? What's it called? MCP. It's called the Model Context Protocol. We are adding support for TanStack to be like an MCP service, so that you can point your MCP-compliant tool at TanStack and say, "Okay, TanStack is now a tool, a utility of knowledge." It can use our database as a way of teaching you how to do better stuff. I'm looking at things like that. I'm just not necessarily worried about it, mostly because the same things that I would be doing to teach an AI are the same things I'm going to be doing to teach users. We have to make sure our documentation is outstanding. Plenty of examples. Get more and more people writing projects with it that are open source and public, that use the good patterns. Resisting breaking changes, resisting stuff like that. It's almost okay that they're not indexing us yet because Start is still beta. And some things might change just a little bit. I'm not worried about that though. It is hard even for me. I get into the source code of TanStack Router and I'm like, "Hey, I need to do this thing with AI." Because there's no prior art, it starts bringing in React Router APIs and Next.js APIs because that's all it has. And I'm like, "Okay, you've got to stop." [0:49:09] NN: Yeah. And another thing that I've seen people do - I've kind of started to see this on doc sites and things - is like, "Here's a Cursor rules file that can help you work with our project." Maybe something like that too. But I also think the MCP thing, that's so new. I think I only heard about it because I just started playing with Claude Code, and I think that that can do something with it. Yeah, this is changing so fast. And it sounds like you're on top of it. That's awesome too. But also, not worrying about it because it's not really needed yet. Cool. [0:49:40] TL: Sorry. 
I just kind of messaged my wife really quick. [0:49:42] NN: Sure. Yeah, is there anything else you wanted to bring up or talk about today, Tanner? [0:49:49] TL: Oh, let's see. I'm excited about DaVinci. We're going to be using Vite and Nitro directly, and a lot of that includes the new environment APIs. And React Router, they're experimenting with this now too. It's a hard upgrade path. It's a different world, the new environment APIs, but they're very valuable because you can run things like a native Deno kind of environment, or you can run workerd, if you're going to go to Cloudflare, on your machine, and kind of replicate the environment that you're going to be deploying to on your own machine, which is really neat. We're working really closely with the Nitro team to make sure that we can support as much of Nitro as possible, and it should be almost everything. As soon as we get rid of Vinxi, we're going to have support for deploying to over 30 deployment destinations right out of the gate. And you'll be able to write server-side code once and literally just migrate between all of them, because that's what Nitro is. It's like the server-side toolkit that kind of goes everywhere. As long as you're using the Nitro APIs, you could ping-pong between Netlify, Vercel, Cloudflare, whatever, and not have to change really anything. If you do it right, all you have to do is change a string from Vercel Edge to, you know, Netlify or Lambda or something like that, and Nitro just takes care of the rest. I'm really excited about that. I'm excited about the SSG stuff. That's a big reason that I'm doing the move to Nitro and Vite: I want more control. I want you to be able to ship SPA mode, like true SPAs, with TanStack Start, where you can pre-render HTML documents as landing pages that are kind of like PPR, where as much of the page as possible is already rendered out. 
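The "change a string" idea maps to Nitro's deployment presets. Here's a minimal sketch of a standalone nitro.config.ts; the preset names are Nitro's published ones, though in a framework like TanStack Start the preset would typically be set through the framework's own config rather than a raw Nitro config:

```typescript
// nitro.config.ts - retargeting the same server code is a one-line preset change.
import { defineNitroConfig } from "nitropack/config";

export default defineNitroConfig({
  // Swap this string to deploy elsewhere, e.g.
  // "vercel-edge", "netlify", "aws-lambda", "cloudflare-pages", ...
  preset: "netlify",
});
```

Because all server code goes through Nitro's runtime APIs, the build output adapts to whichever platform the preset names, with no application-code changes.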
And then where you want to start your dynamic stuff, you can have these dynamic holes that fill in as soon as you mount. And then, obviously, this will probably be after we do 1.0, but we will bring server components to TanStack Start very soon. They're just going to be a server function that happens to return React code. Yeah, it'll be really simple to use. I think it's going to be refreshingly simple for people to use, even more so than React Router's React server components. They're doing almost the same thing, but you can only use them inside of a loader. We're going to make it so that you can use them anywhere that you can define a server function, which has nothing to do with the router. I mean, literally, if you just wanted to have server functions and React and no router, you could if you wanted. That's a weird world. It's almost like what Waku is, but it would be totally possible. [0:52:51] NN: The future looks very bright. I'm very excited about all of this. [0:52:54] TL: Yeah, me too. [0:52:56] NN: Where can people find you? [0:52:58] TL: I'm usually on Twitter @tannerlinsley. If you want to get more personal, you can jump in Discord. We can talk in Discord, share code. Yeah, mostly in those two places. I don't stream a whole lot. I prefer just to work. But, yeah, I mostly hang out on Twitter and in my Discord. Yeah, I'm very open to people who want to DM and chat and whatnot, if you have cool ideas or feedback. As long as you don't want to shout at me, we can chat about whatever you want. [0:53:31] NN: Darn it. Okay. No, thank you so much. And thanks so much for coming on and sharing all of this. I genuinely learned a lot about TanStack Start. I had no idea about the Solid variant of that, so I'm very excited to go look into that. [0:53:45] TL: TanStack Solid Start is probably coming very soon, in the next couple of weeks. [0:53:50] NN: Awesome. I can't wait. [0:53:52] TL: Well, it's a beta. It'll be beta just like the React version. [0:53:55] NN: Yeah. Nice. 
Well, thanks so much, Tanner. And we'll catch you next time. [0:53:59] TL: Thanks. [END]