EPISODE 1725
[INTRO]
[0:00:00] ANNOUNCER: In 2022, Stefan Li and Stew Fortier envisioned a document editor with language model features built in. They founded Type.ai, received backing from Y Combinator, and have since been at the frontier of building a next-generation document editor. However, to ensure a robust and performant front-end, Type.ai needed to take advantage of many modern browser features. Stefan Li is the CTO of Type.ai, and he joins the show to talk about the state of front-end dev, the service worker API, IndexedDB, the shared worker interface, Web Locks, and more. Gregor Vand is a security-focused technologist and is the founder and CTO of Mailpass. Previously, Gregor was a CTO across cybersecurity, cyber insurance, and general software engineering companies. He has been based in Asia Pacific for almost a decade and can be found via his profile at vand.hk.
[EPISODE]
[0:01:08] GV: Hi, Stefan. Welcome to Software Engineering Daily.
[0:01:12] SL: Thanks for having me.
[0:01:12] GV: Yes, so Stefan, really great to have you on the podcast today. Just a quick fun fact: it was you who, in fact, introduced me to SE Daily many moons ago. So, this is kind of fun that we're recording this episode today. You are one of the co-founders and you're the CTO of Type.ai. We're going to hear all about the platform and we're definitely going to be diving into a lot of the technical parts today. But just before we dive into Type.ai, I certainly know you've had a pretty successful career before all this in different software engineering roles. Could you maybe just give us a bit of a whirlwind tour of Stefan's life before Type.ai?
[0:01:56] SL: Yes. Whenever someone asks this, I'm forced to recognize the fact that I've been a software engineer for like 15 years now, which is kind of shocking whenever I think about it. But yes, right after uni, I started out in mobile development. This was just a couple of years before smartphones started coming out, so it was mostly C and Java. Then, when smartphones came out, I started doing some iPhone stuff and then transitioned into Android. I spent a couple of years at Sony Ericsson, then a couple of years at Walmart, helping them build their first mobile apps on Android. After that, I did a few failed startups, then spent some time at Meta, again doing Android stuff. Then I decided I was going to try my own thing again. I started a solo project, which was a flashcard-based application. I did that for a couple of years. That was mostly web-based, so I transitioned into web development at that point. The reason being that it was a desktop-first application, basically, so it made sense to build for the web. I did that for a while. It was kind of tough just being a solo developer. I hit a point where I still had some energy left, but I was close to giving up. At that stage, I met my current co-founder, Stew, in New York, at a co-working space, actually. This was the end of 2022. So, yes, Stew actually sat next to me at our co-working space and we became friends. One day he just pitched me this idea: "Hey, what do you think of an AI-powered document editor?" At the time, it felt like a very novel idea. This was a few months before ChatGPT was out, basically. Language models were a thing, but they were nowhere near mainstream; regular people just didn't know what they were. It seemed like a really novel idea.
I was like, "Yes, this sounds good." I had some experience working on editors from the flashcard application, rich text editors. So, yes, we decided, "Let's do this." We applied to YC, and then, yes, the rest is history. [0:04:13] GV: Yes. Then you go into YC and, yes, got some funding from there. I remember, I think I remember you mentioning when you kicked this off and I think you said to me, "Have you heard of the OpenAI APIs?" I was like, "No, I haven't." It's very funny to think back, there was a time when a developer wouldn't even be aware of them and it was not that long ago in relative terms. So, that's kind of fun. [0:04:37] SL: Yes. Things went crazy very fast. I mean it was like a matter of months and we went from thinking we were a novel idea to having like maybe 20 sort of competitive products which was like pretty gnarly, but in some ways, also forces you to have conviction in your idea and continue forward. [0:04:58] GV: Yes, completely. Before we dive into sort of more the weeds a bit on the pure technical, could you just give a kind of - I mean you've sort of done it already, but like a high level off Type.ai. I guess, you've already kind of touched on it, it is very much powered by LLMs and I'm aware it's OpenAI is one of them, like GPT is one that you use and Claude from Anthropic is, I believe, one of the other main that you use. Could you just speak a bit to kind of the choices there as well? That'd be kind of interesting to hear. [0:05:30] SL: Yes. So, the sort of two-sentence version is like we're building a document editor with AI tools on top that helps the user write in different ways. By document editor, we mean something akin to Google Docs, and then AI tools that can help you sort of generate ideas, that can assist you with kind of iterating on your text, generate content. Really, the goal is kind of just to accelerate the writer's journey, right? Going from inception to the finished stage. We just want to help speed that up in any way we can, while retaining or maybe even improving quality. So, there's really two main pillars here. One is like creating what we feel like is a really good editing experience, even if you take the AI stuff aside. Then, there's like the AI component, which obviously is very important, but we kind of want both to be able to stand alone in some ways. The AI component is, yes, how do we best provide the UX interaction between an editing environment and an LLM? In terms of the models, we try to build it with the thinking that these models will always improve, right? We try to solve problems that are challenging on the UX side and not necessarily super challenging on the AI side with the expectations that models get better and we'll just solve those problems better and better, which makes life easier for us and for our users. It's been kind of interesting the decision to be made, on how we exposed which models are being used to the users and so on. There are users that are very well aware of the fact that there are different models and even aware of the differences between them, and then there are ones that just don't care. So, there's like a balance to strike there and like what choices do you make by default and what options do you want to leave to the user? Yes. I'm not sure that answers the question, but happy to dig into any of that more if you want. [0:07:32] GV: Yes. 
I mean, I think maybe just super briefly - this isn't an episode on LLMs, but I think it is interesting. I assume a lot of the listeners will be aware of those two models, or I say two models, but given their various versions, GPT and Claude. It is interesting, I first experienced Claude through Type.ai. That was kind of interesting. My other half, as well, only recently discovered Claude. Of course, I've been well aware of it for quite a few months now as a result of Type. But her experience of that model has now kind of changed her perception of what's actually possible. So, I guess, is that kind of how you've had to think about the choices as well? Why Claude? Why not something else? Or why would you consider taking GPT out in the future if there was something that made more sense? Or are you thinking just to keep adding models? Or is there a reason it's two models? How did that kind of play into the decision-making?
[0:08:28] SL: Right. So, I mean, I guess if you zoom out a little, when you think about what the parameters informing what to choose are: one is obviously quality of the model. The second is cost. The third is performance, right? But assuming those things, you kind of just want to go with whatever is "best" at the moment, whatever has the best combination of these characteristics. For the longest time, that was OpenAI. Right now there are basically three big players, right? It's OpenAI, it's Anthropic, and it's Google. There might be more players emerging. We'll see about that. But right now, those are the three realistic options if you want the highest possible quality. For the longest time, OpenAI - I say longest, but it's only been like, what, 14 or 16 months or something, right? But OpenAI was the de facto leader. That lasted until Claude 3 came out, Claude 3 Opus. Suddenly, they had actually surpassed OpenAI in terms of quality. At that point, it's just kind of a no-brainer that you want that in the product and you want users to be able to leverage that increasing quality. I think that's just how we're going to keep operating. We want to make sure the product works well with the models that provide good value to the users.
GV: I think, just in case it's not clear to listeners, the sort of style of output from, say, GPT versus Claude is just quite different, especially in a document editor where you are trying to help people write better, I think. Certainly, I love Claude for that purpose. I think it's light years away from GPT in terms of just what comes out at the end. But then I'm also - I mean, I've used Type.ai quite a lot, but I'm not always sure how much is the secret sauce that Type.ai has been putting on top, because I'm aware you ask for, say, tone reference or fact reference, and then how much is kind of Claude out of the box. I mean, is there a way you can give a simple answer to that?
[0:10:35] SL: Yes. I mean, to be honest, there's not that much secret sauce in terms of how we interact with these models. I think the beauty of these models is that they are quite easy to interact with. I mean, they're almost magical in that sense. But, yes, we definitely notice it too. For our product specifically, there are two sorts of things to keep in mind, two tracks essentially, in terms of how the model behaves. So, one track is reasoning power, which is about telling it to do the right thing.
In terms of getting it to, for example, modify the right text, retaining the richness of the text, retaining the formatting, inserting it in the right place, and so on. That is just pure reasoning power. Then there's the track of, does it actually create good text in terms of what the writer desires? Now, Claude has gotten to a place where it's clearly better at, I think, the latter - producing text that writers desire. It's also now better at the reasoning as well. But anyway, those are the two things to keep in mind. Then, the "secret sauce" is mostly about how we get the models to do things behind the scenes so that it's just seamless for the user, no matter what operation they're taking inside the application, which depends on the context, right? That can be whether you're in the chat, or you're inline in the document, or you're generating a draft based on input, and so on.
[0:12:10] GV: Yes. I think that's a sort of nice segue. I mean, a document editor for anyone is quite a personal choice. I mean, for a lot of people, it's forced upon them, I guess. You work somewhere and it's Google Workspace or it's MS 365. But at the same time, I think everyone kind of has their preference. For me, generally speaking, it's Google. I much prefer writing in Google Docs to MS 365 these days. So, I'm aware that the experience piece of this has had to be a huge component of what the product is. As you say, you're building on top of these models. That does require quite a lot of effort in one regard, but actually a lot of the effort is in how you make this available to the user, and what they're interacting with every day, and on what device, et cetera. So, that's kind of, I think, where we're going to go for a little while now, which is how this was all really put together under the hood. I'm just going to dive in fast here because I know, from day one - and given that I also knew you when you were doing the flashcard startup as well - you've been quite bullish on what could be termed thick architectures. Maybe could we talk a bit about, in this context, what is thick versus thin, really, and how does this relate to Type? How does this relate to the concept of local first and so forth?
[0:13:33] SL: Yes. So, I think for me the starting point is answering the question, what type of user experience are we trying to provide and build? I think one way to do that is to look at other applications that at least I admire and look up to as kind of role models. That's applications like Linear, Figma, Superhuman, right? These applications have some common traits. They have very robust syncing that happens seamlessly. They are very fast. There are basically no loading spinners when you do anything in the application. There's no delay. They are kind of collaborative in nature. Well, at least Linear and Figma are; Superhuman is less of a collaborative app. But still, as we'll get into, solving collaboration is almost the same challenge as solving real-time sync. Yes, for me, I look at those applications and I go, that's the kind of experience I would like to deliver. At least that's what we're aiming for. I wouldn't say we're quite there, but that is kind of the holy grail. Two common things that these have, right? Again, it's robust syncing and it's high responsiveness.
If you think about what's the best way to solve that, it turns out that you kind of automatically go down the local-first track. Yes, let's see, how do we tease that apart?
[0:15:03] GV: Just for those that, again, aren't super familiar, how would you define local first?
[0:15:08] SL: Yes. There's kind of the stricter definition. First of all, it's terminology that's gotten more popular over the last couple of years, right? I don't know exactly when the term was coined, but basically when it started, it was meant to mean all your data is always local. It shouldn't be dependent on a server. Essentially, all actions you can take should be able to be taken with no connection. Even if the server goes down indefinitely, the app should still be able to function. You should even be able to sync the data locally between, say, two devices, even if there's no server. That's the really strict definition. Then there's a looser definition, which is essentially that everything is still cached locally, but you still have some dependency on a centralized server. That's, I would say, the more practical version of it, and a lot of the apps that I mentioned fall under that definition.
[0:16:06] GV: Yes. In terms of - you kind of touched on it, referencing some apps that you and I look up to: Linear, Figma, Superhuman. But again, why do it? What, then, are some of the key things you believe you can deliver by taking a local-first approach?
[0:16:26] SL: Yes. Again, assuming you want these two things, right? You want really high performance and you want robust sync, and then maybe you want collaboration as well. You start thinking about, how do I solve this, right? Well, the first thing that comes to mind is that you need some form of optimistic updates. But if you do optimistic updates without being fully local first, now you have the problem of follow-up updates that depend on your initial updates. How do you handle those, right? You kind of play this out and where you end up is, "Oh, this needs to be local first for everything to work, basically." Then you get the added benefit, when you go down that route, that the app will work offline, and you will also solve a lot of the complicated issues that arise for collaboration, which is something we don't support right now. But we definitely aim to support it relatively soon. One way to look at it is that there's this suite of features with a lot of overlap between them. If you decide to go local first, you kind of just solve - you're forced to solve - all of them, essentially. So, I think that's why. Why do it, right? Again, this is all to deliver the user experience that we believe is high-quality, basically. And then, to go one step further, what does that mean in practice, right? In a document editor like ours, for example, you should be able to create folders, create documents, edit the documents, search your documents, and all of that should be as fast as possible. Instant, if possible. That's kind of the end result.
[0:18:06] GV: Yes. Kind of covering off the key tenets, performance is clearly something that's been hugely important to you and to what you believe the user should get, which makes a ton of sense. I guess maybe some people listening could think that local first just automatically means that the offline experience is kind of solved.
But I imagine there's actually quite a lot still needed to make offline support an experience that someone would expect - if they're even aware that there is offline support at all.
[0:18:36] SL: Right. I guess baked into what local first tends to mean is not just that the data is local, but also that you should be able to sync it, right? So, one way to look at it is that the offline part is actually simple. It just means the app continues to operate when you don't have a connection. The hard part is, "Okay, when you do have a connection later, how do you sync this data in a way where it's consistent across clients?" Essentially, how do you achieve strong eventual consistency? Which I'm guessing we'll dig into more later, but that's the crux of it. That's what makes offline first hard. It's the syncing.
[0:19:18] GV: Yes, that makes a ton of sense. Before we maybe dive into the syncing mechanisms, I think it might also be good to touch on the actual state of the web that makes this even possible today. Because I imagine, as you mentioned, through a lot of your career, you were working with native applications. And kind of by default - I mean, I'm just putting it out there, I've never worked on native applications. I've always worked on web applications. So, I assume for native, local first is just the default, and actually, we're now looking at how the web can kind of emulate that. Is that a good way to think about it?
[0:19:57] SL: Right. Because to your point, with native applications, whether it's a mobile application or a desktop application, by definition you download a binary and then that is installed, and from then on, you have it, right? Which is obviously not the case traditionally with web applications. So, the first question is, how do you even cache the application itself, right? The big unlock there is the service worker API, which has been available for quite a while now. The service worker API essentially allows you to install a specific type of web worker, which is a separate thread from the main JavaScript thread. So, it allows you to install that as a piece of code that will remain even after the browser tab or window closes. The service worker allows you to intercept any request that your application makes and gives you an opportunity to cache that request and to fetch something from the cache instead. That is essentially the mechanism that allows you to fully cache your application - not necessarily the user data, but the application itself - and allows it to be reloaded and used even if you don't have a connection. Then, there's a bunch of other technologies that work in conjunction with that and allow you to build the remaining pieces. You also need some way of storing data locally. Most of the time, that would be done through IndexedDB, which is a standardized web-based database. Then, you have stuff like shared workers and Web Locks, which help with some of the other pieces, like coordinating between tabs. We can get into either of those in more detail as well. It's kind of cool. Over the last, I would say, two to three years - maybe going back even further - a lot of these things have matured to the stage where you can now do this on the web.
[0:21:55] GV: What's the difference between a service worker and shared workers?
[0:21:59] SL: The intent of a service worker is to do what I said earlier, right?
It's to cache the app itself and assets that the app might fetch through HTTP requests. Now, it can be abused to do other things as well, which we have done in the past, which is probably not a good idea. But that is essentially what it's designed for, what I just mentioned, right? It has a very specific life cycle that is designed with that in mind, which decides how you update the service worker, when it starts, when it stops, and so on, right? A shared worker, on the other hand - they seem very similar at a high level, so it can be hard to tease out the differences. But a shared worker is similar in the sense that it's also shared among tabs. There's only one instance of it running, but it has a different life cycle. It starts very rapidly when you open up your first tab. It closes down rapidly when you close all of the tabs. It remains alive at all times as long as there's a tab open, whereas the service worker can kind of shut down at any moment. There are other differences too, but I would say those are the main ones. The shared worker is what you would go to for coordinating between tabs, right? For example, if you have a synchronization process, which we have in our case, and you only want one instance of that process running no matter how many tabs you've opened, that would go into the shared worker. Whereas the service worker, again, just caches the assets of the application. We tried to use the service worker for some of the stuff that we now do in the shared worker, and we saw some issues there. The problem with shared workers is that they're still not available across all browsers, actually. It's relatively new in its current state, and Android still doesn't have support for it. So, we actually don't support multiple tabs on Android. That's the reason why we tried to kind of abuse the service worker a bit there. But yes, those are basically the differences.
[0:24:05] GV: Okay, yes. I think it is different, but my point of reference is Chrome extensions, which also use service workers - a different type, I believe. But I think some of the challenges are kind of similar. Even just the fact that you can't actually have a Chrome extension on an Android phone is fairly frustrating as well. So, some similarities there. I'm aware that you've gone through a couple of major iterations at this point. I think the way you termed it in our pre-chat was sort of sync v1 and sync v2. I think it's always really interesting to hear how a product has actually evolved behind the scenes, because no one is still running what they were running two or three years ago at this stage of a product. How would you kind of - what was the first version of syncing and how did that operate? Then, how and why have you moved to a second full version of that?
[0:25:02] SL: Yes. Maybe I'll do it the other way around, actually. I'll tell you how it currently works and then we can talk about why it needed to turn into that. We have an event-driven architecture, essentially. Any action that you take in the application - say you create a folder, for example - results in an event. That event will then propagate through the system. On the client side, it'll pass through a couple of stages. It will update the application state in memory, which is immediately reflected in the UI. Then, it will update the state on disk on the client, and then it will get uploaded to the server.
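As a rough illustration of that client-side flow, a minimal sketch might look something like the following. The names and shapes here are hypothetical, not Type.ai's actual code; the point is just the three stages an event passes through.

```typescript
// Illustrative sketch only - names and types are hypothetical.
// An action becomes an event that flows through three client-side stages:
// in-memory state (instant UI), local persistence, then upload to the server.

interface SyncEvent {
  id: string;      // unique event id
  type: string;    // e.g. 'create-folder', 'move-document'
  payload: unknown;
  clock: number;   // logical timestamp used later for ordering/merging
}

const memoryState: SyncEvent[] = [];  // stand-in for the real in-memory app state
const uploadQueue: SyncEvent[] = [];  // drained by a background sync process

function applyToMemory(event: SyncEvent): void {
  // 1. Update in-memory state so the UI reflects the change immediately.
  memoryState.push(event);
}

async function persistLocally(event: SyncEvent): Promise<void> {
  // 2. Write to local storage (IndexedDB in a real app) so the change
  //    survives reloads and works fully offline. Stubbed here.
}

async function dispatch(event: SyncEvent): Promise<void> {
  applyToMemory(event);
  await persistLocally(event);
  // 3. Queue for upload; a single background process (for example, living in
  //    a shared worker) drains this queue whenever a connection is available.
  uploadQueue.push(event);
}
```

The key property is that step 1 never waits on the network, which is what keeps the UI feeling instant even while syncing happens in the background.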
On the server, it'll first get applied to the state that's on the server, and then it will be added to essentially an event queue so that it can be distributed to any other client, whether they're online or not. If they're online, they'll receive it immediately. If they're offline, they can pull it once they come online. If you zoom into the client piece a little bit, there are a couple of different processes. There's one process that's responsible for uploading these events. There's another process that's responsible for downloading new ones from the server. There's one process that's responsible for coordinating and applying all these events to disk. The reason the system is designed this way is so that it can be completely non-blocking. You can be synchronizing, meaning uploading or downloading events, and taking actions in the UI at the same time, and there will be no interference. That's something that wasn't possible in the first version. The second big piece is that we've turned all our data structures into CRDTs, essentially. Yes, we can get into CRDTs more, right? But the high-level feature that CRDTs allow us to rely on is that our events can arrive in any order, which makes the system more flexible because there's less concern about race conditions and things like that. It allows us to basically focus on the fact that an event needs to reach all of the nodes in the system - nodes meaning multiple tabs, multiple clients, and the server. As long as we know that every event reaches all nodes, we know that the state will be consistent across those nodes. That's how it works today. In the previous system, our data structures were not CRDTs, but we still had this kind of event-driven system. So, we were reliant on the events coming in in the same order, essentially. How can you guarantee that, right? Well, let's say you've been offline for a while. You've created a bunch of events. Now, you go online and you try to upload these events, but there are new events on the server that arrived while you were offline. Now, you can't just send yours up, because then they'll be out of order. One way to solve that is to do a rebase, essentially what you do in git. You rewind all of the events locally, you download the server events, then you put your new ones on top, and you do any conflict resolution needed during that phase. For example, let's say you moved a folder into a folder that, you discover after the rebase, has been deleted. Now, you can just decide what you want to do in that case where the folder no longer exists. Maybe you decide, "Okay, we just don't move it then. We ignore it." You can do that during the rebase phase, right? Now you know it's okay to upload those events to the server, because the server is in the same state that I was in when I applied those events. With the new system, we don't need that whole rebase mechanism, which simplifies things. It also simplifies the multi-tab side, the problems involved there. There were other problems with our old architecture, like I said earlier. We were kind of abusing the service worker for some stuff, which we're no longer doing. It's just overall a lot more robust, and it's also more prepared for multiplayer, which we want to add in at some point.
[0:29:31] GV: So, I guess just to rewind briefly, the system today is CRDT-based. That's conflict-free replicated data types, just for anyone not super familiar with that acronym.
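To make that rebase step from the old sync v1 a bit more concrete, here is a rough sketch. The types and the conflict check are hypothetical simplifications, not the actual Type.ai code, but the shape mirrors what Stefan describes: rewind, replay the server's events, then re-apply local ones and drop any that no longer make sense.

```typescript
// Illustrative sketch of the old (v1) rebase, analogous to a git rebase.
// Types and the conflict rule are hypothetical simplifications.

interface AppState {
  deletedFolderIds: Set<string>;
}

interface V1Event {
  kind: string;
  targetFolderId?: string;
  apply(state: AppState): AppState;
}

function isStillValid(event: V1Event, state: AppState): boolean {
  // Example conflict rule from the episode: a move into a folder that was
  // deleted on the server while we were offline is simply dropped.
  return !(event.targetFolderId && state.deletedFolderIds.has(event.targetFolderId));
}

function rebase(base: AppState, serverEvents: V1Event[], localPending: V1Event[]) {
  // 1. Rewind to the last state the client and server agreed on.
  let state = base;

  // 2. Replay everything that reached the server while this client was offline.
  for (const e of serverEvents) {
    state = e.apply(state);
  }

  // 3. Re-apply local pending events on top, resolving conflicts as we go.
  //    Only the survivors are uploaded, now in an order the server can accept.
  const toUpload: V1Event[] = [];
  for (const e of localPending) {
    if (isStillValid(e, state)) {
      state = e.apply(state);
      toUpload.push(e);
    }
  }

  return { state, toUpload };
}
```

With the CRDT-based v2, none of this machinery is needed, because events can be applied in whatever order they arrive.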
I guess it seems like an obvious question, because we all start one way and then realize it wasn't the right way. But why did you actually not start out with that approach?
[0:29:57] SL: The first thing I'd say is that I had not built a system like this before. I think today it's actually much easier to find material on how you should structure the overall system. I should say, for the audience, I started on this infrastructure on the previous project, but then kind of transitioned it over into Type. When I started that, there was just less information out there. Also, frameworks in general around things like CRDTs, for example, were much less mature. Also, the sort of sync v1 system, it did work. It's just that it didn't feel quite future-proof, right? So, the answer to that question is, I guess, one, just experience. Experience gained, and then new software becoming available through libraries, new learnings, all that stuff.
[0:30:54] GV: Yes. I think it's really important. I've known you for quite a while, so we've had various chats, and I've always been aware that you don't settle for second best when it comes to engineering. And that's what I love, that you can take an approach but it's totally okay to accept that there's now something better you could be doing. I think a lot of people would actually just say, "Well, it works." Yes, it might not scale as much as we want it to over the next three years. But, hey, I think something that probably you and I share is that level of engineering quality where, if we know that something can be done better, it's very difficult to ignore that, and you want to go back and actually do it the way that you now know makes sense.
[0:31:41] SL: Everything can always be done better, right? It's a matter of where you draw the line. I can look at a lot of software that's done better than what we're doing. So, clearly, they have drawn the line further away. We all have our own frames of reference in terms of, okay, what constitutes the right quality bar? I think, at the end of the day, again, it comes down to - for me, it comes down to the user experience. Is the system robust enough to provide the user experience that you want to build? If it isn't, then ultimately, you're going to have to sacrifice on the user experience side, and then you're not building what you want to build. I think that's at least how I try to look at it. So, you don't want to have something that is complex and super elegant for its own sake. It should serve a purpose for the user at the end of the day. I don't follow that rule exactly all the time, but I try to use it as a kind of guiding compass.
[0:32:49] GV: Just diving back to CRDTs, briefly. I was talking to another startup CTO, and real-time and offline are important to them as well. He mentioned Yjs, and I think I briefly mentioned that to you. You said, I think, that you're partially using Yjs. Just to take a step back, what is Yjs and how are you guys utilizing it?
[0:33:14] SL: Yes. So, Yjs is essentially a CRDT library. It provides primitives that are CRDT data structures. If we back up a little bit and talk about what a CRDT actually is - they're both kind of simple and complicated at the same time. To try to do the simple take, they're essentially data structures that allow you to - well, first of all, there are different CRDT types. The type I'm referring to now is called, I think, a Delta CRDT.
When it comes to Delta CRDTs, what they are is a data structure that allows you to create deltas that will take that data structure from one state to another, so that if you now have two copies of a state, you can update them both through some kind of patch. Which in itself is nothing crazy. You can obviously build anything. You can build a map that has those types of characteristics, for example. But the special thing with the CRDT is that the delta has these three mathematical properties. They're associative, commutative, and - difficult word to pronounce - idempotent. For people that remember math from college: commutative means that the patches can come in in any order, essentially, with the same result. Associative means that the patches themselves can be merged in any order, and the result of that can then be merged with the state and you'll still see the same result. Idempotent simply means that you can apply the same patch multiple times with the same result. That's basically it, right? If you build a data structure that qualifies in the sense that it has these three properties, you essentially have a CRDT. So, for simple things - I think one of the simplest CRDTs is a grow-only set. A set that only grows and you never delete from it. You can kind of reason yourself into building that very simply, and that's a CRDT. But let's say you now have a text file, and you want to be able to make arbitrary modifications to that text file, and you want that to be in the form of a CRDT so that updates are consistent across nodes - that becomes very, very complicated. The engineering behind building that is probably something you don't want to do yourself. So, that's an example of something that Yjs provides out of the box. It also provides primitives like lists and maps and sets that are CRDT compliant, for lack of a better word. So, that's definitely a tool one can use when creating this kind of system. We use Yjs for some of our things, and then we have kind of homegrown data structures for some other stuff.
[0:36:24] GV: Yes. Okay, super interesting. Can you give an example? I'm super curious, what's an example of a homegrown data structure that doesn't fit Yjs in your case?
[0:36:34] SL: Yes. It's not that it doesn't fit necessarily, but Yjs, while very powerful, is also not super ergonomic to use. It does stuff like compression and so on, right? So, your patches come out as just a byte array. There's a lot of kind of black magic involved. If you're doing something that could be designed fairly simply as a CRDT - for example, in our case, we have our chat logs, which need to be synchronized across clients, right? That can actually be done relatively simply without using something like Yjs. You basically make sure that all of the events have timestamps, logical timestamps. You design it so that you have absolute ordering across clients, and some stuff like that, and you essentially have a CRDT. In that case, for example, it's overkill to use Yjs, essentially, because you'll just end up with a lot more opaqueness, for lack of a better word.
[0:37:39] GV: Yes. Without being ultra-familiar with Yjs, I can understand what's going on there. As soon as you're working with byte arrays, life gets more challenging. That's just a key thing either way. I work with them when it comes to WebAuthn, effectively. Luckily, there are libraries - and there was an episode on that in the past - which help deal with them, but it still isn't simple, shall we say.
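The grow-only set Stefan mentioned is a nice way to see those three properties in code. Here is a minimal sketch, not tied to Type.ai's implementation: merging is just set union, which is commutative, associative, and idempotent.

```typescript
// A grow-only set (G-Set): one of the simplest CRDTs. Elements can be
// added but never removed, so merging two replicas is just set union.
class GSet<T> {
  private items = new Set<T>();

  add(value: T): void {
    this.items.add(value);
  }

  // Merging is set union, which gives the three CRDT properties:
  // commutative (merge order doesn't matter), associative (grouping of
  // merges doesn't matter), and idempotent (merging the same state twice
  // changes nothing).
  merge(other: GSet<T>): void {
    for (const v of other.items) {
      this.items.add(v);
    }
  }

  values(): T[] {
    return [...this.items];
  }
}

// Two replicas diverge while offline, then exchange state in either order
// and still converge to the same result.
const a = new GSet<string>();
const b = new GSet<string>();
a.add('doc-1');
b.add('doc-2');
a.merge(b);
b.merge(a);
// Both replicas now contain 'doc-1' and 'doc-2'.
```

Anything richer than this, ordered rich text especially, is where a library like Yjs earns its keep, at the cost of the more opaque byte-array updates mentioned above.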
So, I can understand that. We're cruising towards the end of the episode, but you've talked through kind of one of the main challenges that you've been solving over the last few months. What do you see as the next challenges, let's just say from a pure engineering standpoint for now - challenges and/or upgrades that you're looking at with the product?
[0:38:28] SL: Yes. We just finished this re-architecture of the sync system, and I think that took about two months. So, that's two months with no new features, which is never great when you're trying to find product-market fit. We're going back to just feature-building mode now. It's hard to say exactly what the challenges there will be; they are always numerous, right? But we have a couple of features in the pipeline that we are excited about. I generally just don't like to talk about things before they're built, simply because I want to see that we can actually deliver on what we're trying to build. But it'll be a combination of challenging UX stuff and, as usual, the challenge of being able to talk to these LLMs in a way that gives us good results.
[0:39:21] GV: Yes. Nice. Exciting. Well, anytime I see feature update emails come in, I'm usually pretty excited and I head in and start playing around with the new things. So, that's exciting. Given that the audience has heard a lot about the product, especially some very detailed parts of the product, where's the best place for them to get a feel for using Type.ai? I guess it's type.ai?
[0:39:44] SL: Type.ai, correct. Yes. Just go to type.ai and check it out.
[0:39:50] GV: Okay. Well, yes, Stefan, fantastic to have you on today. It's always a special one for me when I get to bring on someone I know very well. So, I really appreciate you taking the time in your evening, New York evening, and morning over here in Singapore, so really appreciate the time. You're obviously a very busy engineer, giving up your weekend to talk about this. I, and I'm sure all the listeners, really appreciate it, so thanks for coming on.
[0:40:13] SL: Yes. Thank you. It's been fun to be here. I appreciate you having me on.
[END]