[0:00:00] ANNOUNCER: Ongoing advances in Generative AI are already having a huge impact on developer productivity. Tools like GitHub Copilot and ChatGPT are increasing the velocity of code development, and more advances are on the horizon. However, an ever-growing challenge for developers is how to manage their coding resources, things like code snippets, website links, messages, and screenshots. This is hard for individual developers, but even harder for teams. Tsavo Knott is the co-founder and CEO of Pieces. Tsavo thinks deeply about developer productivity, and he joins the podcast today to talk about how Pieces is using AI to automate the process of saving, curating, and iterating on coding resources for developers and teams. This episode of Software Engineering Daily is hosted by Mike Bifulco. Check the show notes for more information on Mike's work and where to find him. [INTERVIEW] [0:01:03] MB: Hi, and welcome back to Software Engineering Daily. My name is Mike Bifulco. I'm one of the co-hosts of the show, and I've been particularly fixated lately, and by lately, I probably mean for the past two to three years, on a topic that I think is finding its way across the industry, for software developers, for product teams, for programmers that have been trying to learn and get better at things, and that has particularly been intensified lately by this dawn of Generative AI and LLMs taking over everything we do. In particular, for me, that's learning and note-taking, and keeping track of what you're doing in general. Today, I'm super happy to be sitting down with a new friend of mine by the name of Tsavo Knott to talk about the application he's building, called Pieces for Developers. Tsavo, thank you so much for joining me today. How are you doing? [0:01:47] TK: I'm doing great. How are you, Mike? Thanks for having me. [0:01:49] MB: Of course, yes, I'm doing really well. I'm really psyched to talk to you about what you're building, because it hits all the tingly spots for me in terms of learning and writing code. It's one of those really clever ideas where, once you've heard it, it makes a ton of sense, and it feels like pieces are falling into place, no pun intended, I suppose. Things are falling into place in a way that totally makes sense for where technology is now, but it also puts an extra layer of thoughtfulness around it and creates opportunities for productivity, and for knowledge sharing, and things like that, in ways that are just slightly different than the way they used to be, but importantly, creating kind of a new paradigm. So, let's start there. Tell me what Pieces is. [0:02:27] TK: Yes, so Pieces is a place to put things. It's not only a place for you to put things, but it's also a place for the AI itself to put things. It is aimed at improving developer productivity throughout the work-in-progress journey. What I mean by that is, as you're doing the work in the browser, the IDE, and the collaborative environment, those are the three major workflow pillars where there's a lot going on. We want to capture that workflow context and the key materials, have a home for them, and make that home interactive with Generative AI and conversational copilots, but also deeply aware of your current context, the things you need, maybe who you need to reach out to, and so on.
You mentioned at the beginning here the advent of Generative AI, this new era that we're in. It's actually creating a couple of problems that I think Pieces really starts to take aim at. The first problem is that everyone is now moving 10 times faster, or is certainly expected to be. If you think about it, as the number of searches you do on Google goes up, or the number of conversations you have in ChatGPT, or the Pieces Copilot, or whatever, goes up, the volume of material you're interacting with is going way up, in addition to that speed. So, you're remembering things less, and you're interacting and connecting dots more. For us, we're like, "Hey, we need a system to put these small nuggets, these small tidbits, and have them there for you, there for your team, and just integrated throughout your whole workflow." That's what we're doing at Pieces, and I'm sure we'll get into the details. [0:03:59] MB: Yes. I'm obviously very interested in this for a variety of reasons, not the least of which is that I'm integrating more and more workflows into my life every day for work. I'm curious, maybe, about the story of how you got here? How did Pieces come to be? [0:04:11] TK: Yes, I would say I came to it going through this professional process of creating software. In company number two, we were building an edtech platform. It was pretty simple. This is like 2015, 2016, and it would plug into the school's database and create group chats for all your classes, clubs, dorms, teams, and so on. It was called Mesh My Campus. We ran that for a couple of years, raised a good amount of funding around it. But we ended up sunsetting it, because it was in the edtech space, kind of pre-COVID, and the investment in that arena wasn't too hot at that time. But there was something interesting in that process of building that software, and that was: I am copying and pasting all the time. As a developer, as a designer, as an animator, everything I do is kind of interacting with a larger body of material and then taking subsets of it. I go through that curation process, and I want to put those subsets somewhere. So that, plus the idea of building this file system where you can upload files, send them to your classmates, download them, open them. It just revealed itself to me that files are massive, and people are dealing with small things. So, we began to pioneer this thing called File Fragments, which is effectively like a stabilized clipboard across macOS, Linux, and Windows. Then, we also began to reimagine the experiences of saving those things, searching those things, interacting with them via copilots. But the heart of this whole thing starts with being able to save a file fragment and enrich it with on-device machine learning, meaning small models that know what these materials are. That's kind of how we got into it. We're dealing with small things a lot more than we are large things, and files are very bulky, and I wanted something at a lower level. [0:05:47] MB: Yes, you mentioned two terms there that I think are subtly important in the larger context of this conversation: curating what you're doing, and then enriching it. That's kind of a brilliant take on things.
My impression of a lot of developers picking up engineering for the first time, or maybe interviewing for their first job, is that rote memorization seems to be part of the tactic to get the job. LeetCode and things like that seem to be like, let's memorize Dijkstra's algorithm for going through a graph, or whatever. In the world we're starting to live in, you don't need to memorize those things. You need to be able to conceptualize and understand them. And I think some of that is curating, like, "Hey, here's all the algorithms I can have in my back pocket," and then enriching it, so that the system can tell me what these things all do. So, can you give me maybe a core use case for something where the curation and enriching of the information might help me get something done? [0:06:35] TK: I think that, in the developer journey, inspiration is a large driver for the end result. So, I was doing this the other day. We're working on our website right now, and I was looking at all these clever ways to do responsive text inside of a website, right? Text scaling is always the classic one. I'm on like 50 different blogs, looking at all these tutorials, and how people are using viewport units, and container queries, and all this stuff. And there are interesting approaches from each of these, right? So, when I'm in that phase one of researching and prototyping, I want to save those little inspirational nuggets somewhere. But also, I want to know where they came from, what else they're related to, and I want to have them all in one location, so that as I'm going through my work in my IDE, they can be surfaced, they can be relevant. So, for us, that was phase one: I want them for now and I want them for later. But phase two is overcoming this challenge that thinking to save something is a conscious process. Then, organizing is a process that takes a lot of investment and a lot of work. So, to hit on your point around enrichment, I'm not going to take the time, I'm not that good of a note-taker, to title things, tag things, organize them, and so on. So, I kind of just want a place to dump things, and then, from there, have the whole system figure it out, which is the holy grail of, I guess, the developer's brain, right? Half-remember something and look up the rest. So, organization, and to get good organization, you need good enrichment. When you put something into Pieces, it's going to grab all of that workflow context: who you're working on something with, where it came from, what sites you were visiting before you came to something important, so you can pick up where you left off, backtrack, and just remember a certain point about something. Go to Pieces, and then from that small nugget of memory, find the entire thing, and kind of get back into it. That's the goal: you need enrichment, you need context to be captured, and you need it to be available in all three pillars of your workflow. [0:08:30] MB: Yes, that's super cool. It's one of those things where the blending of all the stuff that's available on your computer is quite a bit more powerful when you're able to recall it two or three weeks later, after you've fixed the problem once and have to fix it a second time around. [0:08:43] TK: I'll just add to that. It's so funny, with the Generative AI stuff and the improvements in search, you're actually fixing more problems, more often, and they vary to a larger degree in problem-solution type, right? So, one day, I'm maybe doing CSS, and the next day I'm doing Dart WebSockets.
The next day, I'm doing gRPC. So, the variety in my workflow is going way up, because I can solve things faster and find information faster, thanks in part to Pieces and to some of the generative AI stuff. But really, Pieces is now that component that captures the things I don't have the headspace to remember. So, that's kind of the main goal for us: let it be proactive so you can move fast, but also so you don't lose everything in passing. [0:09:24] MB: Sure. So, Software Engineering Daily is a bit of an audio medium. We do have a YouTube channel available for people to watch recordings and whatnot. But for folks who are just listening, I'm curious: can you kind of describe the UI workflow for using Pieces, for someone who's got it set up on their machine? [0:09:42] TK: Yes. So, it's as simple as this. We didn't want to change any of your classic behaviors, starting with copy and paste. I think Stack Overflow has built an entire business model around that. It's Command-C, Command-V. So, that's all you need to remember to get started. Just go to a site, select something, copy it, open Pieces, and paste it in. From there, we have a bunch of integrations: browser integration, IDE integration, Microsoft Teams integration. Everywhere you can imagine, Pieces is available. So, you can select some stuff, right-click, save to Pieces. You can search things right in place. You can use the copilot in all of these places. Being integrated is important for two things. One, it makes it easier to just fire and forget a material that you need. Two, it gives you a step function in context awareness. If you're in the IDE, it's pulling related people from Git collaborators. It's pulling the files, the projects, all of that stuff. If you're in the browser, it's pulling the source URLs. It's pulling some of the related browser history, if you will. I will add, for everyone out there, this entire system runs completely on device, right? So, it's air-gapped; you can turn off the Wi-Fi and use it. But I think that making it super easy to just save things to Pieces, first and foremost, gets you to phase two, which is: now I'm in Pieces, and you can view things in a list view, a gallery view, but most notably, kind of this global search. So, once you're past 50 items, and you're like me, I just dump everything in there, I go to global search or I go to the Copilot. The Copilot is effectively a conversational ChatGPT that's grounded in the materials you've saved, the websites you've visited, the people they're related to, and all of that. So, when you ask a question, you don't have to give it, like, "Hey, this is my language, and this is what I need it in, and this is what I did yesterday." It's just already aware of that context. After a little while of use, Pieces is going to be capturing things for itself to make future experiences, like search and Copilot interactions, even better. Then, I'm excited because this quarter, we're getting to the holy grail of it, and I might have mentioned this in passing to you, but it's kind of this TikTok for developers, right? It's to the point where it is saving and serving materials to you at a rate that is so performant that by the time you switch from the browser to the IDE, those related people, the related snippets, links, everything is there in a list that's continuously restacked and reranked for you, for what you need, right when you need it.
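To make the "continuously restacked and reranked" idea concrete, here is a toy sketch in Dart, the language Pieces is built in, as discussed later in the episode. This is not Pieces' actual ranking algorithm; the types, tags, and scoring weights are hypothetical illustrations of how a feed might be re-scored on every context switch.

```dart
// A toy sketch of context-driven re-ranking. NOT Pieces' actual
// algorithm; every type and weight here is a hypothetical illustration.
class SavedMaterial {
  final String title;
  final Set<String> tags; // enrichment tags, e.g. {'dart', 'websockets'}
  final DateTime lastTouched;

  SavedMaterial(this.title, this.tags, this.lastTouched);
}

/// Scores a material against the current workflow context:
/// tag overlap plus a simple recency bonus that decays by age in days.
double score(SavedMaterial m, Set<String> currentContext) {
  final overlap = m.tags.intersection(currentContext).length.toDouble();
  final ageInDays = DateTime.now().difference(m.lastTouched).inDays;
  return overlap + 1.0 / (1 + ageInDays);
}

/// Re-sorts the feed whenever the context changes, e.g. when the user
/// switches from the browser to the IDE.
List<SavedMaterial> rerank(
    List<SavedMaterial> feed, Set<String> currentContext) {
  return [...feed]..sort(
      (a, b) => score(b, currentContext).compareTo(score(a, currentContext)));
}
```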
So, you can think about this as like Copilot right above the code level, and across the entire operating system. The two things we think we're leading in: one, we're on macOS, Linux, and Windows, so you can get Pieces wherever you work. Then, privacy is a big thing, especially for developers. So, all the models can run completely on device. You can use Llama 2, several versions, just right there. We pack it in and it's good to go. [0:12:23] MB: Yes, that's pretty outrageous. So, fully disconnected, right? If I'm in the middle of the ocean, with no Wi-Fi, no satellite network, anything like that, this still will serve me with autocomplete based on the model and context that I've fed into Pieces in the past. Is that right? [0:12:37] TK: That's right. And I'll add, we're doing a lot in the teams and enterprise space right now. What happens is you have developers using Pieces in their own workflow. But then, when you go into a team environment, you can actually do a lot of peer-to-peer stuff. So, Pieces OS runs a little server on your computer. It's what allows all the integrations to connect to it. It allows you to build things on top of it, and so on. But with that server, you can actually do a flattened network, and basically say, "Hey, I want to include Mark's search results." Or, "I want to include Max's workflow activity," or something like that. Of course, that's opt-in with all the sharing permissions, but it is nice, because the entire system, even in a team environment, can be kind of air-gapped and direct. So, it really compounds the productivity that individuals get when you bring it to a team. [0:13:24] MB: Yes, for sure. So, let's talk a little bit more about that sort of team collaboration using Pieces. I'd imagine this requires a little bit of habit-forming, like getting used to, "Hey, I'm going to copy and paste things I want to remember into Pieces." But also, if it's paying attention to what you're doing, there's some sense that it's getting smarter along the way. What does the collaboration workflow look like as those habits are formed amongst teammates? [0:13:43] TK: I'll add this, and this is kind of the secret sauce. The first important thing that you just mentioned there: you have to be aware to save things. You need to realize, "Hey, I need this, because later on, I'm going to anticipate that problem." That actually is a skill that a lot of more senior developers develop, where they've realized, too late, too often, that they didn't have something. But nonetheless, it is a habit you need to develop. So, we just wrapped up a bunch of stuff in Q3 on the Pieces OS side, the tech side, and in Q4 you will really see this thing turn on the autosave. So, we will bring you up to about 300 things in rotation, continually ranked, and also deprecated. You actually won't even need integrations, either. We have a model that is able to take a screenshot of your desktop, if you will, again, on device, and segment that screenshot, where it says, "Hey, this is code. That's the URL. Everything you need." It's a pretty cutting-edge model, actually. And it does this very, very performantly. It's not recording all the time. But when you switch apps, or you switch tabs, or you do something that is a workflow change indicator, if you will, we're going to capture that. So, even without integrations, we're going to be able to understand the behavioral patterns of your workflow and then improve that for you.
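Earlier in that answer, Tsavo mentioned that Pieces OS runs a little local server that integrations connect to, and that you can build on top of. As a rough sketch of what talking to a local, on-device service could look like from Dart: the port and route below are hypothetical placeholders, not the real Pieces OS API, whose actual surface is defined by the OpenAPI spec and SDKs discussed later in the episode.

```dart
// A minimal sketch of querying a local, on-device service over HTTP.
// The port and route are HYPOTHETICAL placeholders, not the real
// Pieces OS API; consult the published OpenAPI spec for actual routes.
import 'dart:convert';
import 'dart:io';

Future<void> main() async {
  final client = HttpClient();
  // Everything stays on localhost: no cloud round-trip required.
  final request = await client
      .getUrl(Uri.parse('http://localhost:9500/assets/search?q=websocket'));
  final response = await request.close();
  final body = await response.transform(utf8.decoder).join();
  print('Local search results: $body');
  client.close();
}
```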
Even if you never save anything, you'll have search results, you'll have Copilot context, and you'll have materials. So that, when you look at this in a team setting, no one saves anything, yet everyone has everything. I think what's really interesting is being able to search someone else's workflow for how they solved a problem regarding a config, or maybe a server being down, or some type of situation. So, I think that the proactive nature of this thing, shadowing you and capturing what's important, and then scaling that to a company, means that all the stuff that was not getting saved or captured anyway, because people weren't thinking to do it, is now getting captured, and it becomes searchable, available, and so on. [0:15:40] MB: Sure. I think that's one of those things that happens a lot as teams grow and change, too. The unplugged experience of this, the software-less experience of this, is that your senior-most team member changes teams, gets a new job, starts managing a different team, and all of the things stuck in their brain kind of disappear. Or you're onboarding someone new, right? A college hire or an intern is starting for the summer, and you need to get them up to speed. Usually, that involves a person-to-person download of information, like sitting down and talking for a long time about things. If you're building a team-aware context of notes and processes and tasks, I think that's probably one of those things that can help get people up to speed really quickly. [0:16:17] TK: You're absolutely right. So, I mentioned a lot this idea of Pieces capturing who is related to your stuff. In that case, exactly, you can come into Pieces as a new employee or a new intern or something, and you could have a few things saved, do a Google search, or a Pieces search, if you will, and it's going to surface team member profiles right there, saying, "Hey, reach out to this person, because they just did something very similar a couple of days ago." You'll start to see this almost completely proactive experience roll out in Q4 and Q1, because, again, there's so much that is lost in translation, lost in passing, that is so valuable. Just one website that you saw that helped you intuitively solve a problem, that someone else could access and, again, solve it twice as fast. That's what we're trying to do. [0:17:01] MB: Yes. There's this paradoxical feeling of being a dev, in particular if you're connected to the social graph in any way, where you're like, "Oh, I've heard about this before. This problem has been solved. I know I saw a Gist about it, or a CodePen or something, months ago. I have no idea who shared it. I have no idea what the actual answer was. But I remember it, and I remember it was cool." That's something that I run into easily several times a month, and I would love to pluck that responsibility from the back of my brain, or from my own note-taking habit, and put it into something else. [0:17:30] TK: That's spot on. That's what happens, and that's what's going on in the human brain. We're just trying to capture it, right? Get it closer to the workflow, integrate into those pillars, and leverage some really cool, high-performance, on-device ML. [0:17:43] MB: Yes. To that point, with your ML magic behind the scenes, you've built the Pieces Copilot, which then starts to take this stuff and allows me to do, for lack of a better term, copilot-y things with this information that I've stored.
So, now I can truly have a conversation about, and with, the snippets and notes and websites that I've visited, right? What does that look like? [0:18:02] TK: So, context is everything when it comes to copilots. Take a look at high-performing copilots, and by high-performing, I mean a small model that's giving you the highest-fidelity responses, right? That means that your orchestration layer, your retrieval-augmented generation, everything there needs to be super precise, and you need to have context that has breadth. Of course, you need some depth for very specific copilot responses. But most of the time, you want something that is aware of everything that was going on, right? Even aware of your team members' stuff. So, for us, we say, "Hey, you can throw a large language model in the cloud, like ChatGPT or PaLM. You can go into Pieces and select your model." Or you can use a small model like Llama 2. But at the end of the day, all of these large language models are plug-and-play, because the orchestration layer, one, has a lot of context across your whole workflow, but two, is able to use that context to surface and organize materials ahead of time. Even before you ask for that response, it's going to say, "We already know the top five things you could possibly ask about right now, with a high level of certainty, based on what you're doing." So, by building not only the what, but also the when, and the where, and the who into the copilot's context layer, you're going to have really, really unique responses. [0:19:19] MB: Yes. That's pretty wild. The feeling of, I'm going to turn to my smart robot and ask it a question, and it's already answered it, is kind of a very out-there, wild, new-to-me experience, certainly. I love that idea. I think that's really exciting. [0:19:31] TK: I'll say this real quick. The stuff I just talked about with regards to the ranking and the kind of pre-grounding, if you will, that all powers the feed. And the feed is, again, your materials, just ordered 1 through 10. You can scroll it, and so on, ahead of time. So, when you have a feed that's really smart, and then you take that feed and those items are used to ground the copilot, you're going to get the right context, the right responses, and the user doesn't even need to think about what context is set for the copilot. Right now, that's a manual process. You have to say, "Hey, select this project, select these sites, select these materials." Even all of that can be automated. You see it with YouTube, you see it with TikTok; they always talk about the algorithms, how good they are at pulling you in every single day. I think if we take that, and we apply it to the resources from your workflow, it's great for grounding a copilot, but it's also great for having what you need right there, without asking anything. [0:20:24] MB: Yes. I think in the context of social media feeds, when people talk about the algorithm, it tends to be with a negative connotation. We're all cursed to be subject to the algorithm. But when the algorithm starts working for you, that's a very interesting paradigm shift. [0:20:38] TK: Yes. I have an interesting philosophy about what I build. I kind of refuse to build video games and social media apps and stuff like that. I'm a very utilitarian type of individual. So, I would just say, those are great algorithms. Let's apply them to your material, the things that can help you and your team move faster.
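As a rough illustration of the plug-and-play, pre-grounded design Tsavo describes, here is a conceptual Dart sketch: context is selected ahead of time by the orchestration layer, so any model, cloud or local, can sit behind one interface. The class and method names are hypothetical, not Pieces' actual internals.

```dart
// A conceptual sketch of retrieval-augmented grounding with swappable
// models. Names are hypothetical, not Pieces' actual internals.
abstract class LanguageModel {
  Future<String> complete(String prompt); // ChatGPT, PaLM, Llama 2, ...
}

class GroundedCopilot {
  final LanguageModel model;

  // Top-ranked items from the feed, already ordered ahead of time by
  // the what/when/where/who context described above.
  List<String> preselectedContext = [];

  GroundedCopilot(this.model);

  Future<String> ask(String question) {
    // Ground the prompt in pre-selected workflow context, so the user
    // never has to pick projects, sites, or files manually.
    final prompt = [
      'Relevant workflow context:',
      ...preselectedContext.take(5),
      'Question: $question',
    ].join('\n');
    return model.complete(prompt);
  }
}
```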
[0:20:55] MB: One of the things I really like about Pieces, from having used it a bit, is the view of context for something that you're looking up. So, in the Pieces app, you can type in a question, or a search, or however you want to look at it, and you'll get a series of responses that, if you haven't used it before, kind of look like search results, right? Like your Google results for something that you've searched. But what it does really well is something that almost harkens back to the early days of search for me, where it's like, here's what you searched for, and here's related terms, or related topics (terms might not even be the right word there). And then also, here are the people that this all came from. So, I like the idea that, say, for example, I'm looking at CORS, or switch. Every developer in the world just shuddered listening to those words come out of my mouth. But if suddenly I not only have CORS, but I have the related terms to search for, and also maybe the top two or three people who are good at solving these problems, I can turn to them directly for expertise as needed, too. [0:21:50] TK: That's exactly right. Because at the end of the day, your copilot or your search results are only going to get you so far. So, if you're limited by the fidelity of those responses, then the next best thing is: who do I talk to, and do they have anything? I think the related-people component is really important to capture, and it's not only inside of your IDE. If you save something from a blog, or from YouTube, or whatever else, it's going to pull that author. So, if you ask, "Whose blog should I read about whatever?", it's going to start to surface, "Hey, this is one of your top-read authors for CSS," or whatever. I think it's people, both public and open source, but also internal to your team, being surfaced in those results, so you can go and look at more verbose materials or reach out to them via a collaborative channel. But yes, search. We're working on it. [0:22:35] MB: Yes. So, our conversation so far has focused pretty heavily on developer tooling and developer things and answering dev questions. But I think one of the realities of the world we're working in is that the problems we're solving span more than just "I need to write code to do a thing." Some of them are maybe more like soft-skills-related problems. So, is Pieces the sort of thing that can enrich workflows for, say, designers, or PMs, or something like that, as well? [0:22:58] TK: Yes. So, we're really excited, because our focus all along, and you'll notice this, has just been on small things, right? So, enriching small things, things that you copy and paste. That's where it all started. Designers actually share a lot of the same workflows as developers. They're looking for inspiration, they're looking for templates, they're looking for all that stuff. They're copying and pasting from project A to project B. They're actually working with developers a lot, and more often, they're going straight from design to kind of boilerplate code. So, you will see us, in 2024, start to generalize the brand and have it be, of course, really still a plus for developers. But it'll be Pieces, as opposed to Pieces for Developers, right? So, we'll start to move into Pieces where it's even better for front-end devs, or mobile devs. Pieces where it's really great for designers that work with devs.
So, moving up that digital supply chain, if you will, from dev to design, and looking at the cycles that occur naturally. That's a really nice place for us. I can tell you this: I got a demo at all-hands, I think it was two all-hands ago, of the models for tagging, titling, all this stuff, still on device, still very robust, and they actually use the same pipeline that we have for code. Number two, in Pieces, multimodality has been a big focus. So, if you take a screenshot, pulling out the code, or being able to tell your copilot to watch this video ahead of time, and pull the code and the transcript and ground it. But long story short, when you get something like a screenshot, whether you're a designer or a developer, you want to take it and turn it into something more valuable. The thing I was most impressed with was the early preview I got of the PNG-to-SVG model. It's the OCR equivalent of what we have today for developers. And you'll know our OCR does two things. One, it obviously extracts the code. But two, it repairs the broken characters in that code, the ones that are tricky: colons, parentheses, brackets, all that. In the design space, you could say, "Hey, I need an icon. I know this desktop app has this icon right here." Screenshot it, put it into Pieces, and boom, you've got the SVG of that icon out. You can look at the XML code, you can look at whatever else. Again, with the multimodality, we really want to look at the common things, the common problems, that you run into at the very small level, where it's like, "Oh, that frustrating icon that's stuck in a UI, I've got to go find it somewhere." Versus: screenshot, SVG, now you're on your way. So, we think that'll be really nice for mobile devs, front-end devs, and then also full-blown designers eventually. And yes, Figma plugins are coming very soon, too. [0:25:29] MB: Heck yes. Wow. I have spent my career dodging back and forth between UX and design and development roles, and I have personally spent many, many hours tracing over pictures to make SVGs. I can tell you, if you're listening to this and you've never done that, it is one of the world's most tedious activities. If that's good for you, so much the better, but it is not my jam at all. I would love to never have to do that again. [0:25:53] TK: I kid you not. I'm a co-founder. I'll do anything that's needed, and I also have a design background. But I kid you not, the Azure Data Studio logo is really hard to find out there. Last week, because we had a plugin going out, I found myself doing the same exact thing: taking a path tool, doing the curves, doing everything. It was brutal. [0:26:11] MB: It's a labor of love, and the real curse behind it is that no one will ever know you did it, because you've created the same picture that they've seen before. [0:26:17] TK: That's exactly right. I'm excited about the small things that Pieces will support for developers, and designers on top of that. [0:26:23] MB: Definitely, yes. I really like the notion of privacy you all built in. You mentioned before that this is something that works offline, and inherent in that is that the data I'm recording for this is mine and stored locally on my machine. Can you talk a little bit about the portability of that? Moving to a new device, how does that work for me? [0:26:43] TK: We are doing a lot on the device-to-device stuff. Basically, you can think of the stuff rolling out in early Q4 as effectively AirDrop, if you will, for your materials.
Between people, between devices. And we also have this thing rolling out called Workspaces, so you can AirDrop a certain grouping of your things. But privacy is super important to us. Actually, thank you for bringing this up, because a lot of people are like, "Oh, do you train your models on the data and stuff like that?" The answer is no, because the data we actually need is out there, readily available. These big companies, and even open-source movements, have open CV datasets that we can leverage. At the end of the day, going from a big model to a small model is hard in certain regards. But when it comes to the data, we have a surplus of that, right? It's more so the prompt engineering, the fine-tuning, how you do the algorithm enrichment to make that model better. So, there's no data training going on. Then, the other thing is, we're a software startup, working with code, right? You know how long SOC 2 takes. You know how long GDPR takes. You know how long these processes are. But our investors, they want us in enterprise, doing enterprise pilots, as fast as possible. So, this came in a couple of weeks ago, but long story short, we had a bank. I won't name the bank or where it is. But they came in, a couple of developers, and they were like, "Hey, we like what you're doing. We want to check it out, do a pilot, bring it to the team." I was like, "Okay, great." We hopped on a call with a few decision-makers, and they're like, "All right, where does this stuff go? What's a diagram of your system and all that?" I just recorded the whole demo, Wi-Fi off the entire time, copilot and all, and they're like, "Okay, well, clear as day, it's local." I think making a system that starts from the largest level of constraints, on device, ultra-secure, ultra-private, enables you to move into those environments, enables them to take more risk, and also doesn't close the door to the DoD, or healthcare, or whatever else in highly regulated environments. Then, opting into cloud things is the second phase, which is the easier task there. Now, we're getting to the easy stuff, where you're going to think we're shipping features every week. But at the end of the day, we started our journey with the hard things. Now, it's just layering on the knowns. [0:28:57] MB: Sure. Doing the hard thing first and incrementally adding the nice-to-haves after the fact is a great approach. So, I want to get a little bit into the nuts and bolts of things and talk about how it works. Can you tell me, at whatever level you're able to share, about the architecture behind Pieces and how you built it? [0:29:13] TK: Yes. So, big challenge. It's so funny. I have a few mobile devs that are like, "I'm shipping this app on iOS and Android," and it has its nuances and different UI stuff, and so on. But it's a challenge. We're doing this across macOS, Linux, and Windows, with on-device edge ML that has to interface with C++ and Rust and all that. So, we needed a system that was really kind of sticky and kind of isomorphic in nature. We ended up going with Dart. And Dart is this really contemporary language: great asynchronous support, pretty solid garbage collection. But it also allows you to compile to binaries for each of those platforms, as well as compile to JavaScript for a web runtime environment. Then, for the UI layer, we ended up going with Flutter.
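For listeners unfamiliar with Dart's toolchain, the "binaries plus JavaScript" story Tsavo just described maps onto the stock Dart compiler commands below; the file paths are illustrative, not Pieces' actual build scripts.

```bash
# Stock Dart compiler invocations (paths are illustrative).
# A self-contained native executable for the current platform:
dart compile exe bin/main.dart -o build/pieces_service
# The same codebase compiled to JavaScript for a web runtime:
dart compile js bin/main.dart -o build/pieces_service.js
```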
However, the systems are separate enough that all the capability that occurs on device is actually also able to run isomorphically on a server. So, Dart has been the secret to our work here. And then we needed something that could tap into device hardware for model acceleration, GPU detection, all of that. And Dart has this excellent FFI capability, Foreign Function Interface. That is what lets us have a parallel C runtime or a parallel Rust runtime, an ahead-of-time binary for those things, and just talk right to it from the Dart code, right from the Dart runtime. So, I think the flexibility there and the isomorphism have been really the A-plus decision for us. And if you look at other companies out there, they're at their series B, only on macOS, saying, "Hey, Windows coming soon, or Linux coming soon." So, we're pretty proud of, again, cross-platform, all types of developers, all types of environments, and then reducing technical debt, because we have such isomorphic code. [0:30:56] MB: Yes, no doubt. It must mean you have some fairly isomorphic developers, then. What does your engineering team look like? [0:31:01] TK: Yes. So, I would say there are about 15 or 16 engineers. The company's like 19 people. So, even our growth side is developer advocacy. But my background is actually in isomorphic JavaScript. Way back in the day, I was doing stuff in the browsers. There's all types of nuances when you have to transpile it for Internet Explorer. Those were the dark days. Isomorphism for me was really nice. I was super fascinated by high-performance clients, but also by being able to take that code and deploy it to a server. So, I actually worked in the JavaScript space, around web components and high-performance isomorphic code, for seven or eight years. Then, I needed something that compiled both to JavaScript at a high-performance level, and also to each of the native binaries for macOS, like Apple Silicon or Intel. I also needed something that could interop really well with other languages. We found ourselves thinking about this from the perspective of, "Oh, take a JavaScript app, wrap it in Electron, or Chromium, or whatever, and deploy it to the edge." But when you start to do so much at the operating-system level, file interactions, right-click experiences on your desktop, low-level hardware acceleration, you need something that's a beefier language. And Dart was right there for us. We're pretty excited. [0:32:15] MB: Yes. Is there an open-source story for Pieces? [0:32:19] TK: Yes. So, I'll say this. Open source is a lot to manage. I feel for every single open-source contributor and author out there, because at the same time as you're building software, you're also managing and building a community, right? Everyone has the levers that they want to pull, the features that they want to have, the pull requests that they have open. So, it can be a bit chaotic. I would say, once we're kind of up on our feet, past our series A, there's going to be a lot of open source that starts happening. We think there's really not a lot of sense in having something that's locked down from an IP perspective, or whatever, because anything that you think you have can probably be done by someone else, or by AI, within a few years. It's exactly what we saw with Meta. They're like, "Hey, cool, ChatGPT. Here's Llama 2." That threw a wrench in everyone's business plans.
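Circling back to the FFI capability Tsavo described a moment ago: below is a minimal sketch of that pattern in Dart, binding a symbol from an ahead-of-time compiled native library and calling it directly. The library and function names are hypothetical; Pieces' actual native bindings are not public.

```dart
// A minimal sketch of the Dart FFI pattern: load a native library and
// call a C symbol directly. The library and function names here are
// HYPOTHETICAL; Pieces' actual bindings are not public.
import 'dart:ffi';
import 'dart:io' show Platform;

// Matches a hypothetical C function: int32_t detect_gpu_count(void);
typedef DetectGpuCountNative = Int32 Function();
typedef DetectGpuCount = int Function();

void main() {
  // Pick the platform-specific, ahead-of-time compiled binary.
  final libName = Platform.isWindows
      ? 'ml_runtime.dll'
      : Platform.isMacOS
          ? 'libml_runtime.dylib'
          : 'libml_runtime.so';

  final lib = DynamicLibrary.open(libName);
  final detectGpuCount = lib
      .lookupFunction<DetectGpuCountNative, DetectGpuCount>('detect_gpu_count');

  print('GPUs visible to the native runtime: ${detectGpuCount()}');
}
```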
You just see it so often that, for us, we're like, "Hey, we want to enable not only us to build experiences, but also anyone to build experiences on top of Pieces." But yes, 2024 and beyond, you'll start to see big open-source pushes. [0:33:22] MB: I think I'm going to give you more credit than you're giving yourself here. I will mention that github.com/pieces-app has quite a bit of open and visible software there, up to and including a plugin for Obsidian, which I think is at least really interesting for seeing how these things work. That's a degree of openness from the get-go that I think is admirable, too. [0:33:39] TK: What I mean by open source, for us, is a big business investment. So, we'll be shipping nine SDKs. Most of our code is generated, but it'll enable developers to build on top of Pieces OS: internal tooling, whatever integration they want. So, nine SDKs, and that actually is starting to get solidified in Q4 here. We use all the SDKs ourselves. But documenting them, having the OpenAPI spec out there, dealing with people's bugs. We have no clue who's using the Go SDK, and we have no idea what they're going to come back and say. They might say, "Hey, this is broken in your code generator, go fix it." That's a whole thing, right? [0:34:17] MB: Yes. Totally, totally. Okay, so we talked a little bit about what you're building with, and the composition of your team, also stories for the future and for integrations, and it's really cool to hear that you're building out SDKs and client libraries that devs will be able to start to consume at some point in the near future. What are your favorite things that you're looking forward to building that are coming soon? [0:34:39] TK: I think the design stuff is really interesting, speaking primarily as a front-end developer; obviously, I'm full-stack. But the designer in me makes sure the UIs and the websites and stuff look A-plus. I'll tell you, the front-end development flow is such a messy workflow, right? It's so all over the place. There's like four languages: HTML, CSS, JavaScript, all that. I think supporting that cohort of users is going to be a plus. I think that there are a lot of improvements to the copilot coming up: chain-of-thought reasoning, on-device optimization, stuff like that, that'll make it really, really world-class. Then, the peer-to-peer stuff, I think, will be excellent. So, we've got our plate full, I would say, but everything we're doing is spot on, at least the stuff I'm excited about. [0:35:23] MB: Sounds like it. Yes. It's an exciting roadmap to get into. For folks who are listening to the show right now, tell us how to get started with Pieces. And also, I'm a little bit curious, when is the copilot available for use? [0:35:35] TK: So, the copilot, you can try out today. But the persisted copilot chats actually go live in, I think, a week and a half; October 14th is the target date. Those are persisted chats on your materials, which is really nice. Then, you can use the PaLM models, the Llama models, or the OpenAI models today as well. We will have some stuff in 2024 where everyone from 2023, we're considering early adopters. They'll get a discount for the advanced copilot add-on in 2024, which is just unlimited everything, right? An A-plus experience. But yes, you can try it: pieces.app, go give it a look. Remember: save things, use the copilot, search for those things.
It all starts with a copy and paste, or a simple question to the copilot. [0:36:18] MB: Yes, right on. And if folks listening to the show give it a shot and they have feedback for you or for your team, what's the best way to do that? [0:36:22] TK: Well, we have contact info on our site. I think you can email hello@pieces.app. You could probably email me. I don't know if I should put that out there, but yes, it's just tsavo@pieces.app, real simple email. I'm on LinkedIn. I'm on Twitter. But the team, we're pretty active. We have a Discord. We do all types of live streams and stuff like that. So, if you want to come talk to us, please do. [0:36:42] MB: Cool. That's perfect. Well, Tsavo, thanks so much for joining today. It's been really interesting talking to you, and I'm super excited about Pieces. I hope the folks listening to this show dive in and give it a shot. If you're listening, the site is pieces.app. We will drop tons of links to the things we've chatted about in the show notes today. Tsavo, thanks so much for joining me. I really appreciate it. [0:37:00] TK: Thanks, Mike. Love the tangents, love the riffing, and excellent stuff. It's all pretty interesting. [0:37:06] MB: Right on, man. Well, come back anytime. We'll talk soon. [0:37:09] TK: Okay, thank you. [0:37:10] MB: Take care.