EPISODE 1708 [INTRODUCTION] [0:00:00] ANNOUNCER: Rerun is an open-source SDK and viewer for visualizing and interacting with multimodal data streams. The SDK lets you send data from anywhere, and the viewer collects the data and aligns it so the user can scroll back and forth in time to interpret it. The tools have been applied in spatial computing, augmented reality, virtual reality, and mixed reality. Emil Ernerfeldt is the co-founder and CTO of Rerun. Emil is also the creator of egui, which is a popular GUI library written in Rust. He joins the podcast to talk about his history in game development, building super fast tools, and developing Rerun. Gregor Vand is a security-focused technologist and is the founder and CTO of Mailpass. Previously, Gregor was a CTO across cybersecurity, cyber insurance, and general software engineering companies. He has been based in Asia Pacific for almost a decade, and can be found via his profile at vand.hk. [EPISODE] [0:01:08] GV: Hi, Emil. Welcome to Software Engineering Daily. [0:01:11] EE: Thank you. It's nice to be here. [0:01:13] GV: Yes. Emil, thanks for coming on today. We're going to dive into Rerun, which is what this episode is all about. But to begin with, I would love to hear more about your journey to this point in time. We do quite a bit of research here on the podcast before we go into episodes, and I think you've got quite an interesting one. I'm already going to put it out there: I believe you have worked on a remake of the 1985 arcade classic, Gauntlet, which I'm sure a few of our listeners can definitely remember. So, with that in mind, can you take us through your story before co-founding Rerun? [0:01:51] EE: Sure. I'm from Stockholm, Sweden. I kind of fell in love with programming when I was about 15, when I had a programming course in high school. Right away, I wanted to program visual things, visual interactive things, and mostly games. That's what I thought was really fun.
You could build these things using basically just mathematics, and they'd come alive. That was magical. After studying computing science for a few years, I got a job working on a physics sandbox of all things, sold for education, which was a really fun thing to work on. Because it was a little program where you could draw a car with your mouse, and then that car would just come to life right away. That was a really fun thing to do. I moved around to various different things, because I love learning new things. After that, I went to Arrowhead Game Studios, which at the time was building a remake of the 1985 Gauntlet. Arrowhead is more famous, I think, for Helldivers, which is very big right now. But yes, back then, there were like two teams there of 10 people each. It's a very small team building this game. That was a really fun, creative environment to work in. As one of the few programmers on the team (we were three programmers), I also found a niche building dev tools and visualization tools a little bit. I found that that's something I really liked doing as well. If I look over the shoulder of an artist and see that their tools are bad, that hurts me. I feel like as programmers, we have kind of a responsibility to improve things when we can, because we have the power to build these better tools. But yes, the development of that game was a little bit stressful, I would say. After finishing it, I left for another company called Volumental. They were working with 3D scanning, so they got their hands on these Kinect depth cameras, from the Xbox, if you remember back in those days. This would have been in 2015, around that time. They were trying to build a company around 3D scanning. That's where I met my now co-founders, Niko and Moritz. I didn't know anything about computer vision, or 3D scanning, or anything. I had to learn basically everything from scratch.
Again, I found myself needing to visualize things, because when I came in, everything was written in Python. We had these scans in a file format, and then you'd run some Python script on it, and it would output some numbers. You'd want to change the Python code to make those numbers go down. This is too abstract; this is not doing anything for me. Niko came up with an idea: we should build a little visualization toolbox for this. So, using skills I learned in game development, basically, I hacked together something in C++, where we took every step of the pipeline of the 3D scanner and outputted it to file. So we then could look at it, play it back, and see exactly what happened. This was kind of a crucial thing for us to be able to debug what was happening and just to understand what was happening. Interestingly enough, it started off as a debug tool for us developers, like our engineers trying to figure out, "Okay. How should I improve my algorithm to make the 3D mesh look more like the input point cloud and input images?" But as it grew and matured, we then started using it for observability of 3D scanners out in the world, as well as to debug what was happening there. Then, we also saw people start using it for marketing within the company, like trying to use these visualization tools for marketing and so on. The seeds of Rerun started already there, in many ways. Even though it was then like four or five more years before we actually founded the company. In between, I went back to the gaming industry a little bit. Because again, I like learning new things. In the meantime, I also really fell in love with Rust, which is, I think, an important part of Rerun's DNA. [0:06:00] GV: Yes, we're going to get on to the Rust significance in the episode. I'm curious, you mentioned, it sounds like Rerun is very much an open-source project.
It sounds like it started at some point quite a bit before, I guess, where you guys are today. I didn't actually look up the first commit, for example. So yes, where did that start, I guess? [0:06:24] EE: So, we founded the company officially in 2021, a little over two years ago. We basically decided on it around the end of 2021. So that's when we decided to start a company. It's more that the ideas had been percolating in our heads earlier, especially in Niko's head, our CEO. [0:06:43] GV: Okay. I'm sure for a lot of listeners, they still don't really have any idea what Rerun is. What is Rerun at a high level? And maybe going into that, could you provide some examples, or just describe what a demo of how Rerun could be used to analyze something looks like? I'll just say, time series data is quite a big component of this. But I'll stop there, and let you explain. [0:07:13] EE: Sure. There are several ways of describing what Rerun is. I like to think of it a little bit as a visual printf debugger. Usually, when you have a program and you want to figure out what's going wrong with it, you may attach a classical debugger to step through the code or something, or you add some log statements to try to figure out what happened when and see what's going wrong. But that doesn't really work when you're working with high-level visual data, like images, or tensors, point clouds, and so on. So, say you're building a little vacuum-cleaning robot. That's your startup. Your robot is happily cleaning your apartment, and then it suddenly starts ramming the wall over and over again. You've got to figure out, okay, what went wrong. What you want to be able to do is see the world through the eyes of this little robot. You want to see the camera images that it has, assuming it has a camera, see the LiDAR point cloud, from some sort of 3D LiDAR scanner.
You also want to see like, "Okay, it's probably tracking a map of the apartment as it's moving around," some sort of SLAM, simultaneous localization and mapping. So, you want to see, does it actually know where things are in the apartment? Using Rerun, you can then use our logging SDK, which is very similar to text logging. Except, you actually log high-level things like images, or a map, or arrows, or point clouds. So, you just throw that into your code, a few log lines here and there. Then, you can stream that to our viewer, and view it live. You can see live what's happening to the robot. You can also pause, go back, scroll back in time, and see what led the robot to think that it was in the kitchen when it was actually in the living room. Why did it make that mistake? So, scroll back in time and see that its camera image was, at this one point, catching a glare from the sun, which made the segmentation image, the output of some neural network, all messed up, which then led the third algorithm that analyzed the segmentation image to make a mistake, and so on. That's really one of the core use cases of Rerun: figure out what this program of yours is doing. So, it's geared towards anything that has a 2D spatial, 3D spatial, and a time component to it. So, robotics, computer vision, anything with sensors, basically. [0:09:42] GV: Yes, got it. I love the robo vacuum example. I have one and I ended up just watching it a lot to see, "How did it make this decision?" It doesn't have a camera on it, so it's not too difficult to understand. I think I understand what's going on there. But it's the same example I brought up in another episode with a different company doing slightly different things. So, I love that's where you went with that. Yes.
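To make the "visual printf" idea concrete, here is a toy sketch in Python of logging entries per entity and then scrubbing back in time. This is not the real Rerun SDK; `MiniRecorder`, the entity paths, and the payloads are hypothetical stand-ins for the concept described above:

```python
import bisect
from collections import defaultdict

class MiniRecorder:
    """Toy 'visual printf' log: one time-ordered stream per entity path."""

    def __init__(self):
        # entity path -> list of (time, payload), kept sorted by time
        self._streams = defaultdict(list)

    def log(self, entity_path, time, payload):
        stream = self._streams[entity_path]
        stream.append((time, payload))
        stream.sort(key=lambda entry: entry[0])  # tolerate out-of-order arrival

    def latest_at(self, entity_path, time):
        """What a viewer would show when you scrub to `time`:
        the most recent entry at or before that point."""
        stream = self._streams[entity_path]
        times = [t for t, _ in stream]
        i = bisect.bisect_right(times, time)
        return stream[i - 1][1] if i else None

rec = MiniRecorder()
rec.log("robot/camera", 2.0, "frame_2.png")
rec.log("robot/camera", 1.0, "frame_1.png")  # logged late, still lands in order
rec.log("robot/lidar", 1.5, "cloud_1.ply")
print(rec.latest_at("robot/camera", 1.9))  # frame_1.png
```

In the real SDK, you would instead call something like `rr.log(...)` with image or point-cloud archetypes, and the viewer handles the scrubbing for you.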
I mean, just to recap, there is this element of streaming data, you're visualizing it, and then there are the elements of the platform that I believe you sort of term building, and then extending as well. Could you maybe just speak a little bit to those? Then, we'll dive a bit deeper after that. [0:10:24] EE: Sure. Yes. We're open source and free to use. We're an open-core company, so I should say that right away. If you want to use Rerun right now, you can, at least if you're writing in Python, C++, or Rust, because those are the languages we have logging SDKs for. So, right now, there are basically three parts to Rerun: the logging SDK, the viewer, and the database. The logging API is what I already told you about, like adding a few log lines in your code, and that's it. It's then streamed to our viewer, which has an embedded database in it. So, the database is a multimodal time series database. Multimodal here just means we support many different types of data, 3D, 2D. In the future, we're going to have audio and basically anything you want to throw at it. The time series part of it is an important part, where you can log things and attach them to a timeline, or actually multiple timelines. Usually, when you have camera sensors and so on, you have a clock that measures the time of capture, sort of capture time. But then, often, you do some processing on that, and then you log the data. So you have capture time and log time. Maybe we're also interested in what camera frame we were at, so you may want to log the camera frame as well. You want to associate each piece of data with several different timelines, and Rerun indexes on all these timelines, allowing you to scrub on any one of them. Out-of-order ingestion is no problem either. This is kind of unique to Rerun. That's why we're building our own database for this.
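The multi-timeline indexing described above can be sketched as a toy in Python. This is a conceptual illustration, not Rerun's actual storage engine; the class, the row IDs, and the timeline names are invented for the example:

```python
import bisect
from collections import defaultdict

class MultiTimelineIndex:
    """Each row of data is indexed on several timelines at once,
    so you can scrub any of them independently."""

    def __init__(self):
        self._rows = {}                  # row_id -> payload
        self._index = defaultdict(list)  # timeline -> sorted [(tick, row_id)]

    def ingest(self, row_id, payload, timelines):
        self._rows[row_id] = payload
        for timeline, tick in timelines.items():
            lane = self._index[timeline]
            lane.append((tick, row_id))
            lane.sort(key=lambda entry: entry[0])  # out-of-order ingestion is fine

    def latest_at(self, timeline, tick):
        """The row a viewer would show when scrubbed to `tick` on `timeline`."""
        lane = self._index[timeline]
        ticks = [t for t, _ in lane]
        i = bisect.bisect_right(ticks, tick)
        return self._rows[lane[i - 1][1]] if i else None

db = MultiTimelineIndex()
db.ingest("r1", "image A", {"capture_time": 10.0, "log_time": 10.4, "frame_nr": 1})
db.ingest("r2", "image B", {"capture_time": 11.0, "log_time": 11.1, "frame_nr": 2})
print(db.latest_at("frame_nr", 1))         # image A
print(db.latest_at("capture_time", 10.5))  # image A
print(db.latest_at("log_time", 11.2))      # image B
```

The point of the design: one logged row carries ticks on several timelines, so "scrub by capture time" and "scrub by frame number" are both just lookups on different indexes over the same data.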
Then, the viewer is, as it sounds, a viewer for viewing the data in your database, either streaming in or from file. The viewer can either be installed natively or run in the browser. [0:12:11] GV: Yes, exactly. The browser bit, I was going to come on to. If any of the listeners out there just go to rerun.io very quickly and go to some of the demos. I was expecting maybe a video or something. Actually, your browser turns into, just to analogize, it kind of feels almost like a video editor, and then you suddenly realize it's not, it's got all these panes with different things going on. This was just super impressive. Yes, I would love to just learn a little bit more. Was that a nice-to-have, or did you feel that you really had to go that extra mile to make that a browser experience to get the platform across? I would just love to hear about the challenges with that specifically. [0:12:58] EE: Yes. Well, thank you for appreciating that. A lot of work went into that. No, it's not a nice-to-have. For us, it's an integral part of what we're building. We want to build something that's easy to use, easy to install, easy to share. That goes throughout our product. If you want to use it in Python, it's pip install, and then you're off to go. But it also means, if you want to share some recording with a colleague, you don't want to have to tell them, "Install this viewer thing." No, you just want one click, zero install, to view something. So, having it on the web, I think, is almost table stakes at this point. At the same time, we didn't want to build a classic web app, built on Electron to get it on native, and have all the legacy JavaScript APIs and slowness that comes with that. This is kind of why Rust was a perfect choice. We can get to that later. But having it on the web also means you can embed it anywhere. Anywhere you can have a web view, you can embed the viewer.
We have it running in Jupyter Notebooks, for instance, or in Hugging Face Spaces, and Gradio, or in a Notion document. This is only possible because we have a web viewer. [0:14:07] GV: Yes, that makes a lot of sense. I'd say, it was something that just jumped out at me. You don't often have demos like that. I just felt, "Oh, I understand the platform 100% now," which is awesome. So, let's get into the Rust part. I will put a disclaimer out there. I don't have a strong understanding of Rust, and I'm sure some of our listeners don't, and I'm sure some of our listeners have a ton of understanding. I am aware of kind of why Rust tends to be reached for, and I'm aware it's got a really passionate community. I'd love to hear from yourself: Rust, why from the beginning, and what's been your experience with it now? [0:14:51] EE: Yes. I found Rust around 10 years ago, and pretty soon, I realized this is the future. Because I come from C++, and C++ is great in many ways, like it's a huge toolbox, but it's built on pretty shaky ground. It's built upon C, and years and years of cruft that has accumulated. It's not super nice anymore, I would say. On the other hand, you have more modern languages like C#. I mean, C# is 20-plus years old at this point, but it's still a lot more modern than C++. And high-level languages like Python, and so on. They're great in other ways, like they're memory safe. You're not going to get segfaults or security flaws in terms of use-after-free, and buffer overflows, and so on, all the things you get in C and C++. However, they're usually a lot slower, and they have a garbage collector, which means they use more memory. That also means you cannot easily port them to different places like WebAssembly. Rust comes along and really solves this in a beautiful way. It has the speed of C, but with the safety of something like Java, or Python, or C#, which is an amazing feat. It does so without a garbage collector.
It does so via a strong ownership model, and a borrow checker, which at compile time makes sure that there's no shared mutability. There's no way you can have mutable access to an object while someone else is looking at it. This is the cause of so many bugs, even in high-level languages like C#. You're given a pointer to some list, and you think, I own this list now. But it turns out, someone else also has a pointer to it and is mutating it while you weren't looking. This is why, when I started using Rust, I thought, okay, this is going to displace C++, and I've just been waiting for the rest of the world to catch up, and it slowly is. That's just one of the things I love about Rust. It also has an amazing build system, Cargo, compared to C++, where the build system is a horror show, because there is none. There are 10 different ones that are bad in their own way. Rust has sum types, that is, tagged unions, a concept from, I think, ML originally. Not machine learning ML, but the programming language ML. Once you start using sum types, I cannot imagine living without them. Yes, there's so much good stuff in Rust. One of the things I really like about it is that it compiles to WebAssembly. For those who don't know, WebAssembly is a binary compile target that can run in a browser very fast. You can compile Rust and other programming languages to Wasm, and run Wasm in the browser. This is how we run the web viewer. Wasm is also great because you can run it somewhere else. You can build plugins in Wasm and have a plugin system that is sandboxed. This is something we aim to do in the future for plugins in Rerun. I think Rust is uniquely situated as a good candidate for compiling to WebAssembly, because it hasn't got a garbage collector and so on. I can talk for hours about Rust. I really, really like it. It's just fun to use as well.
It was easy to find people to hire as well, because there are a lot of people, I think, jumping ship from C++ who want something new and nice. I think Rust is it. If you want more, you can read my Why Rust blog post on our website. Actually, just Google "Why Rust." I think it's one of the top hits. [0:18:19] GV: Nice. Yes, it's always just kind of fascinated me, I would say, more so than, say, Go. That if something is built in Rust, or someone is working in Rust, they really love talking about it. That can only mean one thing: they feel that passionately about it. It's not like, if you're working in Node, you sing from the rooftops, "I'm working in Node." No, it's great, that's fantastic, but nobody particularly cares, unfortunately. Is it fair to say that, for example, if we just briefly go back to the browser demo, is it fair to say that was only realistically possible in, let's say, a certain timeframe, because this was in Rust and WebAssembly was part of that process? [0:19:02] EE: Yes, I would say so. There are ways of compiling C and C++ to WebAssembly as well. So, it definitely would have been possible to build Rerun in C++ as well. I think it would have taken a lot more time though, for us to build it, because we'd be using an old language which doesn't have all the nice new features, right? I think using Rust is just a multiplier in productivity. You don't have to care about memory bugs like use-after-free, and threading is almost trivial; you can do threading very easily. Also, in my spare time, I've been building a GUI library for Rust called egui, which started off as a hobby project. Now, it's what we're basing Rerun on. It was built from the start to be this cross-platform, portable GUI. You hand it events, mouse events, keyboard events, and it hands back some shapes for you to draw on screen. Which means, you can put it anywhere.
It looks the same on web as it does on Mac, as it does on Windows, and so on. So, because we had that, it was just like, "Okay, let's just build this thing." And so far so good. [0:20:12] GV: Nice. That's egui, I think you said. Where can people find that? [0:20:16] EE: egui.rs. E-G-U-I dot rs. [0:20:20] GV: Nice. Okay. Let's move on from Rust, and back to Rerun, specifically. The Rerun SDK, as you briefly touched on when you were describing the platform, supports logging multimodal data, like tensors, point clouds, and text. These are pretty diverse data types. You've touched on the fact that you're actually, I think, building your own database. But how do you ensure that these can be efficiently stored, queried, and visualized? They are, as said, quite diverse data types in one place. [0:20:59] EE: Yes, they are. We're building this on top of Apache Arrow, which is a kind of standard serialization format, which has support for all the atomic types, u32, or f32, or whatever, plus structs and tuples, and unions, enums. That's our base layer, and we're not reinventing anything there. But on top of that, we then build our own abstractions. There's one way of expressing a float in Arrow, but there's not one way of expressing, let's say, a 3D mesh in Arrow. So, we're building our own data model on top of Arrow. We're building this using an entity component system, which is an interesting idea from the game industry again. Quickly, an entity component system is a way of describing entities, that is, things, as a set of components. For instance, a point can have a position component, and a color component, and a radius component, a label perhaps. You can add your own components to them as well, like a confidence component, like how confident I am that there's a point. Or a standard deviation component, or whatever. You can just throw components onto your entities. This makes it very modular, and very easy to extend.
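The entity-component idea can be sketched with plain Python data. This is a hypothetical illustration (the component names and layout are invented for the example; Rerun's real store is Arrow-backed): entities are keyed by path, each component is its own column, and user-defined components sit next to built-in ones:

```python
# Each entity is a set of components; each component is a contiguous column
# (one value per point instance), in the columnar spirit of Arrow.
entities = {
    "points/cloud": {
        "Position3D": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
        "Color":      [(255, 0, 0), (0, 255, 0)],
        "Radius":     [0.1, 0.2],
        "Confidence": [0.9, 0.4],  # a user-defined component, mixed right in
    },
}

def component_column(entity_path, component):
    """Columnar access: fetch one component's array for an entity, if present."""
    return entities.get(entity_path, {}).get(component)

# A renderer only needs the components it understands, and can surface
# unknown ones generically (e.g. in a table or a tooltip).
builtin = {"Position3D", "Color", "Radius"}
custom = sorted(set(entities["points/cloud"]) - builtin)
print(component_column("points/cloud", "Radius"))  # [0.1, 0.2]
print(custom)  # ['Confidence']
```

This is what makes the model modular: adding a new component to an entity is just adding another column, and nothing that ignores that column has to change.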
This is kind of the basis for our data model in Rerun. We have built-in archetypes, which are collections of components that make sense together, like the point archetype I mentioned. But we're building this in such a way that users can add their own archetypes and components, and mix and match them the way they want. Yes, to make this efficient, we have this logging API for Python, Rust, and C++ that we code-generate from the definitions of our components. We generate the code for the Arrow serialization and deserialization. Usually, that serialization and deserialization is just like a memcpy, because that's how Arrow is built. It's a columnar format. It's built for efficiently putting the same kind of data after each other. It's not built for serializing just one string, but a thousand strings after each other in an efficient way. [0:23:12] GV: Was Apache Arrow kind of the only choice for this, or was that also a decision that had to be made? [0:23:19] EE: We could definitely have considered doing something like FlatBuffers instead. It's likely that we are probably going to start supporting different formats as well, maybe embedded within Arrow. But Arrow is really the standard when it comes to databases like this. We didn't want to reinvent everything, and cut against the grain. One of the things that Arrow has is a lot of tooling, for instance. There's the Parquet file format, which is a format for efficiently storing Arrow data in a fast, compressed way, and so on. I don't know if we'll end up using that, maybe we will, but there are other file formats out there that are also efficient for storing Arrow. We don't have to reinvent everything from scratch. That's the most important thing. [0:24:02] GV: Okay. That makes a lot of sense. One of the big kinds of use cases here is just being able to handle data semantics, like spatial relationships.
As again, if anyone goes and watches or uses the demo, effectively, you'll get quite a good sense very fast of how that looks. How does the platform actually understand and process this? I think, again, you touched on that briefly earlier. How does it understand the spatial data? Are there examples, maybe a little bit more real world, in the sense that people are actually using this to solve a specific problem, that you've observed? [0:24:43] EE: We have this semantic layer on top of, as I said, these components and so on. We also, in the viewer, have the intelligence to understand these components. When it comes to things like the connection between 2D and 3D, let's go over an example. We have one user called Biped, and they're building this vest for the visually impaired. If you're blind and want to walk around in the world, this vest has cameras on it, and then speakers that tell the wearer what's around them. This is a pretty cool product that they're building. They have depth cameras that look at the world and try to estimate where things are, and they have RGB cameras that they need to match that to. So, you want to be able to create a 3D world, then look back and compare it to the 2D input images, and try to figure out, how accurate is the 3D world I'm building up around me? The way you do that in Rerun is, you log the transforms for these cameras, basically their poses in the 3D world. You can log their pinhole parameters, like the focal length, the camera intrinsics, as they're also called. Once you log that, the viewer understands, "Okay. This is how I transform from this 2D image to this 3D world." Now, you can put them next to each other in the viewer. If you hover something in the 2D image, you'll see a ray shoot out from the camera in the 3D world.
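The 2D-to-3D connection described here rests on standard pinhole camera math. A minimal sketch (plain Python, simplified to a single focal length and no lens distortion; not Rerun's code): projecting a 3D point in camera space to a pixel, and unprojecting a pixel back to the ray a viewer would draw when you hover the image:

```python
def project(point_cam, focal, principal):
    """Pinhole projection: 3D point in camera coordinates -> 2D pixel."""
    x, y, z = point_cam
    return (focal * x / z + principal[0], focal * y / z + principal[1])

def unproject_ray(pixel, focal, principal):
    """Inverse: the 3D ray direction (at depth z = 1) a pixel corresponds to.
    This is the ray you see shoot out of the camera when hovering a 2D image."""
    u, v = pixel
    return ((u - principal[0]) / focal, (v - principal[1]) / focal, 1.0)

focal, principal = 500.0, (320.0, 240.0)  # intrinsics: focal length, image center
px = project((0.5, 0.25, 2.0), focal, principal)
print(px)  # (445.0, 302.5)
ray = unproject_ray(px, focal, principal)
print(ray)  # (0.25, 0.125, 1.0): the original point divided by its depth
```

With the camera pose (its transform in the 3D world) on top of the intrinsics, a viewer can map the ray into world space, which is exactly how hovering a pixel lines up with the 3D scene.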
You can see, okay, so this matches to that, and you can see if you have something wrong in your calibration, for instance. Similarly, you can take things in the 3D world, maybe you add bounding boxes around cars in the 3D world, and you can then project them back into the 2D camera space, or spaces, because you may have multiple cameras, and see how well they compare to the images you have. This just comes for free when using Rerun, which is a really cool thing, I think. [0:26:46] GV: Yes. Okay. That's super cool. I mean, performance feels like the thing that must be pretty challenging here, potentially. It is predominantly open source, so I guess the user is also having to own where it's being run. Can you just speak a bit to how that's been architected for performance? Is it a consideration that a user has to think about, where they're running it? I'd love to hear more about that. [0:27:15] EE: Yes, performance is definitely one of our top priorities. We want something that's easy to use, nice to use. That includes being performant; no one likes using a slow product. We spent a lot of time making both the logging side fast and the viewer side fast. We still have some ways to go there. We kind of, early on, optimized a lot for big data, like meshes, and images, and so on. It turns out a lot of users also want to log scalars, at like 1,000 scalars a second, or 100,000 scalars a second. Now, we're restructuring some of our code a little bit to also handle that use case, because we want Rerun to handle whatever you throw at it, at whatever rate, basically. As a user, hopefully, you shouldn't have to think about it too much. You just throw data at Rerun, and we ingest it as fast as we can. Most of the processing is done on a separate thread from the logging thread, so we don't block that. It's a challenge, but it's also why we wrote it in Rust. Rust is really fast on its own, and it makes it very easy to parallelize things.
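The separate-ingestion-thread idea mentioned above is a standard producer/consumer pattern. A Python sketch under stated assumptions (the queue, the worker, and the `.upper()` "processing" are stand-ins for illustration; Rerun's real pipeline is written in Rust):

```python
import queue
import threading

log_queue = queue.Queue()  # handoff between the logging thread and the worker
ingested = []

def ingest_worker():
    # All heavy processing happens here, so logging calls never block the caller.
    while True:
        item = log_queue.get()
        if item is None:  # sentinel: shut down cleanly
            break
        ingested.append(item.upper())  # stand-in for real decode/index work

worker = threading.Thread(target=ingest_worker, daemon=True)
worker.start()

# The "logging" side just enqueues and returns immediately.
for msg in ["frame 1", "frame 2"]:
    log_queue.put(msg)

log_queue.put(None)  # flush and stop
worker.join()
print(ingested)  # ['FRAME 1', 'FRAME 2']
```

The design choice is the same in any language: the hot path only pays the cost of an enqueue, while parsing and indexing happen off to the side.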
It's blazingly fast, as the meme goes. [0:28:23] GV: Nice. Where is a typical user running this? Is this on a local machine, or is this typically more a container that they push up somewhere and run it that way? I'm curious about that. [0:28:34] EE: Yes. Right now, it's running locally on your laptop or whatever. If you're a user, we want to make it as easy as possible. In fact, if you're using Python, you pip install rerun-sdk, and then in your code you call rerun init with spawn. Then, the Rerun viewer just pops up, and now you can just log to it. There's nothing else. You don't have to start it up separately, you don't have to start any container, and so on. We just want to make it as easy as possible. No data leaves your machine. But we can also stream data over the network, so you can also run your viewer on one machine and stream the data to it from a robot or something like that. In the future, we want to also make it easy to put your recordings and your data on the cloud. This is kind of our idea for our future commercial products: to make it easy to share data and index data on the cloud. [0:29:31] GV: Customization and extensibility, they seem to be a pretty core tenet of Rerun. I noticed, for example, things like, I think it's the blueprint SDK or API. Maybe could you just walk through a bit, how is a user going to be extending Rerun, and making it work for them effectively? [0:29:55] EE: Yes. Our main competitor at Rerun is in-house tools. People at companies that work with computer vision, and robotics, and similar things, almost all of them build in-house tools that are custom for what they're doing. We talked to hundreds of companies like this, and they would all like to not have to build these in-house. But it also means, whatever tool they use must be flexible enough to cover what they're doing. So customizability, and being able to write plugins and so on for Rerun, is very core to what we're doing.
We're still pretty early in that story, I would say. So, we have a roadmap for doing that better and better. What you can do right now is log custom data. You can throw in any data that can be converted to Arrow, which is basically any data. Then, you can view it in the viewer as a table of data. That's not very useful. Often, you want to view it as a custom 3D primitive, for instance, so you would like to add a plugin to the viewer to view your data however you want to view it. Right now, if you want to do that, you can write a plugin in Rust for the viewer, and just compile your own little viewer using this plugin. It's not very ergonomic. We want to do a lot better. Similarly, we have data loader plugins. Let's say you have your own data file, or your own data format, and you want to log it with Rerun. You can write a little data loader program that handles this file type and logs it to Rerun in the Rerun format. That's a data loader plugin. We also want to make that a lot easier to do. In the end, we also want to make data transformer plugins. So, say you log some point cloud, and you have your confidences, or different measures, on each of these points. Then, in the viewer, you want to be able to say, "Okay, I want to color the points based on the distance to the camera. Or the transparency is controlled by our confidence parameter," and so on. You want to be able to write these pretty high-level things to transform your data. We want to have a plugin system for that as well. Our plan there is probably to have something that compiles to WebAssembly, so you can write plugins in basically any language, and we're also going to have a plugin marketplace at some point, where you can just share plugins with other people. But yes, I would say right now, all this customizability is more our roadmap than reality. What we do have now is the blueprint API, as you mentioned.
And this is our API for setting up the layout of the viewer. So, saying, I want a 2D view of my RGB camera in the top-left corner, and in the top-right corner, I want my depth image, and I want the 3D things projected into that. And down below, I want the 3D space, and so on. That's something we just introduced, and you can control that from Python. So, you can set up everything with Python, or you can just drag and drop things in the viewer, and save that blueprint to share with others. That sort of customizability, where Rerun becomes more of a visualization toolkit, more and more a tool that you can customize and use however fits your company, is where we want to head. [0:33:16] GV: Kind of leading on from that, at the moment, Rerun is predominantly open source. It seems to have a pretty thriving contributor base when I looked at it. Not that we should dwell on metrics like this, but over 5,000 stars, which is not insignificant. So, it's clearly got quite a bit of recognition. Did you always know this project would get this much attention and support? Or has that actually been more of the catalyst, that it's got that support, and that's now why you're looking, for example, to take it in a more commercial direction, so this can really be your life for the next 10 years, or something like that? [0:34:01] EE: Even before starting Rerun, before raising money, we talked to over 50 different companies in the field, like computer vision, robotics, and so on. What we heard was the same story over and over again: they were all building their own in-house visualization tools. It was taking a lot of time and effort from what they really wanted to build, because this is not their core strength. They don't want to build visualization tools. They want to build robots, or scanners, or whatever. So, they were building these things, and they were not as good as they wanted them to be.
They would rather just use an existing tool if one existed, but there was none. Rerun is finding a new niche in the marketplace. There are, I would argue, similar things doing similar stuff, but not quite as focused as we are. So, we had a very strong belief that this could really be something, that people are going to like this, that a lot of companies are going to like this. The open questions were: how many companies are out there that are going to use this? Exactly what do we need to build? How can we actually monetize it, and so on? There's been a lot of exploration in the last two years, just to figure out exactly what we're trying to build. We knew what kind of use cases we were building for, but it's been a lot of back and forth with design partners, our early users, to figure out exactly what to build. But we're really happy with the uptake. It's really fun to see as well. We have single researchers using it, all the way up to some of the biggest companies in the world using it internally. It's just really cool to see it being used so across the board, and in some very unexpected places as well. Very early on, we had someone come onto our Discord and share a video of him debugging, visualizing StarCraft II replays with Rerun, which was not a use case we had in mind, but it was really cool to see. So yes, we're really happy. [0:36:00] GV: I love that example you just gave. I'm also going through a sort of, maybe I want to call it a PMF, product-market fit, journey myself, I guess. Yes, I think the use cases that you never even thought about are actually the fun ones. It doesn't mean they're suddenly the PMF, but it's just genuinely fun and nice to see someone using a product in a completely different way than you thought about, and it makes sense for them.
Whilst you have other users that are probably more towards where you assume it's going to go commercially. As you move towards the commercial side of the project being developed, that still has a relationship with the open-source side. How are you approaching that with the community, with contributors? How are you thinking about the licensing and all of that? [0:36:52] EE: Yes. It was a pretty early decision on our part to go open core. We believe this is the only path forward for something that needs to be as extensible and customizable as Rerun. We want to make it easy for people to extend the logging SDK and the viewer however they need and want, and ideally share their plugins with the world. We're really happy to be building this in the open. So far, every single line of code we've written is open source, licensed under MIT and Apache. So, it's just free to use forever. But we also want to build a business, and our idea there is that the current open-source, free-to-use product is geared towards single developers working on their machine and a little robot, or something like that. That's already a great experience; there are so many people using it like that. But then, at some point, you're going to realize you're running out of disk space because you're recording a lot of data. Then, we want to have a very easy answer: "Okay, we'll help you with that." Just upload it to the Rerun cloud, and then you can easily browse it in the browser and share it with colleagues. This is the kind of data platform we are starting to build pretty soon, which makes it easy to share data and easy to find data. If you have your data on your own cloud, we can build a product to help you index that, and browse it, and so on. Again, this is so you don't have to build it yourself.
We think this is a pretty nice split, where we don't have to make the open-source product worse in any way. It's as good as it can be for the single person using it on a laptop. But as soon as there's more data involved, or sharing involved, where cloud stuff needs to come into the picture, it becomes a pretty natural transition to a paid service with us. [0:38:53] GV: Nice. Yes, that sounds like a pretty solid approach. A lot of developers, I guess, are often what's called the evangelist inside the company, the one that brings the product in. They try it out, effectively open source, and they use it so much that the company says, "Oh, wow. What's this tool you're using?" Then, actually, so many of the commercial features are really what's needed to make it possible for the company to keep using it for their commercial purposes as well. So, that makes a lot of sense. And yes, just to wrap up, you talked a little bit about where you see Rerun, but where do you see it evolving over the next few years? Are there any dream capabilities that you're thinking of, even if you haven't mapped out exactly how to get there, where you think, "If this is where we can go, I'm excited about that"? [0:39:44] EE: I'm pretty excited about the field in general. I think the robotics field is about to kick off in a big way. There are new, smaller, cheaper sensors coming out; drones are getting smaller and cheaper; robots are getting smaller and cheaper; and there's more and better software for them as well, with all these machine learning things coming around. So, we're very excited to be part of this whole ecosystem. What we really want to do is make Rerun like a little oil that makes the whole machinery run nicer. Make a tool that just gets out of your way, and gives you as much power and ease of use as possible.
To that end, yes, we want full customizability and plugins that you can just run. We want you to be able to easily put your data in the cloud and browse it in a browser. At some point, if the data volumes are big, we should render the whole viewer in the cloud and then pixel-stream it to your browser, which means you have basically infinitely strong computers doing the rendering for you, and it's just streamed to, perhaps, your phone. So, you're not limited by the device in terms of visualization capabilities. I mentioned Rerun becoming a data platform, where you can record and share data live with very low latency, browse and find old recordings, and then have those be part of your development pipeline, your debugging pipeline, your observability pipeline, your training pipeline for your neural networks. You can pull in the data, train your neural networks, and compare against some output. I think Rerun is well positioned to be a very core part of this emerging field, and we're excited about that. Beyond that, I mean, just more of the same: support for audio, support for better video encoding, support for more languages. I think we have many years of plans ahead of us that we want to work on now. [0:41:51] GV: You've seen the space build up, and now you're able to be right in the middle of it, because you're building the tools that enable all these things to happen. That's really exciting. So, very happy for you guys on that one. To bring it back around, I can't wait until my robo vacuum - or rather, the new one I'm going to buy at some point - has cameras everywhere and really understands my apartment, and doesn't just bash into the same chair five times. So, for that specific use case, I'm really excited. I know there are much more important use cases as well, but that's very exciting. So, just to recap, where's the best place for a developer to get up and running, to get started with Rerun? [0:42:32] EE: Yes.
Go to rerun.io, our web page, and you'll find all the instructions there. If you're using Python, it's pip install rerun-sdk. If you're using C++, it's a couple of lines of CMake that you throw in there, and you're up and running. If you're using Rust, it's cargo install rerun-cli. But yes, go to the web page, rerun.io. [0:42:54] GV: Awesome. [0:42:55] EE: Try it out yourself. As you say, there's a viewer at rerun.io/viewer, and you can play around in the browser and try it out. I encourage you, if you're doing any sort of thing that has a 2D, 3D, or time component to it, try it out. [0:43:07] GV: Yes. Well, I think that's a great place to leave it. Emil, thank you so much for coming on. Really appreciate you giving the time. It's just such an exciting project, and where you guys are in your journey. So, I hope we get to catch up again on this in the future and see where you're at. [0:43:25] EE: Thank you so much for having me. This was a lot of fun. [END]