EPISODE 1759 [INTRO] [0:00:00] ANNOUNCER: Runway is an applied AI research company building multimodal AI systems, model deployment infrastructure, and products that leverage AI for multimedia content. They're among a handful of high-profile video generation startups and have raised impressive amounts of funding from investors such as Google, NVIDIA, and Salesforce Ventures. The company recently released their Gen-3 Alpha model, which is trained jointly on videos and images and will power text-to-video, image-to-video, and text-to-image tools. Joel Kwartler is Runway's group product manager. He joins the podcast with Gregor Vand to talk about Runway and the technology the company is developing. Gregor Vand is a security-focused technologist and is the founder and CTO of Mailpass. Previously, Gregor was a CTO across cybersecurity, cyber insurance, and general software engineering companies. He has been based in Asia Pacific for almost a decade and can be found via his profile at vand.hk. [EPISODE] [0:01:10] GV: Hi Joel, welcome to Software Engineering Daily. [0:01:13] JK: Thank you for having me. [0:01:14] GV: Yes, Joel. It's great to have you here today. You come in with the company Runway and the platform Runway. We're going to hear all about Runway very soon. It's in the LLM/AI space, just to cover that up front. I only say that now because I think hearing about your history before Runway will be kind of interesting. What was the journey to joining Runway? [0:01:38] JK: Yes, of course. And we'll cover this more in depth, but Runway is not in the LLM space. It's more in the generative AI diffusion model space. [0:01:46] GV: That's a great distinction. So yes, thank you for clarifying that. [0:01:48] JK: No worries. My journey before Runway really sort of led me directly to Runway in a couple of weird ways. I'd always had one foot in the creative world and one foot in the more technical products and tooling worlds. In college, I studied basically computer science and English, and was always back and forth between those two fields. So, I had actually become part of this group that was doing ML for comedy writing back in 2018-ish, called Botnik Studios. It was a mix of ClickHole and Onion writers and ML PhDs. We were just sort of playing with the generation of ML models back then, things like Markov chains and adversarial neural nets, to see if we could generate anything that was funny, basically. We'd train predictive text keyboards to match tone, or get a bunch of outputs from neural nets and go through them as comedians and try to pick out the funny ones. So, that was part of the realm in which I'd always been paying attention to when and how machine learning accelerates creativity and the stuff that you might want to create. At the same time, in my career, I'd been working at a bunch of startups that were in these creative tooling spaces. I was at Figma, I was at Sourcegraph for a while, sort of always building things for, in my definition, creatives, which includes engineers and includes designers, because I was a little bit selfish. They were the tools and products that I enjoyed using, and they were for people who I thought make things, and that's the coolest thing you can do. So, building tools for those people was always fun. Those two worlds started to combine, or really started to cross paths, in late 2022.
I'd gotten the feeling that I wanted to maybe step back from just traditional tool building on the product side. As a result of the Botnik work, I had been paying attention to GPT-3. It was getting really interesting. With some of the image models, it felt like, "Oh, we're at the precipice of what might be a huge step up in what it suddenly means to be using technology." Ultimately, it was like, "Okay, well, there's got to be a company already in the space doing interesting things." Of course, there was, and it was Runway. So, it was sort of just this perfect combination, where suddenly I didn't have to have one foot on the creative comedy side in Los Angeles (I do some stand-up on the side) and one foot on the tech startup side. I could just be both feet all in. So, I joined Runway at the start of 2023. [0:03:59] GV: Yes, that's fascinating. I guess I never thought about ML and comedy coming into the same space, especially back in 2018. That clearly is very early to any of this kind of thing. So, super interesting. What is your role at Runway today? [0:04:14] JK: Yes, at Runway, I lead the product team. So, I have the very exciting and very fun job of getting to work really across every other team we have at Runway, and getting to work with even folks outside Runway, like our users, our customers, the folks who are trying this stuff and giving us feedback. It goes all the way to the researchers, the engineers, and the designers working on building it into the product, and making sure we're then communicating via our sales and marketing teams and our community teams. So, Runway is maybe a traditional startup squared, in that we have all the traditional teams, and then we have all these additional very exciting teams, like our creative team and our research team, that you're not going to find at an early-stage startup unless you're at a place like Runway. [0:04:52] GV: I mean, you're sitting in LA today, and I believe the company is based in LA. Is that correct? [0:04:57] JK: Runway is actually a remote-first company based in New York. So, we have an office in New York, and then we also have an office in San Francisco. The remote-first part just means that, even if you're in New York or San Francisco, there are no days-in-the-office requirements. You're welcome to work from home whenever you want. But it's fun, once or twice a year, we all get together at our office for our film festival or for our offsite, and get to meet everyone and work together in person for a week or two. [0:05:20] GV: Got it. I guess I was curious about the LA connection because, ultimately, a lot of film production, et cetera, happens out that way, and whether there was a strategic angle there. [0:05:31] JK: Yes, it's definitely been useful, I think. We have a couple of other folks who are in LA, and it was just, again, sort of natural. People who were really interested in the space were in LA, and they were also interested in the ML side, and so they joined Runway. [0:05:45] GV: Awesome. So, let's dive into the product. As you've called out, it is not an LLM, it is Gen AI. Thank you for the correction there. Let's hear about Runway. What is Runway? [0:05:54] JK: Yes, Runway is sort of a rare company in that it's a full-stack applied AI research company.
We both invent and build these AI models, and then we also invent and build the tools on top of them that really unlock new forms of creativity and streamline the entire creative process, from concept to finished product, for really pretty much every use case you can imagine. So, it's a really unique place, because those are, I think, two of the maybe juiciest or most interesting areas to work on right now in technology: the research side of, what's possible? And the product side of, okay, if it's possible, how do we best interact with it? The nice thing about having both of those in-house, with Runway being full stack, is not only that you don't have to work on them separately, but that they can actually inform each other. So, we get to bring learnings from our product directly back to the research team, and likewise, we can bring research experiments directly into new product experiments in ways that we couldn't if those were separate companies. [0:06:48] GV: So, I mean, if I'm a user, and I'm just trying to paint the picture for our listeners, if I was to jump into Runway, what's the first thing I'll be doing, and what do the outputs look like to me as a user? [0:06:57] JK: Yes, great question. Often the first thing people do is jump into Runway and jump into our Gen-3 Alpha model, which is sort of the latest generation of video generation models. You can start with either a quick text sentence or an image of, let's say, a dog running through a field made of balloons, and it will generate a video of that. Or, often we see people taking images that are actual photos they took, and then adding fun VFX to them. So, a photo of my kitchen filling up with balloons, and it basically creates the effect without needing a whole VFX pipeline that would take a long, long time to do. [0:07:32] GV: Yes. I think, from what I read, one of my sort of questions is around human imagination. I think a lot of critique on anything in the Gen AI space that is, for example, music or art, effectively comes from people having opinions around, well, who's doing the imagination bit now? So, how do you guys look at that in terms of, I guess, assisting imagination without removing the human's need to think about things? [0:08:02] JK: Yes. I think one of the nicest things about Runway is that was effectively a solved problem from the start, because the co-founders all met at this art-tech program at NYU, and they were all in both the art and tech worlds already. So, there was never a question of, "Oh, some technologists have this model that can do interesting things. How do you involve artists?" From day one, the DNA of the company was artists and technologists together. We've really kept that as we grew. We hired a lot of people who, naturally, might be working as engineers, but are visual artists on the side. And we have a very sizable in-house creative team. One of the things that I was evaluating when I was considering joining Runway was, is there ever a chance that the artist and tech needs diverge? It was clear that was never going to happen at Runway, because it was founded by artists, for artists, effectively, and it was going to be focused on building tools for humans, as opposed to building tools where the humans are not involved. [0:08:56] GV: Got it. Okay.
Let's, I guess, look at how the output actually happens, in a sense. We're talking about speed versus quality here, which is one of the biggest considerations when people are using any kind of Gen AI tool. I'm sure you can get things fast, but does it then feel like something incredible? How do you guys balance that from a product perspective? [0:09:20] JK: Yes, great question. I think that even goes back to the question of, how does it enhance human creativity? The speed is a big part of it, right? Because it dramatically increases how fast you're able to iterate on your ideas. When you see something that's real, it's very different than imagining how it might look, and it can spark new ideas. So, that really accelerates things. But likewise, the quality has to be really good for it to actually be interesting to go through the ideas. You've got to actually see them and have a reaction of, "Oh, that shot is really not what I'm looking for. I'm not going to go in that direction." So, we really focus on both. Both are really important to the creative process and the way that humans create. We do a good job, I think, of collecting a lot of feedback from customers to make sure we're balancing those needs effectively. But ultimately, we've seen over the past year, year and a half of our research that both have just improved dramatically. [0:10:07] GV: So, Runway produces content. If I just look at Runway's website, for example, and look at all the examples, to me, I haven't seen anything quite like that. At the same time, people listening today might think, "Oh, well, I already use this other tool for Gen AI," whether in the image or video space. How would you, I guess, start to really describe the differentiation? I'm aware there are some features that are put forward, things like Multi Motion Brush and Camera Control. Could you maybe speak a bit to those, and then, yes, anything else in terms of how you position this? [0:10:42] JK: Yes, good question. I'm happy to talk about the features. I would say, broadly, the features that are different are almost just an effect of the cause, which is that Runway is the most focused on building tools, again, for creatives. We work really, really closely with creatives to understand what they need and where their workflows are going, for example. We have some controls, like you mentioned, Multi Motion Brush and Camera Control, which give you very direct, camera-level control: when I want to zoom in while I'm moving to the right, or I want these three marbles to fall off the table, but I want these two marbles to float into the sky. Those are the kinds of granular controls that you need to really create interesting and unique content. More broadly, Runway has had a very stable vision the entire time that I've been here, since the company was founded, which is, ultimately, we think that technology is going to enable anyone to tell any story they can imagine at the highest quality imaginable. You're not going to need a $100 million VFX budget to tell the sci-fi story that you envision or to make the commercial that you envision. So, as a result, that drives a lot of our research and a lot of our product updates.
You see the effect of that as a user: you end up with all these features that are going to be very unique, because we know you need those controls. But it also, I think, drives our research vision, and it drives how we approach building products, which is, we release stuff as fast as it's ready, so that we can get it into the hands of users and learn what it's useful for and how we should continue to improve it. [0:12:08] GV: Yes. I mean, just taking a side step off the pure product for a second, you've talked a couple of times now about users and customers and feedback. So, who at the moment, would you say, in what sort of spaces, and especially commercially, obviously no need for names exactly, but commercially, who is using this kind of tool? [0:12:26] JK: Yes. Just me, actually. I think I'm the only user we have. I spend all day on different machines trying to pretend to - no, I'm kidding. I think what's amazing about Runway is we have a lot of users pretty much from every domain, every industry, every vertical. That's really led us to double down on our philosophy of, let's release this so we can discover those use cases we wouldn't have even thought about, because we only have experience in this industry or that industry, or we're only talking to that customer this week. So, we really see Runway's tools being used by everyone from Fortune 500 and Global 2000-type companies, to freelancers, to marketers, to film studios, telling new types of stories and streamlining their workflows. But even beyond the traditional longer-form video content, we have folks using it for previs and storyboarding, to just explore all sorts of different directions much, much faster than you would be able to with traditional tools. We have editors who generate videos in Runway that they then composite into existing footage, so they can do that last mile of tricky VFX that really puts the sparkle and polish on something that otherwise would have taken them a long time. We even have artists like Madonna and A$AP Rocky creating music videos or visuals for their shows with Runway. So, it's really expanded everywhere. I would say there's no, "these people use Runway." It's like, everybody uses Runway. [0:13:37] GV: Yes. Okay. That's interesting. I like the examples, especially the music examples, where you realize actually a lot of what is on, I guess, the screens behind them when they perform, these kinds of looping visuals, probably used to take a long time to figure out. Now, I imagine there can be a lot more creativity, effectively, where you can just see a whole bunch of ideas, actually almost finished product, and pick one. Is that sort of a good example? [0:14:06] JK: Yes, that's, in a way, how we see people use Runway. [0:14:09] GV: Yes. Away from the music example, I think you just mentioned things like storyboarding, and that traditionally has been something that people have really had careers around. Do you see that being a movement where people that have already been in that industry are actually transferring over to becoming masters of Runway? Maybe not like today, today, but is that sort of a path? [0:14:34] JK: Yes.
I mean, I think what we see is there are a lot of folks in the traditional entertainment and VFX world who were some of the earliest and most excited adopters of Runway, because for them, it sped up the stuff that maybe felt slower about their workflows. Or they would be working with someone who'd give them a little feedback on something they'd have to change, and it would take a long time to make that change and see it again. So, what we see, and what we're excited by, is the ability to focus on the fun parts, which is the ideas, the stories you want to tell, getting into the details, creating your own vision, and being less worried about, "Okay, now I've got to manually create this effect somehow, using a particle editor or whatever that might be." [0:15:15] GV: Yes. That makes sense. So, back to the product. Let's stick on the features for a second, and then we might go a little bit more into the technical side. With product features, I'm just curious, from a roadmap perspective, how do you guys even figure out where you - I mean, what I'm thinking is that to develop a feature for something like Runway must just take a long time. So, it can't be this kind of super-fast iteration. It has to be maybe more considered. But yes, you tell me, how do you guys figure out how - [0:15:46] JK: Yes, I hear you say that, and in my head, I'm like, "Man, I can't imagine. My life would be very different if it was slow." It is very fast, actually. It's fast, and it's very exciting as we've grown that we have a bunch of fast things that are then stacked on top of each other. So, it feels like there's always something big going out that week, which is great. I think what we found over time is that the vision that I mentioned, that we're building tools for humans so anything you can imagine you can create, and the end goal is top-level, production-level quality for every possible thing you want to create, has been really helpful in actually allowing us to be a little more flexible with our short-term roadmap. I think that's necessary given the types of stuff that we work on and the ways in which things maybe sometimes speed up or maybe sometimes don't. So, we are able to be a lot more flexible in the short term, in terms of, okay, what's the next thing that seems most valuable, based on what we last released, that we should be getting into the hands of users next? Sometimes that changes, because you learn a lot from what you release. For example, Motion Brush. Suddenly, it was a huge hit. People really liked that control, and then they wanted more of it. So, okay, well, now we've got to develop more in that direction, versus just assuming you got that one thing done and now let's get back to the roadmap that we planned at the beginning of the year, nine months ago. I think we approach things with a much more flexible, opportunity-focused mindset that lets us move so quickly as a result. [0:17:06] GV: Yes, I think my question came from, I guess, a place where, at the end of the day, I don't work on Gen AI at all. I'm just a consumer. So, to me, it feels like, how can it move fast? But I think it's great to hear. That sounds like the only way it can work. So, that's kind of fun and fascinating. Let's go a bit more into just the technical side. I appreciate you're on the product side.
But can you explain anything towards, say, just the technical architecture behind the models, or just the platform in general? [0:17:35] JK: Yes. I mean, I think that our approach in general comes from some of that product and research perspective of being user first, where we want to build very robust, very stable, and very usable products. So, as a result, our approach is to make sure that the stuff we're releasing fits into all those categories, that we're not going to have some spike on a release and then go down, and we're not going to have an issue where people can't understand how to use the products. So, that's sort of our technical approach, I think: to make it very accessible. [0:18:09] GV: I guess going hand in hand with technical and product is data privacy and security. I'm from the security side, which sometimes comes into privacy, mostly comes into privacy as well. This has obviously been quite a hot topic, maybe more in the LLM space, but in Gen AI in general. How are you handling that? In terms of the inputs from users, maybe could you just run through what kind of inputs a user can even give? Then, how is that considered from a privacy perspective? [0:18:40] JK: Yes. Almost to reverse that order, I think we were very early at Runway in scaling up the maturity of our security and data infrastructure teams and tooling, with the knowledge that, if things continued to progress at the pace we expected, we would want that already in place. In terms of what users can provide, it changes, and it often grows as these models become more powerful. Initially, with our first text-to-video model, it was just text. You could just provide some text. Then, after that, you could provide text and an image, either one or both together. And then after that, you could provide text and an image and sort of different directions, other style modifications on top of that. So, I think that as the models improve, there are more things that you can provide. As a result, we wanted to make sure we were well ahead of the curve, in terms of standard best practices for security and privacy, and even some additional systems that we added because we felt we wanted to have them. [0:19:31] GV: Yes. I mean, when I use Runway, can I upload photos, videos? Is that kind of an input I can give? [0:19:40] JK: Yes, exactly. We have a video-to-video model that you can use as well. You can give us photos, videos, text; those are sort of the main categories. We have audio models where you can give us either transcripts or audio to sync to as well. [0:19:53] GV: So, if I was to upload, call it a video, do I have controls after that to, I guess, remove that video from the platform? Not maybe as part of content I've created, but do I have the ability to then remove that again? [0:20:07] JK: Yes, exactly. We've got all of the enterprise-grade deletion, data protection, and security things you'd expect as a user. If you wanted to, although I personally hope you wouldn't, especially now that we've had this conversation, if you wanted to go delete your Runway account, you could do that as well. [0:20:19] GV: Nice. It's just one of these considerations now that almost just comes hand in hand if anyone is going in the direction of producing a commercially available LLM or anything in the Gen AI space. It is something that just comes with the territory now, and a very, very difficult thing to navigate around.
But it's great to hear that you guys, it sounds like, had that baked in from the start. So, it makes a ton of sense. Let's also look at how the content continues. One thing I'm quite intrigued about, both from the technical side and the product side, is if I have already created something, and then I want to go back a week later, and I now want five more of these things in exactly the same style, but just with some tweaks. How challenging is that? [0:21:07] JK: Yes. It used to be, I'd say a year ago, more challenging. Then we heard from users who brought up the same thing as yourself, so we made it very easy, and now, I think it's literally one button. You can go back to a video that you've created with Runway and jump right back into reusing all the settings, all the inputs, to create more. Then you can tweak those settings, you can tweak those inputs, you can extend it in different directions through time, to do what you wanted to, but maybe didn't finish, a week ago. [0:21:35] GV: Can you describe the process behind, sort of, if I type words, how do those words, I guess, match up with something visual? Just in layman's terms for me. [0:21:46] JK: Yes. The overall concept, you can almost think of it in terms of a new type of camera that you control differently than a traditional camera, where the only thing you can capture is what you can physically point at in the real world. Here, it's much more like these models have an understanding of the world. They've got the world inside of them, and you're using your text, and maybe your image, or maybe your video, depending on the model you're using, to direct that, and then it's effectively returning to you what you've directed. So, the difference is, instead of the camera only capturing the part of the world that you can show it at the moment, it actually has knowledge about the world, and you're just the director, using text or Motion Brush or the different controls that we have in the product to pull that content out and create it. [0:22:32] GV: I think one example would be interesting. On the front page of Runway, there's something that mentions an ox. Actually, the image is a - I'm from Scotland, and it's what we call a Highland cow. So, I'm really curious how that sort of matches up. If I type Highland cow, would I get a Highland cow? I'm just curious, not exactly where the content comes from, because I know that can be a topic that's difficult to discuss. But yes, you can see where I'm going with how words match up to images, or not just images, the content. [0:23:02] JK: Yes. You're right. There are still cases; these are still extremely early models. We expect many generations of improvement before you can give it a distinction between an ox and a Highland cow and get that every single time. I think part of that comes from the fact that the models are still developing and building their understanding of the world. Sometimes when you're working in Runway, especially, and you're like, "Oh, this thing, it's not working. Why isn't it working yet?" it's helpful to just take a step back and be like, two years ago, if we'd shown this to anyone, you would have run down the street screaming, "Oh my gosh, this is amazing. You guys have to see this." It's fun to see, and it's exciting to see, how people raise their standards and expectations, because they should. Ultimately, that's the vision we're driving towards.
But I think it's helpful to remember that we're very early, and this is the worst it's ever going to be. [0:23:50] GV: Oh, for sure. I wasn't in any way criticizing the ox versus Highland cow. It is amazing. When I saw the example, I kind of had that reaction you just talked about. I almost ran down the street, very excited. I was just curious how this sort of - I don't even want to call it a library, because library is such a terrible word. But just how that matches up with - how does this system know, if I was to type "mug," where does that come from? I guess, is one, yes. [0:24:17] JK: Yes. That just comes from the understanding of the world that it's built. I would say, especially for unique things, we work with a lot of enterprise customers who have many concepts that they've literally just invented, or even creators, individually, who've just created something that isn't going to be in any understanding of the world. That's where we see some of the customization tools and pipelines that we've built in Runway, and that we work with enterprise customers on especially, being helpful. Because then, let's say you're doing a sci-fi piece, and you've got this sort of cow thing, but you just created it. There's no way that it would be in the model's understanding of the world. So, being able to customize the models further, based on your creative vision, I think is a big, important part that we focus on as well. [0:25:00] GV: Got it. Yes, that's super cool. That's very fun. I maybe should have touched on this, I guess, with the data privacy and security piece. Probably one other question people have in that area is, I guess, the topic of deepfakes, and what are you, I guess, able to do to prevent that happening, generating content that could be deemed a deepfake? [0:25:21] JK: Yes. For sure. We have a whole bunch of measures in the product and a whole bunch of dedicated folks who work on this. We have a couple of new and improved visual and text moderation systems that have automatic oversight, filtering what we deem to be inappropriate or harmful content. We have C2PA authentication, if you're familiar with that, which is sort of like a provenance certificate showing that the media was created with, in this case, our Gen-3 models. It's always been the case that as the model capabilities and the ability to generate high-fidelity content increased, we continued to invest ahead of that curve on our alignment and safety side. Back two years ago, when you were getting maybe smaller, pixelated, jerkier footage, it wasn't as much of a concern. But knowing where things were going, which we had insight into, given that we're doing the research in-house, we've always been able to get ahead of it: "Okay, before we release this next model, we're going to need these new levels of systems in place." So, that's always been our approach, to make sure those are in place before release. [0:26:21] GV: Yes, great to hear. So, looking ahead, from what you can share, where's Runway going? What kind of things do you guys see on the horizon that would probably make it into Runway? [0:26:31] JK: Yes. We're really focused on building sort of general world models, which are effectively systems that understand the whole visual world and its dynamics. What we just released about a month ago is a major step towards this goal. But it's still very early.
We're still a couple steps, maybe many steps, from that goal, so this is the first and smallest of our upcoming models. It can still struggle with certain complexities, to your point. It can still confuse subspecies of - I don't know if they're subspecies, I shouldn't say that - but different types of cows. So, our approach is to basically build up to that full, general world understanding, and we've found, even with the Gen-3 models, that building that up teaches the models all sorts of other interesting properties. So, we've seen a lot of very fun physics and texture simulations that people have been doing with some of the models. The way it animates water is a lot of fun to play with. So, those capabilities naturally come from our goal of building world models. [0:27:30] GV: Yes, very exciting. A couple of questions I just tend to ask now at the end of episodes. One is just, what's the typical day for you as the PM at Runway? What is a typical day? [0:27:41] JK: Yes. The shabby answer to that is there's really no typical day, which is what makes the role so fun. I think the product team at Runway, but also a lot of the other teams at Runway, get to be involved in so many cool, different areas, from working with creators, professional creators all the way down to hobbyist creators, to working with the researchers who are researching these models. So, I would say for myself, the typical day, to the extent that exists, is a mix of talking with users about both the use cases and the user experience and interfaces, working with our researchers to evaluate experiments or bring that user feedback back to the research team, working with our engineers and designers on the product side to actually get these things into the product and make sure they're stable and ready for a big release, reviewing our metrics and quantitative signals and making sure that we are releasing things that are valuable to people, to working with our sales, finance, and marketing teams to make sure that we're telling the stories and building the business that we want to be building. [0:28:37] GV: Yes, cool. Typically, the answer I get is there is no typical day. [0:28:41] JK: Sorry. [0:28:42] GV: No. Of course. That's what is kind of fun about technology. But at the same time, when I talk to CTOs, often it's hiring, hiring, hiring. So, it's always kind of fun to get just a theme on what someone is actually doing day to day. I think our listeners really appreciate hearing that, in terms of roles that they're probably thinking about wanting to go into as well. Final question, on sort of the same track. Knowing what you know now, today, what advice would you give yourself starting out in this field? [0:29:15] JK: Yes. I mean, starting out in this field whatsoever, it would have been, go join Runway. Whatever year this timeline starts, they're up to some really cool things. They're a great group of people. But to go back to, realistically, when I first started at Runway, what would my advice be at that point? I think, coming in, I was comfortable having this background on the creative side and also this background on the tech startup, SaaS tooling side. I felt like I had an innate sense of, okay, I have this experience building SaaS products; I know how the business models work, the best way to interact with users, the best way to plan for releases and for roadmaps.
I think it took me months early on to just adjust, because Runway is of this new generation, a different type of company, where having a research team in-house, having a creative team in-house, and being able to totally shatter expectations of what's possible a couple of times a year just very much changes the traditional playbooks. So, I learned to use those playbooks as certainly an input in deciding what's the best thing for us to be focused on, what's the best thing for myself to be doing on the product team at Runway, but to be comfortable jumping into, "Well, actually, let's just try this thing, because first principles would lead you to believe it might be an experiment that could pan out." I think we've done a good job building that culture across the company now, to where people certainly bring in experiences from their other roles, but we try a lot of very interesting things, and a lot of them work, which is really exciting. [0:30:38] GV: I think that's a really good point. I think fear of failure today can mean that people don't experiment, and to quote - probably everybody knows who I'm quoting - "stay curious." So yes, I think that's a good place to leave it, which is, yes, just that tools like Runway would never even be conceived if people were not able to just experiment. That thing, that project, might not go somewhere, but at the same time, a project can be for yourself, it can be for many others. But having just that curiosity and, yes, not fearing failure, so to speak. It's not failure, it's just trying stuff out. [0:31:11] JK: Yes. Exactly. [0:31:13] GV: So, Joel, yes, it's been great to have you here today. I really appreciate the time and you sharing about Runway, and I definitely won't call it an LLM ever again, so I apologize about that. [0:31:23] JK: That's all good. Yes, well, thank you for having me, and I'm going to take that feedback on the types of cows directly back and make sure we're testing for that in the future. [0:31:30] GV: Yes, I want to see a Highland cow on the website. [0:31:33] JK: All right, we'll see what I can do. [0:31:35] GV: Thank you so much. Really appreciate the time, and I hope we can catch up again in the future. [0:31:39] JK: Of course. [END]