EPISODE 1729 [INTRO] [0:00:00] ANNOUNCER: Harold Halibut is a 2024 narrative adventure video game developed by German developer Slow Bros. The game has a distinct look owing to its use of stop-motion animation with 3D scans of physical sets and puppets. Onat Hekimoglu worked on Harold Halibut as the director, game designer, composer, and person of many hats. He joins the podcast with Joe Nash to share the story and technical details of how he and his team developed their unique game. Joe Nash is a developer, educator, and award-winning community builder, who has worked at companies including GitHub, Twilio, Unity, and PayPal. Joe got his start in software development by creating mods and running servers for Garry's Mod. Game development remains his favorite way to experience and explore new technologies and concepts. [EPISODE] [0:01:00] JN: Welcome to the show, Onat. How are you doing? [0:01:02] OH: Hi, thank you for inviting me. I'm doing great. Thanks. [0:01:05] JN: Awesome. So, to kick us off, I always love to find out about our guests' game development journeys, and yours is very interesting. Can you kick us off with how you got into game development and how you came to be making this game? [0:01:17] OH: Yes, sure. It's an interesting story indeed. So, I actually originally studied film, and after finishing film school, I was trying to find a producer or someone who might help me with my first feature-length film. Because I'm quite impatient, which sounds strange coming from someone who worked 14 years on the same game, I just wanted to do my own film, write the script for it, be the director, and so on. As an unknown director, that's actually quite complicated, and I was frustrated with the options that I had, not finding any budget for it and so on.
I sat down and I was surfing the Internet when I randomly encountered this point-and-click adventure engine called Visionaire Studio. I don't actually even know if it still exists. But it was basically a what-you-see-is-what-you-get kind of editor where you could put together point-and-click adventures in the tradition of old LucasArts games. So, I went to two friends of mine and I was like, "Why don't we make a game? We could tell the story that we want to tell in the way we want to tell it. We won't need any budget and we will probably be done in two years." All of these things weren't that true, obviously. But yes, that's how it all started. Then the second coincidence was, like two weeks after we had basically decided let's make a game, I was at Gamescom, going through the corridors, and I coincidentally met a friend of mine I knew from the town where I studied film, and he was representing the Cologne Game Lab, which was a newly established Master's in Game Design right here in Cologne. Basically, two weeks after my decision to create a game, I applied for the Master's in Game Design and started studying there. That was basically the beginning. [0:03:11] JN: That's awesome. Yes, your background in film definitely explains a lot about the game. That's fantastic. Also, weirdly common recently, when we interviewed the creator of Animal Well, I think he also came from film. This is like the - [0:03:22] OH: Oh, interesting. [0:03:23] JN: Yes, that's really cool. So, obviously, a lot of [inaudible 0:03:27] from here is going to talk about Harold Halibut. Really fantastic and interesting game. So, to kick that off for folks who haven't encountered it yet, can you tell us a little bit about the game? [0:03:33] OH: Yes. Of course. So, Harold Halibut is a modern adventure game. We call it a narrative game because whenever we say adventure, people think of point and click.
It's still hard to explain to a lot of people who aren't used to walking simulators like Firewatch, or Night in the Woods, what exactly a modern adventure game is, but yes, that's basically it. One of the unique things is we are building sets and puppets like in a stop-motion film and 3D scanning everything to bring it into the game, so the whole game basically looks like a stop-motion film that you can play. It's heavily focused on conversations with other characters, so we don't have any puzzles, and it's very focused on the story and world-building, basically. [0:04:16] JN: Yes, absolutely. If you're listening to the audio, you are missing out on the shots from the game in the background of this video. It is wonderful to see them. It is a really beautiful game. If you haven't seen it yet, I highly recommend taking a look at it. The stop-motion style really comes through. So, I guess that's where I want to kick off, which is, how did that come to be? I think the reason we invited you on the show is I've been saying to our producer for ages, there's this wild game coming out and I cannot understand how they've made it. I cannot understand how they've made this stop-motion game. I've been wanting to chat about this for ages. How did you decide on stop motion and go, "Yes, we're going to make an entire game with what looks to be hand-animated things"? I know that it's not entirely, but where did that come from? [0:04:59] OH: Yes. Again, a very interesting coincidence. So, I just told you that I met with two of my best friends and we decided to create a game. Of the three of us, none of us could draw, none of us could do 3D stuff. Basically, none of us was what one would call an artist. But we were good at building things and we loved stop-motion films. So, we decided to make a stop-motion point-and-click adventure set in an underwater world. This basically seemed like the easier way for us, which is funny.
It's basically always a matter of perspective. If you boil it down, one of the reasons for the uniqueness of the visual style of our game, basically its entire identity, was our inability to do it in any other way. Yes, that's how it all started. We soon realized, by the way, that it's hard to go on without any artist, especially when it came to character design and all that stuff. So, Ole, the fourth of our core members, joined very shortly after. We asked him for some concept drawings for Harold, and then I went on to build the puppet, which looked like an abomination. We threw it in the bin and asked Ole if he wanted to join our team full-time. That's basically how it all began. But yes, the original thing was both that we loved stop-motion film, obviously, but also that it seemed like the only way we could deliver something of quality, which, yes, is interesting. That was the start. Then, from there on, obviously, it was a huge - I mean, that's also one of the reasons, some of the listeners now might not know it, it's been a 14-year-long journey. So, the first idea for the game was 14 years ago. Obviously, I also started studying game design then, and we were working on it part-time in the beginning. But one of the things we spent a lot of time on in the very beginning was, how do we actually accomplish this? Our first intention, because it started out as a point-and-click adventure, was to create sets like in a stop-motion film, photograph them, create actual stop-motion animation in front of a green screen, photograph each frame bit by bit, and put everything together. From there on, it would work like a point-and-click adventure game, only that everything is built physically. Where this led us was this cheap-looking, Photoshop-collage-like thing. Because in a stop-motion film, each frame that you take contains all the information you want to put in there.
I mean, you might have some light editing, color grading, some VFX or so in post, maybe. But still, the characters are part of the environments. The lighting lights them and they cast shadows on the real environments and all that stuff. So, we would have needed to fake a lot of things and do things in layers. It just doesn't - the original models were looking so nice that it just felt like too much of a degradation going from there. We started experimenting, for example with the backgrounds projected onto 3D mesh representations of the environments. Normal-mapped sprites were also one of the approaches to bring some more dynamic lighting in there. But none of that seemed to work well until we did our first photogrammetry experiments. For those of you who don't know what it is, it's basically a 3D scanning technique where we take hundreds of photos of an object from all kinds of directions. The software can then analyze common points across the photos and create a 3D model. I think it was 2013. At that time, it wasn't common to use photogrammetry for game assets. In fact, I think The Vanishing of Ethan Carter was actually the first game I know of where they used photogrammetry assets, 3D-scanned assets. That came out in 2014, I think. So, it was an interesting time because our first tests already gave us the idea of, okay, we could be on to something, because now we basically have 3D models of the sets and the characters and we can light them in the engine and everything kind of belongs together, right? But the technology was quite early. Our computers were very, very slow. It took like a day to process a single model. The quality wasn't great. So, there were a lot of things. Especially, it was hard to come by any resources. All the resources you could find about it were like, I'm scanning the plot of land I want to build a house on, or so.
Or, we are scanning geological occurrences to analyze them at our university. That was basically the kind of application it was used for at that time, and not for games. So, it was a process, basically, finding out what the best way is to do it at that scale, moving from photographing in an environment with a normal camera to a turntable setup, which thankfully some other people had done before as well for, again, other things like 3D scanning sneakers or so. But yes, information was sparse and hard to put together. But that's basically how we came to the technological baseline, I would say, that we were using till the end. It grew and evolved from there. But yes, that was basically that. And one big advantage that it had was - by the way, sorry, I mean, I'm going through this like very - [0:10:30] JN: This is perfect. [0:10:33] OH: Yes. So, it was an interesting time for two reasons, because my master's thesis was the first prototype of Harold Halibut, and that was in 2012. So, from 2012, we could basically see what had happened in those two years. We had this 2D point-and-click adventure prototype which we could look at, and where we saw the shortcomings and what we liked about it. First of all, there was the visual thing I told you about before. It looked like this cheap Photoshop collage. Our solution for that was to go full 3D, basically, with 3D scanning. Then on the other hand, there was the gameplay, the puzzles. Thinking about it, we wanted to tell the story, focus heavily on the narrative, and base the flow on that, and we had the feeling that puzzles, depending on how they are designed and, even more, depending on the player who is playing them, drastically change the time needed and the flow of the game. Basically, they were stretching the time between story beats, which we didn't like for the story that we wanted to tell.
That was also a time when other games started to come out. Our initial decision to go the point-and-click adventure route was mainly because we thought, okay, that's probably the minimum amount of gameplay you can have for a game without going full FMV or so. So, we wanted to have the benefits of the interactivity, emotionally and empathetically, that enables you to feel the story in a different way than passively consuming it. But also, we didn't want to complicate things a lot. Basically, that was the point where we decided, okay, it's going to be a modern adventure, heavily focused on storytelling. Everything is going to be a 3D-scanned asset and we will have 3D camera movement and so on. [0:12:29] JN: That's perfect. So, there's a whole bunch of this, as we said, the length of time the game has taken, the photogrammetry process. But while we're talking about the design of the narrative bit, that was one of the questions I had, because - and now, knowing you originally had puzzles, I imagine this is a bit of a legacy of that. But one of the questions I had for you, in designing a narrative game and being so heavily narrative driven, was how you paced the, I don't have a good way of describing them, because they're not puzzles, but the more gamey bits. So, for example, fixing the 3D printer, right? That's a very, "Oh, now I'm doing game mechanics" part. So, I guess my first question is, are those what remains of the puzzle sections? Then B, with that in mind, how do you pace and decide where to put those into the narrative? [0:13:12] OH: Yes. That's an interesting question. No, they're not remnants of the puzzles. It was more like, we weren't against having other mechanics in there. That's also the reason why all these little interactions, which we call playful interactions rather than actual puzzles, are very easy to solve.
They don't provide any challenge. We thought it would just be nice in some cases to have this playful interaction that lets you feel a little more like you are actually doing the tasks that Harold does. Most of them are these hand mini-games, as we called them, where you see Harold's hands and you are basically manipulating objects, screwing with a screwdriver and so on. We originally had even more of those. It's just that some of them were still breaking up the flow too much, and some of them just took quite a while to make, because every little interaction seems easy to do, and they are compared with more fully-fledged gameplay mechanics, but the sheer amount of them meant that we spent quite a lot of time actually creating those. So, yes, we scrapped some of them after a while. [0:14:24] JN: Awesome. Yes. Hearing how you described the purpose there, giving you a little taste of being Harold, immediately made me think of the very first one you do, I think. You clean the filter and you're just presented with a user interface in a completely alien language. You have no idea what this user interface means, what these symbols mean. But it's very intuitive what you have to do. It very much puts this in mind: Harold knows what he's doing here and I'm just doing what Harold would do. I don't need to know it, right? That's brilliant. Cool. So, going back to the photogrammetry. Fascinating process. I really want to understand, I guess, the steps that the model goes through. Because you said you went from a model, and making them yourselves, that I imagine must be an absolute trip. But then, ending up with 500-odd photos and that being a usable model in Unity, I believe, that you can manipulate and do the things you need to. I imagine there's a lot of work there to make that workable and optimized. Can you walk us through that process? [0:15:16] OH: Yes, of course.
So, after building one of our models, you can see some behind me. Unfortunately, it's just going to be - [0:15:23] JN: I think I can see Harold, right? That's Harold in the case? [0:15:26] OH: Oh, yes. There is Harold in the case. [0:15:27] JN: The man himself, excellent. [0:15:29] OH: Yes. So, yes, we built the models. There is actually a slightly different process for sets and for puppets, but they share most of the steps. I will walk you through some of the differences in a bit. So, once we have the model, we put it on a turntable, which was this very cheap IKEA turntable that we bought more than 10 years ago. It's interesting because we wanted to switch it for a better model after a while, but we came back to that one. I don't know. There was just something about it. Basically, we would put models on the turntable and take photos from each direction, and from different heights as well, to cover as many parts of the model as possible in those photos, depending on how important the model is and how large it is. For character models, usually, we were at around 300 photos. For smaller assets, usually 70 or 80 were totally fine. But yes, it depends on how much you need to actually cover the surface. [0:16:31] JN: Sorry. Are they, like, T-posed at this point when you photograph each of the characters? [0:16:34] OH: Yes. That's a very interesting question. Because I already told you that we started with it being a stop-motion game. So, our first Harold model was basically a stop-motion puppet. We had a rig that we could actually move around and so on. Harold can still do that. Harold is the only character left that has a bendable skeleton, basically. But once we decided to go the 3D scanning route and we saw that it worked the way we hoped it would, we basically started building statues instead of puppets. Basically, the characters are all in T-pose.
Sometimes, or oftentimes, they have removable arms, just to be able to dress them, and that's the only reason. Beyond that, the characters can't actually move in the real world. They are just statues. They are in T-pose because that's the way we work with them digitally later on anyway. Once we took the photos, in the beginning we were using Agisoft Photoscan. It was running on the CPU, so it took more than a day or so for those scans. Then at one point, Capturing Reality came out, which I think Epic has bought by now or so, and that was amazing because it did a lot of processing on the GPU. The time we needed to process all the photos drastically shrank. It was like an hour for a scan or so. That made it actually possible. Just to give you an overview, we have probably a total of around 1,000 assets in the game that are all hand-built. Just scanning them is quite some effort. Like I said, with Photoscan, it would have taken like a thousand days just to process the scans. So, yes, Capturing Reality was basically an order of magnitude faster, and that was really great. Capturing Reality then creates this point cloud and then a resulting mesh. And usually, depending on the quality of the photos, the resulting mesh is actually super high-poly. For character models, it's usually over 10 million polygons, which is great for capturing a lot of the actual detail. It's obviously impossible to use in a game engine. I mean, with technologies nowadays, like Nanite and so on, technically you could use it. It just doesn't make sense, because you can't edit it nicely, you can't rig it and animate it and so on. There is no logical sense in shipping a 10-million-polygon model. So, basically, from there on, we go into an almost normal retopology workflow on these high-poly meshes.
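The retopology Onat describes next is manual artist work, not an automatic step. Purely as a sketch of what reducing a dense scanned mesh means, here is a minimal vertex-clustering decimator: it snaps vertices to a coarse grid, merges the ones that land in the same cell, and drops triangles that collapse. This is a stand-in for illustration only, not the team's pipeline, and the function name and parameters are invented.

```python
# Minimal vertex-clustering mesh decimation: NOT the hand retopology the
# Slow Bros. team actually uses, just a sketch of how a dense scanned
# mesh gets reduced to a lower polygon count.
def decimate(vertices, triangles, cell=1.0):
    """Snap vertices to a grid of size `cell`, merge duplicates,
    and drop triangles that become degenerate."""
    cluster_of = {}          # grid cell -> index of its merged vertex
    new_verts, remap = [], []
    for (x, y, z) in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_verts)
            new_verts.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_tris = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:   # drop collapsed triangles
            new_tris.append((a, b, c))
    return new_verts, new_tris
```

Real retopology produces clean, animation-friendly edge loops that no clustering scheme can, which is why a character artist does it by hand; automatic decimation like this is only useful for things like distant LODs.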
Our character artists, for example, go in and basically rebuild the entire model with a clean and nice topology, so it can be animated, with a more sane amount of polygons. For our in-game characters, it was usually around 20,000 to 30,000 or so. From there, we basically go in and bake the details from the high-poly models onto the low-poly models, so that we have the original texture as kind of a base color. It's still not a true base color, because technically it still contains a little bit of lighting information. We try to reduce the amount of lighting information baked into the textures by lighting all our photos really flatly, so we have softboxes from the - [0:20:07] JN: Yes, like the lightbox thing, like the product photography thing? [0:20:09] OH: Yes. Because ideally, you don't want any lighting information. But yes. After baking all this information, we are presented with basically a normal map that recreates these high-poly details, and a base color. But that's kind of it, and as most of you might know, if you have something to do with game development, for physically based materials we need some more information, like the roughness of surfaces and whether it's a metal or not. This is a process that we do manually, for our characters especially. So, a lot of it is basically authoring these maps by hand. With metals, it's easy because things usually are metal or non-metal. It's more complicated with the specularity or the roughness of the objects. We sometimes can use the base color as a baseline, for at least giving it some texture. But for the actual values, it's actually nice to have the original models here. Especially in the beginning, when we didn't have a lot of digital references, we used a spotlight, shone it on the models, and tried to mimic it one-to-one in Substance Painter. For the sets, it's actually quite interesting, there's a slight difference there.
There's a very similar workflow, but most sets can be broken down into individual walls, like the one you see here, this one. [0:21:39] JN: That's the - for audio listeners, we're looking at the general store from the game. It's also a lot bigger than I expected. That's really cool. [0:21:44] OH: Yes. Most sets can be broken down into individual walls, which are flat, and that enabled us to use a material scanning device. Again, we were so lucky in so many places, by the way. Some friends of ours at the Technical University in Cologne happened to build a material scanner. Actually, the world's best material scanner at that time, I think, from what I've seen so far. They needed some people to do some prototype scanning. So, we let them basically scan all the sets, and it was amazing because it's only capable of scanning flat, or flat-ish, surfaces, but it provides perfect normal maps of the surface structure and also roughness values, which seems to be very hard. So, if there are some material experts in the audience, they might know roughness seems to be difficult to capture. That's what I've heard from them. [0:22:39] JN: What year was this? We'll come back to this later, but I think the time period this game was made in, yes. [0:22:46] OH: 2014, maybe 2015. 2016 was our first public trailer, and it was before that, definitely. [0:22:55] JN: Cool. [0:22:57] OH: Unfortunately, they don't have the company anymore, so they never released the product. We were probably the only ones that tested their tech. Hopefully, they will build another one for us for our next game. [0:23:11] JN: You've achieved a feat that will never be achieved again due to the lack of a scanner. [0:23:15] OH: Yes, exactly. I mean, it obviously could be, but only with a lot of manual work can you achieve the same quality. It's just a lot of manual work. [0:23:25] JN: Yes.
I mean, hearing you did it manually for the characters alone is, I guess, kind of very emblematic of the whole reason you went with these puppets in the first place, because you didn't have the art skills, so you went with puppets. But again, knowing you did it manually. The characters have such rich and complicated costumes. In the first couple of minutes, you meet a guy in a silk robe. You meet this security guy that's got like a big heavy - like, you feel the fabric, and knowing that you're entering those texture values manually is wild. Very cool. [0:23:53] OH: Yes. Thank you. We have gotten used to a lot of tricks by now. Actually, the manual process isn't taking so long anymore. But there's just this direct, immediate translation you get from the material scanning thing. By the way, for people who are interested in trying out at least this kind of technology, I think the Substance people released a kind of tutorial on how to do that with your cell phone as a camera. It's similar to how we did our first experiments with normal-mapped sprites, because we were also taking photos lit from different cardinal directions and creating normal maps out of those. That's basically the way you create normal maps from flat surfaces: just by shining a light from each direction and taking a photo under the different lighting conditions. Then, I think, Substance Designer even has a node that is able to process the surface structure and create a normal map out of these photos overlaid onto each other. It's unfortunately not working perfectly well with larger surfaces, but maybe that inspires someone to build a low-cost material scanning device, because the ones I know exist are usually for the textile industry and so on. It's not affordable for a game studio, I guess. [0:25:17] JN: Or it has restrictions, like you can't fit a character in it.
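The trick Onat describes, recovering a normal map from photos of a flat surface lit from different cardinal directions, can be sketched roughly in a few lines. Assuming an approximately Lambertian (matte) surface, the brightness difference between the right-lit and left-lit photos approximates the horizontal slope, and top versus bottom the vertical slope. This is a simplified illustration, not what Substance Designer's node actually computes; the function name, `strength` parameter, and list-of-lists image format are invented for the sketch.

```python
import math

# Rough photometric sketch: build per-pixel normals from four photos of a
# flat-ish surface lit from the left, right, top, and bottom.
# Images are lists of rows of brightness values in 0..1.
def normals_from_cardinal_photos(left, right, top, bottom, strength=1.0):
    height, width = len(left), len(left[0])
    normal_map = []
    for y in range(height):
        row = []
        for x in range(width):
            # Brighter under the right-hand light -> surface tilts right.
            nx = (right[y][x] - left[y][x]) * strength
            ny = (top[y][x] - bottom[y][x]) * strength
            nz = 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normal_map.append(row)
    return normal_map
```

A perfectly flat, evenly lit surface comes out as the straight-up normal (0, 0, 1), which is why this approach struggles with large surfaces: any uneven falloff of the light across the surface reads as fake slope.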
Okay, so I want to jump in with a question here, because there's one of the things that I've been - again, you mentioned a thousand models. When we work with digital assets, we have amazing version control and the ability to save different versions of our things and go backwards and forwards in time. Okay, first of all, how do you keep track of all these assets? That must be a nightmare in itself. And then B, what is your process when you're halfway through development and you decide a model needs changes? Do you change the physical model and then rescan it? Or is it just the 3D model? Do those models diverge over time? How are you managing these assets? [0:25:50] OH: That's very interesting. Let's start with the sorting thing. It's a very interesting question because, beyond sorting the physical stuff, it was also our first Unity project, right? I actually did two or three smaller projects in the meantime, which was very important, because through those I got to know how important having a great structure is, and we very early invented our own sorting and naming scheme, with IDs that were human-understandable, so not just a list that you have to compare with something. It was just based on a gut feeling. We did that in 2012 or '13. It's so proven and rock solid, it works until today and we will use it for every single game from now on. It's actually very easy. It's basically one or two letters for the category, C is for characters, for example, and then we number that with 001, hoping that we don't have more than 999 in a category. Then there is an underscore, and the second number is about which part it is, for example. It works well both ways. There are some more additions and suffixes based on it. It's also really nice because you can parse it really well inside of Unity if you want to process assets.
At that time, I didn't even know that was possible, because I didn't know anything about programming in 2010. I didn't even know that I could automate things, like automatically setting up each normal map as a normal map inside of Unity. But it all worked well because of the way we named things, which was really great. [0:27:31] JN: Incredible. [0:27:32] OH: It also worked well for the physical sorting of those assets. In the beginning, we basically just had a shelf with a bunch of randomly spread-around assets, but then we started to create these boxes, again with the same categories. Then, for the sets, we had individual shelves. We have one shelf with all the characters, and they are in these very nice foam boxes where we store them. Some are display models, like the one that is behind me, which we oftentimes bring to kind of - [0:28:02] JN: So, they've been at exhibits. You've exhibited them at galleries since the game came out, right? [0:28:06] OH: Yes. Exactly. There was one really nice exhibition in Zurich, actually, last year. Until then, we oftentimes brought two or three boxes with us and showed them somewhere at Gamescom or at GDC or wherever. But in Zurich, we actually had a full-fledged Harold Halibut exhibition as part of a bigger game-design exhibition. I think Game Design Today was the name of the whole thing. It was in a proper museum and they actually built a whole room just for Harold Halibut, which was very nice. So, we had puppets on one side, every single puppet from the game, then some sets and individual assets on the left side, and then the game in the middle. Yes, that was a really nice thing. I think we are actually going to have another exhibition sometime later this year. It's not super clear yet, but yes. [0:28:56] JN: Yes, that's awesome. [0:28:57] OH: We will keep everyone updated on our socials and so on. [0:29:01] JN: Perfect. Yes.
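The naming scheme Onat describes, one or two category letters plus a zero-padded number and underscore-separated part numbers, is exactly the kind of ID that is trivial to parse in an asset importer. The episode doesn't spell out the full format or the category table, so the pattern below (e.g. `C001_02` for character 1, part 2) and the category mapping are guesses for illustration:

```python
import re

# Hypothetical parser in the spirit of the scheme described. The exact
# format Slow Bros. uses isn't public; "C001_02" and the category table
# are assumptions for illustration.
CATEGORIES = {"C": "character", "S": "set", "P": "prop"}  # assumed mapping

ASSET_ID = re.compile(r"^(?P<cat>[A-Z]{1,2})(?P<num>\d{3})(?:_(?P<part>\d+))?")

def parse_asset_id(name):
    """Split an asset ID like 'C001_02' into category, number, and part."""
    match = ASSET_ID.match(name)
    if not match:
        raise ValueError(f"not a valid asset id: {name}")
    return {
        "category": CATEGORIES.get(match.group("cat"), match.group("cat")),
        "number": int(match.group("num")),
        "part": int(match.group("part")) if match.group("part") else None,
    }
```

In Unity, the same idea would typically live in an `AssetPostprocessor`, where the importer inspects the incoming file name and, for example, flags textures as normal maps automatically, the automation Onat mentions not knowing about at the time.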
The long-term importance of a good naming scheme definitely makes sense. So, I guess, the change half of this question. If you make changes, how does that work? [0:29:09] OH: Yes. Exactly. So, it's actually one of the things Ole, our art director, loves so much. When he's talking in interviews, he's always like, "Not having the option to use Ctrl-Z also creates something specific about these things." It's only on some rare occasions that it's actually bad. He had this one instance where he had just finished modeling one of the characters, and usually we put them into the oven to bake the oven-hardening clay, which is Super Sculpey, to be able to paint it afterwards. The character fell down from the table and the entire head was smashed, so it was basically a day or two of modeling work gone in a second, with no option to bring it back. Once the things are digitized for the game, they are basically regular 3D models. We can do both. So sometimes, for example, the initial stairs that Ole and Fabi created were twice as high as they are currently. Even the current stairs are relatively high for video game stairs. It was a very exciting time of findings for us, actually, because we were building things as they would be in the real world in terms of proportions. We soon realized that, and you can see it even nowadays, Harold running up and down the stairs looks super silly, and that was the best we could achieve, actually, because the stairs are too steep. Obviously, we could do better by slowing down the character, but then again, the game is already a slow game just in terms of style. We thought it wouldn't be nice, and that's one of the things. Now, you can imagine, if the individual steps were twice as high, it was even harder to place the feet and so on.
That led us to fix the stairs digitally, because for the main corridor stairs we actually had four different step models and they are placed beside each other. Things like that can easily be fixed digitally because we are just repeating something, right? But we had another instance where the lab, for example, had to be a little bit deeper, and I think we had a similar situation in the lounge room, because the lounge room was so tiny, which was originally the idea. The idea was we have a lounge which is so tiny that it's uncomfortable. The idea was very nice, but for a game, you need at least enough space that you can move around in there. The space was so tight that when Chris is sitting in the lounge, you couldn't move into the lounge anymore. So, we had to extend it by one tile. For that, for example, we didn't change the set entirely, but the floor is segmented into these tiles. So, we built an extension for the set in the physical world, just scanned that in, and added it to the digital set. [0:32:13] JN: Fascinating. [0:32:13] OH: Yes, there were all kinds of things. If it's smaller objects, if a lamp wasn't high enough, for example, we oftentimes fixed that digitally. It works with some objects. It doesn't work with others. In general, we have quite some freedom for small adjustments, angle adjustments, and things like that. That all works quite well. It also works because it's not like the original 2D photographed backgrounds, where everything is photographed as part of the set. Each asset is individual. So, even if we messed up the scale of one asset when building it physically, we can still scale it in the digital world, which, yes, helped a lot. [0:32:53] JN: Awesome. Okay, so I want to spend some time on the time you spent making the game. We mentioned that a couple of times now.
I think the first thing that's really interesting to me is, I feel like if I come back to a project after three months, it no longer compiles. How do you cope with building a game over 14 years from a technology perspective? Surely, Unity changed from under you so many times. [0:33:15] OH: I will tell you something that was probably a nightmare for every single engineer that might be in the audience. We started with Unity 4.6 and we were on a beta version of Unity constantly until last year. So, it was bleeding edge and we didn't leave out a single version. A lot of people are like, "You are updating your project, actually?" Like, yes, constantly. No problem. I didn't know that you usually don't do that, that you just stay on one version. I was just treating it like I use other software - my audio software, well, even there, I'm more careful because of compatibility of plugins and so on. But I basically went from one version to another, and the good thing is that especially in the beginning, the first couple of years were finding the style and very rough prototyping. We weren't building the entire game. So, even if things broke, it was fine. Since 2016 or so, when actually building the game started - I think the biggest problem people usually have when updating from one Unity version to the next, that's just my assumption, because everyone tells me it's really, really bad and I didn't have that experience, honestly. When I was updating from one version to the next, sometimes there was a slight API change. I had to fix three lines in the code. Then maybe change the way I had set up a couple of things. So, it was one or two days of work and then it was fine. I think the biggest problem is if, let's say, you have been using Unity 2017 and suddenly you have to update to Unity 2021 or so. That, I guess, probably doesn't work well.
[0:35:03] JN: Because you're doing every version, your changes are tiny every time. Yes, that makes sense. [0:35:06] OH: Exactly. Because the iterations were so small, it was always fine. In fact, I didn't even know about version control until 2013 or so. It was wild. The beginnings were really wild. [0:35:18] JN: Very exciting. So, I think I've seen you mention somewhere that it might even have been - I don't know if it's loading the assets or studio lighting. But along the way, there were changes to Unity that made the game possible that you didn't start with. Is that correct? [0:35:30] OH: Yes. For example, the HD render pipeline, which enabled us to achieve the look we were going for. Before, we were just using the only pipeline that existed at that time, Unity's default render pipeline, and it was fine. You could already see where this was going. I tried to use lighting the way I used lighting in film, and I was trying to use cameras. Cameras were already great because of Cinemachine. I was in contact with the devs of the Cinemachine team, basically. It was lovely because I was sometimes complaining about some nerdy film stuff, like the aperture values being the wrong way round, because the depth of field got shallower the higher the aperture value was, which is just plainly wrong from a film standpoint. Things like that. The default render pipeline was fine. It was physically based rendering. But for example, lights didn't have an inverse-square falloff; they were, I think, falling off linearly, and things like that. Everything just didn't look as realistic as I hoped. There were also other things, like how the shadows looked. There was no subsurface scattering and all that kind of stuff. I knew about these kinds of technologies because of the triple-A games I looked at at the time. I was like, "Okay, if this exists in this triple-A game now, probably it will come to Unity sometime soon, hopefully."
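The falloff difference described here can be sketched in a few lines. This is a minimal illustration of the two attenuation models being contrasted, not Unity's actual implementation; the function names and values are made up for the example.

```python
# Two light attenuation models: a linear fade to zero at some maximum
# range (roughly how the older default pipeline behaved, per the
# conversation) versus the physically correct inverse-square law.

def linear_falloff(intensity, distance, max_range):
    """Intensity fades linearly to zero at max_range."""
    return max(0.0, intensity * (1.0 - distance / max_range))

def inverse_square_falloff(intensity, distance):
    """Physically based: intensity is proportional to 1 / distance^2."""
    return intensity / (distance * distance)

# Under the inverse-square law, doubling the distance quarters the intensity:
print(inverse_square_falloff(100.0, 1.0))  # 100.0
print(inverse_square_falloff(100.0, 2.0))  # 25.0
```

The visible consequence is that inverse-square lights are much brighter close up and tail off smoothly, which is a big part of why physically based lighting reads as more realistic.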
That was a bet, actually, with a lot of things throughout the development. I think that's probably also the reason why this doesn't look like a 14-year-old game. Because it's basically made with last year's tech, right? First of all, because of updating constantly, and because of our source materials: we have a 4K texture for each single part. Basically, the native resolution of all our textures is at least 4K. Sometimes we reduce them in the game. Something like virtual texturing, for example, was also one of the other necessities without which the game wouldn't have worked. Because we have like 30 gigabytes of texture data and no tiling textures at all in the entire game. When we were still using normal textures, we were using 8 to 10 gigabytes of memory per scene, usually, at full resolution. Virtual texturing, for those of you who don't know it, is similar to id Software's MegaTextures; it's basically tile-based texture streaming. It was possible to bring that to a fixed memory usage, which was especially essential for consoles, you can imagine, because memory is always sparse on consoles even today. I think the main difference is that, like, [inaudible 0:38:14] memory is gone, and PCs just don't do that. It's fine on PCs. But yes, the thing is, especially because we knew it would still take a while - and oftentimes we had times in between where we didn't have the budget to continue; by the way, that was also one of the reasons why it took so long - we rarely stopped doing something the way we wanted to do it. We never had a situation where we were like, "Oh, we can't do this because it's not possible in Unity or the technology for that doesn't exist." Because we always hoped that it would just happen by the time we were at that point. If it didn't, we could still cut it or make it in a different way at that point.
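The memory arithmetic behind this can be sketched with some back-of-the-envelope numbers. These figures are illustrative assumptions (uncompressed RGBA8, a guessed texture count and tile-pool size), chosen only to show why a fixed tile pool beats keeping every unique 4K texture resident; they are not the game's actual numbers.

```python
# Why tile-based texture streaming helps when every texture is unique:
# full-resolution residency scales with the number of textures, while a
# virtual-texturing tile pool is a constant budget.

BYTES_PER_TEXEL = 4  # uncompressed RGBA8, for simplicity

def full_texture_bytes(num_textures, size=4096):
    """Memory if every unique texture is fully resident."""
    return num_textures * size * size * BYTES_PER_TEXEL

def tile_pool_bytes(pool_tiles, tile_size=128):
    """Fixed-size tile pool: only the visible tiles are resident,
    so the budget is constant regardless of total texture data."""
    return pool_tiles * tile_size * tile_size * BYTES_PER_TEXEL

# 150 unique 4K textures fully resident is roughly the 8-10 GB per-scene
# figure mentioned in the conversation...
full_gb = full_texture_bytes(150) / 2**30   # 9.375 GiB
# ...versus a fixed pool of 16,384 tiles of 128x128, exactly 1 GiB,
# no matter how many gigabytes of unique texture data the game ships.
pool_gb = tile_pool_bytes(16384) / 2**30    # 1.0 GiB
```

The pool size becomes a tuning knob per platform, which is why this approach suits consoles with a hard memory ceiling.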
So yes, that, together with updating the Unity version each time, I think, led to it actually looking very recent. [0:39:08] JN: Yes. I mean, we could do a whole other episode on the timing and how you all arranged your work and how you paid for it. I think there's a lot that's very scary as a game dev about spending that amount of time on a game and the funding, but also the lack of compromise and the taking your time. I think the Guardian article about Harold Halibut does a really good job of going into how you approached that. So, I'd highly recommend the Guardian article for anyone listening to this. As we're low on time, I want to jump on to, I guess, two more quick questions. First up, knowing your film background now, this makes a lot more sense. But I know you used motion tracking for the animation. When it comes to the scenes between the characters, did you do that in the real world between two actors in motion tracking? Or was that all digital? [0:39:48] OH: Funny question. I'm actually acting all the characters in the game. [0:39:51] JN: Amazing. [0:39:53] OH: Yes. Again, mostly a time-specific process, because we have motion captured every single line of dialogue in the game. It's more than nine hours of dialogue. I think now recently some games have started doing that. Even the first Horizon didn't do that to that extent. I don't even know if a current - I mean, I think there are - [0:40:12] JN: Baldur's Gate 3 very famously did lots of motion capture with scenes. [0:40:15] OH: You have the hero cutscenes, which are completely motion captured, and then you have some random animations for random dialogues. But we basically motion-captured the entire game. And we did that with two people, basically: our 3D mastermind, who's also our motion capture technician, I would say, and me as the actor. This made things really fast because I'm also directing it. So, there wasn't this layer in between.
I also had all these scenes in my mind already, and we had already built them, by the way. So, it's not like that. We basically set up tripods for the other characters when we were acting them out. We had already recorded the audio; otherwise, this wouldn't have worked either. Basically, the audio provided the perfect timing, the perfect cues for me to jump in and act. I didn't have to move my lips because the facial animation came through [inaudible 0:41:09], which is basically audio-based facial animation. And it was really nice because I could move there and then swap positions with where I had put the tripod. You can imagine, with one-on-ones, it's really easy. You basically just follow the audio cues with a slight delay, and my delay was perfect every time. At one point you get used to it. I knew exactly how much I had to shift the resulting animation, basically, in Unity, and it matched perfectly with the audio without any additional edits. [0:41:40] JN: Fascinating. [0:41:40] OH: Now, where it got more complicated was with all the interactions where people are running around, changing positions, and so on, interacting with each other. One character slapping another - I won't spoil too much. We were surprised that all these things worked as well. Sometimes we were doing these and then doing a quick preview. And obviously, it wasn't unedited mocap footage; there was a cleanup process involved, especially if we were handing things over, or with the slap - the contact points and so on have to be edited. But in general, the movement actually worked and felt really natural between characters. You didn't have the feeling that they were separately recorded, which was really nice. [0:42:29] JN: That's perfect. Then, final question before I let you go: what's next? So obviously, you've now - I hope this game has made the next one slightly easier in terms of funding.
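The alignment step described here - reacting to pre-recorded audio with a roughly constant delay, then shifting the captured animation back by that offset - can be sketched as follows. The data structure, function name, and offset value are assumptions for illustration, not the studio's actual tooling.

```python
# The actor follows pre-recorded audio cues with a consistent reaction
# delay, so the captured keyframes can be shifted earlier by that fixed
# offset to line up with the audio track.

def shift_animation(keyframes, delay_seconds):
    """Shift every (time, pose) keyframe earlier by the reaction delay."""
    return [(t - delay_seconds, pose) for t, pose in keyframes]

# Hypothetical captured take: (timestamp in seconds, pose label)
captured = [(0.30, "idle"), (0.80, "turn"), (1.55, "slap")]

# Reaction delay measured once, then reused for every take.
aligned = shift_animation(captured, 0.30)
# The first keyframe now lands at t=0, in sync with the audio cue.
```

Because the delay is consistent, a single constant offset is enough; no per-keyframe retiming is needed, which is why the matched takes needed no additional edits.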
But do you have any idea what your next project is going to be already? And can you tell us? [0:42:39] OH: Yes. Actually, we all went on vacation in the past months and we are slowly starting with conceptual work. We actually had a couple of ideas, even during the development of Harold Halibut, but we never had time to really think a lot about those. So, we are slowly starting with that process now. It's nice because we don't have to rush it. Again, like you said, because of Harold Halibut, especially - we just brought it out. Also, another interesting thing might be that a lot of people think, "Oh, this has taken so long because of the handmade process and so on." In fact, our workflow is now so efficient that I would say it's not slower than if you want to reach that level of quality with traditional 3D methods. If we made the same game today with the budget needed for it, it would probably take four or five years. It took 14 years because it was a learning process and an experimentation process, and also it's - [0:43:40] JN: It's literally a part of your degree. [0:43:42] OH: Yes, exactly. Finding funding - I mean, you might know that it's really hard, and these things will hopefully be easier because the process is in place, and hopefully funding will be easier as well. Now, with where we are, I'm not sure which idea to follow and which game to make next. We already know it's going to be in the same visual style. So, like I said, the technology, the pipeline, the workflow is established. That's what I can say to that. [0:44:08] JN: Perfect. I'm sure for many fans of your work, that will be one of the most exciting things to hear. That's it. It's been amazing. Thank you so much for running us through how everything came together to make this really unique game. I highly recommend folks play it. And yes, thank you for joining us today. [0:44:20] OH: Thank you, Joe. [END]