EPISODE 1669 [INTRODUCTION] [00:00:00] ANNOUNCER: Gabriel Gambetta is a Senior Software Engineer at Google where he works on YouTube. He's an expert in computer graphics and game development, and is famous for his articles on engineering fast-paced multiplayer games. Gabriel joins the show to talk about his history with game development, client-server game architecture, rubber banding, ray tracing, rasterizers, and much more.  Joe Nash is a developer, educator, and award-winning community builder who has worked at companies including GitHub, Twilio, Unity, and PayPal. Joe got his start in software development by creating mods and running servers for Garry's Mod. And game development remains his favorite way to experience and explore new technologies and concepts. [INTERVIEW] [00:00:55] JN: Welcome to Software Engineering Daily. I'm your host for today's episode, Joe Nash. And today, I'm joined by Gabriel Gambetta. As well as being an educator, author, and actor, Gabriel has headed up a game studio and is currently a software engineer at Google. Welcome, Gabriel, to the show. Thank you for joining me today. [00:01:11] GG: Thanks Joe. I'm very excited to be here. [00:01:13] JN: Obviously, in that intro, I mentioned a bunch of things. And I've known you for like parts of your career journey. But can you walk us through your career thus far and what's taken you from a game studio to Google?  [00:01:22] GG: Sure. Of course. I am a bit all over the place. I still haven't figured out what I want to be when I grow up. Still trying to find that out. Yeah, the games company was in my early 20s. What happened roughly speaking was that I started programming from a very early age on a Spectrum, a computer from the 80s that we had at home. I always wanted to make video games, which is, I guess, why most of us, or at least a lot of us, get into engineering, because we want to make video games. I always wanted to make video games. But the reality was that - this part of the story I'm telling, this was in Uruguay, South America in the mid-80s, and there was no software industry pretty much, let alone game developers.  By the late 90s when I started university, I had to choose a more sensible career path. I studied software engineering. And I had a classmate who shared my passion for computer graphics and making video games. And it's funny because our classmates looked at us as the hippies of the generation because their dream was to work in a bank. And I was like, "Please, no."  But I was doing my thing. I was studying. And kind of out of boredom, to some extent, we said, "You know what? Let's just make a game." We made a game. And this was now the early 2000s. And the shareware industry was booming. Even from our distant corner of the world, there was a clear path to making and selling video games. It was very exciting.  It was the days of you downloaded a demo. Then if you wanted to buy the game, you bought the full version. We made this game. We put a demo out there. And I think we sold one copy. And on the one hand, it was a bit discouraging because we had obviously huge ambitions for this thing, which in retrospect was terrible. It was very naive. But we did sell one copy. And that proved it. It served as a bit of an end-to-end test that, "Oh, my God. It's possible to start with a game idea and end with dollars on the other end. Maybe this will work."  Yeah, early 2000s, I was still in the middle of my degree. I had accidentally started a game company because there were no jobs in games. 
I was working part-time in a local software house, one of the few ones in Uruguay. And because of my interest in graphics, I also ended up teaching computer graphics at university, which leads to the other thing where you mentioned that I'm an author. I ended up writing a book about computer graphics. But I don't want to get ahead of myself.  The second game we made was more successful than we could have anticipated. We kind of accidentally created a genre, which was very popular in the mid-2000s. All these - I don't know what they call them these days. But these kinds of games where you control a character that has to serve drinks or manage a restaurant in some form. We accidentally started that. Then bigger and more experienced players made most of the money in that genre. But, yeah, we accidentally started that.  And by the second game, I think it had gone well enough that I took what seemed at the time like a big risk. And I'm not a big risk taker. But at the time I felt this really strong drive to - in some sense, I was living the dream. I quit my full-time job and went full-time into game development. I think that was late 2004. And then I ran the company for another four or five years, depending on how you count. That's the story of that game company.  [00:04:53] JN: That's awesome. Yeah. I don't know what you call that genre either. But it's had a real resurgence recently because there's been a bunch of - again, like the serving thing, like Playtop. Playtop is one of them that has been very popular on Twitch in the last couple of years. It's still a booming genre. [00:05:04] GG: Very interesting. [00:05:06] JN: You worked on some big IPs as well, right? With another company. [00:05:09] GG: Yes. [00:05:09] JN: CSI and Criminal Minds. And it always makes me laugh because you say - I think it's actually in the book. You say in the intro to the book that you worked on some games that no one would have heard of. But those are big IPs there. [00:05:21] GG: People have heard about the IPs. Not necessarily all the games. Yeah, what happened was that, at the very beginning, we were just putting games out there and we were pretty unknown. Some of our games were moderately successful. And that part of the making games experience felt pretty much like pushing a boulder uphill. It was a lot of effort to make things happen.  But we got around 2006, I think, to an interesting inflection point in which people - and by people, I mean companies - started contacting us saying, "Hey, we like what you're making. Let's work together." In 2006, I think I started going to a games conference in Seattle. It was called Casuality at that time. I think it's now called Casual Connect, if it's still going.  Up to that point, I was feeling pretty isolated in the game development world because it was me in a room with a couple more people but nothing else - we didn't have a community. The only community I had was an online forum, the game dev forums, where we talked with other game developers at a similar stage of the journey.  Going to Seattle and meeting people in person was mind-blowing. I finally could put faces to names. Video chat was not much of a thing even back then. It was pretty great. And part of my mission when going to this thing, which was pretty expensive considering I was flying from Uruguay, was to meet people and try to sell our services as a game development studio.  And one of the companies that took us up on that was Legacy Interactive, a company based in Los Angeles. 
Because of geographical proximity, I guess, they had a lot of connections with the entertainment industry. I think the first game we did with them was Sherlock, which is not exactly like a super-hot IP. It wasn't back then. Maybe our games kind of revitalized the interest in Sherlock Holmes. [00:07:15] JN: You created Benedict Cumberbatch is what you're telling me.  [00:07:17] GG: Exactly. Exactly. I should have a chat with him. We made Sherlock, which was a great experience working with this company and working on this. It's a game that holds a very special place in my heart. I couldn't tell you why. But it's such a cute game and I really loved working on that one.  And this movie IP stuff or TV IP stuff came from them. They had the contacts. A bunch of games we made with them, they had the IP or licensed the IP and wrote all the scripts. Not the game scripts themselves but kind of the story of the game. And gave us the assets. They had probably licensed the likeness of the actors as well.  I think for CSI, the actual actors did some voice recording, which I would have loved to be in the room for. But I was on a different continent. So, it didn't happen. But, yeah, we ended up making CSI New York. We made Criminal Minds. We made Ghost Whisperer. We made Murder, She Wrote, which is also, yeah, not exactly a super-hot IP. But it's a very well-known one for sure for a certain segment of the public. And that's how we ended up making these kinds of higher-profile games. [00:08:29] JN: What then led to joining Google? I think at the height of the studio, you were making all these games. What made you want to jump and I guess do traditional software engineering? [00:08:37] GG: Yeah. It was a combination of many things. At some point throughout this thing, video games stopped being fun for me. I think they became work. They became research. And the only thing I've played since then is first-person shooters, this time on an Oculus Quest. And I love it. But I'm not a gamer anymore, I guess.  At the time, for many reasons, I was doing more kind of business development. I was doing more maintaining our in-house game engine. Not so much making games. The whole thing had started to lose a bit of the magic, let's say. I never got to make the games that I really wanted to make, the games that I wanted to play. I was making the games that the market would pay for and [inaudible 00:09:21].  Then a couple of personal life things happened. My dad died when I was 26, I guess. This was 2007. He was way too young. He was 59. For other reasons, I decided to leave the country. I moved to Spain without a clear plan other than I'm going to keep running the company from a distance, which was maybe a more aspirational plan than anything else.  Also, the 2008 crisis finally hit us. A couple of titles we were working on were cancelled. I was in Spain without a clear plan. A couple of projects that we were planning to work on for the following year kind of didn't happen. And out of nowhere - well, not exactly out of nowhere, but a Google recruiter contacted me and said, "Hey, would you be interested to work at Google?" And for me back then, this was back when Google was cool.  [00:10:20] JN: He says, from a Google meeting room. [00:10:23] GG: Exactly.  [00:10:24] GG: I'll get back to that. But it had always been this kind of impossible dream of, "Oh, my God. The pinnacle you could aspire to as a software engineer was working at this magical place called Google." When I got that call, I said, "Yeah, sure. 
Of course, I'm going to interview."  I interviewed. I got hired. I moved here, to Zürich, in May 2011 and kind of wrapped up, kind of ended up handing over the projects we had with clients to the people who were working on them. Basically, I didn't want our clients to lose their outsourcing partner. I didn't want the people who I was working with to be out of jobs. I kind of found a way to remove myself from the thing. And I think they did a couple more games in this kind of partnership as well. That was nice. [00:11:15] JN: Yeah. I think your story is, obviously, from a number of years ago now. But I think it's really useful and interesting in this current moment. Because of how you got into it, working part-time in software houses, building up the studio. And then the reasons, the economic reasons anyway, for why you left and where you went, given the current state of the game industry and the broad layoffs and that kind of thing. And people thinking, "Oh, if I can't get a game job in AAA, I'll go indie." I think your story really does a great job of underlining what that's like, and the risks, and how that happens. Yeah, thank you for diving into that. Along the way, you mentioned you weren't building the kind of games that you wanted to build, which, obviously, I have to ask: what were the type of games that you'd want to build?  [00:11:54] GG: Right. What we were making was games like Sherlock and stuff like that, which are hidden object games. Or find the five differences between the two scenes. And there was a story in which Sherlock was trying to solve a case. Or match-three games that I probably don't have to explain. And what I was playing was first-person shooters. I was playing Enemy Territory. I loved Enemy Territory. I played some FEAR Combat as well back in the day. And I kind of wanted to make that kind of thing. I wanted to make the game that I wanted to play.  We were very far from AAA, of course. We were not going to make a first-person shooter. But I came up with this idea of making a top-down 2D shooter that felt like a first-person shooter. And I actually started building it. And there were some games like that at the time. I don't remember the name, but some top-down 2D shooter with zombies kind of thing. And I wanted to make it feel as multiplayer as Enemy Territory. I had this idea of using kind of a visibility algorithm to simulate not being able to see behind a column, for example. Your field of view would be kind of a dynamic fog of war depending on where the character was.  And, of course, that's how I got into multiplayer network programming. Because one of the keys to making this work was to make the multiplayer aspect work well. That's when I went kind of a bit off the deep end into researching this thing. Of course, the gold standard back then was the Quake 3 net code probably. I think Source was already around. And there were a couple of sources that explained how the whole thing worked.  Also, the Quake 3 engine was open source. I don't know if I actually looked into the source code for that. But I looked at the source code for different reasons. The collision detection thing. But I don't know if I did for multiplayer. But what I did was a lot of research until I really understood the problem and how to solve it and implemented it in the game. And it was working really well. We were playing on the LAN with simulated network conditions of packet loss and all of that and it worked pretty well.  [00:14:03] JN: Fantastic. I imagine that research - 
Some listeners may - if you were to go Google how to make a multiplayer game, you'll probably end up running into a tutorial series you're very famous for, which I've gone back to a number of times since you first introduced me to it at Improbable. And that is a tutorial series on networking for fast-paced multiplayer games. I'm guessing that creating that game is what led to the creation of that series.  [00:14:24] GG: That's correct. Yeah.  [00:14:27] JN: And this series is referenced all over the place. There's implementations in every possible game engine. When you wrote it, what was the intended audience at the time? Did you see it having like the legs it did? [00:14:39] GG: No. Honestly, I didn't. What happened is that we never completed that game for a variety of reasons. And one of the things that I enjoy doing is writing fiction, non-fiction. You name it. I like to write. I kind of decided to dump my research into articles. I don't really remember why I did that. It was partly for myself, I guess. I didn't want to forget about the things I learned. And partly because I enjoy explaining things, I guess.  If you look at my computer graphics book, I also didn't invent anything new. But I think the value my articles bring and my book brings is the clear explanations. It's something that people - over and over, that's the thing that people keep saying. It's like, I never understood this topic until I read your article. In this case, I also didn't invent anything new. I just fully understood how Source did it, how Quake did it. Implemented my own version and explained how to do it.  And, no, I didn't anticipate that it would become so popular. Someone on Twitter last year called it a classic. And I was like, "What? What do you mean?" I was very surprised. But, yeah, this article is 15-years-old now. I guess it might be a classic. Crazy. [00:15:50] JN: Yeah, I was trying to work that out. Because I don't think it has a publish date on it. But, yeah. I think, obviously, you showed me it in - what was it? 2015?  [00:15:57] GG: And it had been there for a while already. Yeah. [00:15:59] JN: Yeah. Exactly. Yeah. What you say about the clear communication, I think, is really important. Especially in these two domains. Obviously, right now, we're talking about networking. But we will talk about computer graphics soon. But they're both - well, computer graphics especially is a domain that I think, for a lot of software engineers who didn't get their start in that world and get their start in video games, always seems very intimidating because of the level of maths, right? You've got linear algebra, and you go look up resources, and all these symbols you're not familiar with. And so, I think the way you synthesize that material and make it very clear is an incredibly important contribution.  I guess before we dive in - you've explained the game that you wanted to build a little bit. But I want to dive a little bit into some of the terminology. This guide is explicitly addressed to fast-paced multiplayer games. What does that mean? [00:16:42] GG: Yeah, that's a term that I came up with, I think. There's no official category of fast-paced. But I said that basically as opposed to turn-based. I can go into detail of what this is and what the differences are. But, basically, imagine the most turn-based game you can imagine, which could be, for example, chess. There is no - the game has a timer but it's not like a millisecond timer. You have maybe minutes to make a move. 
If you're playing chess multiplayer, you have a server and you have two clients. Client A says this move. The server says okay. And sends the new state to the other client. And all is well. Right?  Then things can get a bit more real-time and you get to RTS kind of games where things move faster. But you don't necessarily need millisecond accuracy. Even if you do need millisecond accuracy under some conditions, even though these games are called real-time strategy, they might still be turn-based except that the turns are very short, if that makes sense.  I think Age of Empires, famously, the very first one, I think it was a server that just ran in lockstep, which means the clients send things. The server processes everything. Sends this back to the clients. The clients render. You can push the limits of what you mean by real-time. I guess there's a bit of a gray area if that's real-time or just very fast turn-based.  But what I wanted was something like first-person shooters did, which is it's real-time. Milliseconds count. And, also, there's no time steps in some sense. Time is continuous. It's like you can be pressing buttons, doing actions all the time. And there is a simple way to solve this, which is you have a server. The client sends a thing saying, "Hey, now this is my position." And the server says, "Cool," and sends that to every other client. That could work if everyone behaved nicely. On the internet, that's not the case. Specifically, people are going to cheat. People would come up with a hacked client that says, "Hey, my position is 100 meters ahead," or whatever it is, walking through walls, accessing impossible places, jumping straight behind your enemy's back, whatever it is. You can't trust the client to tell you where it is and what it's doing. This is the concept of authority, authority over the data, which is who has kind of the canonical version of what happened.  And to make these kinds of games work, you need the server to have authority. The server is the one who tells everyone else, "Hey, this is what's going on," which is obviously the case in chess, which is obviously the case in many other genres. But it's not so visible maybe.  How do you deal with this? The way you deal with this is the only thing the client can send to the server is not, "Here's my position," but, "Hey, I pressed the forward button for a second." The client tells that to the server. The server says, "Okay. You were here. You moved forward for a second. This is your new position." That's a new model in which the server is authoritative. Each client sends its inputs to the server. Not its position. But just the inputs. The server does its calculation of where things ended up after a step of time. And then it tells every other - well, it tells every client really, "Hey, this is where things are now in the world." With that, you remove the need for turns. It's just the server updates at a certain rate. It's receiving inputs kind of, let's say, continuously. It's updating the world either continuously or in time steps. It's sending the state of the world to the clients also at time steps. In some sense, it's all well and good. That eliminates cheating.  What it doesn't do is give anyone a good playing experience, for these two reasons. First, imagine I am playing with a client that moves forward for a second. I press the button. And let's say that, immediately after I press the button, the client sends a thing to the server saying, "Hey, I want to move one step forward." 
But the client can't move immediately because the client has no say over the position of the player. It can only say, "Hey, I want to move forward." It needs to wait until the server gets that, processes that, and comes back saying, "Hey, you moved one step forward."  An implementation of this would have two problems. One of them is the delay. If you have a connection, an internet connection that is not amazing, you're going to have a few tens or hundreds of milliseconds of lag. And a couple of hundred milliseconds, maybe it's not much for a desktop app. But for a game where you expect instant responses when you move, it would kill the immersion. It would not feel responsive. Waiting for 100 milliseconds before your character starts moving would not feel great.  The other issue is that, because of network considerations, the server may be sending world updates, let's say, 10 times a second. Ten times a second is already a lot of processing. But imagine if your game rendered at 10 FPS, right? If things - including yourself, things moved on the screen at 10 FPS. That's for the client.  And, obviously, for every other client connected to the server, they will also see people jumping around at 10 frames per second, which is terrible. There's two ways to solve this thing. The first one, from the point of view of the client that is making the movement, is client-side prediction.  From the point of view of the client that is doing the movement, you can do what is called client-side prediction, which is we send the input to the server but the client itself also simulates the results of its own action. It tells the server I'm moving forward one step. But it also starts playing the animation of moving forward one step.  Then the server gets the input. Says, "Okay, this is a valid move. This is the new position." And if all goes well, the position that the server computed and the client computed should match. And at this point, this lag becomes invisible for the player. Because from their point of view, the action was executed immediately. But you still have server authority.  There's one more thing, the server reconciliation thing, which is part of this. Depending on the exact timings of the packets going back and forth and so on, imagine that the player wants to move forward two steps. That's two actions. It might happen that when the player is taking the second step, only then the server comes back saying, "Yes, your first move was valid." If the client would just naively override its own state with the server state, you would see the character jump back. Then, a moment later, the server says your second move is also valid. So, you jump forward again. It would look like you start moving smoothly, jump back, jump forward. Not great.  What server reconciliation does is every input includes a sequence number. Just a counter that increases. And when the server sends the world state back to the client, the world state says, "This is the world state up to input number whatever." At this point, when the client gets the new state of the world, it can say, "Okay, I sent all these inputs that were processed and the results are included in the server state that I just got. But, also, there's a bunch of inputs that I sent to the server but the server hasn't acknowledged yet."  What the client does is accept the state the server sent, discard all the inputs that have already been processed, and reapply the ones that have been sent but not processed. And that completely eliminates any visual glitches of jumping back and forth. It's reasonably simple conceptually but it works incredibly well.
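As a rough illustration of what Gabriel is describing - not code from his articles, just a minimal Python sketch where the 1D position, the fixed speed, apply_input, and the send_to_server callback are all made-up assumptions - client-side prediction plus server reconciliation might look something like this:

```python
# Sketch of client-side prediction with server reconciliation.
# Assumes a 1D world where each input means "move forward for dt seconds".

SPEED = 5.0  # units per second; illustrative constant

def apply_input(position, inp):
    # The same deterministic movement code runs on both client and server.
    return position + SPEED * inp["dt"]

class Server:
    def __init__(self):
        self.position = 0.0
        self.last_seq = -1

    def on_client_input(self, inp):
        # The server is authoritative: it runs the simulation itself.
        self.position = apply_input(self.position, inp)
        self.last_seq = inp["seq"]

    def snapshot(self):
        # Broadcast to clients at a fixed rate, e.g. 10 times per second.
        return {"position": self.position, "last_seq": self.last_seq}

class Client:
    def __init__(self, send_to_server):
        self.send_to_server = send_to_server  # assumed transport callback
        self.position = 0.0
        self.next_seq = 0
        self.pending = []  # inputs sent but not yet acknowledged by the server

    def press_forward(self, dt):
        inp = {"seq": self.next_seq, "dt": dt}
        self.next_seq += 1
        self.send_to_server(inp)                          # send the input, not the position
        self.position = apply_input(self.position, inp)   # client-side prediction
        self.pending.append(inp)

    def on_server_state(self, state):
        # Server reconciliation: accept the authoritative state...
        self.position = state["position"]
        # ...drop the inputs the server has already processed...
        self.pending = [i for i in self.pending if i["seq"] > state["last_seq"]]
        # ...and re-apply the ones still in flight so we don't snap backwards.
        for inp in self.pending:
            self.position = apply_input(self.position, inp)
```

The key point of the sketch is that apply_input is the same deterministic code on both sides, so when nothing goes wrong the predicted and authoritative positions agree, and when they disagree, replaying the pending inputs hides the correction.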
[00:24:35] JN: Okay. I have a couple of questions at this point. I guess my first one - because so much is obviously then happening on the server, it's not enough for the server component of this to simply be receiving states and shuffling them around. It's doing actual physics simulation. And I guess this is why, when people are talking about running headless Unity on the server side for their games, they're having to run the full game engine to do this part. Kind of awesome. [00:24:57] GG: Right. Yeah.  [00:24:59] JN: Okay. And then another one, just from the game player's view, one of the effects you see when your internet starts to be choppy is rubber-banding in these games. And I guess that's caused by states not being received and the reconciliation not happening. What's going on there?  [00:25:13] GG: The example I just described is very simple because you move forward, you predict a move forward, the server confirms what you did and all is good. The problem is - this happens a lot in narrow corridors, or doors, or stuff like that. What if multiple people are trying to go through the same door at the same time? What happens then is the two clients say, "Hey, I want to go through the door." Right? Both of them start predicting on their side. And maybe one of them is going to say, "Hey, I actually managed to -" the physics are such that they managed to go through. But on the server, when it gets both inputs, because of timing differences, it's the other one who goes through the door and the first player who kind of bounces back. What happens is the client is predicting one thing. The server is telling the client something else happened. And that is - which doesn't match what the client has been predicting. When the prediction and what actually happened mismatch, that's when you get rubber-banding. Because the client has to accept what the server said. Maybe it looks like you walk through the door. But, oh, no. Actually, a fraction of a second later, the server was like, "No. The other player went first and you slammed yourself against the other player at the door." Right.  [00:26:25] JN: Right. That makes sense. Okay. There's a scenario, which, funnily enough, happened this past week, which has been very useful for thinking about this interview. There's a new very fast multiplayer game out called The Finals, right? And this is a very fast game with destruction. It's about as complicated for networking, for state shuffling around, as I imagine it could be. I had a scenario where I saw a sniper from two roofs away. I was on a roof. Two roofs away. Saw a sniper looking at me. And I then slid off the roof and was falling on my screen. I was off the roof and falling. There was a building between me and the sniper. And then I was headshot. How did that happen? How does that work?  [00:26:59] GG: I can speculate. Because, obviously, I'm not familiar with the network architecture of this game. But imagine a simplified version of this game in which you have two players and each player has two buttons, which are shoot and hide. And let's say that both actions can happen instantly.  At a given moment of real-world time, let's say, the sniper presses the shoot button and you press the hide button. Or let's say even that you press the hide button like a fraction of a second before the sniper. Your client starts predicting the hiding movement, sliding down the roof. 
The sniper starts predicting the shot. Let me put it this way. Player movement is a thing that is relatively easy to predict. Everything else that materially affects the state of the game world is a lot more difficult to predict well.  When you have stuff like gunshots, explosions, and so on, what the client can do is it can predict - by predict, I mean it can show the animation of the shot, the explosion, everything else. It could also show a blood splatter on the target. But it can't actually tell the server, "Hey, I killed this guy." Because, again, that would open things up for cheating. Right? Only the server can determine that.  The shooter presses the shoot button, you press the hide button. And then, depending on the architecture of the game, it might be that the player with the better connection wins. Because which event does the server see first? Does the server see the shot first or the hiding first? If the server first says, "Oh, this player wants to shoot. What is he aiming at? Where is the other player? Wow. That looks like a headshot." Right? And just a fraction of a second after, your hide command arrives at the server but you've been shot in the head already.  Meanwhile, you've been predicting, "Oh, my hiding was successful." But when the server comes back to you, your client says, "Buddy, you got a headshot." It might happen the other way around as well, which is maybe the sniper shoots half a second before you hide but your hide command gets to the server before the sniper's does and you're safe.  It's one of these things that is impossible, from a kind of physics point of view, to do right, because updating the server is not instant. You can't give everyone in the world zero ping. Because that would conflict with the limited speed of light.  Different servers, different games try to work around this problem in different ways. They might apply an offset to one of the two depending on whether they want to favor the sniper or favor the one who got cover. Because as a sniper, you also don't want to have the opposite experience, which is, I'm aiming perfectly. I shoot. And somehow, I don't hit anything. Right? You can't make everyone happy.  Some games go into more depth than others into trying to make this a good experience for everyone. I think it's Valorant that has a blog series that is super interesting about all the ways in which they solve this, including level design. Because they try to minimize the scenarios in which this can happen by level design. Who sees whom first?  [00:30:13] JN: Yeah. [00:30:13] GG: Right. It's a super interesting series. [00:30:16] JN: Oh, wow. Designing your levels around networking constraints is something I'd never heard of. That's fascinating. Okay. [00:30:22] GG: I know, right? You can go very deep into - and these studios obviously go very deep into, let's give the gamers the best possible experience. [00:30:30] JN: Yeah. Absolutely. I guess my last question on the multiplayer section. And, obviously, again, realizing that the games we're talking about you haven't necessarily worked on. But, although the big thing here, the server-side reconciliation or server-side authority, is dealing with cheating, many games still ship with client-side anti-cheat. What is happening there? Is the server not dealing with all the cases? Are we talking about different types of cheats? Obviously, lots of games still have widespread cheating problems. The Finals still has aimbots, et cetera. How is that still going on? 
[00:31:06] GG: Right. There are many reasons why this can happen. Server authority is the philosophically most clean solution. But for practical reasons, some games might give partial authority to clients. Maybe you can tell it where you are. You can tell the server some things in an authoritative way. Or there's another way to do this, a bit of a hybrid, which is that the client has authority but the server verifies that the results of the client are plausible.  If the client says I moved a meter forward, the server is going to be, "Okay. Fine. Your new position checks out." If the client says, "I moved a kilometer away," the server is going to say, "No. That didn't happen." Right?  In these cases, you can have some sort of client-side cheating that exploits these small margins. And even in a philosophically pure server authority scenario, you can still cheat in ways that don't necessarily involve changing the state of the game. And you mentioned an aimbot. And that's a very clear example of how this can happen.  I guess there are two kinds of aimbots. One of them is, can you aim behind a wall or can you only aim at things that you're actually seeing on the screen? Depending on the server, a naive server would just send the whole state of the world to every client, which means, on the client, you know where the players are even if the game engine is not rendering them. And you could have a client that just fakes mouse movements or whatever it is to make you aim at a player you can't see. That's not related to authority. Because the server is authoritative. It's just the client exploiting the data it's receiving.  A less naive server could compute the visibility of what the player can see and not send positions of players that are behind walls from the point of view of that client. But then, if you do that naively, you could run into a player kind of coming out from behind a wall. But you don't see it coming out from behind the wall. You only see it a few steps past the wall because of the whole prediction and interpolation thing that is going on.  You can have a server that hides players that are behind walls except those who are just around the corner so that you can render them correctly. And an aimbot could still exploit that. Even if the server solved that perfectly, there is value in having an aimbot that can help you aim perfectly at something you're actually looking at. And there's no information hiding that the server can do in that case. The client needs to know where the player is so it can render it. But an aimbot can still be more accurate and more precise than your own hand trying to aim. [00:33:40] JN: Right. And those small adjustments aren't necessarily big enough for a server to catch as cheating. Okay. [00:33:45] GG: Exactly. Those are the kinds of reasons why some sort of client-side cheating is still possible. [00:33:51] JN: Interesting. Okay. Awesome. Cool. Thank you for answering that. Conscious of time, I want to move on. We've mentioned your computer graphics work a couple of times. Starting off as a lecturer and teaching this. Obviously, having implemented some of these things yourself. And then moving on to the book. First of all, can you tell us a little bit about the book? It's published with No Starch Press, right? Which is one of my favorite technical book publishers. I imagine it was a lot of fun working with Bill and the team. [00:34:14] GG: It was amazing working with them. It's my first legit published book. I have my own novel on Amazon. But that's self-published. 
It was the first time I actually worked with a publisher. And the experience was great. I really didn't know what to expect.  But let me tell you a bit about the story of how the book came to be. It's not unlike the other series of articles we were talking about. Rewinding back to the early 2000s, I was in university. One of the classes that I was looking forward to the most was computer graphics. It was a third-year course. And the degree was five years. A third-year course, computer graphics. For the first two years I was like, "Oh, my God. Yes. I can't wait for this."  And when we got to it, what happened was that the professor was not very interested in teaching it and his slides were from a decade before. During the day, we were playing Quake 3 or whatever. And in class, we were learning dithering for one-bit monitors. The black and green kind of things. It felt immediately outdated. Where are the textures? Right?  Despite the disappointment, obviously, this other guy and I had to work through the assignments and so on. And for the final assignment, everyone got the same assignment. It was make a rotating cup in wireframe. Instead, what we did was a bus driving simulation, like a 3D game where you control the bus in third-person. And we used textures of the bus from our CD and so on. We went way above and beyond what the course was asking for.  And I think at the time there was some sort of Linux Day where people just brought their Linux computer and showed stuff. And because this thing ran on Linux, I brought my computer. We had the bus simulator as a playable thing. And the dean of the university, or dean of engineering, I think, came by and was very impressed by what he saw.  One thing led to another. And the next year, I was teaching computer graphics. I found myself teaching computer graphics. And this guy gave me full freedom to kind of redesign the program from scratch. I basically made a bunch of slides and figured out a certain sequence of steps to build a rasterizer, which is how I had built one for this game. Yeah. A couple of years after, I was - I think, throughout the whole time, when I was making video games, I was also teaching computer graphics in university.  I ended up with a bunch of slides, notes, problem sets. A lot of experience, between five and 10 years of experience teaching the same thing year after year. And, by trial and error, understanding where people got stuck. How to get through to people. How to explain certain concepts, right? Because maybe one year I tried a thing, it didn't work out. The next year I'd say, "I'm going to say these different words. I'm going to approach this problem from this other point of view. And I'm going to reorder these two concepts." After a few years, I had a pretty streamlined course. We were doing the rasterizer. We were doing the ray tracer. It was really funny. Because a lot of the time in the class was about the rasterizer. And then for the final assignment, a month before the course ended, I would share a screenshot with them. Until then, they were doing a rasterizer that did some sort of shading. And then I'd show them this screenshot with perfectly round spheres, and reflections, and shadows. "Well, this is your assignment. And you have four weeks to do it." And most of the time, the reaction was they thought it was a joke. But in the end, all of them finished with a working ray tracer. And one of the things that I love the most about this whole experience was when some of them would put their first renders as their screen savers. 
That was a very touching moment. That's when I felt, "Man, this is why I do it." To see their faces when they realized they made this thing themselves.  But, yeah. Long story short, I was teaching computer graphics for a few years in university. When I moved out of the country, obviously, I stopped teaching computer graphics in university. And, again, I found myself with a bunch of material and nothing to do with it. Again, I just put it on my website. Just cleaned it up a bit. I turned the slides into nice diagrams and just put it there and kind of forgot about it.  Every once in a while, Hacker News would discover it. Every once in a while, it would hit the front page and I'd get a bunch of comments. And people were always, "Oh, wow. This is great. This is not intimidating." That's the other thing, right? I designed my course to be as non-intimidating as possible. It uses the minimum amount of math required to get to rendering something.  The focus is not on, "Hey, let's make the most performant thing." It's, "Let's make the conceptually simplest thing that will render something that looks nice," which is, I think, the appeal of the book. That you can pretty much get away with high school math. And for the little linear algebra it uses, there is no theory. There are no lessons. There's an appendix. But it's more like, this is a vector. It's three numbers. This is how you multiply two vectors. It's a set of recipes. It's not, "You need to derive an orthonormal basis of the vector space." It's, "This is called a dot product and you compute it like this."  [00:39:41] JN: I can vouch for this. My math ended at 18. And I've done the book. It is doable. It is very approachable.  [00:39:47] GG: I'm happy to hear that. Because it's a tiny bit of linear algebra and almost no trigonometry. It uses cosine at some point. But it's just a function that does something. It doesn't have a geometric interpretation really. [00:40:00] JN: And the back of the book has a linear algebra cheat sheet in it as well, which is very -  [00:40:02] GG: Yeah. Yeah. There's the appendix. That has a bit of that. Yeah, the book was being found, every once in a while, by Hacker News over the years. I think I put it online in 2010 or something like that. And the comments were always, "Oh, this is super interesting. This is not as intimidating as I thought," and so on.  And one of these times that it hit the front page in Hacker News, I got an email from a guy saying, "Hey, I'm an editor at No Starch Press. Would you like to publish this as a book?" And I was like, "Oh, my God. Yes. This is a dream come true. This thing can become an actual book." I was like, "Yeah, most of the material is ready. I just need to clean it up a bit and we're good to go."  I thought something like a week and a half of working intensively on the book and it would be ready for publishing. As I said before, this was the first time I was working with a publisher. I had no idea what the process was. I very naively thought, "Oh, whatever is on the website, just printed, it's going to be fine." It took a year and a half to refine it to the point that it's a good book.  And in that sense, it was amazing to work with everyone at No Starch. They were very supportive. They were very flexible with time. I moved internationally a couple of times when all of this was happening. I was quitting jobs. I was getting new jobs. There was no pressure from them of, "Hey, we need it by this date."  I can't explain what they did. It's still my writing. 
It's still my book. But it would be half as good if they hadn't done whatever it is they did. It's like a million tweaks, a million suggestions. The editor was not super technical but kind of technical. He was invaluable in saying maybe you should reorder these two paragraphs. Maybe you should add a thing at the end summarizing the chapter and that kind of thing. [00:41:50] JN: They were able to play the audience as they went. Because they were technical enough, but not a subject matter expert. [00:41:55] GG: Exactly. Yeah. I think that's exactly it. And they were amazing at it. The book is like a million times better than the text I had before it became a book. It was amazing working with them. And like the other article, right? I'm surprised how well-received it is. I have a kind of a search alert for the name of the book. And every once in a while, it's - again, like in the early days of game development where I was saying that it felt like pushing a boulder uphill, it came to a point - at first, I was on Reddit, on Stack Overflow, whatever, and someone would be like, "Hey, I don't understand rasterizing. How do I do this?" And I'm like, "Oh, I wrote a chapter of the book about that. Check it out here."  By the way, the No Starch people let me keep a full version of the book on my website for free, which is amazing. It's incredible on their part. That goes directly against selling books, which is why it's amazing that they let me keep it on the website for free. Yeah. I would say, well, you want to understand lighting, I wrote a chapter of this thing. Go check it out.  But then something started happening, which is my keyword alert thing would send me an email saying someone mentioned the book, and I'd go check. It's someone [inaudible 00:43:08] asking, "Hey, how do I do lights?" Someone else who I've never heard of says, "Oh, there is this great book called Computer Graphics from Scratch." I can't tell you how rewarding that is. It's not me recommending the book and people saying, "Wow, thanks for the recommendation. That's super useful." It's someone else recommending my book. I probably can't convey what it means to me, but it's kind of mind-blowing.  [00:43:37] JN: That's awesome. Yes. I mean, I wonder if a big - so for me, one of the things that's really interesting about your book, as someone who has dabbled in that kind of stuff a couple of times and always bounced off it, one of the reasons I really like your book in particular is that a lot of the code examples are pseudocode. I really like that in particular because I feel like often a barrier for me, as a web dev dummy, is that I come from JavaScript land. That's where I am; everything I've done is in JavaScript. I try. I get to one of these books, and it's like, "Cool, you must be an expert in C++ to even get started."  I do think that is part of what makes your book universally applicable. But, also, I imagine it helps with kind of being timeless, right? Because people aren't reading your books and saying, "Oh, that's not the way C is written anymore." Do you think that's been a factor in why people are continuing to read it? I mean, it's been a while now, right?  [00:44:21] GG: Yes, yes. It's been a bunch of years. I think all of the coding stuff that I have on my website is pseudocode because I don't want to - again, I want to be accessible. My pseudocode is almost Python.  [00:44:33] JN: Close to code.  [00:44:34] GG: Yes. It says more about Python than it says about my pseudocode. But anyway, there's a couple of things. 
I want people to be able to read it and understand it. The other reason is I really don't believe in copying and pasting code because I had my time, when I was learning OpenGL or whatever, where I just found these tutorials, and I would just copy a bunch of text without really understanding what it was doing. It kind of works, but it's very limiting in the sense that if you don't understand the code you're writing, you're not going to be able to fix it. You're not going to be able to diagnose it. In some way, I guess, having pseudocode forces the reader to understand what the pseudocode does, rather than just copying and pasting it and running it and moving on.  [00:45:18] JN: That's really interesting. Yes. Because even if it's Python, there's still some translation, some working out of the library equivalents, right?  [00:45:24] GG: Yes. What I'm writing would be very high-level Python, but it won't work if you just copy it. That said, all of the algorithms in the book have a JavaScript implementation because all of the - every chapter links to a live demo which renders a scene. But, also, if you look at the source code of the page, the whole thing is there. It's not a static image. Some of them have a couple of knobs, things that you can turn on and off to illustrate some concepts and so on. So there is a JavaScript implementation.  I guess, between the JavaScript implementation and the pseudocode, the programmer can get the ideas and make their own implementation in whatever language they want. What you said about being timeless might be true. For an interesting reason, it's not just, "Wow, this is not how C is written anymore." It's that people don't write rasterizers in software anymore. They do it in the GPU.  But that means that you can implement these same algorithms in the GPU now. There would have been no way for me to kind of predict this when I started writing the book. We still had fixed-pipeline video cards. Now, everything is programmable. Yes, the exact same algorithms apply, except they run in the GPU now. Then you write them in CUDA or whatever it is.  [00:46:36] JN: Yes, that is interesting. Okay. So we've mentioned a bunch of words I want to break down for folks who aren't familiar. Obviously, the two things that happen in the book that you mentioned already: you build a ray tracer and then you build a rasterizer. You kind of spoke about some differences when you were talking about your students, but can you break those two down for us? [00:46:50] GG: Sure. Both rasterizers and ray tracers try to do the same thing, which is start with a geometric definition of the scene and materials and lights and everything else, and give you an image of how the scene looks from a camera. In theory, you can arrive at the exact same image using either of the two techniques. In fact, in some chapter in the book, I have a comparison of the same scene rendered both ways, except in the one with the rasterizer you can see the vertices in the sphere because it's not super high polygon, kind of on purpose. But both can do the same things pretty much.  The approaches are entirely different. Rasterizers, as the name suggests, draw the image raster by raster, a raster being a horizontal line of pixels on the screen. What a rasterizer does is it goes object by object, surface by surface, and says, "Okay, I have a triangle. Where on the screen should this triangle be drawn? Okay, these are the three points. Okay, cool. What's the light at these three points? I'm going to compute the lighting. 
I'm going to compute - I have the texture coordinates. I'm going to draw this triangle with the lighting and the texture, and move on to the next triangle." That's what a rasterizer does effectively. It starts with each object in the scene and draws it on the screen.  Maybe the one complication is that, with this thing I just described, rendering correctly would depend on getting the order of the objects right, because you would need to draw things from back to front. Otherwise, for things that are in front, if you later draw something that is in the background, that would overwrite them. So there is this thing called a z-buffer, or a depth buffer, in which you keep track of, for each pixel currently drawn on the screen, what depth the object it represents is at. Before drawing anything else, you check that you're not overwriting it with something that is further away.  That's essentially what a rasterizer does. It's very fast. So that's why video games, even until today, are mostly rasterizers. Part of the trade-off is you lose some accuracy. There are some approximations in the ways that light is computed. Depending on how you do it, maybe light is computed once per triangle. Or it's computed once per vertex of the triangle. Or it might be computed once per pixel of the triangle. In the distant past, every object in 3D games looked faceted and flat because they were computing one lighting value for the whole triangle. Then they started looking curvy and shaded because they were computing one light value per vertex. Then as computer power increased, they started looking shiny in spots because now you could compute lighting on a per-pixel basis. But the most basic ones just output triangles really fast.  There's a lot of interpolation involved, as in the mathematical operation of calculating intermediate values between two values. That also made them very good candidates for GPUs, which are basically enormous matrix multiplication and interpolation machines, which is why they're very good for AI as well. But you can do a rasterizer that's just triangles that computes lighting.  When you start doing shadows, things start getting tricky. You need to start doing hacky things. Shadow maps, if you want soft edges, which is: how does the scene look from the point of view of the light? And you do a bunch of things to say, okay, should this pixel be in shadow or not. Or you can do stencil shadows, which are more geometrical, but you get hard shadows like Doom did. But you can do shadows with some work.  Then if, on top of that, you want to do reflections, it all, again, starts getting hacky. You need to compute an environment map from the point of view of the object. How does the object see the world? What does the world look like from the point of view of the object? Use that as part of the texture of the object to pretend it's reflecting light. When you start getting to caustics, refraction, all these things, it just becomes a mess. At best, you have approximations. You can have very good-looking approximations, which a lot of video games do. They look gorgeous, but they're extremely hacky from a conceptual point of view.
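A minimal sketch of the depth-buffer test Gabriel describes - the names WIDTH, HEIGHT, and put_pixel are made up for the example, and this is Python rather than anything from the book:

```python
# Sketch of a z-buffer (depth buffer) for a software rasterizer.
# WIDTH, HEIGHT, and put_pixel are illustrative stand-ins, not names from the book.

WIDTH, HEIGHT = 320, 240
framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]

def put_pixel(x, y, z, color):
    # Only draw if nothing closer to the camera has already been drawn here.
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        framebuffer[y][x] = color

# The rasterizer calls put_pixel for every pixel of every triangle, in any
# order, and the depth test sorts out what ends up visible.
```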
On the other hand, on the other end of this whole thing, you have ray tracers, which is like, "Okay, why don't we just simulate the way light behaves in the real world?" Ray tracers, instead of starting from each object, start from each pixel of the image, of the output image. They go like, "Okay, what color should this pixel be?" The way that works is the equation of a ray of light that starts from the camera and goes through the pixel into the scene is then intersected with every object in the scene. That's a bunch of equations that need to be solved.  At some point, you get, okay, the ray intersects this object. The closest object this ray intersects is this object, at this point. At that point, you could say, "Okay, I'm going to just paint this pixel with the color of that object. Cool." Then you want to do lighting. Well, it's relatively straightforward because you have the intersection - you know which point you're computing the lighting for. You know where the lights are. You apply the lighting equation. You get a value. You can draw shaded objects.  Then you go, "I want to draw shadows." What is a shadow? A shadow is, okay, there's a light. There's a point in the scene. Is anything between them? If there's something between them, that point is in shadow with respect to that light. You already have a function that can compute intersections of rays with objects. You trace a ray from the point you're trying to determine the color of to each light. If the ray intersects something, you're like, "Okay, this is in shadow." So you do that a bunch of times, and you're like, "Okay, this light illuminates this object, or it doesn't."  Conceptually, it's very simple, and you also already have the math. You already have the functions. Then you go, "I want a reflection." Okay, you know the direction of the viewer, the ray of light that comes from the camera to the object. If you can compute the normal at that point, you can also compute in which direction the ray is reflected, or which direction the reflected light comes from. Again, you have a method that you give it a ray, and it tells you, "Well, the color of the light coming from this direction is this one." So you just call that recursively. Bam, suddenly you have reflections.  Conceptually, it's relatively simple. It's a lot more - it's a lot purer. Same with mirrors, refraction, caustics. You can just go, "Oh, this object is translucent, and I know what material it's made of. I'm going to alter the direction of the ray." Suddenly, you can simulate a physical piece of glass that acts as a magnifying glass, just because you're simulating how light works. So it produces images that are gorgeous. It feels a lot less hacky than a rasterizer because everything stays geometrical. It stays physical.  The problem is it's extremely slow because of the enormous amount of math involved in all these intersection computations and so on. Whereas the rasterizer, it's like, "Okay, pixel, pixel, pixel." Do linear interpolation using integers, and we're done, right? Both of these approaches can lead to very good-looking images, but they follow very different approaches, and the trade-offs are extremely different.  Until recently, video games were all rasterizers. Pixar was all ray tracers because if your movie requires 24 hours to render a frame, that's fine because you do it only once. Then you play it at 24 frames per second. Everything is fine. For anything that is interactive, you need an interactive algorithm. For anything that doesn't need to be interactive, you can take your time.
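To make that loop concrete, here is a hedged Python sketch, not the book's code: spheres and the light are plain dictionaries and tuples, and only diffuse shading, shadow rays, and one level of reflection are shown.

```python
# Sketch of a toy ray tracer: intersect, shade, cast a shadow ray, recurse
# for reflections. Vectors and colors are (x, y, z) tuples; spheres are
# dictionaries with "center", "radius", "color", and optional "reflective".
import math

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def mul(a, k): return (a[0]*k, a[1]*k, a[2]*k)

def hit_sphere(origin, direction, sphere):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t.
    oc = sub(origin, sphere["center"])
    a = dot(direction, direction)
    b = 2 * dot(oc, direction)
    c = dot(oc, oc) - sphere["radius"] ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 1e-4 else None   # small epsilon avoids self-intersection

def trace(origin, direction, spheres, light_pos, depth=2):
    # Find the closest sphere this ray hits.
    hit = min(((t, s) for s in spheres
               if (t := hit_sphere(origin, direction, s)) is not None),
              default=None, key=lambda ts: ts[0])
    if hit is None:
        return (0, 0, 0)  # background color
    t, sphere = hit
    point = add(origin, mul(direction, t))
    normal = mul(sub(point, sphere["center"]), 1.0 / sphere["radius"])

    # Shadow ray: is anything between this point and the light?
    light_vec = sub(light_pos, point)
    light_dist = math.sqrt(dot(light_vec, light_vec))
    to_light = mul(light_vec, 1.0 / light_dist)
    in_shadow = any((t2 := hit_sphere(point, to_light, s)) is not None
                    and t2 < light_dist for s in spheres)
    diffuse = 0.0 if in_shadow else max(dot(normal, to_light), 0.0)
    color = mul(sphere["color"], 0.1 + 0.9 * diffuse)  # ambient + diffuse

    # Reflection: bounce the ray off the surface and recurse.
    k = sphere.get("reflective", 0.0)
    if depth > 0 and k > 0:
        refl_dir = sub(direction, mul(normal, 2 * dot(direction, normal)))
        reflected = trace(point, refl_dir, spheres, light_pos, depth - 1)
        color = add(mul(color, 1 - k), mul(reflected, k))
    return color
```

Calling trace once per pixel, with a ray from the camera through that pixel, is the per-pixel loop Gabriel contrasts with the rasterizer's per-object loop.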
[00:54:50] JN: Cool. That makes sense. You said until recently, so obviously, now we see games talk about ray tracing. I'm guessing there's been a shift there that's enabled fast ray tracing.  [00:55:02] GG: Yes. Some of the ray tracing stuff has been implemented in hardware. Hardware has started adding some support to accelerate this math in the hardware. Yes, we're seeing a lot of hybrid approaches. So video games, I don't think there's any video game that is fully ray-traced, but what you usually have is a bunch of rasterizing passes that add a lot of detail. Then you have a ray tracing pass that adds more detailed reflections or caustics or some specialized things. It's still not fast enough that you can apply it to everything in the scene. But I think more and more games are adding touches of ray tracing to make the images pop, if that makes sense.  [00:55:41] JN: Yes. It does the things that rasterizing is hacky for or - [00:55:45] GG: Exactly, yes.  [00:55:46] JN: Awesome. Yes, that was a great explanation. Thank you very much. I guess, so one thing I want to end on as we get close to time here is you started at the start of this chat talking about how you learned to program on the ZX Spectrum. Recently, you've gone back to the ZX Spectrum with your book. I believe that you wrote a ray tracer for the ZX Spectrum. I think I saw one article say that it hadn't been done before. I don't know how accurate that was.  [00:56:13] GG: Yes. What happened was I was browsing Hacker News, as you do. Someone published a ray tracer for some retro computer of the distant past. For a long time, I'd been thinking, "Man, I need to make a ray tracer for the Spectrum," because the Spectrum and ray tracers are things that I love. But I had never kind of found the inspiration. When I found this article, I said - I thought, "You know what? I'm going to do it." I basically just took my - I didn't take my code, but the same concepts that are explored in the chapters of my book. I just implemented a ray tracer for the Spectrum.  A ray tracer, as I said, is conceptually relatively simple. The math is relatively simple. I think the BASIC program that did this is 100 lines maybe. So I wasn't entirely surprised when it worked. I got something working very quickly. Now, most of the fun in this project has been working around the graphical limitations of the Spectrum, which were many and very quirky, rather than the complexity of the ray tracing. The ray tracing is the easy part.  Being able to draw that on a Spectrum is a lot trickier, so I had to - basically, because on the Spectrum, the resolution is super low, but that's not that much of a problem. The main problem is that, because of limitations of the time, each eight-pixel by eight-pixel block of the screen can only display two colors. So choose wisely. A lot of the difficulty in this was - and also, you only have seven colors in two shades, so dark green, light green. How do you do shading, which involves taking a color and multiplying by a value between zero and one? How will you do that when you have black, green, and then bright green, and you can't display all three on the same block?  I don't think you can even display bright green and light green on the same block. I think the brightness is per block. So getting the ray tracer to render one pixel per block was relatively easy. Increasing the resolution was trickier, first, because this thing is incredibly slow. To give you an idea, it's based on a Z80 processor that can't multiply. Let me repeat that. The processor can't multiply. You have to - if you want to multiply two numbers, you need to write code to multiply two numbers. Think about that. It's super slow.  Also, I was writing in BASIC, which is even slower. But I had these limitations. 
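For anyone wondering what "the processor can't multiply" means in practice, here is a rough Python rendering of the classic shift-and-add routine you would end up writing by hand; the real thing would be Z80 assembly working on registers, so this is just the idea:

```python
def multiply_8bit(a, b):
    # Shift-and-add multiplication: the kind of routine a Z80 program has to
    # spell out by hand, because the CPU has no multiply instruction.
    result = 0
    while b:
        if b & 1:        # if the lowest bit of b is set,
            result += a  # add the (shifted) a into the result
        a <<= 1          # shift a left: a, 2a, 4a, ...
        b >>= 1          # and consume one bit of b
    return result

assert multiply_8bit(13, 11) == 143
```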
How do you deal with these very limited color possibilities? I found some ways around it. Then I wanted to add shadows. I was like, "Okay, I'm going to have to do dithering, mixing the color and black in the right proportions, because there's no other way this is going to work." Yes, it was a very fun project to work on. I worked on it obsessively for a weekend.

There was a time, as I wrote in the article, when I actually stared at it in disbelief. I think it was when I implemented shading and it worked. I was looking at it from a distance, because it's very low resolution, and I was like, "Man, this looks good."

It's not the first time it's been done. After the fact, people did point out this previous one. The earlier one uses different trade-offs and is implemented slightly differently. I think it's from back in the day. I don't think it's a modern ray tracer. It was hackier in the sense that, for example, it renders a checkerboard floor, which is very, very classic for ray tracers, but it's kind of hard-coded, right? Whereas my ray tracer actually has an array of objects, so you can go and just change the spheres or stuff like that, and the scene would change. Whereas this other ray tracer can render just that scene.

Also, I think it made different trade-offs regarding how it handled color and so on. But I'm not territorial. The more, the merrier. We need more Spectrum ray tracers. I never claimed it had never been done before.

[01:00:18] JN: No, no. It wasn't your article. It was another one that said that about it. Yes.

[01:00:21] GG: The other thing is, because I implemented my own ray tracer, I could compare the images generated by the Spectrum with the same scene generated by the ray tracer from the book. They don't look that different, given all these limitations, right? It is the same scene. You can see the same ray tracer working on a different architecture.

[01:00:41] JN: Was this a hardware Spectrum or an emulator? This isn't your original Spectrum from your youth, is it?

[01:00:46] GG: No, no, no, no, no. I do have a Spectrum in the basement, and I have a modification to plug it into a recent TV. But, no, this was all emulator work. The final rendered frame, I think, after all the optimizations and everything else, takes 17 emulated hours to render, which is why I didn't use the real Spectrum. I think for completeness I should probably try to run it on my childhood Spectrum, just for fun, if it doesn't burn -

[01:01:11] JN: A sad way to kill your childhood Spectrum. Yes.

[01:01:13] GG: Yes, actually. Yes, it might just melt if I had it crunching math for 17 hours. The other thing that I might do at some point is rewrite it in assembly so that it goes faster, because this BASIC dialect is super slow. Honestly, one of the things stopping me is I don't want to write multiplication routines. I might find a library and just do that.

[01:01:33] JN: Obviously, we mentioned that you left Mystery Studio for Google, and you're back at Google again. But along the way, you also kind of dipped back into what at the time was game development with a company called Improbable, which is where we met.

[01:01:45] GG: Correct.

[01:01:47] JN: You mentioned being kind of not disillusioned but falling out of love with games. What is it that brought you back to the game industry and then equally back to Google?

[01:01:56] GG: As I said, I still don't know what I want to be when I grow up.
As I explained earlier, I left video games and came to work for Google. One of the factors was that running a company was fun but also relatively stressful, lots of responsibility, and so on. At that time, I welcomed the idea of just having a job: go do my job, then go home, and not think about my job.

In my game development days, game development was very much my life, and it was unhealthy for a bunch of reasons. I also wanted a bit of a change of pace. It's like, "Okay, I'm going to just work as an engineer." I did that for three, four years, but I was getting to a point where I was getting kind of the other impulse, which is like, "Hey, I would love to do something more exciting. I want more than just a job. I want something that really speaks to me." I was thinking, should I leave? Should I find work in finance, because I've never tried that? What to do next? Out of nowhere, I get an email from a recruiter saying, "Hey, my name is Sean. I'm a recruiter from Improbable. I just found your articles about multiplayer networking, and that has a lot to do with what we're working on now, what we're making at Improbable. Want to have a chat?" I was very skeptical at first but ended up joining, as we know, because that's where we met, as you say. So it wasn't just corporate versus games. It was also big corp versus small startup, because when I joined Improbable, I think I was employee number 30 or something like that. I was back into, "Man, this is very exciting. We're going to change the world. It's just a bunch of us trying to make it work." All that responsibility, the work-is-my-life kind of thing came back.

Yes, long story short, one of the things that I managed to do at Improbable was wear many hats, because that's the kind of thing you do in startups. When we met, I was head of marketing, believe it or not. I mean, you believe it because you saw it happen. But how did I end up being head of marketing? It was like, "Okay, we need to explain to people what Improbable does." It was when we were coming out of stealth. "We also need someone who understands game development. We also need someone who can write. There's only a bunch of us in the company, so you're now head of marketing." I guess that was a fun time in my life, very unexpected.

After almost four years of doing this, I was, again, like, "Man, I need to step back from my responsibilities and take it easier." So I quit. I didn't come back to Google immediately. I spent a year in Madrid, trying to become an actor, which is another one of my passions. I'm at Google, so you can conclude that that didn't work out. I'm still plugging away at it. I have an agent now, and I'm making very slow progress. But I also needed to get a job.

[01:04:47] JN: There are films out there with you in them. That's the - I think you've been an actor. I think that's happened.

[01:04:52] GG: There are some short films. There's a music video by Shaggy, which is -

[01:04:56] JN: That was what I was going to bring up, because that happened while you were at Improbable. Yes.

[01:04:59] GG: Exactly, yes. That was very funny. I've been an extra on series and films and things. But I still haven't made it as an actor, for any definition of making it. I'm still plugging away at it. I'm still slowly getting auditions and that kind of thing. But, yes, I quit Improbable. I spent a year in Madrid, trying to make it work. Also, I moved in with my girlfriend; we had been doing long distance, me in London, she in San Francisco.
So she said, "Let's -" Well, we said, okay, time to move in. Where? Why not Madrid? I spent that year in Madrid. Then I came back to Google, as I was saying, at the beginning of the pandemic, like March 2020, straight into lockdowns. Now, I've been doing this for four years, which, as you might have perceived, tends to be my cut-off for wanting to do something more exciting. Yes, I might be doing something completely different by the end of the year. Who knows?

[01:05:52] JN: Awesome. Well, I look forward to seeing what that is.

[01:05:55] GG: Me, too.

[01:05:57] JN: All right. Well, that brings us just about to the end of our time. Gabriel, thank you so much. I've definitely learned and refreshed a lot. Yes. Thank you for your time today.

[01:06:04] GG: Yes. It's been a pleasure. It's been a really fun chat, so thank you so much.

[END]