EPISODE 1652

[EPISODE]

[0:00:00] ANNOUNCER: The power of 3D graphics hardware and rendering technology is improving at an astonishing pace. To achieve high graphical fidelity, assets that compose 3D worlds must feature an ever-increasing level of detail. Andrew Price is the founder of Poliigon, which is an asset production studio and store. Andrew also runs the highly popular Blender Guru YouTube channel where he teaches viewers how to use Blender. Andrew joins the show to talk about how different virtual assets are made, building his company, the impact of AI on graphics production, whether graphics have achieved photorealism, and much more. Joe Nash is a developer, educator, and award-winning community builder, who has worked at companies including GitHub, Twilio, Unity, and PayPal. Joe got a start in software development by creating mods and running servers for Garry's Mod, and game development remains his favorite way to experience and explore new technologies and concepts.

[INTERVIEW]

[0:01:05] JN: Welcome to Software Engineering Daily. I'm your host for today's episode, Joe Nash. Today, I'm joined by Andrew Price. You may know Andrew from his popular YouTube channel, Blender Guru, but Andrew is also the CEO of Poliigon, an asset and texture library for CG artists. Welcome, Andrew, thank you so much for joining me today.

[0:01:22] AP: Thank you very much. Pleasure to be here.

[0:01:25] JN: So, I want to dive in straightaway. We mentioned Blender Guru. So, I personally know you, as we mentioned in the intro. I was one of many people I know who did the donut tutorial during lockdown. So, I want to first of all start by asking you, how did you get started with Blender? What was your journey into 3D and modeling and CG, et cetera?

[0:01:41] AP: Yes. So, it was back in high school. This was 2003. I just loved video games. So, I was playing Need for Speed, and specifically, the car that's on the turntable when you're in the car selection mode. I was looking at it and I'm like, "How is that made? And how can I make one?" Because I thought, it has to be possible to make something like that. And so I Googled – well, this was before Google, so it was msn.com – and searched for free 3D software. I think the first one I tried was a software called Anim8or – Animator with an eight in the name. It was very, very janky and horrible. Then, I found another one. I saw a picture and it was a red sports car. It was 3D. It was hosted on a website called blender.org. And so, that was how I discovered Blender. I thought at the time, if somebody else – because obviously this is free software, and so there's no schooling system. There's no university course in it. It's free. If somebody else could teach themselves to use the software enough to make that car, then I should also be able to do the same thing. Yes. It was just a process of looking on forums for articles, or like mini-tutorials people had written, because it was before video had really taken off. Just trying to learn the software and piece together everything. I would say, it took me four years, actually, before I made that car. Because I later learned a car is a very complex object that even takes professionals upwards of two to four weeks just to finish. Yes, it's very, very challenging. That was my story.

[0:03:19] JN: There's lots of things there that I want to follow up on. But the fact that you stuck with the original vision of doing the car over that whole time, rather than going off chasing other rabbits, is particularly notable.
That's awesome. I guess, to follow up on that: you got into it in high school, you spent that process learning it – when did you decide to start making Blender content? When did Blender Guru get born?

[0:03:40] AP: Yes. It was interesting. So, originally, I just wanted to be a freelancer. Actually, no. I didn't want to be free – I wanted to work at a studio. But no studio would hire me. So, I thought, well, I could, next rung down from that, be a freelancer, because nobody has to say, "Yes." Except you have to win a client, and then you're in. But then, also, I couldn't get any client work. And I myself was consuming a lot of tutorials. So, I thought, as a way to get your name out there, you could make tutorials, and then they would find your name and then they would give you a job. Now that I think about it, not the greatest strategy, because the people watching tutorials are 3D artists, not clients who don't know 3D. So anyway, actually I did get one job from it. So, I started with a video on modeling a car tire. That was my first video. Then I did a grass tutorial. That was my second one. That was very popular. I was posting it up on Vimeo because YouTube at the time had a 10-minute upload limit, so I couldn't go over. I was posting it all on Vimeo and sharing it on forums and things like that, and I made one on a smoke simulator. How to use the Blender smoke simulator, which was new at the time, and that got me my first and only, I will say, freelance gig, doing smoke for a Bridgestone tire commercial.

[0:04:55] JN: Wow. That's a high-profile one.

[0:04:55] AP: It wasn't that big. Yes. I mean, I'm pretty sure it was local to Australia. It wasn't a big thing. But it was okay. But I hated it. I realized in the process that this is just not for me. I enjoy doing – the reason I like 3D is because you're creating something that you want to create, and doing something that somebody else tells you, suddenly it's less appealing. But my dad, for a Christmas gift, he gave me this course. It was a giant box of DVDs from one of these Internet marketing personalities, these gurus way back in the day, where it was basically the novel concept of selling goods online. That had not really been explained. It was really early in the creator economy kind of thing. So, it was just basically a course on, like, you can make something and then you can sell it on the Internet. I went to this course. It was like a live thing and there was a DVD thing. I was like, "Oh, wow." So, you could write an eBook. It's a PDF. It's literally easy to make, and then you could sell it to an audience that you build by making free stuff. I was like, "That's kind of interesting." Then coincidentally, at around the same time, I went to my first-ever Blender conference. So, I flew from Australia to Amsterdam, which is where the Blender headquarters is. At that conference, Ton Roosendaal, the founder of Blender, went up on stage. One of the things he mentioned in his keynote was that education was a big need. A lot of companies were asking them how they could learn Blender, and they just don't have any education. So, I was like, "Huh, well, maybe that could be my thing." I could just continue making tutorials and somehow make some money out of that. That was how I ended up doing tutorials forever.

[0:06:34] JN: Perfect. Perfect. Yes. I mean, that totally makes sense.
I mean, we could probably spend a whole episode just talking about how the life of a YouTube creator who's doing that has changed over the course of the commerce and Internet cycles. But I guess, to not spend too long on it: there's been a lot of talk recently with various YouTubers retiring, all this kind of stuff. Is Blender Guru still a part of your life that's giving you value, and do you think the Blender education pie is still working for you? If that makes sense.

[0:07:04] AP: Yes. I mean, it's definitely changing. I'm realizing that the YouTube audience doesn't really – like the style of, let's sit down and make this from start to finish. It doesn't really work. When I say work, I mean, it doesn't do anywhere near the success of what you'd call edutainment. If you're familiar with Veritasium or Mark Rober, that kind of style – where it's really like entertainment, but you're learning something. So, I experimented with this recently. I made a tutorial – I made a video, not a tutorial, on CGI versus practical effects. I did that scene from The Shining, where the elevator doors open, and then all the blood explodes out into the hallway. So, I talked about how that was a practical effect. Let's see if I could do it in Blender. Months of work, but mashed up really quickly into this condensed format. Then, I talked about the value of practical versus CGI, and why CGI is chosen so often, even though it looks worse than practical. The YouTube algorithm loves it. It's pretty much one of my most popular videos, four million views or something. Whereas, like, I sit down and go, let's do this – like, I made one on a puddle. Let's make this puddle. It's for people that are really into Blender and they want to know how to make a puddle, which I thought was very versatile. Practically, you can use it for a lot of cases. But it just doesn't pay, unfortunately. It's only like 300,000 views or something, a year or so later. So, I'm sort of realizing that maybe that style of video is kind of dying out, like the long format. People will say, "No." If you ask the audience, you ask on Twitter, "Do you want more of this?" They're like, "Yes, I want the long form." And I get it, but you're not clicking it. You're not watching it. You say you like it, but unfortunately, nobody is actually consuming it in the numbers required. So, I'm realizing maybe that style of format is really suited to paid courses. Because people who pay for courses actually want to pay. So, a lot of people, they start learning Blender from the donut tutorial. But there's actually, I think, a large portion of people that want to buy something, because they feel you get this structured curriculum, because that's what's expected from that medium. Whereas YouTube, they don't even think to look there. I'm sort of realizing that format is better for a paid course. And YouTube is more for your lead generation, which some people will be sad to hear – that that's the devolution of education – but that's kind of where it's at.

[0:09:34] JN: Where the algorithm is putting you. That totally makes sense. I can also kind of see how that reaction from the audience, you said that they're wanting it but they're not clicking it. I am that guy. I see the new video. I'm like, I will find time for that in the future, add it to my bookmarks. Bookmarks are write-only, never go back to them ever again. But I wanted it. I just haven't had the time for it. That's probably –

[0:09:54] AP: Exactly.
The other sad thing is that technical tutorials have a shelf life, because even a year later – I mean, actually, the shortest shelf-life video I ever had was about two weeks. I released that tutorial, and then Blender released an update two weeks later that made that tutorial redundant because they changed the interface in some way. It just happens constantly. So, it has to really work immediately. Because even a year later, people are just like, "It's dead content." So, sadly, it's how it is.

[0:10:23] JN: Whereas when you're on a course platform, it's easier to swap videos out than it is on YouTube – updating a video on YouTube is notoriously difficult, right?

[0:10:32] AP: Yes. Exactly. Once it's done, it's done. Yes.

[0:10:35] JN: Fascinating. Okay. So, I mean, obviously, we mentioned in the intro that you're also CEO of a company, Poliigon. Can you introduce Poliigon for folks who haven't heard of it?

[0:10:43] AP: So, Poliigon is a library of textures, models, and HDRIs that people use in games, VFX, and primarily, architectural visualization. That's kind of a big demographic. So, for example, if you're an artist, and you want to make an office, you need carpet textures, you might need wood, you need some models of desks and chairs. That's what a library like ours would provide. So, you'd come to our library, download the desk, download the chair, and put them in your scene. It's very common for productions to use paid models like this, because of the time it takes to model a chair, right? Getting all the little nuts and the bolts and all that kind of thing. It could take you a week, right? So, for one artist to sit there for a week, the boss might have spent two grand, or a grand, or something, just building one single chair. So, it just doesn't scale. That chair can be reused. So, you make it once and it could be reused in multiple projects. But maybe that person or that company only needed it for that one shot. That doesn't make sense. So, instead, artists sell on two-sided marketplaces. TurboSquid is the biggest one, where basically individual artists can make something, upload it, and put a price tag on it of $10 or something. Then, a company can buy it for $10, and instead of spending $1,000 for one of their own internal team to make a chair that's just generic and basic, they just buy it, and it saves them $990 in that case.

[0:12:11] JN: Yes. That makes total sense. I mean, obviously, over in the software world, we have that exact parallel: you start a new program, you import all the libraries you want, your package manager goes and grabs them, and you're off to the races. You don't have to rebuild it from scratch. So that, I think, everyone listening will totally understand that concept. One thing I particularly liked, on that notion of a two-sided marketplace – I was watching your videos recently, the Family Guy house reconstruction one. The process of that sofa was very cool. The fact that it ended up with, like, "Oh, we'll just throw this on Poliigon." Can you briefly talk about the challenge you had with making that asset? Why it was worth putting online?

[0:12:43] AP: Yes. So, the sofa – soft things, organic, billowy things, are very challenging. This is one of the most complex kinds of assets to create, because most stuff is hard surface. If you're making a desk, it's almost like the computer's designed for it, right? Because you start with a box, you shrink it, and then you extrude it across.
Like, "Look, you've got a perfectly flat desk," finished, right? But something like a couch, that's got volume to the cushions, and then as it expands in this shape, you'll get wrinkles along the edges. But it's also got a form of its own that it has to retain. It's very, very difficult. So, there are ways you can do it. You could just be a very good sculptor, or modeler, and sculpt in where the folds need to go. But that's a very niche skill to have. There's actually not many artists that can do it very well. Because you have to understand, if you add volume in this place, it's going to pull the fabric from this place, and it's going to create wrinkles and creases that look like this, going in this direction, and it's really difficult. It's almost like a litmus test we can use for sculptors when we're looking for them. Like, "Can you make a sofa?" It's like, most people can't. It's really, really hard. Yes, I mean, I tried it. I mean, Blender's even got some fabric tools, which do like a mini-simulation. So, if you pull it, it kind of creates creases where it goes. But it doesn't understand the rest of the form. It's very local to a thing. I talked about how I tried it, and it didn't really work. But I knew it was like a hero asset for the scene. It's the purple couch for the Griffin family. So, I thought, let's try a different method, photogrammetry, which is another method for modeling something. Instead, you have the real object from real life, and then you take a series of photos around it using a camera, in kind of a turntable motion – and I'm simplifying it more than I should, because it's very complicated to get just the right amount of photos. And then you use software like RealityCapture or Agisoft Metashape, and it aligns those photos together. Like, there's a little spot here on the couch, and then I can see that same spot in another angle. So, it must be the same part of the mesh, and it forms and generates a mesh from that. I did that. Yes. I just used that to duplicate and create the rest of the cushions, and then extracted some of the detail from it into a brush, which I could then sculpt with to create different variations. Very complicated process, and not at all what you would actually do in production, I think. Just completely infeasible. You would never use it. But I did it, because it makes sense for a YouTube video. You want to learn something new. You kind of have to go through those ropes.

[0:15:29] JN: Well, as you said, you probably wouldn't do it that way in production. But I guess it wouldn't be the couch – there would be something else that's the hero object in the scene, and in that case, it probably would get that bit of attention.

[0:15:40] AP: Definitely, yes, yes.

[0:15:40] JN: So, throughout the exploration of that, and also your company, you mentioned a couple of, I guess, Blender technical terms, CG technical terms, that I'd love to dive into with you a little bit. So, first of all, texture mapping. What is texture mapping? How does that work in Blender?

[0:15:57] AP: Yes, so texture mapping, it's an ugly, challenging one. It's applying a 2D texture, like an image. If you just Googled seamless wood texture or something, you get, like, wood, right? Then, you have to apply that to a three-dimensional shape. At first, it sounds easy. It's like, just slap it on there. But it's three-dimensional. So, it wraps around. And if you just left it to the computer, you would have stretching where it tries to stretch it around.
So, the most common way to map an object is to manually put seams into your 3D object. The easiest way to think of it is, if your object was made of paper, where would you cut with the scissors in order to lay it out flat onto a table, right? So, if you had a car that was made of paper, and it was sitting there on your desk, where would you put those little cuts, so that you could lay the paper out flat? Those cuts are called seams. Once you've got all the seams in, you can then UV unwrap it. That's the first and most common way of texturing, UV unwrapping. It just – yes, your 3D mesh is now basically laid out onto a 2D plane, and then you just line it up with the image, and wherever you want the texture to go, it'll go. So, that's the first method. There's also procedural textures, which is where you leave it up to the computer to create the actual texture itself. So, it's not an image texture. It's not actually 2D. It's a three-dimensional texture. And in that way, it doesn't actually require any UV unwrapping or any complexity, because the software is very good at just – it's a three-dimensional object, so it's just going to apply this procedural texture kind of all over. I mean, that's kind of the way I think of it. It's almost like the texture exists in a 3D volume, and the object is inside it, and wherever it hits, it just – anyway, that's procedural texturing.
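[Editor's note: for readers who script Blender, here is a minimal sketch of the seam-and-unwrap workflow described above, using Blender's Python API (bpy). The object name and the seam-picking rule are placeholder assumptions; in practice an artist marks seams by hand in Edit Mode.]

```python
# Minimal sketch of the seam + UV unwrap workflow via Blender's Python API (bpy).
# Assumes it runs inside Blender and that a mesh object named "Car" exists (placeholder).
import bpy
import bmesh

obj = bpy.data.objects["Car"]                      # hypothetical mesh object
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='EDIT')

# Mark seams -- the "scissor cuts" you'd make if the model were made of paper.
# Here we crudely mark every sharp edge; a real artist places seams deliberately.
bm = bmesh.from_edit_mesh(obj.data)
for edge in bm.edges:
    if edge.calc_face_angle(0.0) > 1.0:            # angle between faces, ~57 degrees
        edge.seam = True
bmesh.update_edit_mesh(obj.data)

# Unwrap: Blender flattens the mesh into 2D UV islands, splitting along the seams.
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)

bpy.ops.object.mode_set(mode='OBJECT')
```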
Yes, and I would say, the most common way to create or put textures onto a mesh has changed in the last 10-odd years. It used to be, you would do that process of UV unwrapping. Then, you would bring that into Photoshop, and then you would use Photoshop's painting brushes to paint onto this 2D-looking mesh. You would then save the texture and open it up in your 3D software, and then see how it looks, and you go, "It doesn't look very good." Go back to Photoshop and tweak it over here, save it, go back to your software, open it, it doesn't look very good. You just repeat that process, until Substance Painter came onto the scene. It was made by a company called Allegorithmic. They've since been bought out by Adobe. But Substance Painter, essentially, just put those side by side in the same software application. So, now you're actually not painting onto this 2D, UV unwrapped, very complicated thing. You're just literally painting onto a mesh. You've got this window with just your object there, and you can take a brush and you can paint colors directly onto the mesh, and it just automatically puts that onto the correct 2D texture. So, that has become industry standard. I don't think there is a game studio or a VFX studio that is not using Substance Painter in some way. It really just completely changed texture painting.

[0:19:01] JN: Cool. Is there not an equivalent feature in Blender? So, in Blender, you'd still do it the old way or use Substance Painter and bring it into –

[0:19:07] AP: It does. Blender does have that. So, it's got texture painting tools. It's had them for, I mean, probably since I started using it, maybe 15 years? The problem is, what Substance Painter does differently is that it's not just color. It's also adding detail to the other texture channels. Those are other components, or texture channels. So, I've just been talking about color texture so far, right? I'm talking about wood. It's just orange and black and whatever, right? That's the wood. But if you look at real wood in real life, you bring your eye up close to it, you'll see that it's a lot more complex than that. It's got little bumps in it, right? It's also got smudges, right? Some parts look more reflective than others. So, the software separates that detail into what you call another texture channel. The color information is just put in base color. That's the name of that texture channel. The roughness just goes in the roughness texture channel, and that's like the smudginess, the glossiness, the reflection information. And then the bump information is put into your normals, which is a technical term for a similar thing. Anyway, so that is extra information, which, if you used the old Photoshop method, was even more complicated. You had to try to remember where the previous – like, the color had a dark line here, so maybe there's less of a smudge, and you had to paint it again. Very, very complicated. But now in Substance Painter, you can paint – like, you could paint rust, or like a rust smudge, and it would paint into the color channel to get that rusty, smudgy look, whilst also painting roughness into the roughness channel at the same time. Blender can't do that. It will only let you paint into the base color, and then paint separately into a roughness texture. But you can't do them all at the same time. But yes, Substance Painter is great for that.
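[Editor's note: to make the texture-channel idea concrete, here is a short bpy sketch that wires the three channels just discussed – base color, roughness, and a normal map – into a Principled BSDF material. The material name and file paths are placeholders.]

```python
# Sketch: a material using the three texture channels discussed above
# (base color, roughness, normal), built with Blender's Python API.
import bpy

mat = bpy.data.materials.new("WoodFloor")          # placeholder material name
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]                    # created automatically by use_nodes

def image_node(path, non_color=False):
    """Load an image texture node; roughness/normal maps must stay non-color data."""
    node = nodes.new("ShaderNodeTexImage")
    node.image = bpy.data.images.load(path)
    if non_color:
        node.image.colorspace_settings.name = "Non-Color"
    return node

# Base color: the plain "what color is it" texture.
color = image_node("//textures/wood_color.png")
links.new(color.outputs["Color"], bsdf.inputs["Base Color"])

# Roughness: the smudgy/glossy variation across the surface.
rough = image_node("//textures/wood_roughness.png", non_color=True)
links.new(rough.outputs["Color"], bsdf.inputs["Roughness"])

# Normal map: the fine bump detail, routed through a Normal Map node.
normal_tex = image_node("//textures/wood_normal.png", non_color=True)
normal_map = nodes.new("ShaderNodeNormalMap")
links.new(normal_tex.outputs["Color"], normal_map.inputs["Color"])
links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])
```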
[0:20:54] JN: Amazing. Okay, and at Poliigon, you are making and selling assets, including textures. What is the process for creating new textures? How do you decide, "Hey, that's an interesting-looking piece of wood in that piece of furniture there. I need to bring that into the marketplace"? How does that happen?

[0:21:10] AP: There's not an easy answer to it. Every object is different. For example, wood is very, very detailed. So, it's better suited to capturing with a camera. We typically do that with not just, like, taking a single photo of wood and calling it a day, because that also doesn't capture the detail that I mentioned – the roughness or the bump information, which is very important. It doesn't capture any of that. So, instead, you use a more complicated process of photogrammetry. Anyway, there's lots of different ways to do it. But it's complicated. It's time-consuming, but it creates very good detail. Very good, true-to-life replication of that wood. But you might also find it's very hard to find the wood you need. So, acquiring the asset in real life, like contacting a wood supplier and saying, "Can we have your assets so that we could scan them?" And then they find out, like, they're charging $1,000 a square foot for this type of wood or something. I'm like, "Ah." But we can't afford to just buy it. And also, we don't need it. Can we give it back to you at the end? "No." It's like, okay. You have to kind of come up with some deal, and then sometimes it's too complicated, and then you go, "Screw it. Let's do a different method where we don't have to acquire it." We instead do it procedurally, using another piece of software called Substance Designer. So, the other one is Substance Painter, painting onto the mesh. Substance Designer uses procedural texture creation. You don't have to have any photo or anything from the real world to start. You instead just start with, like, computer graphic-created noise, and then you go, okay, let's increase the contrast of that. Let's add another texture on top of it. Now, we've got like a [inaudible 0:22:47] crack kind of look. And you try to replicate real life through a time-consuming procedural workflow. The downside of that is that it often looks procedural. It doesn't look as realistic as real life. But it's great for things that you can't capture. So, one example, rammed earth. It's like a type of concrete, but it's like natural concrete. You see it in high-end homes or museums, maybe. Architectural people, they really want that texture. For a while, we were trying to find it, trying to find somewhere in real life, a wall, or a museum, or someplace where we could – and we just couldn't do it. But it's so complex that, like, it just doesn't make sense to make it procedurally. We have to be able to find it in real life. And then we just couldn't. We just gave up. We were like, "Well, it'd be better to have something, even if it's not exactly photorealistic, than have nothing at all." So, one of our artists was very, very good at Substance Designer, and was able to create an honestly pretty good, close replication of what rammed earth looks like, just using Substance Designer, and we made like 30 or 50 versions of it. Because that's the other thing that's good about the Substance procedural approach. It's very easy to multiply that into multiple types of assets. You can just quickly change the colors, or how many chips or little chunks of stone appear in it. You can just drag a slider, more chips, less chips, so you could make thousands of variations very, very quickly. Whereas if you scan it just from a subject, you're often kind of locked to the detail that's there. So, there's pros and there's cons, and we're constantly kind of flip-flopping and changing our workflow around how we will approach something. Sometimes it's really just try it and then fail, and then go, like, "We thought we could create wood. We can't. We have to find it. That's now the goal," something like that.

[0:24:43] JN: Yes. That's left me with two questions. Number one, what's the weirdest item you have acquired to put through photogrammetry? And number two, is there a texture that has just completely eluded you, and you can't do it in either method?

[0:24:55] AP: Okay. The second one, yes, wood. Wood is honestly really, really challenging. It's so hard because – I mean, it should be possible, like wood planks. For one, there's not really a wood company. They're all kind of like contractors that just work for a home supplier, and then they get wood from a source, and you want to be able to work with them. But they're so busy. They don't really care that you're going to – so, what worked – and which is what we're going to do for wood – is we found a marble supplier in the United States, Best Cheer Stone. We said, "We'll pay you" – I think we said like 100 bucks a slab or something like that – "if you just let us come to your factory. Our scanners will go there, we'll scan each of your slabs, we'll pay 100 bucks a slab, and then even more so, we'll give you the photos." Because that's something that they struggle with: they know they need these correct photos for their pamphlets and stuff. But on their own, they take them with their iPhone, and it's got the wrong lighting. It's all blurry. So, we're like, "We'll give you the best photos you've ever seen and we'll pay you for the process." They said, "Yes." So, we need to find one of those for wood, but it's just kind of eluded us up until now.
Then, as for the weirdest thing that we've scanned – I mean, we try to keep it fairly generic, because we actually shoot down weird ideas from the – because they'll be like, "There's this weird type of fabric that is only used by this French designer in this thing, and it's just weird and obscure." I'm like, "Yes, but who's going to use that? It's got like two uses. One of them is the runway, like the fashion runway. Who is actually going to use something that looks like that? So, let's not do it. Let's find something that's a little more common."

[0:26:37] JN: If you're listening to this, and you have a warehouse full of wood that needs photographing, you now know where to go.

[0:26:41] AP: Please contact us. Yes.

[0:26:43] JN: The last one for this little question-time bit. One of the weirdest things that someone who's not in the 3D space encounters when coming to somewhere like Poliigon is these enigmatic spheres of different materials and lighting, and that kind of thing. So, what is an HDRI? And what is it used for?

[0:27:02] AP: Okay. Yes. So, it's one of, I guess you'd call it, the three categories of assets. The first is textures, which we talked about. Models, you obviously know. And HDRIs. HDRIs, typically, the way we refer to them, are a 360-degree capture of a space. So, typically, it's like an outdoor one: you've got the horizon, you've got the sun, you've got the sky, and then you've got like a field or something below it. That's kind of a common HDRI. It's called an HDRI because it's not just a single photograph that you would expect to see, that's eight bits and a JPEG or something like that. It's saved as an EXR, and it's saved with multiple ranges of exposure. So, when you photograph it, you're not just doing a single capture, but you're photographing it at different stops, different exposure stops. One where everything is dark, but you can just see the little circle of the sun. Really low exposure. Then again where everything is blown out, where it's the opposite, and you can only just start to see some of the shadows in the grass or something like that. Then you do that across the whole scene. So, you take, I don't know, maybe like 10, 15 photographs or something. The camera can actually just do it automatically. Then, you use software to compile them all together to capture that range. So, it looks like a single image. But you could drag a slider to go to zero exposure, so you can hardly see anything, and just drag it up like that. Now, why is that useful for 3D artists? That information, when you put it into Blender, or Max, or any other package, it can read the exposure ranges and create basically exact lighting and reflection information from the scene. So, if you then take an object, a chair that you've modeled or something like that, and you use that HDRI that was captured on a soccer field or something like that, it will look exactly like the lighting from real life, from the real place. VFX, they love this, because typically, they've got a practical scene with actors walking around and things like that. But then, they need to have a robot that stands right there, that's CG. If you didn't have this information, you would have to manually try to look at the set that was there and go, "I think there was a light that was about here." So, you would add a light in Blender for that point. And then another one over here, and then you go, "Okay, well, now the lighting is okay, but it's not getting the reflection of the ground." So, you'll maybe add a ground and then try to put a texture on it that kind of matches what was there. Very complicated. Instead, you now just have somebody who stands there where the robot is going to be with a reflective chrome ball. It's sometimes called the ball guy on set. And between takes, they go, "Ball guy." He just goes out and stands there, and they go, [inaudible 0:29:55], and then that's it. Now, they've got the exact light for where that robot will stand. So, that's an HDRI.
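[Editor's note: on the artist's side, plugging one of these HDRIs into Blender is only a few lines of Python – the .exr drives both the lighting and the reflections of the scene. The file path and strength value are placeholders.]

```python
# Sketch: using an HDRI (.exr) as the world environment in Blender via Python,
# which gives the scene the captured lighting and reflections described above.
# Assumes the scene already has a World; the file path is a placeholder.
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links

# Environment Texture node pointed at the multi-exposure .exr capture.
env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//hdri/soccer_field_4k.exr")

# Feed it into the Background shader, which drives lighting and reflections.
background = nodes["Background"]                    # created by use_nodes
links.new(env.outputs["Color"], background.inputs["Color"])
background.inputs["Strength"].default_value = 1.0   # overall exposure of the HDRI
```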
[0:30:03] JN: Fascinating. Okay. Yes, I've seen that ball on, like, behind-the-scenes stuff. That's awesome. Amazing. Cool. So, I guess, moving on to talk about Poliigon more generally. We've spoken about all these different types of assets. Obviously, one of the most, I guess, basic types of asset you have is just 3D models of things, right? How many are kicking about in there right now? How many have you made at Poliigon?

[0:30:29] AP: Yes. I mean, we recently unpublished a bunch, because they no longer matched our standards. Because we've been around for about eight years. So, we got rid of some old stuff. But we've got about, I think, 800, 900 models, and then about 3,000 textures. Then, I think, I don't know, 200 HDRIs. Yes.

[0:30:47] JN: So, that's – you ran through the process of creating models earlier. A lot of work has been put into those. How do you handle that labor, that intensive labor? How's the team structured around building those? What's the process of putting those together?

[0:30:58] AP: Yes. So, it's a remote team. There's about 30 of us, based all over the world. We've got about five or six in the United States, bits and pieces across Europe and everywhere else. Yes, just basically like contractors: we'll pay you this much to create this type of asset. Then, it just gets created. The hard part is getting it consistent. If it's something that is made from – we call it a hand-modeled asset, rather than using photogrammetry to create a realistic model of, let's say, a muffin, which is just photo-scanned. If somebody has to model that muffin from a shape, and then put some icing, and texture, and all that kind of thing, well, it could be done any number of ways. One of them could be realistic, one of them could have issues with the mesh, one of them could be too stylized. So, you have to art direct it, you have to kind of curate it. Somebody has to oversee it and go, "You're missing this detail. This needs to be fixed." It's actually one of the costs of handmaking something. Same as procedurally making that texture using Substance Designer. You just have to assume that there is always going to be this revision process to get it to the correct level. Whereas photogrammetry, one of the appeals of it is that what's real is real, right? You've only captured what's really there. So, you can't really critique it, besides going, like, technically, it's not correct, or there's some issue. But besides that, all the detail is correct, because it's real life. Anyway, yes. I don't know if that answers the question.

[0:32:34] JN: Yes. You mentioned there that Poliigon has been going for, was it, eight years? Really interesting. What kind of challenges have you run into along the way? Is there a particularly big challenge that you've had to overcome in creating Poliigon?

[0:32:45] AP: Yes.
I think one thing we decided to do with models was to not just give people whatever format. So, typically, a lot of the TurboSquid and 3dsky and all these different two-sided marketplaces, they'll give it to you in whatever format the artist who made it wanted to give it to you in. So, sometimes you get a Blend file. Sometimes you get a Max file. Sometimes you get a Maya file or an Unreal file, or something like that. So, you typically find an object that you like, and you go, "Oh, that couch is going to be perfect for my scene. It's a Max file. All right." That means, as the artist, you have to manually download it, import that Max file, convert it into a file format that Blender can read. Now, it's just a gray model. It doesn't have any textures on it, because the shading, the material, is something that it always, always fails at. So then, you have to manually drag in the textures that they've provided for you and hope that it kind of works. What we did instead was we said, "Let's do that hard work for you. Let's give you the Blender, the Maya, the Max, the Cinema 4D file." Then, the artist comes to Poliigon, and they go, "I'm using Cinema 4D," download, and then they just get it. Doing that, though, is very challenging. Because every software is different. They all do things differently. Also, by the way, it's not just software, it's renderers. Within Cinema 4D, there's about five different popular renderers. There's Arnold, there's Octane, there's Redshift, there's Corona, et cetera. So, you have to also make it available for those, which all have their own challenges. That was the biggest challenge. We've recently, I guess, in the last year or so, worked with somebody from the game industry, who was very good at automation and understanding what's required to make that process really work. So, we're now starting to scale the team. Whereas before, things were just way too costly to create, because the team was a lot greener. But now that we've got that management, it's going along smoothly, which is great.

[0:34:41] JN: Very cool. I guess, the flip side of that question, is there anything coming up in the future in terms of new rendering technology, or things that might unlock new types of assets for Poliigon, that you're looking forward to?

[0:34:53] AP: For rendering? It's an interesting one. I mean, the biggest one is – I mean, AI definitely is this new thing where nobody really knows what the use cases for it are going to be yet. But there are some pretty novel uses for it. So, one is – I'm sure you've probably covered it on your show at some point already – 3D Gaussian Splatting, 3DGS.

[0:35:20] JN: Oh, I don't think we got –

[0:35:21] AP: No?

[0:35:22] JN: No. What is that?

[0:35:23] AP: Okay. So, if you're familiar with the company Luma – they're based in San Francisco. They just raised a bunch. But they're kind of the, I guess, leaders of that space at the moment. Anyways, it's like photogrammetry, but it's not creating a mesh. It's creating an – okay, it's different to – do you know what a NeRF is? Have you heard of that term, NeRF? So, that was the first one. All right, yes, Nerf gun, same spelling. But it stands for Neural Radiance Fields. Basically, the way it works is, so with photogrammetry, you take all these photos of an object or something. Traditional photogrammetry will then align those and then will create a mesh from it, and then try to put some texture on, and you get a mesh that you can then just import into 3D software. It's a mesh.
It knows what it is, because that's exactly how it works. NeRFs are a bit different. It's instead kind of creating like a point cloud. That's the way I understand it. I've been told that's not correct as well. But it's basically a point cloud. It's like little points in space, and it kind of matches – it's where it thinks the object starts. If you imagine, like, this cup, right? My finger goes through it, and then it stops, right? Because it's got this cup. It tries to put in, I think, like a prediction value on where it thinks that thing starts. So, it goes like 0, 0, 0, 0, 1, right? I think it starts here. Then, it puts a point there, and says that's that thing. Then the color of those – because if it was just points, they'd all be gray, and you couldn't see it. The way it works is, depending on where you are looking at it, it will use the color information that was captured on that point. Then, it takes another viewpoint and says, "This is the color information." So, that point that was here was like a red color. Now we've moved a little bit, and well, now it's a little bit orange, right? That point has now changed. What makes it interesting is that it can predict different angles that weren't captured. This is my understanding of it. I've tried so much to understand this. But almost nobody is trying to put it in a form that people can grasp. They will just use terminology – anyway. But basically, I think it's like a self-learning model. So, let's say you took 200 photos around a car. It will read into it and try to guess like, okay, I'm going to use only 15 of these photos. I'm going to try to guess what the car looks like from this angle. And it will try to guess what that information would look like based on those 15 angles. It will then compare what it thought it was going to look like with what it actually looks like, because it has a photo from that angle. It will then correct its model. It will go, "I was off in this kind of case." It will do that all the way around it, and it will then be able to make good predictions of angles that it doesn't have as well. Like, maybe it's slightly off in this way. So, you basically end up with something that looks like a 3D model that you can drag around, and it looks very realistic, because it's capturing the real light and information that was captured on the scene, and it's putting it into this point cloud, what you call the NeRF.
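[Editor's note: the usual way those per-point "is something here" and "what color does it look from this view" predictions become an image is volume rendering – march along each camera ray and blend the predicted colors, weighted by how likely the ray is to have stopped at each sample. A heavily simplified, standalone Python sketch of just that compositing step (not tied to any particular NeRF library; all numbers are placeholders):]

```python
# Heavily simplified sketch of NeRF-style volume rendering along a single camera ray.
# Each sample has a density (how "solid" the model thinks that point is) and a
# predicted RGB color for the current viewing direction. All values are placeholders.
import math

def composite_ray(densities, colors, step_size):
    """Blend samples front-to-back into one pixel color."""
    transmittance = 1.0            # how much light still gets through so far
    pixel = [0.0, 0.0, 0.0]
    for sigma, rgb in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * step_size)   # chance the ray stops in this slice
        weight = transmittance * alpha
        pixel = [p + weight * c for p, c in zip(pixel, rgb)]
        transmittance *= 1.0 - alpha                  # light left over for later samples
    return pixel

# Empty space, then a dense reddish surface: the red samples dominate the pixel.
print(composite_ray(
    densities=[0.0, 0.1, 8.0, 8.0],
    colors=[(0, 0, 0), (0.2, 0.2, 0.2), (0.9, 0.2, 0.1), (0.9, 0.2, 0.1)],
    step_size=0.5,
))
```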
Then, August, September of last year, somebody released a paper, which was like, "Okay, y'all know NeRFs, y'all know how cool they are, great." 3DGS, 3D Gaussian Splatting – instead of taking those little points, which were apparently very hard to render at a real-time frame rate on any normal computer, it was too computationally expensive, it instead puts Gaussians, which is kind of like a gradient, that's the way I think of it, in that space, and it can stretch, or shrink, or whatever, those little points. It looks the same as the NeRF, but it's now real-time. So, you can now get realistic-looking captures of real life that you can just view in a browser, and it looks incredible. The detail that's there, because it's real, like from the scene, is better than any video game, and you can look at it on your phone. It's running on your phone, right? I mean, if you just go to, like, Luma, Luma Labs – I don't know if they're called Luma Labs or Luma. Anyway, Luma is what their program is called. But if you go to their site, you can see captures that other people have made. Yes, it'd be like a forest, or a statue, or something from real life, and you can just drag it around, and it's like a 360-degree model or a capture of a space, and you can run it on a phone. You can run it on anything. That is a very novel rendering technology. What's interesting is Unreal Engine – I don't know how they've done it, but they've made it so that you can import that information into a game or a 3D scene and use that inside of the thing. Now, it doesn't look very good. It's trying to guess lighting information, because, much like photogrammetry, what you got, like what you captured, was using the lights that were there. If it was a daytime scene, you've got hard light, right? If you've got a statue of a guy standing there like that, well, now underneath his arm, it's going to be very dark. You're not going to have any information there at all. If you then put that into a game that's in an overcast scene, or even a nighttime scene, well, now you've got a mess. It just looks horrible. It just looks fake. Because your eye will go, "Why does it look so bright when everything is dark?" or whatever. So, I think it's trying to guess different lighting setups based on the information that's there as well, which is also very challenging. Anyway, very, very new, very unique. So, there's that. I would also say another one, but it applies more to the artists that are authoring content, is something called SDF, Signed Distance Fields. Normally, like in Blender, if you follow my tutorials, you make a doughnut. You start with a torus, and you have this mesh that's made of points, and then the points – you can take four of them, and that's a face. That's a mesh, right? But a lot of – it gets really challenging. If you want to make, like, a fire hydrant, it's a cylinder, but then it has another cylinder coming out of it. Well, how do you join those points to make it so that the cylinder coming out of it actually works? It's a lot easier to understand it as: it's a cylinder, and then another cylinder. They should be able to just join, but they don't. It just doesn't work like that, because it's a model that needs to have the points aligned to other points. It's just really an annoying process. Then, apply that to a handgun or a weapon in a game, and you've got this thing with gun rails and little bits and soft bits and hand grips where you hold it, and it just becomes almost like a Sudoku puzzle of an extreme level, where you have to imagine, how many points do I need here to align to this? Then, there needs to be less points over here, so I have to – and it's this juggling game. Well, SDF is much more about how you would imagine it. I'll start with a cubish object for the handle, and now I need a roundish object for the hand grip. It will just automatically merge them. Almost like it's made of clay. You're just kind of working with clay. So, the software that actually uses that – I think there's two. Plasticity is a very new one that just came out. And Substance Modeler, Adobe Substance Modeler. Those are two – there's going to be another one. I think, yes, those are the two that come to mind that use this approach. Other packages have been very slow to pick up on this or adapt to it. But those two are kind of leading the front, I'd say.
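[Editor's note: to give a feel for what a signed distance field is, here is a tiny standalone Python sketch, not tied to Plasticity or Substance Modeler: each shape is just a function returning the distance from a point to its surface (negative means inside), and a smooth minimum blends a box and a sphere together – the clay-like merging described above. The shapes and sizes are made up.]

```python
# Tiny standalone sketch of signed distance fields (SDFs): each shape is a function
# returning the distance from a point to its surface (negative = inside). Combining
# shapes is just math on those distances, which is why SDF modelers feel like clay.
import math

def sphere_sdf(p, center, radius):
    return math.dist(p, center) - radius

def box_sdf(p, half_size):
    # Distance to an axis-aligned box centered at the origin.
    q = [abs(c) - h for c, h in zip(p, half_size)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside

def smooth_union(d1, d2, k=0.3):
    # Polynomial smooth minimum: blends the two shapes instead of leaving a hard seam.
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

# A "handle" box with a roundish "grip" sphere smoothly merged onto it.
def model_sdf(p):
    handle = box_sdf(p, (0.2, 0.5, 0.1))
    grip = sphere_sdf(p, (0.0, -0.5, 0.0), 0.25)
    return smooth_union(handle, grip)

# Sample a point: a negative value means we're inside the merged shape.
print(model_sdf((0.0, -0.45, 0.0)))
```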
[0:43:07] JN: Exciting. Thank you for that.

[0:43:10] AP: There was a lot of technical mumbo jumbo there. But, yes.

[0:43:14] JN: Just to briefly close that loop. Two more questions for you. One of them – so, I had this moment when I was looking through Poliigon, where I was just looking at all these models and looking at all these textures, and a lot of them were really realistic. Then, I got to the cinnamon buns. They gave me a bit of a crisis, honestly. I was looking at these cinnamon buns. I was like, "Those are real cinnamon buns." It's just flat-out real. What I want to ask you is, are we at the point where we're just straight-up photorealistic? We can just do it? It's a solved problem? If not, what's left?

[0:43:44] AP: Yes. That's a very good question. It depends on the topic. So, if you think back – can I ask how old you are?

[0:43:53] JN: I'm 31. I tried to forget that for a moment. Yes, I'm 31.

[0:43:56] AP: Okay. Yes. You grew up with the old-school, like, PlayStation 2 games, that kind of thing. If you remember, some of those games, like the Grand Prix ones, some of those things actually looked kind of realistic, right? Because you're far enough away from it, and it's a very kind of – it's an easy light setup. There's not a lot of challenges. It's just direct, hard sunlight. There's not, like, complexity of shadows. So, it's very easy to do that. If you look at, like, FIFA games today, the same thing, they look very realistic. NBA, these sports games. Certain things are very easy to do well. But there are certain areas that have always remained a challenge. One of them, humans. Making realistic humans that can play in a movie alongside Robert De Niro, where you can go, "That's a human being," not, "That's a horrifying CG character that should not be in this movie." People know about the term the uncanny valley. It's a real thing, and it sucks. But yes, your eye is just really able to tell minute flaws in a human face. You will not be able to spot those same flaws in a dog or an animal, because you're not familiar enough with that animal to understand where it's wrong. You could have the best artist create a human face, and you could look at it and go, "Something's still not right." Whereas you could get a junior first-year university student doing a 3D course, and they could make a deer, which has all sorts of problems with it, and you go, like, "That's a great deer." You're able to pick the flaws in things you understand, and less able to in the things that you're less familiar with. But then there are also other things, which are computationally expensive. Fluid being one of them. I just watched Godzilla Minus One. Have you seen that one?

[0:45:53] JN: I haven't seen it. But I have seen people talking about the VFX of Godzilla Minus One.

[0:45:58] AP: Yes. The VFX carried that movie hard. Actually, it makes sense, because the director was a CG supervisor. That's his background, VFX. What makes the movie great is, it looks like a $150 million movie on a $10 million budget. How did they do that? I don't know if that makes for a very good viewing experience for the average person. But I found it entertaining. But the water in that was really good. They had these scenes where Godzilla is following this boat on the ocean, and it's the water and the spray. The reason it's still so challenging to do is that water is made up of tiny, tiny, tiny little particles that, when they crash into something and get tiny enough, become spray. So, this is spray. And if you had to simulate each of these tiny little points to create the appearance of spray, it's way too computationally expensive. So, the best we can do is create, like, fakery, right?
So that, as it simulates, these large objects separate into smaller objects, and then once they separate into a small enough object, it switches to become a different object type, which is a volume, to create that spray. Then, you've got, like, what is the size of the box in which this simulation is occurring? In that scene, it's like, I don't know, 200 by 200 meters or something. That's a lot of water that's interacting. It's really challenging. So, the best software for it is Houdini. It's the industry-standard software. But it's always going to be a fakery and an approximation, because the real world is so much more complex than what we can simulate. Another one, I always think of every time I go for a walk in nature, is nature. Nature is very complicated, right? Where the rock is, and then the tree is, and then the tree had to kind of go around the rock. And then you've got plants, but the plants are actually behaving on, like, survivability. If the tree is too big, or it's casting too much shadow, the only thing that could survive there would be like little mushrooms or other types of plants that can only live there. If there's light, you've got like a bush or something like that. Then, leaves have coated everything – the trees above, the leaves as they've fallen, and they've fallen into the crevices of the rock. So, you've got build-up there. Now, it's darker and there's, like, grunge. All that stuff, you just don't see in any game, any movie, because it would just be like a waste of time, because it would be so computationally expensive. No reasonable supervisor on a movie or in a game is going to be happy with an artist that tried to simulate all that. So instead, you just grab a rock from a site like Poliigon or Megascans, you grab a tree from this thing, and you put it there. Maybe you do a little bit of texturing where they kind of get near each other. But then you go, like, "Call it a day. That's it." So, there's still so much more we could do for realism. I don't think we're anywhere near a peak, because there's just infinite amounts more detail than we can reasonably compute right now.

[0:49:17] JN: That description of the forest reminded me of, I think it was a tweet I saw the other day, where it's like, "You should learn to paint because it makes you see things in light and shadows that you didn't see before." You just described a bunch of things. I'm sure most people walking through a forest haven't necessarily got that perception. Did this change the way you see the world? Do you now look at things and think, like, "Oh, there's some good textures," and that kind of stuff?

[0:49:39] AP: Yes. I mean, it's actually like a problem every time we go on holiday. I'm not really there. We'll go to the beach and I'm just staring at the rocks. My wife's like, "What are you thinking about?" I'm just like, "Thinking about how complex that water is." I'm never really there. I'm always noticing it.

[0:50:02] JN: Well, this has been awesome. I've definitely learned a whole bunch. And my final question to you, as we come towards the end: are there any other Blender creators, or uses of Blender, that you've got your eye on and would recommend folks check out?

[0:50:16] AP: Yes. Okay. I mean, the stuff that I find interesting – it's really complex, but it's geometry nodes, which is a feature of Blender. It's about two years old, three years old. But it's basically using programming, in a way. But in a visual way – it's using nodes, right? So, you use one node which defines the rotation of an object, and another node which creates a random value. And you connect them together, and now you've got a random rotation. So, people use that to create these really complex things. If you just go on to Twitter and use #B3D, you'll see everyone's Blender creations. That's the hashtag everyone uses. Then, if you type in nodes, or geometry nodes, you'll see some really wild stuff that people are creating – I mean, the most complex stuff, like a car going through a mud puddle, or rain effects, or like a missile simulation. So, the missile fires up, and then it changes and goes a different direction. It's like a homing thing. You can do really crazy things with it. That stuff is very, very fascinating, personally.
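[Editor's note: a geometry nodes setup is built in Blender's node editor rather than written as code, but the "random value drives a rotation" idea described above can be sketched in a few lines of Blender Python (bpy); acting on the selected objects is a placeholder assumption.]

```python
# The "random value -> rotation" idea from the geometry-nodes example, written as a
# plain bpy script: give every selected object a random heading around Z.
# (Geometry nodes do this procedurally and non-destructively; this is just the idea.)
import bpy
import math
import random

for obj in bpy.context.selected_objects:
    obj.rotation_euler[2] = random.uniform(0.0, 2.0 * math.pi)
```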
[0:51:30] JN: [Inaudible 0:51:30] is a guy who has been doing a lot of geometry nodes on Twitter. Definitely a great shout-out. Some really interesting things going on with geometry nodes.

[0:51:42] AP: Or CFI.

[0:51:43] JN: Yes, thank you so much. This has been super great. What's new on your channel? What should people be checking out?

[0:51:48] AP: Well, on the channel, not a lot. I'm actually working on a Blender course, a paid course, as I mentioned. So, that's something that I'm busy with. I'm interviewing artists, just user interviews, to find out what they struggle with in Blender, what they're trying to learn, so that I can try to create a course which would help complete beginners to 3D master the fundamentals and the essentials in about six weeks. It's a bit of a stretch. So, I'm working on that course. That's a lot of time there. Then, for Poliigon, there's a lot of initiatives in the works, but we're remastering the catalog at the moment to bring it up to new standards. But yes, I don't know. Go to my YouTube channel, Blender Guru, or poliigon.com.

[0:52:35] JN: Awesome. Cool. Thank you so much for joining us today.

[0:52:37] AP: No worries. Thanks for having me.

[END]