[00:00:01] JMC: Hi, Bryan. Welcome to Software Engineering Daily. [00:00:04] BC: Yeah, thanks for having me. Great to be here. [00:00:05] JMC: I think this is your – it is not your first appearance on the show. I'm fairly sure about that. Can you correct me if I'm wrong? [00:00:14] BC: I think that's right. I think I may have been on Software Engineering Daily maybe like – I mean, like years ago. I think like maybe more than five years ago. I think it's been a while. [00:00:23] JMC: Yes. Yes. You've mostly moved out of or away from Twitter. In your recent talk at Monktoberfest, hosted by the brilliant folks at RedMonk, you said that it all starts with a tweet. If you've moved out from or away from Twitter, does that mean there are no more talks coming up? You're not starting anything, right? [00:00:49] BC: Yeah, that's right. Yeah. Actually, I say that I've moved away from Twitter but it's not – [00:00:54] JMC: It's not true. [00:00:55] BC: Oh, God. Like a lot of people. I mean, right now – sadly, right now, I'm on Twitter, Bluesky and Mastodon. I don't even know – I don't even know what's going on. [00:01:10] JMC: The meme – that meme from The Sopranos: just when you think you're out, the gravity of Twitter pulls you back in. It's so true. [00:01:19] BC: You know what it is? It actually – and, hey, Bluesky, if you're listening, you need to give people some unlimited invites. I know you're going to have scalability problems. And I think like a lot of people have this issue, where I have got – ultimately, Twitter is not one community. It's a bunch of sub-communities. And I've got – we've got our – and it's not just software engineering as a community, right? You've got the Rust community. You've got kind of the FPGA open hardware community. You've got the – and a bunch of these technical communities have actually moved to – and I noticed that like when I tweet or toot something out, I guess in Mastodon's parlance, something that's technical, that gets a lot more traction on Mastodon than it does on – so a lot of technologists have rightfully said that they're fed up with the way the company's being run and so on and are on Mastodon or Bluesky. What has sucked me back in is I am – I'm an Oakland A's fan. We live here in the East Bay. And currently going through a – not that you would follow American baseball. But the Oakland A's are living the plot of a movie. A movie with a diabolical and stupid owner, John Fisher, who is – and so, he's trying to move the team. And it's this whole saga – [00:02:46] JMC: Oh, from the city. Away from the city. [00:02:48] BC: From the city. Moving to Las Vegas. It's a big, big mess. I mean, it's also like great that the fans are standing up and protesting. And that's been all – but it's been very – that community has been really important. Community has always been important to me. But that community, that Oakland A's community, is still very much on Twitter. [00:03:12] JMC: On Twitter. Yeah. Yeah. [00:03:14] BC: So I – like you really need – and like weather Twitter is still on there. I don't know if you follow weather Twitter at all. Weather Twitter is amazing. Weather Twitter – [00:03:26] JMC: Yeah. Going back to the – there is something about the tech/software community. And I guess it has to do with the politics of each one of the communities. And that's something that I don't want to delve into.
But that's my only explanation for why the software community, the tech community, has moved out from Twitter almost completely. I live in that space. But I'm not a technologist myself. I'm not a developer myself. And – [00:03:56] BC: I think that – and I think in this regard, I think that software engineers in particular are – the kind of malpractice at Twitter I think is particularly upsetting to software engineers, rightfully so. I think that the – yeah, I think that – and this I think probably is the future. I think that the future is probably going to be more fragmented. And I think this idea – we kind of talk about what's going to replace Twitter. And I think in the indefinite future, lots of different things are probably going to replace Twitter, even though that's kind of a pain in the butt. [00:04:34] JMC: But wait. You just said that most of the communities apart from that one remain on Twitter. Do you reckon? [00:04:40] BC: Well, what I found is that like – so, for example, this Oakland A's community is still very, very much on Twitter. But tech is on Mastodon. Other communities are on Bluesky. And I think that this is probably – and then a bunch have moved to other spots. And then you've got the whole Reddit nonsense that's happening at the same time. I think it's fair to say that social networking is in the midst of a lot of upheaval at the moment. So it's going to be very interesting how it settles. But, of course, you've got this – to go back to your essential question, where am I going to have my hot takes that become the foundation of future talks? And Spaces, Twitter Spaces, which are highly missed. Yeah. So we actually do that. I think Discord has broadly replaced Twitter Spaces for us. We definitely got off Twitter Spaces. Anyway, we will find outlets. I will find outlets for my hot takes. That is extremely important, obviously. We have to have a place to have our hot takes that become the inspiration for future talks. That's humanity's most dire problem, clearly. [00:05:50] JMC: Exactly. Exactly. Referencing again the talk that you gave at Monktoberfest, which everyone can find on YouTube. It was uploaded a few months ago. The event took place, well, a few months ago. Maybe a year. I can't really remember. It's in Portland, in Maine. [inaudible 00:06:05]. And you talk about many things in that talk. I mean, you've been a long-time speaker at that conference. But in the last one, you mentioned specifically the spirit of Silicon Valley, right? And you describe it in ways that are not only inspiring for you. You actually frame yourself as a son of that idea. There's an affiliation. Talk to me, please, about – how would you describe the idea of Silicon Valley? And why do you feel that, professionally and personally, you're a son of that idea and of the generation previous to you? [00:06:47] BC: Yeah. And unfortunately, I think Silicon Valley has transitioned over the years. And I would say that I view myself as in the spirit, in the original spirit of Silicon Valley. [00:07:03] JMC: Yeah. Specifically. Yeah. [00:07:06] BC: In particular, I mean the roots of Silicon Valley were these engineers coming out to Shockley Semiconductor. And realizing that Shockley was a jerk and wanting to go their own way. Shockley, one of the inventors of the transistor, but an extraordinarily difficult human being. And as it turns out, a eugenicist and a racist and a bunch of other things.
I mean, in many regards, that kind of origin story of Silicon Valley was very prophetic: you had someone who was potentially a brilliant technologist but was such an impossible human being that they limited their own efficacy. But he had grown up in Palo Alto, which is the reason he had returned there. And this group of eight engineers, which Shockley called the traitorous eight – and they wore it as a badge of honor – went to form a subsidiary of an East Coast company called Fairchild. Fairchild Camera and Instrument. And that group became Fairchild Semiconductor. And Fairchild Semiconductor, to me, that is the true origin of Silicon Valley. It is at Fairchild. And at Fairchild, they very much invented the future with Gordon Moore and Andy Grove. And actually, our board member, Pierre Lamond, was at Fairchild. And it is wild to hear Pierre's stories of being at Fairchild. And this is at a time when the transistor is brand-new. The integrated circuit is being invented. And these are the things that become the foundation for everything. Every single thing we do is ultimately due to the semiconductor advances in Silicon Valley in the 50s, and 60s, and 70s and 80s. That sense of innovation and of venturing boldly into the unknown with a group of fellow engineers – that to me is Silicon Valley. That is the true spirit of Silicon Valley. And it's tragic to me – but maybe it's also right there in the origins – that that spirit is corrupted by this kind of lust for mammon and material wealth. Which is like – I mean, listen. Material wealth, like, fine, great. It's ultimately – like it's not the meaning of it all. And to me, as a technologist, I am much more interested in the kinds of breakthroughs that we can have that can serve us all, right? That can serve all of humanity. And I read a terrific book years ago, from The Sloan Technology Series, which is a great series of books, a book called Dream Reaper, on the invention of the Bi-Rotor combine, by Craig Canine. The Bi-Rotor combine being a new kind of combine – the combine combines reaping and threshing. And it's as much a history of agricultural technology as it is anything else. And agricultural technology, the history of agricultural technology, is something that we should all take real appreciation of, because that is the reason that you and I are not in the fields today. And ultimately, it is innovation that has allowed us to not just survive, but thrive. And the history of humanity is a history of that technological innovation. And so, to me, that's what Silicon Valley is. It's more than a place. It is very much that spirit of innovation and of solving hard problems. But it is true and unfortunate that that has been distorted over the years. I would like to see Silicon Valley return to its roots of real deep innovation and solving some of our most pressing problems. [00:11:30] JMC: Okay. There are two things there that I'd like to pick on. One later, which is the core of the conversation that I want to have with you, which is hardware-software co-design. And so, I presume – and please don't elaborate now on this. But I presume that the Fairchild era was more about hardware than software. I'm fairly sure there was firmware involved. But I think the importance of software came later for everyone in the world. Yep. But also, picking up on what you just said.
Apart from – regardless of the perverse incentives that now dominate Silicon Valley, like you just mentioned, there's also a sense of an era coming to an end, right? Limited by physics, right? You mentioned in that talk, and in another talk that you recently gave at that open firmware conference, that, well, Moore's Law is ending, right? I'm not sure if it's dead. But it's ending. There is an end of a cycle, right? Regardless again of how perverse incentives have become right now. I guess, how does it feel to want to embody the spirit of early Silicon Valley when the limitations of physics are starting to surface, have become really patent? Where's the next innovation going to happen? [00:12:52] BC: Well, I mean, I think that Moore's Law was an observation by Gordon Moore originally in 1965. I mean, Moore's Law was not even – I mean, it wasn't even codified really. I mean, it was kind of posited in Gordon Moore's original [inaudible 00:13:09] paper. But Moore's Law is not a law of physics. And in fact, at any given moment, if you took your time machine and traveled into Silicon Valley and asked what is the state of Moore's Law, people would tell you that, like, Moore's Law can't last more than another couple of years. Moore's Law will end. And now, all that said, it's like, "No. No. No. This time we actually are at –" this time, really. But I think it's important to realize that, like, that transistor density – and, again, Moore's Law was, or could rightfully be conflated with, transistor speed, and density and economics. My kind of thrust has been that Moore's Law – and I gave a talk a couple of years ago wondering, was it actually Wright's Law all along? So Theodore Wright was an economist at an aircraft company in the 30s and observed that the more we make things, the cheaper they get. Because we get better at it. And if you look at Wright's Law, it explains the economics of transistors arguably better than Moore's Law does. I actually think that, like, it's not Moore's Law. It's the end of Dennard scaling in 2006. With Dennard scaling, for years, the clock rate of your CPU would double every 18 months. And if you came up in that era, as I did, it was remarkable. Because, I mean, they had to actually make new computers go slower to be able to play old video games that had old timing loops in them. And I don't know who the marketing genius was who, instead of adding a go-slow button, added a turbo button. And the turbo button was always pressed. And if you wanted to slow it down, you would – which was, again, an act of genius. Those were the kind of halcyon days of Dennard scaling. Dennard scaling ended in 2006. And the CPUs that we're running on are really no faster from a clock perspective than the CPUs we were running on 15 years ago. What has shifted is that the transistor density has continued to climb. We've got more and more cores on that die. We've found ways to take advantage of those cores. And now that transistor density is itself slowing, we're forced to innovate again. We're forced to innovate differently again. I think that even Moore's Law, you can kind of see it continue for another couple of generations. But I do think that we are going to shift our focus. And I believe we're going to shift our focus to how we make better use of these computational resources. There's a lot of waste, frankly, that has been hidden by the rising tide of Moore's Law.
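A minimal sketch of Wright's Law in its usual learning-curve form – this is the standard statement of the observation, not something quoted from Bryan's talk:

    cost(n) ≈ cost(1) × n^(−b)

where cost(n) is the cost of the n-th unit produced and 2^(−b) is the fraction that costs fall to with each doubling of cumulative production – for example, b ≈ 0.32 corresponds to roughly a 20% cost reduction per doubling. The contrast with Moore's Law is that progress is indexed to cumulative volume rather than to the calendar.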
And there are a lot of things that we can do to be smarter in software and systems. And those are the kinds of things that we've been exploring post the end of Dennard scaling. There are a bunch of other things that we can go do. And I think that when you – I mean, indeed, you could argue that the end of Dennard scaling has prompted software to get more efficient. There was an era where it just felt like software was getting more and more morbidly obese. And you can view things like Rust and the rise of Rust as a real response to that. Where, actually, it's getting us back to a leaner artifact, getting us back to artifacts where we do care about memory size. We do care about text size. Program text size. And so, I mean, to me it's exciting. It means that the problem is going to shift again. And speaking strictly from an Oxide perspective, we believe that one of the ramifications of that is that we need to stop building computers to throw out after a year and a half or two years. What has historically been true in the server space is that they very much have that personal computer zeitgeist where the machine itself is kind of junk. The machine that's surrounding the CPU. Because a better CPU is going to replace it in a year and a half or two years. It's like, well, if a CPU is actually not something that is going to last a year and a half or two years but it's going to last more like three, or five, or seven, how would that change the way that you architect the system around it? And we think that there are a bunch of things that you would do differently to reflect a machine that is going to be more durable. And there are a bunch of things that you putatively can't afford to do if you're going to throw it out after a year and a half that you can afford to do if it's something that's going to be really [inaudible 00:18:16]. [00:18:18] JMC: Before we move on to, again, the core of what I want to pivot the conversation to, which is exactly – you mentioned it, Oxide. There's no better way of embodying the ideals by which you want to rule your life than actually applying them, taking them into practice. And you've founded a company that is about hardware and software co-design. But you mentioned Rust. I've got an unpopular opinion about – I have nothing against Rust. It's just that there's always been C and C++. So people effectively caring about optimization and lightweight software [inaudible 00:18:56] and getting the most out of hardware. That would be an oversimplification of what C++ is about. But in the context of this conversation, I think it would be a good summary of it. And it's around 90% of the code that runs the world. And arguably around half of the code that runs critical infrastructure, if not more, from the latest data that I've got. And it's not going anywhere. I mean, I'm not sure what your opinion is about that. But I see a lot of optimism about Rust. And yet, like we were saying about Twitter earlier, the pull, the gravity of C++ is going to be strong enough that hopefully – and this is my opinion, or my sort of wishful thinking – it will evolve into something that makes it safer. I guess, what's your opinion on that? Because, again, there was always – the firmware community was always strong on C and C++, right? Correct me if I'm wrong. You're probably more of an expert than I am in that. [00:19:55] BC: C, primarily. But, yeah. I mean, I've spent my career doing OS kernel development.
I've been writing and implementing in it – and when I actually started my career in the mid-90s, going into OS kernel development, I was being told that C was dead and that it was all going to be C++ and then, especially, Java. And with the rise of Java, we wanted to put Java everywhere. And I was at Sun during the heyday of Java. I mean, we ate at a cafe that was literally called the Java Java. And that campus – Sun's – is now Facebook, now Meta, I guess. But Sun wanted to do Java microprocessors and Java operating systems. And like that didn't make sense to me. Because what I saw was that these systems actually needed to deliver the highest performance. And C was the way to do that. Not even C++, right? But C. But the problem with C – and then – I mean, especially C++. To me, C++ was always the worst of all worlds, where you end up with all of the unsafety of C but with the ability to create these abstractions much faster than you could debug them. So you end up with these systems in C++ that are effectively undebuggable. We never used C++ in the operating system kernel. I think that's true for most OS kernels – they don't have C++. C++ obviously metastasized at Google and beyond. And there is a lot of important C++ out there. But to me – I had written about a hundred thousand lines of C++ and I decided that I was writing no more, because I felt that what C++ gave me was so little. And it didn't give me anything with respect to actual safety. It took a lot away in terms of runtime and especially complexity. And C++, I mean – it's like Super Size Me for abstractions. Do you recall the Super Size Me movie? To me, it's like abstraction junk food. It creates the ability to, again, create these abstractions very quickly and not be able to debug them. For me, I was in a bit of a quandary, because I was struggling to find what is that thing that will replace C. There are a lot of problems with C. And I say this as someone who – again, I have written a lot of C. I know how to write memory-safe C. And memory safety is one of those things where the challenge is not – like freeing what you allocate is not hard. And it's not hard to avoid things like double frees in general. What becomes much, much trickier is when you have an interface boundary and I want to call into your code. Well, now we're going to have to have a contract. And that contract's going to be implicit. And the way we would write C, we were pretty disciplined about it. And we had a bunch of patterns where you could reliably know that when you want to create one of – we wrote very effectively object-oriented C, where if you want to create a foo, you're going to call foo_create. It's going to return a foo_t. Any operations on foo are going to take a foo_t pointer to operate upon. And then there's going to be a foo_destroy that is going to actually free the foo. Which is fine, but very much relies on convention and doesn't actually allow you to do things that are really sophisticated. In particular, one of the things, as I was first experimenting with Rust – the experience that I had is that my very naive Rust outperformed my handwritten C. And I'm like, "Why is this?" And in the particular program that I was writing, the reason for that is that, in my C, for this particular data structure, I'd used a balanced binary tree, as you do.
I used a balanced binary tree library that we wrote years ago, based on [inaudible 00:24:32] trees, that is extremely robust and that I can use very quickly. In Rust, you use a B-tree if you want to – there's no red-black tree. I'm sure there's a red-black tree implementation somewhere, an [inaudible 00:24:44] implementation somewhere. But the kind of default standard collection data structure that you use is a B-tree. A B-tree is a better data structure than a balanced binary tree. It is a more cache-efficient data structure. There are lots of reasons. But a B-tree is really gnarly to implement. And indeed, it's like, where is the C library to implement a B-tree? I mean, it is obviously possible to implement a B-tree in C. It is really, really difficult, because a B-tree fundamentally relies upon moving the location of memory objects. That's the way a B-tree works. When an object is going to be promoted up into a larger node, the memory for that object is going to be moved. And this is where the contract breaks down. Because if we've got a foo library that has a foo_create and a foo_destroy, you rely on the fact that that library is not going to change the location of that foo underneath you. Because you ultimately are – you've exposed the innards, the implementation. Even careful C exposes the implementation. The problem with that is that C, by its nature, very much inhibits composability. And C++ papers over this. You get marginally better composability. But you haven't actually solved the safety problem at all. It's very easy to have a C++ system that operates at cross-purposes or [inaudible 00:26:16]. And you're like, "Where the hell am I right now?" And it's very, very hard to debug. And – [00:26:24] JMC: Do you envision a future – I guess your ideal scenario for the future is one of all C and C++ replaced by Rust? [00:26:33] BC: Yes. [00:26:33] JMC: I mean, not necessarily – okay. But could you envision even a medium-term scenario in which coding in C and C++ with an AI code companion guiding best practices, principles, whatever you may want to call them – actually enforcing those? [00:26:54] BC: Give me like ChatGPT to help navigate memory safety? I mean, Jesus Christ. [00:27:01] JMC: Is it a no? [00:27:02] BC: No. I mean, that's its own punishment. That's its own punishment. Honestly, I would not try – unfortunately, I'm a parent of teenagers. And I know that there are certain ideas that you don't talk a teenager out of. Because it's like – no. You know what? In fact, I insist that you do that, because you need to learn a life lesson here. And me describing why that's a bad idea is not going to be retained. You experimenting with this bad idea will be. And so, as a parent, you kind of constantly have the, like, is this bad idea going to get you killed or injured? That one, I need to intervene in. But if it's not going to get you killed or injured – if you want to shave your head or dye it blue. [00:27:51] JMC: Yeah. Burn a finger with a candle, whatever. [00:27:53] BC: Yeah. And so, ChatGPT to navigate memory safety, I very much put in that category of, no, in fact, I insist you go do that. I think that will emphatically be its own punishment. I mean, that is truly – that is not a solution to the memory safety problem. [00:28:17] JMC: Okay. [00:28:19] BC: But, again, anyone who is feeling that desire in their loins – please. Go act on it. Just don't inflict it on the rest of us and this will be great.
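To make that concrete, here is a minimal Rust sketch – not from the conversation, with a hypothetical Foo type invented purely for illustration – of how the create/operate/destroy contract that Bryan describes enforcing by convention in C becomes something the compiler checks, and of how the standard library's B-tree map can move entries around internally without the caller ever holding a pointer that could dangle:

    use std::collections::BTreeMap;

    // Hypothetical "foo": the C analog would be foo_create()/foo_destroy() returning a foo_t*.
    struct Foo {
        label: String,
        count: u64,
    }

    impl Foo {
        // Analog of foo_create(): ownership of the new Foo passes to the caller.
        fn create(label: &str) -> Foo {
            Foo { label: label.to_string(), count: 0 }
        }

        // Analog of an operation taking a foo_t*: an exclusive borrow that cannot outlive the Foo.
        fn bump(&mut self) {
            self.count += 1;
        }
    }

    // Analog of foo_destroy(): runs automatically when the owner drops the Foo, and the
    // compiler rejects any use after that point; the contract is explicit, not conventional.
    impl Drop for Foo {
        fn drop(&mut self) {
            println!("dropping {}", self.label);
        }
    }

    fn main() {
        let mut foo = Foo::create("example");
        foo.bump();

        // Rust's default ordered collection is a B-tree. Inserts may relocate entries within
        // the tree's nodes, but the map owns its values and callers never hold raw pointers
        // into those nodes, so the relocation is invisible and safe.
        let mut map: BTreeMap<u64, Foo> = BTreeMap::new();
        map.insert(1, foo); // ownership moves into the map; `foo` can no longer be used here
        for i in 2..=64 {
            map.insert(i, Foo::create(&format!("entry-{}", i)));
        }

        if let Some(entry) = map.get_mut(&1) {
            entry.bump(); // borrow-checked: valid only while the map isn't otherwise mutated
        }
        println!("entries: {}", map.len());
        // All Foos are freed here as `map` goes out of scope.
    }

The point isn't that the Rust is cleverer; it's that the library is free to reorganize memory because nothing in the caller's hands depends on where a Foo lives.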
[00:28:28] JMC: If anyone is coding C, C++ with ChatGPT as a code companion – or Cody, or CodeWhisperer, or any other, Copilot, whatever – let us know. We'd like to know. Let's go back to – yes, exactly. There's no better way to embody those principles that we described at the beginning than founding a company, which is Oxide. By the way, today has been an important day at Oxide, right? A huge milestone has happened. You just told me before we started recording. Can you describe what that company is and what happened today? [00:29:02] BC: Yeah. Today – it hasn't happened yet, it's this evening or this afternoon – the first Oxide rack will be loaded into a crate and headed out to its first customer. So we've been – [00:29:15] JMC: Can you tell us where that customer is? Is it in a town like Portland, Maine or something? Or is it in a big city? [00:29:24] BC: Not exactly – I would want to respect their privacy. But the – we've got – fortunately, when you solve a hard problem like this and you really broadcast that you intend to solve it, people present themselves. You've got technologists who present themselves to help you do it. And customers present themselves and say, "Hey, we've been looking for this. Thank God someone is finally solving this problem." And those that have been suffering with Dell, and HP and Supermicro are excited about what we are taking on. Our earliest customers are in that category of folks that have been suffering with that. [00:30:11] JMC: You took investment from venture capital. But you take investment also, in a way, from these initial customers that trust your vision, right? [00:30:19] BC: Absolutely. [00:30:21] JMC: Tell us, what did you ship out? What are the principles that went into that rack? Like, there's been a lot of work. But what are the principles that you envisioned and that are now embodied in that rack that is shipping later today? [00:30:37] BC: Yeah. In particular, what we saw – I mean, a couple of them. At the kind of highest level, we are trying to bring modernity to on-prem infrastructure. The core thesis of the company is that Jeff Bezos – or I guess, Andy Jassy now – is not going to own and operate every computer on the planet. That there exist reasons to run your own servers in your own data center. And it's not for everybody certainly. And we are big public cloud proponents. Especially when you're small, when you're just getting started, public cloud is great. And in particular, we are huge believers in elastic infrastructure. You should be able to hit an API endpoint and go provision a virtual computer, and a virtual NIC and virtual storage, and be able to hydrate that by hitting API endpoints and using Terraform and so on. So, huge believers in that. What we are also huge believers in is that there are reasons to run that in your own DC. Those may be security reasons. They may be latency reasons. They may be regulatory reasons. They may be risk management reasons. And they may be economic reasons. As it turns out, if you're going to use a lot of compute, just like if you're going to use a lot of anything, it's generally a better idea to own it. If you're going to use a lot of compute, you actually don't want to rent it. You want to own it. And I think, increasingly – you know what? It used to be true way back in the day. At every AWS re:Invent, a price cut would be announced. And they did it for so long that there was kind of this entrenched idea that the cloud is only going to get cheaper.
And we knew – because I worked for a public cloud company at the time – it's like, that's going to have its limits. And re:Invent is not about the price cuts anymore, you know? We don't really get those big price cut announcements. And certainly, there are some things, like bandwidth, that have never seen a price cut. And what people are realizing is like, actually, this is pretty expensive. And it's like, "Yeah, this is –" kudos to the execution of Amazon for giving people the idea that public cloud was a terrible business that no one wanted to be in when it actually is a very – it's a high-margin business. And it underwrites the rest of the retailer. We see those economics as increasingly a driver. But honestly, we see all those drivers. The security and risk management, and latency, in addition to regulatory compliance, in addition to economics. And each of those kinds of folks has got kind of a different angle that they are bringing. All of them share a common frustration, though, with the state of the art for on-prem computing. It looks like the personal computer because it is a personal computer. This is something that basically hasn't evolved – the actual machine architecture has not evolved. The CPU has evolved. The CPU is very sophisticated. It's gotten better and better and better. But the machine around it is trapped in time. And that's to say nothing of the software that you want to run on top of that. What you want to deliver is elastic infrastructure. Say I am the platform group in a company, and what I'm responsible for is that infrastructure for my developers, and I'm trying to do that on-prem – or I need to do that on-prem for the other reasons. If you're doing that on-prem, you're stuck in these legacy machine architectures. And then, what do you do for software? It's like, those things don't come with software. When you buy a Dell or an HP system or a Supermicro system – actually, I mean, sadly it's even worse than that. They do come with software. The software they come with is this atrocious firmware running the baseboard management controller, or the iLO, or the iDRAC. And the software that actually controls the computer is not great, to put it euphemistically. It's got a lot of problems associated with it. So I've got a bunch of software that I actually don't want. It's kind of in my way. Then I don't have the software that I do need, namely the software that I actually want to be able to run not just an operating system but an actual true, proper elastic infrastructure. I am responsible for actually going and developing that distributed system on top of that. Whether I'm buying an off-the-shelf product from the likes of VMware or I'm trying to tack into an open source project like OpenStack, it's pain and suffering as far as the eye can see. Because when that system doesn't behave and my developers say, "Hey, I went to go provision this VM and it's taking like 10 minutes. What's going on?" Or, "I can't provision it at all." It's like everybody points fingers at everybody else. VMware is pointing fingers back at Dell. Dell is pointing fingers back at Cisco. Cisco is pointing – and the problem is that the end user of this is the one who has had to do this integration and is suffering with the fact that these things weren't actually designed together.
There is no co-design between the computer that's been designed by Dell, HP, Supermicro, the elastic infrastructure that's been designed or evolved, and the networking infrastructure. All of these things are operating at cross-purposes. Kind of the big thesis for Oxide was, we want to slice through all that. We want to truly co-design hardware and software. We want to use that to deliver a coherent system. And one that, out of the box, delivers elastic infrastructure. You should power it on, provide it the necessary configuration to speak with your network, and you should be provisioning VMs. Like, that shouldn't be as hard a problem as it is. But the reality is these layers are ossified to the point that, in order to actually create that system, you've got to demolish all the boundaries between these layers. And there's a lot of hardware. But then there's a lot of software. And so, that's what we've been building. That's been the thesis of Oxide. And that's what we've been on the journey to build. And one of the challenges we've had is, what is the minimum viable product? I mean, every startup has this, you know. What is the minimum viable product? And one of the challenges that we have, that a lot of deep tech or hard tech startups have, is the minimum viable product is pretty big. And in particular, part of the problem with the Dell, HP, Supermicro – or the Cisco, Arista, Juniper – is they are delivering this sliver. And when you are only delivering that 1U, 2U server, it's actually very hard to assert control over the rack. And our belief was, and remains, that the minimum viable product is a rack-scale computer. So that includes the power shelf. The power shelf controller. It includes the switch. So we developed – in addition to developing our own compute sled, we developed our own networking switch. We joke that we're nine startups within one startup. There are days when it feels more like two dozen, because we've taken on a lot of very challenging problems. But the upside of having done it that way, and of having co-designed the switch with the compute sled, with the cabled backplane, with the rack, is that we can truly integrate this stuff together and solve some of these really thorny problems and deliver to the end user a turnkey experience. You can view it as – I mean, the successful products in computing have done this, right? This is very much the Apple ethos, right? This is what Apple has historically done. Now, where we diverge is we also believe that when you take this fully integrated approach, there is a lot to be gained and nothing to be lost, as far as we're concerned, by being completely transparent. Everything we do at Oxide is open source. Everything we do is out there for people to see and understand. We've been very transparent about how we're building it. We're not secretive at all, because we want people to understand what we've done and the approach we've taken. Where people have disagreed with it, we've always wanted to hear about it. I think what technologists have found as they've waded into the details of Oxide is, "Okay. Finally, someone has designed it the way I would design it." And that's because we have pulled in a bunch of folks that, for different aspects of the system, have a belief in how this should be done. And that's what's reflected in the Oxide rack. [00:40:00] JMC: Building on this, you've manifested in the past also, not criticisms, but concerns about how fully open source the RISC-V project is. "RISC five" or "RISC vee" – I never know how to pronounce it, to be honest.
I think it's "RISC five", right? [00:40:14] BC: "RISC five". Yeah. [00:40:16] JMC: But you build your hardware mostly on AMD architecture? Correct me if I – [00:40:23] BC: Yeah, AMD Milan for the CPU. Cortex-M7 for the service processor. M33 for the root of trust. And then Intel Tofino for the top-of-rack switch silicon. And then we are using a bunch of other components from other vendors for different – but those are kind of the major computational components certainly. [00:40:47] JMC: And how much collaboration have you found from those providers to open source everything that runs on them? Has it been difficult? [00:40:56] BC: They've been great. I mean, they've been really – and I think that, fortunately, they see the same thing that we see, that this lowest layer of platform enablement software has been a real problem. It's remained proprietary. I mean, take AMD. To their credit, there's openSIL, something they announced a couple of weeks ago, which is this lowest-level silicon initialization library. And they're really committed to – we are not using openSIL exactly. But we are extremely supportive of that effort, where AMD is pioneering open silicon enablement. And Intel too has been very receptive to that. I mean, I think it's been – which is not a future that one would have envisioned a decade ago. And there certainly are folks that still view this stuff as ultra, ultra-proprietary. When we make decisions, that's something that we factor in. We really look at what a company's software and firmware disposition is. We use [inaudible 00:42:13] for our NIC in part because [inaudible 00:42:17] – we love [inaudible 00:42:19] software disposition. And [inaudible 00:42:21] – even a NIC that is not a SmartNIC, a traditional NIC, there's a lot of sophistication in the NIC. And [inaudible 00:42:30] has always been exemplary in getting those drivers open source and having a driver model that has got longevity to it and so on. That's something that we really looked at when we were evaluating these different components for the rack. We look at a vendor's disposition with respect to that. [00:42:49] JMC: Going back to your product. In terms of – I don't know. I'll mention a few areas of the product that you're shipping: memory management, networking, I/O pressure or whatever. Tell me the three things that you feel most proud of in what you've achieved in terms of design. They might be innovations or optimizations of previous things. But in those terms, anything? [00:43:11] BC: Yeah. Boy, there is so much. Because when you take a clean sheet of paper – I would say the kind of extant server ecosystem infrastructure is so ossified that you can't just take on a little bit of it. You kind of have to take on the whole thing, which is what we've done. But when we take on the whole thing, it's not one innovation. It's like there are so many different ones. Well, first of all, the fact that we have been able to pull off our own server design. The fact that our compute sled has no BIOS in it. There is no AMI BIOS in the system. This is a dimension in which we are regrettably total pioneers. Because you would think that the rest of the industry would do this. But even the hyperscalers have suffered at the hands of these kind of traditional BIOS vendors that are responsible for that lowest level of silicon enablement, platform enablement. We have no BIOS.
The first instruction that executes after the AMD PSP is ours – it's our operating system pulling up the rest of the system. Watching that come to fruition – I mean, these things boot like a rocket. They spend most of their time training DIMMs – about a minute and 12 seconds to train the terabyte of DIMMs. When we come out of DIMM training, it's 20 seconds, 30 seconds to pull the rest of the system up. And servers don't traditionally boot anywhere near that fast. Servers take a long time to boot. [00:44:53] JMC: Are there any tradeoffs from removing the BIOS that have to be factored in? [00:44:59] BC: Yes. The tradeoff is that we are responsible for getting this thing to work. And that is the danger. The danger is that you – and actually, when we took that path, we knew it was going to be a steeper path and a harder path. What I did not realize was that, ultimately, in the limit, it was a faster path. Because we controlled our own fate. It was very, very hard, and we have an extraordinary team that got very good at inhaling all the documentation on the part and then some. But, ultimately, we were able to deliver a platform faster by having control over our own software. That is a decision that I have been grateful for many, many, many times over. I think the ability to do that. The fact that we have eliminated the BMC. Replaced it with a proper service processor. It runs an operating system that we developed, an all-Rust operating system called Hubris, appropriately enough. Because if you develop an operating system called Hubris – speaking personally, I spent a lot of time on the debugger for Hubris, which we call Humility, appropriately enough. And watching that become really load-bearing for us has been a source of personal pride for me. I mean, part of what I love about the Oxide rack is that every employee at Oxide can look at the rack and point to an aspect of it that is theirs. And that aspect of it is like, I did that. The reason it behaves that way, the reason it has that design decision, is because of something that I did. And a lot of that stuff is subtle. It's something that only an engineer really appreciates. And there are countless examples of that. And it's a lot of fun to watch that all come together. I love the aesthetics of it. The rack – it's beautiful. And we don't really view racks as – servers are kind of like [inaudible 00:47:17] and loud. And our server is beautifully quiet. One of the very first design decisions that we made was based on – some of the early folks we were talking to included a technologist named Trammell Hudson. If you don't know Trammell, he's just an electrifying technologist who's really been a pioneer in open systems. And Trammell said, "Hey, you really want to look at what the Facebook folks did with 80-millimeter fans. They used these taller enclosures with these bigger fans, 80-millimeter fans." And it seemed like a good idea to me. And as we looked into it, it's like, "Yeah, that's a really good idea," to get up off this 1U, 2U kind of tyranny. And we've got a 100-millimeter-tall enclosure that fits that 80-millimeter fan. 80-millimeter fans move a lot more air. And they do it with a lot less energy. As a result – and we worked with our fan provider [inaudible 00:48:18] to allow our fans to operate at 2000 RPM at zero percent PWM rather than 5000 RPM. Which is to say, when the thing is at its lowest setting, how fast is it spinning? By default, that was 5000 RPM. That was way more air movement than we needed.
It operates at 2000 RPM at zero percent PWM. 2000 RPM is quiet. And so, when you're next to the Oxide rack – and in fact, when we went into compliance testing for radiated emissions, the folks at the compliance lab – they see a lot of servers – they're like, "Are you sure it's on?" Because it's so quiet. And you look at the draw and you're like, "Okay. It's definitely drawing like 15 kilowatts. But it's quiet." And if you go to the back, you can feel the heat pouring off it. You're like, "Okay. No. It's on. It's definitely cranking." But we have become so accustomed to the violence of these 15,000 RPM fans. And they are so loud. And on the one hand, it's not like we designed this rack to be acoustically pleasing. I mean, that is not what we set out to do. But on the other hand, the acoustics of the extant data center – the acoustics are this – it's almost like an odor. It is this visceral reminder that this domain has suffered for lack of real systemic, holistic thinking. [00:49:56] JMC: Inadvertently, what I'm hearing is that you've set out to deliver something that is a pleasure to work with, right? Whether it's at an acoustic level, a visually-appealing level, or from the actual value that it provides by enabling elastic infrastructure to run on it fast and sort of like – [00:50:18] BC: Yeah, that's exactly it. And it's like – we wanted to build something that we would be proud of and a foundation that we can go build on. And I think that we are – what's exciting about this is it's not that – this is not the end. And today is a terrific milestone with this first rack being crated up. The crate, by the way, is its own engineering marvel. Because to ship a rack with the sleds in it has been a huge amount of work from [inaudible 00:50:49] folks. But it's like, this is very much the beginning. And we now have a platform upon which we are going to just – like AWS circa 2006, 2007, 2008, where they really saw the ability to go build all these additional services. We've got the ability to go build a bunch of that and be able to really deliver modernity to the on-prem operator. [00:51:18] JMC: And this is a perfect segue for the last question, actually. I know – from other conversations you've had on podcasts, and interviews and stuff – that, for now at least, you've discarded the home lab sort of market segment. You're not going to ship anything for that, right? You're going to ship for companies. But I guess, what about the new trend of GPUs? And, I guess, high-compute-requirement services and LLMs and stuff like that? Do you see that as an opportunity for you guys to actually intervene there? [00:51:48] BC: Yeah. It's super interesting. We're obviously, like everybody – the GPGPU has become very important. We needed to go solve the compute, storage, network problem before we tackle the GPGPU problem. We do not tackle the GPGPU problem today. It's got CPU. It's got compute. It's got networking. It's got storage. We're going to be pretty careful about how we go into that. Honestly, we struggle to see how we can deliver real Oxide value with Nvidia. I mean, all kudos to Nvidia, obviously, for really having the vision that's actually there. But Nvidia is a very proprietary company. And that's not really consistent with what we want to go deliver. And our belief is that we, at Oxide, really need to take responsibility for the experience. And we can't do that when we've got a deeply proprietary partner that has its own ambitions.
And ultimately, Nvidia – I mean, boy, if the Arm acquisition had gone through, God only knows where humanity would have been led. Because Nvidia has the idea of actually re-proprietarizing all of compute. And it's a problem. And, hey, Nvidia, if you're listening, maybe you could experiment with truly open sourcing things instead of open sourcing a trampoline into your own proprietary firmware. How about you actually open source the stack? Go to truly open designs. Because I think it makes it really hard for people to integrate an Nvidia GPGPU into a broader system and then take end-to-end responsibility. I don't think it's going to be Nvidia. It's not impossible. But I don't think it's going to be Nvidia. And if it's not Nvidia, well, okay, who? And there are some interesting folks out there that are taking some interesting swings at this problem that we are paying very close attention to. But what we need to figure out is how we go partner with someone to really deliver Oxide value. And Oxide value is the ability for the end user to have total visibility into how that infrastructure is being used. Where the power is going in the system. Where the heat is being generated in the system. Where the performance is in the system. Because we want to be able to make rack-level decisions about where things are scheduled, where they run, and provide visibility into those systems. So the operator can know that their elastic infrastructure is doing what it is set out to do. And in order to be able to deliver that kind of value to the customer, we need to have solutions that are much more open than what we have today. We're paying very close attention to a bunch of different companies out there. I've had a lot of very interesting conversations with folks. But for the moment, that still lies in our future. [00:54:57] JMC: Bryan, I hope that lorry that left your headquarters this morning, that truck with your product, was the first of many, many. And I only wish you the best of luck with Oxide. And I thank you for joining us today in this conversation. [00:55:11] BC: Yeah. Thank you so much for having me. It's really terrific to be with you. [END]