EPISODE 1914 [INTRODUCTION] [0:00:00] ANNOUNCER: FreeBSD is one of the longest-running and most influential open-source operating systems in the world. It was born from the Berkeley Software Distribution in the early 1990s, and it has powered everything from high-performance networking infrastructure to game consoles and content delivery networks. Over three decades, it has evolved through major architectural shifts, from symmetric multiprocessing and kernel scalability, to modern storage systems and predictable release engineering. John Baldwin has spent more than 25 years working on FreeBSD as a developer, contributor, and consultant. In this episode, John joins Gregor Vand to discuss the origins of FreeBSD, how its governance model differs from other open-source projects, its role inside systems like Netflix's CDN and the PlayStation 4, the challenges of maintaining a 30-year-old codebase, and much more. Gregor Vand is a security-focused technologist, having previously been a CTO across cybersecurity, cyber insurance, and general software engineering companies. He is based in Singapore, and can be found via his profile at vand.hk or on LinkedIn. [INTERVIEW] [0:01:26] GV: Hello, and welcome to Software Engineering Daily. My guest today is John Baldwin. Thanks so much for being here with us today, John. [0:01:34] JB: Thanks for having me. [0:01:36] GV: Yeah, today we're going to be talking about all things FreeBSD. Some of our listener base will already know exactly what that is. Others will perhaps have some inkling or have read that somewhere to do with something they use in their daily life, which we'll get on to. But I think the first part to go through, which is what we like to do on Software Engineering Daily, is just to kind of understand your background. John, you've had a very interesting technical career. So yeah, I'd love to just step through that before we get into what is FreeBSD. [0:02:08] JB: Sure. 
I guess I have enjoyed working with computers and working with software and kind of how you build things from kind of a young age. I first started programming when I was around 12 or so on a Commodore 64 in BASIC. In high school, I started programming in Pascal and learning some assembly, and had an interest in low-level details like operating systems. A good friend of mine in high school and I, we actually wanted to write our own operating system from scratch, which was a bit overly ambitious on our part. But we were already interested in kind of that level and kind of doing systems-level programming. When I was in undergrad, I was first exposed to FreeBSD. This would be in the mid-1990s when free and open-source UNIXes were coming onto the scene. And that's the one that I first started using in college as an undergrad. I got really interested in using it both as a user myself. I kind of forced myself to use that as my daily driver during school and for my classes. But I also ended up being a sysadmin at my university and kind of managing our undergraduate lab. And as part of that, I kept working with FreeBSD more and more. And by the end of my undergrad, I actually started contributing enough patches to join the project as a member, even as a senior in university. So then, when I graduated from school, I had the opportunity to actually go work for a FreeBSD company doing the thing that I really enjoyed doing. And that was my first job out of college. And so I've been active in FreeBSD and working on various parts of the system since around 2000 or so. And I've gone through various jobs and various employers, but the one constant in all of them has been that, in some way, each job I've gone through or now as a contractor, each client that I have, they use FreeBSD in some form. So, I've had the wonderful pleasure and opportunity to work on this project that I really enjoy that many people only get to work on as a hobby. 
And I get to work on it, paid in some fashion, and get to spend my time working on this project. And even though my employers have changed over the years, I now have this kind of - it's crazy, a 25-year career of hacking on an operating system kernel and various bits of userland and other things that go along with that. So, that's been really neat, a lot of fun. I feel very privileged and honored that I get the chance to do this. And that's the way I approach all my work, is that it's really a joy to do it. [0:04:20] GV: That's awesome. And I think, yeah, we obviously have quite a few listeners that are probably at the beginning of their software journeys. And I think it's a really interesting way to frame it that actually you can get so deep into a specific technology - not that the employer doesn't matter, so to speak. But if you're able to weave that thread of the technology that you just enjoy through it, then that maybe makes life more enjoyable as well. [0:04:46] JB: It's fun to work on. I'll say that. And one benefit, I guess, of - I guess it's a trade-off. In the one case, you would love to have this nice, stable job that lasts forever that is the same thing all the time in some ways. But by moving between different employers at different times, I got to work on different types of projects and different workloads and find out about different areas of stuff. So, one of my jobs was at the Weather Channel working on local TV insertion, where you insert local content that's relevant to a geographic region, like the local weather forecast instead of the national weather forecast. And so that was exposure to TV and video. And I could tell my parents, "Hey, that thing you see on TV, I helped work with that," versus other jobs that involved much more low-level performance and low-latency networking and all sorts of different things. 
And one benefit of that for me is I've done a tour through different parts of the kernel and had to learn different parts of the operating system over time, motivated by the different projects I was working on. That's also been neat to have that chance to wander around in the FreeBSD source tree, kind of in parallel to wandering around both the US as a country and the different jobs I've had. [0:05:51] GV: Yeah, that's awesome. So let's get into FreeBSD. What is it? Why did it come about? Let's start there. For those that just aren't fully in the know, what is FreeBSD? [0:06:04] JB: FreeBSD is a general-purpose UNIX-like operating system. So very similar to Linux or macOS, for example. FreeBSD started in the mid-'90s. It's actually descended from the kind of variant, you might call it, or flavor of UNIX that was developed at UC Berkeley in the '80s, where the first kind of widely used version of TCP/IP came from. And Berkeley kind of wound down their research stuff in the early '90s, but they released what they had done as open source. And so various communities popped up trying to use those bits. And in particular, there was a gentleman, I think his name is Bill Jolitz, and he wrote a series of articles in Dr. Dobb's Journal talking about porting 4.3BSD, I believe it was - or maybe it was a version of 4.4 - to run on x86 PCs, the 386 at the time. And this is before my time. Over Usenet, a community developed around collecting patches that would apply on top of 386BSD to fix various bugs and issues that came up. And there were some personality issues. Bill Jolitz would vanish for a while, and then come back, and then vanish for a while. And people got tired of waiting for the next release of 386BSD to actually come out. Eventually, the community that had collected around 386BSD split off into two groups. One of them was NetBSD. They split off first. And the second was FreeBSD. 
And they said, "Let's take this foundation we have of BSD from UC Berkeley and the patches that Bill had initially developed as 386BSD, and then a whole bunch of patches from various people all over the world over Usenet, and make a release out of it." And that started them rolling. And that's kind of how the FreeBSD project got started. [0:07:45] GV: Yeah. And I think it's helpful to sort of, I guess, compare and contrast at a very, very high level with Linux. In a sense, these are two lineages of UNIX. Is that right? [0:07:56] JB: Yeah. Linux was written from scratch but influenced by the design of the way UNIX works. So it aimed to be POSIX compatible. In particular, GNU already existed as a project backed by the Free Software Foundation, and was trying to make an alternative OS, an alternative to commercial UNIXes at the time. And one of the things GNU didn't have - well, they had a kernel called Hurd, but I think Hurd was perhaps not as far along. And Linus came along and wrote a kernel that would sit on top of the rest of the GNU system. And Linux worked really well, and it became a community that people could go and work on, get a free UNIX-like system, something that was comparable to maybe boxes they had used at work that were UNIX environments. But this is one they could install on their home PC and run. And so FreeBSD and Linux both kind of operated in that same kind of space. [0:08:45] GV: Yeah. And to kind of - I guess for those trying to understand why use one over the other, just at this high level: Linux being this kind of fast-to-change, very, very large community-driven project. FreeBSD perhaps being sort of more of a workhorse that's more "stable" for those that need to use something. [0:09:08] JB: Well, I think sometimes things happen by accident in history. 
And one of the things that happened in the '90s is, in addition to there being people who took BSD from UC Berkeley and tried to make an open-source distribution, there was also a group of folks, some of the people who had been at UC Berkeley, who formed a company called BSDi that was going to sell a commercial version of BSD. And they decided that one of their marketing strategies would be to pick the telephone number 1-800-ITS-UNIX, even though AT&T owned the UNIX trademark. And that made the lawyers at AT&T very unhappy and resulted in a pretty big lawsuit between AT&T and UC Berkeley. And when that happened, that kind of put a bit of a kibosh on things for many folks in the BSD community, because there was a lot of uncertainty about, "Well, what is going to happen to this project?" And along with NetBSD and FreeBSD, in this whole universe, what's going to be the outcome of the lawsuit? Is the source code still going to exist? Is this a safe platform I could build something on top of, or is it going to get yanked out because the judge decides the source code is encumbered? And so I think one side effect of that is that people who weren't certain found Linux as an alternative where that uncertainty wasn't happening. And so you had a shift of mind share of developers, because they were kind of weighing both options early on, and the BSD world had this obstruction in the way. And by the time I got there, the obstruction had been lifted. I didn't start working with FreeBSD till a couple of years later, in the mid-'90s, when I was in college. At that time, the lawsuit had been resolved. But in terms of early developer mind share, the damage had been done, if that makes sense. In some ways, it wasn't even technical. It was just things outside of the technical realm that individual developers don't have control over. It's luck that kind of sometimes decides how things go. [0:10:50] GV: Wow. Yeah. I had not picked that up. 
So that's really interesting history. And as you call out, it is sometimes something that is completely non-technical that has ended up driving a direction or decisions. And just sort of - I guess we'll get into some of the use cases in more detail later. But just to set the scene before we dive in: things that FreeBSD might have touched that you're using as a listener. At a very high level, the PS4 OS runs on FreeBSD, for example. Netflix, their CDN servers, for example, also a huge use case. And macOS. Well, I say most of the listener base - that's incredibly biased of me, I'm on a Mac - but a lot of our listener base are on macOS. And so it's in there as well. But we're going to get into those use cases a little bit later. I think looking at how FreeBSD has actually worked as a project, because we've just talked about sort of, well, there was a sort of mind share split, I guess, as we've just learned, between Linux and FreeBSD. As projects, they're set up quite differently. And yeah, I guess it'd be really interesting to understand how does the FreeBSD project work. Let's go from there. [0:12:01] JB: Sure. One common model in a lot of open-source projects - you even have an acronym for it, a benevolent dictator for life, or BDFL - is you have a person who kind of has a vision for what the project should be and kind of how it should go forward. And that's a model that works quite well in a lot of open-source projects. FreeBSD has never really had that model. Earlier, when I said that FreeBSD came out of this community of folks around the 386BSD patch kit - well, there was already a group. There wasn't a single person. And if anything, the single person they were waiting on proved to be someone who was absent. From its initial start, FreeBSD was already a group of several folks together and no one single architect, no one single kind of benevolent dictator. And that group over time started expanding. Initially, they had kind of a leadership team. 
We still have the same name for it, the core team. But the first group of people that were on the core team were basically the people who had root on the box where the source tree lived. And FreeBSD from its inception used source code control. We had CVS from the very early days. And the box that had the CVS repository - if you had root on that, you were part of the core team. And over time, you would add more folks who were committers. So they were people who had commit access - you could use SSH and CVS to push commits to the repository - as well as folks who were on this core team. And they were all self-selected or invited. That's still true for committers today, for the most part. But for core, it was originally a kind of self-selected group. And it kind of slowly grew over time. And it wasn't as good about maybe removing people who were no longer active, for example, as it could have been. And around 2000 or so, there was a bit of friction between the developer community and core, with developers not thinking that core was quite responsive to what the developers wanted to do. We had a little kind of mini internal revolution by the developer community and crafted a set of bylaws and instituted elections and started electing our governing board. And we've held regular elections every two years since 2000 and have a different rotating group of folks who are on our core team. And we have different trade-offs, I guess, compared to the benevolent dictator model. One of the advantages that we have had is that many of our senior developers from that kind of first generation - well, they worked on FreeBSD at that point. But then some of them found other things to do when they left and went off to other projects. 
And we were able to have younger folks who had come up through the ranks grow into leadership positions in the core team, and kind of survive having multiple generations of leadership in the project. We can live without - in fact, we don't have - a dictator, so that if someone leaves suddenly or is unfortunately hit by a bus, we can survive that. We've already kind of survived that multiple times in our history, in fact. We have the ability to have a structure that will sustain beyond the last of any one individual. On the other hand, when you have committees and groups, things aren't always as efficient. And also, the way that technical direction has historically worked in the project is that developers as individuals work on - kind of a phrase we have for it is you scratch the itch that you have. Sometimes it's an itch your employer has, which is often the case. Or sometimes it's just something that you have. You went and bought a laptop and some driver on it doesn't work, or suspend and resume doesn't work quite right. And so you want to go get that thing to work on your own time, and that's where some of the patches come from. Or if it's employer-motivated, an employer like Netflix has a use case for something, and so they spend their own resources employing somebody to work on fixing bugs or adding features that they need that they then contribute back upstream. And so technical direction arises from these individual developers working on different tasks. And there's not always a single unified vision or direction. And sometimes that's not the best. Sometimes we could benefit, I think, from having a bit more direction. That's one of the things we've talked about internally in our leadership as we're kind of evolving and changing over the years: do we at some point need to have a bit more active direction and try to have a roadmap as a project of where we're going? 
And part of that is talking to our corporate consumers who are active in our community and our individual developers and trying to figure out, "Well, where are the things where you guys are aligned?" And there are common things where, if somebody wants to volunteer to do something and they don't have a particular itch to scratch, they just want to know where is the best place to spend my time - that, I think, is where the value of a roadmap could be: those folks would know how best to contribute their time. That's currently one thing we are looking at as a community, maybe trying to play with developing a project roadmap that we'll update over time. But that's still something to be developed, and it's a lot of non-coding work, which is not always the most fun work to do in open source. [0:16:42] GV: Yeah. And just roughly percentage-wise, what would you say in terms of those that contribute, those that are within sort of fairly major companies that rely on FreeBSD versus the kernel hackers and the person who's at some point picked up a laptop and wants to hack away on it? [0:16:58] JB: The FreeBSD project is kind of a couple of different open-source projects, or at least two big open-source projects within a single open-source project, at least as compared to, say, the Linux landscape. In FreeBSD, we have a lot of development of source code for our kernel and our base system utilities. And then we have this system called ports that allows us to import third-party code, or rather build packages of third-party code. Things like KDE, or GNOME, or Wayland, or X Windows, all sorts of stuff like that. Tens of thousands of those packages. And together, that allows you to build a distribution. And in the Linux world, these things are kind of split up. So you have the Linux kernel folks work on just the kernel piece. And you have a different group of people who might work on the C runtime library, like glibc or musl. 
And then you have folks over in Debian, or Ubuntu, or other places who then kind of assemble different bits from these different sources and glue them together to make a distribution. And in FreeBSD, we do all of that in one place. But it does mean in particular that the work model and a lot of the workflow that happens inside the kernel and base system part is very different from working with third-party packages like KDE. And that shows up in various different ways. One way it shows up is a lot of our work on the source side tends to be funded. I would say maybe at least 80% or so of commits that go in probably have a tag that they're sponsored by somebody. Or if you look at the lines of code, at least 80% probably are paid for in some fashion by some kind of employer, or a client for a consultant, or something to that effect. Whereas on the ports side, where we're dealing with how we manage patches to third-party software and keep it building and working on current versions of FreeBSD, a lot more of that work is on a volunteer basis - it's almost inverted. I would say 90% of that work is probably volunteer instead of funded. We have this different mix depending on what kind of work is going on inside of our project. [0:18:51] GV: Got it. And just before we kind of move on to some of these key use cases, if you like, that we've seen FreeBSD end up in, would you say that having this governance model as opposed to, call it, the benevolent dictator model, does that create a degree of trust, and that's why FreeBSD can end up in these really critical large projects? How is that sort of decision made, do you think, when someone's coming along that is the size of a Netflix and trying to figure out which path to take? [0:19:23] JB: I honestly don't know if our governance model is a factor in anybody's decision. 
When I'm aware of people who have decided to use FreeBSD in a product, often it comes down to the fact that an engineer who's participated in the design of the architecture of the product is familiar with FreeBSD in some fashion. Either they used it before, or - at a prior job, I won't name them to protect the innocent - they were trying to pick a platform for a product. And at the time, Linux was having some turmoil and swapping their VM system, it felt like to them, every three months. And they looked at that and said, "We don't want that. We want to use FreeBSD." And that was the basis of their decision. And it was kind of a gut feeling on the part of their system architect. I think in many cases, many of these things are a lot more serendipity than they are very well thought out in terms of deciding what kind of technologies they want to use. Or, what do the engineers who are designing or picking the system design know, what are they familiar with? That's what they'll choose. [0:20:20] GV: Yeah. [0:20:21] JB: You can't always say that it's on a very clear technical basis as much as, "I'm just comfortable with this." [0:20:26] GV: That makes a lot of sense. I'm probably speaking purely from a personal perspective, where a lot of my job is basically de-risking projects from a technical perspective. So I'm often looking at these maybe slightly more macro areas as well. But the reality is often, as I know, a developer has picked up a technology, and that's why it is. And then you have to kind of come back later and address some things based on that choice. But that's the way time goes, or history goes rather. [0:20:51] JB: That's what happens. [0:20:53] GV: Yeah. I mean, let's go into some of these examples of where FreeBSD has ended up. I think it'd be interesting just to touch very briefly on PlayStation. But I know that we're going to talk at more length about Netflix. That's the one that you had a lot more of a hand in. 
Yeah, just for example, why do you think PlayStation ended up with FreeBSD? And then we can jump into Netflix as well. [0:21:17] JB: As a project, we haven't talked a lot with Sony. But earlier on, when they were working on the PS4, some of their engineers did talk with us a bit and gave a few talks at some conferences. And from what I recall from when we were talking with them, before the PS4, when Sony would build a platform for a new PlayStation console, they had to figure out what software they were going to run on it. And they would maybe pick bits and pieces of software from different places on the internet, maybe a bit of their C library from one open-source project, and a TCP/IP stack from somebody else, and various shared libraries and things, and kind of assemble what was effectively a homegrown operating system for each version of the PlayStation. And this meant that they were in the operating system support and development business as well as the business of building a game development platform and hardware. And I think with the PS4, they decided, "We would rather not be in the operating system business any more than we have to be. We would rather use something that was off-the-shelf as our starting point." And when they were evaluating alternatives, one of the things that also happened around the time of the development of the PS4 was the FSF released version three of the GPL license, which included some clauses related to granting of patent rights and so forth. That, for various - not all, but some - companies, gives their lawyers a bit more heartburn than previous versions of the GPL did, for example. And I believe that was the case for Sony. And when they were evaluating what platform to use, one of the big reasons they chose FreeBSD was that FreeBSD was BSD-licensed. 
And they did not have to worry about dealing with GPLv3 and what possible implications that might have if they were to use GPLv3 software in the PlayStation OS. That's my understanding of how they ended up landing on FreeBSD for the PS4. We have gotten some contributions from Sony that came out of the PlayStation effort. Some of it is direct to us. For example, some support for things like AVX in our kernel, I think, came from Sony originally. But they've also contributed in other ways that have really benefited the project. Another thing that was happening - I guess that's about the right time frame, the mid-2010s - was the rise of LLVM as another alternative open-source compiler suite. And LLVM was very attractive to Sony. And they spent a lot of work contributing to the linker side of the story, which at the time wasn't nearly as well developed as Clang was as a C compiler. So they contributed a lot of effort into the LLD linker. And one of the ways FreeBSD has benefited from that is that now we use Clang and LLD as our default toolchain for all our platforms. I think with FreeBSD 15, for example, I believe we have one GPL-licensed binary left in the base system, which is diff3 or something like that. But the rest of our system now is fully BSD-licensed. That's been one big benefit that we've gotten from the effort that Sony has done. [0:24:00] GV: Nice. So then, Netflix is obviously quite a big story for FreeBSD and I think for you personally as well, the work that you've done. So let's get into that. Where does FreeBSD turn up in Netflix's infrastructure? And then what kind of happened there? [0:24:17] JB: Sure. So to be clear, I've done some contract work for Netflix, but I'm not an employee of Netflix. [0:24:21] GV: Yes. Yes. I should have clarified that. Exactly. [0:24:23] JB: Disclaimer. There may be bits that I don't know. I'll say it that way. But Netflix uses FreeBSD in their CDN. 
If you're watching a movie from Netflix and the bits are being streamed to your device, it is probably coming from a FreeBSD box that's at your ISP. And typically, they're trying to avoid sending lots of bits across the actual internet and only sending the bits locally within kind of the local network of your ISP or so forth. And they do that with a distributed set of boxes to build a CDN. And the boxes in the CDN are running FreeBSD. And that's where they're pushing a lot of bits out of boxes, hundreds of gigabits of TLS-encrypted traffic to all sorts of customers. And so that workload is a very high-performance workload in terms of the raw throughput. For the individual connections, my understanding is they're not aiming for 100 gigabits on a single connection. Instead, you have thousands of clients connected to a box, many of them at the end of very slow links with maybe terrible latency and packet loss and drops. And so being efficient about pushing out as many bits as you can while tolerating a very mixed quality of the links that you have, that's kind of what they are focused on and working on. And the changes they make in particular to FreeBSD are around a lot of work in the TCP stack and dealing with TLS offload and things like that. [0:25:39] GV: Yeah. So there was some collaboration with Chelsio, I believe. And that's who you were contracting for. [0:25:47] JB: So, one of the things that Netflix was very interested in early on was the ability to encrypt all the traffic that was going by. And they had a couple of different reasons for doing this that I probably don't need to get into. But this presented a bit of a technical problem. Traditionally, you were just sending web traffic - because that's effectively what serving movies is. It's just web requests, fetching a couple of megabytes at a time of these backend movie files. 
Traditionally, OSes, both Linux and FreeBSD, had an optimization for a web server: if you're sending a chunk of a file over a socket, you could make a single system call, called sendfile, and the kernel would take care of running the state machinery as interrupts are coming in from the network device and interrupts are coming in from the storage device. When a block shows up, you can send it straight out the socket to the NIC and never involve userland at all. And the kernel can do this all asynchronously, event-driven. It's very efficient. Well, the minute that you throw TLS into the equation, all that goes out the window, because somebody has to encrypt the traffic. And the old way of doing this was you did all the encryption in userland. So now you have to go get bits from the disk using a blocking system call, wait for the bits to come from the disk, copy them out to userland, do the encryption, copy that data back into the kernel into a separate buffer to be sent out on the socket to the NIC. And unlike sendfile, where if you're not using TLS and you're sending the same file to, like, 20 different clients, you only need one copy of that file's data in memory - inside the kernel, we can share the pages that hold that data among multiple open network connections. That's really easy to do. You just share the same pages and send them down multiple network connections. With TLS, it's not the same data, because every different connection has its own session keys. You have to encrypt the data differently for every session. And so, no longer can you share the data. You use much more memory, you start using more bandwidth on the PCI bus, and so forth, and it kind of all cascades down into horribleness. So, one of the solutions that Netflix pursued to help address this was to move the TLS processing out of userspace into the kernel. It allows you to get back to using sendfile and have the kernel manage all the workflow. 
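To make the page-sharing point concrete, here is a small illustrative Python sketch (not FreeBSD code, and the "cipher" is a toy keystream, not real TLS record encryption): with plain sendfile, every client receives the same plaintext bytes, so the kernel can map one in-memory copy of the file's pages into every connection; once each session encrypts with its own key, the on-the-wire bytes differ per connection and nothing can be shared.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream derived from a session key.

    Illustrative only - NOT real TLS encryption. It just models the
    property that different session keys yield different ciphertext.
    """
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, chunk: bytes) -> bytes:
    """XOR the chunk with the per-session keystream."""
    ks = keystream(key, len(chunk))
    return bytes(a ^ b for a, b in zip(chunk, ks))

# The same file chunk fanned out to two clients.
chunk = b"movie bytes " * 1024

# TLS case: each connection has its own session key, so the bytes
# actually sent differ per connection and pages can't be shared.
ct_a = encrypt(b"session-key-A", chunk)
ct_b = encrypt(b"session-key-B", chunk)
assert ct_a != ct_b
print("per-session ciphertexts differ:", ct_a[:8].hex(), "vs", ct_b[:8].hex())
```

In the plain-sendfile case both clients would receive `chunk` verbatim, which is why a single set of pages suffices; the assertion above shows why that stops being true the moment per-session keys are involved.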
And initially, the way they did this was you still end up with multiple copies of the data, but you're able to do the bulk encryption in the kernel - AES-GCM is what would most commonly be used nowadays - to actually encrypt the data and send it out on each connection. And they had an internal version of that. One of my first projects that I worked on with Netflix was helping to clean it up a bit so that it could be upstreamed into FreeBSD, because they didn't consider this to be their secret sauce. Their secret sauce is making movies. Their secret sauce is not TLS encryption. They were happy to have that be something that we could put into stock FreeBSD so that other people could use it and also contribute to it. Then, as a follow-on project to that, another of my clients is a company called Chelsio that makes SmartNICs that have more than just kind of plain Ethernet inside the NIC, that can do various things like TCP offload and whatnot. And they have the ability to do the actual encryption of individual TLS packets on the NIC. So then I extended the framework that had come initially from Netflix to allow that, for some network connections, we may not need to do the encryption in software in the kernel; instead, we can send the raw, unencrypted data all the way down to the NIC, and it can encrypt it on the way out on the wire. And one of the advantages of that approach is you no longer need separate copies of the data. You're back completely to the original sendfile version, where you have one copy of the data, the raw file on disk of what the movie is, and that's the only copy you need inside memory on the host. And only the NIC is actually dealing with encrypting the data on the fly. That was kind of an interesting project because I got to work with - it ended up being effectively a joint project of: what does Netflix need as a customer? What can help Chelsio kind of sell their NICs? And what's a design that works for both of those to allow us to glue things together? 
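As a back-of-the-envelope illustration of why the NIC-offload path matters, this hypothetical Python model (the function name and the exact copy counts are assumptions for illustration, not Netflix's real accounting) counts chunk-sized buffers held in host memory when one file chunk is fanned out to N connections under the three schemes described: userland TLS, kernel TLS in software, and inline TLS on the NIC.

```python
def host_copies(n_connections: int, scheme: str) -> int:
    """Rough count of chunk-sized buffers in host memory for one file
    chunk sent to n connections (illustrative model only)."""
    if scheme == "userland_tls":
        # A plaintext buffer copied up to userland per connection, plus
        # the per-connection ciphertext copied back into the kernel.
        return 2 * n_connections
    if scheme == "kernel_tls_sw":
        # One shared set of plaintext pages, plus a per-connection
        # ciphertext buffer produced by in-kernel encryption.
        return 1 + n_connections
    if scheme == "nic_tls_offload":
        # Back to the plain-sendfile picture: one shared plaintext copy;
        # ciphertext exists only on the NIC / the wire.
        return 1
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("userland_tls", "kernel_tls_sw", "nic_tls_offload"):
    print(f"{scheme:16s} -> {host_copies(1000, scheme)} buffers for 1000 clients")
```

With a thousand clients on a box, the model goes from thousands of buffers down to one, which is the "no longer need separate copies" point made above, and it also hints at the PCI-bus bandwidth savings since the host never touches ciphertext.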
[0:29:26] GV: Yeah. And as you call out, it was a sort of collaboration where Chelsio had to open up their hardware specs, effectively, to make this possible, and Netflix kind of funded the whole thing. Is that, in broad-brush terms, how it worked? [0:29:40] JB: I'm a contractor for Chelsio, so I can access their docs under the NDA that I have as a contractor. The Chelsio-specific bits I wrote on the Chelsio side, but I collaborated with Netflix on the design of how this would plug in and how the framework would work. [0:29:56] GV: Ah, okay. Yeah. No, that's very interesting. I think, again, for our listeners, understanding how these things come together is often helpful as well. Equally, I work on projects that have these same kinds of dynamics, where there are three parties, and party A doesn't necessarily directly talk to party C. But because you've got B in the middle, the whole thing can work, basically. Yeah, makes a lot of sense. And then maybe just touching briefly on macOS again, what was the history there for why FreeBSD ended up in macOS? [0:30:24] JB: My understanding is that after Mac OS 9, Apple started working on Mac OS X. They borrowed a bunch from, I believe, NeXT. That's where the Mach bits came from, because there are bits of macOS that are very Mach-derived. But there's also a fair bit that is BSD-derived. In particular, I believe the initial network stack in Mac OS X 10.0 largely came from FreeBSD, or at least NetBSD, but BSD bits in general. And a lot of their libc, their equivalent of glibc, and their userspace is largely derived from FreeBSD. 
And in particular, one of the things that also happened is that several of the folks I originally worked with at my first job out of college ended up going to Apple and taking part of the FreeBSD culture with them, I guess you might say, or at least the mind share a little bit, and working on userspace bits, or kernel bits, in all sorts of Apple products that came down the line. There's a lot of friendship between the projects, in part because there are a lot of friends between the different communities. [0:31:24] GV: Yeah. Okay, makes sense. So let's move beyond where FreeBSD has sort of ended up and on to a key part of the journey of FreeBSD itself, I guess, which is what we call SMP, symmetric multiprocessing. I think this was something that had to evolve over a few years, even. But yeah, could you maybe just talk us through what was the problem, why did it have to be solved, and how was it solved? [0:31:55] JB: The problem is physics, at its root. I mean, SMP is the problem that, at some point, we could no longer scale CPUs individually to get performance. We had to scale horizontally instead of vertically. We started having to add multiple cores into systems. And dealing with multi-threading is a complex problem in any environment, much less an OS kernel. FreeBSD first started supporting SMP systems, like dual Pentium systems, or dual Pentium Pro, back in that era, the very late 90s. Not a lot of parallelism. And that was kind of the point when I got this first job working on FreeBSD stuff. And I was supposed to, fresh out of college, work on things like a bootloader for Itanium, and, I think, some changes to the installer to give it a better user interface. 
But one of the things that happened right around this time of 2000 is that the company I was at, which was called Walnut Creek CDROM, merged with the company that had the 1-800-ITS-UNIX telephone number to form a new company called BSDi, which had a commercial BSD operating system. And to help bootstrap FreeBSD's effort at having more mature support for multiprocessors, they gave us a code dump and said, "Hey, you can borrow stuff from us." An initial version of their way of doing locking and so forth. And so we had some developers in the community who were working on that. And I started helping out on IRC during the nights, and during the days at my new job. And somehow, a couple of months in, I'm actually doing a lot of this work, even though I'm fresh out of school and wet behind the ears and probably shouldn't be doing this work, and having to learn a lot of things about atomics and so forth. The Itanium manuals were actually very useful for me, because that's where I learned about things like acquire-release semantics for memory, which is now how you talk about concurrency in C and so forth. But I started working on this project, which, internally in FreeBSD, we called SMP next generation. SMPng is what we called it. And it's a long-running project. Initially, when FreeBSD was just trying to get this to work on plain x86, the dual Pentiums, they did the most naive thing you can do, which is to have one giant spinlock around the entire kernel. And anytime a user process would go into the kernel, it would have to wait for this one giant spinlock. There's a wonderful book I read when I got out of college, or maybe it was my senior year or so, called UNIX Systems for Modern Architectures, where modern meant late 90s. And it talked about the way that systems like System V Release 4, and a few other systems, had done SMP. 
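The giant-lock approach, and the fine-grained direction SMPng moved toward, can be sketched with a toy model. This is purely illustrative (the class and subsystem names are invented, and Python threads stand in for CPUs): the point is that one big lock serializes all kernel entry, while per-subsystem locks let unrelated work proceed concurrently:

```python
import threading

class GiantLockKernel:
    """Toy model of the original approach: one big spinlock around
    the whole kernel, so only one 'CPU' is ever inside it."""
    def __init__(self):
        self.giant = threading.Lock()
        self.counters = {"net": 0, "vm": 0}

    def syscall(self, subsystem):
        with self.giant:                 # every kernel entry waits here
            self.counters[subsystem] += 1

class FineGrainedKernel:
    """Toy model of the SMPng direction: each subsystem has its own
    lock, so network and VM work never contend with each other."""
    def __init__(self):
        self.locks = {"net": threading.Lock(), "vm": threading.Lock()}
        self.counters = {"net": 0, "vm": 0}

    def syscall(self, subsystem):
        with self.locks[subsystem]:
            self.counters[subsystem] += 1

def hammer(kernel, subsystem, n):
    """Simulate one CPU making n syscalls into a subsystem."""
    for _ in range(n):
        kernel.syscall(subsystem)
```

Both models produce correct counts; the difference is that in the fine-grained kernel, a thread hammering "net" never blocks a thread hammering "vm", which is where the scaling comes from as core counts grow.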
And the model we previously had is, I believe, the one the book describes as about the worst possible thing you could do, in terms of not scaling at all once you start having larger numbers of cores. The effort around SMPng was: how do we not have one giant spinlock? How do we have a system where multiple things can happen, not just in user space, but in the kernel, concurrently on different CPUs? And FreeBSD chose to go with a design modeled more on the way that Solaris, and perhaps IRIX, and other commercial UNIXes did it. We created dedicated threads in the kernel where interrupt handlers would run, instead of running interrupt handlers on a kind of borrowed context. Linux actually still does the borrowed-context thing. I believe Windows does as well, in effect, although a DPC in Windows is kind of like kicking things off to a thread the way we do in FreeBSD. That was one of the first big changes: moving our interrupt handlers into threads. That was the first thing that was stabilized and landed in the tree. And then from there, I started working on things like, "Well, we've got this code dump from these nice, generous folks, but it's very x86-specific." And we had some early patches from an Alpha port where we had tried to bring up SMP on Alpha, and I kind of atom-smashed those things together and tried to clean up the SMP code to be more portable and not very x86-specific. That meant refactoring the mutex code we'd inherited, and defining extensions to our atomic operations to deal with memory barriers, and various things to get us to the point where things were a little bit cleaner and could expand over time. In terms of the project itself, though, multi-threading a kernel, and performance scaling across multiple cores in general, is a never-ending project. You're constantly finding new things. 
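The interrupt-thread model can be sketched in miniature. This is an illustrative analogue, not FreeBSD's actual ithread code (the names here are invented): the "hard" interrupt does minimal work and just schedules a dedicated thread, which runs the driver's handler in a context that is allowed to block on locks:

```python
import queue
import threading

class InterruptThread:
    """Toy model of a FreeBSD-style interrupt thread: the low-level
    interrupt only enqueues an event and wakes the dedicated
    thread, which then runs the real handler."""
    def __init__(self, handler):
        self.handler = handler
        self.events = queue.Queue()
        self.handled = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def hard_interrupt(self, event):
        # Borrowed-context part: do almost nothing, just schedule.
        self.events.put(event)

    def _run(self):
        # Dedicated thread context: free to sleep on mutexes, etc.
        while True:
            event = self.events.get()
            if event is None:
                return
            self.handled.append(self.handler(event))

    def shutdown(self):
        self.events.put(None)
        self.thread.join()
```

The design win is that the handler runs in a full thread context, so it can take the same sleepable locks as the rest of the kernel, rather than being restricted to what is safe in a raw interrupt context.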
Netflix continues to push the boundary in certain areas of the system. We have mostly gotten rid of this legacy giant lock that we had as a holdover. I think it's now effectively around the keyboard driver and a few other things that just aren't worth fixing at this point. It's mostly done, even though the variable still exists. But even though we have a multi-threaded kernel, performance scaling continues to be a thing, right? We continue to grow sideways. When we were first doing SMPng, we were worried about scaling well on four-core, or maybe six- or eight-core systems. And now we have to scale on 512-core systems. So, it's a never-ending problem, because physics. We just don't get to have 40 GHz processors. Oh, well. [0:36:56] GV: I mean, just briefly going back to how you described the project being set up overall, is that still almost a certain set of the contributors that work on that scaling side, or is that again just a distributed problem across all of the pieces? [0:37:14] JB: Much more a distributed problem. Developers scratching individual itches, right? For example, some are very worried about network scalability or virtual memory system scalability. When they're looking at scaling problems in things like sendfile, those are the areas they look at. They're not necessarily trying to make sure that some little timer device driver can scale across cores for some reason. That's not the problem they're trying to solve. They focus on the part that they're trying to solve. And that's still how things work in general: people work on what's relevant to them. [0:37:43] GV: Yeah. As FreeBSD has evolved and modernized, storage, I believe, has been an interesting area that's had to develop. As I think a lot of our listeners will understand, there's just this evolution of how storage operates, going from HDDs to SSDs, and now we've got NVMe. We could probably do a whole episode on that in the future. 
I think it'd be interesting just to understand how FreeBSD has had to adapt to this evolution. And then we're also going to look at how a major release came out, I believe, just at the end of last year, version 15. It'd be very interesting to hear what leads up to that, and then, again, what is in 15 as well. That's a lot of questions. Let's go back to storage. How have FreeBSD and storage had to evolve? [0:38:28] JB: Well, I think, in general, storage has evolved in the industry. Look at old spinning rust and the command sets we had with things like ATA, or SCSI. SCSI in particular has very complex state machines. And if you go look at the spec for - I say the spec, there are actually several specs. If you look at the specs for SCSI, it's very, very complicated. And if you now go look at the spec for NVMe, NVMe is very simple. And part of what's happened is that as we've moved away from spinning rust to flash memory, it turns out that a lot of the complexity you needed to deal with controller timeouts, or seek latency, and so forth is no longer relevant in a flash storage world. And so NVMe arose a bit out of, "Well, can we make something that is a lot simpler and cuts away a lot of the complexity?" And that complexity gets mirrored in the software stack, too, right? In FreeBSD, we have a mostly unified storage framework, something that I think was a standard developed outside of FreeBSD but that FreeBSD adopted, called CAM. Common Access Method, I think, is what it stands for. It's how we deal with SCSI and ATA, and how we deal with NVMe. But if you look at the bits, the state machine handling for NVMe inside of CAM is much simpler than the state machine handling for SCSI and ATA and so forth. We've been able to make that model work fine. But certainly, moving forward, life gets a little less complicated. You don't have to worry about disk scheduling in the same way. 
Now, with NVMe or flash storage, you do have to think about things like trim, which you didn't have to think about in the same way on spinning rust. But you don't have to think about the elevator algorithm and trying to sort requests to minimize the amount of time you're moving the head around and what the seek time of moving your heads is. That's gone. Seek latency is basically gone. But you do have to think about when do I trim, and how much do I trim? That's the big bottleneck now: how much I/O can I schedule between trims? So it's a different kind of problem, I guess, to think about. And then there's also network-attached storage, things like iSCSI and NVMe over Fabrics, where you want to present the same command set but over a network connection. And there, you definitely don't have seek latency. It's not quite the same thing at all, because it's all a kind of virtualized notion of a storage device. And that all hooks up under our storage layer in the same way. [0:40:47] GV: The APIs had to modernize as well, alongside storage, not necessarily because of storage, but in terms of the modernization. [0:40:55] JB: Well, I guess one way to look at it is that as a long-running open source project, gosh, 30 years old, and it's crazy to think I've been around it for 25 of those 30 years, over time, you accrete a bunch of technical debt. And you have APIs that looked good in the past. And then, as you've worked with them for several years, suddenly they're not quite as useful, maybe, or they're a little crufty. And so one of the things that I also do, and this is more my volunteer time, is try to find some things that are kind of crufty and clean them up. It is a balance. You don't want to do churn for churn's sake, because that doesn't do anybody any good. And for people who are carrying downstream forks, like Netflix, you don't want to cause them undue suffering by making them merge a bunch of conflicting changes. 
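The elevator algorithm mentioned above is simple to sketch. This is the classic SCAN-style ordering, purely for illustration: service everything ahead of the current head position in ascending order, then sweep back for the rest. On flash, where seek latency is essentially zero, this sorting buys nothing, which is exactly why it disappeared:

```python
def elevator_order(requests, head):
    """Order block requests the way a spinning-disk scheduler would:
    sweep upward from the current head position, then come back for
    the requests that were behind it."""
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted((r for r in requests if r < head), reverse=True)
    return ahead + behind
```

For example, with the head at cylinder 53, requests [98, 183, 37, 122, 14] get serviced as 98, 122, 183 on the way up, then 37 and 14 on the way back, minimizing total head travel.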
But in particular, in our device driver frameworks, I had found a couple of things over the years that annoyed me, where, due to some legacy reasons, in every device driver we required you to declare a global variable that was used in a macro and that then no one ever used. And so I did some work over the past couple of years to transition us to where you could use the macro both with and without the variable. And then, once the whole tree had been converted, I dropped the compatibility shims. I've done another one this year. The compatibility shims are in FreeBSD 15, but I haven't finished the transition. The finish will be in 16. It's about dealing with I/O resources in a device driver, things like memory-mapped I/O, or an I/O port from, say, a PCI BAR, but also other things, like PCI bus numbers, that a driver might need to allocate and hold on to, or map, and then use to read and write registers. The way our API worked, there were various functions you had to call: to allocate the resource, maybe to map it, and then to release it when you were done. And you had to pass several redundant arguments at every step along the path. Even though, in a true object-oriented design, the very first thing you call to allocate a resource gives you an opaque object back. And the opaque object knows things about itself. It should know all the parameters you passed to it. So, one of the changes I made was to make it learn the one missing parameter that it didn't already know about itself, so that for all the following calls, like releasing a resource or mapping one into the CPU address space, you just pass the resource now, instead of having to say, "Is it memory versus I/O? And what identifier did I use? Which PCI BAR was it?" It just makes things a little bit simpler and gives the programmer less to think about. 
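The before-and-after shape of that API change can be sketched abstractly. The names below are invented for illustration, not the real FreeBSD newbus functions: the point is that once the handle returned by allocation records its own type and identifier, later calls stop needing those arguments repeated:

```python
class Resource:
    """Opaque handle returned by the allocator. It remembers its
    own type ('memory' vs 'ioport') and rid, so callers never have
    to repeat them."""
    def __init__(self, rtype, rid, base, size):
        self.rtype, self.rid = rtype, rid
        self.base, self.size = base, size
        self.active = True

class Device:
    def alloc_resource(self, rtype, rid, base, size):
        return Resource(rtype, rid, base, size)

    # Old style: the caller had to pass type and rid again, even
    # though the handle already knew them.
    def release_resource_old(self, rtype, rid, res):
        if (rtype, rid) != (res.rtype, res.rid):
            raise ValueError("mismatched resource arguments")
        res.active = False

    # New style: the opaque handle already knows everything.
    def release_resource(self, res):
        res.active = False
```

The old style is both noisier and a latent bug source, since the redundant arguments can disagree with what the handle knows about itself; the new style removes that entire class of mistake.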
And those seemed like worthy cleanups, and they're ones where, using either evil or magic, depending on your perspective, in the C preprocessor, you can allow both forms to work for a while. So, across multiple versions, there can be compatibility, and device driver developers don't have to worry about it. If they adopt the new form, they can still use it fine when they merge changes to older branches. And that's the strategy I've taken in that case. Those are nice cleanups to do. It's healthy, I think, in code bases to clean up things that get crufty after a while. Because if you avoid dealing with technical debt, it just gets bigger, and at some point it becomes a bigger mountain to have to shovel through. [0:43:56] GV: Yeah, absolutely. I really like that framing, because, at the end of the day, this is helping developers use it. And yeah, I think in other projects, usually just because of the time of the maintainers, it's almost impossible for them to come back to these things, even though they fully admit or agree that something is crufty or just out of date by the standards of how a developer might want to interact with that framework. [0:44:19] JB: And also, one of the things that, especially early on, we would talk about in the FreeBSD community is that in a corporate environment, when you're writing software, you're under a time crunch and a schedule. And it's ship it fast, not ship it well, right? Often, you write code that you know is kind of crap, a prototype, and that's going out the door, and there's nothing you can do about it at the engineer level. And one of the things that I think some people really enjoy about working in open source in general, and on FreeBSD in particular, is that we have a place where we can do things right. You can take time, and you're not under the time crunch of a schedule in the same way to push something out the door. And you can sit down and think about what a good design is. 
And work through the good design, take your time to do it well, and maintain it and keep it clean on this platform. And that's one of the reasons I think people are attracted to doing this kind of work in open source: without that time pressure from an employer, you get to do it right and do it well, and to have peace about what you did, instead of knowing that some product shipped, and you wrote it, and you would rather disown it. [0:45:22] GV: Yeah. Well, talking about shipping, version 15 came out at the end of last year. I believe that was roughly a two-year cycle versus when 14 came out. Is that a normal cycle? And I guess, what contributes to the thought around a major version bump on FreeBSD? [0:45:42] JB: It is now a normal cycle. Historically, we aimed for a cadence of roughly two and a half years or so between major releases. Before that, it was a bit more up in the air. One of the changes that's happened in the last couple of years is that we had a different person become our lead release engineer. And this individual, he's a really great guy. His name is Colin Percival. Very smart. He wanted to have a very fixed release schedule, which is what some folks in the community have been calling for for years, and he sat down and did it. So he's got us on a pretty fixed schedule now, and 15 was our first major release on this new schedule that he proposed. We're doing a major release in Q4 every other year. So 16.0 will be two years from last December. That's already kind of set in stone for us. And our minor releases are now once a quarter. The only difference being, when we're doing a major release, we take a whole half year to focus on that, so we don't have a Q3 release in those years. So now, I think 14.4 is going to be Q1 this year, and 15.1 will be Q2. 
So now we're actually on a better cadence, which I think is healthy for us, of having releases at a fixed interval. And this means they're not gated on a feature. Developers still get to develop. That's the sauce that goes into releases: the itches they scratch, the things they work on. So it's a coordination of developers working on stuff, but then being respectful of the timelines, too. I think one of the things we would sometimes see in the past, when we didn't have a good fixed schedule, is that when you would announce a release, because there was uncertainty about when the next minor release would be, you'd get this last-minute flurry of a ton of stuff landing in the tree all at once. Not always the most stable of things being merged last minute into the tree. Whereas having a fixed schedule means that our release engineer is empowered to say no if he or she needs to say no. And developers know, "Well, if I miss this one, there'll be another one on this branch in six months." It's not the end of the world. I know it's coming. I know my bits will get out. And it takes a little bit of the stress off. It also helps companies who support drivers in the tree, for example. They wanted a schedule so they could plan their internal resourcing, and now they have that. They can figure out when they want to put bits into the stable branches far enough in advance of when a release will come out. [0:47:59] GV: Yeah, that makes a lot of sense. And yeah, that predictable cadence, I think, as you say, helps literally everybody who's part of the process, and the end consumer, if you like. I.e., the companies taking which bits they would like as well. As we start cruising towards the end of the episode, I do want to touch on CHERI. That's C-H-E-R-I for those looking this one up. What is CHERI? And there's CheriBSD, I believe. What is that project? 
Yeah, it's a sort of interesting evolution. [0:48:29] JB: CHERI is a research project that's been developed at the University of Cambridge in the UK. That's where most of the effort is centered. There are a few of us, like myself, who are contractors who help with the project and are scattered in the US and other places. But it's mostly a team of folks at Cambridge in the UK. And CHERI is trying to leverage some ideas from an older set of computer systems called capability systems. I won't dive too much into that, but folks who are a bit more gray-haired than I am might be familiar with capability systems, which were a different way of thinking about software. CHERI aims to take some of those ideas, not exactly the same as the capability systems of the past, and see if we can use them to make existing real-world C and C++ code more memory safe. Because existing C and C++ code is very memory unsafe today. If you're familiar with buffer overflows, the reason we have them is that, at the ISA level of contemporary CPUs like x86, and ARM, and RISC-V, the way we think about regions of memory is that we have a pointer that holds the starting address, and it knows nothing else. It doesn't know how big the region is. It doesn't know if you've moved partway into the region, or past it, or jumped in front of it. The pointer has no idea. It's just a number. It's just the address. And so this means it's up to software and software engineers to maintain all the metadata correctly: when I call malloc to get memory, how much did I ask for? Keeping track of how much I asked for and making sure I don't get that wrong. And if history has shown us anything over the past several decades, it's that we as engineers get that wrong all the time. One of the big changes CHERI makes is that it proposes a different register type down in the hardware. 
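The "a pointer is just a number" problem can be illustrated with a toy allocator. This is a deliberately simplified model (the class is invented for illustration): addresses are plain integer offsets into one flat arena, and nothing stops a write from running past the bytes that were actually requested:

```python
class Arena:
    """Toy bump allocator over one flat byte array. As with a raw C
    pointer, an 'address' here is just an integer: the arena does no
    bounds checking, so an oversized write silently corrupts
    whatever was allocated next."""
    def __init__(self, size):
        self.mem = bytearray(size)
        self.brk = 0

    def malloc(self, n):
        addr = self.brk
        self.brk += n
        return addr                    # just a number, no bounds attached

    def write(self, addr, data):
        self.mem[addr:addr + len(data)] = data   # unchecked, as in C
```

Ask for two adjacent 16-byte allocations, write 20 bytes through the first address, and the last 4 bytes land in the second allocation: a heap buffer overflow, invisible to the hardware, which is exactly the gap CHERI capabilities close.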
That is something called a capability. And this part is similar to capability systems, where you have a register that holds not just an address in the low 64 or 32 bits, but another word of data that goes along with it, which includes other information about a region of memory. So it includes information like bounds and permissions. And the bounds are encoded using a kind of floating-point scheme to avoid having really big pointers. They're only twice as big, not four times as big, which was the starting point. So we're able to store this information about pointers down inside the ISA. And in addition to this metadata, we also have a one-bit tag on the side that we use to track whether or not a pointer is valid. And part of the reason the tag is important is that it allows us to constrain the operations you can do on a capability. You can reduce the rights of a capability, by narrowing the bounds or handing out only some of the permissions you already have, but you can never increase your permissions. You can't gain broader rights than the capability you already have. And in particular, if you try to do an operation that might do that, what happens is that maybe we'll actually modify the metadata word of the capability, but we'll also clear the tag, so that the result you got is something you can no longer use. Then the other change we make in the ISA is that load and store operations now use these capability registers as the base address register, instead of a plain integer. Now, all your loads, and stores, and memory accesses have to be authorized by a valid capability. And we can verify that your memory access is in bounds. Are you doing a load against a capability that has read permission? Are you doing an instruction fetch against a capability in your program counter that has execute permission? 
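Those rules, bounds, permissions, a validity tag, and monotonic derivation, can be modeled in a few lines. This is a conceptual sketch of the semantics only, not CHERI's actual encoding (real capabilities use a compressed floating-point bounds format and hardware-managed tags):

```python
class Capability:
    """Conceptual model of a CHERI capability: an address range, a
    permission set, and a one-bit validity tag. Deriving a new
    capability can only shrink rights; any attempt to widen them
    clears the tag, making the result unusable."""
    def __init__(self, base, length, perms, tag=True):
        self.base, self.length = base, length
        self.perms = frozenset(perms)        # e.g. {"load", "store"}
        self.tag = tag

    def restrict(self, base, length, perms):
        cap = Capability(base, length, perms, self.tag)
        widened = (base < self.base
                   or base + length > self.base + self.length
                   or not frozenset(perms) <= self.perms)
        if widened:
            cap.tag = False      # tried to gain rights: tag cleared
        return cap

    def allows(self, addr, size, perm):
        """Would a load/store of `size` bytes at `addr` be authorized?"""
        return (self.tag and perm in self.perms
                and self.base <= addr
                and addr + size <= self.base + self.length)
```

In hardware, `allows` failing raises an exception at the offending load or store, and the tag bit lives out of band in memory, so ordinary data writes cannot forge a valid capability.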
And if you break those rules, you get an exception, instead of wandering off into undefined magic machine land where bad things happen. Then, on top of that, we modified the ISA to have this new type, we modified all the general-purpose registers in your CPU to use this new register type, and we modified the memory subsystem to allow us to store tags in memory and make sure it's all coherent. For example, if you do atomics, we'll make sure a compare-and-swap does the right thing with tags all the way through. Or if you do just random memory writes to something, you'll actually clear tags. You can only preserve tags if you go out of your way to do things correctly. Then, on top of that, we have extended LLVM to support CHERI architectures. And when you're compiling for a CHERI architecture, all the pointers in your C and C++ application become these wider capabilities with metadata. That includes explicit pointers, things you declare in your source code, but also the lots of things the runtime creates inside your process that you, as a developer, never see. Things like GOT tables, which are how we indirect to get to global variables in other parts of your program. Or, in C++, if you have a class with virtual methods, you have an array of pointers to all your virtual functions, a vtable, and all those things become capabilities in CHERI C++. And it turns out that for some code, things like operating system kernels, or maybe your C library, or the C runtime that implements malloc, you do have to make some changes. Because, for example, in malloc, you need to make sure that if someone asks for 16 bytes, and you get a chunk of memory from the OS, you narrow the bounds so that the pointer you return to the caller can only access 16 bytes. You have to make some changes in places like that. But in a lot of application-level code, it turns out you don't have to make many changes. 
For example, we've been able to run most of KDE and Qt on top of a system I'll talk about in a second without having to make hardly any changes inside of KDE or Qt itself, because it's pretty clean, well-disciplined C++ code. It just compiles for this alternate ABI out of the box. In terms of places you can run CHERI: when I started with CHERI, it was definitely, and still is, a research system. There's all sorts of research we're still doing. But it was focused on MIPS at the time, so we had an extension to MIPS, for which "a lovely little architecture" is a fun way to describe it. But other folks have also shown interest. In particular, ARM has found CHERI very appealing. They actually built a custom CPU and SoC called Morello that implements CHERI extensions on top of the ARMv8 architecture. And I have one sitting over here. It's a quad-core 2.4 GHz processor. And that's the thing that can run KDE. It can run a full-blown UNIX. CheriBSD is our port of FreeBSD to run on top of these types of systems and support these architectures. My box is running CheriBSD, and it can run KDE, and it can run a web browser. We actually even have an early port of Chrome, which was an incredible amount of work for people to do. But it is a real thing, and it can run real software. There's also been interest in the RISC-V community. There's currently an ongoing effort to standardize a new extension called RVY that, hopefully, later this year, perhaps, will finally be ratified as an alternate base ISA for RISC-V to support CHERI systems. It's a very interesting bit of work, and it complements other work going on in the memory safety space. If you look at other things like Rust or other languages like that, Rust still needs bits of itself, in its runtime and so forth, that are written in C. And so CHERI gives you the opportunity to enforce the memory model all the way down into even the bits of C it interacts with. 
We had an earlier research project that dealt with this in Java and JNI, where you could enforce the memory model from Java down inside the JNI code that was running, because it had to use CHERI capabilities to access anything inside of Java's heap, with all the restrictions that Java wanted to enforce. Or unsafe Rust code could still have the same restrictions, because it's in the ISA. It's not something that can be bypassed. It's not purely maintained in software. [0:55:49] GV: Yeah, I was going to touch on the Rust piece, because we do have a lot of Rust listeners, I believe. And so probably some of them were saying, "But why not just use Rust?" And you've given a very good explanation of how, at the end of the day, having a much lower-level implementation of this memory safety can be very advantageous. [0:56:06] JB: Well, I mean, memory safety is a big issue, right? I think Microsoft did a study finding that something like 70% of the critical vulnerabilities they had to ship a patch for boiled down to memory safety in some form or fashion. And we need different toolkits to address different parts of the problem. It doesn't seem realistic to me that we're going to rewrite everything that's currently in C in some other language. There are billions, if not trillions, of lines of existing C code in the world, for better or for worse, and we're kind of stuck with it. Some of that stuff will get rewritten in Rust. But some of it may not. Some of it may be very hard to rewrite. We need different tools for different situations. And so I think CHERI is a good alternative that gives us more tools to attack the problem. [0:56:46] GV: So we're coming to the end here. You've imparted a lot of knowledge and wisdom today. I think a lot of our listeners will find that they've learned, I'm not going to say something, but a lot today. I learned a lot today. I sometimes ask this question to guests just to round out. 
Really, just what is something that you know now that you would like to tell yourself coming out of college or high school, whichever? I mean, related to software development or a career in software development. What would you tell yourself then that you know now? [0:57:17] JB: One of the things I had the privilege of doing a few years ago was teaching an undergrad course on OSes, as you can imagine, for four semesters. And so I actually did get to talk to my students a few times along these lines. I guess some of the things I would tell them are: have fun. That's definitely true. And things like, when you're doing a job interview, recognize that an interview is a two-way street. It's not solely you as the candidate who is being evaluated. You should be paying attention to the culture of where you're interviewing and decide, "Do I like these people? Can I work with these people?" That's a valid thing. You shouldn't think about it purely as you being of service to the company. I always recommended a couple of books. One book that I really enjoyed when I first got out of college was The Mythical Man-Month. It's probably my favorite book on development, and sadly far too relevant today. I think Frederick Brooks said as much in the 25th anniversary edition, which is what I was reading in the 90s: it was still sadly too relevant then, and it's still very relevant today. Another book I really enjoyed is called Peopleware. Neither one of these is about fixing bugs or writing for loops or anything. They're about the art side of software engineering and thinking about what you're doing. I guess one of the things I would tell my students is that your job is not to just grab random things off of Stack Overflow and throw them in, because you won't earn your salary if that's all you're doing. 
Your job is to engage your brain and think about solving the unique problems that your employer has. And lots of times that does mean assembling things from other pieces, things you find on the internet. But there's going to be some wrinkle where they just can't use something off-the-shelf - because if they could, the off-the-shelf thing would be a lot cheaper. That wrinkle is what justifies your existence, so lean into it. And as I usually told them at the beginning: do your homework assignments, not because I need copies of your homework assignments, but because you need to practice. You need to get your 10,000 hours in to get competent. [0:59:12] GV: Yeah. And I think that's more relevant than ever. We've managed to really not even talk about AI whatsoever on this episode, which is great. I think we seem to touch on it in almost every episode. But I think all the things you've just called out hold true. I was talking to one of the other presenters the other day - we have a regular SED News episode, where we just talk about very current topics. And we were talking about what the education landscape looks like for people today. I argued that a CS degree - and disclaimer, I don't have a CS degree - is still very powerful because it's about how you think about solving problems. And everything you've just said, John, I think holds true. It is just about leaning into solving problems and thinking about how to solve problems, and so forth. Yeah. [0:59:57] JB: Someone's got to make the AI machines work. [1:00:00] GV: That's also true. Yeah. [1:00:01] JB: For the record, my son is a first-year. He's a freshman in CS. And that's actually a lot of fun, because I'm perhaps assisting with his teaching and making him learn the painful low-level bits that maybe he's not getting in class. [1:00:12] GV: Well, I think he'll probably have a very good career as a result, assuming he wants to continue with software development. So, yeah.
Well, John, again, thank you so much for the time. I've learned a lot. I'm sure the audience has learned a lot and managed to sort of peek into the world of FreeBSD, which is very interesting. And as we've learned, it powers a lot of what we use daily without even realizing it. So, thank you very much. [1:00:36] JB: Thank you for having me. [END]