EPISODE 1804

[INTRO]

[0:00:00] ANNOUNCER: John Hennessy is a computer scientist, entrepreneur, and academic known for his significant contributions to computer architecture. He co-developed the RISC architecture, which revolutionized modern computing by enabling faster and more efficient processors. Hennessy served as the president of Stanford University from 2000 to 2016 and co-founded MIPS Computer Systems and Atheros Communications. Currently, he serves on the board of the Gordon and Betty Moore Foundation and is the chair of the board of Alphabet. John received the 2017 Turing Award for "pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry." In this episode, he joins Kevin Ball to talk about his life and career.

Kevin Ball, or KBall, is the Vice President of Engineering at Mento and an independent coach for engineers and engineering leaders. He co-founded and served as CTO for two companies, founded the San Diego JavaScript Meetup, and organizes the AI in Action Discussion Group through Latent Space. Check out the show notes to follow KBall on Twitter or LinkedIn, or visit his website, kball.llc.

[EPISODE]

[0:01:26] KB: John, welcome to the show.

[0:01:27] JH: Thanks. Delighted to be here.

[0:01:29] KB: Yes, I'm excited to dig in. I mean, somebody with your background, you get introduced all the time. It kind of speaks for itself and you've got so many things you could say. I'm actually curious, if you were designing your own introduction, how would you introduce yourself to an audience?

[0:01:45] JH: That's a good question. I'd say I have been extremely fortunate to have entered the computer field in its early days and to be able to do incredible things because of the remarkable advances that have been made in the field. That's been just incredibly exciting and I'm so glad I decided to be a computer person.

[0:02:10] KB: It has definitely been a wild time in the computer world. Though interestingly, like you started early, but RISC is still running. I mean, with RISC-V, that's kind of the hot topic now. What are your thoughts on like, what is our continued bandwidth in the RISC space?

[0:02:26] JH: Yes, I think what happened is interesting. I think in the end, what really made the RISC ideas really take off was the demand for more efficiency. That comes in a number of different ways. Because now a lot of the devices we use are battery-powered, not plugged into the wall, so energy efficiency is really important and RISC is much better at that. But also, because we've gone ubiquitous and there are computers everywhere, right? Look how many computers are inside a brand-new car? I mean there are 50, 100 microprocessors inside there. So, the price does matter all of a sudden. We're not just building chips that cost several hundred dollars each. We're building chips that cost ten dollars each or twenty dollars each. The whole efficiency thing won out in RISC, and now even in the large data centers, you see these companies, the hyperscalers, building RISC chips because the energy consumption is a big part of the bill that they pay in their data center. So, they worry a lot about this energy efficiency issue. In the end, that was the key insight of RISC, that we knew how to build processors which were much more efficient in their use of silicon area and their use of power.
That's been a winning combination now for probably the last 15 or 20 years as we've switched to a new computing world from the old world of having desktops and things plugged into the wall.

[0:03:54] KB: Yes. Well, and as you highlight, our constraints may have shifted, but efficiency is still super important. So, we've been on this long run for a really long time. Moore's law carried for so long, with scaling and each generation of chips getting smaller - even if it's slower now, it's still mind-boggling how far we're going. But I feel like we're seeing the end coming and we're having to embrace something different.

[0:04:20] JH: Yes, we're plateauing. I mean, first of all, Moore's law isn't a law. It's a kind of objective for the industry to scale against. But we see it slowing down. Now, let me point out that slowing down, if you look over the last 50 years, 50-plus years since Gordon made his prediction, we've scaled by a factor of about 10 million and we're off from Moore's projection by about a factor of 25. But the gap is getting bigger, and it's really been the last few years it's opened, and it's opening more and more and more. That's going to demand that we rethink computation, we think about efficiency, we think about different ways of doing things.

[0:05:01] KB: Well, I think one of the things that it's pushing people towards is more heterogeneous computing. Less -

[0:05:09] JH: Absolutely. I mean, you look at the Apple chips, they're multiple processors, but there's a high-performance processor, there's a low-power processor, there's an AI processor, there's a signal processor, so we're moving more and more to that. That's, again, this drive for efficiency and using the silicon and power efficiently, both matter.

[0:05:30] KB: Yes. So, I'm interested in your thoughts on what that ends up looking like for a software development team. This is Software Engineering Daily, so we're writing software, not just the hardware piece of it. How does that heterogeneity play out into the tools we use to write software?

[0:05:45] JH: Yes, I think it basically requires more work on behalf of the programmers to really get a good fit between the processor - whatever processor they're using in a heterogeneous world - and the application. That, for better or worse, that problem has gotten pushed off to the software. Actually, beginning when we went to multicore - like the reason we went to multicore is that we didn't know how to build faster single-thread processors. We didn't have any idea how to do it. We were at a dead end. Over a period of 15 or 20 years, we had used up all the good ideas, mostly instruction-level parallelism, and they ran out of steam. So, then we had to go to multicore. Now, of course, when we go to multicore, then the programmers have to find the parallelism and decide what threads to run where. As we've gone to heterogeneous, as you alluded to, things get even more tricky because you've got to figure out, "Okay, not only what are the threads that I can run in parallel, but which thread should run on which processor?" So, that's going to require - I think for better or worse, programmers are going to be responsible for more efficiency going forward and getting efficiency out of the thing. It's funny that many years ago, I was talking to Maurice Wilkes, who was the last living pioneer from the golden age, from the ENIAC age, in the post-World War II era.
I said to him, "Maurice, what's going to happen if we can't continue to build hardware that's faster and faster and faster? We've been going one and a half times every year, and what's going to happen when this slows down?" He goes, "Programmers are going to have to get a lot smarter and a lot more careful about the code they write." I think he's right. That's what we're seeing now.

[0:07:30] KB: In some ways, it actually reminds me a little bit of what happened when RISC came in, where you were saying this used to be in the hardware. You have these complex instructions that are doing all this stuff, and you're saying, "Well, let's make software do it."

[0:07:42] JH: Yes. I think that's right. I think there is a parallel, and a part of what drove RISC, at least from my research group, was the notion that you should never do anything at runtime if you can do it at compile time. A lot of what was going on were things we could do at compile time. So, rather than reinterpret complex instructions, compile down. Get rid of a layer of microcode and compile right down to the hardware primitives. I think nowadays that's changed in that the processors have gotten a lot more complicated. Memory hierarchies are getting more complicated. If you look at TPUs or GPUs or anything, there's a lot more focus on controlling the memory system by the software, rather than by the hardware. Now, today that happens with a combination of smart compiler tools and people who understand how to write their algorithms so that they compile well for those kinds of machines. It's that combination. It requires, I think, a level of understanding of the underlying hardware mechanisms to really become a good programmer that can program something efficiently.

[0:08:52] KB: Yes, well, and it's like when single-threaded performance just kept getting better and better and better, we didn't have to worry about it. You could almost completely disconnect those. Hardware teams working on their side, software teams working on their side. So long as you end up generating the bytecode, it's going to work and it's going to keep getting faster. I don't think we're in that world anymore.

[0:09:10] JH: We're definitely not in that world anymore. I think it's just you can't just rely on the hardware guys to make things faster because it's not going to happen, unfortunately. When we could, there were a lot of incentives not to rewrite software because a year later, it was going to run 50% faster. Well, no more. Now, a year later, it runs 5% faster if you're lucky. You're going to have to find ways to rethink that interface. I think it's interesting because it's really about how do you think about the interface between the hardware and the software system? How do they come together? How much does the programmer have to know? What's the compiler responsible for? How does that all fit to deliver performance?

[0:09:51] KB: I saw in one of the talks that you gave that when the first RISC revolution was happening, one of the challenges was that the tooling didn't exist. In fact, the tooling was being generated inside of academia because companies weren't doing it. What do you feel like the missing layer of tooling is for this generation of, okay, now we're moving into the heterogeneous world?

[0:10:12] JH: I think we still have this gap. When you move to these domain-specific architectures, things that are tailored for particular classes of algorithms. Today, lots of machine learning things, obviously. But a wide range of things.
Graphics clearly has this, lots of signal processing has this special-purpose aspect to it that can be captured. The key thing is to figure out, can you build an architecture that does really well in these kinds of applications, but is sufficiently flexible to allow a wide range of applications? Then of course, figuring out how to get that match between what the hardware can do well, and what algorithm the programmer really wants, is still an open issue. We've got it for some things, but if you look at lots of the things we run, whether they're on graphics units or they're on something doing machine learning, they're doing linear algebra problems. They're comparatively well-structured - even with sparse linear algebra, it's comparatively well-structured compared to a random piece of code you want to run, right? Figuring out how to align these things, and how general these architectures can be, how wide a range of things they can run, is still a critical open problem. And the tools will determine that to a large extent - how to get that interface to work between the hardware and software.

[0:11:37] KB: Now, certainly, when you're shipping graphics units and you're manufacturing tons and tons and tons of these, you need that level of generality. But I think another thing that's kind of interesting is with cloud FPGAs, you can sort of create your own architecture for your problem space. Is the efficiency good enough there or is that still, when you go to FPGAs, that's still leaving too much on the table?

[0:12:00] JH: Yes. I mean, you can do this. I think there's an efficiency loss that's pretty significant. But if there's a lot of gain from the flexibility that's achieved, and you can really change that flexibility, change the structure of the FPGA to do some other problem, you can imagine situations where it makes sense, particularly when the algorithms are changing quickly. Rather than build an architecture that's adapted to a particular class of algorithms, it might be smarter to go to an FPGA structure that would allow the algorithms to continually evolve and still be able to match pretty well to the new algorithm. So, there have been some people at Microsoft that have done some experiments with this kind of approach. Probably they've moved the furthest. But lots of hardware developers use FPGAs as starting points now anyway to get something that works that's reasonable in terms of getting the hardware before they go to something that's a more customized design and is going to cost not only a lot more to design, but a lot more to fabricate as well.

[0:13:05] KB: Yes. Another area here that I think is interesting - I'm going off of something I saw you talking about, I think in a talk in 2023 - was essentially treating machine learning as a way of programming, where you're programming software per se, but you're programming it with data rather than programming it with code. I'm curious how you think about that with relationship to efficiency, right? On the one hand, it's almost as flexible as you can get. Give one of these LLMs some text and you'll get text out - you'll get amazing results - or you train it on some other data domain. But it's also massively expensive. So, how are you thinking about the role of programming with data in this ecosystem?

[0:13:48] JH: Yes, I mean, you're right. That programming with data is the right way to think about it.
You've shifted to the use of data for programming, but then of course the cost is the training - particularly if it's a large data set that you need to train on, that's what's really costly, right? And the model, depending on how big the model is. I think one of the interesting things we've seen is that some of these smaller models that are trained more carefully and that are inspired by a large model have achieved incredible results. Okay, so the giant models do these incredible things, but a model that's a lot smaller - let's say a billion parameters versus 500 billion parameters - is able to do pretty well for lots of applications. One of the things I think we're going to see is the models for endpoints - for example, what's on my phone? I want a machine learning model on my phone that'll help me with text and search and some other things, but I'm not going to put a model on that has 500 billion parameters in it. So, I'm going to have a small model on the phone that's going to do a lot of things. Probably one of the outputs of that model is, "I'm not sure - call the big model in the cloud and go do that," and we're going to have to figure out how to make that work in a way that's appropriate and seems smooth and works well for people. But I think we're going to see more and more of that, and particularly smaller LLMs adapted to particular domains, whether it's inside a camera, inside a phone, inside some kind of other device that may be on a lot of the time.

[0:15:21] KB: Yes. I think that's a fascinating domain. The more you can constrain the problem, the more you can fine-tune the model to particularly do that. I love that. So, I'm curious - in that same interview that I'm thinking of, you predicted that LLM-enabled technology would be truly useful in a year or two. I think this was end of 2023. I feel like one of the things I've seen is this was the year that LLMs broke through for software development, right? Coding assistants with LLMs have gone from a niche that a few people were exploring to just exploding. What other domains are you seeing that type of breakthrough in?

[0:15:55] JH: Well, I think coding is certainly - you're right, and coding is amazing because you wouldn't code anymore without an LLM assistant of some sort, right? I mean, you wouldn't do it because the leverage you're going to get from it for lots of code is just so high, right? So - obviously, things like abstract data types and various forms of polymorphism delivered lots of programmer productivity. Well, this is delivering another big hit in terms of improvement and productivity. I think we're seeing it in writing. We're seeing it around things that help you digest complex and large documents. I'm thinking of something like NotebookLM where you can ask it, "Tell me what the key things I need to understand in this 100-page manuscript are. What are the key insights?" And you get reasonably good answers out of these things - amazingly good. For college instructors, you can say to it, "Design me five test questions based on this material here," and you could get great things out of it. So, I think we'll see a lot of help on that. One of the things that instructors generally hate to do is grading. All teachers hate to do the grading part. They like to see their students succeed, but the drudge work of - I think now we've seen some systems based on LLMs that could do grading as well as people, and I think that'll be a big improvement.
I'm very big on this idea of using machine learning and AI to eliminate human drudgery. We're not going to completely replace jobs, but we're going to replace some of the stuff that people really don't like doing in their jobs that is more rote, more straightforward, that we could do with an LLM. And I think we're going to see more and more of this occur.

[0:17:37] KB: Yes, I completely agree. I think it's a really interesting problem domain because sometimes you have to completely reshape how you're thinking about it, right? Using the coding with LLMs example, right? You have to shift how you're attacking your software problems. But what you get out of it is the elimination of a lot of drudgery. I think one of the key problems, and I'm curious where you're seeing this, is almost what you talked about there with regards to when does the small model call out to the big model? Similarly, we need to answer the question of when does the big model call out to the person and say, "You know what? I can't do this. I need you to get involved."

[0:18:15] JH: Yes. So, I think one thing we're going to have to do in all these LLM-based systems is tune the system so that they say, "I don't know." Not "my best answer is X," but "X might be my best answer, but I don't have a high degree of certainty in that." We've got to get there. I mean, there are these examples you hear about periodically of people using LLMs for writing, and then, making up citations to things that don't exist. It should never do something like that. Just as it shouldn't write a piece of code that it really doesn't have high confidence is the right way to write the code. I think, as my colleague, Dan Boneh, pointed out, one of the problems with these coding tools is that they'll sometimes write a piece of code that has a big flaw in it, and it won't know that it's got a flaw in it. Of course, that's tricky because as a programmer, reading somebody else's code, whether it's another person's or a machine's, and figuring out, is this right, is a hard task. But I think that's the sort of thing that we're going to have to navigate through and try to make the systems better at being more cautious when they don't have high confidence in what they're predicting.

[0:19:29] KB: Absolutely. So, text, and LLMs, and coding have been getting a lot of buzz, and image, and video get a lot of buzz. But in some ways, I'm more excited about things like AlphaFold, or other things like this that are not in the text domain. I saw that one of the DeepMind founders won the Nobel Prize for Chemistry this year. Actually, I think the Nobel Prize for Physics was also in a machine learning-related domain.

[0:19:52] JH: Yes, both of them.

[0:19:53] KB: I feel like those are the dimensions that are going to completely change the world. I'm curious.

[0:19:59] JH: Yes. Science, I think, is going to change dramatically. I think these machine learning tools are going to be the new tool of science, as important as microscopes have been, as important as various tools for looking at the structure of molecules and DNA have been. This is already happening. The chemistry example is a great example. I mean, AlphaFold has discovered more protein structures than 50 years of protein structure work discovered, and that's an amazing result. So, I think we're going to see more and more of this, and people are doing all kinds of problems that are computationally not tractable if you do them from basic scientific principles.
But where the LLM, where a machine learning system can be used to reduce the search space so dramatically that you can get the answer. You're still doing a little of the kind of physics simulation that we traditionally do in much of science, but you're using it over a much smaller domain than you would have before. You figure out the basic structure of the protein by knowing what other proteins with similar molecules, similar atoms in them have. Then, you use that to guide the process of getting the detailed structure, and it results in significant improvements in performance and the ability to do much, much more. I think we're seeing this in lots of science. We're seeing it in astrophysics, where people look at the structure of galactic systems and understand how they're evolving. We're seeing it in - one of the things I thought was amazing, there are people working on this to understand turbulent flow, one of the hardest computational problems we do, and solving that problem is extremely difficult from basic principles. On the other hand, you might be able to use these tools to kind of get the basic structure, and then, use simulation to get the accuracy that you really need in these systems. Look at weather prediction. I mean, an amazing result - the DeepMind people have beaten the best weather prediction system out there, which was developed over a period of 20 years, in terms of computational ability, and they're able to outperform it. So, I think I'm really excited about what this is going to do for science.

[0:22:13] KB: Yes. There's something you talked about there that I think is a really interesting big-picture theme, which is, these generative systems don't have to get to the right answer. They just have to narrow the search space. We have all sorts of domains in which we have formal validation that works when you have an answer or a small number of answers, but exploring the entire search space is totally intractable. If you can use this system to narrow you in, now, you dump it into a formal validation. I think mathematics is another interesting area here, where we have formal validation checkers, but proof generation is hard. So, use an LLM of some sort to generate viable proofs, narrow your search space, and then, now, you can dump it into a formal validator. With that model, I'm curious - actually, you probably know more than I do about what are the different domains. We talked about a few. We talked about weather, we talked about chemistry, and protein folding. What are some other domains that this can open up for us in terms of narrowing the search space down, and then, we can dump it into either a formal validation or a human validation?

[0:23:16] JH: Well, I mean, lots of classic problems which are NP-complete, so that we don't know how to do them efficiently. If you can narrow the search space, you can come up with an answer. It may not be the optimal answer, but it may be very close to the optimal answer, and that could certainly be appropriate. There are lots of interesting problems that reduce down to these very fundamental computational problems. For example, generating test patterns for software or hardware - generating sets of tests that will test everything. That's a really hard problem if you have to do it completely. But if you were guided by a system, you might be able to narrow the range of it, so you could get a reasonable number of tests that would adequately test the system.
I think we'll see other examples like that where, as you said, you narrow the search space. You're doing a complex optimization problem. But if you can narrow the search space, then you can get to something that is close to, if not, perfectly optimal - close to the optimal solution very quickly.

[0:24:24] KB: Yes. I love that as a kind of idea generator for domains to attack. Anything where you have an NP-hard problem, but you can validate any particular solution, this might be a useful technology to try applying.

[0:24:36] JH: Yes, agreed.

[0:24:38] KB: All right. Bringing this back around to software engineering and the tech industry. We're obviously in a very tumultuous time, lots of things changing here. How do you see all of these breakthroughs impacting the tech industry and the world of software development over the next few years?

[0:24:54] JH: I think one of the things we're seeing is a kind of almost back-to-the-future evolution in the tech industry, in the following sense. If you look at the tech industry prior to about 1985, 1990, there was a lot of vertical integration. I mean, IBM did everything. They designed their own chips, they designed their own disks, they did everything. It was vertically integrated, all the way up through the entire software stack. They did everything. Then, the industry moved, particularly with the PC and the emergence of shrink-wrapped software, to being very horizontal. You had Intel down here at one layer, with the disk guys over here as another, and then on top of that, you had Microsoft, and then on top of that, you had the application layer. Now, all of a sudden, because of the need to vertically integrate much more to get the applications closer in touch with the hardware, we're seeing a reintegration in the vertical direction. So, you look at - Microsoft has certainly done this, Google has done this. I mean, there's a vertical integration now across those layers. And even a company like NVIDIA integrates all the CUDA, and software work around CUDA gets integrated into the hardware and the design of next-generation GPUs. There's a lot more vertical transmission up and down that stack, which I think is changing the way we think about programming and the industry going forward. But I think it's fascinating because I think it leads to a level of collaboration across these boundaries that keeps the field interesting and exciting going forward.

[0:26:38] KB: Yes. Do you see startups also doing that level of vertical integration?

[0:26:43] JH: I think so. I think a bunch of the startups are trying to - to the extent that a small company can do much of anything, because it's got to focus. But they are certainly taking advantage of that integration across that stack to try to achieve something. I think we'll see more and more of that. I mean, there's so many. I've never seen - I mean, the number of startups is just insane right now. Partly driven, obviously, by this AI revolution that's occurred, and the discontinuity it's created, and the opportunity people see. But I think that's an exciting thing about our industry, that we're constantly reinventing ourselves, and new things are coming along, and changing the industry. I think that's what's made it really a fascinating field to be in.

[0:27:31] KB: We've talked a lot about machine learning. We've talked about some of the stuff that I've seen you talk about in the past. I'm curious, looking forward, we're entering 2025 now.
What are you most excited about that's coming into the industry right now?

[0:27:46] JH: I think this switch in how we think about programming models is a really crucial one, and how we think about the applications of this. This is still relatively new technology - it hasn't really - it's still going and changing at an amazing rate. So, I think there's a lot of excitement there, but there's still a big gap to go. I mean, if you look at the effort that has to be put into training a new model - how much computational cost and energy goes into training a new model - and I look at how a baby learns to talk, for example, the amount of energy consumed to train an LLM versus train a baby is gigantic. So, there's obviously a large gap that we still don't understand. The big breakthrough in machine learning happened because we realized that to create intelligence, it was about learning. It wasn't about memorizing facts. It was about learning things from data and experiences. So, we've learned that, but we're still not building learning machines that are terribly efficient, at least if we compare them with what's in our cranium up here. We're much more efficient learning machines. Now, can we adopt some of the ideas, can we get more inspiration from the structure of human brains that we can use in these systems? I think lots of people are playing at the edge of this. I don't think anybody's gotten a breakthrough yet, but we'll see. Somebody may.

[0:29:26] KB: Yes, that is very interesting. I think a couple of immediately interesting threads to pull on there are, one, we're continuously learning rather than separating learning and inferring in some ways. I don't know what that looks like in the machine world, but I think that's an interesting difference.

[0:29:43] JH: Yes. The amount of data that we use to train these systems is far more than what people end up training on. I mean, if you look at AlphaGo playing chess - AlphaZero, which plays chess - it learns from just understanding the movement of pieces, but no strategy. It had to play 90 million games to get up to really a superb level. No human player has to play nearly that many games to get to a master chess level. Now, it's a bit of an apples and oranges comparison because the way it learns is very different than the way our brains learn. But maybe we can get some inspiration from the way our brains learn that'll improve the way we train and create these machine learning models.

[0:30:27] KB: Yes. Well, I think, to your point, we shifted into learning instead of writing down rules, and memorizing facts, or things like that. However, as humans, we do create rules in ourselves, and we sort of then operate at a higher level of rules. I wonder what that looks like in the machine learning world. Maybe it's not at the level of the model, maybe it's at the level of the system the model is embedded in. I've seen some fascinating things with using LLMs to generate tools for themselves, which they then learn how to use. So, you can mix the unstructured learning and kind of structured code or logic. But yes, it's a fascinating domain.

[0:31:06] JH: It's a different domain in that, if you look at this famous book, Thinking, Fast and Slow, that talks about how brains operate and our ability to do certain things very quickly, and other things, we've got to calculate, we've got to do a more deliberative process. But our LLMs, they're not at that level. They have kind of one way to do it.
They take this model that's in many cases really big, and they throw the data in, and they get the answer out. But a lot of times, they probably wouldn't need that complex a model. Now, whether or not you can build some kind of system that operates in the way the brain operates, in that it only has to use a small amount of its capacity to do certain things and call on a deeper, more complex model, but integrate it in some way that it recognizes internally, which is what we do in our brains - maybe something like that can work.

[0:32:02] KB: All in all, a fascinating time to be alive and in the tech industry.

[0:32:08] JH: Absolutely.

[0:32:08] KB: One other area related to this, I'm curious your thoughts on. I know a lot of people, particularly as LLMs have dramatically scaled up the amount of code any individual software engineer can write, have been asking the question of, okay, what does the software industry look like in terms of, is it still a great place to work? Are there still going to be lots of programmers in 10 years, all of these different dimensions? I'm curious - you're seeing that at the scale of an Alphabet or a Google, how they're navigating that. What is your view on the future of a career in software, in a world where we have these models to write software for us?

[0:32:43] JH: Yes, I think it's a good question. Here, I draw on the lesson of history. I mean, if you look at how much more productive a programmer is, even without LLMs - let's say, in the era just prior to LLMs and Copilot - and you say, how much more productive was that programmer, say, than programmers 50 years earlier? Let's go back to, let's say, the 1960s, right? They're writing in assembly language, which is really - so, programmer productivity improved by leaps and bounds, certainly more than an order of magnitude, maybe as much as two orders of magnitude over that time. But the number of programmers in the world went up by a lot. So, there was a creation of lots more things that we could do with computers. I think that's what will be key here. If we can be creative about creating new things, then the demand for programmers will continue to go up. Now, programming skills will change, and how programmers work will change, and individuals are going to have to learn new ways to do that work, and get efficient with new tools. But I think the industry will still be an exciting place to be. There are other parts of the employment sector where LLMs are probably going to reduce employment over time, in the same way that lots of people used to be typists or data entry people, and we don't have a lot of the people who do that anymore, because that process has been automated. So, there will obviously be some tasks where there's automation, and we automate it, and there isn't an obvious demand for that skill level anymore. The challenge there is going to be, how do we retrain and prepare people for new careers when that's necessary?

[0:34:26] KB: If you were to point people early in their careers in software, and you said, "Okay, what it means to be a software engineer is going to change, it's going to look something different," where would you recommend they focus their time and energy now?

[0:34:40] JH: Well, I've always been a believer that kind of building a core set of a good foundation is a good starting point. In the software industry, in computer science, building a strong foundation is crucial because some of the problems are not going to change.
How do you test? How do you debug? How do you know the code is really well written? How do you think about all kinds of software engineering tricks that we use? How do you think about issues like security, which has become so much more important than it was in an earlier time? So, I think building a strong foundation is really going to be crucial. The tools that you learn initially, let's say, while you're a student going to school, those are going to change. Those are going to change, and they've changed dramatically. Just look at the last - I mean, students who graduated 20 years ago are programming in something completely different than they were using 20 years ago. So, I think there's going to be that kind of evolution. Mastering that, you have to be able to learn new things. I think part of a good education is it teaches you how to be a lifelong learner. In our field that moves so quickly, you have to be able to learn new things.

[0:35:50] KB: Absolutely. Well, we've covered a lot of different things. We're getting close to the end of our time together. Is there anything we haven't talked about that you would like to touch on for folks?

[0:36:01] JH: I guess, what do I worry about? I do worry that there's lots of good to be done from this new generation of tools, but there are also ways in which you can misuse them. Software is malleable. It can be used for lots of different things. How do we as a society really ensure that the technology we're developing does good in the world, really does the things we want to do, and constrain, to the extent we can, the misuse of that technology? I think we're going to have to worry about that. I worry that we've become so cyber-centric in our lives that we have to worry a lot more about security and protection in our cyber systems. That's going to require a level of diligence by software programmers who understand these things. I think that's really different. But I think it's an exciting time. One of the amazing things, when I think about being in this field for 50-plus years, is to kind of see it reinvent itself all the time. Something new comes along, new ideas come along, and we see them burst through. This AI revolution is amazing. People were working on these various AI technologies for a long time, and they were making progress, but they were making slow progress. Then, all of a sudden, boom, and a breakthrough. I think that's the kind of - we've seen that a number of times in the history of the field and I think it's really been what's kept it so interesting as a discipline and field in which to work.

[0:37:41] KB: Awesome. I think that's a great close. Let's call that a show.

[0:37:46] JH: Yes. Okay, great.

[END]