EPISODE 1648 [INTRODUCTION] [0:00:00] ANNOUNCER: oneAPI is an open standard for a unified API to be used across different computing accelerator architectures. This includes GPUs, AI accelerators, and FPGAs. The goal of oneAPI is to eliminate the need for developers to maintain separate code bases, multiple programming languages, tools, and workflows for each architecture. James Reinders is an engineer at Intel and has experience with parallel computing spanning four decades. He joins the show today to talk about oneAPI. This episode is hosted by Lee Atchison. Lee Atchison is a software architect, author, and thought leader on cloud computing and application modernization. His best-selling book, Architecting for Scale, is an essential resource for technical teams looking to maintain high availability and manage risk in their cloud environments. Lee is the host of the podcast Modern Digital Business, produced for people looking to build and grow their digital business. Listen at mdb.fm. Follow Lee at softwarearchitectureinsights.com and see all his content at leeatchison.com. [INTERVIEW] [0:01:21] LA: James, welcome to Software Engineering Daily. [0:01:24] JR: Thank you. It's my pleasure to be here. [0:01:26] LA: For those who are listening and not yet familiar with oneAPI, can you describe and tell me, what exactly is oneAPI? [0:01:35] JR: Oh, absolutely. For better or for worse, we use oneAPI to describe both our tools, which are implementations, and our effort. The key is really the desire to be able to write code that's very performance portable, that's device independent, rather than writing to proprietary techniques. This has come into the spotlight again because of accelerated computing, because of the use of accelerators, whether they be GPUs, or FPGAs, or DSPs, or AI chips. The desire to make sure that you can write your code and use all this fascinating new hardware has brought us back to these standards. It's really not something new. 
In general, we have been wanting our code to be portable for a long time, hence, standards like C and Fortran and Python, whatever. There are hundreds and hundreds of standards to help us with this. But accelerated computing has really brought that back. At Intel, we're doing two things. We're really helping support standards to do that: define them, make them truly open with open governance. And we're proving them with implementations. The standard isn't worth much if somebody doesn't actually implement it and make it work. [0:02:51] LA: Right, right. You mentioned performance. Is performance really the driver for why oneAPI is so important? Is performance the key attribute, that you're trying to make highly performant portable applications, as opposed to just functionally portable applications? What's the driver for the specific tie to performance? [0:03:13] JR: Right. When you think about accelerated computing, when you think about why you use a GPU, or an FPGA, or a DSP, or an AI chip to make things run faster, you're trying to make your application run faster. There's a performance component. Sure, we can give up a little performance in order to make our code portable, but it still has to give you access to that hardware capability. It wouldn't mean a lot if Intel said, “Oh, we've got this performance portable thing and it works great on Intel hardware.” Sure, it supports everyone else. But then, you find out that no one else's hardware gets shown off. That would be bad. Performance falls into this. But it's a little fuzzy, right? It's, what's enough performance? A lot of people have debates about that. If you're programming in Python, you might be more flexible about performance than if you're down in the weeds with some heavy numerics code in C++, or Fortran. It varies a little what the limit is. 
It doesn't necessarily have to be the absolute top performance, but it has to be credible performance, so it doesn't look like you're injuring one piece of hardware and doing great on another. [0:04:21] LA: Right, that makes sense. You've used the word hardware many times here. Is the goal hardware independence? Are you focused on making standard interfaces for GPUs and things like that? Is the focus primarily on hardware independence versus, let's say, OS independence and other capabilities? [0:04:39] JR: There are multiple things that fall into play. You've got hardware, OS, and architectures. I think the given is vendor independence. If you're talking about a GPU from three or four different companies, you want it portable across all of them. OS, in general, yes, people want their code to be portable across OSs. Although, there are a lot of users out there that have picked their OS and the OS portability isn't so critical. But for larger, more popular applications, OS is important. I think a third category is architecture independence. It'd be nice if it ran on a GPU – [0:05:19] LA: CPU architecture specifically? [0:05:21] JR: Yeah. Well, do you want your code to be able to be accelerated by a CPU when it can do it, or accelerated by a GPU if there's a GPU there, or by an FPGA? That's an interesting problem we can help with. But the differences between those architectures put more limits on it. The difference between a GPU from one vendor and another, or between two OSs, is a lot less than the difference, say, between a GPU and an FPGA. Ultimately, we'd like to help with all of those problems to the extent that it's useful and practical. [0:05:55] LA: I can tell you from some of the earlier days, when I was doing a lot of this work, which we'll talk about in a little bit, but the whole idea of having software-independent FPGAs was a unique concept in its own right. 
I mean, when you start talking about using FPGAs, you're almost, by definition, forcing yourself down a hardware dependence path. [0:06:15] JR: There are interesting elements of it there. What we see with something like an FPGA is that there are parts of it that we can really help with and make independent, like making some algorithms portable. FPGAs also get their power from being able to program at a low level, handling the network interfaces, and so on. Those tend to start becoming unique enough between the devices that you spend some time there writing unique code. That's why I say, when it's practical, or it's advisable, we can help with certain parts of the problem. We focus on that. We're not doing some altruistic – or I shouldn't say altruistic. Impractical thing, where we say, “Oh, we're going to abstract everything.” In the cases where we can abstract it so it's portable, why would you implement it multiple different ways just to get the same effect, if you could do it once and make it portable? [0:07:06] LA: Right, right. Now, I know part of the oneAPI family is a C++ abstraction layer known as SYCL. Is that actually part of oneAPI, or is it a separate standardization effort? Is it a separate entity? What's the relationship between oneAPI and SYCL? [0:07:25] JR: Yeah. The relationship between oneAPI and SYCL. SYCL is a template library-type definition to extend C++ into the accelerator realm. By that, I mean, give you the sorts of controls necessary to offload C++ code to an accelerator. There are three things that SYCL does. One, it helps you identify what accelerators are in your system, so enumerate them. The other is it helps you share data with them. Keep in mind, that's complicated by the fact that C++ thinks all memory is shared, right? Now with accelerators, it's often disjoint memory, right? You can't necessarily use the same address on both the host and the device. Sometimes you can, but it's about sharing data. Then the third is to offload code. 
It probably runs a different instruction set, which again, is something that C++ isn't trying to solve. Enumerate your devices, share data with them, share code with them. Now, I said it was like a template library definition. It doesn't really extend C++ in a strange way, but having a compiler that understands what you're doing, so that it can compile the code and produce binaries that run on these multiple devices, requires a little compiler magic to make it easy. That's SYCL. Everything I said was about taking C++ and allowing you to write C++ code and run it on a device. Well, you could say, well, what about libraries? What about tools? What about debuggers? What about profiling standards? That's what oneAPI is. oneAPI centers itself around SYCL, but SYCL is a Khronos standard. It's a wonderful standard from the Khronos Group, the same group that did OpenCL, that does Vulkan, does other standards. Intel is just a strong believer and supporter of SYCL. But then, we've said, “Hey, what could we build around it? Could we build some math libraries and some communication libraries that all understand SYCL and complete that SYCL universe?” That's a great way to think about oneAPI: it's, hey, what about all the other things other than just augmenting C++? [0:09:30] LA: SYCL is providing the language, or the C++ specific extensions, capabilities for working in these hardware environments in standardized ways. oneAPI is a standard for the libraries that sit on top of that. [0:09:46] JR: Yes, that's an excellent summary. [0:09:48] LA: Great. Okay, well, I'm going to pause here for a second, just because I have to tell you, a lot of what we're talking about is déjà vu for me. The reason why I say that is about three decades ago, when I first started working in this industry, I was working on building a standardized C++ interface library for talking to test and measurement systems. I was working for Hewlett Packard at the time. 
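[EDITOR'S NOTE: The three SYCL jobs James describes — enumerate devices, share data, offload code — can be sketched roughly as below. This is an illustrative SYCL 2020 sketch, not code from the interview; it requires a SYCL-aware compiler such as Intel's DPC++ (a plain C++ toolchain won't build it), and the sizes and variable names are arbitrary.

```cpp
#include <sycl/sycl.hpp>
#include <vector>

int main() {
  // 1. Enumerate: let the runtime pick a device (a GPU if one is
  // present, otherwise typically a CPU device).
  sycl::queue q{sycl::default_selector_v};

  std::vector<float> data(1024, 1.0f);
  {
    // 2. Share data: a buffer lets the runtime move memory between
    // the host and the (possibly disjoint) device memory.
    sycl::buffer<float> buf{data};

    // 3. Offload code: this lambda is compiled for the device's
    // instruction set and runs once per index.
    q.submit([&](sycl::handler& h) {
      sycl::accessor a{buf, h, sycl::read_write};
      h.parallel_for(sycl::range<1>{1024},
                     [=](sycl::id<1> i) { a[i] *= 2.0f; });
    });
  } // leaving scope synchronizes and copies results back into `data`
}
```

The buffer/accessor model shown here is one of the two data-sharing styles in SYCL 2020; unified shared memory (`sycl::malloc_shared`) is the other.]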
This is HPIB, VXI, VME bus, those sorts of things. Working at Hewlett Packard, I created an industry standard library for instrumentation IO that was written in C++. One of the first times that had ever been done for that industry, and this was early in the C++ development days. Believe it or not, what we named the library was SICL. It was with an I, so S-I-C-L. We called it SICL. I even wrote my first book on that topic, specifically on building portable C++ interfaces for test and measurement control. All three people that read the book, I'm sure, loved it, but it was something that was very near and dear to my heart at the time. It's actually still in use today. It's part of IEEE 1174, I think, is the IEEE standard that it goes with. It's called VISA now and used by companies like National Instruments and things like that for instrument control. It's funny that, basically, in a parallel environment, doing parallel work and parallel solutions, you end up with a lot of the same things, but 30 years later, and even the same names, or similar names. It's a story that isn't really relevant to your SYCL and your oneAPI, but I thought it was an interesting story. It does beg the question, though: 30 years is a long time. What's happened in the industry in 30 years that has made this more practical now, versus 30 years ago? I know all the problems we ran into 30 years ago. There are a lot fewer problems now, or a different set of problems, and probably other, much harder problems. For instance, I know the C++ language itself is substantially more mature than it was back then. But what else has changed that's made this more of a practical solution nowadays? [0:12:01] JR: It is funny what a small world it is and how some of the things recur, like the names, or the thoughts. But it is a great question to ask, what's happened in 30 years. I mean, I remember those days. 
Well, in the early days of C++, people doing things with it challenged the compilers, and people thought you were crazy, because who would do this in C++? The compilers were immature. They didn't optimize that well. You'd be much better off writing code like that either in assembly, or in Fortran, or anything but C++. Now, of course, C was a very efficient language, but C++ wasn't then. We looked at the problems with C++ compilers back then, and one of the big problems was the compiler created a lot of temporaries. Those are internal to the compiler. It's like, “Oh, well. You've got this abstraction and this polymorphism and this overloading.” Internally, the compiler would say, “Well, take this. Put it here and then put it here.” Well, if the compiler didn't optimize that away, it was doing extra effort which slowed down your code, which really could make a mess out of things. Back then, it was like, wow, can we ever build a compiler that can do all of this stuff? Well, computers have gotten a lot more powerful. I like to say that people want their compiles to run in a few seconds, or a minute, or they want their overnight tests to run overnight. Well, as computers get more powerful, you can do more in a few seconds, in a minute, overnight. I think that's one of the most dramatic things that has happened in tooling: things that 30 years ago would have looked really impractical, because, oh, my gosh, if a compiler did that, it wouldn't compile the program in my lifetime. Now, what compilers do is staggering. That opens up the ability to target hardware better, to optimize for it. I mean, if you look at the instruction sets of modern architectures, it's unbelievable how many instructions they have. 10 or 20 years ago, assembly language for processors quit caring about people writing handwritten code. It cared about whether a compiler could target it. 
It's subtly different. It needs to be more symmetric, and it's okay if it's a little more complicated, because the compiler will do it, but it needs to be something a compiler can target. We saw architectures evolve to be targeted by compilers more. We saw compilers become more powerful. It still looks to me like, I just say, compile my program. Under the covers, the changes in tooling are immense. That brings us to the modern days, where we're not accelerating architectures now by raising the clock rates as dramatically as we did for a long time. Instead, we're looking at architectural innovations and things like massive parallelism. Why is a GPU interesting for some problems? Well, because it's just a massive unit for doing floating point computations. Then you look at other algorithms for AI and maybe something different. People talk about tensors, but there are other people doing graphs. It's an exciting time, because there's a lot of hardware innovation left. When you couple that with how sophisticated the tools can be, to just magically make it look like a compile, it's wildly different than 30 years ago in terms of its complexity. On top, it looks similar. Underneath, we just have a lot of knobs we can turn these days. Hence, the desire to enable all of that, right? Give me that simple interface, but let me target all this wild variety of hardware that might be coming now and in the future. [0:15:44] LA: A lot more is possible now. [0:15:45] JR: Oh, yeah. [0:15:46] LA: Both from the architecture of the processor, which also equates to speed, obviously. But also, in the compilation process and the sophistication of the compilers, etc. Does this mean that the goal of oneAPI could, in fact, be true independence, hardware independence? Or is that still a future state that's a long ways off? Yes, we can do a lot more now than we could ever do before, but the problems are also a lot harder now, and so it takes more effort. 
Where is the pendulum here? Are we moving, or are we keeping up? [0:16:21] JR: I think it's still the pendulum. As an engineer, I'm amused that, if you remember the CISC versus RISC debates that were hot at one time, by the time Intel got to the 486, it was fundamentally a RISC chip. It's running a CISC instruction set, which, by the way, after a while, people started saying, “Wow, that's a compact instruction set,” which has advantages as well. What makes me think of RISC is, well, if you can simplify some part of your problem, then you can add a new type of complexity. That's how I look at things. I think that what we need to do is simplify to make core aspects of what we're doing more portable, so that we don't have to be rewriting it and reengineering it on every device. Then we innovate outside of that. For those innovations, the pendulum reference is good. From time to time, there will be very proprietary innovations. I think that you watch, and innovation can be proprietary, but then you want to break away from that proprietary approach and you want to generalize it so everyone can use it. That happens in so many industries and has happened many times in our industry as well. I think that's where we're at: accelerators have become a permanent part of computer architecture. They weren't even relevant 20 years ago, right? There were accelerators that came and they went. They came and they went. I think it's reasonable to say accelerators are a permanent part of computing now. Every phone has some accelerators in it. Every laptop, every supercomputer, well, most supercomputers, has an accelerator of some sort. Sometimes those accelerators are hidden on the CPU, but they have some of the same targeting challenges as using a GPU. If we can generalize that and just accept accelerators as a permanent part, pay attention to making code portable, so that we can write it once and use lots of different accelerators, then we'll innovate on top of that. 
Then we'll probably have to figure out how to standardize that 10 years down the road. [0:18:21] LA: Definitely. Yup. Yup. The problems are still there. They're just more sophisticated problem sets right now. They're higher level problems, I should say. [0:18:30] JR: Absolutely. Isn't it amazing? As we knock off one set of problems and that all becomes standard, we invent a new set of problems. As an engineer, that's super exciting, right? It's the innovations. I mean, we're barely at the start of AI, of course, these days. I am confident that if you look back 20 years, or 30 years from now, all the AI we're doing now will look so crude, like we didn't even know what we were doing. Because that's the way computing always is. It's an interesting question, what can we do to make our stuff as general as possible, as portable, so that we can keep growing? [0:19:02] LA: Right, right. One mainstream argument that people make about languages like C++, in the environment that they're in, is that higher levels of programming abstraction are the way to achieve hardware independence, vendor lock-in independence, all those sorts of things. The higher level you make the work of programming, the more independence you get naturally. I say that, because you hear that, but there are obviously some fallacies in that argument, right? Java is not a replacement for C++. Python is not a replacement for C++. Ruby isn't a replacement for anything. Higher level languages have advantages for certain classes of problems, but you still need C++ and languages like C++ for certain types of problems. At least, that's my view. I'd love to hear if you agree with that. I'm assuming you do. But if you could answer that, and then, also, tell me: what are the types of problems that really make C++ still the best solution? [0:20:17] JR: I do agree. Higher level abstractions generally are more portable. 
The reason for that is that you're abstracting more. You can even make them more portable by making them less general, right? More domain specific. Domain specific capabilities. I look at things like TensorFlow and PyTorch. To me, those are domain specific languages. Whether it's a language or a library, there's an interesting interplay, right? If the API that I'm programming to is more abstract, it can be more portable. Now, you lose some things. If there's an innovation in the hardware that's unique to one vendor, it may be more difficult to take advantage of that. There's always a home for there being something outside of that high level abstraction, a little bit of extra secret sauce. Everyone needs that. Not just one vendor. But that's a little outside the mainstream. In the mainstream, it's highly desirable to have these high-level interfaces, whether they're libraries, or languages, whatever. But they have to rest on something. In fact, one of the challenges, I think, that people writing some of these high-level interfaces face is that there are insufficient foundations for them, right? In an abstract sense, think about TensorFlow. I think the original version of it targeted NVIDIA GPUs, so it probably did that fairly directly. But then, when somebody says, “Hey, what do you do to support AMD GPUs, or Intel GPUs? Or, can you use my CPU? Or, if I'm a startup and I have a new AI chip, can you target that?” You have to go all the way up to TensorFlow and play with its guts. Now, over time, they'll figure out how to generalize it. There are other projects I work with, like Kokkos in the Defense Department. They do a lot of codes that are run in the national labs. They try to write to this Kokkos, but then they have backends that target all these different proprietary methods. I think there's a lot of opportunity to clean things up, like at the C++ level, where if C++ is more portable, then these higher-level things can just use C++. 
Whereas now, they're using C++ augmented this way for one vendor, this way for another. The future is real clear to me: you standardize with C++, you make things written there more portable, you make libraries more portable, the foundational levels. Somebody's always writing those foundational levels. So, the answer to your question, what's C++ good for? First of all, the foundational level. Things need to rest on something. C++ is a great level to rest on. The other is, if you're innovating outside the box, if you're trying to do something no one's done before, there's a good chance those abstractions that exist, those domain-specific, or abstract languages, aren't quite what you have in mind. You might argue that's where the abstractions come from, why they get invented: those abstractions were written in C++. C++ is a great place to innovate, to do new things. Go back to my comment about, we're at the very beginning of things like AI; there's so much innovation left. I'm confident that you'll see a lot of the most radical innovation happen at the C++ level, and it will create an interface that maybe more people use. But the innovation itself came at the C++ level. [0:23:37] LA: Got it. Got it. The reason for the C++ layer, the reason why you say the C++ layer is so critical, is because of the performance characteristics. It's a foundational interface, yes, but is it the foundational interface because it's high-performant, or is there more to it than that? [0:23:54] JR: It's high-performant and it's proven itself to be portable, right? It's a layer. The level below that would be to start doing assembly language, and that's not portable. Whether you'd be writing x86 assembly language, or PTX for NVIDIA, or whatever, that next level down is not portable. Yeah, having something that's performant, reliable, portable, that's a capability that C++ supplies, and I have no concerns that C++ is going anywhere. 
But as we get more and more people we call software developers in the world, C++ won't grow as much as the other areas, because of the whole way that we grow more programmers. Like, I consider data scientists programmers. They're not programming at the C++ level. By the way, just as an off comment, [inaudible 0:24:45] before, there are more Fortran programmers today than there were 10 years ago, which shocks people. It's not like it's a huge growing field, but it is so fundamental as a great language for scientific work and so forth, and there are so many important codes written in it that you need people working at that level. You don't need an explosion of new people. You're not going to double, or triple the number of Fortran programmers you need in the world, but you still need a core base. I didn't mean to completely equate C++ to Fortran. C++ is growing, I'm sure, at a faster rate. [0:25:20] LA: The analogy does hold, though. C++ is to Fortran as higher level abstractions are to C++. [0:25:27] JR: Absolutely. [0:25:28] LA: It's a reasonable analogy. [0:25:30] LA: Let's get back into oneAPI for a little bit then. oneAPI is also a changing standard. In fact, you've just released a major upgrade of oneAPI, I believe. What are some of the things that are new and innovative in oneAPI today? [0:25:46] JR: We did announce a new oneAPI, but this goes back to a little confusion we create, because we talk about our tools being oneAPI tools, and we talk about the standard. We did release a new set of our tools. Intel's been doing software tools for many decades now: compilers, analysis tools, libraries. And we support oneAPI. We support this idea of being performance-portable across different architectures, with strong adherence to standards, ranging from OpenMP to C++, to Fortran, to SYCL, to Python, and there are more and more. We work very hard to support those standards. But what we're doing is an implementation that supports Intel's hardware. 
We also, interestingly enough, make sure that there's an easy path to get support for other people's hardware as well. With our SYCL compiler, that was an interesting challenge, because SYCL adds these capabilities to C++, but it has to compile for these different devices. Of course, we implemented it so it can compile for Intel GPUs, Intel CPUs, Intel FPGAs, but what about NVIDIA GPUs? What about AMD GPUs, and so on? Well, it turns out our technology works in the LLVM compiler world. It turns out there is an AMD GPU backend in the LLVM world, and there is an NVIDIA backend. We made it so that those LLVM backends can be used in conjunction with our compiler. I call it a plug-in. Not everybody's happy that I started calling it a plug-in, but it stuck. Basically, we deliver our compiler a certain way so it's not locked up, and you can also use the NVIDIA and AMD backends with it. Codeplay, this company in Edinburgh, Scotland, that actually joined the Intel family about a year ago, we acquired them, but they remain fiercely independent. Supporting NVIDIA and AMD is a big part of what they do. Trying to make sure things stay open. They make sure that the NVIDIA backend in the LLVM world and the AMD backend in the LLVM world plug in to our compilers. Making this work doesn't just happen. With LLVM, it should in theory work, but any time I say, “Oh, it should just work,” it's like, “Oh, good grief. But how much work is it?” We make it all just work. At the SYCL level, we want plug-ins. At other levels, like say, with OpenMP, well, that's easier. NVIDIA's got support, AMD's got support. These are standards we've already come together on: Fortran, C++, even Python, PyTorch support. Intel makes sure that we have very serious implementations, but I think a big difference is we're trying to lead the way on making sure that we aren't doing things that harm the portability of code. 
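[EDITOR'S NOTE: As an illustration of the plug-in idea James describes, the general pattern in the Intel/Codeplay documentation is to hand the DPC++ compiler an extra SYCL target triple. These command lines are a sketch based on that documentation, not from the interview; exact flag and triple names can vary between releases, and AMD targets typically also need an architecture option.

```
# Compile SYCL code for Intel devices with Intel's DPC++ compiler
icpx -fsycl app.cpp -o app

# With the Codeplay plug-in installed, the same source can also
# target an NVIDIA GPU via the LLVM NVPTX backend...
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda app.cpp -o app_nvidia

# ...or an AMD GPU via the LLVM AMDGPU backend
icpx -fsycl -fsycl-targets=amdgcn-amd-amdhsa app.cpp -o app_amd
```

The point is that one source file reaches multiple vendors' GPUs through standard LLVM backends, rather than through a proprietary language.]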
One instance, though, that is interesting: if you look at our approach to parallel STL, a feature of C++, we've got a very serious implementation, but that implementation tries to make rational decisions. Like, for the computation I'm doing, it might be better to keep it on the CPU. It might be so small that offloading it will destroy my performance. Or maybe, it does need to be offloaded. Or, if the computation before this one was offloaded, even if I have a small computation, it might be better to do it on the offload device, rather than drag the data back. We're trying to do that intelligent balancing between the different devices on a system, and even make it so that if you have a CPU from one vendor and a GPU from another, or maybe even GPUs from two different vendors, we can balance that. We've made a lot of progress on it. But our approach, fundamentally, is to give the users enough control that they're using the whole machine, while we're doing the intelligent balancing. The same thing applies to DO CONCURRENT, which is a Fortran concept, or parallel STL. This is different, because for most startups, or other vendors that have accelerators, their approach has been, “Oh, I'll give you a parallel STL.” When you link it in, first of all, the device has to exist when your program runs, or I'll issue an error code. Then if your device does exist, I offload everything to it. Big, small, whatever. We don't think that's the right long-term answer, so we're putting the extra engineering effort into saying, can we do this intelligently? If I wake up and that parallel STL was linked in, but the device isn't there, I'll just do it on the CPU, instead of aborting the program. I'll do intelligent work. Standards are great. Interpretations of standards can be very innovative. Intel's doing both. 
We're really supporting standards, but the implementation, the tools we're delivering that people can rely on today, also embody this philosophy and demonstrate that what I'm talking about can be done, and can be done well. Of course, it's never perfect. Our users will say, oh, well, you're trying to be brilliant, but you weren't here or here. Then that improves us. I think we've proven for quite a long time that user feedback definitely gets back into our tools and improves them. It makes them among the best, if not the best tools available in the industry. [0:31:02] LA: That's great. That's great. What's next? Where's this technology going next? Specifically, I'd be remiss if I didn't say, please talk about AI in that answer. [0:31:15] JR: AI is a huge topic, as you know. There is a lot to consider there. I think of Intel's role, and I break it down a few ways. I've already talked about how important I think it is that the foundational elements be portable, because I think as people are innovating in AI, they're trying to do it in an abstract manner. But if they program down to proprietary interfaces, their code isn't portable. They can demonstrate it on someone's hardware, but then somebody says, “Gosh, can I run it on this other hardware?” By providing that portable interface, whether it's SYCL, or whatnot, it helps AI innovation on top. By the way, a few months ago, I was at Supercomputing. I'll tell my little side story. I left Intel for a little while. I've been at Intel a long time. But I left Intel for a number of years, kind of semi-retired. I didn't quite get bored, but I came back. I came back because of oneAPI. It seemed like a worthy thing to come back for. I've been back at Intel for a little over three years. You keep telling people, “Oh, we can solve this problem. We can make things portable.” You wonder, is anyone listening? Can I really deliver? 
Can we really make it happen? Even though I see some successes, I will tell you, the experience at Supercomputing a few months ago, talking to customers, it was like, “Oh my gosh. They are listening.” We found people who were using SYCL, some customers that were using SYCL, and the only hardware they used was NVIDIA. I thought, “Wait a minute. Did you really say what I thought you said?” They said, “Oh, yeah. We don't want to use proprietary interfaces. Right now, we've decided to go with their hardware, but we're going to do it in a non-proprietary way.” I'm like, wow. That was not a trivial decision. They say, “Oh, yeah. It works great. We had to do this, this and that, and your tools help us, and these tools from Codeplay.” I heard that more than once. It's like, oh, okay. There is a strong desire to see this basic level be more portable, and people get it. We can help AI that way, foundationally. The other thing, though, is Intel has a long history of making things ubiquitous. You hear Intel talking about the AI PC. I'm super excited about that, because I get to work with a lot of people in a very rarefied space. I mean, I have had the good fortune of working with people who get Nobel prizes, who get Gordon Bell prizes, things like that. They're the scientists that wow all of us, myself among their biggest fans. But what about the rest of us, right? It's like, I remember mainframes and mini-computers were great, but then you actually got a PC in front of you, and everyone had one. Well, we still have that. We have devices right in front of us, and AI techniques are filtering down. It's amazing, as computers get more and more powerful, like we were talking about with tools getting powerful over 30 years, the PC can do a lot. The AI techniques, we're learning so much about them. We've innovated. Yeah, we can put large language models, we can put head tracking, and all sorts of AI right in your hands. Of course, it's already happening. 
You already have some AI in phones and whatnot. I think that's going to be a big, expanding area. I think Intel is going to help a lot, just with the engineering of these AI platforms that are everywhere. Couple that with our very open approach to software. Another thing to throw in is trust. That comes in many different forms. There are all sorts of great topics about trusted AI. Then, of course, there's also security, and Intel is very active in both discussions, hoping that we can be a positive aspect to delivering AI in a secure and reliable fashion. I've never been a fan of apocalyptic sci-fi, if that makes any sense. I'm a huge Star Trek fan. When people say, “Why are you such a Star Trek fan?” It's like, because Gene Roddenberry believed technology would make the world better. That's a very different view than some apocalyptic movies, which we can all enjoy. [0:35:20] LA: I share your views on that topic. I don't like the zombie apocalypse types of science fiction, but I am a huge science fiction fan. I fully buy into the Star Trek concept of the future of technology. [0:35:34] JR: Absolutely. My fellow engineers at Intel, and I think most engineers I work with at other companies, too, are hoping that our technology can bring that goodness out in the world and minimize how much badness it gets used for. I think that's very important, especially with Intel's role in helping make it more open and more ubiquitous. [0:35:54] LA: Thank you very much, James. I have to tell you, this has been an absolutely fun interview, because it brought me back to my roots. These conversations are conversations I used to have when I worked at Hewlett-Packard. I loved that job. I loved doing all that work. I've moved on to do other things in recent decades. But my roots are still strong in that area, and so it was great to have this conversation and go back to those roots. Thank you very much for joining me today on Software Engineering Daily. 
[0:36:26] JR: It was my pleasure. I hope I get the good fortune to run into you again in the future. [END]