EPISODE 1820

[INTRODUCTION]

[0:00:00] ANNOUNCER: Rigetti Computing is an American company specializing in quantum computing, founded in 2013. The company develops quantum processors and hybrid quantum-classical computing systems, and aims to make quantum computing more accessible for research and commercial applications. David Rivas is the CTO at Rigetti Computing. He joins the podcast with Kevin Ball to talk about the company, the fundamentals of quantum computing, the state of the technology, and where we're headed. Kevin Ball, or KBall, is the Vice President of Engineering at Mento and an independent coach for engineers and engineering leaders. He co-founded and served as CTO for two companies, founded the San Diego JavaScript meetup, and organizes the AI in Action discussion group through Latent Space. Check out the show notes to follow KBall on Twitter or LinkedIn. Or visit his website, kball.llc.

[INTERVIEW]

[0:01:04] KB: David, welcome to the show.

[0:01:06] DR: Great to be here. Thank you for having me.

[0:01:08] KB: Yes. I'm excited to dive in. Let's actually maybe start off a little bit. Do you want to share a little bit of your background?

[0:01:14] DR: Sure, sure. First and foremost, I don't have a PhD in physics, which is an unusual thing in the context of a quantum computing company. Actually, my background is I've got degrees in electrical engineering, but it's been mostly a software career since then. I was at Sun Microsystems for almost 10 years. I was part of the early Java team. I was the architect for the original media framework that went into Java. I continued a sort of thread in my career of being the software guy at a hardware company for a while. I was at Nokia for a while. I ran a software startup for a while, and I've generally been on a variety of sides of the business side of these things as well. Then I ended up here at Rigetti, initially because there was a real need for somebody with software expertise and system-building expertise to come into the company. Then I took over the CTO role a couple of years ago when there was a bit of a management shakeup, and one of the things that we were looking for was somebody to help focus the engineering teams on various deliverables that we had to hand. The business of being the software guy in a hardware company has its pluses and minuses, but one of the things that really is true is you get very close to the middle of these systems, and that's something.

[0:02:15] KB: Yes, absolutely. I started my career in sort of that software-hardware blend. I came out of physics, though - only the bachelor's, I did not go on to the PhD. There's definitely something neat about kind of being able to see all the different pieces. Let's maybe do a little bit of an overview of Rigetti because, as we chatted about before the show, it's a little off the beaten path for a software audience.

[0:02:36] DR: Right, right. There is this thing happening called quantum computing, which is essentially using a different modality for producing computation. Think about classical computing as being built up from traditional electronics and what we would call classical physics. Out of that, honestly, one way of looking at a classical computer - and this was how it was looked at in the early forties by people like von Neumann - is that this is something that can give you tens of thousands, and now billions, of multiplies in a very, very short period of time. Yes, there's other parts to it, but it's a very useful way of thinking about it.
In the case of quantum computing, we're using different resources. We're leveraging the mathematics of quantum mechanics to do our computation, and that gives us not thousands and thousands or billions of multiplies, but billions of quantum operations that we can perform. There's a whole set of different mathematics that you get to apply to the problem. Rigetti's 10 years old. As far as the industry goes, we've been doing this for a pretty long time. We are what we call a full-stack quantum computing company. What that really means is we do everything from fabbing our own chips - we're unique in some ways in that we have our own captive quantum fab - to making our own devices. These are superconducting qubits, which means, among other things, that they have to be cooled to near absolute zero. While we don't build the dilution refrigerator that goes to colder-than-space temperatures to put these things in, we build a whole bunch of other stuff, including the hardware and electronics to control them, the physical hardware that goes inside the fridge, and the chips themselves, as I said. This is something that doesn't get talked about a lot in the industry: an awful lot of software to both control the system itself and perform the computations necessary, as well as provide the expected collection of tools that you need to actually write programs to make use of these things. One other comment about that is that these systems are inherently hybrid systems. By that, I mean you have a quantum processor, but you also have a whole bunch of classical computing surrounding it. A reasonable model, at least for today's quantum computers, is to think about the QPU, as we call it, as an attached processor to a classical machine.

[0:04:44] KB: Yes, that makes a lot of sense. We're in this hybrid architecture computing world now where that's a very common model, to have this accelerator for this or attachment for that that you can pull things out to. I like that comparison of, okay, classical computing gets you very large numbers of multiplies, and we're seeing that play out at large right now with GPUs and these things of how many multiplies can we get. Can we talk a little bit about what is that different operator that quantum computing or a QPU gets you, and why do people care about this new operator?

[0:05:18] DR: Yes. This is a difficult question to answer. Well, it's a little bit like, well, how do you program a quantum computer? There's a couple of ways to come at it. One, in terms of answering that question directly, the operations formally can be expressed as a class of matrix operations that you perform on the underlying qubits. There are two ingredients to a quantum computer that are usually cited as the key differentiators here in terms of just the basics associated with the matrix mathematics, and those are the ideas of entanglement and superposition. I'm not going to explain those things specifically, simply because to get them precise really requires that you write some stuff down mathematically. Usually, when you use words, you end up with approximations that aren't quite right. I will say this, though. Superposition - it's not quite that bits are in a zero and one state at the same time. Perhaps a better way of saying it is that you have a tool to hand that allows you to represent really a complex number. That has the notion of a superposition between the imaginary and the real part. That's one way that is a little more mathematically precise.
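To make that "complex number" picture a bit more concrete, here is a minimal sketch in plain Python/NumPy (purely illustrative, not Rigetti code; the amplitudes are an arbitrary choice): a single qubit is described by two complex amplitudes whose squared magnitudes give the probabilities of reading out a 0 or a 1, and describing n qubits classically takes 2^n such amplitudes, which is the scaling that comes up next.

```python
import numpy as np

# A single qubit: two complex amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)

p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(f"P(read 0) = {p0:.2f}, P(read 1) = {p1:.2f}")   # 0.50 / 0.50 for this state

# "Measuring" many times just samples 0s and 1s from that distribution;
# each individual shot still returns one definite bit.
shots = np.random.choice([0, 1], size=1000, p=[p0, p1])
print("fraction of 1s over 1000 shots:", shots.mean())

# Describing n qubits classically takes 2**n complex amplitudes
# (~16 bytes each in double precision), which blows up very quickly.
for n in (10, 40, 50):
    print(f"{n} qubits -> about {16 * 2**n / 1e12:.3g} TB of amplitudes")
```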
Now, it turns out that you can use that to probabilistically state how many zeros you want or how many ones you want when you sample this qubit. By that, I mean every time you take a measurement of a qubit, you get one or the other out of it. You either get a zero or a one out of it, but there's sort of a probabilistic element to it that you can, as I said, tune. Then the entanglement aspect of this is brought to bear in that you can now combine these things together such that - the best way to state this is that you can represent the state of a system that has essentially two to the n variables, where n is the number of qubits you have. That grows really fast, right?

[0:07:05] KB: Yes. Exponential scaling is powerful.

[0:07:08] DR: One of the ways to get your head around this is take it as read for a moment that a qubit, unlike a traditional classical binary bit, can represent essentially a real number, or the state to some extent of a system. The way we represent state in classical computers is by a number of bits in memory cells. I was just looking, and it looks like roughly 160 terabytes is where we're at with respect to very large memory machines at this point. Now, maybe we'll go up a little bit here, but we don't expect it to go up that much. That's 2 to the 40-something, right? We don't expect it to go to 2 to the 50 anytime soon. But we certainly expect to build quantum computers with 100 qubits - 100 logical qubits - built out of a lot more physical qubits. We can talk about that in a little bit if you want. But the difference between 2 to the 40 and 2 to the 50 is, well, it's a factor of 2 to the 10. It's quite gigantic. And 2 to the 100, well, that's well above that. I think 2 to the 80 or thereabouts, maybe it's 2 to the 70, is where you can start saying that's the number of particles in the universe or something like this, right? What that means is the state space of the problem that you have to hand, that you might want to represent, can be very, very, very large compared to the limits of a classical computer. The other thing that is true about quantum computing that is capturing the interest of a lot of folks is that you are leveraging the underlying mathematics, quantum mechanics, to perform operations, which means that for a class of calculations that you might do, you're no longer using numerical approximations in the way we do in classical computing, right? This isn't a numerical analysis problem anymore. It's a problem where you're using a system that actually very accurately represents the underlying physics of what's going on. You expect much, much better precision out of any kind of computation you perform. Things like energy calculations of atoms, or molecules for that matter, we hope can become very, very precise.

[0:09:01] KB: Yes. Let's maybe dive in a little bit there, actually looking at these underlying details, because I like to understand it. Then we can get into the software layers you've built above it. You mentioned a difference between physical qubits and logical qubits. Then another thing that I've seen kind of going around in the industry is some quantum computing companies are using what they're calling gate model quantum computers, where others are using what they're calling a quantum annealing system. To somebody who's not in this space, this feels like lots of jargon. I don't understand the distinctions. Can you flesh those out? Explain it at an "explain it like I'm five" level.

[0:09:35] DR: Yes, sure. The gate model is usually attached to a general-purpose quantum device.
It's a model for representing general-purpose computation on a quantum device. The annealers are essentially performing a particular type of quantum calculation that lends itself to expressing optimization problems, for example, very well. They're not truly general purpose. That doesn't make them any less useful or interesting, and it doesn't make them at all any less quantum. But it does make them a little bit less general purpose. We're headed down the path of building a general-purpose quantum computer, and yes, it can be expressed as a gate model quantum computer. Does that help?

[0:10:16] KB: That does help. Yes. Essentially, if I'm understanding, quantum annealing is like it's using quantum phenomena to express a subset of the possible things that you might be able to do, but it's not general purpose. You're sort of limited by the set of problems you can transform into the domain where it works well.

[0:10:31] DR: I want to be careful. Limited is an interesting term there, right? I mean, it's sort of like saying, well, GPUs are limiting. Well, yes, I suppose.

[0:10:39] KB: Turns out you can transform a lot of interesting problems into that space. Yes, that's fair. Okay, that helps. Then coming back to what you were talking about in terms of logical qubits versus physical qubits, and if I'm understanding correctly, the scaling model that you're talking about here is in terms of logical qubits, the ones you can actually utilize in software.

[0:10:58] DR: The idea here is just like with memory error correction, or for that matter, any kind of error correction that takes place in a classical processor - and there's a tremendous amount of it that we just don't even think about anymore. Error correction can be applied. You leverage redundancy to support greater reliability, and you can effectively arbitrarily dial in how much reliability you want as you increase the redundancy. There's a whole genre of work - really a whole research domain - going on in quantum error correction right now that is producing different models to support that computation. What you want is to be able to use the technology you have and a particular error correction protocol to get to what we'll call a logical qubit. The ratio of the number of physical qubits you need to get a particular logical qubit is very dependent on the QEC algorithm that you actually implement. There's also another element to this that's important, and that is that the different algorithms converge differently. By that, I mean you can either use more or fewer physical qubits as you grow the number of logical qubits you want to represent. Obviously, we'd like to use fewer physical qubits as we get to more logical qubits, and that's a pretty active area of research.

[0:12:08] KB: Got it. If I were to apply a classical metaphor, it's almost as if you're doing RAID for storage or something on your qubits where you're saying -

[0:12:14] DR: As an approximation, that's not unreasonable. It turns out to be quite complicated, but RAID was complicated originally. Yes, there's a lot of mathematics there, but that's right. You're using redundancy to support the notion of reliability.

[0:12:26] KB: Okay. Cool. That's super helpful. Thank you.

[0:12:28] DR: There's one other thing that, if you don't mind, I'm going to interject.

[0:12:30] KB: Oh, go ahead.
[0:12:31] DR: Since you were talking about annealing versus gate model computation, it also turns out that there is a wide diversity of underlying quantum systems that can be used to build a quantum computer. As I said, ours is a superconducting quantum system. Essentially, what we're doing is we're leveraging the chip technology that is pretty well-developed in the classical world to produce an element that can effectively behave as if it is a single atom with a single electron running around it, and then manipulate it as such. This is not exactly what's going on in the underlying system, but that's the kind of thing that we're attempting to do here with our superconducting quantum systems. We leverage the superconducting aspects of electricity down at the lowest possible levels, at the lowest possible temperatures, in order to get this kind of system. There's a wide variety of other systems that are out there. There are people who use actual atoms - they're called neutral atom computers - and they manipulate them with lasers. There are people that use single photons, and they manipulate those things with lasers as well. Each of these systems has different engineering constraints around it, and you bring to bear different engineering disciplines and technologies around them to build these systems. We're heavily in the superconducting camp because, as I said, the chip technology supports it quite well. There's a pretty well-understood scaling strategy to get to large numbers of qubits here that leverages a lot of what's already known in classical semiconductor technology.

[0:13:57] KB: All right. Let's maybe move into the software domain a little bit. Now, we've talked a lot about the underlying model of what this is. What does it look like to write software for a quantum computer, or even more specifically a Rigetti computer?

[0:14:10] DR: Yes. This is a great question, and it's an interesting one. I consider myself to be a reasonable programmer. I don't do it professionally anymore, but I did it in a wide variety of languages. My last foray, non-professionally, was probably about 100,000 lines of Rust - fabulous language. Running into writing code for these systems was a completely new and different thing, and I don't feel myself to be remotely proficient at it. For the most part, the people that are really writing code for quantum computers, some of whom have backgrounds in classical computing, understand the underlying physics of the system they're using, and in particular the mathematics of quantum mechanics, to actually get this stuff done. It looks fairly radically different. It is a very difficult thing, and in fact essentially a pretty broad research topic, to think about how to map any particular problem that you're trying to solve into an actual quantum algorithm. At a very high level, of course, the first thing that you want to do is figure out how to use those qubits as representations of particular variables that can be changed in your system. It quickly gets far more complicated than that. Now, the good news is that we have the last 80 years of classical networking and computing, and in particular software development, to bring to bear on these problems. A wide variety of techniques are being leveraged: higher-level, problem-based abstractions that then get essentially tied into the underlying quantum algorithms that you might apply to a particular problem. Optimization is a good example of this.
You're seeing some set of standard algorithms that can be developed for quantum computers that then can be leveraged in solving these broad classes of problems, like optimization problems or molecular modeling problems or such. Now, the other thing to say is that the tool chain itself doesn't look too different. If you take the idea that you've got an attached processor here to a classical computer as your fundamental model for thinking about this, at least for now, you end up with an environment that allows you to write a classical program to specify the operations that you want to have take place on your quantum machine, and then have your classical program effectively download those operations into the quantum computer itself. I won't go into the details unless you want to explore them, but the idea isn't so far-fetched from what many people are used to when it comes to programming GPUs.

[0:16:28] KB: It turns out I do want to explore that because this is the stuff I geek out on. If I'm understanding correctly or diving in there, I think, trying to connect the research that I did with what you did, I think what you're describing is the Quil language and then the bindings into Python or something like that. Is that correct?

[0:16:47] DR: That's true. I'm going to go ground up, if that's all right, and we'll see if that -

[0:16:50] KB: Yes, see if that sticks.

[0:16:51] DR: We have this quantum system down there. In a superconducting system, the way you manipulate it is you send these pulses of microwaves down to affect the underlying qubits. The fundamental operation here is a train of pulses going to each individual qubit that are in that sort of gigahertz microwave range. They have to be scheduled very precisely, essentially down to the nanosecond level. The program, if you will, looks very much like a schedule of those pulses. If you squint a bit, you can imagine that those pulses are a bit like the microcode that lives inside of a classical computer. Then the gate-level operations that you're performing that produce that microcode are the assembly language within which you're writing the actual application. Then you can imagine a layer up from that, which is the high-level programming language that you're using to manipulate all of these things. There's translators and compilation tools that are involved in all of those stages. There's one other thing to talk about from a systems perspective here, and that is the way this is usually built in a superconducting realm: you have that dilution refrigerator with a chip inside it. You're controlling it with another piece of hardware that is firing those microwave pulses off down into that system. They call that the control system. On one side of that control system, you have the digital-to-analog converters and analog-to-digital converters that talk in microwave pulses. On the other side of that control system, you have a very high-powered computer, often also an FPGA to manage the scheduling, that presents an interface to whatever software subsystem, and in some cases an end user, that allows you to say, "Well, here's the schedule. Here's the binary version of that schedule. Go run that for me and then give me the results back." When you think about the compilation tool chain here, you have a tool that produces that binary. Then you usually have software that's responsible for invoking the execution of that and obtaining the results back. I'll stop there and see if that provokes questions.
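For a feel of that loop - classical program builds the circuit, a compiler lowers it to a native gate set and a schedule, and a run call hands the binary to the control system and returns shots - here is a rough sketch using pyQuil against a local simulator. It assumes the quilc compiler and the QVM simulator are running locally, and the exact result-handling calls vary a bit across pyQuil versions, so treat it as an outline rather than a recipe.

```python
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT, MEASURE

# Build a tiny program: put qubit 0 in superposition, entangle it with qubit 1, read both out.
p = Program()
ro = p.declare("ro", "BIT", 2)       # classical readout registers
p += H(0)
p += CNOT(0, 1)
p += MEASURE(0, ro[0])
p += MEASURE(1, ro[1])
p.wrap_in_numshots_loop(1000)        # run the whole circuit 1000 times

qc = get_qc("2q-qvm")                # a 2-qubit local simulator; a QPU name would go here instead
exe = qc.compile(p)                  # gate-set translation and scheduling happen behind this call
result = qc.run(exe)                 # ship the ready-to-run program, get the shots back
# result holds 1000 two-bit readouts; for this entangled state you expect roughly 50/50 "00" and "11".
```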
[0:18:49] KB: Absolutely does, so a couple of different things. First, on the microwave pulses, can you target per logical qubit, or what is the level of fidelity of operation at the microwave level?

[0:18:59] DR: The microwave pulses are per physical qubit. When you get into the logical programming, there's a different set of translations that will take place to manage the business associated with implementing the logical qubits. I'll say it this way: controlling the system that implements the logical qubits on top of the physical qubits. There's another level of translation that takes place.

[0:19:20] KB: Cool. That's helpful. Second question, how do you read data?

[0:19:24] DR: Right. You fire a microwave pulse down into the system. In our case, and in all superconducting systems that I know of, what you're really doing is you're pinging a resonator down in that system. That resonator will give you a signal back, depending on the state of the qubit, zero or one. That's a bit of a gross oversimplification, but that's pretty close.

[0:19:40] KB: Okay. Actually, still trying to understand the low-level programming model, and then we'll move up. When you send that microwave pulse, it is essentially both a write - you're influencing something - and a read, at that time you're getting back system state.

[0:19:52] DR: That's correct. In fact, it's a good point. This is a quantum mechanical principle and a principle of quantum computing: there's no way to design a system where you can read the qubit and not destroy the state. Say I read the qubit and it was in that superposed state, right? I might have set it to be sort of halfway between zero and one, is one way of looking at it, right? Once I read that qubit, it's going to be zero or one, and there's no pinging it again. I have to set the whole computation up again to get a different result. Now, it turns out that the system itself also influences that. In fact, that's essentially what causes the decoherence that we're talking about there, the collapse.

[0:20:30] KB: Interesting, okay. I'm understanding now the life cycle of this, right? I pulse in some way to set something up. Then I pulse to read it. If I want to do anything else, I need to do a new pulse to set something up. Or can I combine those two, like I'm setting up the next thing as I read this thing?

[0:20:46] DR: If you visualize these things as operations on qubits, and there's a large number of qubits, you can imagine a time-based graph. Forget what the time scale is, but a collection of operations happens in parallel. Pulses go down to multiple qubits. Occasionally, you read from some of those qubits, but you can continue pulsing other qubits because they haven't collapsed yet. What happens is that the program sort of evolves as a set of these pulses and occasional readouts. Then in most cases, you do a big readout at the end to get a result. It turns out also that in order to do error correction, that business of pulsing in the middle of the circuit, reading in the middle of the circuit, is very important to actually perform the error correction operations. There's a term for that called mid-circuit measurement, where, if you think of the computation as being a circuit, you'd like to be able to measure certain select qubits in the middle of that circuit.

[0:21:39] KB: Got it. Okay.
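A toy way to see that life cycle - prepare, read, collapse, re-prepare - is to simulate one qubit's amplitudes directly. The sketch below (plain NumPy, purely illustrative) samples an outcome, projects the state onto it, and shows that a second read just repeats the first answer; fresh statistics require re-preparing the state, shot after shot. Mid-circuit measurement on a real machine is the same idea applied to selected qubits partway through the schedule, with the outcomes feeding error-correction logic while the other qubits keep evolving.

```python
import numpy as np

rng = np.random.default_rng()

def measure(state):
    """Sample 0/1 from a one-qubit state vector, then collapse the state onto the outcome."""
    p1 = abs(state[1]) ** 2
    outcome = int(rng.random() < p1)
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0          # post-measurement state is definitely |0> or |1>
    return outcome, collapsed

plus = np.array([1, 1]) / np.sqrt(2)  # "halfway between zero and one"

first, after = measure(plus)
second, _ = measure(after)
print(first, second)                  # the second read always matches the first: the superposition is gone

# To build up statistics you re-prepare and re-measure, one shot at a time.
shots = [measure(plus)[0] for _ in range(1000)]
print("fraction of 1s:", sum(shots) / 1000)   # roughly 0.5
```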
Now, moving up the stack as you went, at the layer of - you have some sort of - I think you called it the controller that is parsing some level of, essentially, machine code and translating it into these pulses.

[0:21:51] DR: The control systems, the way they're usually designed, think of them as a CPU with an instruction set defined for it and essentially an ABI, an application binary interface, that is being leveraged. You're sending down a pre-formatted binary that then gets executed. Usually, the compilation steps take place somewhere else. You're effectively - when you schedule these things, you take a ready-to-run binary, and you go and run it.

[0:22:15] KB: Got it. Okay. At that point, at the binary that you're putting down here, you have already done whatever translations you need to have the logical versus physical and all of those things. This is just like, "Here's a binary. Go. Here's a schedule."

[0:22:29] DR: That's exactly right. Here's a schedule.

[0:22:31] KB: Okay. Now, moving up a layer to whatever is doing this compilation, its outcome is going to be this binary that's a schedule. What does the input layer look like?

[0:22:41] DR: Right. Let's not deal with error correction now, because there's a whole collection of subsystems down there that I'd rather not get into at the moment. But basically, one input is the representation of that graph that describes the gates and the qubits that you're going to operate on. Coming from the top down, in a general sense, there's a whole collection of operations, a large number of gates that you can use to perform an operation. The quantum computer itself - generally, the physical quantum computer itself - only implements a significantly smaller subset. In most cases, one or two gates. They're divided, in most superconducting quantum computers, between what we call one-qubit gates, so an operation just on one qubit, and then a set of two-qubit gates, in which case we're entangling two qubits together and performing an operation on them. You can create a pretty small subset of those that you can mathematically prove can be used to represent all of the possible gates that are incorporated into a quantum computation. One of the steps that's necessary is you take your program from the general-purpose gate set to the gate set that is native to the quantum computer that you're operating on. There's a translation layer there, and that's where a sort of an instruction set architecture becomes important, because you need to define what instructions you can operate with. Similarly, you need to know how to construct the pulses associated with those gates, because those things are highly tuned operations, and there isn't only one way to do it. There's many, many ways to do it. An awful lot of work gets done on any given quantum computer in defining those gates and building them in such a way that they can help mitigate errors and otherwise be most efficiently run. There's a translation step that takes place once you've got that native gate set to turn it into what is essentially that schedule we were talking about.

[0:24:32] KB: Yes. It very much is just sounding like a classical compilation pipeline where you're lowering it bit by bit. You've lowered it from the set of transformations on the full logical set of gates to now it's the set of transformations on the gates supported by this computer.
Then you lower it down into the pulsing schedule and take it through these phases.

[0:24:53] DR: There is some fabulous work being done right now in the LLVM community in terms of building out various abstractions that can be leveraged so that we can build some better general-purpose compiling tools for quantum computing. Just to take that idea a step farther, if you think about what I said earlier, you've got a classical computer program that's being written that's then going to write quantum instructions, and you can imagine a single compiler taking all of that and building the appropriate execution artifacts that then get properly scheduled in the environment. Well, I know some of that is actually happening in some places, and I suspect we'll be seeing a whole lot more of that. Right now, it is a little bit less efficient than that in terms of the tool chain that's involved. For example, we have separate tools that produce those compilations, and we actually tend to run them as services, so the application can determine when it wants the compilation to take place for efficiency purposes.

[0:25:47] KB: Got it. That makes sense. If we talk briefly about that highest-level but still quantum programming interface, I think that, as I understand it, is the Quil programming language.

[0:25:59] DR: We have two languages that share the name Quil. One is called PyQuil, and the other is Quil. You can think of Quil as our assembly language programming environment. This is the thing that takes basic gate-level descriptions, and you can produce a schedule directly from it. It turns out that, like in the early days of machines that implemented a microcode, there is value in actually programming at what they call the pulse level - or, again, if you squint, kind of like a microcode. We have an element of Quil, called Quil-T, that allows you to program at the pulse level as well. Then PyQuil is really a binding to a set of Quil operations. It's not really a compilation tool chain as much as an interpretive environment written in Python that allows you to produce Quil programs and then execute those programs. In our environment, since it involves execution and the coordination of all of those resources that I just described, you're really talking about something that has an SDK. PyQuil is built upon a Rust SDK that then provides you not only with the tooling to do the necessary translation, but the tooling to do the execution elements of this as well, and the scheduling and all the rest that you want to do.

[0:27:11] KB: Yes. This makes sense. At this point, I think we're talking - at the level that a programmer would be engaging with it today, it still feels like a very - compared to classical computing, it's a very low level of abstraction. We're still essentially writing assembly code equivalents. Is that a fair assessment?

[0:27:32] DR: I happen to think so. I think we have yet to see the explosion in programming languages that we're going to see for quantum computing once the systems get to the point where they're actually doing the work that we think they're capable of doing. I used to talk about the expectation of this sort of Cambrian explosion of languages, because this is what happened in the early days of classical computing, right? We had this huge explosion of interesting ways of doing things. That hasn't stopped, and I suspect that's going to be true for quantum computing, but not quite yet.
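As one small, concrete instance of that general-to-native translation step: superconducting machines like Rigetti's have historically exposed a native set along the lines of RZ(theta), RX(±pi/2), and a two-qubit CZ (the exact set varies by machine and generation), and the compiler rewrites everything else into those. The sketch below, in plain NumPy, checks the kind of identity such a compiler leans on - a Hadamard gate is RZ(pi/2)·RX(pi/2)·RZ(pi/2) up to an overall phase that has no physical effect.

```python
import numpy as np

def rz(theta):
    """Rotation about Z."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def rx(theta):
    """Rotation about X."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s],
                     [-1j * s, c]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Candidate decomposition of H into "native" single-qubit rotations.
M = rz(np.pi / 2) @ rx(np.pi / 2) @ rz(np.pi / 2)

# Equal up to a global phase.
phase = M[0, 0] / H[0, 0]
print(np.allclose(M, phase * H))   # True
```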
[0:27:58] KB: Let's maybe talk a little bit about those applications and what you think the thresholds are to be able to start unlocking those. What are you seeing as the core application areas where quantum computing is going to -

[0:28:10] DR: Yes. This is a pretty standard list you'll see out there in the industry. We have direct experience with working on certain enhancements to machine learning algorithms, as well as preparation of data for classical machine learning algorithms, that are starting to produce some interesting results. The idea here is that one of the things that's true about a quantum computer is that it's effectively a stochastic machine, and that means that you can actually do things like produce unique distributions from it. In fact, one of the things that we're finding is that, leveraging machine learning techniques, you're beginning to be able to model particular distributions that enable you to do things like produce what looks like random data, but modeled to a particular distribution. This has all kinds of use in terms of training for machine learning algorithms. You can think of rare event detection as a classic problem here. There's a whole bunch of things that you'd like to be able to have lots and lots of data for, but you don't. But if you can actually model the distribution and generate the data that way, well, then you can train a traditional classical machine learning model. That's one big class of things that we're expecting to see. Another class of things is optimization. As I said, it's a well-studied problem. There remains an open debate about whether a quantum computer will actually ever beat a classical optimizer. The longer answer probably goes like this. We believe that we can certainly run larger optimization problems than classical computers can in the general sense. What seems to be true, at least in the classical case, is that every time you come up with a good optimization algorithm in quantum, some clever chap goes off and goes, "Oh, you know, if I just tweak my classical optimizer like this," right? You start applying serious heuristics to the problem, and you get a reasonable solution. That's great. I mean, that's how these things work. Like I said, there's still some intuition there. I'm going to hold that thought, talk about one more thing, and then I want to come back to this. The other thing, and I hinted at this earlier, is there's a whole collection of problems that involve modeling quantum processes. There's an expectation that, especially as we get to the higher qubit counts, we'll be in a position to do some of that modeling very, very well as well. Here's the thought that I wanted to make sure that we talked a little bit about. We are currently building systems. Our largest system to date is 84 qubits, and it's not quite performing where we want it yet. There's a metric that is used out there called the two-qubit gate fidelity. This is like one minus the error rate of the two-qubit gates. The reason we need an error-corrected system at some point is so that we can dial that error rate as low as possible. That's going to take a lot of physical qubits - more than 84 - to get a 50- or 60- or 100-logical-qubit system. But we're getting pretty close. We're at about half a percent of error, and we expect to get that down further over a period of time.
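As a rough back-of-the-envelope for why those fidelity numbers matter so much - a naive model that just multiplies per-gate success probabilities and ignores error mitigation and correction entirely - the chance of a circuit running cleanly falls off exponentially in the number of two-qubit gates, so moving the error rate from half a percent toward a tenth of a percent changes which circuit depths are even worth running. That is the gap error correction is ultimately meant to close.

```python
# Naive estimate: probability that a circuit sees no two-qubit-gate error at all.
for error_rate in (0.005, 0.001):          # 99.5% vs 99.9% two-qubit gate fidelity
    for gates in (100, 500, 1000):
        p_clean = (1 - error_rate) ** gates
        print(f"error {error_rate:.1%}, {gates:4d} two-qubit gates -> ~{p_clean:.1%} clean shots")
```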
The point I'd like to make is that once we get to, I don't know, 500 or 1,000 or 1,500 physical qubits, at that one percent or lower error rate - so a 99.9 or maybe 99.99 percent level of fidelity - we have in our hands a computational resource that is not simulable by a classical computer and is more capable than anything we've ever seen before on some metrics. There's this question of, well, what's going to happen with that system? The short answer is we don't know. It's a little bit like what happened when von Neumann and friends found the ENIAC, right? They went, "Wow, 20,000 multiplies? What can I do with 20,000 multiplies? That's huge." Well, Morgenstern and von Neumann invented Monte Carlo methods with that, right? I suspect we're going to have a moment like that at some point with systems prior to full fault tolerance and full error correction. Where that's going to rear its head, I don't know the answer to that yet, but that's the beauty and joy of working in this field right now: you're building this resource that really does have tremendous promise associated with it. We're not quite yet sure where it's going to find its first true serious applications.

[0:32:15] KB: Question about those scaling pieces, because you said something that tickled something for me. You mentioned, okay, at that point, you're able to address problems at a scale beyond anything a classical computer can do. I'm also thinking there's multiple directions that problems scale. One of the big things that so many folks are tackling right now is just scale of data. Data is scaling. How much data can you feed into one of these systems or get out of it? Is that dimension of scale there, or are we talking other dimensions?

[0:32:44] DR: It's a great question. The short answer is, at least for the systems that we have at hand today, these are not suitable for pumping terabytes of machine-learning-type data through them. On the other hand, what they are suitable for is setting up models that require in-memory or near in-memory representation of very large amounts of data - a complex system where you need a lot of state to represent that system. We've never had systems that can do that. Where that's really going to get applied is unknown. Again, I think it speaks to the kinds of things we expect to do with physical system modeling that we haven't been able to do other than with wild approximations in our classical numerical algorithms. I suspect that's a place that will bear a lot of fruit.

[0:33:33] KB: Got it. If I were to restate that a little bit, handling large amounts of data flow is not their strength. But being able to make quick operations on very large, stateful data that you might have to load into memory or something like that, that is where we think these can reach a new dimension.

[0:33:51] DR: Certainly, based on the architectures that we have and the way we express quantum computing, that feels right.

[0:33:56] KB: Okay. That makes sense. Let's talk a little bit about the classical software around this. One of the things that I thought was really interesting when I started looking into Rigetti is that I can just run stuff on Microsoft Azure. How are you thinking about all of the ecosystem of software around the quantum computer itself?

[0:34:18] DR: Right. Well, you heard what I was describing before for the basic execution chain and the compilation tools and all of that. We've had that for some time. We were also among the first.
In fact, we were the first organization to not just put an API up that says, "Talk over the Internet to our quantum computer," but to build out a system that effectively provided you with what was then a VM, and then quickly moved to containers, to execute hybrid classical programs in an environment where the connectivity between the classical program and the quantum processor itself was very, very low latency, giving you high performance. Building that software leveraged most of the tools that I just described, as well as the requirement to build out infrastructure to support that high-speed connection to the quantum computer from the classical computer. We leveraged all of that when we moved. First, we moved to AWS. We were a flagship launch partner with AWS when they launched their Braket service. Then not too long after that, we were up on Azure as well. The important thing, at least for us, is to try and both dogfood the systems that we're building but also make them as scalable as possible. It is the same software - we call it Quantum Cloud Services - that we use to run our own cloud system and to run and operate our computers here. When we're doing bring-ups, when we're making new machines and getting them first alive, it's the same software that we use to integrate into Amazon and Azure. The considerations between those two kinds of environments have a lot to do with, well, security issues. You don't get into Amazon without being very, very secure. They are quite serious about that. As well as performance issues, things like latency and such. At that point, it's not so different from being here inside Rigetti running an application. They get closer and closer every year as everybody works on the efficiency issues. Did that get close to answering what you were looking for?

[0:36:14] KB: Yeah. Well, and I'm kind of curious then, how do you see people utilizing quantum computing going forward? Is it the same? Is it you spin up an AWS instance and hand it off to a service the same way you might to a DynamoDB or something like that? You're like, "All right, just go, run this on Rigetti."

[0:36:33] DR: That will certainly be a part of the mix of things. Because of the nature of the kinds of problems that we expect to be working on, some of this will involve highly classified data. There will be the necessity for on-prem systems for sure. At this stage of the game, the market is pretty bifurcated. There is a group of folks who use cloud-based quantum computing in research applications that are predominantly software-based, and there's an awful lot of learning that's going on. The first interactions that most people have with a quantum computer are usually through one of those cloud services. But there's a whole bunch of work being done now to build out ecosystems around national quantum computing centers and research laboratories, the national laboratories. And so on-prem systems are becoming an important element of that. And in fact, one of the things we're finding is that governments are taking a serious interest, generally speaking, in quantum computing. And so funding is coming from a variety of governments to support the extension of supercomputing environments, and I'll talk about that in a second, as well as just general national quantum computing centers being set up. We recently launched a system at the UK's National Quantum Computing Centre outside London.
We have systems at Fermilab and at the Air Force Research Lab, and then we provide these cloud quantum computing systems to our DOD partners and others all the time. Like I said, we run our own cloud system in a pretty high-performance way. There is a really interesting - we're in an interesting time from a supercomputing perspective as well. So exascale has been reached in supercomputing, right? ORNL has produced two very, very important systems in the last five or six years. And there's a lot of discussion. I think you were hinting at this before. There's a lot of discussion about the varied hybrid modes of classical computation. And you can extend that to treating quantum computation as an extension of the toolkit, of the palette that you have within which to build a computing system. And so as the funding cycles associated with supercomputing start to look towards that next five- or ten-year system, there's a lot of interest and energy being spent on integrating quantum computers into that. In fact, we did an integration of our Ankaa-2 system with ORNL not that long ago, sort of over a network connection. The idea is to just go, "Okay. So what's this gonna look like? How do we hook these things up?"

[0:38:48] KB: Yeah, it's kind of a wild thing. So I think this is a good opportunity to sort of step back and look at kind of where these things are in their sort of evolution of use, and what you see the timelines and milestones coming down the pike looking like.

[0:39:05] DR: It's a very dangerous and not very rewarding thing to make predictions about quantum computing. But I'll say this, and I'll lean back on something that I said earlier: look, the way this works, the way technology generally works, is you build to what you have and you just keep improving it. And that's what we've been doing for some time now. We're about to reach a threshold where we're going to put a computational resource in someone's hands that they've never seen before, the power of which they've never actually had. And at that point, we should expect to see something interesting pop out of that. What it is? I don't know. But our roadmap clearly states we'll give you more than 100 qubits by the end of the year. I suspect that'll be more than 100 qubits not long after that. We're pretty clear that in a couple of years, 1,000 qubits is right there. 1,000 physical qubits at very high fidelity, that's a big deal. You go farther out, the numbers start to look like 2030, 2032 before we start getting proper fault-tolerant, error-corrected systems. That's what many people are betting on. The Department of Defense - DARPA - recently put out a bid for people to build systems that, roughly in that time frame, would be fully fault tolerant. Actually, we're proud to be engaged with that program now. But that's different from saying, "Oh, well, we'll reach quantum advantage in 20XY." I don't know the answer to that question. And honestly, I don't think anybody really does. The game right now is to build the best possible systems. And you see a few people doing that right now. And the thing that has surprised everybody in the last three or four years is just how fast that's come. We were scaling our systems' qubit count pretty aggressively two or three years ago. We had 80-qubit systems up on Amazon. And we're still at 84. And part of the reason we're still at 84 is that we focused on getting the fidelities up. We've been halving our error rates every year or so, and that's a really good trajectory for us going forward.
As I said, we'll crank the scaling back up. There's an interesting thing about that in terms of techniques for scaling. We're unique in the industry - we're about to not be unique in the industry, but we're unique today - in building quantum computers out of multiple chips. That 80-qubit system that I was telling you about, the one that was on Amazon two and a half years ago, was built out of two 40-qubit chips. And these weren't chips operating separately. We got entanglement across the chip boundaries, which is the thing that you want. This is a grid of qubits locked together, and the chip-to-chip boundary had entanglement across to neighboring qubits. We announced we'll build a 36-qubit system out of 9-qubit chips sometime in late spring, early summer of this year. And we'll certainly scale that way for anything above 100 qubits. Those 84-qubit chips are about where we want to stick when it comes to the size of the chiplet that we use. Now IBM is starting to do that, and others are definitely going to come along, but it's an important thing to note that that's how scaling is going to happen.

[0:42:00] KB: Yeah, that's wild that you can get them entangled across chips.

[0:42:04] DR: And the performance is as good as -

[0:42:05] KB: That's where I was going to ask, is do you get the same error rates and things?

[0:42:08] DR: We do. We were even a little surprised. I mean, the theorists were going, "No, we got this, right? We modeled it. It's going to be fine." But when it actually happened, people were like, "Look at that. It really did what we thought it was going to do there." It was a big deal, actually. We were pretty proud of that. And it was kind of lost on the industry at the time, I think, but we're bringing it back.

[0:42:22] KB: What does it look like? You mentioned you're a full-stack company, and a lot of the stuff we're talking about in milestones here are kind of manufacturing level almost, right? They're like, "Okay, we're going to improve this process. We're going to improve this error rate." What does the software organization inside of Rigetti look like? How are you organized? And how does that interact with the hardware pieces?

[0:42:43] DR: Well, that's a great question. I'm so glad you asked that question. It's often lost on people how much software is required to actually build a quantum computer. We have two organizations. One is the QCS software organization. They're really responsible for that operating environment that I just described: compiler technology, the execution pipeline, the interface to the control system itself. There is a whole other internal software organization that is responsible for building the tools that we use to bring up, measure, characterize, and otherwise manipulate the machine itself. I was telling you before that when you define a 2-qubit gate operation, you have to define the very specific microwave pulses that you're sending down to those two qubits. That's a combination of science and art at this point, involving error mitigation techniques and such. And you need quite a lot of software to support the efficient experimentation of that. Our measurement and characterization software - it's something we call Treeline - has been a major effort for us and has been the thing that has allowed us to reduce those error rates as fast as we have, most recently in particular, because some of it is just pure gate manipulation and mitigation techniques.
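To give a flavor of that gate definition and tune-up work as an optimization problem (a point expanded on just below), here is a deliberately simplified sketch in plain NumPy. The parameter being swept, the noisy cost function, and every number in it are made up for illustration - real calibration runs over many coupled parameters per qubit, which is exactly why it has to be heavily automated and, at larger qubit counts, increasingly model- or ML-driven. The shape of the loop is the point: estimate an error rate from repeated measurements at each candidate operating point, then pick the point that minimizes it.

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_error(pulse_amplitude, shots=2000):
    """Pretend experiment: the true gate error is quadratic around an unknown sweet spot,
    and we only see it through finite-shot sampling noise."""
    true_error = 0.004 + 0.5 * (pulse_amplitude - 0.137) ** 2
    failures = rng.binomial(shots, true_error)
    return failures / shots

# Coarse sweep of one control parameter, then pick the best operating point.
amplitudes = np.linspace(0.10, 0.18, 41)
errors = np.array([measured_error(a) for a in amplitudes])
best = amplitudes[np.argmin(errors)]
print(f"chosen amplitude ~{best:.3f}, estimated error ~{errors.min():.3%}")
```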
If you think about it, one little bit of information: there's a collection of operating points for each of those qubits - the frequency of the qubits, some aspects of the pulses that you need to define - kind of part of the instruction set architecture, if you will. And the manipulation, or the definition, of those parameters is an iterative task that is effectively an optimization problem that you have to perform. At 20 qubits or 40 qubits, it's vaguely manageable. At 80 qubits, it's not so manageable, but it's still vaguely manageable. At 500 qubits, there's no other way but extremely efficient automation, and the expectation is that the optimization is complex enough that applying machine learning to it is going to be necessary to get anywhere at all. We've done a little bit of that already. Those tools are very, very important. The fab - the fab is producing data all day long, right? One sort of very glib characterization of a lot of what the fab does is take a specification and then produce a yield of chips for us, right? And doing that well at all requires a tremendous amount of experimentation and data collection and analysis of that data. We have that internally, and the team leverages that as well. But as I said earlier, fortunately, we live in an age where we're 80 years into software technology, and open source technology, and networking technology, and high-performance classical computing technology. All of that is being brought to bear on solving these problems, right? We've got [inaudible 0:45:16] pipelines now set up to do the collection of all that data, so we can integrate it in real time with the systems that we're using. We can produce analysis and monitoring tools on that stuff to see where we are. The optimization thing that I was telling you about, that has to happen to bring the system into being, has to recur a bit, because these systems drift from time to time. There's a whole process to do what we call a retune with these things that has to happen from time to time. And we're getting to the point where more and more of that will be proactive - we'll proactively retune, based on streams of data coming out of the systems themselves, rather than schedule some time to go do the retune. There's a lot of modern software development going on here as well. And I guess I should say it should be taken as read that all of this is deployed in modern container architectures. We can put it on-prem, we can put it in the cloud. We put a lot of it in the cloud already. We're leveraging all that technology.

[0:46:04] KB: Yeah, well, and I think a theme that I've definitely been seeing recently is it feels like, in recent years, we've hit sort of thresholds from a software and machine learning perspective that are opening all sorts of doors in hard tech and sciences and other different things that we're just starting to see the impacts of. Now, looking at your systems - and you alluded to or you mentioned earlier, one of the possible application domains where quantum computing has an edge is in modeling quantum phenomena. Are you using your own systems to help model or design next generations of those systems? Or are we not at the bootstrapping phase yet?

[0:46:45] DR: We're not quite at the bootstrapping phase yet. I hadn't thought of that in these terms before, but that's a milestone that will be worth celebrating. And when people talk about using quantum computers and the utility associated with them, there's utility now, because obviously people are buying them and using them.
But to be fair, it's a lot about exploring quantum computing and the system building associated with it. The minute we start bootstrapping - and there's optimization problems all over the place - that will be very, very exciting for everybody involved.

[0:47:13] KB: Care to wager a guess how far off it is?

[0:47:15] DR: Not going to do it. Soon, I hope.

[0:47:20] KB: We've talked about quite a range of interesting things. We're getting close to the end of our time. Is there anything we haven't talked about yet that you think would be valuable to go into?

[0:47:30] DR: There's a couple of things that come to mind. One is this: quantum computing is still a very scientific endeavor. There's a double-edged sword to that. On the plus side, we're so close to the science still. There's a kind of openness in this community that's exciting. People publish their results on the arXiv all the time. Even though there's significant competition in the industry right now, there's sort of a joy and a beauty associated with going, "Oh, look what they did," and learning from each other, and the fact that it is so scientific. The big conference is the American Physical Society conference in March, and everybody's going to be there. Everyone's going to talk about their results, and that's really quite wonderful. It's different from going to any of the traditional computing conferences that I've been to in the past. It also seeps into the work itself. Nobody is here that doesn't really, really want to build this machine, right? That's what people are doing. The story I often tell people is when I came back from the first interview, I was talking to my family and blah blah blah. They're like, "Dad, are you going to do this?" I'm like, "Dude, they're building the ENIAC. I'd regret it for the rest of my life if I didn't do this." It's kind of true. That's the feeling you get when you're doing this work, and that's a lot of fun, right? The other side of it, because it's so academic, is sometimes people forget we're building a computer. And that means integration, and speed, and lots of classical computing alongside the quantum. I can't tell you how many times we'll be putting a BOM together for a system that we need and somebody forgets to spec the classical computers that need to go in the rack, right? That's actually not happening anymore, but that was the kind of thing that used to happen. Or, "Oh, what about all that software that has to happen?" Because it's a bunch of physicists building this very, very difficult thing. One of the things that I'm quite proud of Rigetti doing is expanding the footprint of the team here to include proper professional software development teams, proper infrastructure teams, right? Teams that are capable of bringing the best practices from all over the place to bear on this. Yeah, we write a lot of Python, and that's a very useful thing to do. A lot of our system stuff is being done in Rust now by people who know what they're doing, and that's also a useful thing to do.

[0:49:35] KB: Well, and Python written by a software engineer with experience looks very different than Python written by a grad student who's doing it to try to get their results.

[0:49:44] DR: Hear, hear. Hear, hear. And in fact, that's exactly what's been happening. And one of the great things to have watched over the last few years is folks that started off sort of as those grad students now being very qualified professional software engineers here. So that's nice as well.

[0:50:01] KB: Yeah, that's awesome.
Well, this has been super fun. [0:50:05] DR: I'm glad you enjoyed it. I did too. [END]