Okay. Thank you, everybody. Welcome to Needham's 20th Annual Growth Conference. My name is Quinn Bolton. I'm the semiconductor and quantum computing analyst for Needham. It's my pleasure to host this fireside chat with Rigetti Computing. Founded in 2013 and headquartered in Berkeley, California, Rigetti is a pioneer in full-stack quantum computing. The company's quantum computers are based on superconducting qubits, which are widely believed to be the leading qubit modality, given their maturity, clear path to scaling, and fast gate speeds. Current Rigetti systems achieve gate speeds of 50-70 nanoseconds, which is about 1,000 times faster than other modalities such as ion traps and neutral atoms.
Additionally, Rigetti developed the industry's first multi-chip quantum processor for scalable quantum computing systems. The company manufactures its QPUs in-house at Fab 1, the industry's first dedicated quantum device manufacturing facility. Joining me on stage is Dr. Subodh Kulkarni, President and CEO. Subodh, thank you for joining us.
Thank you, Quinn.
Just maybe as background for investors that may not be familiar with Rigetti, can you please provide a brief overview of the company, why the company chose superconducting qubits, and what differentiates Rigetti from others in the superconducting camp?
So thanks for having me here. As you mentioned in your introduction, Rigetti is based in Berkeley, California. We focus on developing superconducting gate-based quantum computing technology. The main reason we believe superconducting gate-based quantum computing is one of the leading modalities in quantum computing is because of its strengths in scalability and gate speed. Fundamentally, we are using semiconductor chip technology. So we have five decades of semiconductor industry know-how and experience to leverage. So we feel pretty good that once we have our chip defined, we can shrink it, we can multiply it, we can use all the standard scaling techniques that the semiconductor industry uses. So that's the big reason why scalability is such an important attribute of superconducting gate-based technology. It's not just us.
I mean, if you look at most of the investments going on in quantum computing right now, more than 90%, maybe even higher, is in superconducting gate technology. Besides us, you've got big companies like IBM, Google, Amazon, Microsoft, Toshiba, Fujitsu, and none other than the government of China. I mean, if anyone has deep pockets, it would be the government of China. The fact that they have chosen to invest in superconducting gate-based quantum computing should tell you something right there. So scalability is a big factor, but the other, and you mentioned that, is speed. Fundamentally, we are dealing with electrons to transfer information from one qubit to another qubit. And so we are dealing with the speeds of electrons, just like CPUs and GPUs are. So typically, our speeds are in tens of nanoseconds, very commensurate with CPU and GPU gate speeds.
When you look at some other modalities like trapped ions or neutral atoms, they're physically moving ions or atoms to do the same thing that we are doing with electrons. Ions and atoms are thousands of times heavier than electrons. So the laws of physics hold: moving an ion or an atom is far slower than moving an electron. And you see that in the gate speeds. So if you look at the gate speeds of trapped-ion or neutral-atom companies, they're typically 1,000 times slower, and sometimes even more, than where we are. So when we or IBM or Google talk about 50-nanosecond and 60-nanosecond gate speeds, you typically find companies like IonQ talking about 600-microsecond gate speeds. That's a factor of well over 1,000 right there. And that, we think, is a huge benefit for superconducting.
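As a back-of-the-envelope check on those numbers, here is a small Python sketch using the round figures quoted above; the values are illustrative, not measured specs for any one system.

```python
# Back-of-the-envelope gate-speed comparison using the figures quoted above.
# Round, illustrative numbers -- not measured specs for any particular system.

SUPERCONDUCTING_GATE_S = 60e-9  # ~60 nanoseconds per two-qubit gate
TRAPPED_ION_GATE_S = 600e-6     # ~600 microseconds per two-qubit gate

ratio = TRAPPED_ION_GATE_S / SUPERCONDUCTING_GATE_S
print(f"Speed ratio: {ratio:,.0f}x")  # 10,000x for these particular numbers

# Wall-clock time for a hypothetical sequence of 1 million gates:
depth = 1_000_000
print(f"Superconducting: {depth * SUPERCONDUCTING_GATE_S:.3f} s")  # 0.060 s
print(f"Trapped ion:     {depth * TRAPPED_ION_GATE_S:.0f} s")      # 600 s, i.e. 10 minutes
```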
So scalability and gate speeds are the main reasons why most of us are investing in superconducting. On the flip side, historically, fidelity was a big challenge for the superconducting camp. And that's why many of these other modalities started gaining attention. But I believe that changed quite a bit in November of 2024, when Google announced their Willow chip. That's the first time any one of us in the superconducting camp got to 99.7% two-qubit gate fidelity. And you see a lot of excitement in quantum computing building after that Google announcement. We did multiple announcements after that talking about 99.5% or 99.7% fidelity. IBM has done their part. Even the government of China, in recent papers, has talked about 99.5% and 99.7%. So multiple companies and organizations in superconducting are reporting 99.5%-99.7%.
So what was historically the weakness in superconducting, fidelity, we are saying we have caught up with other modalities there, and if anything gone ahead in some cases. But our strengths of scalability and speed continue. So we feel very confident that of all the modalities, superconducting is going to be the dominant one going forward.
Excellent. There are multiple go-to-market strategies in quantum computing, one of them being the on-premise hardware sale. Rigetti has sort of chosen a different path than some of your peers, where oftentimes you'll sell just the QPU rather than the entire quantum computing system. Maybe talk about why you chose that route and some of the advantages or perhaps disadvantages of that approach.
Sure. So long term, we believe there are two ways of monetizing quantum computing. One is obviously to sell physical quantum computers, and the other is to sell quantum computing as a service, the cloud model. Over the next 10, 20 years, we are pretty sure most of the business is going to be the cloud model, because quantum computers, we believe, will exist in data centers in a kind of hybrid ecosystem along with CPUs and GPUs. But that's five years, 10 years from now. In the meantime, there are a lot of companies and government organizations and universities that are interested in understanding quantum computing, developing applications, developing different parts of the stack. So they are really doing research work right now.
And that's where they are interested in physically getting a quantum computing system within their facility, whether it's the U.S. government, a DOE lab, a DOD lab, or the U.K. government, or many other governments, or academic institutions, universities, and so on. In the superconducting area, there are multiple things that we use to build a quantum computer. But one of the main things we use is what's called a dilution refrigerator, which is basically the physical system that is cooling the chip down to superconducting temperatures, the 5 or 10 mK type temperatures. So when you are looking at a quantum computer system, whether it's ours or IBM's or Google's, the big thing that you see, which looks like a cylinder, sometimes six to eight feet tall and about a couple of feet in diameter, is really the dilution refrigerator.
The actual chip that is sitting at the bottom of the cylinder is a very small chip. That's where all the work is going on. But most of what you're seeing is a cooling system to cool the chip down to 10 mK. Again, I'll emphasize 10 mK, OK? That's pretty close to absolute zero. That's what we are doing right now in that dilution refrigerator. So what we find is that many of these national labs and academic institutes, for various reasons, have already invested in a dilution refrigerator. And then there is no reason for us to try to sell them another dilution refrigerator, which we are buying on the open market; there are four or five companies that supply dilution refrigerators. So what we do is allow our customers to tell us what they really need to bring up their ecosystem.
So in a case like Fermilab in Chicago, a premier DOE lab, they invented cryo technology. So it's a little bit of a joke for us to go and sell cryo technology to a Fermilab that invented cryo technology. They have their own cryo setup and everything. So in that case, we enabled them to get just our QPU so that it can fit into their cryo technology stack. And we got them up and running with a quantum computer. In a case like Montana State University, where they had zero experience with quantum computing but wanted to get into it, we sold the whole system. So we look at the customer and what their needs are. If they already have a dilution refrigerator, there is no reason for us to sell them another one, so then we sell just a QPU.
But if they are starting from scratch and they want a whole system, we'll do that. So in a case like the U.K. government, when they were trying to come up in quantum computing, they wanted a whole system from us. So we did a whole system. So we are flexible in that way when it comes to what the customer needs.
Understood. Some of your peers have made claims of reaching quantum advantage on certain applications, whether they're academic or real world. You've been a little bit more conservative on the timing of quantum advantage. It'd be great to get your perspective on when you think you or the industry will be able to reach quantum advantage, and, at least for the superconducting modality, what are the attributes of the system that you think will enable the company to hit quantum advantage on certain applications?
You are generally right that we have been more conservative when it comes to forecasting, both from a technology milestone standpoint as well as a commercial business opportunity standpoint. Our view, consistent with BCG reports, is that this market is going to grow to like a $3 billion type market by 2030 and then $15-$20 billion by 2035, and so on. Some of our peers think the numbers are a lot bigger than our estimates, and that's partly supported by reports from companies like McKinsey, which are much bigger. We, of course, would love the market to be larger, don't get us wrong, but we think any time you are coming up with disruptive, complex technology, it usually starts off slower than most people expect. On the technical side, again, we are more conservative.
Our view is that we are roughly about three years from quantum advantage, which is when we can clearly demonstrate superiority of quantum computing compared to classical computing in terms of performance. And we have quantified what that means in technical metrics. We have said that to get to quantum advantage in about three years, you need a minimum of 1,000 qubits. You need a minimum of 99.9% two-qubit gate fidelity; that's the accuracy of information transfer. You need a maximum of 50-nanosecond gate speed. And you need error correction. With those four things, we believe you can get to quantum advantage and start demonstrating superiority. And that's what will enable commercial applications to take off. We realize that's probably one of the more conservative estimates in the industry.
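To see why the fidelity requirement matters as much as the qubit count, here is a minimal sketch of the standard rough estimate that circuit success decays as fidelity raised to the number of gates. It deliberately ignores error correction, readout error, and decoherence; the point is only the scaling.

```python
# Rough circuit-success estimate: P(success) ~ fidelity ** num_gates.
# A deliberate simplification (no error correction, readout error, or
# decoherence), but it shows why 99% vs. 99.9% is a qualitative difference.

def survival(fidelity: float, num_gates: int) -> float:
    return fidelity ** num_gates

NUM_GATES = 10_000  # a modest circuit on a ~1,000-qubit machine
for f in (0.99, 0.995, 0.999):
    print(f"fidelity {f:.3f}: P(success) over {NUM_GATES:,} gates = {survival(f, NUM_GATES):.1e}")
# fidelity 0.990 -> ~2.2e-44  (hopeless)
# fidelity 0.995 -> ~1.7e-22  (still hopeless)
# fidelity 0.999 -> ~4.5e-05  (marginal -- hence error correction as the fourth requirement)
```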
Some of our peer companies, as you said, have a lot bolder claims out there. But our view is that quantum advantage essentially means that we should be able to go to a data center, demonstrate the metrics with quantum computing, and they should be convinced that they need to start using quantum computing. That, to us, meets the spirit of advantage. Right now, here's what we have done, and other companies have done: we can take a mathematically esoteric problem. There are tens of mathematical problems that are near impossible to solve with classical computers. They are well known, like the spin glass problem or the Bose-Einstein condensate problem. These are extremely difficult problems for a classical computer to solve. We can take one of those problems and show that a classical computer takes infinite years to converge.
Then we can take our quantum computer, or anyone else's quantum computer, and show that it converges. And then you take the ratio, a finite number divided by 0, and you get infinity. And then you say we are a trillion times or a hundred trillion times faster than a classical computer. To me, that's meaningless, bogus information. You intentionally chose a problem that was impossible to solve, so you had 0 in the denominator. Then the numerator doesn't matter.
Any finite number divided by 0 is infinity. And that's the kind of bogus claim that is going around when you see people talk about quantum advantage. I refuse to subscribe to that. I think quantum advantage means you really can go to a data center, show performance, and the data center manager should say, yeah, I want to go with quantum computing.
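His objection reduces to one line of arithmetic: if the classical baseline never converges, any finite quantum runtime yields an "infinite" speedup, so the headline ratio carries no information. A toy restatement:

```python
import math

# Toy restatement of the objection above: pick a problem the classical
# machine can never finish, and the speedup ratio is infinite no matter
# what the quantum runtime actually was.

classical_time_s = math.inf  # problem chosen so the classical run never converges
quantum_time_s = 3600.0      # any finite runtime will do -- an hour or a decade

print(classical_time_s / quantum_time_s)  # inf, regardless of the quantum runtime
```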
So it's almost more of a commercial or practical view. I don't want to say necessarily commercial, because I'm sure there are academic or government applications, not Fortune 500, but got it. Maybe with those steps or requirements to get to quantum advantage in mind, let's shift to the roadmap. Last week, you provided an update on your Cepheus 108-qubit system, pushing the date for general availability back by about a quarter. Maybe talk about what you saw, why the delay, and how confident are you that you'll be able to hit the 99.5% fidelity metric by the end of this quarter?
We have two systems available for anyone to use right now. One is an 84-qubit monolithic chip. That's a single chip at 99% two-qubit gate fidelity and about 70-nanosecond gate speed. The other is a 36-qubit chiplet-based system, where we have four 9-qubit chiplets. That's at 99.5% two-qubit gate fidelity and 60-nanosecond gate speed. That's available right now on the cloud; anyone can use it. We were planning on deploying a 108-qubit system at 99.5% two-qubit gate fidelity and 60-nanosecond gate speed. Our hope was that we would deploy it by the end of last year. We ran into an unusual issue with coupling of couplers. Essentially, what happens in quantum computing is you have qubits, which is where the data processing is going on. You couple the qubits with what we call tunable couplers. These couplers are actually qubits too.
They are just not functioning as data qubits. But we found out, as we saw in our 108-qubit system, that there are almost 260 qubits effectively. And the couplers are just supposed to couple the data qubits. But once we exceeded a certain threshold, we started finding that the couplers started coupling with each other.
Oh, instead of the data qubits.
Instead of the data qubits. So they effectively became active like data qubits, and that started creating complications in the actual calculation. Even despite that, we still managed to get it to 99%. So frankly, we could have deployed it at 99% last week if we wanted to, or a couple of weeks ago. But we chose to do another iteration of the chip to get the coupling of couplers under control and get the system to 99.5%. That will take another round, which is about two to three months from now.
No problem.
So we feel pretty good. We understand the problem. And frankly, these are extremely complex technologies we are developing. Some of these things are to be expected. So when we give guidance for technology milestones, I mean, we do the best we can with some confidence. But some things happen, and you learn. And when you learn, you try to correct it and move forward.
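For a sense of where the "almost 260 qubits effectively" figure can come from, here is a toy count that assumes twelve 3×3-qubit chiplets with one tunable coupler per nearest-neighbor edge and one link per adjacent chiplet pair. The actual Cepheus topology and link counts are not spelled out in the conversation, so treat the layout as hypothetical.

```python
# Toy count of data qubits vs. coupler elements for a chiplet-tiled lattice.
# Hypothetical layout: twelve 3x3 chiplets (108 data qubits), one tunable
# coupler per nearest-neighbor edge. Rigetti's actual topology may differ.

def grid_edges(rows: int, cols: int) -> int:
    """Nearest-neighbor edges in a rows x cols square grid."""
    return rows * (cols - 1) + cols * (rows - 1)

chiplets = 12
data_qubits = chiplets * 9                       # 108
internal_couplers = chiplets * grid_edges(3, 3)  # 12 per chiplet -> 144

# Chiplets tiled 4 wide x 3 tall; assume one link per adjacent chiplet pair.
inter_chiplet_links = grid_edges(3, 4)           # 17

total_elements = data_qubits + internal_couplers + inter_chiplet_links
print(total_elements)  # 269 -- in the ballpark of the ~260 quoted above
```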
Did you have to come back and sort of redesign or do better isolation between the tunable couplers? What was the fix? Is it sort of more hardware, or is it a tuning or a calibration?
No, it's actually a chip design fix. We design our own chips; we build our own chips. In this case, because the couplers were coupling with each other, we have to find the frequencies of the couplings, and we have to start isolating those frequencies. That gets into the heart of chip design, because we have to essentially redefine the size of what we call the niobium pads, which is where the contacts are happening. So physically, we have to change the size of the pads to control the flux that is going into the chip, to control the coupling of the couplers with each other. So it gets into the real heart of how quantum circuitry works. Again, this is nothing unusual. We knew that.
It's just that once we exceeded 100 data qubits, and the corresponding number of couplers with that, we didn't anticipate so much coupling happening amongst the couplers. That's what caught us by surprise.
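The fix he describes can be pictured as a frequency-allocation problem. Below is a toy collision check, not Rigetti's actual design flow, that flags coupler pairs whose operating frequencies land within some crosstalk threshold of each other; the pad redesign is about engineering those frequencies apart.

```python
from itertools import combinations

# Toy frequency-collision check, not Rigetti's actual design flow.
# Idea: couplers whose frequencies land too close together can start
# exchanging energy with each other instead of only mediating data qubits.

THRESHOLD_GHZ = 0.05  # hypothetical minimum spacing before crosstalk matters

coupler_freqs_ghz = {  # hypothetical allocated frequencies
    "c0": 5.10, "c1": 5.14, "c2": 5.30, "c3": 5.31, "c4": 5.52,
}

collisions = [
    (a, b) for (a, fa), (b, fb) in combinations(coupler_freqs_ghz.items(), 2)
    if abs(fa - fb) < THRESHOLD_GHZ
]
print(collisions)  # [('c0', 'c1'), ('c2', 'c3')] -- these pairs need retuning
```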
Got it. OK. At the last Needham conference, you put out your roadmap through 2027. Maybe share your roadmap with folks in the audience.
So, sure. Right now, our immediate goal is to get the 108-qubit system out at 99.5% and 60-nanosecond gate speed. We believe we'll deploy that towards the end of Q1. As I said, we could have deployed it a couple of weeks ago at 99%, but we opted to do one more iteration to get to 99.5%. Our plan for the rest of the year is to go over 150 qubits at 99.7%. Our 9-qubit system is already at 99.7%. At 36 qubits, we are at 99.6%. The challenge for this year is to get to more than 150 qubits at 99.7%. As far as we know, no one else has demonstrated more than 150 qubits at 99.7% with tunable coupler technology. IBM is at about 120 qubits right now. Google is at 105 qubits. Most of the other companies may be talking big numbers.
But when you look at actual systems, they are in the 50-qubits-or-below range. So we are one of the first ones venturing into this 150-plus-qubit, 99.7% type landscape. So that's the goal for this year. Next year, we think with our chiplet technology, and that's a key part of Rigetti, the chiplet technology, we'll be able to get to 1,000 qubits at closer to a 99.8% type number. But again, I will warn everyone: these are technology milestones, subject to change as we discover more things and challenges and stuff like that.
Understood, but I guess as you build up to the 36 and certainly the 108, at that point, you've proven out the tiling strategy, and going to bigger systems is just more of the tiles that you proved out at the lower qubit counts.
Correct. I mean, we demonstrated the tiling or chiplet strategy at the 36-qubit level, right? And it works. And it's working at 108, except for these coupling issues that we ran into. So we need to get that under control. But going forward, the chiplet approach, we believe, is the right way to scale up to 1,000 qubits. Again, our goal is to get to that quantum advantage point: 1,000 qubits, 99.9%, with less than 50-nanosecond gate speed. We feel pretty good about the gate speed.
We have a lot of knobs on that side to get down from 60 nanoseconds to 50 nanoseconds or better. Certainly, fidelity is the biggest challenge. Getting to 99.9% is not trivial from here. That's why we think it's going to take three years to get to that point. And getting to 1,000 qubits is not a trivial thing either. But I think with the chiplet strategy, that's a lot easier than with a single-chip strategy.
I think the roadmap to 1,000 or more qubits by the end of 2027 may not yet have quantum error correction, which I think you said was one of the keys to hitting quantum advantage. Can you talk about your strategy on quantum error correction?
So yeah, that gets into the overall Rigetti strategy. At Rigetti, we believe in an architecture that is open and modular in nature. So we have created the whole stack so we can put different companies' innovative solutions in the stack. We don't try to do everything ourselves. We are capable of doing it ourselves. But if some other company has a better approach and a more innovative solution, we can incorporate that solution into our stack a lot more easily than other companies can. If you look at IBM or Google, they have designed their quantum computer like a mainframe computer; they do the entire stack themselves. We have chosen to incorporate different companies' solutions in our stack. For instance, we are partnered with Riverlane in Cambridge, U.K., for error correction. We are partnered with Quanta Computer for control systems.
We are partnered with NVIDIA for distribution layer software, CUDA Quantum and NVQLink and so on, and we'll continue to do that. We think that's the right approach, the open modular approach, because it allows us to incorporate innovative solutions that are better than what we can do ourselves. I mean, take the example of NVIDIA. If anyone knows how to do distribution layer software, it's NVIDIA, with their domination of the data center with their CUDA software right now. So we decided it's far better to partner with someone like them than to try to compete with someone like them. Same with the control system. NVIDIA's closest partner for controls is Quanta; Quanta is their number one ODM partner. They are a big Taiwanese company, almost a $35 billion company. So we decided to partner with them rather than try to do everything ourselves.
So that's the approach we have taken, and that gets us to error correction. In error correction, as I mentioned earlier, we partnered with a very capable company in Cambridge, U.K., called Riverlane, about the same size as we are, but focused strictly on error correction. They have some extremely clever technologies that we like. In particular, what Riverlane has done very cleverly is real-time error correction, which is, as far as we know, the first anyone has done in the industry. The Google Willow paper that I referenced earlier, as good as it was in the November 2024 time period, that was offline error correction. It was slow; the error correction couldn't keep up with the speed of the quantum computer. With Riverlane, we have demonstrated that you can actually do real-time error correction in quantum computing.
So now the challenge is to bring it all together. When we get to 1,000 qubits at 99.9%, we need to also bring real-time error correction into the picture. And that's what we need for quantum advantage.
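The real-time constraint can be framed as a simple latency budget: the decoder has to finish each round of syndrome data at least as fast as the hardware produces the next round, or the backlog grows without bound. A minimal sketch with hypothetical numbers:

```python
# Minimal latency-budget framing of real-time vs. offline decoding.
# All numbers are hypothetical; the point is the inequality, not the values.

syndrome_round_s = 1e-6            # assume a syndrome-extraction round every ~1 us
decode_time_per_round_s = 0.8e-6   # hypothetical decoder throughput

if decode_time_per_round_s <= syndrome_round_s:
    print("Real-time: decoder keeps pace; corrections can steer the computation.")
else:
    backlog_growth = decode_time_per_round_s / syndrome_round_s
    print(f"Offline territory: backlog grows {backlog_growth:.1f}x per round, "
          "so decoding lags ever further behind the quantum hardware.")
```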
OK. You had mentioned partnering not only with Riverlane, but also NVIDIA on the NVQLink platform. What does NVQLink bring? I thought that part of that was an ability to do quantum error correction, syndrome extraction, decoding, which it sounds like is what you're partnered with Riverlane for. So how do Riverlane and NVQLink coexist in the system?
I mean, NVIDIA's role right now is to look at integration with HPC. So their view is, how does the hybrid ecosystem come together? That's exactly consistent with our view. Neither of us believes that a quantum computer will replace a classical computer. I know some of our peer companies think of it that way, but we certainly don't, NVIDIA certainly doesn't, and many others don't either. Our view is that a quantum computer will coexist in a data center with classical computing. So you'll have CPUs doing sequential computing; no reason to do additions and subtractions on the quantum computer. GPUs will continue to do parallel processing; again, no reason to do parallel processing with a quantum computer. But simultaneous computing, which CPUs and GPUs struggle with right now, where you have thousands of variables interacting simultaneously, that part will come to the QPU.
So our view is that distribution layer software like CUDA Quantum will trifurcate the problem, if you will. Sequential will stay with the CPU. Parallel will stay with the GPU, just like what's happening in the data center today. But a new stream will be defined for simultaneous, and that will come to the QPU. And then it will all come back together. So the end user will not even know that it is working at that level, OK? The end user will just see benefits in terms of speed and the accuracy of model convergence and things like that. But they won't know how the trifurcation is done. And that's what NVIDIA is really good at: the integration with HPC, CPU, GPU, the trifurcation, and all of that.
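As a caricature of that trifurcation, here is a toy dispatcher. This is emphatically not the CUDA Quantum API; it just illustrates routing subtasks by their structure, which the real distribution layer would do inside the compiler and runtime, invisibly to the end user.

```python
# Toy caricature of the "trifurcation" idea. This is NOT the CUDA Quantum
# API; it only illustrates routing work by structure, as described above.

from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    kind: str  # "sequential" | "parallel" | "simultaneous"

def route(task: Subtask) -> str:
    return {
        "sequential": "CPU",    # additions, control flow, bookkeeping
        "parallel": "GPU",      # data-parallel math, as in today's data centers
        "simultaneous": "QPU",  # thousands of variables interacting at once
    }[task.kind]

workload = [
    Subtask("preprocess-inputs", "sequential"),
    Subtask("matrix-multiply", "parallel"),
    Subtask("sample-entangled-state", "simultaneous"),
]
for t in workload:
    print(f"{t.name:<24} -> {route(t)}")
```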
Error correction, which is at a much lower level than the distribution layer, that's where you are going into the real raw quantum signal and trying to see whether the signal is right or wrong, creating redundancy, and those kinds of things. That's where we are partnering with Riverlane. So I think both will coexist. There will still be error correction with Riverlane or someone else. But on top of that, there will be distribution layer software from NVIDIA or somebody else.
So, simply put, they're almost different layers of the stack.
Different layers of the stack, exactly.
Got it. OK. Let's shift to the DARPA QBI program. You were selected for stage A. You're awaiting selection on stage B. What feedback did you hear from DARPA on your ability to advance? What are they looking at? What do you need to show to advance to stage B?
The DARPA QBI is a big program, obviously commissioned by the U.S. government, to build what they call a utility-scale quantum computer. The timeline is roughly 2033. We are talking about hundreds of thousands of physical qubits, thousands of logical qubits, 99.99% type fidelity, real-time error correction, and of course very fast gate speeds. That's the goal that DARPA has set up for all of us. We are in stage A right now, along with some other companies like Google. They chose a few companies, including IBM, IonQ, and a few others, for stage B. The feedback we got was that we need to demonstrate a clear roadmap on the error correction side and the long-range coupling side to get to stage B. We continue to do that. We'll do that in the next few months and get into stage B.
They have also created a stage C, and they have chosen a couple of companies in stage C right now: Microsoft and PsiQuantum. So DARPA's view is that it's an open-ended, long-term program. As companies show different paths to the end goal, they will take us to different stages.
OK. You talked earlier about some of the issues you ran into with the tunable couplers that have you respinning the QPU. Does any of that learning apply to long-range couplers? And can that help accelerate your development of the long-range couplers, which I think you said were important for quantum error correction?
In general, yes. Coupling is an important part, as we just saw. But specifically, when DARPA talks about long-range coupling, or even when we talk about long-range coupling, we are talking about coupling across millimeters or centimeters. For instance, when we are thinking of a 1,000-qubit system, we are talking about a physical dimension of a few centimeters by a few centimeters for the effective combination of chiplets. So when we talk about long-range coupling, we are talking about how you couple a qubit at one corner of the lattice to a qubit at the other corner of the lattice, and what technologies we deploy so that those two qubits are seamlessly entangled with each other. That's what we mean by long-range coupling.
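The cost of not having long-range coupling can be made concrete: on a nearest-neighbor lattice, entangling two distant qubits means routing through intermediate qubits, so the overhead grows with lattice distance. A toy estimate, with assumed decomposition costs:

```python
# Toy estimate of nearest-neighbor routing cost on a square lattice.
# Without long-range coupling, a two-qubit gate between distant qubits is
# compiled into a chain of SWAPs, and each hop adds gates (and error).

def manhattan(a: tuple[int, int], b: tuple[int, int]) -> int:
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

corner_a, corner_b = (0, 0), (31, 31)  # opposite corners of a 32x32 (~1,000-qubit) grid
hops = manhattan(corner_a, corner_b)   # 62 lattice steps

GATES_PER_SWAP = 3  # a SWAP decomposes into 3 two-qubit gates
extra_gates = (hops - 1) * GATES_PER_SWAP

print(f"{hops} hops -> ~{extra_gates} extra two-qubit gates per distant interaction")
```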
So it's not the same issue, since the couplers interacting with the couplers are adjacent.
But short range.
So it's a shorter-reach problem rather than long.
Yeah.
OK. Do you have line of sight to how to develop these tunable couplers? Do you feel like you've got a good strategy in place for that?
Yeah, absolutely. I mean, the tunable coupler technology, both we and Google have been pushing it for a while now. There are obviously some challenges with tunable couplers, but we like them. And by the way, IBM has now switched to tunable coupler technology too. So effectively, IBM kind of conceded that Google and we had chosen the right technology with the tunable couplers we moved to a few years ago. And the main reason for that is you get an extra degree of freedom in entanglement. With the fixed coupler technology that we used to use, and that IBM was still using until about a couple of months ago, the entanglement either works or it doesn't. And if it doesn't work, you're out of luck. There's nothing you can do.
With a tunable coupler, you can essentially, like an analog device, move the frequency of the coupler around till it hits the right sweet spot. So that extra degree of freedom becomes extremely critical to getting the chip to work properly. So we believe in tunable coupler technology, so does Google, and now so does IBM. We think that's the right way to go. There are obviously some challenges, because they tend to couple with each other, as we saw firsthand. But we'll all get over it and figure it out. So we think it's a good technology to go with long term.
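That extra degree of freedom can be sketched as a one-dimensional tune-up: sweep the coupler's control knob and pick the point that minimizes the residual unwanted coupling. The quadratic response below is a toy stand-in, not real device physics:

```python
import numpy as np

# Toy tune-up sweep for a tunable coupler. The quadratic "residual coupling"
# model is a stand-in, not real device physics; the point is that a tunable
# element gives you a knob to search, where a fixed coupler gives you none.

def residual_coupling_khz(flux_bias: float) -> float:
    # Hypothetical response with a sweet spot near flux_bias = 0.37
    return 120.0 * (flux_bias - 0.37) ** 2 + 2.0

biases = np.linspace(0.0, 1.0, 1001)
residuals = np.array([residual_coupling_khz(b) for b in biases])

best = biases[residuals.argmin()]
print(f"Sweet spot near flux bias {best:.3f}, "
      f"residual coupling ~{residuals.min():.1f} kHz")
# A fixed coupler is a single point on this curve: if it lands far from the
# minimum, there is no knob to turn.
```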
Moving to customers and QPU sales. You shipped multiple QPUs or systems last year. Can you discuss who you're shipping these QPU systems to? And what are some of the applications they're developing, or what are the use cases of the QPUs you're shipping?
So we have shipped multiple QPUs, or systems, if you will. The main ones that we have publicly disclosed are Fermilab in Chicago, the Air Force Research Laboratory in New York, Montana State University, a company in Singapore called Horizon Quantum Computing, and a few others like that. The big one is the National Quantum Computing Centre in the U.K.; they're using our system over there. We announced in the second half of last year that we got two commercial orders for upgradable nine-qubit systems, one from a large ODM in Asia and one from a startup in California doing quantum research. I want to say that all of these systems we have sold to date, and even what we are talking about right now, are primarily for research applications. No one is talking about putting a quantum computer in a data center for practical workloads yet. And why should they?
We are not at quantum advantage yet, so there's no reason to put a quantum computer on practical workloads. The systems work, but customers are still testing out applications and different parts of the stack, and they are doing research themselves. Our view is that for the next three to five years, the market is going to be on-premise quantum computing systems for national labs and universities, plus a few commercial customers that are doing quantum research type work. But it's mostly going to be a research market.
OK. You had mentioned AFRL as a customer. You signed a $5.8 million three-year contract with them last year, I think focused more on quantum networking. Can you talk about that contract and Rigetti's thoughts on broader quantum networking? Is quantum networking something that you envision needing anytime soon?
Yeah. I mean, overall, because of our vision of a hybrid system and the way quantum computers will coexist with CPUs and GPUs, we believe most of the workload will be carried by existing networks. We don't think we are going to need a quantum network to start using quantum computers; the existing network should be able to handle quantum computers within the framework of a data center right now. But there are always going to be applications where your quantum computers need to talk to other quantum computers, whether it's across the room or across the ocean. And that's where quantum networks become extremely important. There may be some unique applications where you need to create a quantum network, and that's where we are partnered with AFRL.
Our view is that with superconducting technology, where we use microwave signals, because that's what the qubits respond to, the key technology enabler for networking is the conversion of microwave signals to optical signals. It's a lot easier to network with optical signals than microwave signals, obviously. As all of us know from home, a microwave signal doesn't quite work even a few centimeters away, let alone meters or hundreds of kilometers. So optical technologies are a lot better for networking. So a lot of our effort, and AFRL's and other people's efforts, is focused on the conversion of microwave to optics, and vice versa, without any loss of signal fidelity. That's what those projects are about: how do you convert microwave to optics and vice versa?
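The physics motivation can be put in rough numbers: standard telecom fiber loses about 0.2 dB/km at 1550 nm, while microwave coax loses on the order of dB per meter at GHz frequencies. The coax figure below is an illustrative assumption; real values depend on the cable and frequency.

```python
# Rough attenuation comparison: why networking happens in the optical domain.
# Fiber loss (~0.2 dB/km at 1550 nm) is a standard figure; the coax loss is
# an illustrative assumption -- real values vary with cable type and frequency.

FIBER_DB_PER_KM = 0.2
COAX_DB_PER_KM = 500.0  # i.e., ~0.5 dB/m at GHz frequencies (assumed)

def surviving_fraction(db_per_km: float, km: float) -> float:
    return 10 ** (-db_per_km * km / 10)

for km in (0.1, 1.0, 50.0):
    print(f"{km:>5} km: fiber keeps {surviving_fraction(FIBER_DB_PER_KM, km):.3%}, "
          f"coax keeps {surviving_fraction(COAX_DB_PER_KM, km):.2e}")
# Over 50 km, fiber still delivers ~10% of the signal; coax delivers
# essentially nothing -- hence microwave-to-optical transduction.
```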
OK. You had mentioned research is a big use case for your systems over the next three years, and a lot of that is government-funded. Late last year, in November, the DOE announced $625 million in funding for the National Quantum Information Science Research Centers. How does that benefit, or how can it benefit, Rigetti?
It absolutely benefits not only Rigetti, but our whole quantum ecosystem. In the U.S., the original National Quantum Initiative Act, NQI is what we call it, was passed in 2018. It was for five years, and it ran out of money in 2023. Unfortunately, it didn't get renewed in time. That reflects on our revenues, if you will, because the contracts we were getting from Fermilab and Oak Ridge National Laboratory, all that money basically went to zero, because they were not getting any money. The Trump administration wants to pass an NQI Reauthorization Act, which got introduced to the floor last week. There seems to be bipartisan support, so hopefully something will pass here soon. That's talking about a much bigger number than the original amount of $125 million a year, or $625 million over five years.
But in the meantime, thankfully, what the Trump administration has done is at least resuscitate the funding back to the original level. What we heard a couple of months ago was that funding is going back to that original NQI amount. And now there's a bill being debated in Congress for a much bigger NQI reauthorization.
Yeah, I was going to say, that's my next question. As we saw last week, there's the National Quantum Initiative Reauthorization Act of 2026, and I think we saw one of these in 2025. So there are a number of bills that have been put in front of Congress. What do you think it takes to get that bill finally passed?
An executive action from Trump.
OK. And obviously, if that reauthorization bill is passed, it has significantly higher amounts of funding for the next five years than the 2018 act.
Yeah. I mean, overall, we are confident, based on everything we have heard from various bodies in the government, that quantum funding for companies like us in the U.S. is going to increase substantially compared with what it has been in the past. The various organizations that are talking about quantum computing are the DOE, through the NQI Act, and the DOD, or Department of War. We have seen different numbers being tossed around, about 50%, and we will see what the exact budget is for next year. But there are a number of line items in that budget bill that cover quantum computing. That's where we are getting money from AFRL, and even the DARPA program is all related to that. And then the Commerce Department is now getting involved in quantum computing too, through their initiatives. So there are multiple revenue sources coming towards quantum.
But the bottom line is the U.S. government is committed to significantly increasing funding in quantum computing, with the overarching goal being that the U.S. should not cede the lead we have in quantum computing right now to some other country like China, which is investing extremely heavily in quantum computing right now.
Last couple of questions for me. Just over the last 12 months, you have significantly strengthened your balance sheet. Maybe just talk about the priorities for cash going forward. And will you look at potential M&A transactions? Or are you sort of focused on internal gate fidelity type metrics in the near term?
So last quarter, we were at about $600 million in cash. We have no debt on the balance sheet. Our burn rate is roughly about $75 million a year. So we feel pretty good that we are well funded to get to that quantum advantage point. And really, given where we are in technology, we feel we have it all under organic control. We don't need to go acquire somebody else to help our technology roadmap. Having said that, if there is a company or companies out there that would help us accelerate our roadmap, we'll absolutely take a look. But most of our effort right now is organic, focusing on technology milestones.
Perfect. Last question for me. Just are there any parts of the Rigetti story you think investors may misunderstand? Or any last messages for the audience?
Overall, we think quantum computing is obviously an exciting area to be looking at, with huge potential. But you need to have patience. This is not a next-year story, or even a next-two-years story. If your timeline is truly a five- to 10-year type horizon, this is an exciting area to look at, and Rigetti is a super exciting company to look at. If you want to be a trader, there are maybe better opportunities out there.
Excellent. Well, we will end it here. Subodh, thank you very much for joining us at the Needham Growth Conference.
Thank you, Quinn.