Hello, everyone, and good afternoon. Thank you for joining us on the second day of Needham's 20th Annual Technology, Media, and Consumer Conference. My name is Quinn Bolton. I am the Semiconductor and Quantum Computing Analyst for Needham. It's my pleasure to host this fireside chat with Rigetti. The company is a pioneer in full-stack quantum computing. The company has operated superconducting quantum computers over the cloud since 2017 through its cloud services platform and has been selling on-premise quantum computers and quantum processors since 2021. The company developed the industry's first multi-chip processor and manufactures its superconducting QPUs in-house at the industry's first dedicated quantum device manufacturing facility. Joining me from the company is President and CEO, Dr. Subodh Kulkarni. Subodh, thank you for joining us.
Hey, Quinn. Good to see you again.
Before I turn to my questions, I just wanted to remind those of you watching the webcast, if you do have any questions for Subodh, please submit those to the dialog box on your screen. I will sort of monitor those questions and try to work them into the fireside chat. With that, let me get right to the questions. Subodh, we've seen increasing interest in quantum computing over the last six to 12 months, and so there may be some investors who are newer to the company. Maybe let's just start with a basic question. Can you give investors sort of a quick overview and history of Rigetti Computing?
Yeah, so quick high-level overview of quantum computing. It's an exciting new emerging technology area. Fundamentally, we do computing very differently than classical computing. We use qubits instead of bits, and qubits can exist in superpositions of different states, and they can be entangled. What that enables us to do is compute exponentially better than what classical computing can do. If you have n bits, your computing power scales linearly, as two multiplied by n; if you have n qubits, your computing power scales as two raised to the n. Also, the energy consumption goes down by the same factor. We have not only significantly higher computing power than classical, but also significantly lower energy consumption. That generates a lot of excitement given the needs for computing applications.
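To make that contrast concrete for readers, here is an illustrative sketch (my addition, not part of the conversation): a classical n-bit register holds exactly one of its states at a time, while an n-qubit register's state vector carries 2^n amplitudes.

```python
# Illustrative sketch: classical capacity grows linearly with bit count,
# while an n-qubit state vector spans 2**n basis states at once.
def classical_register_size(n_bits: int) -> int:
    # A classical n-bit register stores exactly one n-bit value at a time.
    return n_bits

def quantum_state_dimension(n_qubits: int) -> int:
    # An n-qubit state vector has 2**n complex amplitudes.
    return 2 ** n_qubits

for n in (10, 20, 30):
    print(n, classical_register_size(n), quantum_state_dimension(n))
```

At 30 qubits the state vector already has over a billion amplitudes, which is the exponential gap Subodh is describing.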
Within the quantum computing space, Rigetti started a little more than 11 years ago in, as you mentioned, superconducting, specifically gate-based quantum computing. We are using gates just like classical computers. We build gates out of qubits, and then those gates are doing the calculations, the pluses, minuses, and everything else. With superconducting, we are fundamentally using superconducting materials at very cold temperatures. That is where we create the quantum states, and that is how we do computation. Rigetti started about 11 years ago, went through series A, B, C, and then went public in March of 2022. As you mentioned earlier, along the journey, we have deployed several quantum computing systems in cloud environments through AWS, Microsoft Azure, as well as our own cloud. More recently, as you mentioned, we started selling on-premise quantum computers, primarily to the academic community and government national labs.
Exciting area, exciting space, but still very much in the R&D phase. We are still perfecting the technology, and we believe we have a couple more years to go before we can really say quantum computers are here to take on commercial applications.
That's great. You mentioned that you're a superconducting quantum computer company. Maybe just talk about the different quantum computing modalities that investors may hear about in the industry and maybe touch on just the advantages of the superconducting modality for folks.
Certainly, I'll get to that. Before I get into superconducting versus other modalities, let me first talk about gate versus non-gate approaches, because there's a lot of confusion for investors who are new to this. Gate-based quantum computing, which is what we do and many other companies do, is basically very similar to classical computing. We create gates out of these qubits, and then we use these gates to do the computation, just like your classical computers do. Because quantum mechanics has unique properties, there are some companies that are taking advantage of those unique quantum properties and creating computers with that, such as annealing (there's a company called D-Wave that does that) or entropy computing (there's a company called Quantum Computing Inc.; even though the name says quantum computing, they are doing entropy computing). Those approaches are fundamentally very different from gate-based quantum computing.
They are taking advantage of some unique quantum states, and that helps them solve some specific applications. In general, our view is that gate-based is going to be the dominant part of quantum computing. Most of the applications are going to use gate-based quantum computing. There will always be some unique applications that may be suitable for annealing or entropy computing, but those are going to be niche areas. I'll focus on gate-based from here on. Within gate-based quantum computing, there are many different modalities. We belong to the superconducting camp: we create superconducting states, and that's how we create quantum states. There are other modalities like trapped ion, neutral atoms, spin, and many others. There are pros and cons to the different technologies. Each technology has its strengths and some weaknesses.
The reason we, along with IBM, Google, Amazon, Microsoft, the government of China, Fujitsu, Toshiba, a lot of large tech companies and governments invest in superconducting gate-based quantum computing is because of the two primary advantages with superconducting technology, which are scalability and gate speed. We fundamentally use semiconductor chip technology. Once we have a certain number of qubits, we know how to multiply it and keep increasing the qubit count relatively easily, exactly using all the tricks that we have learned over the last five decades of the semiconductor industry. We feel pretty good about the scalability concept with superconducting quantum computing. The other big advantage we have is gate speeds. We are essentially dealing with electrons, very similar to CPU and GPU technology right now. Our gate speeds are in tens of nanoseconds, very comparable to CPU and GPU speeds right now.
When you contrast us to some other modalities like trapped ion or neutral atoms, they're dealing with ions and atoms, which are by definition significantly bulkier than electrons. Their speeds are a lot slower, on the order of 10,000 times slower than where we are. When we talk about tens of nanoseconds, they're talking about hundreds of microseconds. Intrinsically, there's a factor of 10,000 speed difference, and of course, speed matters in any computing system. That's a critical advantage for superconducting over trapped ion and neutral atoms, but so is scalability. They're dealing with tricky challenges in multiplying the number of ions under extremely high vacuum conditions. The strength they have is fidelity. Historically, because they are dealing with pristine individual ions and atoms, they were always higher in fidelity.
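A quick back-of-the-envelope check of the speed gap described here, using the round numbers from the conversation (tens of nanoseconds versus hundreds of microseconds per gate; illustrative figures, not measured specs):

```python
# Illustrative round numbers from the discussion, not measured device specs.
superconducting_gate_time_s = 50e-9   # tens of nanoseconds per two-qubit gate
trapped_ion_gate_time_s = 500e-6      # hundreds of microseconds per gate

# Ratio of gate times gives the intrinsic speed advantage.
speedup = trapped_ion_gate_time_s / superconducting_gate_time_s
print(f"superconducting gates are ~{speedup:,.0f}x faster")
```

The ratio comes out to the factor of 10,000 cited in the conversation.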
What has changed in the last six months is a series of announcements you have seen from companies like us, as well as Google, Amazon, Microsoft, and IBM. They're all in that 99%, 99.5% range for median two-qubit gate fidelity, which is a very critical metric: it measures how accurately your two qubits are entangling. Historically, that number was in the mid-90s or high 90s. In the last six months, you are starting to see 99%-99.5% from all of us. If anything, Google's Willow paper is at 99.7%; that's the highest I have seen for the superconducting modality. Frankly, all these numbers are comparable to or better than what trapped ion, neutral atoms, and other modalities are reporting. The one area where we were historically weak, we have caught up on, and our strengths in scalability and gate speeds continue.
That's why you are seeing the superconducting camp being even more bullish now going forward: the one area of weakness has been taken care of, while the strengths continue. We feel pretty good about where we are going to be in the future.
Yeah, my next set of questions is on the roadmap side. Since you took over as CEO, you've really focused Rigetti on improving the company's two-qubit gate fidelity. Maybe just spend a minute talking about the fidelity milestones you hit in 2024, and what is your roadmap for 2025 in terms of further improvements in gate fidelity?
When I joined Rigetti, getting close to two and a half years ago, Rigetti was already on the path of an open modular architecture with multiple chiplets, the chiplet approach, which I very much liked; it was one of the main reasons I agreed to take this job. The big issue with Rigetti at that time was our fidelity. We were in the mid-90s for the two-qubit gate fidelity, which is a critical metric. That is why I focused the company exclusively on that metric, not worrying about increasing the qubit count or any other metric. From the mid-90s, we went to the high 90s in 2023.
Last year, we announced that our latest system, which we have now deployed on AWS and Azure, we call it Ankaa-3, has 99.5% median two-qubit gate fidelity with a particular kind of gate that we call the fSim gate. For more generic gates like iSWAP, it is a 99.0% median two-qubit gate fidelity system. So 99%-99.5% is where we are, almost a factor-of-five reduction in error rate from where we were when I joined the company. Our focus will continue there. That is the critical metric even right now. 99.5% sounds good, but frankly, it is not good enough. We need to get to 99.8% and maybe even 99.9% fidelity before we can really use quantum computers for commercial applications. Our focus will continue to be on fidelity. What we have said is now is the time.
At 95%, it didn't make any sense to increase the qubit count. It honestly makes no difference at 95% fidelity whether you have 100 qubits or 100 million qubits; the performance is the same because it's just dominated by the errors. At 99%, 99.5%, which is where we are today, qubit count starts making a difference. What we have announced is that this year, we want to demonstrate more than 100 qubits with our chiplet approach, with 99.5% on generic gates and around 99.7% on fSim gates. That is what we are targeting for this year. Next year, we will continue to ratchet it up another notch: we will go up in fidelity, we will increase the qubit count, and that is where we will go.
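The point that fidelity, not qubit count, dominates at 95% can be sketched with a simple independence assumption (my simplification for illustration, not Rigetti's model): if each two-qubit gate succeeds with probability F, a circuit with g such gates succeeds with probability roughly F to the power g.

```python
def circuit_success_probability(fidelity: float, n_gates: int) -> float:
    # Crude model: gate errors are independent, so success compounds as F**g.
    return fidelity ** n_gates

# A modest 100-gate circuit at different median two-qubit gate fidelities:
for f in (0.95, 0.99, 0.995, 0.999):
    print(f"F = {f}: success ~ {circuit_success_probability(f, 100):.3f}")
```

At 95% fidelity, a 100-gate run succeeds well under 1% of the time no matter how many qubits you have, while at 99.9% it succeeds about nine times out of ten, which is why pushing fidelity first made sense.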
You mentioned that tiling of QPUs to increase qubit count is part of your roadmap this year. Maybe just spend a minute on that approach. I think it's unique, or at least Rigetti was first to market with a tile-based architecture.
Yeah, that's a fundamental differentiating factor for us compared to other players in the superconducting gate-based quantum computing world. We believe in an open modular concept. Our entire stack is designed in layers, if you will, where we can separate out the layers and incorporate new innovative solutions from outside, like the NVIDIA CUDA Quantum layer or Riverlane error correction, whereas IBM and Google have the philosophy of designing a quantum computer like a mainframe computer. Fundamentally, we have a very different concept of how a quantum computer should be architected than some of the other players on the superconducting gate-based side. Within that, the key differentiator is the modular chip approach, the chiplet. We are definitely leveraging all the learning that the CMOS industry has accumulated over the years.
If you take any advanced CMOS application right now, whether it's a GPU or even your iPhone, fundamentally, if you look at the 3 nm and 2 nm node technologies, you are using chiplets. The main reason for that is it's a lot easier to build physically a smaller chiplet and control all the uniformities across that distance than a single monolithic die and try to control everything across larger distances. We find the same reasons are true in the quantum computing world as well. When we build a smaller, physically smaller chiplet, it's a lot easier to control everything on that dimension than a larger single monolithic chip. We started working on that in the 2018, 2019 time period. We have filed several patents. Many of them have started getting issued now. We are the first ones to enter this area with the chiplet concept.
The first of the systems we deployed on AWS a couple of years ago used the chiplet concept, so everyone knows it works; we have shown that it works. Now we are moving on to tiling more complicated assemblies. We have done it with two chiplets a couple of times now. Now we are going with four chiplets. The key milestone that we want to demonstrate this year is four nine-qubit chiplets, a 36-qubit quantum computer where the performance is the same as or better than a single monolithic die. That is the key milestone for us. As far as we can tell, that is the only practical way to scale up to thousands and hundreds of thousands of qubits.
We just don't see how you can build a single chip with 100,000 qubits and control it well enough to get performance of all the qubits identical. Whereas when you build smaller chiplets, it will be a lot easier to build 100 or 1,000-qubit chiplets, pick the right ones from your wafer, and then tile them together for a system, just like what they do in CMOS right now.
I guess in the CMOS world or the chiplet world, you do have some size limitations by the size of the substrate, not necessarily the size of the reticle limit anymore. Do you see or envision at some point running into similar limits with the tile approach, or do you think you can get to tens of thousands or maybe even hundreds of thousands?
Well.
Hundreds of thousands of qubits, using a tile approach in a single package?
What you said is exactly right. Everything that the semiconductor industry has gone through over the last decade in perfecting chiplets, we will be dealing with as well. Now, the good news for us is a lot of groundwork has already been done, because the semiconductor and CMOS industry pioneered the chiplet approach almost 15 years ago, if you go back and look at it. A lot of the fundamental issues around how you handle the substrates, how you do packaging, how you do cross-wiring, and all those things have already been worked out. We can take advantage of the last 10-15 years of semiconductor industry know-how and bring that into the quantum world. We are definitely looking at the state-of-the-art packaging done in the semiconductor CMOS chiplet world and carrying that over here.
Yeah, we will run into the same issues that the CMOS industry ran into and solved. The good part for us is they have solved the problems, so we can latch on to that right now. Also, our qubit dimensions are a lot more forgiving than a state-of-the-art CMOS transistor, by almost an order of magnitude. Our qubit dimensions are 30 nm, 40 nm, not the 3 nm, 4 nm that you are dealing with in CMOS. That gives us much more margin. We are not going to be building millions and billions of transistors on a single chip or chiplet; we are talking about thousands and tens of thousands of qubits. I know that sounds like a lot, but when you compare it to CMOS, we are still dealing with orders of magnitude less here.
We think we have an easier task to scale up chiplets because of the work that CMOS industry has done over the last decade or so.
Got it. I'll have one more on sort of the tiling or networking. Some of your peers are looking at quantum networking to connect multiple QPUs together. Is Rigetti looking at that, or would you envision at some point needing a quantum networking capability?
Yeah, there is a philosophical difference here in how we view quantum computers playing in the ecosystem compared to some of our peers. Our peers definitely view it as: you have to create a separate network of quantum computers to get quantum computers to be practically usable in data centers. Our view is that quantum computers will coexist with CPUs and GPUs and use existing networks to make them useful. We fundamentally believe in the concept of hybrid computing, as opposed to a completely separate network for quantum computing. Partly that comes from the fact that our gate speeds are comparable to CPUs and GPUs, so we can envision a world where they can all coexist right now. When you deal with trapped ion and neutral atoms, they are 10,000 times slower.
Almost by definition, they cannot work with CPUs and GPUs directly without creating some massive buffers and all kinds of problems. In some ways, because of their gate speeds, they are forced into thinking about those kinds of things, whereas we at least have the alternative of leveraging existing networks. Having said that, we are looking at quantum networks ourselves. There are some specific applications where you will need quantum networks. That is some of the work we are doing on converting microwave signals to optical signals. You have seen a lot of press releases from us in the last few months where we are working with companies like QphoX, as well as with Harvard, MIT, and some other organizations, on converting microwave signals not just to optical signals, but into fiber optics.
We think that is the right way to potentially network quantum computers, certainly with classical computers, but with each other as well. We are doing our part in quantum networking, but our fundamental philosophical difference is that we believe quantum computers will coexist in a hybrid setup. Quantum networks are a nice thing to have, not a necessity.
Got it. Okay. I wanted to come back. You had mentioned the Google Willow announcement from last December. I think one of the very interesting parts of that announcement was that Google was able to show sort of better quantum error correction or reduced error rates as you increase the size of the error codes. I wanted to spend a minute. Can you sort of just discuss Rigetti's work that you're doing? I know some of this is done with Riverlane. What is Rigetti's approach to quantum error correction, both in terms of are you looking at surface codes or low-density parity check codes and explain the partnership with Riverlane? What does Riverlane bring to the partnership?
Sure. We fundamentally believe in error correction. Error correction is an absolutely critical element of any stack. Without error correction, you will never have working computing systems; even in the current classical world, we all use computers because error correction works. Quantum computers are no different: we will need error correction. So far, error correction has not been the bottleneck because, frankly, fidelity has been the dominant factor. As we get to the 99.5s and 99.7s and 99.9s in the future, error correction is going to become the controlling, limiting factor for system performance. We were doing our own error correction, but then we saw the innovative solutions from Riverlane in Cambridge, U.K. It is a company of comparable size to us; we are about 140, 150 people. They are doing some excellent work in error correction.
That's their sole focus. Our open modular approach allows us to incorporate these kinds of innovative solutions from outside, as I mentioned earlier. We decided to partner with them and include their error correction code. That's what we are working on right now. You're right, the big part of the Google Willow chip announcement, besides the high 99.7% two-qubit gate fidelity, was also demonstrating a surface error correction code. Certainly, that's a very important demonstration, because for the first time, someone showed that as you increase the number of qubits, your actual cumulative errors come down; they don't keep increasing. That's what error correction can allow you to do. Great result, great paper. Obviously, surface error correction codes have their limits. I won't go into all the details about different types of error correction codes.
You mentioned low-density parity check codes, which are what is used in major commercial conventional computing systems right now. For the same reasons that classical computing uses LDPC codes, we also believe quantum computers are going to need LDPC codes; we call them QLDPC codes. We certainly think those are going to be needed a few years from now, when you really get into commercial applications. We are working with Riverlane right now on how to first develop QLDPC codes, and then how to incorporate them into the stack. Exciting work. You will continue to see more press releases coming from us and Riverlane in this area.
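The below-threshold behavior described for the Willow demonstration can be sketched with the standard surface-code heuristic (a textbook approximation, not Google's or Rigetti's actual model): when the physical error rate is below the code's threshold, the logical error rate is suppressed exponentially as the code distance d, and with it the qubit count, grows.

```python
def logical_error_rate(p_phys: float, p_threshold: float,
                       distance: int, prefactor: float = 0.1) -> float:
    # Textbook surface-code heuristic: p_L ~ A * (p/p_th)**((d+1)//2).
    # Below threshold (p_phys < p_threshold), each +2 in code distance
    # multiplies the logical error rate by roughly (p_phys / p_threshold).
    return prefactor * (p_phys / p_threshold) ** ((distance + 1) // 2)

# With a physical error rate 10x below threshold, more qubits means fewer errors:
for d in (3, 5, 7):
    print(f"distance {d}: logical error ~ {logical_error_rate(1e-3, 1e-2, d):.1e}")
```

This is the key inversion error correction buys: once you are below threshold, adding qubits reduces cumulative errors instead of compounding them, which is what the Willow paper showed experimentally.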
I was just going to say, I think both companies recently won an Innovate U.K. award to help fund some of that work, correct?
Absolutely. Yeah, Innovate U.K., and the government of the U.K. in general with the National Quantum Computing Centre; they have our physical quantum computer there. It's a 24-qubit system. What we announced is that the government of the U.K. wants us to upgrade that to a 36-qubit system, incorporating the chiplet approach. They want to see how the chiplet approach works. In parallel, we want to bring in Riverlane and take their latest error correction codes. Not quite QLDPC yet, but one level below QLDPC. The important focus is going to be real-time error correction. Google's demonstration was exceptionally good, but it was not real-time error correction.
What we want to show is that not only can you have these fast gates with superconducting, but you can actually do error correction at the rate at which you are generating the data, which is important in the long term. We want to demonstrate real-time error correction codes. Eventually, that will go to real-time QLDPC kind of codes. That is what you really need to get into the commercial applications.
Before we get into some of the customers and other awards, you mentioned the work you're doing, I think it was with QphoX, hopefully I have that name correct, and the NQCC in the U.K. on optical readout. Maybe spend a minute on that. I think you've talked about how, as you're scaling quantum computers, control signals over coax cables become a limiting factor. It looks like you're working on an optical capability that reduces the heat load in the cryo chamber. Maybe spend a minute discussing that award and the work you're doing on optical readout.
Sure. Right now, if you look at all of us, the leaders in the superconducting gate-based camp, we are roughly at the 100-qubit level. We are all using dilution refrigerators today. We are all using coax cables to do the signal in, signal out. It works, obviously. The challenge we see is as we go from 100 to 300, 500, and 1,000 and above, physically, you do not have enough room in the dilution refrigerator unless you start building massive dilution refrigerators, which is a whole different complexity by itself. We need to somehow figure out physically how we can fit in all the signal in, signal out cables. One obvious approach is going from coax cables to flex cables. Flex cables have been around, obviously, for a long time.
With flex cables, we have these multiple stages, because we also have to cool the chip down to 10 mK, 10 millikelvin, which is pretty close to absolute zero. We need different kinds of cables at different stages: flex cables that are sometimes thermally insulating, sometimes thermally conducting, sometimes electrically insulating, and so on. It gets quite complex as to what kind of cables you need at different stages. Converting all of the coax cables to flex cables is a challenge by itself. The whole industry is going through that along with us. The first thing we need to tackle, in the next year or two, is going from coax to flex cables. A lot of the early groundwork has already happened, along with us; there are many other companies doing some excellent work in this area.
We are pretty confident we will be able to move from coax to flex. But once you start getting to 100,000 qubits and above, which is what you need for fault-tolerant, utility-scale quantum computing, basically the DARPA goal (DARPA wants us to build a quantum computer that can do utility-scale computing, which means hundreds of thousands of physical qubits), you cannot fit even flex cables into a practical dilution refrigerator. That is where we start talking about optical signaling. Obviously, to use optics, we have a constraint: the way superconducting circuits fundamentally work, we are restricted to microwave frequencies, 4 GHz or 5 GHz. We need to figure out how to convert microwave to optics.
That is what the work so far has been, along with QphoX and other organizations like Harvard and MIT, where we are focused on conversion of microwave to optical signals. One of the nice parts about the paper we published with QphoX in Nature Physics a few months ago was showing that you can do more than just free-space optics: we can do it over fiber optics, which makes it a lot more practical to implement. Now we can start talking about putting a fiber optic cable inside a dilution refrigerator to bring your signal in and signal out. That is a lot more practical than a flex cable. When we start envisioning systems that are more than 10,000 qubits, where flex is not going to work, we think fiber optics is the way to go.
Right now, what we are doing is basic research on understanding the conversion of microwave to optics, the losses, and everything that goes with it. We are staging the technology, if you will. In a couple of years, three years certainly, we will be ready to put fiber optics inside a dilution refrigerator. We think that's what's needed to get to the DARPA milestone of utility-scale quantum computing about five, six years from now.
Yeah. I was going to go sort of there next with some of the recent announcements and awards. DARPA's Quantum Benchmarking Initiative is certainly a big deal in the quantum industry. In early April, Rigetti and several other quantum computing companies were invited to participate in the first round of that program. Maybe spend a minute just giving the background of the QBI initiative at DARPA. What is ultimately the goal? You'd mentioned the utility-scale quantum computer by 2033 and just sort of funding available through this program. What are the next steps as you progress to a final company that is awarded that program?
Yeah, sure. We are excited to be part of the DARPA Quantum Benchmarking Initiative, the QBI program. A lot of the information about that program is available publicly; you can just do a search on the internet to get access to it. The openly stated goal is that DARPA wants us to build what they call a utility-scale quantum computer. We have used the term fault-tolerant quantum computing before, and it is similar to that. Essentially, you can do practical applications significantly faster and better than what your conventional computing can do. That's the goal. The timeline is 2033. It's a man-on-the-moon, moonshot type of initiative. They really are pushing all of us, saying: how will you get us there?
Frankly, that timeline is far faster than what we were internally contemplating, because to get to what they are asking, we are talking several hundred thousand physical qubits. That is what you are going to need. You are going to need 99.9% two-qubit gate fidelity or better. You are going to need faster than 10 ns gate speeds. And you are absolutely going to need real-time QLDPC-type error correction codes. Four things again: you need probably more than 100,000 physical qubits, 99.9% or better fidelity, faster than 10 ns gate speeds, and real-time QLDPC-type codes. Assuming you can get those four critical things, and there are probably 100 other things that I am not going into right now, you will be able to demonstrate a utility-scale quantum computer.
That is what we proposed, and that is what we will do. Challenging task, exciting project. We are proud to be part of the initial 15 companies that they have chosen; we understand probably 100 or so companies applied. In a way, it helps everyone understand who's where right now. We are proud to be part of that group of 15 chosen for phase A. In roughly six months, there is going to be a phase B decision. That is a very important decision, and obviously, we want to be part of the go-forward group. My guess is they will probably reduce the number by a factor of two or three, so you are talking about five to seven companies at that point. As for our goal, we know we need to demonstrate what we have already publicly stated.
We want to demonstrate the multi-chip architecture, the chiplet architecture. We want to continue to show higher fidelity, faster gate speeds. That's what we want to show. Right now, for phase B, error correction is not that important because we are still dealing with physical hardware situations. About a year from now is phase C, which is where you are going to be dealing with one or two companies. That is where the award becomes substantial. It is going to be in the hundreds of millions of dollars at that point. They have not disclosed the final number because in a way, they want to keep it open depending on how many companies and what exactly is being discussed. It is going to be a substantial award for the final one or two companies that are going to win it. We are talking hundreds of millions of dollars.
That is really where we want to be. To get there, we definitely need to hit the milestones we have publicly disclosed by the end of this year and continue to improve on fidelity, qubit count, and gate speeds. It is an exciting milestone. We are really proud to be part of this project. That is certainly driving our roadmap right now.
Yeah, no, it's definitely an exciting project, fun to watch. It'll be very interesting as we move into the fall to see who's selected to move on to phase B. Good luck with that; we'll certainly be watching. Another customer announcement, this one definitely with an eye toward commercialization of quantum computing, is your collaboration with and strategic investment by Quanta Computer in Taiwan. Obviously, many investors know Quanta Computer, a very big player in the computing world. Spend a minute discussing your collaboration, what both companies have committed to as part of it in terms of funding and developing quantum technologies, and just touch on the investment, which was also part of that announcement.
Quanta Computer, based in Taiwan, is well known. They are a big-time ODM player in the computing world. Last year, they were at about $43 billion in sales and growing very rapidly. One of the main drivers of their growth right now is GPU servers; they are a very, very close partner of NVIDIA. As many of us know, NVIDIA designs the chips and TSMC builds the chips. Usually, companies like Quanta take over at that point and build the rest of the hardware around them. They are the ones physically selling those systems to the cloud providers, the hyperscalers, Amazon, Microsoft, and so on. That is the way the current GPU ecosystem works. Quanta Computer on their own fully understood that quantum computing is the next big thing after GPUs. They have been looking at where to invest.
Our understanding is they concluded that superconducting gate-based is the right area to invest in, just like many of us and other large companies. Within that camp, they talked to many companies. They looked at where we are and what our roadmap is, and chose us. We are really proud to have been chosen by Quanta Computer. From our side, we knew that long-term, we needed an ODM player. It does not make sense to build CPU, GPU, FPGA boxes in Berkeley, California, which is where I am. That is not exactly your high-volume, low-cost manufacturing location by any means. Long-term, we knew that we needed to choose the right ODM player to partner with in this area as the volumes increase and we go commercial.
Frankly, we were a little surprised that an ODM player of the size of Quanta would be interested in quantum computing at this early stage, because we are still talking about R&D right now. Historically, ODMs do not dabble that much in R&D. Quanta is an exception among the ODM players. If you go back to their history, they were one of the first to invest early in laptops, which is why most of the MacBooks from Apple right now come from Quanta. That is public info. One of the reasons they are NVIDIA's biggest partner is because they were one of the earliest to invest in GPU technology. Their philosophy of being relatively early among the ODMs is continuing. They are being relatively early from the ODM side to invest in quantum computing. We are really happy to partner with them.
Our cultures in that sense match. We both think of the business long-term. Anyway, it's a long-term strategic partnership. One part of it was they essentially gave us $35 million for about 3 million shares. It came to about $11.59 per share. That's a token investment. That's typically the way large Asian companies work. They want to show that they are part owner of the company. That's great. The more important part is they committed to investing $250 million over the next five years in the non-quantum portion of the hardware stack. If you look at a quantum computer, there is a chip, what we call a QPU. Then there's the rest of the hardware, which is quite a lot. There's a dilution refrigerator. There are all these cables that we talked about earlier.
There are several layers of firmware and software on top of it. Quite a bit of investment is needed in each of these layers of the stack. Certainly, on our own, we were doing everything all these years. As the business starts increasing in size, it makes sense to focus on what you are really good at. Clearly, what we are really good at is the chip design, the chip fabrication, and the immediate layer of hardware and software that comes after that. When it gets to the bigger complex racks and CPU, GPU, FPGA, and ASIC systems, that is really not Rigetti's strength by any means. It does not make sense for us to keep doing all that work. We view it as very favorable. Essentially, their investment helps us reduce our R&D investments in the future.
We view them as a very strong ODM partner that is committing $250 million on top of the $35 million equity investment they gave us. It is a very strong, long-term strategic partnership. Both of us view this as a solid 20- to 30-year kind of partnership, just like what they have done with NVIDIA in the GPU world and with Apple in the MacBook world.
Perfect. We've got about five minutes left, so I wanted to switch gears. One of the questions we've gotten around quantum computing is that, with Rigetti still in the development stage of the technology, U.S. government and allied government funding is important. Can you give us any updates on what you see right now? I know the reauthorization of the National Quantum Initiative Act is still in Congress. Can you give us any updates on the status of that? A related question we've seen from investors: the Department of Government Efficiency has been making a lot of cuts in the government. Have you seen any quantum programs fall victim to DOGE? And maybe tangentially, there's a lot of focus in the government right now on tariffs. Has tariff talk distracted the government from some of these other initiatives? I know that was a handful.
Yeah, no, no. Let me take the tariff one first. I mean, we are in R&D. Fundamentally, most of our cost is R&D cost. We are not buying a lot of components, and we are not selling a lot of finished systems. Tariffs are relatively insignificant in the big picture. Of course, we watch it. It's still a contributor to our cost, but it's not a big deal. The other question, which is more important, is the government funding. NQI reauthorization is absolutely critical for companies like Rigetti long-term. As of today, that hasn't happened yet. There's bipartisan support, as you probably have heard from us and other companies. The original bill should have been signed last summer, actually. It didn't happen. Obviously, a new administration has come in. There's still bipartisan support.
Literally, there was a hearing in the House just a couple of days ago. I wasn't there, but I could see the transcripts, and there's continued bipartisan support. All indications are positive; even Trump has made a few statements saying that he supports quantum computing. We don't expect major glitches, but it hasn't happened yet. That's a critical piece of legislation that we need not only passed but appropriated, because it's a $2.5 billion or $2.7 billion bill over five years, almost $500 million or so a year. We are eagerly awaiting the signing of that bill and the appropriations that follow. DOD funding has started, as you can clearly see from the DARPA contract. Another award we announced is with the Air Force Research Lab for one part of our stack, our ABAA technology.
We'll continue to get more and more of those kinds of awards. We certainly are eagerly awaiting the bigger one. I'm sure many other companies are waiting for that as well. Our focus is definitely on technology. In the meantime, our financial situation has improved a lot because of the money we have been able to raise, as well as Quanta Computer helping us out on that front. We feel pretty fortunate to be in a very good position financially right now to stay focused on technology milestones. Hopefully, the U.S. government funding will come through quickly.
Yeah, hopefully, it'll get signed. I know we're up against the end of the session, but since there were no questions from the audience, I just wanted to give you the opportunity: any last-minute thoughts or important points I didn't ask about that you'd like to share with the audience?
Just that, fundamentally, we are super excited about this area and its potential. However, we believe we are very much in the R&D stage right now, and what is critical right now is hitting technology milestones. That is how we solve complex challenges and enable large business opportunities. Again, thank you for your interest and all the questions.
Yeah, no, thank you for joining us at the Needham Conference. We really appreciate it and look forward to catching up after earnings next week.
Take care, Quinn. Thanks for having me.
Okay, take care. Thanks, everybody.