IonQ, Inc. (IONQ)

Morgan Stanley’s Technology, Media & Telecom Conference 2024

Mar 5, 2024

Joseph Moore
Managing Director, Head of U.S. Semiconductors Research, Morgan Stanley

Okay, great. Thanks, everybody. In case you haven't seen me in the seven sessions I've already done today, I'm Joe Moore, Semiconductor Research at Morgan Stanley. Very happy to have the management of IonQ here, CEO Peter Chapman, and CFO Thomas Kramer. Guys, maybe if you could, we could do this as a little bit of an introduction to quantum in a couple of parts. If you could sort of start out by just talking about the promise of quantum technology, and then we'll talk a little bit about IonQ's approach to that market, which is quite a bit different than others.

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Okay. So it has been said that a person who can explain quantum computing is worthy of a Nobel Prize unto himself. You don't have to go off and do the physics and all the rest of that stuff. You just have to explain what it is. So we're probably not going to win a Nobel Prize in the next 30 minutes explaining quantum. Quantum computing originally started with the idea that for the natural world, Moore's Law really didn't matter. Even if it continued for another million years, you still wouldn't be able to do the computational things that were required, in particular for chemistry in the natural world.

It's interesting, as a datapoint right now in time: we're just getting to the point where an IonQ quantum computer is equal in computational power to a single DGX A100. But every time you add a qubit, the computational power required to simulate it doubles. You add 1 qubit, now you need 2 DGXs. Add 2 qubits, now you need 4. So right now, 32 algorithmic qubits is equal to 1 DGX A100 with 80 GB of memory. To get to our next system, which is 64 algorithmic qubits, you would need 3.5 billion DGXs to equal the computational power.

You would need 3.5 billion times 80 GB of memory to be able to do the matrix math. And this was the original insight into quantum: there's a set of things that just don't scale well on a classical computer. We happen to be there, literally, today. With our recent announcement of AQ35, we're finally at the point where we're starting to leave classical computing behind. Feynman actually saw this back in 1981. As he said, to be able to model chemistry, even if we allowed today's classical computers and Moore's Law to run for a million years, it really wouldn't matter. You still wouldn't be able to do the kinds of things we want to do in compute. So we needed a different way to build a computer.
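The doubling described here is easy to check with a back-of-envelope sketch. The assumptions are mine, not IonQ's arithmetic: a brute-force statevector simulation storing one complex128 amplitude per basis state, held in hypothetical 80 GB accelerators.

```python
import math

GPU_MEMORY_BYTES = 80e9      # one 80 GB accelerator (assumed)
BYTES_PER_AMPLITUDE = 16     # complex128: two 8-byte floats

def gpus_to_simulate(n_qubits: int) -> int:
    """GPUs needed to hold the full 2^n statevector of an n-qubit machine."""
    state_bytes = (2 ** n_qubits) * BYTES_PER_AMPLITUDE
    return math.ceil(state_bytes / GPU_MEMORY_BYTES)

print(gpus_to_simulate(32))           # 1 -- 2^32 amplitudes is ~69 GB, fits one GPU
print(gpus_to_simulate(33))           # 2 -- one more qubit doubles the memory
print(f"{gpus_to_simulate(64):.2e}")  # AQ64 needs billions of such GPUs
```

The exact billions figure depends on the memory assumption, but the shape of the curve does not: each added qubit doubles the requirement.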

That realization was the quantum computer. Now, these quantum devices are not good for everything. It's a bizarre little device. It's not good at adding 1 plus 1, but it can solve differential equations. You think, how can that possibly be, right? But there happens to be a set of problems that quantum looks like it's going to be very good at. Chemistry is one, which was the original insight. The second was machine learning; everything we've done so far in machine learning seems particularly applicable. Optimization problems, and maybe strong AI. So as you ask the question, what is it? It's a system which is not based on a binary system. It's based on a quantum system. It's not digital. It's not analog. It's quantum.

It just turns out the real world, the world that everything is based on, is actually quantum. It's not digital. It's not analog. And a digital system has a hell of a time trying to simulate what a quantum system does. We happen to be at an interesting place right now for quantum simulation. But we're also at a very similar place, actually, for NLP. Today, to train a large language model, we were told by one of our partners, you need 30,000 servers. Each one of those servers has 8 GPUs in it, so a total of 240,000 GPUs. And it runs for about 3 months, for a total cost of about $1 billion in runtime.
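Taking those quoted figures at face value, the implied unit economics can be sketched quickly. The per-GPU-hour rate below is derived from the talk's numbers, not something stated in it:

```python
servers = 30_000
gpus_per_server = 8
total_gpus = servers * gpus_per_server   # 240,000 GPUs, as quoted
train_hours = 90 * 24                    # "about 3 months"
gpu_hours = total_gpus * train_hours     # total GPU-hours of the run
cost_usd = 1e9                           # "$1 billion in runtime"

print(total_gpus)                               # 240000
print(f"${cost_usd / gpu_hours:.2f}/GPU-hour")  # $1.93/GPU-hour
```

That roughly $2 per GPU-hour is in the ballpark of cloud accelerator pricing, which is why the quoted total is at least internally consistent.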

So we're just getting to a point where our computational needs are really exceeding what we can classically build. Next year, when the next version of ChatGPT needs 10 times the data, will it be $10 billion to train it? Our classical systems are not scaling well. You even see people like Sam Altman and Elon Musk come out and say, look, we need higher energy output for the planet so we can power more data centers for classical hardware. We don't think that's the solution. We think quantum is actually the solution. It's not really reasonable for the classical hardware side to build 3.5 billion processors to simulate one of our quantum devices.

So it's this idea that maybe there's a new way to compute things that's much more efficient. We're talking about a single processor here versus 3.5 billion GPUs. That machine plugs into two standard wall outlets. You can imagine what it takes to power 240,000 GPUs for three months. So this has a huge savings, both in the cost of the machines and the cost to run them. I think we calculated out what 240,000 GPUs cost versus one of our systems. It's a hell of a lot cheaper, and certainly a hell of a lot cheaper to run. Now, AQ64, which is the system we're building right now. Sorry, Joe, it's a very long answer.

Joseph Moore
Managing Director, Head of U.S. Semiconductors Research, Morgan Stanley

Makes my job easier.

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

At AQ64, it can consider, in a single instruction, at roughly the speed of your laptop, 2 to the 64 different possibilities in parallel. Go do a search for 2^64. The answer is roughly 18 quintillion different possibilities in a single instruction. So it can explore a computational space of that size. These are numbers you probably haven't used, you know, ever. So let's compare that with today. Frontier, the world's largest supercomputer, over at Oak Ridge National Laboratory, made up of lots of little blade servers and GPUs and all the rest, can do 1.2 quintillion floating point operations per second. This next chip that we're building can do 18 quintillion in a fraction of a second.
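The comparison can be made concrete. Using the 1.2 quintillion FLOPS figure quoted for Frontier, and the generous assumption that one floating-point operation could evaluate one possibility:

```python
states = 2 ** 64         # possibilities in a 64-qubit superposition
frontier_flops = 1.2e18  # ~1.2 quintillion FLOPs/sec, as quoted

print(states)                   # 18446744073709551616, i.e. ~18 quintillion
print(states / frontier_flops)  # ~15 seconds even at one state per FLOP
```

Even on that favorable accounting, Frontier needs over 15 seconds just to touch each state once, before doing any useful work per state.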

So that single chip is actually more powerful than all the supercomputing devices that mankind has built so far. Another way to think of it: if we took everyone's cell phone on the planet and built a supercomputer out of them, a single chip would equal all the processing power of all those cell phones. After that, we're going to go build the next chip, which is AQ256. Now we're getting into 2 to the 256, and this is a really large number. So now I need to find ways to even explain to you what its computational power is. At 120 algorithmic qubits, it can consider in parallel a number equal to the number of atoms in the known universe, all 13.8 billion light-years across.

So if you were to convert all the matter in the universe into transistors, it still wouldn't be able to compete with just this one chip. And that's only 120. I get to be the Ronco salesman here: but wait, there's more. We're talking about 256, so it's, what, 2 to the 140 times more than the number of atoms in the known universe. These devices are going to be more powerful than all the computing devices mankind has produced, and classically will produce, for the next million years. And by the end of the decade, we'll have 1,000 algorithmic qubits, and suddenly you will have 2 to the 1,000.

And so you're looking at numbers now that can no longer be represented. Just the multiplier itself, the single number saying how much more powerful these machines are than today's largest supercomputers, cannot be represented in 80 GB of memory. We're quickly getting to that place: the largest number you can represent on a computer today has about 308 digits. We'll quickly be past that just in terms of expressing how much more powerful these machines are. Your laptop will no longer be able to hold, as a single number, how much more powerful they are. So that's what it is. That's what we're doing, and that's what we're building. And you might ask yourself, what would you do with these things?

I mean, what's the plan? Well, it turns out there are problems all over the place that need this kind of computational power. You've actually experienced one already this morning. More than likely, you had a delivery made to you. And the question for the logistics company is, what is the optimal route to deliver? The average delivery driver delivers to 120 addresses in a day. So the question is, for those 120 addresses, what is the optimal delivery route? Geez, that's got to be an easy problem to solve, because clearly we do it every day, so it must be optimal. But it turns out, if you remember a little bit of high school math, that's a factorial problem.

So you take 120 minus 1, factorial, and that's the number of different ways to deliver those packages. Should I go to this address first or that address first? And where would I go after that? Well, 119 factorial is a massive number. If you were to work through that classically, even in parallel, it would take a lifetime just to calculate for one delivery driver. But with quantum, if we have a quantum system big enough, it could calculate that in a single instruction. And it's a little bit strange, because quantum doesn't give you a discrete answer. It gives you a probability. So you run it a bunch of times, and you get a bell curve.

You figure out, basically, after running it 100 times, where the peak is. That turns out to be the answer. Now I'll just point out: if you wanted to optimize the delivery routes for all of San Francisco, how many drivers do you think are in San Francisco? Make a guess. I'll make it up. Let's assume there are 1,000 delivery people in San Francisco. If I wanted to find the optimal routes for everyone in San Francisco, 120 addresses each, 1,000 people, that'd be 120 times 1,000, minus 1, factorial. What do you know? That's a number which is more than the number of atoms in the known universe.
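The factorial blow-up described above is easy to quantify. This sketch counts the decimal digits of each factorial via Stirling's approximation (`math.lgamma`) rather than computing the enormous numbers outright:

```python
import math

def factorial_digits(n: int) -> int:
    """Number of decimal digits in n!, using lgamma(n + 1) = ln(n!)."""
    return math.floor(math.lgamma(n + 1) / math.log(10)) + 1

# One driver, 120 stops: (120 - 1)! possible orderings.
print(factorial_digits(119))             # 197 digits, i.e. ~10^196 routes

# 1,000 drivers x 120 stops, treated as one giant tour as in the talk.
print(factorial_digits(120 * 1000 - 1))  # hundreds of thousands of digits
```

For comparison, the commonly cited estimate of atoms in the observable universe is around 10^80, an 81-digit number, so even the single-driver count dwarfs it.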

So these are the kinds of problems where you just think to yourself, well, this must have been solved; we must have figured out a way to make this work. But it turns out that's not actually true. And these are the kinds of problems that quantum computers could be used for.

Joseph Moore
Managing Director, Head of U.S. Semiconductors Research, Morgan Stanley

You've probably answered almost the whole thing. I think we're done. Thanks so much for coming today.

Speaker 4

So I feel like the capability you're talking about, there's a general consensus that we'll get there, that quantum can address a lot of these things. I think what's different about you guys is you think you're going to get there a lot quicker than the scientific consensus. So can you talk about that? You know, you think about quantum as, like, low temperature, superconducting, you know, advanced physics kind of stuff. You guys have a very different approach. So can you talk about your approach to it?

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Yeah. So there are many ways to build a qubit. I think there are a dozen or so right now. They generally fall into two categories. One is man-made qubits, and that would be superconducting. The other is all-natural qubits. Those are the eco-friendly qubits. These are made out of things like individual atoms and photons. People who use a natural qubit have the advantage that, since they don't manufacture the qubit, their yield is 100%. The people making man-made qubits have a problem, which is that their qubits are not just one photon or one atom but tend to be made on a chip. And so you need the yield on those things to be really good. So there are kind of two problems. One is, how do you manufacture a qubit?

People doing natural qubits have an advantage there over the people who aren't. But both kinds of systems have a problem, in that qubits, by their very nature, want to be isolated from the environment. The amount of isolation you can give them controls what we call noise in this business. And the noise determines how large a quantum circuit you can run. I'll give an analogy. Imagine you're in Excel and you're going to multiply 2 times 2 times 2, eight times over, and at the end of it you're expecting to get 256. That's what we expect on a digital computer. But in quantum, there's an error, and that error compounds across the calculation. So when we do 2 times 2, we get 4.

But we also get 4 ± the error. If the error is large, it compounds every time I do 2 times 2 times 2. Instead of getting 256, what I get is 256 ± maybe 500 or 1,000 or 10,000. And so the answer you're getting is pure noise. How many times you can multiply that 2 times 2 times 2 really depends on the noise, and also on the application. For instance, and this is not really a quantum thing, but just so you can understand it: if I was calculating sales tax, I don't really care about a little rounding error from the noise. That would be totally OK.
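The Excel analogy can be run directly: multiply by 2 eight times, but with a small relative error on every step, and watch the deviation compound. The 5% per-step error here is purely illustrative, not a real gate-error figure:

```python
def noisy_product(steps: int, rel_error: float) -> float:
    """Multiply 1 by 2, `steps` times, with a relative error on each multiply."""
    value = 1.0
    for _ in range(steps):
        value *= 2 * (1 + rel_error)  # each step is slightly off
    return value

print(noisy_product(8, 0.0))     # 256.0 -- the exact answer
print(noisy_product(8, 0.05))    # ~378 -- 5% per step compounds to ~48% off
print(noisy_product(500, 0.05))  # 500 steps: off by a factor of billions
```

Eight noisy steps already land far from 256; at 500 steps, the same small error makes the result meaningless, which is the sense in which noise caps circuit depth.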

But if I had an application where I wanted to do 500 of those 2 times 2 times 2 steps, even a small amount of noise, because it compounds, would limit the usefulness of the quantum computer. Ion traps, the technology we have today, have the least noise of these systems, which allows us to run the very largest applications. This noise controls how large a quantum circuit, or program, you can run, and it gives us a huge advantage. It's very common for people to want to compare the different qubit modalities. I don't think of it as ion traps versus superconducting. My guess is that these things will all have some useful time in the market.

But it's going to be at different points. Ion traps happen to be today's technology, and they probably will dominate the market for the next 5 years. If you asked me about 10 or 15 or 20 years out, I might actually choose a different modality. Microsoft, for example, is working on topological qubits. It seems really elegant. The only problem is we haven't seen one yet; mankind hasn't found one. It's like the Higgs boson before they found that. It's very elegant, but it's not here today. Maybe 20 years from now, that will be a great thing. Where IonQ has the advantage is that we will have the market basically to ourselves for the next several years in generating revenue, and our existing cash position gives us that runway. So maybe 10 years from now, who knows?

You might see IonQ buy a different kind of qubit company because we think its time frame in the market is coming into being. So I don't see these as competitions. They all have different times when they'll mature. To take any qubit from its starting point to a product requires about $1 billion. You often see people say, oh, there's been a breakthrough in a laboratory; on a daily basis I see things claiming a breakthrough. But you then need $1 billion to go from that lab experiment to a product, and that's easily 5 to 10 years. You're going to have to run through, what, 3, 4, 5 rounds of different series to raise that kind of money. These things just take time.

IonQ happens to be leading today.

Speaker 4

Great. Maybe you could address a couple of recent things that people have asked about. You know, one, in terms of the two co-founders. I mean, you've seen this company grow a lot from 60 people three years ago to over 300 now. But your two co-founders have both returned to academia, still involved in the company to some degree. And I know Chris Monroe was quite prominent at the Analyst Day a couple months ago. But can you talk about that dynamic and, you know, why they would make that transition?

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

This is juicy stuff. You know, whether or not Taylor Swift is going to endorse seems to be on par with what's going on with our two co-founders. So, the two co-founders are, and originally were, college professors. One was at the University of Maryland. One was at Duke. It's an unusual story. NEA actually approached the two of them to start this company. There's nothing about this company which is normal. Most companies start with a bunch of people chasing VCs. We had VCs chasing the two co-founders. That's how the company got started.

We put together a deal with the University of Maryland and with Duke where we gave them roughly three-quarters of a percent of the company in exchange for an exclusive, royalty-free license to the IP the two professors had developed over the previous 15 years, and going forward until 2025 now. The work that goes on at the universities is actually significant, because they get a lot of government dollars to invest in quantum, and that IP comes to IonQ. So what happened is, when we started with the IPO, there was basically just the three of us. And we said, look, we don't want the two co-founders to come to the company, because then the IP arrangement with the universities would go away.

It's of economic value to the company, because right now, I think the number is about $100 million a year going into those colleges for ion trap technology, which we get to monetize. I don't want that to go away. In fact, they're working on things which are probably 8 to 10 years away, which I don't have to invest in because the government is paying for it. So we said, Chris, you go. Unfortunately, we had the University of Maryland and Duke, and we said Jungsang will come to the company and one will have to stay. We decided he would go to Duke, and we kind of lost out on the University of Maryland thing. So he went to stay at Duke and spent his time there.

Then Jungsang took a sabbatical to come here and work with us. At the very beginning, Jungsang was VP of Engineering, because we didn't have anyone. Then he went and hired a VP of Engineering, and we got that covered. Jungsang then took VP of R&D, and then we hired a VP of R&D and he left that position. He went to applications, and we hired somebody for that. So he's been a kind of pinch hitter, and he's been doing that for about two years now. We're finally at a point where we've got a complete management team of experienced people, and he's going back to being a college professor at Duke. There's really not much more to that story than that.

And the company still benefits from the work they do over at Duke, because we still have the arrangement where the IP all comes to IonQ.

Speaker 4

Great. That's helpful. And then I want to ask about some of your targets. And you guys have been very successful hitting the milestones in terms of the AQ targets. But maybe just first, definitionally, can you define algorithmic qubits and how, you know, some of your competitors talk in terms of qubits as a comparison?

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Physical qubits?

Speaker 4

Yeah.

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Yes. Not all qubits are the same. In layman's terms, algorithmic qubits just means useful qubits. I can explain it in a little more detail. It turns out that for most algorithms you're going to write in quantum, you need roughly the square of the number of qubits in gates to be able to run the algorithm. So if you have 10 qubits in your algorithm, and you're going to use all 10, your algorithm has roughly 100 gates. Now, that is not true for everything. There are some applications that consume gates much faster, and some that don't. So this is an average, sitting in between.

What we do is run a benchmark put together by the QED-C, which is a consortium of quantum companies: a set of algorithms, things like Monte Carlo simulation, that consume gates at roughly the square of the number of qubits. Now, some of the competition might say they have 1,000 qubits. But it turns out the error rates are so bad that only, let's say, 5 of those qubits would count as algorithmic qubits, even though they have 1,000, because the error rate controls how long a program you can run. So what matters is not just how many qubits you have but how many you have plus the error rate.

The best thing is to run this benchmark, which says, OK, can you run a Monte Carlo simulation of this size? That's what algorithmic qubits are. It's simply a benchmark that hopefully aligns with what customers want, which is to be able to run larger and larger quantum circuits.
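A rough way to see why error rate, not raw qubit count, caps algorithmic qubits: if an algorithm on n qubits needs about n² gates, and each gate succeeds with probability (1 - p), the whole circuit survives with probability roughly (1 - p)^(n²). This sketch finds the largest n that keeps survival above a 50% threshold. Both the n² rule of thumb and the threshold follow the talk's framing, not the official QED-C definition:

```python
import math

def rough_aq(gate_error: float, survival: float = 0.5) -> int:
    """Largest n such that (1 - gate_error)^(n^2) >= survival."""
    max_gates = math.log(survival) / math.log(1 - gate_error)
    return int(math.sqrt(max_gates))

print(rough_aq(0.01))    # 8  -- 1% gate error: only a handful of useful qubits
print(rough_aq(0.001))   # 26 -- 0.1%: a few dozen
print(rough_aq(0.0001))  # 83 -- 0.01%: past the AQ64 threshold
```

Note how a 10x improvement in gate error buys only about 3x more algorithmic qubits, which is why a machine with 1,000 noisy physical qubits can still score a tiny AQ.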

Speaker 4

Great. So, AQ64. As I said, you've hit the milestones to date. I think you're up to AQ36 as of the most recent quarter. You've talked about AQ64 at the end of 2025, and that's where you sort of say, well, we're going to surpass the ability of classical computing to solve these problems. What do you still have to do to get to that milestone?

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Well, a bunch of good news. First, it turns out that we don't think you need error correction to do it. With these noisy qubits, one way to get rid of the noise is error correction, the same error correction that you see in classical hardware when you send a message over the internet. We actually send more data than the data we want to send, because we recognize that the packet will pick up noise along the way, and we need to be able to correct for that noise. Memory has this too; everything in classical electronics has error correction built into it. So quantum can use error correction as well.

Originally, we thought that to hit AQ64 we were going to need roughly 16-to-1 error correction, meaning you use 16 good qubits to get one really good one. The good news is that we now think we don't need to do that. In the last couple of years, we found something called error mitigation, where we mitigate the errors in software. And that's good, too, because it means we can fit AQ64 all on a single chip; we don't need to span across multiple systems to do it. However, to go beyond 64 and get to 256, at that point you're going to need enough qubits to bring in error correction, and you'll need not just one quantum computer but multiple of them.
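The 16-to-1 overhead mentioned above translates into physical qubit counts like this. This is pure arithmetic on the quoted ratio, not a figure for any specific error-correcting code:

```python
ENCODING_RATIO = 16  # "16 good qubits to get one really good one", as quoted

def physical_qubits(aq: int, error_corrected: bool) -> int:
    """Physical qubits needed to field `aq` algorithmic qubits."""
    return aq * ENCODING_RATIO if error_corrected else aq

print(physical_qubits(64, error_corrected=False))  # 64   -- mitigation, one chip
print(physical_qubits(64, error_corrected=True))   # 1024 -- the original 16:1 plan
print(physical_qubits(256, error_corrected=True))  # 4096 -- the beyond-64 regime
```

The jump from 64 to 4,096 physical qubits is why the beyond-AQ64 roadmap leans on networking multiple machines rather than one giant chip.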

And one of the really cool things about quantum is that if all of us had quantum computers, say we all had AQ64 quantum computers, we could come together as a group, put fiber optics between our computers, and make one really big quantum computer. If there were 30 people in this room, we could have 30 times 64 qubits. And the cool thing about these qubits is that they do not care where the other qubits are in order to do computation. The qubits don't know if they're right next to each other or on the other side of the planet.

The other cool thing: in classical computing, when I build a supercomputer and I take two blade servers, I don't get 2x the power, because the network and the software overhead reduce that. Maybe I get 1.3 or 1.4 in exchange for my two. So there's this tax to be paid classically for networking. For quantum, there is no tax. The communication happens through a method we don't really understand, this strange entanglement, and the qubits literally don't care. So what's interesting is we just announced the first networking piece of the quantum computer, to be able to network these chips together. There are three steps to make this happen, and we just showed the first one.

By the end of this year, we'll have completed all three steps. Then you'll have a networked set of quantum computers that lets you get to much larger qubit counts and hit AQ256. The other thing, and this is true for all quantum, not just IonQ: to get to much larger quantum computers, they need to be built out of a lot of cheaper, smaller quantum computers. So the one place where Moore's Law does apply is actually in the cost. In every generation, the systems need to get smaller and cheaper, because future generations of quantum computers will basically be like blade servers, and you'll need to put in hundreds, thousands, maybe at some point millions of them.

And so you need the cost to scale as you ramp up the number of quantum computers required. About half of IonQ is working on building more powerful quantum computers, and the other half is working on smaller, cheaper, more manufacturable quantum computers. IonQ is unique in that space. I think we're the only ones in the industry today thinking about how you build a productized quantum computer that you can easily ship. You can see this across the generations: the Forte system, which is the 35-algorithmic-qubit machine, fits in 8 racks. For the 64, the goal is to fit it into 3 racks. For the 256, the goal is to fit it into 1 rack. So even though the machines are getting bigger and bigger computationally, every generation gets physically smaller and cheaper.

In fact, the 256 that fits in 1 rack more than likely has 8 quantum computers in that 1 rack. So you're shrinking down the size and, with that, the cost of each one of the systems.

Speaker 4

Great. Maybe you could talk a little bit about the financial model and the different sources of revenue. I know you start to move to more systems revenue at some point. Can you give us an overview there?

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Yeah. If you look at the breakdown, for us, about half the revenue is government. Internationally, it seems stronger than domestically; maybe that's the state of U.S. politics, I'm not quite sure. The other half is industry. Both seem to be quite strong. Compared to this time last year, our top of funnel has grown significantly. These systems are expensive, which typically means they're difficult to predict in a single quarter, and they're often tied to other things like government cycles. Inside the United States, most government sales really show up in Q3, so you can kind of predict when some of these things happen. For industry, Q4 is typically a pretty busy time.

So different periods of the year will be better than others. We are quickly getting to the point now, starting literally this last week, where we're starting to take workloads from NVIDIA. Up to this point, if you were doing simulation, with our 11-qubit system people would say to me, why do I want to use your computer when I can simulate it on my laptop? And they were right. Then at AQ29, people said, well, I can simulate that on a single DGX GPU, so do I need to use your system? But now we're finally getting to the point where it can no longer be simulated. And actually, you know, most people don't know this.

But some of the larger clusters of NVIDIA hardware happen to be sitting in data centers doing nothing but quantum simulation. Now those people are going to be coming to IonQ to do their work, simply because even if you have thousands of GPUs, you just can't simulate what we do with one of our systems. So we're just starting to take those workloads from NVIDIA and bring them to IonQ. The other thing is, once you get to AQ64, the economics change, because what you can do with it suddenly is interesting. In these early days, if you could simulate it on your laptop, that probably also meant there was a classical method that was better than what you could do on a quantum computer.

But we're getting into a really interesting area now, around AQ64 and definitely by 256, where classical is just left behind. We just simulated benzene for the first time at AQ35. So we're starting to get into interesting molecules from a commercial point of view. Once you get to AQ64 and 256, you get into small molecules for drug discovery. So it's starting to be a really interesting time, and that hopefully will drive demand for systems: if you're in pharma and you're in small-molecule drugs, you need one of these systems to be able to simulate those molecules.

Joseph Moore
Managing Director, Head of U.S. Semiconductors Research, Morgan Stanley

Great. So maybe we could pause there and see if there's questions from the audience. Sorry. OK, start there.

Speaker 4

Thank you. You mentioned a few times how IonQ is the leading player in the industry. There are definitely a couple of different companies that all say that about themselves.

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Sure.

Speaker 4

probably most notably of late, Quantinuum.

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Yep.

Speaker 4

The spin-out of Honeywell. It's taken a bunch of money from J.P. Morgan and others. If you wouldn't mind just comparing, contrasting your approach versus theirs. I think they had some big announcement today about, you know, solving the wiring problem for scaling and.

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Yep.

Speaker 4

How you think about that, both from an industry, you know, breakthrough and commercialization standpoint as well as from a competitive standpoint? Thank you.

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Well, first, whether it's quantum or not, it's the job of every CEO to say that their company is the best. So take that with a grain of salt. Each one of them is a little different. What Quantinuum does (and they do this because patents keep them from doing what we do) is shuttling. They take ions, just like us, bring them into a compute zone, entangle them, then shuttle them away and bring in other ions. So there's a lot of shuttling going on. If you actually look at where they're spending their time, about 99% of the wall-clock time is used to shuttle ions back and forth.

So I'm going to draw an analogy that hopefully you know. Back in the day, when we had hard disks with physical platters, we would optimize the hard disk so that the head would not move, because the physical movement of the head was where all the time went. So we used to defragment hard drives for databases, putting everything close together to minimize that movement. That's roughly the difference you can think of between us and Quantinuum. They have to spend all their time shuttling back and forth, whereas in our case we've got 32, now 36, qubits where no shuttling has to happen.

So in terms of the compute time you get to use, ours spends most of the time computing instead of shuttling. That's the largest difference between the two. They're very similar in the sense that they're both trapped-ion quantum computers. Maybe another way to think about it, and these are poor analogies: they have a 2-bit bus and we have a 36-bit bus. So it's a different kind of system, with different performance between them. I don't know if that answers your question. Yep.
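The 99% figure above can be sketched with a toy wall-clock model: suppose every two-qubit gate in a shuttle-based architecture pays a transport cost before the gate itself. The timings below are invented for illustration, not measured IonQ or Quantinuum numbers.

```python
# Toy model: in a shuttle-based trap, each two-qubit gate costs one gate
# time plus a transport time. With transport ~99x the gate time, almost
# all wall-clock time goes to moving ions rather than computing.
# Both timings are assumed values for illustration only.

GATE_TIME_US = 10      # assumed two-qubit gate duration (microseconds)
SHUTTLE_TIME_US = 990  # assumed transport time per gate (microseconds)

def shuttle_fraction(n_gates: int) -> float:
    """Fraction of total wall-clock time spent shuttling, not computing."""
    total = n_gates * (GATE_TIME_US + SHUTTLE_TIME_US)
    return n_gates * SHUTTLE_TIME_US / total

print(shuttle_fraction(1000))  # 0.99 under these assumed timings
```

The fraction is independent of circuit length in this model; it depends only on the ratio of transport time to gate time, which is the point of the hard-disk analogy.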

Speaker 4

Hi. I apologize, I know nothing about quantum. I tried to watch that TED Talk; I still have no idea how it works and how that computer beat the humans every time. Really simplistically, Google's TPU is the most efficient matrix-multiplication semiconductor, and NVIDIA has the vast majority of the market because of CUDA and usability. What are the corollaries with quantum when it comes to software and usability?

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

The software is very different. Like a GPU, if you're going from a classical system to a GPU, you basically have to rewrite your software. Quantum is no different. The difference is that you're quickly getting to a place where there just isn't another solution. Like, if you wanted to do a large molecule: we're right now looking at a molecule which is very commercially significant. It can't be calculated classically; the only way to do it is to use a quantum computer. It just turns out that this molecule happens to be the basis for, you know, the drug industry. So it's a really important molecule, and we just haven't been able to do it.

So there will be some amount of pain in going and writing an application in a quantum way to make that work. But there is no other solution; it's just not possible otherwise. It's like trying to do the simulation for AQ64 using GPUs. Who's going to go out and buy 3.5 billion GPUs? It just can't happen. So for this set of problems, you're often in a place where there's simply no way to do it any other way. There is no alternative, so.

Speaker 4

And so just to follow up, I mean, there aren't that many people that work with CUDA. But there are a lot of people that can use PyTorch. What's the equivalent analogy here?

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Actually, NVIDIA is working on their version of a quantum CUDA. They were very successful with that approach for GPUs, obviously, and they're now doing the same for quantum. We don't have a horse in that game. We start from the hardware and go up through the operating system into a compiler, but we don't make software tools. There are a number of other players there: NVIDIA is one, and Microsoft, Google, IBM, and Amazon all make tools. They're all competing at that layer to control the developer, you know. In the future, I think people will get to these systems by using libraries where the abstraction level is way higher. So if you're a chemistry person, you won't need to know anything about how to program a quantum computer.

You'll just have a library that lets you talk to it in chemistry terms, and it figures out how to run the thing on the quantum computer.

Speaker 4

How extensible do you think the compiler work will be? Will you have to redo it every time you upgrade the hardware? Or is it going to carry over?

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

No, it's interesting. As we learn more about the systems, we learn new ways to optimize the circuits in the compiler, and that happens to be very fruitful. One of the things that is really fascinating right now is that the software side, which includes the compiler, is actually moving much faster than the hardware, so these two things are going to come together. Just in the last week or two, there was an announcement by a little quantum company. They were looking at a molecule, and they calculated that getting to the ground-state energy would need 1.5 trillion gates, which would require a quantum computer that's years away. But they found a way to do it with 410,000 gates.

So they had a roughly 4 million x improvement in the algorithm, which suddenly took that molecule and made it near-term. What most people are not watching is the software side, and there are huge improvements happening on the algorithmic side that are making these things come in much sooner than most people expect.
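A quick check of that arithmetic, using only the two gate counts quoted above:

```python
# The quoted algorithmic improvement: from ~1.5 trillion gates down to
# ~410,000 gates for the same ground-state energy calculation.
naive_gates = 1.5e12
improved_gates = 410_000

improvement = naive_gates / improved_gates
print(f"{improvement:,.0f}x")  # ~3.7 million x, i.e. roughly the quoted "4 million x"
```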

Joseph Moore
Managing Director, Head of U.S. Semiconductors Research, Morgan Stanley

Great. Well, we're going to have to wrap it up there. Peter, Thomas, thank you so much for your time.

Thomas Kramer
Chief Financial Officer, IonQ

Thanks, Joe.

Jordan Shapiro
Vice President of Financial Planning and Analysis and Head of Investor Relations, IonQ

Thank you. Thanks, everyone.
