Welcome to the second day of Needham's 26th Annual Growth Conference. My name is Quinn Bolton. I'm the semiconductor and quantum computing analyst for Needham. It's my pleasure to host this fireside chat with IonQ. IonQ became the first publicly traded pure-play quantum computing company in 2021. The company's quantum computers are based on ion trap technology, which has a number of advantages compared to other qubit modalities. Based on these advantages, IonQ systems outperform competitor systems on real-world application benchmarks. In addition to providing quantum computing as a service to customers, IonQ last year announced orders for the sale of four on-premise ion trap quantum computing systems. Joining me from the company today are Thomas Kramer, CFO, and Jordan Shapiro, VP of FP&A and Investor Relations. Thomas, Jordan, thank you for joining us.
Thanks for having us, Quinn.
I want to start, just, you know, kind of with a broad question. There may be a number in the audience who are still coming up the learning curve on quantum computing, and so, let's start with some basic questions. Can you, you know, first just provide an introduction to IonQ for, for those investors new to the story?
Absolutely. So you could ask, why does the world need another computing platform? Because what we have is great, and I love my iPhone, and I usually buy a new one when they come out. However, there's a set of questions and problem sets that classical computers are ill-suited to solve. Either they just cannot solve it, or more likely, they cannot solve it in a reasonable amount of time. And by reasonable, some problem sets could take years, and that's because classical computers have to do things in sequence. Quantum computers, on the other hand, work in parallel. So you can take one optimization problem, you can evaluate all the options at once. Now, what does that mean?
There's a particular problem set called the traveling salesman, which has to do with optimizing the route a person takes when he or she is going to several stops. Think about a UPS or FedEx driver. Actually, most delivery vehicles can make 120 stops in a day, and that's it. So you'd think that they come in early in the morning, they get their truck, it's already packed, and they get a sheet with the trip outlined, you should go here, here, here, and that it is fully optimized to save on the number of drivers you need, the number of cars, and the amount of gas being spent. And that is actually all true, except for one thing: the optimization of the delivery list.
If you have a possible 120 stops, and a classical computer has to calculate which is the very best route, it has to calculate every single possible route. And that, as we know from high school math, is a factorial problem. So find out how many options there are in a 120-stop route: it's 120 minus 1, so 119 times 118 times 117, and so forth. That number is so large, it's 6.6 times 10 to the power of, I believe, 178. A classical computer wouldn't... even a supercomputer wouldn't finish that calculation until all the packages were delivered.
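The factorial arithmetic here is easy to check with a few lines of Python. This is an editorial sketch, not part of the conversation; for what it's worth, the exact count comes out even larger than the figure quoted above:

```python
import math

# Fixing the starting point leaves 119 stops to order, so the number of
# possible routes is 119 x 118 x 117 x ... x 1, i.e. 119 factorial.
routes = math.factorial(119)
print(f"119! ~ {routes:.2e}")  # on the order of 10**196
```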
Now, a quantum computer of sufficient size would take all the possible routes and push them through the algorithm at the same time, and what comes out is a probability distribution over which of the possible routes is the best one. You run that many times over, and the most likely solution is the best solution. That's how quantum computing works, and that will allow us to solve many, many things that today we just can't. Take solar cells: they can capture roughly 20% of the sun's energy and turn it into electricity. A plant can take the sun's energy and capture roughly 80%. We don't know how it does it, because we can't actually model the chemical combinations that do it.
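The "run it many times and take the most frequent answer" step can be sketched classically. In this editorial Python sketch, `sample_route` is a hypothetical stand-in (an assumption, not IonQ's API) for one quantum measurement in which the best route carries the most probability mass:

```python
import random
from collections import Counter

def sample_route(rng):
    # Stand-in for one measurement: the true best route "A" carries the
    # most probability mass, but worse routes still appear sometimes.
    return rng.choices(["A", "B", "C"], weights=[0.6, 0.25, 0.15])[0]

rng = random.Random(42)                    # fixed seed for repeatability
shots = [sample_route(rng) for _ in range(1000)]
best, count = Counter(shots).most_common(1)[0]
print(best, count)                         # "A" dominates across 1,000 shots
```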
Perfect. IonQ has selected sort of the ion trap qubit modality. Maybe spend a minute comparing ion trap to photonic and superconducting, some of the other qubit modalities out in the market, and why the founders chose ion trap.
Absolutely. So, Chris Monroe, one of our co-founders, was working on atomic clocks when he discovered that, oh, you can actually compute on these, because they're very, very stable. He's also known for having demonstrated the first quantum gates. And we picked this modality of quantum computing because it's very, very stable. You may have heard from makers of superconducting qubits that there's a problem with something called coherence. That means that the qubits decohere, fall apart. For superconductors, how long a computer can keep running before you need to essentially reboot it is measured in microseconds. For ion traps, at rest, we've run it for weeks, until we stopped that trial because we wanted to use the computer. It's a very, very stable qubit.
We pick it from nature, so we've used two types of atoms, ytterbium and barium, because of two things: they're very stable, and they're all identical. Like, if you manufacture something, you will always have errors, so you have yield problems. When you manufacture your qubits, like you do in a superconducting outfit, even the qubits that pass the yield test, the good qubits, are not identical. If they're not identical, this will actually reduce the fidelity of the qubits and the computer, and therefore also reduce the total capacity of any computer put together with such synthetic qubits. When it comes to photonic qubits, I think we all think they're exciting and great. There's just not been any single computer put together using that technology yet.
So what we can say is that they're fast, and everybody likes bright lights, but there is no photonic supercomputer in existence yet.
Photonics are really good for communication, and so we use, for example, photonics as part of our plan to get to scale, to communicate between different systems. But in terms of storing information, you're literally talking about storing information in light, which is very hard to control and keep on a chip.
But it's exciting. Like, 25 years from now, you can maybe make qubits from banana peel, like in Back to the Future. But we're concerned about bringing quantum computing to the market now, and when there is another qubit that we like in the future, we will use that one.
One of the questions we get is, you know, how do you compare the different quantum computers out there? Some companies focus on gate speed, some focus on the number of qubits. IonQ has come up with a term we call algorithmic qubits. Maybe spend a minute explaining that metric, because I think it's also important to your roadmap as, as in terms of how you're defining the capabilities of some of your future generation systems.
Absolutely. It's kind of like cars when... If everybody talks about their carburetors, or we have the best brake pedals, or, you know, tail fins were great back in the 1960s. However, over time, when you compare cars, it comes down to, you know, 0 to 60, 0 to 100, miles per gallon. These are quantities that are easy to compare across cars, and if you get super exotic, you talk about handling. But in quantum computers, it's not as well defined, and it has been a problem for observers of the industry when you couldn't cut through the noise. But there is an association that all of the quantum manufacturers belong to, along with all of the software makers and a lot of academia. It's called the QED-C, the Quantum Economic Development Consortium.
Consortium.
Consortium.
Yeah.
I always get the last one wrong. And they got together and said, "Okay, well, how can we compare?" And instead of using measures that favor only one vendor, where I can show something good on this machine at this particular time, we should use something that's replicable and that you can run on all machines. We should take algorithms that are commonly used in quantum and run them on everybody's hardware. And then you run these algorithms with 1 qubit, 2 qubits, 3 qubits, and you see at what point, when you just add one more qubit to the computation, the output stops improving and just generates random noise. At the point where it starts generating noise, or the output is indistinguishable from noise, you can no longer do anything with additional qubits.
So you have then found out the maximum number of useful qubits in your chip, and that's how you should compare the capacity of a quantum computer, because the number of useful qubits also will tell you how big the capacity of the computer is. So most people probably know that when you add one qubit to a quantum computer, you double the capacity. That is true in so far as you add a useful qubit. If you add a qubit that you can't compute on, you didn't double anything. You just had the same that you had before.
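The doubling claim is just the exponential growth of the state space; a quick editorial check in Python, not part of the conversation:

```python
# n useful qubits span a state space of 2**n; each added useful qubit
# doubles it.
for n in (1, 2, 3, 10, 20):
    print(n, 2 ** n)

# Adding a 21st useful qubit exactly doubles a 20-qubit state space.
assert 2 ** 21 == 2 * 2 ** 20
```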
So using that algorithmic qubit or useful qubit, you know, measure, at what point do you believe quantum computers will start to achieve quantum advantage on, on applications? Because, again, you've defined your roadmap that we'll get into in, in a minute in terms of algorithmic qubits. And so, you know, do we need to get to 40, 50, 100, 1,000 qubits before we start to do sort of commercially interesting applications with this one?
Yeah. So I think there's always some confusion there, because some people like to focus on physical qubits, and you don't know the number of physical qubits that you need, because you need a conversion factor from physical to useful. So we only speak about useful qubits, and most people in the industry agree that with between 60 and 70 useful qubits in a working quantum computer, you will start to exceed the best supercomputers in the world for certain use cases. Typically, those will be optimization cases. And we have that on our roadmap: last year, our roadmap goal was to achieve 29 algorithmic qubits, so useful qubits. This year, the target is 35, and next year it's 64. So 64 is right in that 60-70 range, and with...
A fully working quantum computer of that size, you will be able to consider 18 quintillion solutions simultaneously to an optimization task. What is 18 quintillion? It's a really large number. It's 18 and then 18 zeros afterwards. But what does that mean? Well, the world's largest supercomputer sits at Oak Ridge National Labs in Tennessee. It's really big, and it requires people in lab coats to, like, just hold it just so, so it works. It has something like 120 million CPUs. It cost $500 million to acquire. The operational budget is $200 million for 3 years. This is expensive. It can consider 1.3 quintillion floating point calculations in one second, whereas the 18 quintillion is in less than a second. So it's like these are not completely comparable numbers.
It only tells you how large of a problem set can you deal with, and this is really, really large. This is when we expect that you will start to see quantum computers go into production and being used in corporations for tasks that they need to operate in their daily operations.
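The 18 quintillion figure is simply 2 to the 64th power, the number of basis states a 64-qubit register spans; an editorial arithmetic check:

```python
# 64 useful qubits -> 2**64 simultaneous basis states.
states = 2 ** 64
print(f"{states:,}")  # 18,446,744,073,709,551,616 -- about 1.8 x 10**19
```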
So, the Tempo system, your AQ 64 system in 2025, you think starts to get you close to that kind of capability where commercial applications will start to show?
Yeah, that's right. We fully expect that this will show quantum advantage in some use cases, though not in all. It's important to note that quantum computers will not actually displace classical computers. They will enhance them. They will do things that you cannot do on classical computers today. But Walmart will likely never use a quantum computer to run their cash registers, because that's a very simple calculation. They need to do it many, many times, and it's all done in sequence. However, if you're gonna run a fraud detection network for all of Walmart's cash registers at once, and predict which one's gonna fail, where you're gonna have fraud based on previous transactions, you will likely be able to do that more cheaply and faster on a quantum computer.
You, you hit AQ 29 in 2023, seven months ahead of schedule. You're on track, or your, your target is 35 this year. Getting to 64 in 2025 feels like a bigger jump, certainly in terms of the AQ number. What do you see as some of the biggest challenges to hitting AQ 64? I think you've got a number of technology transitions, including the move from ytterbium to barium ions, and you've got your multi-layer glass trap that will come in with, I think, that system. So talk about, you know, some of the challenges. What, what do you see as the biggest challenges to the AQ 64, the Tempo system?
Well, first off, we are here to do the big leaps and have the bigger challenges, because if it was easy, this wouldn't be a stock you should invest in, because everybody would have done it. We have some of the finest people in quantum on our team, and we keep on delivering. We delivered our last roadmap goal 7 months early last year, and we have learned so much from that. And our current year goal, which is 35, is going to be pushing the technology that we have already released, and we're very confident that we're gonna be able to deliver this. We're also confident that we're gonna be able to deliver at 64 because of all the factors that Quinn already mentioned, so thank you for doing the thorough analysis.
The thing that we're perhaps most excited about, and this is hard because I'm easily excited and excited about so many things that we do, is the move from ytterbium to barium. Using a whole new atom was, for us, something we considered, if not a challenge, then something with, you know, error bars around it, in terms of how well it would work. And so when we, in May last year, announced we'd done 29 algorithmic qubits on ytterbium, we said, "This is great. We reached our goal for this year." Unfortunately, we couldn't just go home and take the year off. We continued working, on two things: getting to our next goal, and getting from ytterbium to barium.
And later last year, we announced that we had reached 29 AQ on barium as well, and we thought that, you know, there was gonna be lots of clapping and people saying, "This is great!" But it ended up that people asked, "Why did you reach 29 again?" And the truth is that we actually hadn't done it on barium before, and that was proof positive to us that barium was not only a qubit with very good potential, but one whose potential we could harness. So when we move to 64, that will be based on a barium chip. Barium is more stable, it promises that we can get more fundamental fidelity out of it, and it operates in the visible light spectrum.
So, we communicate with our qubits using lasers, and for ytterbium, we have to use the infrared space. This is hard on the optics, which means that from time to time you have to swap out some parts, and these are not necessarily cheap parts. And it's also slower. If you use visible light, you can actually have a faster laser, so higher megahertz, and that will also lead to faster gate speeds, which will become important further down the road for the industry.
When you also ask, what do you need to do to get to 64? I'd also highlight something that we don't need to do to get to 64. So late last year, we made another kind of subtle announcement that is very significant, which is that we believe we can get to 64 AQ using error mitigation rather than error correction. Now, why does that matter? There's this concept in quantum computing of error correction, i.e., using a lot of qubits to monitor each other, to correct each other, to make sure that you're getting the right result. That comes with overhead. Normally, there's a ratio, and that ratio depends on how good your qubits are. So we have very high-fidelity qubits. We expect that we can do an error correction ratio of about 1 to 16, so you need 16 qubits to generate 1 error-corrected qubit.
We've proven that we can do this at 13 to 1, and a lower-fidelity qubit modality might require thousands, or even a million, qubits to 1. So there's an overhead associated with error correction. But what we announced last year is that we think error mitigation, which is a more nuanced interim strategy, will allow us to get to 64 AQ. Error mitigation basically uses fewer qubits to get the job done, correcting each other just enough to get the right answer, and uses software as well to run analyses on your results and determine what the right result should be. And so this is significant because it means we can get to 64 AQ, get to that computing power, without requiring as many qubits as you would have needed to do error correction.
So that's one thing that we don't need to do to get to AQ 64, which makes, you know, the path easier than if you needed error correction.
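The overhead ratios quoted above translate directly into physical-qubit counts; a quick editorial sketch of that arithmetic (the function name is ours, not IonQ's):

```python
def physical_qubits(logical: int, ratio: int) -> int:
    """Physical qubits needed for `logical` error-corrected qubits
    at a given physical-to-logical overhead ratio."""
    return logical * ratio

# Ratios quoted in the conversation: ~16:1 expected, 13:1 demonstrated.
for ratio in (16, 13):
    print(f"{ratio}:1 -> {physical_qubits(64, ratio)} physical qubits for 64 logical")
# At 16:1 that's 1,024 physical qubits; at 13:1, 832. A million-to-1
# modality would need 64 million for the same 64 logical qubits.
```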
Great. No, that was, that was gonna be my next question, so, so you, just addressed it.
There we go.
Let's turn to the commercial side of the business. I think one of the highlights of 2023 was your four sales, or at least orders, for on-prem ion trap quantum computers: two at Quantum Basel, two at the Air Force Research Lab. Talk about each of those deals and their significance. One is more for compute, which is Quantum Basel, and one is more for quantum networking, which is AFRL. I always get that acronym wrong. But talk about those two deals, and then, you know, for the commercial side of things, how does that start to impact the revenue and, you know, the income statement over time?
Well, these are... First off, that we can talk about an on-prem hardware deal at all is the biggest step, right? Because primarily, we've been selling compute access, which is great if you just have a small problem and you can go onto AWS and use it. But if you have problem sets that you need to run for a longer time, or if you need to have more control over the actual computer, you may want to have direct access yourself. And we've said since we've gone public that we anticipated we'd be selling full systems eventually. We didn't know when it would be, but everything in, like, history has told us that that would happen.
And we started fairly early on after going public to get a lot of requests, and a lot of that was talk, but gradually it became real, and it was clear that people are putting out very specific RFPs for, "We need a computer that can do X, Y, or Z." And that culminated with our deal with Quantum Basel in May of last year, where they signed up to buy not one, but two computers of two different generations. So they're gonna purchase the Forte Enterprise, or they have purchased the Forte Enterprise, which will come out this year. Towards the end of the year, they should be receiving theirs. That is a 35 AQ system, and following that, two years later, they will take delivery of one of our 64 AQ computers, the Tempo.
That is groundbreaking, and it's fantastic for us, not just because it's a big sale, but also because it is a foreign entity, in Switzerland, that says, "Hey, we believe in this. We believe in these American guys, and we're going to buy two generations." We are actually retaining some of that compute capacity ourselves, so that we can use it as a beachhead into Europe, and we will be heading up our European sales efforts from Basel. Following that, we also announced in September that we had sold two smaller quantum computers, which will be barium-based, for quantum networking to the Air Force Research Lab. That will be continuing their work on quantum networking, which is going to be important for the entire industry.
You want these computers to talk to each other, and we are happy to be working with AFRL on that task. Now, the, I think you asked about the size. The AFRL deal was $24.5 million, and the Swiss deal was $28 million.
On the Quantum Basel deal, the 28 is time in service, I forget the exact accounting term, but once that system is delivered and deployed, then you start rev rec-ing, effectively straight-line, over the two-year period, and then Tempo, once deployed, starts to-
That is correct. The two deals have different revenue recognition rules. The Quantum Basel deal is exactly as you said: we will take the revenue and recognize it over the time that the systems are in service in Switzerland. The quantum networking deal is for custom-made hardware systems, so those will be recognized on a percentage-of-completion basis. So there are a couple of rev rec models at work here, and for the accounting geeks and friends among us, we are happy to take calls afterwards and walk you through the finer details.
Yeah. Can you talk about your pipeline for other potential on-prem quantum sales?
So my favorite answer to that question is that we look forward to getting back to you with the forecast for 2024 on the Q4 call. However, it should be noted that these deals are out there, and they've been written about a lot in news media that's relevant to our buyers. There are many out there who didn't think you could buy systems; there are hardware makers out there who are busy saying that in quantum computing, you'll only be able to get a system in 10 years.
But on our platform, you can get one now, because we have worked really hard not just on advancing the capacity and the power of our computers; we have invested as much in the technology to make them portable, so that our customers can put them where they want. And when I say portable, I don't mean laptop size just yet. That will be coming down the road, too. But right now we're talking about the ability to build a computer to spec and drop it into somebody else's data center. It is nice that you can build a computer in your research lab and show that it can do something with blinky lights. Customers don't care about that. That's what academia cares about. Customers care about having access to compute power, and that's what we want to deliver, and we want to do it on their premises, pun intended.
Perfect. Just, you know, thinking longer term, at the Analyst Day, you talked about some of the use cases or commercial applications that you're seeing interest in. Machine learning was one. Optimization was something you mentioned earlier, but also some of the other quantum computing companies talked about optimization problems. So maybe just spend a minute talking about what applications, especially from the commercial side, you're seeing the greatest interest in quantum. Because I think, you know, many of us believe that you really need that commercial adoption to really see the need occur in terms of the revenue ramp.
Right. Yeah. I always thought it was funny when, at various companies I've been at, you get inundated with visits from bankers who want to tell you about how you should go public, et cetera, et cetera. They can prove to you that it's good because the R-squared is 56 or something like that, and that means that they can explain 56% of the price based on, like, growth rate or whatever it is. That's almost just saying, "Yeah, it's 50/50. You could have a good price or a bad price." Because it's really hard to do regression, and multivariate regression, where you have several variables that are gonna predict an outcome. It's hard to display graphically, for one thing, but it's also really hard to do. It's very computationally intensive, which is why most people don't do it.
But machine learning has gotten better and better. And so now, if you have enough compute power, you can do a lot. Witness ChatGPT: version 3 has 175 billion parameters, and version 4 reportedly has even more. That's why they're so oddly good at coming up with an answer: they consider all the inputs. However, it is very expensive. Unless you have free access to all of Azure's data centers at night, when they're not running customer jobs, and you can train ChatGPT for free there, you can probably not afford to do this yourself. And we have to care about that, because we are already seeing how impactful AI can be, and AI runs on top of very large machine learning models.
When you run these machine learning models on a quantum computer, we have seen two things, actually three things. One, they're more expressive: they're able to capture more outlying possible events, black swans, if you will, instead of just saying, like, "Here is the predicted outcome." Two, you have to run many fewer iterations. Machine learning typically works by running the same query over and over on training data, thousands of times, sometimes millions of times. We have seen that when you do it on a quantum computer, our quantum computers, you can get to the same predictive output, or even a better predictive output, using one-thousandth the number of iterations to get to your prediction.
Similarly, we've also seen that, in very large models, we can get to the same predictive output with one-thousandth the number of input variables. This translates to how costly it is to run the algorithm, and in some cases, many cases, it also means we will be able to run machine learning on problem sets that we can't handle today and otherwise never could, because the training set is too large to do it in an economic fashion.
So imagine that you can go to an OpenAI or any one of the thousands of companies now using these large language models and tell them that you can run and train their model cheaper, with less time, less power. That is a compelling use case, right?
Sounds like better predictive value-
Better accuracy or fewer variables, right, to get to the same result.
I've got a few other questions that we certainly received from investors. One is around Dr. Chris Monroe's announcement to go back to academia last fall. You know, what impact does that have on the company, if any? And do you still have access to his research as he returns full-time to Duke University?
Well, I think the biggest difference is that now, when we go out for dinner, I have to pay, because previously he outranked me, and we have a rule that it's always the most senior person who pays for dinner. But working with Chris has been fantastic. He had a vision for what quantum computing could be and proved that it could be, and since that time, we have busily been working on implementing that plan.
Mm-hmm.
He is going back to academia. He wants to continue to dream up big, new visions, whereas what we're doing is we're implementing those now. We have moved from the time when we needed physics breakthroughs, to when we needed engineering breakthroughs, to now we need engineering implementation. Like, our biggest challenge, or one of our biggest challenges, I'm not gonna overstate it, is that we wanted to make these computers accessible to regular data center nerds, people like me.
And for that, you can't just take it from the R&D lab and then, you know, put padding on it and then, "Here you go, customer." So we built a production facility in Seattle, and that is set up to make several of these at a time, but build them from spec, not deviate from spec, not tune them to see, can we make them a little better? Just make them, ship them. If you're one of the finest physics minds in the U.S., you might consider that boring. But it's a, you know, it's a business challenge and something that we're very excited to be doing, and I'm also excited to still be talking to Chris almost every week. So, we thank him, and we continue to get stuff from him.
In fact, all the research that he produces now at Duke still comes to us via the patent agreement that we have with Duke University.
Great, and you had mentioned Seattle. That was my next question. I believe you were set to take occupancy of Seattle in the fourth quarter. Did that happen? And can you talk about some of the functions that will take place in Seattle? Sounds like that's where a lot of this engineering implementation work will be done.
That's right. And we took occupancy a little earlier in the year. We actually started the work of manufacturing one quantum computer there in late December. All the labs are up. So while, you know, when I think of manufacturing facilities, I typically think of, you know, Henry Ford, and there's a line, and people sit with hammers and stuff. In high tech, that isn't really how manufacturing works. It's different labs, where you move the part that's being worked on from one lab to the other, and there are clean rooms, and it looks very serious, a little bit like those science fiction movies where, you know, some new bacterium has been discovered and everybody's sitting in there. It's very cool.
We will be showing off our labs in Seattle very soon, and we are very excited for the work that's going on there. We will definitely have some research going on there as well, but primarily this is established so that we can manufacture hardware, hardware that we can bring home to our customers.
We hosted our last earnings call out of Seattle, and at the time, we had just gotten the lab permits to start working in the lab a few weeks prior. But 50 feet away from us, through a wall, people were busily starting to work on the first system builds out of Seattle.
Also, very importantly, I had a custom mini fridge installed that only holds Diet Coke for our earnings calls.
Absolutely.
All right. Last question. Obviously, the company and many of your pure-play peers are in investment mode. You're sort of burning cash, and so the question is, you know, why should investors sort of buy or look at Quantum today versus waiting for this market to further, you know, progress and perhaps, you know, getting to a point where the company's break even or profitable before investing?
Well, I'm of course not giving investment advice, but if everybody sits around and waits until the inflection in demand comes, it's simply a question of, like, who waited too long? Who got in before the prices went up? We set out our roadmap in late 2020, we're executing against that roadmap, and we have delivered on both commercial and technical milestones, which we look forward to continuing. And I guess this is a technological revolution unlike any other we've seen, because when you add one qubit, one useful qubit, you double the capacity of the computer. This is overtaking Moore's Law so fast. I mean, instead of doubling every 18 months, it's doubling every month. On average, we've actually added one algorithmic qubit per month.
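The pace comparison can be made concrete; an editorial sketch, taking the one-doubling-per-month figure quoted above as the assumed rate:

```python
# Capacity growth over 18 months, starting from the same baseline of 1:
months = 18
per_month = 2 ** months          # doubling every month (quoted average)
moore = 2 ** (months // 18)      # Moore's Law: doubling every 18 months
print(per_month, moore)          # 262144 2
```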
And so this is moving so fast, which is also why, when the company was set up, it was the two founders working in a lab, and then a venture capitalist, Harry Weller at NEA, who was reading physics papers in his spare time, looked at this white paper for how one could actually create a working quantum computer. He took a car down to Duke, went into the lab, and said, "You guys have got to do this, and I have to fund it." And they said, "No," and he said, "But actually, yes. We can talk about how much money you need, but you have to do this." And that's how this company was founded. Because it was clear that this can actually be done, so somebody must be doing it, and we are.
Great! Well, we have reached the end of the session. So, Thomas, Jordan, thank you very much for joining us at the Needham Growth Conference. We really appreciate your participation.
Thank you, Quinn. Thanks all. Thanks, Quinn.