Welcome, everybody, both to those of you who are here in person and to all of you joining us live for IonQ's Analyst Day. Ever since we went public, I've been looking forward to this. We were cheated by COVID, so we didn't get to have a full-scale analyst day in person, and now we do. This is where we get to talk to you about what we do here at IonQ. Much of it you have heard before, but with more color, and we get to tell you the details of what makes IonQ, IonQ. Please buckle in and enjoy the show.
Hi, everyone. We've prepared some cautionary notes. I am not going to do the full earnings call routine and read them to you. Instead, we advise you to take a look at our website and review the cautionary notes, especially in advance of making any investment decision. We have a packed agenda for today. You're going to be meeting many members of the IonQ team across many different functional areas. To be fair to everyone online and in person, we're going to stick closely to the agenda and do our best to start every session on time. One note: the lab tour will be on-site only. With that, I will turn it over to Peter Chapman, IonQ's President and CEO.
Thanks, Jordan, and welcome to everyone. My name is Peter Chapman. I'm the President and CEO of IonQ. I just wanted to say how excited I am about today, 'cause normally you hear from me on earnings calls, and you ask me questions. But today, for the most part, I get to sit down, and you get a chance to meet the rest of the team, and they really are the people making this happen. I just get to be the spokesperson most of the time. So you get a chance to put your questions to the experts who are actually building these quantum computers today. Without further ado, let's get started.
So the next revolution, we believe, is quantum, and probably you believe that as well if you're here and listening, especially on this podcast. The short answer is, we think quantum is gonna be the next big thing. With a little luck, it has the potential to be as big as the internet or maybe even the PC itself. Quantum computers will not be good at everything, but for certain applications, they have the potential to rival the world's largest supercomputers. They are definitely strange beasts: a quantum computer struggles to add one plus one, but at the same time, it may be very good at solving certain differential equations. So with that, we're going to talk about quantum for the rest of the day.
And I thought what I'd do next is define what we think it means to be successful in quantum. You likely have heard expressions like quantum advantage and quantum supremacy. I wanted to read you Wikipedia's definition: "In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem." And I find the last part quite entertaining. So I can tell you today, we do not care about quantum advantage or quantum supremacy. These are really academic terms. Arthur C. Clarke, who was a friend of my father's, once said, "If an elderly but distinguished scientist says that something is possible, he is almost certainly right.
But if he says that it is impossible, he is also very probably wrong." And I think that really is the crux of the problem with these definitions of quantum supremacy and quantum advantage: it's very difficult to prove something can't be done. So, what do we care about? For us, the test is: can I solve a customer's problem with a better mousetrap at a better price? If I can, I'll make the sale, and IonQ will be successful. We don't care about the academic debates over quantum supremacy or quantum advantage. Our goal here is to build quantum computers that solve problems for customers.
While quantum computers might not be good at everything, interestingly, many business problems can be translated into a problem a quantum computer can run. Quantum computers look like they will be very good at optimization problems, and luckily, many business problems can be converted into an optimization problem, which means they can run on a quantum computer. This is a slide that we showed you early on, I think pre-IPO. It was the technical roadmap for IonQ for the better part of the decade. This year, we hit our technical goal, AQ 29, seven months early. When we created this slide, we didn't have much of a track record, but now we've shown we can deliver, and more importantly, deliver early.
The tech team is now working on AQ 35 and AQ 64: AQ 35 on Forte and AQ 64 on a new barium system. I show you this slide because in 2025, we expect to hit AQ 64. That will be here in the blink of an eye. For us, all this work has come together to build this remarkable machine, and we expect it to be a better mousetrap and solve problems for our customers in the very near term. So I want to talk a little bit about AQ 64 and why we're so excited about it. You've heard from us and others that every time you add a usable qubit, you double the computational power of the machine. So AQ 64 equals two to the 64.
You could type that into your browser right now and see what it means, but it's equal to 18 quintillion. I had to look up what that number actually meant 'cause I'd never heard it before. So in 2025, we will have a machine capable of exploring a computational space of 18 quintillion different states in a single instruction. Now we have to unpack what a quintillion is. It's such a large number that almost none of us have heard of it before. So I'll try to put this in perspective. If AQ 5 were the tip of a marker, then at that same scale, AQ 29 would be about the size of a basketball court.
At AQ 64, it would be larger than the landmass of the United States. So this is really the next machine coming from IonQ. Another way to look at it, just looking for places where you can see a quintillion: Frontier at Oak Ridge National Laboratory, the world's fastest supercomputer, can perform roughly 1.2 quintillion floating-point operations per second. We're talking about exploring 18 quintillion different computational states in a fraction of a second. Just for fun, the only other place I've heard this word recently was in the latest Mission: Impossible, where Tom Cruise talks about an AI that can consider a quintillion different possibilities.
And so maybe quintillion will become more and more popular in the human lexicon going forward. Although I wouldn't spend a lot of time on it, because if you just add a couple more qubits, then you're gonna need to learn the next word after quintillion. In fact, we will very quickly get to a point where mankind has not yet coined words for the kinds of numbers we're talking about. At AQ 256, that's 2 to the 256, a number roughly on the order of the count of atoms in the known universe, all 13.8 billion years of it. Beyond that, the numbers are just unimaginable.
And if you looked at our roadmap, by the end of the decade, we're talking about having 1,024 usable qubits. So that's two to the 1,024. And that's just a number. It's a 309-digit number, too large even to represent in the standard double-precision floating point a PC uses; you need arbitrary-precision arithmetic just to write it out. These are just unimaginably large numbers going forward. So AQ 64 really is a huge milestone, not only for IonQ, but for the quantum computing industry.
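To make the doubling concrete, here's a quick sanity check in Python. This is just a sketch of the arithmetic: Python's arbitrary-precision integers hold these values exactly, while standard double-precision floating point gives out exactly at 2^1024.

```python
# Each additional algorithmic qubit doubles the accessible state space: 2**AQ.
for aq in (5, 29, 64, 256, 1024):
    print(f"AQ {aq}: a {len(str(2 ** aq))}-digit number of states")

print(2 ** 64)              # 18446744073709551616, about 18 quintillion

# 2**1024 sits exactly at the overflow threshold of IEEE-754 doubles:
try:
    float(2 ** 1024)
except OverflowError:
    print("2**1024 overflows double precision; big integers handle it fine")
```

The point of the `OverflowError` is that ordinary float hardware cannot even name these numbers, let alone explore a state space of that size.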
When we started, actually, when I started, what experts told me was that roughly 65 to 70 good-enough qubits is the Goldilocks zone where quantum computing suddenly becomes really useful. At about 35 algorithmic qubits, you start to leave full classical simulation behind, and so you'll want to run circuits on a quantum computer rather than simulating them on a classical machine. All of this is now not too far away. And we just announced our first sale of one of these systems, actually two systems, to QuantumBasel in Switzerland. So next, I wanna talk about how IonQ pulled ahead of the rest of the competition.
You know, the very first thing is that we chose a qubit that Mother Nature provided for us. In the quantum world, there are two kinds of qubit modalities: one that is man-made, and one that uses an atomic particle Mother Nature provides. That's what we did. Our qubits are ions, and they're naturally quantum. It meant we didn't have to tackle the problem of creating a man-made qubit. Second, we're piggybacking on mature technology in several areas, such as atomic clocks. Our ion traps and an atomic clock have a lot in common, and that is very mature technology, used in GPS systems and fairly common.
Actually, an atomic clock nowadays is a chip only about 1.5 inches across, and fairly easy, I believe, to get. Our choice of technology meant, as we've said in the past, that we didn't need a breakthrough in physics, manufacturing, yields, or materials science. That gave us a huge advantage, which allows us to be first to this next phase of the market. The next big thing, which people are going to start to realize, is connectivity; you already know fidelity matters more than physical qubit count. What you're seeing here is a picture of the connectivity of an AQ 29 system: every qubit can talk to every other qubit directly.
On the other side, you can see a typical nearest-neighbor system from some of the competitors. If you wanted to have one qubit talk to a distant qubit, you would have to use the intervening qubits as wires, using an operation called a swap gate, which happens to be one of the noisiest operations in quantum. So you're basically using your qubits not for computation, but for connectivity. This is not a huge issue when you have a small number of qubits, but as you get to a much larger set of qubits, suddenly this connectivity will really matter in a big way. And so we expect the world, especially developers, to start to realize the advantage that we have going forward.
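As a rough illustration of that routing cost, here's a toy calculation. It's a sketch with an assumed decomposition: one SWAP is commonly compiled into three CNOTs, and the route is undone afterward to restore the qubit layout.

```python
def nearest_neighbor_overhead(i: int, j: int) -> dict:
    """Extra operations to entangle qubits i and j on a 1-D nearest-neighbor chain.

    The intervening qubits act as wires: each hop costs a SWAP (~3 CNOTs).
    On an all-to-all (trapped-ion) topology this overhead is zero.
    """
    hops = max(0, abs(i - j) - 1)
    swaps = hops * 2               # route there, then undo on the way back
    return {"swaps": swaps, "extra_cnots": swaps * 3}

print(nearest_neighbor_overhead(0, 1))    # adjacent qubits: no overhead
print(nearest_neighbor_overhead(0, 9))    # 16 swaps, 48 extra noisy CNOTs
```

Since every one of those extra CNOTs carries its own error, the overhead compounds quickly as circuits get wider, which is the point being made about connectivity.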
Of course, the second part is the fidelity itself. I don't think it's controversial at this point that ion traps have the best native average 2-qubit gate fidelities on the market. That fidelity controls the size of the circuit you can run, and we'll talk about what you can do with AQ 64 in Dean's talk coming up. Now, this next section is really interesting; maybe it's the big announcement of the day, in some sense. We showed you this slide previously. It was based on BCG data, and it talked about the three different phases of quantum.
The very first phase was the NISQ era, doing computation with noisy qubits. The second phase would be to reach higher fidelities and get to error correction, which would enable much larger programs, things like AQ 64, to happen. And we reported that we had shown error correction with an overhead of 16 to 1, meaning 16 physical qubits to get another nine onto the average 2-qubit gate fidelity. But what's happened in the last year and a half is that we now think we have a good shot at getting to AQ 64 without error correction, and that's really big for us. To be clear, we're working on both an error-corrected approach and a non-error-corrected approach.
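A quick back-of-the-envelope on what that overhead implies (assumed, simplified numbers; real error-correction overheads depend on the code and the target logical error rate):

```python
logical_qubits = 64        # the AQ 64 target
overhead = 16              # ~16 physical qubits per logical qubit, per the figure above

print(logical_qubits * overhead)   # 1024 physical qubits with full error correction
print(logical_qubits)              # vs. only "a little more than 64" without it
```

That 16x difference in required hardware is why skipping full error correction for AQ 64 matters so much.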
How do we think we can do that? In the last two years, we found something called error mitigation, basically a statistical approach to find some of the systematic errors in the system and remove them before we deliver results to the customer. This was particularly important for our systems because much of the noise you see in them comes from the control software itself, so we can mitigate it through software. We don't know yet, but our intuition is that we can get there now without full error correction, and Dean is going to talk about that shortly.
That allows us to enter a new phase, where we're now talking about AQ 64 in phase two, and instead of calling it the error-corrected era, we're going to call it the enterprise-grade era for quantum. Because at AQ 64, we believe we can start to build that better mousetrap and deliver value to customers. I personally have not heard any other company talk about getting, within the next two years, to easily manufacturable systems with sufficient power to deliver real economic value to customers. Today, what you see in almost every case is proof-of-concept projects with enterprises.
They tend to be small and low value, and we're now moving away from them and working on full-blown applications that will take advantage of AQ 64. We're only two years away from that, and like any other software development, this takes time. If we all decided to start a company and build a social media application, it would not be unreasonable to expect that to take a year or two to develop. So we're at just the right phase now, starting to create software with customers that will take advantage of those coming systems. This is an exciting time for the company going forward.
Especially in the next two years, you'll be hearing a lot more about this. You'll hear about some of the applications we're building today and the number of qubits required to run them. The next thing that gives us an advantage: when I first started, I think there were 35 or 40 employees, and Chris and Jungsang, myself, and a lot of students from Chris and Jungsang's labs were the early employees of IonQ. Over the four and a half years since I've been here, we have been moving the company from an academic organization to an engineering-driven one. We're proud of our academic origins, but that's the transformation we've been making.
You're going to hear today from the leaders we've hired in the last several years. You can see some of their backgrounds, and they'll tell you about them as well. For the most part, these are people out of industry. We still have one more transformation to go, from an engineering organization to a product one. That's the next phase, I would think, over the next several years for IonQ. One other advantage: early on, we were available on all three clouds, and we're unique in that way. Having a customer meant we had to build a product that met an SLA. Strangely, the cloud guys want this thing to run 24/7. Go figure.
So we had to be responsive to customer demands, and our customers have pushed us to build better products. They've made us a better company, but we aren't done yet. Without customers, we wouldn't be where we are today; that early customer contact in our life cycle pushed us to build better products. When we did the very first deal with Amazon, the computers would only run for a couple of minutes, and then they'd have to go into calibration. We famously joked at the time that we needed a bunch of physicists turning screws with screwdrivers to get it to work.
And I said to them: everywhere somebody out back turns a screw to calibrate the machine, I want a stepper motor there, and I want the operating system to control it and be the one calibrating the machine. So today, the machine is continuously calibrating; between jobs, it's going through and tweaking itself, where four years ago that was done by humans. We have made great strides in producing a product. I like to joke that no humans were harmed while we ran your job on our quantum computers, and that is increasingly so. But we're not done yet. The next phase is to put these into customers' data centers.
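A toy sketch of the scheduling idea described above (hypothetical names, nothing like the real control stack): instead of parking the machine between customer jobs, idle gaps are spent running the next small calibration task.

```python
from collections import deque
from itertools import cycle

def run_day(jobs: deque, calibration_tasks: list) -> list:
    """Interleave one calibration tweak after every customer job (toy model)."""
    tweaks = cycle(calibration_tasks)   # calibration never "finishes"; it rotates
    log = []
    while jobs:
        log.append(f"job:{jobs.popleft()}")
        log.append(f"cal:{next(tweaks)}")   # the machine tunes itself in the gap
    return log

schedule = run_day(deque(["A", "B", "C"]), ["laser-power", "trap-rf"])
print(schedule)
```

The design point is simply that calibration becomes a background activity owned by software, rather than a blocking event owned by physicists with screwdrivers.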
Of course, we don't wanna have full-time people there working to keep them up and running. So we're now very focused on how to do that, and on how to design the next-generation systems for field support. Today, a power supply might be in the back of the quantum computer; if it goes down, I might have to tear the damn thing apart to replace it. Going forward, I need to make sure that for the parts most likely to fail, a field support person with limited knowledge can easily pull the old part and get a new part in.
Okay, so another advantage is that we have ever-increasing revenue, and we get to apply that financial stream back to the product itself. That gives us an advantage over others who don't have these kinds of sales. And, of course, as everyone has noted, we have quite a bit of cash, a large war chest. It allows us to invest in ways that others can't: to hire the best, to build cheaper, more manufacturable systems, and, as I mentioned, systems that can be supported at a customer site. That's exactly what we're doing. People have asked us, with all that cash, are we going to be doing lots of M&A?
We have done one transaction, and we're always looking for a good deal. But we're really focusing our resources on building cheaper and smaller quantum computers with our vendors. To be clear, achieving that doesn't mean we have to buy our vendors. We're spending our dollars largely on things like paying for NRE (non-recurring engineering) with vendors to get to a smaller, cheaper product. The next thing, which you also already know about: this is a picture of the Bothell facility. We signed the lease earlier in the year. It took a while to get building permits; you've probably heard that before, especially if you own your own house.
We went through with architects and large construction firms little things like reinforcing the roof with structural steel so it could handle the weight of new air-handling systems, so that we can build a very clean environment in which to assemble these quantum computers. I'm proud to say we are on budget and on time, and this fall we will take occupancy of the building and start manufacturing the next generation of quantum computers. The other piece that allows us to do things others can't, largely because of our cash position, is our international expansion. You can see here our acquisition of Entangled Networks in Toronto. Of course, you're here in College Park. We just talked about Bothell.
Bothell, by the way, is about 65,000 square feet. To give you a sense of scale, that's about twice the size of this entire facility; keep that in mind while you're walking around today. In Basel, we will open a small facility to house our quantum computers, as well as a sales force and application development for the European market. We opened subsidiaries in Germany and in Israel, and have started hiring there as well. All these things are possible because of our financials. So I just wanted to talk a little bit about the life cycle of the hardware products. At the very beginning, it's all about research and development, the early lab experiments.
Then you start getting customers who push you to build better products. Then you get to manufacturing, to a point where you can actually build these things easily. Strangely, little things like: can you build the same quantum computer twice? Do you have engineering change orders, and all those things that real companies do? And lastly, can you support the product in the field? All companies have to run through this cycle, and IonQ is, I believe, the first in quantum that I know of that's actually doing exactly these things. So this is the same slide we talked about at the very beginning, with the three different phases.
And so we think we're about to enter this enterprise-grade era with AQ 64 within roughly the next two years. What does that mean? It means we're going to have first-mover advantage. You've probably seen slides elsewhere, for other markets, saying that first movers capture 90% of the value in a new technology; I believe there have been studies showing this. So we think that in quantum, both our customers and IonQ itself will have that first-mover advantage. And then my last slide is about where we think the company will be in the future. If we're successful, I think IonQ has the chance to be one of the leading companies of our time.
If we do what we think we can do, then looking 20 years down the road, IonQ might be in the same class as many of the large companies that are the most successful today. That's why we show up excited every day, and I think why our investors are investing in the company: the belief that we have a good shot at getting there. So, with that, I'm going to open up the floor for questions.
Just to let you know, I've previewed what we're gonna talk about today, but for the rest of the day, the experts are gonna dive in on things like how we get to AQ 64 and all the rest. I'm happy to answer any questions, but you really are gonna get a chance to meet the experts today, and for some questions, it might be better to ask them than me. So with that, I'll open it up for questions.
Sure, Peter, you touched on AQ 64 and whether it will or will not have error correction. Which way do you think it goes?
I think it goes without error correction, would be my guess. But in many of the things we do out back, maybe not everything, we're investing in multiple paths so that we have multiple shots on goal. So we're working on error correction just as much as we are without it, but the current belief is we can get there without it.
Would there be a difference in usable time if it's error-corrected versus non-error corrected?
Yes. The error correction itself takes time to run, so it takes away from your computational time. Error-corrected would be better in that sense, but going without it is just easier: I wouldn't need hundreds of qubits on a chip, only a little more than the 64 to get there.
Got it. Thank you.
Any other questions? Then in that case, I'm going to turn it over to our next set of speakers in a few minutes. Thank you.
Okay, we will now turn it over to Dean Kassmann, IonQ's VP of Engineering, and Pat Tang, our VP of R&D.
Hello, everyone. It's a pleasure to be here today; I get a chance to talk about something I love doing. I'm Dean Kassmann. I run engineering here at IonQ, which I joined in 2021. I oversee all hardware and software development at the company, ranging from the cloud down to firmware, and from the ions out to our enclosures. Before IonQ, I worked at Blue Origin, where I ran all research and development for the company. I started there many years ago, when the company was very small, and I was part of the original team that did the very first soft powered landing from space of the New Shepard rocket. I led many teams while I was at Blue Origin.
I eventually took over the flight sciences organization and started the initial design of the New Glenn orbital rocket, and then I kicked off the R&D department and built it from scratch. In 2021, I joined here, starting a little bit on the R&D side, but quickly moved over into engineering. So today, I get the lucky pleasure of talking about our path to AQ 64 and some of the ingredients required to get there. Before I do, I wanna first back up a little and talk about AQ itself, the metric. The AQ metric is an application-based metric, right?
We've defined it specifically to represent the kinds of circuits that comprise workloads, the building blocks of what we would normally consider user application problem spaces. We have six metrics that comprise the overall benchmark. When you read the overall benchmark chart, which I'll be talking about later, there's depth and width. Depth is the number of 2-qubit entangling gates we can achieve; for AQ 64, that's a little north of 4,000. Width is the number of qubits we end up using in the benchmark. Different components in the benchmark push different pieces: some of them require a smaller number of very high-quality qubits.
Others require a much larger number. The benchmark came about through a collaboration in the QED-C, so many companies contributed. In addition to AQ, we run component- and subsystem-level benchmarks as well; it's not our only measure of system performance. I wanted to give this background because it becomes important as we push to AQ 64 and define the boundary of what we're shooting for. Peter talked before about what happens as you increase the computational power with every additional AQ. The computational space for an AQ 1 system is 2, right? Two states, like a two-sided coin.
As you start to move up, that doubling of the computational space makes a huge difference in the kinds of problems you can solve. I don't wanna belabor this; Peter covered really well the size of the computational space for AQ 64, and that entire space is accessible to the algorithm or the circuit. Peter also talked through the overall technology roadmap we have. This is the same slide that Peter showed, but I wanna speak to it from the engineering viewpoint a little bit.
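One way to see why depth and width go together: under the AQ definition described above, a width-n system roughly needs to run circuits with about n squared two-qubit gates (64 squared is 4,096, the "little north of 4,000" figure) and still pass a success threshold. Here's a hedged back-of-the-envelope, assuming a toy model where per-gate errors simply multiply and an illustrative 1/e success threshold; the real benchmark is more nuanced.

```python
import math

def required_two_qubit_fidelity(aq: int, threshold: float = math.exp(-1)) -> float:
    """Rough per-gate fidelity so a depth ~ aq**2 circuit stays above threshold
    (toy model: circuit fidelity ~ gate_fidelity ** depth)."""
    depth = aq * aq                  # e.g. 64**2 = 4096 entangling gates
    return threshold ** (1.0 / depth)

for aq in (29, 35, 64):
    f = required_two_qubit_fidelity(aq)
    print(f"AQ {aq}: depth ~{aq * aq}, needs ~{f:.5f} 2-qubit fidelity")
```

Even this crude model shows why each step up the AQ ladder demands another increment of gate fidelity, which is the thread running through the rest of this talk.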
So we've been making continual strides over the years in the things that go into our systems, the things that drove the AQ 25 and AQ 29 deliveries ahead of schedule. I'll talk a little bit about AQ 29 and what went into it, and then I'm gonna spend a lot of time on the AQ 35 and AQ 64 ingredients going forward from there. Just as a reminder, right now our AQ 35 development work is on ytterbium, while AQ 64 will pivot over to barium, and I'll talk about that as well. For AQ 29, quite a few different components went into it. Moving from our AQ 25 architecture to AQ 29, we adopted acousto-optic deflectors, or AODs.
That gave us better beam-steering capability and overall better beam quality delivered to the ions, which resulted in better fidelity. We introduced error mitigation techniques more formally and invested more heavily; I have more detail on that later. We've been continually improving our hardware, in particular our control electronics: very high-performance control electronics to drive the waveforms we use in executing gates have been extremely important in increasing fidelity. And we've been continually improving our compiler. Now, for AQ 64, there are a lot of new things to add to the plate. As I mentioned before, we're moving overall to barium, and there are benefits I'll be talking about.
We're going to continue to leverage error mitigation techniques. We're gonna be talking about multicore and RMQA. There's new trap technology we're introducing for AQ 64. I'll cover some of the larger software improvements across the stack, and then some of the other items that fall along with that: continued investment in the compiler, and multi-zone operation, which adds additional parallelism. I'm not gonna touch on what comes after AQ 64; that's where I'll hand things off to Pat, and he'll describe things beyond AQ 64. There's a tremendous number of exciting investments we're adding to the pot as we move forward. And so I want to first talk a little bit about barium.
We've briefed this before, but barium has a fundamentally lower spontaneous-emission limit, which results in a higher fidelity ceiling that we expect to achieve with barium. Its relevant transitions are also in the visible spectrum as opposed to ultraviolet, which gives us access to a wider set of optical engineering. We can piggyback on telecom technology, with some down-conversion and other techniques, to drive a wider technology selection. That gives me more engineering freedom in how we design our optical subsystems. Another thing barium gives us is longer-lived states in its internal atomic structure, which allows us to lower the overall SPAM (state preparation and measurement) errors.
It also gives us additional options in the atomic protocols we employ on barium, letting us do things that are a little more exotic, which we hope to talk about in the future. There are other things, such as single-isotope approaches that allow us to do both cooling and computation with a single atomic species. Those are some of the leverage points we're looking for with barium. We have a lot of investment in barium to date. Some of the original barium work focused on very long chains: high-performance, long-chain operation.
A lot of the investment since we last talked about barium is now going into scaling it: what we need to do to go to multicore and then multi-QPU. So we've increased our overall investment in barium over the last several years. Now, Peter talked a little bit about error mitigation. Error mitigation is a technique we use to remove some of the systematic and stochastic errors in the engineering design. Our overall approach is to first design the errors out; second, for those errors you can't design out, you mitigate. At some point, you can no longer mitigate the errors, and you need error correction for those.
So right now, the main stochastic and systematic errors in our system surround our two-qubit gates. They're the bottleneck. Our one-qubit errors and our SPAM errors are well under control. Our investment in error mitigation techniques has allowed us to hit AQ 29, and it will continue to be a key ingredient as we push to AQ 64. The debiasing and sharpening techniques that we introduced allow us to resolve the signal out of the noise for the overall quantum computation to a much greater degree. And there are other techniques that we'll be adding to the mix as we move to AQ 64.
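To make the debiasing-versus-sharpening distinction concrete, here is a toy Python sketch of the two aggregation styles. The histograms, counts, and function shapes are invented for illustration; this is not IonQ's implementation. Debiasing averages results over symmetrized circuit variants, while sharpening takes a plurality vote over each variant's most likely outcome:

```python
from collections import Counter

def debias(variant_histograms):
    """Average outcome distributions across symmetrized circuit variants.

    Randomizing over variants turns coherent, variant-specific errors
    into unbiased noise that simple averaging suppresses.
    """
    total = Counter()
    shots = 0
    for hist in variant_histograms:
        total.update(hist)
        shots += sum(hist.values())
    return {bits: n / shots for bits, n in total.items()}

def sharpen(variant_histograms):
    """Keep each variant's most frequent outcome, then vote.

    When one 'true' bitstring dominates, plurality voting across
    variants suppresses noise more aggressively than averaging.
    """
    winners = [max(h, key=h.get) for h in variant_histograms]
    top, count = Counter(winners).most_common(1)[0]
    return top, count / len(variant_histograms)

# Three variants of a circuit whose ideal output is '11'; noise
# scatters some shots into other bitstrings in each variant.
variants = [
    {"11": 70, "10": 20, "01": 10},
    {"11": 60, "00": 25, "10": 15},
    {"11": 80, "01": 20},
]

print(debias(variants))   # averaged distribution; '11' dominates at 0.7
print(sharpen(variants))  # -> ('11', 1.0): every variant votes '11'
```

The trade-off the speakers allude to is visible even in this toy: averaging preserves the full distribution, while voting gives a sharper single answer when one is expected.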
The overall belief right now, as Peter mentioned, is that to hit AQ 64 we do not need to employ error correction techniques; we'll be able to double down on error mitigation. That's not to say we're not investing in error correction. That work continues, but for the purposes of the current AQ 64 drive, we believe we have everything in play. One of the things that we do need, though, is to move to multicore, and RMQA is IonQ's overall approach to a multicore system. Let me back up. A single chain is limited by our ability to deploy high-fidelity entangling gates across it. At some point, through whatever error sources, you hit limits on your ability to execute circuits.
It's not only fidelity; there are other error sources that limit overall application and circuit performance, so at some point you hit a natural limit to what a single chain can support. You then need to move to multiple chains. What we have is the RMQA architecture, a multicore architecture, where you have multiple chains, or sub-chains, or cores in a single ion trap, and we do shuttling and merge operations to combine and separate those cores. This also means, implicitly, that we have multiple operational zones so that we can execute gates in parallel across those different cores. This will become operational as part of our AQ 64 effort. To date, we haven't had to use it.
We've been able to get the performance that we need in a single core, in a single chain. As we move to AQ 64, we'll need more ions, and we need to go to multiple cores. So it's a huge push now to drive a multicore architecture. Our current traps support it, but we're looking at additional investments, and that's where our trap development and investment comes in. We've talked very briefly on past earnings calls about the MGT, or multilayer glass trap. We have an evaporated-metal-on-glass trap right now, or EGT, which has been the workhorse. It's deployed in all of our development and research systems, and we'll continue to use it for many years to come.
However, we've been investing over the last several years in a new technology, the MGT, which is a multilayer design. It allows denser routing for more electrodes. More electrodes mean more ions, and more ions mean more performance. We also have more robust manufacturing techniques for the MGT, so we get higher yields out of it. And it has more quantum zones, like I mentioned. All of those drive the multicore capabilities. There are other elements of the MGT that are co-designed with our overall optical system design and architecture. Right now, the MGT is in its third generation of design, characterization, and fab, and we have additional generations to move through on the way to our AQ 64 system.
This will become our workhorse moving forward. I haven't really talked about, and IonQ hasn't really dug into, the software side of things, but I want to talk a little bit about some of the software changes and improvements we're making across the stack to drive to AQ 64. IonQ is a full-stack software company. My team runs software from the cloud all the way down to firmware, so we cover all of the integrations with our cloud providers, but we're also doing the low-level, real-time control for gate pulses and everything else. If I look at our overall software stack and think about AQ 64 and what's needed to get there, let's start at the bottom with our real-time control.
For real-time control, we'll be introducing micro-calibrations. Peter talked a little about the automation and calibration aspects, but being able to do micro-calibrations in real time, to keep the system at peak performance throughout its uptime, will be an important improvement. We're also looking at look-ahead and pipelining techniques to get more throughput, as well as just better streaming of real-time instructions down onto the hardware and to our gates. If we move up to the OS level, we've had an ongoing effort to rework our OS, and a new generation of operating system will be deployed as part of our AQ 64 work.
On the compiler side, we're going to continue investing in an optimizing compiler. It's an art to take a very large circuit and compile it down across cores and onto the hardware, figuring out the mapping to the individual hardware. IonQ's overall position is that you cannot use just a generic compiler. Taking advantage of the hardware, and understanding the hardware topology, gives us what is, right now, the best compiler on our hardware that we've ever seen. And we're going to continue to invest in that for the AQ 64 pieces.
On our SDK and API pieces, there are additional improvements coming for end-user tooling to support hybrid workloads, as well as just submitting and retrieving jobs. End-user experience is important to us. The tooling is also going to be improved to support an overall hybrid fabric, where you're looking at GPU, QPU, and normal CPU operation together. On the other side, like I said, we have ongoing investments in our automation and calibration work, because we need to keep the systems optimally calibrated and automated.
The idea is to take our commissioning times and drop them down, and to drive our automation and operator workloads down. That's just part of bringing a product to market. We have to drive those improvements on the software side so that the software just works, almost like an appliance. It'll take some time to get there, but it's a big investment that we're making across the entire stack. So, with that, I'm going to turn it over to Pat. He's going to focus on some of the things we also have in work beyond AQ 64. Okay?
Thank you very much, Steve. Thank you. Well, welcome to IonQ. My name is Patrick Tang, and I was once upon a time a quantum physicist in the semiconductor world. I took a detour into consumer electronics, at Apple and Amazon, and my last stint was as an engineering VP working on Kindles, Echos, and Fire TV. A very different world from what we have here. But it's an immense privilege for me to come back full circle to my quantum mechanical roots here at IonQ. So thanks for your attention. I want to dive a little into the physics behind the power of quantum computing and what's leading us to AQ 64.
What we have here is a diagram of what's inside our quantum computer: a qubit, which holds the information, and a laser system, which manipulates that information. We're all familiar with the classical world, where information is held in binary, in ones and zeros. But on the qubit sphere, in the diagram on the left, you can see that beyond just the one and zero at the north and south poles, we have quantum states that live on the surface. Each state on the surface of this sphere is an extra quantum computing state that we can manipulate. So what does this mean? It means that when we're computing, each state on this sphere holds a possibility, a probability, of being one and zero simultaneously.
This is nature's first gift, superposition, which is behind quantum computing. We control this immense capacity to hold information with lasers, which brings us to the diagram on the right. It's a representation of the two qubit states at the north and south poles, one and zero in the ion, and we can drive transitions between these states using two lasers offset in energy by precisely the qubit energy itself. By manipulating the intensity, the phase, and the exposure time of these lasers, we can transition to any state on this qubit sphere. In terms of hardware, this is how quantum computing is done at IonQ. It turns out that's very nice theoretically, but in reality, our lasers are quite noisy.
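As a rough illustration of the control knobs just described, here is a toy numerical sketch (not IonQ's control code): the pulse's intensity and duration set a rotation angle, and the relative laser phase sets the rotation axis, so standard π/2 and π pulses move the qubit anywhere on the Bloch sphere:

```python
import numpy as np

# Toy model of a resonant two-photon drive: it implements the
# single-qubit rotation R(theta, phi), where theta is set by laser
# intensity x pulse duration and phi by the relative optical phase
# of the two tones.
def rotation(theta, phi):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * np.exp(-1j * phi) * s],
                     [-1j * np.exp(1j * phi) * s, c]])

ket0 = np.array([1.0, 0.0])            # start in |0>

# A pi/2 pulse puts the qubit in an equal superposition of 0 and 1...
psi = rotation(np.pi / 2, 0.0) @ ket0
print(np.abs(psi) ** 2)                # equal 0.5 / 0.5 probabilities

# ...and a pi pulse flips it all the way to |1>.
psi = rotation(np.pi, 0.0) @ ket0
print(np.abs(psi) ** 2)                # all probability on |1>
```

Varying `theta` and `phi` continuously reaches every point on the sphere, which is the "any state on this qubit sphere" claim above in matrix form.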
You can see here that our orange and blue lasers form a very noisy cone, both in intensity and frequency. We have to find individual pairs of the orange and blue spikes in order to match the qubit energy. It's a very messy affair, and the match is never perfect, so we end up with extra intensity and extra frequency, which act as noise that disturbs our quantum system, much in the same way that heat disturbs a CPU and degrades performance. The technical feat we've achieved recently is producing a laser with two pure tones. This was done by cloning a laser at low frequency, offsetting the copies slightly in frequency, and then converting them up to higher frequency to match the qubit energy exactly.
That way, all the energy is dumped into quantum computing rather than into excess noise that disturbs the quantum system. What we've done, in effect, is analogous to moving from the noisy, distorted sound of an electric guitar to a tone purer than an acoustic guitar, at the same loudness. What it means for computing is that we've increased our fidelity by at least tenfold, with a corresponding increase in quantum computing power. But quantum computing power is one thing. How do we scale up beyond AQ 64? We do this by interconnecting our different quantum computing units. What you see in the diagram on the right is a constellation of individual QPUs, which are interconnected, and they're interconnected via entanglement.
Entanglement is nature's second gift to us in quantum computing, and it ensures that information is connected and correlated across the individual QPUs. The point to really drive home is that it's the number of correlated interconnects that contributes to quantum computing power, rather than the number of individual cells themselves. This is what leads to the quantum computing power that Damir was referring to. At IonQ, we are going to be interconnecting two of these nodes. We're working toward that as our first step to creating a truly interconnected, distributed computer, much in the same way that a CPU has multiple cores connected to give it extra power. We've talked about increasing compute power, but it's also important to think about scaling down in footprint, right?
That's so we can make quantum computing more accessible to others. The diagram on the left is the design of our next-generation extreme-high-vacuum package. It will hold a vacuum similar to that of the Large Hadron Collider, but obviously at a much smaller footprint. And in order to manufacture and assemble this package, we have to build a lower-grade vacuum, what we call an ultra-high vacuum, shown on the right, to assemble the package in, and that will be coming online very soon. This is one part of how we're going to scale the quantum computer that Chris will show you. We're going to be shrinking the vacuum package down to the size of a deck of cards.
I'd like to conclude this brief talk with one last thing, to show that this is a reality and we're driving it home. I have here a prototype of this vacuum package that I welcome you to pass around and observe. Thank you very much for your attention.
Okay. Right. Hi there. I guess a question for both of you, but maybe more for Pat on your slides there. How are you leveraging the current, you know, research and, you know, industrial base for lasers today versus making something that's entirely new? Or when you talk about clean frequencies, are those things like the comms business has tried to achieve in the past? Are you trying to do different things and therefore have to find new suppliers and new approaches to get these things done?
Definitely new approaches to getting this done. Just to reiterate, I missed one point: this frequency combining had to happen at low frequency, because technically, what I just described could not be done at the frequencies that are usually practical. So we had to do it at low frequency, offset the tones, and then frequency-convert up to high frequency. It is novel in that respect, but we're leveraging a lot of off-the-shelf parts; we're just combining them in a way which is unique to IonQ.
Okay. So is it difficult to get the frequencies you need, or is it the noise part? Or what, what other elements are you trying to do that haven't been attempted or at least focused on by history?
We had to correct for phase noise, and we had to ensure, during the transfer of the frequencies upwards, that we maintained intensity and phase stability as well, in order to hit these ions. Quantum systems are very delicate, so it's very important that the lasers hit the ions with as accurate a phase and amplitude as possible. That was the uniqueness of the IonQ work; it's really the combination of parts that you can get. And to answer your question about collaboration: yes, we have collaborations with academia to stay at the state of the art in laser control, and in our quantum computational space as well. We mentioned error correction, which is a big space, so we obviously invest in that area too.
Okay. Okay, so just my last question, just understand, how much of this is ongoing inside of IonQ? Are you developing your own lasers or leveraging outside, you know, suppliers? I cover a number of optical comms companies, so I'm somewhat familiar with a lot of the companies out there. Are you doing this entirely in-house or leveraging what's outside?
We're leveraging lasers from outside.
Okay.
Okay? But I think the way we combine these lasers is what's unique inside IonQ, in terms of the optical path.
Okay, thank you.
Thanks for all the great color there. Just wonder if you could talk a little bit about the effort that went into the design cycle and the shrink here for this prototype that we're passing around. What is the heavy lift there, and do you feel like you've achieved all you can, or can you do even better than this, maybe further reductions or from a cost perspective?
We could always do better. Part of the hard technical feat involved here is that we have to preserve the polarization of the light coming out, so there's a large materials science effort going on, especially around bonding windows to this package. That is not trivial, because there are two challenges: you want to maintain the vacuum, but also preserve the light's polarization at the same time. So we're facing a materials science challenge right now in how to attach these windows to the vacuum package while having them withstand atmospheric pressure as well. There's quite a lot involved, but we have some good paths to get there, and we're pretty confident that we will.
Right.
One of the things I found interesting is that in the vacuum package itself, you see what looks like a little glass window. It turns out that hydrogen will just float through glass, and so it'll destroy the vacuum. Our everyday experience with how things work says we can hold a glass with water in it and be fine. But here we're holding such an extreme vacuum that it doesn't take much to disturb it. So the question becomes, what other materials could you use besides standard glass? And we've found some that stop those kinds of things from happening. But when you first do it, who would have thought that hydrogen would just float through a glass window?
Turns out it happens. Those are the kinds of crazy things that we're finding and exploring. I'll just add something from a software point of view. There used to be an old interview question, back in the 1980s. Originally, tennis balls had a little valve on them, like a basketball or a football. And back in the day, every once in a while, a tennis ball would fall on that little valve and bounce funny. People who played tennis thought, "Oh, this is bad." So someone came up with a way to produce a tennis ball that didn't have that little valve on it. And the interview question is: how would you design such a system?
For a software engineer, strangely. The answer, it turns out, just in case you get interviewed as a software engineer in the future, is that you take the machine that builds the tennis balls and put it in a pressurized environment. Now the tennis balls come out pressurized, and when you take them out of the pressurized building, they stay pressurized without ever needing a valve. We're doing exactly the opposite. So there's a new interview question, I guess, for quantum engineers in the future: the machine that Pat showed is a vacuum chamber that we're assembling the vacuum packages in, the exact opposite of the tennis ball machine in its pressurized environment. So-
Peter asked me that question as well.
Yes, it's... Well, it seemed really important for trying to figure out how to do our vacuum. Anyways.
A question for Dean. You highlighted a number of areas you're working on to achieve AQ 64, the barium, the error mitigation, the multicore. You know, if you had to sort of rank order, which is the most challenging or, you know, where do you think you still have the most work to do among those four or five things you highlighted to get to AQ 64?
In all of the quantum engineering design, it's generally the optical systems that are the drivers of performance and the most notoriously difficult on the design side. There are a lot of design constraints that we work against. Pat talked about some of them, like getting the beam pointing and everything through there. We have had success with our AODs, but we're looking at the multicore work now. So I would say the optical system design is generally, and will continue to be, the bottleneck and the biggest push.
Thank you. This may be for Dean as well. I guess, why wouldn't you be using barium qubits now? And what are some of the main challenges you see in implementing barium into your next system? And also, do you expect your competitor, your main ion trap competitor, to switch to barium as well?
There are two questions there. Let me take the first one first. Right now, we have barium in work in our development systems. When you go on the tour later, some of the systems you'll be looking at are running barium as we speak. That development activity is continuing. As I mentioned, we've been investing in high-performance long chains, and we're looking at what we need to do to go to multicore and beyond. To date, the main reason we haven't used barium in our public offerings, like Forte, is that we just haven't had to. The simple answer is that I don't want to over-index and make anything more complicated when we already have a high-performing ytterbium system.
The ytterbium systems are going to be our workhorse through AQ 35, and the additional investment that we've made in barium will pay dividends when we get to AQ 64. One of the other objectives, which Peter touched on earlier and we haven't talked a lot about, is the fact that we are also taking a manufacturing mindset. The reduction in trap size, like the MGT, those are all manufacturing-focused, product-focused investments. And the move to barium, with its visible-light aspects and the lower maintenance costs compared with UV, all goes into why we want to push on barium moving forward. In terms of your second question: they do use barium, right?
But they use a multispecies approach, barium and ytterbium. Some are cooling ions, some are computational ions, so it's just a different architecture. They need to go to a dual-species approach, where we, for the near term, are looking at a single-species approach.
Thank you.
Yeah. I just wonder if you could talk to sort of the size of your guys' R&D workforce, and you mentioned the importance of software. Can you talk about how many of those people are focused on software versus systems versus more, you know, theoretical PhD-level science?
So let's see. I don't want to get into numbers, but right now, between Pat, myself, and Dave Mehuys, who will be talking tomorrow, we all touch hardware in the company. My team, engineering, is the largest by multiple factors. I would say Dave's team is next, followed by Pat's. Pat owns, within R&D, a lot of the lower-TRL development, the technology-readiness-level development, across the board. He works both push and pull technologies: things where he'll say, "Dean, I need you to really think about on-ramping this into your engineering development," and things where I ask Pat, "Can you deliver that?" That's where most of the applied research is occurring.
I'm really trying to drive the engineering work forward, and that involves all the classical software development, firmware development, FPGA development, and work higher up the stack. Right now, we tilt heavier toward hardware than software, if I think about the overall balance of my organization. But there are a fair number of integrators, people doing the hands-on theory work or the hands-on integration work associated with the systems you'll see on the tour. And then Dave's organization takes the engineering artifacts, the technical data packages my team creates, and uses those to drive and inform tooling infrastructure for the low-volume manufacturing work.
Just a question from our live stream audience. You mentioned that there's a large amount of the IonQ team sitting within the R&D and engineering organizations. What motivates these people every day? What are the things that you talk about in your team meeting that get people excited?
Perhaps I can start off with what motivates me. What motivated me to come to IonQ is that I believe the quantum mechanical scheme we have to get to scale is the most elegant physics out there. To this day, I'm still enamored with the elegant physics that drives this. In the long view, I think that's a very good motivator for us, along with the fact that we are beating what we promise. So we have a metric, we have elegant physics, and I think that really drives our organization.
It's interesting; I get slightly different feedback from my team, and I stand in a slightly different place. I come from Blue Origin, so I'm used to having very audacious goals set in front of me that you have to figure out how to deliver on, regardless of their difficulty. One of the beauties of what we're working on is that there is very complex, hard physics at work, but there's also just a lot of engineering, a lot of blocking and tackling, and a lot of company building that we have to do. A lot of the people at IonQ now, one, enjoy the challenge. We are setting goalposts that are audacious, and that gets them out of bed in the morning.
The second is that there is an elegance to the work we're doing, to the theory and everything else in terms of the atomic and AMO physics at play. And at the end of the day, we're not trying to build computers for their own sake; these are a tool to change an industry and change the way we can look at and solve problems. I think that higher mission is really a big part of what gets people excited. It's part of what brought me to IonQ.
I have a question on terminology. I'm familiar with the term error correction from the storage industry, but you use the term error mitigation, and I just want to understand the difference. It sounds like it's proactive versus reacting while the system is running. If that's the way you're doing it, are there teams separately focusing on what you call error mitigation? I'd love to hear what you include in that versus error correction, which I'm familiar with. Just want to get the terminology cleared up.
I'll take a stab at this and maybe then pass it off to Pat or even Jungsang. We do have separate teams looking at error correction and error mitigation. The error mitigation work is not dynamic; it happens prior to compilation and in post-processing afterward. It's generally algorithmic, in the sense that you're looking at how to modify the circuits, or how to do post-processing, to elicit a better signal-to-noise ratio out of the results. When you run a circuit, you normally run it many times. You get a histogram of outputs, and the real value is somewhere in there. Very few circuits are one-shot, single-answer circuits.
It's multiple runs of the actual circuit to eke out the answer. There are some mitigation techniques where you employ what are called ancilla qubits to augment the circuit. Those are smart instrumentations of the circuit that let you, once again, eke out a better signal-to-noise ratio or understand some of the error sources. They're not the same implementations as error-correcting codes, but they let you improve results without needing to go to very large error correction ratios, which require thousands of qubits for a very large error-corrected AQ system. So with a smaller number of operational, high-fidelity qubits, we have the ability to really push the envelope.
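As a toy sketch of one ancilla-based mitigation idea (the shot data and the convention that the last bit is the ancilla are invented for illustration, not an IonQ implementation): you can post-select on a symmetry check, discarding the shots where the check fails:

```python
from collections import Counter

def post_select(shots):
    """Keep only shots whose ancilla (last bit) reads '0'.

    The ancilla is instrumented to verify a known conservation law;
    a '1' flags that an error likely occurred during that run, so the
    shot is discarded before building the output histogram.
    """
    kept = [s[:-1] for s in shots if s[-1] == "0"]
    total = len(kept)
    return {bits: n / total for bits, n in Counter(kept).items()}

# Eight raw shots; two fail the ancilla check and are thrown away.
shots = ["110", "111", "010", "110", "100", "110", "011", "110"]
print(post_select(shots))   # '11' dominates the surviving histogram
```

The cost is throughput (discarded shots), not extra qubits at error-correction scale, which is the trade-off described above.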
Not sure if this is an appropriate question for you two, but since you're on the R&D side, and then, Pat, you mentioned the elegant physics of ion traps. Could you maybe spend a minute looking at the other qubit modalities and say, what do you think the biggest challenge is to superconducting qubits or photonic? You know, what are the challenges you think that those, you know, modalities face, you know, to scaling to a quantum advantage system or fault-tolerant system, you know, over the next decade?
I want to be careful not to be critical of my friends, but quantum computing is a balance of fidelity, speed, and scalability. Each scheme has its different advantages, and we have very, very long coherence times here at IonQ. I think we have our own roadmap to get there as well. So it's hard to do an apples-to-oranges comparison between these different schemes, but just to let you know, it's a balance of those three factors between these technologies.
I'd like to comment a little on that one. Thanks for the question. At the end of the day, we're trying to build computers that solve problems. The components matter, but architecture matters, tools and software matter, and eventually the application matters. If you look at it from the overall perspective, we focus very much on the hardware and qubits, but as Peter mentioned earlier, the connectivity and the architecture, how to actually build systems that can tackle real-world problems, all of that matters, and I think it's very important to see that. We have a very strong quality in our qubits because these are atoms; they're all individual atomic clocks.
All of these are foundations of what makes qubits behave quantum mechanically. We start from that foundation, but we also took an approach where all the qubits are connected to each other. That makes algorithm implementation extremely effective, and it's what allows us to get to high AQ numbers. As for error mitigation: when you see errors, error correction is actually a very expensive way to fix them. If you understand what the errors are, you can typically fix them well before you throw a lot of resources at them. These are very natural approaches. If you look at communication systems, like cell phones and optical communications, we do a lot of error mitigation before we get to forward error correction.
That's a very expensive process. So there is a lot of common knowledge and common foundations that we build these complex systems on, and I think we should look at it from that perspective. I know all of our competition is also looking at their own challenges, and I'm sure they're being extremely innovative, but at the end of the day, it's the ability to build machines that can solve problems.
I just wanna ask a follow-up. My perception of gate models is that you do the programming through laying out the circuits. Is that done by the customer, or are you guys also now helping to do the error mitigation? Is that kinda becoming a blended effort, or how is that gonna hand off, I guess? 'Cause I heard that in your explanation of error mitigation.
What we see right now is both. We have, as part of our compiler team, our architecture team, and system performance, individuals who are focused on these error mitigation techniques and some of the compiler optimizations. We also have our solutions and applications team interacting directly with customers, so they have an ability to understand the problems, and if there are specific tailorings or error mitigation customizations that can be added, that's the beauty of it: working that problem space and those problem sets gives us learning that we then roll back into the larger software base. Not all error mitigation techniques are applicable to all problems. It's not like a universal fault-tolerant error correction code.
Some of the error mitigation techniques work really well for a given problem class, so you need to understand which compiler settings you're setting, or just how to think about your problem. We have people both within my organization and within Jungsang's organization thinking about this and working through it, and we collaborate on a regular basis: new capabilities developed in the base software stack get driven out to our solutions team, and they bring knowledge back that informs the software development.
I'll just mention an interesting contrast between a CPU and a QPU. In CPUs today, like when I was at Amazon, we built microservices. What you see as Amazon.com is no longer a monolithic application; it's an application spread over thousands of servers with little microservices. There might be one little service that does taxation and another that does something else, and they're all glued together, and that's what you see as the final application. But there is no compiler technology today for CPUs that can take a monolithic application and spread it out over a number of CPUs and GPUs. Instead, we require software engineers to figure out where the boundaries are, as to what things should run on which machines.
Like at Amazon, we would say, "Well, this will be the taxation component." The compiler didn't figure that out; a software engineer did. What's interesting in our compiler is that we're now going to a multi-core system, and we don't want the end user to have to divide up the application and figure out what runs on which QPUs. Instead, what we're doing is taking a monolithic circuit, one large circuit, and the compiler goes through and says, "Oh, I think this part of the circuit should go over here, because maybe it needs the least connectivity with another piece that's going to run on a different QPU." So it breaks the circuit up over multiple QPUs for you, instead of the customer having to do that.
If we're successful, those are the kinds of things that customers shouldn't have to think about. And I can say, as a software engineer, boy, I wish I had that in the CPU world, where I could take a monolithic application and just somehow magically spread it across all my available compute resources. So we're really excited about the compiler technology that we're developing here at IonQ.
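To make the partitioning idea concrete, here is a minimal, purely illustrative sketch (not IonQ's actual compiler): it treats two-qubit gates as edges between qubits and brute-forces the even split of qubits across two QPUs that minimizes the number of gates crossing the boundary.

```python
from itertools import combinations

def partition_circuit(num_qubits, two_qubit_gates):
    """Find the even split of qubits across two QPUs that minimizes the
    number of two-qubit gates crossing the boundary (brute force, so only
    practical for tiny circuits; a real compiler would use heuristics)."""
    qubits = range(num_qubits)
    best_split, best_cut = None, float("inf")
    for group in combinations(qubits, num_qubits // 2):
        qpu_of = {q: (0 if q in group else 1) for q in qubits}
        cut = sum(1 for a, b in two_qubit_gates if qpu_of[a] != qpu_of[b])
        if cut < best_cut:
            best_split, best_cut = qpu_of, cut
    return best_split, best_cut

# Toy circuit: qubits 0-2 mostly talk to each other, as do 3-5, with a
# single gate linking the two clusters.
gates = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
split, cut = partition_circuit(6, gates)  # only gate (2, 3) crosses QPUs
```

The brute-force search is exponential in the qubit count; the point is only what the compiler's objective looks like, not how a production pass would search for it.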
Just another question from online: Can you speak a little bit more about the vacuum and what that means, how you measure it, and how it compares to other vacuum systems in space, things like that as well?
It's certainly cleaner than space near Earth, I can say that. And I stated that it's gonna be as good as the vacuum in particle colliders. Measuring the actual vacuum turns out to be a huge challenge, so we're having to use metrics on the ions themselves, as they're being trapped, in order to measure the vacuum. That's the level of vacuum that we're getting to.
Just to riff on that, the problem with measuring the vacuum is that mankind doesn't have tools that can measure vacuum at that level. The vacuum is just so damn good that there just aren't good enough tools yet to measure the vacuum we're trying to create. So we're doing better than the best measurement technology that's currently available to mankind.
And then last one from online: You talked a little bit about the pathways to get to AQ 64 and beyond. Are there any hurdles that you foresee, and how confident are you at getting there in 2025?
So I'm very confident. I mean, the scientist in me knows that anything can happen, right? We are dealing with atomic physics and optical engineering and everything else. But the engineer in me says we have an engineering plan, and now it's just a matter of execution. So right now, the biggest hurdle to 2025 is simply execution, and that's fully within my grasp.
Okay. If there are no further questions, we'll thank Dean and Pat. We will now proceed to our lab tour. The lab tour will not be live streamed, so for everyone online, we advise you to keep your browsers open, and we will see you back here at 11:45 Eastern for the next session after the lab tour... Thank you to everyone who's been waiting patiently online. We're going to continue with Jungsang Kim, our Co-founder and CTO, talking about how quantum applications work.
All right. Well, welcome everybody to IonQ and our Analyst Day, the first in person. My name is Jungsang Kim. I'm a co-founder; Chris and I founded the company, technically, in 2015. I've been thinking a lot about what it takes to start from trapped ions, which were more of a physics experiment, and go all the way to commercial relevance. I think we're at the stage where real-world applications are being looked at. As Peter mentioned earlier today, the field really started as science, is now engineering, and is eventually migrating to products.
On the application side, a lot of the academic community has been focusing on the question of quantum supremacy, right? That is, what are the applications and algorithms where quantum can do exponentially better than classical? That was the main focus in the early days. But things are shifting now that we actually have real-world quantum computers with significant AQs that can run relatively complex algorithms. Over the last couple of years, since we introduced Aria, we've started engaging with a lot of our partners, customers, and clients to really think about the real-world problems that the academic community has not been asking about. These are real-world practical applications.
So what I'd like to do in the next half hour or so is give you three examples of the algorithms and applications that we've been thinking about, to give you a sense of what quantum computers are capable of doing. This is really headed towards what we call enterprise-grade quantum applications: applications where quantum solutions will actually be better in some commercial sense, whether cheaper, faster, or more capable than the classical solutions that are out there, with the goal of addressing real-world use cases that the commercial world will benefit from.
Having these high-AQ machines, where these types of approaches can actually be developed, tested, and validated, with a very clear projection of what the advantages will be as we continue to increase our AQ, is one of the areas where the most exciting development has happened in the last few years. We, IonQ and its partners, lead in quantum application development, which has a huge future market potential, as predicted by BCG with some of the numbers shown here. Today I'm going to walk you through three different examples.
First, we've been working with Airbus on cargo loading optimization, and this is a canonical example of what's called an optimization or logistics problem. The second is, we've been working with Oak Ridge National Labs, and we've actually managed to simulate the benzene molecule, which is, again, another canonical example of molecular modeling simulations. Then the third one, the last one, is quantum machine learning. Machine learning is a very diverse field with lots and lots of applications, so is quantum machine learning. The one that I'm going to discuss today is for image recognition and classification problem that we've been working with our partners at Hyundai.
All right, so with that, I'm gonna walk you through some of the details of the problem setting, so this will be a little bit more of a technical discussion. The first one is cargo loading optimization with Airbus, and this is what the problem looks like. This is a somewhat simplified version. Right now, when you have a bunch of cargo that has to go into an airplane, there isn't a very systematic way of optimizing how it's loaded. So here, we frame the problem as packages and bins: we have a bunch of packages that have to fit into some finite number of bins.
We simplify the problem by treating the bins as fixed sizes, and then we have three different sizes of packages. The first type fits into a single bin. The second type is half-size, which means you can put two of them into a single bin. The third is a package that's big enough to require two bins. So we think about these three types of packages that have to fit into the airplane's bins.
The airplane is divided up into a sequence of bins, one through N, and we have to load the packages, subject to a lot of constraints, in a way that actually makes the transportation work. That's the challenge. And as the number of packages and bins increases, this becomes a very, very expensive and challenging problem to solve. So let me dive into why this is important. The task here, working with Airbus, was to develop a proof-of-concept quantum approach to this problem for airplanes.
Of course, if we can solve these problems efficiently and effectively, the potential impact is that you can lower operating costs, reduce fuel emissions, and increase loading predictability and logistics, so there are a lot of business benefits to optimizing this problem. Now, there are some constraints, and the constraints are what make the problem hard. If the problem is unconstrained, the optimization is actually not too hard. But there are constraints that are imposed. First of all, you can't overload the plane, so the weight can't exceed what's allowed for the aircraft. That's the first constraint.
The second is that we have to load it in a way that the plane doesn't tilt too much. Meaning, if you put a lot of weight at the front or the back, the plane will tilt, so we have to make sure it's balanced. There are also shear limits of the aircraft that must be respected, meaning the structural strength of the airplane has to be respected. And then there are the three different types of packages I told you about: one that fits in one bin, one where you can fit two packages in one bin, and a third that takes up two bins. And the total volume of the packages can't exceed the volume of the bins.
These are some of the constraints, and now we have to find the optimal solution while making sure these constraints are respected. We have to set up a quantum algorithm that resolves this. In typical problems like this, you have a bunch of packages on the left and a bunch of bins on the right, and you can draw a line showing which package goes into which bin. The number of possible lines grows very, very quickly as the number of packages and bins grows, okay?
Of all the possible lines we can draw, we have to find a set where each package is assigned to a bin in a way that all of these constraints are satisfied. To actually solve this problem, we are not only running the optimization algorithms, but also mixing in some machine learning algorithms that help identify the right solution subsets that satisfy these constraints. As the number of elements increases, the possible combinations grow very, very quickly, and this becomes a very difficult problem very rapidly. The way the quantum computation works is that we consider all possible assignments that satisfy these constraints, and then we compute the cost function.
The cost function measures how optimal a solution is, and we use a variational approach to optimize it until we find a good solution. What we have done with Airbus is solve a very concrete problem that utilizes up to 28 qubits. In this example, we think about loading seven packages into four bins, which seems like a relatively small problem. Nevertheless, the actual end-to-end procedure of setting up the problem, satisfying the constraints, developing the quantum algorithms, running it on a quantum computer, and validating that it works was done on a 28-qubit Forte system.
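As a rough illustration of the kind of penalty-based cost function described here (the weights, bin positions, and limits below are hypothetical, not Airbus's actual formulation), a candidate loading plan can be scored classically like this:

```python
def loading_cost(assignment, weights, positions, max_weight, max_moment):
    """Score a candidate loading plan (illustrative penalty method).
    assignment[i] = bin index for package i, or None if left behind.
    Lower is better; constraint violations add large penalty terms."""
    PENALTY = 1e6
    loaded = [i for i, b in enumerate(assignment) if b is not None]
    total = sum(weights[i] for i in loaded)
    cost = -total                              # reward carrying more payload
    if total > max_weight:                     # weight-limit constraint
        cost += PENALTY * (total - max_weight)
    # Tilt/balance constraint: moment of the load about the plane's center.
    moment = sum(weights[i] * positions[assignment[i]] for i in loaded)
    if abs(moment) > max_moment:
        cost += PENALTY * (abs(moment) - max_moment)
    return cost

# Four bins at positions -2..1 relative to the center of gravity.
pos = [-2, -1, 0, 1]
balanced = loading_cost([0, 3, 1, 2], [5, 5, 3, 3], pos, 20, 10)
nose_heavy = loading_cost([0, 1, 2, 3], [9, 9, 1, 1], pos, 20, 10)
```

A variational optimizer, quantum or classical, would then search over assignments for the lowest value of a function shaped like this; the nose-heavy plan above picks up a huge penalty, while the balanced plan does not.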
This was the largest optimization problem, utilizing the largest number of qubits, that we know of to date. Of course, we're certainly working on scaling this to a larger set of problems, and we're continuing to innovate the methodology so we can tackle larger and larger problems as the quantum computer size increases. The potential benefits of this loading optimization at scale are increased efficiency, fuel and labor cost savings, and more optimal operation of aircraft. So that was our first example. I'd like to move on to the next one, which is molecular modeling simulation with Oak Ridge National Labs.
I'll try to illustrate why some of these chemistry problems are also extremely challenging. This is actually one of the earlier contexts in which quantum computing was proposed, in the 1980s by Richard Feynman. Let's think about a simple molecule like water. Water has one oxygen and two hydrogens. You see these little dots we put on the oxygen atom? Those are the orbitals, meaning that's where the electrons reside. When O and H connect, that's where two electrons, one from hydrogen and one from oxygen, hybridize, and that's where the molecular bonding happens. So those lines are like two orbitals, and each of those dots is an orbital.
So here, there are a number of orbitals to think about. Orbitals are where the electrons can live, and the electrons occupy those orbital states. The electrons in each orbital can interact with each other, and depending on that interaction, their energy is lowered; the lowest energy state is the most stable, and that's where the molecule stabilizes, okay? With a water molecule, it's relatively simple: there are six electrons on the oxygen and two on the hydrogens that we have to think about, and we have done several simulations of water molecules with quantum computers at IonQ. The next molecule is a little bit more complicated. It has a carbon in the middle, three hydrogens, and one fluorine.
That's called methyl fluoride. You can see that the number of orbitals is now quite a bit larger, right? There are four so-called covalent bonds, and then there are six additional orbitals localized on the fluorine. This kind of molecule is still relatively simple, and we've done that simulation as well. Now, the next one, which is the subject of this specific study, is the benzene molecule, which has six carbons and six hydrogens. Benzene has an interesting history: people knew that there were six carbons and six hydrogens, but there are so many bonds, or orbitals, coming from the carbons that people didn't know how it was structured.
One of the German chemists dreamt about a snake chasing its tail, and then he realized that there is a configuration where you can have six carbons and six hydrogens and make it very stable. So the structure is a ring, and within that ring a lot of the electrons are shared across the carbon ring, with spokes connecting to the hydrogens. You can see that there is a lot of symmetry in this. Symmetry here means that if you take the benzene molecule and rotate it by 60 degrees, it repeats itself.
There's a lot of symmetry, and you can utilize that symmetry to simplify the problem, even though there are many, many orbitals with interacting electrons within that molecule. This benzene molecule simulation is something we have very recently done, and it is actually one of the largest molecules simulated using a real quantum computer. The innovation really was utilizing the symmetry to reduce the problem, making the quantum circuit as efficient and compactly compiled as possible, and then running the highest-performance quantum computer to execute it. It was the combination of those three efforts that enabled us to simulate the benzene molecule.
Now, of course, the real holy grail is going to a much more complicated molecule. In this example, I show you caffeine, which is something many of us drink every day. You can see that in caffeine there are a couple of rings. They're not all carbon rings; there are some rings containing nitrogen and so on. And caffeine, from an organic-molecule point of view, is actually not a very complicated molecule. Some organic molecules have thousands of atoms and a very, very large number of orbitals.
But even this caffeine molecule, relatively simple as it is, has enough orbitals that it is impractical or impossible to simulate using classical computers. So the point is, once we get something substantially bigger than benzene, we get into a regime where classical study of these molecular dynamics becomes very, very challenging, and very soon it becomes impossible on a classical computer. Just like the optimization problem, as the number of orbitals and the number of interacting electrons increase, the computational power required to consider all of these electron interactions within the molecule blows up exponentially.
This is where Richard Feynman said, okay, these quantum systems are interacting very strongly, it becomes very quickly intractable with classical computers, and that's where the first concept of quantum computers was introduced. In this example, we use what's called a variational quantum eigensolver. We come up with a quantum state, called an ansatz, which is a model of quantum states that reflects the actual interactions within the molecule. Then we tweak the variational parameters to see if we can get to the lowest ground state, which should match the real, known energy states. And you can see here a history of the molecules that we have simulated over the years.
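The variational-eigensolver loop described here can be sketched on a toy problem. This is an illustrative stand-in, not the actual benzene workflow: a one-parameter real ansatz and a hand-picked 2x2 Hamiltonian, with a classical outer loop searching for the lowest energy.

```python
import numpy as np

# Toy stand-in for a molecular Hamiltonian (illustrative 2x2 matrix).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Expectation value <psi(theta)| H |psi(theta)> for the one-parameter
    real ansatz |psi> = cos(theta)|0> + sin(theta)|1>."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Classical outer loop: scan the variational parameter and keep the minimum.
# (A real VQE estimates the energy on the QPU and uses a smarter optimizer.)
thetas = np.linspace(0.0, np.pi, 1000)
vqe_ground = min(energy(t) for t in thetas)
exact_ground = np.linalg.eigvalsh(H)[0]  # exact answer for this tiny case
```

For this tiny system the scan recovers the exact ground-state energy; the whole point of VQE is that the same evaluate-and-adjust structure still works when the Hamiltonian is far too large to diagonalize classically.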
You can see there are a couple of water simulations we've done: one from a few years ago and one from January 2023. Then we have hydrogen ten, a hypothetical molecule people introduced to benchmark the complexity of quantum chemistry. You see lithium hydride and lithium oxide, which are molecules relevant to battery chemistry that we've studied in the recent past. And by the time we get to benzene, you can see that these simulations utilize some finite number of qubits, somewhere between 2 and 12 qubits in these examples.
But as you can see, as we get to more and more complex molecules, the number of entangling gate operations, or the depth of the circuit, increases very, very quickly. In order to study benzene, we were able to utilize the symmetry to compact the problem and come up with a very efficient ansatz, or guess function, that gets you to the accurate answer. Utilizing this to simulate benzene was the most complex quantum chemistry simulation performed on real quantum computing hardware to date, okay? So we're actually at the forefront of this.
Getting the right answer is really a combination of innovative algorithms, optimization, and high-performance hardware to execute on all of them. All right, so that was the chemistry simulation. Now, as we discussed, if you look at the problem size as a function of the wall-clock time to solve it, the blue line, you can see that the y-axis, the vertical axis, is plotted in log scale. So it goes from seconds to millennia very quickly, right?
As the problem size increases, the estimated classical wall time to solve the problem grows exponentially, meaning you can increase the problem size a little bit and it will go from days to years to millennia very, very quickly. That's the challenge of exponential scaling. Now, exactly where that line lies depends on the classical computer you use, right? Obviously, when you use a laptop versus something like Frontier at Oak Ridge, that line will move. But the fact that it's a very steep exponential line, one that only moves slowly as a function of the classical computational power you throw at it, doesn't change.
Now, if you look at the red line, the quantum solution, you can see that the slope is extremely mild on this logarithmic timescale, because the quantum approach is exponentially faster in this comparison. And that is what gives quantum computing the power to simulate these problems. So if you think of something on the order of days to weeks as a reasonable timescale for solving a pretty complex molecular problem, there will come a time when these two lines intersect. And even if you throw much more classical computing power at it, that line is not going to move very much.
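The crossover argument can be made concrete with made-up constants. The runtimes below are purely illustrative, not measured numbers; the point is only the shape of the two curves, and the fact that even a 1000x faster classical machine barely moves the crossover.

```python
def classical_seconds(n, base=1e-6):
    """Illustrative exponential classical runtime: doubles with every
    added problem element (constants are made up)."""
    return base * 2 ** n

def quantum_seconds(n, per_step=1e-3):
    """Illustrative polynomial quantum runtime for the same problem."""
    return per_step * n ** 2

# Smallest problem size where the quantum curve dips below the classical one.
crossover = next(n for n in range(1, 200)
                 if quantum_seconds(n) < classical_seconds(n))

# A 1000x faster classical machine only pushes the crossover out by about
# ten problem sizes, because 2**10 ~ 1000: that's the steepness of the curve.
faster = next(n for n in range(1, 200)
              if quantum_seconds(n) < classical_seconds(n, base=1e-9))

YEAR = 365 * 24 * 3600
# Well past the crossover (n = 60 here), the illustrative classical estimate
# sits in the tens-of-thousands-of-years range while the quantum estimate
# stays in seconds.
```

With these toy constants the crossover lands at n = 19, and the 1000x-faster machine only moves it to n = 30.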
So I think getting to molecular simulation methodologies that scale efficiently on quantum computers is going to be one of the potential wins that quantum computing can bring to the table. All right. With that, I'd like to move on to our last topic: quantum machine learning for image recognition. This is a collaboration we've had with Hyundai, and the first problem that we tackled and solved is image recognition of road signs. This is a picture of a German road sign data set, and there are 43 different classes of images that we looked at. The question is, can you use a machine learning method to identify which road sign you're looking at?
Of course, if you think about self-driving cars, autonomous driving, and so on, this type of algorithm, identifying road signs and taking input from the road so that you can act on it, is a really important element of that technology. So we created a little video of how this quantum machine learning for image recognition works, okay? It's a pretty extensive process and very counterintuitive, but hopefully, the video we're going to show you next will walk you through it. After that, I'm going to take some questions. So-
Self-driving cars are becoming a reality in front of our eyes, and machine learning is at the heart of it all. An ongoing challenge in this pursuit is that vehicles must interface with our physical world. They need to understand and abide by the same signage and traffic rules as all human drivers on the road. There are over 500 official road signs in the U.S. alone. Confidently and quickly recognizing them all could mean the difference between a successful ride and a deadly one. Quantum computers can help improve machine learning models for image recognition, as demonstrated in IonQ's collaboration with Hyundai. But how does it work? Let's consider a simple case of two common road signs. We start by loading training images onto a quantum system to optimize the machine learning model.
As we entangle qubits, we form a data storage space through the various combinations of superpositions. This data space doubles with each new entangled qubit. When we entangle 8 qubits, the data space can hold 256 unique values, which is enough for a small image, 16 by 16 pixels. If we double the entangled qubits, the data space can now store 65,000 pixels. With 24 entangled qubits, the quantum system can store 16 million pixels, or an image 4,000 pixels on each side. By comparison, an image that size would require about 32 million bytes of storage on a classical computer. Quantum systems grow exponentially and become very powerful. Even a simple entangled system can improve machine learning. With only 8 qubits, we could hold a compressed version of the training images, 16 by 16 pixels in size.
Surprisingly, we can already start seeing the power of quantum at this small scale. Here's how: Loading an image means simply copying the values of each grayscale pixel to its corresponding superposition spot. After loading, the quantum system holds the compressed training image. Now that we understand how loading a single image works, we can gather a collection of well-labeled training images. The image recognition algorithm transforms each image using a quantum model that depends on a small set of input parameters. We begin the process by selecting random values as input parameters. As the quantum operations, controlled by the input parameters, are applied to an image, the input parameters trigger transformations in the entangled system, impacting neighboring pixels and shifting patterns around. By the end, the entire quantum system can be summed up by a single output value.
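The amplitude-encoding arithmetic in the narration can be checked with a short sketch (illustrative only): flattening a 16x16 grayscale image into the 256 amplitudes of an 8-qubit state, and counting the qubits needed as the image grows.

```python
import numpy as np

# 8 qubits give 2**8 = 256 amplitudes: enough for one 16x16 grayscale image.
n_qubits = 8
rng = np.random.default_rng(0)
image = rng.random((16, 16))  # stand-in for a grayscale training image

# Amplitude encoding: flatten the pixels and normalize so the squared
# amplitudes sum to 1, as a valid quantum state requires.
state = image.flatten()
state = state / np.linalg.norm(state)

def qubits_for(side):
    """Qubits needed to amplitude-encode a side x side image."""
    return int(np.ceil(np.log2(side * side)))
```

Doubling the qubit count from 8 to 16 squares the data space (256 to ~65,000 values), and 24 qubits reach ~16 million, which is where the exponential storage claim in the narration comes from.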
This value doesn't mean anything until we compare it to the labeled images. The training process requires the application of these input parameters across the entire training set of images. The algorithm loads each image separately, applies the transformation, and reads its output value. Initially, we expect these output values to seem random. Then, the algorithm starts adjusting the input parameters, paying careful attention to the output values and looking for early correlation with the image labels. With each iteration, the quantum system produces different output values as the input parameters are adjusted. With each iteration, we start to see these values diverge between the different classes of images. The input parameters continue to adjust according to the results, until we have differentiated groups of results that correlate to the known labels.
Based on the results, after several hundred iterations, we now recognize one end of the output spectrum as a stop sign and the other as a no-entry sign. Now, we are confident that with these input parameters, the quantum model produces output values that distinguish between the two classes of images. This is the learning part in machine learning, but quantum requires far fewer parameters. This iterative algorithm is a hybrid process. Some of it is performed in quantum, and some uses classical computing. The classical computing resources are used for data storage, any preprocessing of data, such as scaling and selecting the input parameters. Once the data and input parameters are provided, the quantum computer stores the data in quantum states, handles all of the parallel transformations, and produces the final output value.
The classical side evaluates the output values, which closes the loop by selecting the input parameters for the next iteration. Our example demonstrates a few key advantages of quantum computing in image recognition using machine learning. In the entangled system, transformations occur in parallel across all desired pixels, regardless of quantity. There are types of transformations that classical computing can't perform, and they lead to new insights about the training images. Each new qubit doubles the size of the computational data space. We can use this expanded storage and computation space to increase the resolution of the training images, or to maintain multiple copies of each image so the system can explore different structures and features that emerge with different transformations, and it does it all in parallel.
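The hybrid training loop the narration describes can be sketched with a classical surrogate standing in for the quantum model (the `quantum_model` function here is purely illustrative, not the actual circuit): load an image, produce one scalar output, and let the classical side nudge the parameters until the two classes' outputs diverge.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantum_model(image, params):
    """Stand-in for the parameterized quantum circuit: one scalar output
    per image (a classical surrogate, purely for illustration)."""
    return np.tanh(image.flatten() @ params)

# Two toy "classes" of 4x4 images: bright top half vs bright bottom half.
def make_image(label):
    img = rng.random((4, 4)) * 0.2
    if label == 0:
        img[:2] += 0.8
    else:
        img[2:] += 0.8
    return img

data = [(make_image(label), label) for label in [0, 1] * 20]
params = rng.normal(size=16) * 0.1

# Hybrid loop: the "QPU" produces outputs, the classical side nudges the
# parameters so the two classes' outputs diverge (targets -1 and +1).
lr = 0.05
for _ in range(300):
    for img, label in data:
        target = -1.0 if label == 0 else 1.0
        out = quantum_model(img, params)
        grad = (out - target) * (1.0 - out ** 2) * img.flatten()
        params -= lr * grad

accuracy = np.mean([(quantum_model(img, params) > 0) == (label == 1)
                    for img, label in data])
```

The structure mirrors the narration: the model maps each image to one output value, the classical optimizer watches those values and adjusts the input parameters each iteration, and after enough iterations the two ends of the output spectrum correspond to the two sign classes.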
Finally, quantum models can produce the same or superior results with far fewer input parameters, which could greatly reduce the resulting size of the model, making it far more efficient. IonQ expects that when we cross 40 high-performing qubits, this quantum algorithm will be able to explore image recognition transformations that are otherwise impractical using classical computing. When we reach 64 qubits, this quantum algorithm is expected to produce superior results to classical machine learning algorithms using classical solutions. Future image recognition applications will need to scale up and be extremely reliable, yet simplified. Quantum machine learning is one of the most promising paths for achieving this goal. Not only can it help self-driving cars recognize signs, but it has the potential to advance a vast range of industries.
Quantum machine learning can help with everything, from adding automation to manufacturing and agriculture, improving healthcare outcomes and discoveries, introducing retail and logistics efficiencies, identifying security and defense threats, and making transportation safer. Visit IonQ.com/qmldemo1 to discover more about image recognition and quantum machine learning.
All right, I hope you enjoyed that video. Just to summarize, what I would like to convey is that we are actually looking at problems of real-world relevance today with quantum computers. In many of these problems, quantum solutions had not been explored before, but every time we dive in with our customers and explore, we find very interesting and impactful ways that quantum algorithms can influence a better solution. So we believe that this activity of application development, enabled by the powerful, high-AQ machines that we develop, is going to be an area with a lot of very important and exciting progress. All right. With that, I'd like to open up the floor for any questions.
Jungsang, I think it was the benzene simulation where you used 8 qubits and I think 69 gate operations. It seems like Aria with 29... Or sorry, Forte with 29 AQ would be able to handle more qubits and/or gates. So what was the limiting factor in that benzene example that kept you to only 8 qubits and 69-
Yeah, so that, that's.
-gate operations?
Yeah, that's a great question. So, you know what? We actually started looking at benzene, which people typically thought needed a lot more qubits and a lot more gate depth to simulate. I think one of the biggest innovations is that we can now take that same chemical system and compress it down to smaller and more efficient circuits, so we could get to the good answer.
I think what that means is, if we took a stab at a bigger molecule, and we can do that same compression and same innovation with deeper circuits, we should be able to look at a bigger molecule that way. I think that's the lesson. Our goal here is to take a target molecule and find the most efficient way to get to the most accurate answer possible. We were able to do that, and I think that was progress. It-
So you weren't sort of hardware-limited in that case. It was-
We were not hardware-limited.
Yeah, okay
... in that case.
You had talked earlier on the bin packing, and I believe it was the bin packing one, but you said something about the lowest energy level and finding that. Did I hear that correctly?
Yeah. In all of these variational algorithms, it turns out that optimization, the chemistry problem, and machine learning all have a very similar structure, where we come up with what we call an ansatz model with a bunch of parameters, and then we tweak the parameters. Now, for a molecule, the figure of merit we look for is the lowest energy, because that's the state the molecule actually stabilizes into. For optimization, we compute what's called a cost function. For any optimization problem, you want to minimize the cost of doing whatever the constrained problem is, so we have to know how to compute that cost function and make sure it decreases as we find better solutions.
In machine learning, we use what's called a loss function, which measures how accurately you can predict the answer, and that is also parameterized. So these are the quantities you want to minimize, whether it's the energy, the cost function, or the loss function; a lower value tells you that you have a better solution. Depending on the context of the problem, it's called different things, but they all do the same thing: when you want to optimize or find the best solution, you have to have a target number whose improvement tells you the solution got better.
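The ansatz-plus-figure-of-merit loop described here can be sketched in a few lines. This is an illustrative stand-in only: the quadratic cost function and the naive random-search optimizer below are placeholders, not IonQ's stack; on real hardware the cost would be estimated by running a parameterized quantum circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(params):
    # Placeholder figure of merit (a simple bowl with its minimum at [1, -2]).
    # In practice this would be an energy, cost, or loss estimated on hardware.
    return (params[0] - 1.0) ** 2 + (params[1] + 2.0) ** 2

params = rng.normal(size=2)      # initial ansatz parameters
best = cost(params)

for _ in range(2000):
    trial = params + rng.normal(scale=0.1, size=2)  # tweak the parameters
    c = cost(trial)
    if c < best:                 # keep the tweak if the figure of merit improves
        params, best = trial, c

print(params, best)              # params approach [1, -2]; best approaches 0
```

Whatever the domain calls it (energy, cost, loss), the loop is the same: evaluate the figure of merit, tweak the parameters, keep what improves it.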
Yeah, I just thought it was interesting. Annealing is the only time I've ever heard someone refer to that lowest energy level, and so I was kinda surprised to hear you mention that. Is there any correlation there? Is it the same, similar, or is it just maybe terminology that's the difference?
Yeah, it is a similar concept, and the terminology is a little different because, when we do optimization, we look at the cost of the optimal solution rather than the energy. But conceptually, I think they do very similar things.
Thank you.
Any questions?
Just wondering, since you're giving specific customer examples, how are they approaching this commercially? It seems like, to an extent, they're doing things they could do classically today, but obviously the temptation is to scale it down the road. Can you talk about how people like Hyundai and Airbus are thinking about their long-term investments here?
Yeah. So many of these engagements where we have published results started a few years ago. That's where people were asking, "Okay, can quantum computers solve practical problems?" That's kind of the way we got started. After looking at this, we actually got a lot of insights on how, more specifically, quantum can give you an advantage. One of the examples we talked about is that the model is more efficient: we can have fewer parameters, which means the models train more efficiently, and so on. So once we learn these things, we then want to go look at the very specific use cases where that will be optimal.
Of course, today, at the AQ levels that we have in the labs, you can simulate these things with classical computers. But the question is: How do those opportunities and advantages scale as we approach AQ 64 and machines that are quantum advantage capable? Those are the big questions we're asking, and developing those methodologies and projecting the potential advantage is where a lot of the current activity is. Yeah, Peter had a comment.
Just to add a little bit to that. What customers are doing right now is the algorithmic work to come up with the algorithm, knowing that they will need an AQ 64 system, sometimes even a bigger system than that, to run it. They're saying: "Okay, I know when you're gonna have an AQ 64 system. I don't wanna start development that day, 'cause then I'll be behind. So I'm gonna start beforehand on a much smaller problem, but I've now got the algorithm.
And so if I've got the algorithm and it's ready to go, all I need to do is wait for you guys to build a bigger system, and then I can run it in a production environment on a quantum computer and see my ROI." So people today are not expecting to get an ROI today. They're basically saying: "Okay, the machine will be here two years from now, so I should be working on the software today to make sure that when it arrives, I'm ready to take advantage of it, and I can use that in a competitive environment against my competition." That's really what's going on. Some of these projects are maybe a little counterintuitive. Like, why is Hyundai working on image recognition for a quantum computer? Do they think there will be a quantum computer in every car? The answer is no.
We're creating the model on a quantum computer and doing better than what you can do classically, but then the inference for the model we create would actually run on a GPU or a CPU. The model is just better when built on a quantum computer. Maybe 50 years from now every car will have a quantum computer in the glove compartment, but until then, the inference side of these things will probably run in a classical environment.
This may be difficult to answer, but in some of these examples, say the machine learning ones: how do they go from what you're trying to achieve to coming up with the gate model that implements the machine learning, and to optimizing the parameters? Is that really what's going on when you say algorithm development? Coming up with, "Hey, we're gonna have this sequence of gates, and this sequence of gates achieves the outcome you're looking for"? I don't know if there's a way to expand on that.
Yeah, that's essentially it, but it's an end-to-end solution. For example, when we do image recognition, first of all, we have classical images. Traditionally, you process the image classically, create some models that can differentiate the classes, tweak the parameters, and then you go. Somewhere in between, we have to inject quantum, and we can inject quantum in many different ways. We're starting small, where we're only replacing the model with quantum, but that still requires a classical image to be loaded into a quantum state at some point in order to run the quantum circuits that evaluate the model, and then we tweak the model.
What we're finding out is that the quantum models are more efficient, meaning we have fewer parameters, we can train with less data, and so on. We can also start to expand which tasks in that end-to-end chain quantum takes on, and those things are all being explored. So again, there isn't a single solution to this. There are many different quantum approaches, and new innovations add new solution approaches very rapidly. Yeah.
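The "load a classical image into a quantum state" step mentioned above is often done by amplitude encoding. Here is a minimal sketch of that idea (illustrative only, not IonQ's implementation): an n-pixel image becomes a state over ceil(log2 n) qubits whose squared amplitudes sum to 1.

```python
import numpy as np

def amplitude_encode(image):
    # Flatten the pixels and normalize so that the squared amplitudes sum to 1,
    # which is the condition for a valid quantum state.
    amps = np.asarray(image, dtype=float).flatten()
    amps = amps / np.linalg.norm(amps)
    # Pad up to the nearest power of two, since n qubits give 2**n amplitudes.
    n_qubits = int(np.ceil(np.log2(len(amps))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(amps)] = amps
    return padded, n_qubits

image = [[0.2, 0.8], [0.5, 0.1]]   # a tiny hypothetical 2x2 "image"
state, n_qubits = amplitude_encode(image)
print(n_qubits, round(float(np.sum(state ** 2)), 6))  # prints: 2 1.0
```

The exponential payoff is visible even in this toy: 2**n amplitudes fit in n qubits, so a megapixel image needs only about 20 qubits' worth of amplitudes, which is part of why these models can be so parameter-efficient.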
I'll just add, image recognition is actually something out of my past, because we worked on optical character recognition with Ray Kurzweil. What was interesting is that when I was doing optical character recognition, there were about 29 passes of pre-processing that happened on the image. You can imagine, in the images we looked at today, one pass might be edge detection on the sign to remove the trees and everything outside it. So a classical approach would be to go through and pre-process. As an example, maybe the camera is not pointed square at the sign, so the image is skewed; maybe the sign has a keystone shape.
I need to de-skew it before I run the actual recognition. When we were doing optical character recognition, there were 29 of those steps. Maybe the text is sitting on a curved surface; before I start the OCR, I need to flatten that curved surface back out. Strange things like a flash going off, so at the center of the image there's a bunch of white pixels, which is the flash coming back at me because I'm on a glossy surface. 29 of those steps happened before we actually got to the image recognition, the OCR. What I find fascinating here is we didn't do any of that. We didn't do any of that pre-processing.
The quantum system did it all as part of the quantum process, and as we added more qubits, it just got better. If I were to look at this problem, my bet, even though I haven't done self-driving cars, would be that the engineers would be trying to figure that out, saying, "Okay, we need a special case for rainy days, because I need to take droplets on the lens out somehow before I hand it to the image recognition." Quantum didn't need any of that. It just seemed to get better as we added qubits. And that's really interesting, because in this problem set, classically, as I approached perfection, the amount of software engineering grew exponentially.
We got to about 98% accuracy, but we recognized, from a software point of view, that getting the last 2% was going to take at least the same level of effort we'd already put into the 29 passes, because we'd probably have to come up with another 29 special cases. What's fascinating so far is that quantum doesn't seem to need that. Just adding some qubits seems to improve the results without me spending a lot of energy engineering a better solution. So when I look at the work we did with Ray versus the work we're doing here, it's a completely different approach.
The classical approach required lots of software engineers to be really smart, look at the data, and figure out: What does a snowy picture look like? How is that different from a sunny one? And how can I pre-process both of those to look the same? So anyway, it's a really interesting approach. And at the end of this, we'll produce a classical inference model that will actually run in a car.
Yeah, actually, if you look at the end-to-end process, in order to do good recognition, we have to do all of these steps. The question is what chunk of it will be taken up by quantum, and what the novelty is there. In some of these early examples, we did quite a bit of pre-processing in the classical domain, but we're certainly moving into areas where many of those tasks can be done by quantum as well.
So the advantage is a model that maybe has fewer parameters?
Yes.
So when you deploy it in your inferencing engine, instead of having 1,000 or 10,000 or 1,000,000 parameters, you have orders of magnitude fewer: a smaller, faster model.
Yes. That efficiency of the model is something that the broader quantum machine learning community is also starting to really appreciate. Those are opportunities where more efficient models, with fewer parameters, that can be trained with fewer iterations and less data, come into play. So... Yes, last question.
So I had a quick question on Airbus, or even Hyundai. Can you walk us through the customer interaction, from when you first began discussions until you ran through a solution? What does the timing look like, and how heavy is the lift? Do these become applications you can present, almost a solver that you plug into with a graphical user interface? I'm just trying to understand the resources dedicated to that.
Okay, that's a great question, and it's a journey. We're actually building as we go. When we first interacted with these customers, they brought their business problems, and they weren't necessarily saying, "I know there's a quantum solution." It was, "This is just a problem I have. Can you help solve it with quantum?" In that initial interaction, we have a lot of quantum experts but not necessarily domain experts, and they have a lot of domain experts who don't necessarily have quantum expertise. So there's a lot of mutual education, and we come up with various ideas and validate that those quantum solutions will actually help tackle these problems. Now we're accelerating on that, compared to the very early stages.
The next question is: How do we then put this into a more production-scale solution? Of course, we have to project that into the future as more quantum advantage-capable machines come into play. So what are the platforms and tools that are needed? Dean mentioned that platform and those tools earlier. We're working on figuring out how to package those things so that they operate better at production scale. All of that is necessary so that when we actually reach commercial value, we're ready to deploy at scale.
So we're really going from an exploration stage into more of a how-do-we-package-this-for-a-production-environment stage. It's a spectrum, and as we accumulate more experience, it becomes more and more efficient to iterate quickly. That's the advantage that we have. All right.
Thank you.
Excellent. Hello, everyone. Welcome to IonQ, to those who are here, and welcome to everybody who is joining us remotely. My name is Ariel Braunstein. I lead product for IonQ. I joined IonQ a little under two years ago; I actually joined the same day as Dean, so we are the same cohort. Prior to IonQ, I came from Google, where I led some of their advanced technology projects: AR, VR, and a few things that I can't actually talk about. What brought me to IonQ is the privilege of working on the next generation of computing. It is truly a privilege, and I think a responsibility. That's me. Rima?
Thank you. Hi, everyone. I'm Rima Alameddine. I'm the Chief Revenue Officer at IonQ. I joined nine months ago. Previously, I was VP of sales at NVIDIA, where I built from scratch and led the enterprise AI sales business for half of the country, and then transitioned to lead the Americas enterprise AI sales business for a number of verticals. I'm thrilled to be here at IonQ, for many reasons. I see a lot of parallels between quantum today and the early days of AI, and I believe it's gonna follow the same path, but be even more impactful. So thank you for joining us here today.
Good afternoon, everyone. My name is Margaret Arakawa. I am the new Chief Marketing Officer for IonQ. This is my seventh workday, so I have the same questions you do. As far as background, I worked at Microsoft for almost 20 years, and I worked on every commercial product that Microsoft sold, because I started as an NT product manager: security, networking, developer tools, database, everything. I started selling on-premises, then hybrid, then the cloud. Similar to what Rima said about the AI journey, I loved that cloud journey, and yet a lot of people still have not moved to the cloud. But the first cloud workloads were all ones I worked on 25 years ago.
The last job I had at Microsoft was running the Windows 10 client OS business in the United States; I was revenue accountable for scaling that business. It took me two years, but we finally got to launch Windows 10, and the primary reason for its success was that we listened to customers, we deployed with them, and we scaled via partners. We took all the mistakes we had made before and made sure not to repeat them for the Windows 10 launch. I've also been at startups and small tech companies, where I learned how to sell and market within the AWS and Google Cloud infrastructure and ecosystems. What was great about that is I was now a buyer of their services, choosing between Google, Microsoft, and AWS, as well as a partner.
So I learned very closely how to grow a business with AWS, Google, and Azure. I'm excited to be here, as I said. I feel like my dream job is here because I love the journey. I have patience, I have grit, and I love the resolution. And one little tidbit: I helped launch the very first Gap.com. You know, T-shirts? Gap.com. It was the mid-90s in Silicon Valley, and I had to call on the Gap to convince them the internet was going to be big. We had excruciatingly slow dial-up. There was no SSL, no TLS, no encryption of credit cards; card numbers just went over in the clear.
Everything was slow, everything was copper wires; we didn't have fiber optics. The first fiber optic cables underneath the Pacific Ocean went in, I think, in 1996. I had to convince the Gap: "Launch your first website. Hello, world, we are Gap.com. People will buy on the internet. There will be security, and the speeds are gonna be astronomically fast." That journey brings me to this company, where I believe in the journey, and I can't wait to be part of it.
Very nice. I will advance here. So I'll start with an introduction to our product, and I'm actually starting in the past. IonQ, building on 25 years of academic research by Chris and Jungsang, started working on practical quantum computing in 2016, so we have already been at it for the better part of a decade. The company took a very intentional position of making sure that everything we do touches the world: customers use it and provide feedback. Usually, in the beginning, that feedback is a mixed bag, and that's okay, because those nuggets make our product better, and that is true for our hardware and our software. We constantly touch the world, constantly work with customers, get feedback, and improve.
As a result, you can see here in this slide a high-level view of our products and services. At the bottom layer, you find our physical infrastructure, the quantum computers, across multiple generations; we'll go into those in the next slide. On top of that layer is the software, what we call the quantum platform, which enables access to all of our systems and to every feature and capability of those systems. The quantum cloud allows two types of access: shared access through our quantum data center, and dedicated access. Dedicated access can be either in our data center or on-prem at a customer's facility. So we're trying to accommodate different customer needs.
On top of the quantum platform, we offer algorithms and applications that help our customers achieve commercial value as fast as possible. And because this is a nascent market, we also offer consulting services to help our customers get up to speed and bring their workforce to the level necessary to do the integration, development, and innovation that we know is possible. Let's start with the first layer, the physical hardware. At the moment, we have three commercially available systems that you can access: Harmony, Aria, and Forte. There were multiple generations before these; as some of you have seen on the tour, we have generations that were never made commercially available. So we're starting here with Harmony.
We've also disclosed two additional generations of systems through commercial deals, systems we have yet to formally announce. As you can also see, we have pushed the performance of these systems generation after generation, and as explained by, I believe, both Peter and Dean, the proxy we use for performance is what we call algorithmic qubits, or AQ. We pushed it from 11 with Harmony to 25 to 29. Just as a reminder, since capacity doubles with each algorithmic qubit, AQ 29 compared to AQ 25 is 16 times more powerful. Moving to AQ 35 is 64x more powerful than that generation. And going to AQ 64 is roughly 536 million times more powerful than the previous generation, AQ 35. So these are not small, incremental steps.
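These comparisons all follow from one rule: each additional algorithmic qubit doubles the computational space, so the relative power between two generations is 2 to the difference in AQ. A quick check (using Forte's AQ 29 and Aria's AQ 25 from earlier in the session):

```python
def relative_power(aq_new, aq_old):
    # Each algorithmic qubit doubles the computational space,
    # so relative power scales as 2 ** (delta AQ).
    return 2 ** (aq_new - aq_old)

print(relative_power(29, 25))  # 16        (Forte vs. Aria)
print(relative_power(35, 29))  # 64
print(relative_power(64, 35))  # 536870912 (~536 million)
```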
That is an industry-wide revolution in what will be possible with these systems. Going a layer above that, we have the quantum platform. As I said, we offer shared access and dedicated access. Shared access is preferred by customers who are looking for a very flexible plan that gives them access to all of our latest technology: they're still experimenting, trying different things, and have sporadic access needs. They choose dedicated access when they want uninterrupted access to a specific system that they're betting on, or an on-prem installation, which gives them those benefits on location.
Regardless of their choice, there is a long list of common benefits they get through our platform (this is, of course, a very summarized list), and we keep as many features as possible common across these different modes of access. The quantum platform was also envisioned from day one to support multi-region data sovereignty. We began with a U.S. data center that offers both dedicated and shared access, and we even have international customers accessing our U.S. platform, which is perfectly fine. But we have to think ahead to the needs of customers who do need to comply with data sovereignty regulations, GDPR, and others. For that, we built a platform that can accommodate those regional needs.
Our first international presence is in Switzerland, through our partnership with QuantumBasel. That is the beginning of a regional data center that will serve Europe. We definitely envision additional data centers coming up in other regions, based on their regulatory needs and on investments by our partners and ourselves. On the right side, you see a unique situation: some of our customers have rigid requirements that data never leave their facilities, and you can all imagine the scenarios for that. For them, our quantum platform can reside within the customer's own data center and network, and data can be retained completely within that environment. Last in this sequence of slides, we recognize that high-end QPUs and incredibly innovative algorithms are not everything.
They enable the creation of an application, but to run an application, you need a production-ready environment, an enterprise-grade solution covering the entire stack that Dean explained earlier. That gives us reliability, flexibility, and security. It's very important to have seamless integration into the customer's own production stack, and that is part of that contact with the world: we learn what the common production stacks and needs are, so that by the time AQ levels and algorithms reach production readiness, we are ready to serve them. Then it's all about rapid development and rapid deployment to extract that value for our customers. And with that, Rima will take you through the go-to-market.
Thank you, Ariel. Okay, so we all know it's an exciting time to be in quantum. Its revolutionary capabilities will help us solve complex problems that we cannot solve with classical computing. As you can see here, Boston Consulting Group estimates that quantum computing will generate end-user revenue of up to $3.5 trillion. In the inflection graph here, you can see that machine learning will be one of the first approaches to generate commercial value for customers. Today, we're already seeing value in machine learning, but for smaller problems, as Jungsang shared a little earlier.
But as you heard from Dean and others, we are working on growing these machines, and that same algorithm you develop, which runs today and shows value on a smaller problem, can run on a larger computer and deliver much more value. As you heard, we are increasing our qubits regularly; in fact, we're adding an average of one algorithmic qubit every month. Why is this important? Because every time we add one algorithmic qubit, we double the size of the computational space and the capacity of our machines, and that allows us to run much bigger problems and have a much larger impact. As you can see on this chart, the first phase is the experimental phase.
We here at IonQ are exiting the experimental phase and entering the enterprise-grade phase, a term coined by BCG. The phase after that, starting in 2025, is when customers will start generating revenue from the algorithms they've built. From what we see, we are the only company with a roadmap that aligns to this vision. One thing we know is that customers are laser-focused on solving business problems. They want to solve them faster and more effectively, and quantum computing is just another tool to help them do that. Hundreds of use cases are being developed across many different verticals in four areas: machine learning, optimization, simulation, and cryptography.
We're working with customers across all of these areas, because they all want to take advantage of this explosive opportunity. Machine learning is furthest along, and we expect it to be the first to start generating revenue for our customers. We all know that timing is key to the value that can be captured from disruptive technologies, and we've seen this play out in AI. McKinsey estimates that first movers are poised to capture most of the economic gain from AI. These are the numbers: they project a 120% gain for early adopters, only a 10% gain for fast followers, and a loss of over 20% for non-adopters. The numbers tell the story.
When you think of quantum, BCG actually believes that gains will be much more pronounced, where the early adopters will capture 90% of the value. So basically, if an enterprise waits, they will be left behind because most of the value will be captured by the first movers and the early adopters. And many customers are realizing that and starting to develop their algorithms today. So to capture these opportunities and to grow our business and help our customers with this paradigm shift, we have four strategies that we're focused on. First is quantum economies, and that means helping build or drive economies where quantum is at the center. The second is commercial value creation, and what that means is working with customers to solve their pressing business needs.
Third is government enablement, where we work with governments to keep them at the cutting edge and solve their vexing problems. And fourth is growing through partnerships, working with our partners to help them serve their customers and accelerate the adoption of quantum. As you can see here, we're already working with many customers and partners to help them capture new opportunities and enter new markets. So what is a quantum economy? A quantum economy is a quantum technology hub focused on driving economic growth by attracting a highly skilled workforce and innovative startups, and through collaborations with industry, government, and universities. This is a large, global opportunity for us, and a diverse one, with many different approaches. These quantum economies have two things in common.
One, they're mission-driven; and two, they're focused on building an economy and driving economic growth in their region, and they partner with the right entities to do so. Here is a great example of a quantum economy: the University of Maryland started early and partnered with us to build a quantum lab. They did that to prepare students for the future, attract an ecosystem of startups, and partner with government and industry to build use cases in the quantum space. We're thrilled that you'll have the opportunity to join us for the official opening and ribbon cutting of the quantum lab this afternoon. The next quantum economy example that we're so excited about is our partnership with QuantumBasel.
So who is QuantumBasel? They are an innovation hub in Switzerland focused on coming up with quantum solutions that will benefit the world. They invested in two generations of our future computers so that, as we've been discussing, they can build these algorithms today and run them on those future computers to solve big, vexing problems. Switzerland in particular sits at the center of the pharma and financial services businesses, and they have plans to work with industry to solve many of their most pressing problems. We're also really excited about this because it brings our IonQ computers to Europe, and that lets us offer data sovereignty capabilities to our European customers.
We are very much looking forward to welcoming IonQ as the newest addition to our global partnership ecosystem for quantum computing here at uptownBasel. IonQ's technology and R&D will attract more business partners to invest in quantum computing, will attract more startups to join our ecosystem, and, furthermore, will attract the future workforce to collaborate with ours. We are looking forward to this partnership.
So as you can see, Damir Bogdan is always at the forefront, and he shared with you his vision for the innovation center in Switzerland. Now, let's move on to our second strategy: building commercial value and helping our customers create the algorithms that will solve their problems. As you can see, we're already working with a number of customers to enable this vision. I'm gonna share an example to give you a perspective on how this comes to life. We all know that correlations are used for a lot of decision making. They're used for trading decisions and portfolio optimization in financial services. They're used to interpret MRIs in healthcare.
They're used to understand reliability metrics in engineering. So there are a lot of great applications for correlations. In this example, we worked with a large U.S. brokerage firm that wanted to understand the correlation between two stocks, and we analyzed the correlation between Apple and Microsoft. Our model learned the correlations; you can see the target distribution in the first plot. Then we used generative adversarial networks to generate outliers. In the first example, we trained a generative adversarial network and generated the model on a classical machine.
As you can see, that model missed many areas of outliers that are very important for a trader analyzing their portfolio, and it did that after 20,000 training iterations. The next two plots were done on IonQ machines. The first used a quantum generative adversarial network, and with only 1,000 iterations, it caught a lot more of the outliers. The third used a quantum circuit Born machine, a type of quantum generative model that leverages entanglement. When you look at the results, they're astounding. The plot is almost identical to the target distribution in the first plot; it was able to generate a lot of the outliers, and it did that in only 26 iterations.
So why is this example important? For two reasons. One, quantum computing is more expressive, meaning it can surface insights that you cannot find on a classical machine. As you can see here, the classical machine alone did not generate these outliers that matter to a trader analyzing the risk of his or her portfolio. Two, it is also much more efficient, meaning it can do this in far fewer iterations: 20,000 iterations versus 26. And this is just one example.
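The "Born" in circuit Born machine refers to the Born rule: the circuit prepares a quantum state, and measuring it draws samples with probability equal to the squared amplitudes. A hedged sketch of that sampling idea (illustrative only; the hand-written amplitudes below stand in for a trained circuit's output, and this is not IonQ's implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the state a trained 3-qubit circuit would prepare:
# 8 amplitudes, one per bitstring 000..111, with most weight on odd indices.
amps = np.array([0.1, 0.5, 0.1, 0.5, 0.1, 0.5, 0.1, 0.5])
amps = amps / np.linalg.norm(amps)   # valid state: probabilities sum to 1

probs = amps ** 2                    # Born rule: p(x) = |psi(x)|^2

# "Measuring" the state repeatedly draws bitstrings from that distribution.
samples = rng.choice(len(probs), size=10_000, p=probs)

# Odd-indexed bitstrings carry most of the amplitude, so they dominate.
print(round(float(np.mean(samples % 2 == 1)), 2))  # approx. 0.96
```

Training such a model means tuning the circuit parameters so this sampled distribution matches the target (here, the joint return distribution, outliers included), which is where the expressivity of entangled states comes in.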
As you heard from John Seng, we're working with many customers to develop these algorithms and prove value today, so that they can run them on larger machines. We've published a number of papers about these techniques, and we estimate that this exact model will scale and will start providing production value at 50 useful qubits, which is just around the corner for us. Our third strategy is government enablement, where we work with governments to support their core missions. So, what does that mean? One, we're working with them on innovative projects to push the boundaries of what is possible technologically. Second, we are developing quantum algorithms with them. For example, here in the U.S., we're working with Oak Ridge National Laboratory on a grid modernization project.
And third, we're helping them build the workforce of the future so that they can take advantage of these capabilities. Just a few words about our partnership with the Air Force Research Lab: we started working with them last year, and we're focused on computation and network research and on building algorithms to help them stay competitive and advance our national security interests. So as you can see, I went through three different strategies. Two of them were mission-driven: the quantum economy strategy and the government enablement strategy. What we see is that these mission-driven customers gravitate towards investing in systems and are working on developing algorithms on those systems. Then I covered our commercial value creation strategy, and those customers are ROI-driven.
So those customers are working with us more on accessing our services and our computers via the cloud, and working with us to build the algorithms that they can run on their future computers. So, with that, I am gonna next turn it over to Margaret, who you've just met. She's been here seven days. But the reason we wanted her to cover the fourth strategy is because we wanna give you all the opportunity to meet our new Chief Marketing Officer. We're thrilled that she's on board, and with that, I'm gonna turn it over to Margaret.
Thank you. All right, the last strategy, the fourth strategy that Rima talked about, is about scaling through partnerships. That's one thing I've learned over and over and over again: ecosystems take a lot of work. The slide that Ariel showed, with the hardware layer, the cloud layer, the tools, the APIs, the apps: quantum computing needs economies and governments to help with that, and so do we. One surefire way to scale, and the thing we figured out how to do in the old days, is to democratize computing power. I was at Microsoft, and we put PCs on desktops. So how do you get quantum computing, the most bleeding-edge of all compute technologies, into the hands of developers? You do that via the biggest cloud providers on Earth: Azure Quantum, Amazon Braket, and Google Cloud.
IonQ is the only company that is actually available on all of the major clouds. And as you know, every customer with high-performance, compute-intensive needs is on those three clouds; there's not a single one that is not working with those vendors. With the hyperscale cloud vendors as partners, we're excited that we get broad access to a lot of different customers. We also partner with leading consulting, software, OEM, and integration firms. It was funny, because I looked at this slide and thought, "Oh, my God, I've worked with every one of them for years and years and years on compute," whether it was on-premise, hybrid, or cloud. The only place I haven't gone, and I will, I'd like a trip to Sweden or Switzerland to go to QuantumBasel.
But it takes time to get a lot of these integrators on board. We got them on board, and what they have access to, much more broadly than we do, is all of the commercial customers. When Accenture says, "You should do something now," customers listen. They will do the proof of concept. They will do the early work, and that's why we needed these partners. Dell is actually both a services provider and a hardware provider, and even though they were classically hardware, they are all in with IonQ, and they are looking to develop services with customers and with us. So with that, I have one small announcement, which is actually not on this slide, but I get [audio distortion]
I don't know. The next era of quantum computing is here. I had to say it that way; I feel like that's an Iron Man moment. But actually, that Iron Man moment, the next era of quantum computing, is gonna be discussed more, not just here, but next week Wednesday. Mark your calendars. It's Quantum World Congress, IonQ.com/livestream. We'll have an incredible event where we get to discuss it in more depth in front of the world's quantum leaders, as well as you, who will obviously be able to livestream that event as well. So definitely take a look at that. Please don't miss it. We'll remind you, and we'll invite you again, 'cause it's an important event for us. It will be seminal, so I invite you to join us. Q&A?
Yeah. And now we're going to Q&A.
Yeah. We can't leave
Not off the hook.
Yeah.
Any questions?
You remember, right? The call to action. You remember the Iron Man moment? And then you're all gonna do it.
Maybe to start, one question for Rima that was submitted online: Why are customers coming to IonQ and buying quantum compute? Why aren't they buying simulation of quantum computers today?
Yeah. So, the reason is, you are not gonna be able to simulate for very long. The estimates are somewhere between 35 and 40 algorithmic qubits; after that, you cannot simulate. So if you have not been running these algorithms on a real quantum computer and learning how to use a real quantum computer, you're gonna be left behind. Before you get to fault tolerance, there is a ton of value that is gonna be created on quantum computers, and you will not be able to do that on a simulator.
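To see why simulation hits a wall around that size, here is a rough back-of-the-envelope sketch: a brute-force statevector simulation must store 2^n complex amplitudes, 16 bytes each at double precision, so memory doubles with every added qubit. (The exact crossover depends on the simulation method and hardware; the numbers below are illustrative, not IonQ's estimate.)

```python
def statevector_bytes(n_qubits):
    # 2**n complex amplitudes at 16 bytes each (complex128)
    return 16 * 2 ** n_qubits

for n in (30, 35, 40, 45):
    tib = statevector_bytes(n) / 2 ** 40
    print(f"{n} qubits -> {tib:,.3f} TiB")
# 30 qubits fit in a big workstation's RAM; by the low-to-mid 40s you
# need a top-end supercomputer's entire memory just to hold the state.
```

The exponential doubling is the whole point: adding five qubits multiplies the memory requirement by 32, which is why the cutoff is a narrow window rather than a gradual slope.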
Were we that clear?
Yeah, I was just gonna say, we're gonna-
We left no questions unanswered.
So clear.
Hi, guys. Have you gone through the exercise of sizing how many on-premise quantum computers might be demanded by the market? How to think about that at least, if not an actual number. Then, of course, there'll be a cloud element here, which might, you know, obviate the need for the on-premise. Maybe just give us a framework for that as we think about this.
So we have thought about it, and if we look at the forecast of what we're building, it's based on how we're thinking about the market and when we think different things will hit. It's something we look forward to sharing more about on our earnings calls, especially our Q4 earnings as we talk about next year.
But we can say that we're in alignment with BCG's reforecast, which gave a much more aggressive view of the progression. We think their optimism is well-warranted.
Yeah.
I'll just say, obviously, we think the future is strong. We are building a manufacturing plant to build quantum computers to be able to hit expected demand. But in terms of actual numbers for next year, that's the Q4 call; you'll all be listening for Thomas to give next year's numbers. We told him the analysts are all gonna keep asking this question, looking for some way they might be able to eke out an estimate, like, "How much toilet paper do you plan to use? How much toilet paper do you use per quantum computer?" so that it would somehow leak into an estimate of what we thought for next year. So we are not giving any of that information out today, but good job.
Thank you.
Just if you kind of think across the ecosystem and the different quantum players, it seems like there's a pretty consistent group of customers that are working here. And whether we talk about Accenture or Boeing or any of the car manufacturers, they seem to be playing across multiple different quantum strategies, right? And that's fair. How do you think you capture that business, and what are you seeing in terms of... And maybe it's too early, but do you find them coming back, and how do you think about capturing that business relative to your competitors?
Yeah, actually, thank you for the question. The customers that we work closely with, where we're building algorithms together, we're getting repeat business from every one of them. Because we are also mentoring them and helping them learn how we're building these algorithms, and they're getting results. They're seeing value, so they keep giving us new use cases to build. What we find is that when customers go and try things out on their own, a lot of times they get frustrated, and it's hard for them.
Actually, because of that, a lot of our partners have now asked us to offer services on their platforms, and that's what we're doing. We're gonna start offering services on their platforms so that customers, whether on cloud platforms or with integrators or other partners, can leverage our application development services and continue to be repeat customers.
I want to bring it back to the concept of AQ. The AQ benchmark was designed to express customer value. It's not something that we created as some esoteric performance metric, the way some of our competitors are coming up with weird things borrowed from the silicon world. We simply created a benchmark based on commonly used algorithms that bring value to customers.
So if you simply compare AQ performance across systems, you can see why customers flock to our systems: they can simply do more with them. It's a very simple equation of what value you can get at which stage. There are algorithms that are not ready yet, in terms of AQ, and that will mature as AQ increases. AQ is the most useful tool we were able to come up with to help our customers make these decisions, and it seems to be working in that regard.
Yeah. So it's the algorithm and the hardware.
Yeah.
Both of them together bring these great results. Thanks.
I wonder if you could talk about the cloud hosting relationships that you guys have. You know, what's the state of those? How actively are those cloud companies marketing these kinds of capabilities? And, you know, can you give us any proof points on how that might ramp up?
Yeah. So, we work with all our cloud partners, and it's at different stages with different cloud companies. But we are working closely with them to put additional services on their clouds, because they realize the value, and they wanna make sure that their customers are happy and become repeat customers. So our engagements are becoming stronger and deeper, and we're getting engaged even more closely, because they want us more engaged with our services on the platform, not just access, to the earlier question, so that we can provide even more value to these customers. Did I answer your question?
Yeah.
Okay.
I'll just jump in as the newbie that went to all the websites and tried to sign up myself.
Yeah.
They're all very different kinds of companies, right? And what's interesting: you can look it up right now. Google "IonQ and Microsoft Azure," or "IonQ and Amazon Braket." What's great is you will find us on every one of those pages. We are listed first. We are a partner that's been at the bleeding edge of partnering, and they, too, are all learning from us. What's nice is they're allowing people to use our computers on their cloud: to do simulation, to use the Aria computers, or to use the Harmony computers. It's not obfuscated. It's very clear. The Microsoft page walks you through it in their learning base; it'll tell you everything you need to do to get on board today. Amazon AWS is incredible at that as well.
They built a marketplace and an ecosystem that accelerates that. They're great at that. I would say Google is the youngest cloud and is getting there, but go to their webpage: they are the only one that has Subscribe right under the name of IonQ and Google Cloud. And that's when you know that you're democratizing access. You're lowering the bar so that people can get up there. Because that's what AWS did brilliantly, right? They allowed young, innovative developers to make up things that we never knew needed this compute power, to do it easily, with trial, and with a lot of help.
Right. And when we talked about contact with the world, how valuable it is to us, they are the gatekeeper to the largest pool of compute-hungry customers in the world, in history. And they are working with us to figure out how to convert these users into quantum users. Therefore, they are incredibly valuable to us, and, we're seeing great returns for that investment. It's an ongoing investment, but it's a very productive one.
... And I'll just add a little bit to it: we are working with these guys to innovate on new problems, new solutions, and those kinds of things. So it's a very active thing. There's back and forth between the two of us; there are weekly meetings, and engineering tasks that we're busily doing and that they're busily doing, so that we can announce new products together going forward. So it's a very active partnership.
So these things are not one and done, where you throw it over the wall and hope it's gonna work. We recognize, for instance, that right now, queue times on the public clouds to get to our systems don't enable a certain kind of application. So, what can we do together to make that easier? We look at these things, and then there's an active work stream going on to fix some of these problems.
One thing I would also note, 'cause I live 13 minutes door-to-door from my house in Kirkland, Washington, to our location in the big Seattle offices and the big factory: we're located in the cloud capital of the world. The big three clouds are there, as well as the other smaller clouds, and every B2B company in the world visits Seattle, and we have access to them. That's a great reason why we're opening an office and factory there.
You've said that quantum machine learning may be one of the first commercial applications, and you went through some partnerships, and Jungsang went through the use case with Hyundai. But I'm wondering if you could spend a minute talking about how you're engaged with folks like NVIDIA, obviously a leader in AI, or maybe some of the AI model companies, or those engagements or partnerships you're seeking. Is there some sort of feeling that there's competition between classical and quantum, or can you leverage those platforms or that part of the ecosystem?
Yeah. I mean, we see NVIDIA as a partner, and we see the companies that are building AI models as partners, because actually every single application is gonna be a hybrid application. Jungsang described, you know, some of the work that we're doing. Every single one of them is gonna be plugged into a classical computer. Actually, Peter was describing how powerful these computers are, but they still cannot do one plus one.
So we envision the world as a hybrid world, and integrating into a current workflow is gonna be really important. This will be the solver that goes into a larger application that customers run. So, companies that are building models, NVIDIA and others, we see them as partners that we all need to actually bring quantum computing to the stage where it's actually delivering value for everyone.
And they all see the end of the ability to simulate quantum. So there is a window in which they can do it, and that's great. It's valuable, but there is an end.
Yeah.
And it's close. It's, like, around 40 algorithmic qubits. That's why we all joined now. 'Cause there's work to be done, but it's in two years. It's not 10, it's not five, it's two. So we've got to get a lot of students to become quantum computing experts. We've got to raise governments' awareness, we've got to do all of that. But that's, how many times in your-
That's the quantum economy, right?
Yes. But how many times in your life can you say that? Can you see it? Can you feel it? Is it close?
This might be a CEO question, but just following on from that, I mean Intel and NVIDIA have quantum investments themselves but they sort of seem to see it as, like, 10 years from now, this is something we need to be aware of. If it's really in the next two years, like, shouldn't we see that soon? Shouldn't we start to see M&A activity? Shouldn't we start to see that activity kind of go into overdrive from, from those guys?
You know, it's funny, over the last four and a half years, I've seen lots of people, competitors and such, talk about timelines, and it's very different; it's based individually by company. So you might have one company that maybe won't have a working quantum computer until 2030, so they tend to have a point of view that quantum will take off in 2030. Somebody else might not have a working quantum computer until 2040, and they think, "What do you know? Strangely, quantum won't take off until 2040." I wish everyone in the quantum industry would say, "I think quantum will take off," blah, blah, blah, and at the end add just a few words: "for my technology." But instead, what they say in the media, and what people pick up, is somehow that it applies to all quantum technologies, every qubit modality. So where we are right now is that what you're hearing is that we're within two years of kind of the promised land of what everyone wanted quantum to be. But not everyone shares that point of view, because they have different timelines. For Google, I believe they've said they won't have a commercial quantum computer until 2028, and I'll take them at their word for that, right? And for others, it'll be even longer.
IonQ is now finally at a place where it's not 10, 15, 20 years. That's been said by many companies for years, that quantum is gonna, you know, take forever. This really is our ChatGPT moment. Probably just like Sam Altman: I'd imagine if Sam had run around two years ago saying, "AI is coming," I bet he could have stood naked on a street corner with a sandwich board saying, "I think AI is coming in two years," and absolutely no one would have listened to him. In fact, you would find lots of people who would say it's never going to happen, and then sure enough, overnight it did.
And so we believe now, with AQ 64, that we're finally there. We haven't achieved it yet; it's still roughly two years away, and we're working on it diligently. We published a roadmap years ago, and we've got a pretty good track record of actually exceeding what we said we were gonna do. We haven't gone back and republished the roadmap; we haven't changed it one iota. We haven't changed the financials one iota from when we... or prior to the IPO. We just keep on hitting what we say we're gonna do.
My guess is, even despite having today's call and all the rest, no one will believe us, and we'll be in a ChatGPT moment roughly two years from now, and then people will be saying, "Where the heck did that come from? I didn't see that coming at all." That would be my guess. And actually, I'm not sure how much, as a company, we really need to spend a lot of time trying to promote this thing, because it's so close. I probably can't spend enough marketing dollars to convince the world. It's probably just cheaper to actually go do it. And that's basically the plan.
And I just want to add one thing to what Peter said: some of these people maybe are just waiting for fault tolerance, but value is here before fault tolerance. You heard throughout the presentations about error mitigation, what we're doing, and how you're gonna get value very soon. And like Peter was saying, it's so close for us, and it doesn't need error mitigation. So that's another thing: companies that don't have a QPU that's ready are referring to a Nirvana state that you don't actually need in order to deliver value from these applications. Yeah.
I think IonQ is unique in the sense that we think we should add just the minimum amount of either error mitigation or error correction necessary to unlock the next stage of value. We were down at a conference in Miami earlier, and a young woman from another company got up on stage and said they thought you needed to get to 15 nines of accuracy before you could unlock value. So that's 99.9999999999999 percent: two nines, then 13 more nines after the decimal point. We look at that, and that's kind of crazy talk from our point of view, because that's such a level of perfection that it will really take 15, 20, 25 years to get there.
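As a quick aside on the arithmetic of "nines": a fidelity quoted as n nines corresponds to an error probability of 10^-n, which makes it easy to see how extreme that 15-nines bar is. (A minimal sketch; the helper name is made up for illustration.)

```python
def nines_to_error(n_nines):
    # fidelity of n nines (n=3 -> 99.9%) expressed as an error probability
    return 10.0 ** (-n_nines)

# Two nines (99%) is a one-in-a-hundred error rate; fifteen nines is
# roughly one error per quadrillion operations, about 10^13 times
# more demanding than a 99%-fidelity physical gate.
```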
But, you know, there's all that time in between where we can deliver value with systems that have a scaled amount of noise that still allows you to do something useful, even if it's not perfection. So our goal always was: let's just do the minimum required to get to the next phase so that we can win in that phase. The goal has never been to immediately go for perfection.
Let's make sure we do the minimum so that we can, as quickly as we can, unlock the value for that next phase. And we keep on doing that, right? We'll keep on doing that going after AQ 64. We will go through and say, "Okay, what is the minimum I need to do to be able to get to 256?" I won't suddenly jump and say, "Well, I need to get to AQ 10,000 as the next step." That would be another 10 years.
I think, also to your point of customer value: if Netflix had waited for the internet to be perfect as they transitioned from DVDs, would we have streaming? Because they certainly streamed at a very, very, very slow rate at first. You know, Amazon.com: I actually got to pick and pack books at Amazon.com when all they sold were books. They invited me into their distribution warehouse. What were they doing? They were creating an infrastructure for delivery. They were building the back end of AWS, but they couldn't have done it without the hard work, and Peter's a prime example of the hard work put into something that became billions of dollars of business. So we're trying to get the next Netflixes, the next Amazons, to start now. So, all right.
Thank you, everyone.
Thank you very much.
Thank you, [crosstalk]
We'll take a five-minute break for questions. For our next section, we will turn it over to Dave Mehuys, our VP of Production Engineering.
Good afternoon, everyone. Thank you for coming. My name again is Dave Mehuys, VP of Production Engineering. I've been here at IonQ for about a year and a half, and in that time, we've been building an operations team to focus on scaling, production, and deployment. What does that mean, an operations team? It means a bunch of different things. It means supply chain, it means planning and materials procurement, it means developing manufacturing processes and assembly and test. It means facilities, it means quality, it means integrating all those things together and preparing us to deploy in our own data centers, as well as future data centers. My background actually is more from the telecom environment.
So, I did spend a little bit of time at a different quantum computing company called PsiQuantum right before I came here, but I spent a lot of my career at a company called Infinera, in the telecom space, where I was pleased to lead a bunch of different teams in manufacturing, new product introduction, component engineering, systems engineering, customer service, and technical support. That hopefully gives me a nice background for working here at IonQ, and I've definitely found that to be the case in the last year and a half. So this is definitely a journey. As Peter alluded to, we started off with deep academic roots, making systems with successively higher performance and different capabilities and features.
And we are in the midst of transforming that into an engineering company on our way to becoming a product company. Dean has joined us here, and I have to say, I'm just super pleased to partner with Dean and Pat. I love that our backgrounds overlap in one sense, in that we've all worked with complex systems, and that's definitely something we have here. We also come from different spaces: I come from telecom, Dean comes from aerospace, Pat comes from commercial and other things like that. So I think we're amazingly like-minded and aligned when it comes to how we work on that transformation. Absolutely a pleasure to work with these guys every day. So I wanna talk a little bit, just to open, about where we are in that process.
You know, we've done a great job hitting our technical roadmap, leveraging those academic roots and the AMO scientists that started here at the company into building incredibly capable demonstrator machines. But that's not all we've done. If we look at the last few generations of products, every one is successively moving a little bit more towards something purposely engineered, towards a more product-like form factor. That work actually started a few years ago. I was so pleased when I got here to partner with Dean on formalizing some of the business processes we need in order to continue this and to more intentionally steer the ship towards a product company.
So I think I was here a month when Dean and I sat down and said we need to work on a PDLC, a product development lifecycle process, that really organizes and unites our team to focus on the things we all need to do to purposely engineer systems to be produced and deployed. We don't want to lose the great DNA we have for making fantastically capable, higher-AQ machines. But we also know that if we want to scale these things, we need to add some other elements to our engineering processes and disciplines to make that happen. We've already seen the fruit of that just in the systems we've been building here in College Park this year, in terms of proving out things like assembly and test.
I mean, those obviously aren't the only things you need for manufacturing, but they're key elements that move us in the right direction. And there's a lot more detail in the PDLC that Dean put together, and that we're collaborating on, that moves us in that direction. Key elements of that are, obviously, design, documentation, and training. Even though the heart of my team's operation will be in Seattle, and we'll talk about that a little bit, we've been hiring people here in the last year. We've been hiring people in Seattle in the last year. We've been working shoulder to shoulder with Dean's team in the lab, building our current generations of machines.
That's important for people on my team to absorb the knowledge transfer, and hopefully we give a little bit back because of our backgrounds: these are the things we'd love to teach and improve about how we design things for manufacturability, and how those things get incorporated into our PDLC. So it's a super pleasure to work with Dean on these things. And why do we do these things? Because we wanna make our customers happy. We want them to get a quality product, we want them to get it faster, and we want to deploy it with uptimes that meet and exceed their expectations. And right now, as you know, everything we've made to date is within the four walls of this building, and that will soon change.
So we're excited to talk a little bit more about that. As Peter mentioned, we leased a building in Seattle. It's actually in Bothell, which is very near Redmond and Bellevue and Kirkland, so near a lot of great talent in the Seattle area. It's our first quantum computing manufacturing facility in the U.S., and it will be our manufacturing hub. Not that we don't do any manufacturing here; we actually do some component and subsystem manufacturing in this building, and we will continue to do that as well. But Seattle is where we will do much more subsystem work and where we will integrate systems together. So we're super proud of that. Hopefully, you got a chance to see some of the storyboards outside on your tour.
Probably one thing you also noticed on the tour is the absence of empty space to expand. So we're showing a picture here of empty space to expand; it is in Seattle. I had hoped to have a newer picture for today, but this is pretty much exactly the way it looks in Seattle right now, minus a few ceiling tiles and things that they're touching up. We are basically ready to move in. As soon as we get our occupancy permit, in the early part of next quarter, we will be moving in, and soon after, we will be starting production. We are super excited about that. A little bit more about it: it's 65,000 sq ft, so it's about twice the size of the facility we're sitting in today.
As I mentioned, we'll be ready to start putting in all our equipment and starting manufacturing in Q4 this year. We have a great team working on facilities, both our own internal team and our partners. We have great partners in Seattle: our real estate brokers, our project managers, our general contractors, our architect. From the point we signed the lease to getting this building designed, permitted, and built: nine months. So we're super proud of how fast we've done that. Likewise, for the team that will go and work there, myself, Dean, Pat, and other functions have been busy hiring people since the middle of last year. We are on track with the facility. We are on budget for the facility cost.
We are on track with our hiring, and we're doing that intentionally, because we don't think it would be strategically wise to locate manufacturing of such a complex system far away from engineering or R&D. We are, in fact, strategically co-locating R&D, engineering, and manufacturing in the same buildings, because we know that getting these initial systems up and off the ground will take a team effort, and I'm super happy to be part of the team we have here. Beyond that, Seattle obviously will be our manufacturing hub. It will also be where we put our West Coast data center. We have a data center here as well, which you saw today, which is where a lot of our R&D demonstrator machines go.
We will have a data center in Seattle, which is where some of our production machines will go. You've heard about dedicated access as well as enterprise access for the machines we build, and we hope to make a home for that in our data center in Seattle, as well as in other places across the globe, such as our Basel Innovation Hub. Just like we worked with the team in Seattle on designing the building, the requirements for a data center, and other facilities, we are doing the same thing in Basel now. We have a great partnership with QuantumBasel, and we have a third party that we're working with on the facility design as we speak.
And so we'll be working on that, hopefully at the same rate of speed as we did with Seattle. In 2024, we will be occupying that facility and doing great things there for our customers in the European region, so we're excited to do that as well. That does mean the way we build machines here will have to change a little bit, and as I mentioned, we've already started. The machines you've seen here were largely built in place, largely by the people who designed them. That's not so scalable. So we're building a team of operations folks that will work side by side with the engineering teams, the sales teams, go-to-market teams, and product teams to make sure that we can actually deploy these anywhere.
So we will continue to build some components and subassemblies here and in Seattle. We will integrate systems in Seattle, but we will deploy them in many different places, including Basel, next year. So this time next year, hopefully we will have data centers not only here, but also in Seattle and in Basel, Switzerland, and we will have plans to deploy our quantum computers in those places. So we're excited for that opportunity. We know that means we have a fun journey ahead, but for those of us who have done these kinds of things before, this is the fun part of the journey. As Dean said, we're highly confident. We see no reason why this can't be done. You know, I love to talk to my team about the challenges that we have.
I'm super excited to be here after a year and a half. All the challenges I see are regular building-a-company challenges, not "will the technology work" challenges. And so we're super excited about the opportunity that we have here. Okay, a little bit more about the future in terms of how we in the production engineering department partner with basically all the functional teams in the company, especially Dean's team and Ariel's product team. What are we heading towards in terms of a North Star? You'll hear more about that, as Margaret alluded to, in the coming days and weeks. But we know there are best practices for complex systems that we can and should leverage for quantum computers, practices many of us have seen in adjacent industries.
Things like rackmount form factors make systems easy to plan, easy to deploy, and much easier to build and test as subassemblies that integrate together. These are best practices from systems engineering that we plan to leverage here as well. Modular design is another key thing: ready to manufacture, easier to spare. You know, Dean and Pat and I wanna help each other. It's pretty obvious how Dean and Pat can help me; I'd love to be a great service partner to them. When we have a modular design that's rack-mountable, we have a platform that will serve us, hopefully, for quite a long time.
It will be easier for me and my team to help Dean and Pat by providing them the things that aren't changing very much, so that they can swap in the subsystems and components that do change, and we can leverage that into successively higher performance. So hopefully, the flywheel, I know Peter likes that analogy, this is an element of making the flywheel turn faster and helping us all get where we want to go faster. So we're excited about that. Deployable and serviceable. You know, I came from the telecom industry. We had an installed base across the world.
We had a fantastic group of tech support and network operations teams that could monitor the health of systems across the world, that could understand when things were perhaps going out of calibration or might need some upcoming support, and we could be on top of that before it happened. So that's the kind of paradigm that we want to port into our product planning as well. Okay, so a little bit more about the future, in terms of where we're going. Some of the things we need to do, certainly in a production engineering context, but even in a company context: you have to build the right foundation. You have to do these things no matter what you build, if you want happy customers and a quality result.
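The telecom-style monitoring paradigm described here, catching drift before it becomes a failure, can be sketched in a few lines. This is a minimal illustration only; the metric, threshold, and numbers are hypothetical, not IonQ's actual telemetry.

```python
# Hypothetical fleet-health check in the spirit of the telecom paradigm
# described above: flag a system before a calibration metric drifts out
# of spec. Metric names, thresholds, and readings are illustrative only.

def needs_attention(readings: list[float], spec_limit: float,
                    warn_fraction: float = 0.8) -> bool:
    """True if the latest reading exceeds warn_fraction of the spec limit,
    i.e. the system should be serviced before it goes out of spec."""
    return bool(readings) and readings[-1] > warn_fraction * spec_limit

# e.g. a drifting error metric sampled over time, with a spec limit of 0.25:
drift = [0.10, 0.12, 0.15, 0.21]
print(needs_attention(drift, spec_limit=0.25))  # True (0.21 > 0.8 * 0.25)
```

The point of the early-warning fraction is that support can be scheduled during a maintenance window rather than after a customer-visible outage.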
So I talk about three things here. Governance, meaning a quality management system. Quality is everyone's job; it's not just a gate at the end of the process where you have a scorecard and go check, check, check, right? So we're excited to get that going here and make sure that we build quality into everything we do, not just manufacturing. Lean manufacturing is something we are designing in from the get-go. We want manufacturing processes that have the ability to continuously improve: improve productivity, reduce waste, reduce costs, and manage inventory in a responsible way.
To do that, we also need some tools, and again, we're partnering with Dean and folks in the finance department to make sure that we stand up the right tools to help us plan effectively, to help us document our designs, and to help us work with our supply base, so we can share those plans with them, because they're effectively an extended part of our company, just like our employees are. So we can do this in a way that also helps make the flywheel turn more smoothly. And also ESG: environmental, social, and governance. In my area we're focused more on the sustainability aspect, but ESG is something we know we need to do as a company across the board, so we're roadmapping that as we speak.
From a facility standpoint, we've taken the best of what we've learned here, and we've brought in elements from other industries represented in the DNA of the teams we have, to design flexible and scalable facilities. The manufacturing rooms and the R&D labs in Seattle, for example, have exactly the same spec, because we know we want to be flexible: whatever we do in our R&D and engineering labs, we wanna be able to replicate that environment in manufacturing, and vice versa, so that we can leverage it. It also makes the building incredibly flexible. We'll use a manufacturing environmental standard that's effectively the gold standard for the electronics, pharmaceutical, and medical industries, and there's lots of talent there as well.
We'll make sure we have critical infrastructure redundancy, so if the power happens to go out, we've got generator backups and UPS backups, and we can ensure that valuable uptime for our customers. And finally, supply chain. I can't overstate the importance of that. We have tremendously good partners, and we will invest in additional partners as we go forward and understand what those new requirements might be. So we're excited about the partners that we have, and we're excited about developing and expanding those partnerships. One thing we'd like to talk about here is TQRDC, which I think was pioneered by HP way back in the day: technology, quality, responsiveness, delivery, and cost.
Again, we wanna have a common measuring stick for our suppliers, but also a platform for talking to them about expectations. We do have a diverse supply base. Some suppliers are exactly what we need them to be; others are maybe not there yet, and are more used to serving an R&D market. If we can talk to them in this language, and we are already beginning to do that, we can align on expectations in terms of what we both need to do to be successful. We hope what helps us helps them. That's the partnership we wanna have, and those are the kinds of suppliers we wanna have.
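A TQRDC-style "common measuring stick" is typically just a weighted average across the five categories. Here is a minimal sketch; the weights and scores are hypothetical, not IonQ's actual supplier criteria.

```python
# Hypothetical TQRDC supplier scorecard. The category weights and the
# sample scores below are illustrative only, not IonQ's actual values.
WEIGHTS = {
    "technology":     0.25,
    "quality":        0.25,
    "responsiveness": 0.15,
    "delivery":       0.20,
    "cost":           0.15,
}

def tqrdc_score(scores: dict[str, float]) -> float:
    """Weighted average of 0-100 category scores into one supplier score."""
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

supplier = {"technology": 90, "quality": 85, "responsiveness": 70,
            "delivery": 80, "cost": 75}
print(round(tqrdc_score(supplier), 2))  # 81.5
```

Scoring every supplier on the same rubric is what makes the expectation conversation concrete: both sides can see which category is dragging the number down.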
So, with that in mind, both the planning and management tools, the integration of those tools, and the sharing of key data with our suppliers are all things that we're doing as well, in terms of adding the right layers of business process. With that, I will stop and give you an opportunity to ask any questions. Yes.
Just a question about sourcing and supply chain, especially on the optics side. Today a lot of the stuff is off the shelf, but as you try to miniaturize, it sounds like there may be opportunities to go to platforms like silicon photonics. How are you thinking about sourcing those kinds of components? Will you have to do more design in-house as you look to miniaturize?
So we're exploring all options. My background is in telecom; that's the industry I come from, so I'm familiar with that kind of partnership with suppliers. We have suppliers engaged today that we're actively talking to about doing those things. Size, weight, and power will matter in the future, in addition to cost. So partnering with a supplier that can roadmap with us, that we can share our roadmaps with and perhaps do joint development with, is high on our list, and we definitely are currently engaged with suppliers that can provide that capability for us.
As you design the systems, are there subcomponents that you may be able to send out and build elsewhere, and then bring back together and assemble? Or do you envision building everything together at this facility?
I guess, for a while, integrating things here is probably something we need to plan to do. As far as the components themselves, we do have suppliers that we get custom components from, things that are designed by us and made there now. Putting successive levels of integration at a supplier will make sense at a certain volume point and a certain cost point. So it also depends on where it sits in our hardware stack. If it's something that's very close to us, we'll probably keep more of it in-house. If it's something we feel we can comfortably outsource to a contract manufacturer, a box build, or somebody vertically integrated to do both, that's absolutely something we'll explore.
This is Dean. I'll just build on what Dave indicated. There are areas where we have to vertically integrate. Our trap, for example, is one of those, right? There's a lot of co-design required between our trap and our optical subsystems, and so those two are done in-house. Now, we don't do the low-level trap component fabrication ourselves; we send that out to fabs and others. But you saw some of the heterogeneous packaging that we do in-house, right?
And other items, you know, we do not do sheet metal bending here, so we have all of that done out of house. To Dave's point, it's gonna be partly about scale, and partly about those things where we need tightly coupled co-design, or where we need to drive vertical integration.
Any other questions? Okay, no further questions. We will take about a 10-minute break before our final session of the day.
Hi, everybody. Welcome back for our final session of the Analyst Day. I realized I forgot to introduce myself at the beginning. My name is Thomas Kramer. I'm the Chief Financial Officer at IonQ, and it is great to be here. It is fun, it's hard, and it's invigorating. When I was listening to Margaret's story about putting up the first web page for The Gap in 1995, I was reminded that back in 1999, I co-founded an internet company, an online registration company. We didn't track the statistics, but at that point roughly 20% of the United States had email, and we had just started a company to let people register over the web using email.
That ties back to the point that it would be really late to start a quantum company after the field has already reached, whatever you call it, quantum supremacy or commercial supremacy. It's too late to start after it's done. That little events company has now gone public twice and been taken private twice, the last time a couple of months ago by Blackstone for almost $5 billion. And so vision is what's required when you start companies. Management is what's required when you manage companies, and we have a good group here. We're having fun, and I'll get into the material in a little bit. Jordan?
Hi, everyone. Good afternoon. My name is Jordan Shapiro, and I am IonQ's Vice President of Financial Planning and Analysis and Head of Investor Relations. I've been at IonQ for about three years, but prior to joining the company, I was an investor in the company at New Enterprise Associates, or NEA, one of the world's largest and most active VC funds. And so I've had the privilege of being along for the journey since some of the earlier stages of IonQ, and as the company was preparing to go public, I saw the opportunity to hop in and continue contributing to this incredible mission that we have here today. Like Thomas mentioned, it's a blast. We are continuing to have fun every day here building the company.
Absolutely. In fact, I come from two decades of background in SaaS software. This is easily the hardest thing I've ever done, and also easily the most fun. What we're gonna go through now is material you already know, because you read our financials, but I needed to have a speaking part, so here we go. It's important to remember that we have several ways of going to market. This is important when you model, and it's also important for understanding our business model. This exact slide was part of our pitch deck from when we went public. The icons may have changed; actually, I think they're the same. The colors have changed, but we haven't changed our business model. That's pretty unique, given that the industry didn't exist at the time.
Now, Margaret talked a lot about our cloud partners, and we started by going to AWS and being listed there. So we have revenue that comes through our cloud partners. These deals tend to be smaller, and there's less revenue; it's the smallest of our revenue groups. Today we don't break out the segments, but it's a good way for people to get their first entry into quantum, and anybody can just go to AWS or Google or Microsoft Azure and play around. In fact, you all should. Margaret did; I'm actually impressed by that. Now, the larger part of our customers actually come directly to us, and they buy compute access, which here is called preferred compute agreements.
However, most of them also buy application co-development services, or professional services, if you will, because today there isn't a large enough community of professional quantum developers out there who can do it. People come to us, and we help them. Sometimes we help them get educated enough to do it themselves, and sometimes we take their domain expertise, translate it into quantum, and help build algorithms. This is very successful and very sticky. People love doing it that way, and it's a great way for us to drive customer success and happiness.
We talked, when we went public, about the fact that one day, sometime in the future, people are gonna want to buy these machines. It just felt like it was gonna happen. We said it really fast, we didn't linger on it, and people didn't believe us. We weren't entirely sure we believed it ourselves. We knew it would happen; we just didn't know the timeline. Then, at the beginning of last year, we said, "Well, we kinda have to tell you that this is going to happen because we are seeing too much of it."
It's just people talking, there's no contract, but we are seeing sustained interest from multiple parties. And so we started alerting the market: yes, hardware will be sold, and quantum will be run on-prem. Last year, we got the first taste of that by selling a customized quantum computer to the Air Force Research Lab in Rome, New York. This is a computer they're using to experiment with quantum networking, but in reality, it was a quantum computer, a Harmony-class computer. So we could have called that a system sale.
We didn't, because it wasn't intended as such, and it also wasn't setting the right price marker in the market. But we had a big sale, we announced it, and we were happy about it. A few months ago, we announced a sale to the Swiss group QuantumBasel, where we not only get a sale in territory in Europe, we also get to keep part of that capacity and use it as a base for our European operations. So this is a monumental development for us, and it pretty much marks the next stage of our go-to-market. So what is the market? We think about three distinct groups: government, commercial (sometimes called enterprise), and academic. Academic is sometimes thought of as government, but oftentimes they're separate.
We had anticipated that government, defense, and academia would actually dominate the market, because with every revolutionary new technology, like the internet, computers, or lasers, the government has played a very dominant role. The reality is that the largest sale we've ever made was to a commercial entity: QuantumBasel. And so we are already seeing this development move really, really fast in terms of technology adoption in these markets and how they sell. On systems hardware options, I want to clarify something Peter said on our last earnings call, when he said, "We're seeing a lot of interest, not just in systems, but in system use, particularly for networking." And we have, today, two partners that we're doing this with.
There is AFRL, the Air Force Research Laboratory, and we're also doing some work with UMD to capture a qubit and translate it to a photon and back, which is, I've been told, the makings of quantum networking, but you shouldn't ask a CFO for engineering details. We're seeing a lot of people with an interest in this, and we're talking to lots of interested parties, enough so that there's money in it, and near-term money that we've already proven. In the future, this could be its own revenue line, because everybody's gonna need networking, and we will be here when they do. So how does this translate into revenue recognition? That's often a question I get, usually the night before somebody's publishing something.
But the easiest way to think about it is this: access agreements are essentially the same as SaaS. You just stretch the revenue out over the service time of the contract. There is a similar version of that, which we call service-period-based. I will get back to it, not only because it's hard to pronounce, but because it's easier to understand by way of percentage of completion. And we arrived at percentage of completion because of the final one, which we're not even doing yet. Revenue recognition for Best Buy when it sells a computer is simple: when you get the cash, you recognize it. At some point, when we have standardized quantum computers and we're selling them like hotcakes, we will do that too.
In between, like the work we're doing for the Air Force Research Lab, when we're customizing a computer, it is customized to a degree that if they had walked away right before we delivered it, we couldn't just give it to another customer. That means you can't recognize upon delivery, and it also means you can't straight-line it over the service period. You recognize percentage of completion: the effort it takes to actually create it. We anticipate that we're gonna have several of those sales, and we will indicate when we do, so that you know how to think about the revenue recognition period. Essentially, if there's customization, it's probably gonna be percentage of completion. Now, back to service-period-based. This is what we're doing for QuantumBasel.
QuantumBasel has bought not just one but two generations of quantum hardware from us, and the first one is expected to be delivered toward the tail end of next year. This is the AQ 35 machine. Now, you could have thought that, well, we're building it and customizing it a little bit, and therefore it would be percentage of completion. After long discussions with our accounting fairies, they told us that, since we're retaining portions of that capacity, the revenue from that machine will be recognized over the period it's in service in Switzerland, from once it's up and running until it's decommissioned, straight-lined between those two dates. Same thing for the second computer, which is AQ 64.
In addition, what makes it a little murkier is that there's an access agreement, so that QuantumBasel can use our machines here today while they're waiting for the machine that will be on-prem in Switzerland. That access agreement stretches across the entire five years of the contract, so that when there's a maintenance window in Switzerland, they can use that same access line. That makes it a little more complex, but not insurmountable, and of course we'll get back to the details on rev rec for next year on the Q4 call.
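The two patterns walked through above, straight-lining over a service period versus recognizing by percentage of completion, can be made concrete with a small sketch. All dollar amounts and periods below are made up for illustration; they are not IonQ's actual contract terms.

```python
# Minimal sketch of the two revenue-recognition patterns described above.
# All prices, periods, and cost figures are hypothetical.

def straight_line(total_price: float, periods_in_service: int) -> list[float]:
    """Service-period-based: recognize revenue evenly over the service period
    (e.g. from commissioning until decommissioning)."""
    return [total_price / periods_in_service] * periods_in_service

def percentage_of_completion(total_price: float,
                             costs_by_period: list[float]) -> list[float]:
    """Customized build: recognize revenue in proportion to the effort
    (cost) incurred each period, rather than at delivery."""
    total_cost = sum(costs_by_period)
    return [total_price * c / total_cost for c in costs_by_period]

# Hypothetical $12M system straight-lined over a 24-month service period:
print(straight_line(12_000_000, 24)[0])  # 500000.0 per month
# Hypothetical customized build whose effort ramps up over three periods:
print(percentage_of_completion(12_000_000, [1, 2, 3]))  # [2000000.0, 4000000.0, 6000000.0]
```

The total recognized is the same either way; what differs is the timing, which is why knowing which method applies to a given deal matters for modeling.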
So that's how we think of revenue recognition. There is also a usage-based model. That is not a dominant revenue factor for us, so we are not seeing wild swings because somebody ran a lot of algorithms in one quarter. It could happen, but it's not how I would do most of my modeling. Gross margin?
So Thomas had the exciting job of talking about revenue. I will share a few notes on the other side: margin, inventory, how we're thinking about building up, and our cost centers. On a margin basis, the first thing to understand is that, like Thomas mentioned, we have a number of revenue streams, and we don't break out our revenue by segment today. It's important to note that the industry is still developing, so we're still learning about our pipeline and about market demand in each of those segments, and each has its own margins.
Whether it's selling hardware, selling services, or selling access to systems, each contributes a different margin profile to the company, and you'll see that reflected here over the last few years and year to date in 2023. The other important thing is that while the industry is in flux, we can continue to evolve our pricing based on our customers' needs. Thomas mentioned that our pricing for AFRL, for example, might differ from how we set price points over time as the exact nature of the system sales we deliver changes. That is also true for margin. We have an opportunity today, with a healthy margin, to consider where it makes sense to trade off margin against market penetration.
We always like to say, for people in the room, "If you wanna buy two systems, there's a special deal for you today." We mean that in the sense that there are ways we can adjust our margin profile in order to keep distributing quantum into the market today. Lastly, we expect the currently evolving nature of our margins to stabilize as the industry matures and those revenue streams become more clear, so stay tuned for updates there. Next, I wanted to talk about a relatively new element of our financial structure: IonQ is continuing to build up inventory.
So there was a question earlier on our tour about how we think about building systems more quickly, especially given the supply chain. We like to say that most things we build at IonQ, you can build with components you buy off of Amazon.com. That's not exactly the case. They of course include some specialized components, and even common components like chips sometimes take time to arrive at a customer like IonQ. So, to start building resiliency into our supply chain, and in part to prepare us to manufacture, as Dave mentioned, we've started building up inventory at IonQ. That allows us to mitigate risk in delivering systems to customers over time, which is really important as we see demand increasing and customers wanting their systems sooner.
It also allows us to be more serviceable and to improve uptime. If we have replacement components, say an optic needs to be swapped on a quantum computer, having it in-house through our inventory gives us more flexibility to serve that customer, serve cloud customers, et cetera. Lastly, we think about this from an agile budgeting perspective: making sure we have flexibility in our budget to increase inventory if customer demand picks up. That is important to us while the industry is still maturing and we're getting a better sense of the pipeline for system sales.
What you see on the right side of the slide is a bit of how we think about the inputs for inventory. There's a feedback cycle here. To simplify the equation, we look at our demand forecast, i.e., what the market is asking of us, what we have in-house, our current inventory, and what we need to build. That feeds into signals that fire off and tell us what to buy. Then we continue to look at our upcoming needs and what we have in-house, in a cyclical manner, to build out our inventory.
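The feedback cycle described here, forecast plus current stock feeding a buy signal, can be sketched in a few lines. This is a simplified illustration with hypothetical fields and numbers, not IonQ's actual planning system.

```python
# Minimal sketch of the inventory feedback loop described above:
# forecast demand and what's already in-house feed a "what to buy" signal.
# Field names, the safety-stock buffer, and the numbers are hypothetical.

def purchase_signal(forecast_demand: int, on_hand: int, on_order: int,
                    safety_stock: int) -> int:
    """Units to order so projected stock covers the forecast plus a buffer."""
    shortfall = forecast_demand + safety_stock - (on_hand + on_order)
    return max(shortfall, 0)  # never emit a negative order

# e.g. 40 optics forecast, 25 on hand, 5 already on order, buffer of 10:
print(purchase_signal(40, 25, 5, 10))  # 20
```

Running this each planning cycle against updated forecasts and stock counts is what closes the loop: the signal fires, orders land, on-hand inventory changes, and the next cycle recomputes.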
Yes, actually, I remember when I had just started, one of our engineers came to me and said, "I need this machine." And the name was this long, and I couldn't tell what it did. It turns out it measures surfaces at a microscopic level to see if there are any bumps, and then it moves itself over so that it can stitch together the entire surface in one image. And it was, I don't know, $120,000. I'm like: that's an expensive machine. Can't you just take a camera and move it over by hand?
It turns out you can, but what made me think of it was that, yeah, we like to say we buy it off Amazon, and while we don't necessarily go to Amazon, a lot of this stuff is just from vendor catalogs: "Yeah, we make this, and you take one." After I started, we realized we still have to get the best machines, we have to get what the engineers want, but we can be more systematic about how we ask for discounts and how we group our orders so that we get volume. Also, as Dave talked about, strategic sourcing: knowing that our vendors will exist tomorrow, that the products will not break, and that if they do, there's a plan for replacing whatever broke.
This is a sign of a maturing organization, and it turns out you can get better at sourcing and have things more easily available for lower cost. All you need is somebody who focuses on it, and this is how we're professionalizing the entire organization. I obviously know a lot more about what we do in the financial world; that's why I will bore you with it. But the exciting thing to talk about is our cash balance, which I think is great. I mean, it's good to have cash. Cash is king, but it's not so much that having the cash is important; it's not needing to go out and look for more cash.
The fact that we have a good balance means that we're not worried about fundraising all the time. In fact, that's the whole reason we went public: so that we would not be on the road every 12 to 18 months. That has worked well for us, and it enables us, at the same time as we are doing R&D on our next generation, and the generation following that, and the generation following that, all simultaneously, to also build out a facility that can manufacture these things to an exacting standard, so that they'll work in our customers' offices, not just here. Yes, we were asked about the M&A opportunities in the market. We evaluate a lot of companies, and the measure of how successful your M&A operation is isn't necessarily what you buy, but the things you don't buy.
We've been very successful at not buying much so far. We've made one acquisition, and we made it after very careful consideration of how it matched our needs. But the reality is that when you have a cash pile, you will consider opportunities as they come along. We're not an M&A machine, and we're not financial engineers, but we'll do it carefully. One thing I wanted to point out is that, as a function of our having gone public via a de-SPAC, there are warrants out there.
The unfortunate effect of warrants is that, because they are marked to market every quarter, when our stock price goes up, which investors generally think of as a good thing, the value of the warrants increases, and so we report a higher loss that period. Conversely, if our stock price goes down, we might actually have a gain that quarter. I'm just asking that you take this into consideration when you're modeling, and it's also why we're focusing on adjusted EBITDA as a key measure right now, instead of EPS or net income. This slide, you've all seen. It's there in case you want to look at it and ask questions, and we're open for questions.
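The warrant mark-to-market effect described here is mechanical, and a small worked example makes the sign flip clear. The warrant count and per-warrant fair values below are entirely hypothetical, not IonQ's actual figures.

```python
# Illustration of the quarterly warrant mark-to-market effect described
# above. The warrant count and fair values are hypothetical.

def warrant_remeasurement_expense(num_warrants: int,
                                  fair_value_start: float,
                                  fair_value_end: float) -> float:
    """Change in the warrant liability over the period.
    Positive = added expense (bigger reported loss); negative = a gain."""
    return num_warrants * (fair_value_end - fair_value_start)

# Stock (and warrant fair value) rises: the liability grows, loss grows.
print(warrant_remeasurement_expense(1_000_000, 2.00, 3.50))  # 1500000.0
# Stock falls: the liability shrinks, producing a gain that quarter.
print(warrant_remeasurement_expense(1_000_000, 3.50, 2.75))  # -750000.0
```

This non-cash remeasurement is exactly the kind of item adjusted EBITDA strips out, which is why it can be a cleaner quarter-over-quarter measure than net income while the warrants are outstanding.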
As we think about your different revenue streams, is there a way we can disaggregate where your revenue is coming from today, maybe the actual compute relative to services or hardware? Just trying to get a sense of the revenue power of the systems today and how that can change over time.
Absolutely. Because we have given a lot of visibility into our sales, you can actually find out a lot by reading our press releases. And it's intuitive that if you sell a system for multiple tens of millions of dollars, which is the price tag on these things, system sales are gonna be a major part of our top line. And to the extent that we get enough of them out there and enough people developing on them, then over time, software will eat the world in quantum, too.
But for the near to medium term, we will be making a lot of our revenue from systems, and then we will prepare ourselves for making the best software available, and there will be revenue streams from that in the future too. We don't have those now. We do have professional services, though, and there's an appetite for that. People are very happy when they get to work with our developers.
I'll just add a little bit to that. Sometime in the future, when we talk about applications and such, one would hope we're in logistics and batteries and drug discovery, maybe not in a direct way. We might not actually be producing batteries, but we'd be helping design those batteries and getting some sort of royalty as people produce them. Same thing for drug discovery and many other things. I would think that if you were to ask 10 or 15 years from now, and our only source of revenue was system sales, we had somehow failed. At some point, we need to be more than just system sales or just running on the quantum computers themselves.
Thomas, I'll ask you a question that was submitted online: How do you think about our cash position as it pairs with hitting AQ 64 and IonQ entering the enterprise-grade era?
So that's actually an easy question, 'cause we have visibility to AQ 64, right? The hard question would be: What's your cash position in 2040? No idea. The reality, though, is that if you get to 2040, there's a reason you got there, and so revenue streams will be available. But right now, getting to AQ 64 does not require any unnatural acts. In fact, on our roadmap as currently stated, we should be able to get to cash flow profitability with the cash we have on hand. And so we will use some of that, yes. We will hire people. We will hire more people on the engineering and production side of the house than on the G&A and sales side.
We will probably also hire more people on the sales and marketing side of the house than in the rest of G&A, because we don't have a lot of those already. And, you know, I'm a cheap guy, and I don't want a lot of overhead, because every time I spend too much on overhead, I can hire fewer engineers, and that means the roadmap takes incrementally longer. You know how they say it's a marathon, not a sprint? It's both. This is just the longest sprint ever, and we're gonna keep running really fast. Joe?
Yeah, I guess one of the things cynics bring up with regards to you guys is the related party transactions with the University of Maryland. Maybe you could just take the chance to talk to that a little bit and, you know, how sustainable is that? It looks like it's very profitable. Can you speak to that relationship?
Absolutely. So, technically, from a GAAP perspective, we don't have a related party transaction with UMD, because it doesn't qualify under GAAP. We keep listing it because we are not hiding the fact that one of our great partners is UMD, and we made a deal with UMD and Duke to give them 0.5% of the company so that we could get all of their patents pertaining to ion trap quantum technology. Every few years or so, we renegotiate for future patents. But if you look at companies that have been spun out of academia, this is significantly cheaper in terms of the cost to the company than any other major deal out there. So that was just a good transaction.
In reality, the biggest transaction we did was that we got our two co-founders from UMD and Duke, and these institutions produce a lot of our new hires as well. The fact is that since UMD produces so many computer scientists, they need access to quantum computing, and we sell to them at arm's length at market prices, which are well established across all of our other customers as well.
I was just gonna add on that: the deal with the two universities is royalty-free, which is the thing that makes it unusual. So we don't pay anything in terms of royalties back to the universities. It was a one-time transaction, an equity deal, and that's the part that makes it so unusual. Okay, I think we're done for the day. I wanna thank you all for coming today, and also, for those people who are listening online, hopefully you found this interesting and worth your time. We understand everyone is busy, so we really do appreciate your taking the time to come here.
We will look to do another one next year, although next year, I think we'll do it in maybe the January timeframe and in Washington State at our Bothell location. You'll get a chance to see the Seattle weather during the wintertime, so just bring your raincoats with you. Thanks, everyone. Really appreciate it today, and with that, we will sign off.