We really appreciate you taking the time out of your day to listen to us live here. My name is Nikhil Dhingra, Director of Product Marketing here at IonQ, and I'll be kicking us off. Before we get started, just a few housekeeping items. Time permitting, we are going to take a few questions at the end of the show. If you do have questions during our session, please use the Q&A option in Zoom to ask the question. We will address some of them at the end of the session, time permitting. All participants are automatically muted, so again, please use the Q&A option and not the Raise Hand option, because we can't address raised hands. At the end of the session, you will get an automatic survey sent to your email.
It's a short 2- to 3-minute survey, and we greatly appreciate your feedback so we can continue to make these better. A recording of this webinar will be available on demand shortly after the show. All registrants will be notified when that's available. We will get started in just a couple of seconds here. All right, so we have a great lineup today of IonQ leadership. We've got our President and CEO, Peter Chapman, Dr. Dean Kassmann, SVP of Engineering and Technology, and our CMO, Margaret Arakawa. So I'm very excited to introduce a great lineup today. With that, I will pass it over to IonQ CEO and President, Peter Chapman.
Thanks, Nikhil, and welcome everyone, and thanks for joining us today. Nikhil, can you move the slide forward? Okay, so I'm not going to read this slide. However, before making investing decisions in IonQ, I do encourage you to read it in its entirety first, and a copy of it can be found on the IonQ website. As a non-lawyer, I can attempt to summarize it for you. We're going to be talking about things we hope to accomplish in the future. However, there's an old English proverb: "There's many a slip 'twixt the cup and the lip." That's all to say, our actual results may vary. Nikhil, next slide, please. You might have seen this slide before. It's our mission statement. We often put it at the very beginning of our presentations.
I thought I'd talk a bit about where we are on the road to this mission. When I joined IonQ some five years ago, I asked the question: What is quantum good for? And I got an intriguing answer: "We don't know yet, but we know it'll give you a superpower; we just don't know which one." But who doesn't want a superpower? Well, five years later, I think we're homing in on exactly what those superpowers are, and I thought I'd talk a little bit about some of them. One is certainly chemistry applications. To be honest, for me personally, this is a little surprising. I had thought that for chemistry, we were going to need a lot more qubits.
In fact, if you look at the roadmap we published prior to the IPO, you'll notice that chemistry was one of the last things. But as you'll hear today, we're now exploring a chemistry application to help a customer solve a $20 billion problem. And so, through some brilliant work on the algorithmic side, chemistry is coming in sooner than we had originally expected. The area in chemistry that I'm most excited about is applying quantum to drug discovery, and the huge potential impact we might have on improving the quality of life for everyone. The second area, which we've been steadfast in, is machine learning. Lots of prior customer engagements have shown excellent results.
One of the new things that we're doing this year, just getting started, is applying quantum to large language models. We're just at the beginning of that journey, but I hope to have results within a year or so. If we're successful there, we'll be able to offload significant workloads from GPUs and significantly reduce the energy requirements of data centers for large language models. So it's an exciting area for the company going forward. And then lastly, and it was the reason I joined IonQ: how can quantum be applied to strong AI? We have some hints of that as well, using an approach that does not rely on large language models. So that's an area as well that I think is going to be extremely interesting going forward.
I'll just say it's an exciting time for quantum, as we finally bring on machines that can no longer be simulated, and we start to provide the computational power to tackle some of these big, hairy problems. So we look forward to you joining us on that journey over the next 18 to 24 months as these machines start to come online. Nikhil, would you mind... Next slide, please. So this is just a selection of our customers, and I just wanted to thank all of our customers and partners for their support. Simply put, IonQ would not be successful without you. Customers are demanding, constantly pushing us to better serve their needs, and as a result, making IonQ and our company better. So a sincere thank you to every customer.
I heard another quantum CEO recently say they don't have any sales because they don't have better machines. So following that logic, we must have great machines. Today's talk is about technology, so I won't veer too much into sales, but IonQ is quickly approaching $100 million in annual bookings, and I can report today that we expect this to be an excellent year. A huge thanks to Rima and her sales team. And for those who tend to read too much into these things, I'm not updating our bookings guidance here. We do that during earnings calls. Next slide, please. The thrust of today's talk is about what it takes to make a commercially viable quantum computer. The first pillar or leg is performance. In particular, two-qubit native gate fidelities.
Fidelity in the short term controls how big a quantum circuit you can run in the NISQ era, and in the long term, determines how much error correction you'll need. And at IonQ, this is certainly one of our sweet spots. The second leg of the stool is getting to scale. While we hope to find commercially significant applications in this era, the true promise of quantum will need a lot more qubits and faster gate speeds. But just as importantly, as we scale up and network these quantum computers, we need to reduce the cost of the machines to make them affordable, because future quantum computers are gonna be made up of networked individual machines. So the cost per qubit needs to go down as the computational power increases. This is the one area of Moore's Law that we share.
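To make that point about fidelity and circuit size concrete, here's a rough back-of-the-envelope sketch. This is my own illustration, not an IonQ model: if each two-qubit gate succeeds with fidelity f and errors simply compound multiplicatively, a circuit of N gates succeeds with probability roughly f^N, so the depth you can reach before crossing, say, a 50% success floor is log(0.5)/log(f).

```python
import math

def max_gates(fidelity: float, min_success: float = 0.5) -> int:
    """Roughly how many two-qubit gates fit in a circuit before the
    whole-circuit success probability drops below min_success,
    assuming errors simply compound multiplicatively."""
    return math.floor(math.log(min_success) / math.log(fidelity))

# Illustrative numbers only:
print(max_gates(0.996))  # 99.6% per-gate fidelity -> 172 gates
print(max_gates(0.999))  # three nines            -> 692 gates
```

Under this crude model, going from 99.6% to three nines roughly quadruples the usable circuit depth, which is why small-looking fidelity gains matter so much in the NISQ era.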
So we're hyper-focused on that part. And the last leg is what we call enterprise grade, and, to be honest, we had a little bit of trouble trying to label this one because it encompasses so much. You could think of it as product maturity, or for that matter, being a product at all. You've all likely seen early pictures of ENIAC, one of the first computers. It was big and bulky. It was not a product; it was a lab experiment, not designed to be mass-produced. It's one thing to do a lab experiment and write a paper, but it's something entirely different to produce a product at scale. The reality is, for quantum to meet its promises, all three of these legs are required.
To over-index on one leg means that you have a one-legged stool, which isn't worth much. Quantum is all about the architectural choices and navigating the compromises. In quantum, you can optimize one parameter, but often at the expense of another. So it's all about the final solution as a whole, and that's what we're hyper-focused on: making sure that all these things are in balance so that the final solution can be produced to create IonQ's products. So we are optimizing all three of these pillars, and most of our talk today will cover areas in each one of these, where we are today, and where we're going. Nikhil, next slide. So, you've probably seen these before, so I won't go through this in detail, but ion traps have a number of advantages.
And you can see some of them right here. So again, I won't go through them, because I think you've seen this before. And with that, because today's talk is mostly about technology, I'm gonna hand this off to Dean, but I would like to thank everyone again for joining. So with that, Dean?
Thanks, Peter. So, I'm gonna pick things up, talk a little bit about some preamble work and some of the exciting things I wanna cover, and then go deeper later. But first, I wanted to jump into our overall roadmap, with a slide that doesn't talk technology so much as it speaks to the generations of systems that we've put in play, and that we're going to put in play in the future. And so right now, we have Harmony available out on the cloud. We also have Aria available on the cloud. Forte is available in early access. And so, Forte is our flagship. It's commercially available. You can access it.
You know, we passed AQ 36 earlier this year on Forte, and that was a year ahead of schedule. Now, as we move forward, Forte Enterprise adds a lot of focus on manufacturability and on data center readiness for that system. We're gonna mirror the performance and capabilities that we have from Forte and add to it customer requests, robustness, and uptime improvements. As we move to Tempo, that system is targeting more qubits and higher quality gates. It's gonna be the first system that uses our Reconfigurable Multicore Quantum Architecture. And so, we'll have a multi-core system there with Tempo, and we expect its performance to exceed anything that can be simulated on a classical computer.
You know, Tempo is gonna be the first system in which we use and leverage barium. We have a number of development systems in place right now, but that'll be our first commercially available system in barium. I'll speak to some of our barium progress in a little bit. So overall, I just wanna highlight and reinforce that our future systems are all being designed with a high degree of modularity. I'll speak to that modularity in a little bit; it allows our systems to be interconnected, in this case photonically, later on. But as we move through, we expect multiple generations to continually push performance, scale, and enterprise grade, all three of those pillars that Peter mentioned. Next slide.
So, let's start to unpack those three essential pillars that provide commercial value. This slide is intentionally a little bit of an eye chart; there are a lot of different things that go into these three pillars. We've thought through all of these essential ingredients and elements, and we're working on all of them. I'm going to pick a couple of them today to talk about, and I hope future webinars will pick a couple of the others and deep dive into them. And so right now, we focus on performance in a few different ways. Raw gate fidelity allows deeper programs to run with higher quality, right?
We're optimizing and fine-tuning our compiler to optimize circuits in order to maximize overall performance and scale, right? The more we can compile down onto the hardware, the broader the reach of circuits and algorithms that we can run. Increasing the quality of our results through error mitigation techniques and, later, error correction extracts the maximum value from the actual circuits that are run. Within scale, we're focused on increasing the number of qubits that we have while, at the same time, maintaining a very rich and efficient connectivity between those qubits, and I'll be talking about scale quite a bit shortly. Now, in terms of enterprise grade, that's all about making our systems available to meet our customers' needs and the commercial capabilities.
So Peter mentioned manufacturability: being able to put these in customer data centers, and being able to make sure that the kinds of jobs we need to run, in terms of hybrid and full-stack software, are available and just work. So, next slide, please. Now, for the right architectural decisions, if you look across those three pillars, there are trade-offs that have to happen, right? There's no single axis that you can over-index on without potentially hurting other areas. And so we've been focusing on trying to make the right decisions across all three pillars to be able to bring the most value. And so they are driving the core requirements to bring customer value.
To give a clear example: within trapped-ion systems, we have known for quite a long time, going back several decades now, that by shuttling only two qubits into an operational zone, you can achieve great fidelity. But as a result, you end up with a very poor time to solution. This was known back in the 2000s. It was originally developed by NIST, and so there isn't a lot new here, right? But it's standard practice today by many. In our systems, we've looked at this and decided to go a slightly different path. Next slide. Through our programmable beam steering, our AOD technology, we are focused on longer chains.
Those longer chains allow us to increase our all-to-all connectivity, but also improve time to solution, because we do not have the large shuttling and other overheads in place. This really speaks to our philosophy in making the architectural and engineering trade-offs required to provide performance, scale, and enterprise grade all at once. Next slide. So I'm going to be talking about some updates today that I'm excited about. On the performance side, I'm gonna pick up on fidelity and gate speed. On scale, I'm gonna be talking about qubits. And on enterprise grade, I'll be talking about some of the other work and research that we're doing.
So performance: right now, we have 99.6% two-qubit gate fidelity in our Forte systems. We have about 600 microsecond two-qubit gate times. We have 36 qubits, and right now, we have a couple data centers in place between College Park and Seattle. Moving forward, I'm happy to announce that our objectives for next year are to break three nines in our two-qubit gate fidelity. That's in long chains, that's with over 100 qubits. This is all on our road to AQ 64 that we've talked about before. We'll have our Basel data center online, and that brings us up to three data centers, all co-located and dedicated to supporting customer jobs.
We're gonna be reducing our gate times next year. As we move forward, beyond next year, we're gonna be driving even greater improvements in our native gate fidelities and, by association, our logical gate fidelities. That's six nines in 2026 and beyond. And also scaling to thousands of qubits as we leverage photonically interconnected systems, and I'll be deep diving into that in just a moment. So I'll dive into each of these a little bit more, but let's just start talking about performance. So, next slide. Performance is a key pillar in everything you do. Without performance, you're not gonna get the answers that you want. So we've had a strong track record of fidelity and performance.
I want to focus, for this particular initial point, on fidelity. You can see our 2025 target. You can also see the anticipated out-year native gate fidelities, as well as the associated logical gate fidelities. Now, as I mentioned, our plan for 2025 is to beat 99.9% native gate fidelity. In subsequent years, I believe we have a clear path, partly enabled by our choice of barium and our other engineering and technology investments, to beat even that, and my best estimate is that we can achieve even greater numbers. Now, those year-over-year improvements that you see there are a result of continued investment in our engineering, research, and technology. And that begins with the team that we're building.
You know, at IonQ, we have what I consider the best team out there. We have a world-class team. Everyone is unbelievably talented, hardworking, and extremely innovative, right? The actual engineering starts with a deep understanding of our systems. That starts with the people; it starts with the physics. It's not only a deep understanding of our systems; it also starts with an understanding of our physical error sources. Those can arise from a number of places, like laser crosstalk, context dependency, and phase noise. From there, it moves into simulation and modeling. Those inform design. It informs our control electronics, software, and mechanical systems. It informs the way we do gate solutions. All of this fidelity is achieved in long chains, right?
As we move through time, that design process, the deep understanding, and everything else will compound, letting us build and drive deeper and deeper capabilities. So, let's jump into our fidelity and long chains, because those long chains not only help performance, they help us with scale. But I wanna talk a little bit about the performance piece in our all-to-all architecture. When people compare qubit modalities, they normally focus on qubit count and gate quality. Just as important is the arrangement of the qubits that we have. Some architectures, in particular superconducting, do not have all-to-all connectivity.
Many modalities, like planar architectures with only nearest-neighbor connectivity, have to do more work. All-to-all connectivity greatly reduces the complexity of the algorithms, resulting in faster time to solution as well as higher fidelity results, by reducing and avoiding swap operations. Now, what does this mean? If you look at the example that's on the screen right now, I'm showing an example where we are coupling two highlighted qubits, the ones that are pure orange. That would require a single gate in an all-to-all architecture. The same operation would require 11 swaps, or a total of 34 gates, to achieve the same result.
That means if you had a native gate fidelity of about 99.9%, or three nines, that would effectively be reduced to 96.7%. That's a huge hit. With swap and other intermediate operations interleaved between two-qubit gates, the errors associated with those accumulate, and that's unwanted error. In all-to-all architectures, you can run those two-qubit gate operations directly, one after another. And it's extremely important because of the less obvious thing: the fewer intermediate operations you do, the more time you can spend just executing two-qubit gates. It just means more time and actual resources are spent on your actual circuit application.
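As a back-of-the-envelope check of those numbers (my own arithmetic, assuming errors compound multiplicatively and ignoring crosstalk and idle errors):

```python
# Effective fidelity when one logical two-qubit interaction costs many
# physical gates. Illustrative only: assumes errors compound
# multiplicatively and ignores crosstalk and idle errors.
native_fidelity = 0.999    # three nines per two-qubit gate
gates_all_to_all = 1       # all-to-all: one native gate suffices
gates_with_swaps = 34      # nearest-neighbor routing: 11 swaps (3 gates
                           # each) plus the final two-qubit gate

print(f"all-to-all: {native_fidelity ** gates_all_to_all:.4f}")  # 0.9990
print(f"with swaps: {native_fidelity ** gates_with_swaps:.4f}")  # 0.9666
```

That 0.999^34 ≈ 0.967 is where the 96.7% figure above comes from: three nines per gate, but roughly one and a half nines per routed interaction.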
Now, this does depend on the actual structure of your algorithm, but all-to-all connectivity does give you an opportunity to optimize near term, and near term, that difference can be the difference between doing just general research versus achieving commercial advantage. Next slide. So, in addition to some of the near-term applications, longer term there is a strong interest and need for error correction. It's top of mind for many in the industry. Researchers are exploring a lot of different kinds of codes to enable that scaling. That goes from small block codes to topological codes, like surface codes or color codes, and even to newer, more efficient codes, like LDPC codes.
Now, what's clear from all of this is that the quantum computer architectures of the future need to be co-designed to take advantage of the overall code that you end up with. More pressing is that current-day hardware, software, and systems need to be architected to support exploration of that novel error correction work, right? There's a lot of research going on right now into developing these codes, as evidenced by the industry and academic excitement. Now, critical to this is being able to support remote operations; some of the most recent codes emerging today require long-distance links between qubits. And so let's jump to the next slide.
Right now, IonQ's architecture starts with that highly connected all-to-all connectivity, which allows long-range operations over a single chain extremely efficiently. In Tempo, we plan to employ our RMQA, our Reconfigurable Multi-Core Architecture, which allows connections between individual cores. And then in the future, we intend to use photonic interconnects to do that connectivity at scale. Now, because quantum error correction is still an active research area, it's key not to commit to a specific code. And so we're really focused on building a platform that facilitates meaningful exploration and experimentation, both theoretical and in hardware, so that we can stay nimble and learn as much as possible as we scale up. Okay, so let's shift to scale.
I want to talk about both scale and modularity as our overall second critical pillar. Now, scalability has been and continues to be our primary North Star, right? Modularity is the most important part of that. I'm gonna take a quick detour on the way to scale to first talk about barium. Recall that, starting with Tempo, our commercial systems are gonna be using barium as the computational qubit. Barium enables us to use visible-spectrum lasers. It allows us to leverage standard photonic technologies for higher levels of integration and better stability. It also has additional long-lived internal states in its atomic structure. Those give us lower state preparation and measurement errors. But the big advantage is that it provides higher fundamental native gate fidelity limits.
And so those are all part of our story as we move to AQ 64 and that progress. We're leveraging some of those properties of barium in our implementation of mid-circuit measurement and reset. That's also part of Tempo. Now, if you think about the progress we've been making in barium, one of the highlights is the work we've been doing. If you look on the right-hand side of the slide, you will see a picture of our 64 ions in a chain loaded into one of our barium development systems.
That photo was taken earlier this year as part of an internal imaging and readout demonstration that we performed in our barium test beds. That internal research and development work in barium is allowing us to leverage some of the underlying physics to scale and simplify the overall engineering. This goes back to making the right choices in the trade space; this is a clear example. And so that, if you go to the next slide, brings us to our North Star, and that's scalability, right? Modularity has been a very important factor in that scalability. Our overall architecture is based on local shuttling within modules, and then photonically interconnecting systems between modules.
Now, that networking technology enables us to stack these qubits and cores together. And so the modules can be built with modest complexity, cost, and form factor, and then interconnecting the modules enables design simplicity, manufacturability, reliability, and eventually economies of scale. Now, the milestone that we have planned for 2025 includes 100 physical qubits as we move beyond into our multi-core and photonically interconnected systems. We have a plan set out that is a staged approach, where we continually build up numbers of qubits by simply connecting these stages together. And so let's jump into how we plan to do that, how we go from 200 to 400 to 1,000 and beyond.
The photonic interconnects we've talked about in the past represent the key building block for that. The photonic interconnects support connections both between cores within a single physical quantum computer and between multiple systems; it's the same fundamental technology that both are built on. Now, the work I'm gonna talk about is collaborative across my entire team, but we believe that by pursuing this, it's going to be the primary, and a very feasible, mechanism to scale. It's been at the core of our work moving forward, and it will continue to be. It's also been demonstrated in the lab, and so right now we're taking it forward.
The first piece is ion–photon entanglement. Ion–photon entanglement is a milestone we completed earlier this year. It is all about demonstrating entanglement between an ion and a photon. This process involves three steps. We first generate an interconnect photon that is entangled with the interconnect qubit. The light from that single photon is collected and sent through fiber optics to a detection hub. That state is then detected, and ion–photon entanglement is confirmed. We achieved that, and the next step the team is working on is Milestone Two. Milestone Two is all about photon-mediated ion–ion entanglement.
Now, that expands on Milestone One by entangling two ion-based qubits from separate nodes using those entangled photons. To help achieve this, we're developing systems to collect those interconnect photons, that photon light collection I talked about, from two different nodes and combine them at a single detection hub, where the two photons interfere, leaving an entangled state between the two separate interconnect qubits at each node. That's the second milestone. Now, later this year, after Milestone Two, we're gonna be demonstrating Milestone Three. That is where we swap the ion–ion entanglement from the interconnect qubits into the QPU, using two-qubit swap gates. So the entanglement is first established on the interconnect qubits.
We believe this transfer occurs through just that simple swap gate, creating the two entangled QPUs. And then Milestone Four, if you move to the next slide, is all about being able to do this at scale, right? Now, this involves driving multiple collections across multiple different systems, handling timing, and doing the switching, and there are a lot of different topological architectures you can pick as you go through this. But we're working through both the technology development and the engineering to be able to make this happen, to get to this point.
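For intuition only, here is a toy state-vector sketch of the Milestone Three idea described above. This is my own illustration in plain NumPy, not IonQ's actual implementation: start with a Bell pair between two interconnect qubits (as the photon-mediated step would leave them), then a swap gate moves one half of the entanglement onto a computational qubit.

```python
import numpy as np

# Qubit ordering |q0 q1 q2>: q0 = interconnect qubit at node A,
# q1 = interconnect qubit at node B, q2 = a computational qubit at node B.

# The photon-mediated step leaves q0, q1 in a Bell pair (|00> + |11>)/sqrt(2):
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(bell, np.array([1, 0]))   # q2 starts in |0>

# SWAP gate on (q1, q2): exchanges the |01> and |10> basis states.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
U = np.kron(np.eye(2), SWAP)              # identity on q0, SWAP on (q1, q2)
state = U @ state

# The entanglement has moved: the state is now (|000> + |101>)/sqrt(2),
# i.e. q0 and q2 share the Bell pair while q1 is back in |0>.
print(np.round(state, 3))
```

The swap gate itself does nothing exotic here; it simply relabels which physical qubit holds each half of the already-established entangled state, which is why the transfer step can be "just that simple swap gate."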
This is going to show up in our systems beyond Tempo, in our future systems on our roadmap, but it represents core capabilities that allow us to scale to thousands of qubits. Now, let's jump to the next slide. I wanna switch gears to something that we have not really talked about before: one of the things needed for scale is thinking about size, weight, and power. And so in parallel to the photonic interconnect technology development, our research team is also thinking about what we need to do to miniaturize our vacuum packages to support the overall scale of our systems.
Vacuum is required in our trapped-ion quantum computers to maintain the overall chain. It allows us to manipulate those qubits isolated from the external environment and its influences, right? Within that vacuum package, we then use lasers to excite, cool, and do actual readout of the qubits. And right now, current practice and the current state of the art for trapped-ion systems is to augment vacuum with open-cycle, or in our case closed-cycle, cryostats to bring the pressure lower than what you can achieve with normal pumping technology.
We are currently working on full room-temperature trap technology with what we're calling our Extreme High Vacuum packages. This is vacuum that is about the vacuum you'd find on the surface of the Moon, lower than 10^-12 torr. It's going to allow us to maintain our overall vacuum for days. That is a core piece of our scaling story as we try to drive the overall form factor and size down. One of the cool parts of this is that it's not just the trap itself in the vacuum package that's being scaled; it's also the build-up of the manufacturing technology and the assembly capability that we have.
So what you see here is the development of manufacturing and novel capabilities to be able to assemble these systems in vacuum, so that when they are removed from the assembly and manufacturing chambers, you have a room temperature vacuum. So the idea of assembly inside of a vacuum to be able to create the overall Extreme High Vacuum systems is novel. We have a lot of technology development in terms of sealing technology, window technology, and other pieces that are going into this, and so that vacuum capability and that manufacturing capability are extremely exciting.
The overall work that we're doing is also going to need to be on-ramped into the larger engineering pipeline that we have, but it represents IonQ's overall thought process: forward-thinking, and leaning into what we think scale requires to solve not just what we're trying to do today, but what we're trying to do two years from now, five years from now, 10 years from now. With that, I want to pass it over to Margaret Arakawa. Margaret is our Chief Marketing Officer, and she's going to talk about our third pillar.
Thank you very much, Dean. I did wanna talk about enterprise grade, and Peter talked earlier about what that means. It really means that we're building computers that can be used by enterprises, academic institutions, and research institutions in their data centers. This is something that is very much a passion of mine. I worked at Microsoft for about 20 years, and what I loved about working there was this idea of democratizing IT and democratizing compute. That's what matters here: can you manufacture the computer so that it's deployable, and will you have applications that can be scaled and used? You can go to the next slide. One of the exciting things about joining IonQ in the last year is how much we've grown as far as our footprint.
As you can see, this is a great video shot overhead of our new Seattle, Washington, manufacturing facility. It is actually the first dedicated quantum computer manufacturing facility in the U.S., and it's over 100,000 sq ft. We initially had a smaller footprint, and we've expanded it, given the demand that we're excited about and the increased demand this last year. Historically, we started out in College Park, so we have a great headquarters there with a data center, and, as Dean talked about, a data center and manufacturing facility in Seattle. In Basel, Switzerland, we started a partnership in June, and we also have a data center there as part of our European outpost.
Another area of the world that we are in is Toronto, Canada. We acquired Entangled Networks at the end of 2022, and what's exciting about that is the early investment IonQ made in thinking not just about the quantum computing hardware, software, and application stack, but also about where we are going in quantum networking. So we're very excited about the integration of the Entangled Networks team. You can go to the next slide. So I wanted to talk about some of our customers, organizations, and institutions. This is a new announcement for us. The Navy and IonQ worked together to address an incredibly expensive problem for the Navy and the Department of Defense: corrosion, a $20.6 billion cost issue for the Navy.
You can see these Navy ships there, and the salt water is obviously not helping as far as corrosion goes; corrosion is a big problem. Because the metals the Navy uses are so prone to corrosion, they wanted to look at how to reduce the cost of upkeep for their ships. So the Navy and IonQ, together with some of our researchers, used our quantum computers to sample the quantum states of the molecular systems that actually drive the corrosion. When you read the scientific paper that was produced, it does read like a chemistry problem, and it is: it's about the molecular systems that affect corrosion. One of the things the Navy was very excited about was the all-to-all connectivity of IonQ systems.
Dean talked earlier about the difference between our IonQ trapped-ion systems and other systems, and that connectivity actually resulted in a best-in-class solution. They realized that quantum computing really can accelerate new corrosion inhibitors and abatement, where abatement is about addressing corrosion that has already happened. What's really exciting is that the Naval Research Lab has said that calculations that previously took them months can now be performed in hours on an IonQ quantum computer. So as you see the numbers there, the cost is very large, and among the initial results was a 55% reduction in total gates from initial circuit through compilation. Any time you can speed up the time to solution, the quicker you get to the heart of the cost issue.
So let's turn to another case study. This is something we've talked about before, but it's one of the most exciting ones. Can you hit the next slide? It's about cargo loading optimization. I know that kind of sounds like an ephemeral thing, but imagine Airbus with the thousands of planes that they've sold to airlines across the world. What they need to do, to help all the airlines as well as themselves, is deal with all these different sizes of cargo. The weights of the cargo are different. They have to be put in bins.
They have to be optimized in the cargo loading space, and imagine all the flights that have to take off, with the weights of the actual airplanes, and the cargo, and all the destinations, and how much it costs in fuel when you have extra cargo or the cargo is heavier. So this, again, is a different problem, but if you reduce the cost of cargo loading by 1% or 2%, that's billions of dollars in savings. So together with Airbus, we developed a quantum solution for formulating and constraining the problem. It's an optimization problem: how you actually get cargo loading optimized. This actually resulted in the largest variational optimization problem executed on a QPU in the world.
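To make the bin-packing flavor of this problem concrete, here is a minimal sketch. All of the numbers are invented for illustration, not Airbus data, and the brute-force search stands in for the variational quantum loop, which would instead prepare a parameterized circuit and sample assignments from a QPU:

```python
import itertools

# Hypothetical cargo weights (kg) and a per-bin capacity; illustrative only.
weights = [320, 180, 250, 140]
capacity = 500
n_items, n_bins = len(weights), 2

def cost(assignment):
    """Penalty-style objective: load imbalance plus capacity violations."""
    loads = [0] * n_bins
    for item, b in enumerate(assignment):
        loads[b] += weights[item]
    imbalance = max(loads) - min(loads)
    overflow = sum(max(0, load - capacity) for load in loads)
    return imbalance + 10 * overflow  # heavy penalty for exceeding capacity

# Exhaustive search over all item-to-bin assignments, in place of a
# variational optimizer; this toy size has only 2**4 candidates.
best = min(itertools.product(range(n_bins), repeat=n_items), key=cost)
print(best, cost(best))
```

A real formulation would encode the same penalty objective into qubit variables, which is why qubit count tracks problem size in the Airbus example.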
We're very excited about that, and it became a foundation for a platform for a lot of different quantum algorithm development in the variational space. In this particular example, we used only 28 qubits in the full optimization and were able to get a very, very good solution for an important business application, something that really was driving efficiencies. And what's interesting about how Airbus characterizes it: they don't look at it as just a cost issue. They also look at it as reducing the environmental impact of their products. So that's really exciting as far as how it can help with sustainability. The last case study that I wanted to talk about, if you go to the next slide, is about optimizing flight gate assignments with DESY.
Go to the next slide. This is something that all of us who travel a lot experience all the time. I was in Chicago O'Hare the other week, and I was literally running from one gate to the other, and my Apple Watch was blinking and asking me, "Are you working out?" And I thought, "I guess I am." It did seem like I was working out, because it was half a mile between the gate I was at and the gate where I needed to make my connection. Fortunately, we now have quantum computers trying to figure this out; it's a combinatorial problem.
It's an incredibly difficult problem because of how many planes, how many gates, and how many flight times there are, and whether the planes are on time or delayed. It's a mind-blowing problem, and the result for us, the human beings running to gates, is, "I missed my plane. I missed it by 2 minutes." What's great is that we collaborated with DESY to reduce connecting times by modeling gate occupancy, plane turnaround time, and the walking time of the people who actually have to get to these gates. We ran problems with 8 to 36 variables on IonQ Aria, and 12 of those problems were fully optimized. This is a unique cross-industry optimization problem, and it's something that you and I will see great results from once we actually help solve it.
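Gate assignment is a quadratic assignment problem: the cost couples pairs of decisions, because the walking distance between two gates only matters weighted by how many passengers connect between the flights parked there. A tiny sketch with made-up numbers (not DESY's data) shows the shape of the objective:

```python
import itertools

# Toy instance: 3 flights assigned to 3 gates. flow[i][j] is the number of
# passengers connecting from flight i to flight j; dist[a][b] is the walking
# distance between gates a and b. All numbers are invented for illustration.
flow = [[0, 30, 5],
        [30, 0, 10],
        [5, 10, 0]]
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]

def total_walk(assignment):
    """Quadratic assignment cost: passengers x distance, summed over pairs."""
    return sum(flow[i][j] * dist[assignment[i]][assignment[j]]
               for i in range(3) for j in range(3))

# Exhaustive search stands in for the quantum optimizer at this toy size.
best = min(itertools.permutations(range(3)), key=total_walk)
print(best, total_walk(best))
```

The heavily connecting flight pair (30 passengers) ends up at the closest pair of gates, which is exactly the structure the larger QPU runs exploit.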
18 qubits were used to solve the largest problems, and as you can see from the quote from Dr. Karl Jansen, he was very excited about the early results, because this, again, is a platform not just for flight gate assignments, but for all quadratic assignment problems. If you'd like to read a little bit more, we have these case studies, or at least the last two, on our website, and we are excited that The Wall Street Journal also did a deep dive on this particular work that we did with DESY. All right, on to the next slide. In addition to these three case studies, we wanted to show you the different quantum algorithms that we as a team are working on.
We have an applications and algorithms team, and the areas we work on are, as Peter said, some of the most important, groundbreaking areas. Once a quantum computer can actually address these, it will not only save lives, it will also save millions of dollars. He talked about chemistry, including battery chemistry, which, by the way, will also help with power consumption and sustainability. There's obviously drug discovery, which a lot of folks have talked about: making sure that you can actually model drugs and get them to approval faster. A lot of work has been done from a quantum machine learning and AI perspective, and a lot of folks ask us about that. We are focused very deeply on partnering with customers as well as institutions on how we can help with AI.
Quantum computing can do an incredibly good job on the parts of AI that even supercomputers can't address. I talked about optimizations. We have physics-based simulations, whether in computational fluid dynamics or in field theory. Energy grid modeling: we're working on that, and we announced it with Oak Ridge National Laboratory. Financial services has been well talked about, looking at portfolios and optimizing those portfolios. And cybersecurity: Apple just recently announced improvements in their security from a post-quantum cryptography perspective, increasing the security levels on iMessage, because quantum will eventually be able to break today's cryptography. So with that, I'm going to end with one last slide.
We end where we began. Not only do we have an amazing mission to solve the world's most complex problems, we are looking at how IonQ can enable commercial advantage by building the most performant, the most scalable, and the most enterprise-grade quantum computers in the industry. When we get all three legs of the stool, as Peter said, that's when we get commercial advantage. No one of these things alone will help companies and organizations succeed. You need all three.
Because you can have performance, but you also need to be able to scale: an innovative product roadmap, quantum networking to scale, and manufacturability and deployability, the ability to sit in a data center with an ever-smaller footprint. We believe that when you have all of that, you have the requirements that will move the quantum computing industry forward. Thank you very much for joining us. We really appreciate it. I think now we're going to turn it over for Q&A. I should point out, if you'd like to talk to anyone at IonQ, I don't think we'll be able to get to all the questions.
We'll try our best, but definitely go to this link, and you can get started talking to a solution specialist today. So I'll hand it over to Nikhil for the Q&A portion.
Thank you so much, Margaret, and thank you to Peter, Dean, and Margaret for a great presentation today, and talking about how we are thinking about and activating our core pillars of performance, scale, and enterprise-grade quantum computing. As promised, we are gonna take a few questions. We found a few topics that came through from the audience, so we are going to sort of paraphrase those and answer them. So this first one is for Dean, and it's around fidelity and error correction. So there has been quantum news and traction made with fidelity and error correction recently. What do you make of all of this? Dean, you might be on mute. There you go.
Yep, I was on mute, sorry. Thank you. So yes, there has definitely been a lot of news and traction, both academic and in industry. I would say all of that news and information should be celebrated as a win, right? There is a tremendous amount of progress being made, and it's a win for quantum computing as a whole, and especially trapped-ion technology. On the fidelity front, you saw-
... today that IonQ is targeting over three nines of fidelity in 2025, right? We're extremely encouraged by some of the extremely high-fidelity demonstrations in trapped ions that have been done. It adds credibility to our roadmap. Trapped ions are a superior qubit modality, and we believe that longer chains are the way to go, or will ultimately be more scalable and efficient. So we are planning to execute high fidelity across long chains. Now, there are technical reasons why high fidelities are easier to achieve in short chains, but I don't see any fundamental limitation to doing this in long chains, and that's our plan.
Now, on the error correction side, which I mentioned a little earlier: we have a highly connected architecture, and it's well suited to lots of different kinds of error-correcting codes. I believe the research you see coming out right now, a lot of the demonstrations trying to get logical gates just a little bit beyond native error rates, is promising and should be applauded. A lot of the error-correcting codes you see coming out are not unique to a single modality.
And so I would say that, as error correction research rapidly advances, the best approach is to stay nimble and see where it goes. And I think our architecture is really key to being able to stay nimble and do just that.
Thank you, Dean. We'll stay with you for the next two questions; they are relevant here. So the next one: one of the dimensions you say you are working on is enterprise grade. Can you expand on what that means exactly?
Yeah, Margaret did a great job; I'll add to it. The idea of enterprise grade is really about expanding both production and the operational readiness of our systems for customers. It means focusing not only on specs and capabilities, but also on the things that enterprise and commercial customers care about. It covers software, hardware, operations, and even regulatory compliance. Operational readiness and the deployability of those systems to data centers are all important. It really is about general commercial viability, and thinking about things like, for example, the cryostats that we have in our systems.
Closed-loop cryostats are a step above open-loop cryostats, which are in no way commercially viable at any kind of scale. So enterprise grade means focusing on the right technology and making the right choices. I talked about XHV, extreme high vacuum, for scale. That's an example of us thinking about what it means for a customer to be able to deploy these at scale, with high performance. Room-temperature operation falls under that same overall umbrella. And generally, modularity, being able to deploy into a data center, low power consumption, and even serviceability all fall within this general umbrella.
I would say, hands down, that this pillar is one of the reasons why IonQ is the leader in commercial quantum computing.
Thanks so much, Dean. I think we have time for one more question today. The final one is: how does IonQ think about error mitigation versus error correction?
Oh, that is a topic we've put a lot of thought into, and actually we're quite excited about it. When we say error mitigation, it is all about boosting performance in the current realm of NISQ, noisy intermediate-scale quantum computing, and NISQ applications. Best-in-class error mitigation is generally bespoke to the application you're implementing; what you're trying to do is drive up signal to noise. So there is a huge benefit in co-developing custom error mitigation techniques as a high-value, near-term effort to boost application performance for customers. It's about bringing additional value.
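One widely used error-mitigation technique is zero-noise extrapolation; this is a generic sketch of the idea, not a description of IonQ's production methods, and the noise model here is a simple invented exponential decay. You deliberately amplify the noise by known factors, measure the observable at each level, and extrapolate back to the zero-noise limit:

```python
import math

def noisy_expectation(scale, true_value=1.0, error_rate=0.05):
    # Stand-in for running the circuit with noise amplified by `scale`;
    # here we model a simple exponential decay of the signal.
    return true_value * math.exp(-error_rate * scale)

scales = [1.0, 2.0, 3.0]
values = [noisy_expectation(s) for s in scales]

# Least-squares linear fit of value vs. noise scale, evaluated at scale = 0.
n = len(scales)
mean_x = sum(scales) / n
mean_y = sum(values) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(scales, values))
         / sum((x - mean_x) ** 2 for x in scales))
mitigated = mean_y - slope * mean_x  # intercept at zero noise
print(round(mitigated, 4))
```

The extrapolated value lands much closer to the ideal expectation than the raw measurement at native noise, at the cost of extra circuit executions; that trade of shots for accuracy is typical of mitigation generally.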
And so error mitigation will continue to have a place in all these architectures, and it's complemented by error correction. Error correction is a totally different kind of beast. It's a different way of running applications, almost like analog versus digital in computers. When you think about error correction, you have to digitize the problem, and that digitization often involves a huge overhead. You see this showing up in the number of physical qubits required for every single logical qubit you get, and those numbers vary widely depending on the error correction code and the locality of the code that I talked about before.
But error correction buys you the ability to run effective circuits that are far deeper than any bare-metal hardware would be able to support natively. Right now, full-scale error correction, with all of the primitives it requires, is very much a research project. There is a lot of advancement on the research side, and the community is actively exploring it; I mentioned earlier the number of papers still actively coming up with new ideas, and you're even seeing reductions in those overheads over time. So I would say we have error mitigation as one investment, and error correction as a parallel set of investments.
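The physical-to-logical overhead can be made concrete with the simplest possible code, a 3-qubit repetition code. This is purely illustrative: real fault-tolerant codes protect against both bit- and phase-flip errors and need far more qubits per logical qubit, but even this toy shows three noisy bits yielding one much more reliable bit:

```python
import random

def encode(bit):
    return [bit] * 3  # one logical bit -> three physical bits

def apply_noise(codeword, p):
    # Flip each bit independently with probability p.
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    return int(sum(codeword) >= 2)  # majority vote corrects any single flip

random.seed(0)
p = 0.05
trials = 10_000
raw_errors = sum(apply_noise([0], p)[0] for _ in range(trials))
logical_errors = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))
print(raw_errors / trials, logical_errors / trials)
```

The logical error rate scales roughly as 3p^2, well below the physical rate p when p is small, which is the basic bargain of error correction: more physical qubits in exchange for exponentially better reliability.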
At IonQ, we're investing in a flexible architecture that allows us to look at what's best out there and use that in our systems. At the end of the day, we want to drive that error correction to be production-ready. So hopefully that answered the question, but yeah, that was fun.
Thank you, Dean. I believe we are gonna take one more question, for Margaret. Can you comment on the power needs of quantum? I've read a lot about the huge power needs of AI. Is that an issue for quantum?
Great question. I just saw that come up as one of the questions. It is incredible the amount of attention that AI is getting, and we're all excited about that. One thing The Wall Street Journal, as well as Goldman Sachs, has noted is that AI is a very power-hungry area: data centers are now going to need 160% more power than previously estimated, simply because of the AI workload. As I said about our applications, we do focus on QML and AI, and we do know that quantum will be able to take a portion of certain AI problems, like large language models, and run that portion on quantum.
One thing we've noticed from a power perspective: as Dean said, we're going to get to AQ 64 at the end of next year, and at AQ 64 we'll be using about 18 kW of power. We also do simulations on NVIDIA chips, because you need the classical side to see if you can simulate what the quantum computer is doing. By the time we get to AQ 64, we'll have passed our ability to simulate a particular problem classically. So we looked at the power it would take if we did try, setting aside whether it would actually answer the question: our AQ 64 systems run on 18 kW.
A comparable classical system would take 106 million kW. So that's a point coming up very soon, within the next 18 months, that we all have to look at and research: whether quantum, because of its low power footprint, and because IonQ, as Dean says, is focused on enterprise-grade and getting to room temperature, will be able to address some of those problems in the future. So, thanks very much.
Thank you so much, Margaret, and Dean, and Peter, for the great presentation. Thank you to our audience for joining and spending time with us today. This will be recorded, and the slides will be posted on demand in July. We will notify you when they are available. It should be within a week, give or take. Thank you so much again, and that's the show for today, so we will see you next time.