Hello, everyone. Welcome to the XPO webinar. Great to have you join us. We will kick off the webinar with a short introductory video of the what and why of the XPO. A bit of a spoiler alert, the video does feature me. After the introductory video, I'll turn it to Andy, who will provide the latest and greatest on the XPO MSA front and the excitement that we saw at OFC a couple of weeks ago. We'll follow Andy's presentation with presentations from our partners, Ryan from TeraHop, Helen Xenos from Ciena, and Sam from Amphenol. TeraHop, Ciena, and Amphenol are co-chairs of the XPO MSA, and we are super happy to have our partner presenters as part of this webinar. With that intro, let's kick it off with the video roll.
The unrelenting demand for AI training and inference workloads is pushing data center infrastructure to its absolute physical limits. 10 years ago, the OSFP module revolutionized networking, becoming the most successful form factor in history. OSFP continues to be the workhorse today that underpins the massive AI build-outs. However, as we look ahead and as AI clusters scale to hundreds of thousands of GPUs, traditional optics are not optimal to meet the unprecedented demands for bandwidth, density, cooling, and reliability. We just can't build bigger. We have to build smarter. Enter XPO, eXtra-dense Pluggable Optics. XPO reinvents the pluggable form factor, delivering a staggering 12.8 Tbps per module. By packing 16 modules into a single rack unit, XPO achieves 204.8 Tb of front panel bandwidth.
That's a 4x density improvement over OSFP, allowing operators to shrink their network switch footprint by a massive 75%, potentially saving billions of dollars in infrastructure costs. XPO is designed specifically for AI infrastructure networking, including scale-up, scale-out, scale-across, and metro-reach fabrics. XPO is a universal form factor and supports all industry optics standards (DR, FR, LR, SR, ZR+), as well as copper, next-generation slow-and-wide optics, and RF microwave. XPO is also flexible and supports linear, half-retimed, or fully retimed interface architectures. XPO supports linear pluggable optics (LPO) without power-hungry DSPs, enabling the lowest-power interconnect option for data centers. AI networks are evolving from air-cooled systems to liquid-cooled systems as switch density and capacity increase.
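The density claims above are simple arithmetic; here is a minimal sketch, assuming the 32-module, 51.2 T OSFP 1U baseline quoted later in the webinar:

```python
# Per-1U front-panel bandwidth: XPO vs. OSFP, using numbers quoted in the talk.
xpo_ru_tbps = 12.8 * 16    # 16 XPO modules x 12.8 Tb/s = 204.8 Tb/s per 1U
osfp_ru_tbps = 1.6 * 32    # 32 OSFP modules x 1.6 Tb/s = 51.2 Tb/s per 1U

density_gain = xpo_ru_tbps / osfp_ru_tbps   # 4x density improvement
footprint_saving = 1 - 1 / density_gain     # 75% smaller switch footprint

print(xpo_ru_tbps, density_gain, footprint_saving)  # 204.8 4.0 0.75
```

The 75% footprint figure is just the flip side of the 4x density: the same bandwidth fits in a quarter of the rack units.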
XPO is liquid first, featuring a natively integrated cold plate shared by two 32-channel paddle cards in a highly efficient belly-to-belly configuration. This keeps component temperatures 20-25 degrees Celsius lower than air-cooled modules, easily managing even high-powered ZR+ optics up to 400 W per module. These lower temperatures, combined with the 75% reduction in internal components, deliver a significant improvement in system-level reliability because the most reliable components are, of course, the ones that don't exist. Despite its compact faceplate, XPO's internal circuit board area is equivalent to 8 OSFP modules. This is a significant advantage, as this means that XPO can use existing 8-channel silicon photonics, eliminating the wait for any new chip development. At the same time, the paddle card design approach with 32 channels enables highly efficient next-generation designs with higher levels of photonic integration.
The success of any new standard relies on an open multi-vendor ecosystem. XPO is launching with over 45 partners. We are very grateful for the enthusiastic support from multiple leading optical vendors and technology providers. Density, liquid cooling, reliability. Welcome to XPO, the next generation of pluggable optics built for AI.
Thanks, Vijay. It has now been a little more than two weeks since we launched XPO and the XPO MSA at OFC. We will share some pictures here with you in case you didn't have a chance to visit the show. Let me say up front that both the response and the number of companies that have since joined the MSA have been far beyond our expectations. More than 100 companies have now signed up for the XPO MSA, and we're still getting more sign-ups every day. I wanted to talk about some of the highlights here; you will hear later from TeraHop and Ciena in much more detail about what they were showing there. A quick update.
You know, I would say everybody understands the benefits of the four times increase in density, the liquid cooling, the higher reliability, and the reduced rack space. It is a totally natural fit for AI data centers. There is, I should say, great interest in all kinds of XPO modules. While our focus was to get the linear channel to work, people also like LRO and fully retimed, slow-and-wide, RF microwave, and whatever comes next. There is particularly a lot of interest in Coherent-Lite and full ZR optics in this form factor, which is perhaps surprisingly a natural fit, because you can build these modules with fixed lasers that are much more reliable than the tunable variety. Finally, as I mentioned, lots of interest in the MSA and so on.
Here are some pictures, starting with the Arista booth. We had demos set up for fully retimed, half-retimed, and linear. Yeah, that's a close-up on the screen. The box you're seeing here is a test vehicle which we built with a Condor Cirrus test chip that has 64 channels. The test chip is inside the box, connected with fiber cable to the XPO. We're demonstrating a variety of optics modules here, and they all performed really great. This is a close-up on that box. You can see the flyover cable connecting to the chip, which has a conventional heat sink on the left. The XPO itself is, of course, liquid-cooled.
I like the tagline that pluggables have never been this cool. The booth was busy nonstop; people wanted to see this. It was certainly our main feature at the show. Now, walking around the show floor, there were at least six partner companies. Let me start with TeraHop, whose results we'll talk about a little later in this presentation. They showed a fully retimed 12.8 Tb XPO with basically flawless bit error rates. YoctoLink had one; I forget whether it was linear or half-retimed, I apologize. Coherent showed a module on their side. Aperion showed one. Linktel had a module, and Molex is obviously known as a connector company, but they will make modules as well.
Luxshare built a whole mock-up of a 204.8 T switch with all the cabling, inside and outside. MultiLane is a company that makes test equipment for XPO. Nextest is another test company, and there were many more; I apologize if we couldn't take enough pictures. I wanted to express a sincere thank you to all the participants at OFC that demoed the XPO: live tests, connectors, cold plates, static demos, and so on. It was really a great way to launch this product. Now let me go back to what we're really trying to accomplish here. As you know, the amount of bandwidth in AI data centers is exploding rapidly; apparently it is doubling year-over-year.
This is because higher bandwidth per GPU improves training times and the overall efficiency of these data centers. The growth of bandwidth is actually faster than even the growth in the number of AI chips. The column on the left would be last year's chip, with, call it, an Ethernet-like scale-out interface and 12.8 Tb of scale-up. The center column is this year's model, or later this year's model, at 25.6 Tb of scale-up and 1.6 Tb of scale-out. The one on the right would be maybe a 2028 chip, at 102.4 T of scale-up and 3.2 T of scale-out.
Of course, the number of GPUs in these data centers is also growing. Today's data centers are measured in hundreds of thousands of GPUs, and there are some in design that go to one million or more. The number of optics, expressed in terms of 600 gig equivalent units per data center, is going up a lot: from millions today to possibly 100 million over the next three years. It is very hard to even comprehend what these numbers mean. Obviously, the ability to ramp optics to these high volumes in a predictable fashion is absolutely key. Going back to the fundamental requirements here: number one, and maybe it's number one, two, and three, is reliability. Today's optics fail too often.
You know, we need an order-of-magnitude improvement in the failure rates. Power efficiency is always very important, and every technology that improves it is highly welcome. Rack density is important because, in total, these switch racks just occupy too much space. Then, on top of this, there is the ability to scale to very high volumes. The one thing that may not be as obvious, especially at an optics show, is that there are other technologies, including active and passive copper, RF microwave, and VCSELs, that may or will play a very important role in these scale-up clusters. Then there are, of course, the single-mode optics: DR8, FR4, LR4, Coherent-Lite, and ZR for scale-across.
There are eight things on this table here, and there may be more in reality, and they all have their own place in AI data centers in terms of their reach, their power efficiency, their reliability, and their cost, right? The one huge advantage of pluggable optics is that it can accommodate any one of these technologies. Obviously, today the pluggable optics market is dominated by the OSFP, which has been an incredible success. It is expected that over 100 million units will ship this calendar year, a combination of 400 gig, 800 gig, and 1.6 T. It supports all known optics standards and use cases, from copper cables to ZR and future RF.
It has a very good thermal envelope for what it is, 30 W to 40 W with air cooling, and it has what was considered ample front panel density of 32 OSFPs per 1U, which is equivalent to 51.2 T. The problem is that it's not enough. The switch chips are going from 51.2 to 102.4 to 204.8 T, people want them in the densest form factor possible, and the power of these systems really needs to be optimized for liquid cooling. This is where the XPO comes in. It's the equivalent of eight to twelve of today's pluggables wrapped into one module, and with the liquid cold plate, you can take away the heat up to the highest-power ZR modules.
At the system level, as you've seen in the video, that's 204.8 T in 1U, whereas today it would be a 4U box with OSFP. Now, let's talk about what this means at the level of a data center. Today, for a future GPU later this year with 25.6 T of scale-up, you would need 8 OSFP switch racks just to tie this all together, which surprisingly is more switch racks than GPU racks, and that doesn't make a lot of sense. With XPO, that would shrink to two switch racks, for a total of six, half as many racks as in the last picture. At the level of a large-scale data center, this is a customer design point of a 400 MW data center with 128,000 XPUs, each roughly 3 kW.
The blue squares are the XPU racks; there are 1,024 of them, each with 128 XPUs. The red or purple dots are the 1,400 OSFP switch racks that you would need to interconnect this. This particular example is 12.8 Tb of scale-up, half the bandwidth of the last picture, and 1.6 T of scale-out. Even with the lower IO bandwidth here, it is just an enormous number of switch racks. With XPO, the picture looks like this, right? You're saving more than 1,000 switch racks, almost half the floor space in this data center. You could build the building at half the size. The densification is tremendous. Reducing switch racks, which don't contribute to revenue, is a large saving in structural cost.
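As a rough consistency check on the slide's numbers, here is a sketch of the rack and power arithmetic, treating the quoted "128,000" device count as a round figure for 1,024 racks times 128 devices per rack:

```python
racks = 1024
per_rack = 128
kw_each = 3.0  # roughly 3 kW per device, as quoted in the talk

devices = racks * per_rack          # 131,072, i.e. ~128,000 as a round figure
total_mw = devices * kw_each / 1e3  # ~393 MW, consistent with a 400 MW design point

print(devices, total_mw)  # 131072 393.216
```

The ~393 MW of compute power lines up with the stated 400 MW facility once switch, cooling, and other overheads are added on top.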
You know, the rack itself, the bus bars, the manifolds, the plumbing. Reducing the data center footprint by half is always welcome. Being able to reduce the wire length in these scale-up cases to a much shorter distance is also very important for the technologies that have limited reach. One can also use the density to put much denser switches and routers in a single rack using copper backplanes. Last but not least, it is just a huge amount of savings. Let me talk quickly about what's happening with the XPO MSA, which now owns the XPO specification. We transferred our previous draft specification into the MSA. As I mentioned, over 100 companies have signed up, including the world's leading module and system vendors.
The first task for the MSA is to publish version 1.0 of the specification. We have kicked off a 60-day review process with all the members. I should mention that the spec is actually largely complete; the early partners who built the modules you saw at OFC have done an extensive review of all the issues over the last six to nine months. Still, let's make sure it's perfect, and more eyes are better than fewer eyes. If you'd like to participate or get involved, please join the MSA by sending email to info@xpomsa.com. I have permission from the optics module vendors to show their logos, so here are the first 20 XPO MSA module partners that have signed up.
There are a few more that have joined since, but you can see the logos of the world's largest optics module vendors all on one page. We're very grateful that everybody joined and is interested in this new opportunity. In summary, the XPO really solves a lot of pain points with the status quo in optics: the much higher density, the liquid cooling, the higher reliability, and being able to get to the lowest-power linear interfaces. Reducing structural costs is a major benefit. Multi-technology support is an absolute key in my mind for scale-up, scale-out, and scale-across; again, there is no single optics technology that solves all problems. Pluggable optics are really wonderful for optics innovation because you can support whatever is available today and whatever comes next. With that, thank you very much, and I will hand it back over to Vijay.
Thank you, Andy, for the nice update on the XPO MSA. For the next part of this webinar, we turn to our partner presenters from TeraHop, Ciena and Amphenol. To start with, let me turn it to Ryan from TeraHop to give an overview of TeraHop's perspectives on XPO. Over to you, Ryan.
Thank you, Vijay. Thank you, everybody. My name is Ryan, from TeraHop, and I'm very happy to be part of this webinar to discuss XPO for AI scaling.
Looking at the challenges ahead of us, optics have now become indispensable connectivity solutions, but we face challenges as we continue to scale AI infrastructure. Higher capacity is badly needed: looking at each GPU, we need on the order of 10 Tbps for scale-up, which is 10x more than the scale-out capacity we usually support. Higher density is required: switch capacity is now at 100 Tbps, and we will quickly see 200 Tbps per switch. Faceplate density is one of the bottlenecks we need to address. Power is a very critical issue for AI infrastructure; compared to the GPU power, the optical connectivity needs to be a small and shrinking fraction of the compute power. Reliability is also critical.
We are looking at both reliability and availability, and supply resiliency is also very critical when you're scaling things up quickly across the board. That brings us to fast time to volume: in this kind of fast-paced AI infrastructure build, there's no time for a traditional slow ramp. When a new technology hits, we have to be able to manufacture and ramp up quickly. On another axis, we need to support a full range of reaches for AI infrastructure: from 100 m for scale-up, to the couple-of-km mid-range for scale-out, all the way to 10-80+ km for scale-across. It's a very large range to cover. At the end of the day, we also need to provide solutions that are easy to service, easy to repair, and easy to replace.
This is ultimately about enhancing and optimizing the uptime of very high-cost compute resources. Let's look at how XPO addresses these key challenges. From the get-go, XPO starts at 12.8 Tbps. This is 8x the newest product currently in the market, the 1.6 T OSFP form factor, and we see the potential to continue up to 25.6 T as the next step in the evolution. From the density point of view, XPO supports a 200 Tbps switch within a 1U format, representing a 4x faceplate density improvement compared to OSFP.
Now, a combination of co-packaged copper, which we call CPC, plus an improved XPO connector system enables low-power LPO and half-retimed optics within the XPO form factor. Looking at reliability: because of the integration, XPO has a much reduced component count per gigabit per second, and we are integrating the liquid cooling right inside the module. This lowers component temperatures, and a higher degree of integrated silicon photonics is now embodied inside a 12.8 Tbps module. All these factors drive higher reliability at both the module and the system level. We discussed time to market and time to volume: we're using mature 1.6 T silicon photonics with chiplet bonding embedded in the design of XPO.
This really allows us to have fast time to volume and fast time to ship, so customers can use this high-capacity XPO on a much accelerated timeframe. XPO supports a wide range of IMDD as well as coherent optics, with or without a DSP, to cover the full range of reaches and applications for AI scaling. This is embedded in the design, from a low-power version up to a high-power version at 400 W, allowing significant technology diversity inside the XPO form factor to cover the full range of AI scaling applications. At the same time, we carry forward full pluggability to support easy service, easy repair, and easy replacement: all the traditional pluggable optics advantages the industry has enjoyed over the last two decades.
Lastly, XPO is a multi-source agreement backed by industry-leading optical suppliers and supported by over 100 companies; it's a well-established ecosystem behind this new form factor. As TeraHop enters the XPO space, we, as one of the industry's leading optical suppliers, demonstrated a full 12.8 T DR8 fully retimed XPO module at OFC. It is built on our mature silicon photonics, and this fully retimed version demonstrated robust link performance. The eye diagrams show the full 64 channels, and the full 64-channel link performance demonstrated a nearly error-free pre-FEC BER floor. Also, as we discussed earlier, we integrated liquid cooling, which allowed us to observe much cooler DSP and component temperatures.
This is going to be a good advantage down the road for demonstrating the reliability of this high-capacity module. Let's look at the potential for a low-power XPO. The diagram here shows that if we combine co-packaged copper with a low-loss twinax cable and the improved XPO connector, we can greatly improve the SI between the ASIC and the XPO optics. On the right side, I'm showing one example where the CPC can be implemented as an integrated copper attachment to the switch ASIC. This enables a potential low-power XPO. If we can keep the SI loss within a reasonable range, and here I'm setting a target of 18 dB, then LPO can be enabled with much higher power efficiency. At just 6 pJ/bit, the module power can be below 80 W total. If we use a half-retimed model, we can demonstrate less than 10 pJ/bit, with a total power of less than 130 W. Silicon photonics is now becoming mainstream for optics, and it is going to be a very important foundational technology to support high lane counts, such as the 64-lane times 200 G XPO. As an example, TeraHop has shipped 15 million silicon photonics-based 400 G, 800 G, and 1.6 T optical transceivers in the last couple of years, accumulating 70 billion device hours in the field with proven reliability. This sets a strong foundation for us to support the 64-lane XPO in the next phase of NPI and high-volume ramp.
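The power figures quoted above follow directly from energy per bit times line rate; a minimal sketch of that arithmetic:

```python
def module_power_w(pj_per_bit: float, gbps: int = 12800) -> float:
    """Module power in watts: energy per bit (pJ) x line rate (Gb/s) / 1000."""
    return pj_per_bit * gbps / 1000

print(module_power_w(6))   # 76.8 W at 6 pJ/bit, below the stated 80 W target
print(module_power_w(10))  # 128.0 W at 10 pJ/bit, below the stated 130 W
```

The pJ-per-bit units cancel neatly: 1 pJ/bit at 1 Tb/s is exactly 1 W, so a 12.8 Tbps module at 6 pJ/bit lands at 76.8 W before any fixed overheads.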
Continuing to look at XPO evolution, we anticipate that XPO can keep taking advantage of silicon photonics integration levels. At the beginning, we are using mature 8 x 200 G, 1.6 T silicon photonic chipsets. Going forward, we can continue to evolve to higher degrees of silicon photonics integration. By reducing the BOM item count, we can enjoy lower cost and higher reliability as the silicon photonics integrated circuits continue to evolve. On another dimension, we foresee the capability to upgrade XPO to potentially support 400 G per lane; when silicon photonics or hybrid silicon photonics can support 400 G per lane, the module itself can be upgraded.
At the same time, the XPO connector can be upgraded to support 400 G per lane. As a result, we anticipate a 25.6 T per module capacity down the road. In summary, we see that the 12.8 T XPO can now support scale-up, scale-out, and scale-across, building on the last three generations of OSFP. We see a combination of different technologies embedded inside a pluggable form factor, from 400 G, 800 G, to 1.6 T, covering shorter-reach scale-up to mid-reach scale-out, and then coherent technology to support scale-across. We are now at 8x the capacity and 4x the faceplate density.
We continue to support this diversity of technologies: scale-up on the short-reach side, scale-out in the mid-range, and scale-across with coherent technology, so we can continue to support large-scale cluster build-outs in AI infrastructure. We anticipate that coherent technology will continue encroaching inside the data center. Moving forward to 25.6 T, we anticipate that the light blue coherent technology on the chart will start to play a bigger role in the overall build-out. XPO sets the foundation for the next couple of generational evolutions. Thank you very much.
Thank you, Ryan. That was excellent. The points that really hit home are the time to market and time to volume, leveraging the capacity and the investments in the OSFP ecosystem so they transfer to the XPO ecosystem, as well as having a form factor that takes it to a higher level of photonic and DSP integration, with a path to 25.6 T. Thanks for covering all those aspects.
Really great presentation. Really good. Thank you.
Thank you. Thank you very much.
Next up, we have Helen Xenos from Ciena, who will share Ciena's perspective on XPO. Over to you, Helen.
Thank you, Vijay. I'm Helen Xenos, and welcome to the session. I'm really excited to be part of this webinar introducing XPO to the world. Today I'll be talking about and providing details on Ciena's value and contributions as part of the XPO development. To get started, similar to Ryan, I'm going to talk a little bit about where we are today from an industry perspective and some of the limitations or constraints that we're facing. I would say power and density are becoming new limits to scale. From a switch ASIC perspective, capacity is growing from 51.2 Tbps to 102.4 Tbps to 204.8 Tbps, while at the same time we are constrained to 32 OSFPs per rack unit or open rack unit.
We're being asked to push more and more capacity into pluggables, but even as we evolve to new CMOS technologies and new modulator technologies, we are reaching the limits of air cooling. XPO offers a practical solution to these challenges for technology evolution, with higher density than OSFP, higher reliability with liquid cooling, and the steps toward a robust multi-vendor ecosystem with the XPO MSA. I'm going to start by briefly highlighting Ciena's core competencies and how we are contributing here. We're looking at XPO not just as a single point solution, but across the areas of components, modules, and system design.
From a Ciena perspective, we have deep expertise in electro-optics and coherent DSP, high-speed SerDes, and high-bandwidth analog-to-digital and digital-to-analog converters, as well as advanced packaging, such as hybrid integration. Today, we deliver leading coherent pluggable transceivers at both 400 gig and 800 gig rates, used for scale-across and DCI applications. We also recently announced an advanced co-packaged optical engine solution that operates at 6.4 T. Advanced packaging is also important from a module perspective, to be able to fit everything in. From a systems perspective, we bring full optical networking expertise.
This includes not only the signal propagation, understanding, and link engineering, but also the management of the pluggables and innovations like direct-to-plug liquid cooling, where we have been leading efforts. To close off this part, from a Ciena perspective, we are contributing to help develop and provide design specifications for XPO solutions that address connectivity requirements anywhere from tens of meters to thousands of kilometers, like Ryan talked about. Now for some more specific details on Ciena's role as part of the XPO MSA. Again, it's about designing and providing guidance on design specs for broad use-case applicability, anywhere from scale-up to scale-across and beyond, including long-haul applications as well. The physical form factor is designed for maximum capacity and density, while also ensuring that it can be managed.
To this end, we are ensuring standard management interfaces aligned with OIF CMIS, where we have close collaboration and deep involvement with the OIF. Ciena has also been pioneering and actually leading the efforts toward direct-to-plug liquid cooling solutions. We're in the process of standardizing the Mini-QD connector, the type of connector used here with XPO, at OCP. Here I have a video. What do people care about most when they hear about liquid cooling? We need to prevent leaks, and that's exactly what the Mini-QD connector is designed to do. Here's a quick video that shows it's really designed for dripless, reliable operation with liquid-cooled plugs. And here's an example of a mechanical sample that we had at OFC.
This is a 12.8 T linear-drive XPO example. It includes an integrated 6.4 T optical engine per PCB. Each optical engine supports 32 x 200 gig lanes, and we have two PCBs, so you get the full 12.8 T of capacity. A half-retimed design is also possible, and that provides additional margin. You can see there's an onboard laser source, and we're using MPO connectors. This type of design would provide up to 2 km of reach. In the middle, you can see what the liquid cooling cold plate looks like; it sits between the two PCBs as part of the XPO.
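The lane arithmetic for this sample works out as follows; a quick sketch using only the numbers stated above:

```python
lanes_per_engine = 32
gbps_per_lane = 200
pcbs = 2

engine_tbps = lanes_per_engine * gbps_per_lane / 1000  # 6.4 T per optical engine / PCB
module_tbps = engine_tbps * pcbs                       # 12.8 T for the full XPO

print(engine_tbps, module_tbps)  # 6.4 12.8
```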
We talked about linear drive, but with XPO and the 400 W power budget, there is improved thermal design flexibility, so it's very suitable for high-performance designs and coherent technology designs. Looking at where we are in the industry, the OIF 1600 G transmission modes that are being developed are a good intercept point for XPO. Here I'm showing some of the specs for 1600ZR+ and 1600 Coherent-Lite, which is still under development. Direct-to-plug liquid cooling provides a lot of benefits that tie to reliability; Ryan mentioned this as well. Specifically, you can now design targeted cooling for specific hotspots on the XPO.
The thermal stability really facilitates ultra-high-bandwidth analog designs. At the end of the day, you're going to get better performance and better yield. These are all great benefits that also improve reliability. We also eliminate the vibration from air cooling, which is another benefit. Here's an example of another mechanical sample that we had at OFC: a 12.8 T Coherent-Lite design. On each of the PCBs, we can put two 3.2 T Coherent-Lite engines; multiply that by two PCBs and you get the full 12.8 T design.
Of course, photonic integration is critical here to achieve the density that's needed. DFB lasers are used, and it's an open design, which helps to keep it simple and manufacturable. Then we see LC connectors, which maintains the existing operational models we have today. For each of these 3.2 T engines, one thing you'll see is that we would be supporting two 1600 G Coherent-Lite channels per engine. We are also assessing, of course, exactly how the mechanical fit of all the components is going to work in the XPO for coherent ZR and ZR+ types of designs. How would you deploy the solution?
That's what I'm showing here on the slide. You can see it's very consistent with today's operational model from a system management perspective. It's another IP-over-DWDM type of architecture: the coherent plug plugs into the router, fibers connect directly to the channel mux/demux and go across the DWDM line system, and it's managed with the very sophisticated multi-vendor, multi-layer management solutions available on the market today. What's different is that each XPO here fills up half of the C-band or half of the L-band. With four XPOs, you basically fill the whole 9,600 GHz of spectrum, the whole C+L band, with 51.2 Tb of capacity.
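A quick sketch of the spectrum-fill arithmetic quoted above; note the implied spectral-efficiency figure at the end is my own derived number, not something stated in the talk:

```python
xpo_tbps = 12.8
plugs = 4            # each XPO fills half of the C-band or half of the L-band
c_plus_l_ghz = 9600  # full C+L spectrum quoted in the talk

total_tbps = xpo_tbps * plugs                   # 51.2 Tb/s across the full C+L band
bits_per_hz = total_tbps * 1000 / c_plus_l_ghz  # ~5.33 b/s/Hz implied efficiency

print(total_tbps, round(bits_per_hz, 2))  # 51.2 5.33
```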
Just to close this off, XPO is not just another new form factor; it's really about enabling scalable architectures as we move forward. It allows for scaling of not just routers but also pluggables. It supports a standard, flexible, liquid-cooled 12.8 Tb pluggable with a robust multi-vendor ecosystem, and it enables higher-density routers. The flexible thermal envelope, as I have shown, enables flexible design options in the XPO to address a wide range of applications: scale-up, scale-out, scale-across, and beyond. The liquid cooling aspect plays an important role here for improved reliability and improved performance as well. Ciena is actively working on multiple XPO designs to address a range of applications. That's it for me. Over to you, Vijay.
Thank you, Helen. That was excellent. Thanks for highlighting the lean into the coherent side and how XPO provides a great vehicle for coherent optics, from Coherent-Lite to ZR+. Also thanks for highlighting the reliability benefits of liquid cooling. Extra credit for throwing a Boltzmann constant into your equation. Well done.
Thank you.
Next up we have Sam from Amphenol. Sam, whenever you're ready, take it away.
Thank you, Vijay. I'm really excited to be part of this webinar and, of course, excited to be part of the entire XPO MSA process. We're going to talk a little bit about Amphenol's perspective on this interface, covering everything from the connector interface to the channel implementation, all the way through to some of the transceiver options. At the heart of this XPO effort, we really believe that XPO is all about optics innovation and maintaining the long-standing legacy of pluggable modules inside the data center. Everybody wants to talk about the options we have for optical connectivity in the data center, and following OFC 2026, that likely means positioning a solution like XPO against a co-packaged optics solution, CPO.
Now I want to talk about XPO in the context of the known advantages of CPO rather than weighing the merits of co-packaging versus pluggables. We recognize that CPO implementations offer flexibility for lower-loss channels. You don't have to route all the way to the faceplate or deal with managing the complex routing of copper cables to the faceplate. Certainly, transitioning to optics provides longer reach for fabrics compared to copper alternatives, and in the scale-up frame, that's a huge advantage that optics offers. Now, as compute fabrics continue to scale, solving the density problem in one way or another is going to be critical. With that in mind, we've looked at the following items as really the key goals for XPO.
Now, cabled hosts are not only becoming more favorable in terms of performance, but also more common for appliance architectures. Moving to near-package copper or co-packaged copper, as shown in the illustration here, is not only becoming more practical, but also more realizable at the scale and density that's going to be needed for future switch appliances. Preserving the pluggable ecosystem supports easy scalability to all sorts of optical technologies, which gives it favorable advantages over some of the limitations of CPO implementations today.
The XPO form factor was really designed to take advantage of the entire usable 1U faceplate area, to help match the density not only supported by CPO implementations, but to really help maintain alignment with today's switch ASIC bandwidth. In terms of electrical performance, when we look at pluggable modules, it's a good idea to start by comparing XPO against the de facto standard for 1.6T today, which would be the OSFP module. If we look at a cabled implementation and focus largely on the bandwidth between DC and 70 GHz, which is primarily the range of interest for interfacing at 200 Gb/s per lane through the IEEE or OIF standards, we can see that the bandwidth is largely equivalent.
At 53 GHz, both interfaces offer roughly one dB of insertion loss, and the return loss is well managed in this region as well, since XPO is a newer interface that takes advantage of tighter tolerance loops and reduces the electrical stubs and mechanical imperfections that might have existed had it maintained compatibility with an older interface. When we push that frequency range out beyond 70 GHz, we see that the XPO solution actually provides more usable bandwidth and better mitigation of discontinuities at the interface. That's largely due to the improved pad geometries and mechanical tolerance loop stack-up, as well as the better interface that comes from removing the need for a surface-mount attachment at the host.
We view these not only as extensible advantages of continuing to stick with a pluggable interface that has a paddle card running across a receptacle beam interface, but they also show ways we can continue to innovate as we push these types of modules to higher frequencies and higher data rates. At the link level, we look at what we will be able to achieve by taking that 1 dB bandwidth we can achieve with the XPO interface and bringing it into more of a link-level architecture that could be deployed to the entire system.
If we look at the other components that would have to be employed in this type of application, the XPO interface would attach to twinax cables that would go to some type of copper connectorized interface, either near a chip or co-packaged with the ASIC chip. At its worst case, we believe that near chip or co-packaged interconnect would be about 1 dB, and depending on the wire gauge and length that you're using, the length between the ASIC and the faceplate will vary. We believe that a rough approximation would be about 4 dB per meter, given gauges that we believe are feasible to route out in the densities that are needed to support XPO.
We look at this really as a way for us to implement chip-to-module channels for high-radix, 204.8T switches at a loss budget of less than 19 dB. The reason to do this is really to help promote support for linear pluggables. As we compete with co-packaged optics solutions, it's going to be really critical that we support the best possible channel and enable linear pluggable or half-retimed solutions to reduce latency and support the scale-up architectures that are needed in the near future. Now, we've resolved the density problem, and we've been able to get the bandwidth for the interface. I think another piece that's been a concern for folks as we move to this high-density pluggable form factor is what the crosstalk looks like.
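The budget arithmetic described above can be sketched out with the loss figures quoted in the talk: roughly 1 dB at the XPO interface, about 1 dB worst case at the near-chip or co-packaged copper connector, and approximately 4 dB per meter of twinax. This is a rough illustration, not a compliance calculation; the example cable lengths are assumptions, and the remaining margin would have to absorb ASIC package and breakout losses not itemized in the talk.

```python
# Rough chip-to-module link budget sketch using the loss figures
# quoted above. Cable lengths below are illustrative assumptions.
XPO_INTERFACE_DB = 1.0   # XPO paddle-card/receptacle interface, ~53 GHz
NEAR_CHIP_CONN_DB = 1.0  # near-chip / co-packaged copper connector, worst case
TWINAX_DB_PER_M = 4.0    # approximate twinax loss for feasible wire gauges
BUDGET_DB = 19.0         # target chip-to-module loss budget

def channel_loss_db(cable_len_m: float) -> float:
    """Faceplate-to-ASIC channel loss for a given twinax length.

    Does not include ASIC package or breakout losses, which consume
    part of the remaining margin.
    """
    return XPO_INTERFACE_DB + NEAR_CHIP_CONN_DB + TWINAX_DB_PER_M * cable_len_m

for length in (0.5, 1.0, 2.0):
    loss = channel_loss_db(length)
    print(f"{length} m twinax: {loss:.1f} dB loss, "
          f"{BUDGET_DB - loss:.1f} dB margin remaining")
```

Even at a 2 m routing length, the cabled portion of the channel stays well inside the 19 dB budget, which is what leaves room for the linear (unretimed) pluggables the talk argues for.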
This is an area that's very implementation-specific, and something we're still working out in terms of the exact metrics and ways to minimize crosstalk, not only for each of the lanes but also for each of the adjacent modules. The XPO is designed in such a way that you have access at the host level to the individual 8-lane octal allocations. Through the pinout changes, we've been able to achieve better far-end crosstalk compared to OSFP 1600, and roughly equivalent near-end crosstalk.
Honestly, for the density that we're able to achieve with this form factor, being equivalent to an 8-lane form factor like the OSFP in terms of crosstalk would be a huge win in moving forward with these types of architectures at 200 gig with the performance requirements that we're going to be targeting. Now, all the data we've shown previously came from simulation models of the interface. We have started to see XPO applications being deployed in test and measurement and some prototyping, and some of this was on display at OFC earlier this month.
We've also been able to take some measurements of these channels. You can see here a near-package copper solution with about a 0.3 m cable connected to the XPO interface, routed through a passive loopback module, and we're looking at the insertion loss profile of that entire channel. There is still some work to be done. Obviously, you can see in the picture there's no structure around the cage and module that would hold these features in place. Some of this work is still very much in development as we open the MSA up to the larger group.
We're very comfortable and confident that the interface and the technology at the base layer, that paddle card and receptacle, are going to be able to provide the bandwidth and crosstalk isolation needed to support at least 200 gig per lane, if not more, through further evolution. If we think about the link-level architecture and, again, where the biggest return on integration effort is, it's really in moving towards that co-packaged copper implementation and focusing primarily on where that copper goes after you attach it to the ASIC substrate. The XPO offers one of the densest, most flexible pathways for those solutions to exit the chassis faceplate. The CPC together with the XPO is really what drives the heart of what we're calling the optics innovation behind this.
It removes the bottleneck of the BGA breakout, and there's continuous innovation on exactly how to implement those structures inside of the ASIC substrate, as well as how to connect to them with various forms of co-packaged copper interconnects. Once you get out of that interconnect bubble, being able to route to a stable and consistent form factor that can provide both copper and optical module connectivity is really going to be the staying power of a form factor like XPO. Now, in addition to all of the copper and implementation-specific details that Amphenol is working on, at OFC we also introduced an LPO module in the XPO form factor. This has all 64 lanes running. It was electrically hot-pluggable and demonstrated compliance with CMIS using the 48-volt single supply.
Amphenol's interest here is not only in the optical transceiver. We also supply the quick disconnects that go inside of the cooling mechanism, and we've been involved very early on in defining all of the details around the latching structure. We continue to find this an area where more improvement could probably be made to make it more user-friendly and support broader distribution of these modules into the ecosystem. We're maintaining compatibility with MPO-16 interfaces, but we're also exploring the opportunity to use denser optical interfaces, which might provide more opportunity for user-friendly access inside of the data center once deployed.
At OFC, we had a couple of different demos debuting the performance of these structures, one of which was in the Arista booth, where we were able to demonstrate fairly consistent performance across the entire breadth of the XPO module in what I would call a traditional switch architecture utilizing technology available today. We had also attempted to mock up some of the channel advancements that we would expect to see for platform architectures in the future, and we were able to test that in our booth, demonstrating on a much smaller scale a BER of 1e-10, and 1e-11 in some lanes.
We're very bullish on the ability of XPO to support LPO, specifically because the LPO use case is going to be really important for supporting scale-up architecture needs in comparison to some of the competing technologies. Now, as we go forward, we are actively working to continue that test and measurement effort with some of our partners, utilizing link-level testing, which has been deployed on many of our copper backplane solutions when density has become much greater than what can be supported by traditional lab equipment with metrology-grade structures.
Powered by some of the leading 224 gig SerDes and integrating some of those cabled-host, near-package copper structures, we're able to get a complete picture of what a full cabled host would look like in an actual appliance application. These boxes are aimed at supporting 500 W, so the full power spectrum of what we'd expect to support with the XPO module. They also provide full CMIS validation of our modules and let us test all of the different configurations we need to in a single appliance. These types of test vehicles are really important for us. We're appreciative of the support and partnership we've gotten from our friends at MultiLane in this effort, and look forward to continuing to work with them on these platforms.
Additionally, as we go forward, being able to probe and access each of the lanes in the module is going to continue to be important. Previous generations of high-density interfaces have fallen victim to test structures that are far too complicated to probe in situ, or to probe practically in a single fixture. It is important to maintain compatibility with these frequency-domain testers and to have access to all the pairs in the XPO module.
We've partnered with folks from Wilder Technologies to look at compliance board definitions, structures that give access to all of the XPO pairs, not at the same time, but in a user-configurable way that's mechanically compatible with appliances like the one shown on the previous slide, while enabling debug at a differential pair level. As we go forward, you know, we'll be using fixtures like this to take measurements in the lab so we don't have to rely on the more complicated loopback or BERT-level testers when we're doing debug of these initial evaluations. Now, what's next for us?
You know, I think building out broader support for copper and optical solutions, continuing with the test and validation that I described on the previous couple of slides, and really going into a much more focused effort on the integration and routing of the cables to the XPO front panel. There's a lot of twinax wire that needs to be layered and structured not only to make the channel perform at the level we expect, but also to be reliably serviceable if needed, and to be done at the scale that's going to be needed to support these AI factories. This is a picture of some of the goals of what a finished routed structure might look like.
We're definitely taking care to demonstrate this with a number of CM partners for our customers, as well as looking for ways to continue advancing some of the cable structuring to make sure these types of implementations survive the stresses and challenges that come with doing this at scale. We look forward to solving all of these challenges and continuing to promote XPO to the broader market. Again, thanks to everybody else who spoke here. I'm really happy to be part of this and looking forward to a bright future for XPO. Thanks, Vijay.
Thank you, Sam. That was excellent. Great to see you articulating the value of CPC plus XPO to hit the benefits of density, linearity, and pluggability. Thanks also for highlighting the test and measurement ecosystem that's so critical for the success of XPO. I'd like to thank Sam, Helen, and Ryan for outstanding presentations as part of this webinar, and also for serving as the co-chairs of the XPO MSA. Super excited and looking forward to the next steps to make this form factor happen. Andy, any thoughts from your side?
Yes. Again, thanks to all the speakers. If any of you are not yet a member of the XPO MSA, please send an email to info@xpomsa.com.
Thanks, Andy. Any final words from Sam, Helen, or Ryan?
Thank you so much. Glad to be part of the MSA.
Yeah. Thanks, all.
Thank you.
Awesome. Thank you, all. Have a good rest of the day. Cheers.
Thank you.