
Co-Packaged Silicon Photonics Switches for Gigawatt AI Factories Webinar

Feb 3, 2026

Speaker 2

Hi, everyone. Thanks for joining us today for our webinar on Co-Packaged Silicon Photonics Switches for Gigawatt AI Factories. Before we begin, we wanted to cover a few housekeeping items. On your screen, you can find various widgets for the webinar. Each widget is resizable and movable. If you have any questions during the webcast, you can submit them through the Q&A widget near the bottom of your screen. We will try to answer them at the end of the event. Here are some tips to help ensure you have a good experience today. To maximize the quality of the audio stream, please close any open applications aside from your browser window, and if your audio stops or the slides seem to be lagging, please try refreshing your browser.

If you encounter any other technical issues today, please let us know in the Q&A box, and we will try to help you troubleshoot. Lastly, you will notice there's a brief survey; please take a moment to fill it out, as it will help us tailor future webinars. Now, without further ado, we will turn the event over to our speaker, NVIDIA Senior VP of Networking, Gilad, to begin the presentation. Over to you, Gilad.

Gilad Shainer
SVP of Networking, NVIDIA

Thank you very much, and thank you everyone for joining us. We will talk about several topics today. We'll talk about how we build an AI supercomputer, about the scale-out infrastructure technology, and about how we built a scale-out infrastructure that can deliver the performance AI workloads need. And we will have a good focus on co-packaged optics. So the data center is the computer today. In the past, the computer was the CPU or a single GPU. But today, AI workloads run across multiple computing elements, and therefore the data center has become the computer. Now, the way that we connect those computing elements defines what that computer, that data center, can do, and therefore the network defines the data center.

Now, building an AI supercomputer means that we need to bring together multiple networks, not just a single network. There are four major networks, four major infrastructures, that create an AI supercomputer. We start with Scale-Up. We start with NVLink, and NVLink's mission is to connect GPU ASICs together to form a GPU, and in our case, to form a rack-scale GPU. That rack consists of tens, and in the future could be hundreds, of GPU ASICs; connected with NVLink, it becomes a single unit, it becomes the GPU. And of course, there is a co-design between the workloads, the software, and the silicon devices, the platform, to enable that connectivity between GPU ASICs to form a single GPU, a single virtual GPU.

So once we build our rack-scale GPU using the NVLink Scale-Up network, the Scale-Up computing infrastructure, we need to scale it out. We need to have hundreds of thousands of GPUs connected together to run a single AI workload, a single distributed computing workload. And for that, we need a Scale-Out network, a Scale-Out infrastructure that connects those racks to form the large data center. For Scale-Out connectivity, we have designed Spectrum-X Ethernet. Spectrum-X Ethernet is an end-to-end Scale-Out infrastructure with a clear mission: to eliminate jitter, to ensure that every GPU gets the data at the same time, works at the same time, fully synchronized with other GPUs, exchanges data at the same time, and so forth. That is the secret sauce of taking GPU ASICs and building a single supercomputer.

Now, we also need to bring the right storage, and for that reason, we have the context memory storage infrastructure built within the AI pod, leveraging BlueField DPUs as data and storage processors. The context memory storage creates a new tier of storage to serve the storage requirements of inferencing workloads. So we have Scale-Up NVLink, we have Scale-Out Spectrum-X Ethernet, we have the context memory storage with BlueField. Now, we need to scale across. Building a data center doesn't mean that we have the entire compute capability we need for running our workloads. We may need to connect multiple data centers together, and the reason we cannot scale in a single place is that we may be limited by power or limited by real estate.

But still, we want to be able to go to one-million-GPU scale to support the next generation of workloads. For that reason, we created a Scale-Across infrastructure. Based on Spectrum-X Ethernet, Scale-Across enables us to connect remote data centers together to form a single Scale-Across computing engine that can support giga-scale AI factories. So we have four major infrastructures with the purpose of making the data center a single computer for AI workloads. Now, focusing on Scale-Out, focusing on Spectrum-X Ethernet. When we look at Ethernet connectivity for Scale-Out AI, we can see that in the market there are different kinds of Ethernet. There is no single Ethernet out there. There are actually multiple kinds of Ethernet built for different purposes, built to support different kinds of workloads.

There is an Ethernet architecture, an Ethernet technology, that was built and optimized for the enterprise: feature-rich, small radix, highly virtualized data centers. That's the target of that kind of Ethernet. There is another kind of Ethernet that was built and optimized for hyperscale data centers, for single-server workloads. There is a third kind of Ethernet built for service providers, for DCI, for carriers. None of those options was built to support AI. None of those options was built to support distributed computing workloads, to enable a single workload to run over hundreds of thousands of computing engines. For that reason, to enable AI supercomputers, we designed an Ethernet technology, an Ethernet architecture, for AI factories, focusing on high performance, focusing on distributed computing. So what makes Spectrum-X so great for AI?

Spectrum-X Ethernet runs a full set of standard Ethernet protocols. There is no proprietary Ethernet protocol in Spectrum-X. But the Spectrum-X Ethernet platform is built as an end-to-end infrastructure, understanding that building a scale-out infrastructure is not something you can do in a single component; you need to build an entire end-to-end infrastructure focused on eliminating jitter. We focus on RDMA, because RDMA is the de facto way to move data between GPUs in the most effective way, avoiding data copies and moving data directly from one GPU memory to another GPU memory. We focus on low jitter, on eliminating jitter in the operations of moving data. We want all GPUs to work in a fully synchronous way.

All GPUs need to get the data at the same time, so they can work at the same time, and then exchange data again at the same time. So Spectrum-X Ethernet is built in an end-to-end way. There are SuperNICs connected to the Spectrum-X Ethernet switches, forming that infrastructure. We split the missions across that end-to-end infrastructure. The mission of the SuperNIC is, one, to control the injection rate into the network, to make sure that we don't create hotspots, as hotspots are a source of jitter. The SuperNIC's mission is also to place the data in the right location in GPU memory. The switch can then do full, unconditional data distribution using all of its ports, choosing the best path packet by packet based on the local switch conditions and the remote switch conditions.
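To make that split of responsibilities concrete, here is a minimal, purely illustrative Python sketch of per-packet adaptive routing: each packet is sent on the least-loaded egress port, so traffic is sprayed across every available path instead of piling onto one link. The port count, queue model, and function names are invented for illustration and are not NVIDIA's implementation.

```python
# Illustrative sketch only (hypothetical model, not NVIDIA code):
# fine-grain adaptive routing picks the best path packet by packet.
import random

NUM_PORTS = 8                      # assumed number of uplink ports
queue_depth = [0] * NUM_PORTS      # stand-in for local/remote congestion state

def route_packet() -> int:
    """Send this packet on the least-loaded port (per-packet path choice)."""
    port = min(range(NUM_PORTS), key=lambda p: queue_depth[p])
    queue_depth[port] += 1         # packet now queued on that port
    return port

def drain() -> None:
    """Model each link draining one packet between arrivals."""
    for p in range(NUM_PORTS):
        queue_depth[p] = max(0, queue_depth[p] - 1)

# Send a burst of packets; per-packet choice keeps the load balanced,
# so no single port becomes a hotspot.
for _ in range(64):
    route_packet()
    if random.random() < 0.5:
        drain()

print("queue depths after burst:", queue_depth)
```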

Spectrum-X Ethernet works as a single end-to-end infrastructure, eliminating jitter by using fine-grain adaptive routing: every packet can take the best path available across that infrastructure, and the SuperNICs control injection rates to ensure there are no hotspots. With that design, Spectrum-X eliminates jitter, ensuring the highest performance for AI training and inferencing workloads. We are achieving great performance for inferencing by improving expert dispatch performance by 3x. Expert dispatch is an all-to-all operation, and it is very sensitive to jitter. By eliminating jitter, Spectrum-X Ethernet improves expert dispatch performance by 3x. If we look at training, Spectrum-X Ethernet gives the best available performance and predictable performance: every iteration, every step time, is equal to any other step time.
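The jitter sensitivity of an all-to-all operation like expert dispatch can be seen with a small back-of-the-envelope model: the collective only finishes when the slowest GPU-to-GPU exchange finishes, so the completion time is the maximum, not the average, of the per-link times. The sketch below uses invented numbers purely to illustrate that tail effect; the 3x figure itself comes from NVIDIA's measurements, not from this model.

```python
# Hypothetical illustration of why all-to-all (expert dispatch) is jitter-sensitive.
import random

random.seed(0)
NUM_EXCHANGES = 1024       # assumed number of GPU-to-GPU transfers in the all-to-all
BASE_US = 100.0            # assumed nominal transfer time, microseconds

def all_to_all_time(jitter_us: float) -> float:
    """Completion time is the max over all exchanges of (base + random jitter)."""
    return max(BASE_US + random.uniform(0.0, jitter_us) for _ in range(NUM_EXCHANGES))

congested = all_to_all_time(jitter_us=200.0)   # jittery, congested fabric
low_jitter = all_to_all_time(jitter_us=10.0)   # jitter largely eliminated

print(f"with jitter:    {congested:6.1f} us per all-to-all")
print(f"jitter removed: {low_jitter:6.1f} us per all-to-all")
```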

Every GPU gets the data at the same time. All of the computing engines across the data center work fully synchronized. So we're able to achieve 1.4x performance, and not just that, also predictable performance, which is a key element for AI workloads. So we built a great Spectrum-X scale-out infrastructure around Ethernet that eliminates jitter and is purpose-built to support distributed computing workloads at scale. Now, we want to take that Spectrum-X Ethernet scale-out infrastructure and ensure that it is also the most effective infrastructure when we look at power consumption, and when we look at how to improve the resiliency of the data center. Building an AI supercomputer means that we're connecting hundreds of thousands of GPU units, of GPU ASICs, together.

It means that we need a scale-out infrastructure that can run over distance, and therefore that scale-out infrastructure is based on optics; it uses optics to connect those switches together. That optical network consumes power. As we go from generation to generation, as we double the bandwidth on scale-out to support the next generation of workloads and to enable a fully balanced design of an AI data center between the GPUs, the CPUs, and the network, the power consumption of that optical connectivity continues to increase. It can consume the equivalent of almost 10% of the computing resources. And as power dictates the compute capacity of the data center, reducing that power will enable us to increase the compute capability of our data center, and therefore it's a very important mission.
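As a rough sanity check on that "almost 10%" figure, the placeholder arithmetic below multiplies an assumed per-transceiver power by an assumed transceiver count per GPU and compares it with an assumed per-GPU power budget. Every input is a hypothetical round number chosen for illustration, not NVIDIA data.

```python
# Back-of-the-envelope sketch with placeholder inputs (not NVIDIA data):
# how pluggable-optics power can approach ~10% of the compute budget.
num_gpus = 100_000            # hypothetical AI-factory scale
gpu_power_w = 1_000           # assumed power per GPU, watts
transceivers_per_gpu = 4      # assumed pluggables per GPU across the scale-out fabric
transceiver_power_w = 25      # per-transceiver power mentioned in the talk

compute_mw = num_gpus * gpu_power_w / 1e6
optics_mw = num_gpus * transceivers_per_gpu * transceiver_power_w / 1e6

print(f"compute power: {compute_mw:.1f} MW")
print(f"optics power:  {optics_mw:.1f} MW ({100 * optics_mw / compute_mw:.1f}% of compute)")
```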

Now, in order to minimize the power consumption, to optimize the scale-out network, to make sure that we can increase the compute density in a data center, and also to make sure that the growing size of data centers will be very reliable and resilient, we are introducing co-packaged optics. Co-packaged optics means that we take the optical engine that traditionally is part of the transceiver, an external device to the switch, and move it from the transceiver into the switch system, to sit with the switch in the same package. The optical engine's mission is to convert light to electricity, to translate the light signals coming in on that optical network into electrical signals that can carry the data to the switch.

If we're doing that translation from light to electricity far away from the switch device, we need to invest power, and that power, when we look at the growing size of AI data centers, can be very meaningful. So if we dive into what those elements look like: in a transceiver, we're going to find an externally modulated laser device and a DSP. In some cases, there could be DSPs on one side and not on the other, as there are different kinds of transceivers, but the concept is the same. The optical signal coming in on the fiber will go into that transceiver, and then from the transceiver it will need to go through multiple transitions.

The signal crosses the substrate of the transceiver, the switch port cage, the PCB of the switch system, and the switch substrate until it reaches the switch ASIC. We invest about 25 watts in that entire movement of the signal; different transceivers could be 20 to 25 watts, but that's the range. And of course, there is also a large signal loss, as the signal needs to transition multiple times from the fiber all the way to the switch ASIC. Co-packaged optics does two things. One, it moves the optical engine to sit in the same package as the switch ASIC, very close, which means we need to invest a much smaller amount of power to move the signal from the silicon photonics engine to the switch ASIC.

We can save 5x in power consumption for the scale-out infrastructure. But it's not just that. We're also reducing the number of components. We're reducing the number of lasers needed for that optical network. We're reducing the components needed to build that scale-out infrastructure. We are increasing the resiliency of the data center. We're increasing the time to first interrupt, enabling AI workloads to achieve a higher level of performance. We have built CPO switches for both Spectrum-X Ethernet and Quantum-X InfiniBand, supporting the two best scale-out infrastructures for AI workloads. Building the co-packaged optics technology is not a simple thing.
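To see what that 5x can mean at facility scale, here is one more placeholder calculation: in a power-limited data center, whatever the optics no longer consume can be returned to the compute budget. The megawatt and per-GPU figures below are assumptions for illustration only.

```python
# Placeholder arithmetic (assumed inputs, not NVIDIA data): power freed by a 5x
# reduction in scale-out optics power, expressed as additional GPUs it could feed.
pluggable_optics_mw = 10.0                 # assumed optics power with pluggables
cpo_optics_mw = pluggable_optics_mw / 5.0  # 5x saving cited in the talk
gpu_power_kw = 1.0                         # assumed per-GPU power budget

freed_mw = pluggable_optics_mw - cpo_optics_mw
extra_gpus = int(freed_mw * 1_000 / gpu_power_kw)

print(f"power returned to compute: {freed_mw:.1f} MW -> room for ~{extra_gpus} more GPUs")
```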

We have been working with a big ecosystem of partners, creating new elements of packaging and new ways to build optical engines small enough to support large-radix switches for AI factories. We built technology that enables a better approach to attaching the fiber to the optical engine. We focused on building liquid-cooled infrastructure that enables greater compute density in a data center and further reduces power. So we have Spectrum-X Ethernet with co-packaged optics, and we have Quantum-X InfiniBand with co-packaged optics. Now, if we look inside the switch itself, you can see the complexity of the technology. You can see a co-packaged optics switch package with the switch ASIC in the middle, surrounded by optical engines that sit on an interposer.

An optical engine consists of a photonic IC and an electronic IC, as we are converting light to electricity, built as a 3D-stacked electronic and photonic IC. We're using COUPE, with lenses and surface coupling, to attach the fiber to that optical engine, and that connects to a fiber connector that brings in the signal from the outside, from the optical cable, and also the light from the laser source. There are a lot of technologies here that need to come together in order to build a co-packaged optics switch infrastructure. So Spectrum-X Ethernet Photonics is 200-gig SerDes co-packaged optics. It doubles the bandwidth versus the previous generation, supporting the growing demand for data throughput from GPU to GPU. By having co-packaged optics, we're increasing the signal integrity by 64x.

We are improving laser reliability by building high-power lasers that are fully optimized for co-packaged optics. We're increasing the reliability by 13x. We're reducing a lot of components. We're reducing the need to use transceivers for that scale-out infrastructure. We're reducing the number of lasers. So on one side, we reduce power and improve the signal quality; on the other side, we increase the resiliency of the data center. For Quantum-X Photonics, we have built a 115-tera switch that includes 144 800-gig ports. The switch is fully liquid-cooled, as we want to make sure that the power consumption is the lowest possible when building AI data centers. And because of co-packaged optics, we have better energy efficiency, we have better resiliency, and we're increasing AI application runtime.

So we have the full range of co-packaged optics switch infrastructure for scale-out AI, and it enables scaling AI factories to millions of GPUs. For Spectrum-X Ethernet Photonics, we have a 102-tera switch that consists of 128 ports of 800 Gb/s, or 512 ports of 200 Gb/s. We also have a larger switch supporting 409 Tb/s, with 512 ports of 800 Gb/s, or 2,048 ports of 200 Gb/s. For Quantum-X InfiniBand Photonics, we have a 115-tera switch supporting 144 ports of 800 Gb/s. It is fully optimized for power consumption, a liquid-cooled switch infrastructure, delivering the best AI performance by eliminating jitter for GPU-to-GPU communications, and enabling the best optical connectivity.
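Those aggregate-bandwidth figures follow directly from port count times per-port line rate; the short check below reproduces them.

```python
# Port count x line rate = aggregate switch bandwidth quoted above.
switches = {
    "Spectrum-X Photonics 102.4T": (128, 800),  # or 512 ports of 200 Gb/s
    "Spectrum-X Photonics 409.6T": (512, 800),  # or 2,048 ports of 200 Gb/s
    "Quantum-X Photonics 115.2T":  (144, 800),
}

for name, (ports, gbps) in switches.items():
    print(f"{name}: {ports} x {gbps} Gb/s = {ports * gbps / 1_000:.1f} Tb/s")
```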

Now, I would like to show you co-packaged optics working within the NVIDIA lab. So here we go. A co-packaged optics switch infrastructure designed to improve power efficiency, reducing the power consumption of the data center to the minimum possible while delivering scale-out communications that are fully optimized for AI; eliminating jitter with Spectrum-X Ethernet and Quantum-X InfiniBand; and enabling million-GPU scale by optimizing not just the data center, but also data-center-to-data-center connectivity. So with that, I will be happy to take questions and answer anything that you would like to ask about co-packaged optics.

Speaker 2

Yeah. Great. Thank you, Gilad. This is the time for the Q&A section. I would like to remind you that you can ask questions using the Q&A window on your screen. Once you input your questions, we will read them aloud and try to answer them one by one. Okay? Give us, Gilad, like, one minute so we can read through the questions, and then we'll start to answer them. Okay, I will read out the first question. Gilad, when will we see the massive deployment of CPO?

Gilad Shainer
SVP of Networking, NVIDIA

When will we see it? Sorry.

Speaker 2

Yeah. When will we see-

Gilad Shainer
SVP of Networking, NVIDIA

Yeah, so,

Speaker 2

the massive deployment?

Gilad Shainer
SVP of Networking, NVIDIA

Yeah, we will start seeing co-packaged optics deployments, of course, this year. We have announced three partners of ours that will deploy Quantum-X InfiniBand co-packaged optics in the first part of this year. We announced that CoreWeave, Lambda, and the Texas Advanced Computing Center will be among the first to deploy co-packaged optics with the InfiniBand scale-out infrastructure. We will start shipping Spectrum-X Ethernet co-packaged optics in the second part of the year, and we will start seeing more and more AI and supercomputing deployments with co-packaged optics.

Speaker 2

Okay. Second question: Since CPO reliability issues are solved now, what was the main reliability problem, and how was it solved?

Gilad Shainer
SVP of Networking, NVIDIA

Yeah, for co-packaged optics, we wanted to make sure that the infrastructure is very reliable and very resilient. A pluggable optical network may require us to replace transceivers from time to time, and the reason is that those are external devices that are subject to human touch: they require cleaning before installation, and as we install or replace a transceiver, we may touch other transceivers. So human touch is one of the causes of reduced resilience in a data center. What we did with co-packaged optics is essentially take that optical engine and put it inside the switch system, inside the package with the switch ASIC.

It means that the optical engine is inside a package, liquid-cooled on top of that, completely closed in a box, not subject to human touch. We built a process with our partners that enables us to fully validate the entire system build-up, to make sure that it ships 100% tested. And because it is 100% tested and doesn't require human touch as you build it out in the data center itself, we're able to bring great resiliency and great reliability to that co-packaged optical network.

Speaker 2

Okay, thank you, Gilad. Regarding the new technology of CPO, what new requirements did you address in your collaboration with TSMC?

Gilad Shainer
SVP of Networking, NVIDIA

Yeah, one of the technologies that we developed with TSMC is the packaging, the co-packaging itself. It was important to create a packaging process that is reliable, and that can be fully validated and tested, so we have a good path to mass production. Previous technology attempts around co-packaged optics didn't manage to do a full validation, didn't manage to build a process that could be fully validated and tested with high reliability and high resiliency. And therefore, we needed to design multiple elements of the technology. Now, it's not just the packaging with TSMC; there were other elements that we designed for co-packaged optics. One example is the optical engine itself.

Previous attempts at co-packaged optics focused on building large optical engines based on MZM, the Mach-Zehnder modulator. MZM lets you build optical engines in a simple way, you could say, but it's a large engine, and a large engine can't necessarily support large-radix switches. We wanted to make sure that we build an optical engine that can support large-radix switches, which is obviously an important element of a scale-out network. Therefore, our co-packaged optics design, our optical engine design, is based on micro-ring modulators. So that was one technology innovation. The co-packaging process with TSMC is another example of technology innovation. There were other technology innovations in, for example, how you do alignment of the fiber to the optical engine.

How do you build it in a way that not only gives you the right performance levels but is also very reliable and very resilient? The same goes for the way you build the fiber array that goes inside the switch system itself, connecting to the fiber cables that carry the signal from the other side, as well as incorporating the laser source that also needs to reach the optical engine. And then for the laser source itself, we wanted to build something that enables us to reduce the number of lasers, so we can build a dense switch infrastructure. So there is design around the laser source itself.

We designed a high-power laser, and with that high-power laser, we're able to get the performance that we need on one side, while also reducing the number of lasers needed for the optical infrastructure. So there is innovation across every aspect of that CPO technology in building that switch infrastructure. And of course, the packaging work with TSMC is an important part of that.

Speaker 2

Thank you, Gilad. The next question is going to be a bit long, so I'm going to try to read it as clearly as possible. So let's get started. Pluggable optics have given us the flexibility to build out a network port by port. For example, in some GPU builds, we are using MMF short-range modules for the smaller builds, and just use single-mode to connect data halls together, as the SMF optics are much more expensive. Are co-packaged optics going to be that flexible? Are you going to be able to order the switch with CPO as either long range or short range?

Gilad Shainer
SVP of Networking, NVIDIA

Yeah. So when you have a pluggable network, you have the ability to choose different transceivers for different applications. You can choose between multimode fiber and single-mode fiber, depending on the distance that you want to support. You can choose between DRs and FRs, and you can have a transceiver that goes to very long distances, and so forth. Of course, when you build co-packaged optics switches, you need to commit to a specific technology for that connectivity, and therefore you need to choose which one will best serve the data center that you would like to build.

And therefore, we took the CPO and incorporated the technology that enables us to cover the entire distance of the data center, and even beyond that. So with Spectrum-X Photonics, Spectrum-X co-packaged optics, we can actually connect even remote buildings together. So there is no need to have different kinds of transceivers. If you're comparing multimode to single mode, you can talk about the differences in power consumption between them, and then the longer reach; co-packaged optics minimizes the power consumption, so it's the best approach compared to any transceiver you would choose.

And then by enabling distances that go even beyond the distances within the data center, being able to connect even remote buildings on a campus, our co-packaged optics actually replaces a broad range of transceivers. Now, of course, once you go beyond that, and you want to connect through Scale-Across, DC to DC over very long distances, you will connect to a transceiver that enables that connectivity. But within a data center, we picked the technology that enables us, on one side, to reduce power consumption by up to 5x, and on the other, to cover the distance ranges that are needed within a data center building and also across data center buildings on the same campus.

Speaker 2

Okay. Next question: What are the key factors that could make hyperscalers cautious or slower in adopting CPO, despite its potential benefits in bandwidth, density, and power efficiency?

Gilad Shainer
SVP of Networking, NVIDIA

Yeah, it's a good question. So first, let's look at the advantages of CPO. CPO enables you to reduce power, CPO enables you to increase the resilience of the data center, and CPO reduces link flaps, which means it increases the time to first interrupt. It's hard to find a reason not to go down that path, right? No one wants to pay more for power, reduce the available compute capacity, reduce the resiliency, or reduce the time to first interrupt, right? It doesn't make sense. So what could be the reasons that people may be cautious? That's the question. There were several items. I think I covered some of those in the previous questions, but I'll do that again.

One item is that when you have a pluggable optical network, you need to replace transceivers from time to time. And you may think that if you need to replace a transceiver from time to time, and now that, quote unquote, "transceiver" is integrated inside the switch system, does it mean that I now need to frequently replace the switch systems? The answer is no. The reason you need to replace a transceiver from time to time is that it's an external device, that there is human touch, that it requires human handling when you connect it. When you connect one transceiver, you may touch other transceivers and so forth, which can create damage. When you replace a transceiver, you also touch other transceivers.

That human touch, and the fact that it's an external device exposed to other elements, is the reason that transceivers need to be replaced from time to time. When you build co-packaged optics, which can be fully tested and validated as a whole system, not at the component level, and when you remove the human touch, then you don't need to replace or install transceivers. When you put it inside a package, there is no dust that gets inside; it's liquid-cooled, and the resiliency is much higher. The resiliency of a CPO switch is very similar to that of a pluggable switch without the transceivers. And therefore, the concern of "I may have some caution around the resiliency of CPO" is something we have actually answered.

The second part would be that with pluggable transceivers, I can choose a different transceiver for different applications. If it's a short reach, multimode. If it's a longer reach within a data center, single mode. If I want to connect data center building to building, I need another transceiver for longer distances, and so forth. And co-packaged optics gives me one technology. But what we did in our switch photonics is to not just cover the entire distances within a data center, but also support building-to-building in a campus. So our co-packaged optics actually replaces the functions of multiple different transceivers. So it gives you the capabilities to build a data center.

It reduces the power, because it's co-packaged optics, and it increases resiliency. Those items actually address the concerns of hyperscalers and other companies, and I believe that we will see very good adoption of co-packaged optics because of all of its advantages.

Speaker 2

Thank you, Gilad. We are looking at 45 questions right now, so I don't think we are able to answer them all. We're going to choose a couple more for Gilad to answer, and for the rest of the questions, we're going to download them from this webinar, answer them one by one in a PDF, and send that out to you as a follow-up. So I hope that's fine with you. I'm going to select three more questions for Gilad to answer live.

Gilad Shainer
SVP of Networking, NVIDIA

Yeah, I can help you to select questions, if that's okay.

Speaker 2

Yeah, yeah, yeah.

Gilad Shainer
SVP of Networking, NVIDIA

Okay.

Speaker 2

Yeah.

Gilad Shainer
SVP of Networking, NVIDIA

Sorry.

Speaker 2

Just select the most-

Gilad Shainer
SVP of Networking, NVIDIA

Yeah

Speaker 2

... interesting and important, please.

Gilad Shainer
SVP of Networking, NVIDIA

Yeah, yeah. Let me choose for you. I'll try to choose questions for you that ask about different elements. So there's a question here: "One of the huge benefits of pluggable optics was the pay-as-you-go model, being able to order how many you needed at the time and not having to pay for everything upfront. Does CPO drive up the initial purchase price to be more of a maxed-out price from the start?" So it's a good question. It's true, right? When you build an infrastructure, you can buy the number of transceivers that you need for the infrastructure, and if you need to add more, you buy more transceivers.

Speaker 2

Yeah.

Gilad Shainer
SVP of Networking, NVIDIA

Pay as you go. This is true when we are talking about traditional data centers. This is true when you build something that is wasteful, in a sense: a traditional data center where switches are not necessarily fully utilized, and if they are not fully utilized, then you just buy the transceivers for the ports that you're using, and so forth. When you build an AI supercomputer, you build a system that is fully optimized. You build a system whose topology is going to leverage everything that you have. We build reference architectures to optimize the connectivity, to make sure that every switch is fully utilized, that you don't buy switches and only half-use them, because it doesn't make sense. And we're talking about a lot of switches.

We're talking about a large infrastructure, to connect hundreds of thousands of GPUs, for example. So everything is optimized, everything is utilized. If you take that into consideration, then when you buy the switches, you are effectively buying all the transceivers with them, because this is how you build the infrastructure. And therefore, instead of having those switches and then buying separately all the transceivers needed to cover the switch ports, buying co-packaged optics actually reduces the amount of money you need to pay for that infrastructure, and it also optimizes power. So you are saving on CapEx, you're saving on OpEx, you're increasing time to first interrupt, and you're increasing the resiliency of the data center. It's like a win-win-win-win situation from that perspective.

Let me see if I can choose one more question. Yeah: "Where do you see the room for innovation and process improvement for next-gen products?" If you look at the cadence of technology, the cadence of new data center designs, new GPUs, new CPUs, new switches, new SuperNICs, today we're actually on an annual cadence. An annual cadence is needed to support the next generation of AI workloads, training and inferencing, and so forth. So every year there is a new technology being released, and everything that you learn from one technology you put into the next technology, and so forth. So it's an amazing pace of technology design nowadays, you know? Every day, you're actually working on two generations.

You know, the next one and the one after that. Now, for co-packaged optics, the focus was the 200-gig SerDes, focusing on power, saving the highest amount of power that we can save, increasing resiliency, and building the technology that can enable larger-radix switches. And we've built Spectrum-X Ethernet with co-packaged optics and Quantum-X InfiniBand with co-packaged optics. Now, as we go on an annual cadence, we're going to see switches with larger and larger radix, for one thing. The port density of the optical network will continue to increase, and the scale-out infrastructure will continue to increase in its capacity, its bandwidth, the amount of infrastructure, the way that you connect, and so forth. So we will see the next level of innovation around building those larger and larger switch radices.

We're working on different ways to connect the fiber into the switch itself, and so forth, and we continue to optimize not just the switch system by itself, but the entire rack. The entire rack is fully liquid-cooled, because we want to make sure that we are building the most efficient data center, and so forth. So there is a next level of innovation in the radix that we want to support, in the density of that optical connectivity, and in the density of the rack overall. I'll take one last question, as we're running out of time. The liquid cooling... Is the liquid cooling compatible with the DGX cooling standard?

So the way that we design the data center is as one unit, because that unit is the computer. And therefore, we want to make sure that what we design for compute covers the network. We want the same racks being used, to make it easier to build, easier to install, and easier to manage the data center, and so forth. The same liquid-cooled rack design that is being used for the compute servers is being used for the switch network itself. By the way, our Spectrum-X Ethernet is a fully flexible infrastructure.

So, of course, it's designed to support the same rack level, but we're also working with large manufacturing partners that build Spectrum-X Ethernet switches for different CSPs or different customers, who may have different designs. So we're supporting multiple designs that are optimized for the customers using them, in parallel with the design that we do ourselves, which mirrors the compute racks. We're also supporting, by the way, a variety of network operating systems with Spectrum-X Ethernet, supporting Cumulus, and it has also been adopted by others running, for example, Nexus, and so forth. So there is full flexibility on one side, and we have partners designing switches that will go into the different data center designs of our customers.

As well, we are designing switches that go into the same racks as the compute, the way the compute unit does, making it easier to build, install, and manage. So with that, I think we're running out of time. We'll try to answer the other questions, maybe post some answers out there, but I'd like to thank you for joining us today and for listening.

Speaker 2

Yeah. Thank you, everyone. Thank you, Gilad. We have quite a good number of questions in the Q&A box. We will try to answer them next week and then send the answers to you for your study and reference. Once again, thank you all for joining our webinar. We hope you found it very informative. Before you leave, a gentle reminder: if you can complete our survey, we would be really grateful. And if you would like to listen to this webinar again, the on-demand version should be ready in the next one or two hours and can be accessed using the same link. Thank you again for joining us, and I hope you have a lovely morning or evening, depending on where you are. And thank you, Gilad. I hope you enjoyed today. Bye.
