
Status Update

Mar 15, 2023

Lou Ternullo
Senior Director of Product Marketing, Rambus

Hi, my name is Lou Ternullo, and I'm part of the product marketing team for the Rambus IP Group. Today, I'm going to talk to you about accelerating data interconnects with PCI Express 6 interface IP. To give you a brief overview of what I'll cover: I'm going to start by discussing the PCIe standard and its importance to PCI Express use and adoption. I'll discuss the various application spaces where PCIe is used today, then talk about the challenges of the PCIe 6.0 specification, how it differs from prior generations, and what considerations need to be taken into account in the IP and at the system level.

I'll finish the presentation with a brief introduction to what Rambus offers for PCI Express intellectual property. PCI Express and CXL in a nutshell: PCIe, or PCI Express, stands for Peripheral Component Interconnect Express. It's an interface standard for connecting high-speed components, and it is the de facto standard today for connecting application processors or CPUs to peripheral components. There is also more and more adoption of CXL; I'll touch on CXL in this presentation, though I'm going to focus on PCI Express. PCIe specifications can be found at the PCI-SIG; the website link is below. CXL specifications are maintained and developed by the CXL Consortium, and information on the Compute Express Link is below as well.

The importance of a standard lies not only in the ability to design the interface protocol to a certain specification, timings, and so forth, but also in the ecosystem. The ecosystem not only enables development but also provides testing and validation to ensure interoperability. One of the main reasons PCI Express is so prolifically adopted is that the standard, in conjunction with this ecosystem, enables plug-and-play testing at PCI-SIG events, so that when a company developing a server system buys parts and plugs them in, they're going to work. The other thing about PCI Express is that it's been so well adopted that new standards have, I'll say, morphed from it.

By morphed, I mean they use the exact same physical layer, and the protocol stack is modified slightly. Some of these standards you may have heard of, like NVMe, Non-Volatile Memory Express. The newest one to come to light is UCIe, Universal Chiplet Interconnect Express. Let me talk a little bit about where PCI Express is used today and what some of the benefits are of potentially adopting CXL in the future. Starting with the upper left, high performance computing: referring to cloud data center and edge computing, any type of application that uses a compute processor is using the host portion of PCI Express. That is the source.

The peripheral components, as I mentioned, plug into that to enable each of the various types of applications I have listed here. Let's start with enterprise storage. As you can see from the picture, enterprise storage uses front-load EDSFF form factors for SSDs in today's server CPUs. The SSDs typically use standards like E3.S, a 2-high form factor for a 2U server, or E1.S, a 1-high form factor. That's adopted through the SNIA standard to ensure interoperability with solid-state drives.

Moving to the right of that, enterprise networking: networking applications, often called network interface cards, or NICs, have evolved into what is today called a SmartNIC. I want to use the SmartNIC as an example to explain something. I mentioned on the previous slides the peripheral interconnect. What the PCI Express interconnect has now evolved to is compute offload. Essentially, what I mean is that the card you're adding is not just connecting a peripheral, i.e., an Ethernet port; it also carries some type of compute engine that offloads work the server CPU used to have to do. In the case of enterprise networking, we have what are referred to as SmartNICs.

Typically, they include a DPU, or data processing unit. The purpose of the data processing unit is to manage the network, network traffic, telemetry, and all those aspects that the server CPU historically had to handle; now that work is offloaded. The SmartNIC market is growing very quickly and is expected to grow at a 20%-25% compounded annual growth rate through the middle to latter part of the 2020s. Moving down to the lower left corner: artificial intelligence, machine learning, and inference. Artificial intelligence has been the biggest buzzword for the last several years and will continue to be in the future.

While researching for this presentation, I came across a number of different statistics for AI growth, focusing only on hardware aspects. What I learned is that between 2020 and 2030, the AI hardware market, a large portion of which includes accelerator cards and interface cards plugging in via PCI Express, is expected to grow north of a 25% compounded annual growth rate, going from roughly, if my memory serves me correctly, $9 billion in 2020 to roughly $90 billion in 2030, which is a significant growth rate.
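As a quick sanity check on that figure, a tenfold increase over ten years works out to roughly 26% compounded annually, consistent with the "north of 25%" claim. A minimal sketch:

```python
# Compounded annual growth rate (CAGR) implied by the cited AI hardware
# figures: roughly $9B in 2020 growing to roughly $90B in 2030.
start, end, years = 9e9, 90e9, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 25.9%
```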

I'll explain more about what's driving this growth rate and why the evolution of the PCIe specification matters to keep up with it. Now a little bit about test and measurement. We've talked about PCI Express as an interface for a card plugging in; it's also used as a low pin count, chip-to-chip connectivity type of interface. It's used in test and measurement for that, and, if I can use this as an opportunity to plug the mobile space as well: as I mentioned before, it connects anything from a processing unit, whether a CPU or an application processor, to a peripheral device.

In the case of the mobile space, it connects the APU to a modem or some type of radio device. Again, it's very high speed and low pin count, for die size reasons among others, and it helps enable that as well. Back to test and measurement: in addition to the chip-to-chip connectivity within these systems, you need to be able to test them. Test and measurement solutions need to keep up with the latest standards so that when new SSDs, AI cards, or SmartNICs come to market, they can be tested, verified, and validated before shipping to customers. Last but not least, automotive.

In the automotive space, ADAS is becoming more and more prolific. ADAS is essentially artificial intelligence in the automobile: it's autonomous driving, and the essence of autonomous driving is to understand your surroundings, predict what's going to happen, and guide what to do next. Again, there's a processing unit in the automobile, and connected to it via a PCI Express interface is some type of artificial intelligence accelerator card. This is nothing new; you can go back and look at articles. I think it was a couple of years ago that NVIDIA and Mercedes announced a joint release where NVIDIA GPUs are being used in Mercedes vehicles to support autonomous driving.

The other thing I want to mention about automobiles and the direction they're going is communication: car-to-car and car-to-tower. You've probably heard of 5G, the wireless standard for your mobile phones; it's also being adopted by automobiles to communicate car to car and car to tower, and to further enable data gathering for the AI engines in these vehicles. Now a little bit about CXL. As I mentioned, CXL is a new standard; I'll talk more about the different types of CXL devices on the next slide. I want to use this slide to talk about where the potential is for using CXL.

Obviously, on the host side, in the CPU, there are applications in enterprise storage, in enterprise networking and SmartNICs, and in artificial intelligence and machine learning. The CXL interface offers an additional path alongside the traditional PCIe path, one that has lower latency and is based on coherent memory or data access. The three types of CXL devices, not surprisingly, are referred to as type one, type two, and type three. It is again the connection of a peripheral component, whether an offload compute component or otherwise, to a host processor such as a server CPU. The easiest way I like to explain the difference between these types has to do with the paths of communication between the host processor and the devices.

There's a CXL.io path, which is essentially your traditional PCI Express path and is used to set up the device itself and get it up and running. Then there's CXL.cache, which allows for a coherent interface between the host processor and any cache on the device. And there's CXL.mem, which enables coherent connectivity between the host processor and any memory connected to the device itself. For type two, you can see High Bandwidth Memory being part of that. Real quickly, we talked on the previous slide about the different application spaces; we talked about SmartNICs as a type one device.

SmartNICs don't typically have additional DRAM or memory associated with them, but there is a cache, and CXL.cache provides the ability for the processor to communicate with that cache more seamlessly. A type two device, such as a GPU or accelerator, again allows the CPU to communicate with the cache and/or the attached memory. The other thing I want to mention: because the .mem interface is a lower latency interface, it also provides provisions for that accelerator card to borrow or use partitioned memory from the main memory attached to the host processor. You've probably heard of natural language processing. It is an AI workload that requires a tremendous amount of memory, typically more than can be serviced by attaching High Bandwidth Memory to the accelerator card.

Using a CXL interface in an AI accelerator card for a natural language processing application can give it additional access to memory connected to the host processor. Last but not least, we have what's referred to as type three, which is essentially for memory bandwidth or capacity expansion. By expansion I mean that because this CXL device attaches to the PCI Express port, it does not take up any of the direct-attached DRAM on the host processor; that path to direct DRAM still exists. This is an additional, or expansion, path. It also gives the host the opportunity to decouple the type of memory that sits behind the CXL ASIC: it could be DRAM, storage-class memory, or whatever type of memory is most valuable for that particular system.
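To summarize the three device types just described, here is a minimal sketch mapping each type to the protocol paths it uses. The mapping follows the descriptions above (type one: cache but no attached device memory; type two: cache plus attached memory; type three: memory expansion only), with the example devices taken from this discussion.

```python
# CXL device types and the protocol paths each one uses, per the discussion
# above. CXL.io is the traditional PCIe path used by every type for setup.
CXL_DEVICE_TYPES = {
    "type 1": {
        "paths": ("CXL.io", "CXL.cache"),
        "example": "SmartNIC (device cache, no attached device memory)",
    },
    "type 2": {
        "paths": ("CXL.io", "CXL.cache", "CXL.mem"),
        "example": "GPU/accelerator with attached memory such as HBM",
    },
    "type 3": {
        "paths": ("CXL.io", "CXL.mem"),
        "example": "memory expansion (DRAM or storage-class memory)",
    },
}

for dev_type, info in CXL_DEVICE_TYPES.items():
    print(f"{dev_type}: {' + '.join(info['paths'])} -> {info['example']}")
```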

Now let me talk a little bit about the history of PCI Express. PCI Express started about 20 years ago, in 2003, with version 1 at 2.5 gigatransfers per second per lane. Almost every generation since has essentially doubled the bandwidth. Fast-forwarding to where we are now: PCIe 4 was released in 2017 at a 16 gigatransfers per second data rate. Two years later, PCIe 5 was announced, running at 32 gigatransfers per second, and now we're talking about PCIe 6, which was formally announced by the PCI-SIG in January of 2022.
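The per-generation rates can be laid out as a quick sketch. The Gen 1, 4, 5, and 6 figures are from the talk; the Gen 2 (5 GT/s) and Gen 3 (8 GT/s) entries are the published intermediate steps, and Gen 3 is the one "almost" in the almost-every-generation-doubles story, since it moved to 8 GT/s rather than 10 but recovered bandwidth by switching from 8b/10b to 128b/130b encoding.

```python
# Per-lane signaling rate by PCIe generation, in gigatransfers/second (GT/s).
RATES_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0, 6: 64.0}

for gen in range(2, 7):
    ratio = RATES_GT_S[gen] / RATES_GT_S[gen - 1]
    print(f"Gen {gen - 1} -> Gen {gen}: {RATES_GT_S[gen]:>5} GT/s ({ratio:.1f}x)")
```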

If you're reading the press, the PCI-SIG is also looking forward to PCI Express 7, which is expected to run at a data rate of 128 gigatransfers per second per lane. That's expected to come out in 2025. I want to point you to the release year column for a moment. Moving from 2019 to 2022 to 2025, we're essentially on three-year increments. I read an article yesterday, dated February 22nd, written by Tom's Hardware; it was essentially a summary of Dr. Lisa Su's presentation at the International Solid-State Circuits Conference.

What Dr. Su was saying in her presentation is that they were tracking server CPU performance and GPU performance over time. Essentially, every 2.4 years the performance of the server CPU doubles, and roughly every 2.2 years the performance of the GPU doubles. If you round up to three, that essentially aligns with what we're seeing here for PCI Express. As server performance increases, it needs to be able to access offload compute and/or peripheral components at a higher data rate. So the PCIe cadence tracks the increase in server CPU performance, which we expect to continue on that trend if not accelerate, and it aligns with where the PCI Express standard is developing. Again, I'll use this as an opportunity to plug the standard.

The only way we can transition that quickly from the present generation of the PCI Express standard to the next is through the standard, through the ecosystem, through the enablement of plugfests and resources to ensure interoperability. It's all connected. What were some of the goals the SIG was driving toward when creating the PCIe 6 specification? Number one, as I mentioned, doubling the bandwidth; we saw from the previous chart that every generation doubles the bandwidth. Number two is backwards compatibility. You want to make sure that if you have a system that supports the latest standard, i.e., Gen 6, and you have something that supports Gen 5, for example, you can still plug it in and it can still talk.

That goes all the way down to 1.1. Third, similar channel reach versus PCIe 5. I'll talk more about this on an upcoming slide, but this is really important from a form factor perspective. If the reach changed dramatically between generations, it would completely change all the tooling that already exists and make adoption that much harder. And fourth, higher bandwidth efficiency; again, I'll talk more about what I mean by higher bandwidth efficiency and how it was achieved. What did this drive in the architecture of PCI Express 6? Well, moving to 64 gigatransfers per second essentially meant moving from NRZ, non-return-to-zero signaling, to PAM-4, 4-level pulse amplitude modulation signaling.

If you're familiar with Ethernet, it transitioned to PAM-4 signaling at 56 gigatransfers per second, which PCIe 6 follows suit with at 64 gigatransfers per second. Next is the addition of a low-latency forward error correction, or FEC; I'll talk more about this and have one slide dedicated to it. This forward error correction lives in the actual controller itself; it is not part of the SERDES. The SERDES has a number of equalization aspects, which I'll touch on, but the FEC needs to be implemented in the controller because of some of these changes. PCIe 6 also added fixed-size FLIT encoding, with the FLIT protected by a CRC.

Again, I'll talk a little more about what that is and the reason for it. Moving to this next slide: previous generations are in the far left column, PCIe 6 in the center column, and some comments on the right. Let's start with the top row, signaling. As I mentioned, previous generations, Gen 5 in particular, used what is referred to as NRZ signaling. You can see it's essentially two data levels, either a zero or a one. You can also see that this is actually a very good eye: a pretty open eye and a solid UI at the order of 32 gigatransfers, which is what is expected for Gen 5.

Moving to the PAM-4 signaling under PCIe 6, you see three eye diagrams translating to four data levels: 00, 01, 11, and 10. To step back for a second: the Nyquist frequency for Gen 5 at NRZ is the same as the Nyquist frequency for Gen 6. What that essentially means is that over the same period of time, Gen 6 is transmitting, as you would expect, twice as much data because of the PAM-4 signaling.
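To make the two-bits-per-symbol point concrete, here is a minimal sketch of PAM-4 symbol mapping. The Gray-coded level assignment shown (00/01/11/10 from lowest to highest level, matching the eye-diagram ordering mentioned above) is the conventional one, offered as an illustration rather than a quote from the specification.

```python
# NRZ sends 1 bit per symbol (2 levels); PAM-4 sends 2 bits per symbol
# (4 levels). At the same symbol (Nyquist) rate, PAM-4 carries twice the
# data. Gray coding means adjacent levels differ by one bit, so a small
# voltage error causes at most a single bit error.
PAM4_LEVELS = {"00": -3, "01": -1, "11": +1, "10": +3}  # normalized amplitudes

def pam4_encode(bits: str) -> list[int]:
    """Map a bit string (even length) onto a sequence of PAM-4 levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [PAM4_LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

data = "0011100100"
print(pam4_encode(data))  # 5 symbols for 10 bits: [-3, 1, 3, -1, -3]
```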

The second row of the table is the forward error correction. There's no FEC in PCIe 5 or earlier generations, but the SIG added a FEC into PCIe 6 as a requirement. The primary mechanism is to correct some of the errors in the controller; it's not going to catch all of them. The purpose of the FEC in the controller is to improve the effective bit error rate. On the next slide, I'll talk a little more about the benefit and the implications of that. We also have the data exchange interface. On PCIe 5 and earlier generations, the data exchange can be of variable sizes across the TLP. With PCIe 6, it's a fixed FLIT size of 256 bytes. This was in part required to support both the FEC and PAM-4.

The FEC has to operate on a group of data; fixing the data exchange at 256 bytes enables the FEC to work optimally. The combination of the 256-byte FLIT and the doubling of the data rate enabled the throughput to increase by roughly 3x: 2x of that increase comes from the data rate, and roughly 1.5x comes from the efficiency of the new FLIT-based interface to the TLP.
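As a rough illustration of where those numbers come from, here is a minimal sketch of x16 link throughput under the 256-byte FLIT. The FLIT byte breakdown used (236 bytes of TLP payload, 6 bytes of data-link payload, 8 bytes of CRC, 6 bytes of FEC) follows commonly published descriptions of the PCIe 6 FLIT and should be treated as an assumption for the arithmetic, not a quote from the specification.

```python
# Throughput sketch for a PCIe 6 x16 link using the fixed 256-byte FLIT.
# FLIT layout below is an assumption based on public descriptions:
# 236B TLP payload + 6B data-link payload + 8B CRC + 6B FEC = 256B.
GT_PER_S = 64        # gigatransfers/second per lane (PAM-4, 2 bits/symbol
                     # at the same 16 GHz Nyquist rate as Gen 5 NRZ)
LANES = 16
FLIT_BYTES = 256
TLP_PAYLOAD_BYTES = 236

raw_gbps = GT_PER_S * LANES                     # 1024 Gb/s per direction, raw
flit_efficiency = TLP_PAYLOAD_BYTES / FLIT_BYTES
effective_gbytes = raw_gbps / 8 * flit_efficiency

print(f"FLIT efficiency: {flit_efficiency:.1%}")                  # ~92.2%
print(f"x16 effective: ~{effective_gbytes:.0f} GB/s per direction")
```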

From a power mode perspective, all previous generations of PCI Express have various power modes, and I'll talk about what those are on an upcoming slide. PCI Express 6 added a new power state referred to as L0p. The biggest difference, and again, I'll discuss it more on an upcoming slide, is that L0p can be engaged without interrupting the flow of traffic. Historically, for all other power modes, you had to actually interrupt the flow of traffic to enter them. Next, the PIPE interface, PIPE 5.x. Gen 5 was the first generation of PCI Express that had an optional PIPE interface mode, referred to as either LPC, low pin count, or SERDES mode. SERDES mode essentially pulls the PCS, historically the digital portion of the front end of the PHY, into the controller.

In PCIe 6, using the PIPE 6 standard, the PCS is in the controller, and what's referred to as SERDES mode is the communication between the controller and the physical layer. One of the benefits is that you no longer have a protocol stack between the controller and the PHY; you have a SERDES interface, and that allows for muxing to different types of controllers, i.e., Ethernet or PCI Express, assuming the PHY itself supports the different SERDES-based standards. So it more easily allows the use of multi-protocol SERDES. Now I want to talk a little bit about some of the system-level challenges. You see a little block diagram on the bottom right-hand corner.

The vertical interconnect is an add-in card, showing the rough dimensions, or trace lengths, of PCI Express signals in an add-in card: roughly 3-4 inches. Then you traditionally have a motherboard with a root complex, which is typically the CPU, and that's roughly 12-14 inches. This is the reach I was referring to as part of the requirements: the reach supported by PCIe 5 needs to be supported in Gen 6 as well. Let's look at the table now. Moving from Gen 5 to Gen 6, using Gen 5 as the primary reference, roughly 9 dB of signal was lost because of the PAM-4 signaling.

From a UI timing window perspective, roughly 33% was lost. To translate this into more layman's terms, the way I explain it to myself: if you have a 32 gigatransfer eye in PCIe 5 and you lose roughly 33% of that timing window, your circuitry is no longer detecting signals as if at a 32 gigatransfer data rate; it effectively has to detect them at roughly a 40 gigatransfer rate, to give you some level of indication. Now let's talk about the pad-to-pad loss budget. For Gen 5, the total budget was roughly 36 dB. Again, we have the breakout there from the root complex, or server CPU, to the add-in card and system.

That's roughly 9 dB, 9.5 dB, and 17.5 dB respectively. What is allowed to support PCIe 6 is a 32 dB total pad-to-pad system loss budget: 8 dB at the root complex, 8.5 dB at the add-in card, and roughly 15.5 dB at the system level, all at the 16 gigahertz Nyquist rate. In addition to the budgets, doubling the data rate itself has implications. With respect to the PCB and the root complex package, moving to PCIe 6 you need to design the root complex package with roughly 3-6 dB better characteristics.

For what's referred to as non-root complex package return loss, i.e., the add-in card, it's roughly 0 to 6 dB. From a crosstalk perspective, you again need to design for approximately 6 dB better. All said and done, yes, the form factor and the reach are essentially the same moving from Gen 5 to Gen 6, but the designs have to be much tighter in PCIe 6 to work.
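Pulling the table's numbers together, here is a minimal budget-check sketch. The per-segment figures are the rounded values quoted above; a real signal-integrity sign-off would of course use the specification's exact limits rather than this back-of-the-envelope sum.

```python
# Pad-to-pad insertion-loss budgets at the 16 GHz Nyquist rate (dB), using
# the rounded per-segment figures quoted above: root complex package,
# add-in card, and system board/connectors.
BUDGETS_DB = {
    "Gen 5": {"root_complex": 9.0, "add_in_card": 9.5, "system": 17.5},  # ~36 dB
    "Gen 6": {"root_complex": 8.0, "add_in_card": 8.5, "system": 15.5},  # ~32 dB
}

def fits_budget(gen: str, losses_db: dict[str, float]) -> bool:
    """Return True if every segment of a candidate channel fits its budget."""
    budget = BUDGETS_DB[gen]
    return all(losses_db[seg] <= budget[seg] for seg in budget)

for gen, parts in BUDGETS_DB.items():
    print(f"{gen}: total budget ~{sum(parts.values()):.0f} dB")

# Example: a hypothetical Gen 6 channel measurement.
channel = {"root_complex": 7.2, "add_in_card": 8.1, "system": 14.9}
print("Fits Gen 6 budget:", fits_budget("Gen 6", channel))  # True
```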

Okay. I talked briefly about the FEC, and I'll address it a little more on this slide. The purpose of the FEC, as I mentioned, is to compensate for the higher bit error rate and dB loss at 64 gigatransfers per second. As a benchmark, the bit error rate requirement for Gen 5 was 1e-12; for Gen 6, the raw error rate allowance is 1e-6, a substantially higher raw error rate, and adding the FEC is what makes that workable. Adding the FEC, along with a stronger CRC, helps provide improved total system uptime. What the FEC and the CRC enable is a reduction in packet replays for bit errors. Through the specification itself, a replay takes less than 100 nanoseconds, which is not insignificant, but it's not detrimental to moving from Gen 5 to Gen 6. You can see from the next bullet here that the CRC is stronger.

The cyclic redundancy check is 8 bytes of error detection across the entire FLIT. Previously, in Gen 5, the CRC was broken into two pieces: the TLP had 4 bytes, and the data link layer packet had 2 bytes. The other requirement: anything you add to the data path adds latency. Could the FEC have been stronger? Yes, it could have, but if it were stronger, it wouldn't have been implementable at 2 nanoseconds of additional latency. That was the other requirement: keep the latency within reason relative to Gen 5 while improving the bit error rate and usability for users.
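For a flavor of what FLIT-level error detection looks like, here is a minimal sketch computing an 8-byte CRC over a 256-byte FLIT. The polynomial used (CRC-64/ECMA-182) is purely a stand-in for illustration; the actual PCIe 6 FLIT CRC polynomial is defined by the specification and is not quoted here.

```python
# Illustrative 8-byte CRC over a 256-byte FLIT. The polynomial below is the
# CRC-64/ECMA-182 polynomial, chosen only as a stand-in: the real PCIe 6
# FLIT CRC is defined by the specification.
CRC64_POLY = 0x42F0E1EBA9EA3693

def crc64(data: bytes, poly: int = CRC64_POLY) -> int:
    """Bitwise (slow but clear) 64-bit CRC with a zero initial value."""
    crc = 0
    for byte in data:
        crc ^= byte << 56
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & (1 << 63) else crc << 1) \
                  & (2**64 - 1)
    return crc

flit = bytes(range(256))        # a dummy 256-byte FLIT, for demonstration
print(f"CRC-64 over the FLIT: {crc64(flit):016x}")
```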

A little bit about power modes now. The PCIe 6 standard power modes are the same power modes as in Gen 5. L0 is normal operation. L0s is standby, transmitting data in one direction: essentially, the device is transmitting and the host is receiving, so on the device side the receiver is turned off, and on the host side the transmitter is turned off. In L1, the transceiver logic is turned off, reducing power. In L2, all transmission is disabled, but power is still active. In L3, the power is actually removed.

You can imagine that each of these states, going from L0 down to L3, reduces the power, as denoted by the little bars on the right of that table, but the exit time is impacted as well. Obviously, in normal operation there's no impact; if you go all the way down to power-off, it's going to take much more time to come out of the low power mode. L0p allows the power consumption to be proportional to the bandwidth. If you have 16 lanes, for example, all running at 64 gigatransfers, that's the maximum data rate traffic that can be supported.

Not all system operations require full utilization of all 16 lanes at 64 gigatransfers. What L0p allows the system to do is reduce the number of lanes that are transmitting data; essentially, by reducing the lanes, you reduce the power. In the bar chart I drew on the right there, going from 16 lanes to 8 effectively cuts the power in half; from 8 to 4, in half again; and from 4 to 2, in half again. The other aspect, as I mentioned earlier, is that L0p can be engaged without having to shut the interface down; it can be done without interrupting the traffic flow. That's a huge benefit for the latency of systems coming in and out of power modes, and it's new for Gen 6.
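A minimal sketch of the proportionality just described, treating link power as scaling linearly with active lane count as in the bar chart; the absolute wattage is a made-up placeholder for illustration, not a measured figure.

```python
# L0p idea: scale the active lane count with demand; power tracks lanes
# roughly linearly, per the halving described above. The 16-lane baseline
# wattage is a hypothetical placeholder.
BASELINE_LANES = 16
BASELINE_POWER_W = 10.0   # hypothetical full-width link power

def l0p_power(active_lanes: int) -> float:
    """Approximate link power with `active_lanes` lanes left running."""
    assert active_lanes in (16, 8, 4, 2, 1), "valid PCIe link widths"
    return BASELINE_POWER_W * active_lanes / BASELINE_LANES

for lanes in (16, 8, 4, 2):
    print(f"x{lanes}: ~{l0p_power(lanes):.2f} W")
# Unlike L1/L2, the width change happens without interrupting traffic flow.
```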

Now let's talk a little bit about design considerations for PCIe Gen 6. The IP needs to support all the specification changes. We talked about 64 gigatransfers and link rate negotiation with PAM-4 signaling. This includes the controller operating at the appropriate clock frequency with the appropriate data path width to support the data transfer, and the physical layer supporting PAM-4 signaling at 64 gigatransfers with all the equalization required for that. Then there's FLIT mode and non-FLIT mode. We talked about backward compatibility and support for prior standards: Gen 5 does not require FLIT mode, Gen 6 does.

The controller has to support both FLIT and non-FLIT modes depending on the mode of operation, plus the lightweight FEC included in Gen 6 and the new L0p power mode optimization. If you can visualize taking your system from 16 lanes down to 2, it's the same inflow and outflow of traffic, just across fewer lanes; internal to the controller and moving into the PHY, that traffic has to be managed, and that has to be implemented all the way through the controller into the physical layer. Some of the IP design also needs to help scale: the option of using standard interfaces like AMBA, or proprietary interfaces, which are typically synchronous in nature.

If they're running at the same input clock rate as the controller itself, you can use synchronous interfaces; a lot of times, though, they're running at different clock rates, so the asynchronous nature of those interfaces needs to be supported. There's also customizing the core data path width and the PIPE interface to support different peripheral clocks and data widths, and leveraging the PAM-4 architecture for other connectivity protocols, as I mentioned, the ability to leverage multi-protocol SERDES. All of these again need to be accounted for in the IP, along with built-in RAS and security features.

From a RAS, or reliability, availability, and serviceability, perspective, monitoring and identifying when there is an error is critical so the system can implement a replay or retry to get that data back, assuming it was lost across the link itself; replaying it can recover the data correctly. It also requires the ability to debug and monitor and to collect telemetry information. Part of this is that these are very high-performance systems and their uptime is critical. Monitoring where these types of errors occur, when they occur, and how often they occur helps the service provider better understand: is there something systemic?

Is there something systemic in the server itself, in the traffic, in the environment, or whatever, and can it be addressed before there is severe downtime? Also, as we move to these higher-speed interface standards, data integrity and data encryption are becoming much more important. For data communicated across any interface nowadays, encryption is something that is very important to manage. Supporting the IDE (Integrity and Data Encryption) engine as an optional block is essential to consider for PCIe 5 and 6 in particular. Now, a little bit about the Rambus PCIe 6 interface subsystem. Here I'm showing a simple block diagram of the controller and the physical layer, along with some of the high-level capabilities; we've already talked about a few of them.

These include options for a native or standards-based interface like AMBA; the PIPE 6.1 SERDES-mode interface between what we refer to as the PCS, shown in the chart as the PCIe PHY layer inside the controller, and the PMA, the SERDES; FLIT and non-FLIT mode; and the optional IDE security engine. The Rambus controller IP also supports generating the controller RTL for either ASIC or FPGA implementation. The benefit of this comes especially in the context of prototyping a protocol in an FPGA-based system before you go to ASIC.

The cost of going to ASIC nowadays, generating mask sets for advanced technology nodes like five and three nanometer, is very expensive, and doing some prototyping in FPGA is one way to mitigate that. The other way of leveraging FPGA is for software development: before you actually get your silicon, you can use the FPGA to develop software. The benefit of the Rambus controller IP is that it generates the same RTL for both targets; from a register set and programming perspective, it's exactly the same. The only primary difference between the ASIC and FPGA implementation RTL is that more pipeline stages are included to support the lower synthesis clock rates of the FPGA fabric. Again, as I mentioned, this is all supported through the configuration GUI.

On the PHY side, it is DSP-based, with FFE and DFE equalizers. It has built-in self-test supporting various loopback modes and capabilities for testing the SERDES, either in a manufacturing environment or at system boot, and it is what we refer to as ATPG-ready: although the logic in the PHY is hardened, it is equipped for ATPG patterns, and it also supports boundary scan and AC-JTAG. There is real-time receive data eye monitoring and shmooing; when bringing up the system and tuning the interface, this capability is invaluable, and it also allows you to debug the system should you need to. And there is per-channel PRBS, pseudo-random binary sequence, generation and checking, in addition to the ability for users to define their own test patterns.
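For readers unfamiliar with PRBS testing, here is a minimal sketch of a PRBS-7 generator. PRBS-7 (polynomial x^7 + x^6 + 1) is one common choice for SERDES test patterns, used here purely to illustrate the technique rather than to state which polynomials this PHY implements.

```python
# Minimal PRBS-7 generator (polynomial x^7 + x^6 + 1) of the kind used to
# stress-test SERDES links: a maximal-length LFSR emits a pseudo-random
# pattern of period 2^7 - 1 = 127 bits that the receiver can regenerate
# and compare bit-for-bit to count errors.
def prbs7(n_bits: int, seed: int = 0x7F):
    state = seed & 0x7F
    for _ in range(n_bits):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1   # taps at bits 7 and 6
        state = ((state << 1) | new_bit) & 0x7F
        yield new_bit

pattern = list(prbs7(127))
assert pattern == list(prbs7(127))   # deterministic: receiver can re-create it
print("first 16 bits:", pattern[:16])
```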

Moving on, coming down to the end here, I want to talk a little bit about what we believe to be the Rambus advantages, the advantages of the SERDES and PCIe IP supplied by Rambus. There are 50-plus ASIC and SoC design wins and over 100 million systems in production, product units that are actually shipping. That's a significant amount, which adds credence to risk reduction. There is a fully integrated and co-validated PHY and controller subsystem offering: the controller and the physical layer together.

Our test chips are developed by integrating the controller with the PHY whenever we develop a new physical layer or protocol stack, and testing it on our validation boards, as you see in some of the images on the right. The Rambus IP, both controller and PHY, has been used in various applications: SmartNICs, storage, networking line cards, 4G and 5G base stations, and so on, by multiple customers in multiple application spaces in multiple products. What this means for the audience is that not only has it worked and is it working in all these different applications, but each of these applications uses the interface differently.

It's a way that we learn in conjunction with our customers, and all that learning gets rolled into future products. With 20-plus years of history in cutting-edge SERDES development, that's a long time and a lot of learning rolled into present-day and future products. Now, rounding out the presentation, some key takeaways to summarize. PCI Express 6 improves over PCIe 5 in various areas: we talked about the doubling of the bandwidth; the higher throughput efficiency, roughly 3x versus PCIe 5, with the implementation of FLIT encoding; and the new low power mode, with uninterrupted transitions into it.

We talked about architecting to support the same channel reach. Again, there's no free lunch: you have to work for it, but it is implementable, and the PCI-SIG did a fantastic job in ensuring the industry can adopt PCIe 6 successfully, using essentially the same form factors and reach as Gen 5. As a quick summary of the Rambus PCIe 6 IP benefits: over 20 years of experience in controller architecture and development, all the way back from Gen 1 in 2003 to where we are today; a tremendous amount of experience rolled into the controller products over time; and a very long history in SERDES, signal integrity, and power integrity, as well as pioneering PAM-4 experience.

As the previous slide indicates, Rambus has developed and deployed to market all these different applications and different types of SERDES, starting with NRZ and moving now into PAM-4. With that, I'd like to thank you for your time today. If you'd like more information on PCI Express Gen 6 or other products Rambus offers, please feel free to visit us at www.rambus.com.
