Matt's going to kick things off with an overview of our data center opportunity. Loi is going to walk us through the architecture of AI data centers. Achyut will dive deep into our interconnect portfolio. Nick will discuss our cloud switching opportunity. And then Raghib's going to bring us home with custom compute. We will conclude the event with a Q&A session. I'd like to draw your attention to our forward-looking statements. As a reminder, this presentation contains projections and other forward-looking statements regarding future events and the financial performance of the company. Such statements are predictions, subject to risks and uncertainties, which could cause actual results to differ materially. Please consider the risk factors in our SEC filings, which could potentially affect our business and financial performance. These filings are available on our website as well as at the SEC.
During this presentation, we may reference certain non-GAAP financial measures. Reconciliations to GAAP are available in the investor relations section of our website. With that, let's kick things off with a short video.
AI has taken the world by storm, bringing the dawn of a new age. The AI era will fundamentally change the way we live and work, yielding changes as profound as those enabled by the PC and the Internet. The foundation of this era is a new computing architecture that's powering the supercomputers that generative AI requires. With the industry's most advanced, most comprehensive accelerated infrastructure portfolio of compute, connectivity, and storage, Marvell is powering the AI era. Every major cloud is undertaking an AI transformation, in what will be a $1 trillion investment. Yet every cloud is unique. That's why Marvell tailors its technology to the differing needs of each one. No matter what they do or how they do it, they can grow their infrastructure more efficiently with Marvell. To meet the needs of today's AI supercomputers, Marvell designs optimized silicon that connects AI clusters and clouds.
Marvell partners with customers to customize AI accelerators and other critical computing elements to meet the specialized needs of any given cloud. Powered by Marvell accelerated infrastructure silicon, the world's largest clouds are enabling the transformation of every industry and every field. As we meet today's challenges, we're investing in the opportunities of tomorrow, innovating to help take humans places that once seemed impossible. Marvell accelerating infrastructure for the AI era.
Please welcome Chairman and Chief Executive Officer Matt Murphy.
All right. All right. Welcome, everybody. Good morning. It's great to be here, and welcome to Marvell's Accelerated Infrastructure for the AI Era event. It's very exciting to be back in New York City with all of you in this very historic location. I want to start out by asking the group a question. Does anybody know what this number behind me represents? Is it the GDP of Italy? Is it NVIDIA's market cap? It's actually both. It's also the amount of data center CapEx that's going to be spent over the next five years to fuel the expansion of AI infrastructure in data centers. It's an astonishing number. So the $2 trillion question everybody's asking is, does this actually make sense? Does the expected economic return on AI justify this outsized investment? I think it's the question that we're all wondering about.
We'll consider the current and future value of AI across numerous industries. It's not just about chatbots or a better search experience; it's going to change the way we live and work. And it's not simply about automating business processes. Companies are using AI to transform how they identify and manage risk, interact with their customers, and accelerate their time to market with new products. Every day, companies, including Marvell, see new use cases emerge in engineering, manufacturing, financial services, health care, and other industries. These are capabilities we never could have imagined. And I believe we're in the early innings of a generational inflection in technology. I was recently at a meeting where McKinsey shared their belief that these innovations will unlock something like $4.4 trillion annually in economic value. Additionally, insights I've gathered from a number of industry conversations suggest projections that are even more ambitious.
I recently had a discussion with the Chief Strategy Officer of one of the world's largest companies. He believes the figure is closer to $20 trillion of potential value capture over a longer period of time. And many of you here in the investment community have cited comparable or even larger numbers. So circling back to the original question, does this CapEx make sense? The answer is yes, it absolutely makes sense, and it will be financed through massive gains in productivity and efficiency. So, look, we can all debate the size. Is it $1 trillion? Is it $10 trillion? Is it more? What we do know is that there's a multi-trillion-dollar opportunity out there. So with that context, if you look at the technology investment cycle that's going to happen over the next 10 years, the CapEx being deployed makes a lot of sense.
So we see this as a very, very real opportunity, and we're as well positioned as any company in technology to take advantage of it. We believe it's as consequential as the advent of PCs, the Internet, or cloud computing. What you're going to hear today from me and my team is how the investment cycle in front of us in data center is going to flow massively into semiconductor companies. We believe Marvell will be more levered to the spend on AI than any other company except one. Every day I open the Wall Street Journal or watch CNBC, and what's everybody trying to figure out? Where's the next best place to invest for AI right now? And I'm here to tell you: it's Marvell. And you're going to hear all about it today.
We've already started to see the benefit of this AI cycle translate directly into Marvell's revenue. Last year, we did over $550 million in AI-related revenue, or about 10% of our company. That's almost triple the prior year, when it was about 3% of revenue. Now, the $550 million last year was almost all connectivity, including optics and some switching, and that business will nearly double this year. Then, if you layer on custom silicon, which is in blue, we see our AI revenue this year almost tripling again to over $1.5 billion, with about two-thirds being connectivity and one-third being custom compute. So if we take that $1.5 billion against consensus estimates, AI will be close to 30% of Marvell's total revenue this year. And that's going to continue to grow.
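To make those figures concrete, here's a minimal back-of-envelope sketch; the round AI revenue numbers are from the talk, but the consensus total-revenue figure is a hypothetical placeholder, not guidance.

```python
# Back-of-envelope check of the AI revenue figures quoted above. The
# consensus total-revenue number is a hypothetical placeholder, not guidance.
ai_rev_last_year = 0.55e9      # ~$550M, ~10% of company revenue
ai_rev_this_year = 1.5e9       # >$1.5B projected
consensus_total_rev = 5.2e9    # hypothetical consensus estimate for the year

print(f"YoY AI growth: {ai_rev_this_year / ai_rev_last_year:.1f}x")  # ~2.7x, 'almost tripling'
print(f"Connectivity: ${ai_rev_this_year * 2 / 3 / 1e9:.1f}B (two-thirds)")
print(f"Custom:       ${ai_rev_this_year * 1 / 3 / 1e9:.1f}B (one-third)")
print(f"AI share of total revenue: {ai_rev_this_year / consensus_total_rev:.0%}")  # ~30%
```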
We see $2.5 billion as a solid base case for next year, with upside if the market grows faster. Right now, it's a little early to call the exact split; I know everybody wants the exact split for the future, and I'm not ready to call that yet, but we will be updating you along the way. Now, a couple of things about these estimates I'm showing you. First, on the custom silicon side, the vast majority of our total custom revenue at Marvell is going to be in AI, just to clarify that. Second, for our PAM4 DSP chipset and DCI revenue, we only capture the AI-specific revenue in these numbers. So what you're looking at up here today is AI revenue only.
The remaining revenue shows up in cloud infrastructure, and you'll see those numbers flow through when I start talking about the bigger picture data center opportunity for Marvell. As you already may know, GenAI and the examples I mentioned are made possible by accelerated computing. Accelerated computing delivers the extraordinary computing power that's required, and these groundbreaking applications would not exist without it. What you might not realize is that accelerated computing would be impossible without the underlying accelerated infrastructure to support it. It's not just the power of the individual computers that makes this possible, but rather entire data centers full of computers connected through a massive data infrastructure. The reality is that there's a significant amount of connective tissue surrounding the compute to move, store, and process the data required to keep these systems running. That's where we come in.
At Marvell, we specialize in building the infrastructure for accelerated computing, which we refer to as accelerated infrastructure. All right, so how did we get here? On the left, this is Marvell's mission statement that we created in January 2017. It's very simple. Our goal is to be the world's pure-play chip company to move, store, process, and secure the world's data. And our thesis at that time, when we came up with this mission statement, was that the biggest TAM growth in front of us in the semiconductor industry was going to be in data infrastructure. And that was going to be driven by the growth in all the data platform companies who were emerging. And that strategy has proven to be remarkably sound. Our mission statement has not changed in seven years. In fact, it's only become even more relevant now for the era of accelerated infrastructure.
If you look at the foundational pillars you need for data infrastructure, they're compute, connectivity, and storage. Now, both our security and storage businesses will get a tailwind from AI; these are standard products across the data center market, but we're going to set them aside today for the purposes of the discussion. In the past several years, we've built leading franchises in each of these product categories. Today, we're positioned to be the semiconductor industry's accelerated infrastructure leader, because accelerated infrastructure is data infrastructure in its most powerful and technologically advanced form. And AI is the most data-hungry application the world has ever seen. Our journey has led us here. We built this company to address this extraordinary opportunity. Today, we're going to focus on the opportunity in the data center, where AI has reached a critical inflection point.
AI infrastructure can be broadly categorized into these three areas: compute, connectivity, and storage. As I mentioned, storage is going to be common across AI and all other types of cloud, so we're not going to really do a deep dive on that today. It is a core business at Marvell where we do have a leadership position. The next is interconnect, where we provide the world's leading platform of physical layer connectivity solutions, independent of the networking layer or the compute layer. At the switching layer, Marvell provides one of the world's leading Ethernet switching platforms. You'll hear today that there are actually multiple different networks inside these AI data centers. Ethernet is the preferred choice in many of these networks. It stands out as the world's most widely embraced interoperable network layer. Now, for compute, XPU refers to GPUs, CPUs, and DPUs.
It's all the computing that's needed for these intensive data processing tasks. Here, Marvell is focused on building custom solutions. The architectures of these large cloud companies are completely different. I mean, they actually design and build their own individual data centers with domain-specific infrastructure optimized for their own applications. So every hyperscale data center today is building or planning to build their own compute silicon for a portion of their workloads, and Marvell is an ideal partner for these customers. So let's walk through this one by one, starting with Interconnect. These AI data centers are complex. Right now, there's no unified architecture. And even within a particular data center, there are a multitude of different clusters connected together. Now, you're going to hear more about this architecture from Loi, but for now, just take note of how many links there are.
Everywhere you see a link, you should think Marvell. Each of these links has a certain bandwidth, distance, and power requirement. As you can see, an optical module is located at each end of these individual links. You'll hear from Achyut about how Marvell is innovating and optimizing solutions for every type of link imaginable. But you might be wondering, well, why so many links? Why so much variety in terms of speed and distance and power? The reason lies in the fundamental difference between accelerated and general-purpose infrastructure. In general-purpose computing, a single workload is processed on a single processor or a fraction of a processor by using virtualization. That's not feasible with these large and demanding AI workloads. They cannot be easily decomposed to fit on a single processor.
So they require a huge number of interconnected processors working together to manage a single workload, anywhere from dozens to tens of thousands, hundreds of thousands, or more. And in this context, the connectivity between the processors effectively becomes part of the compute itself. It directly impacts the time it takes to complete a calculation. So with accelerated infrastructure, the compute and connectivity are fundamentally linked, and innovation in connectivity is needed at the same rate as the innovation in compute. Now, to understand the opportunity for connectivity within accelerated infrastructure, we just need to do some simple math. OK, more accelerators drive more ports to be connected. There are typically multiple ports per accelerator, so as the number of accelerators grows, so does the number of ports. At the same time, accelerators keep getting more powerful, requiring more bandwidth to keep them processing data. So faster accelerators require higher-speed ports, and that drives the associated content growth to deliver the bandwidth. Put it all together, and this yields exponential growth in the connectivity of these hyperscale data centers.
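Here's that "simple math" as a minimal sketch; the cluster sizes, port counts, and per-port speeds are hypothetical examples chosen for illustration, not Marvell data.

```python
# Illustrative sketch: total connectivity demand is the product of
# accelerator count, ports per accelerator, and port speed.
# All figures here are hypothetical examples, not Marvell data.

def cluster_bandwidth_tbps(accelerators: int, ports_per_accel: int, gbps_per_port: int) -> float:
    """Total back-end bandwidth that must be interconnected, in Tb/s."""
    return accelerators * ports_per_accel * gbps_per_port / 1000

gen1 = cluster_bandwidth_tbps(accelerators=1_000, ports_per_accel=4, gbps_per_port=400)
gen2 = cluster_bandwidth_tbps(accelerators=25_000, ports_per_accel=8, gbps_per_port=800)

print(f"Gen 1: {gen1:,.0f} Tb/s")  # 1,600 Tb/s
print(f"Gen 2: {gen2:,.0f} Tb/s")  # 160,000 Tb/s -- 100x in a single generation step
```

Because all three factors grow at once, the totals compound multiplicatively, which is what produces the exponential curve.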
This is a large and rapidly growing market and a massive opportunity for Marvell. Achyut will talk more about this in his presentation this morning. Now, moving on to the switching layer. Ethernet switching is an essential part of our accelerated infrastructure offering, and Marvell has one of the largest Ethernet switching businesses in the industry today. Our cloud switching portfolio, Teralynx, came to us with our acquisition of Innovium. We've since combined Marvell's switching team with Innovium's, significantly increased our resources on Teralynx, and accelerated our roadmap to position ourselves for the AI opportunity in front of us.
We're in high-volume production today on our 12.8T generation, and our new 51.2T product is ramping into production soon. The whole combined team worked on this together. Just to give you a sense, this is a reticle-sized chip with over 60 billion transistors. We went from 16-nanometer technology at 12.8T directly to 5-nanometer technology at 51.2T. We moved from third-party SerDes to Marvell's best-in-class SerDes. We doubled the I/O bandwidth and quadrupled the transistor count. This product has been received extremely well. Nick will talk about how we're scaling up this business to win in data center switching. OK, now let me take you back in time a little bit. I see a lot of familiar faces, so some of you were there.
All the way back at Marvell's 2018 Investor Day, we shared our vision of what we believed to be the future of computing in the data center. We had just closed the Cavium acquisition, and as you can see, back then, six years ago, we were focused on DPUs, security and network offload, Arm-based CPUs, and AI. So again, that was our point of view six years ago. Now let me tell you what's transpired since then. In 2019, we went to market with these products, and we got very clear feedback. The strategy was right. Customers liked the technology. But uniformly, what we heard is that every one of these companies was going to make custom silicon a priority over the long term. So we acquired Avera, and we made a number of other organic investments.
And then we outlined all of it for you at our 2021 Investor Day with our cloud-optimized silicon strategy and platform. We also outlined a set of design wins back then and projected that, in the FY2025-to-FY2026 time frame, which is where we are now, they would grow to around $800 million of annual revenue at full ramp. We're going to be at that run rate by the end of this year, and we're going to blow past it next year. To be clear, there's a broad opportunity for customization. Today, accelerated compute for AI is hot, and it's driving most of the volume and revenue. However, there are other important custom computing applications. Every one of the large hyperscale companies is working on some or all of these applications in some way. And we're strategically engaged with every customer.
So the result of all this is that there's a tremendous amount of design activity right now across all these customers. Let me give you some simple math on why that is. First, you have multiple customers in this market. Then you have all the applications I showed you on the previous slide, and some of these have multiple SKUs per application for different performance points. This business is also multi-generational in nature: while you're working on the current version, you're typically also working on the next one. So when you put it all together, there are a lot of projects and opportunities in flight at any given time. The cadence is not just one AI chip for one customer every four years. It's an amazing opportunity right now.
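A minimal sketch of that multiplication, with every count below a hypothetical illustration rather than a disclosed figure:

```python
# The funnel math: opportunities multiply across customers, applications,
# SKUs, and overlapping generations. Every count below is hypothetical.
customers = 4        # hyperscalers engaged
applications = 3     # e.g., AI accelerator, CPU, DPU per customer
skus_per_app = 2     # performance variants
generations = 2      # current ramp plus next version in design

projects_in_flight = customers * applications * skus_per_app * generations
print(projects_in_flight)  # 48 concurrent opportunities, not one chip every 4 years
```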
As I said, the biggest opportunity in custom silicon today is within the AI compute silicon itself. Now, we've shared previously that we had won two sockets at two different customers. I'd like to expand on that today. The first socket is an AI training accelerator for a U.S.-based hyperscaler. The customer is using the chip in their AI clusters and systems, and it's ramping incredibly fast. The partnership, teamwork, design, and qualification on this product have been a huge success, and I'm very proud of the accomplishments of the combined team on this effort. In addition, as part of the same development process, we're planning to ramp the AI inference accelerator next year. Given all this, we now have multiple years of visibility on this particular program, and we expect revenue to continue in the next generation as well.
The second customer design is an Arm CPU for a second U.S. hyperscaler. This will be deployed in their general cloud computing platform as well as in their internal AI infrastructure. Both of these sockets, the AI training accelerator and the Arm CPU, are in production now and ramping for revenue this year. All right, so today I'm excited to share some new news with you. And it wouldn't be a Marvell Investor Day if we didn't give you a little bit of new news, right? So the new news, and it's great news for us, is that we've won a third U.S.-based hyperscale customer for AI. It's for an AI accelerator. It's in design now, and the customer wants to take it to production in 2026. It's clear that our engagement model with these customers is evolving.
These initial wins have now turned into engagements that span multiple products and multiple generations. If we take a step back, Marvell now has design wins that are either going into production now or in design with three of the four U.S. hyperscalers. Our design win funnel is up 8X from where it was when we started this custom business just five years ago. Let me talk a little bit about how we support this massive set of designs, and give you a sense of the R&D scale of Marvell today supporting the whole portfolio. First of all, we're a pure-play data infrastructure company, and our R&D spend is approximately $1.5 billion a year.
Now, the actual R&D is even bigger than that, because we receive NRE funding from our customers for these custom projects we just talked about. We capture that at Marvell as an offset to R&D; we don't capture it as revenue in the top line. And all of this R&D goes only into data infrastructure applications. So if you look at our largest peers, we're very competitive with them in terms of our R&D profile for the opportunity we're talking about today; it's right in the same ballpark. And compared to the rest of our smaller, less scaled-up peers, it's not even close. The reason this is important to our customers is that they want to know that their key partners have sufficient R&D scale and commitment to this market long term. And we're as scaled up as any major semiconductor company in this area.
So the benefit of this partnership is multifaceted. As an extended part of their R&D team, we're working hand in hand with our customers to co-architect their next-generation data centers. And by having this strategic position on the custom compute side, we gain unique insight into the next-generation architecture requirements, not just for the custom compute, but for all the connectivity, the higher-layer switching, and our customers' overall plans for their next-generation AI architectures. Having this kind of deep partnership and unique insight gives Marvell a significant advantage over our competition. So let me tell you now how we invest to win in this market. First, you need an immense amount of IP and technical capability, and our team has been very thoughtful, very deliberate in recent years about building our technology platform.
We've assembled a powerhouse of infrastructure technology, second to none in the industry. Marvell is building some of the most complex digital products in the world. This includes chips that are among the biggest in the industry. To thrive in this business, we need to operate also at the leading-edge process node. Building on the success of our 5 nm and 3 nm portfolio, we're now aggressively investing at 2 nm. Our SerDes technology is world-class. That's why every single hyperscale data center operator today relies on it. It goes beyond IP. We have best-in-class packaging technology, electro-optics technology, analog capabilities. We focus on meeting customer needs for low-power design, seamless interoperability, and more. There are precious few companies that can match Marvell's technology assets and capability. Now, we're making this investment, this huge investment, because we believe this is the biggest opportunity in the semiconductor industry in decades.
Let me talk to you about how big that opportunity can be. OK, let's take a step back and look at the big picture. Last year, total data center CapEx was about $260 billion. Of course, some of that is in buildings and physical infrastructure; if you take that out, it gets you to about $197 billion, and that's the infrastructure equipment TAM. Then, if you break out semiconductors, that gets you to $120 billion. Now, we don't play on the analog and memory side. If you break the $120 billion down further into the core semiconductor TAM, excluding analog and memory, it was an $82 billion opportunity last year, and it's growing very fast. Now, let's break that $82 billion down further into the categories that we've been talking about that Marvell addresses. Compute is the largest portion, at $68 billion.
I'm going to drill down on that a little bit more in just a moment. Interconnect was $4 billion last year. Switching was $6 billion. Storage was about $4 billion. So let's talk about that $68 billion market now. First of all, it's growing very fast. It's expected to become a $200 billion opportunity by 2028, with a 24% compounded annual growth rate. Now, some people believe that this number is even larger than the $200 billion. For us, that would be even more exciting if that was the case. But right now, we're using analyst estimates. We're using our own view. We're setting this as our base case scenario. So when you break down the $68 billion, $26 billion is in general-purpose computing. $42 billion is accelerated.
General-purpose compute, if you look over the long term, is expected to remain relatively flat, while accelerated compute is essentially driving all the growth with a 32% CAGR. Now, let's talk about what portion Marvell can address. Remember, our focus is on the custom portion of accelerated compute. If you look at this past year, about 16% of the TAM was already in custom. It was about $6.6 billion. If you go out to 2028 and you assume custom maintains the same share, that becomes a $27 billion opportunity. Now, we believe custom is probably going to gain share over the next few years because, ultimately, most of these hyperscalers are just getting started. Right now, we're estimating that about 25% of that will be custom. That gets you closer to a $43 billion market opportunity.
So in the end, depending on your assumption, the market for custom is going to grow anywhere between 30% and 45% a year, compounded. And if you take a step back and just think about this for a minute, in either of these two scenarios, you actually get a custom compute market that's as big as or bigger than the general-purpose compute market by 2028. So with that in mind, let's now look at Marvell's total opportunity in data center. Starting with storage: it was a $4 billion market last year, growing at about a 7% CAGR to roughly $6 billion by 2028. Historically, that's a relatively typical growth rate for storage in data centers. Interconnect is very fast-growing, at a 27% CAGR; Achyut and Loi will talk to you today about the key drivers. But basically, you have a $4 billion market growing to $14 billion by 2028.
Switching is a $6 billion market growing to $12 billion; that's 15% per year. And with that foundation, let's add the $43 billion I showed you earlier. That gets us to a $75 billion total available market for Marvell in the data center in 2028, growing at a CAGR of almost 30%. So this is a massive opportunity. It's a massive opportunity. And let me show you what that means for Marvell. OK, now, to start off, let me put this in context for everybody. Last year, the TAM that we're talking about was $21 billion, and Marvell did approximately $2.2 billion in revenue; that's about 10% share. Going forward, we have very aggressive plans to grow the business across each of these categories. And you're going to hear from my team about our plans to address each of these markets.
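As a sanity check, here's a short sketch that adds up the segment TAMs just quoted and derives the implied CAGRs; all inputs are the round numbers from the presentation.

```python
# Adding up the segment TAMs quoted above ($B) and deriving the implied CAGRs.
segments_2023 = {"custom compute": 6.6, "interconnect": 4, "switching": 6, "storage": 4}
segments_2028 = {"custom compute": 43, "interconnect": 14, "switching": 12, "storage": 6}

def cagr(start: float, end: float, years: int = 5) -> float:
    return (end / start) ** (1 / years) - 1

print(f"2023 TAM: ${sum(segments_2023.values()):.0f}B")  # ~$21B
print(f"2028 TAM: ${sum(segments_2028.values()):.0f}B")  # $75B
print(f"Overall CAGR: {cagr(21, 75):.0%}")               # ~29%, 'almost 30%'

# Custom compute scenarios: share held at ~16% vs. rising to ~25% of accelerated
print(f"Constant share: {cagr(6.6, 27):.0%}")  # ~33%
print(f"Share gains:    {cagr(6.6, 43):.0%}")  # ~45% -- the 30%-45% range quoted
```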
In accelerated compute, we have line of sight to gain share significantly based on those design wins that I talked about earlier. In switching, we also expect to gain share. Now, we're a relatively small part of that market today, but we're making a big investment, and we're getting a lot of traction. We already have a leadership position today in interconnect, so for now, we're just assuming in our base case scenario that we'll maintain the leadership share that we have. And then in storage, we're also planning to maintain our share position. So what does that mean? Well, when you add it all up, our goal at Marvell is to double our share over time, from 10% to 20%, over the long term. That's what I'm driving the team towards. That's what we're all committed to here. You're going to hear from the team.
There's almost no other company out there that I can find that has the opportunity to grow their market by 3-4 times over the next 5 years and double their market share at the same time in an enormously large market. That's Marvell's opportunity. That's the opportunity in front of us right now. I want to take a moment to thank you all for being here. It's an incredible time to be in the semiconductor industry. It's an incredible opportunity for Marvell. Our company was purpose-built for this moment. I hope that you enjoy the rest of the day and that by the end of today's session, you'll be as excited about the future of Marvell as I am. Thank you all very much.
Please welcome Executive Vice President and General Manager, Cloud Optics, Loi Nguyen.
Good morning, everybody. I see many familiar faces. But for those of you who don't know me, I was a co-founder of Inphi, so I joined Marvell as part of the Inphi acquisition about 3 years ago. Really happy to be with Marvell; look at the opportunity that Matt just showed. It's just amazing. I've been in this industry for 25 years, all my professional life, working on high-speed interconnects. So really excited to be here to talk about interconnects and how they're enabling AI. So let's get started. I love this image. I don't know if anybody recognizes it. It's the code-breaking machine that Alan Turing built during World War II. Mr. Turing did not have an AI accelerator as we have today, but he was able to build a very fast machine that broke the Nazi codes by using massively parallel systems and lots and lots of interconnects.
Today, 80 years later, we are doing the same thing, but the interconnects today for AI are high-speed optical. Marvell is a leader in high-speed optical. So let me show you what is driving the pace of innovation in high-speed optical. Let's take a look. I love this music. 2023 was truly ground zero for AI. Before AI, interconnect speed was driven by cloud data center server upgrades, and that happened once every four years. Every time it happened, the speed doubled. Today, the speed is driven by AI, which moves much faster; we see the speed now doubling every two years. And customers actually say, "I want it sooner. Can you do it?" Well, let me talk about why optical. There's copper and other technologies, and copper has been around for a long time; it's low-cost and cheap.
But optical is the only technology that can give you the bandwidth and the reach needed to connect hundreds, thousands, and tens of thousands of servers across a whole large data center. No other technology can do the job except optical. Last year, GPT-3 was trained on a 1K cluster using about 2,000 optical interconnects. Today, as we speak, GPT-4 is being trained on a 25K cluster, 25 times larger, and that requires about 75,000 optical interconnects. The models will keep getting larger and larger. We see the 100K cluster coming soon, and that may require 5 layers of switching, so maybe 500,000 optical interconnects. And people are talking about a one-million cluster. I can hardly imagine a one-million cluster, but that's the kind of number people are talking about today, and that may require 10 million optical interconnects in a single AI cluster.
So I know there's a question I get asked all the time: how many optical interconnects per accelerator? These numbers should only be used as a guide. A small cluster, like 128 accelerators, you can connect with one layer of switching, so that's 1-to-1 based on today's architecture. A medium-sized cluster, say 1K, will require two layers of switching, so that's 2-to-1. And a large cluster that we know how to build today, 25K, will require three layers of switching, so that's 3-to-1. So the ratio went from 1-to-1 to 2-to-1 to 3-to-1, and in the future 5-to-1, and it could be even 10-to-1. No matter how you look at it, optical interconnects will grow faster than accelerator count in an AI cluster. OK?
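Here's that guide expressed as a minimal sketch; the one-link-per-accelerator-per-layer rule is an approximation, and the layer counts per cluster size follow the examples in the talk.

```python
# The ratios above as a sketch: roughly one optical link per accelerator
# per switching layer. Layer counts follow the examples in the talk.
def optical_interconnects(accelerators: int, switch_layers: int) -> int:
    return accelerators * switch_layers

for accels, layers in [(1_000, 2), (25_000, 3), (100_000, 5), (1_000_000, 10)]:
    print(f"{accels:>9,} accelerators x {layers:>2} layers -> "
          f"~{optical_interconnects(accels, layers):,} optics")
# ~2,000 / ~75,000 / ~500,000 / ~10,000,000 -- matching the counts quoted earlier
```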
The second question that I often get asked is: what about training versus inference? What are the differences? Which drives more optical interconnects? Well, the answer is actually very simple. For training large models, you want the largest cluster you can lay your hands on, that you can afford, actually, and that has availability. So 25K, 50K, 100K, 1 million, whatever. But there are only a few of them around the world. For inference, the size of the machine varies depending on what you're trying to do with it and on the vertical. But you need a lot of them deployed globally to actually monetize AI. So the net of it is: training, large clusters, a few of them; inference, smaller clusters, but lots of them. Both will drive a massive amount of optical interconnects. Next, I want to talk about the need for new infrastructure for AI.
Matt talked about the $2 trillion to be spent over the next few years. This is a map of the world's data centers today. There are about 6,000 data centers, some large, some small, and you can see they're concentrated in, number one, wealthy countries and highly populated countries. That will change as we move forward: that $2 trillion spend is going to be spread more globally. And the reason for it is, number one, power. AI servers consume more than 10 times the power of a general-purpose server, so you need to deliver a lot of power. A typical data center today is 32 MW; people are building 1 GW data centers as we speak, in locations you've never heard of.
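To see why power dominates siting decisions, here's a hypothetical capacity check; the per-server wattages are illustrative assumptions, not measured figures.

```python
# Why power reshapes the map: a hypothetical capacity check. The per-server
# wattages are illustrative assumptions, not measured figures.
general_server_kw = 0.5                  # assumed general-purpose server draw
ai_server_kw = general_server_kw * 10    # ">10x" from the talk -> ~5 kW

for name, megawatts in [("typical 32 MW facility", 32), ("1 GW facility", 1_000)]:
    servers = megawatts * 1_000 / ai_server_kw
    print(f"{name}: ~{servers:,.0f} AI servers")
# ~6,400 vs. ~200,000 -- most existing sites simply can't host large AI clusters
```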
But power aside, privacy laws, national security, and sovereignty require AI clusters to stay within borders. So you will see a lot more data centers built in existing locations as well as in new locations that don't have data centers today. Timing-wise, two days ago, my wife sent me a note saying, "Look at this, Loi. Microsoft just announced that they are investing $2.9 billion to build the largest AI data center in Japan." When you look at the map, there are a lot of dots in Japan already. Why does Japan need new data centers? For the simple reasons I cited: the existing data centers don't have the capability, and so on, that AI requires. And the $2.9 billion Microsoft announced is going to be spent over the next two years, and it will be the largest investment Microsoft has ever made in Japan. So this is just one proof point.
But you will hear more and more about it. OK? So the upshot of this $2 trillion spend: you're going to have more data centers in more locations, and all of that drives a lot of interconnects, both inside the data center and between data centers, the market that Marvell is serving today. All right, so that's the setup. What I want to spend the next few minutes on is the accelerated infrastructure that Matt talked about. There's a lot of confusion about all the networks that are needed: the back-end network, the front-end network, the compute fabric, the DCI. What do all these things do? I hope that after my session, we will all share a common language: what each of these networks does, and whether it runs over optical or over copper. All right?
So bear with me. Here's the AI server. Inside, you have a number of accelerators connected together by very high-bandwidth fabrics. This is often referred to as the compute fabric. These are very high-speed fabrics, but over very short distances: copper traces on the board. And the protocols are NVLink, Infinity Fabric, and PCIe. So whenever you see NVLink, you know these are short-distance links over copper. Now, recently, this has been extended to within the rack, but it's still in the 1-meter range. So let's just be clear: today, Marvell doesn't play in this compute fabric. It's copper, passive. All right? Now, how do you connect the AI server I just talked about to 1,000 other servers in a data center network? You use what we call the back-end network. So how do you get data into the back-end network?
Well, every accelerator has its own network interface card, or NIC for short. Every NIC is connected to an optical module, and that module is connected to other modules in the switches and in other AI servers. Remember that: the back-end network is where you connect AI servers to other AI servers. The protocols are InfiniBand or Ethernet, over optical. Whenever you see InfiniBand or Ethernet in an AI cluster, you should think back-end network, over optical. This is where Marvell plays; we are the leader in that space. Now, next: how do you get data in and out of an AI server? Not through the back-end network. To get data in and out of an AI server, you go through the front-end network. This is where you see CPUs inside an AI server.
Typically, there could be one CPU, could be two CPUs, whatever the number is. Each CPU has its own NIC, and every NIC is connected to its own optical module. This is how an AI server is connected to the rest of the data center: storage, other switches, and so on. So those are the three types of connections that go into an AI server. And the front-end network is always Ethernet over optical. So let's see how all of these AI clusters are connected together on the back end. This illustration shows thousands of AI servers connected together via two layers of switching in the back-end network. All the blue links here are optical. A data center may have more than one AI cluster, so let's look at an example here: three AI clusters. How are they connected?
They are connected together, and to the rest of the data center, through the front-end network. To show the front-end network, there's a bunch of general-purpose CPU servers on your right. Because there are so many elements within the data center, there may be four or five tiers of leaf switching in the front-end network, so there are a lot of optical interconnects, Ethernet, actually, sitting in the front-end network too. But you can see here why AI is driving so many optical interconnects. On the left, you have the back-end network associated with each cluster. On the right, you have the general-purpose servers connected only to the front end. This whole back end is all driven by AI; that's why AI is driving so many more optical interconnects. All right. How does data get in and out of a data center?
You need another network, called the data center interconnect, the yellow one on top. That's a roughly 100-kilometer link connecting data centers within a region. So now, as a recap, there are four networks in total. The compute fabric, over copper traces inside a server or within a rack: Marvell doesn't play there today. The back-end network, InfiniBand or Ethernet over optical: that's mostly all Marvell. The front-end network, again Ethernet over optical: Marvell plays a very large role. And the DCI network: also where Marvell has leadership and plays a very large role. So I hope you all get it. There will be a quiz at the end of the session on these four networks. All right?
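For reference, here's the recap as a small lookup table, a sketch only; reach and protocol details vary by deployment, and the distance figures echo the ranges quoted elsewhere in these presentations.

```python
# The four networks as a lookup table -- a sketch; details vary by deployment.
NETWORKS = {
    "compute fabric": {
        "connects": "accelerators within a server or rack",
        "media": "passive copper traces/cables, ~1 m",
        "protocols": ["NVLink", "Infinity Fabric", "PCIe"],
        "marvell_today": False,
    },
    "back-end": {
        "connects": "AI servers to other AI servers",
        "media": "optical, < 2 km",
        "protocols": ["InfiniBand", "Ethernet"],
        "marvell_today": True,
    },
    "front-end": {
        "connects": "AI servers to the rest of the data center",
        "media": "optical, < 2 km",
        "protocols": ["Ethernet"],
        "marvell_today": True,
    },
    "DCI": {
        "connects": "data centers within a region",
        "media": "coherent optical, ~100 km and beyond",
        "protocols": ["Ethernet"],
        "marvell_today": True,
    },
}

for name, net in NETWORKS.items():
    print(f"{name}: {net['connects']} ({net['media']})")
```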
All right. Now, let me talk about the Marvell silicon TAM. Matt told you that every time you see a link, you should think of Marvell. Every link here has an optical module at each end, and inside the optical module you have the DSP, the TIA, and the driver. And today, we're going to add another chip, silicon photonics, to the toolkit; you'll hear more about that from me in a later section. But I want to spend a few minutes on the DSP in particular. A year ago, there was a lot of noise; some people were saying, "You don't need the DSP. Remove the DSP, delete the DSP to save power," and so on. Right? Many of you who attended OFC this year heard that the customer has spoken. The largest hyperscale provider in the U.S. has spoken: they need the DSP.
You saw what I showed about the network. There are hundreds of thousands of these optical interconnects within a single cluster. Who's going to go and hand-tune every channel to get the optimal solution? No. Time is the most precious commodity in deploying these clusters. In fact, one hyperscale provider said that LPO, because it's so nonlinear, would set the industry back to the Stone Age: no telemetry, no diagnostics, no interop, and so on and so forth. OK? So you'll hear more from my colleague, Achyut, about Marvell interconnects. And then switching: you heard that Marvell acquired Innovium, which brought us the Teralynx line of switches, and we are gaining share in that market. You'll hear from my colleague, Nick; it's a really exciting area. Last but not least, there's the AI server itself.
Today, the majority of accelerators being shipped are using merchant silicon. Marvell doesn't do merchant compute silicon, as Matt just told you, but Marvell does custom compute. And when you have custom compute, all of the blocks within the server become Marvell TAM. You will hear about that from Raghib. In fact, custom compute, as Matt showed you, is the largest TAM Marvell can address over the next few years. When you look at all these things, these are all sockets that Marvell is addressing today, and that's growing to a gigantic $75 billion TAM in the next few years. And that's it for my setup today. I hope you enjoyed the talk, and I hope that from now on, you will know everything there is to know about the four networks within accelerated infrastructure.
With that, I'd like to hand it over to my colleague, Achyut, to talk to you about interconnects. Please welcome Senior Vice President and General Manager of Connectivity, Achyut Shah.
Good morning, everyone. I'm Achyut Shah, the Senior Vice President and General Manager for Marvell's Connectivity Business. Did you know that every single large language model today runs on compute clusters enabled by Marvell's connectivity silicon? I'm going to talk to you today about the fantastic opportunity that lies ahead of us. A quick introduction of myself: I started my career over 25 years ago. After getting a degree in electrical engineering, I joined Maxim Integrated, in their optical business unit. The products we were working on at the time were 100 Mb/s. The dream, the vision at the time was: can we get 1 gigabit over optical?
Fast forward to today: we have links of more than 1 Tb/s over optical, 10,000 times that 100 Mb/s. Still not enough for our customers. I spent 20+ years at Maxim, where my last role was General Manager of the Cloud and Data Center Business Unit, which created optimized silicon for both the power and the optical products within the data center. In 2020, I joined Marvell to lead the Physical Interconnect Business Unit. Shortly after the acquisition of Inphi, I was tasked with integrating the Inphi optical DSP team with the Marvell business unit to form what we now call the Connectivity Business at Marvell. I'm very thankful to Ford and to Matt for trusting me with this opportunity and giving me this responsibility.
And I'm very proud of the team that we formed: a cohesive, collaborative team of industry leaders in interconnect technology, with expertise across a wide variety of analog, digital, mixed-signal, DSP algorithms, firmware, and software. This team has come together to provide best-in-class solutions for our customers over multiple generations, making Marvell the industry leader in this interconnect space. Our customers' bandwidth needs are insatiable. I'll walk you through today what our customers are asking for, the underlying market trends driving these needs, and how Marvell is very well positioned to capitalize on this opportunity. Let's first take a look at where these networks are deployed and what actually goes into creating an interconnect. Loi walked you through the back-end, front-end, and DCI networks, the logical flow of the network hierarchy.
When you look at it from a physical perspective, you can think of interconnects within the building, within the data center, and interconnects leaving the data center. Within the data center, you have the front-end and the back-end networks. These links typically span less than two kilometers and use a signaling scheme called PAM. The interconnects leaving the building run much, much longer distances, hundreds or thousands of kilometers, and use a much more complex signaling scheme called coherent. But regardless of the distance, the optical interconnect comes down to optical modules connected by a length of fiber. These optical modules have multiple components in them: the DSP, the transimpedance amplifier, or TIA, and the laser driver. Now, what do these products do? The initial use case for a DSP is very simple.
Everybody understands it. You have the SerDes technology, and you have some complex signal processing to make sure the signal gets from one end to the other with no errors. But there are two other use cases of these DSPs that are less clearly understood, less visible, but equally important for our customers: scale and reliability. When our customers deploy these networks, they don't deploy hundreds or thousands of units at a time. They have these massive data center clusters, tens of thousands, hundreds of thousands, millions of units, that all need to work and come up at the exact same time, across multiple locations in multiple data centers, connecting multiple endpoints, using multiple kinds of optics, with tons of manufacturing variability between them. The DSP helps to make sure that, at this vast scale, you don't have to fine-tune every link by hand.
It all comes up, plug and play, and works when you need it to. The second function the DSP provides is reliability. When a customer is running these very large language models on these massive clusters, the training jobs take weeks, sometimes even months, to run. In that time, if even one link goes down for an instant, the entire job collapses. You lose weeks or months of work, with a huge loss of revenue and profit. So these DSPs have intelligence in them: diagnostics, telemetry, system-level intelligence that checks the quality of the link and adds margin where needed to make sure the links stay up.
They can also detect an impending catastrophic failure and tell the network, so that customers can cut over to a redundant link and the job doesn't go down. So the DSP is the incredibly important heart of the network. You also have the laser driver, which takes the output of the DSP, amplifies it for the different kinds of lasers, and transmits it to the other end of the link. And the transimpedance amplifier takes the received signal, amplifies it cleanly, and passes it on to the DSP for post-processing. Behind all of these solutions, the DSP, the TIA, the driver, is a very complex set of underlying technology. You have leading-edge digital products.
You have coherent IP, PAM IP, signal processing IP, and error correction, which we are currently shipping in 7 nanometers; 5 nanometers goes to production this year. We're actively developing these DSPs in 3 nanometers and already investing in 2 nanometers for the future. On the other hand, you have very complex, high-frequency analog, the black magic of the semiconductor world: high-frequency silicon-germanium BiCMOS processes to create these TIAs and drivers. In future generations, you have to take these high-frequency elements and actually put them into cutting-edge digital. All of this takes a wide variety of engineering leadership and expertise to develop this full platform of solutions. Overlay on top of these complex technologies the system-level IP you need: diagnostics, telemetry, firmware, and software that give our customers a significant amount of programmable flexibility to optimize their networks.
We don't develop all of this in a vacuum. We work with our customers multiple years, multiple generations ahead of deployment to understand their network architecture, their deployment models, and how they need to optimize each of these links to get the scale and the TCO they need. None of these customers has a one-size-fits-all data center, so there is a tremendous amount of flexibility and programmability in all of these blocks, letting them optimize these products for their specific implementations. And now, take this entire set of IP, the programmability, the analog, and the digital, and with the accelerated pace of AI, every single block here has to be upgraded and redesigned on a two-year cadence. This takes a lot of expertise. It takes a lot of experience.
You need the scale to be able to do this successfully, generation after generation. That's what Marvell has been able to do. Now, let's take a look at this great opportunity in front of us, starting inside the data center networks, the optical links that Loi talked about. Marvell has had multi-generation leadership in this inside-the-data-center PAM platform. We provide the DSPs, the TIAs, and the drivers. A decade ago, Inphi created the world's first PAM DSP, a 200-gigabit product with four lanes of 50 Gb. Marvell was also the first to create a 100-Gb-per-lane product, which enabled the 400-Gb optical module. We then scaled that technology to 800 Gb, and that's what's driven the AI revolution in the last year. Now, we worked with our customers for multiple years before this 800-Gb product was developed.
At that time, we knew exactly what optimizations they wanted. We knew exactly when they wanted the product. What we did not know is how much of it they would want in the last year. We are very thankful for that growth and that revenue driving Marvell's growth over the last year, year and a half. Looking forward, we announced our 1.6 Tb product, the world's first 200-Gb-per-lane solution, at OFC last year. Just a few weeks ago, you saw the world leader in AI infrastructure announce that their next-generation solutions need 1.6 Tb interconnects. That's what Marvell has developed. Today, we are in qualification with 1.6 Tb solutions at multiple customers, and we expect to go to production at multiple customers by the end of this year.
Our customers still ask us, "Not enough. What have you got for us next?" As AI becomes a much larger part of the data center, we've seen that it accelerates the move to higher speeds. In the traditional data center networks, you started with 100 Gb NRZ and moved to 200 and 400 Gb PAM; those cycles happened every four years. But you've seen a significant acceleration at the 800 Gb and 1.6 Tb generations happening right now, as these cycles have shrunk from four years to two years. We see that trend continuing. We are already working closely with customers to enable the 3.2 Tb PAM generation in the next couple of years, and we're already doing advanced R&D, again closely with our customers, to develop even more future technologies going to 6.4 Tb.
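To see what halving the upgrade cycle means, here's a two-line sketch: speed doubles every cycle, so the cadence compounds dramatically over a decade. Purely illustrative arithmetic.

```python
# Speed doubles every cycle, so a shorter cycle compounds dramatically.
def speedup(years: int, cycle_years: int) -> float:
    return 2 ** (years / cycle_years)

print(f"4-year cycles over 10 years: {speedup(10, 4):.1f}x")  # ~5.7x
print(f"2-year cycles over 10 years: {speedup(10, 2):.0f}x")  # 32x
```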
What this does for us is provide significant growth in the market. Not only are these cluster sizes growing, like Loi showed you, from thousands to tens of thousands to hundreds of thousands of GPUs and XPUs, but every time they've got to be connected at faster speeds, Marvell gets higher silicon content. So you now have rapidly accelerating market growth driven not only by units but also by higher speeds and higher content, providing a tailwind to the PAM business. But why is this time to market so important?
When we ask our customers, they tell us the most precious commodity for them is time. The reason is that, when you have these very large language models with billions of parameters growing so quickly, if they stick with the previous generation of network and infrastructure, the amount of time it would take to run the new models would not make it economically viable. It would be months, or even more. They have to keep up with the scale of the language models, and for that, they have to upgrade their infrastructure, the compute and the connectivity, every couple of years. Their focus is always to move quickly to the fastest and best product that enables their TCO. Now, you have people coming out with solutions that provide some marginal, backward-looking benefits. Customers simply don't have the time to qualify those.
Because the small fraction of TCO those would save them in the next 6 months will be wiped out if they don't move to the next generation over the next year or two. And when our customers deploy these products, it's not that simple. You take the cutting-edge interconnect technology, marry it to the latest generation of compute elements, TPUs, GPUs, the switches, the NICs, put all this interconnect together, and create an entire system, and that takes months to qualify. So if you're on a 2-year cadence and already taking 4 or 5 months to qualify one generation, you don't have time to come back for a second-gen product and take another 6 months or a year to qualify something new. You're already moving to something faster. And that's what makes these solutions very sticky.
This is what provides significant growth for Marvell going forward as these cluster sizes and speeds continue to ramp. And as great as this optical opportunity is for the next multiple years and generations, we can also take this PAM IP that we've developed, and lead with today, and use it to open a completely new market and opportunity for Marvell: DSPs for active electrical cables. Now, we've talked a lot about optics everywhere. But within the very short reaches in the data center, think of it as a couple of meters, a few meters in the rack, you're still using copper. A couple of examples: in a traditional network, you have the NIC-to-top-of-rack, the TOR interconnect, which is 3-4 meters. That is passive copper today.
Even in the AI server, there are interconnects that today are either on the board or connected by cables. As speeds go up and reaches decrease, you're going to have more and more of these cables. But today, a lot of this is passive copper, with no semiconductor content in it. So why do we need to go active here? Take an example: you have a 50-gigabit link connecting a server NIC to the top of the rack, some interconnect of 3 or 4 meters. At 50 Gb, the signal gets from one end to the other with no errors, no problems. But now, you're doubling the speed to 100 gigabits per lane, and as the speed goes up, by the laws of physics, losses increase. You also have another vector at work here.
As the density of these data centers grows, our customers want to fit more and more interconnects in the rack, significantly increasing the faceplate density. That means you need to use thinner cables, and the thinner the cable, the higher the loss. So you have the speed doubling, increasing the loss, and the cable getting thinner, increasing the loss. When you go from 50 Gb to 100 Gb, the link can no longer close over that distance with a passive cable. You need to add electronics to it; you need to make it an active cable. Because the size of your rack doesn't change. The distance is fixed.
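Here's that argument as a toy link-budget model; every dB figure and the loss-scaling rule are made-up illustrations, not real cable specs.

```python
# A toy link-budget model of the argument above: doubling per-lane speed and
# thinning the cable both raise loss until a passive link no longer closes.
# Every dB figure and the sqrt(rate) scaling are invented for this sketch.

def passive_link_closes(gbps_per_lane: int, awg: int, length_m: float,
                        budget_db: float = 12.0) -> bool:
    # Hypothetical model: loss grows ~sqrt(signal rate) (skin effect), and
    # thinner wire (higher AWG) is lossier per meter.
    base_db_per_m = {26: 1.5, 28: 2.2, 30: 3.0}[awg]  # assumed dB/m at 50 Gb/s
    loss_db = base_db_per_m * (gbps_per_lane / 50) ** 0.5 * length_m
    return loss_db <= budget_db

print(passive_link_closes(50, awg=26, length_m=3))   # True: passive copper closes
print(passive_link_closes(100, awg=30, length_m=3))  # False: needs a DSP, i.e., an AEC
```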
So to bridge that fixed distance at the higher speed, you now need active electrical cables, which are essentially these copper interconnects with DSPs in them, opening up a whole new market for Marvell using the same PAM IP we have on the optical side. Now, AECs have been around for a few years, but they've always been used in niche applications. At 25 Gb NRZ, when customers had specific links that didn't work or had a problem, they used AECs. But as you get to 50 gigabits per lane and 100 gigabits per lane, the use cases are going to balloon. As a result, customers need scale. They need an ecosystem that gives them the flexibility and the capacity to take all of these copper interconnects and move them from passive to active over a number of generations.
What Marvell is doing is not only creating these DSPs; we're also creating an entire ecosystem, similar to what we enabled on the optical side, to allow multi-million-unit deployments of AECs within the data center. Today, in the optical space, we have multiple module partners that work with our end customers, and we are doing exactly the same thing on the AEC side. What you see here are most, probably all, of the leaders in copper cable connectivity today. We are working with every single one of these partners, using Marvell DSP technology to create AECs, currently getting them qualified and into production at multiple end customers. We expect these PAM DSP-based AECs to enable another billion-dollar TAM for Marvell, something we are already shipping to multiple customers today.
Significant growth opportunities inside the data center on the optical side, and a new emerging opportunity on the copper side. But there's also exciting growth in the longer-distance links between data centers. This is a space where Marvell has also had multiple generations of leadership with its DCI platform. Now, our go-to-market in this space is a little bit different. Inside the data center, you have millions of units, millions of links, and we sell the silicon: the TIAs, the drivers, the DSPs. But here, it's a smaller market in terms of units with incredibly more complex technology. So this is where Marvell has all the pieces of silicon, the TIAs, the drivers, the DSPs. And we also do our own silicon photonics, which Loi will talk about.
But it's incredibly complex to put all of these together into a module that gives you the reach within the space and power envelope of a small optical module. And so this is a market where Marvell makes the entire optical module. Inphi was the first to enable a coherent DCI link within a pluggable optical module. Before that, it was all large transport boxes. We first created this market at 100 Gb. And we've scaled it to a 400 Gb DCI pluggable coherent solution. For the last year or so, that's been driving significant growth as AI continues to expand our customers' need for bandwidth. We were also the first to market last year, announcing an 800 Gb pluggable DCI coherent module, the first in the world. We see a lot of growth available from these solutions along two vectors.
First, there is continued growth in the current market. You have a 120 km coherent pluggable that's currently shipping and growing at 400 Gb and then transitioning over to 800 Gb. As that happens, and as the number of data centers in the world continues to grow, like Loi showed, the bandwidth between data centers continues to increase. There is going to be a tailwind of units, driven by more content as we go to higher speeds, that's going to help us double the SAM of the existing market. But you also have a whole new market being enabled here. The products we have shipping today only go up to about 120 km of reach. For the longer distances, hundreds of kilometers, up to 1,000 km, customers use these large boxes that take tons of power and cost a lot.
Marvell has been able to come up with new technology, PCS, probabilistic constellation shaping, that enables these pluggable modules to extend their reach from around 100 kilometers to 1,000 kilometers. Our customers are going to do exactly what they did for the shorter DCI links a few years ago: as these 800 Gb links go to higher speeds, they're going to remove these network boxes and replace them with pluggable silicon. This opens up another billion-dollar market for Marvell. This is technology we've already demonstrated at OFC earlier this year. It's available today. As you've seen, there is a huge amount of growth opportunity in front of us: inside the data center for the back-end and front-end networks, a new market in AECs, and significantly growing markets in DCI.
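For readers who want a feel for what probabilistic constellation shaping actually does, here is a minimal sketch. The idea, transmitting low-energy constellation points more often than high-energy ones according to a Maxwell-Boltzmann distribution, trades a little bit rate for average transmit power, and that recovered margin is what stretches reach. The constellation and the lambda value are textbook illustrations, not parameters of Marvell's implementation.

```python
import math

# Minimal sketch of probabilistic constellation shaping (PCS): send low-energy
# constellation points more often, per a Maxwell-Boltzmann distribution, trading
# a little bit rate for average transmit power (i.e., SNR margin, i.e., reach).
# The constellation and lambda are textbook illustrations, not product parameters.
amplitudes = [-3, -1, 1, 3]   # one dimension of a 16-QAM constellation

def maxwell_boltzmann(lmbda):
    weights = [math.exp(-lmbda * a * a) for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

for lmbda in (0.0, 0.1):      # 0.0 = uniform (unshaped), 0.1 = shaped
    probs = maxwell_boltzmann(lmbda)
    avg_power = sum(p * a * a for p, a in zip(probs, amplitudes))
    bits = -sum(p * math.log2(p) for p in probs)
    print(f"lambda={lmbda}: avg power {avg_power:.2f}, bits/symbol {bits:.2f}")
```

With these illustrative numbers, shaping cuts average symbol power from 5.0 to about 3.5 (roughly 1.6 dB of margin) at the cost of about 0.11 bits per symbol.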
But as this AI interconnect continues to grow, like any other cutting-edge technology, sometimes you need something completely different. These AI networks are creating the need for a completely new kind of interconnect technology. Think of the clusters that today are thousands or tens of thousands of GPUs within a building. As these grow to hundreds of thousands of GPUs, TPUs, or other compute elements, or even million-unit compute clusters, you need huge physical distances to build the flat, low-latency network that's required. So now, you either need a much larger data center building than you have today, or you have to break up the physical building into multiple buildings within the same campus, logically making them look like just one data center.
So the link distances needed for these much, much larger clusters grow from less than 2 kilometers to somewhere in the 10-20 kilometer range. You now need an interconnect that gives you the distance characteristics of coherent but still looks to our customers, from a TCO, latency, and power perspective, like a PAM link. You need to bring these two technologies together, two technologies that Marvell leads in, two technologies that only Marvell has today, and create inside-the-data-center coherent. That is what we've been working on with our customers as they look forward a few generations to create these very large clusters. And in the second half of this year, we will be sampling the world's first inside-the-data-center coherent product. What you've seen today is a complete interconnect portfolio, with Marvell being the only company that can provide this to our customers.
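A quick sanity check on why campus-scale distances strain a flat, low-latency fabric: propagation delay alone grows linearly with reach. The group index below is a standard approximation for silica fiber; this is an illustration, not a product figure.

```python
# Quick sanity check on propagation delay at campus scale. Speed of light in
# fiber is c divided by the group index (~1.47 for silica); approximate figures.
C_KM_PER_S = 299_792.458
GROUP_INDEX = 1.47
US_PER_KM = 1e6 / (C_KM_PER_S / GROUP_INDEX)   # ~4.9 microseconds per km

for km in (2, 10, 20):
    print(f"{km:>2} km link: ~{km * US_PER_KM:.0f} us one-way propagation delay")
```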
You have PAM. You have coherent. You have a combination of the two. You have DSPs and TIAs and drivers and silicon photonics. We provide a complete platform for our customers, a complete solution from 200 Gb to 1.6 Tb and beyond, from 1-meter links to 1,000 km links and beyond. What our customers are looking for today, in this rapidly evolving field of AI, are partners that have the breadth of technology, the expertise, and the scale to help them implement all of these networks. The combination of Marvell and Inphi a few years ago gave us exactly that technology base, that expertise, and that scale. With all of these growth vectors, you have significant growth inside the data center in optics. You have new markets, like long-distance DCI and AECs.
You have the growth in the shorter-distance DCI links and new interconnect technology coming up. We expect that this opportunity for Marvell that was about $3.5 billion last year will grow to over $11 billion in the next five years. Thank you.
Welcome back to the stage, Executive Vice President and General Manager, Cloud Optics, Loi Nguyen.
Hi, everybody. It's me again, my second act for the day. So I'm going to talk to you about silicon photonics. It's a subject very dear to my heart. I started the group 10 years ago while I was at Inphi. So what is silicon photonics? It is an integrated circuit for optics. It's as simple as that. Before we go there, let's take a look at how optical modules are built today. You heard from Achyut about the TIAs, the drivers, the DSPs, all of that. Electronics has advanced tremendously since the invention of the integrated circuit 64 years ago. But optics is pretty much still being built using piece parts. All the optics connecting the data centers together, outside and inside AI, are still built predominantly using discrete components in small indium phosphide fabs. And that is hard to scale.
Let's go back a few years. There were many, many different kinds of lasers. We started with LEDs and VCSELs and DMLs and a bunch of other things. But as you get to 200 Gbps on a single laser, as we speak today, only one laser remains commercially viable. It's called an EML, an Electro-absorption Modulated Laser. I know it's a mouthful. It's been around for a long time. Actually, when I was a graduate student, I made these things. It was originally used for longer-distance links, 10 km or so, for base stations and other applications. But as the speed continued to go up, some of the other laser technologies could not keep up. Today, EML is the number one choice for a discrete laser for the next-generation 1.6T optical module.
EML is expensive. Not only that, EML capacity is constrained, and that constraint is one of the factors impacting the scaling of optical interconnects in data centers. People are investing in it. But it is a discrete solution. So how do we hope to change that? In silicon photonics, we do not use a high-speed EML laser. We use what we call a light bulb, a CW laser. A CW laser is like a light bulb: it just shines a constant light. It's easier to make. It is available from multiple sources. It's low cost. All of the high-speed magic, how to modulate data onto the light, happens inside the silicon photonic chip itself.
Silicon photonics is an integrated circuit that has the high-speed modulator, the laser, the high-speed detector that detects the light, and all the other functions, couplers and so on, to manipulate light inside a piece of silicon. The good thing is, it is manufactured by high-volume CMOS fabs on 200-mm or 300-mm wafers. Silicon photonics can scale with volume. Silicon photonics is now a really hot technology. When we were at OFC two weeks ago, everybody claimed they have silicon photonics; it was everywhere on the show floor. But few companies have actually been able to ship silicon photonics in volume. Marvell has done that, for nearly 10 years now.
As Achyut discussed with our various DCI products, we have proven that silicon photonics can be manufactured at scale and used in mission-critical applications, connecting data center to data center in DCI networks. So now, with AI driving so much demand for high bandwidth and for scaling, we believe the time is now to bring silicon photonics inside data centers and completely change the landscape of how optical interconnects are made. The choice comes down to discrete piece parts versus an integrated solution. For the discrete solution using EML lasers today, at 200G per laser, you need a set of eight laser die, plus photodetectors and a bunch of different piece parts. You need a lens to focus the laser onto the fiber. You need an isolator. You need capacitors. You need resistors.
You need a lot of piece parts, just like the old days when we built circuits out of discrete capacitors, resistors, and transistors. Or you use the integrated approach with SiPho. The beauty of SiPho being an integrated circuit is that you can share the lasers. In this case, with 8 channels in a single silicon photonic chip, we share one laser across 4 channels. So we only need 2 lasers for a 1.6T pluggable module. Lower cost, fewer lasers, and higher integration mean more reliability and better scaling. And then, of course, you still need the DSP to complete the chipset. But that's really the choice. We believe that, historically, when a technology gets developed and there's a huge boom in the market and customer demand for it, integration always wins.
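The laser arithmetic in that comparison is simple enough to write down. A minimal sketch, using the configuration described above: a 1.6T module built from 8 lanes at 200G each, with one CW laser shared across 4 channels.

```python
# Worked version of the laser-count arithmetic above, for the configuration
# described: a 1.6T module built from 8 lanes at 200G each.
LANES = 8
GBPS_PER_LANE = 200
module_tbps = LANES * GBPS_PER_LANE / 1000        # 1.6 Tbps

discrete_emls = LANES                              # discrete: one EML per lane
CHANNELS_PER_CW_LASER = 4                          # shared on the SiPho die
sipho_cw_lasers = LANES // CHANNELS_PER_CW_LASER   # 2 CW "light bulbs"

print(f"{module_tbps} Tbps module: {discrete_emls} EMLs discrete "
      f"vs {sipho_cw_lasers} CW lasers with silicon photonics")
```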
So two weeks ago, Marvell surprised a lot of people by announcing, and actually doing a live demo of, what we call a 3D SiPho Engine. It is a very highly integrated optical circuit, silicon photonics, consisting of 32 channels of transmit and receive, each at 200G electrical and optical. This is the world's first 6.4 Tbps silicon photonics engine at 200G per lane to be demonstrated. This device integrates 100 subcomponents on the chip, all of the piece parts that would be needed in a comparable 6.4T discrete solution, if you could ever build one. We also use advanced 3D integration to integrate the transimpedance amplifier and the driver on the same device. And the design is modular, so we can scale this technology from 1.6T to 3.2T to 4.8T and 6.4T.
So we demonstrated 4x the bandwidth of the highest-bandwidth optical module today. And with this integration, the cost of the silicon photonic die drops rapidly as the bandwidth goes up, because cost is calculated per bit: when you double the bandwidth, the die cost doesn't double. So the cost of silicon photonics keeps dropping as you continue to scale. Here's a more detailed block diagram showing how much we integrate on this chip: 32 channels of transmit, 32 channels of receive, mux and demux, TIAs, drivers, the controller, the whole nine yards. So where do we see the use cases for this technology? It's really a technology platform we've developed, and we see multiple use cases across the optical interconnect landscape. The most immediate, near-term one is to put a 3D SiPho Engine inside a pluggable module.
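To see why cost per bit falls with integration, consider a toy model. The 1.6x die-cost growth per bandwidth doubling is an invented assumption chosen only to be less than 2x; any sub-2x factor produces the same downward cost-per-bit curve.

```python
# Toy model of cost per bit under integration. The 1.6x die-cost growth per
# bandwidth doubling is an invented assumption; any factor below 2x gives the
# same downward cost-per-bit curve.
DIE_COST_GROWTH_PER_DOUBLING = 1.6   # assumed, < 2.0 thanks to integration

bandwidth_t, rel_die_cost = 1.6, 1.0
for _ in range(3):
    cost_per_bit = rel_die_cost / (bandwidth_t / 1.6)
    print(f"{bandwidth_t:4.1f}T engine: relative cost per bit {cost_per_bit:.2f}")
    bandwidth_t *= 2
    rel_die_cost *= DIE_COST_GROWTH_PER_DOUBLING
```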
Our customers love pluggables. Pluggable optics is what enabled the industry to scale to today and will continue to scale for many, many more years. With a discrete solution, the maximum number of optical channels you can jam into a small module like a QSFP-DD is about eight. With our 3D SiPho Engine approach, you could put in 16 channels, 32 channels, even 64 channels if we make the die smaller. The scaling of silicon photonics will enable pluggable optics to scale for many more years to come. So number one is pluggable optics. Number two: I've been working in this field for many, many years, as I told you. Ten years ago, when we started silicon photonics at Inphi, we actually thought the real application was co-packaged optics. That's what we were developing it for.
At the time, people were telling us that at 400 Gb, you cannot do pluggable optics; you need co-packaged optics. That's how we started. But we quickly realized that, no, pluggable optics were going to continue for a long time. So we shut down the project. It went through many different iterations. But co-packaged optics is still out there. And so we want to make sure you all know that at Marvell, we have the fundamental technology to do silicon photonics. Better yet, our 3D SiPho Engine is already doing 200 Gb per lane today, not the 100 Gb you've seen from others. We do not think that at 100 Gb per lane you need silicon photonics; pluggable optics is just fine. At 100 Gb per lane, the world has been shipping millions and millions of pluggable optics.
Who needs co-packaged optics? So this is something we will continue to work on. At some point in the future, it may be needed, and that's what it's for. The third area of application, which is a little further out but really excites me, is bringing optical integrated circuits to work alongside AI accelerators. As you will hear from my colleague Raghib, we're building custom compute. And the bandwidth of these AI accelerators is going up very rapidly; it doubles every generation. Today, it is connected over the compute fabric on copper traces. And copper traces are fine. They're good. They're low cost, certainly. But at some point, you will need more bandwidth, and you need to carry it further out. That's when optics come into play.
So our technology, at 200 Gb per lane today, offers twice the bandwidth density, the I/O, for an optical chiplet compared to others doing 100 Gb. So we see our 3D SiPho Engine as an essential building block that will allow the scaling of optics for AI. And as you heard from Achyut, the interconnect TAM in CY28 is $11.1 billion. With silicon photonics, we slam another $3 billion on top of that. That really gets me excited to see silicon photonics going inside data centers. I hope you enjoyed the talk, and that you will join me in the journey to see the rise of silicon photonics for AI in data centers and optics everywhere. Thank you. Next will be Nick.
Please welcome Senior Vice President and General Manager, Network Switching, Nick Kucharewski. Good morning. Thanks very much for being here today.
So my name is Nick Kucharewski. I'm the Senior Vice President and General Manager for Network Switching at Marvell. Now, I've been with the company for about a year, but I'm definitely not new to this market. In fact, I've been working on network switching for most of the last 25 years. In the late 1990s, I was at Stanford while the internet boom was happening, and most of my friends were either founding startups or leaving to join one. But at that time, I decided to focus on a new and emerging market for packet processing semiconductors that would go into the switches and routers forming the internet. And about nine years later, in 2008, I found myself working on the product requirements for an early Ethernet switch designed and optimized for hyperscale networks.
With that product, and over the next nine years, I was involved in a series of products that ultimately defined the category we're talking about in this session. Now, I stepped away from the cloud for about six years. But when Matt called about the Marvell opportunity, I was really excited to be involved in the next chapter of switching for cloud data center networks. And that's really what I want to talk about today. AI has triggered a rapid expansion in the market for cloud switching semiconductors, and Marvell has assembled one of the few teams in the industry that has demonstrated the capability to deliver this class of product. The requirements of AI are driving shifts in the industry roadmap, which create new opportunities for product innovation.
Marvell has the essential portfolio of technology, which can enable us to innovate and lead in the next wave of the market. So first, let's talk a bit about the switching category. As Loi mentioned, in many cloud data centers, all of the compute is connected to itself and to the internet through a structured network of high-bandwidth switches. These switches use industry-standard protocols, specifically Ethernet to define the physical layer and the link layer, and IP routing to direct packets through the network. This general approach is similar to the internet itself. It's a really good approach because it enables the customer to build a network of almost any size. They can build the network mixing and matching products from different generations. They can build out incrementally.
And it enables them to build their network using components from different equipment manufacturers and different semiconductor manufacturers so that they can build an architecture for their network that is specific to their application needs. This has enabled the cloud to innovate, grow, and evolve very rapidly over the last 20 years. So now, expanding and extending the cloud for AI brings a new set of requirements to the network. This really comes down to one fundamental change in the way that applications are structured within the cloud. As Matt mentioned, in the past, internet applications, almost anything that you could be doing on the internet, would be based on software broken down into microservices that would fit within the confines of a single processor and its memory.
But with AI, it's somewhat more challenging because we're talking about very large data sets, large workloads that don't necessarily fit within the confines of a single processor. So then you need to distribute the workload across multiple elements and then rely on the network to make those processors behave like a single component. This obviously requires a network with higher capacity and predictable latency because application performance is directly correlated to network performance. And a higher performing network can ultimately lead to a more profitable AI deployment. So the most obvious impact here is a significant increase in the capacity of the cloud network built for AI. So first, in the front-end network here at the top, if we were to unroll that fabric, we would see perhaps 2-3 times as many ports allocated for each accelerated processor compared to a general-purpose processor.
Now, we expect that ratio to grow with each new generation of faster AI equipment. Now, at the back-end network, as Loi mentioned, there's an entirely different dedicated switching fabric for the AI elements within the cluster. Now, regardless of whether you're talking about Ethernet or InfiniBand for the back-end network, it represents a new total available market. And it is scaling on an entirely different growth curve from what we have seen in the cloud in the past. So now, let's have a look inside the switch.
Now, if you were to look inside any of these switch icons and open the box up, you might find a single-chip, multi-Tb packet switch with on-chip packet buffers, on-chip route tables, and on-chip instrumentation and telemetry, basically everything the cloud operator needs to route, observe, and react to the traffic patterns across the network. This happens to be one of the most complex and hardest devices to build in the semiconductor industry. That's because enabling that very high switching capacity requires the most advanced process technology, a very high number of high-speed serial interfaces, and the right architecture, one that strikes a delicate balance between feature set, high capacity, low latency, and a power-efficient implementation.
Now, in the last 10 years, a lot of companies have tried to enter the market for cloud data center network switching. But a lot of companies have failed. That's because there are only a few teams in the entire industry who have the know-how to build a product like this. And Marvell has assembled one of those teams. In 2021, Marvell acquired Innovium, who had developed a clean sheet architecture to meet the unique needs of cloud networks. Their first generation product at 12.8 Tb per second won a major design at a tier one hyperscaler and has shipped into live cloud networks for the last several years. With this product, Marvell jumped into the number 2 position in the 12.8 Tb per second switching category. This first generation proved that the team had the right architectural approach capable of meeting mega-scale requirements.
Now, we've proven it, and we're shipping into mega-scale data centers at high volume. This also gave Marvell the foundation to deliver on our AI switching product line. Marvell's next-generation product was developed entirely in-house with Marvell 5 nm core IP and Marvell's 100G SerDes. The product delivers 51 Tbps of switching capacity. If you look at the specs objectively, it is technically superior on multiple dimensions. We've had good traction for the product, with design wins that have expanded our customer base beyond the first generation. The first of those customers will be shipping in production this summer. Moving forward, Marvell has combined the Innovium team with the Marvell Enterprise Ethernet Switch team. I've shifted the priorities of the combined organization toward the data center cloud product line. In addition, we've increased our investment in core enabling IP.
We've built our software and support organization to meet hyperscale requirements and expanded our roadmap for next-generation cloud products. In fact, as it stands today, Marvell is one of a small number of companies with the team and the portfolio of technology required to deliver a switch silicon roadmap over the long run. As a company, we're able to make these investments in core switching technology because it is complementary to our interconnect portfolio and our custom ASIC portfolio, many of the technologies you're hearing about today. Marvell has invested in advanced process technology, high-speed SerDes, advanced packaging technology, and silicon photonics to enable high-density direct optics integration. Fundamentally, we have the core technology required to deliver on our multi-year product roadmap. So now, let's talk about the roadmap.
So when I'm looking at any product line strategy, I generally look for inflection points in the market, the kind of technical shifts that create new opportunities to define a differentiated product and change the competitive dynamics. And with AI, we're seeing several of these. First, let's talk about open platforms. In the past, it was traditionally very time-consuming and expensive for a customer to design and deploy their equipment using more than one solution, more than one semiconductor manufacturer. That's because it was built on proprietary networking software, and that software targeted single-vendor silicon. But in the last decade, the industry has developed an open network operating system called SONiC, which does for networking something similar to what Linux did for servers 20 years ago. Additionally, customers have built hardware abstraction layers.
Those normalize the differences between silicon vendor implementations. These allow our customers to build new platforms much more quickly than they could in the past. This is a key point I really want to emphasize. It means customers can transition to Marvell, or adopt a multi-vendor or second-vendor silicon strategy for their networks, as sketched below. It really changes the dynamics. I think that's going to be necessary, because our customers don't want to be slowed down. There's too much at risk here, too much at stake. Having these multiple-supplier strategies, I think, will be critical moving forward. But it's not enough to be a second source. We're going to continue to innovate and drive new features into the product category.
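To illustrate the abstraction-layer idea in miniature: the network OS programs one interface, and thin per-vendor drivers absorb the silicon differences underneath. This is a purely hypothetical sketch of the pattern, not SONiC or SAI code; all names are invented.

```python
from abc import ABC, abstractmethod

# Purely hypothetical sketch of the hardware-abstraction pattern (not SONiC or
# SAI code; all names invented): the network OS programs one interface, and
# thin per-vendor drivers normalize the silicon differences underneath.
class SwitchAsic(ABC):
    @abstractmethod
    def program_route(self, prefix: str, next_hop: str) -> None: ...

class VendorADriver(SwitchAsic):
    def program_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor A silicon] route {prefix} -> {next_hop}")

class VendorBDriver(SwitchAsic):
    def program_route(self, prefix: str, next_hop: str) -> None:
        print(f"[vendor B silicon] route {prefix} -> {next_hop}")

def deploy_routes(asic: SwitchAsic) -> None:
    # The NOS code path is identical regardless of silicon vendor.
    asic.program_route("10.0.0.0/24", "leaf-1")

deploy_routes(VendorADriver())
deploy_routes(VendorBDriver())
```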
We believe that the next wave of innovations will be driven from the top down, from the management orchestration layer down to the network, for instance, where we'll see traffic engineering to reduce interactions across the fabric and workload awareness, where the network may be involved in the decision making of where to place computing tasks within the cloud. Finally, we see AI influencing the architecture of the network itself. In fact, in the past year, we have started to see this happen inside the compute cluster with system partitioning and connectivity, which redraw the boundaries of the chipset. The next logical step is to take this thinking upwards in the network hierarchy to the fabric, blurring the boundaries of switching and connectivity and finding a way to repartition these large fabrics in a novel way.
Here, Marvell has the end-to-end technology portfolio to enable this new category of switching products built on these concepts. Having covered the dynamics of the market and Marvell's position, let's talk about the opportunity. Looking at the worldwide market for Ethernet data center switching, we see a $6 billion TAM in 2023, growing at a 15% CAGR and effectively doubling over the next five years. The market here includes both general compute and AI, and we believe the AI portion is growing faster than the overall TAM. It's important to note that a lot of the growth in the next three years will be driven by the product categories we've talked about today. Marvell is in a position to participate in this market growth going forward.
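The doubling claim is easy to verify from the quoted figures themselves: compounding 15% for five years multiplies the base by just over 2x.

```python
# Sanity check on the quoted figures: 15% compounded for five years is ~2x.
tam_2023_b = 6.0   # $6B TAM quoted for 2023
cagr = 0.15        # 15% CAGR quoted
tam_5yr_b = tam_2023_b * (1 + cagr) ** 5
print(f"${tam_2023_b:.0f}B at {cagr:.0%} CAGR -> ${tam_5yr_b:.1f}B in five years")
# -> ~$12.1B, i.e. effectively doubling
```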
And on that basis, I'm very excited about the market and the opportunities that lie ahead for Marvell. Our 51-Tb product begins its production ramp this summer. We have new customers for the product, which expand beyond the design win traction for the first generation. We see a multi-billion-dollar opportunity over the next several years. As we've discussed, AI is that technology inflection that can reshuffle the competitive dynamics in the market and create opportunities for disruptive new product innovations. Marvell stands well positioned with the team and a comprehensive portfolio of core technology to deliver on AI networks this year and in the next generation of the cloud fabric. Thank you very much. I appreciate your time.
Please welcome President, Products and Technologies, Raghib Hussain.
Good morning, everyone. This is an exciting time, isn't it? I have been waiting for this moment for a long time. In fact, my entire career has been about accelerated computing. For those of you who do not know me, today I am President of Products and Technologies at Marvell. However, 25 years ago, I co-founded Cavium with a focus on accelerated computing. So as we stand today at the dawn of this new era dominated by accelerated computing, I am really excited to say that it is driving needs that align really well with our core expertise and portfolio. While Marvell is leading with cutting-edge connectivity technology, as explained by my colleagues Loi, Achyut, and Nick, I'm going to share with you how we are partnering with our hyperscaler customers and bringing the most optimized accelerated compute silicon to the market.
It is going to drive even bigger growth for Marvell, and I invite all of you to participate in it. So let's see. Well, 25 years ago, when I did not have this cool of a hairstyle, we made a specialized processor with specialized hardware acceleration for networking and crypto security protocols. This is how Cavium was born, and this is how we pioneered the industry's first accelerated compute for networking. We drove performance and efficiency for networking applications. Since then, we have built many accelerated compute silicon products for various applications and markets. It's a large market. Matt already shared with you how big this market is and how fast it is growing. But I have always believed that accelerated computing would surpass general-purpose computing at some point.
Well, five years ago, when I came to Marvell and we focused on the accelerated computing market, I did not expect this market to become this big this fast. We are focusing on custom compute, as Matt explained, because we are not building general-purpose CPUs and GPUs. This is a very exciting market because it is growing at a very fast compound annual growth rate. We are engaged with all the right customers, and we are very well positioned to take full advantage of this growth. I'm going to discuss it in a lot more detail. But before we go there, let's see why custom compute. This is what Matt and I presented in 2021. Even at that time, hyperscalers were demanding custom compute because their goal all along has been to achieve the highest performance and efficiency.
However, post-Gen AI, in the past 15+ months, every hyperscaler has focused on optimizing every aspect of their platform, because the magnitude of the impact is much, much higher than before. It is not only about achieving the highest performance but also about saving billions of dollars. What we have not talked about is that the cloud also has multiple business models, three of them. First, hyperscalers have their own internal large applications; these are some examples. In this case, they know every detail of the application and the underlying hardware required to process it. The second model is Software as a Service, or SaaS. In this model, hyperscalers offer their own SaaS applications as well as third-party SaaS applications to their customers. This is the fastest-growing market, driven by Gen AI, especially as enterprise applications move into the cloud.
For SaaS, hyperscalers obviously know their own applications very well but have limited visibility into the third-party applications. The third category is Infrastructure as a Service; these are some examples. In this case, hyperscalers provide hardware compute resources on customer demand. They do not have any control over the application; they just provide the hardware platform. Now, driven by AI demand, there's a race going on right now where every hyperscaler is deploying whatever is available to increase their accelerated compute capacity. But their desire is always to deploy custom compute as much as possible, for as many applications as possible, because this is how they improve their TCO. For internal applications and for their own SaaS, hyperscalers have already started deploying custom compute platforms.
However, as they establish their own custom compute platforms and software ecosystems, they are going to offer them for third-party SaaS as well as Infrastructure as a Service, probably at a better TCO. Customers will have a choice. While there's a whole spectrum of custom compute adoption across these business models, hyperscalers will ultimately succeed in deploying custom compute for all of them. We are in the initial innings at the moment; there's a lot of growth in front of us. In summary, all of these business dynamics will drive the demand for custom compute even more. It is a huge upside for Marvell as we participate with our customers in this growth. Let's see why hyperscalers actually partner with us.
Matt showed you this: we have established ourselves as leaders through our investment in process nodes, critical IP, and package technologies. We have invested for years, actually for decades, in this IP, and we have proven all of these IPs by using them in our own infrastructure products. Most importantly, hyperscalers partner with us because we have decades of experience and expertise in building these large, reticle-sized, complex compute silicon devices. In addition, there's another thing going on in the industry, driven by Gen AI: hyperscalers want to increase the development cadence of their own products. The reason is that these are very expensive, very complex products requiring different technologies, for example, the process node, their own algorithm development, the critical IP, the package technology, the high-bandwidth memory technology, and so on.
Each one of these technologies has its own cadence. So as soon as one of them delivers something new, hyperscalers would like to do another variant of their chip to achieve the highest possible performance and the best TCO, because, once again, the magnitude of savings in this market is huge. So, for example, right now we are ramping 5 nm, we are developing on 3 nm, and at the same time we are doing test chips for the critical IP on 2 nm.
That is why hyperscalers want to partner with a semiconductor company that has the scale to do all of these things in parallel, along with the capability to invest in them, and preferably a platform of its own products through which to drive the critical technologies, gain the expertise, and stay at the state of the art. So let's double-click on that. If you look at this chart, each one of these critical IPs is very, very complex by itself. Each one requires a lot of expertise and experience to develop. But that is not the real challenge. The challenge is that by the time the world knows these technologies exist, it's already too late. What is needed is to do what has not been done before. In fact, just implementing existing technology is not a great achievement here.
This is where you need expertise, the right industry knowledge, and access to technology to push the limits of what can be done. So this is how hyperscalers judge whether somebody can work with them. They want to know that you have invested in advance and prepared all these IPs, because by the time you engage in a design, the architecture choices depend on the availability of these critical IPs, packages, et cetera. If you do not have those things, they really cannot engage with you. As a result, all of this has to be done in advance. In addition, through our expertise and decades of experience, we have developed internal tools. These are the tools that help us achieve the highest performance and the most optimized power and area for our silicon.
This is the result of decades of experience, working with our partners and with our customers; it is a summary of all that learning. It plays a critical role because, once again, the goal is to achieve the highest performance and the lowest TCO. These tools help our hyperscaler customers model their choices and pick the best combination to achieve the best product. Many of these IPs and technologies require a specialized skill set and expertise that are really hard to find, because of the size of the investment needed and the scarcity of the engineering teams required. It is really difficult for an ASIC house to be the right partner in this business model.
That is why, when hyperscalers look for a partner, they are looking for a semiconductor partner, once again, with the right product portfolio and the right capabilities and team to invest. So let's double-click on the expertise and experience. In the 1990s, building silicon was very simple. A chip had about 1 million transistors. It was very easy to do. Today's high-end compute is a very complex multi-chip module. Many of these chips are reticle-sized. It's a completely different ballgame. You really have to have all the expertise and IPs in-house. The reason is that you can't depend on third parties. You can't outsource these things, because the pace is really high. You can't just say, "Hey, I'll hire some contractors, and I'll get it done." It doesn't work that way.
We need to implement these cutting-edge products at a very fast pace and ship them in volume production, with very high yield, in a very short time. We are pushing the physical limits of everything in this area. So, for example, five years ago, when we acquired Avera and went out to the customer and said, "Hey, we want to be your partner," the discussion started with, "OK, well, we want the highest performance. We want the best yield. We want the best schedule. Have you done it? How many reticle-sized chips have you done in your life? Do you know how to close timing for that? Do you know how to do these multi-chip modules? How many 2.5D, 3D packages have you done? Do you have all those critical IPs in-house or not?
Do you even know what it takes to implement those IPs? Have you done mechanical and electrical test vehicles for your packages? How do I trust that when I come out with the chip, this package will really work? Have you done your homework?" Well, our decades of expertise and experience, combined with our investment in critical IP and package technology, is what started the dialogue. Since then, we have been working with our customer partners, building high-performance compute solutions. So let's see, how did we build these capabilities and this expertise? When we focused on this custom compute business five years ago, we knew we had to double down. We had to become the leader in process nodes, critical IP, and packaging technology. For that, we needed scale. We had to bring in additional expertise.
This is where we acquired a number of companies. But it was not a random, whatever-is-available process. It was a very thoughtful one: what component, what IP, what capability do we need? And we combined all of them into this platform. These companies not only brought critical IP and technologies; they brought decades of experience and expertise, and most importantly, highly skilled engineers and scientists. For example, to do all those complex SerDes, we knew we needed a large team, because you have to do all these things in parallel. So how did we build that large SerDes team? We combined Marvell's SerDes team with Cavium's, Avera's, and Inphi's SerDes teams. That's how we built a large, experienced SerDes team, and that's what is delivering the products we have.
Similarly, we knew we needed to do the same thing for custom silicon and for packaging. This is where we combined Marvell's packaging expertise with Avera's and Cavium's, as well as Inphi's. Inphi also brought optical capability, again very critical for this market, which gave us a very broad set of complex IPs and capabilities. So, in summary, over the last six years, through organic and inorganic but very thoughtful investment, we have built the portfolio of critical IP, the expertise, and the very large team needed to be a capable player in this market. So let's see what we have been doing. Matt talked about how we have lots of opportunities in the data center. Now, I'm really proud to share with you today that we have custom silicon products in each one of these areas.
They are either ramping this year or going to ramp in production in the next 12 months. As you can see, we do not just do one thing for one application. Our relationship with our hyperscaler partners is completely different. We are not an ASIC design house; we are design partners. We work closely with our customer partners to come up with the solution, and we build with them whatever pieces of the product are needed. This gives us many, many shots on goal. All these designs will drive growth for Marvell and create value for our customers as well as shareholders. So, as I said, building this complex compute silicon is not easy. I just want to walk you through some more technical details to explain how we are different from the many wannabes.
So, for example, you may have heard a lot of people out there say, "Hey, we can take third-party IPs and build a high-performance compute chip." Well, I do not blame them, because you don't know what you don't know. I have only one question: how many reticle-sized, high-performance compute chips have they built? And it's not only about building one. Have they taken it into volume production with high yield? That is what it takes to do these kinds of things. So let me walk you through it. We take a third-party core, for example, an Arm core. But then we optimize it for power, for area, and for performance, because this is going to be the building block for a high-end processor. And because we put in lots of them, the chip's performance is going to depend on how well optimized the core is.
But then it's not only about the core. The performance of the chip depends on how well you connect the cores. So the interconnect, which is a cache-coherent interconnect, its bandwidth and capability, is very critical for performance. This is where we build our own interconnect mechanism, and not only for the data: how do you distribute the clock? How do you do power distribution? All of those things are critical for the performance of the chip. This is where our recipes and experience come into the picture. This interconnect of ours is the best in the industry, which gives us the best scaling. I still remember when we built the first 16-core processor 20 years ago at Cavium, we could get 16x scaling. And you all know there were many wannabes who did 8, 10, 12-core processors that never scaled.
This is where decades of experience are at work to achieve the highest performance. Another aspect of compute chip performance is memory bandwidth. This is where we design our own high-bandwidth interface. Connecting this to the outside world, this is how we build a compute die. Now, often you need more performance than you can fit in a single reticle-sized die. That is why we put two of them together. The key here is this: we connect them with a very high-bandwidth, cache-coherent die-to-die interconnect, and we use the same die-to-die interconnect for the I/Os as well, so that the whole system looks like a single entity.
In fact, it is so seamless, and the bandwidth is so high, that for all practical purposes, software does not even know these are physically two separate things. So this is how we put all of them together in a complex package to build this large compute silicon. And just to note, in this picture, everything that is green is ours. In fact, all the blocks are ours. We designed them. Even the Arm core, for example, the optimized version, is our IP. So while this compute silicon is very important for AI infrastructure, you must be wondering why I'm talking about it in so much detail. Well, because it is very similar to what high-end AI accelerator chips look like. All the problems you have to solve here are the same problems our processor team has been solving for decades.
So let's take a look. This is a custom AI accelerator. In this case, the compute die, the algorithm, the architecture inside, is what the customer's team developed. However, to convert it into a physical die, we have to apply all those tools and all that expertise from decades of experience that our processor team has built up. And that's how you build that reticle-sized AI die. Then, once again, the die-to-die interconnect is critical here, the same thing: it has to have very high bandwidth, and it has to make all of this compute, whether it's 2 die or 4 die, look like a single entity. In the case of AI, the bandwidth requirement for the memory is much, much higher. And this is where even the memory being used is special: HBM, high-bandwidth memory.
But then it is also critical how efficient the interface to that memory is. This is where we design a very high-bandwidth, very low-power interface to the memory, which gives us memory bandwidth that is well balanced with the compute. Once again, we put all of this together on a very complex package. This is where the experience and expertise of having the mechanical and electrical test vehicles come into play. And our modular approach, with a separate I/O die and a separate memory interface, actually gives a lot of benefit to our customers: if memory technology improves, you can upgrade the chip, or if the process node changes, you can do another core die. You can build a new chip much faster and in a much more reliable way.
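One way to see what "memory bandwidth balanced with the compute" means is a roofline-style check. The accelerator numbers below are invented for illustration only: if a workload's arithmetic intensity falls below the machine's flops-per-byte balance point, HBM bandwidth, not the compute die, caps throughput.

```python
# Roofline-style illustration of "memory bandwidth balanced with compute".
# The accelerator numbers are invented for illustration only.
PEAK_TFLOPS = 400.0    # assumed compute throughput
HBM_TB_PER_S = 4.0     # assumed HBM bandwidth
machine_balance = PEAK_TFLOPS / HBM_TB_PER_S   # flops per byte

workload_intensity = 60.0   # assumed flops/byte for some AI kernel
attainable = min(PEAK_TFLOPS, workload_intensity * HBM_TB_PER_S)
bound = "memory-bound" if attainable < PEAK_TFLOPS else "compute-bound"
print(f"balance {machine_balance:.0f} flops/B -> {attainable:.0f} TFLOPs, {bound}")
```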
So once again, as I said earlier, all the green blocks in this picture are Marvell IP, and the core die is a joint development with the customer. Matt already talked about our designs at two large U.S. hyperscalers. Those products are already ramping in production. This is really exciting, as it is the result of our close collaboration and partnership with our customers. The third design is even bigger than the previous two. With all these engagements, we have captured a large share of the custom compute market. But for me personally, it is also a testament to our capabilities, our preparedness, and all the critical IPs we have been developing for a long time.
This engagement is also exciting for me because we are not only developing a critical piece of silicon for the AI infrastructure; we are working together with the customer as a single team to solve system-level problems. As I said earlier, our relationship with the customer is very different. We are not an ASIC house; we are a co-development house. We work with the customer. We understand the architecture. We discuss architectural challenges. We co-develop pieces of IP that are needed for future development. Then we develop multiple pieces of silicon for the whole solution. And that's how we solve the overall system-level problem, because the value of these IPs is what drives how good a solution you have.
This is where we are pushing the limits of every possible technology, in every direction, because we understand many, many technologies and capabilities through our own products and expertise. In other words, we are co-architecting with our customers to achieve the highest system-level performance and the best TCO, because at the end of the day, that's all that matters. This is why these relationships are long-term strategic partnerships, and they are multi-generational. So you already heard from Matt how accelerated infrastructure for AI is growing at a fast pace and how Marvell is providing connectivity, switching, and custom compute for AI infrastructure. We have been preparing and investing to be ready for this moment for a long time. And we are engaged with all the right customers.
So Matt talked about how we had 10% share of the total data center market last year and how we are driving toward a 20% share target. In custom compute, we had a very small share last year. Now, based on all the designs we have and our customer engagements, we will be at double digits soon. And overall, gaining share in custom compute will be the biggest part of getting Marvell to the 20% overall share target. This is a massive opportunity for Marvell. So I would like to say: if you want to participate in the AI infrastructure build-out, Marvell is your stock. Thank you very much.
Please welcome back Senior Vice President, Investor Relations, Ashish Saran.
Thanks, Raghib. We're going to move into the Q&A session now. Give us a minute or two to get set up. If I can have all the presenters come up onto the stage, we will take questions from the audience. If you have a question, please raise your hand. We have four mics, and folks will walk around and provide them. All right. I think we've got some questions right here in the back. Let's start there.
Here you go.
Vivek, go ahead.
Great. Thank you. Thanks so much for the very informative event. I actually had two questions, first on the custom ASIC side and second on the optics side. On the custom ASIC side, you mentioned very specific targets for this calendar year and next year. I'm wondering what the visibility is beyond that. I think, Raghib, you mentioned something about the newer program being even bigger than the earlier ones. We all love quantification, so as much quantification as you could provide for 2026 would be very helpful, so we can understand how sustainable the strength is in these programs. And I think, on the optics side, Loi, perhaps for you: we understand the role of the DSP, very, very critical.
But at what point do the power consumption, latency, and other factors become so overwhelming that, whether it is NVIDIA or somebody else, they are just forced to consider other solutions? So in the 2023-2028 profile you have provided, are you assuming any conversion over to CPO or other architectures that can offer power advantages versus the DSP architecture? Thank you.
Great. All right. Two great questions. Can you put them up on the screen here? Two great questions, Vivek. And I'm going to direct traffic today. So I'll start on a few, and we'll all team up here. I'll start on the first question and have Raghib add to it. So a couple of things. On our fiscal 2026, next year, just to set the context for everybody on the $2.5 billion: the way you should think about that is that it is the base. That's the floor. That's the view at this time. And if you look at where we were last year at this time, when we started projecting out where our AI revenues could go, at some point we said $400 million last year and $800 million this year. We're already showing you $1.5 billion.
So think of it that way as a starting point. That's typically how we do it at Marvell. If you project out, and you heard what Raghib said, and I'll let him add his own words, we're looking at a custom silicon market in 2028 of over $40 billion. And if you think about the math it's going to take to get Marvell overall to 20% share long term, then, by default, you've got to be very successful in that market. And so Raghib is talking about driving his business to those levels over time. So it's very significant, Vivek, in terms of what we're saying to you today relative to the design wins and the production ramps we have going on right now.
Yeah, as Raghib said, and why don't you cover it a little bit? We do think the newest opportunities are as significant as or even bigger than what we've already won. So it's a big deal.
Yeah. So as I mentioned during my presentation, at this point we have design wins that give us multi-year visibility. First of all, all of these are multi-generational engagements. And on top of that, the new engagements are even bigger than the ones before. So obviously, we talk about the 20% target that Matt mentioned, and I said during my presentation that we will be at double digits pretty soon. Custom compute is going to be a much bigger part of achieving that 20% target.
Great. Then let's go to the second question. Loi, why don't you take that one?
Yeah. So great question on the optics side, in terms of power: how critical it is, when it's going to matter, and so on. Well, the first thing is, as Achyut mentioned earlier, time is really the most precious commodity. Customers want solutions that are plug and play, that they can deploy now, and that can scale at 800G now. And for 1.6T pluggables, the ecosystem is pretty much gearing up to launch that product for the next generation at 200G per lane. So from that standpoint, linear optics and CPO are kind of late for that generation, just as we debated and discussed on the various panels and workshops at OFC two weeks ago. That situation doesn't change.
Of course, with the bandwidth continuing to go up 2x every two years, we need to continue to innovate. We cannot just do the same thing and have double the bandwidth mean double the power. When you saw Achyut's roadmap, every time he goes up 2x, he's actually cutting the power per bit by something like 30% or 40% or more. So don't just take today's power and multiply by the bandwidth.
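A quick sketch of that point with numbers. The 35% per-generation power-per-bit reduction below is taken from the 30-40% range quoted above; the takeaway is that total power grows far slower than the 2x-per-generation bandwidth.

```python
# Numbers behind "don't just multiply power by bandwidth": if power per bit
# falls ~35% per generation (within the 30-40% range quoted) while bandwidth
# doubles, total power grows ~30% per generation instead of 2x.
power_per_bit = 1.0
bandwidth = 1.0
for gen in range(3):
    print(f"gen {gen}: bandwidth {bandwidth:.0f}x, "
          f"total power {bandwidth * power_per_bit:.2f}x")
    bandwidth *= 2
    power_per_bit *= 1 - 0.35
```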
Yeah. And I'll just add, and then we can move to the next question, Vivek. What we said, even at OFC, I said this in my keynote, was: look, we do think there's a potential that some applications in some parts of the market, in closed systems in particular, may move to these types of approaches. That's fine. We are absolutely preparing for that. And just to give you a sense, at the 200 Gb per lane level, the ASP we can capture just on the TIA and driver, the analog content, not the DSP, is like the DSP content at 800 Gb. So there's still a significant ASP we can get. And we are absolutely going to prepare for that.
But I think what we're trying to say, and I think we did a good job at OFC and so did our customers, is: look, the bulk of the market is going to stay pluggable. That's the way to think about it. It's insatiable demand for data. If there are some applications that are closed and go LPO, Marvell will be there and will capture content and revenue. We're just asking: what does the market want? What do our customers want? But we also wanted to get the industry off this bandwagon that all these DSPs were going to disappear, which was sort of what had come up at OFC last year and proved to be completely, patently, and totally incorrect. Next question.
Right. Can we get a question from the other side of the room?
Hi. Tom O'Malley with Barclays. Thank you for hosting the day, Matt and team. My first one is on AECs. You talked about multiple customers. Then you showed a slide with the front end and the back end, and you spent some time talking about the transition to 100 Gb per lane in the front end. Could you talk about where you see the bigger opportunity for AECs? Is it when the front end refreshes? Or do you see the bigger opportunity today in the back end with some of the custom ASIC ramps? That's one. Then two, you talked about the SiPho engine. In optics, there's a lot of vertical integration; it's very important for your customers. If you were at OFC, InnoLight talked about doing their own silicon photonics. How are you going to balance your progression there?
Some of your customers may be trying to do more themselves.
Sure. So I'll have Achyut take the first part on AECs. And then, Loi, you can talk about SiPho and those dynamics. Go ahead, Achyut.
When you talk about the back-end and front-end networks, that's the logical differentiation in the network. Physically, it's really a question of the distance you need to travel with the AECs. So what we see today are opportunities in both the front-end and back-end networks, because all of them have to go up in speed at the same time. You can't have a back-end network that's very fast and then a bottleneck on the front end. So as all the network speeds scale, both front end and back end, we are seeing applications for AECs today at multiple customers, all of them using them in both the front end and the back end. So I think you should look at this as a combined opportunity, not something separate.
The same customers that are going to deploy these AECs will put some of them in their back end network and some in their front end networks. That's what we see from multiple customers.
Yeah. Great. Thank you, Achyut. And then, Loi?
Could you repeat that?
Yeah. Tom, why don't you just repeat the second question just so we got it?
Silicon photonics. You talked about the SiPho engine. Some of your customers are developing silicon photonics themselves. How do you balance that? How do you balance your customers developing their own products against your desire to further vertically integrate yourselves?
OK. That's a great question. Marvell is a component vendor. We do what we do best. Silicon photonics is still an emerging technology. Like I said, it's just starting. It's not clear who's going to be successful and who's not. As I said in my talk too, a lot of people say they have silicon photonics, but few companies have actually commercialized silicon photonics and shipped it at scale like Marvell has. We obviously will help our customers build silicon photonics and offer it as a component. Then, for those who have silicon photonics and can scale, of course, we will work with them and enable them to scale by offering the TIAs, drivers, and DSP as part of the chipset. It's a collaboration model to help grow the industry.
The goal is to make the pie bigger.
Yeah. And I'll just add, and then we can go to the next question. The way to think of it at this stage, Tom, is that it's completely not a zero-sum game. This is a brand new, emerging $3 billion market that's actually very disruptive. So to Loi's point, you're going to see a lot of players. I think what we tried to articulate in our section was, look, there's a lot of talk about SiPho. I mean, I went to OFC 10 years ago, and SiPho was up on every banner and booth. There are really only a couple of companies in the entire world that have shipped in high volume into data center applications with high reliability and high yield. And Marvell is one of those companies. So we like our position as we make this transition.
But look, I wouldn't get hung up right now on whether somebody's got one and somebody doesn't. If the whole pie opens up and it becomes commercially successful, $3 billion is a lot of TAM for everybody to go after. Next question?
Quinn Bolton with Needham. Two questions. Just wanted to say you guys did a great job of highlighting the opportunity in the front-end and back-end networks. But you sort of didn't address the compute fabric, which today is copper. NVIDIA, a couple of weeks ago, introduced their new NVLink 5 that spans up to 576 GPUs across eight racks. It sounds like there should be opportunity for you, whether it's AECs or perhaps retimers, to drive some of those copper links. I'm wondering if you could address that opportunity. And the second question is on DSPs at OFC. It sounded like optical units are certainly going up, but pricing may be an issue. I just wondered if you could talk about what you're seeing on the DSP or just the pricing front in the optics business. Thank you.
Yeah. Sure. Why don't I hand the second question to Achyut? And then you can decide if you want to answer the first one or you and Loi want to team up on that.
Sure. So let me take the second question first. Like Matt said, this is a rapidly expanding market. I mean, the CAGRs on these things are huge as these cluster sizes grow. And like anything else, as the market grows, we also move down the technology node. So to enable the market, we will provide whatever degree of price flexibility is needed, but we can also maintain our technical leadership and our profitability as we develop more and more cutting-edge solutions. Our customers really care about moving to the next node, the next fastest technology, quicker. That's really where we are focused. That's where you can really charge a premium and capture a majority of the value in the market. Now let me take the question on the compute fabric.
If you look at the event you mentioned from a few weeks ago, a lot of the focus was really on trying to keep some of those links passive and developing SerDes technologies to do that. And the customers will try as hard as they can to keep doing that. But you're exactly right. As the speeds keep going up, doubling every couple of years or even faster, at some point that math is going to break, and they're going to have to go active, whether that happens in one generation or two generations or three. And it's going to open a significantly large market for this active electrical cable SAM at that point in time.
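As a rough sketch of why that math eventually breaks, assuming skin-effect-style loss scaling against a fixed channel budget (every number here is our own placeholder, not a measured cable spec):

```python
# Illustrative only: why passive copper reach shrinks as per-lane speeds
# double. Assumes skin-effect-dominated loss (growing with the square root
# of frequency) and a fixed end-to-end loss budget; all values are rough
# assumptions, not cable specifications.
import math

loss_budget_db = 16.0      # assumed passive channel loss budget
loss_db_per_m_ref = 6.0    # assumed cable loss per meter at 26.6 GHz

for lane_gbps, nyquist_ghz in [(50, 13.3), (100, 26.6), (200, 53.1)]:
    loss_per_m = loss_db_per_m_ref * math.sqrt(nyquist_ghz / 26.6)
    reach_m = loss_budget_db / loss_per_m
    print(f"{lane_gbps}G per lane: passive reach ~{reach_m:.1f} m")

# Once reach drops below rack-to-rack distances, the link has to go
# active: an AEC inserts a retimer/DSP to regenerate the signal.
```

Under these assumptions, reach falls from roughly 3.8 m to under 2 m as the lane rate goes from 50G to 200G, which is the squeeze that pushes links from passive copper to AECs.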
Yeah. Great. Thank you. Next question.
Hi. Tore Svanberg from Stifel. Thank you for this great event. I had a question for Raghib. Raghib, you talked about the high-bandwidth memory interface being proprietary to what you do, and that being a value add. Could you maybe give us some parameters on how that compares with, again, the B200 from the event a couple of weeks ago?
So, as I said, the actual goal of these solutions is to achieve the highest bandwidth and the best power and latency. It's not as simple as, hey, you just build one and one size fits all. It's a result of the overall system-level memory bandwidth you are trying to achieve, which has to match the compute. So I would like to answer your question this way. We have, at this moment, the most cutting-edge technology and capabilities. We worked closely with our customer during the architecture phase to figure out exactly what the interface needs to be tuned to. That's what we built. Our goal is always to achieve the best performance and efficiency at the system level.
Great. Thanks.
We got some questions up front here. Carmen, thank you.
Thanks so much. Chris Rolland, Susquehanna. Thanks for this great day. This might be for Loi, but, Matt, I'll let you decide. So there's a lot of discussion about scaling out cluster sizes. My question is, what effect does this have on optics attached per GPU? Do we go from 2 to 3 to 4? We're talking about cluster sizes of perhaps 1 million GPUs as well. And then, secondly, in your slide, you showed that it's really training that looks like it's more scale-out. So if we have a move to inference, does that slow the rate? Or how would you expect that attach rate for inference to change over time as well?
Great. I think that's a great question for Loi. Yeah.
Yeah. So that's a great question. On the same chart that I showed you, I spanned the cluster size from 128 to 1 million. Today, 128 to 25,000 is the kind of cluster that we can build. For an inference machine, depending on what you're trying to do with it, the cluster could vary from 100 accelerators to 1,000 to 10,000. So, on average, the ratio of optics to XPU is about 1-to-1 at the small sizes and 2-to-1 or even 3-to-1 at the large sizes; the ratio depends on the cluster size. And in terms of the total, absolute number, you need to take the product of the size of the cluster, the ratio, and how many of them are being deployed.
Today, what we see is that both training and inference are driving a massive amount of interconnects.
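As a back-of-the-envelope illustration of the product rule Loi describes (every cluster size, attach ratio, and deployment count below is a hypothetical placeholder, not a Marvell figure):

```python
# Illustrative only: total optics = cluster size x optics-to-XPU attach
# ratio x number of clusters deployed. All inputs are hypothetical.

deployments = [
    # (XPUs per cluster, optics-to-XPU ratio, clusters deployed)
    (1_000,   1, 40),  # small inference clusters, ~1-to-1 attach
    (25_000,  2, 8),   # large training clusters, ~2-to-1 attach
    (100_000, 3, 1),   # very large cluster, ~3-to-1 attach
]

total = sum(size * ratio * count for size, ratio, count in deployments)
print(f"total optical interconnects: {total:,}")  # 740,000
```

The point of the sketch is that the absolute optics count is dominated by the large clusters, where both the size and the attach ratio are highest.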
Great. Next question. Where do you want to go?
Thanks, guys. Great day. Ross Seymour from Deutsche Bank. I actually have a question on the custom compute business model and how that may be changing. Everybody knows the revenues are huge. You've talked about those TAMs. We know the gross margins are lower and the operating margins are higher. But given the competitive environment, the complexity of your products, and the co-development-house-not-an-ASIC-house side of things, what are the trends on the gross margin side? Are they actually getting better over time, given these relationships? Because one of the big fears we hear is that it'll be great on the revenue side, but the margins might be a bit dilutive. And what does that mean for the stock, et cetera?
I would think, in this AI generation, there could be some changes in that business model that either Matt, you, or even Willem might want to discuss.
Yeah. Yeah. I'll give a couple of comments, and I'll have Willem chime in as well. We've gotten this question for some time, and obviously, as the ASIC opportunity and custom become larger, it certainly would be top of mind. The first thing, just to be clear, and we said it up front, is that we're not changing the financial model at this point. We still feel very good about the model we have. Long term, when you project out and you've got a $40-something billion custom compute TAM, if we can capture a significant portion, that would change the dynamic. Because, from the very beginning, when we acquired Avera, we always said the custom business would carry a lower gross margin than the merchant-type products.
But the operating margins actually would be about on par over time because, remember, we do get NRE, where customers are paying us to develop these products. So that provides an offset, and ultimately, the operating margin and the EPS line are on par. Although, if you get to this kind of scale, the EPS fall-through is massive because you're on so much more volume. So that's how we think about it. Willem, maybe add a couple more thoughts, because I think there are mechanisms we have to give you guys some clarity around that too.
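To make the NRE offset concrete, here is a purely hypothetical sketch; none of these percentages are Marvell guidance:

```python
# Illustrative only: a lower gross margin on custom can still land at a
# comparable operating margin when customer-funded NRE offsets R&D spend.
# Every percentage below is invented for demonstration.

def operating_margin(gross_margin, opex, nre_offset=0.0):
    # All inputs are fractions of revenue; NRE reduces net opex.
    return gross_margin - (opex - nre_offset)

merchant = operating_margin(gross_margin=0.64, opex=0.30)
custom = operating_margin(gross_margin=0.52, opex=0.24, nre_offset=0.06)

print(f"merchant: {merchant:.0%}")  # 34%
print(f"custom:   {custom:.0%}")    # 34%
```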
Yeah. I think the way we look at this, right, is clearly, as Matt outlined, when you go back in the past, we always had a mix where we expected growth in custom. So that was always contemplated in the model. But as it becomes a way more significant part of our business going forward, what we're looking to do is actually break it out for you guys, so that you can see the core business and that strong margin, and then, as Matt outlined on the custom side, the leverage, the good operating margin, and all the EPS. So in the future, as this becomes way more significant and way more material for us, you should look for us to break it out so you have that visibility.
Yeah. There are no current plans to do that. But since that's the feeling and the question, I think it's appropriate to let you guys know we're mindful of that dynamic and would want to provide that clarity to investors. In particular, the question is not so much, what's the margin on the ASIC and custom side? Because that's kind of an industry benchmark. It's rather, hey, in the core business, are you really maintaining your profitability? Is it really commanding best-in-class margins? Are you really getting paid for your engineering? And we obviously already watch that ourselves. That side of the business is doing great, and it will continue to do great. You saw all the innovation we're driving. But if that's helpful over time to investors, certainly, we would provide it to make sure you guys have the necessary information. All right. Next question.
Achyut, you want to direct them?
We can maybe go up front here.
Yeah. To Harlan.
Yeah. Morning. Thank you. Harlan Sur with J.P. Morgan. First off, congrats to the team for bringing Axion to the market with your large cloud hyperscaler customer. I mean, that CPU performance is pretty incredible. But my question for Raghib is, if you look at customer A and customer C, they've had a track record of doing all of their chips in-house. And now this inflection point has come where, all of a sudden, they're feeling the need to partner up with a Marvell or maybe some of your other accelerated compute ASIC competitors out there. Is that being driven by a lot of the things that you talked about, which is that the complexity increase is so great and the specialized IP requirements are going up so much that their internal teams just can't deal with the complexity? They don't have the IP capability.
They don't have the architectural expertise. Given the complexity increase, is this a trend that continues going forward? In other words, are these guys that used to do COT, over time, just going to continue to engage more and more with Marvell because they don't have all of this expertise in their portfolio?
Yeah. Let me frame it first, and then, Raghib, you can chime in, because I think it's a great way to summarize a lot of what you talked about. At the first level, the way we're going to cover these types of questions is one layer of abstraction higher, given the sensitivity of all these customers and who might be A and B and C and so on and so forth. I think you're asking the absolute right question, which I'd summarize as: why? When would you go COT? When would you need our services? What's the differentiation? And why are we seeing that trend continue? Because we actually think, as we showed in our TAM analysis, a bigger chunk of the TAM is going to move towards custom. By the way, some of that will be COT in there. There's no question.
Some of that will be COT. But even within that, we think we can get a significant share. So why don't I let you answer it at the why level versus the individual customer?
Yes. Thank you, Matt. I thought I clarified it very well during my presentation, but here I go again. So the thing that you have to keep in mind is that this is not like the past. First of all, this is not an ASIC business, period, which means you should pretty much forget about all the other players. Maybe a few large semiconductor companies have the right capabilities, the right IP, and the right scale to be able to do it. But all of them may not be doing these kinds of partnerships, because they may have their own internal products that compete with these things, and that's why they will never do it. So that narrows it down to really just a few companies. Then, on top of that, if you need to build a chip with existing IP, yeah, you can build it. Many people can build it.
What I tried to explain is that, in this case, you are pushing the limits in every direction, in performance and in technology. You are actually sitting with the scientists on the other side and really trying to understand: what is the problem we are trying to solve? How would we best fit all this complex silicon into a single complex package? How would we manage the power and thermals, as well as signal integrity and all of that? That requires developing a lot of new IP in every generation. An IP house doesn't do that. Actually, they have no clue what is needed. In fact, they follow once things are done. That's what every IP house does. This is why these relationships are going to be more and more critical.
It's not just about whether those companies have their internal capabilities or not, because, as I explained, the cadence is increasing a lot. Just to give you an idea, in these complex silicon products, the cost of the package versus the memory versus the chip itself is all pretty much equal. So if a better memory technology comes along, not from me, but improved in the industry, it makes sense to quickly upgrade to it and get better performance and overall better TCO. So it's not about who can do what. It's about what scale you have. And this is where these partnerships are becoming much, much more important and much, much closer. That's why, as I explained during my presentation, we are not an ASIC house. To be very clear, period.
We are not an ASIC house. What we do is co-development and co-design. We are a true partner to our customers in bringing best-in-class, highest-performance, most TCO-optimized products to the market. That is my mission. That's what we are doing.
Yeah. Fantastic. All right. Next question. Maybe something at the back?
Yes. Hi. It's Chris Caso from Wolfe Research. So two quick questions. One is on some of what you provided for your fiscal 2028 and some of the share assumptions you had there. One of the assumptions was maintaining share in interconnect. I think there's maybe a little bit more to that that perhaps you can explain. You have very high share in DSPs now, but there are some new products. You spoke about active cables, for example. So can you speak to your assumptions there? And then, secondly, could you touch on co-packaged optics a bit? You mentioned it in the presentation, but talk about the timing and applicability of that.
Sure. Yeah. I'll start off on those, and then I may have Loi comment on the second. On the interconnect one, I think the way to think about it is that it was easier to set a base case of maintained share because, as you pointed out, we have very high share today in DSPs. There are some emerging product categories, so maybe, over time, those become significant and we gain more share in those. But they're still a smaller part of the total. So saying we'd gain share seemed a little off to us at this point. How are you going to gain share when you've already got so much? I think it's just better to assume we keep it where it is, which would be high share still on the DSP side. Maybe it comes down a little bit over time.
I mean, we keep saying that. In fact, I think when we were acquiring Inphi, Loi, people said, well, it's really high now, but eventually, it's going to come down. And then it just sort of never happened. But I think we just said we have a leadership position there. Just assume we hold it. It would be a hard claim to say it's going to get any bigger. And then, on the second one, Loi teed it up: he founded the optics group at Inphi 10 years ago to go do CPO. And here we are in 2024, and there's nothing shipping. In fact, at OFC, in my keynote, I made a statement because I got asked about this too. I said, my experience was that CPO was always N plus two years away, with N being the current year. And it keeps rolling.
And then, in fact, after me, one of the senior fellows from our biggest customer got up and said, "Actually, first of all, Matt is being too polite. He's a very polite person. It's actually N plus 15 years in my experience, because when I joined Google 15 years ago, that's what we were working on. And it never went anywhere." So we keep an eye on it. I think Loi said it beautifully. We absolutely have the team to go do it. We have the switch capability from Nick. It's not like we're ignoring it. But we just don't see that pull right now from the customer base to really do it. So it's always going to be on the roadmap, and we're going to be on it.
But right now, we don't see that on the horizon as being a volume type of application. Next question.
I think we'll take the last question, given our time. Let's go ahead.
Thank you. C.J. Muse with Cantor Fitzgerald. Thank you for hosting today. A couple of questions on custom silicon. First one: you've targeted or announced three U.S. hyperscalers. Can you speak to your willingness to support China? And as part of that, can you give us an update on how you're thinking about and approaching vertically integrated players? Any update there in terms of maybe different workloads and what opportunities you could see over time? And then, a larger question around the workload side of things. As you're trying to get wins today that ramp in 2026, 2027, and beyond, are you seeing any meaningful changes in workloads, or in what the focus is and what the requirements are within compute? Thanks so much.
OK. Let me do the first one. I actually didn't catch the second part as well. On the first one, with respect to China, our approach in that market for several years has been to support our Chinese customers, but with merchant products. Our custom offerings there have generally been very limited. In light of all the restrictions that have come from the U.S. government, doing very high-end, sophisticated AI chips and advanced technologies there just presents a risk factor that's probably too challenging for us. We've really doubled down on the U.S. hyperscalers, certainly on the custom side, but also on enabling our merchant products both directly and through OEMs globally. We can still address, for example, part of the China hyperscale market. That could be with merchant parts.
Or it could be through OEMs that sell a platform into that market. So I hope that's helpful.
Yeah. I think the second part, C.J., where you're trying to ask whether there are opportunities outside of cloud, essentially with the vertically integrated players, was that the question? So, vertically integrated players. I think the reality is that you see such a big TAM right now with the cloud customers, and that's where we see the biggest traction. As Raghib talked about, customer C is an even bigger opportunity, which we've already won, compared to what we are shipping over the next few years. If there are opportunities where some of the vertical integrators want their own custom solution as their scale becomes bigger, we'll absolutely be in play. But the reality is, right now, there's so much happening just with the large cloud customers.
Yeah. So I think Raghib wanted to add. And then, maybe I'll say a few closing comments.
Sounds good.
We've got lunch queued up outside for everybody. You'll be seated at tables where you can meet members of the management team. But first.
Yeah. Yeah. So I think Ashish already covered that we are engaged with pretty much the who's who out there. We keep a very close pulse on these things. And in fact, as I said, it's not like there are 10 such providers. So even those people know who to go to when they need help. As the right opportunities come, of course, we are always engaged with all of them.
Yeah. So again, just to wrap it up, I appreciate all the great questions, and I appreciate everybody coming in to attend our AI event today. I'll summarize it by saying, look, we have a $75 billion TAM opportunity sitting in front of Marvell. We've already got 10% of this market today. That share is going to grow, by the way, in the current year we're in, because you can see the ramps we're having in data center. It's going to grow again as a percent of the total next year. So we're already marching down the path towards this 20%. We tried to provide you guys enough information because you're going to have your own model and your own point of view. This is a very explosive, very dynamic situation. So plug in your own numbers.
Plug in your own numbers. Figure out where you want to take it in terms of the market. But just to finish on this: we are absolutely confident, this management team and this company, that we can drive the share gains we've talked about and be successful. The question is, how big is the opportunity going to be? And you guys probably have better models than us. So good luck, and we'll see you at lunch. Thank you.