Marvell Technology, Inc. (MRVL)

Investor Update

Jun 17, 2025

Operator

Welcome to Marvell's Custom AI Investor Event. Please welcome Senior Vice President, Investor Relations, Ashish Saran.

Ashish Saran

Good morning and welcome to Marvell's Custom Silicon Investor Event. I'd like to draw your attention to our forward-looking statements. As a reminder, this presentation contains projections and other forward-looking statements regarding future events and the financial performance of the company. Such statements are predictions and subject to significant risks and uncertainties, which could cause our actual results to differ materially. Please consider the risk factors in our SEC filings, which could potentially affect our business and financial performance. These filings are available from the SEC and on our website. During this presentation, we may mention certain non-GAAP financial measures. Reconciliations to GAAP are available on our website. Let me walk you through our plan for today. While Marvell has five end markets, today we will focus exclusively on the custom silicon opportunity in our data center end market.

Matt will kick things off with an update on our custom silicon progress since our AI Day last year. Chris is going to focus on the key factors driving demand for custom silicon. Nick will discuss our full-service custom cloud platform, which is driving our success. Sandeep will do a deep dive into our technology platform. Will will bring us home with how his team is engaged and winning in custom silicon. We will conclude our event with a Q&A session. With that, I would like to invite Matt Murphy, Marvell's Chairman and CEO, onto the stage. Matt.

Operator

Please welcome Chairman and CEO Matt Murphy.

Matt Murphy

Awesome. Thank you. All right, good morning, everyone. It's great to see all of you. Welcome to Marvell's 2025 Custom AI Investor Event. To our audience joining via webcast, welcome and thank you for being with us. For those of you here in the room, Marvell's most senior distinguished engineers and fellows, thank you for the incredible work you're doing to drive this business and this company forward. Okay, let's get started. What if I were to tell you that there's a revolution happening inside cloud data centers? It all revolves around the silicon on which the data infrastructure is built. We call it cloud-optimized silicon. If that sounds familiar, it's because I stood on this stage almost four years ago and talked about exactly where this was headed. Check out the date on this thing.

This was my first slide, actually, from our 2021 Investor Day. It is truly incredible, is it not, how this has played out? Custom silicon has become one of the largest growth drivers in the entire semiconductor industry. Here is another slide from that event. Back then, we said that emerging applications were not going to work running software solely on x86 processors. Do you believe that was a debate back then? New workloads, we said, required new compute like GPUs, and that cloud providers would start customizing their machine learning chips, their CPUs, their DPUs, and ultimately the entire cloud infrastructure. Now, at Marvell, we have been on a mission since I became CEO in 2016 to lead in hyperscale data center infrastructure.

In 2018, my team and I made the decision that the future of cloud was going to be custom, and we needed to build a strategy and a team to lead. You're going to hear from many of those leaders here today. In 2021, we laid out the strategy for you, including our initial set of design wins. At that time, if you remember, there was a lot of debate in the industry on whether custom silicon could actually take off and ramp at scale. I don't think anyone questions that anymore. Custom is happening. It's happening in every cloud, and it's here today. Now, look, we all know we've come a long way, and it certainly has not been easy, but we've been investing for nearly a decade to get to where we are today.

If you're just waking up now and you want to be in the business of building custom silicon for the cloud, you're too late. Why has this trend accelerated over the past few years? Back in 2023, the top four US hyperscalers were spending about $150 billion in CapEx, which was already a huge amount back then. It grew to over $200 billion in 2024, and now over $300 billion in 2025. It's an incredible level of investment. If you think about that CapEx, a huge portion of it actually goes to the silicon. That's why these companies, the big four, were the first to customize silicon for their data centers. They clearly saw back then where this was heading and how they could benefit from this trend.

It just made sense to build optimized solutions for their individual use cases from top to bottom. It's not just the top four anymore. We see a whole new wave of companies investing in their own data infrastructure, and we call them emerging hyperscalers. The first group are companies building the foundational models; they've realized the value of controlling their own infrastructure, and they're beginning to build their own data centers. Take xAI, for example. They built a 200,000-unit AI cluster in just one year, and they've already produced a very powerful model in Grok. The second group are those building the end applications. They're also building highly specialized infrastructure for AI. For example, Tesla built its own Dojo-based data center to power the AI behind full self-driving.

There is media coverage suggesting many other companies are heading in the same direction. Recently, we are seeing the rise of what is called sovereign AI, in which nations around the world are launching major investments to build local AI infrastructure. All of this is driving even more demand, requires more innovation, and creates more opportunity for Marvell. All right, let us go back to the CapEx numbers. The top four US hyperscalers have grown; they are spending at a 46% CAGR over this period. If you zoom out and look at total data center CapEx, you see it has grown even faster, at a 51% CAGR. That is because historically, the rest of the data center CapEx was coming from tier two cloud providers and on-prem data center applications.
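As a quick sanity check on those growth rates, here is a rough sketch using only the rounded CapEx figures quoted above; the quoted 46% presumably reflects more precise, unrounded data:

```python
# Sanity-check the hyperscaler CapEx CAGR cited above.
# Figures are the approximate ones quoted in the talk.
capex = {2023: 150e9, 2025: 300e9}  # top-four US hyperscaler CapEx, USD

years = 2025 - 2023
cagr = (capex[2025] / capex[2023]) ** (1 / years) - 1
print(f"Implied top-four CapEx CAGR: {cagr:.1%}")  # ~41% on round numbers

# The quoted 46% CAGR implies 2025 spend a bit above $300B:
print(f"2025 spend at 46% CAGR: ${150e9 * 1.46**2 / 1e9:.0f}B")  # ~$320B
```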

More recently, we're seeing these emerging hyperscalers contribute at an increasingly significant rate. If you fast forward to 2028, analysts are forecasting data center CapEx exceeding $1 trillion. Who knows? Today, they're saying a trillion, but it could be more. Where's the lion's share of that spending going to be in that time frame? Clearly, these four hyperscalers are not slowing down anytime soon, but I wouldn't be surprised if the emerging hyperscalers grow to be a significant portion themselves. Either way, it's clear from the trend line and what we see from our customers that both are going to grow substantially in this time frame. What does all this mean for Marvell? At our AI event last year, we outlined a $75 billion TAM, and it was growing at an almost 30% CAGR across custom silicon, switching, interconnect, and storage.

We stand here a year later, and everything has gotten bigger. Our overall estimate has grown by about 25%, and we're now seeing forecasts for a $94 billion TAM in 2028. If you look underneath that, the two fastest growing markets last year have grown even more. Compute is almost 30% larger than what we projected last year, and interconnect is up about 37%. Both of these are right in Marvell's wheelhouse and are key focus areas for us. Now Marvell's total data center opportunity is $94 billion, growing at a 35% CAGR. Custom compute is the largest and fastest growing portion, followed by interconnect, then switching, and then storage. At today's event, we're going to focus on custom compute. Over the past year, this is where we've seen the biggest change, both in terms of the size and the diversity of the opportunity.

At the same time, we continue to drive strong execution across interconnect, switching, and storage. Compute remains by far the largest incremental opportunity in front of all of us. That is why we have chosen to focus today's event entirely on custom. We have made tremendous progress in the portfolio in the past year, and you will hear about it from our team today. They are going to walk you through what is happening inside this market and why Marvell is winning. First, let us take a look at what is included in the market. Clearly, there are the XPUs themselves, and this is the biggest part of the compute TAM. They are the largest and most complex chips in the world. The technology required to compete in this market continues to accelerate at an astonishing rate. You will hear more from my team today on what we are doing to win in this market.

What's also become clear over the past year is that the number of XPU opportunities continues to expand, both within the top four hyperscalers and also with the emerging hyperscalers we talked about. We're going to spend more time on this exciting set of opportunities in a few minutes. First, let's take a look at what else is in the TAM. These AI compute platforms comprise more than just XPUs, as it turns out. Modern AI infrastructure requires complete systems packed with silicon to run AI workloads at scale. Within these platforms, there's a multitude of companion chips that help support and scale the XPUs. That's what we call XPU Attach. Just to be clear, XPU Attach is all custom silicon.

This is independent and distinct from the other product areas of interconnect, switching, and the other parts of our broader data center opportunity. By the way, every platform is different. There are common elements like network interface controllers or NICs, power management ICs, and the scale-up fabric, just to name a few. In other cases, there are other customized solutions: specialized co-processors for security and other functions, or memory and storage poolers and expanders. What we're seeing is an explosion of different sockets inside these AI systems, but they're all attaching to the XPUs. This represents an incremental custom opportunity for Marvell on top of what we discussed last year.

If you turn to the $55 billion custom compute TAM, that breaks down to about $40 billion in XPU, which is growing at a 47% CAGR, and $15 billion in XPU Attach, which is growing at an incredible 90% CAGR, nearly doubling every year from a pretty small base in 2023, but still doubling. We will explain why that's growing so fast. Essentially, it's because of the increasing complexity of these custom systems. Just to put this in context, when you look out to 2028, the custom XPU Attach market is of the same magnitude as the entire custom silicon market for the cloud today. Both XPU and XPU Attach are incredibly important parts of this market, and we're going to spend a lot of time today walking you through the dynamics in each and how we're positioned to win.
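Working backward from those 2028 figures shows just how small that 2023 base was; a rough sketch using only the quoted TAMs and CAGRs:

```python
# Back out implied 2023 bases from the quoted 2028 TAMs and CAGRs.
def base_from_cagr(value_2028, cagr, years=5):
    """Discount a 2028 value back to 2023 at a constant growth rate."""
    return value_2028 / (1 + cagr) ** years

xpu_2023 = base_from_cagr(40e9, 0.47)     # ~$5.8B
attach_2023 = base_from_cagr(15e9, 0.90)  # ~$0.6B -- "a pretty small base"

print(f"Implied 2023 XPU TAM:    ${xpu_2023/1e9:.1f}B")
print(f"Implied 2023 Attach TAM: ${attach_2023/1e9:.1f}B")
```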

All right, let me update you now on our growing customer traction in this market. The goal here is to win what we call sockets. A socket represents a multi-generational opportunity inside a customer's architecture. As the architecture evolves from generation to generation, these sockets tend to sustain. Once you've won the socket, as long as you do a good job and you execute and you deliver, you're in the pole position for the next generation. Last year, we talked to you about these three custom compute sockets at three different US hyperscalers, and they're all on track. Two we've taken to production, and they're driving substantial revenue for us today. The third is well into its development, and it's also on track. We're fully engaged on the next generation on all three. Here's the interesting thing.

We have also won nine additional custom sockets at the top four hyperscalers. Some of these we had already won last year, and some are brand new this year. An initial wave is in production today, and the remainder are in design execution, set to deliver revenue over the next couple of years. These are the XPU Attach sockets. As I mentioned before, as architectures evolve, sockets tend to sustain. When the architecture changes, and it is changing rapidly, new sockets emerge that were not part of the prior architecture. That is what we are seeing in custom AI infrastructure now: a rapidly evolving architecture with more custom sockets each generation. This means more shots on goal for the Marvell team, and we are winning.

That gives us 12 total custom sockets with the US hyperscalers, up from the three we talked about last year. That is just at the top four. Now, remember I talked about the emerging class of hyperscalers. We have been very active in this segment as well. I am excited to announce to all of you today that we have won two XPU and four XPU Attach sockets in the emerging category. These are either in production or on their way to production today. This is going to continue to build over time, and it is an incredibly active part of the market. When you add it all up, Marvell has won 18 different sockets in the custom compute market. Those 18 give us line of sight to achieve our market share goals.

If you take a step back, there's been this perception that the entire custom silicon market for data center really comes down to just a handful of sockets. And to be fair, that used to be true. If I look back to 2023, the largest single socket probably made up 75% or more of the TAM, one socket. But that's clearly no longer the case. By the time we get to 2028, I would expect that no individual socket is going to be more than 10% or 15% of the TAM. That's because we're seeing more XPU opportunities at existing and emerging hyperscalers. The XPU Attach opportunities are expanding even faster as these architectures evolve. At Marvell, we've won 18 of them already. We're not stopping there.

The pipeline of opportunities in front of us continues to grow rapidly with an incredible number of active engagements. In fact, today, we are tracking more than 50 additional opportunities in our pipeline. More than 50. It is not just driven by one or two customers. There are over 10 different customers now that we are engaged with across this range of opportunities. If you look at the magnitude of this and you add it all up, there is a $75 billion lifetime revenue potential for Marvell hanging out there in front of us. These opportunities represent growth above and beyond the ones we have already won. Above and beyond. This is the future. Now, some of these could turn into revenue by 2028, which then would be incremental to our plan.

Let me just put the $75 billion in context to help investors on the line think through what this means. I'll start with the opportunities first. When you break down the 50-plus opportunities, about one-third are XPUs and about two-thirds are XPU Attach in terms of the count. Now, these XPU programs are monsters. They're typically multi-billion dollars in lifetime revenue each, and they run over an 18- to 24-month period in terms of the lifespan of the program. That's what we mean by lifetime revenue: you just take the total revenue over the lifetime of the project. The XPU Attach programs are also very significant. They're incredible opportunities, in the several hundred million dollars in lifetime revenue, but they span a two- to four-year period.

For the financial community on the call, let me give you some context on how you might want to think about this. Let's take the 18 programs we talked about: 5 are XPUs, 13 are XPU Attach. We have given you a range for the lifetime of the programs and the scale of the potential revenue. You are all smart on the line; I will let you go do the math, and you can actually build a model and start to imagine what this could look like. You could easily see how we could get to the revenue scale we have been discussing. If you take our 18 existing programs and you think about the 50-plus opportunities we are chasing, you can start to imagine what that opportunity looks like long-term.
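Taking up that invitation, here is one way the math could be roughed out; the per-program dollar figures below are illustrative readings of the ranges described above, not company guidance:

```python
# Illustrative model of lifetime revenue from the 18 announced sockets.
# Per-program figures are hypothetical midpoints of the ranges described
# ("multi-billion" per XPU, "several hundred million" per Attach socket).
xpu_sockets, attach_sockets = 5, 13

xpu_low, xpu_high = 2e9, 4e9            # assumed lifetime revenue per XPU program
attach_low, attach_high = 0.3e9, 0.7e9  # assumed per XPU Attach program

low = xpu_sockets * xpu_low + attach_sockets * attach_low
high = xpu_sockets * xpu_high + attach_sockets * attach_high
print(f"Illustrative lifetime revenue: ${low/1e9:.0f}B - ${high/1e9:.0f}B")
# -> roughly $14B - $29B across the 18 won sockets, before any of the
#    50+ pipeline opportunities ($75B potential) convert.
```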

Okay, why are we winning? How do we get to this place where we now have 18 different sockets that we're ramping into production? That is what my team is going to spend the day talking to you about. We're going to explain what we do for these customers and why Marvell is so well positioned. Fundamentally, it comes down to this: we are unique as an end-to-end, full-service custom silicon provider. What we do is bring together the best system architecture, design IP, silicon services, packaging expertise, and full manufacturing and logistics support to enable our customers to realize their silicon ambitions. This means the customer does not need to cobble together IP from a variety of third parties, then go off and hire a design house to complement their in-house team, and then find another vendor to manage their supply chain.

While this may have worked in the past from time to time, it simply is not going to work in the future as the technology landscape accelerates. You need pre-tested, pre-integrated IP and architecture support coupled with best-in-class design. That is what we do. The underlying technology platform we have established at Marvell is truly astonishing. It is what allows us to continue to win these multi-generational programs. We have a proven track record of delivering on leading-edge process nodes. Today, we are in volume production in both 5 nanometer and 3 nanometer. We are not stopping there. We already have test chips on 2 nanometer, which will enable our first 2 nanometer products. We are leading the charge into the Angstrom era with development already on A16 and A14 nodes for future products. You will hear from Sandeep and his team on how we are executing there.

At the IP level, we built one of the broadest portfolios of analog mixed signal IP in the industry, focused on high-performance, low-power, low-latency SerDes, along with our high-bandwidth die-to-die integration. Just to give you one example, Marvell demonstrated the world's first 448 gig SerDes running at OFC a few months ago. This kind of capability is critical both for scale-up and for scale-out networks. Finally, advanced packaging. This has become just as important as the silicon itself. It is not an afterthought anymore. It is fundamental to enabling AI technology at scale. Do not just take it from me. Let's hear from two of the largest cloud computing companies in the world and two of Marvell's most important partners. We built an incredible partnership with AWS, and Marvell has been a thought leader on implementing EDA in the cloud for our products.

It's helped us scale our design capability and move at hyperscale speed for our customers. We're building chips for their cloud, which is helping them scale. You can see a quote here from AWS CEO Matt Garman, which really speaks to the strength of the partnership and the joint work that our teams have been doing together. All right. Next, I'd like to introduce Rani Borkar, who has graciously agreed to speak with all of you today about our partnership. Rani is Corporate Vice President of Azure Hardware Systems and Infrastructure at Microsoft. Let's turn it over to Rani.

Rani Borkar

Thanks, Matt. It's great to be here. We are in a pivotal moment in our industry, and it's partnerships like ours that enable us to innovate boldly and deliver at scale. At Microsoft, we are architecting our entire cloud stack to deliver the highest performance, lowest cost, and most secure infrastructure. As the demand for compute and model innovation outpaces the traditional rate of hardware innovation, we are reimagining every layer of the stack to meet the needs of next-generation cloud and AI workloads. This requires an end-to-end approach, optimizing across the data center, system, silicon, and the serving and application layers. By doing so, we are able to bend the cost curve and accelerate innovation, unlocking compounding gains that go beyond what's possible in any one layer. Sitting at the foundation of our stack is silicon, where deep hardware-software co-design is what truly unlocks breakthrough capabilities.

By combining Microsoft's software expertise with purpose-built silicon, we deliver the performance and efficiency needed to scale compute and AI infrastructure for the next wave of transformative applications. We do not do this alone. These gains, these breakthroughs, are only possible through deep, long-standing partnerships across the ecosystem. For more than a decade, Microsoft has partnered with Marvell as we advance our infrastructure. As a leader in silicon, Marvell has been part of various aspects of our custom silicon journey, and we benefit from its ongoing technology innovation. As we look ahead to the next wave of cloud and AI innovation, we are excited to continue our relationship with Marvell as a trusted partner on this journey.

Matt Murphy

Wow. Very cool. Awesome. Thank you, Rani. I think you captured it really well, how companies like Microsoft are working to optimize their infrastructure from top to bottom. Okay, let's dive back into Marvell's data center business. Today, I want to help investors get a better sense of all the moving pieces inside the data center end market. Last year, we did about $4.2 billion in data center revenue, and about $500 million of that, you can see it on the bottom there, came from on-prem data centers. That part of our business has been around for a while, and it's pretty stable. For modeling purposes, you want to think about that business generally staying in that zip code going forward, which means the rest of our data center revenue is now coming from cloud and AI.

Now, if you go back a few years when ChatGPT launched, we went through this exercise to break out AI as a separate category. At that time, it was new, and it was emerging, and it was really just tied to a handful of programs. After some work, we were able to get a pretty good read on that revenue, and we made some projections, which turned out to be actually wrong. The good news is they were wrong in a good way because we have vastly exceeded them from just a few years ago. As I sit here today, we have seen this just dramatic transition where AI is now in everything. I mean, all the applications we use every day and the tools that we use, AI is embedded, and it is touching every aspect of technology, including everything in the cloud.

Most of the applications running in the cloud are using AI. All the cloud infrastructure is really becoming an AI factory. Going forward, when I look at the investment and the CapEx, all of it has moved to AI. From our perspective, our cloud revenue will be AI revenue going forward. It is just all combined. Now, let's take a deeper dive inside that AI business. In our last earnings call, we said that in Q4 of fiscal 2025, about 25% of data center revenue was custom. If you just take out the on-prem piece, it would actually be greater than 25% of the cloud AI revenue. With the wins we have in place, we expect that custom will grow to greater than 50% of cloud AI revenue over time.

Based on the size of the opportunity, it probably just keeps going from there. If you go back to the $55 billion custom compute TAM, Marvell had very little share back in 2023. It was less than 5%, and our custom silicon programs at that time were just getting started. Now, going forward with the 18 sockets that we talked about, we're targeting a 20% share by 2028. We're actually well on track to achieve it. In fact, with the five XPU sockets we've won, I would expect that we could achieve about 20% of the $40 billion market. Even though it started a little bit later and it's newer, I actually don't see any reason why we can't achieve a 20% share on the $15 billion XPU Attach market in that same timeframe.
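In dollar terms, those share targets translate directly; a quick sketch using the quoted TAM splits:

```python
# Translate the 20% share targets into implied 2028 custom revenue.
xpu_tam, attach_tam = 40e9, 15e9  # 2028 TAMs quoted above
share = 0.20

implied = share * (xpu_tam + attach_tam)
print(f"Implied 2028 custom compute revenue: ${implied/1e9:.0f}B")  # $11B
```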

We've already won 13 sockets in that portion of the market. Our revenue in XPU Attach, by the way, from last year to this year is doubling, and it looks like it's going to keep doubling going forward. If you put it all together, we're well on track to achieve our 20% share. Now, let's take that $55 billion TAM, okay? Then we're going to add it back to our total data center TAM. See it there? Okay, now let me tell you about our progress. Last year, we had said that we had about a 10% share of a $21 billion market in calendar 2023. We're targeting a 20% share in the 2028 timeframe for data center. In 2024, the market actually grew almost 60%, but our business at Marvell almost doubled.

If you take the $4.2 billion we did last year in data center revenue, we actually had about 13% of the market. Obviously, this year, we're still growing quite strongly. While we expect to grow continuously, we actually see an even larger step up coming in calendar 2027 as several new programs, including a major XPU socket, hit mass production. This all positions us very well to achieve our 20% target by 2028 on a market that's now grown to almost $95 billion.
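That share math can be cross-checked from the quoted figures; a rough sketch:

```python
# Cross-check the share math: $4.2B of data center revenue at ~13% share
# implies a ~$32B 2024 market, consistent with $21B in 2023 growing ~60%.
revenue_2024, share_2024 = 4.2e9, 0.13
market_2024 = revenue_2024 / share_2024
print(f"Implied 2024 market: ${market_2024/1e9:.0f}B")   # ~$32B
print(f"$21B grown 60%:      ${21e9 * 1.6 / 1e9:.0f}B")  # ~$34B, same ballpark

# The 2028 target: 20% of a ~$94B market.
print(f"Implied 2028 target revenue: ${0.20 * 94e9 / 1e9:.0f}B")  # ~$19B
```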

Let me take a moment to summarize everything we've talked about this morning. Last year, we told you we had a $75 billion opportunity in 2028. This year, we're saying it has ballooned to $94 billion. Last year, we talked about the custom XPU opportunity. This year, we unveiled an entirely new market for custom XPU Attach. Last year, we said we had a 10% share driving to 20%. This year, we showed we're already at 13% in 2024 and growing strongly again in 2025. Last year, we talked about three existing custom XPU sockets; this year, we revealed 18 multi-generational sockets. Last year, we talked just about the opportunity at the top four U.S. hyperscalers. This year, we see a brand new customer set with emerging hyperscalers. On top of that, we're actively pursuing over 50 additional opportunities worth $75 billion in potential lifetime revenue for Marvell. That's the opportunity in front of us. Now, I'm going to hand it over to my very talented leadership team. Once you hear from all of them, you'll see why I'm so excited and why I'm so confident that we can make this happen. Thank you very much.

Operator

Welcome Chief Operating Officer Chris Koopmans.

Chris Koopmans

All right, good morning, everyone. I'm excited to be here this morning and talk to you all about our custom silicon opportunity. I'd like to start by taking a step one level deeper into that market and talking about what's driving the move towards customization. Let's start with that trillion dollars that Matt outlined at the beginning of his presentation. This is analysts' forecast for total data center CapEx in 2028, and it's based on a fairly modest growth rate from here of only 20% per year. If we start with that trillion dollars, go one level down, and take out the physical infrastructure, that's about $800 billion in equipment. Inside that is silicon, about $500 billion in semiconductors. $500 billion is bigger than the entire global semiconductor market in 2020.

Half of the total CapEx is being spent on silicon. The largest part of that is the accelerated computing TAM, at about $350 billion. Now, that $350 billion is based on current analyst forecasts, and some folks think it's going to be even larger than that. That's fair. A year ago at our AI event, we said calendar 2028 would be $172 billion, based on analyst forecasts at the time. It has actually doubled just in the last year. Clearly, there's a lot of room for growth in this accelerated computing TAM.
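The CapEx cascade and growth claims check out on the quoted round numbers; a quick sketch, where the implied 2025 base is an inference rather than a quoted figure:

```python
# The 2028 CapEx cascade as quoted: $1T total -> $800B equipment
# -> $500B semiconductors -> $350B accelerated computing.
total, equipment, silicon, accel = 1000e9, 800e9, 500e9, 350e9
print(f"Silicon share of total CapEx: {silicon/total:.0%}")  # 50% -- "half"

# "A fairly modest growth rate from here of only 20% per year" implies a
# 2025 base of roughly $1T / 1.2**3, an inference, not a quoted figure.
print(f"Implied 2025 base: ${total / 1.2**3 / 1e9:.0f}B")  # ~$579B

# And the accelerated computing forecast has doubled in a year:
print(f"Growth vs. last year's $172B forecast: {accel/172e9:.1f}x")  # ~2.0x
```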

If we take that $350 billion and break it down into its component parts, we see that a large portion of it is the XPU itself. Think of this as the XPU tile. There's a big Attach market as well, as Matt outlined. The biggest part of the Attach is actually the HBM, or high bandwidth memory, that's packaged inside the module. The second part is the XPU Attach that Matt outlined: the other networking co-processors, network interfaces, memory and storage interfaces, and controllers. If you put it all together, there's a very large TAM made up of these component parts. Now, remember, this is the total accelerated computing TAM. At Marvell, we focus only on the custom portion. For custom, when we work with the very largest hyperscalers in the world, what they tell us is that they want to buy the HBM directly. They have very large relationships with these memory vendors, they buy a lot of DRAM today, and they want to purchase it directly.

Now, emerging hyperscalers might want a turnkey service from Marvell, and we can deliver that for them. Just to be conservative, we're going to take that out of our TAM, leaving us with a $220 billion TAM. That's just the DRAM that I pulled out of the TAM. Ultimately, if we deliver a custom HBM solution, which we'll talk about later today, where we're building a bottom die with our proprietary die-to-die interfaces, that will be part of the XPU Attach. Now let's take this $220 billion TAM and ask ourselves what percentage of it is going to be custom by 2028. Last year, we said we thought about a quarter of the market would be custom.
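Those figures pin down the custom TAM derivation; a quick sketch, where the DRAM carve-out is inferred from the difference rather than quoted:

```python
# Deriving the $55B custom compute TAM from the figures above.
accel_tam = 350e9       # 2028 accelerated computing TAM
addressable = 220e9     # after removing direct-purchased HBM/DRAM
custom_fraction = 0.25  # "about a quarter of the market" goes custom

hbm_carveout = accel_tam - addressable  # ~$130B of DRAM (inferred, not quoted)
custom_tam = custom_fraction * addressable
print(f"Implied DRAM carve-out: ${hbm_carveout/1e9:.0f}B")
print(f"Custom compute TAM:     ${custom_tam/1e9:.0f}B")  # $55B
```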

Based on the progress over the last year and the opportunities we see in front of us, we're well on the way to achieving that and ultimately potentially exceeding it. This is where we get the $55 billion custom compute TAM that Matt talked about. You can see that it is growing very fast as customization increasingly takes over the market. XPU is growing almost 50% a year, the custom XPU, I should say. The custom XPU Attach is almost doubling every year. This really leaves us with the question of why. Why the move towards customization, and why would custom XPU Attach grow even faster than the custom XPUs themselves? The answer lies in the workloads. No longer do we have a simple, singular AI workload where we're just training a large model. The workloads have now diversified significantly.

You have pre-training and post-training. You have a whole variety of inference workloads. As the workloads diversify, specialization is on the rise. You build specialized infrastructure to get superior total cost of ownership and superior performance for those diverse workloads. That, in turn, drives customization. Let's look at those workloads. First, we absolutely have the original workload of building the largest model possible. We see every company out there racing to build the biggest possible cluster to train the biggest possible model with the largest amount of data. You can see where progress has been made over the past couple of years. A trend has emerged: a race to build the first million-XPU cluster. That's one of the critical workloads driving this infrastructure trend in the market.

We have also now seen inference emerge as one of the most important applications in AI. Inference has also diversified. You have traditional large language model inference, which covers chat interfaces, content writing, and search. This is a fairly compute-light but memory-intensive workload. It is memory-intensive because you need to be able to access the entire model for every query, but you are not doing that much computation on each query. On the other side, we have chain-of-thought-based reasoning inference. This is a much different application, much more compute-intensive, as you are recursively asking the infrastructure to produce a better and better result. It allows you to produce much more complex answers, do deep research, and solve complex puzzles and problems, but it is much more compute-intensive. The trend is clear. If you are in the business of building an AI factory, one size does not fit all.
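A back-of-envelope calculation makes the memory-intensive point concrete; the model and accelerator numbers below are illustrative assumptions, not Marvell or customer figures:

```python
# Why chat-style inference is memory-intensive: each generated token must
# stream every weight from memory while doing only ~2 FLOPs per weight.
params = 70e9        # illustrative 70B-parameter model
bytes_per_param = 2  # fp16/bf16 weights

hbm_bw = 3e12        # illustrative accelerator: 3 TB/s of HBM bandwidth
peak_flops = 1e15    # and 1 PFLOP/s of compute

# Single-stream decode, ignoring the KV cache for simplicity:
mem_limit = hbm_bw / (params * bytes_per_param)  # tokens/s if bandwidth-bound
compute_limit = peak_flops / (2 * params)        # tokens/s if compute-bound
print(f"Memory-bound ceiling:  {mem_limit:.0f} tokens/s")      # ~21
print(f"Compute-bound ceiling: {compute_limit:.0f} tokens/s")  # ~7143
# The compute sits mostly idle, so memory capacity and bandwidth dominate.
```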

Ultimately, you need to be able to have clusters that are very large with heavy compute to train the biggest possible models. You also need to be able to have millions of instances of smaller clusters with less compute to answer inference queries. You need to have chain-of-thought-optimized infrastructure to deliver the best performance in that particular part of the market. As AI workloads diversify, so too must the infrastructure. We are seeing it today. We have seen the largest hyperscalers deploy huge clusters of GPUs. You have also seen them deploy very large custom XPU fleets. In some cases, we have seen them deploy performance-optimized large-cluster custom XPU fleets. We have seen them deploy separate efficiency-optimized smaller-cluster custom XPU fleets. We have even seen emerging hyperscalers realize the value of deploying optimized inference infrastructure for their model. The dataset matters.

If you're training your model on millions of hours of video for self-driving, that's a very different workload than if you're training it on billions of tweets. You have seen companies build separate custom infrastructure for those two types of training workloads. It is pretty clear that diversification is driving specialization of the infrastructure, which is driving customization of the infrastructure. Let's take a quick look at what that means, and let's start with the XPU itself. Now, I get a lot of questions about this. What is a custom XPU? Is it a hard-coded ASIC? The answer is no. Our customers have designed these as highly programmable processors. They all have a multitude of compute cores, on-chip SRAM, and then I/O interfaces to the outside world. Where does the customization come in? It starts in the compute cores themselves.

Do you have more matrix-math-optimized cores or more scalar and floating point cores? What level of precision have you optimized for? What's the ratio of cores to memory? It's really important to get this right, because if you end up with the wrong ratio, you end up with idle compute waiting for memory, or with idle or empty memory while compute becomes the constraint. Having the right ratio of these two is critical to building an application-optimized XPU. That continues in the module. Again, getting the balance of on-chip SRAM versus in-module HBM memory right for your application is critically important and can really affect the performance and the total cost of ownership.
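That ratio argument can be made concrete with a simple roofline-style check; the chip parameters below are hypothetical:

```python
# Roofline-style check of compute-to-memory balance for a hypothetical XPU.
peak_flops = 2e15  # 2 PFLOP/s of matrix compute (illustrative)
mem_bw = 4e12      # 4 TB/s of HBM bandwidth (illustrative)

machine_balance = peak_flops / mem_bw  # FLOPs available per byte moved
print(f"Machine balance: {machine_balance:.0f} FLOPs/byte")  # 500

def bound(intensity_flops_per_byte):
    """Classify a workload by its arithmetic intensity."""
    return ("compute-bound" if intensity_flops_per_byte > machine_balance
            else "memory-bound")

print(bound(2))     # token-by-token decode: memory-bound, compute idles
print(bound(1500))  # big-batch matrix work: compute-bound, memory idles
```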

That continues in the interfaces. This is where we build out into the scale-up fabric. Is it a direct-attached copper fabric? Is it a co-packaged copper or a co-packaged optics fabric? As you customize all of this, you build a scale-up logical XPU that looks like one XPU to the software. Even in this architecture, we see specialization. Is that fabric an any-to-any star fabric or a nearest-neighbor-connected torus fabric? Ultimately, depending on the workload and the application, a specialized infrastructure makes the difference. Let's take a step back. If you're a hyperscaler or an emerging hyperscaler today and you've made the decision to customize your XPU, the next question is, what platform do you deploy it in? On the left, we put the general-purpose platform. This is where everybody starts: deploy it in a platform that was originally designed for racks of x86 servers. When you deploy it there, and the hyperscalers started there, you quickly find out that it becomes a bottleneck.

You can't really unlock the value of your XPU in a general-purpose platform. The largest hyperscalers have already built full custom platforms to house their custom XPUs, optimized for their applications. Recently, we've seen new ideas emerge: for example, a third-party accelerated-infrastructure-optimized platform available for others to drop their custom XPUs into, or a standards-based accelerated infrastructure platform. Just in the last couple of months, Marvell has made a few announcements in this area. At Computex last month, we announced with NVIDIA our ability to help our customers build an XPU and integrate NVLink Fusion, which would allow them to deploy it within NVIDIA platforms. Separately, we've announced our UALink platform, which would drive a move towards a standards-based accelerated infrastructure platform. Whatever the case, it's clear that the platform needs to be optimized for accelerated infrastructure, which is driving customization.

As you go from the left-hand side towards the right-hand side, the platform becomes increasingly customized. That is what is really driving the move towards custom XPUs optimized for the different workloads, and the even larger increase in custom platforms, which got started later but are moving very fast. At Marvell, with our 18 current sockets, we are well on our way to achieving our 20% share in this market. With the 50 opportunities that we are driving, we are just getting started. Thank you.

Operator

Welcome, Senior Vice President and General Manager, Cloud Platform, Nick Kucharewski.

Nick Kucharewski

Good morning. Thank you all for being here today. You know, I've been building semiconductors for hyperscale data center applications since 2007. It's been about 18 years. What we're seeing today is really unprecedented in terms of the market opportunity for advanced semiconductor design. Also, the game has changed in terms of what's required to build a leading product for this market. We're seeing all new levels of vertical integration, system innovation, a lot of custom silicon, and the pace of new component technology is moving faster than ever. This requires a different category of company to participate in AI silicon. Marvell has built that company. We have the team, and we have the product strategy for this next generation of custom AI. We have 30 years of experience building high-performance ASICs, with a track record of doing it right the first time.

We have built a comprehensive portfolio of technology specifically for data center and AI applications, much of which we've developed over multiple generations. We have built an engagement model specifically designed to work with hyperscale customers and manage the entire lifecycle of the design process. That is what you're going to hear about over the remainder of today's sessions: the Marvell Custom Cloud Platform. In my talk, you'll hear about our product strategy and our engagement model. Then you will hear from Sandeep and the engineering leaders who are building these leading portfolio technologies and bringing our customer solutions to market. From Will, you'll hear more about the opportunities and why our customers choose to work with Marvell. There is going to be a lot of information here, so we wanted to give you a framework for how to think about it.

It really comes down to one very simple question: what do our customers look for in choosing a custom silicon partner? For our customers, this question is becoming more critical with every passing day. You see, the customer's final product, which is the AI data center itself, is a composite of silicon, system, and software technologies implemented in the right combination and then instantiated thousands of times across the full AI deployment. Inside every chip is a long list of key enabling technologies which allow those next-generation systems to reach the highest level of performance. Each one of those technology elements is innovating continuously. The key is to choose the right pieces in the right combination for that next-generation product.

As Chris mentioned, the applications in AI are evolving so fast that there is an almost continuous demand for new system-level approaches to achieve the next level of optimization or customization for those new workloads. All of this is happening at industrial scale. That means exaflops of compute performance, petabits per second of networking bandwidth, gigawatts of power. When you're talking at that scale, one day of delay in bringing your product to market can mean millions of dollars in lost revenue. The stakes are higher than ever before. It's critical to work with the right silicon partner. In the world of custom silicon, not all companies are the same.

In fact, it seems like every month I hear about a new company that's announced its intentions to get involved in custom silicon: put a sign on the building and announce that you're ready to do custom chips. It's not that simple. Marvell's engagement model is based on a full lifecycle engagement that anticipates the needs of the market multiple years before design even begins. We build the core IP and the design methodology well in advance. We go through a process with the customer to choose the right technologies. We work with customers on their product designs to integrate the latest leading technology for silicon, package, systems, and manufacturing. This is not the only way to do it. In fact, there are other models within the industry for building custom silicon. In the second row, you see physical design services.

Now, this is what most people would refer to as an ASIC provider. It's a business model that's been around since the early 1980s. In this case, the custom partner may participate in a portion of the design process and then handle the manufacturing. Largely, the customer is doing all of the heavy lifting. They have to find all of the key IPs and make sure they integrate together. They take on a lot of the work in making sure the design is ready for volume manufacturing. There is also an option out there for the do-it-yourselfers. This is known as COT, or customer-owned tooling. In this case, the end customer is responsible for procuring all of the IP, doing all of the design, developing their own design methodology in-house, and ensuring that everything comes together in combination.

The custom partner is just handling the manufacturing. At Marvell, by contrast, the core technology, our design methodology, our manufacturing standards, and our logistics systems have been developed over years by some of the most talented engineers in the industry. The technologies and services we provide cannot be found just by opening the Yellow Pages or looking on Yelp for an ASIC services partner. It is unique and differentiated IP that is not generally available on the open market. This is what customers look for when they want to reduce risk and go to market on time. I will be talking about each one of these elements in terms of what Marvell provides that is different and differentiated within the market. First, let's talk about system architecture. Marvell starts multiple years ahead of a project to see what technologies will be available for building high-performance silicon.

We work with our customers interactively to assess those technologies and to explore what kind of systems can be built with them. This enables our customers to build new system concepts and to define the silicon components based on the most advanced technology with a full view of the system. Now, from there, we work cooperatively with our customers to do interactive system definition. We look at power consumption, performance, cost trade-offs so that the product and the component silicon is truly market-leading. Now, as you'll see through our sessions today, the key to building these high-performance AI cloud silicon products is the enabling technology, the key design IP. Now, often, the first thing we talk about is the SerDes. And that's for a good reason. The SerDes is the single most differentiated element in building these products.

This is one of 10 or 20 different core IPs that go into building these advanced chips, everything from high-performance die-to-die to digital cores for compute and networking, integrated silicon photonics, high-bandwidth memory, on-chip memory, software, and firmware. All of these elements come together to build the highest-performance product for the next generation. Marvell builds these technologies in-house so our customers see a one-stop shop for many of the technologies they need to build their products. Not only that, we work closely with the ecosystem. We collaborate with startups to incubate new technologies and bring them to market and then integrate them into our product platform. We also work with some of the largest IP providers in the industry to ensure that our customers have a very broad portfolio with a choice of all the elements they need to build their products.

Next, we'll talk about the silicon design itself. Now, here, you often hear about process technology: 5 nanometer, 3 nanometer, and so on. That's definitely critical. The key point to note is that there's a lot of preparation that goes into every one of those process technology transitions. For example, all of the IP I talked about on the previous slide has to be brought up and supported in that next process generation. In some cases, we build test silicon to characterize the performance of those IPs. Based on that data, we'll create models so that customers can use them to build their products. Sometimes, we even have to evolve the EDA workflow. We work with our partners to ensure the toolset used to build the chip is updated to reflect the latest technologies and the latest generation of the process node.

Not every services provider does this. Those that do, like Marvell, are years ahead of the curve in being ready for next-generation designs, working with customers, and moving very quickly. Next, when we talk about design execution, we focus on the logic design and the physical design, but we go beyond that. It is also about design for testability and design for manufacturing. We introduce new innovations for DFT and DFM, which translate into higher quality and better cost efficiency. We also innovate on power consumption. With every new process generation, there is an opportunity for innovations unique to that process to achieve the lowest power and the best efficiency. We also focus on yield enhancement.

When you're talking about very large products built for AI, all of those innovations that go into optimizing for yield translate into superior cost at scale. As you'll see today, packaging is one of the most critical innovations in building high-performance AI silicon. In fact, this is the key to scaling beyond Moore's Law. We have innovated on multiple dimensions to enable the latest generation of packaging technology for these AI products. We have a multi-die platform to enable the advanced integration of multiple silicon components. This is based on a Marvell-developed approach. We also work with TSMC CoWoS, so customers have options for how they build multi-die packages. For high-bandwidth memory, which means integrating a large number of DRAM die right into the package, Marvell has its own platform solution. We also support industry-standard platforms.

Here, also, we provide customers with flexibility depending on the approach they want to take. It is not just about packaging for high integration. It is about pulling system innovation into the package, allowing for better power consumption, lower cost, and better performance through system-level innovation right in the package. One really good example of this is co-packaged copper. In the past, high-speed signaling was done at the system level through the printed circuit board. We have been doing this for decades. More recently, we have actually pulled those high-speed signals up out of the package. We run thousands of wires directly out of the package across the system. This gives you longer reach at lower power consumption and can allow you to achieve higher performance levels in the context of an AI rack.

Going beyond copper, Marvell has announced its platform for co-packaged optics, bringing fiber optic signaling right into the package, which can give you much longer reach than copper at very high bandwidths and lower latency than you could achieve before. This is a key enabling technology for AI scale-up systems and those platforms Chris was talking about. All of these innovations work together to enable products with more functionality and performance. It gives you better generational performance than ever thought possible. Once the product is defined, designed, and implemented in the silicon and the packaging, we move to volume manufacturing. Here, too, our customers need to work with a top partner. Marvell has the scale to deliver. In fact, we're one of the largest semiconductor suppliers in the world.

We have decades of experience delivering large-scale semiconductors, logging billions of device hours in the field. We have developed a methodology that ensures we are ready to move to volume production on time and in line with customer schedule and volume expectations. This is based on a multi-phase operations implementation that runs in parallel with the design team, covering product engineering, test engineering, quality assurance, and reliability. This operations engineering team works in parallel with the chip design team to make sure that we are ready to go to high-volume manufacturing when the customer is. Now, you see, looking at the big picture, it is clear why working with a full-service custom partner offers major advantages.

It's about having access to the latest technology that is not even available on the open market, and then discussing a plan for those technologies with the customer years in advance so they can define their systems based on what's going to be available out in the future. It is building products with the newest packaging, the latest silicon technology, and using a very broad portfolio of that IP. We have built the capability for custom cloud silicon that is best in class. Over time, this will become even more critical as AI trends continue to accelerate. When you consider all of the different technology components that go into AI semiconductors, each one of those components is refreshing approximately once every two years. You are getting an improvement in speed on each one of these components.

When you look at all of those parts coming together in components and then systems, the net result is a 10-times increase in performance every two years. That's what we see as possible moving forward through the confluence of all these technologies.
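As a rough sketch of how per-component gains could compound to that 10x figure, where the component count and the uniform-gain assumption are purely illustrative:

```python
# How component-level refreshes compound into a ~10x system gain every
# two years: if N independent technologies each improve by a factor f
# per refresh, the stacked gain is f**N (idealized, ignoring bottlenecks).
n_components = 5  # e.g., process node, SerDes, HBM, packaging, fabric
per_component_gain = 10 ** (1 / n_components)
print(f"Required gain per component: {per_component_gain:.2f}x")   # ~1.58x
print(f"Stacked gain: {per_component_gain ** n_components:.1f}x")  # 10.0x
```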

As these AI applications diversify and customization becomes critical, it's really important to work with a full-service partner who can provide that kind of breadth of offering. The customer do-it-yourself model is possible, but it will be increasingly challenged. It poses risks to the customer, whereas working with a partner like us can really improve their odds. Now, as you watch the presentations today, keep in mind the three most critical points for our customers when choosing a custom partner. One is a partnership to select the right enabling technology years in advance of the product design. Second is access to an exclusive set of design IP that is not generally available in the open market. Third is the methodology and experience to enable total product quality manufactured at volume scale, plus the ability to do this across the full spectrum of AI cloud applications. This is the real advantage that our customers experience when they work with Marvell. Thanks very much for your time today.

Operator

Please welcome Chief Development Officer Sandeep Bharathi.

Sandeep Bharathi

Good morning, everyone. I'm very excited to share our journey of technology and engineering leadership. Before I get started, I wanted to share the journey of how I got to Marvell. In 2018, Matt was interviewing me, and he threw down the gauntlet. At the time, Marvell was in 28 and 16 nanometers. He asked me, how do we get to leadership?

The answer was to skip all those nodes and go straight to five. That was easier said than done. Today, I will share with you the journey of how we got here and how we got to industry leadership. You will hear from many of our senior technical leaders across multiple domains, each of whom has 25-plus years of innovation and a track record of outstanding execution. I represent thousands of engineers on very talented teams who have been delivering industry-leading products across custom cloud silicon. For the past five decades, compute capability has doubled at a very predictable pace of every two years. Then something remarkable happened in the last few years: the advent of deep learning. Large-scale AI needs shattered that two-year cadence. Now, compute capability doubles in less than a year.

It is going to less than six months. That is a remarkable acceleration in scale. Keeping up with it requires a new level of thinking in system-level architecture, packaging, device design, et cetera, because AI compute doubling in less than a year means we have to do things differently. What does that mean? The insatiable demand for scaling means we have to pack in an enormous number of transistors. What used to be tens of millions of transistors is now 100 billion transistors in a monolithic die. That is just not enough. In order to meet the demands of AI workloads, trillion-plus transistors are the order of the day. That can only be done through innovations in heterogeneous integration of dice, multi-die systems, and also vertically, with innovations in advanced packaging.
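In round numbers, the integration ladder described here (and extended to rack scale just below) implies roughly order-of-magnitude steps; the dies-per-package and packages-per-rack ratios are inferences from the quoted transistor counts, not stated figures:

```python
# The integration ladder described above, in round numbers.
monolithic = 100e9  # ~100 billion transistors in a single monolithic die
package = 1e12      # trillion-plus transistors via multi-die integration
rack = 100e12       # 100-trillion-plus at rack scale (the next step, below)

print(f"Dies per package:  ~{package / monolithic:.0f}")  # ~10 dies
print(f"Packages per rack: ~{rack / package:.0f}")        # ~100 XPU modules
```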

Even that is still not enough. You have to go from trillion-transistor systems to 100-trillion-plus for rack-scale integration. Now, let's ask ourselves, how do we do this at Marvell? This can only be done with a very proven technology stack. The technology stack comprises superior innovations in process technology, a very comprehensive IP portfolio, and advances in packaging for multi-die, trillion-plus-transistor integration. All these technologies in the stack would just not be enough without a production-ready integrated design flow. What does that really mean? It means taking all these components from architecture definition through logic design, physical design, tape-out, and all of manufacturing test and volume ramp through an integrated design flow that we can also scale with our EDA on the cloud. We have been pioneering that from end to end.

We are one of the few companies that can do an end-to-end flow. You heard about all the design wins; that is how we scale concurrent development of multiple chips. Now, let's break down each one of these layers. We have a multi-generational execution track record. Let's start with the process technology. We have more than 20 products in volume production in 5 nanometer and 3 nanometer. As Matt said, we are not resting there. We are advancing our innovations in 2 nanometer with many test chips that are working. Ken will walk you through that, as well as the move from nanometers to angstroms. How do we do that? How do we achieve this? It is analogous to launching a Saturn V rocket on a Moon mission.

They didn't just launch a Saturn V rocket on day one. They had years of development of electronics, launch systems, booster rockets, and multiple stages before they launched the rocket to the Moon. Advanced process technologies are just as complex. In order for a product to go from A0 to production, which means working on the very first try, process technology development has to start three to four years earlier, co-developing with the foundries and with our partners, and putting complex analog mixed-signal IP on test chips to validate it before we can put it on the product. That's a three- to four-year cycle. Marvell is a leader in advancing this, shifting left, such that when we take a product from A0 to production, our customers can ramp it in volume on the very first try to millions of units in very short order.

What is key to that is validated critical IP ahead of product development. We have all heard about CMOS and nanometers, but CMOS is not the only technology needed for heterogeneous integration. You will hear from Radha about silicon photonics. Silicon germanium and silicon photonics are equally important capabilities in addition to CMOS. Without all these technologies, the product just does not happen. We will talk about the anatomy of an XPU shortly. An important advantage Marvell has is being process agnostic, meaning having leadership in the development of all these process technologies, foundry-ready, so that we are future-proof. Now, let's talk about the IP layer, the IP portfolio. It means we need to have analog mixed-signal IP, which is very important for data movement, custom CPU cores, and complex digital IP.

We will dissect each of these in what you are seeing: the anatomy of a modern XPU. What does it contain? It contains one very critical IP called SerDes, the serializer/deserializer. It is the component that transfers data at gigabit rates across centimeters to meters. Not only that, you need die-to-die. Die-to-die refers to communication between two dies in a chiplet design. This is different from SerDes because it has to transfer data over very short segments at high data rate and low power. Next, you have custom high bandwidth memory. High bandwidth memory is a critical component for AI workloads. What Marvell has innovated is a base logic die at the bottom of the HBM stack. Mark will walk you through the advantages of that custom HBM stack.

You have custom SRAM, static random access memory. The reason SRAMs are very important is to pack as much data as possible into a very small area so that the rest of the die area can be devoted to compute. Having high-performing, low-power SRAMs is a defining competitive advantage. Last but not least is co-packaged optics, basically making sure that, all the way from electrons to photons, we can integrate all of this on the modern XPU. That is an equally important capability that Radha will go through.

If you take a look at how this is integrated in advanced packaging, it is multi-layer, whether we call it 2.5D, 3.5D, 4D, et cetera. There are innovations in materials, thermal and power management, as well as signal integrity and power integrity, to make sure the entire system works within the power envelope that you see in rack-scale AI systems. Now, Matt talked about XPU and XPU attach. These are two different varieties. What is important to see in an XPU attach? Certain IPs may not be necessary, for example, CPO. What it means is that for each of these, the power and performance-per-watt requirements are different, which means you need to optimize different SerDes or different die-to-die interfaces for each one to meet the needs of the workloads.

Customization to achieve the highest performance per watt is a Marvell specialty. You can see there is a lot of content in each of these designs. Now, SerDes is something that Ken will talk a lot about in the next session. Why is SerDes so difficult? Think of an analogy: if you want to transport millions of passengers, you can do it on a multi-lane highway with many cars going 60 miles an hour. If you want to move them faster, you put them on a high-speed bullet train traveling at 300 miles an hour, faster but still reliable. SerDes is exactly the same thing. It has to move multiple bits at gigabit transfer speeds over a longer distance across a cable with the lowest power and lowest latency.

You need to move data from one chip to another at a very fast pace. You cannot lose bits, meaning you have to have the lowest bit error rate. Marvell has multi-generation SerDes leadership at 56 gig, 112 gig, and 224 gig. You will see the demo of the 400 gig as well. This takes a very talented team and long experience of seeing silicon results to continue the innovations on all these figures of merit. Each of the technical leaders who are going to come up on stage will talk to you about SerDes and die-to-die, custom HBM and custom SRAM innovations, co-packaged optics and silicon photonics, as well as advanced packaging. Marvell is a technology powerhouse because we have focused investments, and the technical leadership is second to none.

This enables first-to-market advantage, which is critical for deploying AI silicon at scale. Now, I would like to welcome Dr. Ken Chang, who is the Senior VP of Analog Mixed Signal Design at Marvell. He has more than 25 years of experience building SerDes across a wide variety of companies. He's also an IEEE Fellow and leads a very capable team at Marvell. I would like to welcome Dr. Ken Chang.

Speaker 6

Thank you, Sandeep. As Sandeep said, I lead the analog and mixed signal team at Marvell. 1,000 people strong, 1,000. Amazing. Many of them have decades of experience in SerDes and analog mixed signal. At Marvell, in parallel to my organization, via the Inphi acquisition, we have an awesome optical DSP team led by my peer, Arash Fahod. 800 people strong. Why am I telling you this? Marvell not only has SerDes talent, we have talent at scale. You have probably heard this is incredibly important IP, and it needs to be developed early for this rapidly growing AI market. As Sandeep mentioned, I have 25 years of experience. My PhD was also on SerDes, at 2 gigabit. Let's see how far we have come. I'm going to show you Marvell's SerDes leadership in a different way, not just our own claims. Look at IEEE.

This chart shows the SerDes trend over the years and across technologies. The y-axis is the line rate. Each data point is a SerDes paper published at a top-tier IEEE circuits conference, ISSCC or VLSI, or the top optical conference, OFC. These are all industry papers. Why am I showing this? It shows the recent move to 2x line rate every two years. Why do I keep talking about everybody else? Where's Marvell? Marvell dominates. These are peer-reviewed papers; Marvell was not part of the review. They were reviewed by worldwide industry SerDes experts, including our competition. They recognize us. As my colleague Radha is going to show you, and Matt has already talked about, we are the industry's first to demonstrate 400 gig optical IO. I know some of you here worked on that. Line rate is not the only metric.

Sandeep mentioned power already: the lower, the better. At 200 gig long reach, we are able to achieve 4 picojoules per bit. What does 4 picojoules per bit mean? For 100 terabits between XPU and switch, that's 400 watts. Every picojoule we save equates to 100 watts that we can use for compute. Next is reach. We achieved 50 dB in our OFC demo. What does that mean? Two-meter cables between XPUs and switches. The longer the cable, the larger the scale-up racks Nick talked about. We can use that for compute again. Next is bit error rate. We achieve 10 to the minus 8 to 10 to the minus 9 in these long-reach SerDes. That clears the IEEE spec by four orders of magnitude. We also need to put a lot of SerDes on the XPU, so we design the SerDes with the lowest area in mind.
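An editor-added back-of-the-envelope check of the power figures just quoted (a minimal sketch; the function name and the 100 Tb/s aggregate are illustrative):

```python
def link_power_watts(energy_pj_per_bit: float, throughput_tbps: float) -> float:
    """Power = energy per bit x bit rate; 1 pJ/bit at 1 Tb/s dissipates 1 W."""
    return (energy_pj_per_bit * 1e-12) * (throughput_tbps * 1e12)

print(link_power_watts(4.0, 100))  # 400.0 W -> "4 picojoules per bit at 100 terabits is 400 watts"
print(link_power_watts(1.0, 100))  # 100.0 W -> "every picojoule we save equates to 100 watts"
```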

We have done that. Show and tell here. Most SerDes engineers like to see their own layout. This is the die. Why am I showing you this? In parallel, we have electrical SerDes and optical SerDes, all developed and published. Do not get me wrong, this is not academics. We build this for products. We build this for the XPU. Sandeep mentioned it takes years. We design test chips for the XPU. In 2022, the team developed 5-nanometer XSR and LR SerDes in 112 gig. That is in deployment today. Before I move on: we are going to continue this publication journey to let people know what we have done. The OFC demo, the 400 gig and the 200 gig LR, and the optical SerDes, we plan to publish next year. Sandeep mentioned the challenges on SerDes. I am going to go a little bit technical here.

At the transmitter output, we have an equalizer. We open the eye for PAM4, four-level pulse amplitude modulation. There are three eyes; the larger, the better. After 2 meters, 50 dB, if you compute it, there is less than 1% of the amplitude left. The eye is closed. We cannot detect the data. At the receiver, without going into detail, we have equalization techniques, analog and DSP, to open that back up into four distinct levels. That's what we do here. At 200 gig, the team delivered. Next is die-to-die. In contrast to a SerDes channel, which is 50 dB over 2 meters, die-to-die is within the package. The reach is in millimeters, 3 millimeters. What's the big deal here? What's the challenge?
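Before that challenge is answered below, a quick editor-added check of the "50 dB, less than 1% of amplitude" statement (insertion loss in dB is a power ratio, so amplitude scales as 10^(-dB/20)):

```python
def remaining_amplitude(insertion_loss_db: float) -> float:
    """Fraction of signal amplitude surviving a channel with the given insertion loss."""
    return 10 ** (-insertion_loss_db / 20)

print(f"{remaining_amplitude(50):.2%}")  # ~0.32%, indeed "less than 1% of the amplitude left"
```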

As my colleague Mark Kummerle, who's the next speaker, will tell you, we need to pass a massive amount of data between the XPU and the IO chiplets. More importantly, a lot more data between the XPU and custom HBM, the bottom die. There is a different metric here: beachfront bandwidth density, terabits per millimeter. Today, in the lab, the team achieves 10-plus terabits per millimeter. Not only that: at 0.3 picojoules per bit. Remember, SerDes is 4 picojoules per bit. If we used that here, this thing would be in flames. 0.3 picojoules at 100 terabits is 30 watts. Again, every sub-picojoule matters; I tell the team even 0.01 picojoules means a lot. We can use that for compute. For tomorrow, Mark told us, excuse me, Mark: hey, we need more than 30 terabits per millimeter.

The team is working on that, and some of you are sitting here. If you notice, the picojoules are even less, 0.2 picojoules per bit. It does require innovation. Since the team is working on that, I cannot say too much. Oh, by the way, we know how to do it. For the future, the die-to-die roadmap cadence is much shorter than SerDes. SerDes is 2x every two years; this "future" is actually the next few months. We need to deliver 50-plus terabits per millimeter. This is in collaboration with the distinguished packaging team here, and some of them are here. My colleague Mayank Mayuk is going to talk about the 3.5D package. And we can achieve even much less than 0.1 picojoules per bit. Now I have shown you die-to-die and SerDes.
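Combining the die-to-die numbers just quoted gives a derived figure of merit worth noting (an editor-added sketch: density in Tb/s per mm of die edge times energy per bit is directly watts per millimeter of beachfront; the generation labels are as stated above):

```python
# (Tb/s per mm) x (pJ/bit) = W/mm, since 1 Tb/s at 1 pJ/bit dissipates exactly 1 W.
generations = [
    ("today (in the lab)", 10, 0.3),  # 10+ Tb/s/mm at 0.3 pJ/bit
    ("tomorrow",           30, 0.2),  # 30+ Tb/s/mm at 0.2 pJ/bit
    ("next few months",    50, 0.1),  # 50+ Tb/s/mm at well under 0.1 pJ/bit (with 3.5D packaging)
]
for label, tbps_per_mm, pj_per_bit in generations:
    print(f"{label}: {tbps_per_mm * pj_per_bit:.1f} W per mm of beachfront IO")
```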

I'm going to hand it back to Sandeep to introduce the next speaker.

Speaker 2

Thank you, Ken. You can see from those insights how difficult SerDes is, why we are winning, and why we are industry leading. The next speaker I would like to welcome is Mark Kummerle, who is our VP of Technology of Cloud Custom Solutions and Architecture. Mark has an industry track record of 25-plus years across IBM, GlobalFoundries, and Avera. He is innovating at the cutting edge on custom external and internal memories. You will learn all about it from Mark. Mark, welcome on stage.

Speaker 8

My name is Mark. I'm excited to be here to speak with you today.

Speaker 3

As Sandeep mentioned, my name is Mark Kummerle. I lead the Custom Cloud Solutions Architecture team for Marvell. We work closely with our data center customers to develop cutting-edge solutions for next-generation systems. Today, I'll be speaking about three different technologies. We'll be talking about embedded SRAM, customized embedded SRAM. We'll talk about customized HBM. We'll talk about something we haven't spoken about before at Marvell publicly, package integrated voltage regulation. Starting with SRAM. Most people think SRAM is just something that you get from the foundry or from an IP provider. It's just something that you integrate on a chip. You set it and forget it. You don't have to worry about it. It just is what it is. You bring it in when you need storage. That's what you do.

I'm here today to tell you that that's really not true, or at least not the right way to approach it. If we look at this diagram, custom SRAM can take up 30% or 40% of a next-generation accelerator, CPU, or switch device. It's an incredibly big portion of the overall design. How can we not optimize that to provide better performance to our customers? It's crazy. I'm here to tell you that at Marvell, for the last 25 years, we have had a crack team innovating in custom SRAM design, led by Darren, who's in the room today somewhere. Darren, thank you. They've been working in every technology node, innovating in every technology node, to open up more performance and lower power for our customers. Let's see what they've done recently.

This morning, we announced our 2-nanometer custom SRAM that is optimized to deliver maximum bandwidth to the data center. We deliver an astounding 17 times the bandwidth per square millimeter of off-the-shelf SRAM solutions that we can get from the foundry or IP providers. It's an insane amount of bandwidth. Bandwidth is so essential for these applications to feed these hungry compute units. What's even more special about our custom SRAM is that we deliver this amazing amount of bandwidth to our applications at 66% lower standby power than other competing solutions. Incredible amounts of bandwidth at much lower power than anything else on the market. This enables our customers to use more in their systems, build bigger data centers that have more compute at the same amount of power. Phenomenal achievement. Moving on to another type of memory.

What our custom SRAM does for bandwidth, custom HBM enables for high-capacity memory delivered efficiently to the compute on the main die itself. At Marvell, we've been investing in and developing custom HBM to make these applications more efficient. I'll show you how we do that. If you think about a normal accelerator, GPU, or XPU device today, the HBM infrastructure, the HBM IO, takes up an incredible amount of the main die along with the normal IO. You can see in the example on the left that the compute is actually constrained by the amount of infrastructure IO required to transfer data to and from these HBMs. It takes up more and more of the die as HBM advances in technology.

With Marvell's custom HBM, you can see that we unlock 1.7 times more useful compute area on the main die by removing those huge IO areas, removing the HBM controllers, and using the die-to-die technology that Ken just spoke with you about. We create a huge, open, expansive area that our customers can fill with compute to make the most competitive devices possible. This area, and there is a reason why we colored it gold, is gold to our customers. It is a huge impact for them to be able to scale up the compute with 1.7 times the area they had before.

In addition to this incredible expanse of new area that we deliver to the customers, because we develop our custom HBM base die on advanced technology nodes, we open up additional area inside the HBM base die itself that our customers can use to integrate those high-bandwidth dense SRAM devices. They can integrate IO, additional die-to-die, unlocking huge opportunities on the accelerator. Even more amazing than all this extra space is what it does for the power consumption of the XPU device itself. By removing power-inefficient HBM interfaces and replacing them with the incredibly efficient die-to-die interface developed here at Marvell, we're actually enabling 75% lower memory IO power when we adapt these accelerators to custom HBM. It's quite an achievement. It lets our customers really scale up the amount of compute in the data center. Now, onto a personal note.

Anybody who knows me, and there are probably many of you in the room who do, knows that I often walk around with a pocket full of XPU devices, which I have today. I am actually not going to talk about XPU devices, because we're talking about memory. I want to share something that's actually very exciting to me. We talked about custom SRAM, where we have gigabits of capacity. We talked about custom HBM, where we have gigabytes of capacity. There is some really cool technology at Marvell that's unlocking terabytes of capacity. This little device here that we're super proud of is a Structera A module, a memory pooling device that adds compute alongside access to terabytes of memory for our customers. It's a huge, huge innovation for the data center.

I'm proud that our team, many of you who are in the room, are a part of it. Moving on from memory to another topic: we actually just put out a press release about this today. Marvell has created a platform with industry partners to enable package-integrated voltage regulation. What does that look like? On this slide, we can see the typical accelerator picture that we grabbed from the last slide. On the underside of that accelerator, we'll actually be integrating package-integrated voltage regulation. Why would anybody do this? The thing is, these big accelerator devices, and even the compute and switch devices that we customize, have incredibly complicated boards, incredibly complex. Just delivering the power to the accelerator through the board incurs a fair amount of power loss and inefficiency.

Moving the current through these incredibly big, thick, complicated boards actually makes the system far less efficient and consumes power itself. With package-integrated voltage regulation, we can cut 85% of that IR power loss through the board because the regulation is directly on the module. This can result in up to a 15% benefit in total product power, which is quite incredible. Customers do not need to buy so many power plants to feed their data centers using this technology. Maybe even more important, and probably a near and dear subject to many people in this room, is power supply noise. Some of us have spent many hours trying to mitigate power supply noise because, quite frankly, as an accelerator slams from idle to fully active, there is an incredible draw on the power supply.

You can actually watch the voltage dip as these devices go into operation. What we typically have to do to account for power supply noise is add margin or sacrifice design density, and adding that margin can add additional power consumption to the device. The amazing thing about this voltage regulation technology, and this platform that we're developing with other companies in the industry, is that it can react much faster to power supply noise, lowering the overall power noise in the system by up to 60%. It's quite an achievement. It means our customers can do more with that extra power. They can use more of that compute more rapidly to deploy on their workloads. We're learning more and more about what this technology can do for us.
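An editor-added consistency check on the IR-loss and total-power figures quoted a moment ago (purely illustrative; it assumes the 15% benefit comes entirely from the IR-loss reduction, whereas some of it may come from reduced noise margin):

```python
# If board-level IR drop dissipates a fraction f of total product power,
# removing 85% of it saves 0.85 * f of total power.
total_benefit = 0.15     # "up to a 15% benefit in total product power" (quoted)
ir_loss_removed = 0.85   # "cut 85% of that IR power loss" (quoted)
implied_board_loss = total_benefit / ir_loss_removed
print(f"implied board delivery loss: {implied_board_loss:.1%} of product power")  # ~17.6%
```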

It even opens up, we think, new opportunities in other advanced integration like co-packaged optics, which you're going to hear about soon from Radha. With that, I want to thank you and hand it back to Sandeep.

Speaker 2

Thank you, Mark, for really allowing us to understand how we can unlock more compute with innovations in integrated voltage regulators and innovations in memory. Our next speaker is Dr. Radha Nagarajan, who has 30-plus years of experience leading focused innovations in silicon photonics and optics. He is an IEEE Fellow and also an Optical Engineering Society Fellow, and he has been inducted into the National Academy of Engineering. You will hear all about the exciting world of silicon photonics from Radha. Welcome, Radha.

Speaker 7

Thank you.

Speaker 2

Thank you, Sandeep. It's great to be here in the presence of all this technical talent in the room. You've heard a lot about co-packaged optics from several speakers at this event. As Sandeep said, I've been doing this for 30 years. Like Ken, I started designing optical interconnects at gigabit speed 30 years ago. Today, we are at terabit speeds. Over this time, optical interconnects have been done pretty much the same way. In the last couple of years, co-packaged optics has been a sea change in optical interconnects, where you bring the interconnect to the custom silicon. Critical for co-packaged optics is silicon photonics. You've heard a lot about silicon photonics. What silicon photonics enables you to do is design the entire optical system on a chip. Traditional optical component design is one at a time. You design a component.

Then you put it together in a package. What silicon photonics enables is doing all of that on a single platform. Silicon also supports very high speeds, as we will see, and long reach. As the name implies, you can use existing CMOS fabs, silicon foundries, to build optical components. That is the other sea change: you do not have to build your own custom foundries or fabs. What is most important in this process is that it allows complex electronics and photonics integration. By complex, I mean you can integrate silicon germanium, CMOS, and photonics onto a single common substrate. We will talk about it in a moment. Marvell has been working on silicon photonics for over 10 years. This is not commonly known. Silicon photonics for the data center has two large application spaces.

One is between data centers, as you saw in one of the slides, between campuses. The other is inside data centers. Marvell chose to attack the harder problem first by introducing products for between data centers, where the reach is several hundred kilometers as opposed to several hundred meters, and the data rates are generationally higher. We started shipping the 100 gig product in 2017. The 400 gig product is shipping in volume. 800 gig is sampling. 1.6 terabit per second, and all of these are per lambda, is in design. The brain trust for this silicon photonics progress is in this room. That is why it is so great to be addressing this group of people. We have multiple generations of field-deployed silicon photonics. That is very important. That is Marvell's pedigree. Number two, high-speed electronics.

As multiple speakers have pointed out, things seem to be happening every two years. Who would have thought? I mean, 112 was the darling of the industry not too long ago. We're deploying 224 in volume. Today, inside Marvell, we have 480 gigabits per second, single-lane electrical. We used the same set of electronics, which had a lot of extra margin, as you can well tell, to do a 450 gigabits per second demo at OFC three months ago. This progress to 480 didn't happen in just the last three months. This 400 gig enables two classes of applications. One is the 1.6T ZR between-data-center application, single lane, in a coherent format. Coherent format, the way it's used here, has 4x the data density compared to inside the data center. And that's the difference.

At 32 lanes of 400 gig, we are well on our way to designing the next generation of co-packaged optics, 12.8 terabits per second. How do you put all of this together? Another common theme at this event: advanced packaging. This is where it all comes together. We'll discuss two levels of advanced packaging. Let's look at it at the die level. This is a cross-section of a light engine. The way to read it is that you start at the electronics layer on the top. Then there's a thin sliver of silicon, you may or may not be able to see it, which does the bulk of the work: it processes the optical signals and allows for the integration of the electronics above it. The electronics could be CMOS, SiGe, and in some cases other material systems as well. There are through-silicon vias.

You go to the substrate. As Mark showed, why stop there? Integrate things on the back of the substrate: decoupling capacitors, electronics. This is the basis for the 6.4T optical engine in the bottom left, 32 lanes at 224. This is where the beauty of advanced packaging comes in. You take that, which is already a 3D silicon engine, and you integrate it onto a substrate to obtain an XPU with four of these light engines integrated together, for 25.6T of optical interconnect to an XPU complex. My colleague Mayank will tell you all the details about the next level of integration. I'll hand it over to Sandeep. Thank you.
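The quoted capacities follow from lane count times per-lane rate; an editor-added sketch (the 200 Gb/s payload per 224 Gb/s lane is an assumption made to reconcile the 6.4T figure, since coding and FEC overhead sit on top of the payload rate):

```python
def aggregate_tbps(lanes: int, payload_gbps_per_lane: float) -> float:
    """Aggregate payload bandwidth in Tb/s for a multi-lane optical engine."""
    return lanes * payload_gbps_per_lane / 1000

print(aggregate_tbps(32, 200))      # 6.4 Tb/s  -> one light engine (224G lanes, ~200G payload assumed)
print(4 * aggregate_tbps(32, 200))  # 25.6 Tb/s -> four light engines on one XPU complex
print(aggregate_tbps(32, 400))      # 12.8 Tb/s -> next-generation CPO at 32 lanes of 400 gig
```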

Speaker 4

Thank you, Radha. Radha and team make even the most complex technology simple. It is not that simple. We will now transition to our next speaker, Mayank Mayuk, who is our Senior Distinguished Engineer of Advanced Packaging. He will talk about all the innovations and direction of advanced packaging. The innovations will be very entertaining.

Speaker 5

Thanks, Sandeep. All right, like Sandeep said, I'm going to be talking about advanced packaging. Just a few years ago, advanced packaging used to be an afterthought. That was something you did around tape out, something taken for granted. That is not going to work anymore. It has emerged as a key differentiator. It is as important as silicon, if not more. All I'm telling you is, for those we hired recently in packaging, this is a good time to be a packaging engineer. Because of all the drama going on in advanced packaging, we are making investments to expand our roadmap for current as well as future generations of products for switch as well as for XPUs. How do we do this? We are doing this by partnering with the right partners as well as with the right strategy.

We are starting by creating some foundational IP in design, materials, and process. We forge partnerships with OSATs, foundries, and so on, and with their help, we co-create these building blocks. These technology building blocks are fungible across generations, which means that we can create a very long-range roadmap in a very short time. These building blocks can also be mixed and matched, which means that we can create truly custom solutions in advanced packaging through different permutations and combinations of the building blocks. Also, since the building blocks are fungible and generational, we are able to pull this off in a much shorter time. The cadence between generations is shorter than ever before.

Next, I'm going to show you the evolution of advanced packaging and how Marvell is leading the way. We started doing 2D packages a while ago, 20 to 25 years ago. A 2D package has silicon sitting directly on the substrate, with a lid on top. Next, we transitioned to the 2.5D package. In a 2.5D package, instead of one monolithic piece of silicon, you have silicon chiplets. These chiplets are integrated onto an interposer, and the interposer sits on the substrate. That is how we create the 2.5D package. By doing that, we are able to scale the package from 1x to 4x, four times. The next one is the 3.5D package, which has everything the 2.5D package has to offer, and in addition, a die stack, 3D-stacked silicon chiplets.

With that, we are able to double the capacity, or the scaling, of the package. It is 8x compared to the 2D package, which feels like just yesterday. The next two packaging technologies, 4D and 4.5D, take advantage of recent innovations we have made in substrates. For example, we are using engineered substrate materials that are helping us scale from tens of millimeters to tens of inches. We are also integrating optics and copper right into the package. With this whole ensemble, we are able to scale the package to 16x compared to the 2D package. Let us take an example and build a package from the ground up. At the very bottom layer, we have substrates. In this particular case, we have advanced substrates, which, as I told you, have a much larger scale.

Also, we are embedding active, passive, and optical components right into the substrate. Next is the interposer. It could be one piece or multiple pieces; we have both bridge-based and RDL-based interposers. Beyond that, we have the 3D IC. In a 3D IC, we stack top and bottom dies using hybrid copper bonding. With that, we are able to create high-bandwidth interconnects, which then help tie together multiple pieces of silicon. You can pack in about 2x the amount of compute in the same XY footprint. Finally, we have integration of optics, which gives you high-bandwidth, low-latency connectivity, as well as copper integration right into the package, which gives you improved signal integrity at much lower power and much lower cost.
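To keep the scaling ladder straight, here is an editor-added recap of the generations described above (scale factors as quoted, relative to a 2D package):

```python
# Relative silicon-complex scaling per packaging generation, as quoted above.
PACKAGE_SCALING = {
    "2D":      1,   # monolithic die directly on the substrate, lid on top
    "2.5D":    4,   # chiplets on an interposer on the substrate
    "3.5D":    8,   # 2.5D plus 3D-stacked chiplets
    "4D/4.5D": 16,  # engineered substrates, embedded optics and copper
}
```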

What I have shown you today is that we can have a combination of different building blocks, with this one as an example. You can combine these building blocks in different ways to create unique and custom solutions that are optimized for a particular workload. Now, back to Sandeep. Thank you.

Speaker 4

Thank you, Mayank. I think Marvell, from all the technologies you have seen today, is an industry powerhouse. We have been able to do this with sustained innovation and a track record of extraordinary execution, not only for the current generation, but we are on track to do it for the next generation and the generation after. As a result, I hope you're convinced that we are one of a handful of fabless companies, fewer than four or five, that can make this happen. With that, I want to yield to the next speaker.

Speaker 1

Welcome to Vice President and General Manager, Custom Cloud Solutions, Will Chu.

Speaker 4

All right, good morning, everyone. I'm super excited to be here to discuss the fantastic progress of my business and the incredible custom opportunity for Marvell. Based on our nonstop pace of activity, it's clear that we continue to see strong customer momentum. Once again, my name is Will Chu. I started my career at Texas Instruments as a design engineer. I then earned my MBA at MIT. Eventually, I joined Maxim. I've been at Marvell for the past eight years. My presentation will focus on the unique value Marvell brings to our customers and why we are extremely well-positioned to continue growing rapidly. The Custom Cloud Solutions BU was formed through the integration of Cavium, Avera, and internal Marvell teams. Cavium brought decades of experience in custom compute, networking, and security. Avera brought more than 25 years of experience in custom silicon, having delivered more than 2,000 ASICs.

I've been involved in the custom business from the very beginning. I personally drove the integration of these teams to form a custom silicon powerhouse focused on winning in cloud. Now, let's discuss the strong design win momentum we have achieved. Matt had discussed the enormous traction we've had in the market. We have design wins with all of the top four hyperscalers and emerging hyperscalers. We have custom XPU and custom XPU attach design wins. Many of these are multi-generational. I'm super pleased with this traction and the increase in the number of customers and sockets. I wanted to thank the entire Marvell team for making it happen. As you've seen in the previous sessions, we have a world-class engineering organization at Marvell. Let's dive into how we won so many sockets.

Matt discussed the typical system architecture, where there are custom XPU and custom XPU attach opportunities. For the custom XPUs, our customers have specific workloads that they want to optimize with customized silicon. That tight integration between their specific workloads and custom silicon can drive enormous increases in the profitability of their cloud infrastructure. For custom XPU attach opportunities, there are three main types: networking, memory, and co-processors. For memory, cloud customers today buy billions of dollars worth of memory. They are looking for specific custom solutions that improve memory capacity, reuse, and utilization, which can drive enormous TCO benefits. For networking, there is an enormous amount of data movement throughout their systems. The customers are looking for custom solutions that optimize that data movement to deliver increased performance and efficiency. Finally, for co-processors, the customers are looking for ways to secure and control their infrastructure.

Custom solutions enable them to do that security and management at scale. OK, with this as background, I'm going to dive into some examples, starting with custom XPUs. You see here an XPU. Normally, the engagement starts with an architect like Mark engaging with the customer. There's a deep, intense collaboration to figure out what to build and to identify how to use Marvell's unique value-add to make their custom solution a winner. We also showcase all the unique technologies you've just seen presented in Nick's and Sandeep's presentations. In that process, we also tend to learn about the customer's next-generation opportunities. OK, let's dive in a little bit. We go through the process that Nick described. Let's talk about system architecture. I listed here rack-scale enablement and optimization.

This is all outside the XPU, before we even get into the silicon. The customers want to scale the XPU, as you saw, from tens to thousands, maybe up to a million XPUs in a single solution. We support that system architecture engagement with the customer. That is called rack-scale enablement. A perfect example is what Radha described with CPO, or Nick described with CPC. Today, we are working with many customers on those technologies to enable them to scale their solutions to thousands or up to a million XPUs. Rack-scale optimization is what Mark discussed with our integrated power solutions. He talked about reducing power 15% or more inside the XPU. If you multiply that by thousands of XPUs, this is the rack-scale-level optimization that customers are looking for. Marvell does that uniquely. Next, design IPs. These are critical IPs.

We talked about those. In a typical XPU, you have SerDes, die-to-die, and dense SRAM; we had the team go through all of that. These are table-stakes IPs. Our customers want and need world-class IP, and we have that at Marvell, uniquely. If we move to CPUs, of course, we have decades of experience doing Arm products. We can add that to the critical IP list to develop leading CPUs. Let's go to our silicon services. What's unique is what I list here: design to spec and co-development. In many XPUs, as you saw, there are many different chiplets in the architecture. On the design-to-spec side, in many instances, we at Marvell will own one of the chiplets in the design. We do most of the design work in conjunction with our customer.

On the co-development side, typically on the main ASIC, the customer will own that, and we will support them. It goes beyond just a traditional physical design relationship. We are typically supporting the customer on the front end, doing things like emulation and FPGA work to make sure that their design works really well. Again, this is unique value that Marvell brings to the table. Packaging: Mayank talked about this. Marvell offers both custom packaging and traditional TSMC CoWoS-based packaging. We deliver custom packaging to our customers because, one, it gives them more capacity in their solutions, and two, it enables a lot more design flexibility as they design their package. Last, manufacturing and logistics, where I list faster time to market. All of these things that we are doing already shorten the time for the customer to take their product to high-volume manufacturing.

Beyond that, because of Marvell's scale and our expertise, we're able to do many things on the manufacturing side, from tape-out to general availability, in parallel, which helps pull in the time it takes for the customer to go to production and dramatically improves their TCO. This is a really good example of all the things that Marvell brings to bear for our customers. These are all unique. These deep engagements enable us to drive multi-generational opportunities. OK, now let's look at an XPU attach example. In the XPU attach example, it's a slightly different model. As we mentioned, there are memory, networking, and co-processor type opportunities. We have experts in all three fields with decades of experience in developing these kinds of products. They're working with our customers to showcase our technology and make their silicon dreams come true.

What you may not know is that many times we are writing the specification for those customers, because we have the expertise and we have the technology. We are working side by side, very closely with the customer, to define what the product really is. That is unique to Marvell. As we jump into the architecture, I have here production firmware, software, boards, and ecosystem. Again, outside the silicon, we are developing the production firmware and software for our customers today in all three of these areas: memory, networking, and on the security side. Boards: in the networking space, we are delivering full production boards for our customers today. In the memory space, we are optimizing the board with the various DRAMs that go on it so that the customers have an optimal solution. On the system architecture side, ecosystem. What does this mean?

We're working with ecosystem partners to make our solution fantastic for our customers. In the memory space, we work with the ecosystem of DRAM vendors so that when the board comes up, it works really well. In the networking space, in the NIC card example, we work with the CPU or XPU partner on the board to make sure that the bring-up between the custom NIC and the CPU works really, really well. In security, we work to deliver compliance certifications like FIPS for our customers. Again, these are big, unique value-adds for our customers. Next, on the design IP side, critical IPs. I list a bunch here, for example, the first couple: compression, decompression, and our compute fabric.

In the memory space, with unique IP that we have developed over decades, we are able to pack more bits into the same amount of memory. Obviously, this is a big benefit for our customers. We have a compute fabric that can move data efficiently across the custom memory accelerator. Security: we have leading cryptography IP at Marvell, and with it we are able to deliver leading solutions for our customers to develop their security products. Finally, on the networking side, SerDes, which Ken talked about. We have leading SerDes, which enables world-class custom networking solutions for our customers. OK, I will move on to silicon services: design to spec, integrating customer IP. As I mentioned, in many instances, we are helping the customer write the specification for the products.

In this case, design to spec means we're designing the entire chip. We also integrate our customer's IP. It sounds simple, at least on the slide, but it's not that simple. We're taking their IP, instantiating it in the chip, and making sure it works seamlessly with everything else that we're designing. We are doing things like emulation and FPGA work, for example. We are also attaching it at the system level. Their custom IP needs to work with all the IPs that we have, connect to the production firmware and software, work well on the board, and fit with all the rest of the ecosystem around it. These are all unique value-adds from Marvell. These deep engagements are driving our multi-generational engagements.

Now, let's see how this unique value supports customers across a single complete program. Here we have a number of different phases. In the first one, IP development, which has been discussed, Marvell invests years ahead of time, before we get a design award. We design the IP, and we demonstrate it to our customers so they have confidence that it is what they need. In the next phase, system architecture, we have an intense collaboration where we decide what to build and how to engage. Finally, we have full ownership of the end-to-end solution after the design award has been given to us. We take the chip all the way to production. This entire flow is an unmatched combination of technology, expertise, and scale.

This is not something that a physical design services firm or a manufacturing-only service can do. This is unique to Marvell. Now, by investing upfront in the IP and working so closely in co-development with our customers, we also get engaged in the customer's next generation. Let's see how this unique value-added model supports our multi-generational partnerships, especially in light of the rapid pace of innovation in AI. As I discussed, in a single program, we invest well ahead of the curve. Due to the rapid pace of AI innovation, customers engage in their projects on a much more compressed timeline. The customers are engaging in multiple generations at the same time. I'll give you an example here. When we have a three-nanometer design win, as I mentioned, we had already invested years earlier in that IP.

We're also investing, at the same time, before we even get the three-nanometer award, in our two-nanometer IP. We're also then investing in our 16-angstrom A16 IP as well. The customers are depending on us, as Sandeep described, to do all this innovation. This is what enables concurrent engagement and enables a chip every single year. This is just an example on process node. Let's see examples of this in the IPs that we have announced in just the last six months. All right, here's a list of Marvell's press releases from the last six months, press releases of all our breakthrough IPs. We have our integrated power solution, which Mark talked about, and our two-nanometer custom SRAM, which Mark also talked about. We announced both of these today.

We have our advanced packaging platform, which Mayank talked about. We have our industry-leading two-nanometer platform, which Ken described, our breakthrough co-packaged optics, which Radha discussed, and our breakthrough custom HBM, which Mark also discussed. Every single one of these IPs is early and ultra-unique, and each one is driving specific customer engagements for next-generation opportunities. That leads us to the opportunity pipeline. As Matt discussed, we see over 50 opportunities in front of us. About a third are custom XPU, and two-thirds are custom XPU attach. We see this many opportunities because traditional and emerging hyperscalers see the unique value that Marvell brings to their programs for this generation and future generations. Said differently, we have a seat at every table for these opportunities. I am personally involved in driving every single one of these opportunities.

You have seen the outstanding engineering leadership at Marvell driving our innovation and execution. To recap, Marvell brings incredibly unique value to our customers. We are engaged in every opportunity and extremely well-positioned to win. Thank you. Thanks, Will. We will now start our Q&A session. Let's give the event team a few minutes to set up the stage. In the meantime, investors and analysts can continue submitting questions through the live video stream window. I will relay those questions to the team. Looks like we are almost ready. I would like to invite the Marvell team back on the stage. All right, great. All right, let's start with the first question. This one has come in from a few different investors. Can you clarify the difference between XPU and XPU attach, with some examples of each? Sure.

Before I start, first of all, just again, to everybody on the line, thank you so much for joining today. Thanks, everybody in the audience. I just want to take a moment to thank this outstanding team here. You guys did an excellent job today. Thank you so much. Very good. Perfect. All right, I am going to direct traffic here on the questions a little bit. I am so happy, because normally I am on an earnings call, and it is me and Willem and then the world. At least I have some teammates up here. Chris, why don't you take the first question, clarifying for the audience the difference between XPU and XPU attach? We talked about it a lot, but maybe just to clarify for people. Sure. Great.

Yeah, so basically the XPU attach is the portion of the compute TAM that's actually addressed by companion chips. Those companion chips are different for every architecture, but they include things like NICs, scale-up fabrics, co-processors, and memory interfaces. As these architectures continue to customize, we're seeing more and more sockets. Previously, this XPU attach was included in the compute TAM that we talked about last year, for example. This year, as it's grown, we've sized it and broken it out separately just to make that portion clear. The XPU, of course, is still a very important part of the market for us. That continues to grow very rapidly as well. Yeah, great answer. All right, Ashish. The next question is from Ross from Deutsche Bank.

Does Marvell expect the number of XPUs per hyperscaler to continue to expand to address new and different workloads? Or is the expense too great? Sure. Why don't you take that one also, and I can add to it? Sure. Thanks, Ross. I think in general, yes, that is what we see. We see, as I mentioned in my talk, that the workloads are diversifying and that ultimately having specialized silicon for the different workloads is a benefit as this entire thing scales and the total CapEx dollars go up. Over time, we do expect that to happen. Yeah, and I would just add, if you compound all the different technologies we talked about today, in terms of the total benefit to our customers in cost and performance, I think it is very compelling versus what is out there.

It actually enables them to create more SKUs and more solutions to optimize their workloads and their technologies. I think the way we're going about this is actually creating more opportunities. Right. The next question is from Atif from Citi. Are the profitability levels, both gross margin and operating margin, of XPU attach similar to what you see on the XPU side? Yeah, I'll give Willem a shot here for a financial question. Yeah, I think the way you should look at it, and Will did a good job explaining all the different IP: clearly, the more IP that's from Marvell, the more that margin profile is on the higher end of the custom scale. We are very excited about those opportunities because they're very sticky. There's a lot of Marvell IP involved.

From a scale standpoint, I would say it's on the higher end of our custom model. Yeah, and I'd also add that what contributes to that is a huge difference in magnitude as well. The XPUs, by nature, are just much higher volume. The XPU attach parts are still very large volume if you compare them to any sockets we used to go after at Marvell over the last six or seven years; those XPU attach opportunities alone are gigantic. Because they're not as high volume, you tend to get a little bit better on the margin side there. Right. We have a question from two different investors, the same question. Are the XPU attach wins tied to winning the XPU compute socket at the same customer as well? Or can they be different? Where's Will?

Will, do you want to take that one? Yeah, they're not strictly tied. Of course, we're trying to win every socket at every hyperscaler, but there is generally no linkage between them. We go after each socket independently. Of course, we're working hard to win them all. I would just add, our customers expect us to participate and be active across everything that they're putting in front of us. We don't cherry-pick. We go all in. We look at where we can participate, where we can bring the value. Ultimately, that level of engagement and trust brings a variety of opportunities. Usually, that's been the path to winning some of the larger ones: actually starting small. In some cases, we started very small four or five years ago with some of these hyperscalers.

We built the trust, showed the execution, and that leads to bigger and bigger opportunities over time. That is the model with which we engage. We are just all in on this market in terms of how we engage with our customers on a range of opportunities from big to more modest. Right. The next question is from Tore Svanberg from Stifel. Reports are that custom ASICs may perform below merchant GPUs. How does this factor into Marvell's view of the market opportunity? And what is needed to close the gap, if it does exist? Yeah, maybe I will have Sandeep talk about that one. Yeah, so custom ASICs are very purpose-built for the workloads that each of the customers has.

When you customize for, let's say, different kinds of floating-point or fixed-point arithmetic, whereas GPUs are general purpose, the framework is going to be different. You can actually fine-tune, just as you saw in the technology presentations, and equally capitalize on the advancements that we bring for the TCO, which is very purpose-built. You heard Rani also talk about it in the partnership. That is what enables the important innovation of really fine-tuning the compute performance, the SerDes performance, and the memory performance to the actual workloads, where GPUs may not excel from a performance-per-watt perspective. Go ahead. Yeah, we're not talking about performance against benchmarks here. As Rani talked about, it's software-hardware co-design. The performance on the workload is what matters. Clearly, these deliver superior performance on the right workloads.

Yeah, and then the final context I would add is, in light of all this, we're still sizing custom at about 25% of the total accelerator opportunity. So we're not even saying it's addressing the whole market. Now, the more competitive and compelling we can make these, the higher that percentage contribution could be. But we're at that 25% number right now, which still spits out an enormous TAM for us. Right. The next question is from Ben Reitzes from Melius. When do you expect the two new XPU and four new XPU attach wins with the emerging hyperscalers to hit your P&L? Should we think of this as incremental to what you had discussed last year? Yeah, hey, thanks, Ben. Chris, do you want to take this one? Sure.

Yeah, so the way to think about it is that, of the 18 sockets we outlined altogether, some we had won last year. Some of those are in production now, that first wave, as Matt said. Many of them are in design execution. We would expect them to really start to turn into revenue in 2026 and 2027 going forward. Some of them, as I said, are already there. Yeah. Right. The next question is from Harlan Sur from JPMorgan. In the concurrent engagement model you articulated, we think you're already well into three-nanometer designs with your lead customers. Is it fair to assume you're already well into the design on next-generation programs? I'll have Will take that one, since we talked about concurrent engagement. You're in the middle of all these. Absolutely.

The customers, as I said, are looking at multiple generations ahead. As Sandeep mentioned in his presentation, it takes maybe three to four years to do a full technology cycle. If you're refreshing the platform every two years or less, then by definition, you have to do things concurrently. This is what almost every customer is doing, because they have to. Right. One more clarification question. Of the five XPUs you listed, how many are CPUs versus accelerators? Yeah. Chris, why don't you do that one? Sure. Yeah, basically four are accelerators and one is a CPU. If you look out at that pipeline of opportunities that we talked about, the 50 in the pipeline, it's actually a mix of CPUs and XPUs and then, of course, the XPU attach. Right. The next question is from Gary Mobley from Loop Capital.

For the IVR and custom SRAM solutions you outlined, are these specifically for a custom AI XPU or XPU attach Marvell is currently working on? Or are they available more broadly beyond the custom platform? Yeah, do you want to do a little bit? And then Mark, you add to that as well, since you see the whole opportunity. Yeah, so we have customers that we're working with closely to design in our two-nanometer custom SRAM technology, for sure. And there was the IVR part of the question. On the IVR side, we have multiple engagements with many customers trying to take that solution to market. It's very rigorous, let's say, or intense, because, as you can see, the power benefit you can derive from something like what we're developing on the IVR side is quite compelling. Right. And I'll just add to that.

If you look at the custom SRAM, for example, the benefit isn't really limited just to XPU devices. It's something that could be highly beneficial not only for AI accelerators, but also for processors, for switching applications, and for relatively simple networking applications like a NIC. The benefit really isn't limited to just the very high-end AI accelerator devices. Similarly with IVR: a lot of products have challenges with power supply noise and power efficiency, and that technology can be brought to bear to help many different applications. Yeah, I mean, as you guys said, this is really a pan-Marvell benefit we get from this deep engagement in the AI market. We can actually leverage that technology across all of our products. Absolutely. Yeah. The next question is from Joe Moore from Morgan Stanley. Can you talk about the NRE relationships here?

How much of the R&D for these opportunities is funded by your customers? Is there a difference in NRE between compute and compute attach? Sure. Willem, you want to take the first part? Will, you can chime in on the second part. Yeah, just as a reminder, NRE is non-recurring engineering. When you look at our custom engagement model, the way we operate is that our customers essentially co-invest. We recognize that as a reduction in our operating expense. If you look at the operating margin that these programs drive, that co-investment is an improvement in the operating margin.

When we look across all these programs, you should expect it to be very consistent: when we look at the compute attach, or the XPU attach, there is a significant NRE component, very similar to the XPU programs that we have today. Yeah, that is a great answer. Anything to add, or? Great. The next question is from William Kerwin from Morningstar. What are the situations where a customer may choose a less-than-full-service vendor? And how large do you see that as a piece of the total TAM, especially as you look forward? Yeah. You want to lead that one? Sure. Yeah, so ultimately, what we see going forward, I mean, clearly, those are situations where they might be able to source the IP on the open market. Those tend to be IPs that are generally available.

Usually, we see that for slower-moving parts of the market that are not moving at the rates that Sandeep talked about and that Ken and the team talked about, where the IPs are moving so fast that they are not something you can generally source on the open market. In the slower-moving parts, you are able to put together different services from different companies and produce a product. That does happen; there are plenty of examples of it. It just tends not to be in this sort of accelerated computing going forward. Yeah. Nick, anything to add to that? You had some slides around this topic.

Yeah, I think the key point is that as we see continued acceleration in the applications and customization for the workload, the value of having differentiated technology and doing very, very high levels of integration just keeps getting higher. Five years ago, you could build a product that was simple by comparison and be competitive. Now that's just not an option. That is where the full service becomes a lot more critical. Yeah, we see opportunities up and down the model when we engage. Typically, when it gets real and we actually have to go deliver, and there's a tight time frame, and you want to underwrite your execution, that's where our success has been.

It really floats to the top of that model in terms of needing the full benefit of what Marvell can bring to the table. Question from Aaron Rakers at Wells Fargo. Can you talk a little bit more about how we should think about the custom HBM4 logic die timing and how this could be leveraged as a differentiator in your custom XPU or attach design wins? Gotcha. Will, do you want to talk a little bit about the rough time frame on some of these things? Sandeep, you can add if you've got a perspective. Yeah, on HBM4, I mean, this is public information: the samples are coming out to market later this year. We are engaged with customers to figure out how to incorporate our custom HBM technology to support that.

I think most customers are targeting HBM4E, as they would call it, which is the higher-performance solution that comes after HBM4. I think there is bigger interest there just because of the performance that is needed and the benefit you can derive from that higher-performance HBM coupled with custom HBM technology. Right. The next question is from Craig Ellis at B. Riley. Given the large number of wins, the 18 you outlined, and the pipeline of 50-plus, can you please discuss capacity planning across ecosystems to ensure sufficient supply to meet customer demand? Are there any upside limits or concerns? If so, where? Sure. I will have Chris take this one. He leads our operations as well.

We have been deep in that planning for years, actually, you and I, getting ready for the ramp we have now and for the future. Yeah, that is really the answer. Starting back in 2020, when the original supply crunch hit, we began long-range planning for our entire supply chain, including packaging, substrates, foundry, et cetera. In fact, we build a five-year forecast for our suppliers and give it to them every year. We have put long-range contracts in place where we need to. We have been planning for these ramps for a long time, and we are very confident in the capacity we have lined up.

Yeah, I would just add that we've come so far on that front, shifting from reacting to demand to planning multiple years in advance. We actually made investments right in the supply chain. We built partnerships. We put contracts in place to make sure we had access to the best technology. We staffed the team. From that standpoint, I think we've got a very robust set of supply chain partners, and they're deeply committed to us and our success. Clearly, given the magnitude, which is what makes this a great question, we need to plan even more aggressively in advance as we chart our path in data center, from a couple of billion dollars to $4 billion-plus last year, and the ramp we've got through 2028 and beyond.

I think we're in great shape as a company from a supply chain perspective and in the partners we have. The next question is from an investor. Could you talk about the various flavors of SerDes used in your XPU portfolio specifically, for example, extra-short reach and others, and the specific attributes where your portfolio is better than merchant solutions? Yeah, Ken, do you want to lead off on that? Maybe Mark, you two can team up on this. Can you repeat that? The different flavors of SerDes. We primarily focus on long-reach SerDes, across electrical and optical, and we have coherent SerDes as well. On electrical SerDes, we primarily focus on long-reach, although we support short-reach as part of that. We do have XSR, extra-short-reach, capability, so we can turn that on at any time when there is customer demand.

Just to add to that, from a broad portfolio perspective, it covers all the data rates Ken mentioned, 112 and 224 gig, on all the process nodes you would expect: 5, 3, and 2 nanometer. It is a deep portfolio of multiple data rates across electrical, optical, and coherent modulation, which makes it a full-blown portfolio. The concurrent planning on this part of the company's engineering is extremely complex, because you are trying to shoot basically three years out in advance with multiple nodes, multiple reaches, and multiple line rates. It is a non-trivial job to do the architecture planning. The team assembled under Ken is really world-class in its ability to think through what is needed, plan the engineering, and then get on the shuttles. This is a machine now, one we have built.
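
As an aside for readers, here is a minimal sketch, purely illustrative and not Marvell code, of why that concurrent planning is combinatorially hard: every added data rate, process node, medium, and reach multiplies the number of SerDes variants the roadmap must cover. The lists below use only the figures mentioned above, and not every combination is physically meaningful.

```python
# Illustrative only: the SerDes planning matrix sketched from the figures above.
# Not every combination is physically meaningful (e.g., coherent is typically
# long-reach optical), which is part of what makes the planning non-trivial.
from itertools import product

data_rates_gbps = [112, 224]                     # per-lane rates mentioned
process_nodes_nm = [5, 3, 2]                     # nodes mentioned
media = ["electrical", "optical", "coherent"]    # medium/modulation families
reaches = ["long-reach", "short-reach", "XSR"]   # XSR = extra-short reach

variants = list(product(data_rates_gbps, process_nodes_nm, media, reaches))
print(f"{len(variants)} candidate variants to triage three years in advance")
```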

I actually feel it is a machine that is driving leadership in terms of product release and IP availability. That is what is really winning the customers: we are able to show real working silicon in the lab. That gives customers confidence that we can continue to provide this IP, which ultimately is a key determinant of whether they keep working with us or not. Great job, Ken, to you and the team on this one. Thank you. Thanks. Maybe just to add one more: we not only focus on tape-out. We also focus on, as Matt would say, post-silicon support and customer support. That has actually turned out to be very, very important. Yeah. Thank you. Right. We have a two-part question from an investor. First, can you clarify how you treat NRE in your OpEx?

Second, can you also speak to how your OpEx will scale as you move from supporting a handful of wins last year to the 18 wins you talked about today, with 50-plus in your pipeline? How should we think about OpEx in the future? Yeah, I'll let Willem handle that one. I might add at the end. Sure. Yeah, so NRE, non-recurring engineering, we recognize as a reduction in our operating expense. Customers effectively invest with us in the products we are co-developing with them. The result is that we get additional leverage on the R&D investment: the actual investment is quite a bit larger than what is shown on our P&L.
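
For modeling purposes, here is a minimal sketch of that contra-expense treatment, using entirely hypothetical dollar figures since the split is not disclosed on the call:

```python
# Hypothetical figures, in $M; illustrates NRE recognized as a reduction of OpEx.
gross_rnd_investment = 2_000.0   # total engineering spend (assumed for illustration)
customer_funded_nre = 400.0      # NRE recognized in the period (assumed)

reported_rnd_opex = gross_rnd_investment - customer_funded_nre
leverage = gross_rnd_investment / reported_rnd_opex

print(f"R&D shown on P&L: ${reported_rnd_opex:,.0f}M")
print(f"Actual investment is {leverage:.2f}x what the P&L shows")
```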

If you look at the work we have done in funding all of these investments, over the last year the data center has become more than three-quarters of our business. When we looked at the remainder of our portfolio, we had already invested heavily in bringing those products to a leading level of technology, so we assess the required investment on the rest of the portfolio as being lower today. What we did was very actively move resources and redirect our investments to the data center. Not only do we have the benefit of the NRE, but we have really optimized our portfolio. As we look forward, you should expect us to continue to drive very significant operating leverage as we grow the top line.

Yeah, maybe just to add: on top of all that, the NRE is critical, and the company has been very thoughtful in its capital allocation over the years. I think back to when I was interviewing you and we were talking about making the jump to light speed on nanometers, and how we were going to pay for it. That was always a big worry. We did not make that jump the investors' problem. We drove a lot of operating leverage through the cycle we've been through, and we've now gotten on the leading-edge train. We get this benefit where we have co-investment from our customers, and we also have a lot of revenue scale we're driving.

Even recently, you keep seeing this trend where we're growing our revenues but growing our operating income at a faster rate. We're getting leverage in the model, but we're clearly prepared to invest with our customers. This is a monster opportunity, and we're going to continue to grow R&D spending in the company, despite the ups and downs of semis as a cyclical business, the cycles we all feel, especially the employees, when we're going through them. If you look back over the last nine years, we've actually grown R&D spending at Marvell every single year since 2016, just consistently. We've done a lot of reallocating, and we've put our bets where the future is. I think that's what has gotten us to where we are.

I'm very confident that with the customer investment, the leverage we're driving, and the way we run capital allocation at Marvell, we can continue to have a compelling financial model while being very well positioned to win all the designs in front of us and compete at the highest levels on technology and readiness. All right, the last couple of questions. The first one is from Srini Pajjuri at Raymond James. Given you're one of two full-service custom providers, plus all the design wins you articulated today, I guess I'm a little surprised by your 20% share target. Why not higher? Who am I going to give that one to? I mean, look, we can all look back and say 20% is low, but maybe I'll just comment on it since it's such a big question. I think it's a journey we're on.

Just look at the progress: in this custom area, as an example, we were at less than 5% market share just a couple of years ago. Take it to the overall data center level, and even there we were at about 10% market share a couple of years back, before ChatGPT; with ChatGPT and the rise of GenAI, we're now at 13% share. It is like anything, right? It is a journey we're on, and 20% is a great benchmark along the way. We are here for the long game at Marvell. It is not a case of, hey, we get to 20, we spike the ball, and then we all retire and we're done.

We could get there earlier, by the way, because with some of the designs we've got in the 18, you do not know how big they can be, quite frankly. You do not know on the 50. You do not know what some of these new emerging customers can do or where the traction can come from. This is our best estimate. People could call us conservative; you could also call us thoughtful, judging the business the right way and giving ourselves some room. I think it is going to be an absolute home run for employees and investors if we can achieve our goals in the data center and drive that kind of revenue level.

We have just got to keep executing, increasing our market share year in and year out, and let the market evolve. That market is going to have its ups and downs along the way too, but through the cycle it is going to be a big, big market. On top of that, just to zoom out for a moment to the Marvell level, you have the core business roaring back as well: our carrier business, enterprise business, and industrial business. Those businesses are coming back. When you add those up and look at the opportunity in this company, it is quite substantial. I view the 20% as a good goal in that time frame, and I think it only gets bigger if you look at the momentum we have. Right.
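
As a rough back-of-the-envelope, editor's arithmetic rather than company guidance, here is what those share points would imply against the roughly $95 billion opportunity cited in the wrap-up below, treating the share figures and the TAM as a single market purely for illustration:

```python
# Editor's illustration only: implied revenue at the share levels discussed,
# applied to the ~$95B opportunity cited in the closing remarks.
tam_billions = 95.0
for label, share in [("a couple of years ago", 0.05),
                     ("today", 0.13),
                     ("target", 0.20)]:
    print(f"{share:.0%} share ({label}): ~${tam_billions * share:.1f}B")
```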

The last question is from Chris Caso at Wolfe Research. Can you help investors understand better what you are counting when you show sockets? For example, the five XPUs you showed: are any of these follow-on projects at existing customers, or are these all discrete projects? Yeah, great question. Happy to keep clarifying. Chris, you want to take that one? Yeah. The 18 sockets we have today are multi-generational in nature and all independent of one another. Now, of the 50 opportunities we're chasing, some would be follow-ons to the 18; the next generation of one of those might be part of the 50. Of course, there are also a lot of incremental opportunities in there that would be brand-new sockets. Perfect. I think that was the last question. Okay. Yeah, that was actually rapid fire.

I thought that was great, and I got so much help. Look at what we can do when we all go parallel, right? Everybody's invited to the conference. Yeah, exactly. I think on the next earnings call, I might have a few new friends join me in my conference room. Anyway, just to wrap it up, thanks everybody for joining us today. To everybody in the room, all the Marvell employees listening, and all the investors on the call, we appreciate the interest. There were great questions from the analyst community and our investors. We're clearly going to have a lot to talk about, and we're happy to go do that. There's a lot of new things we talked about today: new reveals, a bigger opportunity. Again, I appreciate everybody here and the outstanding job you did. A few final points.

To wrap it up: first, custom silicon is a major, major growth engine, not just for Marvell but for the industry in pure TAM terms. To put it in perspective, the data center silicon TAM a few years out is going to be as big as the entire semiconductor TAM, everything all in, from just a year or two ago. That is exciting, and custom silicon is driving a lot of it. We're in the sweet spot there given how long we've been investing and how long we've been at this. It's a credit to the team that the really smart people in this company, in this room, and across Marvell saw it coming, and we were able to start preparing.

We did not know how big it was going to be, but we certainly had an instinct that this was where the puck was going. Second, we have established deep relationships with very significant, important customers, and we value those immensely. Like I said, we are all in with our customers. We are going to invest with them to make sure they are as successful as humanly possible and that they go make it happen. Third, our portfolio is very broad within custom. If you look at the capabilities we outlined today, there is a big moat in what we can bring in, showcase to our customers, and leverage to get them their best solution. There is also the fact that we can come in with our interconnect portfolio and other product lines to sell to them.

And of course there is this new emerging XPU attach area, which really adds value to customers' architectures and how they design their systems. I think we're extremely well positioned to capitalize on this $95 billion-ish opportunity for Marvell, and there's huge, huge growth potential in front of us. I just want to thank everybody for attending today at our Senior Technical Leadership Conference. It's a great way to kick it off, isn't it, guys? All right. Perfect. Thank you. Thanks, everybody, for joining and for your interest in Marvell. We'll be in touch. Thank you.
