Good afternoon, everyone. Last session of the day, I think we saved the best for last. Delighted to have with us Marvell Technology. We have the CEO, Matt Murphy, and VP of Investor Relations, Ashish Saran. I'm gonna kick this off just the way we've done the whole day, kick it off with a few questions. Halfway through, I'm gonna pause and see if anyone has any questions in the group. Happy to kind of jump in as you want. Thanks a lot for being here. Really appreciate your time.
Yeah.
You know, I guess maybe just to kick things off, you know, I think, Matt, one of the debates everyone seems to be having is: What does the end-demand trajectory look like, right? We've gone from supply constraints to potentially demand constraints in certain places. You folks just had your earnings call recently, and, you know, it sounds like AI is doing well, but the rest of it is perhaps a bit uneven, if you may. You know, just touch on a high level, what are you seeing? Then we can dig in kind of from there.
Sure. Yeah, great question, and thanks for having me. Yeah, if I summarize, can you guys hear me okay? If I summarize the last earnings call we had, I think the AI upside's great, and it's great to see how that's strengthened throughout the year. And basically, what we said was, "That's up, and everything else is sort of down." So basically: don't change your numbers, right?
Right.
And we'd say the way we're thinking about it and managing it is, clearly, there was a huge inventory bubble built. This is not like anything I haven't seen before. I mean, I've been doing this for almost 30 years, kind of on the front lines, so I've seen all the cycles going back to... My first one was 1995, the supply-demand imbalance, and, you know, this happens. I think the balance sheets of our customers need to get worked down.
Mm-hmm.
I mean, you can sort of see that bubble still sitting out there. But, you know, I think the way to think about us is that we've really built a very diversified company with a very strong foundational layer in some very, very profitable, very attractive end markets that are going through a correction. One of them is enterprise, right? Which, by the way, even after we sort of get through the inventory correction, that business is up so meaningfully versus where it was back in 2019, 2020.
Mm-hmm.
And that's really mostly organic. It's through our share gains, our content uplift, and performance there. So that, once it goes through its correction, will still be at a significantly higher level than it was before and very profitable. We've built a nice carrier business to kind of layer on that as a foundational layer, with both wired and wireless infrastructure. And then, we've kind of got the growth stuff layered on top, which is, you know, cloud infrastructure, AI, automotive, and a few other call options.
Right.
And so the goal for us, from day one for me, has always been to think about diversifying the firm, such that when you have to go through down periods, you're gonna just, in theory, outperform.
Yeah, and maybe just to add just one more, I think, important-
Mm-hmm
... point is, it's not just AI, which is outperforming, clearly, but I think we've also started to see growth from what we call standard cloud infrastructure, right? So as data centers start to accelerate, starting in Q2, and we got it up quite meaningfully in Q3, AI is certainly growing quite significantly. But our networking business, which is really connecting standard cloud infrastructure, has also started to recover quite nicely, right? So we saw, call it, a short two-quarter kind of correction there-
Mm-hmm
... and that business is also now looking very healthy. So that's kind of an important point. It's not just AI.
Got it. That's fair, and yeah, I'll say this: If you ask most of the networking and enterprise OEMs, they are sitting on a lot of inventory, and they'll all tell you it takes 12 months to work down, so maybe there's a little bit of that, as you were talking about, that-
Yeah
... needs to happen as well.
I mean, it's simple math, right? If you just look at all those companies and just look at their inventory level-
Right
... growth year-over-year-
Mm-hmm
... for the last couple of years, and you look at their revenue growth, I mean, it was like everything else, right?
Yeah.
You know, like, just imbalance.
Right, and-
So it's gotta correct.
Exactly. You know, maybe just to dig in on the AI opportunity, right? I think you folks talked about an $800 million exit run rate at this point, and I think $400 million was the prior expectation, if I may. There are obviously a lot of questions around that, a lot of focus on that. Maybe just touch on what's driving this upside. Is it all around the PAM4 optical? Is it more diversified, with, you know, the custom silicon stuff? Just talk about what's enabling this big step up.
Sure, yeah. I think if you look two quarters back, we sized the AI business for Marvell as about $200 million last year. It was gonna go to $400 million this year, roughly-
Mm-hmm
... double, and then, you know, at that time, two quarters ago, we also viewed the opportunity as doubling again. And we did that to really try to be helpful to investors who were trying to sort through all of your different investment opportunities, and, you know, "How big can this be, and where's the ballpark, and what's the range?" So we did that. The latest update is, we basically provided incrementally positive news, in that the fourth quarter of this year would actually already be at the $800 million run rate. And we're not gonna go through every quarter and try to update and call the ball, like, you know, "What's this year? What's the-
Mm-hmm.
It's just too complicated... It's too uncertain at this point. But the upside and the strength that we've seen, you know, for the last six months has really been driven primarily by strength in the PAM4 optics business. And that order backlog and bookings, that sort of pull for supply, really continues to increase at this point. And, you know, that will also grow next year as well. So that's a real positive. Beyond that, you know, we'll see where next year lands and where this AI cycle takes us, 'cause right now, it's got a significant head of steam on it.
Yeah.
Yeah, I think even our DCI business, which is a module business within our optics portfolio... To Matt's point, the bulk of this activity is really PAM4, which is connecting clusters inside cloud data centers and then the switches to each other. But we started to see upside, you know, earlier in the year. In fact, the first upside we saw in our AI revenue was actually the inflection post-ChatGPT, in terms of higher deployments of these data center interconnect modules, right? Now, that's a smaller part of the revenue, but that's also growing quite nicely. And as you look forward, as you start to think about more and more... You've spent the money, you've built your training model; now you need to get it out to all your inference clusters-
Mm.
This is where you'll see a lot more activity, right, between data centers.
Yeah.
You know, PAM remains obviously one of the bigger, faster growth drivers, but I would put DCI kind of right behind it.
Right. I know Ciena actually talked about this very topic on their call, saying they're starting to see very initial signs of the DCI business starting to ramp up. And this is more of a 2024 story, but to your point, eventually, you have to take the training to inference, and that's where-
Right
... data center to data center connections will work better.
Yeah, and DCI is a market-
Yeah, but to be clear, for us, it's a 2023 story.
Yeah.
It's a monster upside.
That's my point.
This isn't something in the future. The genesis was, you know, back when it was Inphi, they pioneered this approach, right?
Mm.
For a pluggable data center interconnect module, replacing a transport box and replacing sort of the traditional architecture. And at 400 gig, you know, it obviously has transitioned to 4x the bandwidth and more of an industry standard. But yeah, that business today is running significantly higher than anything Inphi ever sized.
Mm-hmm.
That's this year, and of course, that's going to keep going. And just to put a cherry on top, just to kind of emphasize the importance of this: over the last six to nine months, as the AI thing has really become real, there's this trend towards inferencing closer to the source. Plus, some of the challenges, quite frankly, of actually building mega-scale data centers, like physically getting enough land, enough power, enough energy, permits, you know, you name it-
Mm-hmm
... this regionalized approach, I think, is gaining momentum. So we recently announced our 800 gig ZR product, which I think was very much a surprise to the industry. We timed it with our release of our 800 gig coherent DSP for telecom. It works in both applications.
Yeah.
So that's a roadmap change that we pivoted on, actually, because of AI. So that, I think the bulk is PAM, and that's kind of what everybody talks about. But this coherent technology, DSP, which is used in the DCI application area, is very strategic, very hard to do, and I think both of those are actually going to be very significant growth drivers.
Right
-for Marvell.
Got it. Yeah, and you said it's not the intent to keep updating these AI run rate numbers. Do appreciate you giving it right now. But I think, you know, one of the concerns that I'll always hear from folks is, what's the durability of these AI numbers for everyone, right?
Mm.
Like, is this just like a nice, big sugar rush, and then we're going to have a crash? Or, you know, how do you think about the durability of this-
Yeah
... and the sustainability of these growth rates?
Yeah. I would uplevel it. The way we're thinking about this is that we strongly believe, as a collective team at Marvell, that the shift from traditional computing architectures to accelerated computing is accelerating.
Mm-hmm.
And it couldn't be more clear just by looking at NVIDIA, right? It's sort of the poster child for that. But it's not just AI, right? And it's not just a GPU as an example. I mean, it's actually all the reasons why all the SAM growth for the last few years, quite frankly, in computing, even in traditional data centers, was actually moving to DPUs, customized offload chips, right? Trying to optimize around the whole system. That train's been running for a while.
Mm-hmm.
It's inflected, but that sort of structural shift is going to create, you know, a significant TAM increase in the semiconductor industry. Now, that may come at a decrease from somewhere else, but where Marvell has always been pointed is in that area, and it's not just compute for us, right?
Mm-hmm.
As we're talking about, it has a ripple-through to our optical strategy, a ripple-through.
Right
... to our DCI strategy, a ripple-through to our switching approach. Our custom silicon TAM gets massively larger, right, as things go from a traditional kind of fixed architecture to very open and customized. Things like AECs become critical, CXL, I mean, I could go on. And then we've got a whole bunch of things you guys don't even know about because, obviously, we're investing for the future. So to put it in context: there's clearly a massive ramp in AI. It's breathtaking in scale. None of us know how long it's going to last, per se, and whether there's going to be a reset period at some point before it continues.
But for us, and for me as the CEO, where my number one job is to allocate capital for the company, we have to think in five-, seven-, ten-year kind of time frames.
Right.
So when I draw a line from here to there, the trend towards traditional computing systems moving to accelerated computing, clearly happening. It has a profound impact and creates an opportunity for Marvell. So I can't comment on when the AI thing's going to happen.
Mm-hmm.
That's what everybody wants, you know; every question is a specific question, and sometimes you don't know the answer because it's unknowable.
Right.
But it's better to at least share with investors that, from my standpoint, the thing that I can help affect the most is outcomes that are beyond a bubble, a reset, a correction, if you will.
Right. Fair enough.
Yeah.
You know, actually, the way I also think about this is the amount of cost savings a company's going to get by deploying AI properly, right? Like, IBM will talk about a 30% labor reduction if you can do it well. It would be logical for them to keep investing in AI if those are the real savings you can end up with. So it might be lumpy, but to your point, on the 5-, 10-year journey, it has to be much higher at year 5, year 10.
Yeah, and at the much higher level, I think there's absolutely a massive productivity unlock, and we probably haven't seen an opportunity in a long time, you know, in the world, where there's a technology that can actually enable significant productivity to be unleashed.
Right.
And then that'll, in theory, justify the capital. The question is the mismatch in the timing-
Right
... and when does that happen? And again, hard to know, but it seems real from sort of top to bottom.
Yeah.
Again, we have to look through cycle to think about our investments.
Yeah. The only one I can think about, it's not as big, is deployment of BlackBerrys back in the day-
Mm
... in terms of how that unlocked productivity. But everyone got a BlackBerry 'cause it was so much more efficient.
Perfect. You know, I guess maybe just broadly talk about the breadth of your design pipeline, what resonates with customers when it comes to the AI stuff versus not. Clearly, the DSP optical stuff is resonating very, very well, but how about things like Ethernet switching, or custom silicon, or photonics? Just talk about the breadth of what you're doing here.
Yeah. Well, I think it's actually the power of the portfolio that's resonating the most. And the reason why I say that, and I would say this is true all the way to the very highest levels of these customers we have, is that for these companies who are especially hosting services and, you know, doing that for a living, this inflection on AI represents a massive opportunity for them and a massive threat-
Mm
... if they don't get it right. This is sort of where market shares move and history gets made. So I think strategically, all of our customers, and everybody in the ecosystem thinking about this, is thinking about: how do I, with whatever unique set of applications or competitive advantage I have, leverage my own somewhat unique approach to drive a TCO advantage?
Right.
Right. I mean, at the end of the day, if you don't have a TCO advantage or some kind of sustainable competitive advantage, you're gonna get commoditized out, and you're just gonna be competing on price, and it's gonna blow your whole value prop up. So the reason I frame it all that way is: if you're a customer, and you actually look at who is a supplier of technology that can really enable all this, having that all come together is really important. As an example, the switching platform technology we have, or you could even say custom silicon, think of those as the large digital blocks. Those need to directly interoperate and interface to the outside world through optics, as an example-
Right
... could be PAM4, could be a roadmap for new modulation technologies to enable faster throughput in the future. Once PAM, at some point, runs out of gas, we have a roadmap there. Co-packaged optics is one angle, linear direct drive is another angle. You have the ability then to optimize those solutions together.
Mm-hmm.
It's not just taking, well, somebody had a part that got, you know, defined three years ago, and here's the part, so can you make a better part? It's like... I mean, everything down to the NIC, to the switch, to the interconnect, it's all gonna get rethought because it's a completely different application and set of applications. So I think somebody that can do the customized silicon approach, have all the right connectivity and interfaces between those, have the upper layer networking to actually move the data around. I mean, you know, we have incredible storage assets, as an example. Maybe there's a new way to skin the cat on how these large language models use that. So I think we have, like, all the pieces.
That's what we've been building at Marvell with our data infrastructure strategy. So we're kind of here, and now you have the ultimate sort of killer application that's gonna drive a lot of need for innovation, and you can't just do this overnight. You can't sort of be on, like, a corporate board and say, "Oh, wow, hey, this AI thing's happening. Do we have a solution for that?" You know, and some companies just, "I don't even know what all this stuff is."
Right.
I mean, we spent, like, 7 years, right, to put all this together.
Mm-hmm.
Plus, you layer on, you know, process technology leadership, packaging leadership, IP leadership in terms of controlling our destiny on IO, both optical and electrical. It's a pretty powerful combo, and then to be able to come in in a very partner-oriented way, and also have a culture of collaboration inside our company, which allows all these different groups to actually come together-
Mm-hmm
... And that is a very, very powerful value prop that I don't think is really matched; or, at least if it is, there are a couple of people, right, that have that sort of combination. It's a scarce few of us. And so, as I look at this wave of accelerated computing and AI and what customers want, I actually think we're one of the companies that really has what these customers want, 'cause we can actually do this.
Right.
We can execute it, and it's not like, "Oh, yeah, I'll, I'll get a couple of people," and then groups that never talk to each other are trying to show up in a meeting, and one person's on some other flow or has some other vision of the future. So we're driving a lot of internal alignment so that the whole company gets rallied behind this, and all the groups and all the technologies come together, and we look like one unified partner to our customers.
Yeah. Now, do customers come to you and say, "I want all of this from you?" Or do they come and say, "I just want your optical stuff, I'll figure out custom silicon somewhere else. I'll do Ethernet switching somewhere else"-
Yeah.
or do they want a one-stop shop?
It kinda goes like this: it's top-down, bottom-up, right? Bottom-up, there's a team, and all they do is optics.
Right.
They're like the best in the world, right, at our customers. They're super detailed, and that's what they're focused on. Then you meet another group, and they're doing this, and so on. But if you go executive-down, think if you're an executive at one of these companies, running these big properties, you've got to be thinking, "Boy, if I've got all my people running around, doing point solutions, optimizing every little thing, am I gonna get the best outcome?
Mm-hmm.
How am I even gonna synthesize what the heck is going on?" And so I think we're trying to do a good job of being very sort of 360, right, with our customers, and selling the value at the very, very detailed, detailed level to kinda run the gauntlet, but also making sure that we're aware. And I think the combination of those is leading us to a much more solution-oriented approach, which ultimately does add a lot more value to our customers. It adds content opportunity for us to drive, and it helps drive our roadmap because then we can think three steps ahead.
Yep.
You know, how are, how are they architecting these systems? Where is it going? And if you get the input from the switch group, and the optics group, and the custom group, we actually have a pretty good view of where these things are heading, and then we actually take that back in, and we work very closely, kind of in a bespoke manner, with our customers on how we could help them think it through. 'Cause we can see a lot of their organization that they might not, especially big companies.
Right. That's great.
This is all we do for a living, right? So it's not like we've got some other business on the side, like, "Oh, we gotta go visit the TV accounts, and we'll be back in a month." You know, it's like, this is all we do. All we do is infrastructure. All the people we have in the company know this stuff.
Right.
They're, like, super geeked out on it. Everybody loves it. It's very esoteric, it's not mainstream, but we're really, really good at it. And so I think, you know, I think the approach seems to be resonating.
Yeah, and this has evolved significantly. If you looked back five years, I think we'd have given you a very different answer, which would be more in terms of, hey, point solutions, right?
Right.
But I think in the last five years, as the company has evolved, we've added all this capability, this technology. I think the conversation's moved up massively. I think the levels Matt talks to today are very different than what it would have been-
Right
... at the sales level, like, five years back.
Yeah.
It's a complete change.
I have somebody on my staff in charge of this. I mean, Loi Nguyen, one of the founders of Inphi-
Mm-hmm
... who's one of our most esteemed technologists and business executives, a champion of the culture of the company; he runs an entire task force on this, with the CTO, heads of engineering from the business units, product definition, and he's driving it. So it's not just like, "Hey, I'm the CEO, go run a task force."
Right.
I've literally got an executive, a big part of whose remit is to actually really get organized on this front. It's unique.
Right. It's truly very differentiated.
Yeah.
Like, you don't hear that narrative from others, so...
Yeah.
This is gonna be a question on a different topic than what we just talked about, since you talked about, you know, having a complete system strategy. On that core optical side, where you're actually doing well, could you just tell me what the competitive landscape looks like right now? Who do you see competing against COLORZ, the kind of platform that you have?
On the DCI side?
On the DCI side.
Well, yeah, I think there's always been very strong competition. Back pre-M&A on both sides, it was really Inphi and a very good company called Acacia, right?
Mm-hmm.
And then Acacia was acquired by Cisco, and we acquired Inphi. And I think those are the two that primarily, you know, serve that market. Clearly, there's volume there, and there are other people kind of coming in from the system side. And, you know, like anything, in every market we're in, we have very strong, capable competitors. Certainly, we're doing what we always do, which is driving the innovation really hard. And as an example, the 800 ZR announcements we've made and the speed at which we're able to turn those products, which is really tied to our coherent DSP roadmap, which we have a whole market for-
Mm-hmm
... as a merchant supplier, really helps us. But yeah, there's really one main competitor.
Got it. Then, just on optical attach rates, something that comes up, which is, you know, I think traditionally, servers don't have a lot of optical attach to it-
Mm-hmm
... typically, right?
Yeah, right.
Seems to be evolving and changing when it comes to AI-enabled or AI-centric servers. Can you just talk about how does that change, and what does that look like for you folks?
Well, let me tee it up, and you can hit the ball here.
Sure.
I think I'd just say that the answer is that accelerated computing platforms and AI platforms have significantly higher content of our products in the optical area, and that's primarily because the raw computing capacity and throughput is such that you just have a ton more bandwidth coming out. I mean, you're talking about, you know, 3.2 Tbps needing to pass through versus... Well, even more than that, actually; down at the digital silicon level, it's probably like-
Yeah, it's like 30, 30 terabits.
... 20 or 30 terabits that's gotta get shoved out through a 3-terabit pipe. And, you know, servers are 100 gig.
Yep. Yeah.
200 gig, you know, 50 gig. So it's just a different ballgame. And so an early concern from some investors was, geez, these AI systems aren't gonna have this intermediate connectivity; they're gonna have, like, a content issue.
Right.
It's not as much volume, so. But as you've seen in our numbers, it's actually turned out to be a very important part of the equation.
Yeah, I think at a high level, to your point, right: for servers individually, the first hop from a server to a top-of-rack switch has traditionally been direct-attach copper, right?
Right.
'Cause you're dealing with tens to maybe 100, maximum 200 Gbps of capacity, and that's fairly low.
Mm-hmm.
It's mostly 50-100, right? You don't need an optical connection. It's really at the switch connection. So now you're like one or two hops above the actual-
Yep
Server is where you have our 200, 400 gig optical products shipping in volume for the last few years. What changes in an AI cluster is, essentially, the first hop itself from an accelerator, whether it's a GPU or a custom ASIC, is optical right from the start.
Right.
You can imagine every other hop, switch to switch, also has to be optical. So the attach rate is... Think of many servers in traditional cloud driving, call it, one or two optical links, versus basically a higher ratio: you've got more than one optical DSP required per accelerator. So it's a massive, massive difference, right? And that's what we see today. As you look forward, as you can imagine, the compute power of the next generation of AI chips will be even higher.
Mm-hmm.
There's actually, as Matt said, a big mismatch between what's inside the box and how much you can actually connect out. The value of AI really is that it's a distributed compute application, right?
Right.
The value of the cluster is how much can you network, and we see that actually growing as we go forward.
Yeah. And so a way to think about this, actually, is how underutilized do you want your GPU clusters to be-
That's right
... if you're running-
That's right. You're spending all this money on compute, right?
Right.
You want to feed the monster as fast as you can, right? Like I said, as you look forward, these AI applications don't fit in a single accelerator, or even 10; it could be 100, could be even 1,000. You have to network them together. Versus in a standard server, a lot of applications fit inside a single CPU, which is why you virtualize them and actually slice up the application, right?
Yeah.
'Cause you can't actually use a full CPU for one application.
Cycle, right.
Completely different world when it comes to AI.
Fair enough. Maybe I'll pause there for a minute, see if there's any questions in the group. Go ahead.
Wait for the microphone.
Thanks, guys. Wondering if you could talk a little bit more about the 51.2T switch upgrade cycle, and whether you see that being driven primarily by AI applications or by traditional infrastructure as well?
Sure.
I think it's both.
Yeah.
Simple answer. I think the current installed base, sort of the state-of-the-art that's broadly deployed in traditional cloud infrastructure today, is at 12.8T. Everybody sort of generically decided to skip 25.6T and wait for 51.2T. You get the quadruple effect, and there's a big lift to go do one of these cycles, so that's gonna happen independently. Then on top of that, I think the increased bandwidth and network capacity required is gonna have a kicker from AI, and I think there are very specific sort of AI features as well that probably need to get integrated into a future release. So I think there's gonna be a ton of activity.
I think it's gonna be a very strong industry upgrade cycle, going on in networking, in cloud infrastructure, driven by the 51.2T platforms that are becoming available.
Yeah, and I think we're very well positioned. As you know, we just announced we started sampling our product, so I think we should see a lot of activity on this front. There's a lot of excitement from customers as we go forward.
Is there any way that you can size that opportunity?
I mean, the data center switching market's already a very large, multi-billion-dollar market today, right? And as you go from generation to generation, as you give your customer, call it, 4x more bandwidth, you also see a pretty nice content uplift, right? So not only do port counts continue to expand for a number of reasons, like AI, as one example I talked about; total throughput is gonna keep increasing on top of that. And as you go from 12.8T to 51.2T, you also get a pretty nice ASP uplift.
Yeah.
The market is gonna be growing at a very, very fast rate.
Yeah, multi-billion dollar SAM today, and gonna grow at a very healthy rate, actually, going forward, just given the content side of it.
Perfect. All right, I guess maybe we'll switch gears a little bit to custom silicon. You know, maybe just touch on what you see as the opportunity here. What sort of design opportunities are you engaging in on the custom silicon side? And then, you know, what is Marvell's value proposition in these products?
Mm-hmm. You wanna start with this one?
Yeah, sure. I think the journey we started down custom silicon really began when we acquired a company called Avera a few years back; this is the old IBM internal ASIC business, which had done probably close to 1,000 designs worldwide. And what we saw was an opportunity, as cloud customers started to focus on really building out massive internal data centers and having very unique workloads, quite frankly, right? They were starting to look at building their own, augmenting what they buy from the merchant market, and we saw those initially as, really, server offload devices or accelerators.
Mm-hmm.
And as you moved into the AI era, you've now seen more than one public example of some very large applications, right, which are going towards custom silicon. And again, the idea there is, especially for workloads where it's kind of their own data stream, they wanna be able to understand how they can extract the maximum value, how they differentiate. And they're looking for partners, 'cause these are incredibly complex devices, especially the type of activity we're engaged in, right? Which is typically advanced process geometry, typically very high IO rates. You need someone who has the world's best services, essentially, and we are one of two companies, I would say, that have that at this point in time. And the ability to invest consistently: it's not just 5 nanometers today, it's 3 nanometers now, and it's 2 nanometers going forward, right?
And having very advanced packaging capabilities. This is really a co-design effort, where the very front end is done by the customer, because they have a unique understanding of what they want to implement. But they're looking for a partner who can take that design into something which can be manufactured with very high yield, on time, first time, right? And that's really the capability we provide. Soon after we did Avera, we jumped to 5 nanometers and really had this custom platform available for the first time. We won a whole slew of design wins. A number of products are in production today, and we are looking forward to ramping additional, much larger AI programs as we get into next year.
Have you talked about how big this opportunity could get over time? And then, somewhat related to that, one of the questions always around custom silicon is: What do the margins look like? Are the gross margins better or worse than the corporate average? Just anything on those two fronts.
Yeah, maybe I'll talk about margins first, right?
Okay.
So, I think first is, end of the day, what you're trying to drive is bottom line, EPS growth, operating margin dollars, right?
Sure.
So I think the way you should think about the way we manage the business is that the operating margin profile of custom silicon is no different than what it is for our merchant products. The difference is your gross margin will be lower, because the customer is fronting some of the NRE as well as doing some of the design work, so my R&D spend is gonna be a lot lower, right? So the net-net of it is the business model is essentially the same, which is you want to be able to drive our long-term corporate average, which for us is 38-40 points of op margin, and that's essentially what we see in the custom silicon business.
Got it. Yeah.
In terms of the opportunity size, I mean, I would say— Oh, and just to clarify for the team here, the way that we treat the NRE and the customer funding side is as an offset to R&D. So it's contra OpEx. That's where you get the margin flow-through. It doesn't show up in the top line.
Got it. So it's-
So it's like this: if you start with our target model-
Mm-hmm
... and you just build it back up, and you say, "Well, how much funding are you getting?" And then you sort of take the difference of the two, and you can figure out the gross margin.
Got it.
But-
Okay, got it. So the NRE is not a revenue number, it's a-
That's right. Correct.
... offset to the OpEx.
Correct.
Got it.
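The contra-OpEx arithmetic described above can be sketched as follows; every figure below is a hypothetical placeholder chosen only to show the mechanics, not a company disclosure:

```python
# Hypothetical illustration of NRE funding treated as contra OpEx:
# customer funding reduces R&D expense rather than appearing as revenue,
# so a lower-gross-margin custom product can post the same operating margin.
# All numbers are made-up assumptions.

def operating_margin(revenue, cogs, rnd, sga, nre_funding=0.0):
    """Operating margin with customer NRE funding applied as an
    offset (contra) to R&D expense, not recognized in the top line."""
    gross_profit = revenue - cogs
    opex = (rnd - nre_funding) + sga  # NRE reduces OpEx, not revenue
    return (gross_profit - opex) / revenue

# Merchant product: higher gross margin, full R&D load.
merchant = operating_margin(revenue=100, cogs=38, rnd=18, sga=5)

# Custom silicon: lower gross margin, but the customer funds part of
# the NRE and does some design work, so net R&D spend is much lower.
custom = operating_margin(revenue=100, cogs=52, rnd=18, sga=5, nre_funding=14)

print(f"merchant gross margin: {1 - 38/100:.0%}, op margin: {merchant:.0%}")
print(f"custom   gross margin: {1 - 52/100:.0%}, op margin: {custom:.0%}")
# -> merchant gross margin: 62%, op margin: 39%
# -> custom   gross margin: 48%, op margin: 39%
```

With these placeholder inputs, both businesses land at 39% operating margin, inside the 38-40 point range mentioned above, even though the custom product's gross margin is 14 points lower.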
Exactly. Yeah. So in that business, for most of the reasons that Ashish mentioned, even from day one when we bought Avera, that was the thesis we laid out: "Yep, it's gonna be a lower gross margin. Nobody panic, because the business model we're targeting for that business-
Mm
... will be in line with the Marvell corporate average target."
Got it.
That's still true today. That was 5 years ago, by the way.
Yeah.
And we've been managing that business since then, and in fact, it didn't start off that way.
Mm-hmm.
I mean, we've gotten it scaled up.
Mm.
We've grown the business. Actually, the custom business at Marvell has grown at a faster rate than the overall company average, if you look back to when we bought Avera.
Mm.
So you would think, well, gosh, if a business is growing faster and it's lower gross margin, oh my God, what happened? But actually, we have all these other businesses that are better than the average.
Mm.
We just sort of managed the portfolio-
Right
... and along the way, we've chunked the operating margin.
Mm
... up over time, and that's how we think about it. I just want to add one other strategic point, and maybe this is where you were going: why do it at all? This is also, you know, by design, going back to the Avera days. We concluded that for us to be a real leader in our field, table stakes was gonna be process, packaging, and IP leadership, and that was not the prior strategy.
Mm.
The prior strategy was... And you still hear the remnants of this in other smaller companies, sort of, "Hey, Marvell got away with this for a while," which is, "Oh, don't worry, we're a node behind, we're two nodes behind, but we have better design than the other guys, so therefore, we can do the same chip for the..." It's total fake news. Table stakes is also that you have to have really good circuit design, really good architecture, really good engineers-
Mm
... and you need leadership. Okay? It's just how it works. So that's a decision that my team and I made very early on: we've got to pivot this company or we're just gonna be an also-ran. And part of doing Avera was the fact that being in an ASIC business for advanced infrastructure applications was going to drive the node train for the company, and we just hooked onto it, right? And then the Cavium business had always been a node behind; we needed to get that caught up 'cause it was processors. I mean, things like Innovium, things like-
Mm
... these coherent DSPs, I mean, switching, every one of these, it's kind of required or else you're not competitive. You follow me?
Yeah.
Going back to it: why have an ASIC business that drives lower gross margins? Oh boy, that's not good. Well, it's worth it if it can drive EPS and equivalent operating margin and literally be the pipe cleaner, if you will, that the whole company benefits from.
Mm.
Because all we do is infrastructure, so all the IPs we do for that, they all flow back to the mothership. So it's super strategic. You know, it's not like a sideshow business. "Hey, let's go run around and get into the ASIC business, and then let's go run over here and get into this business." Like, in my mind, you know, when we mapped this whole thing out and where we wanted to take this company, it was just a critical piece of the puzzle, and so we're gonna go after that business, even if it is lower gross margin. But if it's at the equivalent sort of net income level, and you can get outsized top-line growth and address this whole huge emerging SAM-
Mm
... and you can get a whole bunch of extra EPS. I think investors are gonna be cracking the champagne if we're successful.
Yeah.
Even if the blended average gross margin is not as high as... 'Cause at some point, you get your gross margin up too high, your growth peters out. We've seen that movie. So we balance all that. I think in general, other than going through this downturn, we've managed this extremely well for the last seven years in terms of managing around a gross margin target, dealing with all the different sort of dynamics.
Mm-hmm.
... but yeah, ASIC is always gonna be lower, but it's not a bad thing. It's a good thing.
No, I mean, it seems like it's a key part of the integrated offering for infrastructure companies.
That's right. That's right.
It's dilutive to gross margins, but it's good on operating margins and certainly net income accretive, so-
Big time, and it-
Yeah
... and to your point, at the first point, it's super strategic.
Right.
I mean, it's as strategic as you get for any kind of hardware decision-
Mm
... that any one of these major companies is gonna make, not just in the four hyperscale companies, but all of the ASIC things that we do, it becomes, like, one of the key decisions, 'cause that chip architecture is ultimately gonna drive the hardware, which is ultimately-
Mm
... gonna drive the value prop, along with software and other things that those companies offer. But it's... So when you have that, you're just in the complete sort of center of the decision in terms of the block diagram, if you will. Go back to the old days.
Right.
You wanna, like, drive the block diagram and attach, like, boom, that's the main thing. And then if you do a good job on that, it's like, well, geez, we're betting the farm on this company-
Right
... let's give them more business, right? So-
Yeah. You know, I was gonna ask you this later, but since we had this discussion already on the integrated solution, M&A, I think, you know, you folks have actually created a lot more value through deals than I've typically seen from companies. And one of the things that always stood out to me is a lot of the founders are still at Marvell after-
Mm-hmm
... I assume their vesting is all done, and they can do what they want. As you reflect back, what have you done to integrate these assets, and what's the strategy behind it? 'Cause it is somewhat unique to have all these folks still at the company together.
Yeah. Yeah, I think there's a couple angles on the M&A. First, I think it has worked out really well for us. On every deal we've done, and we review the history after the fact, we've always exceeded the cost synergies that we committed to.
Yeah.
I'm gonna get to the straightforward stuff first, but it's important 'cause it's part of the track record, right? The second is the really strong team that we built from the very beginning in terms of, call it the IT, the infrastructure, the how-do-you-actually-get-these-done.
Mm.
I mean, we did some of these ERP migrations in, like, three months. Done. I mean, companies-
Mm
...go years sometimes, and they struggle, and they get underwater. Early on, we hired really, really good people, kind of punching above our weight, just in case we were gonna go do that.
Mm-hmm.
So that part's been really clean. And then you get up to the next layer. It's like, God, so you can do that, that's great. That's a value creator. Perfect. And in general, Inphi being the exception, we always tried to justify these deals on that basis: can you get enough cost synergies out of the integration? On the top-line, revenue-synergy side, if you look at any of our deals, we never committed a number. I never once mentioned the words "revenue synergy" as part of a deal rationale.
Mm.
But monster revenue synergies have come from all this, right? If you look at 5G, right, how did we actually get all this stuff going? It wasn't 'cause we had this one baseband. We pulled an L2 in, we pulled a switch, and we did all these things, and then with Inphi, it was like a beyond home run, 'cause it had this sort of second-order effect of a tailwind on the custom business, a tailwind on another thing. So I'm going up the stack now.
Yep.
And then you go: okay, cool, good job. You got it done. Good job, you took the cost out. Good job. Oh, God, you got some revenue synergy. That's pretty cool. Well, did you, like, blow up the company in the process, and everybody hates your guts, and all the founders quit 'cause it was, like, a terrible place to work? And then you're sort of, "Yeah, sorry about that, but I got the other three." I'm very proud of the fact that we've had really excellent retention of the key people, not just the founders, but the key people in general. So you say, "Well, why?" And by the way, just on the founders: we have Raghib Hussain, who was co-founder of Cavium. He's on my staff.
Mm-hmm.
Loi Nguyen, who's a founder of Inphi, is on my staff. Puneet Agarwal, who's a founder of Innovium, is the chief architect and CTO for our switching platform business. I could rattle off a few others. A few others, by the way, stayed for a while. Ramin, a founder of Aquantia, stayed for a while and then went off to do a start-up. We actually helped him out; we funded him to go do that.
Mm.
I mean, it was not like, "Oh, I quit. I hate this place." It was like, "Hey, I wanna go do my own thing.
Mm-hmm.
Here's some money." So then, what's the reason? And the reason, I think, and you can comment-
Mm
... is that from the very beginning, my feeling in becoming the CEO of this company was that to build a really built-to-last organization, one that could execute the strategy I wanted, which was to be this infrastructure leader with a team that could work together, what resonates is the company's culture: the core behaviors that we espouse, the values, the respect that we show, right?
Mm-hmm.
We always use the word "merge"; we're gonna merge with you, we're not gonna acquire you. And then also making sure we take care of the people, including giving them meaningful scope when they come in. Every one of those leaders I rattled off got more scope.
Right.
Some of my existing executives gave stuff up. I mean, when I acquired Cavium, Chris Koopmans, who's my COO, was running our networking business. He gave it to Raghib.
Mm.
Then Raghib said, "Cool, now I'm running the Cavium business and the Marvell networking." When Loi joined, you know, I gave him-
Mm
... I gave him, like, a billion-dollar business to manage. It was bigger than Inphi. Wow, thank you for that trust. And we've benefited hugely from their leadership, right? But they really like the culture. They like working for the company.
Mm-hmm.
So-
Yeah, and customers see it, by the way, right? Because customers are also happy to see that there's significant bench strength. There's folks they've actually worked with individually, right? Because that's what made those businesses successful. So they like the fact that there's that continuity as well as the additional leadership you're providing, but there's also a sense of, "I'm not relying on one single person when I work with them." I mean, these are massive projects spanning multiple disciplines. So the fact that it's the same group of people, but with broader responsibility and much bigger bench strength across the company, I think all of those things are critical. Because for these customers, you're not a piece-part supplier at this point, right?
Right.
They're betting on the fact that they're gonna work with you for the next 5-10 years, not for the next 1-3 years.
Yeah.
That's a piece they've picked up on as well.
Yeah. CEOs have different ways of looking at these executives: they're either an asset or a liability.
Right.
"Too much cost, chop them all out, we don't need those people, they're useless," and you just go down that path. Or you could say, "Huh, I wonder why these companies were so freaking successful-
Mm
... and they hired so many great people, and all those people stayed for a long time. Huh!" And then you meet these amazing people that ran them, and you're like, "I want these people on my team," you know? And they don't have to do it, to your point.
Yeah.
Right.
They're very successful, very financially independent, you know, but I'm really just honored to work with them, because they love the mission, and we love this infrastructure stuff. I mean, it sounds super geeky, but this team loves to do this.
Right.
You know? And so it's like a paradise for engineers. It's a paradise for executives that want to pursue their passion and do it in a non-political, good place to work; get rewarded if you do a good job-
Right
... get the bad feedback if you did a bad job, move forward. I mean, it's very simple, but-
Uh-huh
... so far it's been working, and I'm just very lucky to work with the people that we do, so.
Perfect. I'm almost up on my time here.
Yeah.
So maybe the last one. I think in the past you've talked about, hey, the portfolio we have is great; we're good with the team we have on the field.
Yeah.
I wonder, does AI change that at all from your perspective? Does that create more opportunities? Does it create anything from an M&A basis for you to look at?
Yeah, I think it provides a different lens, certainly. When you look at the world through accelerated computing, if that structurally changes things, which we believe it will, then it certainly opens up our aperture.
Mm-hmm.
You want to look at things always in your strategic context, not "Hey, that looks pretty good, I can get some EPS out of that guy. I can get some accretion out of there." I mean, yeah, that's one consideration, but if it's orthogonal-
Mm
... you guys would slaughter me. You know, it's like, "Oh, what are you doing?" "Oh, what I got you, I got you, like, another $0.10 of EPS-
Right
...on some random thing." So we'll always tend to stay very, very focused on our strategy there, but we really do have all the organic pieces we need now-
Mm-hmm
... from the inorganic stuff, but it's all one Marvell, and that's really what we're driving. And it's cleaner. M&A is really hard; it's really not easy to do. We're good at it. Certainly, if something came along, it'd be great, but it's not a must-have. We really did-
Right
... did what we needed to do there.
Fair enough. I'll pass my time to someone else.
Sorry to the Evercore bankers in the room that are like, "Oh, man, that's a bummer. Can I cancel my meeting later?"
Oh, boy.
Okay.
I will award that one. I'm up on my time.
Okay, thank you.
Maybe I'll turn it back to you. Any closing comments?
Yeah
... anything that you have not touched on that you wanna touch on?
No, thank you, and actually, I appreciate the opportunity to get to talk a little bit more about our company.
Mm-hmm, mm
... and what we're actually trying to accomplish, and the vision we have, and the people we have. It's not often you get to do that. Quite frankly, I just get pinged all day at these events with how many GPUs are gonna sell next year, times the number of 800-gig PAM4s... times the ratio, times the ASP. You know, and at some point, I get that.
Right.
We're gonna be helpful to our investors in creating the models you need to size the opportunity. But I also hope the folks that have a little bit of a longer horizon realize that we're a team that's very motivated to build something really great. You can just sort of say that, but if you don't understand the underpinnings, then-
Yeah
... maybe the story rings a little bit more hollow. So anyway.
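For reference, the bottoms-up sizing arithmetic mentioned above (GPUs sold, times PAM4 DSPs per GPU, times the attach ratio, times the ASP) is a one-line model; every input below is a hypothetical placeholder, not guidance or a disclosed figure:

```python
# Hypothetical bottoms-up sizing of the kind described above.
# All inputs are made-up placeholders, not company guidance.

def optics_opportunity(gpus_shipped, dsps_per_gpu, attach_ratio, asp_usd):
    """GPUs shipped x 800G PAM4 DSPs per GPU x attach ratio x ASP."""
    return gpus_shipped * dsps_per_gpu * attach_ratio * asp_usd

# e.g. 2M GPUs, 1.5 DSPs per GPU, 100% attach, $200 ASP -> $600M
print(f"${optics_opportunity(2_000_000, 1.5, 1.0, 200) / 1e6:,.0f}M")
```

The model is only as good as its assumed ratios and ASPs, which is the point being made: the multiplication is trivial, the inputs are the debate.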
Perfect.
Great.
Thank you very much.
Appreciate it.
Thank you.
Thank you. Hey, thanks for the clap. Appreciate that. Feels great.