All right. Welcome back to the Barclays Global Tech Conference. We have Chris Koopmans and Ashish Saran from Marvell. Thank you very much for joining.
Thanks, Tom.
Why don't we start off with the big question that I'm sure you're getting asked in a lot of meetings: $3 trillion of announced spend, the ability to deploy this spend in question, are we in a bubble, etc., etc.? You guys are increasingly one of the largest providers of AI silicon in the world. How do you feel about this trend? Are we in the early stages of a long investment cycle? Are we going too fast? Any comments there would be super helpful.
Sure. I'll start, and then maybe Ashish, you can add. So first of all, of course, we're in the early stages of a long investment cycle. How you can tell is that, number one, we're constantly being asked to go faster, and we're not able to deliver enough computing power for what the workloads are trying to do today. And so the whole industry is scrambling to build higher-speed I/O, faster compute, faster acceleration to solve big problems that can't be solved yet. And you're seeing architectural explosion across the board as well. So ultimately, if you stepped back and said, "Hey, from 2022, from the launch of ChatGPT," that's probably the beginning of a 10-15-year investment cycle. Now, those don't normally go in a straight line. Normally, there's ups and downs along the way, and I would certainly expect that here.
But I'm not really thinking about that. I'm more thinking about how we actually get to five years from now and deliver everything that is needed. And right now, I would say, if anything, our customers are asking us to accelerate and do more in terms of what they're trying to do in 2026 and in 2027. So it looks really good for now.
Great. Well, you guys attack AI through a variety of different ways, from the XPU all the way through interconnect. I thought it was interesting, your customers are asking you to do more. You did the Celestial deal at earnings. Maybe talk about what that adds to your product portfolio and why you felt like this was the right time to add that piece to your puzzle.
Absolutely. So think about the data center architecture today, and the connectivity inside it. Marvell did the Innovium and Inphi acquisitions in 2021, which were targeting switching and the optical interconnect technology in the scale-out portion of the network. That's the rack-to-rack communication across the data center, as well as the data center interconnect, which is data center to data center. Those are the longer distances. But 80%-90% of the traffic in the data center is in the scale-up portion, which is the XPU-to-XPU traffic inside the rack. So today, the entire optical market and the entire switching market address only 10% of the traffic. The rest of it is really all passive copper today.
There's only one major architecture that has adopted a switch, and none of them are using optics. So our view is that in the next five-plus years, you're going to see many of those architectures move towards a switching architecture, and Marvell is investing in the leading scale-up switch roadmap, including both UALink and ESUN. We'll be sampling our 115T UALink switch in the second half of next year. And the optical interconnect technology that needs to happen in the scale-up portion of the network, where you're talking about millimeters and inches and meters versus tens of meters and kilometers, is a totally different type of technology. That's what Celestial AI brings to the table. Just to give you an idea, the very first chiplet that we're going to be delivering is a 16 Tb chiplet. That's 10 times the state-of-the-art 1.6T in scale-out.
It's a totally different form factor, a totally different density, totally different bandwidth, and totally different technology because it has to be co-packaged directly with these kilowatt XPUs and switches. And so really, between Marvell's UALink roadmap and ESUN roadmap and the Celestial AI Photonic Fabric technology, we're positioned to be the leader in this new TAM that's growing basically from zero today to probably be bigger than the scale-out TAM five years from now.
Yeah. I mean, just to give you a sense, today we have a $3 billion interconnect business, which is all really scale-out, and that's 15% of traffic. So you can think about the opportunity we have with this acquisition, where you have a much, much bigger TAM opening up over time. So Inphi was awesome. It was fantastic. It's done extremely well. But the market size available in scale-up is significantly larger.
And if you put those two teams together, we have the leading optical interconnect portfolio in the industry. And I think that team can together solve any challenges inside the data center going forward.
So we noticed the deal comprised a lot of earn-outs. And then you also think about CPO from a scale-out perspective: it's taken a lot of years to come to fruition, and now we're talking about it more and more. But it does seem more real now than it has over the last couple of years. Could you talk about when that crossover from a technology perspective really happens? You can obviously look at the revenue contribution targets you gave as a first metric, but is that really a statement on market adoption, or is it a statement around Celestial being able to compete first in that industry?
So I think, generally, it's a statement about two different markets. Most of the CPO technology in the industry, including Marvell's own organic investment in our light engine technology, was focused on scale-out. That's a much longer-reach, interoperable market, and that's the one that has been slow to adopt CPO. And if you think about that market: we launched our 800 Gb pluggable technology in 2022, and it's still growing into 2026. We're already ramping 1.6T, so you should assume that's probably going to be growing through at least 2029. We've already demonstrated 3.2T technology. So for the next five, six, seven years, pluggable still works. That's why CPO in scale-out is still not being adopted. This is a totally different market. This is scale-up. And ultimately, this is not going from pluggable optical to CPO optical.
That's a world where people are going to stay pluggable as long as possible. Here, electrical has run out of steam, and pluggable won't work because of the density and the bandwidth required, so you need a new piece of technology. It's really apples and oranges. I think CPO in scale-out will still come, but it's still years away. In the scale-up world, although everybody's working as hard as possible to make electrical go as long as they can, when it goes optical, it has to go CPO from the beginning.
And so really, the time frames that we've put out, which is significant revenue: I think we said a $500 million run rate exiting calendar 2027, doubling to a billion-dollar run rate by the exit of calendar 2028, which you'll notice is probably half of what they would need for the earn-out. So it should be much bigger than that. That's the time frame, and that's really just based on first product, first customer. At the end of the day, the reach of this technology can be massive inside the data center.
Yeah, and Tom, in terms of the timing of that revenue, I would say the reason it's as quick as it is comes down to two things. One is the maturity of their technology, which is part of the reason why we felt they were the best choice; there's obviously a number of folks trying to go after the scale-up market from a photonic perspective. And the second, of course, is their ability, especially along with us, to drive this into production very quickly; they've already got a lead customer, which is very, very public. We believe this will be the first large commercial deployment of photonic technology within the scale-up network. Again, it's really based on the maturity of the platform they're bringing to the table.
In the intermediary period before we move to CPO or optics-based scale-up, you do see other technologies inserting themselves into this space, whether it be active copper cables or AECs. AECs have been used between racks, but there's also a view that they could be used intra-rack as well. So I saw, either yesterday or this past week, time is a flat circle, the Golden Cable initiative. What are you guys doing there? Is that just your effort to work with multiple cable manufacturers for ease of use to the supply chain? Why were you so formal about that announcement?
Yeah. If you think about it, something like a cable is no different than an optical module, and our hyperscale customers want to multi-source it. They don't ever want to be single-sourced, because these things need copper: you need to have access to copper long-term, and there are factory issues that could come up. They need multi-sourced solutions in this area. And they came to us and asked us to provide a multi-source solution. So what we've done, and what we've been working on now for a year plus, is enabling the entire copper cable ecosystem to develop and deliver active electrical cables that meet the standards and quality requirements of those customers.
And so our Golden Cable initiative is basically a reference design, a full working design, that we bring to these companies to show them how to make it work. Foxconn Interconnect Technology, for example, got it working in two months. Our goal is really to be able to come to the entire hyperscale ecosystem and provide multiple partners from which they can purchase a working active electrical cable with our DSP. And by the way, the reason we can do that is because our DSP is so strong and so capable in this PAM4 technology that we're able to work across all these cables. We don't have to control and define the whole thing end to end. Ultimately, that's what these customers want.
Just like in the module ecosystem, we may have a customer that is almost 100% using Marvell's DSP, and we want to design with them, but they still have three or four optical module companies meeting all of their different needs with all of their different laser types.
If you look at earnings, and I want to focus in on PAM4 modulation and your optical DSP, the biggest change, I think, between where people's numbers were and where they went was the step higher in optics. You talked about cloud CapEx growing faster, and as a function of that, your optical business, which has tracked largely in line with CapEx, not with large hyperscaler spend, has gone higher as well. You were very careful about splitting these up, so maybe I'll use this form as well. What does your optical business track with? Why are you seeing that big step up to that 35%+ kind of range, and why does it track broadly with that number that you described at earnings?
Yeah. That optical interconnect technology that came in with the Inphi acquisition has been on fire. It obviously has very high market share, and the attach rate within the data center continues to grow. I think we've grown at a 50% CAGR for four or five years since we acquired that company. And the way we look at it going forward is that demand is off the charts. Even if you just look recently, and I'm talking total Marvell now, though optics is a big portion of it, being half of our data center business: our Q2 was our record bookings quarter ever. Q3 was our record bookings quarter ever. We're only five and a half weeks into Q4, and we've already booked more than all of Q2. The business is doing incredibly well.
And yes, we think it should be tracking above cloud CapEx, because it's attached to AI, and that's tracking above total cloud CapEx. Now, I will say that the vast majority of cloud CapEx today is going into something related to AI, to the question of "how do I stand up AI?" So it's not double cloud CapEx, but it's higher than cloud CapEx.
Yeah, and the reason we end up growing faster typically is the same formula we outlined: we are first to market, first to sample, first to get the customer up and running, and first to come out with the next optimized solution. If you can keep doing that, especially on this accelerating cadence, that's what gives you market leadership. So I'm not surprised that we would continue to grow faster than the underlying market.
And then as you shift to new speeds, with 1.6T coming in more volume in 2026 but 800G still the vast majority of connections next year: as you move to 1.6T, what does that share profile look like, and how can you ensure that you're in the same kind of position you were in prior generations?
Yeah. It sort of piggybacks on what Ashish just said: you've got to be first to market with technology that works and has the right power envelope to deliver these solutions. With 1.6T, Marvell was first to market with our five-nanometer solution, to demonstrate it and make it work with every accelerator. We were first to market with our three-nanometer, which has the right power for this particular growth vector that's coming. And ultimately, we have deep, long-standing partnerships with all these companies. This is not something where you can just come in with the DSP and win share. The qualification cycles tend to have happened a year ago. Even if you come to market later with a product that works, getting it qualified across multiple module ecosystems, multiple laser types, and the exact form factors required by all the hyperscalers takes a long time.
Ultimately, what they are focused on first and foremost today is time to market.
Yeah, and we said on our call, by the way, that we're seeing exceptionally strong demand for 1.6T. And to your point, 800 Gb is growing, and it's going to be very, very strong next year. I would suspect that at this point in time, we are probably the primary driver of 1.6T in the industry.
Switching gears to the ASIC business: optics drove the big uptick in 2026 numbers, and off a very large base, it's still growing quite nicely, whatever you think cloud CapEx is into 2027. But I think the most surprising thing on the ASIC side is that you talked about receiving purchase orders through the entirety of next year. Can you talk about whether that is with just the one large customer, or were there smaller deals as well? And then secondly, into 2027, you talked about that business doubling, which was a big step up from where people had thought before. That actually has you growing at a faster rate, generally speaking, than what cloud CapEx would grow in that year. We'll see. But maybe talk about what gives you the assurance about 2026, and then what's driving that doubling into 2027.
Sure. So first of all, in our calendar 2026 numbers, our custom business is still a relatively small number of sockets. We started this business just a few years ago. We announced a certain set of sockets last year, and we announced we have more this year, but the revenue is still being driven by a small handful of sockets. Ultimately, the reason we're showing the growth next year is because we know what the customers want to do; they've told us. And on the purchase orders: first of all, the lead time on these products is very long, so you're already getting orders for products we're going to be delivering eight, nine months from now. So within a few months, we'll probably have orders for the entire business for all of next year.
The comment on the one program, which is the next-generation XPU for our lead XPU customer, was really just to try to assuage any concerns about whether we were going to participate in the next generation. We have the purchase orders for the whole year to support that growth. For the following year, there are really a couple of big things. Number one, you take the base business that we have this year that's growing into next year and grow it again. Number two, you add in the next XPU customer, which is really more meaningful, and the first meaningful revenue is in 2027. And then number three, what we've said is that our XPU Attach business is starting off a really small base. We said it doubled from last year to this year, and it'll probably continue to double into the future off of that small base.
And so by the time you get out to calendar 2027, it starts becoming very significant. And that's growing, obviously, much faster than cloud CapEx, because it's a newer portion of the custom market. So those are really the three aspects that give you confidence.
Yeah. Just to put some numbers around it, so everyone's on the same page: we're basically implying that, for the full year next year, that business is probably just shy of $2 billion, in round numbers. But the exit run rate, to Chris' point, is probably going to be north of $2 billion annualized when we exit Q4 next year. Then, assuming, say, 20% cloud CapEx growth, that base alone grows to close to $2.5 billion. And the next billion, plus or minus, left to get to our target really comes from two big chunks. One of them is multiple XPU Attach sockets, and the balance is from, quite frankly, a very conservative view at this early point of the next big XPU.
So it's not like we're expecting this $1.8 billion of additional revenue from one single place. If you think about it, it's really from three big chunks of revenue, which is, in my view, still a fairly conservative outlook.
And you talked about $2 billion by fiscal year 2029 from that XPU Attach business. Could you help us zoom in on this bucket? At the AI Day and the Analyst Day, you looked at the different sockets that contributed to that. What are some of the bigger ones that would be helpful for us to track?
Yeah. So first of all, what we said at our custom AI event was that we thought it could get to $3 billion by calendar 2028, which is 20% of what we thought would be the $15 billion TAM. What we said on this last earnings call is that just two specific applications will get us to $2 billion; the rest comes from the other applications. When we talked about this in June, we laid out all the different sockets in there. At our recent earnings, we highlighted that we now have two specific applications with multiple design wins, for multiple generations, at multiple hyperscalers. So they're emerging as their own independent applications.
The first is SmartNICs and NICs, network interface controllers, and the second is CXL memory expanders and poolers. Both of these attach to standard infrastructure and, increasingly, to AI infrastructure. And what we said is that just those two alone will combine to be $2 billion by calendar 2028. It's really still just starting, so it's going to grow significantly beyond that based on those two applications. If you think about how we got here, these are both areas where Marvell has been investing in IP for a long time. SmartNICs draw on both our Arm compute complexes as well as high-speed network interface controller IP, SerDes, and things like that. And on CXL, we announced a standard product line for CXL memory expanders and near-memory accelerators a few years ago.
And ultimately, just like a lot of other things in the hyperscale market, that helped them see it and test it and say, "Okay, I want something custom." It's a similar idea, but customized: first for the standard infrastructure, and then, increasingly, for the accelerated infrastructure. So those are two of the big use cases. We also have storage accelerators, and we also have security products that make up the rest of it.
Yeah. Two things to add on what we've seen and why these markets are getting more exciting. First, on the CXL side: CXL was originally designed for basically disaggregating memory from a CPU. That very much is a use case, and that was our initial set of design wins. But especially as we look into a world of LLMs, which essentially store what's called the key-value cache, KV caching, so you don't have to regenerate the whole text stream every time you ask the next question, that requires a whole bunch of DRAM sitting very, very close to the XPU. And that's the point Chris was making.
It's not just attaching to CPUs. Once you start attaching to XPUs, as you know, in traditional AI infrastructure the ratio of XPUs to CPUs is anywhere from four to one to probably eight to one. So the use case is significantly larger than what we'd anticipated. Similarly, on the SmartNIC side, our original assumption on the original design wins was really around attaching only to the custom portion of the rack-level infrastructure. But as you know, all of these large hyperscalers have their own networking teams, so the attachment is not just to the custom portion; it's across their entire network. And some of these guys are driving well over a million AI servers per year. So you can see why we have really started to move this up, and it's become a much, much bigger use case.
I want to ask on the ASIC side, this has been something you've dealt with for over a year. First, there were concerns around your largest customer's ramp into 2026. And more recently, there's been concern about what I believe you called Customer C at the Analyst Day a long time ago ramping in 2027. Matt was on TV last night offering his opinions on what's going on with the situation, but maybe clear it up for us. Is this a function of things not being set yet? Is this a function of there just being a lot of noise that we shouldn't be paying attention to? Maybe set the record straight on where we are, because we've now introduced Customer C in 2027.
Yeah. I think Matt said it last night: there's a lot of noise, and when there's a lot of noise, you have to pay attention to the signal. Like we've said, we're on track with these programs. On those customers: I actually saw an article this morning, an article by Barron's sort of reporting on this supposed loss of a particular product, and they updated their story this morning saying they got a response from the customer saying that that's not true. That's just flatly not true. And the other customer, I mean, a senior leader at that customer spoke at our event a few months ago. That program is on track. It's doing well. We just said that and gave you a forecast for our customer revenue ramp into 2027 based on that fact.
So nothing has changed in the last seven days, other than that it seems like these days people can report whatever they want without doing any actual research. By the way, I'm sure anybody that wants to can report that I am also talking to all of our competitors' customers and trying to win business from them. Every single day. You can imagine: every single day, I'm trying to win business away from all of my competitors. If that's newsworthy, please report it.
Very helpful. I wanted to talk about NVLink Fusion. If you look at T4, there was announced compatibility with NVLink Fusion. Over time, there's a question of what's going on with scale-up back-end networks. How do you feel this changes the dynamic? Was that something you would have expected? And then there's also a view that with this type of deployment, you're going to need a chiplet or an ASIC sitting next to all the silicon that's going into an NVIDIA rack. That seems like something you could do. Is that something you guys would be interested in doing?
Sure, so let me take a step way back. The scale-up domain is brand new. Obviously, NVIDIA has NVLink and announced NVLink Fusion, and Marvell has announced that we're part of that capability when customers want to use it and want us to build custom products that fit into that NVLink Fusion environment. UALink is new. ESUN is new. All of these things are fairly new, and I think most customers right now are working through how they're going to plan their overall architecture. Ultimately, I agree with this idea of chiplets giving them optionality. When you're trying to build everything into a monolithic die, you have to make a decision right now.
Now, who would make a decision right now to say, "I'm going to bet my whole farm on one of these protocols," when the industry hasn't yet finalized, and there's no switch on the other end available yet to even make sure it was going to work? And what happens if it doesn't come out on time? So I think that level of optionality is absolutely there, and Marvell is also going to participate. Now, we think UALink is the open standard protocol that has been purposely designed for this application. It effectively takes what is generally today a PCIe-based scale-up network, used in most of these custom XPU programs, and attaches a much higher-speed, Ethernet-style interface to it. That's what UALink is. So it's purpose-built for a scale-up network in an open-standards world.
Having said that, there's a huge Ethernet ecosystem out there, so having that capability makes a lot of sense as well, and as a leading Ethernet switch vendor, we'll build that. We're building our UALink switch; we'll sample our 115T next year. So we're investing in that, and we're involved in NVLink Fusion. Really, this is about optionality for our customers. And from a Marvell perspective, I think this move towards chiplets actually helps a lot, because that's where we can help. The main XPU die of compute cores isn't using as much of our IP, but all this high-speed SerDes and all these interfaces, that's our bread and butter. That's what we do all day long.
Would that categorize itself as like an adjacent AI opportunity, not a piece of the accelerator itself? So you would put that in the bucket of not just an accelerator, but maybe something adjacent that you talked about.
I mean, if you remember, we announced that NVLink Fusion is basically part of our XPU portfolio. That was at the beginning of the year, actually, if I remember. So this is not something new for us. This is something which, as you can imagine, we work on with all our hyperscale customers well in advance of what you actually see coming out. It's already part of our portfolio; it's basically that flexibility we're able to provide.
You've seen accelerator guys, general purpose accelerator guys, go after more system-based architectures. You've seen IP providers move potentially into more accelerator design. You guys own the capability of doing most of the design for accelerators. You own the interconnect. Would you guys ever consider going after a system-based architecture or something of that nature and throwing your hat in the ring?
If you take a look at everything that we've built, including the recent announcement, we're absolutely complementing rack-scale, system-level architectures. Now, we believe our place in the market is to be the best partner for companies that are trying to build that. Whether it's an OEM looking to build a merchant-based, rack-scale platform architecture: we have a huge breadth of IP, and we can help you do that. If you're a hyperscaler looking to build your own custom rack-scale system architecture: we have an incredible amount of IP, and we're here to help you do that. That's really where we believe we fit. We have all the capabilities, from scale-out, scale-up, optics, electrical, switching, and XPU technology, and now the Photonic Fabric technology, everything you need to help you build the best hyperscale, rack-scale AI data center.
Got it. Pivoting off data center: you compressed your other businesses into a single line item, saying 10% growth next year and then GDP-ish growth in 2027. Is there anything we should be looking at that changes the dynamic? Obviously, I would imagine something like 6G would offer a little more life there. But is there a reason why you're saying that grows at the market, plus or minus?
Just as a baseline. I mean, you should imagine that the person that's running that business has a target much higher than that. We want to gain share in that business.
We expect to be able to gain share. Look, from a baseline perspective, that's not going to be a super exciting market; it's not going to grow 50% at some point. And yes, some of these markets, like 6G, will be a little lumpier. Those are markets that don't move in quarters; they tend to take years to develop. Ultimately, we expect that market will grow at least with GDP, and Marvell should grow at least with the market. In fact, I think we should be able to gain share in that market with the breadth of our capability, because the way we position it, all this investment we're doing for the data center trickles down into the rest of those markets.
Those markets can't necessarily afford the investment in the highest-speed SerDes in the world, but they certainly will take it a few years later and put it across enterprise networking, telecommunications networks, 5G networks, industrial networking, all those other kinds of communications networks.
Yeah. Within that, I think the enterprise portion probably does grow faster. We use GDP again as a very, very simple framework; we're talking a year and a half, two years out from today. But the reality is that enterprise spending is typically faster than GDP. Historically, IT spending at roughly two times GDP is more typical, so that would be my basic expectation. Carrier spending tends to track carrier CapEx, which historically is more on the GDP side. So again, look at that as a very, very simple framework for now, and I think we've generally done better than that.
With all these baseline assumptions, I'll tip my cap to you guys. You gave a ton of color on the earnings call around the next two years. Napkin math, it equates to something over $5 of earnings in fiscal year 2028, calendar year 2027. It's always risky to go out just a year, and you guys have expressed that. Going out two years, we really appreciate. But when you go out further, you introduce more risk over time. When you look at the broader market, you've heard a lot about memory issues and availability. You've heard about potential digestion of CapEx. You've heard about potentially lasers. There's a variety of different constraints that are limiting some deployments.
When you think about your calendar year 2027 outlook, what is the one area you stay up at night worrying about, from your heritage in the business, in terms of what limits you in getting those deployments out?
I think there are probably a couple of things. First of all, you should imagine that we comprehended all of these worries when we gave the numbers. We could have given a bigger number; we gave the numbers we did with the comprehension of potential supply constraints and all these issues. That's why we mentioned that some parts of these are fairly conservative, and that we think we can actually do a lot better than that. That includes both the new XPU that's ramping and everything else that we talked about. So if you step back and say, "What can go wrong?", there's obviously the scenario where everybody just hits the brakes and says, "Slow all the spending down for a little while." As I said earlier, my view is that if you take a long enough view, that's not an issue.
But it could certainly be an issue for a few quarters, and if those few quarters happen to fall into calendar 2027, then that would be a problem. But all the other stuff, if you think about the design cycles, they're on track. We've won all the designs. We've won the business. They're all in execution. We've been building toward this for years. Think about the optical momentum and the strength we've built in that business. Switching: we bought that product line when it was single-digit millions of dollars in revenue. We grew it to $150 million and then $300 million, and now we're saying $500 million next year. These things take a long time to get to this point, and now we have a number of these growth drivers that are sizable.
Our XPU Attach, we said, was really small, and it doubled into this year. It'll probably keep doubling going forward, so that starts to get toward multiple billions of dollars. Once you have these different growth engines, the flywheel is spinning. We have a lot of confidence in hitting those numbers.
Yeah. For most of them, that's why we use CapEx as kind of an index, so you all can follow along with us. In most of our businesses, we're talking about going in line with, or perhaps a little bit above, CapEx. I think the only place where we're going meaningfully above that number is in custom. And even in custom, it's not the entire business; it's really a couple of product cycles, which obviously we're very, very far along on. So really, all I'm saying is that I'm assuming a fairly modest 20% CapEx growth assumption for calendar 2027. That's not even close to this year, which is way higher, and for next year, I think the whole market thinks it's somewhere around 30%.
So I've actually taken that down to 20%. And in most businesses, I'm saying we grow at or roughly at that rate. It's only in the custom part, where we obviously have very good visibility into some very strong product cycles, that we're saying, "Hey, that business doubles; that grows way above CapEx." So that's my point: once you go through the chunks, it isn't that big a growth assumption, even though it sounds like it. It's really tied to CapEx primarily.
Very helpful. Things sound great. Thank you, Chris. Thank you, Ashish, for being here. Appreciate it.
Thank you.
Thanks for having me.