Okay, I'd like to get started. Good afternoon, everyone. Thank you so much for coming. My name is Toshiya Hari. I cover the semiconductor space at Goldman Sachs. Very honored, very excited to have Dr. Charlie Kawwas from Broadcom. Charlie is president of the Semiconductor Solutions Group at Broadcom. He's responsible for the company's 15 semiconductor divisions, as well as the Brocade Storage Networking division. He also leads global operations and intellectual property for the entire company. Prior to his current role, Charlie served as chief operating officer from December 2020, during which he drove global operations, sales, and corporate marketing. During his tenure at the company, Charlie has overseen the strategic growth of the hardware businesses from roughly $4 billion to $30+ billion, driving strategic technology investments and operational excellence. Charlie, thank you so much for coming.
Toshiya, thank you for having me. It's a pleasure.
Thank you. I definitely want to spend most of our time on longer-term, more strategic areas of the business, but you did report results last week, so I just wanted to touch on that. Specific to the business that you run, which is semiconductors, what would you say were the key highlights in the quarter? And were there any surprises relative to your internal expectations, plus or minus?
Okay. So good timing, 'cause we just announced last week, and as we announced last week, we said that the AI business continues to grow, continues to be robust. And, if you remember, a year ago at the same time, we said we think in 2024 we're gonna do about $7.5 billion. In the spring timeframe, around March, we had the AI Day, and we said: Look, we feel good about the $7.5 billion, now it's $10 billion. And then in June, we felt better about it, and we said, "It's $11 billion." And so one of the highlights of last week was we said we continue to feel really good about the AI business. The AI business will actually do $12 billion.
And so relative to the seven and a half, it's a very big step in terms of forecasting, especially when we have such a lead time and visibility with our customers. So we're very pleased with that. We're able to satisfy that, especially due to the strategic partners we have helping us with this. I would say the other thing that's positive is we are starting to see some level of recovery of the non-AI business, and as you know, we play in multiple end markets. We're quite diversified with the 15 businesses that we have, and we are starting to see the revenue sequentially grow. I would say, especially in enterprise, we're starting to see that recovery come sooner than what we thought, which is good news.
Great. You talked a little bit about the full year AI revenue outlook improving from or growing from $7.5 billion to now $12 billion.
Yes.
Between custom compute and your high-speed networking business-
Mm-hmm
... what are you seeing between those two? And I realize it's a little bit early, but into fiscal 2025, is there any sort of indication outlook that you can provide for us?
Okay. So you're right. In the AI business, we have sort of two categories we look at. We have the custom XPU, and we have the rest of the merchant networking portfolio. The custom XPUs, it's really driven by the three customers I talked about in March, and we talked about them last week in the earnings call. They are all in production and continue to grow, and that's about 2/3 of the business. The remaining 1/3 is the networking merchant silicon that we have, and that actually, for AI, we provide that solution to everybody, so anybody in the hyperscale space or anybody that is actually building systems that go into enterprise even. So we can cover the entire market for that, and that's about a third of that business. Now, you're right.
In 2024, we did extremely well, very pleased with how we were able to grow from 2023 to 2024 meaningfully, and if you look at the numbers, it almost tripled, which is something we're very pleased with. We believe, with the spend that we see, especially in XPU, since it's 2/3 of the business, the key customers we're working with are increasing their spend massively, and I'll share with you some of the numbers. In 2023, just the four top hyperscalers in the U.S., only in the U.S., were spending about $145 billion on CapEx. A big portion of that is for AI. Obviously, the rest of the portion is to run their infrastructure.
If you look at the last quarter where they announced their CapEx, each one of them has increased it massively, to the point that these top four alone went from $145 billion to $202 billion, and that big increase, almost $60 billion, is all for AI, and that's for 2024. So from 2023 to 2024, massive increase in AI budgets and CapEx, and that's committed. We believe that these same players will continue to drive strength into 2025. Exactly how much, it's still a little bit early to predict, but we think the business will continue to be strong in AI moving forward.
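As a quick sanity check of the CapEx arithmetic quoted here (a sketch using the round figures as stated in the conversation; the companies' actual reported numbers will differ slightly):

```python
# Year-over-year CapEx delta for the top-4 U.S. hyperscalers,
# using the round figures quoted in the discussion.
capex_2023_bn = 145  # stated aggregate 2023 CapEx, $B
capex_2024_bn = 202  # stated aggregate 2024 CapEx, $B

increase_bn = capex_2024_bn - capex_2023_bn      # 57, i.e. "almost $60 billion"
growth_pct = 100 * increase_bn / capex_2023_bn   # ~39% year over year

print(f"Increase: ${increase_bn}B (~{growth_pct:.0f}% YoY)")
```

The delta comes to $57 billion, consistent with the "almost $60 billion" figure quoted.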
Okay, understood. I appreciate that. Maybe taking a step back, you have technology leadership across pretty much all your franchises that you run in semiconductors, and the AI businesses that you spoke about are really no exception. Maybe spend a couple of minutes discussing the key pillars or the foundational IP that really drives the competitive strength.
Yeah. So that's something we're very, very lucky to have, actually. In a way, this was underpinned by Hock, by how he ran each of these businesses, so I'm very lucky and blessed to have had that opportunity to continue this. It's done by design, meaning that each of these businesses, we call them a sustainable franchise, and to qualify as a sustainable franchise, you have to first play in a market that's about ten years out. Many people, when they think of Broadcom, don't realize we actually have very long strategic planning when we think of having a business, or what we call a sustainable franchise, on our platform. So we have visibility and a plan of about ten years, and we select these markets, and if we don't think a market is sustainable, we do not invest in it.
So we believe AI is obviously sustainable-
Mm-hmm.
And we believe the rest of the businesses are sustainable. So that's one. Two, and the reason I'm bringing this up, is technology. And I'll specifically talk about the key differentiators and key IP that we have. But in each of these businesses, if we don't have the best IP, the best technology, which, thank God for natural intelligence, i.e. people, we will not be in that business. So in each of the fifteen businesses, we believe we have some of the smartest, if not the smartest, engineers, and we have the best IP in each of these domains. And three is the execution. You might have the best IP, but if you can't execute, and we know several companies that have great IP but are not executing, who cares?
So we're very disciplined in how we operate and run these businesses, such that each of these businesses executes seamlessly within the financial framework that we expect, and that's how we run them. So I wanna double-click on that technology piece that you asked about. For us, especially in the businesses that we're in, it's really fundamental that for that market, we have a view over a horizon of about ten years. We need to understand where do we differentiate, and how do we invest and make sure that we out-invest our peers or competitors? I would say in many of the businesses I'm running, we're probably out-investing the entire industry, more than anybody's investing in that space, and in many cases, take all of the peers that are trying to invest in this space, our investment is larger than their combined investment and larger than their individual revenues.
And so in there, I would start with the foundational items. We have a fairly large team that focuses on Central Engineering, which develops IP that all these divisions have access to. And that's where it starts, and that's actually quite differentiating, because we do not license IP from third parties for the differentiating pieces. If it's not differentiating, of course, we will work with partners, whether it's Synopsys, Cadence, Mentor, and so forth. But when it's differentiating, like SerDes, we absolutely make sure we build it ourselves. When it's differentiating, like memory, we go build it ourselves, and that's why you see us be very picky in picking the right strategic partners we invest in. And I'll give you an example just to clarify what that means.
So if you think of an XPU or a custom accelerator GPU, when some of our customers come to us and say, "Hey, I can actually use third-party IP or my own developed IP to develop this core compute engine," they come to us. We can do the same, with the same capabilities we have, in half the die size, at significantly lower power and much better performance, too, especially when it comes to SerDes die-to-die interconnect. So it starts with that engine, and there it's a fairly large team. We're not talking hundreds of people. We're talking north of 1,000 engineers just dedicated to that function. Then on top of that, we actually start taking the divisions that work with that Central Engineering team, where they take some of that IP, and then they develop their own differentiation.
And an example of that could be our networking division or specifically our Ethernet division. They take that IP, SerDes, the libraries that we have, combine it together, and then on top of that, they'll build the packet processors, they'll build the fabrics, they'll build the engines that are required to actually do spraying and congestion control. All that comes from that unit. And this is where we have significant differentiation on the networking side. Similarly, on the optics side, we are able to actually co-develop the best lasers for single mode, multimode, VCSELs as well, or EMLs. We're able to get to a hundred gig, two hundred gig that the industry is struggling to get to, and we're probably the only one providing these.
We're able to go create now next generation DSPs for 800 gig, 1.6, 3.2 Tbps with massive integration, creating a new category, something called IDSPs, where we can actually integrate many components into a single DSP, completely change the equation. Same thing in the server storage, same thing in the broadband space. So that's sort of the strategy that we have, which is really built on, first, I would say, the business approach of having a sustainable franchise, having a full platform that can customize the best IP needed for that best technology, and then we will have each of the products, developing products derived from this, that would differentiate massively relative to others.
Great. Really compelling. I appreciate that. Maybe a question on XPUs or custom compute. There's a big debate as it pertains to merchant versus custom.
Yeah.
Over the past two days, we've had leadership teams from Amazon, Google, Microsoft, we had AMD, we had Jensen this morning, so all sorts of views out there. I feel like at Broadcom, a couple of years ago, the view was merchant typically wins in the end.
Mm-hmm.
Something happened between then and now, or very recently. You're very aggressive in the custom compute space. So what changed, and do you expect custom to outgrow merchant over the medium to long run?
I would say when we speak generally about semiconductors, merchant is still the winner, generally speaking. But when we speak specifically about XPUs, things are super exciting and super different today, even relative to when we started this journey a decade ago. So we actually started investing in XPUs 10 years ago, so it's not something that Broadcom got into in the last two, three years, because it takes time to build these capabilities. What's different in the XPU space is really two things. One is the SAM is massive. We're not talking about hundreds of millions of dollars or billions of dollars. We're actually talking about tens of billions of dollars, possibly larger than that, and some of our peers and partners, and customers actually predict this to be in the hundreds of billions of dollars.
And so when you are one of these big, large hyperscalers, and you have a consumer platform where you have hundreds of millions, even more, billions of consumers sitting on it, it's all internal applications. Controlling your own destiny on something that's a massive spend is critical because if you... And I'm sure it came up in some of their talks, if you look at these four large players, the same four large players who you mentioned, the two big issues they have today, number one is access to GPUs, and number two, access to power. And you can say, well, it's really maybe access to power and access to GPUs. It depends on what cycle they're in.
But they wanna control their own destiny, and if they're gonna be spending billions per year, if not tens of billions each, on merchant silicon that's built for a fairly large industry, a lot more than these four, then it might have too many bells and whistles that they don't really need for their internal workloads. Maybe they need it for a cloud, enterprise cloud service, but for their internal workloads, it's not really needed. So number one, you're not controlling your own destiny because you're counting on somebody else doing this for you, and it depends if that person likes you or not to give you allocations. And then number two, you're paying a lot of money for it when you actually have an opportunity not to do that. So what we've done a couple of years ago is we saw that trend coming-
Mm
... because of the engagements we've had. At the time, we had two customers. So what we've decided to do at the time is we said, "Since the technology has changed," meaning I can't put all of the transistors for a GPU or XPU on a single die. We all hit 800 square millimeters. You just can't, and so you had to go to a chiplet architecture. When you go to a chiplet architecture, we saw that as a big inflection point that we jumped on, and we said, "We are going to change how XPUs are built." And we said, "We're gonna build a platform for XPUs." So, Toshiya, if you wanna build an XPU, typically it's a three-year cycle. Well, guess what? With Broadcom, we can do it in less than a year.
People say, "Well, wait, how do you do that?" We do that because we have built all the necessary IP that's needed, including full turnkey chiplet designs, where the only thing that's left is you give us the spec of the house you wanna build, or the XPU you wanna build, and we work with you on your compute engines that are perfect, well-designed for your workload. When you do that, these people that are thinking about this industry for the next half a decade to a decade, remember the sustainable franchise thing?
Mm.
It makes absolute sense for them to go to the XPU. So I believe that if you look and fast-forward at least five years, internal workloads will be all done on custom XPUs. Now, whether Broadcom builds them all or not, we don't know. We have to go earn and win every one of them, and we will fight hard to go win the ones we believe are the winners of the future. But what we're hearing from them and what we're seeing is that's the path they're gonna go towards. On top of that, then you have an open platform or an internal platform that's proprietary of the right software stack that's needed, including the models that are needed.
What I'm happy to share with you and see is all of them are looking at how do I get to an open platform, and the XPU platform that we have is open. How do we get it to scale? And we'll talk, I'm sure, more about how we scale that, and we are already scaling this. And then how do we make sure we squeeze the power out of this? Because remember, the number one or number two constraint is power.
Right.
When you customize, you get the best power, you get the ability to scale, and it's open. You're not stuck.
Right. Right, and I think you've kind of thrown out this data point before, but the power savings that some of your big customers get by going custom relative to merchant is very significant, right?
Absolutely.
Right.
Absolutely. Remember the days when we... Not even us, if you go back two decades ago, but even recently, if you look at any of the hyperscalers, why do you think they don't buy directly, for example, from the OEMs? Because they have the ability to customize and build exactly what they need at the right power, at the right cost. And the GPUs are so expensive that now the XPUs become more affordable, and they're also open, scalable, and power efficient.
Okay, got it. Thank you. The other debate, which is more on the networking side, that we hear from investors is Ethernet versus InfiniBand.
Mm-hmm. Yeah.
Ethernet is the de facto standard. I think you guys have said this, and many of your competitors, partners have said this. But in terms of... I guess I wanted to ask you, on InfiniBand, I guess if I'm a hyperscaler, under what conditions would I be swayed toward using InfiniBand? Again, Ethernet is the standard.
Yeah.
But what kind of circumstances, if you will, would I be swayed toward InfiniBand? And NVIDIA is obviously quite loud about Spectrum-X. How does that product, how does that platform compare with what Broadcom offers?
Yeah. If you don't mind, I'm gonna first-
Yeah, please.
I think it's important to talk first about sort of the big view of where the market is at. And the way we view the market is sort of, think the big consumer hyperscalers, which, by the way, are the majority of the market today, and then the enterprise. If you look at the hyperscale side and the big consumer AI platforms, it's very simple: they want an open platform. They want a power-efficient platform. And they have the ability, they have tens of thousands of engineers. They have the ability to collaborate, whether it's with Broadcom or somebody else. They have the ability to go build that right platform, given either enough lead time or a partner like us, where we can actually turn around in less than a year. And this is actually the thing that they love about what we can do.
By that, I don't mean just the XPU. I also mean Ethernet, and I'll come to that.
Right.
And then the enterprise, and at this point in the enterprise, it's an emerging market. It's yet to be proven as a market. The ROI is still a question mark, TBD. And I think in that space, having a turnkey solution makes a lot of sense. It's not really that important for it to be scalable, because how many nodes is an enterprise going to deploy? It's not going to be a million nodes. It's not going to be 100,000. I don't think it'll even be 50,000.
So in that situation, what's inside the box, as long as the whole system, or that small cluster, if I may call it a small cluster, is still going to be hundreds, maybe thousands of nodes, it doesn't really matter what happens inside, as long as it actually takes my data, it runs my workload, and I get the outcome that I need. So in that space, there's less focus on open and scale. Power is always important, but there's even less focus on power. When you talk about the first one, where your question is very relevant, I think it's a huge deal, and you'll see me repeat this: open, scale, and power, because ultimately, these hyperscalers, they absolutely care about this. So now let's focus on Ethernet.
Look, if you go back to the 1990s, we were fighting, and I was an engineer at the time, working on Ethernet, and the big thing at the time was 10 Mbps. It was a huge deal. Token Ring was going from 4 Mbps to 16. FDDI was sort of the promise of the next world, going to 100 Mbps. All of them died. Only Ethernet survived. By the way, it was not the best and most robust protocol at the time. However, it was actually very simple, it was open, and actually, it was very inexpensive. Then with the cloud, it just took off. Now you're talking to the folks who have built their entire infrastructure on Ethernet. Honestly, it has nothing to do with InfiniBand or anything else.
When you build these clusters, these clusters have to talk to the outside world, whether it's accessing data, or sending data out, or storage, you need to communicate with an infrastructure that's built on Ethernet. So you say, "Okay, so the front-end networks are all built on Ethernet." So now you say, "Okay, well, inside this structure, this cluster that they are building, how do I connect it?" And that's called the back end. And in the back end today, yes, you have choices. You can do Ethernet or InfiniBand. These are the choices you have. But if you're a hyperscaler, why would you wanna use anything else but a platform that you know how to build? You actually manufacture these boxes yourself, either with CMs or with ODMs, and you know exactly how much each chip and each optics cost.
And when it comes to that, their preference, without any doubt, and it's not just the top four in the U.S., I would tell you it's the top 10 in the world, they actually choose Ethernet. So by definition, this is beneficial to Broadcom because we have demonstrated we're the leader in the front-end networks. And ahead of time, when people told us, "You don't need a 51 Tbps chip. Why are you building it? Front end, we're good. 12.8 is fantastic. 25 T, I don't know what to do with this." Now, luckily, and I brought a few things to show you guys, because I think it's important. So, and I'm gonna pass this to you, but you have to promise to give it back to me, okay?
So this is the only 51 Tbps chip that went into production last year, and I think even if you're sitting in the back, you see this big, single black die that's sitting in here. This is not only the first one that went to production, this is the only one that has a single die, that single die. And that actually is a big deal. Every other 51 Tbps chip that came afterwards, from all of our peers, could not. Remember the second thing in sustainable franchises? This is where it shows up. They could not get it onto a single die. They could not. You can ask them why, but it doesn't matter. We're the only one who is actually shipping this in a single die, 51 Tbps. Now, why is this a big deal?
It's a big deal because if you have a single die, it's much lower power. If you have a multi-die solution, you need many more chips, which cost more power, but then you actually have to go to an expensive packaging technology called CoWoS, and that costs a lot of money. We don't have to do this here. So the power of this chip, which is a big deal, is in the range of about 700 W. Everybody else is probably over 1,000 W. That's more than 40% more power than us. That's number one. Number two, the SerDes that we have in here is second to none. The SerDes that's in there can drive 5 meters. Nobody else can drive 5 meters.
They would actually need retimers, which means DSPs. Even though we have that product and we'd love to sell it, it costs more power, and it's very expensive. With the Broadcom solution, you don't need that. You eliminate tons of power, tons of cost that's unnecessary, and many, many more advantages. So when you are a hyperscaler and you say, "Well, hold on a second, there is this company that built this chip," and I'll pass you this. Keep it with you for a second.
I'm shaking.
Why would I go to anything else? I already know how to build this box. I already know the ODMs that can scale it and build it in the thousands, tens of thousands. It's the lowest power. It's much more cost-effective, and end-to-end solution, total TCO, it's the best in the world. So that's an example.
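The power comparison in that example works out as follows (a quick sketch using the approximate figures quoted: about 700 W for the single-die switch, over 1,000 W for a multi-die competitor):

```python
# Relative power of a multi-die 51.2 Tbps switch versus the single-die part,
# using the approximate figures quoted in the discussion.
single_die_w = 700    # "in the range of about 700 W"
multi_die_w = 1000    # "probably over 1,000 W"

extra_pct = 100 * (multi_die_w - single_die_w) / single_die_w
print(f"Multi-die burns ~{extra_pct:.0f}% more power")
```

At those round numbers the multi-die part burns roughly 43% more, consistent with the "more than 40% more power" claim.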
Thank you.
Thank you. That's an example of why having the best engineers, having the best IP, and focusing on an area, but forecasting where the industry should be, matters. And as you could imagine, we're working on the next one now, and we're already planning the one after that. And so this is at least one to two years ahead of anybody else in the industry, and everybody else will struggle to get to this level. So when it comes to Ethernet, I think the hyperscalers in that segment of the market, I would say by the first half of 2025, are all gonna be on Ethernet. All. Outside that, I think there is tons of room for Ethernet or InfiniBand or whatever it might be.
Okay, thank you. That's, that's very insightful. I appreciate that. We get a bunch of questions on co-packaged optics as well. You announced the industry's first 51.2 Tbps
Based on this.
Based on that-
Yeah
... CPO Ethernet switch platform for AI. As the industry leader, I guess a multi-part question. One, what are the key technological challenges as it pertains to CPO? When do you expect CPO to be ready for high volume? And what are the pros and cons-
Okay
... from your perspective?
I'm glad I showed you this because I want you to remember this big die in here, and that's why I brought this. That is the first CPO in the world that is working. You see this big black die in the middle? That is the same one that's in here, and that's the advantage of what we have at Broadcom. And I'll answer your questions about the CPO, but to get this to work, it's extremely difficult. Extremely difficult. Many people thought, "Hey, we don't need CPO. Why are you guys investing in CPO? And it's expensive to invest in CPO." Actually, the money that we're investing in CPO per year is more than the entire industry is investing in CPO. This is our second-generation platform, okay? The first generation, we did it with 25 Tbps, Tomahawk 4. We sampled it.
We did POCs with it. It was more of a learning experience for us to ensure that we can drive such a platform to full volume and production. So we decided with Tomahawk 5 to actually take the CPO technology and put it in here. And so let me tell you a little bit about it and then answer the few questions you asked. One is, when I flip it to the other side, you see eight tiles in here. One, two, three, four, five, six, seven, eight. Each of these tiles is built with our silicon photonics technology, and it's actually a very difficult technology to build because now you're bringing two types of technologies together.
One is the electrical IC that connects directly to the SerDes of our Tomahawk 5 die, and then underneath that is our photonic IC, or PIC. And you need to bring these two together, and you need to actually make sure that mechanically, when the fiber is plugged in here, you actually have a highly reliable system, close to the reliability of silicon, which is orders of magnitude more reliable than transceivers, which are optical, separate pluggable parts. And that's very difficult to do. We know how to do that, and that's why in this second generation, we are actually able to replace a hundred and twenty-eight pluggable optics. Most of you have an iPhone. All of us have a smartphone. Imagine a hundred and twenty-eight of these iPhones. You line them up together, 128.
You could imagine it'll easily fill up the table you're sitting at. That's what this platform replaces. Each of these transceivers, the size of an iPhone, today, for 800 gig, burns about 15 W, 16 W each. When you come in here, that 15 W, 16 W becomes about 5 W. That's about a 70% power reduction, times 128. It's massive. Remember, this is the lowest power switch in the world. When you combine this, the power savings alone are mind-blowing, and that's a huge, huge plus that we bring with this. Number two, this is a 100 gig SerDes on the Tomahawk 5. The next platform is gonna be 200. Good luck with 200 gig. As the speed goes up, the distance that you can go is shorter. Good luck doing this with pluggables.
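Those savings can be tallied from the per-module figures quoted (roughly 15 to 16 W per 800G pluggable versus about 5 W with CPO, across the 128 modules replaced; the numbers are the approximate ones stated in the conversation):

```python
# Aggregate optics power saved by replacing 128 pluggable 800G transceivers
# with co-packaged optics, using the approximate per-module figures quoted.
n_modules = 128
pluggable_w = 15.5   # "about 15 W, 16 W each" -- midpoint of the stated range
cpo_w = 5.0          # "about 5 W" per CPO equivalent

saving_pct = 100 * (pluggable_w - cpo_w) / pluggable_w   # ~68%, i.e. "about 70%"
total_saved_w = n_modules * (pluggable_w - cpo_w)        # aggregate watts saved

print(f"~{saving_pct:.0f}% per module, ~{total_saved_w:.0f} W total")
```

At the midpoint figure that is roughly a 68% per-module reduction and on the order of 1.3 kW saved across the 128 modules of a single switch, which is why the aggregate savings are described as massive.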
Let's assume we figured that out, and I think at Broadcom, with the various divisions, we'll figure that out. The next step is gonna be four hundred gigabits per second. It's done. You need CPO. You can run, but you can't hide. So with CPO, you cannot hide, and that's why, first, it made sense to put it in the switch. So to ensure that it can get to production, we actually are doing POCs with three hyperscalers as we speak. We're shipping right now tens of systems in production. I would say by calendar Q4 and early 2025, we'll be shipping hundreds of these systems. I think in the first half of next year, they'll actually put them in live traffic in their data centers, and sometime in 2025, and definitely 2026, is when we actually will see production on this.
In the meantime, we will have the follow-on to this that will go to the next platform. So imagine, in here, each of these tiles that I showed you is 6.4 Tbps. The next one will be 12.8, hence you can scale to 100 T. But what we're starting to realize with the deep engagements on the XPUs is, well, this also has an advantage in lowering the power on the XPU, so now we actually have a new opportunity that we didn't think would be there that soon, and that's driven by all that gen AI going to AGI. I think CPO is going to appear first on switches, which is happening now with Broadcom. We're the only one in the world enabling this, and I think it will migrate to other platforms, including the XPUs, in the future.
That's great. I could go on for another thirty minutes easily, but unfortunately, we're out of time. Charlie, thank you so much for supporting the conference. I know you're a busy man, and congrats on all the innovation. Thank you.
Thank you very much.
Thank you all for coming.
Thank you.