Okay. Good afternoon, everybody. Welcome to the Goldman Sachs Communication and Technology Conference. I'm Jim Schneider, the Senior Analyst here at Goldman Sachs. It's my distinct pleasure to welcome Broadcom CEO Hock Tan to the stage today. Welcome, Hock.
Thank you.
Thanks for being here. I think it's fair to say that your business portfolio has evolved radically over the past several years. The company is much bigger today, partly because the software business is much bigger due to VMware, but also because the semiconductor business is so much bigger now due to AI. If you step back and think about the top two or three priorities for Broadcom over the next few years, what are they? And if we sit down here again in five years, what do you think will be the one thing that surprises investors the most?
I'm not sure what will surprise you guys the most, but I'll answer the first one. Right now, front and center for us, as we see it, is this huge opportunity in AI, in AI compute, to be precise. We are seeing very, very interesting, very, very strong demand for AI compute. We're driving the business model, driving the resources of this company, to address these specific needs from, to be honest, a very narrow group of customers.
If you think philosophically about AI for a second, maybe help us understand your vision personally for where AI is going as a technology. Why does the world need AI? How useful do you think it's going to be? Is it going to live up to the amount of investment that the investment community is putting into it today?
I hate this question, you know that? Because I'm not like that. I'm not a cheerleader. I try not to be, anyway. It's hard not to be here. No, I think AI is here to stay, at least for a while, because we've discovered what AI can do. It's this huge, sophisticated algorithm that enables us, based on history, to predict so impressively what will happen next. That's how I look at AI, even generative AI. Now, many of you are probably saying it's more than that, but it is pretty good. Because of that, it's a great tool to basically make us feel more intelligent, and probably it does make us more intelligent. Where we are, coming back to your first question on Broadcom, is that we're a key part of all this, but we're not writing the algorithms to create the LLMs. We make no pretense that we do. What we're doing is enabling it, because you can write and create the best algorithms in the world, and they exist today and they keep improving, but if you don't have the ability to manifest them as compute, all of it is just nice equations and algorithms on a whiteboard. It's not practical. The manifestation of it is compute, which is where our know-how in semiconductors, especially in being able to create a very effective compute vehicle, a compute engine, makes the difference. That underpins where we are. What we're seeing for the next three years is accelerating demand for that compute capacity from that narrow group of customers we focus our attention on. Jim, it's those guys trying to create those LLMs out there. In many ways, LLM is another term for these players trying to create superintelligence and, by doing so over time, create the new platforms of tomorrow.
Yeah. Are there any use cases for AI that excite you most personally in terms of utility, monetization, or otherwise?
It's great for it to be able to create a nice poem for my wife. I'm sure it does for you too. That's a use case that excites me.
Excellent. Maybe you want to take it back to Broadcom's business for a second, but maybe if there's a...
See, I told you I hate this kind of question.
Let's talk about your business. You previously guided your fiscal 2026 AI revenue to grow in the 60% range. On the earnings call last week, you announced that you expect this growth to accelerate materially now that you've converted another customer who's focused on inference. Maybe help us understand the broader trend you're seeing among customers here. Is the demand intensity higher for inference or for training right now?
I think it's hard to take a snapshot at any point and say which it is. Among the seven quote-unquote "players" we have, some we consider customers and others we consider prospects. One thing I see in common is that they are the ones investing to create the best LLMs out there. It's a constant roadmap. It's a constant journey for them. That improvement in intelligence comes only from creating better LLMs, which comes from research, from training. Training creates the intelligence. Inference monetizes the intelligence. You tell me which happens more. It's a function, I think, of the point in time when one of those players we engage with decides, hey, it's time to create scale, to create monetization. Then they go into inference. Otherwise, if you're just going for superintelligence, you just focus on research, and that's training. I think both happen. Both are at a very high level of investment and usage at this point.
Give us a sense for 2026, how big is the acceleration beyond the 60% you've already called for? More importantly, do you think that growth rate is even faster in 2027 for you?
I'd rather not talk about 2026 or 2027, because I don't give that guidance. Just to give you a sense, yes, we did indicate a 60% growth rate for 2026, which was also the growth rate of 2025. It looks like we're accelerating beyond that in 2026. We put out a milestone. You know, we gave you little clues all over the place just to confuse you. But clearly, we gave you a milestone, about a year ago on an earnings call, that we see a serviceable addressable market, an available market, not a forecast, of $60 billion to $90 billion. Remember that?
Yeah.
Right. You ask me this question, so let me throw you something that might make you think about it. As you know, I signed up for an extension of my contract to 2030. With this extension, of course, I get some incentives. One of those incentives, so you guys know, is tied to AI revenue for Broadcom. Very simple. To achieve the max on my incentive for 2030, five years from now, I need to hit, just to give you a sense, AI revenue exceeding $120 billion. Today, in 2025, our AI revenue is $20 billion. That gives you a sense of our belief in how strong the demand for compute is in this race towards generative AI, towards superintelligence.
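For reference, a quick back-of-envelope on what that target implies. This is a sketch only, assuming straight-line compounding between the $20 billion and $120 billion figures cited above:

    # Implied compound annual growth rate if AI revenue goes from
    # $20B (2025) to $120B (2030), assuming smooth compounding.
    start, target, years = 20e9, 120e9, 5
    cagr = (target / start) ** (1 / years) - 1
    print(f"Implied growth: {cagr:.1%} per year")  # roughly 43%

In other words, hitting the maximum incentive requires sustaining roughly 43% compound annual growth in AI revenue for five straight years.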
That's a good kicker, Hock.
That's the question, whether we get there.
Very good. It's great to hear that detail. Thank you. Just one last question on this AI revenue piece. You've talked previously about your three existing customers. You've added a fourth, and you've talked about three additional prospects in the pipeline. Maybe give us a sense: do you see a universe of potential customers outside those four customers plus three prospects, or are those seven pretty much the entire potential opportunity as you see it?
For Broadcom, we're driving a business model around just those guys. These are the guys who are creating those LLMs. Who knows how many of them will survive, whether they will go the whole race. What we see now is about seven guys doing it. Beyond that, we do not see the others as our market at all.
Okay. Fair enough. Maybe let's go a little deeper on the customer side of things. I think in the past you've said that you view companies with their own LLM capabilities as the most valid, important customer prospects. Separately, I think you've also said that you want to pursue the largest volume opportunities available in the market. Maybe give us investors a sense of how you diligence these different customer prospects. Can you discriminate between customers that have promise and ones that are a little less interesting to you? How do you make that determination?
I think we've kind of made it simple these days. Those customers who are really doing LLMs, those are our customers. That's it. We've stopped looking at whether they're big or small.
Okay, that makes it easy. Relative to the competitive landscape, I think the conventional wisdom out there in the investment community is that custom ASICs are good for internal workloads, while merchant GPUs are more broad-based and good for external workloads as well as internal ones. We've seen more than a few headlines about the advances of one of your largest customers on the custom ASIC side and what's happening there. I'm curious, do you see that conventional wisdom shifting or changing at all? Do you see a point in time where enterprises will have more software capabilities and will actually be able to use ASICs the way some of the major CSPs do?
I don't think we look at it that way. To answer your question, I don't think enterprises, at least in the foreseeable future, would ever want to develop the core technology to enable AI computing themselves. Rather, the way we have consistently looked at this market, in the generative AI we see today, is as two broad, simple segments. There's one segment, as I mentioned, of these few players. Some of them are hyperscalers, some of them are not; they're just, for want of a better word, super startups in AI. I know some of you call them labs, and perhaps they are. These are the guys really making the investment in LLMs, which entails a huge amount of R&D, particularly with respect to training. That's not just creating the algorithms that make the LLMs work, but also creating, and spending on, the ability to train on huge amounts of data to improve on their journey towards superintelligence, for want of a better word. That's one category. Then we have a second category of customers who are almost like the rest of us. I call it enterprise. They probably include sovereigns. They include the public cloud guys, who at the end of the day trace back to enterprises out there. There are thousands and thousands of companies looking at AI as a very interesting tool to improve the way they run their business, to achieve mostly productivity gains. They're trying to find use cases for it and dabbling, most of them with POCs right now, in AI tools. Whether they get that capability by renting GPUs or by buying some and running them on-prem, that's this category. That category of the market is largely inference, truly inference. They talk about training, but it's limited post-training, perhaps test-time scaling; it's really all inference. It's looking for use cases and trying to get a return on investment, mostly through productivity. These guys are going to stay largely on merchant silicon, merchant GPUs, because they're not going to create XPUs, write software stacks to make them work, and figure all that out. Who cares? All they want is to get their models running and a return on investment. These guys, I do not see, at least in the near term, ever going beyond merchant GPUs. There are perhaps 10,000 of them today, each spending $10 million a year, maybe more. That's a big size, a $100 billion market. In contrast, we see these few guys creating LLMs, very few, who are each able to spend some $30 billion a year on AI compute today. You have $100 billion here, and maybe $100 billion to $200 billion there. We focus on one market, and it's very distinct from the other.
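As a quick sanity check on the sizing, here is the arithmetic behind the round numbers cited above, illustrative only:

    # Enterprise segment as described: ~10,000 companies, each spending
    # ~$10M/year on AI compute (rented GPUs or on-prem).
    companies, spend_each = 10_000, 10e6
    print(f"Enterprise market: ${companies * spend_each / 1e9:.0f}B/year")  # $100B

A handful of LLM builders at roughly $30 billion a year each lands in the same $100 billion to $200 billion range, which is why the two segments are comparable in size but very different in character.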
Very fair. On serving that custom compute segment, can you maybe talk about how you add value there and the defensibility of your position? In other words, talk about some of the core IP you add, or how you augment your customers' designs with the value you provide through the manufacturing process. To the point: when investors ask how defensible your position is, and whether you're at risk of your biggest customers going straight to a foundry with their solution, how do you answer that?
It's the same as where we've been in the semiconductor business over the years. It's engineering. It's deep engineering, advanced engineering IP, especially in the semiconductor space, and some of it in the software space. It's about having a lot more intellectual property for creating your chips. At the end of the day, it goes even beyond that. Thanks for that question. People think about AI compute as the GPU or the XPU, which is what they call the actual multiplier engine used to do matrix multiplication and regression. It's much more than that, because each single GPU or XPU, no matter how powerful, is limited, and we all make them. We can make them very powerful by cramming more and more multipliers, what I call multipliers, into this 800-square-millimeter piece of silicon. You do that by going to deeper and deeper sub-micron process technology so you can cram in more. You go one step further by doing multiple dies instead of one die in one package or chip. We have reached a point where the latest product we're doing actually has three dies in one package. That's how much we have crammed in. You can make this GPU or XPU, in our case, super, super good at doing this multiplication and matrix regression. That's still not enough. You cannot do this generative AI training over a huge database and create these models with billions and billions of parameters with one single GPU or XPU, no matter how hard you try. You need to create a cluster of them. The more complex your LLM becomes, the larger your clusters have to be. Then you face a totally different problem. How do you run all this matrix multiplication across, say, 100,000 of these GPUs simultaneously? You go to a million, and that's an even bigger headache. That is, in fact, what I consider the biggest technology challenge in computing for generative AI as you push your models higher. That brings in the other aspect: networking. How do you connect them? How do you have the bandwidth to connect them? How do you schedule the workloads and orchestrate them so that they can all run in parallel? This is as much software as it is networking hardware. I think that's one of the biggest challenges to making massive progress on creating superintelligence: how do you take these huge workloads, run them on these huge models, train a huge amount of data on a million GPUs or XPUs simultaneously, and converge on a solution? It is slowly coming to the surface that networking might be the biggest problem, because in generative AI, I've said it before and I'll say it again, the network becomes the computer, not any single GPU or XPU. We all tend to focus on the chip, but it's way beyond that. Developing an XPU or GPU, when you're trying to come up with one, is hard, and a lot of intellectual property is required, but truly, it's easier than doing the networking. That's where the challenge starts, and that's how we differentiate ourselves. We come up with a better mousetrap, better technology, and just outrun the competition. No different from what we've done in all the other areas of semiconductors where we have been very successful.
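To make the "network becomes the computer" point concrete, here is a minimal back-of-envelope sketch, my own illustration rather than anything Broadcom has published, of the gradient-synchronization cost in data-parallel training with a ring all-reduce. The model size, link speed, and per-hop latency are all hypothetical round numbers:

    # Ring all-reduce: each of N workers moves ~2x the gradient payload
    # per sync; the latency term grows linearly with cluster size, which
    # is one reason orchestration gets harder as clusters grow.
    def allreduce_seconds(params, bytes_per_param, n_gpus,
                          link_gbps, hop_latency_s=5e-6):
        payload_bits = params * bytes_per_param * 8
        bandwidth_term = 2 * (n_gpus - 1) / n_gpus * payload_bits / (link_gbps * 1e9)
        latency_term = 2 * (n_gpus - 1) * hop_latency_s
        return bandwidth_term + latency_term

    # Hypothetical 1-trillion-parameter model, fp16 gradients, 800 Gb/s links.
    for n in (1_000, 100_000, 1_000_000):
        print(f"{n:>9} GPUs: ~{allreduce_seconds(1e12, 2, n, 800):.0f} s per sync")

Every one of those seconds is pure communication overhead stacked on top of the compute, which is why the interconnect, not the individual accelerator, ends up setting the pace at cluster scale.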
Yeah. That's a perfect segue into the next question I was going to ask, about networking, because that's clearly a mainstay franchise for the company historically. I think you've previously said the scale-up opportunity is about 5x to 10x larger in content relative to scale-out. How do you think about the adoption curve for Ethernet in scale-up, and maybe talk about your overall product portfolio and how it gets there?
Ethernet is going to happen inevitably, because you don't want to have to create new protocols, closed protocols, to replace something that is frankly available, tested, and proven over the last 20, 30 years, and that most of the operators and designers at the hyperscalers, who are the leaders in development, are very familiar with. It comes back to the bigger picture of what you're getting at, which is disaggregation. With XPUs, we at Broadcom are in a way disaggregating the XPU or GPU from the networking side. That's a choice. You can choose whatever GPU or XPU you prefer, and you can choose whatever networking switching you prefer. That disaggregation eventually lets you optimize by picking the best of breed of each, your ability to do generative AI computing, and leads to the best performance in reaching the LLM outcomes you want to drive towards. It's less about a platform. It's really about disaggregating, which is exactly the direction the hyperscalers drove towards when they created public cloud computing. They disaggregated hardware from software, CPUs from networking, everything. I think you will see the same in generative AI; that's the path it's headed down now with those platform LLM guys I talked about. When it comes to enterprise, however, I think it's still very OEM-based, simply because enterprises have neither the interest nor the capability to disaggregate and optimize. The footprints are not big enough. Those guys doing the LLMs, those few guys I talked about, for us we're dealing with seven, they will disaggregate. They will optimize.
If you think about your outlook for the networking business, however much AI grows, how do you see the networking business tracking that envelope: faster, slower, or the same?
I tend to think a lot of that will be tied back to that narrow group of customers we talked about, the big hyperscalers and the super startups. Growth will come, if for no other reason, from the move to Ethernet, for one thing. More than that, as you go to larger and larger clusters, scaling up within the rack becomes super important. When you do that level of scale-up within the rack for generative AI computing, and this is where the matrix multiplication truly happens in your model, you're talking about massive bandwidth. And this is not just GPU-to-GPU, XPU-to-XPU connection; you're talking about memory sharing across GPUs, and you know how much memory these GPUs have. You really want bandwidth that goes to 100 Tbps, not stuck at the 28 we're seeing today in copper racks. You want to go optical. You want to be able to connect not just 72 GPUs to each other; you want to connect 512, even 1,024 XPUs to each other. That's the scale and the bandwidth you want to drive to, which will give you much faster convergence to the training outcome you're looking for. That's the roadmap we're driving towards. We're not talking years away. We're talking within the next year or two, 2026, 2027, that will start to happen. The product is out there. The technology is available. It's a question of deployment at this point.
Very good. One question I get a lot is on copackaged optics and its role in the industry going forward. How ready is that technology, in your opinion, for mainstream deployment or large customer deployment? How do you think the adoption curve is going to go for CPO?
I'd love to hype you guys up on things. The latest is copackaged optics. It's just optics, okay? I don't know why you need to call it copackaged optics, except that it's silicon photonics. Now, the world is still getting ready for silicon photonics. A big part of the reason is what I said earlier. You do scale-up at a larger and larger scale because the clusters get bigger and bigger and you're an LLM player. If you're not an LLM player, if you're a little enterprise just running one rack of maybe no more than 36 GPUs, you don't need any of that. Run direct-attach copper and you're done. But these big LLM guys trying to run 100,000, 200,000, 500,000 XPUs or GPUs want, as I said earlier, scale-up in a rack. You want to make it optical, because by going optical you can connect 512 compute nodes, XPUs or GPUs, directly to each other, much better than anything else you can do. Potentially you go one step further on bandwidth at your top-of-rack switch: you can go to 1,024. That's optics, moving away from the copper we're seeing today. Copper will still be around in 2026, mostly. By 2027, it's all going to be optical. People talk about what optical solution you need. One school of thought is silicon photonics, and one manifestation of that is called copackaged optics, where you integrate the active components of a fiber-optic interconnect into the silicon, whether it's the GPU silicon or the switch silicon; you do both. That's copackaged optics. That's a dream, and it's really a good dream, because we know you reduce power by 40%. All that is great. We at Broadcom already had the technology done three years ago. The issue is this. Optical interconnects, because they're quite mechanical, typically have anywhere from a 5% to 8% failure rate. With the pluggable optics you guys hear about, when one fails, you just unplug it and put in a new one. It's pluggable. When you do copackaged optics, silicon photonics, you integrate that expensive $40,000 GPU into the optics. If that fails at 5% to 8%, you've got a problem. The question, which we have been studying, is this: when you create silicon photonics, does the integrated solution have the failure characteristics of silicon, which is barely 0.1%, or of optics, which is 5% to 8%? That data we're still collecting. We have been collecting it for the last two, three years. I'd like to believe it will have the characteristics of silicon rather than optics, but only as we test through it will we figure it out. Sorry, long answer to your short question.
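One way to see why that failure-rate question decides everything, an illustrative calculation using only the figures quoted above, not Broadcom data: with copackaged optics, an optics failure can strand the $40,000 accelerator it is attached to.

    # Expected value at risk per packaged part = failure rate x part cost,
    # assuming a failed copackaged link strands the attached accelerator.
    gpu_cost = 40_000
    print(f"Optics-like (5-8% failures):  ${0.05 * gpu_cost:,.0f}-${0.08 * gpu_cost:,.0f} at risk per part")
    print(f"Silicon-like (0.1% failures): ${0.001 * gpu_cost:,.0f} at risk per part")

That gap of roughly 50x to 80x per part is why the reliability data he describes collecting is the gating factor for mainstream CPO deployment.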
No, it's quite OK. Finally, on the competitive side, I think you've said your belief is that Ethernet is going to win in the end. Maybe talk about over what time frame you think that occurs, and any factors that could slow it down.
The easiest answer is that among the few hyperscalers we deal with, the ones doing LLMs are also the ones who see an existential need to create their own custom accelerators, their XPUs. As they start to deploy XPUs in a steady, progressive manner, like all things, over the next five years, you'll see Ethernet come into play. That's a direct correlation back to disaggregation: moving from an integrated platform rack to one that's more disaggregated, with XPUs and Ethernet switching interconnects, each more optimized.
Yeah, I'd be remiss if I didn't touch on your software portfolio for a moment.
I knew we were getting to that.
Maybe just talk broadly. You've obviously done a great job of acquiring companies in the enterprise space with very sticky customer bases and driving very high margins over time in that business. Maybe talk about the durability of your software franchise as you see it, whether in VMware or all the products you've had up until now in the existing business. And maybe address your appetite for M&A in the software space. Are you 100% focused on the AI opportunity at this point?
You know, I love the software business we have acquired, because we are very careful about what we buy. The whole game in software, and you hit it right on, is durability, sustainability. The key to it is that you have to invest. Unlike what you may think, we actually invest a lot in the software we have, if nothing else to make it better for the customers who are using it. Beyond investing in the technology to make it better, you invest in support. Enterprises will always break software. When they do, you want it to be very resilient so that they can get it back up fast. Support is important. That's where we make a lot of investment: in the technology, in services and support, and in keeping it durable. If you do that, and we accept the fact that we don't need our software business to grow dramatically, just sustain itself and grow single digits, then just by doing that and not pushing for growth, you make very good margins. You've seen that in what we've announced. Very good, profitable margins. It's a stable kind of business. Here's the funny part about answering the first part of your question. Non-AI semiconductors and software are great businesses of ours and contribute a big part of our revenue and earnings. But in a matter of a few years, my AI revenue will exceed the combination of both. If it isn't already, in a matter of one or two years it will. So from our viewpoint, with this big opportunity and the dramatic growth I've indicated we're going through, we look at it and say: you don't need to buy anything else. You don't need to invest in anything else to accrete your revenue or your earnings. Just keep investing in AI. For the next few years, that's where we are.
That's a great place to end it. Thank you very much, Hock, for being here. We really appreciate it.
Thanks.