Advanced Micro Devices, Inc. (AMD)

Morgan Stanley’s Technology, Media & Telecom Conference 2024

Mar 5, 2024

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Welcome back, everybody. Joe Moore from Morgan Stanley Semiconductor Research. Very happy to have with us today Jean Hu, the CFO of AMD. I think I'm supposed to read this real quick. For important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com. If you have any questions, please reach out to your Morgan Stanley sales rep. So sorry to get that out of the way. So I guess we'll start with AI, because we always have to start with AI these days. You guys put out this $400 billion number for the accelerator market in 2027. And that got a lot of attention, because AMD has been very grounded and hasn't kind of been big with hyperbolic numbers. So that number got a lot of attention. It's probably the most frequently asked question. So maybe just kind of talk to that.

Where did that number come from, and how confident are you?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Oh, yeah. First, thank you so much for having us. Yes, last December, we updated our AI opportunity TAM to $400 billion in 2027. As you said, it was a surprise to some people back then, but it may not be the case today. Since last December, the pace of innovation in generative AI has been extraordinary. We have seen more evidence of new models, new applications, and also CapEx spending. And more people are talking about a $1 trillion investment in AI infrastructure to really deliver the capabilities needed for how we work and how we live going forward. So when we think about projecting market opportunities over the longer term, the way we think about it is really framing our understanding of the trajectory of the market. This is an incredible technology trend.

When we talk to our customers in the cloud, they tell us about their needs to run large language models: how they need the models to be better, more accurate, with less hallucination, so the models can answer questions better and improve productivity. All of those things need a really large cluster of compute. Also, in enterprise, we are starting to see very early evidence of productivity improvement. We hear people talking about very specific functions with sometimes 30%, 40%, even 100% productivity improvement. For those applications, you need a different model, a different application, which then needs a different cluster of compute. The other thing is that AMD is a strong believer in pervasive AI. We think AI started in the data center, and it's going to go to the edge and to personal devices like the PC.

So when you think about all those different areas for applications and models, you can actually see the pace and rate of innovation in models and applications accelerating, and it's probably actually held back because nobody has enough GPUs. So when we add all those together, the trajectory of the market is actually quite consistent with our view. If you look at the AI accelerator market, it was almost nothing, very small, in 2022. In 2023, it probably passed $40 billion. This year, it's going to double or more. I think what we really can debate is whether it's $400 billion or $300 billion. But the trajectory of this technology trend is truly extraordinary.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Yeah. OK, that's very helpful. Then maybe talk about AMD's efforts to capture some of that. Give us a description of the MI300 today. What capabilities does it have, and what markets can you attack?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah, we literally just launched the MI300 last December. That's only like three months ago, with really strong support from cloud customers. We have Microsoft, Meta, Oracle, and OEMs, Dell, Supermicro, HPE, and also the whole ecosystem. And our team is super focused on aggressively ramping into production. For the first quarter, last quarter, the December quarter, we actually delivered more than $400 million in revenue. It's a very complex product, so not only are we ramping production, it's going to be the fastest-ramping product in AMD history. And we also have a broad set of customers, not only the cloud customers we talked about, but also enterprise customers. So the use cases are also very interesting. They cover both inference and training. And for cloud customers, not only are they using them for internal workloads, like GPT-4 that we talked about, but also third-party workloads. So it's great traction.

Since our earnings release, we have continued to broaden and deepen the customer engagement. So we feel pretty good about the feedback and adoption, because it does take some time for production and for qualification. But the trajectory has been really good. For 2024, we talked about more than $3.5 billion, with sequential revenue increases each quarter. So it's really exciting.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Yeah, I want to double-click on those numbers a little bit. But first, what types of workloads are you working on? I mean, you have this very impressive customer list, but everybody's kind of cagey about what they're actually doing. And you sort of say, we'll have instances available in cloud in Q1. Are you seeing partnerships where you are the inference partner for some of these most critical workloads from some of the hyperscalers?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah, absolutely. If you look at our customer list, it's not just one, two, or three customers; it's actually quite a few customers. Of course, the cloud customers primarily have use cases for their internal workloads, but there are also third-party workloads. And the enterprise customers either go through OEMs or use the cloud as their service providers. And we do see both inference and training.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

OK, great. So on the numbers, the $2 billion that went to $3.5 billion, that's actually a remarkable year one. I mean, you just introduced this. The ecosystem needs to develop. There needs to be software support and things like that. The first year of your Zen server business was a few hundred million, and you're talking north of $3 billion already. Can you just talk about how that evolved so quickly? And I'll get into it a little bit; people want there to be upside to that number, and I'll get into that in a sec. But just how did you get to $3.5 billion or more? How did that number evolve so quickly?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah, it is incredible when we even think about it. We have a great team, great execution. But the ramp has been really surprising to many people. Maybe the way to think about it is to take a step back. We hear feedback sometimes saying, hey, how can AMD come up with the MI300X and the ramp so quickly? But in reality, AMD has been investing in GPUs for a decade, since the ATI acquisition. And when Lisa joined AMD, we focused our investment on building both the CPU platform and the GPU platform. So we actually have a very similar experience to NVIDIA in investing in GPUs. And if you look at the MI300, it actually did not come out of the blue. Literally, from 2020 to 2023, we introduced the MI100, MI200, 210, 250, and then MI300. On the other side, you're absolutely right.

This is not just about the GPU hardware. Software is so important. AMD has been investing in ROCm software for many years. Of course, initially it was used for HPC applications. Then, in the last few years, the team really sped up progress to focus on cloud applications. It didn't just suddenly happen; it has actually been a long journey. It has accelerated because we have comprehensive hardware, and the software is at a state where it's quite ready for a lot of customers. We have also collaborated closely with our hyperscale cloud customers for many years. All those efforts really helped us accelerate the adoption of MI300.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Great. So can you talk about the supply chain for this in a couple of different ways? One, this is an incredibly complex, advanced-packaging device, with a lot of die in the stack. There's also, I think, the 8-month manufacturing lead time you've talked about at one point, which is pretty long. I don't know if I've ever heard of a semiconductor that had an 8-month lead time like that. And yet, you've been able to ramp it to $3.5 billion plus pretty quickly. So put that into perspective. And then separately, a lot of people in my profession are listening to the supply chain about your unit numbers, assigning a price to that, and assuming that's all revenue this year. And that's where you're getting very high whispers for the year. Can you talk about whether that is the right way to think about your opportunity?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah, yeah. Thank you for the question. So on the MI300 side, maybe just a little bit of background. It's actually a chiplet design, with multiple chips packaged together. I would say AMD is actually the leader in chiplet design and innovated very early on. It was probably the first to have a really large-scale chiplet design in production, going back almost 10 years. It's the same case with CoWoS packaging: AMD actually partnered with TSMC initially to develop the technology.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

And Xilinx.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Xilinx, yeah. Exactly. When you look at that history of a close, integrated relationship with TSMC, that helped us tremendously. Our chiplet designs have been in production for all different kinds of products at TSMC for a long time. And this one, of course, is the most complex chip we have designed. We worked with TSMC very closely to ramp production, and also, of course, the whole supply chain in the system. That really helped us. I think you probably know the partnership between TSMC and AMD is very strong. I think our team has done a great job in a very tight supply-constrained environment. We did secure supply for more than $3.5 billion. But the way Lisa thinks about the business is really long term.

When you have a seven-month manufacturing cycle and a really broad set of customer engagements, you want to make sure you position AMD for success. At the same time, different customers have different qualification processes. The ramp sometimes may not be that predictable; it can be gradual, because different customers ramp at different times. Overall, the way we think about it is that this is not just about one year. It's a trajectory. It's a very large market opportunity, and we are the new entrant into the market. The opportunities and the supply need to be aligned for the longer term.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

So this year, the $3.5 billion, is really laying a foundation.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Oh, yeah. This is very early in the journey. If you look at the market opportunity, we are positioned for the longer term to address this market.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Great. And then on the second part, on people using the supply chain to triangulate your numbers: does everything you build this year get sold this year? How should we think about that?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

I think you know the semiconductor ecosystem better than I do. In the supply chain, the manufacturing cycle is really quite long. It takes time. Having the capacity does not mean you can just ship all the product to the customers. It's a process. So I think it's quite naive to try to use a supply chain number to derive the revenue number. It's a very different thing.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Because you get whipsawed. There'll be a lot of enthusiasm in the supply chain, and then some month there'll be a downtick, and then it'll be negative. But your number was never that number.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

No, no. That's a simplistic approach. I don't think that's how most people do it.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Right.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

I agree. OK, and so you're thinking if this market goes from $100 billion+ this year to $400 billion in three years, your goal is to at least keep pace with that?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

I think we do have a very competitive product. We now have the software and the networking partners. It is a large market, and we are a strong second contender in this market. We do think it's a great opportunity, and it's probably the largest growth driver for AMD over the next several years.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Yeah. I mean, $3.5 billion is a big enough number that it probably registers with NVIDIA, that you're seeing those kinds of numbers. They have the B100 coming out. They've talked about gross margins coming down in the back half of the year. Do you anticipate that as you become more significant, similar to what you saw in servers, that there will be some competitive pushback?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

I think in any large market, any technology market, you will have competition. And especially in this market, not only is it very large, but there are broad demands and needs. Different customers have different demands and needs, especially as a lot of open-source ecosystems continue to evolve. It's not like everybody writes their model on CUDA. There are more and more people writing models on the open-source ecosystem. So we do think that for a large market with very diverse, segmented needs, you need more than one player. And GPUs are not easy to do. Both NVIDIA and AMD have a strong history of GPU platforms. So it is good to have competition in the market.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Yeah, I mean, I use the same line for you guys that I used for NVIDIA in 2018, which is it's an overnight success story that took like six or seven years to sort of.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Turn that corner. Yeah. OK, great. And then, so you mentioned ecosystem support. I thought one of the more compelling things, when you originally did the launch, was that you had people from Hugging Face and PyTorch and those kinds of things. Where are you in that ecosystem support? I assume there's still room to grow when you're only a few months in.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah. Last December, when we launched MI300, if you recall, the most important thing we talked about was the three pillars of the strategy. Not only do you need to have the most competitive GPUs, but both the software ecosystem and networking are really important. On the ecosystem side, our software strategy from years ago is about open source. There's actually a reason: we recognize we are the second player in the marketplace, and what we want to do is lower the barriers to entry and make it easier for customers. By focusing on the open-source ecosystem, all the frameworks, models, and libraries, we are providing better TCO for our customers. And when you think about it today, since the introduction of ChatGPT, we have seen the open-source ecosystem evolve very rapidly.

With all the large language models and a lot of the frameworks, we actually think more and more people are writing on the open-source ecosystem. For us, with ROCm 6, we have made very significant progress. We support PyTorch, JAX, Triton, and the different frameworks. If our customers are writing their models based on those frameworks, they can run on MI300X out of the box. It's that easy. And if you look at Hugging Face, there are probably 500,000 models right now on Hugging Face, and they can all run on MI300X. So it is really exciting. We think we have narrowed the gap already, and we think we can continue to make progress to make ROCm quite competitive. On the porting side, it's also the same, because we do have the same decade-long history as NVIDIA in the GPU market.

So we have been able to bring ROCm up to a level with strong capabilities. If you want to port CUDA to MI300, it's actually quite efficient. We'll continue to make progress there.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

OK, great. And then my last AI question, because I want to save time for the other $23 billion of revenue.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

You've mentioned training a couple of times. I think people thought of this, and you positioned it initially, with the biggest applications being inference. But clearly, you're seeing maybe more than you expected on the training side. Can you talk to that?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah. Definitely, we talk about the MI300X because its memory bandwidth and capacity are still the largest today. So in inference, absolutely, we have significant benefits. But even in training, it's quite competitive. When we look at our customers, we do see demand not only on the inference side, but on the training side. One thing you can imagine is that training happened much earlier; inference is just ramping. So when we introduced our product, naturally, with the inference market ramping, the demand is strong. We're probably more indexed to inference. But our customers, enterprise customers, actually use MI300X for training too. And it's exciting. I think going forward, we'll see more and more of those opportunities as we continue to work closely with our customers.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Great. Well, congratulations on all of that. I did want to talk about some of your other businesses, maybe starting with servers. You had a difficult year last year, where in particular a lot of the discretionary upgrades of servers got postponed as the money shifted towards AI. And there are some indications that that continues. You saw Amazon stretch out their depreciation cycles for servers. The market's coming back, because we need servers. There seems to be growth. But to the extent that I'm still not really seeing those big discretionary upgrades, the underlying AMD business has to continue to do pretty well. Can you talk about what you're seeing, and what your expectations are for 2024? Do you think we'll see some of those bigger upgrades start to come back at some point?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah. In 2023, you're right, the overall server market actually declined. It was what you said: one factor is inventory digestion, and also AI spending. Customers prioritized AI and really extended depreciation; I think almost all the cloud customers extended their depreciation. But the most important thing we see is that when you look at compute, servers continue to be the most efficient, best-TCO platforms for a lot of traditional compute, including mission-critical applications. And those workloads actually continue to expand. Data centers today are actually running out of space and power, and the operating costs are quite high if you continue to use old servers. You can absolutely extend them to a certain life, but then you have to upgrade. We do think 2024 is a much better backdrop for the market when you think about the potential refresh cycle.

It's still early, but we do hear customers, cloud and enterprise customers, thinking about refreshing, because they need more space and more power. And also, they want to make sure their data center operating cost is efficient. So for our server business, as you know, the Genoa family has the best TCO in the marketplace today. And Turin, which we are going to introduce in the second half, will be even better on performance per watt and performance per dollar. So we do think that, as customers upgrade, we can capitalize on the opportunity to gain more market share. It is a very important year for us. Last year, even though the server market declined significantly, we actually grew our server revenue. And this year, we think the market will be better, and we absolutely will continue to gain share and grow much faster this year in the server market.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Great. And maybe I want to ask about both of those, Genoa and Turin. On the Genoa and Bergamo side, you had relatively slow uptake in the beginning, and I feel like you guys got blamed for issues that actually everybody had, with PCIe Gen 5 and DDR5 and things like that. And you also just had the money getting reallocated elsewhere. But you look at the data, and Genoa and Bergamo are ramping very nicely now. Can you talk to that? What flipped that switch? And then I'll get into Turin in a second.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah, yeah. Genoa is actually a new socket compared to the last generation. As you said, we have DDR5, the new memory, and the new PCIe Gen 5 interface. The qualification process for customers is not just about the server; they have to qualify the new memory and the new interface too. That took a bit longer. But once we passed that, Genoa actually became the fastest of the last four generations to ramp past 50% of both unit volume and value. The ramp is quite significant. If you look at Q3 and Q4 last year, the second half, our server revenue was more than 50% higher than the first half. Year-over-year, we also grew revenue and exited the year with more than 31% market share, the highest in our history.

I think the most important thing is that both Genoa and Bergamo provide the best TCO, and the customers really like them. With cloud customers, of course, you know our market share is pretty high already. Right now, we see more momentum with the second-tier cloud customers, which have been underrepresented in the past. They typically adopt a new generation of technology one or two years behind the US hyperscalers. So we do see momentum there. And in enterprise, it's the same thing: we see both Genoa and Bergamo, and especially Genoa, provide really the best TCO. And the customers are quite excited about it.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Great. I know you're agnostic between Genoa and Bergamo. You do well either way. But can you talk to the Bergamo ramp? And is that the sort of higher core count, more cloud-native processor? Is that something that you're going to continue to focus on as part of your server roadmap?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah, absolutely. I think Bergamo has a unique feature set. It's different from Genoa; it's really cloud-native. And we see Meta actually adopting Bergamo across all their platforms, from Instagram and WhatsApp to Facebook. If the application is similar to that, we do see other cloud customers, second-tier cloud customers, really liking Bergamo, because it probably has an even better TCO when they use it for certain workloads. It definitely is our focus. It's quite important going forward, actually.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

I feel like we need to send Mark Zuckerberg a Christmas card for talking about all these semiconductors that he uses, because it's actually very helpful to talk about this in the open. And then, on next-generation market share, there's some debate around that. You said, I think in October, that Turin was sampling with cloud customers. I don't believe your competitor's products are sampling yet. And it's socket compatible, versus your competitor, which needs a transition. So it seems that if we stay in an environment like this, you're in pretty good shape on market share in the next generation.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah. With Turin, not only is it socket compatible, but the TCO improvement is also quite significant. And we will continue to push the core counts much higher. The customer feedback so far is very positive. When you think about it, Genoa is still in its rapid adoption period, and we're still selling Milan, because certain customers do need Milan. We can see the opportunities for Turin adoption going into the future, which will help us gain more market share. We are confident from a market share perspective because of the total TCO we can provide our customers.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Great. And then switching to PC client (bless you), you had a rough year last year, and you've recovered. Now you've had a couple of quarters of restocking in that market. What's your feeling on 2024? And then, longer term, there are a lot of discussions around AI PCs and things like that. What's AMD's ability to enable some of that?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah, yeah. The PC market was challenging last year; the second half was better. What we're seeing is that inventory is more normalized, and sell-in and sell-through are more balanced, so it's real market demand. We do think the second half will typically be seasonally better. For AMD, one of the most important things is the AI PC. We do believe the AI PC will drive a different replacement cycle going forward. We were actually the first to introduce an AI PC last year, with the Ryzen 7040, and shipped millions of AI PCs. What an AI PC can do, once you incorporate the NPU into the PC, is run a lot of AI applications locally, so you don't need to go to the cloud anymore. Once you have those applications, you can really improve productivity and the user experience.

For us, the AI PC is very important. We are the leader. We are ramping Hawk Point, and in the second half we will introduce Strix. We do think that once more and more applications are in the marketplace, especially into next year, we should expect AI PC adoption to go up. That's very exciting, actually. I think that will help not only our share gains, but also gross margin dollars, which help the operating margin of the whole client segment.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

That strong position in graphics should definitely be helping. Can you talk to the profitability of PCs? I mean, you had Intel fairly openly doing promotional activity that negatively impacted you. It seems like they're past that now. Do you anticipate that that returns? Or how do you think about profitability of the segment?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah. PC market, or client segment, profitability is very important to AMD. The way we have worked on it is that during the down cycle, we optimized our investment, really focused on managing OpEx and actually reducing OpEx in the client segment. So we are very well set from an investment level right now. Once we can ramp up the revenue, we do think we can leverage it to the bottom line. Gross margin was impacted last year during the down cycle, but it has stabilized. Going forward, typically for the PC market, once you have higher revenue and once you digest the channel inventory, gross margin tends to be more stable and to go up. When you think about the overall client segment, we do think it is a market and a business where we should drive a successful model with very high operating margin.

Gross margin is probably not as high as the corporate average, because it's always consumer-driven to a certain degree. But operating margin should be much higher. So what we focus on is operating margin improvement.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Great. I'll ask one more question, and then I'll turn it to the audience for any potential questions. The embedded business, Xilinx, is now down 40% or so from peak to trough. That's a pretty severe drawdown by historic standards. What's your sense of whether that business could be approaching a bottom?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah. I think since our announcement, we guided the embedded business down in Q1, like you said, close to 40%. We also see other peers that address the same end markets; in their earnings releases, the decline is actually very similar. Our view is that we are going through the bottoming process in the first half of this year. In the second half, we do expect a recovery, but a more gradual recovery, because some of the markets, like communications, continue to be quite weak, not only because of inventory, but also because of CapEx and the product cycle, right? 5G is at the very late stage of the cycle. And industrial, to a certain degree, is also in inventory digestion and weak. I think the key thing about our embedded business is that Xilinx is a great franchise.

During this down cycle, we have actually continued to win more designs, especially when you combine Xilinx with AMD's embedded processors. We have expanded our design wins very significantly. We are driving a lot of revenue synergies, and I think that is going to show up in the long term.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Yeah. OK. Great. So let me see if we have questions from the audience.

Speaker 3

Yeah. My question is very specific, with respect to this $400 billion number. When AMD put that $400 billion estimate out, can you talk to the process? Was it a process driven by your customers coming and telling you their ramp of demand, so customer A wants 100,000 units, customer B wants 50,000 units, and you aggregated that and then multiplied by some ASP? Or is it based on, well, NVIDIA might supply $200 billion? Is it a supply-driven estimate? What was the process of coming up with that estimate?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah, yeah. Thank you for the question. As I mentioned, the way we think about projecting the TAM is actually bottom-up, right? It's how we work with our customers, based on their feedback about their long-term needs, and based on what we understand about the silicon content and memory content needed to meet our customers' needs. Of course, all forecasts come with a bunch of assumptions, right? And the way we think about it is the unit volume, based on customer feedback, of course, and our own projections and estimates. Remember, it's important in framing the trajectory for us, because in high tech, nothing is certain. But if you know the trajectory of the market, you can allocate internal resources to invest in those markets. That's really our fundamental belief: how we can roughly gauge the trajectory of the market.

So the unit volume is global: not only large cloud customers and enterprises, but also nations. And on content, the ASP also increases, because of what we believe customers will need for compute: the GPUs, the advanced packaging, the high-bandwidth memory. So the ASP will also increase. At the same time, we also include customer ASICs. We hear Google with the TPU, and other customers, talking about internal silicon, and we have a sizable estimate for those kinds of opportunities. So it is bottom up. But our team also did work to check, once you have that $400 billion, what it means for global GDP and what it means for productivity improvement, since we're talking about AI really saving a lot of labor cost.

We did multiple triangulations to make sure the trajectory we are projecting is not out of line.
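The bottom-up framing Hu describes — per-segment unit volumes multiplied by a blended ASP, plus a separate allowance for customer-internal ASICs — can be sketched as a toy calculation. All segment names and figures below are illustrative placeholders chosen only to show the mechanics; they are not AMD's actual assumptions:

```python
# Toy bottom-up TAM sketch: per-segment accelerator units x blended ASP,
# plus a separate estimate for customer-internal ASICs.
# Every number here is an illustrative placeholder, NOT AMD's actual inputs.

SEGMENTS = {
    # segment: (units in millions, blended ASP in USD)
    "large_cloud": (8.0, 30_000),
    "enterprise": (3.0, 25_000),
    "nations": (1.0, 28_000),
}
ASIC_TAM_B = 50.0  # hypothetical allowance for internal silicon (TPU-like), in $B


def bottom_up_tam_billions(segments, asic_b):
    """Sum units (millions) x ASP (USD) across segments, convert to $B, add ASICs."""
    gpu_b = sum(units_m * asp / 1_000 for units_m, asp in segments.values())
    return gpu_b + asic_b


total = bottom_up_tam_billions(SEGMENTS, ASIC_TAM_B)
print(f"Illustrative accelerator TAM: ${total:.0f}B")  # prints $393B with these inputs
```

The "triangulation" step she mentions would then sanity-check a total like this against top-down anchors (share of global GDP, implied productivity or labor savings) rather than relying on the bottom-up sum alone.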

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

Another question in the front?

Speaker 3

Thank you. Maybe if you can speak to your AI prospects in China? There was a Bloomberg story today about some of the chips tailored for China, maybe not getting approval. So just curious how you're thinking about that opportunity?

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah. I think for now, we are a new entrant in the market. Our focus has been working with our customers. The MI300 revenue we are talking about today, and for 2024, has largely been from non-China customers. We do ship the MI210 to China, which is in compliance with export controls. We are working with our customers and with the U.S. government to see if we can have a derivative of the MI300 to support Chinese customers. That's a work in process; we need to comply with export controls, and we have been working with the U.S. government on that. Overall, in the long term, I think China is a market for us. The key thing is making sure we comply with export controls. Today, there's not much impact on us.

Speaker 4

I have two questions. The first one is, can you talk about MI300 next year? What will the number be like? This year it will be more than $3.5 billion, so what will next year be? And can you also talk a little bit more about the competition? NVIDIA is talking about a one-year cadence; every year they're going to launch new technology. So how are we going to compete, and what is our strategy on that front? Thank you.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Yeah. Yeah. Thank you for the question. On the competition side, I think competition is really good for the market. Since we introduced the MI300, we have seen NVIDIA continue to accelerate the cadence of their product introductions. You should expect us to do the same. If you look at the history of AMD, from the MI100 to the MI300 is literally around three to four years, and we introduced multiple products. We are the leader in chiplet technology, and we also have a strong partnership with TSMC on the packaging side. So we do feel confident about the roadmap. And we are also working with our customers. The key thing about any new technology is that not only do you need to introduce it, you also need customer adoption, and customers need the resources to make sure they can work on the new technology.

So for us, it is about working with customers and aligning our roadmap with what they need. You should expect us to continue to drive competition on this front. I think the success of our AI business is not just about 2024. We really look at long-term success and the trajectory of revenue based on design wins and customer engagement, and we feel pretty good about our long-term success. So you should expect us to continue to drive the competitiveness of our roadmap. And we do think AI, especially the data center GPU, will be the largest growth driver for AMD going forward.

Joseph Moore
Managing Director and Head of U.S. Semiconductor Research, Morgan Stanley

We'll have to wrap it up there, Jean. Thank you so much for your time.

Jean Hu
Executive Vice President, Chief Financial Officer and Treasurer, AMD

Thank you.
