Okay, good afternoon and welcome to the second day of J.P. Morgan's 52nd Annual Technology, Media, and Communications Conference. My name is Harlan Sur. I'm the semiconductor and semiconductor capital equipment analyst for the firm. Very pleased to have Jean Hu, Executive Vice President and Chief Financial Officer at Advanced Micro Devices, here with us today. Jean, thank you for joining us this afternoon.
Yeah, thank you. Thank you for having us.
Yeah, I think one of the best places to start is with one of your customers, Microsoft, which is holding its Build event, where there were a couple of key announcements and product launches using the MI300 GPU platform from AMD. I don't know if you wanted to spend some time talking about that?
Yeah, it's really exciting. So we announced today, at the Microsoft Build conference, an end-to-end partnership with Microsoft, from, of course, the AI PC to MI300. On the MI300 side, MI300X and the ROCm software together-
Mm-hmm
-actually power Microsoft's virtual machines-
That's right
-both for internal workloads, such as what ChatGPT and OpenAI use, and also for external, third-party workloads. And Microsoft actually said that MI300X and ROCm deliver the best price performance for GPT-4 inference. So that's really a proof point, not only of how competitive MI300X is as hardware, but also of the maturity of the ROCm software and how we have worked with our customers to come up with the best price performance. It's very exciting. And on the third-party workload side, Hugging Face is also using Microsoft's virtual machines powered by-
Correct
AMD's MI300X, and Hugging Face has almost 500,000 models-
Yes
-which can all run on MI300X. So we have made tremendous progress, not only with MI300X being competitive in inference and training, but more importantly, software has been a really critical investment we are making, and today we really can run a lot of models. If customers write their models based on the open-source ecosystem frameworks-
Mm-hmm
-we can run them out of the box. And then we also help customers optimize their models to make them the most efficient and provide the best TCO to our customers. So it's very exciting.
So the Microsoft announcement, like you said, was in two portions, right? First, Azure is now offering MI300X instances, right?
Yes.
To their public cloud customers.
That's for third parties. Yeah.
Exactly. And then the partnership with OpenAI, right? They announced the Azure OpenAI Service, which allows customers to take advantage of pre-built models to quickly bring training and inference, and their models, to market, and that also uses your MI300X platform, right?
Yeah, yeah, because MI300 is powering GPT-3.5 and GPT-4-
That's correct
-all the Copilots, all the different versions, Team Copilot, all of those applications. It's really one of the most important pieces of AI infrastructure in Microsoft's-
Yes
-Azure data centers, so we are really pleased with our partnership with Microsoft.
No, congratulations on that. We'll talk a little bit more about MI300, but I did want to start, and thank you again for joining us today, with some near-term business environment questions. The server shipment TAM forecast for calendar 2024 is low- to mid-single-digit growth this year. You expect to grow your data center business this year by a strong double-digit percentage. Within that, it looks like your server business ex-GPUs is growing about 25%-30%, which implies 15% or better second-half versus first-half growth, right? You did talk about continued adoption of Genoa and improvement in enterprise demand dynamics.
What demand dynamics are you tracking, whether customer programs, adoption of Genoa and Bergamo, or the ramp of Turin, that give the team confidence in a better second half and strong growth in your server business this year?
Yeah, a great question. We are very pleased with our data center performance. If you look at the market opportunity, it's the largest and fastest-growing-
Mm-hmm
-opportunity, and we have been investing in the data center, and you can see the momentum on both the GPU side and the CPU side. So when you look at our CPU business, in Q1 we saw strong double-digit growth year-over-year.
That's right.
In Q2, we're going to see another quarter of strong double-digit-
Mm-hmm
-year-over-year growth. All of that has been driven by the ramp of the Zen 4 family of processors, including Genoa, Bergamo, and others, and the significant adoption by both cloud and enterprise customers. I think it's fundamentally because our processors provide the best TCO for our customers. If you look at market share in Q1, third-party data shows we reached 33% market share-
That's right
-on the server CPU side. We do expect the second half to be better than the first half. The first driver is cloud. Cloud market demand continues to be a little bit mixed, but since we are providing the best TCO-
Mm-hmm
-for our customers, we do see both the hyperscale cloud customers and the Tier 2 cloud customers continuing to adopt our Zen 4 processors across different workloads, both external and internal-
Yes.
-workloads. And as we've talked about, we have almost 900 public-
Mm-hmm
-instances available globally for customer adoption. That has really been helping us drive growth. Secondly, in enterprise, we're actually starting to see some demand improvement, because today the CIOs in the enterprise are facing a couple of challenges, right? The first is that their workloads continue to grow. There's more data, more applications, so they do need more general compute. At the same time, they need to start thinking about how to accommodate AI adoption in the enterprise. They are facing the challenge of running out of power-
That's right
-and space. If you look at our Zen 4 family of processors, we can literally provide the same compute with 45% fewer servers. What that means is, if customers adopt AMD's solution versus our competitor's, they can actually cut upfront CapEx almost in half, and in addition, the operational cost will be 40% less. So the TCO benefit we can offer is significant.
Mm-hmm.
Plus, we have been investing in go-to-market. We have more feet on the street talking to enterprise customers and showing them the TCO benefit they can get. So we do see that the acceleration of our effort-
Yeah
-is paying off. You know, we talk about American Express, Shell, STMicroelectronics-
Mm-hmm
-some of the large enterprise customers shifting to AMD solutions. And that's just the beginning. We do think, in the second half, the demand improvement and our continued share gains in the enterprise market-
Yes
-will also help us. Of course, as you know, we are very excited about Turin launch-
Mm-hmm
-which is our Zen 5 server processor. It will extend the TCO benefit compared to Zen 4.
Right.
So we're very excited about it. Of course, the revenue ramp is probably more of a 2025 story, but given the momentum, when we look at our competitive positioning and how we can provide the best TCO for customers, we feel pretty good about the second half.
Similar question to the one I asked on data center, but now focused on the client PC business, right? You drove better-than-seasonal shipments in Q1 and guided for slightly better-than-seasonal shipment dynamics in Q2. For the full year, I think we, and consensus, have your client business up about 25%. What metrics are you monitoring that give the team confidence that the PC client business will drive strong growth relative to the overall TAM growth?
Yeah, yeah, I appreciate your comment. It's true that in the first half, our PC client business is performing really well.
Right.
We're gaining share.
Mm-hmm.
Primarily, it's driven by our most recent generation of processors, the Ryzen 8000. When I look at our Q1 performance, on the desktop side we had strong double-digit year-over-year growth. On the mobile side, we actually almost doubled revenue with the Ryzen-
Mm-hmm
-8000 series processors. So the way to think about it is, we were actually the first to introduce an NPU inside a PC, the AI PC-
Mm-hmm
-people talk about. That was with the Ryzen 7040, and we were also the first to introduce an NPU in a desktop. That's the 8000 series.
Yeah.
So it's the technology and product leadership that have helped us drive significant demand. AMD's strategy has always been to drive top-line revenue growth through product and technology leadership, and the team has been executing extremely well. In the second half, we're going to launch our next-generation AI PC processor, Strix.
Mm-hmm. Mm-hmm.
You're going to hear about it in the coming weeks. It's a very exciting product, and very competitive at powering AI applications in the PC market. We do believe the AI PC is a very significant inflection point. It will potentially help drive a refresh of the PC market.
Right.
And so overall, to come back to your question, we think generation-over-generation technology and product leadership will help us continue to gain share, on both the commercial side and the consumer side.
Perfect. We did talk a little bit about the traction and the announcements today at Microsoft Build around the MI300. But let's talk a little more about AI and accelerated compute, right? Your team is executing extremely well: the fastest product ramp in the history of the company, $1 billion in cumulative revenues in just the past two quarters. You've taken your MI300 calendar 2024 revenue target from greater than $2 billion, to greater than $3.5 billion, to greater than $4 billion. Near term, the team has said there are supply constraints, right? It seems like demand is rising much faster than expectations. Lisa said that beyond the $4 billion revenue target for this year, the team has supply commitments to drive revenues significantly above that amount, right? Is that still the case?
Is GPU revenue upside from here purely dependent on customers converting from eval to qual to deployment?
Yeah. First, the MI300 ramp is really unprecedented. If you think about it, we launched MI300 on December 6th last year.
Mm-hmm.
And since then, as you mentioned, in less than two quarters we actually passed a billion dollars in revenue, and we also guided Q2 to a significant increase, with each quarter increasing sequentially-
Right
-for the rest of the year. And we updated the outlook to more than $4 billion in revenue for this year, based on what we have qualified-
That's right.
-and the backlog-
That's right
-of orders at the point when we did our earnings announcement. We have more than 100 customer engagements ongoing right now. Lisa talked about the different customers being at different stages of engagement, from POC to qualification, lab, and production, to ramp. The customer list includes, of course, Microsoft, Meta, and Oracle, the hyperscale customers, but we also have a broad set of enterprise customers-
Mm-hmm
-we are working with. Overall, if you look at AI accelerator demand, it continues to exceed everybody's expectations.
Yes.
I think there is more demand for GPUs, and our team is working very hard with our customers to continue to go through that process, to scale our customers and make sure they ramp into production. So we do have more than $4 billion of supply secured-
Mm-hmm
-especially in the second half.
Mm-hmm.
We are absolutely working hard to continue to drive and help customers ramp into production.
Your data center GPU competitor has laid out a multi-year roadmap, increased the cadence of new products, and also more finely segmented its product line, right? I think we and investors are wondering when the AMD team is going to provide more visibility on its roadmap. And I think Lisa said that we should see new products being introduced toward the latter part of this year. Is that still on track?
Yeah, I think, Harlan, I will first highlight how AMD got to where we are today.
Mm-hmm.
If you think about AMD, since Lisa and Mark Papermaster joined, the company has been building a high-performance compute company. So not only have we been investing in CPUs; on the GPU side, we have been investing for many, many years, since the ATI days. And if you look at our GPU roadmap, we have been investing from MI100 to MI250-
Mm
-and MI200 to today. So the approach has always been a multi-generational, multi-year roadmap-
Yes, yes
-from AMD's perspective. And the MI300X ramp and its success, on both the software side and the hardware side, is really a reflection of the long-term investment-
Mm-hmm
-that we have been making. So I think, from our perspective, that backdrop is really important.
Yes.
When we work with our customers, both companies are investing significant resources, so you should expect the customer relationships to be multi-generational.
Mm-hmm.
We actually get very significant feedback from our customers about not only MI300X but also the next generation and the generation after next. The other thing I would say is, AMD has been doing chiplet architectures for a long time.
Yeah.
Literally almost 10 years. The success of our server CPU roadmap is because, generation over generation, it's built on a chiplet design.
Mm-hmm.
That really gives us a lot of flexibility to expand and accelerate our roadmap. That also helps us. We tend to be more conservative about-
Right
-announcing roadmaps, but you should expect us to have a very competitive roadmap. So stay tuned; we will have a preview of our roadmap in the coming weeks.
Okay, perfect. I feel like the expansion in your data center GPU business outlook this year reflects two dynamics. First, unlocking better supply availability, but secondly, faster time-to-production conversion by your customers as they migrate their software stacks over to your platform, right? Thanks to several new iterations of your ROCm software framework. What is the AMD team doing here to continue to close the gap on software, AI frameworks, and accelerated compute ecosystem development overall?
Yeah, great question. Software is so important-
Mm-hmm
-in this market. AMD initially invested in the ROCm software for the HPC market. So when you look at MI100, MI250, and MI300A, they all run on ROCm software. With MI300A and ROCm, we are powering the most advanced supercomputers, like Frontier, in the HPC market.
Yes.
Over the last two years, we have made significant investments and progress in ROCm to support AI.
Yeah.
That progress has been tremendous, and ROCm 6.1 actually extended our support for a broad set of libraries, models, tools, and the ecosystem. That is the reason that, for a lot of models written on open-source frameworks, you can actually run your model out of the box-
Mm.
-using MI300. And that's also why, with Microsoft, we could work together closely to really co-optimize-
Yes
-the performance to the point of being the best price-performance MI300 machine. What's important for us going forward is to continue to scale, because we have more than 100 customers and a broad set of different workloads, and we need to scale our model support with the open-source ecosystem, which has been evolving very quickly. And secondly, AMD's approach is end-to-end AI. So ROCm, as a single software platform, will support the AI PC, the GPUs-
Yeah
-and eventually the edge side of AI applications, the server CPUs, and also the multi-generation-
Yeah
GPU roadmap. So from that perspective, broadening and deepening the support is what we're doing. And as you can see, we not only invest organically, we have also made some small acquisitions to really expand our software capabilities.
Great. Why don't we see if there are any questions in the audience? If you have a question, raise your hand. We have one right up here. While the mic makes its way over, let me ask my next question. When we talk about AI compute, silicon, and hardware, the focus is typically on the cloud and hyperscalers, right? But interestingly enough, a majority of customer-specific and proprietary data actually resides on-prem. Your enterprise customers would like to keep their proprietary data on-prem and run their AI workloads on-prem. The team is actually starting to prime the enterprise markets with MI300; you announced a plethora of OEM server partnerships recently. What's the strategy for targeting the enterprise markets, and how is this modulating your future product portfolio?
Yeah. If you look at the over 100 customer engagements we have right now, there are a lot of enterprise customers. The approach we're taking is that not only do we want to make our hyperscale cloud customers successful-
Right
-we also want to seed our enterprise customers.
Mm-hmm.
Because we do think AI is going to be everywhere. And you're absolutely right: when we talk to our enterprise customers, they are starting to think about that question.
Yes.
Do I do it on-premise?
Mm-hmm.
Do I send it to the cloud? That is a strategic question they have to think through, and typically they come to us asking how they should deploy AI. I think we are uniquely positioned because, on the server side, we're already working with these customers, helping them deploy servers. So it becomes significant leverage for us, and frankly, the same is true on the commercial PC side.
Yes, yes.
Right? So AI PC-
Right
-the server side, and the GPU side are all part of our go-to-market model right now; we can leverage them to work with enterprise customers across our different platforms. Of course, ROCm software is really important, because ROCm is open source by nature, and if customers can write their models based on open-source frameworks-
Mm-hmm, mm-hmm
-it actually saves them money.
Yes.
From a TCO perspective, that's the approach we're really taking: consistently trying to provide our customers the best TCO. We do think that's how we can continue to drive engagement with enterprise customers.
Perfect. Okay, we have a question here.
Yeah. Hey, Jean. You know, NVIDIA has China versions of their products, like the H20 and the L40. Number one, do you have a similar China-specific SKU? And number two, if you do, is it included in your $4.5 billion guidance? How do we think about the China opportunity that NVIDIA has, whether you have a product for that, and whether it's in your guidance or not? Thank you.
Yeah. So when you look at our current revenue, the China exposure is almost nothing; it's very limited. The way to think about it is that China is an important market. It is a large market, but we definitely want to make sure we're complying with export controls. Export controls have been changing a lot, but one of the advantages we do have is our chiplet architecture: if we needed to design something unique to meet the export-control standard for China, we absolutely can do that. As I always say, you should expect us to focus on all the market opportunities, and we'll prioritize them.
Today, we're really focused on making sure our U.S. customers and those enterprise customers get the GPU supply we have, but we absolutely think China is an important market for us.
But that's not in the guidance, I guess, is it?
I don't think we'll comment on that, right? Our guidance was based on how we comfortably viewed the backlog and everything at that particular point in time, at the Q1 earnings call. At that point, it was largely focused on U.S. customers.
You know, your competitor surprised all of us last earnings when they told us that 40% of their data center revenue is inferencing, right? And it's high-performance, full-blown GPU inferencing, not any of their lower-end SKUs. And from that perspective, as the team rolled out the MI300 platform, that was always how the team led, right? That you have the best TCO, from a performance, cost, and power perspective, for inferencing. And I think the Microsoft Build announcement, with the Azure OpenAI Service, is using your MI300 primarily as the inferencing engine, right? So it's clear that inferencing is becoming a bigger and bigger part of the pie, because your customers are now starting to deploy, right?
And so is it fair to assume that most of your engagements on MI300 are for inferencing-type applications?
I would say we actually engage with our customers on both inference-
Mm-hmm
-and training. If you look at our Q1 revenue, we actually have quite a broad set of customers-
Mm
-spanning inference and training. It's probably more indexed to inference, just because we were late to market. When we entered the market, inference was actually taking off, right?
Yes. Yes, that's right.
Initially it was training, and then inference took off. So we came to the market with a lot of inference opportunities. Secondly, you're absolutely right: MI300X today has the best inference performance.
Mm-hmm.
The TCO benefit for customers is absolutely also one of the drivers. But to us, as we think ahead, both training and inference are important, and we do have the roadmap to address both opportunities.
On the server side, over the past year we've seen at least five or six Arm-based server CPUs being introduced: the NVIDIA Grace CPU, next-gen Graviton, Cobalt, Axion from Google, and a couple of others, right? How is this push on custom and merchant Arm-based CPUs going to impact the server CPU opportunity, you think, for the AMD team?
Yeah, we work with our customers closely. I would say when customers think about things, it's not about the architecture, Arm versus x86. For them, it's really about performance per watt-
Mm-hmm
-and performance per dollar: where can you get the best performance and TCO. When you look at our server processor roadmap today, Zen 4 and the upcoming Zen 5, within Zen 4 we actually have all the different SKUs, right?
Yes.
Genoa, which can run very complex workloads, and Bergamo, which is tailored to cloud-native workloads.
Yes.
If you look at customer adoption of Bergamo, which is probably the more direct competitor to Arm-
Mm-hmm
-we see significant adoption. Meta, as we've talked about, runs it across Instagram, WhatsApp, and also Facebook. It's all Bergamo, right?
Yeah, that's right.
Because it actually provides the best TCO. I do think, fundamentally, that's what's most important: the performance. As for merchant versus ASIC, that's always-
Yeah
-been a phenomenon in the semiconductor industry. You will always have a certain portion of silicon that's an ASIC or a custom solution. Again, it's about the TCO, and whether a customer thinks there's a TCO benefit to doing an ASIC. But so far, we feel our product portfolio and technology leadership will continue to provide a very significant benefit for our customers.
You know, going back to the PC space: some of your competitors are showcasing new Arm-based CPU platforms, touting their strong power efficiency and SoC-like architecture, right? Which makes them a lot more flexible. I would argue you're getting the same benefits with your Ryzen x86 platform. But the AMD team does have Arm architecture expertise, right? It's part of CoreLink, it's part of the Pensando portfolio. You have the GPU, the NPU, the AI blocks, and other accelerator IP. If the OS vendors, if the PC ecosystem, really wants to expand the CPU architecture base to Arm aggressively, would AMD participate?
Yeah, I won't comment on anything very specific, but you're absolutely right. The way to think about AMD is as a high-performance compute company.
Mm-hmm.
The building blocks that you mentioned-
Right
-those are exactly the advantages of this platform. We're actually the only company that can cover all those areas, every building block-
Yes
-you just mentioned. And from a business model perspective, we also have two business models.
Mm-hmm.
If you look at our gaming console business, it's a semi-custom business-
Right, right
-for generations past, and for future generations to come.
Mm.
Business-model-wise and IP-wise, we can do both, right? So we definitely have the capability and the IP blocks to work with our customers. It's really about what the customers need.
On the embedded markets: very diverse end markets, industrial, auto, infrastructure, test and measurement. Given your strong market share position here, Xilinx is in a good position to catalyze EPYC or Ryzen CPU attach to the FPGA. But from a near- to mid-term perspective, the team did start seeing weakness in embedded in the second half of last year, much like many of your peers. You seemed pretty confident on a return to quarter-on-quarter growth in embedded in the second half of this year. Given your lead times, you are probably already booking into the second half of the year. Is that what's driving the confidence, that you're already starting to see the bookings inflection in the quarter?
Yeah. As you know, Harlan, Xilinx is the best franchise-
Yes
-in the FPGA business. We have seen continued market share and design-win share gains from the Xilinx business, especially combined with AMD.
Mm-hmm.
Not only on the FPGA side, but also on the embedded processor side-
Yeah
-we're gaining tremendous design-win share. Of course, those businesses tend to take time to-
That's right
-ramp. Coming back to the near term, I think everybody here knows these markets. Industrial, automotive-
Yeah
-and communications are going through a deeper inventory correction cycle. For our business, we see quite mixed demand, right? Aerospace and defense-
Yes
-are still okay. In communications, it's not only an inventory correction; CapEx spending is also quite limited. So those are the two extremes, and in the middle you have industrial, and automotive is quite mixed. We do feel the first half will be the bottom.
Mm.
I think the inventory correction has been quite steep, but in the second half, our view is the recovery will be quite gradual.
Got it.
So it's not a V-shaped recovery; it will trend slightly up, and Q4 is probably better than Q3. Overall, when we look at the design wins we've secured, we feel quite confident about the longer term. The embedded business will continue to be a significant share gainer and continue to drive growth.
Embedded x86 CPU, that's about a $6-$8 billion per year market opportunity, and AMD has very small CPU share here. So given you've got Xilinx in the portfolio, can you give us an update on the synergy unlock? What is the AMD team doing to aggressively drive higher AMD compute attach to all of those Xilinx sockets?
Yeah, that's one of the best revenue synergies we have between AMD and Xilinx.
Mm.
You're absolutely right. The embedded processor business traditionally just wasn't a priority for AMD, because of the server market-
Yes
- the PC, the GPU.
Mm-hmm.
But because of the Xilinx acquisition, we have natural leverage on the go-to-market side and the customer side. We have actually been seeing significant design wins in that market, across the security, networking, and communications markets, and we do feel we can continue to build design-win momentum. The revenue will probably show up in 2025-
Yes, yes
-and beyond, but that's one of the very significant revenue synergies we see from the combination of the two companies.
Then finally, you guided gross margins to 53% for this quarter. As you move into the second half of the year, you have several dynamics, right, which I think should contribute to a better gross margin profile: embedded is going to start to gradually recover, and data center will drive strong growth. It looks like we, and consensus, are modeling the team exiting this year with gross margins about 150 basis points higher than the June levels. Is that how we should think about the trajectory from here? Any other puts and takes to think about?
I would say the major driver in the second half-
Mm
-is actually the data center business, right?
Yes.
Because data center is growing so much faster than our other businesses, it will be the major driver of sequential margin expansion each quarter in the second half. Of course, if the embedded business-
Yeah
-comes back-
Yes
-that will be an additional tailwind. But right now, we think the embedded business recovery will be more gradual, and the major driver of gross margin expansion in the second half is continued strong data center growth.
Jean, thanks for the participation today. Always appreciate the insights. Thank you.
Yeah, thank you so much.
Thank you. Thank you, Jean.
Yeah. Thank you.