Advanced Micro Devices, Inc. (AMD)

NASDAQ Investor Conference

Jun 11, 2024

Speaker 2

I was telling everybody that I spent 75% of my marketing talking about AI, to start. Jean Hu, CFO, happy to have you here. Maybe as a way to frame some moving pieces: you were just at Computex, which is usually a PC show, but it had AI roadmaps from both companies, so maybe start with that. Two interesting areas: AI PC, but then obviously data center AI. Maybe step through some of that, and then we'll get into some of the questions.

Jean Hu
CFO, AMD

Yeah. First, thank you for having us, and thank you all for joining us this afternoon. You're right, we had a really exciting Computex. There were a lot of announcements from AMD. Lisa Su, our CEO, did the keynote and unveiled our leadership CPU, GPU, and NPU architectures. AMD is actually the only company with end-to-end solutions covering CPU, GPU, and NPU, from the data center to the PC, and eventually to our embedded business, which is the Xilinx FPGA business. We do think edge AI will happen in the future, so we have a broad portfolio to cover everything. Going back to the announcements: the first is AI PC, because it is a consumer show. We announced our Ryzen AI 300 Series.

It's for premium, ultrathin notebooks. It has our latest CPU, GPU, and NPU, all on a single chip. On TOPS, the Microsoft Copilot+ requirement is 40 TOPS; we actually have 25% more. We are the only one to reach 50 TOPS to really power Copilot+ AI PCs, so it's a really amazing product. It's going to be available in July. We also announced our Ryzen 9000 desktop processors, which also have leadership performance in AI inferencing. So that's the PC side. Then on the data center side, we previewed our next-generation, fifth-gen EPYC server CPU, codenamed Turin.

Turin extends our leadership in both performance per watt and performance per dollar significantly, and it's going to launch in the second half of this year. Very exciting. But most importantly, Lisa previewed our GPU AI accelerator roadmap, on an annual cadence. Later this year, we're going to introduce the MI325X, which will have 288 GB of HBM3E memory, with significantly better memory capacity and bandwidth than our competition. Then next year, we are introducing the MI350, based on CDNA 4, a new architecture, which is also going to have 288 GB of HBM3E memory. More importantly, if you look at generation over generation, the performance improvement is 35x. That product will compete with, literally, the Blackwell B200, and it will have a similar generation-over-generation performance improvement.

And then in 2026, we're going to introduce the MI400, which, alongside NVIDIA's Rubin GPU, is going to continue to extend our leadership in both inference performance and training performance. So overall, a very exciting roadmap. We feel pretty good about the annual cadence, because we were the first to adopt a chiplet architecture. That gives us tremendous flexibility to have a lot more memory and to accelerate the roadmap.

Speaker 2

Right. Obviously, NVIDIA's been vocal about a one-year cadence, so you've matched that on the GPU. The question I get a lot is that they introduce a GPU, they have a CPU, they have a NIC, they have an NVLink switch, an Ethernet switch, so it's a whole portfolio that ultimately makes up a rack of AI servers. And I think the question people have is: How do you counter that? What partners have you talked about, and can you stay on a one-year cadence for all of those other components as well?

Jean Hu
CFO, AMD

Yeah, that's a great question. On the GPU side, we believe we have a very competitive GPU. On the networking side, if you think about interconnect, AMD has a long legacy with Infinity Fabric, which links our GPUs today. It has been a very successful interconnect technology. And recently, eight companies, including Microsoft, Meta, Google, AWS, Broadcom, Cisco, and AMD, formed UALink, and we are going to create an open standard. The 1.0 spec will come out literally in Q3. So the industry, the ecosystem, will have an open standard for interconnect, which can link up to 1,000 GPUs. That's really important for scaling up on the interconnect side.

On the networking side, as you recall, we bought a company called Pensando, which has a really programmable DPU architecture. Not only do we have tremendous customer interest today in standalone DPUs, but it's also a very important networking technology for the GPU roadmap going forward. In addition, there's the Ultra Ethernet Consortium, with Cisco, Broadcom, Arista, Marvell, and other companies. What the ecosystem is doing is promoting Ethernet networking. If you look at it today, globally, Ethernet is the most prevalent networking technology across all data centers. So we do think, by working with the ecosystem partners, we're going to have the networking technology, the interconnect technology, and the DPUs to really continue to drive our roadmap going forward.

Speaker 2

One of the other impediments to adoption has been the software side, and I wondered if maybe you could talk about the progress being made in terms of open-source software. Obviously, if it's done in CUDA, it's a lot of manual labor to port. It's hard for the customer, and that's been an impediment. Where are we in that progression, where a customer can use your silicon, and how long is that process at this point?

Jean Hu
CFO, AMD

Yeah. ROCm 6.1 is the most recent release of our software stack. It has the libraries, compilers, and tools, and we have been working closely with the ecosystem. If a model is written on PyTorch, Triton, or JAX, you can actually use MI300 out of the box. That's why, when you look at Hugging Face, they probably have more than 700,000 models, and all those models can run on MI300. So that's the first part. Second, to your point, if customers have been using CUDA, and especially kernel-level code for CUDA, we provide the libraries and tools for them to port really efficiently. It's at a level right now where, for some customers, the porting work can be a day, or a week.

Some may take longer if it's a more complex model, but it is very efficient. So porting is largely not a barrier anymore, and we will continue to mature the ROCm software stack, the libraries, tools, and even models, to really help customers convert very easily.

Speaker 2

I want to ask you: NVIDIA mentioned on their last call that 40% of their business was inference. I know it might be tough to parse, because customers might do both, but in terms of the makeup of your business, inference versus training, I don't know if you want to throw out a percentage, but I believe it's more inference-weighted, and I'm curious about your perspective as to why. Why is your chip better for inference? Is it because you need to accomplish some things to get more of the training business, or is it just because your chip's better at inference?

Jean Hu
CFO, AMD

Yeah, I think you're right. Today, if you look at our revenue, we're more indexed to inference. I think there are several reasons. First, we are the second player to enter the market. Training has absolutely been going on for a while, right? People need to train the model first; then inference is where they make money. So initially it's about training the model, and then inference takes off. When we entered the market, we were able to see inference applications and demand continuing to increase. Secondly, if you look at the MI300X, we have 192 GB of HBM3 memory. From that perspective, it's significantly higher memory capacity and bandwidth relative to compute. For inferencing, that's really important, right?

Because if you have a really large memory capacity, you can do the inference much faster. That's why the total cost of ownership for inference using AMD's MI300 is much better, and TCO is really what the customer is focusing on. That's why we see customers like to use the MI300X for inference applications.
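To make the memory-capacity point concrete, here is a rough back-of-the-envelope sketch. Only the 192 GB HBM3 figure comes from the conversation; the model sizes, the FP16 (2 bytes per parameter) assumption, the neglect of KV cache and activation overhead, and the hypothetical 80 GB competing accelerator are all illustrative assumptions.

```python
# Back-of-the-envelope: how many GPUs are needed just to hold model weights?
# Assumptions (illustrative, not from the transcript): FP16 weights at
# 2 bytes/parameter, no KV-cache or activation overhead, decimal GB.
import math

def gpus_needed(params_billions: float, gpu_memory_gb: float,
                bytes_per_param: int = 2) -> int:
    """Minimum GPU count whose combined memory holds the model weights."""
    # billions of params * bytes/param = GB of weights (1e9 factors cancel)
    weights_gb = params_billions * bytes_per_param
    return math.ceil(weights_gb / gpu_memory_gb)

MI300X_GB = 192  # HBM3 capacity cited in the conversation
OTHER_GB = 80    # assumed capacity of a competing part, for illustration

for params in (70, 180):
    print(f"{params}B params: {gpus_needed(params, MI300X_GB)} x 192 GB GPU(s) "
          f"vs {gpus_needed(params, OTHER_GB)} x 80 GB GPU(s)")
```

Fewer devices per model means fewer inter-GPU hops per token and fewer accelerators to buy and power, which is one way higher memory capacity translates into the inference TCO advantage described above.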

Speaker 2

I want to ask you about supply. Last year, I think everybody figured out what CoWoS was and had their own models, and that was a big focus; people were following it closely. I'm curious, are you hearing more noise about HBM? In terms of foundry capacity, I'm assuming you can get your chips done, but in terms of the other components to ship, where is the tightness today, and is there any concern with HBM vendor supply?

Jean Hu
CFO, AMD

Yeah. I would say the industry continues to increase capacity, both on the HBM side and on the CoWoS side. It's absolutely the case that our team has done a great job, but through the first half, we continued to see tightness for both HBM and CoWoS. So capacity continues to be limited, but the team continues to work with the supply-chain ecosystem and will continue to improve supply in the second half. I do think, on the memory side, the HBM side, we are working with all three memory suppliers, and the capacity will continue to expand. That's the good news on the capacity side.

Speaker 2

I did want to talk about the other two businesses, and we can go back to AI if we have time. But I wanted to ask about traditional servers, because this year was thought of as a rebound year, when last year it was thought of as capacity digestion plus a wallet squeeze. I think to date, the market is maybe only up modestly. So, just your perspective on the traditional server market.

Jean Hu
CFO, AMD

Yeah, the traditional server market is still quite mixed. Last year, we all know the traditional server market actually declined, and if you look at this year, in the cloud environment, we continue to see some customers optimizing, right? The AI investment optimization is still going on. But we have, again, tremendous momentum with our Gen 4 server CPU platforms. Both Genoa and Bergamo have been ramping quite significantly. Adoption and market share, again, have been really impressive. If you look at Q1, we actually got to 33% revenue market share. And when you look at the enterprise market, we have actually started to see early signs of a refresh cycle. The way to look at it is that CIOs today are facing a lot of challenges.

They are limited by power and by space, and they're also trying to figure out how to adopt AI. With all those things, TCO becomes really important. With the AMD solution, we can actually provide the same amount of compute with 40% fewer servers with our Gen 4 family. What that means is you can cut the CapEx by half at the very beginning, and the operating cost to run those servers will also be 40% less. So when you look at the whole TCO, we do think that will help the refresh cycle, right? Because then they'll have more space and more power they can utilize to adopt AI or do better planning within their data center.
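The consolidation math above can be sketched as follows. Only the "same compute with 40% fewer servers" ratio comes from the remarks; the fleet size, per-server price, and annual operating cost are placeholder assumptions for illustration.

```python
# Sketch of the server-consolidation TCO argument. Only the 40%-fewer-servers
# ratio is from the conversation; all dollar figures are assumed placeholders.
OLD_SERVERS = 100                               # legacy fleet, fixed compute target
NEW_SERVERS = int(OLD_SERVERS * (1 - 0.40))     # same compute, 40% fewer servers

PRICE_PER_SERVER = 20_000     # assumed purchase price, USD
OPEX_PER_SERVER_YR = 4_000    # assumed power/space/admin cost, USD per year

old_capex = OLD_SERVERS * PRICE_PER_SERVER
new_capex = NEW_SERVERS * PRICE_PER_SERVER
old_opex = OLD_SERVERS * OPEX_PER_SERVER_YR
new_opex = NEW_SERVERS * OPEX_PER_SERVER_YR

print(f"servers: {OLD_SERVERS} -> {NEW_SERVERS}")
print(f"capex:   ${old_capex:,} -> ${new_capex:,} ({1 - new_capex/old_capex:.0%} less)")
print(f"opex/yr: ${old_opex:,} -> ${new_opex:,} ({1 - new_opex/old_opex:.0%} less)")
```

Note that with identical per-server cost, a 40% smaller fleet cuts capex and opex by 40%, not half; reaching the "half" figure would additionally require the new servers to cost less per unit of compute than the boxes being replaced, which is the per-watt and per-dollar leadership claim made elsewhere in the conversation.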

Speaker 2

It's a great lead-in. I was going to ask you: if you listen to NVIDIA, you would think that a GPU is going to do every workload ever over time. But clearly there are a lot of workloads that a CPU is going to handle, and in many cases, AI might even create more workloads for the CPU. Can you talk about how old the installed base of CPUs is, and whether people can continue to just ignore that spending? And then you mentioned better performance per watt. Is it wrong to look at this market on a unit basis? You're getting much bigger, super-high core counts, like Bergamo. Is it potentially not a growth market in units, and is ASP really the way to look at it?

Jean Hu
CFO, AMD

Yeah, I really appreciate the question. So first, Jason, when you look at different workloads, fundamentally there are so many of them, and the data explosion continues. Different workloads really need different compute engines. When you look at traditional foundational applications, your ERP system, your database, your shopping website, Meta's Facebook and Instagram, all those things, you don't need a GPU. The CPU has the best TCO for those kinds of foundational, traditional applications, and those things continue to increase. Generative AI is absolutely incremental; it's in addition to that foundational data and foundational workload. So your question is spot on: when we look at the server CPU market, we actually continue to increase core counts per unit.

So units are actually not a good way to look at this market at all, because when you have 192 cores with our next-generation Turin, literally, you are addressing a lot of problems on the general-compute side. And the core count has been increasing by double digits. Both our competition and ourselves are pushing core counts higher and higher. So the right way to look at it is actually core counts. Units are actually declining, but core counts have been increasing.
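The "count cores, not units" framing can be illustrated numerically. The 10% annual unit decline and roughly 30% annual growth in cores per socket below are assumed rates for the sketch, not AMD guidance; the point is only that total shipped cores can grow while unit shipments shrink.

```python
# Illustrative only: units shrink while cores per socket grow, so total
# shipped cores still rise. Both growth rates are assumed, not AMD figures.
units = 100.0            # indexed unit shipments, year 0
cores_per_unit = 96.0    # e.g., a current high-core-count part

for year in range(1, 4):
    units *= 0.90             # assumed 10% annual unit decline
    cores_per_unit *= 1.30    # assumed ~30% annual core-count growth
    total_cores = units * cores_per_unit
    print(f"year {year}: units={units:.1f}, cores/unit={cores_per_unit:.0f}, "
          f"total cores={total_cores:,.0f}")
```

Under these assumed rates, total cores compound at about 17% per year (0.90 x 1.30 = 1.17) even as units fall, which is why ASP and core count, not units, are the better lens on this market.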

Speaker 2

Right.

Jean Hu
CFO, AMD

So from an ASP perspective, we do think ASP will increase because core counts increase. In general, our view is, "Hey, this is a mature market. It's going to continue to grow. It may not grow as fast as generative AI, but it's a very healthy market, and we'll continue to gain share in this market."

Speaker 2

I wanted to ask you about the other announcement at Computex, AI PC. The first wave came out, and they're Arm-based. You came out with a product that has maybe 10 more TOPS, and I think for a customer, it's going to be seamless from a software perspective. So, your perspective: there have been some high numbers thrown out for Arm-based PCs. What's the selling point for those? Is there a battery-life argument they can make? Because on the other side, they're not priced any cheaper, and they don't run all the software.

It seems like if the AI PC is going to happen, it should be a good thing for you as well as Intel, but it's sometimes painted as a negative.

Jean Hu
CFO, AMD

It's interesting, right? The Arm PC has been around. You and I have been in this industry for a long time, and this is probably the second round for the Arm PC; the first round did not go anywhere, though we all paid a lot of attention. I think, fundamentally, if you look at AMD's solution on the AI PC side, not only do we have our latest-generation CPU, GPU, and NPU performance with 50 TOPS, we also know the whole PC ecosystem much better compared to the Arm PC players. When you think about the ecosystem, especially the commercial side, all the applications, everything in the enterprise has been based on x86, generation over generation. That backward compatibility, that ecosystem, is really important, and performance is also very important.

So from our perspective, we do think the AI PC is going to really help with the PC refresh cycle, and our leadership product, the Ryzen AI, will be on shelves literally in July. We do think it has the leadership features and capabilities that will help customers adopt the AI PC.

Speaker 2

I'm curious if you're seeing what the particular drivers are. On the enterprise side, there's Copilot. It seems, though, that no one's really making a distinction that it has to run locally, right? The consumer, or even the enterprise, is going to have maybe six or seven years' worth of different models, Intel, AMD, a whole mix. So drawing a hard line and saying, "Nope, you can't run this certain Copilot version," might be impractical. If that's the case, when we talk about an upgrade cycle, what applications are you seeing that are interesting enough to drive it?

And then, do you see someone eventually drawing a hard line, saying, "You need to have a one-year-old PC or newer, otherwise you can't do it"?

Jean Hu
CFO, AMD

Yeah, great question. It's very interesting. If you think about AMD, we actually introduced the AI PC first in the market, so the Ryzen 7000-

Speaker 2

Right.

Jean Hu
CFO, AMD

actually is an AI PC. But there were not many applications. So even though we had an AI PC, there were not many applications, from the customer's perspective, either enterprise or consumer. The most important thing is applications. The key is really how those applications can help enterprises improve productivity, and help consumers with content creation or with, you know, family photos and videos. That's why our view is that in the second half of this year, when these AI applications come out, then next year is potentially where you'll see AI PC adoption. Because only when there are AI applications we can offload onto the NPU, ones that improve our productivity, will we, the enterprises and the customers, adopt it, right?

I do think it's important to align the applications and the ecosystem to make sure we're not just paying a premium to get an AI PC, but getting more productivity from it.

Speaker 2

I'm curious, if you think about the business holistically in terms of share: the PC share gains have kind of moderated, and I think you're still gaining share in servers. I think part of the PC market story is penetration on the enterprise side, where Intel just has a very dominant franchise; it's hard to crack that nut. So I'm curious, are people thinking about it wrong if they think your PC share gains are going to remain at, whatever, 20%? And I think it fits into monetizing AI, too. Can you get enterprise share? That would be a benefit to you.

Jean Hu
CFO, AMD

Yeah. Maybe, if we take a step back and look at the enterprise side: the enterprise really requires a different go-to-market. In the commercial segment, each enterprise's CIO buys PCs and servers very differently from a consumer or from a hyperscale data center. So AMD has made a tremendous effort over the last two years investing in the go-to-market side. We hired our new chief sales officer from IBM, and one of the objectives is absolutely to focus on the enterprise go-to-market approach: not only having more feet on the street, but also understanding how to approach enterprise customers. The success we have been seeing is on the server side first: we are able to show CIOs the total-cost-of-ownership benefit so they can convert to AMD's servers.

It's the same thing on the PC side: you literally have to convince the enterprise CIOs to change in order to expand your market share. That takes longer, but with the go-to-market approach we have, and the capabilities and leadership of our product portfolio, we do think we'll continue to make progress. Just as we are gaining share on the server side, we do think that on the PC side, the commercial side, we will continue to gain share and make progress. Go-to-market is very, very important there.

Speaker 2

So I've asked you all these strategy questions, and you've done a fantastic job. I'll ask you a CFO question. If you think about next year, there are some big moving pieces. Who knows what the AI number is going to be? Obviously, AI PC might be additive to gross margin, and the data center stuff, you tell me, might be compressive to gross margin. How do you think about those moving pieces? Obviously, we don't know the magnitude of each, but how do we think about your gross margin, where it is today and where it could be, depending on those swings?

Jean Hu
CFO, AMD

Yeah. If you look at our gross margin, we have been making progress. In 2023, the company's gross margin was 50%; in Q1, we actually improved gross margin to 52.3%, and we guided Q2 to 53%. One of the key drivers of the gross margin increase is data center mix: data center has been growing much faster than our other businesses. Despite a headwind in the embedded Xilinx business, we were able to improve gross margin. In the second half, we think the same dynamics will hold. The Xilinx embedded business is going to recover, but gradually, and the major driver of gross margin improvement will continue to be data center. I think next year will be a similar picture, because the mix change is going to be more favorable.

Data center in general has a higher-than-corporate-average gross margin. Then, hopefully, when the embedded business stabilizes and comes back, that will be a tailwind that continues to help us improve gross margin. One thing, as you know, is that our gaming business has a lower-than-corporate-average gross margin, and the gaming cycle is in its fifth year, going into its sixth and seventh. The gaming business is going to be more muted, which is not good in itself, but it actually helps the gross margin mix.
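The mix effect described here is just a weighted average: if a higher-margin segment grows faster than the rest, blended gross margin rises even with every segment's own margin unchanged. The segment margins and revenue splits below are illustrative assumptions, not AMD disclosures; only the corporate figures (50%, 52.3%, 53%) appear in the remarks above.

```python
# Weighted-average gross margin under a mix shift. Segment margins and
# revenue weights are illustrative assumptions, not disclosed AMD figures.
def blended_margin(segments):
    """segments: list of (revenue, gross_margin) pairs."""
    revenue = sum(r for r, _ in segments)
    profit = sum(r * m for r, m in segments)
    return profit / revenue

# Year 1: a higher-margin segment (assumed 55%) is 30% of revenue,
# everything else (assumed 48%) is 70%.
year1 = [(30, 0.55), (70, 0.48)]
# Year 2: the higher-margin segment grows to 45% of the mix;
# each segment's own margin is unchanged.
year2 = [(45, 0.55), (55, 0.48)]

print(f"year 1 blended: {blended_margin(year1):.2%}")
print(f"year 2 blended: {blended_margin(year2):.2%}")
```

With no segment improving its own margin, the blended figure still rises purely from the revenue shift toward the higher-margin business, which is the "data center mix" driver described above; a shrinking low-margin segment (like gaming here) works the same way in reverse.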

Speaker 2

Okay. Well, with that, we're out of time. Thank you, Jean.

Jean Hu
CFO, AMD

Okay, thank you so much, everyone.
