Good morning, everyone. Welcome to the second day of the Deutsche Bank Technology Conference. I'm Ross Seymour, semiconductor analyst here in the U.S. for Deutsche Bank. We're very pleased to kick it off this morning, day two, with Jean Hu, the EVP and CFO of AMD. So, Jean, welcome to Dana Point. Thank you very much for coming.
Yeah, thank you. Thank you so much for having us.
So lots to talk about, a little thing called AI we'll get into in a minute. But I want to start off with some just kind of updated views on the macro environment right now. You guys have had significant ups and downs, as has everybody, on the PC side as well as the data center side of things. So just at a high level, where are we in that kind of corrective process?
Yeah. Yes, thanks for the question. So let me share with you what we are seeing in the markets we participate in, talking about the PC, or client, for us. As we all know, the whole PC industry has gone through probably one of the worst down cycles of the last three decades. So definitely Q1 was the bottom. We definitely see that. And if you look at our business, our Q2 client segment actually grew 35% sequentially. We also guided to very significant double-digit growth in Q3. The second half, seasonally, is typically better for PC, and next year you do have Windows 10 end of life and, potentially, you know, the AI applications that will help the refresh cycle. So we are quite optimistic about the PC and the client business.
The inventory, the sell-through, has been normalized. When you look at the data center, it has been quite a mixed demand environment. You know, some of the cloud customers were going through inventory digestion post the pandemic, and we also do see, you know, CapEx optimization with other customers. Enterprise, of course, continues to be cautious, so the demand environment is mixed. That being said, given the backdrop, our business has been performing really well. In Q2, we actually saw our Genoa family, which is the EPYC Gen 4 family, almost double sequentially on the revenue side. And we also guided Q3 data center revenue to grow double digits sequentially, largely driven by our own product cycle.
If you look at the EPYC Gen 4 family, not only is Genoa ramping quickly, but Bergamo is also starting to ramp in the second half. We also have Genoa-X, which is very focused on technical workloads, and Siena is coming, which is focused on the telecom and, you know, very cost-sensitive segments. So when you look at the lineup we have, we definitely see our server business continuing to grow and continuing to gain share in the second half. Of course, we have to talk about AI. When you look at the accelerator and GPU market, with the surge of generative AI, it's a really exciting market for us. We have been investing in MI250 and MI300, and we continue to make significant progress on both the hardware and software sides.
So in the second half, definitely in Q4, we are on track for MI300, both A and X, to launch and ramp. So for us, when we look at the market opportunities, it's actually quite exciting: the PC is coming out of the down cycle, in the server market we do have our unique product cycle driving our revenue growth, and, you know, next year we should get more meaningful, significant opportunities on the AI side.
Why don't we stick with the—well, before I ask this question, just the same thing as yesterday: if you have a question, raise your hand. We're webcast, so wait for the mic to come to you, and if I don't see you, just wave your hand harder. On the AI side of things, I think on your last call you talked about engagements being up kind of 7x sequentially. Talk about what an engagement means. People being interested in GPUs because they can't get enough from one of your competitors is fine, but I'm not sure that that really translates into business. So when you talk about engagements, define a little bit of that for us.
Yeah. I think during the last earnings call, Lisa mentioned that the engagement, the momentum, just surged tremendously, and we continue to see that momentum. The engagement is very broad. First, on the customer side, we have engagements with cloud customers, enterprise customers, and AI startups, and they are at all different stages of engagement, you know, from qualification to initial discussions, a very broad spectrum. I think the key is that when you take a step back to look at this opportunity, we are at the very, very early beginning of generative AI, and the way AMD thinks about it is really end-to-end. If you look at our product portfolio, we have GPUs, CPUs, adaptive compute; the AI engine is everywhere.
But fundamentally, right, with generative AI, what we are all looking at is the opportunity to improve productivity and the experience, and potentially generate $1 trillion of GDP. And when you think about that, it really fits how we think about AI. AI eventually is going to be everywhere. It's end to end. It's going to be very pervasive. So today, we're very much focused on the GPU side of the engagement because it's early, but at the same time, we also have our AI engine on the client side and the edge side. We do think that eventually, different workloads will need different compute, different AI capabilities. So the engagement you're asking about is very broad. It's not only on the GPU side.
We are also working with companies on the client side and the edge side. That's why it's so exciting. The way we think about it, it's so early, but if you look at the next decade, this generative AI opportunity will drive another significant compute cycle. And AMD, you know, is all built for high-performance compute, so it's perfect.
So, when people talk about MI300—well, when they talk about AI, all they think of is MI300.
Yeah.
You guys, as you just said, have a much more holistic approach to it. But if I do bring it back to the MI300.
Yeah.
I know there's a couple of different families there, the A and the X. What's the hardware differentiation that AMD believes it brings versus your primary competitor? And I'll follow up with the software side afterwards.
Yeah, I think if you look at how AMD approaches customers, the technology, and the solutions, Lisa and Mark Papermaster have always been focused on what customers need. So MI300A is what's designed for the high-performance computing segment. What those customers need is really very complex: GPU, CPU, and memory bandwidth altogether. So MI300A is tailored to HPC. But, as you know, AMD is the one leading the technology and innovation in the chiplet, modular approach to design and in 3D packaging, along with process technology. So what we have been able to do is quickly spin off MI300X, which is more tailored to generative AI and the cloud customers. One of its key advantages is definitely memory bandwidth and capacity.
When you're talking about training and inference, that's probably one of the most important things to improve the total cost of ownership and efficiency. So from the hardware side, we feel really good about how competitive MI300 is, you know, versus whatever is in the marketplace right now.
So what about the software side? I think that's where people believe there might be a bigger moat, because of, you know, CUDA having been around for 15 years or whatever it is now. And you guys are taking a different approach, with a much more open angle to it. So talk about how you're faring there and catching up on that front.
Yeah. Yeah, appreciate the question. I think it's very important to embrace the open-source ecosystem. Actually, if you think about AMD's software investment, ROCm has gone through multiple generations, so the investment has been there for a long time. Initially, ROCm was more focused on the HPC segment because the product was more tailored to that segment. But during the last several years, we have been working really quickly with the open-source ecosystem to advance ROCm, continually increasing its capabilities, functions, and features. So ROCm 5.6, which we just released, actually supports a lot of the open-source frameworks, right? PyTorch, Triton, TensorFlow, and others. So with the capabilities and the partnerships we have with our customers today, we are constantly advancing the software quickly.
We have definitely gotten to the point, you know, where we feel pretty good about some of the general workloads. We can really, basically, move the software, the model, to our hardware quickly. So it has been, like, literally every week you see significant progress.
I think if we get into some numbers around things, at least a general idea: the fourth quarter is really when the ramp is going to start. You mentioned earlier that it'll be the supercomputer side as well as some of the CSP side of things.
Yeah.
Talk a little bit about how the trajectory goes on the supercomputer side with El Capitan. I know you did Frontier a couple of years ago, so we have an idea.
Yeah. Yeah, with supercomputers, a lot of, you know, the ramp is very lumpy. It tends to be that one quarter has primarily all the volume, and then it will come down. So in Q4, we definitely feel very good about the tracking on MI300A ramping with El Capitan. That's when you will see the majority of the volume, and then in Q1, probably a little bit. I think we also mentioned that we do see other AI customers ramping in Q4. So the way to think about it is that El Capitan will definitely come down, but all the other AI customers we continue to qualify and work with as partners, those are going to continue to ramp.
By the second half of next year, you will probably see much more meaningful revenue from MI300X, which is more focused on cloud, enterprise, and other AI customers.
And we were talking out in the hallway before about kind of making sure people's expectations are correct, that it doesn't just happen overnight, maybe outside of the supercomputer side. So to the extent you have engagements now, I assume you're gonna be launching the chip, like you said, in the fourth quarter or third quarter. And by the way, are you guys gonna have a launch event at some point, like you did with Bergamo in June?
Yeah, in Q4.
Okay. You'll have that launch event then. Is the duration from that launch event to the second half typical? Is that because tons of validation is just necessary for each CSP?
Yeah, I think, as we discussed earlier, our customer engagements are quite broad, and we are at a very, very early stage of this multi-year, or decade-long, kind of evolution and product cycle. So for us, the qualification of MI300 is quite technical, so it will take time, and it's also specific to each customer's model and workload. So it all depends on the customer. A really large language model and a bigger cluster will probably take more time, and maybe for smaller customers it will take less time. So it's all a different spectrum of the qualification process.
But when you think about the opportunities, the engagement, the momentum we have, the different stages of customers we are ramping, and the resources we are building to support the ramp, we are really at the very beginning of this journey. And the exciting thing is, as you build up all the customer qualifications, the revenue ramp will just come in stages, right?
The last question on AI is on the supply side. Demand is obviously off the charts for everybody, but back-end packaging, front-end wafers, those sorts of things. How is AMD kind of getting things ready, positioning itself for the ramp? Any sort of limitations on the supply side?
Yeah. One of the things I have learned since I joined AMD is the team's focus not only on technology and the product roadmap, but also on execution, the supply chain side, working closely with our supply chain partners. So we do feel pretty good about working with the partners to make sure we have the HBM memory and the CoWoS capacity going forward, and executing on that trajectory. Definitely there are constraints here and there, but overall, we feel really good about securing ample capacity for next year.
Great. So why don't we just move on to the, you know, not-directly-AI side of your data center business, the server CPUs. You talked about the back half of this year being much stronger, but it sounds like that's mainly due to your own product cycles, whether it be the Genoa ramp or the Bergamo side of things, and even Siena ramping. Talk a little bit about where you see actual demand. Is it AMD gaining share? Is it that ASPs for all those products are higher than their predecessors'? What's driving that significant growth in the second half?
Yeah, that's a great question. I think if you look at our server business, since 2017 we have really established a multi-generation roadmap, and with each generation, the key driver is to continue to improve the TCO for our customers. And when you look at the Genoa generation, the whole platform, not only Genoa but Genoa-X, Bergamo, and Siena, all of them actually provide the best TCO, whether it's performance per watt or performance per dollar, for our customers. So what we're seeing, definitely, is our cloud customers deploying Genoa, across all the major cloud customers and also OEMs and ODMs. That is the major driver. In Q2, it almost doubled sequentially, and in the second half, Genoa continues to ramp with all the cloud customers and increasingly more enterprise customers.
And Bergamo, you know, Meta literally is adopting Bergamo from Facebook to WhatsApp to even Instagram. So that absolutely helps us with the momentum. I think, fundamentally, if you look at today's data center, cloud or enterprise, one of the key things is operating cost: how they can be more efficient, more power efficient, and save money. So what we're really offering is that opportunity for customers to save money. That's the product cycle we're really driving, not only this generation but the next generation.
What's your thoughts on the CPU crowding out or getting crowded out by the GPU argument? In the first half of the year, that kind of made sense. In the second half of the year, obviously, big GPU growth at one of your competitors, but you're doing great in the second half on your CPU side of things. What's your thought on that dynamic?
Yeah. There is definitely some optimization we have been seeing with cloud customers and even in the enterprise. You know, everybody's trying to figure out how to invest in AI. People may be cautious about CPUs, but fundamentally, our belief is that different compute engines fit best for different workloads. In the end, it's all about the TCO. A lot of today's workloads, you know, surfing the web, Facebook, Instagram, all those things, CPUs can support very efficiently. So our view is that those workloads, the millions of lines of software code written on x86 CPUs, will continue. Right now there may be some optimization, but in the long term they will continue. And more importantly, even if you look at the GPUs, you still need a head-node CPU.
It plays a critical role in managing the GPUs, the cluster. So I think our view, definitely, is that in the longer term you will see compute everywhere. GPUs can do inference too, can do small recommendation models, too. Customers are going to really focus on what's best from a TCO perspective, replacement cost, whatsoever. In the end, it all goes back to economics, right? What makes the most sense from a semiconductor hardware perspective. And AMD has always been focused on just that: what the customer really needs and, you know, what economics they can drive from the solution we provide.
The last question on the data center side I want to talk about is a little bit on competition. We've seen, and you and I have talked about some of the competitive pressures on the client side of things over the last year, in the midst of a downturn, admittedly. But on the data center side, AMD's done a superb job gaining tons of share, performance leadership, cost leadership, all of those sorts of things.
Yeah.
There's other architectures that are coming. There's internal ASICs that are being built, and then your largest CPU competitor is also accelerating their roadmap. How do you see the competitive environment as you look for next year or the year after?
Yeah. I think, first and foremost, we have always assumed the market is very competitive. If you look at how we push the roadmap and technology evolution, there are multiple elements to how we think about competitive advantage, and the moat we're going to build starts with design innovation. Mark Papermaster and Lisa are very focused on that. Secondly, packaging, because we know Moore's Law is slowing. So for a long, long time, since they joined AMD, they have been focusing on 3D packaging, really driving leadership in packaging, with a lot of innovations there. And third, working with partners like TSMC. There has been a lot of process co-innovation between AMD and TSMC to drive our server roadmap to where it is today. So it's that combination, and also flawless execution.
So every two years, we have the cadence to come up with a new generation. And look at Milan, which was introduced and has been in production since 2021: Milan today is still very competitive compared to Sapphire Rapids. We still drive tremendous adoption, even with AI, right? For the head node, Milan is still a very, very cost-efficient solution. Then Genoa, its performance is unmatched. So we do think we'll continue to drive the innovation and the roadmap development with execution. Next year, we'll have Turin. So we feel pretty good about where we are, and especially, the fundamental drive is to provide customers with the best TCO.
The last question on that topic is: I know pricing matters less in these infrastructure cloud markets than it does on the client or consumer side of things. However, once performance is relatively equivalent among peers, then, you know, TCO can be code for pricing pressure. Do you see that happening? And we can talk about gross margin a little bit later. But how do you see the pricing environment on the data center side of things?
Yeah. So it goes back to AMD's strategy, which is to have a platform of different technologies. If you look at the Genoa family, we have Genoa, really, for the most performance-hungry workloads customers need. Then Bergamo, with 128 cores, the density is really high; it's for cloud-native workloads. Then Siena is, you know, probably the most cost-efficient and works for telco, the small-and-medium-business side, the edge side. And Genoa-X is for technical workloads, which is probably the most expensive. But the key thing is to give customers different products for different workloads so they can achieve the best economics. Even Milan today: if a customer feels like Milan is really enough, sufficient, they get the best cost advantage. So the idea is to try to continue to deliver more value to customers, add more features, but at the same time, you know, our pricing needs to reflect the value we add, the IP and technology we provide to customers.
Great. So I've spent a lot of time on the data center side of things. I think that's appropriate, given your story. For the other segments, I'm going to hop from topic to topic, so it'll be a little more rapid fire, and again, if people have questions, just raise your hand. So on the client side of things.
Yeah.
Is AMD shipping back to demand at this point?
Yeah. I think on the client side, we went through a very significant inventory adjustment. We had been selling in much lower than sell-through, so the downstream inventory digestion could go through. Right now, it's really normalized. I think sell-in and sell-through are quite balanced. And I think in the second half, it's probably going to be the same balance between sell-in and sell-through, continuing to get the inventory out.
Do you believe the PC market will be a growth market for you because of your ability to gain share, or is that kind of stabilized in a market that, you know, might be 300 million units or somewhere around there?
We do. We do think the PC, or our client business, is a growing business for us. I think, fundamentally, as we get through this down cycle, we do see some potential tailwinds from Windows 10 end of life and also, you know, the AI capabilities everybody's driving. That refresh cycle will probably be better next year compared to this year. And for us, we actually have a set of very competitive products. One of them, our Ryzen 7040, actually includes the AI engine. If you look at the volume, the traction, and the revenue generated from this particular product line, it has been quite impressive.
So for us, when we look forward, not only is the market more stabilized, but we do have a very competitive product portfolio, and we are focusing more on the premium side of the PC market, right? We don't have anything with Chromebooks, all those things. So we do think we can continue to drive share gains going forward with our competitive product portfolio.
Keeping on with the rapid fire: the embedded side, a little bit different in its cyclical correction timing. Great business, it's grown really nicely this year, but it seems like it's finally paying the cyclical price a little bit. Where are we in that corrective process, and when do you think it can return to growth?
Yeah, and Ross, you and I, we talk about this often. This down cycle in semiconductors is so different, right? It's almost like it's staged, you know, one cycle after the other, every vertical literally going through a different cycle at a different time. For us, PC and servers are actually already coming out of the cycle, but embedded is literally just starting the cycle, in the sense that, you know, the lead time has been normalized. It used to be, you know, 53 weeks or longer, but right now it's more normalized. So what you are seeing is definitely customers ordering more to their demand, versus in the past, when they tended to order way ahead of time. We did see, you know, all the delinquencies we used to have, it's all gone, right?
Right now, we're really looking at customers adjusting, some of them adjusting their inventory positions. So you do see that we guided Q3 down sequentially, but Q4 is going to be flat-ish versus Q3. You know, we hope next year. Typically, inventory adjustments take, you know, a couple of quarters, but by the second half of next year, we do think we'll see the embedded business picking up. More importantly, if we look at our embedded business, aerospace and defense is doing really well, demand continues to be strong, and the testing, automation, and healthcare side continues to be really strong. We also have the AI side of things incorporated within our embedded business.
Especially, you know, the other thing we see is that between Xilinx and AMD's embedded processor business, we do see tremendous synergies. We get more design wins with Ryzen or EPYC servers in cybersecurity, in all the different networking boxes. So that will help us drive longer-term revenue growth for our embedded business.
So the last segment, and I'll make it one quick question and hopefully a quick answer, because I want to make sure to wrap up by talking about gross margin: the gaming business. Semi-custom has been a huge, great business for you all.
Yeah.
Maybe not as much on the gross margin side, but definitely on the operating margin side. But that cycle is getting a little bit dated. Will gaming, at least in the next couple of years, grow again, or is it kind of in the plateauing, just-staying-mature stage of the cycle?
Yeah, it has been a great cycle. It's probably one of the best gaming cycles if you look back historically. And right now there are a lot of new title releases, so even now, you know, the demand is quite good. But, as you said, I think this year is year four, and next year will be year five of the cycle. Very normal. It's maturing. We do expect the semi-custom gaming business to decline next year just because, you know, it's year five. In the longer term, I think we'll continue to drive the next-generation roadmap to make sure the gaming graphics side continues to grow. So I think overall, you know, it's a great business.
It goes through cycles, but once this cycle is done, typically you will see the next cycle ramping up.
Got it. So in the last five minutes we have, let's talk a little bit about the margin side. On gross margin, AMD has done a superb job of structurally raising the gross margin of the company, up into, you know, the low-to-mid-50s percent, with a target higher than that. In the near term, it's a little bit lower than that. We've talked a little bit about the pressures on your client side of things, and you break out at least the operating margin on that. Has that pressure started to lessen, and where do you think that can normalize versus the kind of low-50s percent I think you peaked at in the first half of last year?
Yeah. Yeah. When you have the PC client business, which is quite cyclical, the headwind created during the down cycle was quite significant. That's why our client segment's gross margin was a headwind for the overall AMD business. I think right now, what we're seeing is that once you get through the normalization of channel inventory, the gross margin comes back for the client business. The way to think about it is, if you look at the second half, we actually guided Q3 gross margin at 51%, and we all know the embedded business is coming down significantly, so the mix is actually less favorable. But we were able to guide gross margin to go up 100 basis points sequentially, largely because the client business is stabilizing and coming back.
We do think, you know, in the longer term, the client business will continue to improve gross margin. It is a very competitive environment, no doubt, but I think the key thing for us is to continue to drive leadership and focus on the premium part of the market, so we can continue to improve the client side of gross margin. Longer term for AMD, data center, as we talked about, is the largest incremental revenue driver for the company. So we do think, over the longer term, we should continue to progress our gross margin to reflect the value we provide to our customers.
So if you look at—let's leave gaming out for now—your other three segments: I wouldn't think embedded gross margin would really change too much; it's been very, very steady over the years. Client, I think you just talked about. But on the data center side of things, how do you see that moving as the AI side comes up, more cloud-specific concentration, maybe going into enterprise, some of the data center GPUs versus CPUs? There are just so many different moving parts. Should we think, as investors, that the gross margin in your data center segment is higher, lower, or the same over the next few years?
Yeah, that's a great question. You're absolutely right, we have a very diversified product portfolio within our data center. In general, we do expect data center gross margin to continue to improve. If you look at the investment we are making as a company, we really continue to pivot the R&D investment into the areas where we get the highest return on investment, which we believe is the data center. When we invest more there, typically you end up providing more features and more capabilities for customers. So at a very high level, that's what we believe will continue to drive gross margin up. Of course, at the next level down, there are a lot of puts and takes, right?
On the server side, we'll continue to see gross margin improve. On the GPU side, because it's a new product introduction, you really need to mature the product line first, so in the near term, the gross margin may not be as normal as when you get to the mature stage. But that will be a tailwind in the longer term: you introduce the product, ramp it, and eventually mature the product line to the point where the gross margin is much healthier. So I do think, in the longer term, not only will we continue to improve the server side of gross margin, but the GPU side of gross margin will continue to go up.
The last question, in the 30 seconds we have, the OpEx side of things, I think your target is 23%-24% of revenues, if I recall. Does that need to change given the AI investments, or can AMD keep within that band?
I think that's definitely our target. We continue to think that's the right target. You know, with our model, once revenue grows, the leverage is quite significant. So when we look at the opportunities, especially on the AI GPU side, the accelerators, today we have essentially no revenue there. So the ramp of the GPU business and that revenue will help us leverage our model much more significantly. So 23% to 24% is really the right target.
Great. Well, we are out of time, but, Jean, thank you very much for joining and kicking us off this morning.
Okay, thank you. Thank you, everyone.