Good morning, everybody. Welcome. This is the first event at TD Cowen's 53rd, got it right, annual TMT conference. Really pleased to be kicking it off with Mark Papermaster, CTO of AMD, and Matt Ramsay, VP of IR and Financial Strategy. Gentlemen, thank you for joining us. Matt, welcome back.
Thank you.
Mark, you have a unique vantage being a key decision maker at a very important technology company at a time when it seems like technology has never been more important. I mean, if we reflect back two years ago, no one was talking about generative AI. Maybe to start big picture, what are the key problems that you and your customers are trying to solve for today? What are you most excited about to be working on?
Josh, it's a great question to get us started at the 53rd TD Cowen event here, which is pretty awesome. Congratulations. Look, it has really never been a more exciting time in computing. The reason is, just as you said, it's only been 2.5 years since generative AI took off. I don't know about you all, but I remember hearing the buzz and getting on and just starting to ask questions, putting prompts in, and just being amazed at the depth of the knowledge base in these models and what they were able to relay back to you as an individual. It was still a novelty at that time.
When you think about what's happened over the last two years, it's gone from novelty to people starting to figure out how to take generative AI, and now agentic AI and reasoning as we've had expanded capabilities, and really apply them. It has brought us to a point where computing has never been more important in terms of its ability to impact our lives, our personal lives, and certainly, in a dramatic way, our business processes. The challenge is scale. It's absolutely scale. It's how to be on a faster pace of innovation than we and our peers in the industry have ever been on and how to drive technology forward. AI will be embedded everywhere, in everything from supercomputers to the largest data centers to all of your PCs. If they're not already, they'll be AI enabled.
You saw the latest OpenAI investment to really take a serious run at a wearable, where even how you get at your AI is simpler and simpler. Very, very exciting times, incredibly pivotal time.
Awesome. Thank you for that. I mean, when you talk to semis investors, they're perpetually worried about digestion and overinvestment. When you talk to you and listen to you speak and listen to technology companies, it's all about more and solving problems. I mean, how are your conversations with your customers? I mean, is there any concern of overbuilds or is it all about, again, trying to solve problems?
There has been a huge investment. When you think about just the last 12 to 18 months, there's been a huge ramp of investment in building up AI capability. No one could afford to be left behind. There has been a big capacity buildup. Now what you're seeing is a rationalization. Most businesses, the CIOs I talk to, have run their proofs of concept. They've gotten the low-hanging fruit of how they're applying AI. They now have a plan. They're targeting which areas they want to go after to dramatically improve their productivity. What they're also realizing is that, hey, wait a minute, the investment can't just be on the AI and building up that capability. My old workloads didn't go away.
I still have to refresh my server farm because I'm running all my payroll. I'm running all of my CRM. I'm running all the applications which are not massively parallel like you have in a generative AI application. They're best suited for an x86 CPU, which is where we're going to excel and bring absolute best total cost of ownership. That investment's coming back. There's also a refresh cycle with Windows 11 on PCs. I think everyone's looking much more lucidly at the broader compute landscape and realizing, yes, they're going to be growing their AI capabilities year after year, particularly as inference starts to take off. I'm sure we'll talk more about that later. They're also thinking much more, as I said, lucidly about their broad IT needs because it turns out AI is driving overall a much higher demand for computation.
I want to dive into AMD's specific role in that AI story for the industry. I think given all the attention that's put on it, we forget that MI300 and MI325 are really AMD's first explicit data center accelerator parts. You're close to officially launching your MI355 part. Where are we in the maturation of your accelerator franchise? What did you learn on MI300 and MI325? What are the next steps as you roll out MI355 and then MI400 and MI450 next year?
That's a great question. It's really interesting when you look at our journey in AI, because people think, oh, OK, we showed up in December of 2023 with the MI300. We had a great ramp last year, the fastest ramp of any product ever in AMD's history. We went from virtually zero revenue in 2023 to $5 billion last year. It was indeed our first step into dedicated AI GPUs for the data center. But our journey was literally over a decade, because while we were focused on getting leadership CPUs out, which drove the key catalysts of the turnaround of AMD, we had been investing in GPUs for compute. We had won federal grants to drive how CPUs and GPUs could work best together. That led to the win for AMD CPUs and GPUs in what are now the world's number one and number two supercomputers.
We have had the top supercomputer for three years running now. It was those seeds that then drove our software investment. Just like NVIDIA first developed CUDA for high performance computing and the National Labs and then extended it to AI, we did the same. We took that AI stack for HPC, and if you think about what we did last year, we hardened it as MI300 came up and went into production across major hyperscale installations. You ask, how do you think about the roadmap going forward? We are committed to a very, very fast cadence of new products. We are actually on a roughly annual cadence in getting products out, just like the dominant player in the GPU industry.
What you'll see us do is repeat what we did in CPUs for servers, or the CPUs and APUs that we have in the PC and embedded markets. That is, we're going to come out with great advantages every product cycle. We're going to win on bringing total cost of ownership advantage, but also innovation and key partnerships. Look at the MI350 family, which we'll be sharing a lot more detail on June 12. We have an event called Advancing AI in San Jose. What we'll talk about is how we stay in the exact same universal baseboards, the same infrastructure standard that we're in today, yet bring a 35x improvement in inferencing. That's with design changes. That's with optimized math formats to get you more efficient inferencing, and more. That's the next iteration.
We are well along on our design of the MI450 family. That is a 2026 deliverable. Again, it is going to bring significant performance improvements, but it is also going right after massive scaling. It will be able to take on even the largest training workloads.
You guys have been clear that for the Instinct family, you feel you can best compete in inference first and then training longer term. Can you talk about the broadening of the inference workloads and customers that you're able to service? In particular, and I realize you have the event coming up so there's only so much you can say, how could MI350 potentially broaden your engagements, given what you're going to be able to bring technically on the universal baseboard?
Yeah. When you think about where we are today, how did we get those gains on MI300 and MI325? It was inferencing, because that's where we brought advantage. We designed in more memory. We had the world's best experience in chiplet technology, and we leveraged that with MI300. We had an eight-high stack of high bandwidth memory. That is in immediate silicon proximity, with silicon-to-silicon connectivity to our GPU and IO complex. It was an incredibly efficient inference engine. In fact, when DeepSeek came out, people realized right away, my God, I can run DeepSeek on one AMD GPU. It takes two of the competitor's. We really leveraged technology to get that inference advantage. We have benchmarks out there with MLPerf and, again, the DeepSeek and Llama results that show that advantage. MI350 will continue the same.
Again, it's easy to adopt from an infrastructure standpoint, but we're going to keep that memory advantage. We're going to keep a throughput advantage. We couldn't be more excited about what MI350 will be doing. It also will start us down the training path. What we've done for MI350 is enhance our networking capability. We had invested in Pensando. Our AMD Pensando team has created an AI network interface chip that is finely tuned to accelerate our AMD Instinct platforms. As well, we partnered with the industry. This is our forte at AMD. What you'll see is different kinds of configurations, different networking solutions, different OEMs providing the tailoring that you need for your workloads. That applies, of course, to hyperscalers. Hyperscalers are going to invest and they're going to have a very tailored design.
It turns out enterprise has an equal breadth of inference and applications. They're all deciding right now how they're going to start deploying their inferencing. What's the combination of on-prem versus what do I run in the cloud? Typically, on-prem is more of their day-after-day inference and applications. Yet when they need to tune a model, they're going to do some training. MI350 starts us down that training path. We'll build clusters of thousands of GPUs, not hundreds of thousands, but thousands of GPUs that will be built in clusters around MI350. MI450 brings full scalability for even the most demanding training workloads.
I wanted to follow up on that last point on enterprise. I think we've traditionally thought of enterprise AI forays as going through cloud vendors. It seems from speaking with you over the last couple of days that you're seeing more pull, or maybe it's just at the point where there's enough capacity that enterprise customers can be serviced. What are the sorts of applications where you envision direct engagements with enterprise customers? How meaningful can that be? We're all focused on the big checks from the hyperscale vendors, but I'd be curious to hear your thoughts on enterprise AI adoption as well.
People underestimate, I think, the impact of DeepSeek, because what DeepSeek showed is that you could create an open model, but you could also bring innovations that allowed more efficiency. It turns out everything doesn't have to run on many-billion, hundreds-of-billions, or trillion-parameter large language models. When you focus tasks, which enterprise typically is doing, with specific agentic tasks for their company to really speed their productivity, they're going to be able to focus more on their enterprise applications. You're seeing CIOs and heads of infrastructure start to hone their strategy. No surprise, it's landing where what they've been doing for years on their CPU compute landed, meaning it's landing on a predominantly hybrid model.
They're running on-prem where it's more cost efficient, or where the data they have and, frankly, the models and the weights trained on their proprietary data are things they don't want to leave the premises. They're making that investment to run in their controlled IT, yet they're still hybrid. They're running on clouds where they need large compute clusters. You don't run those constantly. You run those when you're fine-tuning the models you're running. They're leveraging the cloud for that, as well as bursting to the cloud, again, just like you see in CPU operations. One example I'll give you is health sciences, so drug discovery. When I talk to companies in this field, they are getting vast improvements in their time to discovery and their time to market of new drugs and new treatments.
It is incredibly proprietary, what they have. Their data is gold. It is not that you cannot trust the cloud. Of course you can. We, as an industry, have tackled those challenges. We have leaned in at AMD with confidential compute in a huge way, which gives you even more trust in the cloud. What they want is complete control over their crown jewels. At this stage of the game, we are going to see that play out in a number of industries.
Some of the earlier feedback on your Instinct family was that the software needed work and maturity. Enterprise seems like it's amongst the harder challenges to solve for from a software standpoint. You've moved to biweekly releases on ROCm. I've asked Matt directly, like, how do I know those biweekly releases are good? How should we as investors from the outside looking in judge and analyze the capabilities and maturation of your software stack?
Yeah. That's a great question. The first thing I want to do is share some context that may be helpful to you. It's a reality check. As I said, we had been working on the ROCm stack for years, but we went to production in December 2023. In 2024, no surprise to you, the focus was on really hardening that stack, making sure that it ran flawlessly, that people could just bank their business on it. We went to full production level at Meta. If you look at many of the Meta properties that you run on, you don't know it, but inferencing is running on AMD in the background. Look at the production instances in Microsoft running on Azure on MI300, as well as first-party applications there. Oracle brought up MI300 as well.
Last year was focused on getting a number of hyperscalers to full production level. Everyone will benefit from that, because now you have a hardened ROCm stack. Because we had to focus on the fundamentals of getting to full production level and getting the performance attainment we knew we could achieve, what we did not do last year was maintain the software for the broad community at the rate that they need. That has been addressed in 2025. In the first quarter, as you said, we went from quarterly to literally every-other-week software releases of the new changes. AI is nothing but a constant rate of change, of tuning and performance improvements. By the way, look at ours or a competitor's.
Look at the performance of any product you release and look at that performance one year later. You'll see it's typically doubled or gone up 2.5x, because that's the kind of software improvement that you bring out over time. We did that last year for our hyperscalers, and we're now doing it in parallel for the hyperscalers and the whole community, along with a lot more communication. We're putting out blog posts as well as documentation that we brought to bear. We're really excited about our support for the community now.
Last one on Instinct, I promise. We'll move on. Your plans for your Rackscale offering are different from your competitors'. I was wondering if you could speak to your view on the appetite for, and specifically what customers want from, Rackscale offerings, and what your plans are with ZT Systems, now yours. What's your plan for your go-to-market there, and how is it different from your largest competitors?
First off, it should be recognized by all that Rackscale is hard. I've grown up with Rackscale. If you look at my history, it was years at IBM before I went on to take this role a dozen years ago. When you recognize the challenges, you realize that you actually have to architect for Rackscale from the very beginning. What we've done with the ZT acquisition is really strengthen our hand. It brought in 1,200 engineers who know how to design for Rackscale and who were part of a design and manufacturing company, ZT Systems. We've just recently announced the divestiture of the manufacturing side.
Those engineers, those 1,200 Rackscale design engineers, are skilled not only at designing for the highest performance and the ability to cool, whether air-cooled or liquid-cooled, which is deep design skill, but they were also brought up in a design and manufacturing house. Everything that they think about in terms of that optimization I just described comes equally with a focus on manufacturability. That's what's key at Rackscale: can you manufacture in such a way that you have signal integrity to the myriad of connection points across that cluster you build out? Do you have thermal management? I mean, the power demands of each of these GPUs are going up dramatically because people are trying to get more compute efficiency per square foot in the data center. ZT really allows us to bring that design for high performance and manufacturability.
We did not wait until we closed. We actually put a consulting contract in place with them before we closed. They have directly influenced our MI450 design for Rackscale. Now they are helping us speed that MI350 to market now that we have closed. Very, very key addition to the company.
Let's just shift gears, or I guess shift frequencies is probably more appropriate, to the CPU side. On client in particular, your revenue has significantly outperformed your largest competitor and also a lot of third-party data. A lot of that on the revenue side has been from ASP gains. I was wondering if you could elaborate on what's driving the ASP change, the mix underneath, how much room you think is still left on the PC side for those share gains, and how durable those ASP gains in particular are.
We are very much focused on revenue share gain. We do not have a fab to fill. We are trying to drive the best financials for the company. What we have really focused on is delivering the best-performance notebook with the longest battery life and the top AI capability. People want to future-proof their purchase. If you look at the market share results, we have been shipping AI PCs for over two years now. We are number one in terms of selling AI PCs that have actually been activated with Windows Copilot. People really want to leverage not only the technical performance capabilities, but also the future-proofing for workloads. Microsoft just recently announced a number of exciting changes as Copilot becomes more pervasive across the whole Office suite of applications.
I think people are going to see it more and more as indispensable. That is where we focus, on bringing the best overall capability for the best price. It is still a great TCO play, but it is really about the solution value that it brings. Likewise, on desktop, we have at this point a dominant share. It has grown dramatically because we have a daunting performance lead. When you buy a desktop, you are buying it because you are running your most demanding spreadsheets and workstation-class applications on it, and you want it right under your desk. You might be a gamer. When you run with our high-performance CPUs, you get dramatically better gaming performance. Yeah, that revenue share gain is 100% attributable to the leadership roadmap that we have, and we do not intend to slow down.
Josh, maybe I could double-click on a couple of things there. I would be remiss if I did not say thanks to everyone in the room at TD Cowen. You guys know that I was on this stage with Mark last year doing the keynote in a little bit of a different capacity. It is great to see all of you and to continue to partner with your firm over time. This place meant a lot to me. Thank you for allowing us to be here. Just to double-click on the client point, we did grow revenue in the first quarter, I think 68% year over year, in our client business. If I were an investor, I would ask the same question. Is that sustainable? Right? About 2/3 of that growth was ASPs, which is gaining share in much more revenue-rich and margin-rich parts of the market.
We did grow unit share year over year. Interestingly enough, we get the question all the time about whether there are pull-ins in your PC business ahead of COVID, ahead of tariffs, rather. Hey, my brain is at least functioning a little bit today. We did see a little bit of that. We're not going to try to deny that it exists, but we're really working hard to manage the business around it. Our units in the first quarter in our client business were actually down high teens sequentially, more than typical seasonality. We actually had sell-through exceed sell-in in our desktop business and in our client business overall. We're really trying to do the best we can to manage the business to ride out some of these perturbations given all the geopolitics and tariffs that are happening.
I think some of the ASP gains are sustainable. We've been giving some commentary, and I think Lisa shared this on our earnings call that we're kind of anticipating for right now a little bit subseasonal in the back half of the year. If the PC market acts normal, I think we'll be in a really, really good position to benefit from that. Right now, I think being pragmatic about the environment is sort of responsible, and that's what we're doing. All of the things that we've gotten and the share gains that Mark talked about in the right parts of the market, I think are going to be with us for a bit.
Yeah. The other question we get as we look into the back half of the year: you've previously said you expect margins in the second half of this year to be better than the first half. I think the concern is that the data center GPU franchise, as you've been very clear, is gross margin dilutive, at least for now. What is driving the confidence in the back-half margin strength? Maybe you could talk about some of the underlying mix within the segments that is helping support margins.
Let me just start with a macro view, and Matt, I'm sure you'll chime in with your thoughts. When you look at our focus on enterprise and what we're doing across both the PC space and server, what we're seeing, as I mentioned earlier, is that companies are realizing that although they spent quite a bit of their wallet share over the last year and a half on creating their base AI capability, they now have to go back to refresh. Think about commercial PCs. We're incredibly well positioned with that value prop I just described: our performance capability, our leadership AI enablement on a PC, and a TCO advantage. In commercial PCs, we're seeing strong growth. It couldn't have been highlighted more than by Dell announcing over 22 platforms with AMD. Dell had been a long holdout on actually adopting AMD for commercial PCs.
Now, with a broader portfolio and the ability to go to the leaders who make the buying decisions across enterprise with that broad complement of offerings, from their PCs across all of their data center needs, we are positioned very well. What we are seeing is actually tailwinds behind our EPYC servers in commercial for the second half of the year. I think AI has given us a bit of an unexpected tailwind. As I said, people now realize they have to go back and reinvest in their CPU complexes. Likewise across commercial PCs.
Yeah. Josh, just to add a couple of things to what Mark mentioned. Gross margin at AMD has always been driven by mix, and the mix of the business is changing. Unfortunately, because of some export restrictions into China, we had to pull about $1.5 billion of planned revenue for MI308 shipments into China out of this year. That was at the bottom of the margin stack in our data center GPU business. We're now building inventory again in our console business. We had been draining it for a while, and now we're shipping back in line with consumption. Our client business overall is in aggregate larger than you would have thought when you were modeling it maybe six months ago.
There are a lot of moving parts, and I think you'll see margins expand just a tiny bit in the back half of the year. Now, I've been saying this in a lot of different forums, but if the management team does stumble into a big AI deal, we're going to take it. We're trying to drive footprint and dollar share of gross margin dollars, which drive gross profit and free cash flow. If the margins change, it'll be because the mix of the business is drastically different. As Mark said, inside of client and inside of server, the margins are getting better within those segments because of the enterprise play.
All right. Unfortunately, we're out of time. Mark, thank you so much for coming and kicking off the conference. Matt, thank you for coming home. That was less weird than I thought it'd be.
Josh, thank you. Thank you to TD Cowen.
Thank you, everybody here.
Thank you.
Thank you all.