Advanced Micro Devices, Inc. (AMD)

TD Cowen 52nd Annual Technology, Media & Telecom Conference 2024

May 29, 2024

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

Okay, everybody. Awkward silence time. Thank you very much, everybody, for coming out to the keynote session here of TD Cowen's TMT Conference. My name's Matt Ramsay. I work on the semiconductor research team, and my microphone is doing funny things, and that's going to annoy me. All right. You guys can let me know if it gets better or worse. But we are really, really excited to have Mark Papermaster from AMD, who's been the CTO of the company for a long time after a distinguished career at IBM, at Apple, at Cisco, and has been CTO for the last 10 or 12 years. And the products that his team has engineered in the market have changed the AMD market cap by, you know, 150 times, something like that.

So, I mean, maybe that's good. Mark's been a friend for a long time, and we're delighted to have him here. So thank you very much, Mark, for coming.

Mark Papermaster
CTO and EVP, AMD

Thanks, Matt, for having me.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

People would ask me why we chose to invite AMD to be the keynote here, and I think it's pretty simple. AI is affecting computing at literally every level, from the data center to the PC, embedded wireless infrastructure, gaming, gaming consoles. And AMD is the one company that has either a number one or number two position in literally every one of those markets. And so I think Mark's a great person to give us all some perspective on what's going on on high-performance computing and AI. So, on behalf of everyone here, before we get started into the fun stuff, Mark, just thank you for your partnership. Where I wanted to start this conversation is a little bit backward-looking, actually.

It's been interesting to watch your company grow from $2 billion in market cap to $270 billion or whatever it is this morning. And it'd be interesting if you could just kinda talk about how, when yourself and Lisa came to AMD, where you started from, what you learned along the way, big pivotal moments in the company's history, and maybe some things where, "Man, if I had that one back, I'd do something a little different."

Mark Papermaster
CTO and EVP, AMD

Sure. Well, it's been an incredible journey, and we have a long way to go on the journey. But, you know, if we take a moment and look back, I came into this role, as Matt said, just over 12 years ago, and at that point, I was a 30-year veteran in the industry, having had the opportunity to work on some great products all around high performance, all about really making a difference in computation. And I came to AMD knowing it had a rich history of IP building blocks, having had success in CPU and GPU through the ATI acquisition, and great technologists.

I had worked with a number of technical leaders at AMD over my career prior to joining, and so it was really a thrill to have that opportunity. So if you look back on the journey, the opportunity taken was to trust the 30 years of experience and to know, as my intuition told me, that workloads were very much moving to needing more and more high performance. Think about it. It was, again, 12 years ago. AI was just starting, so acceleration was really coming into its own.

You know, I looked at where the trends of putting that technology together were headed and at the building blocks AMD had, and I knew that we had all the building blocks to be the right provider in that direction. So when you think about what the tough decisions were, it was to trust that intuition; you have to have the courage to trust that and to lean in on the high-performance roadmap as we did. First, rebuilding Zen and our CPU franchise, because that was the fastest path to revenue. You know, we had well established ourselves in the x86 ecosystem, and so we rebuilt that Zen CPU prowess. But we didn't trail far behind on the GPU.

What most people don't realize is that we immediately went after bids, and won bids, with the Department of Energy that built up our GPU capability and the software, the ROCm stack, starting with HPC and then, of course, expanding that to AI. So we've been at it over a decade, Matt, of high performance, and we made a number of tough decisions, you know, under Lisa's leadership and in my role as CTO, that leaned in on our strengths and didn't get distracted by all the other things going on in the industry.

If you ask me what surprised me most, it was simply how fast the uptick of AI demand has been since the advent of generative AI, because it didn't change the secular trend that we had been marching to on high-performance compute capability. But generative AI fundamentally lowered the barrier to leveraging and getting access to high-performance devices, because now it's a spoken interface. You know, the prompt that you ask taps vast computing and vast data. So that's probably been the biggest surprise for me, and I think for many of us here in the room.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

Yeah, I think that's right. I mean, on my team anyway, we've had the thesis that for semiconductors, if you look at basically every single vertical of the economy, more CapEx and more OpEx go into computing. And I think, from my point of view, generative AI is just a massive accelerant of that, and that's what we're seeing. So, it's interesting, right? The last 18 to 24 months have been the fastest change I've seen in the data center market. I think that's pretty much consensus. From your seat as CTO, then, what big changes are you making in allocation of R&D dollars, of hiring, of focus, just across the organization that you manage?

There have to be some. I mean, products for AMD like the MI300 and its successors are the most important products at the company now, and in investor meetings 36 months ago, no one would have asked about them. And I think that's true of your big competitor as well. It's true of everyone in high-performance computing. So what are the big changes you've made in the last 12 months?

Mark Papermaster
CTO and EVP, AMD

Well, we run a very well-oiled capital allocation process. We look every year, as we're planning ahead, at where we're gonna spend our R&D. Clearly, if you look at where we're growing the resource, the last two years it's been vastly dominated by AI. We've added, you know, hundreds of engineers... well, actually, I'll have to say thousands of engineers across AI, because we have AI-enabled our entire portfolio. We're shipping AI enablement across our entire portfolio.

The last couple of years, we have leaned in even further on the data center and AI, really bringing to market our MI300, which expanded us from a production-level position dominant in HPC to having two versions: one which is HPC-optimized, the MI300A, which is powering the Lawrence Livermore supercomputer that's being stood up right now, and the MI300X, which is what Microsoft announced just last week at their Build conference and which is in production in Azure. One of the things, though, Matt, that's really helped us is the modular design approach that we adopted to fuel the turnaround at AMD. It makes us more agile and allows us to have high reuse.

So the fact that we're leaning in to the extent we are in AI doesn't mean that we stop investment in the other, you know, core businesses that we have. They may not have the explosive growth rate that we're seeing in data center AI, but they have strong secular growth each year, and they share so many of the building blocks that we have. Think about our high-compute devices: how do they connect to memory? How do they connect to I/O? How are we handling, for instance, the neural net engine, which we reuse, that came in through the Xilinx acquisition? It's already in our Ryzen PCs, and yet it's also in our embedded devices. So it's a very thoughtful process, and one that really leverages modular hardware and modular software.

We're gonna have, at the end of this year, a unified software stack across every AI implementation we have: whether you're running on our GPU, our CPU, our embedded devices, or our PC, one unified software stack.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

That makes a lot of sense. So I wanted to shift the conversation a little bit, because as you and I talked about leading into this, the audience here is a little bit wider than just us knuckleheads in semiconductors. One of the things that Lisa and yourself orchestrated at the company is that AMD's internal IT organization actually reports into you as CTO. And I think what a lot of us are wondering, just broadly, is how broader enterprises are actually trying to tackle AI. Things are moving so rapidly at the hyperscalers, and those are the most well-resourced, smartest companies in the world. They're doing their thing. But for the broader enterprise, every boardroom meeting has to be about what you are doing in AI and how you're adopting it.

I'd be interested to hear your perspective as maybe a customer of your AI business in the AMD IT organization and what the challenges are, what you're excited about, et cetera.

Mark Papermaster
CTO and EVP, AMD

Yeah, it was definitely a smart move by Lisa. I think it was, at this point, nine years ago that I had IT fall under the CTO and head of technology and engineering role that I play. Because at every meeting that we have, as we're thinking about that next-generation CPU or the roadmap that we're tackling, we're also thinking about the very problems that we have as a Fortune 250 company in tackling all of the huge compute demands that we face, which grow at every product turn.

We have, you know, over 150 billion transistors in the MI300, so you can imagine the compute required, and now the AI capability. So our board has the same question for us that every other CIO gets, and that is: What are you doing with AI? How are you improving your processes, improving your productivity? What all of us are doing is looking at the data we have and the processes we have, and asking: Are we truly leveraging the data, harnessing the data? Are we creating models from that data that can solve problems faster? And that's across, in our case, engineering. It's across our supply chain. It's across our financial and our HR processes. And we have an advantage because we're a technology company.

So what did we do? We immediately started optimizing our own models, using open-source models, training them on our data, and now fine-tuning them to get higher accuracy. That helps me, my CIO, and the rest of the leadership at AMD. As we talk to other CIOs, they're going through the same journey. They want to run hybrid. Sometimes they're running their most demanding tasks in the cloud, but they also wanna have a local capability. Some of the models and the data are so precious, they actually don't want them leaving the premises. And we are able to walk them through, based on our own experience, the different options that they have, and of course, explain to them how AMD has brought competition and can bring tremendous AI solutions to bear with our product portfolio.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

So I guess I made this joke last night at dinner, but it went well, so we'll try it again. My first question on the products is gonna be on gaming. I'm sorry, we're not gonna do that. We're gonna do it on AI. Bad crowd here. I just wanted to ask: since yourself and Lisa came to the company, the most important business, as you restarted it, was always going to be data center.

Mark Papermaster
CTO and EVP, AMD

Mm-hmm.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

And that's had different instantiations, from turning the server business around completely, from a standstill to a leadership position, pretty much unquestioned across most metrics, to evolving and folding Xilinx into the portfolio, and then eventually the Pensando acquisition and some more work you're doing on networking. And now, obviously, generative AI has gone vertical and we're watching the innovations there. But I just want to get at the first-order tenets of your data center strategy. Is it just high-performance leadership across the board, no matter what application? I just want to understand how you think about the goals of the data center franchise.

Mark Papermaster
CTO and EVP, AMD

Yeah, data center. I grew up in data center. For those who don't know my history, I spent, you know, over two decades at IBM, providing technology to data centers across the world. What you learn when you grow up in that environment is that, first and foremost, when you serve the data center, you're serving customers that are entrusting their business to you. So yes, they want the economics. It's a given that you have to bring a total cost of ownership advantage. That's a tenet that was certainly very clear to me as I took this role and we started driving our roadmap to absolutely bring that total cost of ownership. But equally, it has to be reliable.

You have to build in the fact that it can hit the kind of five-nines reliability levels. The data centers need to know that you're gonna be up and running and meet your serviceability commitments. Beyond that, you have to be a trusted partner; you have to execute. And so our data center strategy is about not only providing leadership technology, not only providing one that's reliable and meets the standards for the diversified product SKUs that our customers may need, but hitting every product cycle as we committed and on time. That's what it means to serve a data center, and that's what we've been focused on.

I mentioned, you know, that we understood where technology was going at AMD. We saw that Moore's Law was slowing, and so for the data center, to provide that TCO gain generation in, generation out, your strategy has to encompass the technology trends. We developed a modular approach, one that allowed us to be first in the industry with chiplets. That allowed us to have our compute engines on the cutting-edge node, and our connections to memory, I/O, and support circuitry on prior-generation nodes. We were the first to adopt this chiplet approach in both lateral 2.5D and vertical 3D stacking.

And so, you know, Matt, I think the story is, if you want to play in data center, you've got to have a broad approach that's customer-centric. You're listening to where the workloads are going, and yet you have a portfolio of technology that can bring the best solution, the best semiconductor node, to the problem at hand.
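As a brief editorial aside (not part of the spoken remarks): the "five nines" reliability level Papermaster mentions above translates into a very small downtime budget, which a few lines of Python make explicit.

```python
# Editorial aside: quantifying the "five nines" availability target.
# An availability of 99.999% leaves only a few minutes of downtime per year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960 minutes

def downtime_minutes_per_year(nines: int) -> float:
    """Allowed downtime per year for an availability of `nines` nines."""
    unavailability = 10.0 ** (-nines)  # e.g. 5 nines -> 0.00001
    return MINUTES_PER_YEAR * unavailability

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes_per_year(n):9.2f} minutes/year")
# Five nines works out to roughly 5.26 minutes of allowed downtime per year.
```

Each additional nine cuts the allowed downtime by a factor of ten, which is why five nines is considered a demanding target for data center hardware.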

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

No, it definitely makes sense. It's funny that you talk about the history of the data center business like that. I remember you and I being on stage eight or nine years ago at one of these things, and you being asked if you guys were just taping desktop parts together to try to make servers. Well, we've come a long way since then, with leadership basically across the board in the server industry. So I wanted to ask: we're kind of at a similar point now with generative AI, right?

Mark Papermaster
CTO and EVP, AMD

Mm-hmm.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

There's a market leader that's fairly obvious to the investors in the room. And there's a very deep set of relationships AMD has built with all of those key customers, on hardware, on software, on design. And there's also, I think, a recognition by all of those customers that they would like to have some diversity in their supply, some customization in their partnerships, et cetera. So maybe you could just give us a little bit of a status report on where you are now. Is it similar to what happened in server seven or eight years ago? Is it not? What do the relationships and the roadmap look like right now?

Mark Papermaster
CTO and EVP, AMD

Yeah, our history at AMD and what we've done in the data center with servers is a huge boost to the efforts that we now have as we expand into GPU in the data center. The reason for that is straightforward. It's what I said a moment ago. You have to establish yourself as a trusted supplier, and you need to establish the relationships around everything from the product itself. Are you listening? Are you incorporating the customer requirements? And are you establishing the ties all the way through the field organization, on hardware and software? And so when you think about the data center and what we did with x86, it was not only to deliver generation after generation.

We're shipping our fourth-gen EPYC, and soon, in the second half of this year, we'll ship Turin, our fifth-generation EPYC server. And it's been done in such a way that we've listened to customers, understood the requirements, hardware and software, and folded them in. That communication, that trust, is directly applicable. What's new for us in data center GPU is not the hardware elements. We were tried and true in terms of the hardware elements. Where we had to race is in building out the software structure in a rapidly changing AI software and ISV environment. As soon as you think you have the complete set of software support, a model changes.

And so, in that relationship with the largest hyperscalers driving the most demanding AI models, you have to develop a set of partnership and listening skills, and an agility in the roadmap, that is beyond what we had ever done before. Our time to market for the MI300 was vastly faster than any data center product in the history of AMD, and the resulting revenue ramp has been the fastest that we have ever experienced at AMD. The rate and pace at which AI is moving is really astounding, and it's required us to almost put the company on its head in terms of reinventing ourselves and ensuring that we can react at that kind of warp speed.

But that's exactly what we've done with the MI300 program and the roadmap, which follows it.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

Yeah, a couple of follow-ups there. The first is on that pace and cadence. Your large competitor recently, very publicly, changed the cadence of their product introductions from once every two years, give or take, to about once every year. And there are two sides to that coin. One, do you guys think you can keep up with that cadence? And the other side is, do you think the customers can assimilate technology at that cadence? So that's the first part. And then the second part is on the forward roadmap. Maybe this isn't the right forum, but I wouldn't be doing my job if I didn't ask. Any comments on the forward roadmap? I know there are a lot of folks that are kinda interested in that, let's say.

Mark Papermaster
CTO and EVP, AMD

Well, first, let's talk about our commitment to roadmap at AMD. Again, what did you see us do as we entered the data center CPU market? We set an aggressive roadmap, we committed to it, and we executed on a regular drumbeat. That drumbeat for the x86 server market is about every 18 months; you'll see a new product introduction. And honestly, that's what I think the rate of absorption is for our customers' industry. It typically is 18 to actually 24 months. But we are, at our hearts, a nimble competitor, and what the market demands, we will deliver. We're on an annual cadence as we refresh our Ryzen line of PC chips every year.

We're at an annual, 12-month cadence in what we do for our GPU products. So we'll let the market decide. We've modified our cadence, and we'll be sharing more as the year progresses with our roadmap. We've accelerated our roadmap as well. And if our customers can consume productivity enhancements in that GPU roadmap at a 12-month cadence, we're certainly prepared to do so. Again, we've been doubling down on our R&D investments, and we also leverage the chiplet methodology that I mentioned earlier. I mean, think about what we did with MI300. Now, I'll just hold up an MI300 module here. The version I'm showing you here is the MI300X.

This is what Microsoft announced last week as in production in Azure. It's a virtualized instance, so it's leveraging that same hardware-based virtualization capability, which is very, very efficient, that we've had on the CPU and that's in multiple generations of Instinct, including MI300. It's been in production for video serving, and is now in production for AI in Azure. And with that chiplet methodology, we pivoted to offer both an HPC-optimized version of MI300 and an AI-optimized one. So going forward, as you think about how we handle an increased cadence, we will leverage our modularity. You pick and choose which levers you're tapping: memory enhancements, I/O enhancements, new GPU cores, other accelerators, and new math formats.

So the name of the game going forward, at this astounding pace that we are all on in terms of AI compute, is what I call holistic design. You have to bring all of those elements together. You have to work across the stack. And so, Matt, that's how we're going to respond to the cadence, and we're going to do it with an ecosystem. We're committed in our development to leveraging open source in terms of our software stack, and open standards in terms of how we put solutions together, right through the rack level.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

It's interesting, what you just mentioned there was actually my next question, so maybe your eyes are really good. One of the things that has happened through generative AI over the last 24 months is that NVIDIA has changed itself from a GPU company to a card company, to a server company, to a rack-scale company. And when you look at what they've done, I ask myself as an analyst: Okay, well, which companies actually have all the building blocks and the technology to fight back against that? And the list gets really short, really fast.

Mark Papermaster
CTO and EVP, AMD

Mm-hmm.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

So I'd like to hear what you and your team are thinking about in terms of AMD's expansion to the system level, the rack level, and frankly, the data center level, as you partner with customers.

Mark Papermaster
CTO and EVP, AMD

Yeah, you know, again, I grew up over two decades at IBM, so I'm well-versed in what it takes to put together and optimize system solutions. You know, I think our competitor is almost taking a mainframe approach of putting the whole solution together, and you can very finely optimize that, and there are certainly merits to that approach. But the other side of the coin, and where AMD will very much differentiate, is that you bring more people with you and you can offer more options when you commit yourself not only to open software that brings a whole development community with you, but to open standards in how the solutions are put together. Open standards don't mean that everything gets tied up in committee.

It's typically done best with a small, focused consortium that decides the pinch points of the best way to put these system solutions together. Matt, we've been very actively engaged in the Open Compute Project, OCP. If you look at the platform in which we ship the MI300 today, it's an OCP platform. And so is our competitor's today with the H100. So you can take out a competitive sled and drop in an AMD MI300 sled. And so if you change the differentiation point to be the rack, our strategy is gonna hold. We're gonna continue to differentiate all the way through the rack level.

We have engineers skilled in each of the areas of optimizing a rack-level implementation, but we're gonna do it in such a way that we partner and we create an ecosystem. And what that's gonna create is a broader set of SKUs, of offerings, and a broader ecosystem. Our strategy is to take the ecosystem with us as we move forward and as we optimize at a stack level. It is not new in the data center to optimize at the system level. We've been doing it for generations in x86 servers. And it's not that dissimilar as you think through: What is the workload? How is it running? How are you scaling out to achieve it?

Are you partnering with ISVs thoroughly to optimize? That is a constant across data center applications, and that is the equation and strategy that we're bringing to data center GPUs.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

Makes a lot of sense. There's only gonna be a couple instances where I'm gonna be able to take a question or two from the audience. So if anybody has something they want to try to bounce off of Mark, we'll get you the microphone, and Erica's right in the front here and caught my eye, so.

Speaker 3

Hello? First of all, thanks, Mark, for being here and spending time with us so we can learn from you. You kind of alluded to this earlier in response to a question from Matt, but how important has your initial success with the hyperscalers and enterprises you've partnered with on MI300 been, on the hardware side but importantly on the software side? How has that been critical to the success with MI300? What have you learned, and how is that helping you win new customers beyond the Microsoft, Meta, and Oracle that I think we're aware of? And can you take those learnings and use them with other customers? I guess it's more of an IP question as well.

Mark Papermaster
CTO and EVP, AMD

Yeah, it's a great question. We are a collaboration company at AMD. I mean, that's what we are famous for, and we listen extremely well, and that's what we've done in each of the markets that we serve. But for data center GPU, we had to earn the seat at the table to be that collaborator. So we had to demonstrate that we had brought our software stack, the ROCm stack, and AI up to proficiency, and that we had the compute capability up to efficiency. And that's what we did in the Instinct roadmap, building up to the MI300.

And so the deep partnership we've had with several hyperscaler players has been absolutely essential, because these are the companies working on the absolute cutting edge of the technology of the next-generation large language models. And when you service these largest kinds of opportunities and you have that seat at the table, you know the problems you need to solve. It's invaluable in not running down the rabbit trail of solving a problem, but not the most important problem at hand.

And so what the collaboration with Microsoft, Meta, and others has done for us is really help us prioritize what matters most to them, which therefore is what matters to their, you know, many internal customers and thousands of external customers that run on their third-party platforms. So that part's been invaluable. But also, I think we can plainly state that it was the pull to have competition in the market that we needed. So one, we had to be credible, and we had established that trust through our build-up of the EPYC x86 server line. But we're not confused. We have to deliver here. The performance bar is very, very aggressive.

We have made tremendous strides in listening very well and folding improvements into our software and hardware stack. We announced that ROCm 6.0 was at production level at the end of last year, yet we just announced 6.1. And what does 6.1 do? It folds in the learnings as we've come up to the production ramp that you saw announced by Microsoft last week, and the same with the feedback we have from the other partners. So we'll continue to collaborate, and we'll continue to move very, very quickly and iterate our hardware and software in this fast-paced environment.

But we're really pleased to have earned the opportunity to be the only other production-level GPGPU in the industry today, with the kind of competitive level of training and leadership level of inference we have today. And again, you asked about our roadmap, Matt. I won't be going through details here today, but we have increased the cadence of our roadmap, and we'll be sharing more specific details as the year progresses.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

I wanted to shift gears a little bit. There are a couple of parts of building an AI hardware system that are important, and one that I get questions on a lot is networking. NVIDIA's done a lot of things in their roadmap there. They obviously made a big acquisition in networking and have added system-level capabilities. And I get questions from investors who I don't think have the full picture as to what's gonna happen in the next 18 to 24 months on the networking side. And I guess I would ask: Do you feel like you have all of the assets that you need, between what you got on the SerDes side from Xilinx, what you got from Pensando, and what you've done internally on Infinity Fabric?

Do you need to do anything external, or do you have what you need internally? One of the things I get all the time is, "Oh, they're wed to this PCIe standard, and it's gonna move really slowly, and NVIDIA is doing their own thing, and they can move really fast." I'd just like to get an update on the networking side specifically, 'cause it's a topic that I get asked about a lot.

Mark Papermaster
CTO and EVP, AMD

Yeah, we're very heavily invested, both internally and with the ecosystem, across networking. And I'll point out to you that the fabric with which we connect our CPUs and GPUs is the Infinity Fabric. You've been hearing us talk about that for years. It's been a very, very effective, proprietary interface, because it moves at light speed, and it's driven by the internal requirements to make sure our CPUs scale effectively and our GPUs scale effectively. But again, I said earlier that we're committed to bringing an ecosystem with us.

We announced in December that we're opening up the needed elements of that Infinity Fabric so that you can have a broad ecosystem of industry solutions for how accelerators are connected, without relying on PCIe or CXL. Those are, I'll say, more complex stacks, so they're higher latency and won't run at the same bandwidth as when you unencumber that with, I'll call it, a lighter-weight protocol. That's exactly what our Infinity Fabric does. We announced that we're opening that up to a small consortium, a consortium that will move quickly.

In fact, there'll be an announcement even later this week with specific details associated with that consortium. I think you'll agree with me that it's off to a running start. And beyond that, Matt, we're committed to supporting the whole networking ecosystem. We run on InfiniBand with ConnectX today. We're very supportive of Ethernet, and we're a founding member of the Ultra Ethernet Consortium, which has laid out a very detailed plan to allow Ethernet to be a prime-time player in providing solutions for highly performant AI systems. Ethernet's already there today. It is an alternative. You see it across a number of installations.

But with the Ultra Ethernet Consortium, what you're gonna see is, you know, advanced scheduling and quality-of-service algorithms put in place, which are very much needed for AI workloads. And we're seeing very quick and strong progress from this consortium.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

That makes sense. Maybe I'll try to take one more question from the audience. Max, go ahead. This shows you that showing up late and sitting in the front is, like, very advantageous, but-

Speaker 4

Hi, Mark. Thanks for being here and for all the details in the conversation. I wanted to ask a question on the potential for heterogeneous chiplets, and even custom chiplets developed by customers, to be plugged into AMD systems, especially as AMD already has a well-established semi-custom business model in gaming. So are we getting closer there? Maybe some investors have concerns, as Matt's talked about, about competition for AMD from custom ASICs, but could heterogeneous chiplets give AMD the same capabilities?

Mark Papermaster
CTO and EVP, AMD

Yeah. So, you know, first of all, heterogeneous, meaning a CPU with accelerators, is a requirement going forward. We've been talking about that for over 10 years. There's no debate on that now. Everyone recognizes you need this type of heterogeneous compute environment to be able to keep pace with Moore's Law. So, that said, it's going to evolve. It's going to evolve into an ecosystem, and we're gonna be very supportive of that. That ecosystem relies on a couple of different things. One, as we go to a chiplet-based world, you need standards for how the chiplets interconnect. We're there: again, we're a founding member, with Intel and others, of UCIe. It's a standard way in which the chiplets can be interconnected.

So you have to figure out, in the kind of dense computing world that I showed you an example of with the MI300, where we have both lateral and vertical stacking, how do you do that with others, right? You leverage the interconnect standards, but that's not enough. You need as well protocols that allow you to readily adapt and connect accelerator to accelerator. That's the new Accelerator Link consortium that, as I said, we talked about in December, and you'll be seeing more details announced later this week.

As well, it's what we will do with our semi-custom group. If someone has a business where they really have an element that is their own, whether it's their own CPU or their own accelerator, then with our semi-custom division, we're at the ready, as we've done already across the gaming and consumer industries, to take that customization right into the data center.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

I wanted to switch gears a little bit, Mark. This is admittedly a loaded and sarcastic question, but, like, with all this AI excitement, should we just all assume that general-purpose servers are dead completely? Is that what we should do?

Mark Papermaster
CTO and EVP, AMD

You're hitting my hot button, you're doing a good job of it here, Matt. No, I mean, look, think about what's going on with generative AI. What it's doing is layering on a new capability. It's not displacing, for the most part, the existing computation or tasks that you had at hand. What do I mean by "for the most part"? Most algorithms, let's say how I'm closing my books every cycle, or how I'm running my customer relationship management, you know, the algorithms that you're running for that type of work, are not necessarily stochastic. They're not probabilistic. They're taking advantage of just millions of lines of code that you've developed generation after generation for a bespoke task.

Let's take an example: Salesforce.com, a SaaS application. The base application's not going anywhere, but there's a generative AI Einstein capability being bolted right on top of it, right? And that is a game changer. But they work in conjunction together. And for the most part, that's what you're gonna see: an AND function, not an OR. There will be some exceptions. There will be certain algorithms, certain applications, that lend themselves very much to the probabilistic, you know, weight-driven approach that AI represents, where they can really leverage, you know, an accelerator, and the entire application moves over to the acceleration. So we're gonna see both, but in most cases, Matt, it's an AND function, not an OR.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

No, it's really, really clear. A few topics, we got 15 minutes or so, the handy-dandy shot clock up there is telling me that. So I'm gonna make sure we get to some topics that I wanted you to speak to this group about. The first one is AI and the PC. Microsoft had their Build conference last week. Computex is this coming week in Taipei. You guys are obviously a huge player in the PC market, alongside Intel and a few others. And now there's this big push to add inference capabilities, Copilot capabilities, and things like that into a PC. I don't think any of us know how quickly that's gonna take off, but from a technology standpoint, what does that mean?

Does that mean that you need to figure out how to add all these capabilities in somehow without sacrificing battery life? Does that open opportunities for your company to gain more PC share? Does it open opportunities for, like, new competitors in the ARM ecosystem to come in and take share? You know, what are your impressions of the AI PC market, and what do you think it means for your company technologically?

Mark Papermaster
CTO and EVP, AMD

Yeah, I'm very excited about what's called the AI PC, and the reason I am is that it's adding a whole other capability to speed your productivity on what is already your content generation device today. You're developing that presentation that you're showing to your boss tomorrow. You're having a Teams or a Zoom session with your colleagues on the other side of the ocean. I mean, your PC today is that primary personal compute device.

What AI does is add a whole new set of capabilities on top of that, because of the incredibly energy-efficient neural processing unit that we've been shipping in our Ryzen PCs since early last year. We announced our first such PCs, the Ryzen 7040 series, in 2023. And in 2024, we expanded to the 8000 series, we brought it to desktop, and we're announcing on June fourth at Computex a leadership level of neural network acceleration.

When you bring that type of floating-point, incredibly energy-efficient capability into a PC, and you preserve the battery life with the efficiency of that offload engine, the neural net engine, it's a game changer, and it'll be driven by applications. So what will create the category? It's actually not us or any of our competitors. It doesn't matter if it's ARM or x86; it really won't be ISA dependent, instruction set architecture dependent. It will depend on the integration of the features and the kind of experience that you deliver at the end of the day. We're a leader in delivering high-performance PCs that solve your business needs and that are also extremely energy efficient. And we've done that historically.

We were the leader in APUs that integrate CPUs with GPUs, and now we have a year and a half under our belt of adding the neural net processor in addition. So what's gonna drive this new category is the applications. Once we see the new version from Microsoft with their effects package that enables new capabilities, and once you start seeing ISVs come out with generative AI enablement that can run locally, you're gonna see the new category established, and then we will win on the merits like we always have in the PC industry. We'll win on the merits of the experience that we provide. And that experience will be game changing.

When you talk to your colleagues on the other side of the ocean, you're gonna have immediate language translation between you. And, you know, that presentation for your boss that you used to send off to the help you might have had at a larger corporation, a media team, to fix up? No, you're gonna be interactively giving voice commands and describing what you need, and it's gonna be created before your eyes with local computation on your laptop. It's an absolute game changer. And I think this is just the start in 2024.

As we get into 2025 and beyond, I think we're absolutely application driven, and you'll see the direct impact on the market.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

Now, very, very interesting. My team knows that I'm incredibly deficient at working Excel models, so I'm waiting for some help. I wanted to talk about the infrastructure side of the business. As you know, I was a longtime proponent of the AMD-Xilinx combination, and I think it's brought not only revenue and gross margin and all those things to the company, but also some technologies that are very important and can now be applied to your generative AI programs going forward, et cetera. Now, we sit here a couple of years after the Xilinx merger. How would you characterize, from your CTO seat, the technologies that got brought in?

I'm particularly interested. There's a large sort of enterprise and wireless infrastructure market opportunity out there ahead of you. Some in the audience might know why I'm a little biased toward the Siena product, but I just would love to hear your sort of, I don't know, postmortem on the Xilinx deal and what you think it's brought to the company technologically.

Mark Papermaster
CTO and EVP, AMD

We couldn't be happier with the Xilinx acquisition. The integration has actually gone ahead of schedule, so we're already achieving the revenue and productivity synergies that we had anticipated with the acquisition. And the reason for that is that we moved swiftly. So, you talk about building blocks: even before we closed, we had licensed and started the integration of the neural net engine, which is already a generation beyond where it was at the time of the acquisition, and which powers the newest generation of AI PC that we'll be announcing on June fourth at Computex.

When you think about the advanced networking capability that Xilinx had: we combined that Xilinx network team with the Pensando team, and it became AMD's data center networking team, which we put together right away after close in 2022. That team is an integral part of our roadmap. In fact, you said it earlier: are we stuck at PCIe speeds and feeds on our connectivity? No, we have a very deep portfolio that we've integrated very well through acquisition. So, there are plenty of other examples of where we've leveraged the building blocks. But beyond that, what we're thrilled about is the market expansion that we got.

We created one embedded division: the historic, classical AMD embedded business was merged under the leadership of the Xilinx embedded team, which had, you know, deep routes to market and a very, very technical sales force, and so that's been leveraged, and we could not be more excited about the serviceable TAM that has grown with that acquisition. All the limelight goes to the data center CPU and GPU because of the incredible ramp rate that they're on. But take an embedded business, which is a high-margin business, and take it from single-digit percentage growth, you know, to what we foresee going forward as sustained double-digit growth.

Obviously, there were, you know, inventory issues across embedded that had to be worked through in 2024. But going forward, we are very, very pleased with the embedded market growth opportunities.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

... Awesome. So, a couple of topics to close out here, and this has been a fascinating conversation. But I wanted to widen the aperture of the conversation a little bit. In this exact session tomorrow at the conference, this is my shameless commercial, myself and a couple of my colleagues are gonna be having a session with Chris Miller, the author of Chip War. I'm sure you've read that book, Mark, as

Mark Papermaster
CTO and EVP, AMD

Mm-hmm

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

... as I have. And so there's lots of conversation around supply chain and the need to source high-end semiconductors from Taiwan and within the China sphere of influence, and what the CHIPS Act is gonna do to help or change that, and I don't know. As you and Lisa and the team at AMD think about these things, maybe give us a couple of bullet points as to what your top priorities are as CTO. I mean, is it multi-sourcing? Is it geographical sourcing? Is it funding for CHIPS Act partners? I'm just trying to think about how important that is on the agenda for, sort of, the AMD annual planning process.

Where does that appear on the agenda, and what you think about that and the implications of it for the industry?

Mark Papermaster
CTO and EVP, AMD

Well, what the last few years have shown us is that the semiconductor industry fundamentally needed more geographic diversity. You saw that with supply chain shortfalls during the pandemic. You see it with geopolitical tensions, you know, across the world. And certainly here in the U.S., the CHIPS and Science Act has accelerated that investment domestically. We're certainly supportive of geographic diversity. We're supportive of those efforts. Lisa and I are both personally involved. Lisa was part of the President's Council that made a number of recommendations, and those went to the Department of Commerce. I've been on, and remain on, the Industrial Advisory Committee for the Department of Commerce.

And so for us, along with our peers in industry, this is where it becomes not a competitive initiative but an industry requirement: that we work together and establish that geographic diversity. And we see strong progress, and it's not just semiconductor fabs. You're seeing fabs come up in the U.S., TSMC coming up in Arizona, Samsung investing outside of Austin, Texas, Intel investing in domestic fabs, but it's more than that: it brings with it the supply chain ecosystem around packaging and the key materials needed. So it's an important diversification that's occurring in our industry, and we could not be more pleased with the progress, and we're fully supportive.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

I guess my... I have two things to end with. One of them is just sort of an open question to you, as someone who's been in this industry for over 30 years and who, frankly, along with Lisa and the whole team, has architected a turnaround that I don't think any of us had seen in the semis industry. Any advice for this audience? Just things that we should be watching out for technologically that are coming, things that you've learned, anything you think of... Or even stuff that you hear investors talk about all the time, where you think to yourself: "What the heck are they talking about this for? Why aren't they talking about that?" Just big-picture thoughts.

Mark Papermaster
CTO and EVP, AMD

I think the big-picture thought, we've touched on a little bit. Let me underscore it. I think we continually underestimate just how disruptive AI is in everything that we do. What we're trying to do at AMD is just accept that reality. We accelerated bringing AI across our portfolio. We're looking at every single process that we do and trying to drive and measure the productivity improvements that we can achieve. And the thing that I'll also underscore is that AI can't be a single-horse race. It can't be. It requires competition. It requires a multitude of solutions. There's no one-size-fits-all when you look at AI becoming a part of everything that we do.

Think about the diversity of compute that's required when you're applying AI in everything from your mobile phone, your PC, that factory floor device, edge of network at the base of every cell tower, to the biggest supercomputers in the world. I think we underestimate the vastness of the disruption going on and the vastness of the resulting, frankly, opportunity for those of us in the chip industry.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

No, super fascinating. Last question from me. University of Texas football record as they go into the SEC this year is?

Mark Papermaster
CTO and EVP, AMD

Our record is gonna be great, Matt, and I can't wait until we rematch with your favorite team, Alabama, because it's gonna be a different result next time.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

All right, cool. Anyway, thank you all for your attention. Thank you, Mark, and to the folks at AMD for the partnership, and well done.

Mark Papermaster
CTO and EVP, AMD

Thank you.
