Advanced Micro Devices, Inc. (AMD)

Analyst Day 2022

Jun 9, 2022

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Welcome, everybody. We're delighted to have you here with us this afternoon in California. Thank you for joining us in person in the room, as well as to the many folks who are joining us via webcast. We have a tremendous lineup for you today; it's been a couple of years since we last came together for this very event. We're going to walk through our long-term strategy and vision, as well as the long-term financial plan to support that vision. You'll hear from many of my colleagues as they take a deeper dive into the key elements of today's agenda.

Before we start with the formalities of really kicking off the detailed information we have to share with you today, I would like to remind everybody that today's content may contain forward-looking statements based on current expectations and assumptions. They speak only as of today and, as such, may have uncertainties and risks associated with them. You can find an update on those risks and uncertainties in our SEC filings, our 10-K and 10-Qs, which are posted on amd.com. In addition, we will be referencing some non-GAAP information today; you can find the reconciliations from non-GAAP to GAAP on amd.com as well. Before we kick off here today, I would just ask everybody to make sure that you have your phone on silent. With that, without further ado, let's get started. Welcome to AMD's 2022 Financial Analyst Day.

Speaker 26

This is the world's most advanced processor. In entertainment, its rendering speeds render other processors obsolete. It drives the future of autonomous driving, powers cloud services for billions, helps change the course of climate change, connects communities of gamers anytime, anywhere, and uses AI to accelerate disease detection and cures. We make the world's most advanced processors, but only with your vision can we advance the world. AMD, together we advance.

Moderator

Please welcome AMD Chair and Chief Executive Officer, Dr. Lisa Su.

Lisa Su
Chair and CEO, AMD

Good afternoon. It's great to see everybody here in person. Thank you for joining us in Santa Clara, and thanks to those of you who are joining us on the webcast. As Ruth said, we have a tremendous agenda for you today, and we're really excited. This is our first in-person event in over two years, and so much has happened over the last two years. If you really look at AMD today, we are a completely different company, and I'm excited to share that with you. I'll talk a bit about our strategy, where we're investing, and where we see the market going. You're gonna hear a ton from the team on the product roadmaps and the technology roadmaps and everything that we're investing in over the next five years.

Of course, we'll cover the long-term financial model and how that all comes together. Let's go ahead and get started. Let me just start with, you know, sort of the overarching principle of what motivates us at AMD. You know, we are all about high performance and adaptive computing. What really gets us up every day is taking technology, pushing the envelope, and creating solutions that solve the world's toughest challenges. This has always been our North Star, but it's even more true today if you take a look at our current portfolio. It's actually been a little over two years since our last Financial Analyst Day. Actually, many of you were with us. It was also in San Jose here, and it was in March 2020, right at the beginning of COVID.

It's been such an amazing and challenging couple of years, if you just think about everything that's gone on in the world. It's also been just an extraordinary time for AMD. You know, if you take a look at, you know, where we've been, it's really about making the right technology bets. We laid out a strategy. We've been very consistent with our strategy. We focused on execution and ensuring that we meet and actually beat our commitments. Our technology has done amazing things, when you just look at, you know, how far we've progressed. All of that's come together into a period of hypergrowth for AMD. If you just take a look at our organic growth and then add to it, you know, the Xilinx acquisition, we are a large company today.

We have a very different complexion than we did just a couple of years ago. That scale has a lot of benefits. It has benefits in what we can invest in, benefits in terms of what we can deliver to our customers, and benefits in what we believe the long-term opportunity is for the company. You know, just a little bit about where we are in the world. I'm actually extremely proud of how important our technology is, and how much it really powers all of these applications. If you look at the cloud or enterprise or high-performance computing, or you look into communications, adaptive solutions, gaming, PCs, you can see that AMD technology now touches billions of people. We are really powering the daily lives of billions of people across the world, and that's a very humbling experience.

Our strategy has been very consistent. I think one of the things that's most important to me is, you know, when you talk to us, when you come listen to what we say year-over-year, our strategy is very consistent. We're clear about what we're good at, and it's all about making the right strategic bets and delivering on our commitments. This is the same thing that I told you in 2020 in terms of where we were gonna focus. We were gonna focus on delivering industry-leading IP, really the best roadmaps out there. We were gonna put that together with a great manufacturing strategy, which is not only process technology, but what we're doing on the packaging side and really pushing the envelope in terms of extending Moore's Law.

We said data center was the most important market for us, and that there was gonna be an inflection point in the data center, and you'll see there has been an inflection point in our data center business. We said we could continue to grow in the PC and gaming markets because our technology was getting better, and we were integrating it, and you would see that come together. When you look at how that's translated into results, we've really delivered on that strategy. Start first with the data center: when we first started our data center journey, we said it would take three generations, and by that time we would have leadership in the industry and customers would recognize that. We're now in our third generation with Milan.

We have absolute leadership in the industry. If you look at it from a performance standpoint, a performance-per-watt and efficiency standpoint, a total-cost-of-ownership standpoint, it's recognized by customers. What that means is we're very, very fortunate to be in the world's 10 largest hyperscalers. It's a great place to learn about where the market is going in the future. We've also had very strong traction in the enterprise market, and we've doubled that business year-over-year as well. Then when you put our CPU technology together with our data center GPU technology and our AMD Instinct MI200 series, what you see is that we can actually deliver the best computers in the world.

We were extremely excited that last week, the TOP500 list for this half of the year was announced, and AMD delivered the highest performance supercomputer in the world with Frontier. It was the top of the TOP500 list. It was the top of the Green500 list, and it was the top of the HPL-AI list. Not only that, we're the first company to deliver an exaFLOP or more of computing horsepower. I can tell you, it absolutely was not easy. It was not easy at all. It was a long-term vision. We set out on the path to break the exaFLOP barrier almost 10 years ago, in terms of all the work that we did with research and development and the national labs and, you know, Hewlett Packard Enterprise.

It's all come together, you know, really nicely from a data center standpoint. When you look at the PC market, we've also made great progress. In PCs, it's been a journey to move AMD from being the low-cost, second-source solution to being the premium, best-technology-in-the-industry solution. You'll hear more about it from Saeid Moshkelani. We've done that. We're delivering leadership performance, leadership battery life, and leadership manageability, and we're still underrepresented in the market. That growth has been phenomenal. Frankly, we've had to make a few choices, right? When you looked at the phenomenal growth in the PC market, we couldn't serve all aspects of it, just given some of the supply challenges.

We chose the parts of the market that we knew were gonna be important in the future, where we could uniquely differentiate. When you look at our platform coverage with the top OEMs, although we've increased platforms, which is great, more importantly, we've increased platforms in the most important parts of the market. You can see that as we go into 2022. It's a phenomenal lineup across commercial, consumer, and gaming, and we're building really, really great systems together with our customers. If you move into gaming, I would say we've done a tremendous job. David Wang's gonna talk about the roadmap. You know, we started with the notion that we needed a new architecture with RDNA, and we were gonna do it again over a couple of generations.

I think the progress we've made with the RDNA 2 architecture has been fantastic, particularly the performance per watt, and you can see the coverage that we have now across consumer desktop and consumer notebook. We've also brought these solutions together in systems with our AMD Advantage solutions and software. We are making tremendous progress in the gaming business. The thing about gaming is it's a fantastic market. You know, this is the number one entertainment growth vector. We've seen it in the game console market. We're very proud of the custom work that we've done together with Sony and Microsoft. They've seen just tremendous demand for their systems. Again, this has come together from a technology standpoint.

Now in terms of results, let me spend just a few minutes on the organic results that we've achieved over the last couple of years. You know, when we sat here and looked at 2019, we were less than a $7 billion company. We've grown at 56% CAGR over the last couple of years, and on an organic basis only, we achieved over $16 billion of revenue last year. More importantly, when you look at the results by business, every single business exceeded expectations. Our classic businesses of PCs and gaming, you know, they're already very good businesses, but we've grown each of them at 50% CAGR over the last couple of years.

Our data center business, which was, I would say, just getting started in 2019 at $1 billion, we said it would inflect because customers would buy more, customers would trust us more as we built out the road map, and that in fact did happen. We hit almost $4 billion last year, and it was really growing, pretty much doubling, for the last couple of years. Part of our strategy has been to really strengthen the mix of revenue. The idea was when you looked at AMD a few years ago, we were a very consumer-centric company. Most of our revenue was in PCs and gaming, which was good revenue. We wanted to really change the mix, and, you know, we believe that data center was the place to lean in.
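As a sanity check on the growth figures just cited, here is a small Python sketch. The revenue inputs are the approximate figures from the talk (roughly $7 billion in 2019 growing to over $16 billion organically in 2021, and a data center business going from $1 billion to almost $4 billion), not exact filing values:

```python
def cagr(begin, end, years):
    """Compound annual growth rate between two revenue points."""
    return (end / begin) ** (1 / years) - 1

# Approximate figures from the talk, in $B; exact values are in AMD's filings.
overall = cagr(6.7, 16.4, 2)  # FY2019 -> FY2021 organic revenue
dc = cagr(1.0, 4.0, 2)        # data center business over the same window

print(f"overall CAGR ~{overall:.0%}")   # close to the 56% cited
print(f"data center CAGR ~{dc:.0%}")    # 100%, i.e. doubling each year
```

The data center result illustrates why "pretty much doubling for the last couple of years" and a 2x-per-year CAGR are the same statement.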

You're gonna hear us talk about this mix more as Devinder goes into some of his charts, because we've made great progress with, let's call it, the organic growth over the last couple of years. Last year, data center and embedded were about, you know, the mid-20s as a percentage of revenue. We know we can grow this much more. When you look at the complexion of the company going forward, our view is that we can be over 50% data center and embedded with the way our product roadmaps and the mix of businesses are coming together. That's what we wanna do. We wanna have a very diversified set of businesses to build that strong revenue growth.

If you look at the financial performance, I think across the board, we were able to expand margins, both gross and operating margins. We were able to significantly improve profitability, and we're now generating a lot of cash. You know, the business model is at scale, and that's coming together in significant cash generation. Now, with all of that organic work, it's been a very busy couple of years, but Victor and I were very much on the same page that we also had a much bigger opportunity to significantly transform AMD by combining with Xilinx. You know, we announced that acquisition at the end of 2020, in October 2020. It took us, you know, 14, 15 months to close it, but I'm absolutely thrilled to have Victor and the team now part of the combined AMD.

We're gonna talk a lot today about, you know, how we're bringing the companies together, but let me just give you a few highlights of, you know, the assets that are now part of the combined AMD. We have product leadership. The Xilinx team has grown extremely well. Number one in FPGAs, market share growing. Number one in adaptive SoCs, market share growing. Very, very strong software suite that's gonna become important as we combine the companies. Tremendous IP portfolio, very complementary with the AMD IP portfolio. You're gonna hear us talk about AI a lot today. This is an inflection point also for AMD and how we think about AI, but it comes with some great technology from the Xilinx team, as well as lots of other capabilities, interposers, packaging, you know, overall solutions.

There's a lot of technology coming into the combined portfolio. Then, when you look at it from a markets and customers standpoint, I love these markets. I mean, look, I love the base AMD classic markets of PCs, gaming, and data center, but it's really nice to have access to these larger, diversified markets in comms and automotive and industrial, because all of these guys also need high-performance computing, and it's a real opportunity for us to bring the portfolios together. In addition to the tremendous customer relationships, we really broaden the overall customer base. The largest and most important customers in the world all use AMD in some way, shape, or form. With Xilinx, we bring many more customers into the mix.

We'll talk a bit about how we bring the companies together, in terms of product portfolio and also in terms of revenue opportunities. To give you a high-level view, there are four key areas where we see significant synergy. They include AI, where we're doubling down and increasing our investments, and bringing the product portfolios together across data center, communications, automotive, and embedded. It's really bringing leadership solutions together as we think about what customers want going forward over the next couple of years. In addition to Xilinx, we also just completed our acquisition of Pensando last month. Pensando fits exactly where we want to be in terms of broadening our data center solutions capability.

Our goal is to be the most strategic supplier to the largest data centers in the world. We have a great portfolio already with EPYC and Instinct and with the Xilinx assets. Pensando has real leadership technology on the DPU front. What we see is that these solutions will now work hand in hand with the rest of our computing technologies, such that we're accelerating key aspects of networking, security, and storage, and offloading significant work from the CPU; you'll hear more about that going forward. Fantastic team. We're very happy to have Prem Jain and Soni Jiandani join us. They bring networking and systems expertise as we think about where data centers are going in the future.

Now let me talk about, you know, what we see going forward, particularly as it relates to the opportunity. In 2020, you know, sort of the AMD-based business, we called the TAM at about $80 billion for 2023, and that was across PCs, gaming, and data center. When we announced the acquisition of Xilinx, we said, "Okay, you know, we're gonna bring in the Xilinx TAM." which was about $30 billion. We saw, you know, some acceleration of the compute TAM due to some of the pandemic-related things.

We said, "Okay, maybe the TAM is about $130 billion in 2023." Now, when we take a step back and look at, you know, today's portfolio and where the market is going over the next three or four years, we actually see that the opportunity for our high-performance computing and adaptive technologies has now grown to about $300 billion for a 2025 TAM. By and large, the largest part of that is in the data center. What's happened in the data center is first, we've seen tremendous increase in cloud demand and everything that's going on in the data center. We've seen a tremendous increase in AI and sort of the demand for AI workloads. We've added TAM when we think about the networking and telco and other opportunities.

That's $125 billion of opportunity that we're going after. We see nice growth in PCs and gaming. Those are secular growth areas, and although units may not necessarily be growing as fast, I think content grows in PCs and gaming. When we look at the traditional embedded markets, the original embedded TAM we were looking at was about $30 billion, but that actually increases to about $90 billion or so when you look at what we can bring to the market. It's not just the traditional Xilinx TAM; we're adding CPUs, GPUs, and custom silicon capabilities as well, and that builds out the full $300 billion of opportunity for us. A couple of items on what's happening in the market.
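The TAM buildup described above can be tallied in a short sketch. The data center and embedded figures are as cited in the talk; the PC-and-gaming figure is simply the implied remainder of the $300 billion total, not a number given here:

```python
# 2025 TAM components as cited in the talk, in $B.
tam = {"data center": 125, "embedded": 90}
total = 300

# PC and gaming is not quoted directly; treat it as the implied remainder.
tam["pcs and gaming (implied)"] = total - sum(tam.values())

assert sum(tam.values()) == total
print(tam)  # implied PC-and-gaming remainder is $85B
```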

You know, I think the great part of where we are today is that the computing market is going through tremendous transformation. The underlying principle is you need more compute. No matter who you are, no matter which business you're in, you need more compute. On the data center and cloud side, I think the demands from a performance standpoint continue to go up and up and up. What we're seeing now is as the volumes go up in cloud, in particular, that people are doing more workload optimization, and so optimized compute and networking become more important. There's actually more ways of differentiating versus using just general purpose things. Security is very important, efficiency and sustainability, so power optimization, you're gonna hear us talk about performance per watt and the importance of that.

On the AI side, it's really an explosion of AI. Everybody wants more capability, whether you're talking about training or inference, whether you're talking about the cloud or the data center or the edge or endpoints. The models are getting more complicated. You want better accuracy, you want more capability, which drives more compute, and, you know, this is a key vector of the growth. Then on the PC and gaming side, although we've had two years of a very strong PC market, I think this year we're gonna see a down PC market, and our view is that's expected and a bit natural given where we've come off the last two years. Overall, going forward, you're gonna need PCs. You're gonna need the hybrid work environment.

You want more collaboration. You want more battery life. You want more security capability. And there are, you know, many billions of gamers. You'll hear Rick talk about the gaming world; we're seeing that the gaming TAM is about gaming everywhere. Then with the metaverse, you can see the new applications that are about more immersive technologies, and again, you need CPUs and GPUs to do that. Net net, it's a great market environment for high-performance and adaptive computing, and I think we understand the trends very, very well. Okay, so what are we gonna do over the next five years? What you're gonna hear from the team this afternoon is these are our strategic pillars. These are where we're gonna put our bets. These are where we think we can win in terms of differentiation. It's about compute technology leadership.

It's about expanding data center leadership. It's about expanding our AI footprint. It's about expanding our software capability. It's really bringing together a broader custom solutions effort, because we think this is a growth area going forward. Let me go through each one of these so that you get a flavor of what we're working on. In terms of compute technology, this is the strongest compute technology; if you look across these engines, it's the strongest roadmap in the industry, and we intend to only make it stronger. You're gonna hear from Mark about the CPU roadmap, and how we're broadening it and bringing more capability across the board. You're gonna hear from David about the GPU roadmap and what we're doing across both gaming and compute. Victor's gonna talk about our Xilinx technology.

We're calling it AMD XDNA. XDNA is the technology elements like the AI engine, the FPGA fabric, as well as the base FPGA products that come into the portfolio and how we're taking that across AMD. Then we're also obviously gonna talk about what we're doing with the networking assets, as well. When you look at our data center leadership strategy, you know, Dan McNamara and Forrest will take you through it. You know, we started in the data center with one product, one product line, and we said, "We are going to be leaders in general purpose computing." We are leaders today in general purpose computing, but the world has gotten a lot more complicated and a lot broader with all the workloads. We are dramatically expanding our data center footprint into technical computing and HPC, into cloud-native computing.

This is basically a parallel line that allows us to optimize for cloud-native applications. This is machine learning and AI on both our current products and future products, and then bringing our networking assets into the portfolio. What we believe is that with this strategy, we will become the partner of choice in the data center over the next three or four years. We're very excited about the vision of how we bring all of that together. On AI, I think you would say that we have had good exposure, but our opportunity is very, very significant. AMD and Xilinx are already serving a number of AI applications, but we can absolutely go broader. Our vision here is to provide a broad technology roadmap across training and inference that touches cloud, edge, and endpoint.

We can do that because we have exposure to all of those markets and all of those products. It comes with a lot of work, you know, investments to deploy more broadly our AI technology on both the hardware and the software side. I think a very key element is this unified AI software stack that Victor's gonna talk more about on how we bring the AMD and Xilinx assets together. This is without a doubt the single highest growth opportunity for us or largest growth opportunity for us over the next few years. Then software. When you think about software platforms, we are a hardware company, but we are very clear how important the software platforms are to build out the solution set.

If you look at it today, we have approximately 5,000 software engineers. That's been really strengthened by bringing Xilinx and Pensando into the mix, and we have been greatly increasing our own efforts in this area. It's a broad set of investments. It's across, let's call it, enablement tools and everything that we need to make our CPUs and GPUs run better, in terms of drivers and tools and compilers. It's all of the FPGA software suite that is so differentiating with the Xilinx platforms, but it's also the broader effort of doing better together, bringing the best technologies of both companies to combine into a unified platform. You'll continue to see us really mature the software platforms.

Clearly, the feedback that we've gotten from our customers is: you have great hardware, and it will be even better once you have all of the software tools in place, particularly for AI. Now, on the custom silicon side, many of you have asked, you know, what is this trend about, companies developing their own custom solutions? Look, we've been in the custom silicon business for the last 10 years, right? If you look at what we're doing in the game console market, it has been custom silicon, bringing our silicon to our customers' vision of the market and system and software applications. My belief is that the trend towards custom silicon will only continue to grow.

You know, customers have come and asked us, "Hey, can you help us differentiate? We don't wanna build all the general purpose stuff that, you know, you guys are doing because you have scale in the general purpose stuff, but we wanna be able to add our secret sauce." What we're doing here, and you'll hear more about it from the guys on a technology standpoint, we already have a very broad high-performance portfolio. We already have the leading industry platform for chiplets, but what we're doing is we're gonna make it much easier to add third-party IP as well as customer IP to that chiplet platform. That has a lot to do with the Infinity Architecture that we have been extending.

We needed to do this work anyway, 'cause we wanted to bring in the Xilinx assets so that they would fit really nicely into our chiplet platform. We're happy to be ISA agnostic here. In other words, x86 is certainly where a lot of our compute solutions are, but we also recognize that Arm has a lot of traction; frankly, in our Xilinx roadmap and our Pensando roadmap, we use Arm. In this custom environment, we would also use whatever technology is the customer's choice. We've gotten a lot of positive customer engagement so far. When you think about hyperscalers, when you think about 5G and automotive opportunities, these are big opportunities where people want to customize, and we wanna be their partner of choice in this area.

Now that you have a flavor of what we're doing from the technology side, I'd also like to spend just a minute or two on what we're doing from a brand side. You know, when I think about where we are today as a company, it's such a different company than where we were just a few years ago. We're in mission-critical applications everywhere. We partner with the best brands in the world. We're constantly pushing the envelope on technology. As we start this next phase of our journey, with much more important technology capabilities for the world, we're also using this opportunity to launch a new brand campaign for AMD. You've seen it around if you're here in person, and you saw it in the opening video.

You know, we've chosen Together We Advance because it's the most, let's call it, you know, true expression of how we approach the industry and how we approach collaboration and really defines sort of the AMD culture. Let's play the video, please.

Speaker 26

Technology changes the world, but not on its own. Hardware needs heart. Software needs soul. When we match compute power to instinct and acceleration to imagination, tomorrow comes alive. We make processing powerful and computing adaptive to inspire great leaps, fight new battles, and ignite AI. For us, it's not what we achieve alone, but sharing a vision to solve the world's most important challenges. Because together, anything is possible. AMD: Together We Advance.

Lisa Su
Chair and CEO, AMD

I love those stories. I really do love how together our technology plus our customers and partners really change the world. We started engaging some of our customers and partners with this brand campaign and what we're trying to do together, and the feedback has been absolutely tremendous. It's also, you know, it has a lot of meaning internally. You know, AMD is now a big company, right? We have over 20,000 people in the company, and it really brings together the spirit of who we think we are going forward. Okay, now let's turn to our financial model.

You know, it's an ambitious financial model, and Devinder is gonna go through all of the details, but I wanted to give you a preview of what we see so that you wouldn't have to wait all the way to the end for the punchline. Our model is based on our view of the market. It's an enormous market, and we're excited about the possibilities there. We believe we can continue to grow significantly ahead of the market while expanding margins, increasing profitability, and increasing free cash flow. We're also cognizant that we have a lot of resources, so it's important that we're disciplined in how we spend those resources overall. With that, here are the key tenets of the long-term financial model.

Let me start at the bottom. Sorry, let me start at the top, and then we'll go through each of the items. At the top line, we continue to be very focused on growth. We think the markets are very attractive, and the product and technology portfolio is very attractive. We can drive approximately 20% CAGR over the next three to four years; we call three to four years the long-term financial model timeframe. Because we have had a few moving pieces, the baseline we're using for the revenue growth is the 2021 pro forma company, so that's AMD plus Xilinx at $20 billion.

On that large baseline, we think that's the best way to measure the intrinsic growth and the product strength of the combined company. If you were to use the AMD revenue pre-acquisition, the growth rates would actually be even higher than that. From a margin standpoint, AMD in 2021 ended at about a 48% gross margin. With the strength of the product portfolio and the mix that we were talking about, as I said, we believe we can get the mix of the business to over 50% in data center and embedded, and that drives margin expansion to greater than 57%. From an operating margin standpoint, that translates into mid-30s operating margins. We're happy with that.

From an investment standpoint, you know, our priority is gonna be focused on investing for growth. I think we have tremendous opportunities across the portfolio, but we're also very disciplined in how we will invest. You should expect that our OpEx will grow slower than our revenue growth. Although if we continue to grow a lot, we're gonna continue to invest 'cause we think the opportunities are there. From a free cash flow margin standpoint, we believe we'll be over 25%, and that's including the fact that we are making forward-looking capacity investments. Devinder has mentioned on the last couple of earnings calls that we wanna secure enough capacity for growth, and we're doing that with our partners.

We have a very efficient business model, and we think with all of that, we can still deliver, you know, a very strong, you know, cash generation machine. Devinder will have much more on this, and we'll go through some of the puts and takes as he goes through his piece, but hopefully, that gives you a flavor of what to look for. You know, we have lots of opportunities in the AMD businesses, and I think what you'll see is growth across all of the businesses. We know where the largest opportunities are. We know where to invest. We know which strategic bets to make, and we believe that's gonna translate into very strong financial performance over the next few years.

With that, let me finish up and say, I'm extremely excited about this next phase of our growth journey. It's, you know, an exciting time to be part of AMD. In some sense, it's a continuation of what we've done, but in many senses, it's an acceleration, in terms of technology, in terms of what we're doing with customers, including, you know, how we're bringing the technologies together. Let me introduce Mark Papermaster to the stage, and he'll talk a little bit about what we're doing on the technology roadmap. Mark?

Mark Papermaster
CTO and EVP, AMD

Great. Thank you, Lisa. All right. Well, great to be with you here today. I'd like to jump in first with, you know, a little bit of my view on our foundational technology. Our foundational technology has been the key for us, with an intense focus on both investing in the growth of our portfolio offerings and investing in the capabilities that drive our competitiveness on performance and performance efficiency. Equally, we've focused on the execution of our foundational technology, and this has been the fuel. It's been the fuel for our innovation, it's been the fuel for our product differentiation, and those two come together, and they've been a key enabler for our market share gains. I'm incredibly proud of our execution.

We've been clear on our roadmap commitments. We've communicated with our customers, we've listened to them, we've gotten their input, and we have executed as promised. We are not letting up. When you look at the strategy that we defined for chiplets and heterogeneous computing, you can see that the industry is now following. We defined a CPU roadmap, and you see us rolling out Zen 4 this year and Zen 5 as planned in 2024. We've continued to focus on innovative packaging design and, of course, our secret sauce, the Infinity Architecture, which puts it all together and further expands our AMD leadership in chiplet integration and the ability now to integrate third-party and customer IP. Let's walk through some of the details.

You know, we've known for years, and I talk about it in any industry keynote where I'm discussing industry trends, that Moore's Law has been slowing for some time. The devices aren't scaling like they used to years ago. Of course, the way that we attack that is with design innovation, and that's what we've done at AMD. At each Financial Analyst Day, I've shown you our consistent investments in that foundational technology. The Infinity Architecture was key to that. It enabled our modularity and allowed us to speed our design. It enabled us to partition our designs into chiplets, which we knew were critical to stay on performance pace, and to deliver the best technology node for each function we had partitioned.

As a result, we've been able to stay on that traditional Moore's Law pace of performance gain despite that slower device scaling. Now, it's widely recognized in the industry that chiplets will be deployed quite pervasively going forward. The silicon carrier, the package, has become a key point of integration. AMD has demonstrated leadership capability to integrate heterogeneous CPUs, GPUs, and accelerators, and that allows AMD to keep our compute chips in a leadership position in high-performance computing. To keep that leadership position, we have invested across the IP portfolio. Our CPU and GPU roadmaps continue, as I said, to execute on schedule, while adding variants for specific workloads. With Xilinx, we have added a deep portfolio of adaptable and programmable engines.

A versatile AI engine, which we're deploying across our roadmap, and, with Pensando, an incredibly efficient DPU packet engine, highly programmable in the P4 language to enable any number of microservices in the data center. It has been about investment, as I said. In the past four years, when you look at the data, we've grown our R&D spend 3x, and a lot of that is indeed on skills. It's pretty astounding that over that same period we have more than doubled the engineering talent at AMD. Our culture of collaborative innovation attracts the best and brightest to AMD, and we've really expanded our hiring, particularly for software and application skills, 'cause that's what really eases the delivery of our technology to our end customers.

Of course, we maintain focus on our engines, and the publicity and acceptance of our CPU roadmap has attracted wide attention. That's brought further skills into AMD. Let me update you on our CPU roadmap. Of course, you can look across Ryzen and EPYC and the impact it's having, and you know how important the CPU is underneath that. Let's look at what we've done since we last got together over two years ago. We've introduced Zen 3. Zen 3 was a ground-up new microarchitecture. We added a variant with a 3D-stacked V-Cache for memory-intensive workloads. Zen 4, later this year, adds performance and key performance efficiency. We're adding Zen 4c. Zen 4c is a very compact, dense version of Zen 4, but it's identical in functionality to the base Zen 4 core.

It's workload optimized, so where you don't have to run at high frequency, you can take advantage of incredible efficiency and scale-out capability. Zen 5 development is well underway in both 4-nanometer and 3-nanometer nodes. Our technology implementations, on 5-nanometer for Zen 4 products and on 4-nanometer and 3-nanometer, were done in incredible partnership with TSMC. We operate in what we call design technology co-optimization mode, and that allows us to get the best out of each technology. Zen 4, for instance, is implemented in a tailored, highly efficient 5-nanometer node. In each generation, we will drive high-performance compute with no letup. Let's look at some of the specifics.

In fact, I wanna start with an update, since I haven't given a PPA, a power-performance-area update, of our core compared to our x86 competition since we first rolled out the first Zen CPU in 2017. Let's look at that update. What you'll see is that the strong AMD advantage in PPA still exists. In fact, we have almost half the area and almost half the power, and what that translates to is a very significant performance-per-watt advantage. In fact, in the desktop application, it's 78% better performance per watt. Desktop is, of course, an incredibly demanding workload. I'd like to share with you two other use cases. What about notebook?

Well, in notebook, that efficiency translates to much extended battery life. What about server? Well, the server socket is power constrained. When you have that kind of efficiency, you can add more cores to the socket, and that in turn leads to a much better TCO, total cost of ownership, which is the vital metric for our server market. We're committed to high performance with incredible efficiency, and we've even added the Zen 4c to further that, and what it translates to is really highly sustainable solutions. We have been focused on energy, in terms of how the industry can operate in a more efficient manner, and that's a top concern when you talk to CIOs in the industry.
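
The cores-per-socket and TCO argument above can be sketched numerically. This is a hypothetical illustration, not AMD data: the socket power budget, per-core power, and throughput figures below are invented solely to show how better performance per watt turns into more cores per power-limited socket and fewer servers for a fixed workload.

```python
# Illustrative sketch (hypothetical numbers, not AMD data): how a
# performance-per-watt advantage turns into more cores per power-limited
# socket, and a lower server count for a fixed throughput target.

SOCKET_POWER_W = 280          # assumed socket power budget
TARGET_PERF = 10_000          # arbitrary throughput units required

def cores_per_socket(watts_per_core):
    return SOCKET_POWER_W // watts_per_core

def servers_needed(perf_per_core, watts_per_core):
    perf_per_socket = perf_per_core * cores_per_socket(watts_per_core)
    # Round up: you can't deploy a fraction of a server.
    return -(-TARGET_PERF // perf_per_socket)

baseline = servers_needed(perf_per_core=100, watts_per_core=7)
efficient = servers_needed(perf_per_core=100, watts_per_core=4)  # better perf/W
print(baseline, efficient)
```

Halving-ish the per-core power here fits 70 cores instead of 40 in the same socket, cutting the server count from 3 to 2 for the same throughput; needing fewer servers is where the TCO advantage Mark describes comes from.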

In fact, we just recently committed to a program where we're gonna drive, between 2020 and 2025, a 30x improvement in the power efficiency of heterogeneous computing in the data center. With that, let me share some more details about Zen 4, coming out first in desktop and then later in server this year. The Zen 4 chiplet is the world's first x86 5-nanometer CPU. We've put in design changes for performance to yield about an 8%-10% uplift in instructions per clock. But we've also made design changes to enable higher frequency, and especially a much improved performance per watt, that efficiency I was describing just a moment ago. In the desktop socket, single-thread performance improves 15%.
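
As a back-of-the-envelope check on those Zen 4 numbers, single-thread performance can be approximated as IPC times frequency. The sketch below assumes the midpoint of the quoted 8%-10% IPC uplift; it illustrates the arithmetic only and is not AMD's own breakdown.

```python
# Rough decomposition (assumes single-thread perf ~ IPC x frequency, and the
# midpoint of the quoted 8%-10% IPC uplift): how much of Zen 4's quoted 15%
# single-thread gain is left over for frequency and other design changes.
ipc_gain = 1.09               # assumed midpoint of the 8%-10% IPC uplift
single_thread_gain = 1.15     # quoted desktop single-thread improvement
freq_gain = single_thread_gain / ipc_gain
print(f"implied frequency contribution: ~{(freq_gain - 1) * 100:.1f}%")
```

Under these assumptions, roughly 5-6% of the gain would come from the higher frequency that the design changes enable.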

In data center, which is a very data-dependent application, we've added over 125% more memory bandwidth per core. We've also added instruction support: the VNNI AI instructions, and AVX-512 support for HPC applications. Let's take a closer look at those performance gains. Generationally, when you look at Cinebench nT, what you'll see is over a 25% improvement in performance per watt. Now, that's with the focus that we had on the design and, of course, also the 5-nanometer technology. When you combine that with the frequency and the AM5 socket of the desktop, you get an overall performance improvement of 35%. You know, as we go forward, I've talked to you about our leapfrogging design-team approach at AMD.
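
The relationship between those two Cinebench figures can be made explicit: since overall performance equals performance per watt times power, the quoted numbers imply how much extra power the platform contributes. This is a sketch of the arithmetic using the talk's rounded figures, not a measured breakdown.

```python
# Sketch using the rounded figures from the talk: a 35% overall gain on top
# of a 25% performance-per-watt gain implies the remainder comes from power
# (perf = perf-per-watt x watts).
perf_per_watt_gain = 1.25
overall_gain = 1.35
implied_power_factor = overall_gain / perf_per_watt_gain
print(f"implied power increase: ~{(implied_power_factor - 1) * 100:.0f}%")
```

Under these rounded numbers, that is only about 8% more power, with the efficiency gain doing most of the work.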

We always have a team working on the current generation, getting it to market; we're well into designing the next generation; and we're architecting the generation beyond. Well, we continue that cadence with Zen 5. It's well underway. It's on track for 2024. Like Zen 3, it will be a ground-up new microarchitecture. We've optimized it for scaling across a broad range of workloads. We have re-pipelined the front end. We've widened the issue width of Zen 5. Of course, we've maintained our focus on energy efficiency. Zen 5 will bring very strong performance gains as well as further optimizations for AI workloads. With Zen 5, we're focused on being that trusted partner, and that means continuing to deliver on cadence, tightly integrating our engines together with the rest of our IP.

We continue to make the investments we need across the holistic solution, how we're putting our engines together into an optimized compute platform. That decade-long investment in the Infinity Architecture gave us the modularity that we have been deploying year after year in our roadmaps. We architected ahead of the curve to enable AMD to lead in chiplet implementation. The flexibility it's given us to optimize at the system level, leveraging leadership in 2.5D and 3D packaging technology, has enabled our heterogeneous compute solutions. More importantly, we have now been able to scale faster, both horizontally as we implement these heterogeneous designs and vertically as we stack these solutions together. I wanna do a bit of a deeper dive into that, and I'm showing you actually a cross-section.

We'll start on the left side, where you see what we call our 2.5D elevated fanout bridge, EFB. It shows that, with just a sliver of silicon rather than a traditional full silicon interposer, you can connect chiplets in a far simpler and better-yielding way than the alternatives out there in the industry. In the example shown, which is actually our Instinct chip powering the world's largest supercomputer today, you can see how EFB is connecting the GPU to the stacked high-bandwidth memory. It's enabling our GPU to run at the highest efficiency, getting that high-bandwidth interconnect at very, very low latency.

On the right side, you have an example of what I think is really one of the biggest inflection points in technology assembly that we've had in years, because it's enabling us to stack vertically in a way where the connectivity is almost identical to having the two IPs next to each other on a single monolithic chip. Why? Because it's a copper-to-copper 3D bonding technique, hybrid bonding, which is incredibly efficient. It's now in production in 3rd generation EPYC, and it enables us to triple the size of our L3 cache. The stacked cache sits right above the base L3 cache on our 3rd generation Milan die. With Milan-X, and you'll hear more about this from Dan later, it's again a tripling of cache.

It makes a huge impact. Look at EDA applications: it can scale to a greater than 60% performance advantage. Look across enterprise applications like fluid dynamics and many of the simulations: we're seeing 20%-80% gains. That's a tremendous, visible impact of how technology affects the competitiveness of our products as you integrate both horizontally and vertically. Well, we have a long history of firsts in this regard at AMD. We were the first with 2.5D packaging when we shipped a GPU with stacked memory over a traditional silicon interposer, back in 2015. We set a new trajectory for compute when we introduced our multi-chip modules in 2017 with our first EPYC servers and Ryzen desktops.

Three years ago, we went to chiplet technology, connecting 7-nanometer high-performance CPU chiplets with a different node, the 12-nanometer I/O die. We leveraged chiplet technology to put the right function in the right technology node. This has led to tremendous performance and overall compute capacity gains for us in our roadmap, and of course it led us to the EFB and 3D approaches I just described. Well, Lisa talked earlier about how seamless the combination of Xilinx and AMD has been, and when you just look at this packaging technology, it demonstrates that. The Xilinx team saw the challenges in the industry exactly as we in classic AMD had been looking at them.

You can look back at 28-nanometer in 2011, where the Virtex-7 2000T stitched multiple FPGA die together on a silicon interposer, creating the largest-capacity FPGA, and the roadmap has continued through 20-nanometer and beyond, leveraging CoWoS, doubling the logic capacity and the inter-chip bandwidth, as well as growing the I/O resources to meet the needs of the FPGA and adaptive compute market across emulation, data center, communications, and other markets. Moving forward, AMD will continue to invest in driving package innovation. It's now stronger with the combined expertise of Xilinx and AMD, and you can see that our compute future will heavily leverage 3D and 2.5D packaging, tightly woven together with AMD's Infinity Architecture. Well, let me just give you a quick flyby of our progression on that Infinity Architecture.

You know, when you think about it, our 1st generation was so impactful because it brought AMD together on a common approach, in terms of how we scale and how we bring the IPs together in a very, very common way, ensuring that we didn't lose performance as we pieced together our solutions. Our 2nd generation grew our scalability, our bandwidth, and our latency as we enabled chiplets, even across different nodes, to come together, and we enabled the GPU to leverage the Infinity Architecture in clustering, in 4-way and 8-way configurations. In the 3rd generation, our Infinity Fabric expanded to the capability that we deploy in the Frontier supercomputer. We added a coherent connection: the CPU of that Frontier supercomputer can cache memory from the GPU. With full coherence, it makes it much easier to program across the device.

We continue to evolve with new innovations. Let me introduce our 4th generation of Infinity Architecture. What we focused on is making it even easier to implement chiplets, with more flexibility. We seamlessly interconnect our CPU, our GPU, our memory, the high-performance I/O interfaces that we design across the company, and, of course, our accelerator chiplets. With the 4th generation, we enable mixing and matching across different physical, electrical, and packaging approaches, 2.5D and 3D. We have also opened up to industry standards, so we support CXL, Compute Express Link, 2.0 memory, which runs over the physical PCIe Gen 5 connection. It is designed to be extensible to CXL 3.0 and UCIe. UCIe is the new chiplet-to-chiplet interconnect standard, which we are participating with the industry in defining for future generations.

Of course, it's designed to integrate our Xilinx adaptive compute engines in a very, very facile way. The implementation of the 4th generation AMD Infinity Architecture also allows a unified, coherent, shared memory across host and external devices, and you're gonna hear more about that in our next generation of AMD Instinct from David and Forrest later in the day. Bottom line: AMD Infinity Architecture is the flexible and secure fabric that allows the AMD portfolio to grow and to provide the highest-performance heterogeneous compute in the industry. You know, when you put these elements together, you can see why we are well-positioned, as you heard from Lisa, to integrate third-party chiplets and customer IP into our designs. It's that decade-long investment that we've had in modularity.

The open standards that we've committed to provide a proven path to create designs that can incorporate that third-party IP. In fact, we have over 40 chiplet designs in manufacture. The benefits: power efficiency, lower latency, and very, very high bandwidth. It allows customers to scale beyond the limits of Moore's Law. As the most experienced chiplet provider, we're very, very excited to now bring this capability to bear, along with our broadened IP portfolio, to the industry with a custom-ready capability. Well, look, let me just bring this to a close and talk about what I believe is our strongest asset at AMD. It's the trust. It's the trust that we've established with our customers through that focus on delivering technology consistently as we promise, and delivering value to our end customers.

That is our relentless focus on research and development and execution excellence at AMD. We've expanded the tools in our tool chest. We've expanded our roadmaps, our CPU and GPU. We've added variants. We've brought in the Xilinx and Pensando portfolios of products and IP. We seamlessly stitch it together with our AMD Infinity Architecture, and the result is the broadest high-performance portfolio, one that can be implemented through chiplet integration with proven manufacturing results. At AMD, we saw where the industry was going. We are now perfectly positioned to bring that into our future roadmap and to open up this capability for tailored compute solutions for third parties and customers. There will be no letup from AMD R&D. Thank you very much. With that, I'd like to invite David Wang, our Head of Radeon Technologies Group.

David Wang
Head of Radeon Technologies Group, AMD

Good job. Thank you, Mark, and hello, everyone. I'm very excited to be here today to share with you our GPU technology strategy and roadmap to drive product leadership for gaming and accelerated computing. At the last Financial Analyst Day, I shared with you our strategy and vision of taking AMD graphics technology everywhere. I'm glad to report that we have made tremendous progress since then. You can see today we have the industry's broadest GPU ecosystem, powering everything from supercomputers and data centers to gaming PCs, consoles, and embedded and mobile devices. Driving leadership across such a broad range of systems and workloads requires a very strong focus on GPU technology development. The primary goal of our GPU technology development strategy is to deliver performance and energy efficiency leadership. There are four key pillars to this strategy.

First, develop domain-specific architectures optimized for the targeted workloads: our RDNA architecture for gaming and our CDNA architecture for accelerated computing. Second, leverage advanced process and packaging technologies so we can continue to scale even with Moore's Law slowing down, as Mark has said. Third, deliver our leadership performance-per-watt improvement roadmap so we can achieve optimal system-level energy efficiency. Lastly, continue to expand our software ecosystem, which is open source, for gaming and accelerated computing, now including support for AI workloads. In this presentation, I'll share with you how our consistent execution of this strategy has enabled us to achieve leadership today. I will also introduce our next generation RDNA and CDNA architectures that will allow us to continue to deliver leadership in the future. Next, I'll start with our gaming architecture.

RDNA 2 is our current generation of gaming architecture. The key innovations include the high-speed compute units, the high-efficiency ray tracing cores, our revolutionary Infinity Cache, and support for the latest DirectX 12 gaming API. We delivered more than 2x the performance and a greater than 50% performance-per-watt improvement over the previous generation. The RDNA 2 architecture is also designed to be scalable across mobile, console, PC gaming, and cloud gaming. Game developers really benefit from designing and optimizing games on a common, scalable architecture. Next, I'll show how we're doing competitively. In the design of RDNA 2, we focused not only on absolute performance, but also on power and area efficiency.

Through the combination of our architecture, design, and methodology innovations, the Radeon RX 6950 XT GPU, powered by RDNA 2 graphics, delivers much higher performance per watt and better performance per area than our competitor's flagship solution. Power efficiency also helps our gamers reduce their long-term total cost of ownership. I'm very proud of what the team has accomplished. Hey, great hardware needs great software, so next, let me tell you about our gaming software. Adrenalin is our gaming software, designed for what gamers care about most, starting with a focus on quality and stability. For each software release, we go through rigorous testing based on thousands of test cases using our state-of-the-art AI-assisted test methodology. Once released, we engage closely with the community to monitor end-user feedback and provide updates accordingly.

We also continue to improve gaming performance through the entire product life cycle. For example, we have been able to achieve an average 15% year-over-year performance uplift on existing titles. We also routinely release day-zero drivers for new games at launch to provide the best possible performance and experience for our gamers. Lastly, we provide open source SDKs to help developers deliver immersive gaming experiences through the combination of boosted frame rates, enhanced visual quality, and seamless content streaming. One key technology call-out here is super resolution, which Rick will talk about in more detail in his presentation. Now, let's look at what's coming next. I'm very excited and very proud to introduce RDNA 3, our most advanced gaming architecture to date.

It is also our first gaming GPU architecture that will leverage the enhanced 5-nanometer process and the advanced chiplet packaging technology that Mark talked about earlier. Other innovations include re-architected compute units with enhanced ray tracing capabilities and an optimized graphics pipeline with even faster clock speeds and improved power efficiency. Similar to the performance-per-watt uplift of RDNA 2, RDNA 3 is going to deliver another greater than 50% performance-per-watt improvement over the previous generation. This is a great example of how we consistently execute our performance-per-watt improvement roadmap. Next, let's look at some of the key RDNA 3 innovations, starting with the advanced chiplet architecture. It allows us to continue to scale performance aggressively without the yield and cost concerns of a large monolithic die.
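
The yield argument for chiplets can be illustrated with the classic Poisson defect-density model. This is an industry-standard approximation, not an AMD-published figure; the defect density and die sizes below are invented for illustration.

```python
import math

# Classic Poisson yield model (illustrative assumption, not AMD data):
# yield = exp(-defect_density * area). Splitting one large die into
# chiplets raises the yield of each individual piece dramatically.

D0 = 0.2  # assumed defect density, defects per cm^2

def die_yield(area_cm2, defect_density=D0):
    return math.exp(-defect_density * area_cm2)

monolithic = die_yield(6.0)   # one big 600 mm^2 die
chiplet = die_yield(1.5)      # a 150 mm^2 chiplet
print(f"{monolithic:.0%} per large die vs {chiplet:.0%} per chiplet")
```

Note that under this model four 150 mm² chiplets have the same combined yield as one 600 mm² die; the cost advantage comes from each defect scrapping only a quarter of the silicon, and from chiplets being testable and binnable individually before packaging.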

It allows us to deliver the best performance at the right cost. For end-to-end power optimization, we are using AMD's leading-edge adaptive power management technology, which sets workload-specific operating points to achieve optimal system-level energy efficiency. To bring more photorealistic effects into the domain of real-time gaming, we are developing hybrid approaches that combine the performance of rasterization with the high visual fidelity of ray tracing to deliver the best real-time immersive experiences without compromising performance. Lastly, on next generation multimedia, we'll support advanced video codecs such as AV1 to deliver high-quality video streaming at reduced latencies and bit rates. We'll also augment our display capabilities with the new DisplayPort 2.0 standard to support upcoming HDR displays with high resolutions and refresh rates.

With this many exciting technologies, I'm very, very excited to say that RDNA 3 will deliver incredible performance and energy efficiency to power the next generation of games. Now, let's look at our gaming roadmap. We achieved performance-per-watt leadership with the Radeon RX 6000 series of GPUs, powered by RDNA 2 graphics. RDNA 3 and RDNA 4 development is well underway to continue our journey to drive leadership. There are exciting plans to leverage our RDNA 3 graphics that Rick and Saeid will talk about in their sessions. Next, let's switch gears to accelerated computing. CDNA 2 is our current generation of compute architecture. The key innovations include the high-performance dual compute dies packaged as an MCM, the 3rd generation Infinity Architecture that supports CPU-GPU memory coherency, and the ultra-wide HBM interface that provides industry-leading memory bandwidth and capacity.

We integrated everything using the 2.5D EFB technology that Mark mentioned earlier to deliver an impressive 4x higher HPC performance and 2x higher AI performance compared to the prior generation. We also continue to mature and broaden our ROCm open software ecosystem to enable more applications beyond HPC, now including AI. Next, let's look at how we're doing competitively. Similar to RDNA 2, in the design of our CDNA 2 architecture we focused not only on performance, but also on power efficiency. As a result, you can see the Instinct MI250X GPU delivers much higher performance and performance per watt on the HPL and HPL-AI benchmarks. The MI250X also powers the world's fastest and most energy-efficient supercomputer. This is a tremendous testament to the strength of our architecture, design, and power optimization methodologies. Again, I'm very proud of what the team has accomplished.

Great hardware needs great software, so let me cover our ROCm software next. ROCm is AMD's comprehensive open software stack for GPU compute. It unleashes the full performance of our GPU architecture while supporting industry-standard programming APIs and frameworks. It is optimized for HPC and AI, and it is built to be scalable from a single node to supercomputer and mega-data-center levels of performance. It is open sourced to enable collaboration, innovation, and differentiation with our customers and partners, and to provide a credible and desirable alternative to competing solutions. Let me talk about ROCm ecosystem enablement. The ROCm journey started back in 2018. We have made tremendous progress since then. We released ROCm 4 last year, optimized for HPC and exascale computing. It is part of the foundational software stack that enables the Frontier supercomputer to deliver its performance.

With ROCm 5, introduced this year, we have expanded our focus to include AI, optimizing both training and inference performance for popular AI frameworks such as PyTorch and TensorFlow. We also added ROCm support for RDNA GPUs to broaden access to AMD accelerators for our developers. Lastly, we're developing SDKs with pre-optimized models to ease the development and deployment of AI applications. With ROCm now supporting both HPC and AI, researchers can take advantage of the combined capabilities to achieve even faster time from hypothesis to discovery. Next, let's look at how we are innovating in AI through partnerships. To drive AI innovation, we have formed deep partnerships with some of the key leaders in the industry.

Through partnerships with Microsoft and the PyTorch team at Meta, we have optimized ROCm for PyTorch to deliver amazing, very, very competitive performance for their internal AI workloads, as well as on jointly developed open source benchmarks. We are also expanding our partnerships with high-profile AI platform companies such as Landing AI. Working with Landing AI, we are bringing their data-centric AI technology into our portfolio to accelerate performance for mission-critical AI models. We are leveraging all these partnerships to ensure that we can deliver complete functionality and optimized performance for existing and future AI workloads. Now, let's look at what's coming next in our roadmap. I'm very excited, again, and very proud to announce our next generation CDNA 3 compute architecture.

This is our first compute GPU architecture that leverages the 5-nanometer process and also 3D stacking technology to achieve a whole new level of integration that has never been accomplished before. Here we have the CPU and GPU chiplets and the Infinity Cache on the base die. They are connected with the 4th generation Infinity Architecture using the 3D stacking technology that Mark talked about. Integrated with a unified HBM memory, we have an entire system on a single APU package. We also support new mixed-precision data formats to greatly increase compute density. This is the most aggressive GPU innovation we have done to achieve the next level of performance and energy efficiency. Next, I'll talk about some of the key benefits of the unified memory APU architecture. The coherent memory architecture in CDNA 2 provides the following benefits.

It simplifies programming. It has low-overhead communication between CPU and GPU with our 3rd generation Infinity Fabric. It also provides an industry-standard modular design with standard CPU and GPU packages. With the unified memory APU architecture in CDNA 3, we can eliminate the redundant data copies altogether to save power and improve performance. We also achieve highly efficient CPU-to-GPU communication with the on-die 4th generation Infinity Fabric. It also provides a much lower total cost of ownership, again with the CPU, GPU, and unified memory all in a single package. We believe the unified memory APU architecture is a game changer for the future of accelerated computing. Next, I'll talk about the roadmap. This is our multi-generational CDNA compute architecture roadmap.
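
A toy model makes the copy-elimination point concrete. The bandwidth and data-size numbers below are hypothetical, chosen only to show the shape of the saving, not measured CDNA figures.

```python
# Toy model (hypothetical figures, not CDNA measurements): time spent staging
# data over a host-to-device link versus a unified-memory APU where no
# staging copy is needed at all.

DATA_GB = 16          # assumed working set
LINK_GB_S = 64        # assumed CPU-to-GPU link bandwidth, PCIe-class

def copy_seconds(data_gb, link_gb_s):
    # Discrete CPU + GPU: data is copied into device memory before compute.
    return data_gb / link_gb_s

def unified_seconds():
    # Unified-memory APU: CPU and GPU share one physical memory; no copy.
    return 0.0

print(copy_seconds(DATA_GB, LINK_GB_S), unified_seconds())
```

Every pass that re-stages its working set pays that transfer cost (and its energy) on a discrete design; on the unified-memory APU both processors dereference the same physical memory, so the term drops out entirely.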

We achieved absolute performance and performance-per-watt leadership with the Instinct MI200 GPU family, powered by the CDNA 2 architecture. CDNA 3 development is well underway for us to continue our journey to drive performance and energy efficiency leadership. Forrest, in his session, will actually introduce our next generation MI300 GPU based on this exciting CDNA 3 architecture. All right. Next, I'll wrap up my session. We continue to focus on GPU technology development in the following areas: the industry-leading RDNA and CDNA architecture roadmaps that leverage the most advanced process and packaging technologies; consistent execution of the performance-per-watt roadmap; and lastly, continued expansion of our open source software ecosystem, now including AI support. I'm very confident that with our focused execution, we'll continue our momentum of driving and delivering GPU leadership. Thank you, everyone.

Next, I'll introduce Dan McNamara, who is the SVP and GM of our Server Business.

Dan McNamara
SVP and GM of Server Business Unit, AMD

Okay. Thank you, David. Pleasure to be here. I'm gonna change gears a little bit from Mark and David, from all that, you know, incredible technology, and talk a little bit more about the business, in particular the server business. I just wanna start off by saying our EPYC business is on fire. We have doubled revenue year-on-year in eight of the last 10 quarters. We've gained share for 12 consecutive quarters, and the momentum continues. The really exciting part about this is what Lisa brought up when she talked about that data center TAM. It's an expanding TAM, and we have a tremendous amount of momentum, and I'll go through the roadmap that we have going forward.

We really feel like we've got the products, the strategy, and the capacity to capitalize on this growing TAM. I'm gonna spend a few minutes on our journey to date and the momentum we have today. I'm gonna talk a little bit about the forward-looking view of the server CPU piece of that data center TAM that Lisa talked about. Then, most importantly, the strategy for how we're gonna not only maintain our momentum but really accelerate our gains going forward. When we think about our journey, it's really about two things, and Mark talked a little bit about this. It's about predictable execution, and it's about building customer trust with every generation. We started off with Naples, and that was a first-of-its-kind multi-chip module.

It garnered very good early support in cloud storage and, you know, in the national labs with high-performance computing. In 2019, we came out with Rome, and Mark talked about Rome earlier. This was a step function in performance leadership. We basically expanded our coverage in cloud. We accelerated our momentum in high-performance computing. We also got into the important enterprise space and expanded our customer base in high-end enterprise. Then we fast-forward to last year, last March, with Zen 3 and Milan. We again delivered the highest-performance processor in the market. We delivered a 19% IPC uplift and a 25% improvement in performance per watt. We accelerated all of our segments. Then we move forward to earlier this year, and we delivered Milan-X.

As Mark talked about, using our 3D V-Cache technology, we're delivering over 768 MB of L3 cache, strong uplift in EDA and computational fluid dynamics, and just a tremendous uplift in overall performance. As we stand here today, EPYC sets the bar for performance in the data center, and we have over 250 performance world records. Another thing about our journey was that you have to add solutions and software. If you take the same picture, but with a slant on solutions, you can see the rapid ecosystem growth we've had, from 50 with Naples to over 1,000 today. It's this performance and these solutions that are actually driving our growth and share gains.
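The Milan-X cache figure follows from per-die arithmetic. The die layout below (eight CCDs, 32 MB of on-die L3 plus 64 MB of stacked V-Cache per CCD) is public AMD information but is not stated in the talk, so treat it as an assumption:

```python
# Milan-X top-of-stack L3 arithmetic (die layout assumed, not from the talk).
CCDS = 8                  # compute dies on the top-of-stack part
BASE_L3_PER_CCD_MB = 32   # on-die L3 per CCD
STACKED_L3_PER_CCD_MB = 64  # 3D V-Cache stacked on each CCD

total_l3_mb = CCDS * (BASE_L3_PER_CCD_MB + STACKED_L3_PER_CCD_MB)
print(total_l3_mb)  # 768 MB, matching the figure quoted above
```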

I want to take you back to the FAD of March 2020. Forrest showed the roadmap for EPYC back then. He showed basically the roadmap, but he also showed market coverage, or segment coverage, and he predicted the coverage we'd have going forward with Milan. I'm excited to say that not only do we have all of that coverage, but we have leadership across all of it. The momentum today continues to accelerate. Let's take a look at that momentum. We continue to win across cloud, high-performance computing, and the enterprise because we're delivering key value to each segment. In the cloud, our strategy is to deliver the highest-density products and enable scale and efficiency for our cloud customers. As you can see here, comparing our top-of-stack to our competitor's top-of-stack, we deliver 60% higher density.

We can deliver up to a 30% lower TCO by driving higher performance and better energy efficiency. In cloud, we're expanding. We're approaching 500 public cloud instances, and our internal properties, where, by the way, we see a faster adoption cycle, are growing very, very quickly. When you go to high-performance computing, it's all about floating-point performance. As you see here, our top-of-stack versus our competitor's top-of-stack delivers 50% higher floating-point performance. That's 22 performance world records in SPECfp benchmarks. With Milan-X, our customers' engineers can actually deliver up to 2x the number of simulations per day. That drives faster product cycles and higher product quality. I wanted to touch on one more piece of HPC.

Lisa talked a little bit about the Green500 and, you know, the exaFLOPS system. I'm excited to tell you that on the Green500 list, EPYC is represented in eight of the top 10 and 17 of the top 20 systems. That not only speaks to our performance but to our energy efficiency, which is tremendously important as we go to the enterprise. The enterprise is a similar story. Comparing our top-of-stack to our competitor's top-of-stack, we deliver 60% more online transactions per second in SAP SD benchmarks. We can deliver up to a 41% lower TCO in enterprise virtualization. That's not all in enterprise, though. Our customers have a choice in enterprise. They can leverage the performance gain, or they can keep the same performance and actually save on cost and power by doing more with fewer AMD server CPUs.

We are really, really excited because our coverage and our momentum have never been greater. Now, the only thing that excites me more than our momentum is that the opportunity is only getting bigger for us. We have three major trends, and I think you all know them. With cloud service proliferation, there is no shortage of new public cloud instances. The diversity and the number are growing every day. Internal properties are expanding every day in terms of SaaS and PaaS. IDC recently put a number on it: they said cloud spend will be over $800 billion in 2025. That's a 21% compound annual growth rate from 2021. We feel very good about where we are there. Then there are the expanding use cases for AI, which Lisa and David both talked about.

You know, these are expanding pretty dramatically, but I think it's really important to say that a large percentage of inference is happening on CPUs, and we expect that to continue going forward. Even as the use models expand to the edge with smart cities, smart retail, and even the Metaverse, we believe that CPUs will play a critical role. There will be accelerators, but CPUs will continue to play a critical role. Then there's the glut of data from the billions of connected devices, and really from all enterprises capturing all sorts of structured and unstructured data. Everyone talks about the amount of data being captured. It's a large number. The real point is getting value from the data, quickly and efficiently turning it into actionable insights.
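As a quick sanity check, the IDC cloud-spend figure quoted a moment ago is internally consistent: $800 billion in 2025 at a 21% CAGR implies a 2021 base of roughly $373 billion. The base-year number is derived here, not stated in the talk:

```python
# Back out the implied 2021 cloud spend from the IDC projection quoted above.
SPEND_2025_B = 800   # $B in 2025, from the talk
CAGR = 0.21          # compound annual growth rate, from the talk
YEARS = 2025 - 2021  # four compounding periods

implied_2021_b = SPEND_2025_B / (1 + CAGR) ** YEARS
print(round(implied_2021_b, 1))  # ~373.2 ($B implied 2021 base)
```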

Those are the critical drivers. All three of these are, no question, driving an increase in computational demand. The bigger point that's driving our strategy forward is workload diversity, and Lisa mentioned this. The workload diversity that's happening is driving more requirements on the CPU to optimize for better efficiency. That's really what our strategy is all about. Let's talk about that. We look at the next three to five years of our journey. We have very, very tight relationships with our customers, and they've informed us here and given us great feedback on where to go. Pillar one, the cornerstone of the company, is to continue to deliver the highest-performing general-purpose processor on the market, period.

We will continue to drive performance across a wide swath of workloads and continue to drive general-purpose computing. Public cloud, high-performance computing, and even the high end of enterprise will need this, and we will continue to drive that with every generation. The second piece is really about driving more optimization and efficiency for the diversifying workloads Lisa talked about earlier: cloud computing, technical computing, edge computing. Whether it's performance per watt, performance per dollar, thread count, or throughput, all of these workloads require something different, and we're driving our roadmap in that way. The last two pillars are all about taking that performance and optimization and accelerating our customers' time to value by delivering full-stack solutions, so that our customers can get to the performance faster, and, of course, driving ecosystem scale.

We are dedicated to our partners and to scaling that ecosystem, and it's not just about breadth, right? Because you can put up any list of customers or partners. This is about picking the workloads of today and tomorrow and jointly putting together solutions to solve our customers' problems. That's a critical piece. Now, let's look at each one of these. We showed a little bit of this roadmap last November, and we talked a little bit about Zen 4. You know, Mark already talked about Rome and Milan and the leadership there. With the Zen 4 and Zen 4c core families, we're delivering four new products: Genoa, Bergamo, Genoa-X, and Siena. We believe this is the best and most diverse server CPU portfolio ever unveiled.

We will continue to drive not just performance, but efficiency for the workload, across a wide swath of the data center. As you look here, Mark talked about Zen 5. We will begin delivering the Turin family in 2024. We really feel good about this roadmap, and this is why we are so bullish about the future. Now let's take a look at each one of these. Genoa is our flagship fourth-generation CPU, and we'll launch it later this year. This is an exciting product. It delivers up to 96 Zen 4 cores. It has leadership memory bandwidth. It delivers all the new I/O with PCIe Gen 5 and CXL. It's the first server CPU to deliver Type 3 memory expansion with CXL-attached memory.

We're also driving more and more security features around confidential computing to deliver memory encryption for both direct-attached and CXL-attached memory. More importantly, I wanted to give you a piece of data on performance. We've done some testing on enterprise Java. We're delivering greater than 75% performance uplift on enterprise Java workloads from our top-of-stack Milan to our top-of-stack Genoa. Tremendous performance uplift. Our customers are really pushing for this product. I've been told by many cloud providers that they believe that with Genoa, we hit a performance and TCO uplift that is actually bigger than what we delivered in the past with Rome and Milan, and Rome and Milan delivered really big leaps. We're excited about this.

We're gonna launch this later this year, and things are really looking good; we're sampling to customers right now. Okay. Bergamo will be the highest-performance cloud-native computing processor. Cloud-native workloads require a high number of threads and high throughput. Bergamo's gonna deliver 128 Zen 4c cores and 256 threads. It's gonna deliver all of the same I/O and security features as Genoa, the same platform, and complete ISA support, as Mark talked about. What's best about Bergamo is that our customers get all this throughput and all these cloud-native features without needing a software port. Containers are a common deployment model for cloud-native deployments. We deliver 2x the container density of Milan, which is itself a leader in container density today.

Our cloud customers tell us that they will deploy both Genoa and Bergamo across their fleet, targeting different workloads and optimization points. We're really excited about delivering this in the first half of 2023. Rounding out the optimized silicon, Genoa-X is an extension, the next gen, of Milan-X. It's delivering up to 96 cores with over 1 GB of L3 cache. The customers that are using Milan-X today want to jump to this already. I mean, they want this ahead of Genoa, believe it or not. There's just a lot of pent-up demand for this, and excitement. It's, again, targeting technical computing and relational databases, because we believe that added L3 cache will give even more benefit and lower latency for some of the relational databases out there. That's coming in 2023. Then Siena.

Our customers talk to us about the edge and telco, and as the network virtualizes and the edge gets built out, they're looking for better performance per watt and better performance per dollar. We designed Siena for this. We're delivering up to 64 cores in a low-cost platform, and we've optimized it for both performance per watt and performance per dollar. We'll be delivering this in 2023 also. Okay, now switching gears a little bit to solutions, and solutions are a really broad topic. I wanna go back to the two areas I talked about earlier: database and analytics, with that glut of data, and then AI. We have right now about 55 fully vetted solutions with our OEM and ISV partners targeting relational databases, data analytics, and data lakes, and they're delivering over 60 world records.

The reason I call out the records is, as I talked about earlier, the whole point of data is to get those actionable insights really, really quickly. We're now porting these to Genoa for our launch later this year. In AI, we continue to look at both hardware and software optimizations for EPYC for inference and all the expanding AI use cases. I'm happy to talk about ZenDNN, which David mentioned. This is our own software optimized for EPYC. It's integrated with all the industry-standard frameworks. It delivers optimization for graph and computational models to deliver higher-performance inference on EPYC. Today, it's deployed across multiple cloud vendors, and they're seeing very nice performance uplift on recommendation engines.

Now, Victor's gonna talk a lot more about AI and the broader picture for the company across CPU, GPU, and adaptive accelerators. This is a focused piece for EPYC that we've been working on for some time. The fourth and final pillar is really about the ecosystem. As I mentioned earlier, with every generation we build tighter relationships. It's not just about engineering quals, but more about go-to-market and what the right things are to do to jointly solve our customers' problems. Whether our customers are in the cloud, on-prem, or have a hybrid or multi-cloud installation, we're focused on and committed to our partners to deliver those solutions. With fourth-gen EPYC, we believe, and we know, we will deliver 2x the solutions of gen three.

Again, focused on those workloads of today and tomorrow. Now let me close. I talked a little bit about our journey, and it is very exciting, but we really are just getting started. We've got a large and growing TAM, and we are still underrepresented. The opportunity is very big. Secondly, we have leadership and momentum today across all of the key segments, and we're continuing to see that. Lastly, we have this expanding portfolio, and we're gonna drive both optimized silicon and software and solutions to really deliver optimized solutions across a wide swath of workloads for our customers. I just wanna end with this: this is why we are excited about not just continuing our momentum, but accelerating our growth.

If there's one thing I would leave you with, it's that I truly believe the best of EPYC is yet to come. I wanna thank you for your time. Now I wanna introduce Forrest Norrod.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Thanks, Dan. Great job.

Dan McNamara
SVP and GM of Server Business Unit, AMD

Thanks.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Good afternoon. I'm incredibly pleased and proud to be here today because I get to represent the work of thousands of AMD engineers that have developed the leadership product portfolio that we have for the data centers today. It's a leadership portfolio that delivers to our customers the highest possible performance, the greatest energy efficiency, with world-class security, which are all attributes that are absolutely critical for the data center. The data center is absolutely critical to AMD. At AMD, we view the data center, as many people have already said today, as our biggest opportunity because quite candidly, the data center is so central to everything that our businesses and ourselves already do. That importance is just continuing to grow.

With new workloads, new capabilities, new services continuing to roll out that have ever-increasing demands on performance, we view the data center as an unending source of opportunity for AMD and our customers. We've got a great set of products to address that opportunity. Dan's already talked about the EPYC server CPU line, which truly has defined performance and leadership in the data center since it was introduced in 2017. I'm gonna talk about the Instinct line of GPU accelerators that have helped the industry enter into a new era, the exascale era. I'll also talk about our new networking technology from Xilinx and the DPU and software from Pensando, which will allow us to offer new levels of performance and infrastructure acceleration for our customers to further improve the efficiency and operations of their data centers.

Right after me, Victor will talk about some of the assets from Xilinx, which add further workload acceleration opportunities with the Alveo and Versal product lines. Again, all of these products offer the highest performance, the greatest power efficiency, and the best security that, I think, is available in the market today. That market is a huge one. Lisa already previewed the top-line number, but let me break it down for you a little bit more. We view the data center market for us in 2025 as a $125 billion opportunity, with a very large and important part being the $42 billion server CPU TAM that Dan talked about a moment ago. There's an even larger opportunity around GPU, AI technology, and related silicon components: a $64 billion opportunity.

That, coupled with $13 billion in adaptive SoCs and FPGAs, and the $6 billion of opportunity that the new AMD can address in networking and infrastructure acceleration, adds up to an incredible opportunity. Let me click into that $64 billion and start by talking about the journey that we've been on with GPUs for the data center. David already talked about the foundational architecture, CDNA, which his team started back in 2017. It came to its first culmination with our MI100, the industry's first data-center-optimized GPU, where we got rid of anything that wasn't relevant to the data center and really focused in on compute leadership, primarily to power HPC applications with the MI100. It was a great place to start. We saw good results with that, but building on those, we built the MI200.
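The segment figures just quoted add up to the headline number Lisa previewed:

```python
# Forrest's 2025 data center TAM breakdown, in $B (all figures from the talk).
tam_b = {
    "server CPU": 42,
    "GPU / AI and related silicon": 64,
    "adaptive SoCs and FPGAs": 13,
    "networking and infrastructure acceleration": 6,
}

total_b = sum(tam_b.values())
print(total_b)  # 125 ($B), the headline opportunity
```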

The MI200 is a fantastic part. It's the part that led to the first exascale supercomputer, and it's the part that's beginning to democratize AI training in the data center as well. It's the first multi-die GPU and the first GPU based on our CDNA 2 architecture. In a moment, I'll take the lid off the next generation of the Instinct GPUs, the MI300, and it is an amazing part. Hopefully, David's already given you a little bit of a taste for it, and hopefully you're as excited as I am about it, but it's a truly amazing part that's going to continue to redefine performance and efficiency leadership in the data center. Coming back to the MI200: it is a great part.

It has absolute leadership in HPC performance today, offering a 2x to 5x improvement over the nearest competitor on many important HPC benchmarks. It's all based around that 3rd generation Infinity Architecture that Mark and David talked about, and it was the first product to bring CPU and GPU memory coherence to the standard data center market, providing exceptional system bandwidth and performance, and pointing the direction for the way new software architectures are going to be built and deployed for accelerated computing in the data center, not just for the GPU. The most important, most exciting part about the MI200 is what our customers have done with it, because the MI200 has absolutely unlocked the exascale era. It powers, along with a customized Milan part, the world's number one supercomputer.

The Frontier system at Oak Ridge National Laboratory, built in conjunction with our partners at HPE, delivered in its first TOP500 run 1.1 exaFLOPS, 1.1 quintillion floating-point operations per second, which is a staggering amount of computational power. It also powers the world's number one green supercomputer. As Dan already said, AMD powers eight of the top 10 most power-efficient supercomputers in the world. It powers the world's number one AI supercomputer, with over 3x the performance of the next closest machine. The progress the teams have made in HPC between EPYC and Instinct has been incredible over the last couple of years, and I'm very, very proud of what they've done and very proud to be able to represent it to you. It's not just about HPC with MI200.
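The two units in that sentence are the same magnitude, which is easy to check:

```python
# "exa-" (SI prefix) and "quintillion" (US short scale) both denote 10^18,
# so "1.1 exaFLOPS" and "1.1 quintillion FLOP/s" are the same number.
EXA = 10 ** 18
QUINTILLION = 10 ** 18

frontier_rmax = 1.1 * EXA  # Frontier's first TOP500 result, FLOP/s
print(EXA == QUINTILLION)  # True
```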

MI200 is also making great strides with the ROCm 5.0 software that David talked about being production ready for AI. MI200 offers leadership today in AI performance, delivering 1.2x to 1.5x the performance of the NVIDIA A100 part in many of the most interesting and largest AI applications. That performance and that ease of adoption with the software frameworks that the teams have produced have been recognized. At Build a couple of weeks ago, Microsoft announced that they were the first cloud company to publicly acknowledge the use and adoption of MI200 for their AI training applications for both internal as well as third-party use. We're very, very pleased and proud to have Microsoft as a partner with us on this AI journey. The journey doesn't stop with the MI200.

The MI300 is a truly amazing part, and we believe it points the direction of the future of acceleration. The MI300 is an APU. It combines in one package Zen 4 CPUs, the latest generation of CDNA 3 GPU technology, Infinity Cache, Infinity Architecture 4.0, and a large pool of very high speed HBM memory. That memory is shared, as David said, between the CPU and the GPU, allowing them to communicate freely without the performance or energy overhead of redundant memory copies. It's designed for leadership memory bandwidth, leadership application latency, and delivers substantial power savings over alternative architectures. We're gonna be very pleased to talk more about this as we get closer to the introduction because the MI300 will deliver over eight times the AI performance of the already leadership MI250. This will be available next year. Proud of this part.

More to come. Now I want to lift our gaze from the engines we've been talking about, the CPU and GPU engines, and consider the other challenges of the modern data center. The modern, software-defined data center is this amazing place that is far more agile and performant than it's ever been. But it's got a new set of challenges that come with the technologies that have given it that agility. In the cloud, if you take a look at most large public clouds, a very large proportion of the compute power inherent in the systems comprising the cloud doesn't go to the end-user applications and doesn't get applied to the internal properties: search, video serving, whatever.

Instead, it gets consumed by the infrastructure services required to provide the cloud service in the first place: the networking, security, and storage services that you need to have in a cloud. That's still an incredible overhead burden to pay. Likewise, the shift of computing to the edge that several of my colleagues have already discussed is exciting. It's gonna provide new applications, new experiences, and new benefits for both businesses and consumers, but it poses significant challenges: how do you manage the resources and the data? How do you secure computing and data when they've been pushed outside the protected walls of a data center, all the way out to the edge?

Then lastly, that explosion of data Dan talked about earlier is very real, and it presents new challenges, because that flood of data, whether it's at rest in the data center, in flight within the walls of the data center, or out at the cloud edge, must be protected, must be secured, must be monitored. These are significant challenges for the software-defined data center. To address some of them, we have been focused on building up the AMD portfolio of networking technology. We've got two great additions to that with Xilinx and with Pensando. The Xilinx team has brought a number of new technologies to us that are critically important for addressing each one of those challenges.

The Solarflare NIC, by the way, is probably something that is in many of your companies' data centers today, because it is the de facto standard for high-frequency trading network interfaces: it defines low latency and high performance for those applications. The Alveo line of FPGA-based network accelerators offers our customers the ability to customize their networking flows and drive the ultimate in performance. The recent addition of Pensando brings us a very flexible set of hardware and software that allows us to offer that same level of agility, and in fact even more concurrent services, to a broader range of customers.

That Alveo NIC, that adaptive network acceleration, is in production today with a number of hyperscale customers that use it to precisely craft the networking flows and to offer the security services they need to secure and accelerate their services. This is a part we're very proud of. We're gonna see the next generation of it coming out in 2024, and we'll continue to offer this level of differentiated solution for those that desire the ultimate in control and performance of their networking flows. The Pensando acquisition brings us not just an incredible team, which Lisa already talked about, one with a rich heritage in developing systems and networking solutions, but also the world's most intelligent DPU, which is, quite candidly, the highest-performance DPU on the market today.

It consists of 144 P4 packet processors that are domain-specific and workload-optimized to efficiently process networking and data flows, and it can accelerate not just networking, but also storage, security, and other applications. It supports tens of millions of networking flows and delivers those services at line rate with multiple services active simultaneously. The second-generation Elba DPU is in production today with large hyperscalers and across a broad range of applications. It's not just about silicon. The heritage of that team, as I said, is in systems. The Pensando team, now part of AMD, has created a complete set of software services to accelerate virtually every infrastructure service required in the modern software-defined data center, from networking to security, from telemetry to firewall.

The software is a complete stack that can be adopted as is, or can be easily customized by a customer to add additional value or tweak it as they need to. It's also designed for the reality of data center operations today. For example, it can be updated in place with no disruption. The system can be live, the DPU can be operating, and you can add or subtract features, implement bug fixes, and change the software without bringing down the system or the networking traffic. A truly amazing solution. You put it all together, and the DPU acceleration hardware and software can be applied in a large number of places. Often, people have talked about DPUs in the context of SmartNICs to offload the cloud computing overhead we talked about earlier, and they're great there, and they're in production today.

Also, the Pensando DPU and software stack can be used at other places in the data center. HPE Aruba has just introduced a very novel product, a smart switch, that defines a new category where a top of rack switch provides not just connectivity, but a full set of firewall, security, and telemetry services to a rack of systems, obviating the need for separate and costly appliances to provide those services and allowing much easier east-west firewall and traffic management. Likewise, NetApp, we're very proud to say, is a customer that has built a set of storage solutions that are in production today around the Pensando solution. Really across the data center, we have the DPU and the software to accelerate a wide range of infrastructure tasks and to dramatically simplify and accelerate operations.

Securing the data center is a great challenge as those data flows continue to expand. At AMD, whether it's DPU, GPU, CPU, what have you, security and advanced security features are a critical part of everything that we do. I'm pleased to say that we now have the broadest set of security technology that we've ever had: a set of technologies that allows us to secure the node, to secure the server from boot, and to offer our cloud customers the ability to offer confidential computing. That's a differentiated solution that gives a cloud provider's end customer the certainty of knowing that they control their environment, even if they don't trust the cloud company itself. Truly securing the entire node.

Now with the Alveo and Pensando technology, we can secure with firewall and line rate encryption on every port, the access to and from that server and across the data center. We're gonna add to that, embedded in our processors that go into edge devices as well as the networking interfaces that talk from them back to the data center, a complete set of management and security services extending out to that edge. Then throughout the whole thing, our technology allows us to gain telemetry and allow our customers to fully monitor what's going on in the data center without taking a performance penalty from doing so.

Feeding that data into advanced threat detection systems, it'll be easier than ever, along with our partnerships with the companies that you see here, to detect threats and react to them much faster than ever before. I've spent the last few minutes giving you some context on our overall vision of providing a complete set of products and services for the data center. It's a journey that we continue on into the future. We now have the most complete set of silicon data center technologies, and we believe they offer unmatched performance, the best energy efficiency available today and into the future, and comprehensive security features that will help our customers continue to secure, accelerate, and drive efficiency in their data centers.

With that, thank you for your time. Let me introduce Victor Peng, President, Adaptive and Embedded Computing Group.

Victor Peng
President of Adaptive and Embedded Computing Group, AMD

Thanks, Forrest. Great job. Thank you, Forrest. It's great to be here in person, as many of us have said, and also to be speaking to you as part of the AMD team. Both things took a little longer than I would have dearly liked, but it's definitely worth the wait. You know, the other thing I learned is that Mark Papermaster and I have something in common: we're both hitting our fortieth year in the industry this summer. I can tell you, I consider myself incredibly fortunate to still be working with people that are absolutely passionate about building great products, high performance, and helping our customers really touch people's lives. I mean, I don't think I could be more fortunate than that. You know, I'm gonna talk about the business I lead, the Adaptive and Embedded Computing Group.

You can think of that as the Xilinx business plus AMD's original embedded business. Then, as you've heard, I'll also talk about our strategic initiative, pervasive AI, that Lisa mentioned, and you've heard throughout the talks that AI is an area of focus for AMD. But really, if you take that up a level, one of the things I'm really talking about is all the reasons why AMD plus Xilinx is way more than the sum of the parts, right? That's not only in how Xilinx is already benefiting from being part of AMD, but in how we can make our contribution to the company as a whole. You know, before I get into ACG, I wanna share with you some of the momentum we built up at Xilinx.

You know, if you follow us, we had a strategy of data center first, and our growth drivers in terms of markets were data center, 5G, automotive, and then all of our embedded markets. The other part of our strategy was that we wanted to move from just delivering components to delivering platforms. That meant, you know, our SoC integration, software stacks, and our Alveo boards. From a financial perspective, we felt we had a pretty good tailwind, a TAM that would be $33 billion in 2025, and we were trying to sustain double-digit growth. Of course, I'm really pleased to say that we delivered on all of those, and we certainly grew in all those large markets. You know, we've doubled.

Well, actually, I should say we grew over 20% year-on-year on a pro forma basis if you look at Xilinx's fiscal 2022 versus fiscal 2021. Our design wins and pipeline for Zynq, boards, and, you know, platform kinds of designs have eclipsed our pure FPGA silicon. The thing is, now as part of AMD, we're really gonna build on that momentum, and, you know, our opportunities are greater, and I think we're really gonna accelerate our growth. The mission's really quite simple: we're gonna exceed our customers' expectations with high-performance, adaptive, and intelligent solutions for the data center, the edge, and endpoints. You've heard a lot of discussion already, of course, about the leadership that we have in the data center and the intelligent edge.

When you talk about endpoints, you're gonna hear later about our client business, but you could also think about it from an embedded perspective, right? You heard Dan talk about telco cloud; that probably gives you a sense of where there are synergies. You know, in terms of the edge, Forrest talked about it: you have to have a lot of processing there because of latency issues and the things you need to do locally. You could also think about an edge device as a car or a base station, right? Cars are also going to talk to base stations. And endpoints are not just client devices and things of that nature; an endpoint could be a smart camera, motor control, or all kinds of sensors, like the LIDAR and radar in a car.

You heard Lisa remark about end-to-end. We are gonna deliver all forms of computing, end-to-end. The strategy is quite simple: continue to deliver leadership adaptive compute products and technology. You know, if you've heard me speak before when I was with Xilinx, you know I never said that adaptive computing was the end-all, be-all. Computing on CPUs, as on GPUs, is always gonna drive a lot of the workloads. I've always said that in a world of change, adaptability is an incredibly valuable attribute, right? Change is happening everywhere. You hear about it; Forrest talked about it. The architecture of a data center is changing, and you'll hear about it still more. The platform of a car is totally changing. Industrial is changing. There's change everywhere.

If hardware is adaptable, that means not only can you change it after it's been manufactured, but you can change it even when it's deployed in the field. To use the car analogy, it's sort of like doing over-the-air updates to firmware, except with adaptability you can make changes at the architectural level of the hardware. That's gonna be really important as we continue to innovate in adaptability. Of course, we're gonna do our part in the data center, right? You heard Forrest talk about our Alveo and Solarflare SmartNICs, but we also do compute acceleration. You've heard a lot about us saying, you know, matching the right workload to the right architecture is really important. CDNA and RDNA, and of course Zen, are still doing an incredible amount of the lift.

Also, you know, Alveo accelerator cards: basically, just as Forrest said you can change what a Pensando SmartNIC is doing on the fly while it's in operation, you can do that with Alveo compute accelerators, right? We'll continue to drive that, and certainly AI. The other thing about adaptability is that you can customize these solutions. You know, more and more, customers are expecting you to deliver something that is tailored to their needs, right? Whether it's because they have to optimize their network for exactly the traffic and workload they're running in a data center, or because they have special security needs, adaptability is really important in allowing people to have tailored, customized solutions.

Then, of course, we're gonna continue to be absolutely committed to all the embedded markets we serve and drive greater growth there, because now I can offer those customers, whose trust we worked decades to build and whose businesses we understand, a much broader product portfolio, including the capability to deliver customized, optimized solutions. You know, we're looking at a long-term TAM of about $105 billion. That's more than 3x the number we were pretty proud of when we were Xilinx. You can see, you know, we're still looking at the same major growth drivers, right? By the way, that $13 billion was included in Forrest's number. I just wanted to show again that we're gonna do our part in the data center.

Communications infrastructure, very large growth; automotive growth; and then, of course, all the embedded applications across them. Again, the common theme here is that we can offer our customers tremendously more value. I'm gonna walk you through each of those very quickly. You know, I already talked about compute acceleration. Maybe I could just, you know, give you an idea of some of the things that we are accelerating in the data center. As you know, we're deployed in a couple of different, you know, cloud services in terms of FPGA as a service, and we're definitely getting some traction. We've done genomics acceleration, lots of streaming video, data analytics, graph analytics and things of that nature, and AI. Adaptive SmartNICs: you know, Forrest already discussed those.

I think, again, the big picture of the transformation happening in the data center is, you know, on the compute side, but you could also be limited by the network side, which we discussed. It's not just offloading cores by, you know, removing driver code; it's also making sure that you're not bottlenecked by the network, but also by storage, right? Storage and memory. Our products are in both traditional storage controllers and computational storage, where you can actually do processing where the data is stored, right at rest. We're also used in general-purpose roles, right? Just in terms of system-level things. Because of all these things, we're deployed at 10 of the largest hyperscalers, and we've got lots of deep engagements going on, particularly in the AI space. Communications.

If you follow this space, you know that we're really strong in radio. We're deployed at six of the seven top 5G wireless vendors. Our Versal with AIE is deployed in 5G in multiple deployments across all geographies. We have exposure also to Open RAN. Of course, you heard about, you know, virtualization, vRAN. As you start going into it, you know, we weren't participating as much in the DU and CU. In the core wired networks, we had a good position. Today, as part of AMD, you know, you heard about what we can do in terms of EPYC and telco cloud. We can really go and supply all that signal processing end-to-end: from when it hits the antennas, through the radio, through the DU, the CU, into the core networks, and into the cloud.

Again, I have a lot more to offer our customers, and our customers in the communications areas are quite excited about it. By the way, you know, everybody talks about insatiable demand for compute; there's insatiable demand for bandwidth as well, right? A lot of it is streaming video, but there's all kinds of other data as all of these sensors and cameras are deployed. It's a really difficult problem because you don't just need bandwidth, you also need low latency for a lot of these applications. There's disruption in this industry as well. Automotive: another really great place where we've been complementary, and by putting things together, we can really serve up a complete solution. I think everybody knows that the automotive platform is being completely re-architected, right? Things are getting electrified.

That's been going on for some time, toward greater levels of autonomy and, frankly, greater levels of safety. The silicon content is increasing because there are more sensors, and a greater diversity of sensors, being included: LIDAR, radar, all kinds of cameras, front-facing and surround cameras and so forth. At the same time, there are also cameras pointing inside the vehicle for things like occupant monitoring systems for safety, you know, drowsiness detection for drivers, but also for immersive, really exciting experiences, including, you know, gaming-quality experiences. Because once things get more autonomous, I guess, you know, you're kind of a captive audience, so to speak, right? You know, the classic AMD side of the team had really great strength in IVI, leveraging their incredible graphics technology.

We had strength in ADAS and all those sensors, right? You know, the architecture of the car platform is moving toward a lot of those diverse sensors, but then more power in centralized domain controllers that are really gonna require an incredible amount of compute. We can offer the compute for the whole platform now: you know, intelligent sensors, data aggregation and preprocessing in the central module, and now we can do the heavy lifting with, you know, embedded Ryzen or EPYC, as well as the immersive experiences. This is a no-brainer.

Of course, we've had many decades of experience delivering to automotive suppliers, so we can also help the company overall understand what it takes to supply, say, the top 10 manufacturers, both the tier ones as well as some of the direct OEMs. Xilinx has shipped cumulatively over 200 million units in the auto market over the decade-plus that we've been supporting automotive. Then there are the broad embedded markets; you know, I can't go through all of them. I just wanna reiterate that we're gonna support all the markets we already serve: healthcare and vision systems, industrial, aerospace and defense, test, measurement, and emulation, as well as, though it's not shown here, audio/video broadcast and some consumer. You know, there's really disruption going on in these industries as well.

It may not move at quite the same pace as the data center or some of the more consumer-oriented areas, but there's no question there's a revolution in healthcare, and no question there's a revolution in industrial, right? Just to talk about industrial for a moment, 'cause that's another area where there are great synergies. You know, not only because of digitization and things like the disruption in, you know, supply chains because of COVID and so forth, there's just more automation going into industry, right? There are more cameras for inspection systems, more robotics, just more automation. You know, machine learning is being used for predictive maintenance. That's generating huge amounts of data.

Now, if you're controlling precision robots, you can't send all of that up to the cloud because of the latency. That's requiring really fast edge servers, right? Deployed to control what's happening on the factory floor. Xilinx could not have served that business before. We're engaging with some of our top customers who are saying, "Wow, this is great. Now that you're part of AMD, you know, you guys have already been great in robotics and motor control and smart cameras; now you can address the server, you know, located right there, and the path up into the cloud." Right? That's an area of disruption that's another big opportunity for us. You know, in healthcare, we shipped over five million units just in the last 12 months.

Everything from, you know, monitoring systems to a lot of imaging systems and even robotic surgery systems. You know, we have over 6,000 unique customers in these markets, and it's just a great thing that we can now bring them more value with this incredible product portfolio and help them through these transitions. Every one of them is using AI too, by the way. Okay. You know, like all the other businesses, the reason why we've had great traction is because we've had really leadership roadmaps, and we've been executing. You know, you've heard me talk in the past, if you followed Xilinx, about how we executed not just on silicon but on software and so on. Here I'm just gonna focus on the silicon roadmap.

As you can see, if you look at the adaptive silicon, we're on seven nanometers. I know many of you are familiar with the fact that we've had Versal for quite a while. The thing about it is that we have so many subfamilies that are optimized for different markets, in some cases right down to a specific application like 5G radios, and we continue to roll out more of these families with, you know, special capabilities for those targeted areas. Like Versal HBM, which we just rolled out. We'll be rolling out, you know, Versal AI with high-performance ADCs and DACs for radios. Then the AI edge family is tuned more for applications like automotive, where you do need real-time image recognition and so forth, plus all the other programmability. Seven nanometers is our leadership node.

It just so happens that on the embedded processor side, we're at 7 nanometer as well. It's a similar story: it's not as critical to move to the absolute leading-edge node right away, so long as we're continuing to deliver capability and value, because these customers value long life, and they value high quality and reliability. That said, we are gonna be moving to advanced nodes, right? What you can see is that on the embedded processor side, we'll be moving to 5 nanometer in EPYC and 6 nanometer in Ryzen. On the adaptive silicon side, because, again, we're still rolling out more Versal products, and indeed, you know, we're actually even still rolling out some 16-nanometer products through the MPSoC product line, which has been tremendously successful, we're gonna take a little bit longer, but then we're gonna leap all the way to 3 nanometer.

What can you expect? You know, I'm not gonna share the details today, stay tuned for when we get closer, but you can certainly expect significant uplift in the architectural performance and capabilities, and in power efficiency. You hear that as another common theme throughout, because it almost doesn't matter what operating point you're at, you're generally power limited, right? Either literally power delivery, or thermal, heat removal. You're always gonna hear us talk about both performance and power efficiency. You know, it's the same thing for the EPYC and Ryzen embedded processors. We're gonna continue this trend of leadership, even for our embedded markets.

Putting this all together: you know, as I said, at Xilinx we actually did a pretty good job of building up momentum behind our business and driving adoption of adaptive computing. As part of AMD, I really think our growth is gonna accelerate, and that's gonna happen so long as we keep delivering leadership adaptive computing products and complete platforms, right? The silicon, the software. I didn't talk about that much today, but you'll hear about it in the next part of my talk. We feel we can definitely sustain double-digit growth. And really, of course, what we bring to the company as a whole, as you heard Lisa say, is diversification into these markets, fantastic customer relationships, and domain knowledge.

It's great that we can contribute both technically and financially. I'll go on to the second part of my talk. In terms of those opportunities, you know, Lisa shared these major areas. I would say on the embedded side, it's pretty straightforward: we can sell the products we already have, and with this broader portfolio, our technical field team, and the relationships we have, we should bring a lot of the combined products we have to market. We'll definitely be doing that. On the automotive side, as I mentioned, now we can actually integrate and cover the entire auto platform. That takes a little bit longer, as people know. Auto development cycles have been accelerating, but they still take a little longer.

Once they do ramp into production, they live for quite a long time, so we see growth there in the long term. Then, of course, data center and communications. I already talked about, you know, what we have here. In fact, I would say that in terms of customer relationships, Xilinx had a lot of the long-term relationships with the big communications suppliers. We are being introduced to opportunities that we weren't, you know, exposed to before, now that we're part of AMD, because of the breadth of the technology and because of the capability, like Mark discussed, in terms of chip integration and other forms of integration. We clearly see synergies there. The largest opportunity by far, as Lisa mentioned, is in AI, both inference and training.

I'll talk about pervasive AI next. Everybody talks about AI, right? It's been going on for quite some time. The truth is, it's a really big business, and it's moving very fast, but we're still actually in the early innings of AI being truly pervasive, right? When it is, the world is gonna change in very profound ways. I mean, it has already, but again, it's pretty early days. I think what may be a little bit surprising, too, is how well AMD is positioned today. You heard it throughout people's presentations: what we're doing, how we're increasing performance by factors. Let's talk about coverage in terms of applications. That's what I mean by pervasiveness.

You might wanna start thinking in the background about how many different applications you think AMD products are deployed in that are doing some form of AI. You probably would've guessed all of these. You know, you heard a lot about the great momentum, the incredible momentum, doubling every year in the data center, and what Instinct is doing, breaking records. It's probably not too hard to think about servers and PCs. In the home, of course, you know, there's not only work from home but entertainment and game consoles, and the metaverse is definitely gonna have forms of AI. When you add the AMD FPGAs and adaptive SoCs, that actually fills out pretty well, right? We just talked about, you know, communications; of course, smart city. There are more applications within the home, healthcare, intelligent factories, transportation.

We're actually in quite a few areas that are doing AI, mostly inference; you know, again, the heavy-duty training is happening in the cloud. The other thing, by the way, is the way this looks: they look like islands, but they're all connected to the cloud in some way, shape, or form. AMD plays in communications infrastructure as well, right? Both wireless and wired communications. We play there, and they will definitely use AI in communications as well, for things like dynamic beamforming for radio spectrum management, you know, in real time. While we have good coverage, we're gonna double down, as Lisa said, to make sure that we cover even more. That takes two parts. One is we're gonna leverage all the IP we have.

You heard about the roadmap that David described for CDNA, a very ambitious roadmap of increasing by factors from generation to generation. You've heard me mention AIE before, but maybe it's good to walk through that architecture: why its scalability, and what it does, perhaps in a little contrast to what's happening in GPUs, is really important. Once we integrate some of that IP, that XDNA, in broader places, then we'll get broader coverage of neural networks. When we talk about change, right? If you follow neural networks, every week or so there's a new neural network, and they're definitely increasing in complexity and size. Last but not least, you know, you heard David talk about it, you heard multiple people talk about it: it's not just about the silicon.

You have to have the software, right? I will talk about the unified AI software stack Lisa mentioned, which lets developers utilize this very broad portfolio. To start off, Lisa mentioned XDNA, and the way you can think about that is that these are the core architectural IPs that differentiated Xilinx. Of course, there's the FPGA fabric; you know, we invented the FPGA. That is the most general hardware accelerator, whether it's AI or non-AI, and we're gonna continue to innovate there. Let's focus on the AIE. This is relatively new, right? It was only introduced in Versal, and it is deployed in production today. You can think of it as a tiled array architecture, where each tile has a very powerful execution engine together with local memory and local data movement.

It scales really well because, you know, you can scale the size of the array up or down depending on the performance, power, and cost point you're trying to hit. For instance, the first Versal family product we put out had an array of 400 of these tiles. You know, later on you'll hear that we're integrating it into more cost- and power-sensitive areas where we might be integrating only tens. It's sometimes referred to as a spatial architecture, but I'll focus on the dataflow aspect, because that works really well with AI, and I'll explain that in a moment. The other thing is, the cartoons make the tiles look similar, but they're not really similar when you go one level down.

The reason why you want to make them look similar is because, over the decades of experience we've had taking tens of thousands of customer designs and compiling them into FPGAs, some of those optimizations in space and time are similar to the optimizations we use when we compile, you know, neural networks or other kinds of algorithms, signal processing, into the AI Engine. That's a really important aspect, and I think it's something unique to the expertise we've had. Okay, you know, I'm gonna walk you through why AI works well on the AI Engine. I apologize to those of you who understand this, but, you know, maybe not all of you are quite as geeky as, I don't know, I am. Yeah, that graph represents a basic neural network.

Maybe this is an image recognition application where, you know, the input comes in, the pixels come in on the left. The nodes are the neurons, if you will. There are a lot of calculations that fire; those are the activations, on the edges. There are other things that aren't shown, like the weights that you hear about. Collectively, all that data is the parameters. The data executes, you know, layer by layer; the data flows across the neural network, if you will, and then the output layer says it's a cat or a dog or whatever, right? This is, again, kind of a cartoon of how that would actually get compiled onto all those tiles, with lots of distributed memory and execution units to process what's happening, and the data just flows, right?
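For the more hands-on readers, the layer-by-layer flow just described can be sketched in a few lines of NumPy. This is purely illustrative: the layer sizes, random weights, and the cat/dog labels are made up, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 input "pixels", 4 hidden neurons, 2 output classes.
# The weights and biases are the "parameters" mentioned above.
W1, b1 = rng.standard_normal((8, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 2)), np.zeros(2)

def forward(pixels):
    # Data flows layer by layer: each layer's activations feed the next.
    hidden = np.maximum(pixels @ W1 + b1, 0.0)  # hidden activations (ReLU)
    logits = hidden @ W2 + b2                   # output-layer scores
    return int(logits.argmax())                 # index of the predicted class

x = rng.standard_normal(8)                      # stand-in for input pixels
print(["cat", "dog"][forward(x)])
```

On a dataflow machine, each of those layers would live in its own tiles with local memory, and the activations stream from one to the next instead of round-tripping through a shared memory.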

Traditional architectures are more memory-based, right? You fetch data from memory, you do an execution, you put it back to memory, you fetch it again, you move back and forth. Moving that data back and forth to memory burns power and loses time, so it's not really good for streaming kinds of architectures, or streaming kinds of problems like this. That's one thing. The other thing I mentioned, and I won't go into detail, is sparsity. You know, networks aren't fully connected, right? And there are also some things about the weights. If you can take advantage of those attributes of sparsity, it also helps your performance and your power efficiency. So the takeaway here is that the AIE is very high performance and very energy efficient, and because it's adaptable, we can change the connectivity of the tiles at the hardware level.
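A toy sketch of why sparsity helps (plain Python/NumPy, not AIE code, and the matrix here is invented for illustration): a mostly-zero weight matrix means an engine that skips the zeros does proportionally less arithmetic and moves proportionally less data, while producing the same result.

```python
import numpy as np

rng = np.random.default_rng(1)

# A mostly-zero weight matrix stands in for a sparsely connected layer.
W = rng.standard_normal((64, 64))
W[np.abs(W) < 1.5] = 0.0

x = rng.standard_normal(64)

# Dense path: every multiply-accumulate runs, zeros included.
dense = W @ x

# Sparse path: store only the nonzero weights and do only that work.
rows, cols = np.nonzero(W)
sparse = np.zeros(64)
for r, c in zip(rows, cols):
    sparse[r] += W[r, c] * x[c]

# Fraction of the multiply-accumulates the sparse path actually performs.
density = len(rows) / W.size
```

Both paths compute the same vector; the sparse one touches only a small fraction of the entries, which is the performance and power-efficiency win being described.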

You can also customize it for different workloads. Indeed, you know, the AI Engine, as I mentioned, is being used in 5G base stations. Okay, how does this help us in terms of our strategy? The second element is making sure that our product portfolio can cover the broadest set of neural networks, but also applications, because different types of applications, like recommendation engines, image classification, or natural language processing, use different kinds of neural networks. Well, right now we have pretty good coverage. You heard, you know, Dan say that an awful lot is still being done on CPUs, and ZenDNN gives you factors there; you'll continue to do things there, mostly inference, but even training for small models.

Now, RDNA and CDNA, so Radeon and Instinct, do all the heavy lifting for training, for sure, but also inference for really large models. Versal AI is doing both, mainly inference, right? It extends from small to fairly large models, but not quite as large as, say, what Instinct can do. We're generally also doing scale-out. Once we start integrating AIE into more of our products and we go to the next generation, we'll cover tremendously more of the space across model sizes. This, by the way, is while these models are still getting larger, right? Some of the transformer models today already have hundreds of billions of parameters, and they're talking about transformer models that are gonna have a trillion parameters, right?
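To put those parameter counts in context, here is a rough back-of-the-envelope calculation (my own arithmetic, not a figure from the talk): just storing the weights of a model that size, at 16-bit precision, already runs to hundreds of gigabytes, which is why large-model training and inference are scale-out problems.

```python
# Rough memory footprint of model weights alone. This excludes activations,
# optimizer state, and so on, which add substantially more in practice.
def weight_gib(params, bytes_per_param=2):  # 2 bytes per weight = FP16/BF16
    return params * bytes_per_param / 2**30

print(f"175e9 params: {weight_gib(175e9):.0f} GiB")  # hundreds of GiB
print(f"1e12  params: {weight_gib(1e12):.0f} GiB")   # well over a TiB
```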

The point is that even with the growth in size and complexity of a lot of these models, we feel our portfolio can keep up. Again, we're not religious about it. If the best target architecture for the workload is CPUs, we'll do that. If it's a GPU, we'll do that. If it's the adaptive AIE, either integrated with a CPU or just in our Versal products, we have that to offer. I mentioned the third part: it's not just silicon. The third part is that we have to enable software developers, right? You heard Dan and David talk about it today. You know, we have very good, productive software stacks where you can get factors of performance, right?

They all allow you to work in the industry-standard ML frameworks like PyTorch and TensorFlow. They all have compilers, and they have libraries and models, and you compile. Of course, the only problem is, right, in a world of heterogeneous computing, the data center oftentimes has all of these targets and architectures. If you wanna take advantage of heterogeneity and figure out where things need to go, you have to work in different environments, you have to do partitioning on your own, and so forth. What we're gonna do with the AMD unified AI software stack is provide a unified inference front end.

Again, you can interface to all those industry-standard frameworks, and it will also allow people to use very similar development tools for things like quantization. I didn't talk about pruning, but pruning is when you eliminate parts of the network to improve performance and reduce its resource requirements. That's all gonna be common. Okay. Now, I'm focusing on inference because, again, all the heavy lifting on training, for the most part, you know, is still in the ROCm stack. This is what we'll do first, and then people can, in the same development environment, hit any one of these target architectures.
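For the curious, magnitude pruning, one common form of the idea, can be sketched like this. This is an illustrative sketch of the general technique, not AMD's actual tooling; the function name and the 90% sparsity target are my own choices.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping only the largest."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32))
Wp = magnitude_prune(W, sparsity=0.9)

# Roughly 10% of the weights survive; the layer shape is unchanged.
kept = np.count_nonzero(Wp) / W.size
```

The pruned layer does the same matrix multiply with far fewer nonzero terms, which is exactly the kind of sparsity a hardware engine can exploit.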

In the next generation, we're gonna unify even more of the middleware, where we'll have commonality in our ML graph compiler and much more commonality in our library APIs and interfaces. By the way, we're also definitely gonna roll out a lot more, you know, pre-optimized models across these targets. You can also see that this software stack will enable automatic code partitioning: once we've integrated AIE into our Ryzen products and AIE into our EPYC products, which we intend to do, it will do that partitioning for you. Okay. That's what I wanted to share today, but of course, we're gonna do much more than that. We'll be delivering application software development kits as well.

We will be driving an ecosystem, and we certainly will share more updates in the future on what we're gonna do in terms of our training stack. These opportunities I talked about are all meaningful in terms of the potential revenue we can get; you know, we think it's greater than $10 billion, and by far, most of that's probably gonna come from the pervasive AI we talked about. It's incredibly exciting. So stay tuned: you're gonna hear more about this, of course, over the coming months. To wrap up, you know, I'm personally really excited about accelerating AMD's growth by, you know, contributing to this incredible, phenomenal product portfolio, but also to our combined customer set.

You know, it's really meaningful for us that we can offer our customers more value and contribute to their businesses while we grow our company. All of us at AMD are really excited about what we can do with pervasive AI. We talked at the very start about "together we advance." You know, when we truly make AI pervasive in the cloud, at the edge, and in endpoints, running on AMD products and affecting people's lives, I think that's a really meaningful and exciting journey to be on. And this is, you know, the classic thing: it's not a sprint, it's a marathon. We're in it for the long haul, and I can't think of a better journey to be on myself. I hope you're as excited as I am. Thank you very much.

Moderator

We are now on a 15-minute break. Our Financial Analyst Day program will resume at 3:15 P.M. Thank you.


Please take your seats. Our program is about to resume. Now, please welcome AMD Senior Vice President and General Manager, Client Business, Saeid Moshkelani.

Saeid Moshkelani
Senior VP and General Manager of Client Business, AMD

Good afternoon. Did you guys have a good break?

All right. Thank you all for being here in person. It's nice to have an audience after two years of COVID. Earlier today, you heard about the amazing path that we are on as a company with our technologies and products, especially in data center and AI. I'm here to tell you that we also have an amazing and exciting new story to share in the client business. Since we last met in 2020, we have kept our promises. We have consistently gained share, and we have substantially increased our product mix in the richer parts of the market: commercial, gaming, and the premium segment of consumer. As a result, we have grown the business significantly. As I'm gonna show you today, we have the best products.

We have a very, very strong product roadmap and the right market opportunity to sustain business acceleration. Now, if you look at the PC market for a moment, we saw that at the onset of the pandemic, PCs played a critical role as the world shifted toward working from home, learning from home, and playing from home. As a result, we saw a huge market expansion, adding about $7 billion to the TAM. We continue to be excited about long-term market growth and potential, with the TAM reaching about $50 billion by 2025. We believe that PCs will continue to play an essential role with more connected consumers and a hybrid workforce.

As Lisa mentioned earlier, after two years of tremendous growth, we see pockets of softness in the market, but we are well-positioned with our products in the focused segments of the market to continue our business growth. Our strategy for sustaining growth in the client business has remained pretty much consistent since the birth of Ryzen. We wanted to build the best processors with leadership performance, and as Mark mentioned earlier, we are on a great path for that. We wanted to deliver superior experiences to drive consumer preference for our products, and we wanted to truly partner with OEMs and ODMs to develop the most exciting platforms in the market and expand our portfolio and market reach. We are confident that our focus on these fundamentals will continue to drive long-term business acceleration.

If you ask how we did: for the past three years, we have been focused on these segments. Again, I keep mentioning them: commercial, gaming, and the premium segments of the market. These have been the most exciting segments of the market, and they continue to be strong. We have consistently executed on a product roadmap, making sure that we keep leadership in these segments. As a result, customer preference for our products continues to be very, very strong. The results speak for themselves; the share growth has been remarkable. We have done well, but we have just started this journey, and we are not slowing down. Now, the role of PCs in our daily lives is continuously evolving, and our goal is to stay ahead of these evolutions and create inflection points of our own.

Our mobile roadmap starts with an intense focus on end user experiences. Whether it's just pure performance, whether it's silent computing, whether it's battery life, whether it's responsiveness of the system, whether it's audio-video quality or connectivity, we want to bring the best to the end user. This approach has been paying off. From the hybrid worker to the on-the-go gamer, to the constant creator, to the modern consumer, Ryzen processors continue to deliver the best notebook experiences in today's connected lifestyle, giving us that competitive edge that we need as we grow the business. Now, if you look at our latest mobile processor, you can see the impact of this end user focus. We launched the Ryzen 6000 Series mobile processors earlier this year at CES, featuring an updated Zen 3 processor with the latest RDNA 2 graphics with ray tracing capability, built on a leading-edge TSMC 6-nanometer process.

What we wanted to do is optimize all of these technologies, not to be just good in one direction, but to build the best all-around mobile processor the market has ever seen. When you look at the generational gains, they are impressive: 1.3x faster CPU performance and a massive 2x graphics leap, which also makes the Ryzen 6000 Series the best gaming APUs as well. We combined all of these technologies with advanced power management techniques to give up to 29 hours of battery life. To bring the best connectivity, we paired our processors with the latest technologies, such as Wi-Fi 6, 5G, Bluetooth 5.2, and USB4. You can find all of these technologies in very sleek, sophisticated, ultra-thin designs.

The reviews so far for the product have been amazing. There are many Ryzen 6000 Series processors in the market today, with many more to come in the very near future. Now, look at how all of this technology performs in a commercial environment, where productivity, collaboration, security, and manageability are such key attributes. You can see over here that we offer 17% more performance in a very typical commercial workload: running a Microsoft Office application while at the same time being on a video conference, sharing or presenting your data. When you look at battery life, we offer 45% longer battery life during a video conference call. To further enhance our products for professionals, we provide multiple layers of security to defend against cyber threats. We also integrated wireless manageability for ease of deployment by IT professionals.

This makes the Ryzen 6000 Series truly the best solution for any enterprise. If you look at our momentum in the market, with each generation of technology, we have raised the bar for ourselves even higher. This has enabled us to develop lasting partnerships with OEMs and ODMs, expanding our portfolio and market reach. We have been co-developing and co-innovating with OEMs, bringing in Microsoft, bringing in Zoom, bringing in other ISVs to develop the best PCs in the market. In 2022, we are launching a record number of systems, the majority in the premium segment. These systems are better, faster, more power-efficient, thinner, and lighter than ever before. You might ask, "What is next?" Right? You heard from Mark and David today. Innovation is at the heart of AMD's journey in developing the best products.

Whether it's through our own development or through our ecosystem partnerships, we are building technology roadmaps that push the boundaries of what is possible in PCs: AI acceleration, which Victor talked about, heterogeneous architectures, image signal processing, and advanced packaging, whether it's 3D stacking or chiplets, to extreme power management. We are working to bring the next generation of advancement to today's workloads, whether it's collaboration, creativity, gaming, content creation, and more. These developments ensure that we stay ahead of the technology curve and further solidify our position as a technology leader in the market. When you look at our mobile roadmap, we do have a very strong roadmap, and our commitment to notebooks continues. We talked about the Ryzen 6000 Series already, but now look at 2023 with Phoenix Point.

Again, you heard Mark, David, and Victor talk in detail about Zen 4, about how great RDNA 3 is, and how fantastic the AI engine is. We are bringing Zen 4 to notebooks, taking CPU performance to the next level. With RDNA 3, we are bringing much more graphics performance with much better power efficiency. We are integrating the Xilinx AI engine to enable a range of advanced AI experiences, not in the data center, but in your PCs. We implemented all of these technologies on the leading-edge TSMC 4-nanometer process. With our next product, Strix Point, we push capability and performance even further with Zen 5, RDNA 3 Plus, and the next generation of the AI engine. We will continue to strive to power the most exciting and premium notebooks in the market.

I hope this gave you a little bit of insight into what we are doing in notebooks. Now let's talk about desktop. We carry the same end user focus here to drive our development decisions. Desktop gamers and enthusiasts are among the most sophisticated and tech-savvy PC customers. We design our products to meet the demanding workloads of this audience. Let's look at some of these technologies. Two weeks ago at Computex, we announced the Ryzen 7000 Series desktop processors, featuring new technologies and a new socket infrastructure to bring advanced capabilities to gamers and to enthusiasts. Zen 4 was a big step over here, and Mark talked about Zen 4 in detail.

With clock speeds of more than 5.5 GHz and an 8% IPC uplift, it's gonna deliver significant gaming and CPU performance gains over our current generation. With TSMC 5-nanometer, and again with advanced power management techniques, we are offering more than 25% better performance per watt over our current products. The all-new AM5 socket supports DDR5 and PCIe Gen 5, enabling the fastest data access from memory and storage. These processors will be available in market in the fall of this year. Now let's talk about workstation. This is a new market for us. Threadrippers have continued to excite enthusiasts since their inception in 2017. What we saw in the market was that system integrators were using consumer-grade Threadrippers to build high-end workstation products for their enterprise customers in M&E, oil and gas, 3D design, and so forth.

We realized this was unserved demand, and last year, in partnership with Lenovo, we launched Threadripper PRO. Call it commercial grade for professionals. The market response has been amazing. The Lenovo P620 workstation, the first workstation with Threadripper, was the best-selling workstation in its class in 2021, and for a very, very good reason. You can see here we are comparing one Threadripper processor to two Xeon processors, with massive leadership in every metric. Whether it's single-thread, multi-thread, power efficiency, or performance-to-power ratio, it wins in every category. Now, we are at the beginning of our journey in workstation, but we are quickly expanding our portfolio with Lenovo, with Dell, and with other OEMs. The expansion into workstation is another testament to our leadership in high-performance computing, and to AMD as the go-to partner when performance really, really matters.

When you look at our desktop processor roadmap, you can see that we have delivered a consistent cadence of high-performing desktop processors with each generation. At CES, we launched the Ryzen 5000 Series with 3D V-Cache technology. Mark talked earlier about 3D V-Cache and what it can do. This was the first PC processor to deploy a 3D cache architecture. This technology significantly boosts gaming performance by effectively reducing memory latency. We wanted to bring the best to our gamers. Today, the Ryzen 5800X3D is the best gaming processor in the market, bar none. We are proud of what V-Cache technology is doing for us, and we're gonna feature this in the Ryzen 7000 Series later this year and in future generations.

We continue our leadership with the next generation product, the Granite Ridge family, with Zen 5 in the next advanced technology node. Our goal here is to excite gamers and enthusiasts with each and every processor generation. In closing, AMD's client story is really, really exciting. We will continue on our path with relentless focus on execution of our leadership roadmap. We will continue to bring incredible user experiences. We will continue to work with OEMs and ODMs to make sure that we build the best and the most exciting platforms in the market. The PC market represents a huge opportunity for AMD, and we believe that we are well-positioned with our products across the commercial, gaming, and premium consumer segments to continue our business acceleration. Thank you very much for your time.

I'd like to introduce Executive Vice President and General Manager of AMD's Gaming Business.

Rick Bergman
EVP of Computing and Graphics, AMD

Good afternoon. I'm Rick Bergman. Just before the tail end of the break, John Taylor, our Chief Marketing Officer, said, "Rick, we've saved the fun products for last." I guess I'm in charge of the fun products, which is okay, 'cause we've had a couple amazing years just since the last Financial Analyst Day. If you think about it, we collaborated with Microsoft and Sony to launch the current generation of game consoles to much success. We also introduced a whole new family of graphics processors that extended our TAM to new areas. Then third, we introduced the idea of the AMD Advantage program, which combines the best CPUs, GPUs, and software to offer incredible differentiation for our customers.

We're all about taking this gaming leadership that we have, pushing it to new barriers and beyond the boundaries that are out there, and elevating that entire experience. Over the last few hours, you've heard a lot about high-performance computing, and we certainly at AMD know a lot about it. Well, graphics is no different: just insatiable demand for graphics performance. What's driving this insatiable demand? Well, of course it comes down, like with everything, to the workloads, whether it's gaming, high resolutions, entertainment, content creation, and now machine learning, all driving up that curve to try to hit that photorealistic, cinematic quality that everybody wants in their gaming environment. Just to give you one example: the Radeon HD 3870 in 2007 was 400 gigaflops. Our new Radeon RX 6900 XT: 23,000 gigaflops.
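Those two data points imply a steep compounding curve. A quick back-of-the-envelope check (illustrative arithmetic only; the launch years below are assumptions inferred from the product names, not stated in the talk):

```python
# Back out the implied annual growth of GPU throughput from the two
# figures quoted above. Assumed launch years: Radeon HD 3870 in 2007
# (as stated) and Radeon RX 6900 XT in 2020.
hd3870_gflops = 400
rx6900xt_gflops = 23_000
years = 2020 - 2007

ratio = rx6900xt_gflops / hd3870_gflops   # overall speedup factor
cagr = ratio ** (1 / years) - 1           # implied compound annual growth

print(f"{ratio:.1f}x over {years} years, roughly {cagr:.0%} per year")
```

That works out to a 57.5x jump, compounding in the mid-to-high 30s percent per year, which is the "very consistent curve" being described.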

The great part: this has been a very consistent curve, but now we have something called the Metaverse that's gonna make that curve even steeper. To solve that insatiable demand, marquee brands come to AMD. If they need high-performance graphics, AMD is the answer, whether it's a high-end, liquid-cooled, enthusiast desktop rig for a gamer all the way down to a mobile phone; again, the answer is AMD. Today I'm gonna walk you through our current markets as well as some new and exciting markets that we're gonna move into. Let's start with gaming and GPUs. Two years ago, when I was in front of this group, I talked about 2.5 billion gamers around the globe. Now that number is over three billion gamers, and the majority of them game on either PCs or game consoles.

Gaming is the fastest growing form of digital entertainment. In fact, in 2022, it's projected to be larger than TV and cable combined. It's important to note that this isn't a pandemic-fueled trend. This was happening well before the pandemic, and it's gonna continue to go on. Gaming is here to stay. It's a big market TAM. As you saw earlier, $37 billion. As David explained, our RDNA technology is foundational to our success in this marketplace. We're making a huge investment there. We're building out a Radeon family of products to have complete solutions. These investments have actually turned into a really good business. You saw the curve in Lisa's presentation up to now, and we're gonna continue on that growth going forward.

We began RDNA with the concept of generation-over-generation improvement, where we expand the TAM each generation, focusing on the premium segments. We're also gonna focus on performance-per-watt leadership as well, and that's allowed us to enter the notebook market in a major way. Then we're gonna also make sure our GPUs marry closely with the CPU families that Saeid just described. We've been the innovators in delivering A+ solutions, and we'll continue that forward to make sure that Radeon and Ryzen play best together. With our recent Radeon RX 6000 introduction, as I mentioned, we've been able to expand the number of solutions we have. It's over 22. That's about double what we offered in the previous generation.

That in turn has allowed our system integrator, AIB board partner, and OEM partners to offer over 300 different solutions in the marketplace. I mentioned the importance of notebook designs. We've tripled, just year-over-year, the number of notebook platforms that we have in the marketplace. As you heard from David, yes, software is extremely important in the graphics business, and we focus on quality and we focus on performance. One example of that is FidelityFX Super Resolution, or FSR. It's an upscaling technology that allows you to use your current-generation hardware and get higher performance. Up to now, if gamers wanted higher performance with their current hardware, they'd have to turn off features or game at a lower resolution.

What we can do with FSR is use that lower resolution but upscale the image and get close to near-native quality at the higher resolution, with up to a 2.5x bump in some cases. The best part about FSR: it's open source from AMD, and it's cross-platform. Whether it's PCs, game consoles, or mobile, they can all benefit from FSR. Just last month, we announced FSR 2.0, which will build on the 100+ titles that we already have with FSR today. A lot of innovations with our current Radeon RX 6000 family, both on hardware and software. The good news is we're just around the corner from our next generation.
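The core render-low-then-upscale idea can be sketched in a few lines. To be clear, this is not AMD's FSR algorithm (FSR 1.0 uses an edge-adaptive, Lanczos-based spatial pass with sharpening, and FSR 2.0 adds temporal accumulation); it is only a minimal bilinear-upscaling illustration of the concept:

```python
def bilinear_upscale(img, out_w, out_h):
    """Upscale a 2D grid of grayscale values by bilinear interpolation.

    Toy illustration of spatial upscaling: render cheaply at low
    resolution, then reconstruct a larger image. Not AMD's FSR.
    """
    in_h, in_w = len(img), len(img[0])
    out = []
    for y in range(out_h):
        # Map output pixel coordinates back into source coordinates.
        sy = y * (in_h - 1) / max(out_h - 1, 1)
        y0 = int(sy)
        y1 = min(y0 + 1, in_h - 1)
        fy = sy - y0
        row = []
        for x in range(out_w):
            sx = x * (in_w - 1) / max(out_w - 1, 1)
            x0 = int(sx)
            x1 = min(x0 + 1, in_w - 1)
            fx = sx - x0
            # Blend the four nearest source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# Render "cheap" at 2x2, then reconstruct at 4x4.
low_res = [[0.0, 1.0],
           [1.0, 0.0]]
high_res = bilinear_upscale(low_res, 4, 4)
```

A production upscaler layers edge detection and sharpening on top of this so the result approaches native quality instead of looking soft.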

Again, as David demonstrated, we're gonna focus on performance per watt and also take advantage of the advanced packaging technologies that we have around the company. We're more than just a GPU vendor, as you've seen today. We're gonna look at it from a system level and extract system-level efficiencies as well. It's coming later this year. Now you've seen our graphics roadmap, but we know what it takes to win in our businesses: consistent execution on a leadership roadmap. Again, two years ago, I was here and said, "We're gonna have Navi 3x, our family, in two years," and I can tell you we are right on track to that commitment. We know we can't rest, so Navi 4x is well underway. What is our bottom line?

We're gonna have the world's best GPU roadmap, bar none. You've heard from Saeid, and I think we can all agree that AMD has had the best CPUs for several years, and that's great. Now we also have the world's best GPUs. It's great that we can have both of those together at the same time: the world's best CPUs and GPUs. Now what we're seeing is systems from our customers that of course use Ryzen and Radeon together. In fact, as you can see, we've quadrupled the number of systems that leverage that capability since 2020. All major OEMs now offer Ryzen and Radeon together. We know it's about a lot more than just having a GPU and a CPU. It's very important how they actually work together to make a good system a great system. A couple of examples.

In 2020, we introduced SmartShift, which enables our OEM partners to offer thinner and higher-performance notebooks by dynamically shifting power between the GPU and CPU, depending on the workload. We also innovated in the industry by offering AMD Smart Access Memory, which allows our Ryzen CPUs to utilize the full capabilities of Radeon graphics memory, giving a boost of as much as 30% in some games. In a multi-year collaboration with our OEM notebook partners, we've come up with something called AMD Advantage. That's taking the best GPUs, CPUs, and software, as well as the system components, with the goal that when you, as an end user, go to an e-tail site or into a retailer and see AMD Advantage, you know you're getting the absolute best gaming experience that you can possibly have.
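To make the SmartShift idea concrete, here is a toy power-budget allocator. This is purely illustrative: the real SmartShift lives in firmware and uses on-die telemetry, and the function name, wattages, and proportional heuristic below are invented for the sketch.

```python
def shift_power(total_watts, cpu_util, gpu_util, floor=10.0):
    """Split a shared power budget between CPU and GPU by relative demand.

    Illustrative only, not AMD's SmartShift algorithm: allocate the
    shareable budget proportionally to utilization, keeping a minimum
    floor for each unit so neither is starved.
    """
    demand = cpu_util + gpu_util
    if demand == 0:
        # Idle: split the budget evenly.
        return total_watts / 2, total_watts / 2
    shareable = total_watts - 2 * floor
    cpu_w = floor + shareable * cpu_util / demand
    gpu_w = floor + shareable * gpu_util / demand
    return cpu_w, gpu_w

# A GPU-bound game: most of the shared 65 W budget flows to the GPU.
cpu_w, gpu_w = shift_power(65.0, cpu_util=0.2, gpu_util=0.9)
```

In a GPU-bound workload the allocator steers most of the shared budget to the GPU, which is the behavior SmartShift is after; a CPU-bound compile would pull it the other way.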

Now, shifting gears a little bit over to the game console side. We know something about combining CPUs and GPUs. Actually, in this market, we've been doing it for years for our partners, just like you heard on the AMD Advantage systems. The market's been growing nicely. Then in 2017, it got a nice bump as handheld gaming took off again. This current generation, we're seeing tremendous demand for our game consoles. What is driving that demand? Well, I'd say it's two major factors. First, of course, is demographics: more female gamers, additional age categories gaming, new geographies, just more gamers in general. Then secondly, the level of engagement with gaming. Multiplayer, free-to-play, and online titles are building gaming communities that keep gamers totally engaged with their social networks.

All this adds up to just a great business for AMD today and continuing into the future. This is a great market, as I just talked about. I'm really proud that we're clearly the market leader, clearly number one in this particular marketplace. What has gotten us there? Well, we work with our customers on their toughest problems and where they wanna take their platforms. They wanted 4K immersive experiences. They wanted reduced load times and backward compatibility, so gamers could use their titles from previous generations. We work closely with them to solve those problems, and that's allowing us to have higher silicon content generation over generation. Now, with Microsoft Xbox and Sony PlayStation, we've won back-to-back generations. We've also added another exciting platform, the Valve Steam Deck.

That's an all-in-one portable PC. For those that don't know Valve, of course, they're responsible for the Steam Store and have over 100 million active users per month. It's a big market to go after. Okay, I've talked about those two markets. They're sizable, and they have good growth rates. How do we grow even faster? Well, a couple of areas. The Metaverse. Everybody's heard about the Metaverse. Major companies are moving into it. Of course, the most notable is Meta, formerly Facebook. The consistent request is a highly visual, highly immersive environment. With our IP portfolio, it just puts us in an incredible position. Then likewise, we're transforming our semi-custom business to take that same AMD IP and win additional customers with new custom solutions.

The Metaverse is truly an immense opportunity for the entire technology industry, including AMD. As I said, we're very well positioned for that, with AMD-powered core data centers and edge devices to help provide that responsive environment. There are 5G and 6G broadband networks, again, reducing latencies from maybe 10 milliseconds to even as low as 5 milliseconds. This is a great opportunity for AMD to increase edge computing. Then on the client side, of course, we have Ryzen processors and our Radeon GPUs to power either traditional 2D devices that will access the Metaverse or virtual reality headsets as well. Again, the infrastructure of the Metaverse is needed now, and it's great that AMD is so well-positioned. Of course, the Metaverse starts with content creation. AMD is the de facto standard for 3D content creation studios.

Just one example: Saeid talked quite a bit about Threadripper, but with Unreal, Epic's Unreal game engine, which is being utilized for a lot of those virtual worlds, you can compile with 60%-100% higher performance on a Threadripper processor. Likewise, with our custom silicon EPYC processors and Radeon PRO graphics, we're proud that AMD is helping power Microsoft's Xbox cloud gaming infrastructure, a key technological pillar that's enabling 10 million people around the globe to play console games on PCs, phones, and tablets, as well as Xbox consoles. All of these are examples of our deep involvement in the Metaverse today, as well as being at the forefront of where the Metaverse is going. Building off the strength of our console base, we plan to increase the number of design opportunities with our custom silicon.

We are entering markets that are new for us, such as data center, telco, automotive, VR, and AR. This new capability is potentially a $30 billion marketplace for us. It's gonna bring new customers and new challenges. Keep in mind, we have a track record of shipping over 200 million highly complex SoC solutions. Over the course of the last few hours, you've heard a whole bunch about our RDNA, our CDNA, XDNA, of course our Zen cores, and then the Infinity Fabric, along with our leadership in packaging technologies. That provides this great canvas, so when we engage with these customers, they can think about how they can put together their own solutions, leveraging their own IP or their secret sauce, or maybe using third-party IP, where we can jointly create high-performance SoCs, or maybe plug into that custom-ready chiplet platform that Mark talked about earlier today.

To wrap up, as I said at the beginning of the presentation, there's this insatiable demand for graphics. It's fueled by billions of gamers around the world. I mean, gaming is hot. The great news is we have leadership products today. Right around the corner, as I said, we've got our next generation coming. We'll expand our market opportunities into exciting new areas like the Metaverse, leveraging our custom capabilities into new markets. It's really an exciting time for our business. Thank you very much. Okay. It's my pleasure to introduce our Chief Financial Officer, Devinder Kumar.

Devinder Kumar
CFO, AMD

Thank you, Rick. It's a pleasure to be here. Just two years ago, we had our financial analyst day, and a lot has changed in the company. A lot has changed. You know, with everything you have heard from my colleagues, I'll show you, based on what they have done, what we have achieved financially. But more importantly, I think what you're here to see is, with everything you have heard about technology and the roadmap of products, what we can deliver over the next few years. We've had an incredible journey at AMD over the last few years. I'll share that with you, but more importantly, how we advance our financial journey going forward. If you look at our journey the last few years, there's been a lot said and a lot done at AMD.

We have scaled, transformed with the acquisition of Xilinx, and delivered outstanding financial performance. As a CFO, it gives me pleasure to be able to say outstanding financial performance, and we have built a strong foundation for continued success and future growth. If you look at it from the viewpoint of priorities, these are exactly the priorities I laid out when we met two years ago: strong revenue growth, expanding gross margin, increasing operating margin and profitability, and significant cash generation. When we met a couple of years ago, we were just coming off of 2019, where we generated a very small amount of cash, and many of you in the audience would ask me, "When is AMD gonna start generating cash with all the technology that you have within the company?" What did we do? Here's the revenue.

Lisa shared the CAGR growth earlier. We had less than $7 billion of revenue coming off of 2019 when we met in March 2020. Since that time, we've added, in two short years, more than $10 billion of revenue. More than $10 billion of revenue in two years. We have also had growth in all of our businesses. With the diversified portfolio and the products that we have, having growth in all the businesses is excellent, building to the $16.4 billion of revenue that we delivered in 2021, with the top line growing as it has. The other important thing is the diversification of the revenue. We had about $1 billion of revenue in 2019 from data center and embedded. $1 billion, about 15% of revenue.

Lisa and I have said in many earnings calls that we wanna take that percentage of revenue from data center and embedded up. In 2021, at $4 billion, data center and embedded revenue had quadrupled in that timeframe, in two short years. 25% of revenue in 2021 came from data center and embedded, which is high margin, high growth, and very stable from that standpoint, and we feel very good about that. If you look at the metrics of the financials, gross margin has grown a lot. The trajectory is very good: 43%, 45%, 48%. Operating margin more than doubled in that two-year timeframe. Earnings per share quadrupled since 2019.

All in all, that's why I said earlier: outstanding financial performance in the last couple of years. Then cash. I talked about it earlier. When we met in 2020, coming off of 2019, there was only about $300 million of cash generation; since then, significant cash flow generation. By the way, Lisa talked about it earlier: we are investing in the future. In 2021, free cash flow was reduced by about $1 billion because we invested in supply and capacity to get to the 2022 and 2023 growth; we wanna invest in our growth for the future. Free cash flow and free cash flow margin are impacted by that, and we continue to make investments for the future growth that you've heard my colleagues talk about.

The strong balance sheet that we have at AMD allows us to do that, and we have, in that regard, a very strong financial foundation for the company for the next few years. Now, talk about Xilinx. It's truly a transformational acquisition: the markets, the products, the technology, the customers that we get with the business that Xilinx brings to us. More importantly, from my standpoint, when we look at it from a financial standpoint: margin expansion, and EPS and free cash flow accretion for 2022, just in the first year of the acquisition. We closed it only a couple of months ago, and here we are with accretion across the board from what Xilinx brings to the equation from a financial standpoint. Excellent from that standpoint. Now, look at the revenue mix.

With Xilinx, if you look at the right, $20.1 billion is the pro forma revenue if you add AMD and Xilinx together for 2021. That sets the base for what we're gonna talk about for the future. As I said earlier, of the $16+ billion, we had $4 billion of revenue, 25%, in data center and embedded. You add the two companies together in 2021 on a pro forma basis: 40%, $8 billion, double the $4 billion, coming from data center and embedded. That is something that drives the diversity of revenue that you see in what we can call the new AMD as we look forward to 2022 and beyond.

That's the base that I will use in talking about the long-term model as we go forward. Advancing our journey: we've had significant success over the last few years from strong execution. The company has grown. The company has transformed. We have had the acquisition of Xilinx, and right after that, the acquisition of Pensando. We are a very different company than we were just a few years ago. Now let me share with you how we advance our financial journey on a go-forward basis. The journey starts with the expanding market opportunities that you have seen my colleagues talk about. The market TAM has grown from about $80 billion when we met a couple of years ago to a $300 billion TAM. It's just incredible. We have the products for significant growth opportunities in each of these markets, in each of these markets.

It gives us the diversification of having multiple levers for growth when we look at the long-term model. The long-term financial priorities, on the left: revenue growth and diversification. Diversification is really important with the markets that we are now attacking. Continuing to expand the margins and profitability of the company. Being very disciplined in the capital allocation approach. Finally, having significant shareholder returns, but disciplined from a capital allocation standpoint. Here's the long-term model. Revenue growth, as I said earlier, is based on the 2021 pro forma of AMD and Xilinx combined. From there, we expect to grow approximately 20% on a CAGR basis over the next three to four years. It comes really from the leadership products and market share gains.
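As a sanity check on what approximately 20% compounded on the 2021 pro forma base implies (illustrative arithmetic only, not company guidance):

```python
# Project the 2021 pro forma base forward at the stated ~20% CAGR to
# see what "three to four years" of compounding implies.
base_2021 = 20.1   # $B, AMD + Xilinx pro forma, as stated in the talk
cagr = 0.20

# Revenue implied in each of the next four years, in $B.
projection = {2021 + n: round(base_2021 * (1 + cagr) ** n, 1)
              for n in range(1, 5)}
print(projection)
```

Compounding at that rate roughly doubles the top line by the fourth year, which is why the revenue CAGR is the headline number of the model.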

It also comes from the new market opportunities that we have, that you heard about from some of my colleagues with the new businesses that are growing. Revenue growth from leadership products, market share gains, and then on top of that, continuing to expand the gross margin trajectory of the company from a richer mix of revenue, new data center opportunities that I'll talk about, and cost improvements. We also wanna invest in the business for growth, for R&D, and for the product roadmap that you've heard about today. That's the operating expenses at about 23%-24%, investing for growth, R&D, and the roadmaps.

The operating margin gets to the mid-30s with expanding margins and profitability. Then there's the tax rate, and the free cash flow margin I showed you earlier, at about 20%, getting to greater than 25%, generating a significant amount of cash, not just so that we can invest in the business and the roadmaps, but also to return capital to the shareholders. Then if you look at the revenue mix, we talked about the $8 billion earlier. In 2021 pro forma, $8 billion of revenue, 40% of the $20.1 billion, was data center and embedded. Data center and embedded, growing faster than the other segments, allows us to get to greater than 50% of revenue from a mix standpoint, which has been a goal for a long, long time.

All segments of the business grow, but data center and embedded grows faster than the rest of the business, and therefore takes us to the tipping point of greater than 50% of revenue coming from data center and embedded from a mix standpoint. That is something that sets the foundation for AMD as we look forward over the next few years. Then the margin drivers. This is our path to greater than 57% gross margin. If you look at the far left, that's exactly where AMD on a standalone basis ended 2021: 48% gross margin. That is an improvement from the last couple of years, when we had just tipped over 40%. We got to 48% in 2021. Then you add Xilinx to the equation.

On a pro forma basis for 2021, with the $20+ billion of revenue I talked about, it is 52% margin. You get beyond the 50% level just by combining AMD and Xilinx on a pro forma basis in 2021. Where does the growth come from? Servers, PCs, and gaming. You heard about some of the premium products that we are targeting from a company standpoint. That drives margin improvement. Those are, as I would call them, the traditional AMD businesses. Then we have the new businesses that I just talked about: embedded and new data center. New data center is Xilinx, is Pensando, is some of the networking stuff that you heard about, which is higher margin, and we capture that value. The last thing is scale.

When you have scale, you can reduce costs faster, the mix gets better, and all of those three things taken together get us to greater than 57% gross margin in the long-term model in the next three to four years. The diversification and scale of the company allow us to continue on the margin trajectory that we have been on for many, many years, and now we get to the greater-than-57% level in gross margin. Then let me spend a minute on Xilinx synergies. When we announced the Xilinx transaction, we talked about approximately $300 million of synergies on an annualized basis out in time, 12 to 18 months after the transaction closes. As we got into integration execution, where I could actually do the work to look at all the details, we found additional opportunities for synergies.
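The step from 48% standalone to 52% pro forma can be sanity-checked with blended-margin arithmetic. Assuming Xilinx's 2021 revenue is simply the $20.1 billion pro forma total minus AMD's $16.4 billion (a rough simplification of the rounded figures quoted in this talk):

```python
# Back out the implied Xilinx gross margin from the blended pro forma
# numbers. Rounded figures from the talk, illustrative only.
amd_rev, amd_gm = 16.4, 0.48             # $B and gross margin, AMD 2021
combined_rev, combined_gm = 20.1, 0.52   # $B and gross margin, pro forma

xlnx_rev = combined_rev - amd_rev        # implied Xilinx revenue, ~$3.7B
xlnx_gm = (combined_rev * combined_gm - amd_rev * amd_gm) / xlnx_rev

print(f"implied Xilinx gross margin: {xlnx_gm:.0%}")
```

The implied figure lands near 70%, consistent with FPGAs being the higher-margin business that pulls the blended margin up past 50%.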

Now it's closer to greater than $400 million of cost synergies. It comes from supply chain, from infrastructure, and from engineering. We're projecting greater than $400 million of cost synergies as we put the two companies together and execute on the full integration plan over the next 12 to 18 months. As Victor Peng talked about earlier, there was a lot of planning going on, but now that we are able to go to customers together, we have identified additional revenue synergies. The biggest area, of course, is AI, which Victor talked about. But you also have opportunities in data center, in communications, in automotive, and in embedded. Large markets, same customers. You go in with a portfolio of products that is synergistic, you get the business, and we've sized that at over $10 billion over the next five years.

It happens a little bit later because it takes time to put some of the products together, as all of you know. It's about $10 billion over the next five years, and not a lot of it is baked into the current long-term model because of the long-term nature of the synergies, especially on the revenue side. Then the long-term capital allocation priorities. The number one priority, bar none, as Saeid said, is to invest in the business: in technology, in products, in go-to-market, infrastructure, and talent. We continue to hire. We wanna invest in the infrastructure. We will be strategic about M&A. We just did a couple, and many people ask whether we wanna do more. If it makes sense from a strategic technology standpoint, acqui-hires, for example, for talent acquisition, we will do it. Then shareholder returns.

We will return greater than 40% of the free cash flow that I just talked about to shareholders. From a balance sheet standpoint, we want to have a significant net cash position. We are committed to strong investment-grade ratings. The capital expenditures in our fabless model are very small, about 2%-3% of revenue. The fabless model gives us the leverage in everything I talk about from a long-term target model standpoint. Then the 2022 financial outlook. You know, there's a lot going on in the market, even from when we gave guidance in the Q1 timeframe. You have supply chain, you have macro, and some consumer-facing businesses might have weakened a little bit. We are reiterating our guidance for 2022.

We expect to grow approximately 60% from a revenue standpoint in 2022. It really comes from the fact that we have a diversified business model, diversified revenue, and many levers in terms of where we can get our revenue from. That's why we are confident in reiterating the guidance of approximately 60% revenue growth year-on-year. Xilinx helps, but even the traditional AMD businesses are growing year-on-year. We are very focused on delivering another strong year in 2022 after what we did in 2020 and 2021. On the Q1 earnings call, we talked about the new financial reporting segments.

I know many of you have been asking about this for a number of years, but this is the way we view our business. Data center comprises server CPUs, data center GPUs, and the data-center-related portion of Xilinx in FPGAs and adaptive SoCs. Embedded is Xilinx excluding the data center portion, plus what we brought over from the AMD embedded business that you know about. Client is desktop and notebook processors, really the PC business. Finally, gaming is GPUs and the game console SoCs that Rick just talked about.

When we report the results for Q2, we will provide historical annual 2020 and quarterly 2021 revenue and operating income so that you can see what it looked like in the past and then track the results on a go-forward basis. Of course, the Q2 results will be in line with these reporting segments. If I look at it from an overall standpoint, we have delivered what we said in each of our Financial Analyst Days in 2017 and 2020. Really an outstanding financial trajectory. We are focused on continuing our momentum: strong revenue growth, starting with the top line, and margin expansion, getting value for the products we deliver to our customers. We plan to be very disciplined from a capital allocation standpoint.

Let me end with a personal perspective. You know, I've been with this company almost 38 years. I started way back in 1984. We are a completely different company. Our markets are the biggest they have ever been. Our products and roadmaps are the best we've ever had. We've executed consistently for many years now. From my standpoint, we have the strongest financial model in the history of the company. Thank you very much. With that, let me hand it back to our CEO, Lisa Su.

Lisa Su
Chair and CEO, AMD

Okay. Was that a lot or what? Well, first of all, thank you for spending the last four hours with us. I hope you can see the depth and breadth of our technology and our team and our talent, but also the incredible opportunity we have. Let me just take a few minutes to kind of summarize and bring together some key takeaways. What I'd like you to take away from this is high performance and adaptive computing is an incredible market. We are, you know, actually privileged to be in a place where we can affect so many people's lives with the technology that we build, but we do it in a very purposeful way. It's not one-hit wonders. It's not sometimes we do it and sometimes we change our mind. It's about consistent strategy. Consistent execution.

My friend Mark likes to say relentless execution, 'cause every single day it's about how do we build the best CPU roadmaps, the best GPU roadmaps, the best data center solutions. You heard Dan say EPYC is on fire. He's absolutely right, EPYC is on fire. But it's not on fire by mistake. It's not on fire by luck. It's on fire because of very deliberate execution over the last 5-plus years. What we see is, as exciting as the last five years have been, the next five years are only gonna be more exciting, whether you're talking about now our full data center portfolio that you saw from Forrest, our AI opportunity that you saw from Victor, as well as our PC and gaming opportunities, which are large, great markets where we've done extremely well.

If you take anything away from today, I hope you take away that we love this business, we love where we are, but we are gonna work hard every single day to meet and exceed our commitments to deliver this exciting growth as well as the financial model that Devinder is so excited about. With that, thank you again. I can say on a personal note, for me, I've been CEO for now almost eight years, and I've never been more excited than I am today about being at AMD. It is truly my honor to be part of this team. With that, thank you again. Thanks for joining us, and we are about to go to a Q&A session. Is that right, Ruth?

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Yep.

Lisa Su
Chair and CEO, AMD

All right. Fantastic. We're gonna take a minute and just bring up some chairs. Guys, we have a little bit more work to do before the cocktail hour. Come on up.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great.

Lisa Su
Chair and CEO, AMD

Thank you.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

All right.

Lisa Su
Chair and CEO, AMD

Can you just bring it there?

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Okay. We're gonna take questions in the room. We have some mic runners spread around. Ambrish, we'll start here. Diana, thank you.

Ambrish Srivastava
Senior Research Analyst, BMO

Thank you. Ambrish from BMO. Thank you very much, Lisa and everybody, for a very informative presentation. Devinder, I had a question for you on end markets, and good to see you're reiterating your guide. Based on the Q1 earnings call, you had expected the PC market to be down 9%, and based on at least the checks we are doing, it could be down mid double-digit. I just wanted to understand how investors should walk away from this with confidence: where is the flex? I'm assuming it's servers doing better, or the gaming business, that would offset if PCs were to be down mid double-digit? That was my first question. I had one for Victor after that.

Devinder Kumar
CFO, AMD

I think if you look at it, as I said earlier, you know, we have a diverse portfolio from a business standpoint. Yes, PC is showing a little bit of weakness, but even in PCs when you talk about units, it is kind of the lower end. Some of the products that Saeid talked about, you know, we play in the premium, in the high end, and that definitely helps from a revenue standpoint, even though the units might be down from that standpoint. All right? When you look at the diverse portfolio, definitely seeing very good strength in the data center space, and look at the overall portfolio, even with Victor's business, that helps us deliver what we talked about. Lisa, you wanna add anything to that?

Lisa Su
Chair and CEO, AMD

Yeah, no, I would agree. I think the way we look at the year, I think there are puts and takes, and we have seen in general, we took a more conservative outlook on the PC market to begin with. With that, we also have some businesses that have very strong demand, including data center, our console business, as well as actually the Xilinx portfolio, very strong demand. I think we have enough puts and takes in the business.

Ambrish Srivastava
Senior Research Analyst, BMO

Got it. Thank you. My question for Victor, and it goes back to the strategy; there are many pieces to the acquisition of Xilinx. Victor and the Xilinx team built a great business. Embedded hit the ball out of the park. In data center, if I remember correctly, in the last four or five quarters it barely broke 10%. I think it broke 10% in one out of the last four reported quarters. The question is-

Victor Peng
President of Adaptive and Embedded Computing Group, AMD

When you say 10%, do you mean?

Ambrish Srivastava
Senior Research Analyst, BMO

Other revenues.

Victor Peng
President of Adaptive and Embedded Computing Group, AMD

Oh.

Ambrish Srivastava
Senior Research Analyst, BMO

Right. 10% of total revenues, because that was a big growth driver when you laid out the strategy when you became CEO. The question is, Victor, and for you, Lisa: what was missing that you were not able to get to the potential you had laid out to investors? Was it just a matter of not having a bigger company with a much broader portfolio, or was it something in the approach you were taking that did not lead to success? Also a follow-up, Lisa: what confidence do you have that you're gonna build that up?

Lisa Su
Chair and CEO, AMD

Thank you. Okay.

Victor Peng
President of Adaptive and Embedded Computing Group, AMD

Yeah. First of all, you know, the reason I was asking what you mean by 10% is that from a year-over-year growth perspective, the data center business has been growing much more strongly. You know, part of the issue is that some of our other businesses also grew really, really strongly, so in terms of one starting to outstrip the others, that's kind of a relative thing. It's actually performing quite well. I guess there are two other things. Now it seems like ancient history because, I don't know about you, but time seems to have warped being sequestered for a couple years.

You know, we did have some strength in Asia that, you know, got dampened because of some of the environment. The only other thing I would say is that, look, we're in it for the long haul. You heard me talk about this. We've got some really good engagements, and they're going on, but some of these things are taking a while longer to deploy because it really is still very, you know, leading-edge capability. One of the things we learned is that some of that takes a little bit longer. You know, really, as I mentioned, we're getting a lot of traction in different areas like video analytics and genomics, and the NIC area is definitely gonna be expanding.

Lisa Su
Chair and CEO, AMD

Yeah. Maybe I'll just add to that. I think the key point is, the right technology elements are absolutely there. In general, data center takes time. You know, I remember, you know, many people saying, "Hey, why is EPYC taking as long as it's taking?" It takes time. You need a few generations. What we've really seen, though, is as we've come together, it's only been four months, as we've come together as a company, you know, our customers love the combination. I mean, that's what we're hearing. I can't tell you. I mean, Victor and I have done a number of customer calls, you know, Forrest and Mark and I, and customers love the combination because now they have a much broader IP portfolio.

We have actually made some adjustments in terms of the investments that perhaps Victor would have made as a standalone company versus what we'll make as a combined company. We're, you know, doubling down in the places where we know customers want us to. What I would say is I'm much more excited today than I was even when we first announced the acquisition, because it really hums. You know, we're not trying to sell customers the value proposition. Customers are telling us, "Do you know how strong this value proposition is?" Very exciting.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. I'd ask everyone to limit themselves to one question. I know there are many questions, but there are a lot of people in the room. Teresa, we'll go to Ross here in the back row. Thank you.

Ross Seymore
Managing Director and Senior Semiconductor Analyst, Deutsche Bank

Thanks. Ross Seymore from Deutsche Bank. Lisa, one for you. When you laid out the data center TAM, it looked like the GPU/AI's portion was about 50% bigger than the CPU side of things. As we look forward the next few years, is it right to assume that that's the portion of the market that'll deliver incrementally greater growth? And if that's true, how do you think the pace of that growth will differ from what you did on the CPU side of things so successfully over the last few years?

Lisa Su
Chair and CEO, AMD

Yeah, Ross, I think that's true if you look at just the sheer numbers. I think, let me start with, you know, the CPU side of the business is gonna see tremendous growth. We still are underrepresented in the market. We still have, you know, lots of ability to broaden the workloads. I think as it relates to the GPU/AI TAM, you should think about that as, you know, not just, you know, sort of today's GPUs, but also everything we would do with the AI accelerators and the additional acceleration. It will grow faster, but it's gonna grow off of a much smaller base. If you actually look at how it contributes to, you know, this long-term model, it's certainly a contributor, but I wouldn't say it's the largest contributor, you know, of that.

I think from a rate and pace standpoint, you know, it'll take some time, you know, for it to fully, you know, come out. You know, that's what we put into the model.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Here, in the front row. Diana. Stacy.

Lisa Su
Chair and CEO, AMD

Next.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Yeah.

Lisa Su
Chair and CEO, AMD

Thank you.

Stacy Rasgon
Managing Director and Senior Analyst, Bernstein

Thank you. Stacy Rasgon at Bernstein. I had a question for Devinder. Devinder, you talked about targeting 25% or more free cash flow margins. It sounds like to me, though, in 2021, without the supply investments that you were already there, you would have been 25% or 26%, and Xilinx is additive and your margins are going up in this model. Why is 25% the number that we should be thinking about? Why shouldn't it be a lot higher than that?

Devinder Kumar
CFO, AMD

Yeah. The long-term model is over the next four years. I think I might have said it, but there are additional investments we are making that also have an impact on free cash flow over the next two or three years. We did about $1 billion in 2021, and we have another $5+ billion that we are committing for longer-term growth and capacity, and that has a modeling impact in 2022, 2023, and 2024.

Lisa Su
Chair and CEO, AMD

Yeah. Stacy, the way to think about it is, given the size and scale of our business, we are making long-term, you know, capacity reservation agreements that will play out through 2025 and beyond.

Stacy Rasgon
Managing Director and Senior Analyst, Bernstein

Got it. I guess just with the 40% free cash flow return, does that all come back in buybacks or, I mean, you've got like tons of cash now. Like, any thoughts on a dividend? Like, how do we think about how that cash might actually come back?

Devinder Kumar
CFO, AMD

Yeah. I think the free cash flow, you know, doesn't get affected by the share buybacks. From the viewpoint of what I said, we are allocating greater than 40% of what we generate to shareholder returns. On dividends, right now we're very focused on share repurchase, and that's really the initial priority. We've done some over the last year, about $3.7 billion through the end of Q1, and you'll see us continue to focus on returning that, especially since the Xilinx acquisition was all shares. That's the initial focus right now.

Stacy Rasgon
Managing Director and Senior Analyst, Bernstein

Thank you.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Here, Vivek. Thank you, Teresa.

Vivek Arya
Managing Director and Senior Semiconductor Analyst, Bank of America Securities

Thank you. Vivek Arya from Bank of America Securities. Thank you, Lisa and the team, for the presentations. Lisa, my question is about mixing up the portfolio, because if I look at the last few years, a lot of the growth has definitely come from unit share gains, but a bigger part has actually come from mixing the portfolio up to much higher ASPs. But when I now look at comparative ASPs between you and your large competitors, they have kind of closed that gap. My question is, is there further opportunity in, you know, raising and mixing up the portfolio? And if you would indulge me, just give us your views: is Arm a potential threat in the data center longer term?

Lisa Su
Chair and CEO, AMD

Yeah. In terms of the product mix, Vivek, I think you're right. We've been focused on both, you know, pure unit share gain, but more importantly, getting value for our products, so we have mixed up nicely. Certainly in servers we've done very well there. We'll continue to ensure that we're not selling on price; we're selling on value and total cost of ownership. We are offering more in each generation, though, right? If you think about the roadmap of offering more cores, more capability, we do expect some, you know, ASP goodness out of that. In the PC and graphics markets, same thing.

I think we have closed some of the gap, but we still have an opportunity to continue to mix up in those premium segments. It is a smaller factor than it was the last few years, but I think it's still a factor as we put together the product portfolio. Then as for Arm, you know, the way I would say is I think Arm is actually more a partner than anything else. You know, I look at it as, you know, let's not talk about, you know, do you prefer Arm or x86? I think it's having the right solution for the right workload.

You know, in some of our businesses, like certainly in the Xilinx business with all of the adaptive SoCs are using Arm, and we're gonna continue to do that with Pensando on the DPU side as well. On the overall choice, I think we're gonna leave it with the customer. With the customer, as we think about our standard products versus our custom products, we're gonna continue to try to bring the highest performance capability for both ecosystems.

Vivek Arya
Managing Director and Senior Semiconductor Analyst, Bank of America Securities

Thank you.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Thank you, Teresa. Here, just on the edge there.

Matt Ramsay
Managing Director and Senior Semiconductor Analyst, TD Cowen

Thank you. It's Matt Ramsay from Cowen. Really appreciate all the presentations. It's been a great day. I have, I guess, a two-part question on the diversity of the business. I think that's what struck me the most today: you're a company that really had to innovate and take market share from a big market leader, and you did that through innovation, but now it's about a breadth of markets with heterogeneous compute. I guess my first question is on the software side, Lisa. Could you walk me through how you're organizing the software group? Because there's a ton of overlap that happened with Xilinx and your software stack. There's a bunch of new software stacks you're buying. All these new end markets require a ton of diversity in software.

I'm really interested in how that's organized and what investments are being made there. The second part is, just maybe this is for Dan or Forrest. I've been tracking for a while the potential for third-party chiplets to come into the data center business. How quickly does that happen? Is it a big deal? How important is that? Thank you.

Lisa Su
Chair and CEO, AMD

Yeah, absolutely. Maybe I'll start on the software question and see if anybody else wants to add. We have software assets all over the company, and by the way, that's the way it should be, because we want the software very close to the end market. As it relates in particular to the AI software, I've asked Victor to lead the overall pervasive AI effort for the company, so he's bringing together all the key leadership to agree on the architecture. You know, if you looked at the architectural pictures that Victor showed, that is something 100% of everybody working on AI software at AMD agrees on, and then we're executing it in stages as we go forward.

You know, we're gonna see a lot more software in the company, but I think we're very clear that the key architectural decisions need to be made up front, and then the execution, you know, goes along the way. I don't know if you wanna-

Victor Peng
President of Adaptive and Embedded Computing Group, AMD

Yeah.

Lisa Su
Chair and CEO, AMD

...add to that.

Victor Peng
President of Adaptive and Embedded Computing Group, AMD

The only thing I would add is, you know, we're also taking best of breed, right? There are things coming from Vitis and some of the Model Zoo and things we've got there. We're leveraging that elsewhere, and we're doing the same thing with all the learning from, you know, ROCm and some of the other things. I think the teams are working remarkably well together. I mean, we meet very frequently, both at the executive level and, more importantly, at the working level. We're working in a very aligned way with the customers, right? I think, you know, we've got great customers. We're learning all the time from our customers as well as from each other.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Dan, Forrest?

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

I think with respect to the second question, Matt, around chiplets and when are we gonna see third-party chiplets integrated into our products. Look, I think the whole industry has come to the realization that really Mark came to a decade ago, that chiplets are gonna be a foundational element of building any complex system going forward. We've got a lot of experience doing that. Admittedly, it's all with Infinity Architecture. All of our learning so far has been, you know, based on that foundational architecture that we put together. We can do things with customers and third-party IP today to fit into that infrastructure, and we have very active discussions and engagements on that basis today.

In terms of having really interchangeable chiplets from third parties that don't require deep design engagements, that's really gonna require industry standards to get developed and promulgated. That's something that we're working on as well. There's a number of companies working on that. There's one principal standards body. I think realistically, for the standards to really be codified, sort of the second generation is probably what it's gonna take to really have interchangeability. You're probably looking at the tail end of this window that we're looking at. The 2025 window is probably the earliest that we'll really see that. I don't know if, Mark, you want to add anything to that?

Mark Papermaster
CTO and EVP, AMD

That's right. Exactly right. I think the industry's coming together well around UCIe. I think it will emerge as a standard. As Forrest said, it'll be 2.0. It's a trickier interface because it is the logical protocols, but you have to actually define right down into the details of the physical interface. I think the teams across the industry are working well on it. Forrest, I agree with your timeline.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Diana here.

Chris Caso
Senior U.S. Semiconductor Analyst, Raymond James

Thank you. Chris Caso from Raymond James. Just a clarification on the roadmap that you provided. The move from Milan to Genoa is generally a 7-nanometer to 5-nanometer transition. On the roadmap that you showed for Turin, it talked about both 4-nanometer and 3-nanometer. Is it correct to interpret that some of the Turin products would be a half-node improvement instead of a full-node improvement? And if that's the case, could you confirm that we'd indeed see 3-nanometer Turin products in 2024?

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Yeah. I think what you will see from us, and you'll see from everybody, is looking forward, in this age of chiplets, you're gonna see a diversity of nodes in pretty much every generation. I wouldn't overread into that. Let me just leave it as you will see many different types of chiplets and a diversity of nodes in most products going forward.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Yeah.

Lisa Su
Chair and CEO, AMD

We should say you should expect to see, you know, 4-nanometer and 3-nanometer versions of Zen 5, and you'll see them in 2024.

Chris Caso
Senior U.S. Semiconductor Analyst, Raymond James

That means that within the chiplets, we would see some chiplets at 4-nanometer, some at 3-nanometer, different nodes, and I guess that's the whole point of using that strategy.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

That's right.

Chris Caso
Senior U.S. Semiconductor Analyst, Raymond James

Right. Thank you.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Diana, Mark here. Oh, Suresh. Thank you.

Mark Lipacis
Managing Director, Jefferies

Great. Thanks for the great presentations today, and congratulations on the great turnaround you've executed over the last eight years. It really has been remarkable and a lot of fun to watch. A question for you, Lisa. I wanna pick up on a comment you made about, I believe you said something like prepaying for capacity. Are you of the view that the tight supply chain phenomenon that we've seen over the last several years is a cyclical or secular phenomenon? To the extent that it's the latter, what have you done, or what's your operational strategy to manage that going forward? Have you had to overhaul your supply side strategy, or is it just merely a matter of prepaying for a little capacity here or there?

Lisa Su
Chair and CEO, AMD

Yeah, Mark, thanks for the question. You know, I think it's actually several things. Supply chain is now considered a competitive advantage if you get it right. We saw this, you know, sort of phenomenon of supply chains getting tight actually quite early in the cycle, more than 18 months ago. I think for AMD, there are two pieces. Yes, some of it is secular demand increase, and obviously we've worked on that. Some of it is also, look, we're a much larger company. When it was, "Hey, you want a little bit more supply and, you know, your volume is X," a little bit more supply is not a big deal.

When we're talking about the growth rates that we're talking about, it's a lot more supply. We have thought differently about it. We now have very deep relationships with all suppliers in the supply chain, including wafers, substrates, back-end, all the various pieces. We are planning for success, so we're planning for significant growth as we go through the next three, four, and five years. I think we're doing the right thing in pre-planning that. I'll also say our customers are pre-planning too, so they're giving us visibility into how much server capacity we should put online, because some of the server capacity is quite specific, and that's been very helpful as well. I think it's both. It's both secular, but it's also AMD-specific, from just how much we're growing as a portion of the overall market.

Mark Lipacis
Managing Director, Jefferies

Thank you.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Teresa, just here.

Louis Miscioscia
Executive Director and Equity Research Analyst, Daiwa Capital Markets

Hey, thank you. Louis Miscioscia here with Daiwa Capital Markets. If we could look at the GPU side, I think it's interesting that the hardware is actually coming together from a performance standpoint. Where would you put the software, in the sense of, if you can go back and think about when Ryzen and EPYC really started to gain traction five years ago? Are we sort of at that stage where you're really gonna start to pick up material share, or what else really needs to be done?

Lisa Su
Chair and CEO, AMD

Forrest.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Yeah, I think, and I assume you're referring primarily to the data center GPU software.

Louis Miscioscia
Executive Director and Equity Research Analyst, Daiwa Capital Markets

Yes.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

I think the way to think about it is, you know, David talked about the evolution of that software stack, the ROCm software stack, and we have been working on it since about 2018, so we're about four and a half years into that journey. ROCm 4 was really a complete production-ready software stack for the most advanced HPC applications, and ROCm 5, which we just introduced, is really a production-ready stack supporting the full gamut of the most popular AI stacks. In that regard, I would say we've got the complete software stack right now.

We've got great silicon, and we're starting to get the deeper engagement certainly on the AI side, and we've already secured those on the HPC. I would think about it in that way. I don't know if, David, you wanna add anything to that?

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

David's over there.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Sorry.

David Wang
Head of Radeon Technologies Group, AMD

I think you've said it well. You know, obviously, we'll continue working with our customers and partners to optimize performance as we go from narrower workloads to much broader workloads. You know, as Microsoft brings our solutions to the general cloud, more applications will get run on MI200. Obviously, just like on the gaming side, we'll continue releasing updates with performance optimizations, so we can provide the best solution across a much broader set of workloads.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Yeah. Okay, good luck.

Louis Miscioscia
Executive Director and Equity Research Analyst, Daiwa Capital Markets

Thank you.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Okay. Bye.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Diana, Joe here in the front row. Thanks.

Joe Moore
Senior Semiconductor Analyst, Morgan Stanley

Hi. Joe Moore from Morgan Stanley. I wonder if you could talk about Bergamo and how important you think it's gonna be to have these cloud-tailored solutions going forward, and how much of that is a reaction to, you know, maybe some of the competition that you're seeing from the ARM camp, that you're having these kinda dedicated cloud-based solutions that are different than what you supply to enterprise.

Dan McNamara
SVP and GM of Server Business Unit, AMD

Yeah. Thanks, Joe. Bergamo, you know, I think the premise of really what I talked about earlier, which is the expansion of the portfolio in general, is really about what we're seeing in the diversity of the workloads and getting more efficiency on the specific workload, and that's what Bergamo is for. It's driving better performance per watt, performance per dollar, higher density, higher thread count for specifically cloud-native workloads. It's very, very important. As I said earlier, most of the cloud vendors are telling us they're gonna deploy both in their fleet for different, you know, workloads and optimization points. You know, I would say that it's critically important and, you know, we're pretty excited about the uptake in it right now.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Suresh, you're at the back.

Chris Danely
Managing Director and Semiconductor Analyst, Citi

Thanks. Chris Danely from Citi. You know, the PC meltdown is not impacting you guys so far this year. If we do have sort of continued weakness in PCs this year and next year, can AMD just blow right through that and still hit the goals? I guess maybe if you could share with us your thoughts on your assumptions for the base growth of the PC and server end markets from here.

Lisa Su
Chair and CEO, AMD

Yeah. Chris, you know, I think in terms of the PC market, we have chosen to take a more conservative view on the PC market this year. I think in general, our view over the next few years is, you know, let's call it flattish, plus or minus. You know, this year is a down year, but we do see, you know, a reasonable PC market. I think the way to think about our model though, and this is a more generic question as it relates to just how we put together the model: I think the beauty of, you know, sort of the AMD of the next three or four years is we have multiple growth drivers.

You know, my view of the world is we're underrepresented in the, you know, core PC and gaming market. You know, we do see growth even in a down market there. We also have a lot of other growth vectors. The way we've put together the model, not every single business has to fire on all cylinders. What we need is to make more right decisions than not, and I think we have, you know, done that very well. In terms of, you know, data center market growth, you guys wanna cover that?

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

You wanna do this?

Dan McNamara
SVP and GM of Server Business Unit, AMD

Well, yeah. I mean, look, from a server standpoint, we believe we're gonna grow, you know, pretty aggressively here over the next couple of years. In terms of raw percentages, you know, I don't have those. Overall, as I talked about earlier, you know, EPYC momentum today and the pent-up demand for Genoa and Bergamo is very, very strong. The performance and TCO we're bringing to market generation after generation is really starting to accelerate.

Chris Danely
Managing Director and Semiconductor Analyst, Citi

Thanks.

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Yeah. Specifically, if you take a look at the TAM slides that we showed, we are calling for continued growth in both the server and then obviously the GPU-plus-AI segment. Strong growth in both of those, and that is driven by continued unit growth as well as substantial ASP uplift in both, just given the complexity of the products that not just we but the industry are driving into those segments.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Thanks, Chris. Teresa?

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yeah. Thank you. Aaron Rakers from Wells Fargo. A lot of details on the data center, but one area that I'm interested in just understanding how you're thinking about is this Pensando acquisition. In terms of data processing units, there's a lot of discussion around another kinda leg of compute in the data center around DPUs and IPUs, whatever you might call it. I'm curious how you've thought about that market opportunity, be it from the percentage of servers deployed that could attach to or have a DPU, and the evolution of that addressable market opportunity.

Lisa Su
Chair and CEO, AMD

Forrest?

Forrest Norrod
EVP and GM of Data Center Solutions Business Group, AMD

Yeah. Yeah. I think you know, there's some work that's going on right now. I believe IDC is about to come out with a new report on that market. We see DPU adoption picking up substantially first in the cloud. You certainly have seen the tremendous success, you know, candidly, that Amazon AWS has had with their own Nitro solution. That's really, you know, fired the imagination and the economic analysis of their competitors. We see the advantage that the DPU brings to the cloud market is substantial, and we think there's gonna be a rapid pickup and pretty high penetration over the next couple of years on the cloud side.

On the traditional enterprise side, the key enabling factor there is that VMware, with Project Monterey, is bringing the capability to use a DPU effectively in an enterprise or hybrid cloud environment, and that will be unlocked later this year. They're gonna be introducing that in the fall. You know, we're starting from a very low base on the enterprise side, but I think the same compelling value proposition is there on the enterprise side as well. With the VMware Monterey enablement, we would start to see substantial growth, albeit from a very low base, you know, starting later this year.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Thank you. Teresa, I think Blayne had a question. Yeah.

Blayne Curtis
Managing Director and Senior Semiconductor Analyst, Barclays

Thanks. It's Blayne of Barclays. It was kind of a clarification and question. You called out data center and embedded as the two biggest. Everything's growing in the CAGRs. I guess is that the new segmentation, so is embedded ex-data center? 'Cause you assume a lot of the synergies are coming on the data center side where you can kind of leverage your expertise, and I guess that was the clarification. Then maybe you could elaborate on that pipeline of $10 billion: how much of that is in the embedded segment, the non-data center piece of the Xilinx business?

Lisa Su
Chair and CEO, AMD

Yeah. Actually, it's a good clarification, Blayne. When you look at, you know, the segments, you know, we will be breaking out data center and then embedded. They're actually two separate segments. When you think about the largest growth driver for the long-term model, it is data center. You know, obviously, all of the segments will grow, but data center will grow the most. Then in terms of the opportunities, you know, the way we look at it is, let's call it, you know, $10 billion of opportunities that are identified. There is a good amount of that in the traditional embedded businesses.

There's also a good amount of that, you know, probably more than half of that would be in the data center, you know, sort of new segmentation. When we talk about AI, we mean AI not just in the data center, but also at the edge, and at the endpoint. AI is really a broad strategy that extends beyond the data center.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Thank you. Diana here.

Srini Pajjuri
Managing Director and Semiconductors Senior Analyst, SMBC Nikko Securities

Thank you. Can you hear me?

Lisa Su
Chair and CEO, AMD

Yeah.

Srini Pajjuri
Managing Director and Semiconductors Senior Analyst, SMBC Nikko Securities

Thank you. Srini Pajjuri from SMBC Nikko Securities. My question is on the server CPU side. Dan, obviously very excited about EPYC. You said it's on fire, and also you said, you know, the growth is gonna accelerate. Just trying to understand what's giving you that confidence. Is it the Genoa launch? If so, could you kind of compare and contrast Genoa ramp expectation with, you know, the last Rome or EPYC? Because on one hand, the product seems, you know, awesome, but on the other hand, you are much bigger now, right? Also some of your customers, I think they're extending the life of the servers in hyperscale data centers. I just wanna understand, you know, what your expectation for the ramp is. Thank you.

Dan McNamara
SVP and GM of Server Business Unit, AMD

Yeah. No, it's a great question. Look, I mean, part of the bullishness is just today with the traction of Milan. We're still—I think this week alone, we just got more OEM platforms launched. Milan's not even done in terms of the full complement of launches across both cloud and OEM. There's a long life here with Milan, and Milan's gonna run very, very strongly into 2023 and beyond. Genoa will be a very good launch, and what we're seeing is some of the cloud vendors are really pushing hard, and for others, you know, it will be a blend, right?

I believe, if you think about how fast Milan ramped though, that was because of the SP3 platform from Rome. It was, you know, not exactly a drop-in, but it was the same platform, and that really ramped quicker. With SP5 and all the new I/O and CXL and all that, it'll probably go slower than Milan, but we do expect it to ramp pretty well here in 2023, you know, starting obviously with the cloud.

Srini Pajjuri
Managing Director and Semiconductors Senior Analyst, SMBC Nikko Securities

Thank you.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. We'll probably take about two more questions. Teresa, here in the middle, or Suresh. Thank you.

Matt Bryson
Senior VP of Equity Research, Wedbush

Hi. Matt Bryson with Wedbush. I just wanted to drill into the opportunity to sell compute into communications applications. I think it's a market that you typically or traditionally haven't really had much traction in. Looking forward, can you talk to the size of the opportunity there? Then when we think about AMD ramping into that, whether it's with Siena or another product (I think Lisa's been talking about how it really took a couple generations to gain traction), can you ramp faster into that given the architecture you've built around Zen 4? Thank you.

Dan McNamara
SVP and GM of Server Business Unit, AMD

When you think about the network, you gotta think about the core and then the edge, right? In the core, I think it was a couple of months ago, Nokia talked about their telco cloud, and they went public with us saying that they're getting the same throughput for 40% less power consumption. In the core, we're seeing it already, and it's less of a software burden, right? It's, you know, easily portable. In vRAN and, you know, virtualization of the edge, that's where we're going with Siena, by the way, but there's a little bit more of a software, you know, kind of take-up that we gotta go focus on, and we're working on that right now.

We think that the core is happening today, but as you know, the core is lower volume than the edge would be. We feel like we're very well positioned. It's just that in terms of the edge, we need to build out our software stack, and that's really where we are there.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Great. Okay, last question. Teresa, over here on the edge on the side. Yeah. Toshiya.

Toshiya Hari
Managing Director and Lead Semiconductors Analyst, Goldman Sachs

Thank you. It's Toshiya Hari from Goldman Sachs. I had sort of a big picture philosophical question on OpEx. I think Lisa and Devinder, you both talked about the discipline in the model. Obviously from a scale standpoint, you've narrowed sort of the gap between yourselves and your two main competitors. I think back in the day you'd get the question, "You're small, how do you compete?" That's less of an issue. At the same time, when we sort of compare and contrast how they're spending OpEx versus what you spoke to, it seems like you're being a little more disciplined or conservative.

When you're huddled up, you know, as a team and sort of deciding what to pursue and what not to pursue, what are some of the things that you look for? Yeah, I mean, how do you compete growing OpEx at 10%-15%, which is kind of my guess what's embedded in the model? Thank you.

Lisa Su
Chair and CEO, AMD

Yeah, sure. Maybe I'll start and we'll see if Mark has anything he wants to add to that. Look, big picture, there's no question. I mean, there's a tremendous set of opportunities, and we've been growing very fast. If you look at our OpEx growth and our engineering growth, Mark showed it. You know, big picture, I think our goal is to make sure when we spend the money, we will see the return. I wouldn't say it's conservative; I would say it's very deliberate. In some sense, you know, if you look at our portfolio today, it's so much broader than it was just a few years ago. We're very focused on ensuring that when we enter a market, we see, you know, a clear execution path and a clear customer path.

That being the case, when you're talking about the numbers that we're talking about, we are adding lots and lots of engineers, and we feel, you know, very good about the growth there. I don't know, Mark, if you wanna add.

Mark Papermaster
CTO and EVP, AMD

Yeah, the thing I'll add is the skills piece is obvious, and you heard that in my comments. We're really pleased with the growth we've had, and we continue to grow. There are two other segments that I'd ask you to think about. One is the IP. We're growing the IP not just inorganically through the acquisition; we've been investing in key IP internally, some of which we previously outsourced and are taking in-house to be able to drive higher performance. Some of it we brought together in synergy planning and integration with Xilinx, where we've been able to really tighten and actually add funding to that IP portfolio. Thirdly, what I'll say is infrastructure.

If you look at the spending we have with the growth we have, look at the roadmap expansion. We're spending on R&D, on the IT infrastructure, and on the emulation infrastructure as well. Those are three major components of R&D growth that I'm really excited about. That's not to mention the software, which, as I said, we've consistently been leaning into, on both classic AMD and the Xilinx side, for the last several years.

Ruth Cotter
Senior VP of Marketing, Human Resources, and Head of Investor Relations, AMD

Wonderful. Well, that concludes today's formal proceedings. Thank you everybody who joined us here in person and virtually. For those of you here in person, we'd now like to invite you to spend some time with our products in a more informal setting with some hospitality. Thank you, everybody. We appreciate it.
