Advanced Micro Devices, Inc. (AMD)
Financial Analyst Day 2020
Mar 5, 2020
All right. Good afternoon, everyone. Thank you for joining us today for our 2020 Financial Analyst Day. We have a lot in store for you, so we appreciate everyone's time. And it's really an afternoon to take a step back and talk about the long term.
We're going to talk a bit about our strategy, a lot about our technology and our technology roadmaps and product plans. And then, of course, talk about our financial outlook over the next 4 or 5 years. So let me start first with just a little bit of context. Some of you are new to AMD, some of you have followed us for a while. But I'd like to say that we've been on a journey these last 5 years.
I've been CEO just a little bit over 5 years. And I've actually shown this chart pretty often over the last few years because it is a set of guiding principles for us. First and foremost, we decided to play to our strengths. For us, it's all about high performance, high performance technologies, high performance computing, high performance products. And this is the fundamental DNA of our company.
And so that's why this is our focus. We also set a set of pretty ambitious goals. And we set out to build a culture within AMD of really focused execution, meeting our commitments to our stakeholders, employees, customers, shareholders, partners. And we believed with all of this, we would be able to create a company that would grow and grow profitability as well. And when you take a look at our strategy, actually our strategy has been very consistent.
Again, this has been sort of a fundamental tenet of what we focused on. And it's really around 3 core competencies. I like to call them our crown jewels. When you look at it, we invest in high performance graphics for growing markets like gaming, cloud gaming, console gaming, compute and AI, and virtual and augmented reality. We think graphics will continue to be one of the most important elements of high performance technologies going forward. We also invest in high performance CPUs.
And when you think about that, think about that in our client systems, think about that in our infrastructure and cloud environments, everybody needs more compute. And then when you really put these 2 together, it's really the underlying solutions that are very differentiating. And those solutions can be at the chip level, whether you're talking about semi-custom SoCs, or they can be at the solution level, whether you're talking about platforms that put together our hardware and software or you're talking about partnerships, where we focus on deep co-design with some of the largest customers in the world. These are our 3 tenets: investing in high performance graphics, investing in high performance compute and bringing those together in very differentiated solutions. Now why do we love high performance computing so much?
We really believe that this is a technology that is the enabler for both the present and the future. It actually drives what can be done over the next number of years. And whether you're talking about very, very big systems like supercomputers or the cloud and hyperscale environments, or you're talking about new workloads like AI and big data analytics and visualization, or you're talking about the things that we enjoy like gaming and new client devices, all of these have one thing in common. They all require high performance computing and they really play to our strengths. So this is our focus from a product standpoint.
Now, as we formulated our strategy and plans over the last 5 years, the most important decisions that we had to make and frankly that any technology company has to make is around those technology investments. It actually takes years to develop a new architecture and to create that foundation to build great products. And when you think about those strategic decisions, they really lead us to where we are today. So these investments include things like our Zen roadmap. With our leadership CPU roadmap, we invested in Zen.
It was a big, big performance boost. We're now with Zen 2, and you're going to hear more about what's coming. We're investing in a new graphics architecture with RDNA. And RDNA is actually very unique because it spans consoles, PCs and even mobile gaming. And it will last us again for the next 5 years.
We also made a choice, and this was an important choice, to move our entire product portfolio to 7 nanometer very, very aggressively. And that has really paid off for us with now best in class manufacturing. And when we saw some of the constraints of Moore's Law, we said, hey, there's a different way to do this. There's a better way to do this. And it involves chiplet architecture, which allows you to put the best technologies on a package and, to some extent, break the constraints of Moore's Law.
Now these things actually may seem pretty obvious today, but frankly, 5 years ago, they weren't so obvious. And they really are the strategic decisions that lead us to today's product roadmap. So timelines are always interesting. There are lots and lots of products. If you take a look at the last few years, we first demonstrated Zen actually in August of 2016.
So that seems like a long time ago. But if you look at over the last 4 years of products, you'll see a couple of things. You'll see consistency in the roadmap. So both on the PC side as well as on the server side, consistency with what we've been able to do on the CPU side as well as consistency in the rollout of our new graphics products and really a cadence of product innovation across the last 4 or 5 years. And when you look at that for 2019, 2019 was actually a huge year.
It was a huge year for AMD because we did introduce 7 nanometer across our entire portfolio. And in high end desktops that was 3rd generation Ryzen. In the HEDT market, that was 3rd generation Threadripper. In mobile processors, we introduced the 7 nanometer Ryzen 4000 series. In graphics, we introduced the new Navi products with RDNA.
And in data center, we introduced Rome, our 2nd generation EPYC. And when you look across this product set, this is performance leadership. This is performance leadership. And I know when people say that, you're like, well, what does performance leadership mean? So let me just give you a view of how we think about performance leadership.
First, let me talk about the PC market. This is a view of desktop and notebook performance. And what this shows is, let's call it, the last 5 years of products in the industry, which show relatively incremental performance. On the desktop side, this is multithreaded performance with Cinebench. On the notebook side, this is productivity performance plus graphics performance.
You can see it's been relatively, let's call it, incremental. When we introduced Ryzen, it changed. With 1st and 2nd generation Ryzen, we became very, very competitive. With 3rd generation Ryzen, both on the desktop side and on the notebook side, we have changed the performance trajectory. We have changed the performance trajectory.
And by the way, that's what we mean by pushing the envelope on high performance computing. And so when you look at the business results of all of that, looking first at the PC market, look, we've had very strong results in PCs. When you look at desktops today, we're over 50% share in the premium segment at many of the top global e-tailers. When you look at mobile platforms, we increased the number of mobile platforms in 2019 by about 70% with Ryzen. And in high end desktop, we have the best product in the industry.
It is the best product in the industry. And what that has translated to in terms of business is we have consistently gained share every quarter for the last eight quarters, 8 points of share in the last 8 quarters. And now when you look at data center, you see the same trends. You've seen the same trends of incremental improvement in the industry over the last 5 or 6 years. This is looking at SPECint_rate, which is a normalized benchmark for data center computing.
With 1st generation EPYC, we became very competitive. With 2nd generation EPYC, we've changed the industry curve. We've literally doubled the performance of our competition with the 2nd generation of EPYC. And again, we're very excited about the progress in data center when you look at some of the statistics. We love the cloud.
We are expanding deployments in the cloud with all of the top cloud providers. We doubled our number of cloud instances in 2019. We expect to be at over 150 in 2020. We've expanded our platforms in enterprise across a number of deployments in a number of OEMs. That pipeline is growing quickly.
We doubled the number of platforms in 2019 in enterprise and we expect to be at over 140 platforms in 2020. And in supercomputing, and let's call this a really, really good area for us, we're winning consistently the top deployments. That's the top deployments today and the top deployments over the next 3 or 4 years. And one of the things that I can say we're very, very proud of is the fact that we were selected for 2 of the largest DOE deployments for supercomputers with both Frontier last year at Oak Ridge National Labs. And just yesterday, we announced with Lawrence Livermore National Labs that they've selected AMD CPUs and AMD GPUs for El Capitan, which should be the most powerful supercomputer in the world in early 2023.
So lots of good progress on the data center side. Now a couple of points on graphics. Look, we are investing in graphics. We're investing in graphics. Last year, we delivered our 1st generation of the new graphics architecture RDNA.
We saw significant gains in performance per watt as well as overall performance. And you're going to see us continue to invest in graphics. And that has led to some very nice progress in gaming, including if you look at our RDNA family, the Navi family, we're winning at 1080p and 1440p. This is a very important market for us. We're the exclusive discrete graphics provider for Macs, very important partnership for us.
We are being used in the cloud in many places, very important partnerships for us. And of course, we love game consoles. Game consoles, if you think about all of the folks that have game consoles between Microsoft and Sony systems, in this current generation, we've shipped over 150,000,000 units since we started in 2013. So a lot of momentum there overall. Let me also spend a few minutes on how we think about customers.
In addition to products, we have majored on building very deep customer relationships. And what that means is it's beyond a typical road map customer vendor relationship. It's really about how we do something special, how we co develop, how we co design, how we co innovate. Microsoft and Sony are great examples of that. We love what we do in the console relationships.
We've extended our Microsoft relationship to Surface. And that's a very important partnership for us. When we look at the cloud, the cloud is all about what we can do to optimize for the key workloads. And you'll hear more about what we're doing with those relationships. And then what we're doing with the OEMs as well to bring out the new user experiences.
So lots of focus on deepening our customer relationships and creating competitive advantage in the ecosystem. Now what has that translated into over the last 5 years? It's translated to exceptional performance. If you look at where we were in 2015, we were about a $4,000,000,000 company. We finished 2019 at about $6,700,000,000. That's a 14% annual growth rate over the last few years.
And this has actually come from growth across our PC business, our discrete graphics business as well as our data center businesses. So really on the strength of those new products. And that growth has translated into significant margin expansion. We've expanded our margins by more than 10 points in the last few years. We've improved profitability, and we've substantially strengthened the company and the balance sheet.
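For readers following the arithmetic, the growth rate quoted above can be sanity-checked with a small compound annual growth rate calculation. This is a minimal sketch, assuming revenue of roughly $4.0B for 2015 and AMD's reported roughly $6.7B for 2019; the helper function is illustrative, not from the talk:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# Assumed revenue figures: ~$4.0B (2015) and ~$6.7B (2019).
growth = cagr(4.0e9, 6.7e9, 4)  # 2015 -> 2019 spans four years
print(f"{growth:.1%}")  # prints 13.8%, i.e. roughly the 14% quoted
```

The same function applied to the long-term model discussed later (about 20% CAGR over roughly four years) gives a feel for the scale of the ambition.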
So it's fair to say we're a much stronger company than we were a few years ago. Now as much fun as the last 5 years have been, today is really about the future. And we'd like to talk about what we see over the next 5 years. I can say for sure if you ask me or anybody on this leadership team, we are even more excited about the coming journey in terms of what we can do. And the reasons are very simple.
First of all, the opportunities are larger. The impact we can make on the industry is larger. And our resources are much stronger. And so if you think about those things and what we've been able to accomplish, it's really exciting to think about what we will accomplish. So again, some of the guiding principles that we think about.
First and foremost, and you're going to hear us say it probably 100 times this afternoon, maybe 101. We will stay committed to high performance computing leadership. That is our mantra. We are uniquely very, very good at it. And frankly, there are very few companies in the industry that can possibly do it.
It is extraordinarily hard to stay at the bleeding edge. We actually see even more opportunities to combine our CPU and GPU solutions. We combine them today, by the way. If you look at our solutions today, in PCs and consoles, they put integrated CPUs and GPUs together. But we see that opportunity broadening and becoming more disruptive as we go over the next 5 years.
We also will continue to prioritize very strong and predictable execution. We want to be a trusted partner for our customers. We want people to come to AMD first because they know that they can count on us. And then frankly, from a business standpoint, our aspirations are to be a best in class growth franchise. And we don't take that as sort of a light expression.
That is our aspiration. So let's talk about the market and what's happening in the markets. We love our markets. Our markets are big. They're growing.
And there are places where it's very clear who our competition is. When you look at our TAM, it's about an $80,000,000,000 TAM, and this is, let's call it, a 2023 number. We see data center at about $35,000,000,000. Data center includes CPUs, it includes GPUs and it includes some telecom infrastructure. We see PCs at about $32,000,000,000. This is not necessarily a growing market, but it's a very large market and it's a good market for high performance computing. And we see gaming, and within gaming, we consider consumer graphics as well as game consoles.
Again, we see this to be a good market. Lots of people are gaming, more people are gaming and they all want better graphics. So it's an exciting TAM and an exciting opportunity for us. Now today, we're going to spend a lot of time talking about our technology investments. When I talk to you guys often, you're like, what makes you so confident that you can continue the leadership in products?
And it really is about the choices that we're making, the choices that we have made, the choices that we will make. And if you look at it, there are really a couple of key areas, right? First and foremost, we're going to continue to invest significantly in our core roadmap. That's our CPU, Zen roadmap; that's our RDNA roadmap. These are the baseline for what we have to do from a technology standpoint.
We're going to be aggressive with advanced technology. That has played to our strength. That's where we are. To do high performance compute, you have to be aggressive with advanced technology and that's in process packaging and interconnect. We're very excited about data center.
We think the data center has some very unique characteristics as new workloads come in and there's a lot of innovation to be had across cloud, enterprise and accelerated computing. And we're equally excited about PC and gaming solutions, because again, there are lots of new user experiences. So these are the areas of our technology investments. Now my team is going to talk about this in a lot more detail. So I'm just going to give you like a small preview over the next few minutes.
But it gives you an idea of what we prioritize. So industry leading CPU and GPU roadmaps. These are IP roadmaps, and David and Mark are going to go through these in more detail. The main thing that I want to say is that you can count on us to have a very strong cadence of continuing to innovate on the CPU and the GPU architecture. We can see the path.
We see the path today and our teams are working on these things today. In the area of advanced technology, I said we would be aggressive and we'll continue to be aggressive. Mark will talk more about this. In process, we are in a very, very good position, a very good position with 7 nanometer, and we're committed to being aggressive with advanced process nodes. I think that is really part and parcel of the strategy.
We're also very clear that packaging is key, that Moore's Law is slowing down, that packaging is a way to break some of those constraints, that chiplets are great, and that there are next generation chiplets. There's work that we can do in terms of 3D die stacking. And we'll talk more about that. And probably the area that is somewhat underappreciated when you talk about advanced technology is interconnects. As sexy as the individual components are, how you put them together makes an incredible amount of difference.
And we are uniquely positioned to really drive that interconnect architecture. Moving on to the data center market. Look, I said we were really excited about this area. We are really excited about this area. Forest is going to spend quite a bit of time talking to you about it.
The excitement comes from the fact that there's just insatiable demand for more compute. Everybody needs more compute no matter where you are. And it's not just more compute, but it's different compute. And there are new workloads and new problem sets and new ways to solve the problem, no matter where you're looking in this ecosystem. And our view is that there's a huge advantage if you think about solving the problems differently.
And that's where we're focused in data center. Some of the key centers of data center leadership, again, you can expect these: CPU roadmap. Today, Zen 2 is the best CPU in the market. 2nd generation EPYC with Rome is the best x86 server processor in the market. We intend to continue that with Zen 3 based Milan, on track for later this year.
Perhaps one of the things that may be very new today is what we're going to do in data center GPUs. We used to really share the architecture. So our GCN architecture was shared between consumer graphics and data center graphics. But when you look at the workloads going forward, there's really an opportunity to optimize. David is going to share with you our new compute roadmap for GPUs.
We call it AMD CDNA. We like that. It stands for Compute DNA. And what you can expect is this is a beginning of a new roadmap that's going to take our GPU compute architecture forward, particularly around HPC and machine learning. And you can expect a cadence like we've done on the CPU side with Zen, on the GPU side with CDNA.
And then really exciting is how we put these system solutions together and really form, together with our CPU roadmap, our GPU roadmap and our new interconnect capability, the best system solutions in the industry. And that includes hardware solutions as well as investment in software solutions on the platform side. So that gives you an idea of how we view data center and our bets in the data center. So moving on to PCs and gaming. Look, we are just as excited about the opportunities here.
It's a different market. It's a market led more by user experiences, but the world is different today. People have a different expectation of what you can do in a notebook or a desktop or a gaming system. And in the PC market, people want more performance, they want more capability, they want more portability, they want more security and those are things that we're good at. And for gaming, you're going to hear from Rick, there are like more than 2,000,000,000 gamers in this world.
And frankly, they have a lot of expectations. They want to be able to play their games anywhere, anytime on any form factor, while interacting with their friends and family and they want to do it at high resolution. And that requires also plenty of CPU and GPU capability, a lot of technology here. And in this part of the business, what you're going to hear from me and what you're going to hear from Rick is we are committed to building the best products for PCs and gaming. And we think there's a lot of opportunity, a lot of opportunity.
In the PC market, it's a large market that has lots of demands across both notebook and desktop. We've done very, very well in the desktop market, particularly in the DIY market. But we are still very underrepresented in consumer and commercial systems. And we'll talk about how we become more representative in those markets. In graphics, we have the RDNA roadmap, but the roadmap also needs a great set of products to go along with it.
And as I hear all the time from gaming enthusiasts, we are committed to a top to bottom gaming portfolio and that's a multi year, multi generational commitment. And then on the console side, look, we are honored to be partnered with Microsoft and Sony for their next generation consoles. It's probably the most anticipated consumer launch of 2020. And what we do with each of them is really help them power their visions for the next generation of gaming with custom SoCs. And so that gives you an idea of what we have in store for PCs and gaming.
Okay. So lots of exciting technology and products that you'll hear about today. But I also want to make sure that I give you a preview of how we're thinking about driving shareholder returns. We believe we have great markets. We believe we have great products.
And we also believe that we are underrepresented in the TAM. There is a lot more that we can do. And so with continued execution, what we're driving towards is best in class growth. What we're driving towards is making those right investments, not just for today, but for the next 5 years. What we believe is with our continued expansion of our product portfolio, we will continue to expand margins and grow profitability.
And we also believe we'll generate a significant amount of cash in that time frame. Now Devinder is going to go through much more of this towards the end of the afternoon, but I thought I would again give you a preview of what that long term financial model looks like. So in 2017, at our Analyst Day, we set out a model, which at the time some asked whether it was a bit aggressive. It was double digit annual revenue growth. It was gross margins at about 40% to 44%.
We were about 30% margin in 2015 and it was about increasing profitability and cash. And I'm happy to say, if you look at the numbers for 2019, we've more or less met these goals. In some cases, we've exceeded these goals. And so we feel good about the progress. Now as we project out for the future, we're even more excited about what we think we can accomplish.
And for our long term model that we're going to talk about today, it actually has a few very important components. The first is we believe we can accelerate growth with where our products are positioned, with where we are positioned with our customers. We believe that growth will accelerate, and we estimate for the new model, we're calling this a long term model, think about it as 4 years, so let's call it like a 2023 model, we believe we can deliver approximately 20% compound annual growth rate over that timeframe. As we expand what we do in the commercial markets, particularly in data center and commercial PCs, that will help expand our margins. And so we see margins increasing from approximately 43%, where we ended 2019, to greater than 50%. We think operating margins are in the mid-20s. So again, I think we're building a very balanced model where you see growth, where you see investment, margin expansion on both the gross and the operating line.
And again, our goal is to generate a significant amount of cash as we grow the business. So Devinder will have much more on this, but hopefully that gives you an idea of what we're trying to achieve. And when you think about all the technology and all the products and all the markets, this is the financial model. So let me finish up here and just state that, look, we are very ambitious with where we think we can take AMD. That ambition is motivated by building the best.
And so that's the motto for today. It's about building the best on both the technology side as well as the business side. And that comes with leadership in our roadmaps. That comes with really execution excellence. It comes with market share gains across all of our markets and a commitment to strong shareholder returns.
So with that, let me turn it over to my team so that we can give you a lot more detail on how we get there. And with that, let me introduce Mark Papermaster to the stage to talk about our technology.
Mark?
Well, thanks to all of you for joining us here today. I have the opportunity to share with you our technical journey, which we've embraced as a company. We've embraced it in our strategy and in our culture. And so it's exciting to be here today and to talk to you about our view of our status, of how we've done and, more importantly, our journey going forward, staying on that pace to deliver high performance to the market. And so, Lisa touched on that earlier, but it is so fundamental to what we do in our technology strategy at AMD that I'm going to spend just another minute talking about these workloads and the markets that we serve, because it is an ever changing landscape across each of these segments.
But there's one thing in common and that is that incredible demand for more performance, more high performance in each of these segments. In fact, it's actually an exponential growth in most of these segments. And it's not a surprise to you why that is because you see it every day. You see smart devices around you. You have them in the appliances.
You've got your smart home. You see on the factory floor the telemetry, the smart functionality that has been built in across the factory floor. You see it at the emerging edge of the market. When you look at how 5G will transform the market, it changes the analytics that has to occur at the edge of the network, in base stations, in the closets, traditional telco closets across the industry. And it is the approaches that supercomputers have deployed, right, to bring massive scale compute to solve the emerging workloads in AI and the analytic challenges that we're facing.
You look at decision making, where the massive data that's going into decision making today is requiring large scale simulations. What about content delivery? The Olympics are going to be broadcast later this year in 8K. That's a tremendous demand in terms of delivery on that content and, of course, game serving, driving up the capabilities in our cloud servers. And then bring it down to the client interface level.
You look at gamers, you look at content creation, you look at what we are all doing on our PC devices today, and we want more and more visualization, more clarity, a more immersive experience. And of course, we want higher capability and efficiency at that level, at that client level. And so all of these factors come together, and that's what is driving our strategy. It's our role at AMD, with the products we develop, to enable our customers to harness that data and put it to work, and moreover to do it with devices that are easy to program, that have been out there for years and have an entire ecosystem around them to bring those solutions to bear. That is what drives our strategy.
And we couldn't be where we are today, as Lisa mentioned earlier, without decisions that we made several years ago. And I will have to say that it wasn't hard for us when we saw where the workloads were going to call out that strategic focus on high performance. That was a strength that we knew that we could tap on with such the deep experience in products over that area. But we made a set of tough calls on investments. We fundamentally had to transform our ability to deliver that performance.
So we had the building blocks, but we transformed that delivery process in a way that we could be trusted to deliver each generation after successive generation, be a trusted partner. So that was a change in our execution model. We changed our road map and we changed the very way in which we put the IP blocks together through a modular approach. And that's what we'll spend a few minutes just looking at some of those accomplishments over the last 5 plus years. I'll start with that reengineering of the engineering approach at AMD.
Frankly, it's one of the key accomplishments. It's, I'd say, equal to the tough technology decisions, some of the changes we made in the delivery model at AMD. It was a shift to a culture of high performance, a culture of collaboration and a culture of top flight engineering execution. And in the prior FAD, I shared with you some of the ways in which we are making those changes, and they have, in fact, proven to be highly effective. We talked before about this idea of leapfrogging teams, right?
So what is a leapfrogging team? It's what we did on our CPU roadmap. We had multiple CPU lines, so we were split in our focus. We consolidated on a single Zen family, not just one product generation, but we were working on multiple product generations from the outset. And then what we do, and what we still have today, is we always have one generation going to market, one well along in the design and one in the conceptual phase.
And so this has been fundamental to improve our ability, again, to be that trusted supplier to our customers. But we went much further than that. We changed the way in which we brought our IPs together. We historically had very different methodologies across the company. It was hard to collaborate when everyone's sort of rolling their own, doing it their own way.
And so what we did when we adopted that modular approach, yes, it was key to hit the performance goals. That's what drove us, to hit the performance goals. But equally, it facilitated the cultural change, because for modularity, you have to architect how the pieces come together. That drives the collaboration across the company, drives that co-engineering of how we design going forward. And it frankly changed the plumbing of how we put products together forever going forward at AMD. We also made a change in how we put our simulation together, how we verify our designs, because we needed to accelerate how we bring products to market.
So we invested in our simulation and emulation techniques to drive earlier verification of the features that we're designing in our new products. And it turned out to be quite impactful, because what we were then able to do through this effort is to shift left. If you think about a schedule that you lay out in front of you, historically you have had that validation of software features on top of the hardware to the right of a line. That line is when the silicon comes back and you're testing that silicon. So that feature enablement was done in the bring-up in the lab with that silicon. And what we've finally done is we've shifted that validation left.
We've shifted it pre-silicon, leveraging these advanced simulation techniques that we've deployed. And so, yes, it had an immediate impact: when you validate pre-silicon, the hardware comes back and we're completing that bring-up in hours to days versus the weeks to months of the methodologies that we had historically used. And it actually changed the way that we even architect our solutions, because it enabled a parallelization of the architecture, the hardware and software architecture, putting these solutions together. And a great example of that is modern standby.
Modern standby, which any of you with a laptop will know, is what gives you improved performance and energy efficiency across your device. And it's complex because of the deep software, hardware, and firmware interactions required to get it done. So we partnered from architecture through delivery with Microsoft: our AMD software team, our AMD firmware team, our chip design teams, and our platform teams all worked together and leveraged these approaches. And you see the result: we are shipping modern standby. Frankly, this is the new normal at AMD. These are our expectations.
This is how we're developing all of our products going forward. And look at the execution this has led to on 7 nanometer. Lisa called out that that decision, that choice, had to be made some years ago. And we delivered as promised over the course of 2019: a comprehensive portfolio, over 20 products in the market now on 7 nanometer. Look at what 7 nanometer did: it doubled the density, and you could do that in roughly the same power envelope.
So look at our Ryzen roadmap: we doubled the number of cores. Ryzen HEDT, high-end desktop, doubled the number of cores. EPYC server doubled to 64 cores per socket. And in the most recently announced notebook parts, the Ryzen 4000 series, we doubled to 8 cores while still including that integrated graphics capability for leadership, and with all-day battery life.
So 7 nanometer was successfully executed and rolled out, again, across over 20 products. And of course there's Navi in our graphics line, with impressive performance-per-watt gains leveraging 7 nanometer; so it's across our portfolio at AMD. What I'd like to do now is dive a little deeper on our delivery of that new Zen 2 core. When you look at that architecture, as I said a moment ago, 7 nanometer was key, of course, given the density and energy efficiency, but the design was fundamental to both the performance and the scalability of our implementation. We leveraged the 2nd-generation Infinity architecture, taking that Infinity Fabric and broadening its application, while still providing the on-chip, socket-to-socket, and server connectivity that we had delivered historically with our Infinity approach.
But now we leveraged it to implement a chiplet approach. Look at our server implementation and you see 8 smaller, easier-to-manufacture 7-nanometer CPU die connected through that Infinity architecture to a central 12-nanometer IO and memory die. It delivered performance, it delivered scalability, and it's actually easier to harness that performance because it created a single NUMA domain. On top of that came performance enhancements to feed those engines even better, including PCIe Gen 4 to double the IO rate we had before. So that Infinity architecture was fundamental to putting the whole picture together: delivering new CPU performance, a 15% instructions-per-clock gain, combined at the holistic level to make sure we could deliver the core performance. And look at that core performance. When we introduced Zen, it was a breakthrough: a 52% instructions-per-clock gain.
As I said a moment ago, Zen 2 added another 15% instructions per clock. But the knock we had taken on that first generation was a gap on single-thread performance, and we set out with Zen 2 to eliminate that gap. You can see it on Cinebench single-thread, a great, representative benchmark for content creation: a 21% improvement in single-thread performance, the majority of it driven by the design improvements. We changed the branch prediction scheme to be more accurate.
You get more accurate prediction of the code path. We improved the pipeline execution, expanding the width as well as our dispatch efficiency. We doubled our floating point to a 256-bit-wide pipe, and we doubled how we feed it, so it's a true doubling of floating point.
And with the doubled core density, that's actually 4x the floating-point capability on every Zen 2 product we shipped. Particularly important to gamers and to servers is the latency to memory; we improved that effective latency and doubled the L3 cache. Really, really important decisions that deliver real-world performance.
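The quoted gains compound multiplicatively; here is a rough arithmetic sketch. The core counts, pipe counts, and clock speed below are illustrative assumptions for the math, not AMD's disclosed configurations:

```python
# Sketch: how the per-generation IPC gains quoted in the talk compound.
# Figures from the talk: +52% IPC at Zen, +15% IPC at Zen 2.
zen_vs_prior = 1.52
zen2_vs_zen = 1.15
print(f"Zen 2 IPC vs. pre-Zen baseline: ~{zen_vs_prior * zen2_vs_zen:.2f}x")

# Sketch: doubling the floating-point width per core and doubling the
# core count gives ~4x peak FLOPS per socket. Each FMA counts as 2 ops.
def peak_gflops(cores, simd_bits, fma_pipes, ghz):
    flops_per_cycle = (simd_bits // 64) * fma_pipes * 2  # double precision
    return cores * flops_per_cycle * ghz

zen1_socket = peak_gflops(cores=32, simd_bits=128, fma_pipes=2, ghz=2.0)
zen2_socket = peak_gflops(cores=64, simd_bits=256, fma_pipes=2, ghz=2.0)
print(f"Socket-level peak FLOPS ratio: {zen2_socket / zen1_socket:.0f}x")
```

The 4x ratio holds for any clock speed, since the frequency term cancels in the ratio.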
And that's what's driving the rapid growth in acceptance of the Zen product family in the industry. I want to show you just a few stats on that. This is a count of cores, and the bulk of our shipments, clearly the highest volume, is in the client products. You can see that after the Ryzen launch in early 2017, about a year later we had 50,000,000 Zen cores shipped into the market.
We bumped that performance with the key Zen+ release, and you can see that we jumped to almost 160,000,000 cores with that 2nd-generation Ryzen. But critical was Zen 2 and the acceptance it has had, really blowing through the historic performance levels of CPUs in the industry; in just about 8 months, we're now at over 260,000,000 Zen-family cores in the industry. So very wide acceptance, and with the strength of our roadmap, you'll see this curve do nothing but increase in steepness going forward. And as I wrap up looking at that set of products and the execution we've had in just the recent past, I'd be remiss if I didn't take a moment to talk about product security, because it is the foundation.
You have to be trusted to be able to sell your compute devices. It's the bedrock of what we do. It's job 1 at AMD. And so, of course, we include a dedicated secure processor in every device that we ship. We boot up in a trusted environment, and we authenticate before we communicate with any other device. That continues strong: we remain very resilient in terms of the security of our CPU designs, and we stay focused every day. If you look at security events, we were affected by Spectre V1 and V2, like every CPU in the industry, and we implemented a software mitigation immediately.
We hardened against Spectre V1 and V2 in Zen 2. But the resiliency of our designs means that when you look at the other side-channel attacks, the ones you've been reading continued reports about, our architecture was not affected by Meltdown. Our architecture was not affected by SWAPGS. We were not affected by ZombieLoad 1 or ZombieLoad 2.
So we remain incredibly focused here, and we want to continue to roll out enhanced security features. Our encryption has been well accepted in the industry: Memory Guard, where memory encryption is seamlessly implemented across our devices. The ability to move to virtual machines and have unique keys across unique instances in a cloud environment has been very well received, and it's now rolled out across the OS and hypervisor partners in the industry. And the way we do that, of course, requires no code modification, so there's no rework of our end users' applications, and you're seeing further adoption of this approach. Going forward, we're continuing to invest in modern security across our client devices, and we will be increasing our security in those cloud, multi-tenant environments. We already have a leading-edge encryption capability, as I just described, with SEV, and we're growing that to add a capability to protect against what's called a malicious hypervisor.
That covers the case where a bad actor somehow gets into even the hypervisor running those cloud applications; you'll see that coming in our roadmap soon. And then lastly, I'm very pleased to announce that AMD has joined the Confidential Computing Consortium. We feel very proud of the advances I just described to you, but we want to join with others in the industry and work to close the final gaps to protect data throughout its entire lifecycle. So again, security will remain a bedrock and foundation of everything that we do.
Okay, let's shift gears and look forward at our investments, because as excited as I am about what we've done, I couldn't be more excited about what's coming in the future. How we put our solutions together around our CPU and GPU roadmaps will be very exciting as we leverage the process and packaging investments we're making, as well as our next-generation interconnect. And what's so special about these investments is how they come together to enable accelerated computing and drive our platforms going forward at AMD, along with the software stack and other features that you'll hear about from David Wang in the next section. So let's start with the roadmap and sustained execution.
We talked about how critical that is to the very foundation at AMD. So you saw the progress we've made with Zen 1 and Zen 2. And Zen 3 is right on track: it's coming along well and is set for delivery late this year.
And what I'm really excited about is sharing Zen 4, our next generation, which will be a 5-nanometer design. We're working with our foundry partner on 5 nanometer in the same close partnership that we had for 7 nanometer: we bring the know-how of how to marry design and foundry technologies for high performance. So we're continuing that same partnership and that execution for 5 nanometer. Look, we've called out a very clear CPU strategy; our roadmap is stable and we're executing.
We are heads-down in execution, and this trajectory will keep us positioned to be that trusted supplier and to meet the demanding applications going forward. And I have to tell you that when we designed that roadmap, we designed it assuming, as for most of my career, that we would have a continued gap versus the process capabilities of our x86 competitor, because that's the world we had lived in. That's what we designed for, and that's what we put in place. We anticipated that FinFET would help close that gap, which it did: 14-nanometer FinFET made very good advancements.
The plot you're looking at shows relative server product density and relative server performance per watt versus our competition. You can see that we had always learned to design with efficiency to account for some of the gaps that we had. But what was historic was 7 nanometer: we did not anticipate that at 7 nanometer we'd actually have a leadership process capability. And so what we are doing going forward is continuing that partnership, executing very, very closely with our foundry partners.
And we're assuming that our competitor will address this and will come back. But once you have a gap, and once you have some of the issues that may cause a delay in a new technology, it's going to take some time, right? So we're assuming that the competition will come back at some point, but we will keep our trajectory on the same pace of competitiveness that it's been on. And in an era of Moore's Law slowing down, once you have that historic level playing field, it is about how you put the solutions together. And so it is about these innovations of integrating solutions.
And that's again where AMD will not let up. We were at the forefront of packaging technology. Go back to 2015, when we implemented our 1st-generation stacked high-bandwidth memory over a silicon interposer, 2.5D packaging on a GPU, leading the industry in applying an approach that lowers power and dramatically improves GPU performance. We've continued that approach with our high-end GPU products. On the CPU side, we've had excellent experience with multi-chip approaches and then innovated, as we talked about, with the chiplet approach on Zen 2 to give us tremendous configurability as well as more performance and scalability going forward.
And what I'm really excited about is sharing our view of the future, because what we're working on for several of our future products is actually marrying those approaches together: improving the density, and combining the capability we built practicing 2.5D and 3D packaging with chiplets. We call this our X3D approach, and it plays perfectly into the modular approach we have at AMD. So it's investment in packaging, driving the flexibility, density, and efficiency of packaging going forward, and with it, our Infinity architecture. If you look at the roadmap of the Infinity architecture, the 1st generation gave us breakthrough CPU connectivity.
With the 2nd generation came that high-performance chiplet implementation on CPUs, and we applied it on our GPU roadmap as well, to connect 4-way or, in forthcoming products, up to 8-way configurations of GPUs. So that was a huge step forward in terms of our scalability and our roadmap. And very exciting to share with you today is what will be coming in future products: our 3rd-generation Infinity architecture. With that, we complete the connectivity across our CPU and GPU roadmaps, bringing the Infinity architecture linkage and coherency across those engines. What does that mean?
It means performance. It means efficiency. It gives us unprecedented bandwidth in the industry that you'll see between those devices. It reduces the latency. But more importantly, it unifies the addressability.
As you cache from the GPU into the CPU, it looks like a unified pool of memory that's available. And with the coherency, it's easier to program. The programmer is not having to manage those elements. It's very straightforward to access. What does this mean with these type of improvements?
Well, look at machine-learning training workloads that demand massive amounts of data, where the models are growing tremendously; they demand this type of connectivity. Or look at another example, motion-picture rendering, which needs that combination of both CPU and GPU. So many other applications are coming up. We're very excited about what our 3rd-generation Infinity architecture will enable.
And you don't have to look further than accelerated computing to see how this is going to play out, and you don't have to think that far back either. Go back to 2012: that's when gradient-descent-based analysis spawned this whole AI approach to rapidly making sense of data. And that's where you start to see supercomputing take advantage of GPUs and CPUs together. That was the first wave of heterogeneous computing, with both the high-performance computing, HPC, markets and machine learning leveraging that heterogeneous approach.
But now the model sizes have exploded, and that previous approach isn't good enough. What it takes is the combined, optimized system of CPU and GPU, plus the ability to bolt on accelerators, that we're providing. That is the new era of computing. It's the next era of computing. It is the exascale era of computing.
And I will tell you that it was a heated competition for the systems Lisa talked about in the Department of Energy: Frontier with Oak Ridge National Lab and El Capitan with Lawrence Livermore National Lab. And we love the fight at AMD. That is what we're made of. That fight drove us to improve our roadmap, to improve our competitiveness, to go head to head and simply beat the competition. And it is laying the foundation for our next long-term investments at AMD, because it drove us to the edge of what was possible.
And that's where I'd like to end my comments, on that point about the long term, because that's what we're about. We have a deep R and D commitment at AMD and deep experience at AMD, and now we've tied an incredible execution culture to that. So our core roadmaps will be relentless in compute gains as well as efficiency gains, generation after generation.
We've led the industry in innovation with our modular and chiplet approach, and it will keep us on a Moore's Law pace of performance even as Moore's Law itself, the semiconductor node alone, is tailing off. We've invested here for the future. Our execution approach, our culture, is in fact a differentiator, and it's allowing us to be a trusted supplier to Fortune 500 companies across the globe. And lastly, our successful journey to exascale computing is driving the next wave of innovation at AMD. We will not let up on the pace of development and R and D.
Thank you very much. And with that, I'm really pleased to introduce my partner in technology development, Senior Vice President of the Radeon Technologies Group, David Wang.
Hey. Thank you, Mark, for covering the exciting CPU and interconnect technologies and the future of accelerated computing. So now it's my turn to talk to you about GPUs. Some of you may know I rejoined AMD 2 years ago; I have actually spent most of my career doing GPU development.
And my top goal here is to drive GPU leadership. So I'm very happy to be here to share with you our GPU technology roadmap and our journey to drive leadership in gaming, in data center, in accelerated computing. Okay. Now let's dive in. Let's start with our vision.
Our vision is pretty simple: we want to drive AMD Radeon technology everywhere. And indeed, we have made tremendous progress, expanding our Radeon ecosystem across PCs, Macs, and game consoles, and now to the data center and mobile. This is a very, very broad ecosystem, spanning from cell phones to supercomputers. And driving GPU leadership across such a broad spectrum of workloads requires a huge focus on technology development.
So next, I'll talk to you about our technology development strategy. We have a very simple and clear strategy on process, architecture, efficiency, and software. On the process side, we're driving aggressive adoption of the advanced process technology Mark mentioned, because process is the foundation, but it's not enough. We also want to develop domain-specific architectures, so the architecture is optimized for its workload; more on that later.
And we want to aggressively drive performance-per-watt and performance-per-area efficiency improvements, because leadership in performance per watt drives higher ASPs, and improvement in performance per area continually drives down cost. And lastly, we want to leverage our open-source software strategy to continue to expand our ecosystem. So Mark talked about the process; I'll cover the rest in my presentation today.
We want to develop domain-specific architectures because, with Moore's Law slowing down, it is very, very challenging for a general-purpose architecture to achieve optimal efficiency for both gaming workloads and high-performance compute workloads. Therefore, we are shifting our strategy from a GPGPU type of architecture to domain-specific architectures: the RDNA architecture optimized for gaming and the CDNA architecture optimized for compute that Lisa mentioned earlier. This allows us, through domain-specific optimization, to achieve optimal efficiency for gaming and for compute. It also means that end users don't have to pay for performance and features they don't need for their application.
So it's win-win. I'll cover RDNA next, followed by CDNA. We launched the RDNA architecture last year. It was an all-new architecture designed and optimized for gaming, with the objectives of driving the efficiency and performance of modern gaming workloads, improving power and bandwidth efficiency, and providing a flexible platform to implement software features that enhance the gaming experience. And lastly, the architecture was made to be super scalable, able to support everything from mobile gaming to cloud gaming.
In order to achieve these objectives, we developed the following architectural innovations. We have a new compute unit design, a very, very efficient pipeline for diverse and dynamic gaming workloads. We also developed a new multilevel cache that can feed data to the new compute array in an energy-efficient way. And we optimized our graphics pipeline to deliver the highest performance per clock and also improve the clock frequency.
As a result, we were able to deliver a more than 50% performance-per-watt improvement from GCN to RDNA through a combination of the architecture, the 7-nanometer gain, and design optimization. So now let's look beyond RDNA. RDNA is the new foundation of our multiyear, multigenerational gaming GPU roadmap. RDNA 2, our next generation, will continue to innovate to raise the bar on performance per watt while adding advanced features such as ray tracing and variable rate shading. You're going to see RDNA 2-based products from AMD and from our partners later this year.
And with our strong development pipeline, RDNA 3 is also underway and will continue to push for higher performance per watt and new features, so stay tuned. Now let's take a closer look at how we are improving RDNA performance per watt. We are leveraging the proven CPU design methodology from the Zen roadmap to make similar performance-per-watt improvements on the RDNA roadmap. We focus on 3 main areas. First, microarchitecture innovations to improve the performance per clock, or the IPC in CPU terminology. Second, enhancing the logic: reducing logic complexity and switching power.
And third, put together, a new physical design flow to drive the highest possible clock frequency, multiple gigahertz of clock frequency, for our graphics engine. With all these enhancements, our plan, our target, is to drive another 50% improvement from RDNA to RDNA 2. As you can see, we have established a very strong generational performance-per-watt improvement roadmap, which will extend to RDNA 3. This strong roadmap allows us to really drive desktop and notebook gaming leadership, and it is also a key reason why Samsung has chosen our graphics IP even for their mobile applications.
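Those generational targets compound quickly. A minimal sketch of the arithmetic, treating each "more than 50%" gain as exactly 1.5x for illustration:

```python
# Each GPU generation targets >=50% better performance per watt.
# Figures from the talk: GCN -> RDNA delivered >50%; RDNA -> RDNA 2
# targets another ~50%, so the gains multiply across generations.
per_gen_gain = 1.5
rdna_vs_gcn = per_gen_gain        # GCN -> RDNA (delivered)
rdna2_vs_gcn = per_gen_gain ** 2  # GCN -> RDNA 2 (target, compounded)
print(f"RDNA vs. GCN: {rdna_vs_gcn}x; RDNA 2 vs. GCN: {rdna2_vs_gcn}x")
```

A third generation at the same pace would put RDNA 3 near 3.4x the GCN baseline, which is why the roadmap framing emphasizes the per-generation rate rather than any single release.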
Now let's talk about ray tracing. Ray tracing is an interesting technology for gaming. However, as you know, adoption has been slow, mostly because of the lack of content, the lack of hardware, and the performance penalty when gamers turn it on. So we have developed an all-new hardware-accelerated ray tracing architecture as part of RDNA 2. It is a common architecture used in the next-generation game consoles.
That will greatly simplify content development: developers can develop on one platform and easily port to the other platforms. This will definitely help speed up adoption. We also provide lower-level API support that gives more control to developers, so they can extract more performance from the underlying hardware platforms; this will help mitigate the performance concern. And take a look at the image: it was rendered on RDNA 2 silicon, running the latest Microsoft DXR 1.1 API.
That API was co-architected and co-developed by AMD and Microsoft to take full advantage of the common ray tracing architecture. This is a great proof point of the benefit of Radeon everywhere; I encourage you to check it out in the demo area after the presentation. So now I want to switch gears to the data center GPU. We mentioned that we want to develop domain-specific architectures in order to improve efficiency, which is extremely critical for data center operations.
And that's why we designed the CDNA architecture, optimized for data center compute, with these objectives: to enhance performance for HPC and machine learning workloads with the specialized compute and tensor operations we added in; to reduce data center total cost of ownership through the same type of performance-per-watt efficiency improvement methodology we borrowed from RDNA; to add features that enhance enterprise-grade RAS, security, and virtualization support, because all of those are critical for data center operations; and lastly, the architecture must be scalable in order to scale performance for multi-GPU and exascale computing, of course leveraging the Infinity architecture that Mark mentioned. So this is our multi-generational compute GPU architecture roadmap.
We introduced our first 7-nanometer data center GPU last year, based on the GCN architecture and equipped with the 2nd-generation AMD Infinity architecture that greatly enhanced multi-GPU connectivity; that product was optimized for HPC and machine learning applications. Moving forward, our next-generation CDNA 2 architecture will continue to push on performance per watt. But even more importantly, it will be equipped with our 3rd-generation Infinity architecture with CPU-GPU coherence, which extends the architectural capabilities to support exascale computing.
So we have a very, very strong roadmap for the data center GPU. Now let's take a look at the software, because hardware without software doesn't go very far. Our data center compute software is based on what we call ROCm, the Radeon Open Compute platform.
It is fully open source, and our partners and customers love open source because it enables them to innovate and differentiate, to create their own value-added solutions without being locked into proprietary solutions. ROCm also provides multi-platform support: people can develop in HIP code, a platform-agnostic, open-source API that can be compiled and run on existing GPUs. And we also provide tools, translators, for people who want to convert their CUDA code to HIP code, to preserve their investment while moving to an open platform.
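To illustrate the idea behind those translators: much of CUDA-to-HIP porting comes down to the near one-to-one correspondence between the two runtime APIs. The following is a toy sketch of that rename pass, not AMD's actual hipify tooling, which handles far more than textual substitution:

```python
# Toy sketch only: a source-to-source rename pass in the spirit of
# CUDA-to-HIP translation. Real tools also rewrite headers, kernel
# launch syntax, and library calls.
import re

# A few real CUDA-to-HIP runtime API correspondences (not exhaustive).
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def toy_hipify(source: str) -> str:
    """Rename CUDA runtime calls to their HIP equivalents."""
    # Longest names first, so longer API names are matched whole.
    pattern = re.compile("|".join(sorted(CUDA_TO_HIP, key=len, reverse=True)))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

snippet = "cudaMalloc(&buf, n); cudaMemcpy(buf, h, n, kind); cudaFree(buf);"
print(toy_hipify(snippet))
```

Because the mapping is this mechanical for most of the runtime surface, ported code can stay a single HIP source tree that builds for either vendor's GPUs.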
And lastly, the ROCm software architecture was built to scale. It scales multi-GPU performance, which is critical for exascale computing, and the software can also leverage the expanded bandwidth and coherency of the Infinity architecture. So we have been increasing our software investment over the last few years. Let's look at our progress.
The journey started in 2018. At the time, as you can tell, there were still quite a few software components to build; we were in the early phase of development, but we built a solid foundation. Last year, we put our focus on building a complete software stack to accelerate machine learning. By working closely with Google and Facebook, AMD GPUs are now officially supported by TensorFlow and PyTorch, and the combination of the two represents more than 95% of machine learning applications.
So I would say we're pretty well covered there. And this year, our plan is to develop the complete exascale software solution, covering machine learning and extending to HPC. This is to support our supercomputer design wins, and there's no better way than working with the exascale customers to make sure our software is ready for large-scale deployment. And lastly, our ROCm ecosystem is also growing.
This is a partial list of our ecosystem partners in operating systems, compilers, libraries, and applications. We have great community support, which is important so we can offer end-to-end solutions for our customers. Now let's quickly look at our performance. The chart on the left shows our continuous performance improvement efforts: from Software Release 2.0 to the latest release, we almost doubled the performance running on the same hardware.
And this demonstrates the maturity of our machine learning software stack, the frameworks, the compilers and libraries. And you can see on the right hand side of the chart, it shows our multi GPU performance scalability. We can achieve almost linear scalability from 1 GPU all the way to 16 GPU. Again, it demonstrates the maturity of our communication library and the benefit of Infinity architecture. And this scalability is critical for exascale computing.
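The "almost linear" claim can be framed as scaling efficiency, the achieved speedup divided by the ideal linear speedup. A hedged sketch of that arithmetic, where the speedup figure is a hypothetical illustration rather than AMD's measured number:

```python
# Scaling efficiency = achieved speedup / ideal (linear) speedup.
# 100% would mean perfectly linear scaling across all GPUs.
def scaling_efficiency(speedup, n_gpus):
    return speedup / n_gpus

# A hypothetical near-linear result on 16 GPUs:
print(f"{scaling_efficiency(15.2, 16):.0%} of linear scaling")
```

Efficiency near 1.0 at 16 GPUs is what makes the approach credible for exascale machines, where the same communication stack must hold up across thousands of devices.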
So to put everything together: only AMD can provide the combined advantages of CPU plus GPU plus open-source software. We can provide fully integrated CPU-plus-GPU systems and the unified tools that make it easy for people to develop their applications. AMD's Infinity architecture connects many CPUs and many GPUs together with enhanced bandwidth and cache coherency. Together with our open-source software, we can demonstrate a performance advantage compared to the competitors' combinations, as you can see in the chart. These combined advantages of accelerated computing really enable us to drive high-performance computing leadership.
And that's the key reason why we were chosen for the supercomputer design wins. Okay, in summary: we have developed a strong GPU technology roadmap based on advanced process technology, the RDNA and CDNA architectures, aggressive performance-per-watt efficiency enhancement, and our open-source software. This strong technology roadmap puts us on the path to leadership, and we are very, very focused on executing to our plan and to our commitments.
This will enable the next wave of winning products. That's the end of my presentation. Thank you very much. And with that, I'll introduce Rick Bergman to tell you about the exciting business and products for the PC and gaming business. Thank you.
Well, thank you very much, David. It's quite exciting to see your passion around GPUs. I thought I'd first start off by reintroducing myself. Like David, I recently joined AMD, in my case after 8 years as President and CEO of Synaptics. Prior to that, I had a 30-year semiconductor background, starting out with Texas Instruments and then a decade at ATI and AMD.
While at ATI, I led the business that grew our GPU share to number 1 in the marketplace for discrete graphics. While at AMD, I led the team that developed the first APU, or accelerated processing unit, the first time an x86 core and a GPU were combined together. But that's all in the past, exciting as it was in its own way. What I'm really excited about now is the opportunity in front of AMD, and that's what I'm going to talk about over the next little bit.
So why am I so excited about it? It really comes down to 3 things. First, and you heard this from Mark, we've created this execution machine on processors, just this regular cadence of leadership products coming out. 2nd, I can now see clearly that the same thing is happening on GPUs.
David talked about our RDNA, or Navi, generation of products coming out in that same fashion. And 3rd, wow, we get to combine these into incredible APUs. So it's a really, really exciting time in AMD's 50-year history, and there's a strong path forward as we look at where we can take this. So my role as head of the Computing and Graphics group comes down to how we can turn that into sustainable growth in our market share and our profitability with the platforms moving forward.
So there's 1,500,000,000 PC users, active PC users out there. What an exciting user base. And they have different needs. You have some consumers that are looking for the thin and light laptop computer. Of course, we have the enthusiasts that want the absolute bleeding edge technology.
And then you have content creators who view their PC as a tool, and we've kind of rewritten the rules there, going from 16 to 64 cores. But at the end of the day, AMD has to pull all that together, and there's no company positioned better than AMD to pull that together into solutions. And so what is the opportunity?
It's a $32,000,000,000 opportunity in the PC market for us, so it's a very sizable market. We started with the Ryzen desktop in 2017 and have now increased to full coverage of the TAM with Ryzen products. We're focused on executing across markets. So whether it's desktop, notebook, high-end desktop, commercial, or consumer, we have the opportunity to lead in all of those markets.
So as Lisa mentioned, we've disrupted the market, and in some ways it's so exciting that we have the fastest solution in 3 distinct segments of the marketplace. There's our Ryzen desktop line, using the chiplet approach and powered by 7 nanometer: the clear leader in desktop. There's Threadripper for the high end as well, the world's first 64-core high-end desktop processor, allowing content developers in some cases to cut, for example, rendering time by 50%. And in notebook, we launched the 3rd-generation Ryzen Mobile product back at CES, again setting a new level of performance for ultrathin notebook products.
So how does that all add up? Well, let's look at the market share. We've almost doubled our unit shipments in just 2 years. As you'd expect, our market share has followed a similar trend. But in addition, our ASPs, our average selling prices, have also increased each year. So if you think about that, we've expanded our stack of Ryzen products, increased our unit shipments and also increased our ASPs, meaning we're moving up.
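As a rough illustration of the point above, when unit shipments and average selling prices rise together, revenue moves with the product of the two multiples. The figures below are hypothetical stand-ins, not AMD's reported numbers:

```python
# Revenue scales with units x average selling price (ASP), so gains compound.
# Inputs are hypothetical illustrations, not reported figures.
def revenue_multiple(unit_growth: float, asp_growth: float) -> float:
    """Overall revenue multiple given a unit multiple and an ASP multiple."""
    return unit_growth * asp_growth

# Units roughly double (2.0x) while ASPs rise by an assumed 15% (1.15x):
print(revenue_multiple(2.0, 1.15))  # -> 2.3
```

Even a modest ASP uplift on top of doubled units compounds into well more than a doubling of revenue.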
We're moving into the premium segments of the marketplace, going from 30% TAM coverage to full TAM coverage. So what is our path forward over the next 5 years? Well, certainly, we want our growth to be long term, sustainable and built across diverse segments. We've done a great job on the desktop side, but we're going to aggressively pursue the notebook and commercial segments as well. It's untapped potential for AMD.
For commercial, we're a natural fit, as Mark touched on. We have the security technology, the reliability and the performance to be quite successful in that particular segment. And we offer multi generational support: the CPU cores based on the Zen technology and, of course, the GPU cores as well. But what's really exciting again is pulling it all together into SoCs.
And we have clearly the best SoC capability and products coming to market in 2020. And I'll touch on a few of those through the course of the presentation. And we need software, of course, to pull it all together and deliver the full potential of the hardware. So now let's touch on the desktop leadership again. Almost any way you look at it, AMD has leadership in the key markets in the desktop area.
With our desktop Ryzen products, of course, you can go out to the websites, see the reviews, see the prices that we're getting, see our market share at retailers around the world. Or you can look at the high end desktop type of solutions as well, where we do have those 64 cores, 3x the performance of our competition. On virtually any workload, AMD is beating the competition. But don't just take our word for it: go out to the respected websites or reviews, and you'll see review after review after review talking about how AMD has clear leadership in these segments. So now how do we transfer that desktop leadership into notebooks? Notebooks represent 64% of the TAM out there.
And certainly, performance is important in the notebook business, but we want to get the entire user experience right as well. So we want our customers, when they get an AMD based notebook, to love that experience from the day they go home and open up that box all the way through the lifetime of that notebook. In the case of productivity, we've doubled the multi thread performance generation over generation. In the case of responsiveness, we've really worked on our drivers, and Mark talked quite a bit about modern standby. If you look year over year, we have 5x the number of platforms that support modern standby versus the prior generation.
And then, of course, battery life: we've moved all the way up to as much as 18 hours with our new Ryzen product. And so we announced at CES our 3rd generation AMD Ryzen mobile processor, to quite a bit of fanfare and tremendous excitement at the OEMs. Of course, it's the world's first 8 core x86 mobile processor and the world's first 7 nanometer mobile processor, with tremendous graphics and tremendous battery life. The average lifetime of a notebook is about 4.8 years. So let's just call it 5 years.
If you go back 5 years and look how far we've come, we've increased the compute power by 6x, we've increased the graphics by 3x, and the battery life of a notebook is now 3x to 4x longer. So just tremendous progress, telling you where we're going to be 5 years from now as well. This is really a watershed moment for AMD. It's a big moment for us when you look at the performance. Of course, we've led in graphics for a decade.
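Those 5-year multiples can be translated into rough annualized rates; the 6x and 3x figures are from the talk, and the calculation below is just standard compound growth:

```python
# Convert a cumulative improvement multiple over N years into an
# equivalent annual rate (compound growth). Inputs from the talk:
# ~6x compute and ~3x graphics over ~5 years.
def annualized(multiple: float, years: float) -> float:
    return multiple ** (1.0 / years)

print(round(annualized(6.0, 5), 2))  # -> 1.43 (compute, roughly 43%/yr)
print(round(annualized(3.0, 5), 2))  # -> 1.25 (graphics, roughly 25%/yr)
```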
That's just expected now, that we'll have the best graphics solution in the industry. Since the Zen core, our multi thread performance has been leadership as well. But what's new now is single thread performance. AMD has taken over the leadership of single thread performance. So in the 3 major categories of performance, AMD is the leader.
So what does that mean? And at some point, the numbers start to speak. So you can see we've more than doubled our market share in the notebook space. So keep in mind, we've been able to accomplish that without 7 nanometer. This is pre 3rd gen.
Without 7 nanometer, without Zen 2, we've been able to more than double our market share. And now you look at the number of platforms that we have, going from 50 to 135 in 2020, and that's a leading indicator of where this business is going. And that's with all the top PC OEMs as well, HP, Dell, Lenovo and so forth, selecting us for premium notebooks all the way through to entry level notebooks. Clearly, great progress. The other area I mentioned that's so important to us now is the commercial business.
That represents 48% of the TAM, and a great opportunity to continue that momentum and growth. Of course, the decision criteria around commercial products are a little bit different. Performance still matters. You want your workers to have snappy performance, of course. Security is absolutely critical, and so I'll talk to that on the next slide.
You do want that battery life to keep your employees going all day long no matter where they are. And then, of course, manageability for deployment, imaging and management within a modern IT infrastructure. So let's talk for a minute about what it requires for security. Security is just so important in today's world. We take a 3 level approach to security.
Of course, the first level is at the processor level. And there, as Mark mentioned, we do have that built in dedicated security co processor to protect your PC. Then as you expand to the platform level, we have features like Memory Guard, the full encryption of memory, to protect your data while it's resident in your PC or while it's being transmitted in either direction. And then you take it up to the next level, the operating system level. We work with Microsoft and the OEMs to ensure that we have the best security at the enterprise level as well.
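As a hedged aside on the platform-level memory encryption mentioned above: on Linux, AMD's memory encryption features are advertised as CPU flags such as `sme` (Secure Memory Encryption) and `sev` (Secure Encrypted Virtualization); these are AMD's server-side names, used here as a stand-in for the client Memory Guard branding. A minimal sketch that parses `/proc/cpuinfo`-style text for them; the sample string is fabricated for illustration:

```python
# Sketch: detect AMD memory-encryption CPU flags in /proc/cpuinfo-style text.
# "sme" = Secure Memory Encryption, "sev" = Secure Encrypted Virtualization.
# SAMPLE below is a fabricated excerpt, not output from real hardware.
def memory_encryption_flags(cpuinfo_text: str) -> set:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return flags & {"sme", "sev", "sev_es"}
    return set()

SAMPLE = "processor\t: 0\nflags\t\t: fpu msr sme sev\n"
print(sorted(memory_encryption_flags(SAMPLE)))  # -> ['sev', 'sme']
```

On a real system one would read `/proc/cpuinfo` instead of the fabricated sample.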
And so we support things like ThinkShield from Lenovo or SureStart from HP. And so now how do the numbers look here? As you can see, once again, very strong share moving forward. But again, let's look at the leading indicator. How many design wins do we have?
In this case, more than double the number of commercial platforms, going from 35 to 70 plus in 2020. And again, I'll remind you, this was without the Ryzen Pro solution, no 7 nanometer, no Zen 2 cores. We have that momentum from the prior products moving forward. So really expect things to pick up also in this space, again working with top commercial OEMs like HP and Lenovo. Lenovo already announced their intention to use Ryzen Pro in ThinkPad products.
So there'll be a full lineup of ThinkPad solutions leveraging AMD processors. Now to the roadmap; of course, there's a great deal of interest in the roadmap. Over the last 3 years, we've had 3 generations of Ryzen products. The Zen core is so critical, the foundation for the success of our products in this area. And as you heard, by the end of this year, we'll have the 4th generation of Ryzen with the Zen 3 core.
So it's going to be a very exciting year for AMD across all the different platforms that we participate in. So now I'm going to shift gears. I just talked about the PC market. Now I'll talk about the gaming market. There are 2.5 billion gamers out there in the world.
So often you have different images of who a gamer is. In some cases, it's a business person playing Candy Crush, sitting at a gate waiting for their flight to take off. Or it could be the enthusiasts, really again pushing, wanting the latest and greatest. So we love the enthusiasts. It could be a teenager playing Minecraft at home on a PC.
Or perhaps it's a 20 year old playing The Sims, also on a PC. A range of environments, different areas, different performance levels, all possibilities for Radeon technology. And as to where Radeon technology is deployed, well, it's everywhere. We have an installed base of over 500 million, and it's clear we're on a path in the next several years to get to over 1 billion Radeon users. But what's really remarkable about this slide is who is adopting Radeon technology.
Of course, you have Google up there, you have Microsoft, there's Sony, there's Apple, and of course the PC vendors as well. These are companies that do their homework. They go out and look at who's got the best technologies out there. And invariably, they're coming back: when you want the world's best graphics IP, the best graphics solutions for gaming, it's AMD. And so the TAM in this business is also sizable, a $12 billion market, and it's growing.
RDNA, as you heard from David, really forms the foundation for powering the next decade of gaming. It's going to enable a full stack of solutions from AMD. And of course, every gamer knows it's not just about the hardware; you also have to have great software as well. So our first solution, Navi, what we call our Navi 1X solution using RDNA, we introduced last year. It was a very ambitious introduction.
Of course, a brand new ground-up architecture on a leading edge process like 7 nanometer, key features like GDDR6, and we also led the industry with PCIe 4.0. A very bold architecture. We targeted the largest audience in the PC market, and it was really around 2 distinct segments. First, the 1080p market: that resolution represents about 60% of PC gamers. And then the 1440p market as well, which is the majority of the balance of the gamers out there. Two different product lines, the Radeon 5600 and the Radeon 5700.
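To put those two resolution segments in workload terms, per-frame pixel count is simple arithmetic: 1440p pushes about 1.78x the pixels of 1080p, and 4K about 4x:

```python
# Per-frame pixel counts for the resolution tiers discussed in the talk.
def pixels(width: int, height: int) -> int:
    return width * height

p1080 = pixels(1920, 1080)  # 2,073,600 pixels
p1440 = pixels(2560, 1440)  # 3,686,400 pixels
p2160 = pixels(3840, 2160)  # 8,294,400 pixels (4K)

print(round(p1440 / p1080, 2))  # -> 1.78, 1440p vs. 1080p
print(p2160 / p1080)            # -> 4.0, 4K vs. 1080p
```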
Another thing to note here is the breadth of solutions. Just like I mentioned previously on Ryzen, these are all the leading graphics add-in board vendors, and several of them also support us on the motherboard side as well. We have 3 times the number of SKUs in this generation than we had in the prior generation, due to the strong pull and interest for our Navi 1X products. And as further validation, the Radeon 5700 XT won GPU of the Year from PC Gamer.
And so what matters in the end is how well you run games, and more importantly, modern games, recently released games. This is about a half dozen of the really critical games out there. Looking at our performance, as you can see, it's quite good, which means very fluid gameplay. It also shows we're actually a little better than the competition as well, because we targeted the right area with the right architecture, the right solution.
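"Very fluid gameplay" has a concrete arithmetic meaning: the GPU must finish each frame inside a fixed time budget. A small sketch of that budget at common refresh rates:

```python
# Per-frame time budget in milliseconds at a target frame rate.
def frame_budget_ms(fps: float) -> float:
    return 1000.0 / fps

print(round(frame_budget_ms(60), 2))   # -> 16.67 ms per frame at 60 fps
print(round(frame_budget_ms(144), 2))  # -> 6.94 ms per frame at 144 fps
```

A frame that overruns its budget is what players perceive as a stutter, which is why average frame rate alone understates the experience.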
And so as I've mentioned software a few times, I just want to reemphasize the importance of software in this marketplace. We do put a great deal of emphasis on it because it really allows the performance and the features to come out of our solutions. So in December, we released our latest software, which we call Adrenalin, and enabled a new graphics user interface, making it easier for gamers to game. When brand new hot titles come out, we have to be there on day 0 with optimized drivers that work flawlessly.
So that is our commitment. And then there's features. Software gives us a touch point with the gaming community, and so we hear when they're looking for certain additional features, or maybe we innovate and come up with features. One of those examples was Radeon Image Sharpening, which just allows a better visual experience in games.
And so now to shift: what's next, of course, is the big question. Of course, that will be our Navi 2X family of products. We'll introduce those products at the end of this year. But clearly, we're stepping up and targeting enthusiast class performance as well, which means we have to have great performance at 4K resolution.
Remember, I talked about 1080p and 1440p; the next big step is 4K resolution. That means uncompromised performance, very fluid play. And of course, we have to have just stunning visuals as well. As David talked about, ray tracing is important to show the shadows and reflections and to be able to game at a very fluid, consistent rate. So we'll add additional features like variable rate shading as well to give you that performance uplift so critical in this segment of the market.
And over time, we'll take that Navi 2X stack across the entire top to bottom lineup for graphics. So now we shift to what our GPU roadmap looks like. Well, this probably looks very similar to the CPU roadmap that I just showed. Remember, at the very beginning of my presentation, I said we're getting that same clockwork, that same execution machine on the GPUs that we've had on the CPUs. So we're getting that cadence, that drumbeat.
And so yes, as I mentioned, Navi 2X starts later this year and then continues forward. But the development team is off working on that next generation, creating even better performance, an even better set of visuals, because it demonstrates our commitment not only to be in this market, but to actually win in this market as well. And so now let's talk about another key segment of the gaming market, which is the game consoles. We have a 10 plus year relationship with Sony and Microsoft, as Lisa mentioned, having shipped over 150 million units of the current generation. But we're on the cusp of the next generation.
So the next generation consoles will use our latest Zen technology and our latest RDNA technology as well, really creating this very immersive experience with ray tracing, 3D audio and fast load times to really excite a new generation of gamers. So whether it's a Ryzen mobile processor, Navi 2X or one of these next generation consoles, clearly 2020 is going to be a very, very exciting year for gamers. And so now to wrap it all up: you've heard about our great processors, you've heard about our great graphics, incredible SoC capability. We'll have more APUs this year than ever in the history of AMD, which gives us the opportunity to have leadership across all segments in PC and gaming. Remember, combined, that's a $44 billion market, and we have the opportunity to lead in every segment.
As we walk the hallways here in AMD or any site worldwide, it's absolutely clear that our employees are maniacally focused on executing to the roadmaps that I showed today. There's really no reason at all that we can't seize the opportunity for sustained growth and success over the next decade at AMD. Thank you very much. So thank you. And now we will take a brief break and we'll come back with Forrest talking about our data center solutions.
Good afternoon. Welcome back. I'm incredibly pleased to be here to share with you an update on the progress we've made so far in the data center and the future that we see ahead of us. And for us at AMD, when we think about our mission to deliver high performance computing, there is perhaps no market for which that is more important than the data center market. Because the data center market of today is a market that is seeing continuous innovation and continuous disruption, with new applications that weren't even dreamed of a few years ago springing up all the time.
With machine intelligence, with the advent of web scale applications, with artificial intelligence, it is truly an era in the data center of continuous change and an endless need for compute power, an endless need for high performance computing, which is a perfect match for our mission. And it's also a very large and interesting market. As Lisa said earlier, it's a $35 billion market in 2023, with a number of rapidly growing segments that we believe are a perfect fit for our technology and our direction. And so I want to talk to you a little bit about how we're going to continue to attack this market. But let me first reflect on where we began this journey, this journey back to leadership in the data center.
We embarked on this several years ago. We actually talked about this and our ambition to bring innovation back to the data center at our Financial Analyst Day back in 2015. The first major milestone on this journey was when we introduced the 1st generation AMD EPYC processor in 2017, which truly was the 1st competitive x86 server CPU to challenge our competitor in quite some time. That first EPYC processor, code named Naples, with its incorporation of the Zen core, began our tour of Italy and a return to the data center. We put up a roadmap showing what we were going to deliver from 2017 all the way out to 2020.
One of the reasons that we did that is because we knew that to be considered for the data center, much less to be a leader, we had to be not just a provider of high performance components. We had to be a reliable partner, somebody that end customers could count on to always be there with each generation, new products, new innovations, maintaining the value of the investment that they would put into any new entrant into the data center. And so we put this roadmap up. And I'm incredibly pleased and proud to say that the teams have delivered with metronomic regularity on this roadmap, with Naples in 2017, Rome of course last year and Milan on track to ship this year, as Lisa mentioned earlier. So the execution is there, and the execution with the 2nd generation EPYC has delivered an amazing part. This is the highest performing x86 processor ever, and it's not close.
When we introduced Rome, it delivered twice the performance of the competitive x86 processor, which enables our customers to, in many cases, drive 25% to 50% lower total cost of ownership in providing data center services to their internal and external customers, which allows them to transform their data centers, their IT, their operations. And as Mark mentioned earlier, the Zen 2 core at the heart of Rome incorporates not just high performance but the high security features that are so critical to securing the data center and our customers' valuable IT assets. And so Rome has been a revolution in the data center. On the performance side, I'm incredibly proud to say that we have garnered over 140 world records so far. We're the ultimate in performance in a wide range of workloads: from enterprise IT through high performance computing through cloud infrastructure, Rome delivers superior, world record performance.
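The link from a 2x performance part to 25%-50% lower TCO can be sketched with back-of-envelope fleet arithmetic; the server counts and the assumption of constant per-server cost below are illustrative, not AMD's actual TCO model:

```python
import math

# Back-of-envelope: for a fixed total workload, the servers needed shrink
# in proportion to per-server performance (assuming constant per-server cost).
def servers_needed(total_work: float, per_server_perf: float) -> int:
    return math.ceil(total_work / per_server_perf)

baseline = servers_needed(1000, 1.0)  # 1000 servers at baseline performance
doubled = servers_needed(1000, 2.0)   # 500 servers at 2x performance

print(1 - doubled / baseline)  # -> 0.5, i.e. up to ~50% fewer servers
```

Real TCO also folds in power, cooling, software licensing and floor space, which is why quoted savings land in a 25%-50% range rather than exactly at the performance ratio.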
If you take a closer look at the processors that are available for the vast majority of servers today, and that means 2 socket servers, and you look at what we've delivered with Rome versus what the competitor has to offer today, including the processor refreshes that they introduced just a few days ago, we clearly have an incredible performance lead: double the performance of the competitive offerings, inclusive of their most recent refreshes, in the 2 socket market. And it's the same story in 1 socket, where the unique value of Rome, and indeed Naples before it, was to first establish a no compromise one socket market. By not placing any artificial limitations on the availability or features of a single socket processor, we have enabled customers who need all the reliability and resiliency of an enterprise grade server processor to scale their processor needs to fit their application and hence drive yet further TCO value. Now, Rick talked earlier about the various aspects of performance in the client market. And indeed, these performance metrics that I've shown you so far demonstrate the throughput superiority, the multi threaded superiority, of AMD EPYC over the competitive parts.
And that holds true even when you look at lower core counts and compare core to core. So in the heart of the market, where most enterprises buy, in the 16 core processor segment, if you take a look at our performance there, again inclusive of the 2 brand new parts that our competitor just introduced, AMD EPYC has a performance lead and an unmatchable performance per dollar advantage: almost 3.5 times the performance per dollar of the most recently introduced 16 core part from our competitor, a truly remarkable achievement. So with that, I want to reflect on where we stand today.
So today, we're in a place where we have demonstrated repeatedly predictable execution to the market, quite frankly highlighted and thrown into high relief by uneven execution by others in this segment. We've demonstrated leadership performance on multiple dimensions, and we've demonstrated strong ecosystem support, where we have the OEMs, the software vendors and the IHVs that provide the rest of the ecosystem to compose an entire solution all embracing EPYC. And so that's a great foundation. That's a great place to be. And so we're taking that foundation, and our imperative, our mission now for the entire team, is to continue accelerating growth of the AMD EPYC product portfolio.
And so our mission is to broaden the deployments that have already begun, 1st with Naples and then more rapidly with Rome across enterprise, cloud and HPC. It's to work with our customers to continuously unlock all of the power and performance of our solutions by working with them to optimize their workloads and tune their software to get every last drop of performance out. It's to continue ramping our field and customer support organizations as our customer set grows and to support optimization and deployments. And then that's all in service, of course, of growing our market share. Now on the market share, the next significant milestone is one that we are going to hit next quarter when, as promised, we believe we will achieve double digit server CPU market share in Q2 of this year.
Now when we look at what we've got in more detail across each one of these segments, we have built a portfolio that really can demonstrate leadership performance in the workloads that matter. And the markets that matter, broadly speaking, are these 3: the cloud market, which now constitutes, quite frankly, about half of the overall market for server processors; the enterprise IT market, which is a little north of 35%; and then of course the HPC market, which constitutes most of the remainder. With the 1st generation EPYC, taking that 1st generation Zen core, we were able to produce a product, Naples, that effectively addressed about 60% of the workloads across those broad markets: the vast majority of the HPC workloads, much of the cloud workloads and a good proportion of the enterprise IT workloads. And for them, we had leadership performance and a great solution.
With Rome and with the work that we've done with our ecosystem partners and end customers, we've expanded that footprint substantially. And we see over 80% of the workloads across these segments are addressed best, are addressed in a leadership fashion with 2nd generation EPYC Rome. And so to give you a few examples in each one of those segments, on Enterprise, we have a great performance story. We had with Naples superior virtualization capability and superior TCO. With Rome, we've taken that to the next level, up to 50% TCO savings over the alternative.
But beyond that, in many other application areas, in Java performance, in database performance, we're providing nearly twice the performance of the competitive solutions, even on things such as SAP. We've demonstrated world records and demonstrably superior performance. Couple that with the embrace of the OEM partners, who have greatly expanded the portfolio of platforms made available to end customers, growing from 22 platforms available in 2017 with the 1st generation EPYC processors to over 140 expected this year, and we've got a winning solution for enterprise IT, with broad support across the enterprise ecosystem on the OEM side, the software side and the IHV side. Our mission: keep driving, keep driving. On the cloud computing side, it has been incredibly gratifying to watch the growth here as well.
In 2018, our first instances were launched with some of the major cloud providers. We had 18 public instances or services available on 1st generation EPYC systems. And now this year, we expect over 150 public services and instances available, so that customers around the world who have embraced the cloud computing paradigm can do so with AMD EPYC and see the full performance of Rome. In order to support that, we have developed and deployed customized versions of the EPYC processors, uniquely tweaked in many cases to ensure that the environments and the TCO are maximized for our cloud customers. And they in turn are using the fact that we can support 60% more virtual machines, each at the same performance level or better; that we support better Docker performance; and that we have much better performance for WAD, which is critically important for any cloud provider operating at scale.
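The 60% VM-density figure translates directly into per-VM cost arithmetic. A sketch with normalized, assumed numbers, not the cloud providers' actual economics:

```python
# Assumed illustration: 60% more VMs per server at equal per-server cost.
vms_baseline = 100                  # VMs per server, baseline (assumed)
vms_epyc = int(vms_baseline * 1.6)  # 60% more VMs per server -> 160
cost_per_server = 1.0               # normalized server cost

cost_per_vm_baseline = cost_per_server / vms_baseline
cost_per_vm_epyc = cost_per_server / vms_epyc

print(round(cost_per_vm_epyc / cost_per_vm_baseline, 3))  # -> 0.625
# i.e. roughly 37.5% lower cost per VM at the same per-server cost.
```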
And then we've got the memory bandwidth to unlock all of that performance and feed the beast, as Mark is wont to say, to keep all of those cores humming and to keep that performance available to end users. With those advantages, it's no wonder that the world's leading clouds in the U.S., in Asia and in Europe have embraced AMD EPYC. A few examples: Google's largest general purpose instance, actually their largest instance of any type, is available on 2nd generation EPYC Rome. We've delivered more than 25% better TCO for Twitter.
And Microsoft has a great set of instances in both general purpose as well as purpose specific instance types, including examples that are demonstrating over 60% higher SQL performance for companies that are embracing a move to the cloud. So a great place for us so far, and a great place for us to continue to grow with our cloud customers. On high performance computing, of course, this is a real area of pride for us at AMD, as Lisa mentioned earlier. We have a part and a roadmap that have demonstrated leadership in HPC applications by virtue of a substantial lead in floating point performance, and demonstrated leads on commercial applications, with up to 72% higher structural analysis performance and 95% higher computational fluid analysis performance than that available from our competitor. Very importantly as well, and one of the things that's led to a number of major supercomputing design wins with large meteorological research associations and institutions around the world: 120% faster weather forecasting performance.
And with the Frontier and El Capitan announcements, it is a real source of pride for the AMD team to be powering the world's fastest exascale systems and to be contributing to supercomputing and HPC clusters around the world, from industrial applications to research to defense applications as well. A new area of focus for us has been to expand our reach beyond the traditional areas of cloud, enterprise and HPC and to dive deeper into the realm of telco and infrastructure products, which are themselves undergoing a revolution. Many of the systems that are powering today's networks have moved, or are moving, from being based on proprietary hardware and proprietary ASICs to running in software on industry standard servers. And the thing that's enabling that is the incredible power of today's servers. That in turn gives those customers who have embraced that paradigm the opportunity to add significant agility and significantly better costs to their businesses.
EPYC, particularly the 2nd generation EPYC, is uniquely capable in this area, with double the I/O bandwidth per link, with more links and more memory bandwidth. We demonstrate leadership networking data plane performance and truly are the perfect fit for the emerging 5G telco infrastructure. Nokia, one of the leaders in telco infrastructure, recently demonstrated why they have embraced AMD Rome: because they see twice the performance on their 5G infrastructure based on Rome than they see using our competitor's CPU. That's the type of performance that Rome provides. And so we've got a great part, we've got a great start in all of those markets, and we're just going to keep going.
I'm very pleased to say that with the 3rd generation AMD EPYC part, code named Milan, we expect to continue demonstrating performance leadership in both throughput and low thread count applications, and we expect to be shipping it later this year, as promised. And with Milan, we will open up the aperture even further. I talked about how, with Rome, we've got better than 80% coverage of the workloads across HPC, enterprise and cloud computing. With Milan and the work that we're doing with our ecosystem partners, we believe we'll have leadership performance in virtually 100% of the workloads that power today's data center, which is incredibly exciting, and we believe continues our progress in every segment. But our tour of Italy doesn't end there.
So Rome is an incredible part. Milan will be fantastic. But Mark and the team are completely committed to that CPU leadership that he talked about before, and the Zen 4 core will show up in our next generation server part, code named Genoa, which you'll see in 5 nanometer and which we announced yesterday will be the CPU at the heart of the El Capitan system. So the journey will continue. The leadership products in server will continue. But beyond server, the other market that we've got to talk about is GPU.
Data center GPU for us is a rapidly growing market that we believe by 2023 will constitute an $11 billion market opportunity for us. And in data center GPU, there are really a number of different application areas or segments where it is crucial, where it is vital. The first one is virtualization and cloud gaming. Many customers are making the move to do virtual desktop and virtual gaming in the cloud, unlocking a new set of economics, unlocking new forms of collaboration, unlocking new points for TCO optimization. Beyond virtualization and cloud gaming, of course, the most actively discussed application of GPUs has got to be machine intelligence.
And we are committed to driving our roadmap aggressively for machine intelligence. And then of course, high performance computing where the top end supercomputers have been the province of GPU accelerated computing for the last decade. Our start in this market has already been made. We have the world's first 7 nanometer GPUs for the data center. We are the only company to implement industry standard hardware based virtualization that allows the resources of the GPU to be scaled up or down depending on the needs of a specific application and that's critical for almost any application in the cloud.
We're taking the same approach to developing, and not just developing but deploying and delivering, a multi generational roadmap for GPUs that we took on the CPU side. And when we look at the innovation that's happening in each one of these areas, we think that innovation is best fueled by an open software environment, by an open software community. And AMD has made a strong commitment to having completely open source toolchains, from drivers to libraries, to enable each one of these segments. In GPU virtualization, it was incredibly gratifying to us earlier this week to have Microsoft Azure introduce its latest virtualization instance, the NVv4, which provides accelerated VDI solutions via the cloud and which, by virtue of our hardware virtualization technology, scales up and down and delivers superior economics to any other alternative. That same technology is also critically important to providing the scalability and the long term economics that are so important to cloud gaming, to ensure that that segment grows as well.
In machine intelligence, look, the applications of MI are just incredible. You all have seen it. The revolution in natural language processing, in image recognition, in recommendation engines and even in industrial automation and robotics has been incredible. The types of capabilities that we're seeing now in these systems are things that people would not have dreamed of a decade ago. And we're seeing applications for artificial intelligence spring up everywhere.
One of the interesting ones that was discussed the other day is using machine intelligence in concert with traditional HPC computing to steer scientific research, and by doing so to get what was characterized as a force multiplier. So we believe that machine intelligence applications are going to continue to grow, but what's going to grow even faster is the insatiable need for performance for these applications. Because as researchers refine their algorithms, as they explore the limits of what machine intelligence can do, they're driving an exponentially increasing demand for performance by virtue of larger models, longer run times and more data going into all of these models. And that, quite frankly, looks a lot like HPC demands as well. Mark showed earlier a portion of this data on HPC, but just to put a finer point on it, we have seen on the supercomputing side a 10,000x increase in performance over the last 15 years.
And if you look ahead to the exascale era, we need another 10x increase in performance to break that exascale barrier. Similar things are happening in machine compute. If you look at the model sizes of some of the most interesting work that's being done out there, where you're seeing the most interesting results, model sizes have grown 300x in the last 3 years. And so for both of these segments, we see this unending, to use Mark's word, relentless demand for increased performance, and the only way to meet those performance needs is with accelerated solutions. And so our roadmap is committed to providing that performance for those applications.
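Those growth figures imply some striking compound rates. As a rough sketch, using only the round numbers quoted above (the per-year rates are our arithmetic, not figures from the presentation):

```python
# Implied annual growth rates behind the performance figures quoted above.
# Inputs are the talk's round numbers; the per-year multiples are derived.

def annual_factor(total_multiple: float, years: int) -> float:
    """Convert a cumulative growth multiple over `years` into a per-year multiple."""
    return total_multiple ** (1.0 / years)

# Supercomputing: 10,000x more performance over the last 15 years.
hpc_per_year = annual_factor(10_000, 15)    # roughly 1.85x every year

# Machine-intelligence model sizes: 300x larger in the last 3 years.
model_per_year = annual_factor(300, 3)      # roughly 6.7x every year

print(f"HPC: ~{hpc_per_year:.2f}x per year")
print(f"Model size: ~{model_per_year:.1f}x per year")
```

Both rates far outpace what general purpose CPUs alone have delivered year over year, which is the argument being made here for accelerated solutions.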
And David showed this earlier. We're already shipping our first 7 nanometer GPU for the data center, our MI50 product based on the GCN architecture. Later this year, we'll be introducing our first CDNA product, also based on 7 nanometer and optimized for HPC and MI applications, to be a highly efficient, scalable product to meet those needs. CDNA 2 will be coming soon to usher in the exascale era and to provide that 10x uplift in performance at the system level. And it's not just about performance, it's about making sure that that performance is usable.
And this was touched on earlier, but I want to turn up the contrast. Because look, accelerated systems have now been around for about a decade, but what we have done is bolt accelerators onto a server system architecture that, quite frankly, was created for webscale applications and databases. And so the CPUs and the GPUs are isolated from one another. They don't work well together. And although the performance is there, it's difficult to reach.
It's difficult to program applications effectively to fully unlock the performance in a system with this topology. The first step in addressing this, we're taking with our CDNA architecture, to allow better scalability and coherency amongst the GPUs, so that up to 8 GPUs can scale more efficiently and work more efficiently with one another. But with the CDNA 2 architecture, we get something truly special, where we extend that Infinity Architecture, in this case Infinity Architecture 3, to couple the CPUs and the GPUs together into one unified data view. This not only provides additional performance; much more importantly, it allows programmers to stop worrying about the explicit management of data movement, of preemption, of a bunch of functions that they haven't had to worry about on CPUs for many, many years and that are acting as barriers to embracing accelerated computing. So we're super excited by this unified accelerated computing architecture that we're bringing to bear in CDNA 2. And it's that architecture, quite frankly, that led to the exascale systems that Lisa and Mark both mentioned earlier.
First with Frontier, which will be a 1.5 exaflop system deployed in the middle of next year, capable of outperforming the top 100 supercomputers on today's supercomputing list combined. It will be powered by AMD CPUs and AMD GPUs connected together in a coherent way. The only thing that's more exciting than Frontier and what can be done with Frontier is, of course, El Capitan, announced yesterday in an event with Lawrence Livermore National Labs and HPE. It will produce over 2 exaflops of performance, is expected to be more powerful than the top 200 supercomputers on today's list, and will be powered by AMD CPUs and GPUs shipping in 2022. Those two systems are to us a source of tremendous pride and a validation of our assertion that we're making tremendous progress on our goal to be the new data center leader.
And I hope you see, and I think the industry is seeing, that AMD has produced leadership products with leadership execution, and we have a leadership roadmap that's going to continue out into the future: delivering the best performance available across a wide range of workloads and leading the way to the future of accelerated computing by defining the architecture of tomorrow's accelerated system. And so with that, I think we are well on our way to demonstrating, to anyone who questions it, that we are the new data center leader. And with that, I'd like to turn the stage over to my good friend, Devinder.
Thank you, Forrest. It's been 3 years since we had our last Financial Analyst Day in 2017. And I can tell you, it's been quite an exciting journey. You've heard from Lisa and my colleagues about our plans to accelerate momentum, execute product and technology roadmaps and the long term growth that's ahead of us. I will share with you our financial progress since 2017, and then the next phase of our financial journey: our long term financial model and our capital allocation strategy.
So let's get started. Here are the priorities we laid out in 2017: grow revenue from the base that Lisa showed you earlier, just over $4,000,000,000; gross margin expansion; and operating expense discipline to return to profitability. And from a standpoint of what we have done, we have focused on the financials to grow our revenue, expand the gross margin steadily and exhibit the OpEx discipline to continue to increase our profitability over the last few years. Let's take a look at the details.
On revenue, we've made great progress. Revenue increased $2,500,000,000 from 2016 to 2019, 56% growth. It's been driven by the new leadership products that you heard my colleagues talk about: the AMD Ryzen, AMD Radeon and AMD EPYC products that have been introduced since that time. Let's turn to margins and OpEx. If you look at the left, gross margin coming off of 31% in 2016, we've shown steady, accelerating gross margin improvement, a 12 percentage point improvement in just 3 years.
And since the Financial Analyst Day in 2017 when we met, we've had year over year gross margin expansion every quarter. We made the OpEx investments needed for R and D and, in particular, what you heard a lot about today, the multi generation roadmap, whether you go from 1st generation to 2nd generation to 3rd and now on to the 4th generation of products. In 2017 and 2018, our focus was R and D. In 2019, in addition to the investments in R and D, we accelerated our investments in go to market, and we had a lot of go to market activities in 2019; that's why you see the $2,100,000,000 OpEx in 2019.
With that, if you take revenue, margin and OpEx, let's look at growth in profitability. On the left, actually, I love this chart. When you look at the operating margin coming off of 1% in 2016, we've had an 11 point increase in operating margin in 3 years, from 1% to 12%. And then on the right, EPS. We've made very good progress on the bottom line.
Earnings per share has gone up. We've had growth in EPS for the last 3 years. And with that, the P and L has gotten better. But let's turn our attention and look at the balance sheet, which I know a lot of you were asking about when we met in 2017 about AMD's balance sheet. If you look on the left of this chart, $1,800,000,000 of debt has gone down by $1,200,000,000 in the 3 year period.
It's less than $600,000,000 as we ended 2019. We ended 2019 with $1,500,000,000 of cash, which makes us net cash positive, for the first time in a long time, at about $900,000,000. The gross leverage target that we set in 2017 was to get under 2x. We've exceeded that target: gross leverage is 0.5x, in particular because the EBITDA of the company, as we ended 2019 on a trailing 12 month basis, was more than $1,000,000,000. That gives us a very strong balance sheet foundation as we make the investments needed for the next few years, in terms of everything you've heard from my colleagues this afternoon.
On the next slide, let me summarize what we said in 2017. The 2020 long term target model is exactly what we said in 2017: double digit growth in revenue. And as we end 2019, we've had about a 16% compound annual growth rate in revenue. Gross margin, we said we'd get from the low-30s to 40% to 44%. We ended 2019 at the upper end of the range that we laid out in 2017.
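As a quick back-of-the-envelope check of those figures (our arithmetic, using the rounded numbers quoted in the remarks), 56% total growth over 3 years does indeed come out to roughly a 16% compound annual growth rate:

```python
# Check that 56% total revenue growth over 3 years matches the stated
# ~16% compound annual growth rate. Inputs are the talk's round numbers.

total_growth = 0.56          # 2016 -> 2019 revenue growth, as stated
years = 3

cagr = (1 + total_growth) ** (1 / years) - 1
print(f"Implied revenue CAGR 2016-2019: ~{cagr:.0%}")
```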
EPS, strong growth every year. We've had solid financial momentum for the last 3 years, which brings us to today. And let me show you the priorities from a financial standpoint for our next 4 years. You've heard a lot today from my colleagues. Mark and David shared with you the multi generation CPU and GPU technology roadmap.
You heard about architecture. You heard about the product roadmap from Rick and Forrest and their plans to grow market share and grow revenue in many different areas. All of this sets us up for continued success. And for the next 4 years, it's all about growth.
You saw Lisa put out a chart earlier about a 20% compound annual growth rate in revenue for the next 4 years. Now with growth, we also want to focus on continued margin expansion, to go further from where we are today, and that will continue to be a focus. We want to further increase operating margin and increase profitability.
And finally, we want to generate a significant amount of cash. It really sets up for a very exciting 4 years given where we have come from the last 3 years in terms of everything we have done. But it starts, as you've heard from all of us, with our market opportunities. We are playing in markets today where the TAM is $79,000,000,000. You've heard about the data center from all of us, a $35,000,000,000 TAM in the 2023 time frame; PCs, a large market, a $32,000,000,000 TAM; and then gaming, which is a combination of consumer graphics and our game console business, a $12,000,000,000 TAM. The opportunities ahead of us are pretty large.
If we execute and grow the revenue, we can improve our financials in a significant manner. Let me show you the long term target model. Revenue growth, 20% compound annual growth rate, and a lot of it from increasing market share in the markets we already play in with the products you heard about today. Gross margin greater than 50%: higher ASPs, higher margin products, premium products that we are introducing in premium markets driving the gross margin higher.
We invest in OpEx at 26% to 27% of revenue, prioritizing R and D and also go to market activities. And we expect to double the operating margin. You just saw the 2019 operating margin at 12%, and with the mid-20s percent in this time frame, we double the operating margin from where we were in 2019. And we expect to generate free cash flow margins greater than 15% and generate significant cash over the long term. This long term model projects a very exciting period for us from a financial standpoint for the next 4 years.
Let me show you a little bit on the revenue mix over the next 4 years. In 2019, we ended with $6,700,000,000 of revenue, and you've heard us say the data center revenue, CPU and GPU combined, is about 15% of that. With the compound annual growth rate that we have for the company of 20%, the overall revenue of the company gets bigger. The PCs and gaming part of it grows mid teens percentage, but we expect that in the long term model, data center becomes greater than 30% of revenue in the 2023 time frame. The data center, everything you heard Forrest talk about, drives that revenue to be greater than 30% of our revenue, which is overall higher from where we were in 2019.
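Putting the stated inputs together, the model's implied end point can be sketched as follows (our arithmetic on the figures quoted in the remarks: $6.7B of 2019 revenue, a 20% revenue CAGR over 4 years, and data center growing from about 15% to more than 30% of the mix):

```python
# Project the long-term model's implied 2023 revenue and data center mix.
# Inputs are the stated figures; the projection itself is our arithmetic.

rev_2019_bn = 6.7          # $B, 2019 revenue as stated
cagr = 0.20                # long-term model revenue CAGR
years = 4

rev_2023_bn = rev_2019_bn * (1 + cagr) ** years    # roughly $13.9B

dc_2019_bn = rev_2019_bn * 0.15                    # ~15% of 2019 revenue, ~$1.0B
dc_2023_bn = rev_2023_bn * 0.30                    # the ">30%" floor, ~$4.2B

print(f"Implied 2023 revenue: ~${rev_2023_bn:.1f}B")
print(f"Implied 2023 data center revenue: more than ~${dc_2023_bn:.1f}B")
```

The ~$4.2B data center figure at the 30% floor is consistent with the $4,000,000,000 to $4,500,000,000 range an analyst cites later in the Q and A.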
In addition to revenue, we are also continuing to focus on gross margin expansion. So let's talk about that. Where does the margin expansion come from? We ended 2019 with a 43% gross margin. The high end gaming is slightly accretive, and the margins will improve as we build out our gaming portfolio.
PC products are above corporate average and a significant contributor to margin growth in the long term model. And then data center margins, which are well above corporate average, are the largest contributor, with data center revenue growing to 30% or more of the overall revenue. In combination, this drives our gross margin higher, and we would like to do for the next 4 years exactly what we have done for the last 3 years. The tax rate, let me just cover that.
We expect that sometime during the long term target model, given our consistent profitability, the tax rate on a non GAAP basis will move up to approximately 15%. The long term cash tax rate stays at about 3%, similar to what we had in 2019 and what we are guiding to in 2020, fundamentally because we have about $6,700,000,000 of net operating loss carryforwards that allow us to pay at a lower cash tax rate even though the non GAAP tax rate is 15%. And those NOLs protect the approximately 3% cash tax rate through the period of the long term model. Let me move on to the capital allocation strategy. OpEx investment comes first.
As you heard me say earlier, we will continue to invest in R and D and go to market acceleration. Beyond that, we think about many things: we have to fund the revenue growth, including limiting share dilution, and fund strategic initiatives. And lastly, building on the credit rating progress that we've made over the last few years, our goal is to achieve an investment grade rating. In a nutshell, our priorities are to invest in the business, drive growth and deliver shareholder returns.
So from an overall standpoint, in addition to the business momentum, the product momentum, the revenue momentum, here is a summary of the financial momentum. We have delivered great products and established great financial momentum. We have been laser focused on execution and on all the roadmaps you heard about today. We have significant opportunities ahead of us with the almost $80,000,000,000 TAM. We are still in the early innings of market share growth across PCs, gaming and data center, and that's where you see the ongoing market share gains as one of the drivers of financial momentum.
We want to accelerate the financial momentum, continuing to drive gross margin expansion, continuing to increase profitability and generate significant cash in the time frame of the long term target model. And finally, we want to deliver strong returns to our shareholders. Thank you very much. So it's a pleasure to invite Lisa back on stage for closing remarks before the Q and A.
Thank you, Devinder.
Thank you.
All right. How's everyone doing? So look, I hope you've enjoyed the last couple of hours and got a feel for the excitement that we have in the products and the technology and the business. And I'm just going to spend just a couple of minutes and just summarize in a few key takeaways. Hopefully, it's very, very clear.
We are committed to leadership in high performance computing. And that's across data center, that's across PCs and that's across gaming. And I hope it's also really clear that we are assuming the competition is going to be very, very strong. We have big competitors and we respect them a lot. But at the end of the day, we're playing our game.
And we know that if we execute our road maps, that we will see the growth that is exciting us. And that growth is in a set of great markets that we are underrepresented in today. And so we come back to our ambitious view of delivering the best. And that's the best in technology and also best in class in terms of overall growth. So those are really the key takeaways.
Now I think we're going to turn it over to Q and A. I'm sure there's some questions. So let me have the team come back up and I think we'll reset the stage for Q and A. Just give us a couple of minutes here.
Are we good?
Mark, you come on this side? All right. No problem. No problem.
Yes. Perfect. Thank you.
Yes. Okay. So look, before we begin the Q and A, we spent the entire afternoon really focused on the long term because that's what this is about. It's about our strategy and our roadmaps. But I do want to address probably a topic that's on the minds of many of you, which is a little bit about what's happening in the short term.
Obviously, there's a lot of volatility in the markets with the coronavirus and we want to make some comments about that as well. Our first priority, of course, like all of our colleagues is to ensure the health and safety of our employees and our partners and our customers. And so that is our focus. And we have taken steps to minimize potential exposure at our global sites and with travel like most of our peer companies have. From a business standpoint, it is a very dynamic situation.
So let me give you some color on what's going on. From an overall supply chain standpoint, our supply chain is primarily located in China, Malaysia and Taiwan. And I would say it's a very robust supply chain. We have taken a number of actions to ensure that we have continuity in that supply chain. And based on what we see today, we're actually back to near normal capacity in our supply chain.
So that is something that we continue to be very focused on. We're also monitoring our customers, since a lot of our customers have supply chains that are very dependent on China and some of those operations. And we did see some disruptions, certainly through Chinese New Year and into February. There's a lot of progress being made. I would say all of us in the ecosystem are trying to return those operations to as normal as possible.
And we expect that to continue over the coming weeks. Now let me turn to the demand standpoint. I think from the demand standpoint, again, this is a very fluid situation. So there are lots of puts and takes. What we have seen is outside of China, the overall demand has actually been about what we expected for the first quarter.
In China, we have seen some reduction in consumer demand, particularly in the offline channel networks, and those I think will continue for some time. We have also seen some other puts and takes where the demand for infrastructure has increased beyond what we had expected originally. And so with all of that, we had guided the first quarter at our first quarter earnings call at $1,800,000,000 plus or minus $50,000,000. We are not updating that as of now. Our best visibility is that the impact in the first quarter will be modest, but we'll keep watching that. Perhaps we'll be in the lower half of the range, but still within the range of our original guidance.
You also saw from Devinder that for the rest of 2020, our guidance remains unchanged. And we see a very exciting growth path over the 2020 year. So with that, let me turn it over to questions from the audience.
Great. Thank you, Lisa. So we have microphones in the room. We have Laura here in the middle and Saskia and Jason. So if you wouldn't mind putting your hand up if you have a question, and we'll take questions in the room.
Aaron?
Yes. Thank you. Aaron Rakers with Wells Fargo. I guess I want to unpack the model a little bit more. One of the things that I'm a bit surprised by is that you've got this big push in the GPU side, particularly the data center side.
How do we as kind of analysts start to think about modeling that out? Have you thought about separating that out from a segmentation perspective? And what's embedded in your expectations as well with regard to the margin profile of the gaming SoCs as Microsoft and Sony come on late this year into 2021?
Yes, I think on the segment piece of it, we report the segments as we do with CG and EESC. We have provided additional color where we feel it's helpful, especially on the data center piece of it, which is a combination of CPUs and GPUs. If you look at what I presented, we talked about PCs and gaming in totality, mid teens growth over the time frame of the long term model. And then on the data center side of it, it's higher growth, and that's why it's becoming 30% of revenue on an overall standpoint. And if you look at the numbers and do the math, I think you can get to the numbers in terms of how much it is in data center, CPUs and GPUs.
CPUs generally, if you look at it from a viewpoint of the market, is essentially flat. GPUs is where the growth is on the data center side of it. And that's where I would leave it.
Ross?
Thanks. Ross Seymore from Deutsche Bank. I want to stick on that data center side of things and maybe, Devinder, what you just said a little clarification. Going from the 15% of sales to 30% of sales, a little color on how you think that growth will be driven between the GPU and the CPU side, the server CPU? And then maybe a related follow on is on the market share within servers as a whole.
I think, Forrest, you talked about getting to double digits in the second quarter, hitting your target there. What sort of target should we think of as being next? And is the market size itself the full server market? Are you still kind of judging the market as two-thirds of what people describe it as in its entirety? Thank you.
Yes. So lots of different questions there, Ross. Let me try to take a couple of them and then maybe Forrest will respond as well. Look, when we think about the model, let me just take a step back and say that we are designing a model that a number of different outcomes can get to. And so that is obviously a lot of growth in data center, but also a lot of growth on the PC and gaming side.
Within data center, clearly, from a dollar value growth rate, sort of growth number, the data center CPUs will be the larger number. From an overall growth rate, given we're starting from a low base on the GPU side, the growth rate will be higher and as well as the market growth rate is higher. Relative to market share goals, I think our view is that our product portfolio, whether you're talking about the data center or PCs or gaming, really supports very strong market share over the next number of years. We're not putting out a new market share target. But what I would say is that we believe that our product portfolio is strong and can certainly meet our previous market share capability as well as beyond that across a number of years.
Forrest, did I...
No, I think you hit it well, Lisa. I mean, I think on the last point, we're certainly not done when we hit double digit market share. Our imperative is to continue to grow. And as Lisa mentioned, our ambition is not to stop anytime soon; keep it rolling.
Laura, Tony here in the front row.
My question is on the supercomputer wins, specifically El Capitan and Oak Ridge. What I thought was interesting is you won both the CPU and GPU at El Capitan. And this is 3 years out, so they're making decisions 3 years out, obviously on a forward roadmap, and I'm presuming CDNA. So what was the reason why they chose you? I can understand the CPU side. And the GPU was a little surprising.
Is it because of this GPU-CPU interconnect, that that was the trigger to win both sockets combined? And is this a precursor to hyperscale design wins? And will that translate to hyperscale CPU-GPU combined wins? So that's my question.
Maybe I'll take a first whack at that and maybe ask David and Mark as well to weigh in. Look, I think it was immensely gratifying to be chosen as the CPU and GPU provider for both of those. I mean, one of the things that makes it most gratifying is these are extremely rigorous evaluations that are done. And so they look at the technical characteristics of the proposed solution. They also, quite frankly, look at your execution capability and track record quite closely.
And that's because they are procuring things that have not yet finished design. And so I think David and the team did a marvelous job on the GPU side, developing an architecture that we think can scale up quite a bit, and maybe he'll talk about that in just a second. But then the ability to put them together, the ability that we talked about throughout Mark's, David's and my presentations, to have a unified accelerated computing architecture with the CPU and the GPUs working seamlessly together and dramatically simplifying the programming model, was something that I think was tremendously attractive to DOE and to the national labs.
Yes. I think we didn't show this perf per watt progression on the CDNA side. But you can imagine whatever we have done and what we're planning to do on RDNA 1, 2, 3, that trajectory will be happening on the CDNA side as well. So we are aggressively enhancing the performance per watt on the type of operations that the data center and HPC customers care about. And we'll be scaling our technology very aggressively through the CDNA architecture migration.
So I would say, besides obviously the Infinity Architecture that provides all the benefits of cache coherency, programmability and unified memory, I think the GPU performance on its own is also a key factor in winning the deal.
And then lastly, I would add, we really listened carefully to the requirements from the Department of Energy and their goal to have this system be optimized for both HPC and MI. Listening to them, and presenting to them how our software stack, with an open source approach, could optimize across the CPU and GPU, be optimized with the libraries we would provide, and then enable them to enhance it even further, was also a key element.
The one thing I would add, to answer the second part of your question: we do think that that architecture is the right architecture for machine intelligence as well. Necessarily, when you look at those demands, we're going to have to move to clusters of accelerated systems, and this is the right architecture for those large scale machine intelligence applications as well.
Great. Saskia at the very back of the room.
Thank you. Tripp Chaudhry with Global Equities Research. A phenomenal presentation, lot of learning. I was wondering if you look at 2 industries, the semiconductor industry, the chip industry and the software industry. For the semiconductor industry, the catalyst in data centers is machine learning models are getting bigger.
But when you look into the software industry, they are going with distributed training and transfer learning, and their motto is extract as much power from your existing CPUs and GPUs, delaying the purchase. How do you see this evolving? Will the growth be linear, or will it be a step function? Thank you.
You want to add something more?
Well, I'll just start from a workload standpoint. I think the algorithms are still changing. And so this is dynamic. It's one of the reasons that you saw across our presentations that our whole approach is scalability. Our CPU roadmap runs unabated.
Our GPU roadmap has relentless, unabated growth as well, and then it's how you put it together. So there's no question that certain workload applications will remain GPU only or will remain CPU only. But what you're seeing in these supercomputer applications is the leading indicator of where the industry is going. Go back through history: the problems that first needed supercomputer class machines, the old supercomputers, you're actually performing those operations on your phone today. So it is indeed, from my standpoint and what I'm hearing from CTO peers in the industry, the leading indicator of the approaches being used on many of these analytic and machine learning workloads.
Thank you, Mark. Jason, Tim here. Thank you.
Tim Arcuri, UBS. Hi, thanks. Devinder, I actually had 2. So first of all, I guess I'm a little surprised that OpEx is coming down so much as a percent of revenue from last year to 2023, given that you have all these new architectures and initiatives that you have to support, A. And B, if you compare, say, to NVIDIA, they spend 28% of revenue on OpEx and they're supporting a single architecture.
So can you sort of talk to how you're going to keep OpEx so low and yet grow revenue so much? Thanks.
I think it starts with really some of the things that we talked about. Revenue is growing on a compound basis at 20%. And some of the things that Mark talked about in terms of the engineering efficiencies help from that standpoint. And we look overall at the investments needed, and our view is that out in that timeframe, not right away, I mean, we ended at 31% in 2019, but out in 2023 where the revenue has grown, we can manage it within the 26% to 27% OpEx rate.
Matt?
Thank you very much, everybody. I just wanted to say the balance sheet stuff, Devinder, congrats. I had one question and then I guess a clarification. The clarification bit was Mark on your slides, I think in the past you guys had talked about Zen 3 being on 7 plus
and
you talked today about it being on 7. Maybe you could just clarify if that's a nomenclature change or if that's an actual change in architecture. And then backing up to the question, I think if you look through the long term model there, 30% of revenue would be, I don't know, $4,000,000,000 to $4,500,000,000 in data center. Maybe you could talk about how much you think of that as cloud versus enterprise versus HPC, which seems to have a lot of momentum. And then you talked about wireless infrastructure today for the first time.
Maybe you could just break that, say, $4,000,000,000 to $4,500,000,000 of revenue down and just how you're thinking about growth in those segments? Thank you.
Matt, the clarification is in fact nomenclature, as you said. So we work very closely with TSMC. And as you saw their public roadmap on 7 nanometer evolve, there was at one time a 7 nanometer plus. And what often happens with these full node changes like 7 nanometer is that some of the enhancements actually get folded into the base roadmap. So 7 nanometer actually encompasses in that nomenclature several different grades, shall we say, of its development.
And so we matched up with TSMC nomenclature.
Then with regard to the second question, I think, look, we think that split in the market is stabilizing. We do think that cloud currently constitutes about 50% of the overall market, enterprise about 35%, HPC about 15%. When we look out in that timeframe, we don't see it dramatically changing. There will probably be a few percentage points shifting. And certainly our ambition is to participate in all of those segments quite strongly. So ideally we would be looking at a relatively balanced spread across all three of those businesses.
Jason, over here. Thank you.
Hey, thanks. Mitch Sze, RBC. So really 2 questions. The first one is just regarding your competitor. I mean, every day there's a new leak in terms of what the specs are from both you guys and your competitors.
So maybe you could provide some sort of high-level view on what you expect to happen over the next couple of years, because long-term roadmaps obviously have a lot of competition baked into them? And then secondly, and Trig is getting a lot of questions on this already, but you have a 20% target over the next several years. I just want to confirm that's more of a long-term CAGR and not a 2021 number. To be specific about that: 2021, based on my math, has got to be higher than that, and then it would decelerate. Just want to make sure that's correct.
Do you want to do the first one and I'll do the second one?
Well, just on the roadmap, of course, there's always speculation and leaks, and we don't comment on speculation. What we really do, as I alluded to earlier and as Lisa said explicitly, is recognize that we of course have formidable competitors, and we're fighters here at AMD. So we focus on getting the most competitive roadmap that we possibly can and really listening to our customers to make sure that what we're developing will address the workloads they have coming at them.
Yes. And Mitch, to your comment about the overall compound annual growth rate: I mean, look, it's fair to say that 20% is a very strong number when you look across 4 years. If you just do the math, it's more than double the size of what the company was in 2019. So I would say it will depend on how exactly things ebb and flow, but we would be very, very pleased, and it would take a lot of market share gains, to achieve a CAGR of 20%.
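The "more than double" arithmetic can be sanity-checked directly; this small sketch compounds 20% over the four years from 2019 to 2023 and also inverts the calculation:

```python
# Forward: what multiple of 2019 revenue does a 20% CAGR over 4 years imply?
multiple = 1.20 ** 4
print(round(multiple, 4))  # 2.0736 -> a bit more than double

# Inverse: what CAGR would exactly double revenue in 4 years?
implied_cagr = 2.0 ** (1 / 4) - 1
print(round(implied_cagr, 4))  # 0.1892 -> ~18.9%, so 20% clears "double"
```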
Fazke at the back here.
Yes, hi. The question is about some of the developments in the startup space, and there's a lot of money going into new developments for semiconductors. And also some of your customers are talking about their own silicon. Maybe can you talk a bit about how you see that? And is this a new fight you have to take on?
Or this something that we should be thinking about more deeply or how do you think about that dynamic?
Mark? Yes.
I'll start by just taking you back to the historic view of the industry. There's always been a need for specialized devices. There have been specialized ASICs out there, often starting in an FPGA implementation. This is how new approaches and new workloads are typically implemented. But when you look at them, they're typically targeted at a more narrow set of workloads.
And this doesn't diminish at all the need for the very high-performance, easy-to-program GPUs and CPUs we have out there. The code that's out there already leveraging these approaches is massive. And to your comment on some of the larger companies' demand, there's room for these tailored solutions. And so it's an ecosystem. We're very, very confident in our growth, as we shared with you today, and there's plenty of room for tailored solutions in the industry.
Some of them will have staying power, others won't. But there will always be a need for these easy-to-program, general-purpose solutions that stay on the competitive path that we set out and are implementing here at AMD.
Jason Gross here.
Your R and D efficiency is really impressive, and you highlighted a couple of areas where you're getting there: emulation, simulation, concurrent software and hardware design, and modularity. Everybody else is doing sort of the same thing. And I'm just wondering if you could give a little more color as to how you're getting there and how you're getting these products out so fast relative to your competitors with fewer dollars?
Well, I'll start and
You're trying to get marker
The short answer is that necessity is the mother of invention. Speaking candidly, when you look at the turnaround we were facing, we really couldn't have the kind of investment that we would have liked to have had. The things I talked about aren't something that we're just trying to do now. What I shared in my comments was a look back at what we implemented that allowed us to deliver these high-performance products to market. So it's really a credit to the team and how they responded to the challenges we faced in bringing AMD back to high performance. And the good news is they love it, so we don't see any lack of that kind of innovation in how to improve going forward.
Great.
Brian? Thank you. Nathan Brookwood, Insight64. You were very explicit when you talked about Genoa being based on 5 nanometer technology, but you were less specific when you talked about the advanced versions of RDNA and CDNA being on an advanced node. Can you give us some color on that advanced node, and why you are characterizing the one very explicitly and the other very generically?
So maybe let me take that. I think when you look at our processor roadmap, again, we have the win from El Capitan. It is a 5 nanometer roadmap, to get some of the performance efficiencies that we need. As it relates to RDNA and CDNA, David has a lot of things in the hopper. And we will talk more about what node and what architecture and all that as we get closer to product.
Great. Jason?
Thanks. Blayne Curtis, Barclays. So I just want to ask on the data center GPU market. Maybe you could just talk about that split to CDNA. What exactly is different versus graphics?
Is it just that you're adding acceleration, or is there something more fundamentally different? And then just from a competitive landscape standpoint, unlike in CPU, your competitor should have 7 nanometer this year. Can you describe where you see the differentiation in the data center market for your product?
Sure. I think, as I mentioned in my presentation, a big part of the CDNA optimization is indeed adding more higher-density math, right, for HPC and for some of what we call the tensor ops, the matrix acceleration ops that are needed for HPC and machine learning acceleration. So I would say that's a very big part of the innovation in CDNA. And of course, we are optimizing the architecture for compute. That does mean we are putting less focus on other operations that are less important for high performance computing.
So those are the possibilities and opportunities for us to reclaim silicon area to enhance compute capability. I think the other big one, though, is what Mark talked about: the Infinity architecture. As someone mentioned, with the explosion of AI and machine learning, the size of the training is very, very important. The size of the training models really grows. We need a lot more GPUs.
They all need to be interconnected with high bandwidth efficiency. So the 2nd generation of the Infinity architecture that Mark talked about really allows us to interconnect multiple CDNA-based GPUs at much, much higher bandwidth and in flexible topologies, to allow these large training models to run in a much more energy-efficient way. So I would say those are the key elements.
Orest?
You mentioned go-to-market investments, and it strikes me that that's a good idea, because you wouldn't want something as mundane as salespeople to get in the way of everything you're trying to achieve. So maybe you could flesh that out a little bit. What kind of hiring expectations do you have, in what areas, and maybe a sense of how flexible you can be? Are you trying to build towards something, or are you trying to do it as you go?
Yes, maybe let me take that. So we are at a place where go-to-market investments are very, very important for us. Darren Grasby runs our overall worldwide sales organization, and the focus is on customer-facing commercial, both for client as well as data center and enterprise, as well as field application engineering at the top hyperscalers to help them optimize around our capabilities. So there's lots of focus in this area. You will continue to see that be an area where we invest.
And I think it's one of those places, again, where as the company scales, we scale our capabilities there as well.
Jason?
Shane Rau with IDC. Forrest, my question is regarding data center GPU. Regarding the 3 application segments you highlighted, virtualization, MI and HPC: could you map your existing product lines to those 3 applications? And if you would, then allow a quick follow-up.
Yes, sure. So right now, if you look at the virtualization or cloud gaming space, we've got some of our current GCN-based MI parts in the virtualization space and in the cloud gaming space. Looking forward, we certainly see the CDNA parts, particularly starting with the ones later this year, building on the wins that we've already had with our existing architecture, and we'll focus those in the MI and HPC segments. On the virtualization side, it will be a mixture of parts going forward.
If you'll allow me to map into more product names: you've got the V340 series, you've got business in cloud with Stadia, and you have Radeon Instinct. I'm trying to map those into the applications you highlighted earlier and get a sense of where you stand now, given where the strategy you just highlighted is going to take you in server GPU.
I don't think we get into the details of the product mapping, particularly if they're embedded in our customers' end products. So I'll stand by what I said a moment ago, which is that we've got some of the existing MI products in the virtualization space based on GCN architecture, and we certainly see CDNA driving heavily into MI and HPC going forward.
And then, if you'll allow, under MI, would you segment MI into training versus inferencing? And do you consider both of them under the TAM of MI?
I think we definitely consider both of them under the TAM of accelerated MI.
Saskia, over here.
Hi, Harsh Kumar, Piper Sandler. So you talked about CDNA and unified data. Is that in Milan or the generation after that? And then, my understanding from this is that the data will appear in the same way to a CPU and a GPU. Is that roughly accurate?
And how big a deal is that to your end customers? And when your competitor comes out with their GPUs, how easy or hard is it for them to emulate something like this?
A number of questions there; maybe I'll take the first one and leave the others to David. All we've said so far on the supercomputers is that Frontier is based on a future custom EPYC processor and El Capitan is based on Genoa. I think that's all we've said thus far on the processor side.
I think, given our CDNA 2, that is hardware coherency, right? That allows us to keep the data in its preferred location instead of needing it to be replicated on the GPU and CPU or copied between them. A very simple example is the CPU doing preprocessing on a set of data that then needs to be processed by the GPU and passed back to the CPU for post-processing. Imagine now that we have unified memory that's cache coherent. You can just keep that data in the HBM memory for the GPU and do the whole processing without moving data back and forth between the CPU and the GPU.
And that's tremendously helpful, both from the programming point of view and from the performance point of view. As far as our competitors, it depends on how they work out that coherency scheme with whoever their partner is.
Great. I think we have time for one last question.
Hi. We actually had 2, but I don't know how that works. Just one quick question: why are the free cash flow margin and operating margin as much as 10 points apart? Because I would think for you they'd be pretty comparable. And then you also mentioned M and A as a potential, something to think about.
Just any context on what type of M and A you might think about? Is that tuck-in type of stuff or something bigger? It seems like you have a pretty full portfolio already.
Let me take the first one. So if you look at the operating margin, approximately mid-20s as we call it, you do have the approximately 3% cash tax rate. You have CapEx investments. You have interest expense. And then obviously, as you're growing the business, there's some amount of money needed to fund the growth, with the significant CAGR that we showed from a revenue standpoint.
And these are approximate numbers. So yes, you're right, there's a difference, but those are the 3 or 4 factors that drive the difference between operating margin and free cash flow margin.
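A minimal sketch of that walk from operating margin to free-cash-flow margin, assuming illustrative percent-of-revenue figures for CapEx, interest, and growth funding (only the mid-20s operating margin and the roughly 3% cash tax rate come from the discussion; the rest are hypothetical):

```python
# Hypothetical margin bridge using the factors listed above.
# Percent-of-revenue figures below are illustrative assumptions,
# not AMD guidance.
op_margin = 0.25          # "mid-20s" operating margin (from the discussion)
cash_tax = 0.03           # ~3% cash tax rate (from the discussion)
capex = 0.03              # assumed CapEx as % of revenue
interest = 0.01           # assumed interest expense as % of revenue
growth_funding = 0.03     # assumed working capital to fund growth

fcf_margin = op_margin - cash_tax - capex - interest - growth_funding
print(round(fcf_margin, 2))  # 0.15 -> the listed deductions sum to 10 points here
```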
Yes. I wouldn't expect, though, Joe, that there should be a 10-point difference. So I think that's just the way we've built the model at this point. And then, to your second question, hopefully what you've seen this afternoon is that we're really excited about our organic growth opportunities. There's a lot of growth.
There's a lot of technology. There's a lot of market opportunity. And we see that as really a great business model. Now the fact is the company is a lot stronger today than it was a few years ago. And so that balance sheet is also something that we're proud of.
And we're always going to look at what are good priorities for us to continue that growth. And that's the way we think about strategic M and A.
Great. Well, thank you everybody for joining us today and to everybody for tuning in on the webcast. And for those of you here in person with us today, we're going to retire to a demo area out in the front lobby and you can spend some time also with the presenters out there. Thank you. Fantastic.
Thank you.