Advanced Micro Devices, Inc. (AMD)
Financial Analyst Day 2017
May 16, 2017
Please welcome CHRO and Senior Vice President of Corporate Communications and Investor Relations, Ruth Cotter.
Welcome, everybody. We're very happy that you could join us here in California this afternoon, and to everybody on the webcast who's also joining us. We have a jam-packed program for you this afternoon, with some great news about AMD, our long-term strategy and the incredible product portfolio that we have spent many years putting together to delight our customers. Here's a brief overview of the agenda. I think many of you have that to hand already, and it's also posted on the webcast.
Just a few housekeeping items. I wanted to remind folks that today we will be making forward-looking statements. Those forward-looking statements are subject to change, and if you'd like to review our risk factors, they're available in our most recent SEC filings. We'll also be making some non-GAAP references throughout today's presentations, so we'd like you to make note of that.
So we'd love to get the event up and running here. We're going to transition to a video to kick off the event, and then we'll commence with our first speaker. We look forward to you following us throughout the day, and to the question-and-answer session at the end of the day. So with that, we'll start the video. Thank you.
Please welcome President and Chief Executive Officer, Dr. Lisa Su.
Good afternoon, everyone, here in California and for those of you who are joining us on the live webcast. And thank you for joining our 2017 Financial Analyst Day. We hold this event every couple of years and the idea is to really take a step back and look at our strategy, look at our product road map, talk about our progress with customers and of course, update you on our financial model. And this year, it's particularly exciting because we've just had so much to talk about in terms of what's going on with our products and getting ready to launch. So you will hear a lot of product information as we go through the next few hours.
So let me first start off with an overview. Now many of you know AMD very well. And we have always been about building great products and trying to push the next level of innovation. If you think over the last 48 years, we've had a chance to do some of the real firsts in the industry: things like being the first to break the gigahertz barrier in CPUs, the first to break the teraflop barrier in graphics, the first to introduce 64-bit x86 and the first to integrate CPUs and GPUs on a single die in what we call the APU. Now all of these things are really cool.
But let me talk about what we're trying to do going forward. I became CEO of AMD 2.5 years ago, and we really tried to chart our path for the next 5 years. And the idea was to define a path focused on high performance technologies and building great products, because these are the areas where we can uniquely differentiate in the industry. And I hope what you see over the next couple of hours is that we've set really ambitious goals, but also put together focused execution, because we believe that you need all of these things together to form a great company. Now in terms of our strategy and focus, this is not new.
Our focus is really what we are best at. We are the only company in the world that has both high performance graphics and high performance compute. And if you look at how hard it is to be good at these technologies and how important these technologies are for next generation workloads, whether you're talking about gaming, machine intelligence, VR or AR or the new cloud workloads, all of these require high performance. And our unique capability is to put these together into solutions that stand apart in the industry. Now, a couple of years ago, some of you may have attended our 2015 Financial Analyst Day, where we made a set of commitments.
We made a number of commitments to our stakeholders, including many of you. And those commitments included stabilizing our business. We were very, very much focused on PCs at the time and our business was declining. We said we would prioritize our investments, focus on where we could be best in high performance, and we actually stopped doing some things. We also said that it would be very, very important for us to strengthen our balance sheet because it takes investment to compete in the high performance world.
And we had a road map of products that we were committed to deliver to our customers and to the ecosystem, and those would start launching in 2016. But really, 2017 is a huge product year for us. And with this execution, I'm actually really proud today to say that we have met many, many of those commitments to our stakeholders. In 2016, we grew annual revenue 7%. And that was across both our Computing and Graphics business as well as our Enterprise Embedded and Semi Custom Business.
We gained market share. We gained market share in discrete graphics, over 5 points of discrete graphics market share. We actually gained market share in mobile APUs with our 7th generation APUs, over 2.8 points of market share. We strengthened our balance sheet. We ended 2016 with over $1.2 billion in cash.
And very strategically, we reduced and restructured our debt. And again, that's important as Devinder takes you through our road map over the next few years of strengthening our balance sheet. And perhaps most importantly, what I'm really proud of is our product execution. You saw us launch products, including the Polaris architecture, which drove much of the graphics market share that we gained in 2016. You saw many of the reveals on the next-generation Zen core that we just started shipping in Q1 2017, and you'll see much more of it over the next several quarters. But it was really all about execution and meeting our commitments to our stakeholders.
In addition to the products and the business, I also want to spend just a couple of minutes on customers. It is extremely important in our industry to build a track record of execution. And you can see the significant progress we've made with customers, some of the marquee names in the industry, including Sony and Microsoft on game consoles, HP, Dell and Lenovo in the PC market, Apple with their Mac product line, and a number of new customers that we're adding to the AMD family. And the key for us is that we work hand in hand with these customers to define our leadership road maps. And so we're having conversations today with our customers about what they need 3 to 5 years from now.
And that is key for our success as well as their success. So, moving a little bit to the future. You'll get a chance to meet many of my management team today. But last summer, we sat down and we said, what do we really believe are the most important things that will drive the semiconductor industry over the next 10 years? And in our view, it comes down to a very simple and important statement: immersive and instinctive computing will transform all of our daily lives.
And when you think about that, I've been in the semiconductor industry a long time. And sometimes it's fun and sometimes it's a little bit less fun. But I would say the beautiful thing that is always true and is certainly true today is we get a chance to create things that really change people's lives. And that continues to be true over the next 5 to 10 years. When you think about immersive computing, actually, we're well on our way to immersive computing.
If you think about all of the devices that are integrated into our daily lives, whether personally or professionally or in our home or in our office, we are actually just at the beginning of what we're going to call instinctive computing, which is really the next wave of computing that connects all of the data and all of the intuition required to use all of these immersive devices. And so as we go through the day, I think you'll see how this vision has actually impacted where we focus in terms of markets as well as where we focus in terms of technology and products. So what's at the core of AMD? The core of AMD is delivering high performance computing. High performance computing is critical for a number of applications, whether you're talking about the cloud, you're talking about gaming, you're talking about medicine or education or business.
All of these applications require high performance computing for us to take things to the next level, to make computing smarter, more instinctive and more intuitive. This is what's driving a huge, huge growth opportunity in our industry. Now let me talk a little bit about how we think about high performance computing as it relates to market segments. These market segments are very large and very important. Let's start first with PCs. These are 2020 TAMs.
So PCs: a $28 billion TAM. Immersive devices: a $15 billion TAM. Data center: over a $20 billion TAM. And you see that together with these three markets, high performance computing drives over $60 billion of market opportunity over the next 3 or 4 years. Now how do we think about it? Let me start first with PCs. I have to say, from an AMD standpoint, we really like PCs.
We think PCs are an important market for us. It's a high volume market. It's one of the few markets where you really ship over 250 million units a year. And although it has been volatile at times, it is actually the way many of us, in both consumer and commercial applications, connect to the cloud and to the rest of the infrastructure. And there are sub segments within the PC market that are actually very profitable and growing.
And so those are some of the areas that we're going to focus on within PCs. If you look at the area of immersive devices, this is a very exciting area. We see a constant demand for more pixels on every screen, whether you're talking about game consoles or embedded devices or high end graphics cards. The idea is you can always make it more beautiful, you can always make it more realistic and you can always have more resolution. We see this market growing at about a 7% compound annual growth rate through 2020.
But there are certain segments of this market that we think will grow well into the double digits. And those include high end gaming as well as VR and AR and some of those applications. Now when you look at AMD, we know these markets really, really well. If you take the PC and immersive TAM of $40 billion, that represents about 99% of our revenue in 2016. And we have proven we can be successful in this business.
We grew the business about 7% in 2016. We gained profitable share, and we believe we can continue strong growth in this area ahead of the market, given the right product choices and the right investment choices. So we're going to talk a lot about what we're doing to address these markets as we go through the next few presentations. But I think the single thing you should take away when we talk about our investment strategy is that we are investing to win in our core markets. So in the PC and immersive market, that means investing in a multi generational leadership x86 road map; in graphics, investing in leadership IP, both for discrete as well as integrated graphics; and then in software, which is becoming a more and more important part of the solution and cuts across a number of different industry segments.
You've heard a lot about Zen and Vega, some of the current products that we're going to talk about over the next couple of hours. But I would also like you to listen very closely to Mark Papermaster, our CTO, when he goes through how we built a sustainable model to build multigenerational products in this category. We believe that with the development model that we've created, we have a formula that will keep us very competitive for the next 5 years.
Now let me talk a little bit about how we grow in PCs. And what I'm doing now is giving you a little bit of a preview of what the management team is going to talk about in the subsequent presentations. So first, if you look at the PC market, it's about a $28 billion to $30 billion market. You can see that mainstream is a lot of units, but not a lot of revenue and not a lot of profit. The premium products actually correspond to the majority of the revenue and the profit pool in this business.
When you look at AMD in 2016, before we had the Ryzen processors, we were really playing in only about 30% of the TAM, in the mainstream portion of the business. And what Jim Anderson will go through in his presentation is how we are systematically introducing Ryzen, or Zen based, products across the desktop market, DIY, OEM and commercial, as well as into the mobile market across consumer and commercial over the next four quarters. And it's a very, very strong product road map that goes top to bottom in the stack. By 2018, we are going to address 80% of the TAM in the PC market. Now moving into the immersive space.
The way we define this space, again, it's a $12 billion market in 2016, growing to $15 billion plus in 2018. It includes game consoles, it includes embedded, it includes mainstream GPUs. And again, when you look at this business, although the units are concentrated in the lower bands, the profit is really concentrated in the premium and the professional bands. And so when you look at our product portfolio in 2016, we've had a very, very strong game console business that we really appreciate and that has been a foundation of the company. We added the Polaris line, where we gained share with mainstream GPUs.
But Raja is going to talk about how we really take our GPU IP into all segments of the market as we add Vega in 2017 and early 2018. And what you see is that, again, we're going to address a significantly larger portion of the market and a significantly larger portion of the profit pool in the immersive segment. So hopefully, with this, you get a view of our excitement around PCs and our excitement around immersive devices. Those are great markets. We believe we can grow above market there and significantly improve our product positioning with our new technology.
But what I'm extremely excited to talk about today is the single biggest growth opportunity, both for the industry as well as for AMD, which is really in the data center over the next 5 years. And this is where we really see the opportunity to bring instinctive computing to life, because it's about bringing more processing power to bear on today's workloads and the most challenging use cases. So why is the data center so interesting? I think we all understand this, but the numbers are truly staggering. I mean, if you think about it, we're generating 2.5 quintillion bytes of data a day, whether you're talking about emails, Facebook entries, YouTube videos, Instagram posts, Google searches: lots and lots and lots of data.
The question is, what do you do with all of that data? To really process, utilize, transport and mine all this data, you need incredible data center compute. And whether you're talking about business or social interaction or productivity or even entertainment, there's an extraordinary growth in the compute necessary. And for us, we see this as a tremendous inflection point in the business. And when you think about the inflection point, the inflection point that we see is that the data center is fundamentally changing, okay?
There are so many different workloads. There is no one size fits all in the data center. And whether you're in the enterprise or the public cloud or the private cloud, you want to be able to optimize your data center for the workloads that are important for you. And so what does that mean? That means really balanced performance that allows you to process different types of data and different types of workloads.
It means much, much more flexibility, so that you can configure your data centers as the workloads require. It means a tremendous focus on security, and especially hardware based security, as you think about all of the virtualized compute capabilities. And then it means optimized total cost of ownership. And so if you take a look at these requirements, what we can say for sure is that the data center of today is different from the data center of 5 years ago, and will be different from the data center 5 years from now. And today's data centers really require heterogeneous computing to be successful.
So, how do we see the market? Again, we see the data center as a $20-plus billion market, a combination of CPUs and GPUs on the processing side. We are the only provider that has high performance CPUs as well as high performance GPUs, and we think that's a distinct advantage for us going forward. Starting first with the CPU space. Again, a very large market.
If you really look at it, x86 is the dominant architecture in the server market today. There are lots of other challengers. But when you think about the rich x86 ecosystem and the fact that that ecosystem has tremendous scale, AMD is one of only 2 companies that has access to high performance x86 for the data center. Now we've talked a lot about Zen, and Zen being a clean sheet design for us. Really, when you look at it, we looked at a number of different applications for Zen.
But I can say, when Mark and I started this, we really created Zen with the new data center in mind. If you look at the optimizations in Zen, it was really about what we think is going to happen in the data center come 2017 and beyond. We had a vision for a CPU that could be balanced, efficient, but also very intelligent. And some of the machine intelligence capabilities that Mark will talk about are part of Zen. And as exciting as Zen has been in the PC market, I would say we are even more excited about what Zen can do in the data center market.
So let me give you a little bit of a preview of what the products look like. So Naples is the Zen based data center product. This is the first product that we will offer. And when we really look at how it was built, it was built for not just high performance, because obviously you need higher performance, but it was built to be very, very flexible for those new workloads. And the idea is we put more cores so that it would allow us to do more things in parallel.
We put more memory bandwidth because we really believed that big data was going to be important and you need more memory bandwidth. And we put much, much more IO because we believe that IO bandwidth would be very important because the data center is not just about the CPU, but it's about everything else you put around it, including GPUs, accelerators and other IO. And so when you put all this together, we really thought about this as not just another processor. We thought about this as a new approach to the data center in terms of system design. And we really thought we're looking for not just higher performance, but epic performance for the new data center.
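The memory-bandwidth point above comes down to simple arithmetic: peak DRAM bandwidth is channels times transfer rate times bus width. A minimal sketch, where the specific figures (8 DDR4 channels at 2666 MT/s for a Naples-class part) are assumptions taken from public EPYC 7001 spec sheets, not numbers stated in the talk:

```python
def peak_mem_bandwidth_gbs(channels, transfers_mt_s, bus_bytes=8):
    """Peak DRAM bandwidth in GB/s: channels x transfer rate x bus width.

    transfers_mt_s is in mega-transfers per second (MT/s);
    a 64-bit DDR channel moves 8 bytes per transfer.
    """
    return channels * transfers_mt_s * bus_bytes / 1000

# Assumed figures from public EPYC 7001 ("Naples") documentation:
naples_bw = peak_mem_bandwidth_gbs(8, 2666)        # ~170.6 GB/s per socket
desktop_bw = peak_mem_bandwidth_gbs(2, 2666)       # typical dual-channel desktop
```

Under those assumptions, one socket offers roughly four times the peak bandwidth of a dual-channel desktop platform, which is the "more memory bandwidth for big data" argument in concrete terms.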
So today, I'm very proud to introduce to you for the first time our brand that will be across our data center business with the following. Please play the video. So one of the fun parts of being CEO is I get to show off the latest and greatest products. So take a look at AMD EPYC. This is our processor that will go into the data center.
Okay. Now what you will see and hear today is a lot, lot more information about EPYC. Forrest Norrod, who runs our Enterprise Embedded and Semi Custom Business, will talk about EPYC in detail. We're extremely proud of what EPYC can do. We're going to show you some live demos with competitive benchmarks between EPYC and our competitor.
We're going to show you how we've optimized for today's workloads. We're going to show you how the system performance is not just better, but it's truly disruptive for the cloud environment. And we're going to show you how we bring together leadership, total cost of ownership in this system. So I hope I've set you up well, Forrest. But the data center is not just about CPUs.
The data center is also about GPUs. And if you look at GPUs, the growth in GPUs has been incredible over the last few years. We see the GPU compute market growing to about $5 billion in 2020. So think about it as over a 75% growth rate on an annual basis. And when we think about GPUs, again, it's not just about making them faster, it's really about making them smarter.
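As a back-of-the-envelope check on the growth figures above, compound growth lets you work backwards from the 2020 target and the growth rate to an implied starting market size. A small sketch; taking 2016 as the base year is my assumption, not something stated in the talk:

```python
def implied_base(target, cagr, years):
    """Back out the starting market size implied by a target size and a CAGR."""
    return target / (1 + cagr) ** years

# ~$5B GPU-compute TAM in 2020 at >75% annual growth implies a base
# of roughly half a billion dollars four years earlier:
base_2016 = implied_base(5.0, 0.75, 4)   # in $ billions, ~0.53
```

That is what a ">75% growth rate on an annual basis" means in practice: a roughly tenfold expansion of the market over four years.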
And Raja is going to talk about the Vega architecture. The Vega architecture, like Zen, was really designed for the most important workloads that will affect us over the next couple of years. It includes not just features for gaming and some of the professional content creation, which are areas that we know very well, but it also has some very unique capabilities for the data center. And Raja is going to talk about some of that in his presentation. But the idea is, again, looking at what's important in the cloud: how do we address very, very large data sets, how do we address the new machine learning applications?
And really design for the future with Vega. Vega is going to go into our Radeon Instinct product line. This product line has been getting incredible interest from cloud providers as well as other OEMs. And the idea is to bring a different class of performance but also a different type of integration. And for us, we're going to talk a lot about the importance of an open ecosystem, which is different from what our competitor is doing in this space.
Now although both CPUs and GPUs are independently important, heterogeneous computing really allows us to bring them together for the best of both worlds and really optimize the system for the workload. And if you really think about this, this is a problem that AMD has been thinking about for the last 10 years: the idea that you need CPUs, you need GPUs, but you need to put them together in a way that is very efficient and very optimized when you look at the total system capability. So in addition to each of our EPYC and Radeon Instinct product lines going into systems as independent products, today we're also going to talk about some of the ways that EPYC and Radeon Instinct have been optimized together.
And the idea is to start thinking about the system before you start the silicon. So when we do that and you look at some of these workloads, like machine intelligence, deep learning or high performance computing, we can actually deliver breakthrough performance. And some of the ways that we do this, again, are about thinking about the flexibility needed in the workloads upfront. And so again, Forrest and Roger will go through this in more detail. But one of the things that's very, very unique about EPYC and the IO that we have chosen for EPYC is it allows you to connect more GPUs directly to the CPU than any other solution in the industry.
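The "more GPUs directly attached" claim is ultimately a PCIe lane-budget calculation. A tiny sketch; the 128-lane figure is an assumption from public EPYC documentation, and the x16-per-GPU link width is a typical configuration, neither stated in the talk:

```python
def direct_gpus(cpu_lanes, lanes_per_gpu=16, lanes_reserved=0):
    """How many GPUs can attach directly to the CPU without PCIe switches,
    after reserving some lanes for NICs and NVMe storage."""
    return (cpu_lanes - lanes_reserved) // lanes_per_gpu

direct_gpus(128)                      # 8 GPUs at x16, no switch needed
direct_gpus(128, lanes_reserved=16)   # 7 GPUs, keeping x16 for NIC + NVMe
```

Removing the PCIe switch from the GPU path is the system-level point: fewer hops between CPU and accelerator, and lower component cost.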
It is simply the most optimized solution in the system. So hopefully, that gives you just a little bit of a preview of what we're going to talk about as we go forward for the next couple of hours. And as excited as I am about our strategy and our focused markets and our product road map, I want to make sure that it's very clear that we are focused on the financial outcomes. And so Devinder is going to go through our 2017 and our long term model in quite a bit of detail. But I wanted to give you just a few highlights so that you can think about these things as you go through the product presentations.
So first, starting with growth. I believe that, with our product portfolio and the market segments that we have chosen, AMD can be one of the premier growth companies in the semiconductor space. We ended 2016 with $4.2 billion of revenue, primarily concentrated in the PC and immersive space. And with our new product portfolio, we see the opportunity over the next 3 or 4 years to really deliver double digit annual revenue growth across these very profitable markets. And the important thing you'll see is that we're denoting what we consider the traditional products, let's call it our mainstream and our game console products, and our premium products, which are the ones that address the higher profit pool, the higher ends of the market.
And the premium products successively become a larger and larger piece of our revenue mix as we go forward. So this is an important concept that we're going to talk a lot about in terms of the improved product mix and improved portfolio. The second thing I want to talk about is margin expansion. There's a lot of discussion about margins and the importance of margins in returning shareholder value. Again, if you look at where we were in 2016, we were at about a 31% gross margin.
When we talked about our long term model a few years ago, we had stated a long term margin target of between 36% and 40%. And today, we wanted to give you an update on that long term model. So from what we see today and the strength of our products in 2017 and 2018, our expectation now is that in 2018, our gross margins will be at the low end of our previously guided long term range, so greater than 36% gross margin, really on the strength of those premium products. And then we're now extending our long term model to go through 2020. And again, with how the markets are developing and how our products are developing, we believe the extension would say that we can be over 40% gross margin as a company in 2020.
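The mix-shift argument above is a weighted-average calculation: blended gross margin rises as the richer bucket takes a bigger share of revenue. A minimal sketch; the per-bucket margins below are hypothetical values chosen only to illustrate the math, since the talk gives mix targets but not segment-level margins:

```python
def blended_margin(premium_mix, premium_margin, traditional_margin):
    """Company gross margin as a revenue-mix-weighted average of two buckets."""
    return premium_mix * premium_margin + (1 - premium_mix) * traditional_margin

# Hypothetical bucket margins: premium ~50%, traditional ~30%.
blended_margin(0.30, 0.50, 0.30)   # 0.36, a ~36% blend at 30% premium mix
blended_margin(0.55, 0.50, 0.30)   # 0.41, past the 40% mark at 55% premium mix
```

With those illustrative inputs, moving premium from roughly a third of revenue to over half is enough on its own to lift the blend from the mid-30s past 40%.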
And you can see that by that time frame, the mix of our business has significantly shifted, with over 50% of our business in what we're calling the premium products, or the margin accretive products, for the company. So with that, here are some of the key tenets of our long term model. Again, long term we're defining as 2020. We believe we can grow double digits annually for the next 3 or 4 years.
We believe that the margin range for our company will be somewhere between 40% and 44%. And we've also updated our earnings per share target, extending it forward. And we believe that the right target for us is over $0.75, given what we have in our product portfolio. So as I said, Devinder is going to go through this in a lot more detail at the end of the day, but I wanted you to have something in mind as we talk through how our products ramp over the next number of years. So let me just finish off and say, our main message today is to give you a view of how we think about the industry evolution, where AMD fits, what we're good at, what our strategy is and, most importantly, why we believe we have put together a formula for strong multi generation leadership as well as strong financial growth over the next several years.
And this all comes with meeting our commitments and focused execution. So with that, let me end here. And I want to introduce Mark Papermaster, our CTO, to talk about our technology road map. Mark?
Thanks, Lisa. And thank you all for joining us here today at our Financial Analyst Day. Well, I have to tell you, it's very exciting for me this year because we have been, of course, on a multiyear journey of technology development at AMD. The industry is changing rapidly around us, and we realized that we had to make some fundamental changes. And 2017 marks an inflection point.
It's a key milestone for us and really seeing the fruits of those efforts. And so I'd actually like to go back a little bit. I'd like to spend a moment on the prior 4 years. It was at this Financial Analyst Day, actually right in this auditorium in 2012, where I laid out the groundwork that we would be focused on our crown jewel IP, our CPU, our graphics and then the IPs that we put around it as well as the whole methodology of how we put that together. Then in 2015, we gave you an update and we said we're progressing on all of those targets.
And in fact, we committed that new CPU would have a 40% improvement in performance over the previous generation. And a lot of folks looked at that and said, that's an audacious goal. AMD, are you going to be able to deliver on that? And that's where we fast forward to today, because we have indeed delivered on those commitments. We've delivered on those commitments because of our focus.
We were very, very clear on what we had to do. We had no choice, right? This was a matter of focused execution because we knew where the competitive bar was. And so we drove our IPs there. And we had to make a fundamental change because we wanted to rethink our engineering process.
How do we put it all together? That needed a new fabric and a whole new methodology. And we invented one that unifies how we run development across the company. So that's what I want to share with you today. I want to give you an update and show you what we did in that execution. And again, I'll go back to starting with our crown jewels.
That's what we're building great products around. It starts at the foundation with a high performance CPU and GPU. And it's a high barrier to entry. This is not something that you can just go do without deep know-how and deep experience. We have built up over the years thousands of patents on CPU and GPU technology.
We added 100 more in the development of our Zen high performance x86 and our Vega architecture. And it's that know-how, that experience with customers, that ensures the technology not only delivers performance, but delivers it to an ecosystem with the right software enablement and the right application development support, so that the technology translates to experience for our customers, solving problems. And that's what AMD does best: we solve problems, we bring innovation to market. But what's different is what Lisa said earlier. This was done in a focused approach to provide sustained innovation going forward.
Well, look, let's look at the results. Zen didn't just achieve the 40% commitment. We achieved over 52% instructions-per-clock generational improvement over the previous generation. We had designed it from the get go for the toughest workloads. We set out for it to be the engine for the emerging data center and cloud workloads, for content creation.
We wanted this to bring a substantial differentiated performance for our end users and we did the same with the Vega architecture. It is about content creation. It's about changing experience. And then with Vega, there's another twist because it's the perfect engine for acceleration of emerging machine intelligence. You'll hear much more about that later.
In fact, I'm going to spend just a moment to describe a few of those Vega features, because how do you provide what you heard earlier, immersive and instinctive computing? What does immersive mean? You're in a virtual or an augmented reality and you feel like you're approaching true presence, like you're truly engaged with someone in the same room. And instinctive computing, where the intelligence is built in and actually anticipates you. That's what drove our development. And when you look at the features: the high bandwidth cache controller, to bring much more memory close in and available to that computation; a new programmable geometry engine, tailored for the emerging workloads; FP16 packed math, to accelerate those machine intelligence applications and frameworks.
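The FP16 packed-math feature rests on a simple idea: two half-precision values fit in the space of one 32-bit value, so a 32-bit datapath can carry two FP16 operations at once. A software sketch of the packing layout only, not of Vega's actual hardware implementation:

```python
import numpy as np

def pack_fp16_pair(a, b):
    """Pack two half-precision values into one 32-bit word, the layout a
    packed-math ALU uses to run two FP16 ops in a single 32-bit lane."""
    lo = int(np.float16(a).view(np.uint16))   # IEEE-754 binary16 bit pattern
    hi = int(np.float16(b).view(np.uint16))
    return (hi << 16) | lo

def unpack_fp16_pair(word):
    """Recover the two FP16 values from a packed 32-bit word."""
    lo = np.uint16(word & 0xFFFF).view(np.float16)
    hi = np.uint16(word >> 16).view(np.float16)
    return float(lo), float(hi)
```

Doubling the values per register in this way is why FP16 packed math can roughly double arithmetic throughput for workloads, like neural-network inference, that tolerate reduced precision.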
And of course, an advanced pixel engine for improved visualization. You'll hear more about this from Raja later. And in fact, you'll see a demo of some of those architectural capabilities in action. Now let me switch over to the Zen core. This was over 4 years of effort.
Again, incredibly focused on a goal that we would not let go of: to get us back into high performance. And it was just an incredibly focused effort. We had over 300 engineers at peak. Over that entire period of time, it represented over 2,000,000 engineer-hours of effort. And with that kind of commitment, the team simply could not let go of achieving or beating our performance goal.
And that's exactly what we did. We delivered over 52% performance gain generationally. And it was a really hard-nosed, focused engineering effort. It starts with the execution engines. Look at what we did: we widened our execution pipes by 50%.
We increased instruction scheduling by 75%, so you can flow that instruction execution much more effectively each clock tick. But that only works if you can feed that engine. So what did we do? We had to be able to feed the beast.
So we improved on the instruction side with very smart branch prediction. We actually built in a perceptron engine to give us better accuracy in our branch prediction. We inserted a micro-op cache to more efficiently dispatch those instructions to those pipelines. And then you have to feed it from the data side, so we revamped our cache subsystem.
We increased our cache sizes. We added a dedicated L3 cache. And we advanced our memory prefetch algorithms, looking at the strides of the data, what patterns are coming in, and getting the right data where you need it at the right time. On top of all of that, we added simultaneous multithreading, because your execution engines are precious.
So if you do have a stall, waiting for some data to complete an instruction, you want to flip over to a new thread. Simultaneous multithreading effectively doubles the number of threads; it looks to the operating system like a doubling of the cores available to get work done. And what's the result? A dramatic improvement in instruction-level parallelism of that execution.
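The perceptron-based branch prediction mentioned a moment ago can be sketched in a few lines. This is a minimal, illustrative model of the general perceptron-predictor idea, not AMD's actual Zen implementation; the table size, history length, and training threshold here are arbitrary choices for demonstration.

```python
# Minimal perceptron branch predictor sketch (illustrative, not AMD's design).
# Each branch address hashes to a weight vector; the prediction is the sign of
# the dot product between the weights and recent branch history.

class PerceptronPredictor:
    def __init__(self, history_len=8, table_size=64, threshold=16):
        self.history_len = history_len
        self.table_size = table_size
        self.threshold = threshold
        # One weight vector per table entry; index 0 is the bias weight.
        self.table = [[0] * (history_len + 1) for _ in range(table_size)]
        self.history = [1] * history_len  # +1 = taken, -1 = not taken

    def _weights(self, pc):
        return self.table[pc % self.table_size]

    def predict(self, pc):
        w = self._weights(pc)
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
        return y >= 0, y  # (predicted taken?, confidence)

    def update(self, pc, taken):
        outcome = 1 if taken else -1
        pred, y = self.predict(pc)
        # Standard perceptron rule: train on a mispredict or low confidence.
        if pred != taken or abs(y) <= self.threshold:
            w = self._weights(pc)
            w[0] += outcome
            for i, hi in enumerate(self.history):
                w[i + 1] += outcome * hi
        # Shift the new outcome into the global history.
        self.history = self.history[1:] + [outcome]

p = PerceptronPredictor()
# A branch that is always taken quickly becomes strongly predicted taken:
for _ in range(20):
    p.update(pc=0x40, taken=True)
taken, _ = p.predict(0x40)
```

The appeal of the perceptron scheme is that it correlates over a longer branch history than a simple saturating counter can, at the cost of a dot product per prediction.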
Said another way, it's a dramatic increase in performance at every clock tick. And so that's what we've done. The team has delivered competitive x86 single-threaded high performance and hands-down leadership in multi-threaded performance and application performance. This achievement absolutely defies industry convention: this type of gain in a single generational update of a CPU design. What's very, very important when you look at this is not just the performance results, but how we did it.
Look at us historically, and you know that we've fought against a gap in foundry semiconductor nodes versus what Intel was able to achieve with their process technology. And so we've learned over the years to design smarter, to design our circuits to be denser and more optimized. We had no choice. We became very, very good at that. Well, what happened?
What happened was that 14 nanometer FinFET shrunk the gap considerably versus our competition. And we deployed all that learning we had done in prior generations on density optimization. And the result: we have a 10% advantage versus our competition in area efficiency, which is extremely important in giving us both performance and performance per watt. And when you add it all together, it's an astounding improvement in performance per watt, performance per amount of energy: it's 270%. It's important enough that I just want to break down for you where it comes from.
A little less than half of it is from those architectural and microarchitectural improvements that I walked you through, the features that we designed in from the outset in the Zen core. Very, very significant, of course, in driving the performance side of that equation. But you have to equally design for efficiency. We designed the Zen core for all of that performance with no increase in the amount of energy expended per cycle. And how did we do that?
We got 70% leveraging that new 14 nanometer FinFET technology that I described. It also expands the operating range; it lets you scale up and down with voltage. We added a set of very, very smart techniques we call Pure Power to manage where you're expending energy. If the program you're running, the task you have running, does not need a certain element of the chip at that specific moment, then we shut it off.
But more than that, we have thousands of sensors built in around the chip, which are constantly optimizing and saving energy. That, combined with the physical design techniques I described a moment ago, brought another 70% of improvement. So the net result was a huge step forward for us in terms of performance and performance per watt. And that plays directly into the value of our product. When you hear Forrest talk about the value that we're bringing with EPYC, you'll hear the term TCO in his presentation: total cost of ownership.
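One way to see how separate efficiency improvements add up to a headline figure like the 270% quoted above is to treat each contribution as a multiplicative factor. The arithmetic below is purely illustrative; the 30% architectural factor is a hypothetical placeholder, not AMD's published breakdown, and the point is only that independent gains compound as products rather than sums.

```python
# Compounding independent improvement factors (illustrative arithmetic only).

def combined_gain(*gains):
    """Each gain is fractional (0.70 means +70%); returns total fractional gain."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total - 1.0

# Hypothetical +30% architectural gain compounded with two ~70% efficiency
# improvements like those quoted above:
total = combined_gain(0.30, 0.70, 0.70)  # ~2.76, i.e. roughly a +276% gain
```

Note that the same three factors added linearly would give only +170%, which is why multiplicative compounding matters when attributing a large overall perf/watt number to its sources.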
You have to design not just the performance, but the efficiency with which you deliver that performance to our end customers. And now that brings me to the third leg that I was alluding to in our focused development. This third piece, which we call our Infinity Fabric, is kind of the hidden gem. You haven't heard a lot about it because CPUs and GPUs are very sexy; that's what gets all the limelight.
But I'm going to spend a few minutes talking about their cousin. We started this development back in 2012, right when we kicked off our new core designs; we kicked off the design of a new strategy. A lot of people have asked us: look, you're not nearly as big as your competitors, why do you think you can attack these multiple markets?
Well, you can't do it if you just practice the trade like it has historically been done. The equation doesn't work; you would be right in that commentary. But we didn't follow the traditional path. We developed new technology that would allow us to leverage our IP building blocks, leverage that crown jewel IP and the IPs we put around it, and tailor it, tailor it to our specific markets.
Because you can't just take one building block and think that it's going to work from a server that has to scale across many, many cores to a graphics engine that is tailored to its specific tasks. Think about our laptops, which need an integrated CPU and GPU and the ability to address memory heterogeneously across both; those are quite different requirements. But our secret sauce is the Infinity Fabric that ties it all together. It can span from applications running at a handful of watts to hundreds of watts, across unique and disparate features: our server market with EPYC, our graphics market with Vega and its follow-on roadmap, our client market with Ryzen and Ryzen Mobile coming out at the end of the year, and our future semi-custom products, which need to leverage our IP base and tailor it to specific needs.
We needed this flexibility. We needed to engineer it right alongside our new crown jewel IP to enable our strategy, and that's what we've done, and that's what we're delivering with our new products in 2017. It's a world class system-on-a-chip methodology that we're delivering to market. I want to spend a couple of minutes on the detail of what we do, first on the management side: how does this Infinity Fabric allow you to manage all these disparate pieces of building a complex graphics or CPU end product? And then, how do you scale?
How do you provide scalability of performance? We'll start with the control side. We have a scalable control fabric that's managing those thousands of sensors all around the chip. It's managing how we change voltage and frequency by examining the workload, to give you the most performance at the lowest energy. And it provides our security solution, a consistent security solution across all of our products.
It provides an encryption capability across our products, a very differentiated encryption capability, and a common way across all of our products of how we test and initialize. I know you could say, well, that's plumbing, why do we care? Well, in this day and age of advanced technologies, it matters a lot that you have a very efficient engineering process to manage all the way through production and assembly, particularly with multi-chip solutions. Then you flip over to the data fabric. We had a long history with coherent HyperTransport.
It was a great protocol for a data fabric. But we took it further: we built it into our comprehensive Infinity Fabric and improved it. It's an enhanced coherent HyperTransport. And it's a beautiful solution, actually designed to be quite flexible in how we deploy it, to get performance scalability at reduced latency, and in a standardized way as we implement it across each of our segments.
I can give you an example first with EPYC. You'll hear lots more detail from Forrest later. But when you just look at this configuration, you see the combination of that enhanced coherent HyperTransport, the efficient bandwidth utilization we built in, and the provisioning we built into EPYC. And what you see is near perfect scalability, not just on one socket through 32 cores, but all the way through a 2-socket, 64-core implementation, using the SPECint Rate benchmark, a very commonly used benchmark indicative of a typical workload. So it's really an excellent example of leveraging that technology, understanding the end goal, and bringing that design to bear, not just for today's products, but for the products of the future.
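The scalability claim above can be made concrete with a small calculation: scaling efficiency is measured throughput divided by the throughput that perfect linear scaling would predict. The scores below are hypothetical placeholders to show the arithmetic, not published SPECint Rate results.

```python
# Scaling-efficiency sketch: how close is a system to perfect linear scaling
# as core count grows? Throughput numbers here are purely illustrative.

def scaling_efficiency(base_score, base_cores, scaled_score, scaled_cores):
    """Fraction of ideal linear scaling achieved when growing the core count."""
    ideal = base_score * (scaled_cores / base_cores)
    return scaled_score / ideal

# Hypothetical: 1 socket / 32 cores scores 100; 2 sockets / 64 cores scores 195.
eff = scaling_efficiency(100.0, 32, 195.0, 64)  # 0.975 -> 97.5% of linear
```

"Near perfect scalability" in this framing means the efficiency stays close to 1.0 even as the workload spans dies and sockets, which is exactly where fabric latency and bandwidth usually erode the ratio.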
And that's the point about the Infinity Fabric and how it sets the stage for sustained innovation in how we put our solutions together. We use this fabric on die; it's an on-die connection. And we export it as we go die to die, so it's actually providing all those same characteristics I described die to die, and even socket to socket. It's an incredibly extensible design approach to deliver performance across a multitude of end applications, with very disparate requirements across those markets.
That's the Infinity Fabric. And I'll say, stay tuned as you see examples today and going forward. I'm going to tell you just how important that is in the next section, as I talk about our road map. Because you can look at what we did in 2017 and say, all right, great job, AMD, but are you going to be there in the future? We hear Moore's Law is dead, or we hear Moore's Law is slowing down. So if you're back in high performance, how are you going to stay there?
And plain and simple, it is true that there's a change in what we historically got out of each technology node. We still get the improvement in transistor density, but it costs more, it takes longer to build, and what you get from the process alone in frequency has gone down, node after node. So what we see going forward is that you need to do something different. You still need each new process node, you still need each of those generations, but you've got to design smarter. You can say, oh, I've heard that before, design smarter.
But I'm talking about very specific approaches to how you bring those IPs together, how you design from a system perspective to deliver performance. It is about how you integrate, how you leverage each of the components and bring them together to provide value. It is about enabling it with software and applications. And then, of course, system design: how do you understand the problem you're solving and bring accelerators and other architectures to bear to stay on the pace of Moore's Law? That's what Moore's Law Plus is: even though we're not getting what we used to from the semiconductor technology alone, staying on that pace of generational performance improvement by marrying design with process.
And I'm going to tell you exactly what that means to us, because we're not waiting; we're already along this path. The Infinity Fabric, as I just described, is a key piece, giving us that modularity, that flexibility to architect solutions based on the segment we're going after. Packaging: we're shipping multi-chip modules today, 2.5D packaging today, and more advanced packaging techniques in the near future. And we're working with the industry.
At AMD, we're not about creating a top-to-bottom proprietary solution. That's not who we are. We're about collaboration. On interconnect, we are founding members of both Gen-Z and CCIX to deliver high performance connectivity at rack scale to the industry, in collaboration with other partners, and of course there's the optimization of our physical design implementations. On the software side, again, we're about having very high performance software solutions, but doing it in an open fashion.
We use open source LLVM as the base to create our AMD optimizing C and C++ compiler, which will be out there in advance of our EPYC product shipment. You heard from Lisa about MIOpen, the advanced frameworks for our Radeon Instinct, and the whole Radeon Open Compute platform underneath it. Then from a system design standpoint, look at the detail and the performance you're going to see later on Radeon Instinct, as well as how we're leveraging memory technologies with our graphics to accelerate performance. And more recently, we just acquired wireless interconnect IP with Nitero, which gives us millimeter wave technology we can apply in the exciting areas of VR and AR, where you want to cut the cord and still have a very high quality of service for connectivity. So when you look at where we are, we're not just pointing to the fences, we're not just pointing to the future.
We're already in this world of Moore's Law Plus with the products we're rolling out in 2017: the stacked high bandwidth memory, HBM2, on the Vega architecture, and the configuration we have with EPYC to connect not only our CPUs with high configurability and scalability, but also, seamlessly and peer to peer, accelerators, Radeon Instinct accelerators. So we're positioned to stay on the Moore's Law pace. But are we positioned foundationally with our engines, our core roadmaps? And the answer is: we did not work for the last 4 years to get back into high performance only to let our road map skip a beat. So we've invested in leapfrogging design teams. We're well into the design of our next generations.
Zen has been shipping since March in Ryzen. That team has already been on the next generation for over a year. We had a leapfrogging, skip design team that had already started Zen 3 months in advance. So the new AMD will be shipping a current CPU and developing the next 2 generations at any point in time. That's our commitment to our customers: we will keep them on a Moore's Law pace going forward, through our foundational core design and then our integration capabilities on top of it.
And we're doing the exact same across our graphics roadmap: Polaris last year, the Vega architecture coming this June. Already the next generation Navi design is well along, as is the next generation beyond Navi. So the new AMD is focused on simply being that bankable supplier, with sustained innovation, so that we can deliver going forward to our customers. We want to be a completely bankable supplier.
And that's really what I want to conclude with: you should expect from us nothing but that continued execution. And I've shown you how we're doing it. It's not magic at all, how we're going to have sustained execution across our core roadmap, leveraging the innovative new Infinity Fabric and the consistent methodology it brings to AMD, allowing our focused execution to bring these products to market on time and at quality. And we will simply have unmatched configurability to meet the demands of our customers. This is just the beginning.
We are starting a new approach. And you've heard me say before, AMD is back in high performance. But what I'm telling you today is AMD is back in high performance and we're back to stay. Thank you very much. And with that, I would like to bring up my colleague, Jim Anderson, so we can get an update on our CG business.
Thank you, Mark. Appreciate it.
Thanks. Thanks.
Good afternoon. So as you can see, Mark has lined up just a fantastic technology pipeline for us. And so what I'll talk about today is how we take that technology pipeline and translate it into really the strongest product lineup that we've had in years for the Computing and Graphics business. So I joined AMD about 2 years ago; it's about 2 weeks to my 2-year anniversary, and it's been a great experience.
And a couple of things struck me when I joined AMD. The first one was how large the market opportunity is for us in computing and graphics. If you look at client compute, it's a $28 billion TAM. And yes, of course, that TAM isn't necessarily growing, but for us it is a growth opportunity, because we can gain share in that TAM and grow revenue. And then in discrete graphics, an $11 billion TAM.
And that TAM is growing, and it's spread across immersive platforms and data center compute. So two big markets. And then the other thing that struck me is how much of that market is concentrated in the high performance, premium segments. And so what you'll hear from us today is how we're taking those technologies that Mark just talked about, the Zen core and the Vega core, and bringing them into those high performance segments of the market, driving revenue growth and profitability gains.
And we've already made some good progress to date. Even before we launched Ryzen or Vega, in 2016, we made good progress: 9% year-over-year growth in Computing and Graphics. And over some of the most recent quarters, we've been growing revenue by double digit percentages on a year-over-year basis. In fact, in our most recent quarter, Q1, we grew revenue by 29% year over year.
So good progress to date. And with the introduction of Ryzen and the coming of Vega, that's just going to continue to fuel the good momentum in Computing and Graphics. And this year is actually just a tremendous year for us in terms of product introductions, and I'll walk through that today.
And we'll talk about both client compute and discrete graphics. I'll walk through the product introductions for client compute, and then Raja will come up and talk about discrete graphics. But the business goals for both of these product lines are the same. We're using that new technology, these new high performance cores to reenter the premium segments of the market, the high performance segments of the market, to then gain share in those segments. And then that's going to drive ASP growth, margin expansion, revenue growth and profitability.
Okay? So let's talk about client compute, starting with the market structure. This is the PC CPU TAM. And Ryzen is really a game changer for us in this market, because Ryzen allows us to really address the high performance, premium segment.
Before Ryzen, AMD's market position was really concentrated more in what we would call mainstream and below. So a mainstream PC is a PC that would sell for maybe $400 or $500 or less. And in this market segment, yes, it's a large unit TAM, but it's actually a pretty small portion of the revenue TAM and an even smaller portion of the margin TAM. Now with Ryzen, that all changes. With Ryzen, we now attack the premium segment, the high performance segment.
And this is still a large amount of the unit TAM, but the vast majority of the revenue TAM and even more of the margin pool. And so that's really what our focus is: with Ryzen, going after this premium segment of the market. So that's what I want to walk you through today: how we're rolling out Ryzen across all of those different premium segments. And as we roll Ryzen across those segments, both consumer and commercial, premium desktop and premium mobile, in premium desktop we'll add about $9 billion of market coverage as we roll Ryzen out to that part of the market.
And then in premium mobile, we'll add another $10 billion of SAM expansion in market coverage, for a total of almost $20 billion of increased market coverage. So I'll walk you through each one of these segments, but I want to start with the part of the market that we've already introduced Ryzen to: premium desktop, the consumer piece of that market. And we committed quite a while ago to deliver Zen to this part of the market in Q1 of this year. And that is exactly what we did. We executed to our commitment.
We introduced Ryzen 7 at the beginning of March. We introduced Ryzen 5 at the beginning of April. And we're going to introduce Ryzen 3 in Q3 of this year. So we've actually introduced 7 versions of Ryzen to date. And if you look at those 7 processors across Ryzen 7 and Ryzen 5, you'll see that we're delivering disruptive levels of performance, really setting a new bar in this premium portion of the market.
And the reception from the market so far has been great. The reception from end users has been really great, and we've gotten great support from the ecosystem. Just a couple of examples of that ecosystem support: over 90 motherboards now in market supporting Ryzen 7 and Ryzen 5.
Over 200 system integrator systems across the world based on Ryzen. These are specialty PCs, often high end PCs, made by smaller PC manufacturers across the world. So a great ecosystem already. And then I'm very pleased to announce today, for the first time ever, that Ryzen desktops will be launched by all 5 of the top PC OEMs by the end of this quarter. So yes, it's great support from not just the channel partners, but also the PC OEMs.
So we're really happy with the initial introduction of Ryzen. I think what's really fueling that great support from PC OEMs and from the channel is the strength of the product itself. As I mentioned, we've introduced 7 different versions of Ryzen to date; these are the 7 versions shown here. And what this is showing is each of those different versions of Ryzen compared against its nearest priced Intel competitor product.
So the Intel processor that's closest in price. And what you can see here is relative multi-threaded performance on a typical benchmark, Cinebench nT, a very common benchmark. And what you can see is that in this premium, higher performance segment, up and down the stack, across the price points, we're delivering just phenomenal levels of performance above and beyond our competitor.
And in some cases, actually almost disruptive levels of performance, well above 50%. And so this is really why we're seeing great support, not just from end users, but from the PC OEMs as well. And I want to zero in and dive into a couple of specific examples, 2 processors in particular that are among the more common, higher volume processors. And I want to give you a sense of not just benchmark performance, but what real end users are experiencing with Ryzen. So let's take a look at that.
Let's take a look at the first example, shown here. And Ryzen is not just winning on one benchmark; it's winning across a wide range of end user applications. What this is showing is 3rd party data, public data that's available. Ryzen is the orange line, Intel the blue line, and it's showing relative performance: the further out the line is, the better the performance. And this is a comparison of the Ryzen 7 1700 against Intel's Core i7-7700K.
And you can see we're delivering just an incredible performance advantage to the market. These are 2 processors at about the same price in the market, an actual retail price of about $330. So in prosumer applications like video encoding, content creation, and encryption applications, a significant performance advantage. Also in some of the new emerging usage models like game streaming: that's where you're playing an interactive game in real time and streaming that experience to your friends.
Ryzen 7 is a great processor for that as well. It's also a great gaming processor, with a really good experience in premium gaming, 4K and VR, as well as lower resolution gaming. So this is just one example. Let's take a look at a second example, and this one is actually even more dramatic.
This is showing our best flagship Ryzen 5 processor against Intel's best Core i5 processor: our 1600X against Intel's 7600K. And here the performance advantage is even greater. Across those prosumer applications, actually more than a 50% performance advantage. And that game streaming application I talked about? Ryzen 5 handles it just great.
The Intel Core i5 processor actually can't handle that usage case. And Ryzen 5 is a great gaming processor as well, delivering a great gaming experience. So this is why we're seeing really good reception from end users and from PC OEMs. And Ryzen performance will only improve from here, because Ryzen is a new architecture.
It's a new CPU architecture, and software developers and gaming developers will optimize code around it. In fact, we've already seen some really good examples of that: some gaming developers released new versions of their games that dramatically improved performance within just weeks of the Ryzen launch. We're really happy with this product, really happy with its introduction to date.
But I want you to hear not just from me, but from real end users and from the
We wanted to disrupt the PC market. We wanted to bring innovation, choice, and performance to as many people as possible. The best starts now. The time of Ryzen has arrived.
The highest Cinebench R15 score ever on an 8-core CPU. For us,
it really represents bringing back real innovation and competition to the high performance PC market. Everybody is talking about Ryzen.
And since Ryzen is absolutely gorgeous, I felt like my gameplay was elevated.
I'm running this on the Ryzen 7 1800X. I am running the game, Twitch, OBS encoding, Twitch in the background for music, and I have Revo up and I have Chatty up. I hope you guys can see how smooth everything is running. What I've seen today, I'm pretty impressed.
I could not be more proud of Ryzen. The best is yet to come.
Okay. So Ryzen is off to a great start in our 1st market segment, premium desktop in consumer. But let's talk about how we take that same Ryzen technology across the rest of the premium high performance market. The next segment I want to talk about is premium mobile. This is another very large, very important part of the market.
And so let's take a look at how Ryzen does in this segment. Ryzen Mobile will be introduced in the second half of this year. And we've got just fantastic support from the PC OEMs, who are already building and designing systems around Ryzen Mobile. We've been working for months with the top PC OEMs to ready new systems for the market. So with Ryzen Mobile, you'll see beautiful 2-in-1 systems.
You'll see some really beautiful thin and light ultraportables. Great gaming systems. Ryzen Mobile will have fantastic battery life. It'll have great performance on productivity applications. And it will have a phenomenal gaming experience.
You'll be able to get just a great AAA PC gaming experience on a thin and light gaming notebook. And so great support from the PC OEMs. And the reason for that, again, is the strength of the product itself. So I want to talk a little bit about Ryzen Mobile, which we'll launch later this year. It's really a combination of 3 elements.
The first one is, of course, the Zen Core. Right? Mark talked about the Zen Core earlier. We've already introduced it to consumer desktop. We'll now bring it to mobile as well.
And with the introduction of the Zen Core, that will boost our CPU performance by over 50% on a generational basis. So 50% higher performance than the products that we have in market today. And then, the second element is graphics performance. And here, I'm very pleased to announce today, for the first time ever, that Ryzen Mobile will have integrated on die Vega graphics cores. So those Vega cores that Mark talked about earlier, that Raja is going to come up and talk about next, those cores are integrated into Ryzen Mobile.
Those Vega cores will provide an over 40% graphics performance boost, generation over generation. Again, those will deliver just a fantastic PC gaming experience on a thin and light notebook. And then the third element, and this is just as important as the last two, just as important as CPU performance and graphics performance, is power efficiency. We spent a tremendous amount of effort with Ryzen Mobile in optimizing the power efficiency of this design. And so we're going to deliver that CPU performance leap and that GPU performance leap at half the power.
And that's going to give great battery life and really enable some fantastic premium form factors. So this is why we're really excited to get this product to market, very happy with our engagement with OEMs. And of course, we'll talk more about this product as we get closer to launch in the second half of this year. Okay. So that is the consumer segment.
So we've talked about consumer desktop and consumer mobile. Let's also talk about commercial. Commercial is another large part of the PC market, a very stable part of the PC market. And the premium portion is a really profitable part as well. So we're going to bring that same Ryzen technology to the commercial segment of the market as well.
And even before we bring Ryzen into commercial, we've already got good momentum in this market space. With the existing products we have today, we've got AMD commercial based systems deployed at over 350 large enterprises and public sector customers. In our most recent quarter, we saw really good revenue growth in this segment, actually double digit revenue growth year over year in commercial PCs. So we've already got good momentum, and the Ryzen introduction into this segment is just going to continue that momentum. So for the first time today, I also want to announce Ryzen Pro.
And Ryzen Pro is our dedicated specific version of Ryzen for the commercial market for business applications. So it's going to bring that same multi threaded performance that we've already brought to the consumer market, now brought to the commercial market as well. So in addition to that incredible performance, it's going to have great security features, great manageability features, the features that are important to IT managers, IT pros. And here again, we've got really good engagement and
really a lot of
great work going on with the PC OEMs. We'll introduce desktop systems in the second half of this year, and we'll introduce mobile based Ryzen systems in the first half of next year. And again, it's on the strength of the product that we've got really good engagement. So just to give you a little bit of a sneak preview of Ryzen Pro, this shows the Ryzen 5 Pro 1500 processor stacked up against an Intel Core i5-7500.
And the reason we picked the 7500 is because it's one of the common processors used in the commercial market.
And you can see here again, we're bringing just an incredible level of performance to the market. So, over 20% better general-purpose CPU performance, and on more workplace-focused applications like 3D rendering and video content, a big performance lead. So, we're very excited to get this product to market as well. And again, you'll hear more about this product as we get closer to the launch of Ryzen Pro in the second half of this year. So, there's one more product I want to talk about today before I wrap up.
And this is, I admit, my favorite product. And it's a product we've never talked about before; this is the first time that we'll publicly introduce it. And I think this product, more than any other product I've talked about today, really demonstrates the competitive spirit of AMD and the competitive approach that we're taking to the market. We're not just designing products for the average user, we're designing products for the most demanding users.
For those PC users that want every last bit of compute horsepower right at their fingertips. And so this product is really designed with that in mind. And so for the first time here today, let's go ahead and roll the video. Okay. So we call it Threadripper, Ryzen Threadripper, and it's coming this summer.
And Ryzen Threadripper is targeted at the absolute ultra high end of performance in desktop. And it's got up to 16 cores, 32 threads. That's twice as many cores and threads as we introduced just a little over 2 months ago with our Ryzen 7 processors. That's also over 60% more in cores than our competitor has in their highest end desktop processor in market today. So just a phenomenal level of performance that we're bringing to the market.
And it also comes with a brand new high-end desktop platform. And we'll talk more about the platform in the future, but it will expand our memory bandwidth and IO bandwidth, a great new high-end desktop platform. And again, we're building this for the users that demand ultra performance in the high-end desktop space, right at their fingertips. And we're very, very excited about this product. I can't wait to get it to market.
We'll talk more about this product in a couple weeks at Computex in Taipei. But really excited about this. Okay. So with that, I'm going to wrap up. As you can see, we've got a busy year ahead of us.
We've introduced just the first phase of the Ryzen rollout. We've got multiple different versions of Ryzen that we need to roll out this year and over the next 12 months. But in each one of these segments, we're bringing disruptive levels of innovation and performance to the market. And with that Ryzen introduction, our business goals are very clear: to reenter these high performance segments, to gain share in those high performance segments, to drive ASP uplift and, with that, margin expansion, and ultimately drive revenue growth and expanded profitability. So thank you.
I appreciate the chance today to talk a little bit more about the Ryzen rollout. So with that, it's my pleasure to introduce Raja Koduri, my good friend, and he'll talk more about the discrete graphics roadmap.
Thanks, Jim. Can you all hear me? Yes. Sorry, I got the 2:30 slot. I'm between you and some good cookies or coffee or whatever is standing outside.
And, you know, this is a good time if you want to just stand up and stretch. You know, I won't be offended. You won't be leaving. Now before I talk about graphics, I don't think the world actually realizes how disruptive Threadripper and Naples are going to be. I mean, forget about the rest of the world, just my world in graphics, right, my world in graphics.
When you have the CPU get disrupted that way where I have double the number of cores, quadruple the number of cores, ton of IO, you know, many bottlenecks go away. The full potential of some of the things that we have been building on the GPUs get realized. So that's super, super exciting. So, I can spend the rest of my 30 minute slot just talking about CPUs and I don't need slides for that. But I'm going to talk about, you know, I'll tell you a little story.
It's the beginning of a book that I thought, you know, I should write, or actually recruit, you know, a couple of friends in the analyst community that are much better writers than I am, but I can feed them some ideas. And I call this book The Radeon Rising. Last year is chapter 1 of this book, and I'll spend a few minutes on it. I call chapter 1 Better Basics. You know, Jim mentioned I'm going to talk about discrete graphics. But if you just rewind back a few years, you know, we lost focus on discrete graphics.
I won't go into all the reasons why. But in some sense, if you even go back 5 years, many folks brought up discrete graphics. They said discrete graphics is going to be dead, it's all integrated, mobile is going to take over and all. And a lot of people believe that. Roadmaps changed, focus changed, investments changed.
And about a year and a half back, when we sat down and said we are going to put focus back on discrete graphics, the first thing we needed to do was get our basics back again. And that's what we've been doing. And I'll walk through some of the basics we did. The first thing is we got our eye back on the discrete graphics ball, right? That's the key thing.
It sounds small, but it's very important. We formed Radeon Technology Group to focus on this problem, to focus on this ball. We had a very compelling vision that inspired both our engineers internally and also the community, our customers, developers around immersive and instinctive computing vision, a 5 year vision. We defined a very executable strategy and you see that playing out and we increased investments in graphics. We grew the graphics team to over 3,000 employees.
And all of this, you know, came 15 or 18 months after, right, many folks had written us off, right, written AMD off and written AMD graphics off, particularly from a discrete graphics perspective. Now you saw the story play out right throughout the year. We introduced Polaris with over 3x performance per watt. Basics: performance, power, packaging, programming model, those are all basics for graphics. We improved the basics, 3x performance per watt. We differentiated; we just didn't go out and say we're going to take our competition on like for like everywhere.
We focused on DirectX 12, we focused on new APIs, and we demonstrated leadership in those new APIs clearly. We put our energy behind free standards, FreeSync in the monitor ecosystem. Two years ago, nobody would have imagined that there would be 100 different monitors with FreeSync support. We started from zero two years ago. And we optimized our entire solution stack for the emerging VR stuff. Better hardware, better basics on hardware, we demonstrated.
Now software, as you all know, is more than 50% of the story in graphics. What did we do over the last year? We fixed the number one ask that the community had. We had 30 new titles supported by our drivers, optimized by our drivers, on day 1. And just to give you a perspective, in 2015, we did 8.
So we were almost 4x better in doing our basics right in gaming software. We invested in professional graphics. We increased the number of certifications. What does the professional customer ask us for? Be robust in your certifications.
Make sure there is no tool that does not support AMD Radeon GPUs. We invested in the ecosystem; we have eliminated 99% of the gaps that we had in the ecosystem versus our competition in professional graphics. And towards the end of last year, we also announced our open computing stack, laying the foundation for compute, which I will talk a lot about in the rest of my presentation. And underlying all of this, we took an open software approach, and that made us near and dear to a lot of different communities. Now, I said better basics in hardware, better basics in software.
We also did better basics in our marketing. We made it really, really, really clear to our community: we care about gamers. We have a brand for gamers. We have campaigns for gamers. We reach out to them. We knew where they went, where they hung out, and we talked to them. We built that community back over the last 15 months. And we clearly differentiated what we did with the professional community and the content creators. And we won them back. And again, last December, we introduced our vision for machine intelligence, a vision that spans not just this year and next year, things that have compelling implications for the next 10 years, around Radeon Instinct.
And what you'll see now is just the last, you know, 15 months played out in this one minute clip of what we have done. Let's play the video. Pictures are great, but I know most of you are numbers guys, right? You'd like to see numbers. The result of us getting our basics right, even with a focus that is not as broad as sometimes you wanted it to be in the segments you wanted us to go after: if you look at the results, in the segments we focused on, which is 84% of the TAM, the sub-$299 desktop, we gained 10 points year over year in Q4. And if you look at the overall desktop GPU share for the entire year, we gained over 7 points.
And remember that the Polaris impact was only in the second half of last year. So that's what we did, but that's chapter 1. What's the opportunity ahead of us? If you look at it, we haven't participated in some really, really, really interesting high margin segments, the ones shaded in gold here. If you do the numbers, this is 15% of the volume, but it holds over 66% of the margin dollars.
So a lot of dollars sit here. We have an opportunity to go after this segment. We can play in this segment, but we also understand that just because I stand up here and say, hey, I want to play in this segment, the customers are not going to show up and give me the dollars, right? Margin dollars are hard to get. And we understood that we have to do chapter 2.
In chapter 2, and you have seen the beginning of chapter 2 today, I call it going beyond the basics. If you want to go after these margin dollars, if you want to play in these big leagues, you've got to do your basics right. You've got to get your performance up every year. You've got to get your power efficiency up every year. You've got to get your software better, more robust every year. But that's just not enough. You've got to disrupt the incumbents with some things that your customers need, solve problems that they don't solve, right?
And that's exactly how we've been thinking about going beyond the basics. So let me set up, you know, what we have been up to. The first thing is, at the core of it all, architecture matters. Our architecture has been optimized for years around gaming and CAD applications, professional workstation applications, design, et cetera. And this continues to be important.
By the way, this will continue to be important for a long, long, long, long time, right? Gaming will be there forever. We may not agree on many, many things with our competitor, but we do agree that one day everyone in this world will be a gamer. That much we agree on. So these things matter, and we continue to do that.
But then we looked at what is happening in the world outside this, what are the big problems. The dataset sizes everywhere, in all domains, are rapidly increasing. Everywhere. It's crazy. You know, in our traditional immersive world, everyone wants to get to these, you know, rich, lavish virtual worlds. We know how to get there, but the sizes of the datasets are huge.
And if you go to the content creation world, the same thing. People want to create with this level of detail, and the key word, the operative word, is real time. They know how to make this image. Hollywood has been making images like this for maybe nearly 10 years. They take days, hours, but they want to do it in real time.
Again, I know you like numbers. If you look at the dataset sizes in Hollywood, which is kind of a great combination of this limitless virtual world you want to create and also the content creator perspective, they are well into petabytes, right? So if you look at the latest Steven Spielberg film, The BFG, it's into petabytes. And it does need GPUs. It does need compute that can handle these petabytes of data.
And I kind of joke, saying that, you know, the BFG needs a BFG and a BFD. And the compute workloads are into exabytes. Lisa spoke about quintillion bytes of data being generated every day, which is kind of a, you know, meatier word for exabytes. And actually, somebody wrote it out on my slide there: 1,000,000,000,000,000,000 bytes. Actually, that sounds even bigger than exabytes, right?
A million trillions? So that's the type of stuff that we are dealing with. But if you look at what's happening on the GPUs, our compute capability has been going up and to the right. We are into, you know, Lisa said, over 25 teraflops on a single GPU. But the storage capacity hasn't been going up on the GPUs.
Right? We are still talking gigabytes. 8 gigabyte and 16 gigabyte cards are at the top of the enthusiast stack right now, and there are some 11 gigabyte, 12 gigabyte, 32 gigabyte cards and all, but we are still talking in gigabytes.
Now the analogy I give to this chart is like, you know, it's like riding a bike. I don't know if any of you rode a bike with the training wheels. Now having a bike is nice, is interesting, right? Even with training wheels, it's better than not having a bike. But once you get rid of the training wheels, the freedom you get is huge.
This problem here, I call it the training wheels, right? The GPU has done amazing things for the world, but it's still on its training wheels if you don't solve this memory problem. And that's exactly the problem statement we went after with the Vega architecture. So I know Mark introduced a bit of the Vega architecture and we've talked about it before. So let me give you the highlights of the Vega architecture.
So with the Vega architecture and the high bandwidth cache controller, we invented the next evolution of what was called VRAM, or GPU memory. The high bandwidth cache controller effectively doubles, and sometimes quadruples, the memory that you have, as you'll see in the demo in a couple of minutes. With the new programmable geometry pipeline, we can handle geometry datasets that are 2x, 3x, 4x more complicated than on any architecture before. In fact, data acquisition these days has become so cheap that if you have an iPhone 7 with a depth sensor, you can collect data that can help you build very complex geometry worlds. It has become that accessible.
With Rapid Packed Math, we have doubled our FP16 floating point capability, and not only for machine intelligence; in fact, we plan to democratize that compute to both the professional graphics and gaming communities as well, and you'll see some really interesting utilization of Rapid Packed Math across all workloads. And then we brought in some interesting technologies that have only been in the mobile world on the pixel side, like primitive batch binning, that increase memory bandwidth efficiency over 2x, and in some cases on the professional workstation side we are seeing up to 8x improvement in performance with this advanced pixel engine. So those are the key elements of the Vega architecture. And today, rather than go through the architecture briefs we have done before, and there's a lot of information about Vega written out there already, I will focus on these 3 big use cases for Vega: high-end gaming, workstation, and machine intelligence, and demonstrate to you where we are with Vega.
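As an illustration of the packed-math idea described above, here is a minimal sketch; this is illustrative Python, not AMD's actual hardware interface. Two FP16 values share one 32-bit word, which is why a 32-bit-wide execution lane can perform two FP16 operations at once, doubling peak FP16 throughput relative to FP32.

```python
import struct

# Illustrative sketch (not the actual GPU ISA): Rapid Packed Math stores
# two 16-bit floats in a single 32-bit register lane, so one 32-bit
# operation can process two FP16 values, doubling peak FP16 throughput
# relative to FP32.
def pack_fp16_pair(a, b):
    """Pack two FP16 values into one 32-bit word."""
    return struct.unpack("<I", struct.pack("<ee", a, b))[0]

def unpack_fp16_pair(word):
    """Recover the two FP16 values from a 32-bit word."""
    return struct.unpack("<ee", struct.pack("<I", word))

print(unpack_fp16_pair(pack_fp16_pair(1.5, -2.0)))  # (1.5, -2.0)
```

The packing itself is free; the throughput win comes from the hardware operating on both halves of the word in a single instruction.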
So the first use case is enthusiast gaming. When we specced Vega, and I would say this was two and a half years ago, when we were talking about, you know, the final configuration of Vega, the architecture work had been going on for a long time, we said one of the goals for Vega is that it needs to, you know, cross that 4K 60 Hz barrier: a single GPU that can cross the 4K 60 Hz barrier. And that seemed quite daunting 2 years ago, because at that point in time, 1080p at 60 Hz was the mark of enthusiast-class GPU performance, and 4K is 4 times more pixels. And what I'd like to share with you, and I'm so incredibly happy about this, is that we are achieving the goal we had of 4K 60 Hz gaming with Vega.
And by the way, for the folks in the back who can hear the systems humming: there is about a quarter petaflop of computing just sitting in that corner of this room. I counted the number of Vega systems that are there, and we have a quarter petaflop. And for a quarter petaflop, it's actually relatively quiet in here. If you walked into a petaflop data center, you'd have to wear some special equipment. So, Omar will be showing Sniper Elite 4 running at 4K.
Okay, let's switch, on a single Vega. And this has been our target across several of the modern workloads, DX12 workloads and the latest DX11 games and all. We wanted to cross this 60 Hz threshold, and you'll see just a single Vega sitting in that system comfortably crossing over 60 Hz, right. Okay, so the next demo, and I have lots of demos, and you get to see all of these demos in the back, is about the high bandwidth cache controller. And one of the questions I frequently get is the impact of this for gaming.
And today's games, as many of you know, especially people who follow the gaming stuff, are written for today's memory constraints, right? So we don't see many games actually cross requirements of around 4 gigabytes. So if you have a 4 gigabyte GPU, most games fit. There are some games that, at certain resolutions, go above 4. So between 4 and 8 gigabytes, you get, like, every game covered.
So to actually showcase what a high bandwidth cache controller would do if you have games that push this envelope, we have Tomb Raider, the latest version of Tomb Raider, running, and we are simulating the benefit of the high bandwidth cache controller by giving it only 2 gigabytes of memory. So what you'll see is both running on Vega as if they have only 2 gigabytes of memory, one with the high bandwidth cache controller on and one with it off. So go to the demo. So one of the things that is very, very important for gamers is the concept of minimum frame rate.
What is your worst case frame rate? And so what you see on the right here is the high bandwidth cache controller on, and you see, you know, when you look at the minimum frame rate counter, sometimes it's over 2x, over 3x faster because of this high bandwidth cache. And for gamers, losing a frame sometimes means losing a life, right, seriously. They take it that seriously, right. So this is, you know, in some scenes in there over 3x, you know, benefit with this feature.
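As a side note on the metric itself: minimum frame rate is derived from per-frame render times, and a single long frame drags the minimum down even when the average looks fine. A minimal sketch; the frame-time capture below is hypothetical, for illustration only:

```python
# Hedged sketch: how a "minimum frame rate" figure like the one in this
# demo is typically derived from a capture of per-frame render times.
# The longest (worst) frame determines the minimum frame rate.
def frame_rate_stats(frame_times_ms):
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    worst_ms = max(frame_times_ms)
    return {"avg_fps": 1000.0 / avg_ms, "min_fps": 1000.0 / worst_ms}

# Hypothetical capture: mostly ~16 ms frames with one 50 ms stutter.
stats = frame_rate_stats([16, 16, 17, 50, 16, 16])
print(round(stats["avg_fps"], 1), round(stats["min_fps"], 1))  # 45.8 20.0
```

The one 50 ms stutter cuts the minimum to 20 fps even though the average stays near 46 fps, which is why HBCC's effect on worst-case frames matters more to gamers than the average.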
So this is going to be a big deal, you know, when we put in the hands of the both game developers and gamers and that's kind of what this architecture is about. We are not building this architecture just for 1 generation. One of the things you hear, you know, the community talking about AMD technology, Radeon technology, they use the term fine wine. And what they mean by that is that, you know, we are providing them continuous improvements. Even GPUs they purchased 3 years ago from us, when we do driver updates, they get 10%, 20% performance improvement.
So the product gets better over time, and technologies like HBCC are really, really critical for delivering those kinds of improvements, even if the content is not available today. So that's gaming. As you can see, Vega is really healthy and kicking on gaming. So the next topic is professional graphics. Professional graphics again, you know, as you all know, we've been trying to break into that with disruptive technologies for a long time.
But only over the last 15 months did we get all the basics right. We covered all of the software gaps, certifications, and all that I mentioned. And now we are ready to take the next step: solve a problem that the competition doesn't solve. So last year we announced this technology called SSG, where we put really fast NVMe drives right on the GPU, right where the data is required. And now when you combine that with the high bandwidth cache controller, the capabilities of what you can do with the system are just kind of mind boggling.
And we started putting this technology in the hands of our developers a couple of months ago. And what you'll see are just a couple of demonstrations of what this will do for the content creation industry. So the first demo will be real time ray tracing, with and without SSG. Now, when I say real time ray tracing, for people that follow graphics, that itself is just something that was unheard of until recently. GPUs have come a long way to do real time ray tracing, but you will still see it is not quite real time, because of the dataset sizes.
It's cool if you just have a sphere in the middle of a nice room and say I'm doing real time ray tracing, but if you want to do a realistic dataset like this, you need several hundred gigabytes or terabytes of data. So let's go look at the first demo. So what you're looking at here on the left, both are real time path traced, running on the GPU with our ProRender software, and somebody's house is modeled here; it's pretty interactive. Like I said, a couple of years ago, if you said you could do this, you know, nobody would believe you. Now let's go to the right with SSG on.
Omar, move this. Much smoother. It basically just moves. Like I said, that was with training wheels. This is training wheels off.
And that's what the creator wants, right? Training wheels off, the amount of creativity it gives them and the flexibility it gives them. And that's kind of, you know, very, very disruptive for a content creation guy. And if you are in this community, you would know. Let's go to another demonstration of this capability.
Omar will need a couple of seconds to set up, and this is actually changing the Adobe Premiere workflow. So we've been working with Adobe on SSG, and they have integrated SSG capability into Adobe Premiere. And as you know, dealing with high resolution video, 4K video, like a 22 minute episode that you cut for, like, a TV show, is 2 terabytes when they edit, when they deal with raw data. That's 2 terabytes just for a 22 minute episode. And now, actually, the whole Hollywood workflow is changing to 8K.
They want to actually capture in 8K, process in 8K, and then master at the end to whatever resolution they distribute, right, because they want to preserve the resolution throughout the editing process. That's how you get very, very high quality. So an 8K video, say a 22 minute clip, is 8 terabytes of video. You can't load that on any GPU. You're talking gigabytes on a GPU.
There are 8 terabytes of video that you have to deal with. So here we have a small demonstration of what SSG will do for video. So on the left here, with SSG off, this is an 8K video clip. It actually, you know, feels weird for me to say 8K video loading on a PC, because again, a couple of years ago, you couldn't even load this on a PC. It's playing, but it's dropping frames.
You can see it's dropping frames. Now come over to the right and run SSG on, no dropped frames. Every frame that has been captured on the camera is playing back in the editor. And that's, like I said, again, training wheels on, training wheels off. Right?
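The 2 terabyte and 8 terabyte figures quoted above can be sanity-checked with a back-of-envelope calculation. The frame rate and bytes-per-pixel below are my assumptions (roughly uncompressed footage at 24 fps, 8 bytes per pixel), chosen only to show the order of magnitude:

```python
# Back-of-envelope sketch of the dataset sizes quoted above. The exact
# codec, bit depth, and frame rate are assumptions here, picked to show
# why a 22-minute raw clip lands in the terabytes.
def raw_clip_size_tb(width, height, fps, minutes, bytes_per_pixel=8):
    frames = fps * minutes * 60
    return width * height * bytes_per_pixel * frames / 1e12

print(round(raw_clip_size_tb(3840, 2160, 24, 22), 1))  # 4K clip: ~2.1 TB
print(round(raw_clip_size_tb(7680, 4320, 24, 22), 1))  # 8K clip: ~8.4 TB
```

Because 8K has 4 times the pixels of 4K, the raw footprint quadruples, which lines up with the jump from roughly 2 terabytes to roughly 8 terabytes per 22-minute clip.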
That's what SSG is going to do when combined with Vega and the high bandwidth cache controller technology. So that's one of the demonstrations, and we have about half a dozen of these things I call disruptive for professional graphics lined up this year. You'll hear more about them at SIGGRAPH, which is coming up in a couple of months. And so that's SSG.
But the next topic, and I know this is what many of you are here for, right? Now, I saved the best for last: machine intelligence. I know many of you want to know what we are doing in machine intelligence. So I want to first start off with basically what our strategy is, and I've been thinking about how to communicate this, because the world of machine intelligence is extremely complicated. There's a lot of conversation going on.
And so I kind of, you know, start off with what I call the mountain diagram, right? There is a nice, beautiful peak right in front of us, and there are many options ahead of us for what our machine intelligence strategy could be. We have GPUs. We could take what I call the GPU path. We also have CPUs.
If you see the scattered path there, which is maybe kind of many hiking loops, that could be our competitor's strategy: you know, many architectures that they have under the umbrella, and nobody knows what exactly they're doing for machine intelligence. If you look at our strategy, as Lisa said, we've been talking about heterogeneous computing and thinking about this problem for a long time. We still believe that is the right approach. And I'll get back to this picture after I demonstrate a few things. So keep this in mind: our goal is to get to that summit.
We can get there via many different paths. We could go after many small hills, the GPU-only or CPU-only approaches. But in one sentence, if you ask me what AMD's strategy for machine intelligence is: we want to get to that peak. And we believe the path to that peak involves CPUs, GPUs, some accelerators, an open interconnect, and most importantly, most, most importantly, an open approach to software. Imagine the world if you rewind back to 1997, before the explosion of the web and the data center, if there was no Linux, right, and you had to rely on some companies for their servers or server software. Where would the world be?
That's exactly what you need to be thinking about when you think about machine intelligence and GPUs and an open ecosystem. Now, we've been working on the software stack for a while. We have talked about this stuff. We have built a great foundation with our Radeon Open Compute platform. We have support for the emerging heterogeneous compute C++ software, and HIP, which supports CUDA code on the AMD platform.
We have OpenCL, which we have supported for 7 years now, and we also have support for Python. And on top of it, and this is the key, middleware and libraries; this is what enables machine intelligence software. On top of that, we have the machine intelligence frameworks, Caffe, TensorFlow, and all of this running. And we have been chiseling away one framework after another, enabling all of this on top of our platform. And you have machine intelligence applications running on top of these frameworks.
So what you'll see today is demonstrations of where we are, both in our hardware and software. So before I go there, let me introduce, for folks who are not familiar with deep learning and deep learning benchmarking, the benchmark called DeepBench, developed by Baidu. And, you know, if you look at the web page, it says this is a benchmark of which hardware provides the best performance on the basic operations for training deep neural networks. And if you look at the current, publicly available data: the Intel Knights Landing, which is their machine learning accelerator today, does this workload in 569 milliseconds. NVIDIA's last generation does it in 288, almost 2x faster than the Intel architecture. NVIDIA Pascal is another impressive step, more than 2x faster than their Maxwell generation. A pretty impressive set of results by GPUs. When you look at this chart, you should have no doubt in your mind why GPUs are dominating this stuff.
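To make the quoted milliseconds concrete, the implied speedup is just the ratio of total times; a trivial sketch using only the numbers stated above:

```python
# Quick sketch: the relative speedups implied by the DeepBench-style
# timings quoted above (total time in milliseconds; lower is better).
times_ms = {"Intel Knights Landing": 569, "NVIDIA Maxwell": 288}
baseline = times_ms["Intel Knights Landing"]
speedups = {name: baseline / t for name, t in times_ms.items()}
print(round(speedups["NVIDIA Maxwell"], 2))  # 1.98, i.e. "almost 2x"
```

Since total time is the metric, halving the time is a 2x speedup, which is how the "almost 2x" comparison between Maxwell and Knights Landing is reached.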
But what's wrong with this picture? There's no AMD. Where is AMD? We're not even in the conversation. We're not in the picture today.
I know many of you have asked me about that, many of you have asked Lisa about that, many of you have asked, you know, the rest of my colleagues about that. Where is AMD? We've been working away on this problem for a while. And, you know, what I'm going to show is the incredible work of a small team of our engineers over the past year. And let's see where we are on DeepBench with our new architecture, running live.
Again, doing live demos is always, you know, hugely risky. Every time I do this, I say next time I'm not going to do a live demo, and then, like an addict, a live demo addict, I come back and do a live demo again. We have 2 servers in the back, you know, running DeepBench in real time. And by the way, fortunately, it's a very fast running benchmark.
Actually, it's fast running just because it's running on GPUs. Right? If it was running on, you know, something else, you'd be here till, you know, dinner time. So what you're going to see: this benchmark is benchmarking basically how fast deep neural network convolutions run on a GPU. And it contains various image sizes and various convolution sizes, 7x7, 5x5, and so on.
So it's a very good mix of, you know, the real world that they're benchmarking. So when I say run, you're going to see each of these kernels pop up running on both GPUs and then the total time will pop up on the right. So go ahead, Omar in the back is kicking the benchmark off and NVIDIA P100 is their most expensive GPU that you can buy today. It's done. I said it's fast.
As you can see, Vega is over 30%, almost 40%, faster than the fastest GPU available today. Now, let's get back to the chart.
You know, this is significant for us. Now, you know, let me caveat a few things. I'm not declaring victory here over NVIDIA or anyone else. This is basically stating that AMD is finally on the chart. You know, a year ago, we would have taken just being on the previous chart.
Okay. You know, we are on the chart with a fairly impressive number. We got a lot of work to do to enable all our customers. They are excited. They see this number, they are excited.
And when they see this number and what they can do with Naples and 4 Vegas, Naples and 8 Vegas, they are beyond excited. They are just, when can I get this in my hand? When will you deliver me software? When will you deliver me platforms? Right?
So we have a different kind of pressure now than we had last year. And you know, people ask us, NVIDIA, Intel have so much investment in these areas, how are you going to compete? You know, we don't need a lot of people. We don't need a lot of money. We need the right 6 people, right, the right 8 engineers to work on this.
And by the way, that's what it took. It's not a lot of engineers, because to enable this machine learning framework, remember, you know, there is this myth that there is this large piece of software that you need to enable. No. You need to enable TensorFlow, and TensorFlow calls a bunch of functions that are encapsulated in MIOpen. And MIOpen sits on top of our open compute library.
And enabling TensorFlow was not a 500 person operation; it's a 20 person operation. We do have 20 engineers. Remember I said we have 3,000 engineers? So we can definitely do that. And now, any presence of mine anywhere, whether it's social networks or stages or anything, over the last 6 months, you know, would generate a lot of discussion around: where is Vega? When are you launching Vega?
And even over the last week or so, as some of you have been following, there's been a ton of speculation about Vega. We have talked about Vega, we talked about its architecture, we teased you on its performance, and now we've teased you on its machine intelligence performance as well, which is spectacular. And now the question is, where is Vega? So, there is Vega; there is Vega in the room. And before, you know, I show you that, I just wanted to read you this quote that is, you know, an inspiration for us.
Let me just change one sentence here and I'll read: we stand today on the edge of a new frontier, the frontier of machine intelligence, the frontier of unknown opportunities and perils. Beyond that frontier are uncharted areas of science and space, unsolved problems of peace and war, and so on and so forth. If you're JFK fans, you probably know this by heart. So with that, I'm going to announce the first edition of Vega today. We are calling it Radeon Vega Frontier Edition. And what this card is, is it's aimed at the data scientists, the immersion engineers, the people who create immersive experiences, and the top of the line product designers.
These are the people that are not afraid to touch new technology and push it to the limits. So we have a really, really interesting strategy around this Radeon Vega Frontier Edition and on how to reach this creation community, and we'll be detailing it over the next several weeks. But I know you want to know more, right? So let me give you the specs and also availability. First off, it is coming late June.
So this will be shipping and will be in the hands of our pioneers late June. And in terms of specs, we're comparing it to the current fastest GPU in our lineup, which is the Fury X, the last HBM GPU we've done. It's over 1.5x the peak teraflops of this GPU; that's the FP32 mode. It's over 3x the machine intelligence flops, the FP16 flops, and over 4 times the memory footprint, which is, by the way, very, very important for the machine learning community.
That has been one of the issues with Fiji. Even though Fiji had huge potential for machine learning performance, the 4 gigabyte HBM1 limit was a very limiting factor for us to gain any foothold there. Now we have a GPU that does not have that limitation and has spectacular performance, as we have demonstrated in DeepBench. Now, I told you I have something here to show you. I do have a product.
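For reference, the multipliers quoted above can be checked with quick arithmetic. The Fury X and Vega Frontier Edition figures below are assumptions taken from public spec summaries, not from this talk:

```python
# Assumed public figures (not from this transcript):
#   Fury X:  ~8.6 TFLOPS FP32, FP16 at the same rate, 4 GB HBM1
#   Vega FE: ~13.1 TFLOPS FP32, packed FP16 at double rate, 16 GB HBM2

fury_x_fp32, fury_x_fp16, fury_x_mem = 8.6, 8.6, 4
vega_fe_fp32 = 13.1
vega_fe_fp16 = 2 * vega_fe_fp32   # packed math: two FP16 ops per FP32 lane
vega_fe_mem = 16

print(f"FP32:   {vega_fe_fp32 / fury_x_fp32:.2f}x")   # "over 1.5x"
print(f"FP16:   {vega_fe_fp16 / fury_x_fp16:.2f}x")   # "over 3x"
print(f"Memory: {vega_fe_mem / fury_x_mem:.0f}x")     # "4x"
```

Under those assumptions the ratios come out to roughly 1.52x, 3.05x and 4x, matching the claims on stage.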
I do have this card. It wasn't a paper launch, even though it's only available end of June. This is a beautifully hand designed card. In fact, the shroud, everything about this card, screams premium, not commodity. And this is our fastest card today and, as far as we know today, this is the fastest machine intelligence card in the market when it ships.
This will be available end of June. So that is Vega Frontier Edition.
By the way, the chip underneath it, which you may have seen, is much smaller than this. It is impressive. Okay, so I said I'll come back to the mountain picture. With the progress we've made in the last 12 months, the combination of what you'll hear from Forrest about Naples, the progress we made with Vega, and also the Radeon Open Compute stack, we are strongly in the discussion on machine intelligence. And I promise you, every chart you'll see out there from the moment we ship the Frontier Edition in June will have Vega on it.
Whether we win or we lose, we will be on the chart. And as you can see on DeepBench, the early results, we win. Chapter 3 of this book will be next year. We are writing chapter 2 now, and this will be continued. So thank you, and please make sure you get hands on experience with all the Vega demos that are in the live area in the cafeteria, right?
Yes. Thank you.
Ladies and gentlemen, we will now take a short break from our program and resume in 15 minutes. Please welcome Senior Vice President and General Manager, Enterprise, Embedded and Semi Custom Business Group, Forrest Norrod.
Hey, good afternoon. Welcome back. And I hope everybody enjoyed the break. I want to talk to you now about the 2nd major area of the company, the unimaginatively named Enterprise, Embedded and Semi Custom Group. I hope you'll all agree that although the names of our groups, CG, EESC, are way too literal and unimaginative, Vega, EPYC and Radeon redeem us somewhat in naming.
But I want to talk about a new day for the data center that I think AMD is going to bring back. And before I go solely into EPYC and what we're bringing to the data center, I'm going to reflect briefly on the rest of the EESC portfolio as well, because the way that I view it, it's a continuum. In enterprise, embedded and semi custom, we focus on providing highly performant solutions to the most demanding customers, who in turn incorporate these products into their end products that power the Internet, that delight consumers around the world, that provide the brains behind medical imaging equipment, that invisibly power large sections of industry. And so those are the 3 pieces, and there is a theme behind all 3: highly technical solutions sold to the most discerning customers. And EESC, fundamentally, we view as a principal growth engine of the company. Alongside what Jim and Raja are doing in Computing and Graphics, we think EESC provides a strong foundation for future growth through both differentiated data center products, EPYC most notably, but also by expanding the opportunities we have in both our semi custom and embedded portfolio.
And semi custom, that's the business that quite frankly right now could be argued to be the core franchise for the company. It's the business that combines the x86 CPU core and graphics into devices that, most notably, Microsoft and Sony have used to create compelling experiences for tens of millions, over 100,000,000 now, of game console customers around the world. And it's a business we're very proud of, not just because of the products, but because of the endorsement that our customers have given us through the partnership that we've established over many years. But that's where we started with semi custom, back at the first part of the last cycle of game consoles.
But since then, we've been building on that foundation. And as we look forward, we're taking 3 key pillars of that business and using them to drive the business forward. The first key pillar: we've refreshed our core technologies, our core x86 and graphics technologies, taking advantage of what Mark and Raja spoke so eloquently about, these high performance engines of graphics and compute. The second, and it's hard to overstate its importance, is what Mark talked about in Infinity Fabric, because what we've done with the investment that Mark and the team have made over the last 5 years is create a capability to rapidly and efficiently assemble high performance solutions, not just for our standard roadmap, but also for our semi custom engagements in not just game consoles, but beyond. And the third: we have built a strong practice in doing co-development, again, with the most demanding customers in the world.
Those three pillars lead to highly differentiated end products that establish new price performance and capability benchmarks for the markets that they serve.
And I think that will continue to be the focus for us for several years to come. Semi custom, although it started in game consoles, is a capability that you will see us bring to a wider set of markets alongside our embedded focus. And our embedded business, which I'm not going to speak to very much today because of the focus on EPYC and the data center, is where we take our standard products and accompany them with the right platforms, the right software, the right certifications to power thin clients, to power medical devices, to power avionics in some of the world's most advanced jet aircraft, to power digital signage. And we view embedded and semi custom, along with what we're doing with EPYC in the data center, as points on a continuum, where we can offer our customers a consistent set of technologies, CPU and GPU, with consistent drivers and consistent software investment that allow them to span a wide range of price and performance points in these key markets. And we'll talk more about this as the year goes by.
But for today, I want to focus, of course, on the central theme of my talk, which is the number one priority for us: reestablishing data center leadership for AMD, not just participation, but data center leadership. Now quite frankly, if I were standing on any other stage representing any other company with, rounding to 0%, market share of data center, you should rightfully say, Forrest, data center leadership? What gives you the right to assert that that is your goal, much less the ability to convince me that AMD has a chance to deliver it? Well, for that, I have to look back.
And if I look back at the server and how today's server came to be, I think, and I hope you will agree, that the server of today is based on a strong foundation of AMD technical innovation and leadership. Because at the heart of today's server, in every data center in the cloud, in every data center in the enterprise, is a 64 bit x86 processor, the technology for which was first introduced by AMD. It's a multi core processor, which, Lisa gave away my punch line here, was first introduced by AMD. They're processors that are scaled and connected by a high speed coherent interconnect, allowing the system to scale up across cores, across sockets. Again, you guessed it, first introduced to the market by AMD. All of the power in those cores is effectively utilized by software, and scaled up and scaled down by virtualization, enabled first by the virtualization hardware introduced by AMD.
And then lastly, I'll use Mark's term feeding the beast, keeping all of that fed are the integrated memory controllers that bring DRAM closer into the processor and allow those cores to be effectively utilized. Again, introduced first by AMD. And so the reason that I think it's plausible, predictable, inevitable that AMD can reenter the market and establish technical leadership once again is we've been there before. We understand what it is to participate in the server market and to drive innovation in the server market because the shape of the modern server is the shape that we introduced over a decade ago. But since then, I will say that server technology has been pretty stagnant.
Over the last decade, particularly the last 6 or 7 years, I would indict our industry for incrementalism. Each new generation of processor adds a few more cores, a little more memory bandwidth, tinkers at the edges with the features, but seldom makes any substantial jumps in either performance or capability. Also, that architecture has not evolved. The fundamental architecture in any x86 server out there, brand new, just deployed, is the architecture that I showed you a moment ago, designed in a different era for a different data center, uninformed by the needs of the cloud. And then lastly, the performance, because of that incrementalism, and that hasn't happened in an even fashion, it's lurched forward, has been limited by unbalanced designs, where the cores can't be fed, the IO is inadequate and performance falls short.
And so that's where we are today in our opinion. And that presents a fantastic opportunity for AMD to come back in and help reintroduce innovation, help reintroduce competition to the data center. And the data center is a very interesting TAM. The CPU side of this, this is the TAM that Lisa showed you before, $16,000,000,000 of the $21,000,000,000 data center TAM that she articulated earlier is in CPUs, both in traditional server as well as in storage systems, which more and more are just servers running with appropriate software and a lot of disk and networking. And so that's a great TAM.
It's a very interesting TAM. And when we looked at it and said, okay, we're getting back in, we looked at the distribution of that TAM and we said, look, it's gotten to the point where the inevitable migration from higher socket systems, a lot of servers used to be 32 socket, then 16, then 8, then 4, then 2, has really attenuated the market at the high end. And the vast majority of server systems out there today are 1 socket or 2 socket, and really principally 2 socket. And so that's where we're focusing.
We're focusing on that 91% of the market, which is where the natural trends of technology have driven the market to. And so the way that we're attacking that market is, you guessed it, EPYC. We're attacking that market with a part that Mark and the whole team at AMD have wrought from those 32 Zen cores, offering tremendous power and flexibility. But it's a balanced design. So each EPYC chip with those 32 cores is coupled to 8 memory channels to keep that beast fed, to keep performance available to applications.
We're also adding 128 lanes of high bandwidth IO on each EPYC chip, again, so that we can pull in data from the network, from the drives, from flash. And none of this would matter if we didn't also support security, something that has become all too evident in today's world, as we were reminded earlier this week and over the weekend. Security is paramount. It doesn't matter how fast your chip is; if it can't be part of the security solution, it's part of the problem. And so that's what we brought.
Now when you look at that, that's a heck of a lot of capability. And one question that folks have asked me in the wake of our reveal of the EPYC specs, code name Naples, back at Ryzen Tech Day is: how the heck did you guys stick all of that capability onto one part? And if you're really telling me you've got that much IO, that much memory, that many cores, you're also telling me that you've got a big business problem, because you've got a huge chip that's going to have a very difficult time reaching the volume segments of the market, and you're going to be under continuous pressure on margins. And if we thought about this in the traditional way, they'd be right. But as Mark said earlier, when we started designing the EPYC roadmap, we said, look, we have got to break the constraints of Moore's Law and the limitations that it gives us.
And the key to that was Infinity Fabric. Infinity Fabric allows us to efficiently scale within a die, across multiple die and across multiple sockets. And so when you take a look at what we've actually delivered, and Lisa showed you the EPYC package earlier, let me show you the next piece of how we are delivering that much capability to the market in one part. And so this is EPYC with the heat spreader taken off. And for those of you that are close enough, you can see that we actually have 4 die on 1 package.
And so each one of these die is an 8 core complex, knit together with the Infinity Fabric that allows us to efficiently and effectively scale them and have 4 pieces of silicon delivering the performance of 1 integrated unit. That allows us to deliver tremendous performance and capability without running into the limitations of high cost, low yield and high variation that you would get out of 1 monolithic device. And it tremendously reduces our costs. And so that architectural innovation of Infinity Fabric lets us solve the feature problem a different way. And it lets us efficiently scale to very large 2 socket systems by interconnecting the 2 EPYC chips that you see there.
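A toy yield model illustrates why four small die can beat one monolithic die on cost. The defect density and die areas below are assumed round numbers chosen for illustration, not AMD's data:

```python
import math

# Toy Poisson yield model: Y = exp(-D * A),
# D = defects per mm^2, A = die area in mm^2. All inputs assumed.
D = 0.002            # assumed defect density
small_area = 200.0   # assumed area of one 8-core die
big_area = 4 * small_area  # hypothetical monolithic 32-core die

y_small = math.exp(-D * small_area)
y_big = math.exp(-D * big_area)

# Silicon cost per good 32-core part, in units of "cost per mm^2 printed":
cost_mcm = 4 * small_area / y_small   # four independently yielded small die
cost_mono = big_area / y_big          # one big die

print(f"small-die yield: {y_small:.1%}, big-die yield: {y_big:.1%}")
print(f"MCM silicon-cost advantage: {cost_mono / cost_mcm:.2f}x")
```

Because yield falls exponentially with area, the multi-die package wins by roughly the ratio of the two yields under these assumptions, even before counting the binning and variation benefits mentioned on stage.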
So in a 2 socket platform, we can deliver 64 physical cores, support for 4 terabytes of memory across 16 memory channels, and 128 lanes of PCIe. That's a lot. What does it mean? It means 45% more cores than the highest end part that our competitor offers today, 122% more memory bandwidth and 60% more IO. That's not incrementalism; those are big leaps to get the system back into balance.
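Those percentages are consistent with a comparison against the competitor's top 2 socket part of the time. The Intel figures below, 22 cores, 4 DDR4-2400 channels and 40 PCIe lanes per socket, are assumptions based on public Broadwell-EP specs, not numbers from this talk:

```python
# Per-socket figures; EPYC DDR4-2666 and per-socket PCIe in a 2S config
# (64 usable lanes per socket, the other 64 carry the socket link) are
# assumptions from public material.
epyc  = {"cores": 32, "mem_channels": 8, "mem_mts": 2666, "pcie": 64}
intel = {"cores": 22, "mem_channels": 4, "mem_mts": 2400, "pcie": 40}

def pct_more(a, b):
    """Percent by which a exceeds b, rounded to the nearest integer."""
    return round(100 * (a / b - 1))

cores = pct_more(2 * epyc["cores"], 2 * intel["cores"])
bandwidth = pct_more(epyc["mem_channels"] * epyc["mem_mts"],
                     intel["mem_channels"] * intel["mem_mts"])
io = pct_more(2 * epyc["pcie"], 2 * intel["pcie"])

print(f"{cores}% more cores, {bandwidth}% more memory bandwidth, {io}% more IO")
```

With those assumed inputs the arithmetic reproduces the 45% / 122% / 60% figures quoted on stage.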
So with that, we've produced what we think is a very high performance, balanced system that is quite frankly very different from what our competitor has today and what we anticipate they're going to be introducing in a few months. And what that translates into is, we believe, an opportunity for AMD to assert leadership performance in key workloads that are really driving the growth in both the cloud and the enterprise data center. Workloads like certain segments of high performance computing; virtualized cloud and enterprise data center general purpose compute; machine learning, big data and analytics, where the memory bandwidth and the memory capacity allow us to process data sets much faster than the alternative; and of course, software defined storage, where that marriage of IO, core density and memory bandwidth, again, allows us to help accelerate the migration off of proprietary storage hardware onto software running on industry standard servers. Now a couple of months ago, at the Ryzen Tech Day, we did a demo in that first category, high performance compute, where we showed reservoir simulations, simulating a large oilfield, and seismic data that was indicating where the oil was flowing in a large volume of rock.
And we demonstrated at that time that against the very highest end of Intel's currently shipping systems, the Broadwell E5-2699A v4, we could offer in an EPYC system 2.5x the performance. Now I got a few people questioning that and pushing back, saying, hey, you've got a heck of a lot more memory bandwidth than Intel. That's not really fair. You're just showing us how much more memory bandwidth you have than Intel. And my point was, yes, you're absolutely right. Because they are choking the performance of that system because of the limitations of that unbalanced architecture, whereas in the AMD system, the cores were all pegged to 100% utilization the whole time.
It's the power of a balanced system. But I'll tell you what, today we'll turn our attention to the next workload area: virtualized workloads for both the cloud as well as the enterprise data center. And I'll show you a different benchmark that's CPU bound. And so for this one, we're going to do a demonstration of what we think is a fairly typical configuration. We're going to run 8 VMs on each machine.
Again, we're going to take the highest end Intel system that's available today. And the workload that we're going to be running on those 8 VMs is compiling the Linux kernel, the Ubuntu Linux kernel. The guest OS is Ubuntu as well. We think this is a relatively representative workload, not that everyone's always compiling the Linux kernel, but it's a highly compute bound workload running in a virtualized system where each VM has multiple cores. And so the configuration that I'm going to show is once again that Intel 2699A server, the highest end Intel server available today, running against an EPYC system: 2 socket, 64 cores, 16 memory channels, blah, blah, blah.
So high end to high end. And what we're going to show is a graphic that you'll see in just a moment. I'll describe it first here because it's going to go by relatively fast, though not as fast as Raja's. Compiling a Linux kernel, even on a high end CPU, is not milliseconds. So, a few seconds.
But what you'll see is a circular graph that's showing the build out of the different parts of the kernel and their associated routines. So you'll see the IO routines, the files associated with the IO routines, the files associated with the kernel, etcetera, being built out. And so if we go to the demo, Kevin, kick it off, you'll see on the left the AMD EPYC system, and on the right the Intel system. What we're showing is the average across those 8 VMs in real time.
And so in 15.7 seconds, EPYC completes the full kernel, versus 22 seconds for Intel. You can do the math. It's a lot faster. And so that, we think, is representative of a real workload. But truth be told, in my opinion, this is a little bit unfair, because if we look at the server market and home in just on that 2 socket market, that's 80% of that market, 80% of the 91%, that dual socket market.
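Doing that math on the quoted demo numbers:

```python
# Kernel-compile demo times quoted on stage.
epyc_s, intel_s = 15.7, 22.0

speedup = intel_s / epyc_s          # how much faster EPYC finishes
time_saved = 1 - epyc_s / intel_s   # fraction of wall time saved

print(f"EPYC is {speedup:.2f}x faster ({time_saved:.0%} less wall time)")
```

That works out to roughly a 1.4x speedup, or about 29% less wall-clock time on this particular demo.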
If we look at that and we look at the performance, you'll see that the E5-2699 processors occupy the high performance band of that market. The low performance systems, the E5-2603s, etcetera, are down around, say, 200 to 250 SPECint_rate per system, whereas the E5-2699s are up over 1,000 SPECint_rate, compiled with the GCC compiler. So those are very high performance systems relative to the rest of the market. But in truth, they're very low volume. The vast majority of server customers do not buy the highest end CPU that's out there.
So the demo that I was running was relatively unrepresentative of the systems that are out there today. In fact, much more than 50% of the current 2 socket server market is the E5-2650 processor and below, processors of that speed or below. It's more than 50% of the market. So a fair thing to do would be to ask: how do we compete against those processors? And when we started thinking about this a few years ago, it became clear to us that we had an opportunity in this architecture, with Infinity Fabric and what we could do with the capability in each socket, to develop a processor where a single socket configuration could outperform the vast majority of 2 socket systems from our competitor, offering customers a whole new choice to drive much higher performance, much better power efficiency and much lower total cost of ownership.
And so what I'd like to do is show you another demo, where we're going to take that E5-2650, sort of the high end of the middle of the market, and run it against 1 EPYC processor, still 32 cores, you can see the specs there, running exactly the same workload. Again, 8 VMs, running the Linux kernel compile. And so, Kevin, why don't you go ahead and kick that off? This will take a little bit longer because we're not comparing the highest end to the highest end anymore.
But what I think you will see becoming clear pretty quickly is that already the single socket EPYC system is outperforming the dual socket Intel processor. And here, in a few seconds, we will wrap this up. And this preproduction EPYC processor completes ahead of the Intel 2 socket system. Okay. So what does this mean?
Well, if you take a look at the systems a little more closely, this is a photograph of the board that's powering the Intel 2 socket system. That's the board. And to scale, this is a picture of the EPYC single socket system that just ran that demo. And if you put them against each other, what you quickly see is that EPYC is a much denser solution: it uses a lot less space, uses a lot less power, but still provides more cores and more IO, with memory bandwidth and capacity that are actually slightly better than that Intel system. And so for us, we've got the opportunity to really disrupt the volume 2 socket market.
Our one socket strategy is not about doing a better one socket. Our one socket strategy is about offering customers a highly optimized, enterprise or cloud capable single socket alternative to the 2 socket systems that hitherto they've been forced to adopt if they wanted a certain amount of memory, performance or IO capability. And it's not just us saying that this provides much better TCO. It's not just us saying that this provides much better power consumption. I'd like to show you a brief video from one of our early partners that's been engaged with us in this effort.
Dropbox is one of the world's largest collaboration platforms, with more than 500,000,000 users and more than 200,000 business customers using Dropbox every day to get their work done. And that means we have a lot of different workloads in our infrastructure, all the way from the requests that we handle for millions of users, to syncing billions of files every day, and even more compute intensive workloads that require us to process data to generate previews and to enable search on the content that people store at Dropbox. Our customers and users expect our products to be fast, reliable and efficient. And we are committed to using the best hardware, delivered at value to the end users. So we have been looking for processor solutions that provide us with high performance.
Single socket solutions are very appealing to us because of the architecture, the cores and the memory bandwidth they provide, and that fits with what our needs are. We got early access to EPYC and everything that we have seen so far is extremely encouraging and exciting. Our level of engagement with AMD has been extremely high. I'm looking forward to EPYC being part of our roadmap to deliver the best value to end users. And I'm very, very happy with the collaboration and the partnership we have developed during the course of evaluating EPYC.
But I think that you'll see, whoops, I'm sorry, I think that you'll see, as we bring EPYC to market, that we really have a 2 pronged approach. We have a leadership 2 socket part, where we think the Zen core, the power efficiency of that core allowing it to operate at high frequency in a power constrained environment, and the balanced architecture that we've built around it provide us leadership performance for certain critical workloads. But we also have a disruptive 1 socket server play, where we're offering all the capability that the vast majority of customers currently buying 2 socket server processors need, at a substantially better TCO, with a no compromise 1 socket.
And so if you look at the status quo that's existed for years in the industry, you see a little bit of attrition just chipping away at the 4 socket and above space. But the 2 socket and 1 socket split has been static for many years, because there hasn't been a one socket alternative that offers the performance or capabilities that users need. With EPYC, we're going to disrupt that status quo. We think we have an incredible 2 socket part. So don't get me wrong, we've got a great 2 socket part.
But we also have an alternative for customers to allow them to right size the system to their workload without artificial constraints. And we think that's going to redefine large sections of the market. But beyond just the socket configuration, we're also going to disrupt the status quo in another way. We talked when we introduced Ryzen about Ryzen being unlocked. Every Ryzen chip was unlocked in terms of frequency.
You could operate it as fast as you wanted, up to the limit of that particular chip. Every EPYC processor is unrestrained. Every one supports all the IO channels, all the reliability features, all the memory channels, supports high speed memory, has a complete security stack and integrates the full chipset. Unlike our competitor, we're giving the person buying an EPYC server all of the reliability, capability and feature set of a true server, regardless of what performance level they need for their particular application. And so EPYC is unrestrained.
And I think that is also going to disrupt the status quo. The one other application area that I wanted to talk to you about is machine learning. Both Raja and Lisa alluded to it earlier: we've got a great opportunity with Radeon together with EPYC. And Raja showed you a benchmark, but I want to click into the system architecture. In a system today that has a high number of GPUs for machine learning applications, 4, 6, 8 GPUs, the architecture looks like this. You have to use the 2 socket server, not because you need the CPU horsepower, that's all being done on the GPU, but because you need the infrastructure, the features and the capability of that 2 socket platform in terms of IO, in terms of memory. But their IO is insufficient.
And so you need PCIe switches to allow the GPUs to be interconnected. You need a storage controller to talk to whatever drives you need to feed this. But those PCIe switches are barriers to performance. They add latency. They choke down the bandwidth that's available between the memory and the GPUs.
And many of them choke the bandwidth between the GPUs. We designed EPYC and Radeon Instinct together. Some people ask me, why did you put so much damn IO on EPYC? We did it for this reason. That's it.
That's almost the schematic for the server, not quite, not quite. Don't get me wrong, there are pull ups, there are regulators, but that's it. And that single EPYC processor, hooked directly to memory, hooked directly to drives, hooked directly to a large number of Radeon Instinct cards, offering full bandwidth across the Infinity Fabric inside of EPYC, non blocking from peer to peer, non blocking to memory, is an incredible solution that offers extremely high performance and unmatched TCO. And so we think that Radeon Instinct and EPYC together are the perfect choice going forward for machine intelligence. And because of everything that we just talked about, because of the performance and features that we've brought to market, we've got excellent market momentum.
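A quick lane budget shows why 128 PCIe lanes on one socket can feed a multi-GPU machine learning box without PCIe switches. The allocation below is an assumed example, not AMD's published reference design:

```python
# Hypothetical allocation of a single-socket 128-lane budget.
LANES_AVAILABLE = 128

allocation = {
    "Radeon Instinct GPUs (4 x16)": 4 * 16,
    "NVMe drives (6 x4)":           6 * 4,
    "Network (2 x8)":               2 * 8,
}

used = sum(allocation.values())
for name, lanes in allocation.items():
    print(f"{name:30s} {lanes:3d} lanes")
print(f"{'total':30s} {used:3d} / {LANES_AVAILABLE} lanes -- no switch needed")
```

Under this assumed configuration, every GPU and drive gets a direct, full-width link to the CPU, which is the "non blocking, no switch" point made on stage.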
Over 30 systems are in flight, 30 different systems in flight to be introduced in the first and second half of 2017. We've seeded well over 5,000 EPYC production and production candidate samples to our ecosystem partners and customers. We have programs underway at the vast majority of hyperscale customers around the globe. And we are set and on track for launch this June. So with this, we're bringing EPYC, finally, to market.
And the promise of AMD getting back into data center leadership begins to be delivered. But you know what, promises notwithstanding, no matter how good EPYC is, if we didn't have a roadmap that we are committing to, that we are resourcing, that our customers could count upon, none of this would matter. Look, I was a server guy for many years. I was a customer for many years. And as a server guy, there's no way I would make that initial investment if I didn't know that my supplier, my partner, was going to be there for the long term.
And so beyond EPYC: Naples was the code name of the first EPYC processor, delivered with the Zen core. There are 2 more steps that we've already committed to and are working on, and of course, we're thinking about the one beyond. Rome, using the Zen 2 core and implemented in 7 nanometer technology, is the next iteration. And then Milan, using 7 nanometer plus technology and the Zen 3 core that Mark alluded to, is the step beyond that.
And with each one of them, we're going to deliver not just the core performance, but continuous innovation in the rest of the chip, the rest of the system and the features that we incorporate. And so I think with this roadmap, as well as the first step that we're launching with EPYC, we are delivering on that data center technical leadership that I articulated earlier. We want to return innovation to the data center market. We want to deliver with EPYC the clear choice for key workloads, where if you're doing certain things, there is no question that EPYC is the only choice for you. And we've got widespread ecosystem and partner support out there to support this effort.
And most importantly perhaps, we're not only back, but we're investing for the long term, so the customers who go with us can be confident that we're going to be with them for the long haul. And so with that, I could not be more excited to say: please come join us next month when we launch EPYC to the market and bring a new day back to the data center. Thank you. And as Raja said earlier, you're all numbers guys and gals.
So let me bring the real rock star up to the stage and introduce my colleague, my good friend, Devinder Kumar, the CFO of AMD.
I could say last but not least; I could also say the bottom line. You probably walked in here with a sense that there's a lot going on at AMD, and you probably realize now there is a lot happening and a lot to come. The last 2 years have been really, really good from the viewpoint of where we were in 2015, and a lot has happened, and I'll share some of the financials with you. You've heard from Lisa and my colleagues that there's definitely a lot happening and a lot to come. 2017 is a pivotal year for AMD.
This is the year, in particular, where we have set a target of finally making money after losing money for the last several years, in particular since the PC market shifted dramatically a few years ago. So let's get started. I'm going to cover the 2 year journey really quickly to anchor us on where we've been in 2015 and 2016 as we continued to invest in all of the products that you heard about today, despite all the difficulties and the challenges that we went through. I'll share with you more details on 2017, giving you incremental guidance beyond what we did on our earnings day on May 1st. I'll talk about the growth opportunities, in particular in terms of revenue and gross margin, and then, to round it off, share with you the long term target financial model, just like we did a couple of years ago, but looking out all the way to 2020, which is the long term horizon.
So, the 2 year journey. You know, AMD turned 48 just a couple of weeks ago; May 1st is actually the anniversary of AMD. We were founded on May 1, 1969. And we had a history of focus on the PC market, the traditional PC market, the traditional products as we call them today. In 2012, the market shifted, the PC market shifted, and surprised everybody, including us.
And our journey since the last FAD: a lot has changed, in particular with the investments we have made. In 2015, we were losing money. Revenue was declining 30% year on year. We had a high debt load. Many of you thought we wouldn't be around for this Analyst Day.
And we had to refocus the company in terms of what we did and what we had to do to, one, grow revenue, which we did. We had to make the roadmap investments behind all the products you heard about, whether it's Ryzen or EPYC or Radeon. And in particular, we did some strategic transactions in 2016 that helped us not just simplify the company from a business model standpoint, but also monetize our IP and, from my standpoint, shore up the balance sheet. So if you go to the growth side of it: revenue up, you heard Lisa say earlier, 7% year on year, 9% in the CG business and 5% in the EESC business. On the margin side, we had lived at the 28% level in all of 2015.
That improved by 3 percentage points in 2016 on an improved product mix, an increasing margin even before all the new products that you heard about today. And finally, if you go to the financial side of it: OpEx, because of the financial situation, we essentially maintained flat. But if you look under the covers, R&D was actually up and SG&A was down. And we had to fund all the things that you heard about today, making sure we made the R&D investments for Mark and Jim and Raja and Forrest. The PC business stabilized and, from my standpoint, we actually turned the corner. That took the losses, still losing money, from a $0.54 loss on a non-GAAP basis to $0.14. But we don't like to lose money, and that's why you hear about the roadmap over the next few years to continue to make progress on that particular front.
On the balance sheet, debt was down in 2016; with the product momentum we had and with the market opportunities we had, we reduced it by $500,000,000. And if you compare year on year, cash was up $500,000,000. We finished above $1,200,000,000 of cash, and that obviously took down the net debt pretty significantly from a year on year standpoint, although there is more work to do. The other thing I'll mention on our cash balance is that, unlike several tech companies, 90% of our cash is held domestically. So we are very comfortable with the cash balances that we have, even if they go up and down during the year with the seasonality of the business. So the 2017 financial priorities haven't changed. We want to grow revenue, we want to expand gross margin, and we want to be very disciplined from an operating expense standpoint and very targeted where we make the investments to get the biggest bang for the buck.
And finally, as I said earlier, we want to return to profitability. So there's a laser focus within the company to get back to profitability. Last year, in 2016, we were operating income profitable, and this year we want to be profitable at the net income level as we finish 2017. So the new products are ramping. You heard about all these products up here, including Raja showing the Radeon products.
These products are commanding better ASPs. It's a richer mix. I say this a lot to my folks when I talk internally: if your products are that good, your margin must go up. If your products are that good, your margin must go up.
And these are the products that will drive revenue and expand the gross margin as we go forward. We want to ramp the revenue, capturing the market opportunities available to us, but at the same time bring the gross margin up from where we are, below 30%, and make progress toward the longer term target model. Then, going to 2017 in terms of the incremental guidance I talked about: we already talked about low double digit revenue growth in 2017; we said that earlier this year. Gross margin for the year, approximately 34%. Continued discipline on the OpEx front; as you heard in our 2016 long term target model, we wanted to have OpEx between 26% and 30% of revenue.
We think this year we could do 31% of revenue in expenses. And CapEx of $140,000,000 which is an update from what we said earlier this year. As we said on the Q1 earnings call, we began capitalizing production mask sets starting in Q1 2017, and these are the production mask sets used in producing the products that you heard about. Our product development process has become a lot more predictable, and therefore they qualify for capitalization, and we went and did that starting Q1 of 2017. More importantly, the 2017 gross margin that I just gave you, at 34%, already contemplates the impact of the mask set amortization, or depreciation.
And this year, 2017, the impact of that is very small, very minimal, negligible. So we are driving towards positive net income for 2017 as one of our large goals. In summary, while we have stabilized the business, we have been completely focused on operational and financial execution, especially over the last 2 or 3 years. We want to carefully manage the business model, deliver great products and, more importantly, deliver shareholder value; we have seen some of that over the last year, but we think there's a lot more to come.
So let me turn to the long term priorities. The long term priorities are unchanged: continue growing revenue, continue expanding the gross margin, consistent profitability. Once we get to profitability in 2017, we want to stay there; we want to make more money compared to 2017. And then, from my standpoint, a strong balance sheet; the debt load on the company's balance sheet is still too high.
So we want to strengthen the balance sheet, and I'll share with you some of the goals that we have from a capital structure standpoint. These are all keys to our long term success, financially and otherwise. On the large market opportunity, you heard the TAM from all of the presenters today, but in summary, there is $60,000,000,000 plus of TAM available to AMD. We're in a very unique position. You heard from my colleagues that the technology that we have can access many of the market areas that are available to us.
The industry needs actually fit our core strengths. We just have to go deploy them and capture our fair share of revenue in all of these areas. We segment the market essentially into PCs, immersive devices and the data center. PCs, as Lisa said earlier, are our bread and butter. We know that really well.
250,000,000 plus units, and we have low market share. We have played on the non premium side, and today, with the Ryzen introduction, we are playing in the premium space, which is where the larger revenue and gross margin dollars are. That's important. In immersive, whether you talk about game consoles or graphics cards or even the embedded applications, it's a $15,000,000,000 TAM, and we think we can capture more share there.
And then finally, as you just heard from Forrest, in the data center there is explosive data growth, and that data growth is driving demand for compute. And with 0% market share, we can only go up from there, especially with the profile of product that Forrest just shared. The 2018 product portfolio: by the time we get to 2018, the products will be in the market and ramping. And with that, we believe we can have sustainable high performance product leadership. With the complete premium product portfolio, whether it's high end desktops, enthusiast GPUs, data center or the premium notebooks to come that Jim talked about earlier, we expect double digit revenue growth in 2018.
And then from a margin standpoint, here are the drivers, all playing in the premium space. Margin expansion is a crucial focus area for us. I know many of you write about us, and the thing you focus on most, now that we are out of the woods from the 2015 standpoint, is when the margin is going to go up. And these are the drivers that will drive that: premium desktop and mobile products on a multi generation roadmap, as you heard from Jim. Improving the GPU mix across the board, not just playing at the low end but across the full stack, because the new markets that we are accessing are actually highly margin accretive.
Re-entering the x86 server market, with high margins in that area. And beyond the products: we are, first and foremost, a product company, but we layer on top of that, when opportunities arise, IP monetization. We've done that, and we'll keep our eyes open for IP monetization in the future. Our IP is unique and very valuable, very valuable.
It is one of the strongest in the industry, and that is something we're very cognizant of. Going to the gross margin, this is in some sense my favorite slide, because the products drive the gross margin improvement. Like I said before, in 2015 we were at 28%; in 2016, 31%. In 2018, on the strength of all the premium products you just heard about, getting above 36% gross margin is our goal. And then in the 2020 timeframe, getting above 40%.
Getting to the 40% level, based on the premium products we're introducing and continuing to refresh them on a multi generation basis, is what we want to do. The roadmap decisions and execution are really going to be key to getting to these margin levels. So, going to the long term target financial model. Revenue: as you heard from Lisa, on a CAGR basis we believe we can continue to grow revenue at a double digit rate, in particular with the new product momentum that started in 2017 and continues in 2018 and beyond. Gross margin: we are projecting approximately 40% to 44%, which is a significant refresh from the last time you saw these numbers. In 2015, you would have seen that same line say 36% to 40%.
We are projecting above 36% in 2018, and then going from there to the 2020 timeframe, getting to the 40% to 44% gross margin level, really driven by higher ASPs and a much richer mix from a product standpoint. OpEx: we've gotten good at that. We've managed OpEx really tightly, very disciplined. We modulate it depending on what the revenue profile is and, in particular, remain targeted so we can take advantage of the OpEx leverage as revenue grows; we will not let OpEx grow as much as the revenue is growing.
And in particular, on the OpEx side, I will say we want to continue to fund the product roadmap and all of the R&D investments that are needed, while at the same time maintaining lean SG&A expenses. In the long term, we think we can get to non-GAAP EPS of greater than $0.75 which is what you see on this slide, improving the profitability from where we sit today. On the capital structure side, we are managing cash annually around $1,000,000,000. Now, the cash does go up and down; the seasonality of the business dictates that the first half of the year is typically lower and then it builds up in the second half of the year, but we manage that. And we manage that from an optimal standpoint; when people ask me about cash, I say $600,000,000 to $1,000,000,000 but in 2016 we ended above $1,000,000,000. And if we have cash above $1,000,000,000 the target and the goal will be to go ahead and deploy the cash to reduce the debt.
Like I said earlier, the debt load on the balance sheet, at $1,700,000,000 plus, is still too high. We also have an ABL facility, or revolver, that we renewed recently all the way to 2020. So it's a 5 year facility available to us if we need it, not that we need it if we can maintain cash of about $1,000,000,000. If you are looking at cash flow from operations, it approximately equals our net income. That's just the way it is. Working capital moves sometimes, but over the longer term, cash flow from operations equals the non-GAAP net income.
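To make the arithmetic concrete: Devinder's rule of thumb is that cash flow from operations roughly equals non-GAAP net income, so free cash flow can be approximated as net income minus capital expenditures. A minimal sketch; the $140,000,000 CapEx figure comes from the 2017 guidance above, while the net income input is purely illustrative:

```python
def approx_free_cash_flow(non_gaap_net_income: float, capex: float) -> float:
    """Approximate free cash flow using the stated rule of thumb:
    cash flow from operations ~= non-GAAP net income over the long term,
    so FCF ~= net income - capital expenditures."""
    cash_flow_from_operations = non_gaap_net_income  # stated approximation
    return cash_flow_from_operations - capex

# Figures in millions of dollars. The $140M CapEx is from the 2017
# guidance; the $200M net income input is an illustrative placeholder.
print(approx_free_cash_flow(200.0, 140.0))  # 60.0
```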
And we do want to drive, from an overall standpoint, consistent free cash flow generation. We are focused on the debt load; we want to continue to reduce the debt. Bringing down the debt and lowering the interest expense is a big goal for us over the next couple of years. From a long term target standpoint, net debt negative is our goal, whereby the cash on the balance sheet is greater than the debt on the balance sheet.
Net debt negative is the goal in the long term timeframe. And finally, from a leverage standpoint, we want to get to below 2 times; that is, the calculation of the gross debt to EBITDA ratio solving for below 2 times leverage. So, in terms of looking at the roadmap, it is a financial roadmap.
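The two capital structure targets just stated reduce to simple formulas: net debt is gross debt minus cash (negative once cash exceeds debt), and leverage is gross debt divided by EBITDA, with the goal below 2 times. A minimal sketch; the $1,700,000,000 debt and roughly $1,000,000,000 cash figures are from the transcript, while the EBITDA input is purely illustrative:

```python
def net_debt(gross_debt: float, cash: float) -> float:
    """Net debt = gross debt - cash; a negative result is 'net debt negative'."""
    return gross_debt - cash

def leverage(gross_debt: float, ebitda: float) -> float:
    """Gross debt to EBITDA ratio; the stated long term goal is below 2x."""
    return gross_debt / ebitda

# Figures in millions of dollars. Debt (~$1,700M) and cash (~$1,000M)
# are from the talk; the $1,000M EBITDA input is illustrative only.
print(net_debt(1700.0, 1000.0))   # 700.0 -- still net debt positive
print(leverage(1700.0, 1000.0))   # 1.7 -- under the 2x leverage goal
```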
2018, you see it on the page: greater than 36% gross margin on the strength of the premium products you just heard about. Going from 28% to 31% to 34% to 36% and on to 40% to 44% is kind of the roadmap from a gross margin standpoint. It's fueled by multiple drivers. There are multiple opportunities available to us in the high performance computing space, which is our vision and our mission. And there are multiple drivers within the data center, and we feel really good about all the products you heard about today, because I can share with you that all of the new products in aggregate are contributing greater than 50% gross margin, greater than 50% gross margin in aggregate from all these new products, and that is what is driving the gross margin from where we have come from, where we are and where we are headed, to get beyond 40% gross margin.
And finally, the bottom line: like I said, $0.75 EPS is our target in the long term timeframe, which we define as the 2020 timeframe. So, the investment summary from an AMD standpoint: a lot of progress from my standpoint over the last couple of years, and continuing laser focused execution on the operational side, on the product side and on the financial side. We have significant opportunities in large markets that we either don't participate in today, have a low share in, or that are growing; the TAM, as you saw when you add up the numbers, is greater than $60,000,000,000 and our revenue today is just north of $4,000,000,000. Market share gains come with margin accretive premium products from a P&L standpoint. We'll continue to be disciplined from a financial management standpoint, and we've shown that; we'll continue to do that as we go forward, because we want to manage for profitable growth from here on.
Consistent free cash flow, reducing the leverage and in some sense getting to a stronger capital structure. In fact, getting to a stronger capital structure than we had several years ago, especially on the strength of the execution. So with that, it's my pleasure to invite Lisa back on stage for some closing remarks before we go to Q and A.
Okay. Thank you, Devinder, and thanks, again, all of you. You've spent now a lot of time with us. It's been over 3 hours. But I just want to finish with just one slide and maybe a couple of comments.
This is a Financial Analyst Day, and we are talking about our long term financial models and our strategy for the business. But what we wanted to do today was to give you a little bit of a peek into our journey. We've made a lot of progress since 2015. I'm really, really proud of the progress that we've made. But I hope what you've gotten from the last couple of hours is this company is all about the products.
And so we wanted to show you exactly what we have in the pipeline, including Ryzen in the PC market, including Threadripper at the high end, including Vega across gaming and professional graphics and putting ourselves on the machine learning map, and including EPYC and what we can do when we put these things together. This is just a peek into what the 8,000 people at AMD think about every single day. And so my main message is: we are positioned for success. We know what our strategy is. We've been on the same strategy.
It hasn't changed. We're becoming even more sure and we have more conviction that we are in the right markets. It's about multiple generations. It's about sustainability of the road map. It's about earning the trust of our customers.
And it's about focused execution, focused execution each and every quarter, each and every year, and over the next 3 to 5 years. So, as I like to say when we talk about products, like fine wine, the best is yet to come. That's how I feel about AMD and the products and the road map we have ahead of us. So with that, let me thank you again for your time today.
I want to invite my colleagues up here for a Q and A session. And Ruth, come on
up.
Thank you.
These chairs don't look as good as our products. So okay,
great. So we're just going to do some questions and answers. Before we kick off, though, I just want to remind folks in the room that we actually have a very large product demo area just down the hall, to your left when you leave the auditorium. And we're going to have a reception there after the question and answer session to give you an opportunity to mingle with management, as I'm sure you'll have additional questions beyond what we have time for today. We have 2 microphone runners in the room, 1 on each side, so people are already eagerly putting their hands up and they will make their way to you.
Perhaps we'll start here with Mark. Susan, could you help Mark out there please?
Yes. Hi, Mark Lipacis from Jefferies. Thank you very much for the great presentations. I had two questions for Forrest, and I'd like to pick up on your theme about talking about numbers. If I have the Mercury numbers accurate, I think Intel's ASPs for dual socket server chips are in the $800 range.
So I'm curious if you could share any insight on your pricing strategy; that is to say, are we talking about that kind of a zip code?
Do you have a discount off of that? And then the second question: the original Opteron was launched in 2003. I think by the end of 2005, you guys had about 25% market share. You were on the other side of the table at that time. What kind of lessons learned can you share with us about what you would expect the competitive response to be to these products? Because I imagine that is an important input to Devinder's gross margin expectations.
Thank you very much.
Absolutely. So first off, on the pricing strategy: look, we're going to launch the products here next month, and we're really not going to get into details of pricing before then. I will say, though, and I kind of hinted at this, we really think we've got a highly competitive and highly differentiated product that is clearly the right product if you're doing certain things, if you're running certain workloads. And I think we're proud of that. And I think that's about all I'm going to say about pricing until June.
That's very good for us. Good answer.
I think that on the market share, look, the Mercury data, or the IDC and Gartner data, will tell you that on market share right now, we essentially round to 0. And there's a little bit of rump business left over from the old Opteron days. We're very focused on: look, let's re-enter the market, let's make sure that we've got a compelling solution for customers and let's start ramping it. We'll introduce next month and ramp throughout the year.
Before we get back to those levels, first, let me get from 0 to 5, let me get from 0 to 10. And we're going to ramp this over time.
Yes. And maybe, Mark, if I can just add a little bit to what Forrest has said. Look, when we made the decision to re-enter the data center, we thought about this as really a 5 year strategy. And in the data center, it's a lot about new platforms, trials, demonstrating our capability, demonstrating success. And so we do have high aspirations for what we can do in the data center.
But as Forrest said, I think the right near term goal would be getting back to double digit market share. And that will take some number of quarters after the first ramp.
Great. Thank you. And I think there was one other part of your question, which was what do we think the competitive response will be. I mean, it's hard to say. What we're focused on is driving our road map as hard and as fast as we can.
We're focused on what we perceive to be the customer concerns and what customers value, and we're going to drive the road map as hard and as fast as we possibly can.
Great. Thank you. Diana, just here. Vivek, next to you.
Thanks so much. Vivek Arya from Bank of America. So, two questions, one for Lisa and maybe one for Devinder. Lisa, you and your team showed very impressive benchmarks. Beyond benchmarks, what will it take for AMD to be successful?
Because, like you said, AMD has had very pioneering architectures before, but for one reason or another, it did not quite show up in the financials. So why is it different this time? Is there some friction in the ecosystem that presents obstacles as you start to penetrate more? What will it take, beyond just showing better benchmarks, to be successful? Because we saw very impressive benchmarks with Ryzen, but we have not seen the same kind of very strong launch.
Maybe it's just too early. So that's the first question. And then maybe for Devinder, the gross margin outlook, what is giving you the visibility and confidence to raise that when we have not even achieved the lower end of what you said the last time? Thank you.
Yes. So, Vivek, I think it's a very, very good question. I think there are several things that are necessary to really drive products through the entire pipeline to financial results. It starts with great products. And you have to show what your value proposition is, whether it's with Ryzen or with Vega or with EPYC.
There has to be a strong value proposition, a reason customers want to try your product. The second aspect of it is to have great platform coverage. And so you heard Jim today say that we have all 5 of the top Windows based PC OEMs launching Ryzen desktops this quarter. I think that's a very significant statement. I think as we go from consumer to commercial, you'll see some additional platforms come on board.
And that's the same as we go through the graphics capability as well as with EPYC: the number of customers, OEMs, end users as well as channel partners, is critical. And then I think the most important thing that is different is that we must have a multi generational roadmap, okay? People ask me all the time, aren't you afraid that, hey, in a couple of quarters, you're not going to be positioned as well, and so the product positioning won't be as strong as it is today at launch? I can tell you that we actually have very high confidence that we have a good view of what the industry will do. I think we have a great view of what we can do to disrupt the industry.
And that's why we believe it's not about just this quarter and next quarter; it's about what we do over the next 1, 2, 3 years. And we are not going to be impatient. We're not going to force the ecosystem to do something ahead of its time. We're going to spend the right time with customers, with partners, with software developers to build this ecosystem such that it is extremely sustainable as we go over the next couple of years. And then maybe, Devinder, I'll give you the margin question.
So to answer your question, you're asking what's going to drive the gross margin to the 2 targets that we have put out?
Yes. Yes.
So basically, if you look at the product roadmap: if you go to 2016, there's a chart that Lisa showed in terms of traditional products driving 99% of the revenue. That's the low end of the margin range in terms of where we play. As we introduce the products and ramp them in 2017, and in particular when we get to 2018, let's say we have about 40% of revenue from premium products, and that raises the gross margin to 36% plus. As that mix shifts more and we sell more products in Forrest's business and the stuff that Raja showed, along with the ongoing investment in the multi generation roadmap, that takes the revenue from the premium products to above 50%, and that drives the 40% to 44% gross margin target that I laid out for 2020. Thank you.
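Devinder's mix argument is a weighted average: blended gross margin equals each revenue segment's share times its margin, summed. A minimal sketch; the greater-than-50% margin on new products and the roughly 40% and 50%-plus premium revenue shares come from the transcript, while the margins assumed for the remaining products are hypothetical figures:

```python
def blended_margin(segments):
    """Blended gross margin = sum of (revenue share * segment margin).
    `segments` is a list of (revenue_share, gross_margin) pairs whose
    shares must sum to 1."""
    assert abs(sum(share for share, _ in segments) - 1.0) < 1e-9
    return sum(share * margin for share, margin in segments)

# 2018 scenario: ~40% of revenue from premium products at ~50% margin
# (per the transcript); the 27% margin on the rest is an assumption.
print(f"{blended_margin([(0.40, 0.50), (0.60, 0.27)]):.1%}")  # 36.2%

# 2020 scenario: premium share above 50%; both margins are assumptions
# chosen to land inside the stated 40% to 44% target range.
print(f"{blended_margin([(0.55, 0.52), (0.45, 0.28)]):.1%}")  # 41.2%
```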
Diana, there's a gentleman here in the middle of the 3rd row.
Yes, hi. You're using the word disruption a lot, and disruption is really a concept of competitive response. And I think, over time, you can only disrupt if you can do something better and/or cheaper, and others don't want to or cannot do what you're doing. What gives you the confidence that Intel and NVIDIA will not respond and make you a non-disruptor? Is it tech?
Is it your low gross margin? That's the one thing that is not adding up for me in this whole story. Maybe you can shed some light on that. Thanks.
Yes. Yes, maybe I'll start and then I'll let Forrest and Raja also comment a little bit about it. You know, I think it's a very good question. So what do we mean by disruption? I think what we mean is there's an infrastructure that's in place today, and it's there for many, many reasons, right?
X86 is dominant in servers. There's an ecosystem around professional graphics, and what's been built up in machine learning has come into place. And what we have the ability to do with the technology is like the baseline. That's the baseline of what you need. But I think we also have a good view of where systems are going.
And you can only disrupt when you have disruption in the ecosystem. And so what we're saying is data centers are going to look fundamentally different a few years from now. And we have, let's call it, very little to lose and a tremendous amount to offer into the ecosystem. And so that's what we mean about offering some choice to the marketplace. And we'll see that, like I said, both on the data center front as well as on the GPU front.
But, because Forrest and Raja have thought about this a lot, I'll let them add some color to this.
Yes. I would say on the data center side, the specific opportunity that I'm sure you're thinking of most notably is the 1 socket disruption of the general 2 socket market. Look, we see there is a clear opportunity, with no technical barrier, to service customers' needs in the bottom half of the 2 socket market in a more efficient way than has previously been done. And so we see it as an opportunity. We're going to go drive for that opportunity as quickly as we possibly can, with as good a product as we possibly can. Because, essentially, again, I have an asset:
I have 0% market share in the 2 socket market, so there's nothing to fear, nothing to cannibalize. I'm going to give you the best possible product there, whereas others may be more concerned about cannibalizing that market or upsetting their pricing or tiering structure, which might give them pause. At the end of the day, look, our formula is to identify where there are customer needs or customer opportunities that are technically reachable by what we can do, and then go try to service those as rapidly as we possibly can. I don't think, in such a competitive industry, any play is static.
There's nothing we can do that's a silver bullet that says, hey, for all time, I've shot the competition. And so it's a process. And we constantly have to be engaged in looking forward, looking at what our technology can solve and then pursuing those opportunities as fast as we possibly can. And the perfect ones are ones where our competitors would prefer the industry not go. Those are ones where maybe you get a little bit more time to address them.
But regardless, we're going to go meet those customer needs where we can.
Raja? Yes. I think Forrest hit most of the key points there, and that applies to graphics as well, right? But by definition, disruption is not something predictable. That's why it's called a disruption.
If you ever hear me say that I have a continuous disruptive roadmap, you can call BS on that. And with disruption, when there is only one game in town, okay, you can see the opportunities to disrupt. Now, the execution of disruption is not always easy, but you can see the paths to disrupt when there is only one game in town. And that's the situation before our new products, in both the data center and also in machine intelligence, right? So the opportunities are very clear.
Now, when we go there, when we establish ourselves, the challenge will be how we disrupt ourselves, right? That's more challenging than disrupting a monopolistic or single incumbent, right? And those things are very, very clear. So I think, as Forrest said, right now, where we are coming from, both on the high end GPU, whether it's machine intelligence or even kind of the high end of the professional graphics workloads, we are practically starting from 0, right, if we round to the nearest decimal point. So we have practically nothing to lose.
So we are a little dangerous right now, you know, from that angle. And, you know, that's what kind of gives you permission to believe that we can be disruptive for a period of time. But the real question is how you can be disruptive in 2019 or 2020, right? I mean, I think that's an interesting, longer discussion. I think in the next couple of years, it's really clear what the opportunity is.
Great. Thank you, Raja. Susan, we have one here. Ross, thank you. Yes.
Thanks. Ross Seymore from Deutsche Bank. Actually, I have 2 related questions on your road map, really on the flexibility you have to address it and how you fund it. The first question is on your foundry strategy. A lot of the roadmap was predicated upon 7 nanometer and 7 nanometer plus. You've had an interesting transition in your foundry relationships between GlobalFoundries and TSMC.
Can you talk a little bit about how much flexibility you have to hit those 7 nanometer nodes between your 2 foundries? Then the second question, on the OpEx side of things, and maybe this is for Devinder directly: that 26% to 30% of sales is, I think, about the only metric that didn't change from your 2015 analyst meeting. Some people are going to say you're spending too much; some people are going to say you're spending too little. So there's no magic answer to that.
But it doesn't show any OpEx leverage. So just talk about how you balance the OpEx to deliver that roadmap over time and if we should expect any OpEx leverage as we move closer to the end of that time horizon?
Yes, Ross, maybe let me start with the technology question, and then I'll let Devinder address the OpEx question. So look, on the technology question, we have a multi foundry strategy. This generation, we've ramped a number of products in 16 nanometer and a number of products in 14 nanometer. And for all of the products that we had going, it was very, very important for us to have both of our foundry partners very actively engaged. In 7 nanometer, Mark showed some very, very aggressive roadmaps.
We'll be one of the first to adopt 7 nanometer for high performance. And we will similarly use 2 foundries. So we will use TSMC as well as GlobalFoundries for 7 nanometer. And the key is our goal is to use the best that process technology has to offer so that we can innovate on design, architecture, all of those other things. And so with our modified wafer supply agreement that we did last year, it does give us the flexibility to use 7 nanometer at both TSMC and GlobalFoundries.
Devinder, do you want to comment on OpEx?
Sure, I can comment. If you look at it from an OpEx standpoint, first of all, I can say that whatever the growth is in revenue, OpEx will not grow anywhere close to that on a percentage basis. The second thing is, if you go back and look, as you observed, the 26% to 30% hasn't changed. Part of it is that some opportunities that exist today did not exist when we laid out the previous roadmap, with some of the things that you heard, in particular, Raja talk about. These are new opportunities that have now formed, a little bit with the competitors forming the market, and we see an opportunity for us, being in a very unique position, to invest. So you can look at it from two standpoints: managing the OpEx from a revenue standpoint, we know how to do that well.
But now it also becomes very targeted in terms of where we want to invest, so that Mark can deliver his multi-dimensional roadmap and Raja and Forrest and Jim can go identify the new opportunities for us to invest in. And that's fundamentally the reason why the range doesn't change, although it is on a higher base of revenue, especially as we expect to grow double digits this year, grow double digits again in 2018, and keep doing that for a couple of years.
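To make Devinder's point concrete, here is a toy sketch (with hypothetical numbers, not AMD guidance) of how OpEx dollars can rise every year while the 26% to 30% of sales range stays unchanged, as long as revenue grows double digits:

```python
# Toy illustration: a fixed OpEx-to-revenue band on a growing revenue base.
# The starting revenue and the 12% growth rate are assumptions for the sketch.

def opex_band(revenue_m, low=0.26, high=0.30):
    """Return the (low, high) OpEx envelope in $M for a given revenue in $M."""
    return revenue_m * low, revenue_m * high

revenue = 4300.0  # hypothetical starting revenue, $M
for year in range(3):
    lo, hi = opex_band(revenue)
    print(f"year {year}: revenue ${revenue:,.0f}M, OpEx band ${lo:,.0f}M-${hi:,.0f}M")
    revenue *= 1.12  # assume 12% ("double digit") annual growth
```

The ratio is constant, so the absolute OpEx envelope compounds at the same rate as revenue, which is why an unchanged percentage range can still fund growing targeted investment.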
Great. Thank you, Devinder. Diana, Matt here in the check shirt next to you.
Thank you. Matt Ramsay from Canaccord Genuity. So I have two questions, I think one for Raja and one for Mark. So Raja, your competitor, and you guys today, gave some amazing TAM targets for this deep learning computing market, which I think has surprised a lot of us with how fast it's taken off.
I guess my questions are two. One, do you feel like you have the funding to pursue all of those opportunities from a relationship standpoint and a software standpoint? And second, how much share do you think you can get of that space relative to your core gaming business? And then I have a follow-up for Mark. Thanks.
Sure. I mean, I think I saw three questions there. Firstly, on the size of the market and where it's going to go: we are in the early stages, right? The excitement is there, the early results are very good, the number of installations is growing, and all of that.
I mean, depending on which analyst estimates you use, it's a wide range too, right? I saw a factor of almost four in terms of the bookends, all the way from 100 million units by 2020 to 2022 down to 20 million units. There is a huge range. But for us, it's all great upside. In terms of your question, especially related to machine learning, it is very interesting that everything is being centered around these big frameworks, right?
So the battle is a battle of frameworks, right, whether it's TensorFlow, Caffe, or MXNet. There are four or five frameworks that are emerging, and the big players in this, the big hyperscale guys and the thought leaders on the application and algorithm side, want the world to learn and run in these frameworks. They don't really want the world to think about which GPUs are underneath. So they are very good partners and collaborators in helping us get our products to market and also help on the software side. And as Devinder already alluded, we are investing. We need to continue investing in our platforms, our software, our driver support, all of this stuff, and we will make targeted investments in those areas.
But the first enabler for that is: do we have the right engine? Then we can put the body around it, the marketing around it, the entire thing around it, and make a beautiful car from there, right? And right now, we've made enough progress in the last two years. We have a good engine, we have put good stuff around it, and now it is time to take it out. That's exactly how we are looking at it.
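Raja's point about frameworks abstracting the hardware can be sketched in miniature. This is a toy dispatch layer (not any real framework's API) showing the idea: application code targets the framework, and the framework routes work to whichever backend is registered underneath, so swapping GPU vendors never touches application code:

```python
# Toy sketch of framework-level hardware abstraction. Class and registry
# names are hypothetical; real frameworks (TensorFlow, Caffe, MXNet) do
# this with far more machinery, but the dispatch idea is the same.

class Backend:
    def matmul(self, a, b):
        raise NotImplementedError

class CPUBackend(Backend):
    def matmul(self, a, b):
        # Naive reference matrix multiply, lists of lists.
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

REGISTRY = {"cpu": CPUBackend()}

def run_matmul(a, b, device="cpu"):
    """Framework entry point: the app names an op, not the hardware."""
    return REGISTRY[device].matmul(a, b)
```

Adding a hypothetical "gpu" backend is just another `REGISTRY` entry; the application's call to `run_matmul` stays unchanged, which is exactly why the hyperscalers care about frameworks rather than what sits beneath them.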
Thanks for that. And Mark, you talked about heterogeneous computing coming back in your talk, and I'm a big believer in that thesis. You guys kept focusing on your Infinity Fabric, and I don't get the sense that folks get how important that is to the heterogeneous story you're putting together. So maybe you could talk a little bit more about the differentiation there and where the competitors are on a similar fabric?
Sure. Well, it's a good question. And you actually heard Raja talk about our over-10-year investment in bringing graphics in and figuring out how to optimize in a heterogeneous architecture. So how does the Infinity Fabric play into this? It facilitates it; it was designed for heterogeneity.
It was designed to support all of our assets: to take the whole process we had developed for managing not just GPU compute but the whole memory subsystem, and what we developed around our CPU and its subsystem, and bring it together in how we create new SoCs. So the Infinity Fabric is about taking, from the get-go, from the design, the requirements of a heterogeneous architecture and encompassing them. So that's an enablement. But it doesn't answer the whole question. Take, for example, what we're doing with Ryzen Mobile.
You saw it on the roadmap. It brings our CPU and GPU leadership assets over to a single piece of silicon. It's going to be a fully heterogeneous system architecture, again enabled by that Infinity Fabric. But the rest of the story was really back in Raja's presentation.
It is then the enablement: it is that Radeon Open Compute platform, providing that whole application suite so you can develop very, very efficient end results and really take advantage of those assets. So Infinity Fabric is all the plumbing. It allows us to provision high bandwidth so that you don't starve those engines.
And then there's the software enablement of the stack. I don't know, Raja, if you want to add any more on that.
No, Mark hit it. We have been thought leaders on heterogeneous computing and heterogeneous architecture for a long time, 10 years, right? And the one thing about good ideas is that people start executing on your good ideas at some point in time, right? And that's what you're seeing across the industry.
Everybody is talking heterogeneous computing and using the same language that AMD has been using for 10 years now. But what you're seeing with Naples is a great example: we couldn't have built that class of chip with that much I/O, and, using the word disruption again, you couldn't be disruptive without Infinity Fabric, right? That's the fundamental building block that allowed us to do something that is not incremental for the competition to go do out of what they currently have. Will they get there? Sure, everybody can get there once the idea is out there. It's who gets there fast, and the I/O disruption in Naples is a great example.
And some of the things you'll see when we disclose more details around Vega and its leverage of Infinity Fabric, and what's coming down in the compute roadmap, will also demonstrate what we've done with Infinity Fabric.
Thank you, Raja. Diana, behind you in the middle, just here.
Thank you. Ambrish with BMO. Actually, Devinder, I had a question for you. I raised my hand because I didn't quite understand the explanation during the earnings call on the change in accounting from OpEx to CapEx. The question that I have is: you explained that this year, that impact is included in the gross margin guidance you have provided. But is there a cadence to this now that you've switched the accounting? And correct me if my understanding is not right.
This is for 7 nanometer, which doesn't go into production until at least two years from now; the initial prototypes will begin next year. So does that mean that there will be yet another bump up in CapEx next year for 7 nanometer plus, as you have defined it in the roadmap? And then, more importantly, what implication does that have for free cash flow? You talked about free cash flow equating to non-GAAP net income, but it does fluctuate a lot.
This quarter, it was down pretty dramatically, negative $322 million. So how should we think about free cash flow for the business? Because one of the underpinnings that you laid out is paying down debt if you get higher than $1 billion. Thank you.
Yes. So I can take the second question first. The change that we made has no impact on free cash flow, because either way you're buying it, and the accounting is driven by what we do from a capitalization and then a depreciation standpoint. So on free cash flow, there is no impact. On the question about gross margin in 2017: the expense of $60 million that we moved into CapEx and capitalized has some impact, but it's negligible in 2017.
Does it have a cadence? Yes, it does. So going forward, production mask sets will be capitalized, and from the time that a product starts production, they will be amortized or depreciated over a 24-month period. So that's the way it will go on a go-forward basis in 2018, 2019 and beyond, but only for production mask sets and not for any other mask sets that we have. The last thing I will say on capital expense, or CapEx, is that this year we have $140 million of CapEx.
We haven't given guidance for CapEx for 2018, 2019 and beyond, and that is something that we'll do as 2018 unfolds.
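The cadence Devinder describes can be worked through with a small sketch: a capitalized production mask set hits the P&L only once the product enters production, and then straight-line over 24 months. The $60 million figure is from his remarks; everything else here is illustrative arithmetic, not company accounting detail:

```python
# Worked example: straight-line depreciation of a capitalized mask set.
# Assumes straight-line monthly charges, which matches the 24-month
# amortization period described; exact method is an assumption.

def mask_depreciation(cost_m, months=24):
    """Monthly depreciation schedule ($M) for a capitalized mask set."""
    monthly = cost_m / months
    return [monthly] * months

schedule = mask_depreciation(60.0)
print(f"monthly charge: ${schedule[0]:.1f}M over {len(schedule)} months")
# Before production starts, nothing hits the P&L; afterward the charge
# flows through cost of sales each month until the set is fully depreciated.
```

So a $60 million set works out to $2.5 million a month once production begins, which is why the change is negligible to 2017 gross margin but builds a cadence into 2018 and 2019.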
Thank you, Devinder. Joe here, Diana, down in the front in the blue shirt.
Thank you. I wonder, Forrest, if you could talk a little bit about how quickly you can penetrate the server market. I mean, you've obviously got compelling products, but how quickly do you get to market? Does it happen first with 1 or 2 big cloud guys? Or how long does it take you to start to get enterprise traction with the products you've talked about?
Yes. Look, the server market nowadays obviously has a number of different types of systems as well as a number of different players. You've still got the traditional enterprise-class systems that are deployed across a wide range of customers, typically in on-premise deployments, and you could be running anything on them. You could be running 15-year-old Oracle suites or you could be running Apache. You could be running anything.
And so the validation and the qualification for those systems is quite a bit longer than for other systems, independent of the processor. So while we certainly have a lot of customers that are going to be launching systems like that, you'll see those later in the year, in the second half, whereas you'll see the more cloud-oriented systems, which have a narrower set of use cases and a narrower validation matrix, launching really starting next month and throughout Q3. So I think you'll see a continuous stream of systems coming out from our partners: first in cloud use cases, and then a little bit later, in Q3 and beyond, in enterprise use cases.
Thank you, Forrest. Diana, there's a gentleman down here in the front.
Following up on that, your competitors talked a lot about semi-custom chips for the big cloud vendors. Obviously, your group does semi-custom chips. How soon would you see that happening? Or is that something on the roadmap?
What they call custom chips is an incredibly broad umbrella. I mean, I think our competitors have used that term before to refer to chips that maybe have a slightly different characterization, a slightly different bin-out, all the way to chips that add maybe new instructions or new major functional units. And the hurdle rates for doing each of those different points under that umbrella are very, very different. We're going to be doing some of that immediately. And I think that we're very open, given that we do have, I think, one of the world's best full semi-custom capabilities, made even more efficient now with Infinity Fabric.
We've probably got a lower hurdle rate than most for doing things on the far end of that spectrum. Beyond that, I don't want to put out a specific timeline; we never put out a timeline for what our semi-custom customers are doing. But I will say, we will begin populating points under that umbrella immediately.
Thank you, Forrest. Perhaps it's the last question. I'm here, Diana, in the front in the check shirt.
Thank you.
Thanks. A quick one: is there a target release date for Vega? You said Q2 on the earnings call. Forrest said it was going to launch next month in response to the first question.
Is there an actual date that you can share?
I'm sorry, was your question for EPYC or for?
Vega, EPYC, any of that.
Yes. Any and all. You should expect EPYC in the second half of June. We'll release the actual date shortly.
And you should expect the Vega Frontier Edition, as discussed by Raja, also in the second half of June.
Great. Well, on that note, a very exciting product launch is pending this quarter. This concludes our Financial Analyst Day. We'd like to thank everybody for joining us in person and on the webcast. And we'd like to invite those with us in the auditorium to join us down in our product demo area for some refreshments.
Thank you.