Intel Corporation (INTC)

Investor Meeting

May 8, 2019

Speaker 1

Good afternoon. Welcome to everyone here who is joining us in Santa Clara and also a special welcome to those who are joining us live via the webcast. We're really excited to have you here with us today. I'll cover a few quick logistical details before we get on to the program. And the first one is the Wi-Fi network.

You have the Wi-Fi network name and the password on the back of your badge. In case you can't see that or can't find it, the network name is im2019 and the password is investor, all lowercase. So, im2019, one more time, all lowercase investor. Also, if we can ask you to just double check your phones, make sure they're set to silent, we'd appreciate that. Starting at the first break of the afternoon, we're going to have coffee and drinks, beverages, that sort of thing out in the lobby area.

You're welcome to help yourself over the course of the day to all of that. Restrooms are just outside of the auditorium here and we have charging stations in the back of the room against the back wall. All right, with that out of the way, I'll read the risk factors and then we will get on to the agenda. Today's presentations contain forward-looking statements. All statements made that are not historical facts are subject to a number of risks and uncertainties and actual results may differ materially.

Please refer to our most recent earnings release, Form 10-Q and Form 10-K for more information on the risk factors that could cause actual results to differ. Today's presentations also contain non-GAAP financial measures. You will find the required reconciliations to the most directly comparable GAAP financial measure on our website, intc.com. All right. In just a minute, we're going to kick off the afternoon's presentations with a keynote from our CEO, Bob Swan.

Bob is going to talk about the world's growing appetite for the processing, movement and storage of data and, importantly, how Intel is capitalizing on that trend to pursue a larger TAM and transform into a data centric company. He'll be followed by Dr. Murthy Renduchintala, who will talk about our approach to delivering product leadership using 6 pillars of innovation. And Murthy is going to be joined by Raja Koduri. And Raja is going to drill into one of those 6 pillars, software, as software is playing an increasingly relevant role in delivering performance.

After we come back from the break, Navin Shenoy will take the stage and Navin is going to be covering the diverse range of opportunities that we're pursuing across the breadth of our data centric businesses. And he's going to be joined by Sandra Rivera. And Sandra is going to talk a little bit about the growing role Intel is playing in the transformation of networks as they virtualize and intelligence increasingly moves out toward the edge. Then Sandra and Navin will hand off to Gregory Bryant or GB and GB is going to highlight the progress he and his team are making in transforming the PC business using a combination of focus and really thoughtful segmentation. We'll then wrap up the keynotes of the day with George Davis, our CFO and he'll take the stage to put all of the day's events in a financial context and talk a little bit about our expectations over the next few years.

Then I'll invite all the presenters to come back up on stage and we'll have some time for Q and A. And I'll say a little bit more about the logistics of the Q and A when we get to that point of the afternoon. All right. Well, again, welcome to Intel. We are absolutely thrilled to have you here and we're excited to share our story with you here today.

And with that, I'd like to invite our CEO, Bob Swan up to the stage.

Speaker 2

Thank you. And let me also extend my warm welcome to you. It's been a couple of years since we've had you here with us. During that couple of years, a lot has changed. And a lot has stayed the same as well.

So me and the team are really excited to share with you what's been going on, not just in terms of where we are today, but where we're taking the company forward. Let me just start with the 4 key takeaways I'd like you to walk away with from what I have to say, and the team is going to build on this as we go through the course of the day. First, as Mark mentioned, we've dramatically expanded our TAM. In an increasingly data centric world, where we see the need for more and more compute, more and more storage and the need to move data faster and faster, it creates real opportunities for us. So, over the last several years, we've been expanding the TAM and the role that we can play in what we view as an increasingly data centric world. Second, we have a very strong product leadership position on CPUs.

But we've been investing in and continue to build new architectures, as workloads are changing over time, that we think are increasingly relevant. So, when we talk about product leadership, we don't talk about just CPUs anymore, but a term that we call XPUs, which is more and more architectures and innovation that we can build to meet the demands for data. Third, our ambitions are pretty big with this expanded TAM. And our expectation of ourselves is that we can play a bigger and bigger role in the success of our customers. And to do that, we know that we have to improve the execution rhythms in the company.

We have to accelerate the rate of innovation and we have to take a very strong culture and evolve it to be more commensurate with the opportunities that we see going forward. And last and certainly not least, what you can expect from us is very disciplined investment, focused not just on expanded TAM and revenue, but focused on an expanded TAM and revenue that will drive profitable growth and attractive capital returns. So those are the key things as we think about where we are today and where we're taking the company forward. But I do want to just acknowledge the present. And we just came off our Q1 earnings call and we delivered our top and bottom line, but we let you down and we let ourselves down.

We lowered our guidance for the full year by $2.5 billion on the top line and $0.25 on the bottom line. And my commitment to you is we're a team built on credibility, and we know we have to earn and maintain your trust. Over the last 10 quarters, we met or exceeded our revenue 9 out of 10 times and we met or exceeded our EPS 10 out of 10 times. But that does not excuse the miss that we had in our outlook for the rest of the year, and we'll get better on this. Our processes will get better and get sharper as we go forward.

But I want to put it into context a little bit from the last time that we spent time together. In March of 2017, these are the priorities that we laid out for you in this room: to grow in data center and adjacencies, a strong, healthy client business, continued growth in IoT and devices, and to flawlessly execute on memory and the recently completed, at the time, FPGA acquisition of Altera. That's what we said back in 2017. And at the time, we gave you a 3 year outlook in terms of financials that you should expect from us during this timeframe. I know that we still have 9 months left of this 36 month journey, but just to put it in context, here's how we feel we've performed over this timeframe.

With our recent guide for the rest of the year, what it implies for the 3 year timeframe is that we'll have increased data centric revenue by $9 billion since 2016. The PC centric business delivers record profitability in a declining but relatively stable market, and excellent free cash flows allow us to fund not only our capital expansion, but also attractive capital returns as well. And if you look at the performance relative to the 3 year guide we gave at the time, we expect to beat revenue by a little under $7 billion, and on EPS, we expect to be roughly $1.27 higher than the guide we gave at the time. So, over that 3 year timeframe, what's really happened is we expect to deliver $12 billion more revenue since 2016, and we will spend less in operating expense during that timeframe while generating that $12 billion in revenue.

And as a result, our EPS will grow 64% during that 3 year timeframe. So, as we think about the 3 years since we last had you here, we're very proud of what we've accomplished as a team as we've expanded the role we play in this increasingly data centric environment. But we are not satisfied. Many of you have heard me talk about our belief that this team has an opportunity to lead one of the most successful transformations in corporate history. Think about it: a company that, a relatively short period of time ago, primarily built the CPU inside of a PC.

And we're evolving over time, transforming the company to build the technology inside of everything. That's in effect what our ambitions are. That's a fairly dramatic transformation. We know that it's not going to be easy. We know that we deal with customers that are bigger and stronger.

We know the competitive intensity is as intense as it's been in a while. We know that execution is going to be at a premium. But our belief is that we can lead one of the most successful transformations in corporate history, and that's what we're focused on and that's what we're going to share with you today. It starts with a relatively simple strategy that frankly hasn't changed a whole lot: make the world's best semiconductors; lead the AI, 5G and autonomous revolutions; be the leading end to end platform provider, where we're bringing our hardware and software together, not just to sell our products, but to sell solutions for our customers that we believe we are uniquely qualified to do.

We'll continue to be relentlessly focused on operational excellence and efficiencies and continue to hire, develop and retain the most diverse and inclusive talent in the industry. So, our strategy hasn't changed a whole lot, but the overlay and the emphasis for us is going to be, as I said, on execution, on accelerating the rate of innovation and on evolving our culture as we execute our strategy. That strategy plays into our ambitions. It fuels our ambitions over time to go from PC centric, to data centric, to Intel's technologies power the world. In 2013, as I mentioned, CPUs inside the PC represented over 70% of the revenues, the earnings and the cash flows of the company.

It filled our fabs and it funded our IP. But at the time, that PC business, that TAM was declining, but the demands for data were growing. So, over the last several years, we've been repositioning ourselves to take advantage of, as I said, that increasingly data centric world. And now we're investing in the technologies to capitalize, again, not just CPUs, but XPUs and more architectures to populate the billions and billions of increasingly connected devices that we see playing out over time. So, our ambitions are relatively big.

And we've made some decent progress over the last several years. We were seventy-thirty PC centric revenue to data centric revenue back in 2013. As I said, the data centric businesses added $9 billion in revenue over the last 3 years. So today, we're roughly fifty-fifty. And our expectation as we wind forward is we'll be more like thirty-seventy: a great PC centric business with relatively low growth, excellent execution, real strong profit and cash flow, and an increasing portion of our data centric businesses targeted at the higher growth markets.

At the core of our strategy is just the exponential growth of data and the insatiable appetite of enterprises and consumers to create and consume more and more data, whether it's in data centers, at the edge or at endpoints. The amount of data being created now dwarfs, in a relatively short timeframe, the amount of data created over the course of the last 10 to 30 years. So, data creation is fairly significant, and the desire to have that data in real time to do something with it is growing as well. We think this bodes well for what it is that we do. That data growth is going to drive increased demand for compute, for storage and for the networks, and the need for the networks to move that data faster and faster.

Our expectation over this timeframe is that the demand for compute will be up 50% a year. Storage demand will continue to grow at about 30% a year. And in terms of the demands on the network to move information faster, we expect traffic to grow at 25% a year. So as we think about this growing demand for data and what it means for the areas in which we play, we think it's a good time to be in the semi industry and a very good time for us as we look at what's going on in the markets around us. So, it was a relatively short period of time ago that we looked at our TAM as a roughly $52 billion TAM of CPUs inside of PCs and CPUs inside of servers.

And we characterized that as having roughly 90% share of a, relatively speaking, small market, where the declines in PC TAM were offset by the growth of servers. That's not how we see the world anymore. Today, we see a much broader TAM, and the processing needs, the storage needs and the networking needs to move things faster create a real opportunity for us. And through both acquisition and organic investment, we've dramatically expanded the role that we believe we can play in the success of our customers. Over the course of the next several years, we see our TAM as the largest TAM in the company's history, roughly $300 billion, with $220 billion of that in this data centric collection of businesses, where the needs of the data center, the network and the edge in IoT create real opportunities for us to continue to deploy our CPU technology, where we have a very strong base, but add more and more technology up the stack, so the role we play for our customers becomes larger and larger, whether it's AI, FPGAs and connectivity in the data center, whether it's NFV, media or FPGAs in the network, or whether it's AI, ADAS and video at the edge.

Today, we have a relatively small share in the data centric arena of roughly 20%, with a very strong CPU franchise that we'll protect but also extend with the new technologies that we're building. And on the PC centric side, as GB will talk to you about later, we see our share at roughly 55%, and we continue to redefine the role that we can play inside these devices at the edge and the PC with memory, with platforms and with increasing connectivity and graphics technology. So, we're moving from an era where we defined ourselves as having a very large share to one where we see a relatively small share, but real strong capabilities and prospects to grow. So, how do we get after it? I'm going to walk through our plans briefly and then the team is going to build on each one of these over the course of the next several hours.

First, leading technology inflections. Second, how we plan to extend our product leadership. Third, making big bets, but also generating attractive returns from those big bets. Then improving our execution, evolving our culture and continuing to play a leadership role in corporate responsibility and in diversity and inclusion, which we believe are important aspects to enhance our performance and our strategic advantage. First, leading technology inflections. Again, over the last several years, organically, acquisitively and with our Intel Capital portfolio, we've been very focused on these leading technology inflections that we think over time are going to continue to accelerate and create the demands for data and the needs to make more and more of that data relevant. And they've been in AI, 5G and autonomous.

In the AI world, and you're going to hear more and more about this over the course of the day, we just see that AI unlocking value from data enables the opportunities to create new business models: new business models that need more data and that drive demand for the technologies and the capabilities that we build. On 5G, you're going to hear more about this from Sandra, but we see in this 5G arena that there's going to be a convergence between communications and compute. And through that convergence, we think we have a unique opportunity to play from the investments we've been making in 5G over the years. In autonomous systems, more data, faster, is going to create new opportunities at the edge to build real businesses that are data intensive, that need real time data for decision making. In that world of autonomous, we think there are opportunities to create new business models and new markets.

So, by playing a leading role in technology inflections, we believe with this expanded TAM that we can play a bigger and bigger role in the success of our customers. Second, extending our product leadership advantage with workload optimized platforms and effortless customer and developer innovation on top of those platforms. Murthy and Raja are going to talk quite a bit about the activities that we have underway to expand our product leadership position. It's no longer just about process and packaging and the architecture of general purpose compute. It's process and packaging, it's general purpose compute, it's accelerators in different architectures, it's addressing a constraint in the performance of the CPU by eliminating the bottlenecks that current memory technologies are creating. It's about interconnect and security. And last and certainly not least, Raja is going to talk to you about the increasing role software is going to play in expanding our leadership position over time.

And with these different architectures, we're going to be redefining what Intel Inside really means. Again, Intel Inside historically was the CPU inside the PC and then, over time, inside the server. As we think about going forward, we see XPUs with different architectures, CPU, GPU, AI accelerators, FPGAs and real packaging technology, together solving problems for our customers, where it's XPUs inside of everything, not just CPUs inside of a PC or a data center. Part of our 6 pillars, as we refer to them, is process technology. And we have been, and will continue to be, investing in process technology.

With the historical delays of 10 nanometer, we currently are investing in 3 nodes. We're investing in 14 nanometer: we've gotten more and more performance out of the 14 nanometer node and put more and more capacity in place to support our customers' growth. Second, on 10 nanometer, we indicated back in April of last year that we'd have systems on shelf with 10 nanometer products for the holiday season this year. And that is our expectation.

We remain on track to have those systems on shelf. And server, as Navin will talk to you about, will be a fast follow in the first half of 2020. And third, we haven't really talked to you much about 7 nanometer, but we'll talk more about it today. Our intention is to accelerate 7 nanometer. We've obviously been working on this for a while and we expect production and launch of 7 nanometer products in 2021.

So continuing to invest in process leadership and world class packaging technology to complement that so that we're building leadership products for our customers over time. Big bets. This is an industry where technology is constantly changing and we want to be at the forefront of technology and to do that we'll have to continue to make big bets. The criteria that we'll use both in making those bets and in evaluating those bets will be threefold. One, is it at the leading edge of a technology inflection?

Two, does it allow us to play a bigger and bigger role in our customers' success? And three, last but not least, do these big bets offer a clear path to profitability and to attractive returns? Those are the simple criteria we'll use when we make big bets, and those are the criteria we use in evaluating how successful those big bets have been over time. Let me walk through the 3 Ms: modem, memory and Mobileye. Those are the big bets that we've been making over the last couple of years.

First, modem. The technology inflection of 4G evolving to the 5G modem gave us a real opportunity to play a more influential role in shaping the standards for 5G networks. That role will prove to be very important for what Sandra will talk to you about later: the role we're playing in disrupting networks in a 5G environment. And during that time, we built what we consider some real world class 5G modem ready IP for a 2020 industry ramp. From a customer standpoint, 2 things.

One, we built very good momentum with one customer. Good; but we only had one customer, which is not so good. In that world, the ability to make money becomes somewhat constrained. So, when we apply the third criterion in modems over time, our quest in a 5G world was to get incremental commercial economics, to use those commercial economics to fund the development of 5G IP, and to evaluate the alternatives that we have in PC and IoT applications in a 5G world. That's the path that we've been on in building out 5G capabilities. Recently, we concluded that the demand required and the commercial terms that we needed to build the IP were no longer feasible.

And therefore, we quickly made a decision that it no longer met one of our three criteria for making big bets in technology inflections. So as you know, we made the decision to exit the 5G smartphone modem business because we didn't see a path to make money, and to evaluate the value of the IP that we've built and the alternatives we have to develop 5G modem technology in IoT and in the PC arena. That evaluation is still underway. We'll let you know when we conclude. We're working quickly to conclude on that.

What we've indicated in the near term, as George mentioned the other day on our earnings call, is that we would reduce our 5G modem investment for smartphone by roughly $200 million to $300 million this year as we evaluate what other alternatives we have for this technology. So: the three criteria, the ongoing evaluation and the decision to exit because it didn't meet one of our 3 extremely important criteria. Big bet number 2, memory and storage. Over the last several years, we've made a lot of investments in memory. As there's a transition to nonvolatile memory, we saw technology playing a differentiated role in both the memory space and in the storage space.

In the memory space, our investments in Optane were really geared toward our belief that the lack of advancement in memory technologies could impede the performance of the CPU and the architectures that have been developed over time. So, we've been developing Optane technology that we feel great about. The team's made great progress. It's a real differentiator coupled with our Xeon CPU for breakthrough platform level performance. And we just launched it in our Cascade Lake product that Navin and the team announced a few weeks ago.

So, we feel very good about that differentiated performance. On the transition to 3D NAND, we've been investing in a differentiated manufacturing process technology that allows us to get areal density that's best in the industry. And we've made very good progress ramping that technology in our Dalian fab to bring down the cost per gigabyte over time. We've moved from 32 layer to 64 layer. And during the second half of this year, we'll be transitioning to 96 layer, which we believe allows us to drive down cost per gigabyte at less capital employed than the other players in the industry, because of the areal density that we've been able to accomplish. At the same time, these transitions are really complicated, and you know the dynamics of memory cycles and what they mean.

So as a result, despite our NAND business making decent margins last year, margins that funded the development of Optane, this year we expect to not be profitable in NAND. So, we're really evaluating the continued progress in NAND, whether the technologies can bring down the cost curve, and we'll evaluate that during the course of this year. What we expect now: one, we're not going to put any more NAND capacity in place for the foreseeable future until we bring down the cost curves on 64 and 96 layer and beyond. Two, we developed the Optane technology, and over time we have the flexibility to put that Optane product in our Dalian fab, because when we designed the fab to begin with, there's relatively high reuse of the capital, whether it's NAND or whether it's Optane.

But the key is differentiated technology that customers want and from which we can generate attractive returns. So NAND has been a real challenge for us, and we're going to continue to work NAND and evaluate along the way whether a partnership, like we had with Micron, is a good path forward to accelerate the path to profitability and/or share the investment required. Third big bet, Mobileye. This has been a tremendous acquisition for the company: leading technology inflections, real customer impact, attractive short term returns and, we believe, very good long term returns as well.

I'm not going to walk you through all the progress that this team has made other than to highlight that they continue to generate great traction in the customer base. They've made good progress on L1 and L2, and their product for the L4/L5 AV arena is on track for 2021. And with that technology, in conjunction with the Israeli government, Volkswagen and Champion Motors, we're going to work to bring Mobility as a Service to life in Israel in the 2021 timeframe, and we're looking forward to learning quite a bit from that. Amnon was going to be here with us to explain a little bit of the exciting stuff that they have going on, but he was unable to make it. So I think what we're going to do is roll a video about what he has going on.

Speaker 4

Hello, everyone. I'm sorry to miss being with you in Santa Clara today. You have heard Bob and others talk about customer obsession. Well, that's what I'm doing here in London with our strategic partner Ordnance Survey, launching a new data centric service. You may recall at CES, we announced our original partnership.

Today, we took the next step and showed how together we're selling a detailed infrastructure data set for roadways. It's the first step in launching the concept of the car to cities: the car sending data and insights to the cloud to power smart cities. This kind of data is invaluable to utilities, cities and others, and is just the tip of the data opportunity. There are countless uses for this data that have nothing to do with cars, and we can harvest this data now from OEM cars like Nissan, BMW, Volkswagen and Ford, plus a number of OEM contracts in the pipeline.

With the contracts we have, by 2022 we'll have 24 million cars sending data. This data opportunity would not be possible without our leading position in ADAS that gives us the real estate in the car and the tools we need to compete in new markets and grow Intel's revenue. This strategy puts us squarely in front of a significant new revenue stream that is enabling the expansion of Intel's TAM by billions of dollars. This brings me to the second piece of news. Last year, we announced a joint venture for robotaxis with Volkswagen and Champion Motors in Tel Aviv.

That was just the beginning. Transportation as a service enabled by robotaxis is a game changer for mobility, and I'm here to tell you that we plan to go all in on the global robotaxi opportunity. We believe we have the right combination to scale to cities quickly: cost effective camera centric technology, RSS as a safety formula that allows robotaxis to flow with traffic without interruption, and our REM crowdsourced mapping technology. By removing the driver from the financial equation and replacing them with a CapEx investment in a self driving car, the economics shift to enable mobility as a service at discounts that rival the cost of car ownership per mile. We will not only supply the technology, maps and safety model, but we'll also provide the full stack needed for mobility as a service, moving up the transportation value chain and creating huge value for Intel.

Today, we supply full stack solutions in ADAS and self driving systems. We intend to move along the value chain to data monetization and all the way to becoming a full service provider for the robotaxi market. Intel aspires to a leading position in this market, top to bottom. We can and are building the full solution, and we know that we have the products and market position to deliver an accident free world. Now back to Bob.

Speaker 2

So, big bets: modem, memory and Mobileye. You can count on us to continue to use the 3 criteria to evaluate what big bets to make, and to continue to evaluate performance along criteria that are in sync with what we set out to accomplish. And profitability and returns will be a key part of that equation. Fourth, improving our execution. During 2019, there are 3 massive priorities for our company.

One, to make sure we have the capacity on 14 nanometer to meet customers' demand. Last year and the first half of this year, we were supply constrained. As we go into the second half of the year, we've made very good progress. We feel like we'll be supply balanced in the second half of the year. We'll have some mix dynamics that we're going to have to work out during Q3.

But by Q4, we'll be back to fully supply enabled, and we've made the investments last year, this year and going forward to ensure we don't get into a supply constrained situation in the future. Second, as I mentioned before, we are on track for 10 nanometer systems on the shelf in the holiday season. And last, we've got lots of exciting products that we'll be bringing to market during the course of this year, not the least of which is qualifying the Ice Lake product in the second quarter as we begin to ramp going into the second half of the year. So, improving our execution along the way. Part of execution means relentless focus on the things that matter the most.

A big part of our performance over the course of the last few years has been reallocating our capital toward areas with higher growth and being relentlessly focused on the things that we're doing. That brought spending as a percentage of revenue from 36% back in 2015 down to roughly 28% in our guide for 2019. And our expectation, George will give you a bit more detail on this in a bit, is to continue to invest in R and D, but bring down our spending as a percentage of revenue to 25% over the next 3 years. Improving our execution, and evolving our culture. We have a wonderful culture, but we've gotten really big.

As we think about going forward, we see a world where to achieve our ambitions, we have to evolve from a company that's always built the best products and then expected customers to come to one where we need to listen more in an increasingly customized environment, listen more to what it is our customers are looking for, so that we're solving their problems, not just shipping our wonderful products. We had a TAM of $52,000,000,000 with a market share of greater than 90%. In that world, our tendencies are to protect the moats that we've built, but there weren't great prospects for growth. Today, with a much larger TAM, a wind at our back and a much smaller share, we have opportunities to invest to grow, which means we need to be moving faster as one team to capitalize on the opportunities we have in front of us. When the competition has less than 10% of the market, your tendencies are to compete internally to get resources.

Our challenge now, when we've redefined the market that we're serving and by definition the competition has 75% share, is to act as one Intel and get all 107,000 employees in this company rowing in one direction, and that's what we're focused on. Last, IDM advantage: process leadership used to trump all. We're migrating to a world where we think product leadership and the 6 pillars that Murthy and Raja will walk you through are what's going to trump all to meet the growing demands of our customers going forward. So, as we think about an incredible 50 years and an incredible culture, to achieve our ambitions going forward, we talk about evolving our culture, so we're well positioned for the opportunities in front of us.

And we talk about 4 key things: increased customer obsession; One Intel, all 107,000 people rowing in the same direction; fearlessness, prepared to take risks but learning fast from failures; and last but not least, truth and transparency, a free flow of information within our four walls, so we're better equipped to take advantage of the capabilities that we have. We have a wonderful culture, but our dreams and our ambitions are as big as they've ever been. And to capitalize on them, these 4 ingredients, coupled with continuing to progress on creating a great, diverse and inclusive place to work, are the key things that we think will position us well. The company has always played a leadership role in corporate responsibility, and that's not going to change. Whether it's the 4 billion kilowatt-hours of energy saved, number one scores for our environmental and social disclosure quality, greater than 20 years of transparency and proactive engagement with you about the things that matter most from an ESG perspective, a 90% recycling rate for non-hazardous waste, 17.4 million square feet of LEED certified space or the 500-plus supplier audits completed since 2014.

Corporate responsibility has always been a critical component of this company, and you can expect that to continue going forward. As I said, a key part of our strategy is to hire, retain and develop the best, most diverse and inclusive workforce in the industry. We've made great progress on this front. We met our goals of full representation in our U.S. workforce 2 years earlier than we had laid out. We said 2020; we delivered it in 2018.

We'll spend $1 billion annually with diverse-owned businesses in our supply chain by 2020, and the team is making great progress. It's not just about diversity and inclusion within our four walls, but diversity and inclusion within our ecosystem.

And we made great progress on global gender pay equity during the course of the last year. So we think this is not only the right thing to do, but, as a company, we think it strengthens our strategic advantage as we go forward. Leading technology inflections, extending our product leadership, making big bets with attractive financial returns, executing better over time and continuing to lead on social responsibility and diversity and inclusion. George Davis joined us about a month ago. He's having a huge impact in a short period of time.

But I thought it was a little unfair for him to come up and give you a snapshot of our 3 year outlook. So, I'll give you the snapshot, and George will talk to you a little bit more about the how. Our expectations over the next 3 years: in terms of revenue growth, we're expecting low single digit growth, with our data centric businesses growing in the high single digits and PC centric flat to slightly down, as modest growth in the quality of our products offsets the ultimate exit of the smartphone business. Operating efficiently, our expectations are to keep operating margins roughly flat with our guide for 2019 of 32%.

Gross margins will come down as we transition to 10 nanometer and develop 7 nanometer pretty quickly thereafter, but that will be offset by lower spending as a percentage of revenue, partly as we exit the investments we've been making in smartphone 5G, but also as we continue to get leverage and make trade-offs in how we're deploying our organic spending on the right prospects. And third, EPS grows in line with revenue during this timeframe, and cash flow will grow faster than revenue. So, our expectation on this EPS to free cash flow gap is that we're going to continue to make progress, from the mid-60s percent last year up to about 80% over this 3 year timeframe. And you can expect us to continue to make attractive capital returns, with a dividend growing in line with EPS, and to be opportunistic with our wonderful balance sheet on reducing our outstanding float. That's the 3 year timeframe.

And again, one of the challenges that we'll be wrestling with during this time frame is the pace at which 10 nanometer ramps and the fast follow of 7 nanometer. As a result, that's what's going to be driving gross margin compression during this time, as we migrate from 10 to 7 nanometer. Beyond the next 3 years, here's how we see things playing out in 2022 or 2023. Our expectation is to grow the company up to about $85 billion in the 2022, 2023 time frame and to generate about $6 in earnings as well. And it will really come from 5 things that you're going to hear about during the rest of today.

Navin and Gregory Bryant are going to be talking about transforming to a data centric world, with the data centric update and the PC centric update. Raja and Murthy are going to be talking about building product leadership and process leadership; with gross margins coming down, that will weigh on our earnings growth a little bit during this timeframe. Third, getting attractive returns on our big bets and our expanded TAM will be a contributor to earnings during this time frame. Fourth, George is going to talk a little bit about our path to 25%.

So, going from 36% 3 years ago to 28% spending this year to 25% spending in 2021. And I'll also share with you our capital allocation priorities during this timeframe. So, that's a little bit about what you can expect from us. During the rest of the day, you're going to hear from Murthy, from Raja, from Navin, from Sandra and from Gregory and George about how we're going to make it happen. So, with that, Murthy, I'm going to invite you up to share the steps on product leadership.

Speaker 1

Thank you, Joel.

Speaker 3

Thank you, Bob, and good afternoon. I think it's afternoon, right? Yes. Good afternoon, everybody. It's great to be with you today.

It's been over 2 years since we last spoke, and there's a lot to update you on, so I'll get started without delay. I want to expand on the discussion that Bob walked us through in his opening comments and go into 3 key topics: give you a closer look at the growth of data and the associated TAM disruptions it's driving; describe how we've mobilized all of Intel to address the opportunities in front of us; and illustrate how we've got a lineup of extraordinary products driven by great innovation, and what you can expect from Intel this year, next year and into the future. Now, a theme through Bob's talk was the impact of the explosion of data, and that's going to be a common theme that connects all of my colleagues' presentations together.

Let me show you the data that Bob shared with you, but through a slightly different lens. Now, this is a simplification of how we can represent the growth of digital data, looking back roughly 25 years and forward about 5. And there are 3 major takeaways that become immediately evident as you study this data in detail. One, as Bob said, data is exploding: 50% of the data that exists today was generated in the last 2 years.

The mix of that data is changing from structured and outbound to unstructured and inbound. And to deal with that data explosion, the underlying IT infrastructure must transform. Those three takeaways in turn drive 3 major data centric transitions. Compute itself must become more diverse. Growth of data centric workloads such as AI, graphics and video require more parallel compute architectures such as GPUs, neural network processors or NNPUs and FPGAs.

We'll talk more about that as I go through my talk. The network itself must be redefined. If not, the volume of data that will be generated will simply overwhelm it. The wide area networks of the future will more closely represent a distributed data center and essentially the network will cloudify. And lastly, data generating devices will become increasingly intelligent and autonomous, evolving into what we term intelligent agents.

You heard Amnon talk about the car moving towards an evolution path of becoming an intelligent agent. The same will happen in the PC space and in the IoT space. Because in time critical applications, these agents must be able to make decisions without having to interact with a remote control center. So in summary, the data explosion that's described on this graph drives demand for our products and growth of our TAM. It drives value for compute diversity from CPUs to GPUs to NNPUs to FPGAs.

It drives enormous growth of compute in the network, and it drives the proliferation of AI and IoT use cases, and all of them represent tremendous opportunities for Intel. These data centric transitions underpin the TAM opportunity that Bob described. And if you double click on the slide he presented, you get a much better idea of the areas that are driving growth. As you can see from the graphic behind me, there are many segments in our TAM that are growing something like 3x to 5x faster than the aggregate. But the really interesting thing is that the growth in these segments comes as a result of those 3 data centric transitions I described: diversification of compute, network cloudification and the advent of intelligent agents.

And by focusing on those areas, it gives us great confidence that we're focusing all of Intel on the right priorities. Intel outgrows the aggregate TAM by winning these data centric transitions exactly as Bob described. Now winning these transitions will require new approaches in product design and a mastery of a broader range of technologies. I'd like to show you how we're addressing that in some detail. Historically, Intel focused on process technology and CPU architecture, and this has driven 40 years of success.

We even called our development model Tick Tock to describe the relentless competitive metronome that we created. And of course, there are other areas of technology that we needed to complete our product designs, but they played a more complementary role. Going forward, process and CPU leadership continue to remain fundamental to our leadership. However, we need to expand our vectors of leadership to include 6 pillars of coordinated technology that we can integrate in a coherent fashion. And this will enable a roadmap with a faster pace of innovation.

And our business units are taking full advantage of those results. GB, Sandra and Navin will share more on that. But I'd like to show you how each of these pillars as constituents drives product leadership. As I said earlier, data centric workloads such as AI, video and graphics need compute architectures tailored to their requirements. These encompass 4 distinct classes of architecture: CPUs for scalar workloads, GPUs for vector workloads, NNPUs for matrix workloads, and FPGAs for workloads that require spatial processing.

And as Bob described, at Intel we collectively call these XPUs, and Intel is building the world's most competitive portfolio of XPUs. Integrating different XPU architectures in turn drives evolving requirements for product construction and manufacturing. At Intel, we've invested in breakthrough package innovation that allows heterogeneous integration of compute architectures beyond the single die: each XPU in its own process technology, in 2D or 3D package configurations, with performance parameters similar to a single die.

Intel is developing the world's leading portfolio of advanced packaging technologies. As data grows, and as Bob mentioned, both memory and interconnect can become bottlenecks. Intel has to make sure that even the best compute engines aren't starved of data, denying them their full potential. At Intel, we're developing a leading edge portfolio of memory and interconnect technologies that address the requirements of high bandwidth, low latency and power efficiency. Optane persistent memory is a key part of that portfolio.

And in the context of our digital lives today, privacy and security of our data have become non-negotiable. At Intel, we're taking a comprehensive approach to the integration of security into the architectural foundations of everything we do, from silicon to software to systems. Furthermore, we have a well defined process for consolidating the release of security updates in a regular cadence with our partners and customers. And we're committed to following industry accepted practices for coordinated disclosure of vulnerabilities.

Lastly, the role of software becomes broader and more significant. Specifically, as the layers below increase in complexity and diversity, software is left with the very challenging task of stitching together compute from multiple vendors across the CPU, GPU, NNPU and FPGA landscapes on which data centric workloads will be processed. This represents a significant differentiation opportunity for Intel. We will deliver a single set of APIs across all of our XPUs to our customers and our developers, in a manner that will enable them to move workloads between different processing engines. Raja will talk a bit more about that in his session. But that's a significant strategic initiative for us.

So to summarize, we think that data centric transitions will stimulate a new chapter of differentiation for Intel. We're uniquely able to co architect, design and integrate the 6 pillars of innovation to deliver product leadership. Our competitors have islands of excellence, but these are difficult to harness across multiple corporate boundaries in order to bring the aggregate together. I'd like to focus on 2 pillars in particular now. Firstly, process and packaging technology and then software.

Historically, Intel drove performance by integrating more and more content onto a single die. It was also the key axis for us to drive our cost structure going forward. Going forward, Intel is expanding the formula of integration well beyond the single die. We call this approach heterogeneous integration. To illustrate, the product on the right includes multiple XPUs, each in its own optimized process technology, on a platform size far exceeding the limit of a single die, all integrated into a single package through advanced 2D and 3D packaging technologies such as our recently disclosed EMIB and Foveros capabilities.

There are several fundamental advantages to this heterogeneous approach. First, we can intercept new process technologies up to 2 years earlier by interconnecting multiple smaller chiplets. Secondly, we can build much larger platforms with unprecedented levels of performance compared to monolithic alternatives. For example, our Foveros technology enables a 10x increase in interconnect bandwidth and, at the same time, a 6x reduction in interconnect power compared to multi-chip packaging. Thirdly, our roadmaps can be driven at a much faster cadence as a result of their increased configurability.

And as my colleagues go through their talks, you'll see how we're taking full advantage of this in a much more rapid pace of innovation across our roadmaps. And finally, this approach allows us to prioritize and sequence our SoC R&D in areas where performance is most correlated with logic scaling. In other areas where it is not, we have the option of selective outsourcing, thereby focusing our capital investments where we're most differentiated. Notwithstanding my prior comments, process technology remains foundational to Intel delivering product leadership. With that, let me give you an update on our process technology roadmap.

Now, it's no secret that Intel has struggled with 10 nanometer. And what I found in discussions with many of you is the perception that Intel's process innovation has slowed down during this time. So I want to share the insights and conclusions that I personally reached after a year of being deeply immersed in Intel's Technology and Manufacturing Group as part of my broader remit. I hope it will help you grasp how we've managed to continue delivering leadership products during this time and also help you understand how our manufacturing is recovering. As 10 nanometer was originally defined, Intel's leadership set very ambitious goals of achieving a 2.7x scaling in order to maintain our historical cost per transistor trajectory, in an era where we were dealing with the even more complex paradigms of immersion lithography.

To achieve this, the 10 nanometer team took on multiple revolutionary modules with inherent technical risk. Now, as Bob described, taking risks to tackle supposedly insurmountable challenges has always been and will always be part of Intel's DNA. We have a world class team that's fearless and goal oriented. But in hindsight, that team took on way too much risk in one step, and the interplay of those revolutionary modules proved to be very challenging.

And the actual schedule for 10 nanometer is really a result of that risk profile playing out. We learned one very powerful lesson from this experience, and that was the need to make sure that our engineers were given clear guidance on the balance of priorities between, of course, scaling and cost, but also making sure that schedule, power and performance were taken into account when they defined their technical plan of attack. As you can imagine, our travails on 10 nanometer left a major gap in our process roadmap, and I'd like to show you how we responded to that. To fill the gap, we had to extract more out of our 14 nanometer technology.

And as we discovered, there was a ton of untapped performance there for us to harvest, and we ended up introducing 2 rounds of optimizations in 14 plus and 14 double plus. We also adapted our roadmap to deliver timely product refreshes such as Kaby Lake, Coffee Lake and Whiskey Lake for our client portfolio and Cascade Lake and Cooper Lake for our data center product line. The net result of these optimizations is that between the first product generation on 14 nanometer, Broadwell, and the latest 14 double plus products such as Whiskey Lake, we achieved a greater than 20% improvement in transistor efficiency and were able to deliver a 30% improvement in turbo performance. This experience surfaced another really important lesson that we're institutionalizing, and that is that we must deliberately plan for intranode optimizations. Our 10 nanometer technology went into high volume production at the beginning of this year, and we're delivering without compromise on the original performance and scaling targets.

And there are multiple intranode optimizations to follow that will flow into the basis of our product roadmaps across our businesses. This highlights 2 more important lessons: the value in maintaining a mix of nodes to give our BUs flexibility to optimize for product performance, time to market and margin; and also that we have to make it easy and fast for our development teams to migrate their designs through intranode transitions. 7 nanometer will be the fullest realization of our new approach, incorporating all the lessons that I've just gone through that we learned on 14 and 10.

We've made schedule and time to market a priority. Nonetheless, we plan to deliver 2x scaling and a greater than 15% improvement in transistor efficiency gen over gen. We're radically reducing design complexity for our design teams with a 4x reduction in the design rules that govern how you design in a new process. As you can see, the 7 nanometer transistor geometry is also being lined up with our next generation packaging technologies. And a major area of risk reduction for 7 nanometer as we go forward will be Intel's first commercial use of EUV.

This technology will help drive scaling for multiple nodes. And as you can see, we're planning on many waves of intranode improvements. And lastly, the introduction of 7 will overlap with our last node of 10, 10 double plus. So to summarize, you should leave with 5 takeaways on process technology. We will deliver sustained process advancement between nodes and within a node. Put another way, we will deliver one Moore's Law of performance and scaling at the beginning of a node, plus another Moore's Law of performance within the node.

We will utilize multi-chip SoC construction to deliver uncompromised performance for all of our XPU engines. We're enforcing radical design rule simplifications to make it easy for our teams to move designs through intranode transitions. As Bob said, our first 10 nanometer products will be shipping in June, with additional launches across the entire Intel portfolio over this year and next. And 7 nanometer embodies all the lessons from 14 and 10. And as Bob also let out of the bag, we plan to launch our lead product in 2021.

Let me give you some details on those last two points. Our next generation flagship client product, Ice Lake, will be shipping in volume in June, and you'll see systems on shelves, as we've said since April of 2018, for holiday of 2019. Ice Lake takes full advantage of our 10 nanometer technology together with a host of architectural innovations, and it brings exciting generation on generation performance gains: approximately a 2x increase in graphics performance, up to a 3x improvement in AI performance, a 2x improvement in video encode performance and 3x faster wireless connectivity through Wi-Fi 6. And as you can see, Intel is planning multiple 10 nanometer product launches through the rest of this year and next across its entire product portfolio.

Let me now turn to 7 nanometer. I'm excited to unveil our first and lead 7 nanometer product. It will be a groundbreaking GPGPU targeted at data center and HPC applications, a major strategic priority for Intel. It will embody our new heterogeneous approach to product construction and use our advanced packaging capabilities.

The product will launch in 2021, and it's the basis of the previously announced design win with the Department of Energy to deliver the U.S.'s first exaflop computer, Aurora. And if you're wondering what an exaflop is, it's a quintillion floating point operations per second. If you're wondering what a quintillion is, it's 10 to the 18. And if you're none the wiser after that, it's the weight of the Earth in kilotons.
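For anyone checking the arithmetic, the comparison holds to within an order of magnitude. The Earth-mass value below is a standard outside estimate, not a figure from the presentation:

```python
# Sanity check of the exaflop comparison quoted above (illustrative only).
exaflop = 10 ** 18  # one quintillion floating point operations per second

# Earth's mass is roughly 5.97e24 kg (outside estimate); a kiloton is 1e6 kg,
# so the Earth "weighs" about 6e18 kilotons, the same order as 10^18.
earth_mass_kg = 5.97e24
earth_mass_kilotons = earth_mass_kg / 1e6

print(f"{earth_mass_kilotons:.1e}")  # prints 6.0e+18
```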

I think it's also 1,000,000,000,000,000. I could be wrong, but I think it is. Now, I'd like to turn to software. Our mastery of this domain needs to match our prowess in SoC and semiconductor technology design. At this point, it's my pleasure to introduce my friend and colleague, Raja Koduri, who can give much better voice than I could to our thinking and plans regarding software.

So Raja, please come on.

Speaker 5

Thank you, Murthy. Good afternoon, everyone. Murthy already eloquently laid out the connection between our 6 pillars, Moore's Law and the role software already plays and is going to play. As I was preparing for this session, I sat down with our software leadership at Intel to get their feedback on what I was going to say. And interestingly, their first reaction, and some of them have been at Intel for 25 plus years, was: wait.

They said, I think this is the first time we'll be talking about software with our investors; this is a huge change. And I asked them about the statement that I have here: for every order of magnitude of performance from new hardware architectures, there is often 2 orders of magnitude unlocked by software. I said, I see this, but I want some recent examples where you have done it. And they literally sent me hundreds, Excel spreadsheets of data: this software release, that software release and so on.

So I'll walk you through a few quick examples around that statement of what our software team has done in the last 6 months or so around our Cascade Lake product launches. The first one is the incredible performance improvement we delivered in the all important Java ecosystem. From JDK version 8 to JDK version 9, Intel engineers worked on delivering 6 times higher performance on existing hardware for all our existing customers and developers. If you look at it from a hardware standpoint, this is a few generations of Moore's Law performance in one software release. Another example: all the amazing work we do on memory. What unlocks that is the software around it.

The raw hardware alone is just very incremental. But if you combine it with our memory hierarchy architecture and the incredible work we do in the software stack, you get an 8 times improvement in workload performance. Again, several generations of Moore's Law. The third example is my favorite. Our deep learning software team has done an amazing job unlocking the full potential of CPUs over the past, I'd say, 12 to 18 months, utilizing architecture extensions like DL Boost: a 28x speedup from just the last generation of hardware, from Skylake to the top Cascade Lake CPU SKUs.
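The "generations of Moore's Law" framing can be made concrete. Assuming a nominal 2x performance gain per hardware generation (my assumption, not a figure from the talk), a software speedup of S is worth about log2(S) generations:

```python
import math

def equivalent_generations(speedup: float, gain_per_generation: float = 2.0) -> float:
    """How many hardware generations a software speedup is 'worth',
    assuming each generation multiplies performance by gain_per_generation."""
    return math.log(speedup, gain_per_generation)

# The 6x (JDK), 8x (memory stack) and 28x (DL Boost) examples from the talk:
for s in (6, 8, 28):
    print(f"{s}x speedup is roughly {equivalent_generations(s):.1f} generations")
```

Under that assumption, the 6x Java gain is about 2.6 generations and the 28x DL Boost gain is nearly 5, which matches the "several generations in one software release" characterization.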

And the other best kept secret at Intel is the impact software has on our competitive differentiation. Again, I'm picking a few workload domains out of the thousands of workload domains we track. The combined hardware plus software differentiation delivered by our products is very often underappreciated. Other companies would be doing press releases for 10%, and here we have a 10x per core advantage, 2x in networking, 2x in Java.

And as Bob said, when we were at greater than 90% market share, we didn't talk a lot about why we were where we were. There was a simple narrative around our process leadership, but these assets, these elements of our product leadership, have always existed. Now, this didn't happen just by chance. If you look at our software leadership, we have over 15,000 software engineers. We are the number one contributor to the Linux kernel.

We modify over 500,000 lines of code each year. We optimize over 100 operating systems. We have the top 3 contributors to Chromium OS. We have over 10,000 high touch customer deployments in software. We are a top 10 contributor to OpenStack, and we have a vibrant ecosystem of over 12 million developers.

So this is not something we really talk about often, but it's what makes up the leadership and the numbers that you saw in the previous slides. Now, I tried to capture the scale of Intel software in one picture, and it's just crazy: the work we do across developers, infrastructure customers, the network, operating system developers, the tools and SDKs we produce and the number of standards bodies we influence.

And I talked about the high touch customer deployments we do in IoT and edge. It's just incredible. And if you think about it, we built this entire capability around one architecture, leveraged across the PC, the network and the data center. This is awesome.

But when we talk about data centric developer growth, there are new areas growing rapidly. There is this entirely amazing thing going on around cloud native software development, which we have been tracking and investing in for a while. And then there is the GPU ecosystem, which over the last 10 years grew to an impressive 1 million developers. And then the AI ecosystem; the number may look small, but the ramp rate is incredible.

In a short amount of time, it's grown to 100,000 developers. Now, there is some overlap among these developer communities. So, what I'm going to walk through over the next couple of minutes is: what is our strategy? What are we doing? The first step, as I mentioned, is that we have absorbed the cloud native developers into our developer ecosystem.

Every morning I wake up and read about a new orchestration or container layer, and really cool innovations happening around making applications easy to deploy from the cloud to the edge to the device. We'll talk a lot more in the coming months about our cloud native developer strategy. Today, what I want to talk about is what we're doing about those other 2 bubbles there, the GPU and AI bubbles. So the first step, as Murthy alluded to, is that we needed an architecture strategy, a hardware roadmap, which we have now.

Murthy laid out a few interesting parts, and later in the day you'll hear from GB and Navin on the impressive roadmaps we have in these areas. So we laid out the vision: we stated in public in December that the future is a diverse mix of scalar, vector, matrix and spatial architectures deployed in CPU, GPU, FPGA and accelerator sockets. And we are executing to that. So that's great; we are now well on our way from the hardware side to executing on this vision.

But what does that enable for us? The first thing it enables is the center of the sphere there. We have multiple architectures and a couple of memory hierarchies and interconnect hierarchies to handle. We have the architecture roadmap in place, so we can cover the GPU and AI developers from a hardware roadmap standpoint. Now, you may ask: how is your strategy different from your competitors'?

So if you take a look at our competitors, using 2 examples: the one on the left, you can probably guess by the color there. They have a reasonably sized developer ecosystem, but around a single architecture. No memory strategy that I know about, and it looks like they're trying to establish an interconnect strategy. The one on the right has 2 architectures, no memory or interconnect strategy that I know of, and the size of their developer ecosystem is tiny.

In fact, without our invaluable software contributions, they have no software ecosystem that's meaningful. Now, a strategy and the potential of this beautiful circle in the middle only get you so far. As Murthy said, the challenge is how we're going to scale our software strategy from 1 architecture to 4 architectures, while also leveraging the memory and interconnect hierarchies. We've been working on this challenge for a little while now, and I'm going to give you an update on that. First, we set out a few simple goals for ourselves.

We said anything we do needs to be simple and scalable. What I mean by that is it should be simple for developers to adopt. And scalable not only across all our architectures, but across all operating systems; like I said, there are hundreds of operating systems we support. And also scalable not just for 1 node, but from 1 node to the millions of connected devices in the ecosystem that we aspire to play in.

The second goal we set is that it needs to be open. We are committed to open standards, open for all, and Intel has the best open source practices in the industry. Like I said, we are the number one contributor to the Linux kernel. The third goal we set is that we've got to have one developer experience. Today, we sometimes make working with Intel look like working with 10 different companies.

We said we need to solve that problem. So, we've been executing on this mission for over a year, and internally we call it the oneAPI project. And today, I'm super happy to report the team is making tremendous progress. Our customers and developers who have seen our strategy, gotten the details and gotten the specs are super excited.

And today, I'm announcing that we are on track to deliver it to developers by Q4 2019. So with that, let me hand it back to Murthy to take you to the close of this section. Thank you.

Speaker 3

Thanks, Raja. I hope you thought that was exciting stuff. I'd like to devote the rest of my presentation to product examples and really show you how we're bringing the 6 pillar concepts of how we develop product leadership to life. In the data centric era, as the graphic behind me shows, we face an incredibly wide design spectrum. And as we drive leadership across that spectrum, the 6 pillars will be co architected, designed and integrated in different ways to deliver world class products.

Let me illustrate that with a couple of examples and start with the data center. Naveen will be talking more about this in his presentation. But at Data Centric Day on April 2, we announced our broadest portfolio for moving, storing and processing data. 1 of the highlights of April 2 was our 2nd generation Xeon Scalable processor family, code named Cascade Lake. And the data center product portfolio addresses the $220,000,000,000 TAM that Bob described as part of the data centric opportunity Intel has.

And this product anchors that product portfolio. As you can see, the platform contains a diverse collection of technologies from across those 6 pillars. I'd like to highlight a few to amplify just how profoundly different the development model we've taken to deliver this product is from the past. Under architecture, we introduced DL Boost, our x86 instruction set extension specifically designed to accelerate AI workloads. Under memory, we developed breakthrough performance using Optane technology.

And under security, we're implementing a portfolio of differentiated technologies that reinforces our commitment to make our customers' products the most secure they can be in their respective industries. The breakthrough technologies we were able to deliver translated into the following key specs: up to 28x performance gains for AI workloads in the AP configuration, and up to 36 terabytes of addressable memory to process large data sets without having to go to off chip memory. Coupled with our highest ever core count and 200 gigabytes per second memory bandwidth per socket to minimize latency, this platform delivers extraordinary performance gains for data centric workloads. Now I'd like to talk about Lakefield, and again GB will talk more about Lakefield. But Lakefield is a product that inaugurates a new swim lane for our client portfolio.

This project started in 2016 with 1 of our largest customers, and the goal was to architect the future of intelligent agents. Together we defined some pretty ambitious and challenging goals: always on, always connected, to support applications requiring continuous sensor processing and network connectivity; 1 month standby, to extend autonomous operations for months on end; and smartphone like form factors, to allow integration into a diversity of devices. And as we translated these goals into platform technical specifications, we found ourselves in a technical conundrum. Specifically, we needed a 12 by 12 millimeter form factor for edge devices. That meant we couldn't use a 2D or planar design, given the features that we needed to put in that platform.

We needed ultra low standby power to enable extended autonomous operations. This meant we needed ultra low leakage transistor technology for key parts of our IP. At the same time, we needed uncompromising performance from our compute engines, our XPU engines, for the data centric workloads we needed to prosecute. And this meant we needed access to 10 nanometer leading edge technology for our XPUs. So how do we solve that puzzle?

By throwing out the rule book and rewriting it to use heterogeneous integration enabled by groundbreaking 3D packaging. Let me highlight some of the key innovations. First, we had the 12x12x1 millimeter form factor requirement, which acted as a forcing function to move away from monolithic design. We used an ultra low leakage technology, P1222, to create the base die. This die houses the chipset's always on, always connected functionality along with power delivery, and delivers a standby improvement of 10x over previous generations of product.

Now comes the real magic. This was realized through our Foveros 3D packaging and interconnect technology, which is key to enabling heterogeneous integration without compromising on performance or power efficiency targets: 0.15 picojoules per bit, and being able to deliver power at a level that can drive peak demand up to 1 kilowatt in burst scenarios for peak performance. These are really design parameters that I think represent foundational and frontier technology when it comes to figuring out how we solve some of the problems we had in front of us. To achieve the no compromise performance we needed, the compute die used our leading edge 10 nanometer technology and key IPs that we developed for Ice Lake, such as our flagship Sunny Cove cores and our Gen 11 GPU.
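To put the 0.15 picojoules per bit figure in perspective, here is a back-of-envelope calculation. Only the energy-per-bit number comes from the talk; the 100 GB/s bandwidth below is an assumed example for illustration.

```python
# Back-of-envelope: what 0.15 pJ/bit means for die-to-die interconnect power.
# Power = energy per bit x bits per second.
# The 0.15 pJ/bit figure is from the talk; 100 GB/s is an assumed example.

ENERGY_PER_BIT_J = 0.15e-12          # 0.15 picojoules per bit
bandwidth_bytes_per_s = 100e9        # assume 100 GB/s of die-to-die traffic
bits_per_s = bandwidth_bytes_per_s * 8

power_w = ENERGY_PER_BIT_J * bits_per_s
print(f"{power_w * 1e3:.0f} mW")     # 120 mW
```

In other words, at that efficiency even heavy cross-die traffic costs on the order of a hundred milliwatts, which is what makes stacking dies viable in a small, thermally constrained package.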

In addition, to achieve all day battery life, we implemented Intel's 1st hybrid compute architecture, which combines 4 low power Tremont Atom cores with 1 high performance Sunny Cove core. This combination enables Lakefield to achieve a standby power of 2.6 milliwatts and also burst to 27 watts when high performance is required for demanding workloads. You couldn't have done that in a monolithic, constrained environment. And to complete the design, we used PoP DRAM to deliver on the 1 millimeter Z height requirement. This allows our customers to dramatically reduce the PCB footprint of the chip and also deliver a mobile like form factor.
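Those standby and burst figures imply an enormous power dynamic range. The ratio below is simple arithmetic on the two numbers quoted, not an additional specification.

```python
# The dynamic range between Lakefield's quoted standby and burst power.
# Both inputs are the figures from the talk; the ratio is derived arithmetic.

standby_w = 2.6e-3   # 2.6 milliwatts in standby
burst_w = 27.0       # 27 watts in burst

dynamic_range = burst_w / standby_w
print(f"~{dynamic_range:,.0f}x")  # roughly a 10,000x power dynamic range
```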

The culmination of our work delivered what I believe is a truly breakthrough product for Intel, with up to 10x standby and up to 2x active power improvement relative to 14 nanometer predecessors, a 2x improvement in graphics performance, and over a 50% reduction in PCB area. And we'll be in production with Lakefield by the end of this year. The development of Lakefield demanded new levels of collaboration and flexibility across Intel's key design functions, and it really exemplified a 1 Intel mindset. This product could not have been delivered if our development teams went up to, but didn't cross, their traditional responsibility boundaries. What it really required was for Intel to think like a real product leadership oriented company, where we had process engineers, packaging engineers, design engineers, software engineers, all putting the customer's needs and problem at the very center of the table around which we had our debates.

And everybody brought lucidity and freedom of thinking in terms of how we surmounted problems that we hitherto would have left at a departmental boundary. I think this is an example that really took Bob's culture slide and drove it into reality. The necessity of meeting our customers' needs required us to think differently. And we brought technology out of our research and our advanced development activities at a rapid pace to be able to really define something that's truly groundbreaking. And I think when Lakefield comes out, it's going to be a real category defining product for both us and the industry moving forward.

Much of the technical DNA that we built into Lakefield will propagate through our products going forward. And that DNA already underpins the future of our data centric roadmaps. I kid with my colleagues, in many ways paraphrasing Neil Armstrong: I see Lakefield as a small step for GB's client roadmap, but a huge leap for our data centric roadmap going forward. I think I got it right.

I'm not sure if it's a huge leap or a giant leap. It's one of those things. But I really think it's going to be something that you will see the DNA of in almost every product that we'll talk about from this point forward. So let me summarize what I've hoped and tried to accomplish in my conversation with you this morning, together with Raja. I hope I've accomplished 3 things.

1, giving you a deeper insight into the nature and evolution of data growth, how that growth is driving 3 major data centric transitions, and how Intel will outgrow the TAM by winning these transitions. I also hope I've demonstrated how Intel has translated that insight into an innovation model based on 6 pillars of technology. These include relentless innovation in process and packaging. Let me remind you: one Moore's Law of performance scaling at the beginning of a node, and another Moore's Law of performance through the node.

We've learned from our 10 nanometer experiences. I think we've come out as a stronger company as a result of it. We have been humbled by the lessons we've learned, and we've institutionalized and driven those into our definition of 7 nanometers, and we're executing at full pace per the original schedule that we set out 2 years ago on that 7 nanometer roadmap. We're driving breakthrough memory and interconnect technology. Again, our massive arsenal of XPU processing engines will not function at peak performance if they're starved of data.

So how we keep those engines fully fed with enough data to keep them running at peak performance is absolutely paramount in the way we define our architectures going forward. We have a broad portfolio of XPUs. Of course, the CPU remains the central nervous system of our architectures going forward, but it will be complemented by, we believe, really high performance GPUs, neural network processing units where we have specific AI workloads that benefit from custom acceleration, and field programmable gate array technology where we need the flexibility to handle spatial workloads. And as Raja so eloquently described, we will provide and harmonize access to those XPU technologies through a single set of APIs that makes the transition from 1 XPU architecture to the other feel seamless to the user. You land on 1 Intel XPU and you can migrate through the portfolio with exquisite ease.

And I also hope I've shown you how we're using all 6 pillars to drive leadership across the breadth of our data centric product roadmaps at an accelerated pace. And that's the key thing: moving our product portfolios forward at a much faster clip than we've been able to do up to now. When I think back to the last time I spoke to this audience, in 2017, I talked about how the fusion of process technology and product delivery was going to be key to Intel's future. How architecture and process definition were co designed, how architecture and software were co designed, how we pull down organizational boundaries and start thinking as coherent, integrated teams that focus on solving our customers' needs and aspirations, and anticipating where they're going to be going in the future, in a manner where we could be fearless in driving technical disruption and product leadership. I hope in my talk today I brought that to life and demonstrated the progress we've made in the last 2 years.

We're really excited about the future here at Intel. We have just an incredibly exciting roadmap in front of us to achieve. The challenge for engineers like me and Raja and many of the others on our team is what makes work fun every single day. So we're really jazzed about the work we have ahead of us. And I hope as you get to see our progress and our roadmap, you'll find that we're delivering exciting stuff too.

Thank you very much.

Speaker 1

All right. We're going to take a quick break now. For those of us here in Santa Clara, we'll invite you to join us out in the lobby. We'll come back into the auditorium at about 5 minutes to 2. And for those joining us on the webcast, we'll bring the webcast back up at 2 o'clock sharp.

So thank you, everyone.

Speaker 6

All right. Hi, everybody. Good afternoon. So far, you've heard the enterprise wide strategy from Bob. You heard about the product leadership thrust and strategy from Murthy.

And what I'd like to do is spend the next 45 minutes or so doing a deeper dive on the data centric strategy, the data centric businesses, and then the opportunity that we are driving to create more value for our customers and more value for you. So, let's get started. Three key messages that should be familiar to you by now. Our long term view, the opportunity and our strategy have not changed in the last year. We view the massive data centric opportunity you've heard us talk about, the $200,000,000,000 in front of us, as the largest opportunity in our history.

3 major industry megatrends are driving that growth: the rapid proliferation of artificial intelligence, the growth of the cloud, and the cloudification of other elements of the business, the network and the edge in particular. And all of these trends really leverage our strengths: our strengths in high performance computing, in ecosystem building, in architectural innovation. And finally, to win over the long term, we feel very strongly that a broad portfolio is essential and that our unparalleled set of assets is what will differentiate us from others as we get after that large market. Now, the underlying driver of what we talk about today is all based on this massive growth of data, this largely untapped flood of data, giving rise to the 3 megatrends that I mentioned earlier: the growth of AI, the proliferation of the cloud, and of the network and the edge. The AI trend started many years ago with the most sophisticated, data rich organizations.

But what's interesting now is the way in which AI is permeating all companies. And you're going to hear some examples later today on how we intend to exploit that trend. The proliferation of the cloud, of course, we've seen at the large hyperscale companies. But what's interesting to observe now is the way in which cloud architectures are also moving into enterprises and even on premise enterprises, all built on Intel architecture and many of the innovations that we brought forward to the hyperscalers many years ago. And finally, those same concepts of cloud based economics, of being able to scale up very quickly are now transforming the network itself, cloudifying the network.

And that's leading to tons of opportunity for us, allowing us to think about bringing the same dynamics from the public cloud into the network. And all of that is accelerated as the world moves to 5G and as computing moves closer to the edge, closer to where the data is being created and consumed. So, these three underlying megatrends inform everything that we do. Underneath that, you've seen this big explosion in the demand for computing. Bob showed a 50% CAGR in compute cycle growth from 2018 to 2023.

If you take a slightly longer view of that, you see a 60% CAGR for compute demand over this horizon. And equally importantly, as that compute demand has grown, we've seen a diversifying of the types of workloads that customers are running: from security to virtualization to databases to network to multi cloud to orchestration to AI. This diversity has increased as compute demand has increased, and that in turn creates an opportunity for us, because this broad portfolio of products we have is uniquely something that Intel can do. Inside of that context, this is how we've seen the data centric silicon market expand over time: $150,000,000,000 last year to over $200,000,000,000 in 2023.

And what's new here in this latest update is the dip that you see in 2019. And Bob talked a little bit about this earlier. We've seen customers now absorbing the purchases they made in 2018, when they were above trend line. But we see that as a temporary phenomenon. As you can see, we expect that the market will continue to grow to well over $200,000,000,000 in 2023.

And the compound annual growth between 2018 and 2023 is 7%. When we think about this TAM, we start with the mindset that we have only 20% share. And that informs the way in which we think about the actions we take. Our ambition, our goal, is to grow our revenue in the data centric portfolio faster than that TAM, at high single digits. In other words, to grow our market segment share between last year and the out years of this forecast.
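As a quick sanity check of that arithmetic (the compounding below is mine; the inputs are the figures just quoted):

```python
# TAM sanity check: $150,000,000,000 in 2018 compounding at a 7% CAGR
# through 2023 (5 years of growth).

base_2018 = 150e9
cagr = 0.07
years = 2023 - 2018

tam_2023 = base_2018 * (1 + cagr) ** years
print(f"${tam_2023 / 1e9:.0f}B")  # about $210B, i.e. "well over $200,000,000,000"
```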

Bob shared the transformation that Intel has been going through over the last several years. There is a similar transformation happening inside of the Data Center Group, DCG, part of the data centric businesses we have. First of all, you see that over the last 5 years, we grew this business at a 12% compound annual growth rate. What may be more interesting is the way in which, under the covers, that business has changed. In 2014, 60% of our business was the traditional on premise enterprise and government business.

Fast forward 4 years, and now cloud and communications represent 65% of our business, with enterprise and government at only 35%. And we expect that trend to continue. In 2019, we expect cloud and comms to be 70% of the DCG business. Now, at earnings last month, we reported that we now expect DCG to be down mid single digits in 2019, off of that record, above trend line 21% growth in 2018. The inventory and capacity digestion that we described in January is going to take a little bit longer.

And the China headwinds have increased since January. And as a result, we brought our guidance for the year down. We do continue to expect a stronger second half than first half, but we moderated our expectations for the pace of the recovery relative to what we thought in January. I would expect that Q2 would be the bottom for us in the cloud, comms and E&G segments, with a stronger Q3 and Q4 as we navigate our way through a challenging 2019. Now, let me double click on some of the growth drivers inside of the DCG portfolio.

And I want to start with the single largest segment in the data centric portfolio that we have, and that's the public cloud service provider business. This has been a fabulous business for us: a 30% compound annual revenue growth rate over the last several years. And the growth of the top 7 companies inside of that is well known; all of you know about that, at 35% over that time frame, probably not surprising. What may surprise you a little bit is the growth rate of the next wave.

The next wave is sort of the next 300 companies that we do business with in the public cloud. They've also been growing at a very fast rate, 29% over this period of time. In fact, in 2018, that group of customers grew at over 30%, so kind of accelerating in the out years of the historicals that we're showing you here. Now, that doesn't happen by accident.

That growth in this next wave of customers has happened because we have been driving that through investments in these companies. We've doubled our investment in our sales force for the next wave. We've doubled our investment in hardware and software, architectural, engineering support that we provide these companies. So driving this diversity is strategically important for us, and we expect that will continue as we go through time. In addition, we've been deepening our partnership with the top CSPs.

We've been increasingly customizing our products for their needs. The value of performance per TCO is so important to these companies that over the last 5 years or so, we've found ways in which we can uniquely optimize our products, largely our CPUs but other products as well, on thermals, on core count, on frequency, for their unique needs. In some cases, we're building custom CPUs or custom ASICs that are entirely dedicated to 1 company. The custom CPU portfolio has grown from 25% of our volume in 2014 to over 55% of our volume in 2018. And we expect that will continue to grow.

And again, we believe that's a differentiator for Intel over time. Now, finally, there is a misconception that our public cloud business is simply cannibalizing our enterprise and government business. And that is not the case. The reality is that our cloud business is expansive to us, TAM expansive to us. Over 2 thirds of our cloud business is net new revenue for us.

Consumer services like Twitter or YouTube or online gaming, these are new capabilities that are in the market that grow the overall pie for us. And then on the enterprise side, there are new use cases that simply wouldn't exist without the public cloud. AI comes to mind as an example of that. Things like Salesforce or Workday or ServiceNow, all new born in the cloud services that we view as expansive, because they didn't exist in the traditional on premise environment. So the public cloud has been a great growth driver for us.

But what's interesting is the architectures that we've had to invent and optimize for in the public cloud are now permeating into the traditional enterprise segment and even the comms service providers. In the last 9 months or so, we've seen every major public cloud service provider in the U.S., AWS, Google and Microsoft, announce on premise cloud architecture solutions: Outposts, Google Anthos, and Microsoft's recent announcement just this week of their VMware solution. In addition, traditional enterprise companies like Dell and VMware just this week announced their own on premise hybrid cloud solution. All of these, we are deeply optimizing for Intel architecture.

This is good for us. Enterprise companies have more choice now. And with these offerings architecturally optimized for Intel, I think it will be net positive for us over time. On the left hand side of this chart, I wanted to talk a little bit about how many companies are trying to leverage data to do this thing called digital transformation. This buzzword that's been out there is starting to become more and more real.

And these are just three simple examples of that. Rakuten, which Sandra will talk a little bit more about in a bit, is an e commerce company in Japan transforming itself into a mobile network provider using technology, our technology, to make that happen. SF Express is the largest delivery company in China. They're using AI now to fully automate their warehouses and to optimize last mile delivery.

Siemens Healthineers is a 171 year old company that is using AI to transform the way they do cardiac MRI. So AI and digital transformation are becoming more real, even in the enterprise and comms service provider segments. So while we've brought our forecast for the enterprise and government segment inside of DCG down in 2019, due to the inventory burn and the China headwinds that we have, over the medium to long term we are quite confident that enterprises are going to invest more in digital transformation, not less. Now, in the context of that environment and those trends, our strategy is relatively simple. We are on a mission to help our customers architect the data centric infrastructure of the future.

And to help them take that massive amount of data, largely unstructured data, and find ways to move that data faster, to store more of that data and to process all of that data. And you've seen us build out a portfolio of products to help our customers do just that, whether it's silicon photonics or Ethernet or high performance fabrics or NAND or Optane persistent memory or in the processing category, our CPU and XPU investments between CPUs, AI, ASICs, GPUs, FPGAs. We've built out a very broad portfolio of products, all of which are underpinned with this investment in software that Raja talked to you about to help customers get more value out of this increasingly broad portfolio of products. And you saw us bring this strategy to life about a month ago at our first ever data centric portfolio launch. Historically, we would launch all of these products in different places individually.

And on April 2, we decided to pull the portfolio together to talk about how we were going to help customers move data faster, store more data and process all that data. And you could see that in a broad portfolio of new products that will be coming to market in 2019. Many of these products are designed to work together uniquely, i.e., you need both of those products together to get the full benefit. 2nd generation Xeon Scalable plus Optane persistent memory is a great example of that.

But there was a wide variety of other products that we announced at that launch that we think will bear fruit for us and drive growth for us in 2019 and beyond. And I'm very happy with the customer reaction we've seen in just the short month or so since the launch. And one of the most important products at that launch was our 2nd generation Xeon Scalable, our most comprehensive Xeon launch ever, with over 50 standard SKUs that we launched, dozens of custom offerings, 8 cores up to 56 cores, 1, 2, 4, 8 socket support. And in the volume price points, the mainstream price points where we ship the most volume, we're delivering the largest gen on gen performance improvement we've delivered in the last 5 years for the volume price points at over 30% gen on gen. All of that gives us confidence that this 2nd generation Xeon Scalable will be our fastest ramping Xeon in history.

We built on our long history of bringing Xeon into the market, our 20 year history of bringing Xeon into the market year after year, to deliver workload level performance gains that we hadn't seen in a long time, with real architectural innovation. Raja highlighted how some of this was exploited through software innovation: HPC workloads, where we saw a 2x performance improvement; security, a 3x performance improvement; network capabilities, a 2x performance improvement; 1.4x on cloud orchestration; 3x on AI workloads; and so on. We also combined our 2nd generation Xeon Scalable with Optane persistent memory, and we're delivering great performance improvement that wasn't possible with Xeon alone: a 1.3x in memory database performance improvement, or 8x more VM instances on software solutions like Redis. This is one of the ways in which we will differentiate ourselves over time: workload delivered performance. I mentioned Optane Data Center Persistent Memory.

This is a true platform approach for us, where we're optimizing the processor and the memory technology to work well together, over 10 years in the making to bring this product together, something that really only Intel could have done. We invented the Optane media. We invented the memory controller inside of the CPU. We invented the DIMM technology. We built the firmware.

We invested in the ecosystem to make this all work. We're going after a $10,000,000,000 market here where we have essentially no share today, growing at 50% over the next 5 years. The capabilities that we can address with Optane plus Xeon, in workloads such as in memory databases, VMs, content delivery, analytics and high performance computing, are being proven out now. And we're very excited that since the launch, we're deep into proof of concept deployments with 100 Fortune 500 companies, 5 of the Super 7, 30 next wave CSPs and over 10 comms service providers. Just yesterday, Microsoft announced their bare metal instance with SAP HANA using Optane persistent memory.

At SAP's conference, SAPPHIRE, Hasso talked about how they're going to deploy SAP HANA at other cloud service providers using Optane persistent memory. So, very exciting what we can do with Optane. Let me shift now and talk about what's next. Our next generation Xeon platform is going to support both Ice Lake and Cooper Lake. And I'm happy to tell you that that platform is on track.

Cooper Lake will support higher core counts, and Ice Lake is our first 10 nanometer data center Xeon. I'm happy to tell you that it is on track for production in the first half of 2020. We are now shipping samples to customers, and many of those customers have already powered that silicon on. So we're making very, very good progress on Ice Lake, and we expect to advance our per core performance leadership as we enter 2020 with that product. But we're not stopping there.

We are going to take advantage of much of what you heard from Murthy. And in a world where there is near insatiable appetite for computing, and where our customers' demand for computing is ever increasing, we are going to pick up the pace. Historically, our Xeon roadmap was delivered on a 5 to 7 quarter cadence. Starting now, we are investing to accelerate the pace. And our plan is to bring out our Xeon platforms at a 4 to 5 quarter cadence from today forward: from Cascade Lake to Ice Lake to Sapphire Rapids, our next generation 2021 Xeon built on 10 nanometer plus plus technology, to the next gen after that.

We are going to be on a 4 to 5 quarter cadence and bring pace to bear on the compute demands that our customers have. No place is this pace more important than in the world of AI. And we've talked about how in the AI domain, we were going to move AI from being simply something that the scientific elite could take advantage of to something that all customers could take advantage of. We're doing that in the context of a very large market. Today already, AI in the data center represents a $4,000,000,000 silicon opportunity, growing at a rapid rate to $10,000,000,000 by 2023.

Now, what's interesting to note here is the split between training and inference, right? It's about fifty-fifty as we look at the market over time. And while training is something that is talked about a lot in the market, I believe that over time, inference is actually going to turn out to be the more interesting market and maybe even the larger market. The other thing that we're disclosing here today is the degree to which AI has contributed to the data center revenue for Intel. Previously, last year, we disclosed that in 2017, we had about $1,000,000,000 of AI revenue inside of our Xeon and FPGA portfolio.

We're disclosing today that in 2018 that revenue grew to about $1,700,000,000 and we expect further growth as we look at the market in 2019. Against this backdrop and against this opportunity, we have a multipronged strategy that you heard about from Bob and from Murthy: this notion that we are going to build out a portfolio of products, CPUs and XPUs, multiple architectures, to address our customers' AI and advanced data problems. We will be the only company in the world that has a scalar architecture, a vector architecture, a spatial architecture and a matrix architecture in house. And we deeply believe that one architecture will not solve all of the AI problems of tomorrow.

From the data center to the edge, from low power domains to high power domains, we think a portfolio approach is what it's going to take to win over the long term. Now, we're already building purpose built products for AI: products in the Mobileye portfolio, products like Movidius, products like our FPGAs. But we're also infusing AI into our existing products, such as Xeon. And you heard a little bit about that from both Murthy and Raja. What we did when we launched the 2nd generation Xeon Scalable is we introduced this new architectural capability called Deep Learning Boost.

And what we're plotting here is simply the improvement in AI performance. We're showing Caffe ResNet inference performance over time. And you can see that from when we introduced our 1st generation Xeon Scalable in July of 2017 until the end of last year, 2018, we saw a 5.7x improvement in performance. When we introduced the 2nd generation Xeon Scalable with DL Boost, we saw a 14x improvement from July of 2017. And with the high end Xeon Scalable AP version, we get another 2x, or 28x in total, from July of 2017 until now.
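A quick restatement of how those multipliers relate, all measured against the July 2017 baseline (the framing below is mine; the numbers are from the chart):

```python
# How the quoted gains compound, all relative to the 1st generation
# Xeon Scalable at its July 2017 launch.

software_gain = 5.7     # software optimization alone, July 2017 to end of 2018
dl_boost_total = 14.0   # 2nd gen Xeon Scalable with DL Boost, vs July 2017
ap_multiplier = 2.0     # the high end AP version adds another 2x on top

total = dl_boost_total * ap_multiplier
print(f"{total:.0f}x")  # 28x vs the July 2017 baseline
```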

This is transformative for our customers. The ability to have AI built right into Xeon changes the game for them. They don't need to think about deploying accelerators for their inference problems when we can deliver this kind of performance improvement in many cases. In some cases, they still need accelerators. But in many cases, this level of performance improvement obviates the need for an accelerator for inference use cases.
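The int8 arithmetic behind DL Boost is what makes that possible: quantize activations and weights to 8-bit integers, multiply-accumulate in integer hardware, then rescale. The sketch below is a rough illustration of the idea only, not Intel's implementation, and the scale factors are arbitrary choices for the example.

```python
# Minimal sketch of int8 quantized inference (the idea behind DL Boost):
# quantize floats to int8, accumulate in integers, rescale back to float.
# Illustration only; not Intel's implementation. Scales are arbitrary.

def quantize(xs, scale):
    # Map floats to the int8 range [-128, 127].
    return [max(-128, min(127, round(x / scale))) for x in xs]

def int8_dot(a, b, scale_a, scale_b):
    qa, qb = quantize(a, scale_a), quantize(b, scale_b)
    # Integer multiply-accumulate; hardware does many of these per cycle.
    acc = sum(x * y for x, y in zip(qa, qb))
    return acc * scale_a * scale_b  # rescale the result back to float

a = [0.5, -1.25, 2.0]
b = [1.0, 0.75, -0.5]
exact = sum(x * y for x, y in zip(a, b))  # the fp reference answer
approx = int8_dot(a, b, scale_a=2.0 / 127, scale_b=1.0 / 127)
assert abs(exact - approx) < 0.05  # close enough for many inference workloads
```

The speedup comes from trading a small, controlled quantization error for much cheaper integer arithmetic and denser packing of operands.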

And we invested heavily to make sure that all of the major frameworks were ready to go on day 1 at the launch. And you can see some of our customers tweeted out their appreciation for that. This one happens to be from Yann LeCun, who's the Chief AI Scientist at Facebook, talking about how DL Boost with PyTorch speeds up the predictions that Facebook makes every day, the 2 trillion predictions they make, the 6,000,000,000 translations they make every single day. So as I said, we can't solve all problems for inference with embedding AI into Xeon. We have to do more than that, and we are.

We're investing also in purpose built inference silicon. At CES, in January, we announced our first purpose built inference accelerator called the Neural Network Processor for Inference, the NNPI. And I'm happy to tell you that that product is progressing very well. We've powered on the silicon. We previously announced that we're partnering deeply with Facebook.

You can imagine we're talking to many other customers as well. We anticipate that we'll sample this product later in 2019 and bring it out into the market in 2020. When we introduce this product, it will have industry leading deep learning TOPS per watt and power efficiency. And another new disclosure today is that this product will have integrated IA cores in it, which we believe gives us a material advantage for workloads that have a mix of inference acceleration and CPU scalar code, which is a lot. A lot of workloads have that mix.

And so that we think gives us a material advantage. So we're very, very excited about the NNPI. We think it extends our position in AI and allows us to grow from that $1,700,000,000 in 2018 as we go through 2019 and 2020. Okay. So let me talk now about another exciting area of growth, and that is the network and the edge, all being accelerated by the build out of 5G.

The cloud, of course, was built on Intel architecture and built on many of the technologies, such as virtualization, that we invented. And many of those technologies that we invented for the data center and the cloud are now moving their way into the network infrastructure, into the core of the network, into the access of the network, into the edge of the network. And in aggregate, this represents a wonderful opportunity for us. I often say, maybe one of the most attractive opportunities for us, a $25,000,000,000 silicon opportunity where we have relatively little share. In addition, as the build out of 5G continues, we expect more and more of that compute to move further to the edge, to move closer to the user in industrial contexts, in autonomous driving, as you heard from Amnon.

And that same need that triggered the need for the network to transform is triggering the need for the industrial world to transform, to move more compute closer to where that data is being created. And that, in aggregate, is another $40,000,000,000 of silicon opportunity: $25,000,000,000 plus $40,000,000,000 is a $65,000,000,000 silicon opportunity by 2023. And no one, we believe, is better suited to address the high performance computing needs that this represents than us. But we, of course, aren't starting from scratch. We already have an excellent position in these trends, these trends that are sort of moving in our direction.

In aggregate, this is not theoretical for us. Between our IoTG business, our network business inside of DCG and a portion of our FPGA business, we already have a $9,500,000,000 revenue run rate across these three segments, growing at over 20% year on year. So we're super excited about the position we have, and we're even more excited about the position we could have as we go through time. So, to talk more about that, to give you more insight into that, I'm really excited to have Sandra Rivera join us, our Senior Vice President for the Network Platform Group, come talk more about this. Sandra?

Speaker 7

So, as Navin stated, the network is a big growth opportunity for Intel, with a $20,000,000,000 network logic silicon TAM in 2018, growing to a $25,000,000,000 TAM opportunity for us by 2023. And over the last 10 years, we have been focused on optimizing the network platform to run best on Intel architecture, leveraging all of the strengths and product and market leadership that we have in server virtualization and cloud. And as a result, we have built a growing and profitable business for Intel. So let me start with the landscape of our business.

So as we know, data continues to grow exponentially. And the job of the network is actually to inspect, encrypt, compress and transport all of that data. On the left hand side, you'll see a traditional network. And here, the network is depicted as a big fat, some might say dumb pipe between the things and pools of compute, analytics and storage. And that network in fact was built out of fixed function purpose built appliances.

These would be load balancers and firewalls, switches and routers that you would be familiar with. And in fact, those network nodes did the job they were intended to do quite well. But networks are typically dimensioned for 24/7 peak traffic, 365 days a year. And because those fixed function appliances could not be reprovisioned or reprogrammed during idle times, many of those network nodes are not utilized more than 25% or 30% of the time. That left a lot of underutilization of the assets. So our idea, our big, bold, some might say a little crazy idea when we dial the clock back 10 years, was to re architect the network platform as a high performance computing problem.
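The utilization argument above can be sketched numerically. The 25% utilization figure is from the talk; the 4 network functions and the 1.5x pool headroom are hypothetical illustration values, not numbers Intel disclosed:

```python
def fixed_vs_pooled(n_functions, peak, avg_util, pool_headroom=1.5):
    """Capacity needed as fixed appliances vs a shared server pool.

    Each fixed function appliance must be dimensioned for its own
    peak, while a reprogrammable server pool only needs the combined
    average load plus a safety headroom (pool_headroom is a made-up
    factor here, purely for illustration).
    """
    fixed_capacity = n_functions * peak
    pooled_capacity = n_functions * peak * avg_util * pool_headroom
    return fixed_capacity, pooled_capacity

fixed, pooled = fixed_vs_pooled(n_functions=4, peak=100.0, avg_util=0.25)
print(fixed, pooled)  # 400.0 vs 150.0 units of capacity
```

Under these assumed numbers, a shared pool needs well under half the hardware of peak-dimensioned appliances, which is the server volume economics point Sandra makes next.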

And in that process, we would deliver to our customers all of the benefits of server volume economics, the advantages of pooling of the underlying infrastructure to be shared across many different workloads and use cases, and more recently, the benefits and the advantages of cloud architectures and cloud business models. So, what you see on the right hand side is where the network is now becoming a distributed, intelligent, scalable, programmable set of platforms throughout that entire continuum of data delivery. This is the intelligent network, where not only are you running the network workloads on server based architectures, you're now also able to perform all types of data analytics and compression and storage, and really dispositioning that data at that optimal point of data creation or service delivery. And so what we're building out now is a set of distributed data centers and distributed clouds, which is really accelerated by 5G.

Why? Because 5G is that true convergence between computing and communications that now requires a more intelligent, scalable and programmable network. And of course, the build out of an intelligent edge also plays to this transformation of the network and this journey and these market trends that we are driving. So, as I indicated, we have been investing for leadership in this market segment for many years, anchored, of course, on our CPUs and our SoCs. Just last month, in our big data centric launch, we launched Cascade Lake with NFV optimizations that give you better performance for network workloads on our Xeon platform.

We also launched our latest Xeon SoCs, which are particularly helpful to us in power constrained environments like the edge of the network. And then, of course, earlier this year, we launched Snow Ridge, which is our network SoC that's really designed for 5G wireless base stations, allowing us to take advantage of a consistent architecture across that continuum of products. And then finally, our FPGA assets, our Arria 10, our Stratix 10 and most recently our announcement around Agilex, allow us to build into the platform the level of programmability that we need as 5G builds out and the 5G wireless stack continues to evolve. Beyond our processing portfolio, we have other capabilities in that network platform: our connectivity capabilities with Ethernet, which is a technology that we invented over 35 years ago and where we have leadership, and where we've also introduced a 100 gig NIC recently at our data centric launch. And then there are new innovations like silicon photonics, which help us in the fronthaul of the network, but also where we have opportunity for growth in the backhaul transport.

And then finally, to reiterate a lot of the points that Raja made earlier, the power of software. Software is so critical to our value proposition and our ability to actually achieve the type of high performance packet processing that network workloads require. Software toolkits like OpenNESS abstract the complexity of the underlying network at the edge of the network. OpenVINO allows us to do computer vision and edge inference, again, a fast growing workload for us. And then DPDK has become the de facto standard for high performance packet processing on general purpose CPUs, which is another technology that we invented.

So, when we look at that 10 year journey, we look back at having invented NFV together with our ecosystem, and through that process, all of the learnings that we gained from hundreds of proof of concepts and trials and now mass deployments. In fact, virtually all of the commercial deployments of NFV in the world are running on Intel architecture. We also invented DPDK. As I mentioned, we've contributed that to the Linux Foundation, and we learn and grow and continue to improve that platform in community.

And we partnered with not just our direct customers, but by anticipating the needs of our customers' customers, the service providers, we've been able to go with them on their journey of transformation of their network. And we partnered with a company as large and as established as AT&T, a 144 year old company, a behemoth in the industry, as they announced their goal to transform their core network functions to 75% virtualized by the end of the decade. And having crossed that 65% mark by the end of last year, they're well on their way. But at the other end of the spectrum, we've also partnered with new market entrants, as Navin mentioned. We have been working with a company by the name of Rakuten, which is an e commerce company with 100,000,000 subscribers on their platform. And we've engaged with them to launch the world's 1st fully virtualized, 100% cloud native platform for mobile networks.

And this is built completely on Intel architecture, on our CPUs, on our FPGAs, on our Ethernet technology. And because it's been designed from the bottom up as a cloud native platform, all of these innovations are really done in software built on top of a very simplified set of hardware platforms, just 4 SKUs to run their entire network. And from an operational perspective, taking a page out of the cloud playbook, they're really able to operate that entire network with just a fraction of the operations resources. We will have done all of that in less than a year by the time we get to our first trials in June. And so this is a story that's as much about mindset as technology innovation, in terms of just embracing a challenge and driving forward with new innovations.

And one thing that I wanted to show is that we have been working, again, not just to meet our customer requirements, but also to anticipate what the market will need. And this here is a ruggedized server platform. This is probably not what you would normally imagine as a server, but in fact, inside we have our CPU, we have an FPGA platform in here as well. We have the ability to add Movidius for edge inference and for computer vision. And we also have Ethernet connectivity and slots for SSDs, our Optane SSDs, which are really important again at the edge of the network because that's where we're looking to cache a lot of content.

So this is just an example of the type of environment that we are working with our ecosystem and our customers to deploy at scale. So what are our business results, the thing that you guys probably care most about? When we look at the value proposition that we've been delivering to our customers: the idea that they can take advantage of server volume economics, that they have a tool chain that's rich and extensive to abstract network complexity and develop networking applications running on server based architectures, and of course the access to the broadest ecosystem in the world. In fact, over the last 5 years, our market segment share has grown from just 8% in 2014 to 22%, crossing the 22% line last year, and that's together with our CPU portfolio as well as our FPGA portfolio. So this is an industry that has been growing roughly 5% a year, and we have grown at about a 40% compound rate over that horizon.

And what we've been doing is innovating network IP and integrating it into our portfolio so that our customers can have a more simplified architecture. And the network workloads that used to run on different architectures, different chips on a network platform, are now running with higher performance, more programmability and higher packet processing capabilities on our CPUs. And as you can see, our customers value those innovations, because our ASPs have been growing throughout that horizon at a 10% CAGR over the last 5 years. But we've also extended our portfolio from Xeon SP, our Xeon CPUs, to our Xeon SoCs and our Atom SoCs for those more power constrained and footprint constrained environments. And here we see our volume growth expanding over the last 5 years at about a 20% CAGR.
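The growth figures above can be sanity-checked with simple compound growth arithmetic. The 8%, 22%, 10% and 5-year figures are from the talk; the helper function is just the standard CAGR formula:

```python
def cagr(start, end, years):
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1.0 / years) - 1.0

# Share grew from 8% of the segment in 2014 to 22% by 2018 (4 years):
share_cagr = cagr(8.0, 22.0, 4)   # ~29% a year growth in share terms

# A 10% ASP CAGR sustained for 5 years compounds to ~61% in total:
asp_total = (1.0 + 0.10) ** 5 - 1.0
```

So share nearly tripled over the period, and even the "modest" 10% annual ASP growth compounds to more than 60% cumulatively, which together explain the revenue growth outpacing the roughly 5%-a-year industry.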

So we're going to continue to focus, with the growth of data and the need to process all of that data. 5G really accelerates our strategy, and the build out of an intelligent edge also accelerates our strategy. And then one area that's of particular importance with 5G is, of course, the build out of the RAN, the radio access network, and specifically the 5G wireless base stations, where we've won billions of dollars of designs over the last several years and where we expect to go from really a 0% market segment share in 2015 to an over 40% market segment share by 2022. So to sum it up, the network is a big market opportunity for Intel that leverages all of the leadership that we have in server, virtualization and cloud. We have been investing for over 10 years optimizing the network platform to run best on Intel architecture.

And lastly, this was not perhaps an obvious market for us to go after, but through a relentless focus on our customers, a passion for the business and a commitment to learning as we went through this journey, we have built a very successful, profitable and growing business for Intel. Thank you. And I'm going to bring Navin back on.

Speaker 6

Thank you, Sandra.

Speaker 7

Thank you.

Speaker 6

Okay. I'm jazzed up. That's a great business. So, Sandra talked about the $25,000,000,000 network opportunity and the strategy to go after that. Before we close, I just want to highlight how we're going after the other $40,000,000,000 I showed on that previous chart, the IoTG and autonomous driving segments.

IoTG, another amazing business inside of the company. This has grown to be about a $3,500,000,000 business for us, still only 16% share for us in this business, growing at double digits. In the most recent quarter, Q1, we saw 19% year on year growth in our IoTG business. We're very focused in this business on some of the same trends that Sandra talked about. The idea of workload aggregation, or running multiple workloads on a single compute node.

We've been driving that for many years now in our IoT business and we're seeing the benefit of that. We're driving into new areas like video inference at the edge and high performance compute in targeted areas like retail and manufacturing and video analytics. And all of that has led to a very nice, and maybe surprising to some, average selling price for our business here. Our average selling price for the IoT business is over $100. It's comprised of Atom CPUs, Core CPUs and Xeon CPUs, believe it or not. In fact, the fastest growing part of our IoT portfolio in terms of volume growth in 2018 was Xeon, driven by some of these trends around aggregation, video inference and high performance compute needs at the edge.

So very exciting, very interesting portfolio, and we expect good things and high growth out of IoT as we look forward. In addition, if you add to our classic IoT business the autonomous driving opportunity and the results we've seen since the acquisition of Mobileye, you can see that we have over a $4,000,000,000 business now at the edge, with an over 15% compound annual growth rate. That is largely driven by silicon opportunity. But you heard from Amnon in the video that we're not constraining ourselves to thinking about only silicon opportunities. In fact, we're extending into new areas of the value chain, such as data driven services and transportation as a service.

So, in summary, we've covered a lot of ground in the last 45 minutes or so. I want to just end by reminding you that we see this $200,000,000,000 data centric opportunity as the largest opportunity in our history, a 7% growth rate where we intend to grow faster than the market and gain share. The megatrends, the growth of artificial intelligence, the build out of the cloud, the cloudification of the network and the edge, leverage our strengths in high performance compute, architectural innovation and ecosystem development. And finally, we're intensely focused on this broad portfolio of assets, this really unparalleled array of assets, to stitch together solutions for our customers to help them move, store and process data. I have to tell you that at the end of the day, the value that we can deliver for our customers comes through differentiation.

And between what you heard from Bob, Murthy, Sandra and me, I want you to know that we are intensely focused on and keenly aware of the competitive environment. We are going to differentiate ourselves to go after these markets. We are going to pick up the pace and deliver products at a faster cadence. And we are going to grow our market segment share in an ever growing data centric market. Thank you for your time.

I appreciate it very much. I want to turn it over now to our good friend and colleague, Gregory Bryant, who runs our client business. Come on up, Greg.

Speaker 8

All right. I'm excited to be here with you all this afternoon. I'm Gregory Bryant, everybody calls me GB, that was unnatural for Navin. And I have the privilege of leading the Client Computing Group at Intel, which is just a tremendous, tremendous franchise. With the time I have today, I really want to build on everything you've heard so far and to really land 4 key messages.

First, we are going to accelerate the pace of innovation we're bringing to market, and we're going to do that in 2 ways, which I'll talk to you about. 2nd, I want to talk to you about how we're building, and have built, an unmatched breadth of technical capabilities in our portfolio, not just in the CPU, but beyond the CPU. 3rd, consistent with how Bob talked about Intel pursuing the largest TAM in our history, in CCG, in the client business, we're also going through a similar transformation inside the PC centric business, and we're pursuing a much larger TAM of $68,000,000,000, the largest in our history. And then finally, I want to talk to you about how we're going to drive Intel advantage and greater value to our end customers through something that we call platformation, or delivering platforms. So let me just jump right in.

I wanted to give just a little bit of context, since it may have been a while since some of you have heard about the PC business and where we're at. If you think back to circa 2016, it was really an inflection point for the PC business, not just at Intel, but really in the industry. Back in 2016, we had seen this secular decline in the PC business. The market was in its 5th straight year of decline, as many of you know. And if you think back from the peak unit volume for the business in 2011 to 2016, the TAM was down kind of an astonishing 30% in terms of units.

And of course, that was in no small part on the back of substitution by new devices that had come to market, like tablets and even maybe some large screen smartphones. And of course, there was some sentiment in the industry that the PC was dead. It was really in the face of that kind of macro environment and those trends back at that period that we fundamentally changed our approach to this business. And I've had the chance to talk to a few of you about that. One, we believed that at the heart of the PC business the PC had a set of loyal users and a set of usages that were kind of fundamental to the platform and that were going to be hard to substitute.

And we thought that those users and usages would be relevant for the platform in the future. 2, we believed that the market had profoundly changed. It wasn't the dynamic that had ruled the day of increasing volume and decreasing prices. It really had become increasingly a premium market, increasingly a mature market and increasingly a segmented market. And 3, even then we knew we needed to innovate, that innovation was required in order to meet the needs of those loyal users and those usage models. So now I want to tell you what happened.

Together with our partners, we went out and segmented the market. We drove innovation in the form of the Ultrabook and the 2 in 1 and the detachable, a lot of devices that are used by many of you in this audience today. And as a result, we certainly saw substitution in the market slow. In fact, if you look at some of the data out of our own analytics and retail exit data, of the intenders who went into retail looking for a tablet, 13% of those consumers walked out with a 2 in 1, a detachable or a modern mobile form factor. It also started to accelerate refresh.

As we moved to these more mobile devices, we saw that consumers were buying those devices earlier than they had planned, and certainly at a faster rate than other devices in the PC market. So, in terms of our business results, you saw us and the team drive top line and bottom line growth over the last 3 years, which has certainly outpaced the TAM, and that TAM had stabilized. And of course, in addition, in CCG, one thing we're very, very proud of as a team is that not only did we deliver those kinds of results, we also delivered a lot of the core IP, the scale and the profit that is fueling the ambitions of the company to transform in this data centric era. Now, as we said in our results, we expected 2019 to be down low single digits, really on the back of a slightly down TAM this year, which is consistent with third parties, as well as the supply constraints that we're working through that Bob hit on at the beginning. On those constraints, we've improved our supply situation over the course of the year.

They've largely been in the small core and transactional segments. And obviously, as we get into the second half of the year and our overall supply gets more in line with market demand, we're working very, very closely with our customers to optimize the mix in their business and to make sure we don't constrain their growth going forward, which is really important. Okay. So I wanted to dig in a little bit on how dramatically we've changed the game. As I said, a lot of people think about the PC business as one homogeneous thing, when really it's not; it's not one segment.

There are at least 7 key segments, and I wanted to hit on those. We spend thousands of man hours with end users doing analytics, with business professionals, with our customers and our partners around the world. And we've really focused on these segments. And I like to think about it kind of in terms of the old world and the new world inside the business. So the old world consists of legacy form factors: undifferentiated tower desktops, legacy clamshell notebooks, undifferentiated solutions for business.

Those segments still exist today, right? In fact, it's where a lot of our competition is targeting and building products for those segments. But as you can see in the chart, those segments are all shrinking, some of them substantially. And then there is what I like to consider the new world or the new segments that we've built with our partners in the industry. And that new world is really one in which people value premium, they value performance, and they value the overall platform experience.

And once we recognized that those segments were growing, we really retooled the whole strategy and focused on those 4 core growth segments. And we've been on a mission just relentlessly pursuing those 4 segments and building customized products for each one of those segments. So those 4 segments are the modern notebook, Chromebooks, gaming and vPro differentiated commercial platforms. And all of those segments are growing, 3 of them in double digits. And I just want to hit on each one briefly while I'm on this slide.

First, I want to just start talking a little bit about the modern notebook, which includes thin and lights, 2 in ones, detachables. I think many of you all know, human beings are inherently mobile. And people absolutely care about battery life, connectivity, performance. The trick is they want those things without compromise. And that's an area where we focused our R and D and investment and we're doubling down.

And that innovation is paying off. This category has grown 44% in revenue terms from 2015 to 2018. And as you can see on the chart, we're expecting that growth to continue at about a 14% clip through 2023. This category has now grown to about 44% of the overall TAM. That's how much of a shift we've seen in purchasing behavior in the marketplace.

It's really, really exciting. The next segment I want to hit on is Chrome. We entered this category early on, in 2011, and drove ourselves to be the leader in Chrome. Now, a lot of people don't realize that Chrome is not just a phenomenon in education in North America; it's going beyond that.

And performance, as it turns out, scales on Chrome. And as more applications are being delivered in the Chrome environment, that plays to our strength of delivering a more integrated, premium experience with performance. In fact, in 2018, we did 4 times the number of Core based Chrome designs than we did the year prior. And we have insights, again from our analytics, that say 2 thirds of consumers going into retail who want a Chromebook are looking for Core based performance and premium based designs. Okay, next up, I'll hit on gaming really quickly.

It gets a lot of coverage. It's one of our most exciting segments. There are 1,200,000,000 gamers in the world now. It's about 1 in 7 people on the planet. Things are being gamified.

It's not just gaming, it's esports, it's even the gamification of education. Gamers are among the most loyal, knowledgeable communities that we serve. We've got a very, very rich history there. They also happen to be one of our most demanding user bases. They have an insatiable appetite for performance.

We've been delivering that performance with leadership products. I'll talk about a couple of those in a minute. And as a result of them valuing that performance, we've been able to deliver ASPs that are 70% higher than the rest of the business on average. Okay. And finally, it's vPro, which is really all about differentiated products for commercial.

We launched the vPro platform in 2006. I had the privilege of running the division when we did that, way back then. I know you find that hard to believe. But it's been one of our most differentiated and most profitable segments. And we now have a 130,000,000 unit install base, and we're using the deep insight we're getting from that install base to really innovate and create new solutions in a new wave of vPro products that focus on AI, security, telemetry and manageability, all supporting the workplace transformation that's happening out in the industry.

Okay. With that, I'm going to hit the next three imperatives quickly, as I said. We've got to accelerate the pace, we've got to expand the TAM, and we want to drive an advantage through platforms. So let me do that. So first, I want to talk about the 2 ways we're accelerating the pace of innovation.

And you've heard that from Murthy, you heard it from Navin, and now you hear it from me. One is we're accelerating the pace of our new product introductions in our portfolio. And I think you can tell from this page, we've built out a portfolio of purpose built products for those growth segments, and that's our intention going forward. For example, just this year we launched our 9th generation mobile platform, the fastest laptop platform on the planet. In Chrome now, it's not just entry.

We now have a full range of products in Chrome, from entry all the way up to Core i7. In gaming, we introduced the Core i9 processor. We started with Core i3, i5 and i7, we added Core i9, and we've gone even further with our overclockable products like the Core i9-9900K. Real life performance; Raja talked about us unlocking performance with software. It is the fastest gaming processor on the planet, period.

And then lastly, we created, and I don't know if folks noticed, this entire family of X Series products specifically for content creators, people who work on videos and photos and music, and we have a whole family of products just for those users. And then, just last month in April, we launched our all new vPro platform, another generation, the best platform for business. So the second way we're going to accelerate our innovation is by accelerating the value, or the height of the steps of value, that we deliver in the marketplace based on our roadmap. And it's really all built on the 6 pillars of innovation that Murthy highlighted earlier. And I just want to start with Ice Lake, which as you know is our volume 10 nanometer CPU.

As Murthy stated, we will be in production next month on that CPU, with systems on shelf for holiday. And what's exciting about it isn't just that it's the next generation CPU with IPC improvements. It's also the level of integration and the new IP that's being brought to bear. So I'll just give you a couple of examples. The new Gen 11 graphics inside of Ice Lake has up to double the performance of what we were previously delivering.

So you can get new experiences like 4K video experiences with uncompromised battery life in impossibly thin form factors. You're going to see consumers able to play hundreds of gaming titles at 1080p 30 frames per second in these very, very thin modern form factors, which just wasn't possible before. And in Ice Lake, we also, Navin talked about DL Boost in the data centric space, we have DL Boost in the client roadmap as well. And in client, we're seeing those kinds of inference based applications and workloads emerge. So for example, triple the AI performance with DL Boost: being able to find objects and faces in photos more quickly than ever before, being able to do things like super resolution, upscaling video from 720p to 1080p, and on and on.
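The 720p-to-1080p upscaling example gives a feel for why this is a compute-heavy workload. A quick sketch of the pixel math; the resolutions are the standard definitions, not figures from the talk:

```python
def pixels(width, height):
    """Total pixel count of a frame at a given resolution."""
    return width * height

# Upscaling 720p (1280x720) to 1080p (1920x1080) means the model must
# synthesize 2.25x as many output pixels for every input frame:
scale = pixels(1920, 1080) / pixels(1280, 720)
print(scale)  # 2.25
```

And that work has to be repeated 30 or 60 times a second for video, which is where a claimed 3x AI performance gain becomes user-visible rather than a benchmark number.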

These kinds of usages are emerging on the client, and we've got great performance for those new usage models. Okay, in addition to that, as Murthy discussed, we've got a new Lakefield product that will be in production by the end of the year that leverages the great work that we've done on the hybrid CPU as well as advanced packaging and other pieces of technology. That's actually resulted in us making the smallest motherboards we've ever made. And now I've almost been compelled, I wasn't going to do it, I know the cameras are here. This is a Lakefield motherboard.

It's tiny, right? I pulled that out of my pocket, didn't struggle. So this is an entire PC like motherboard right here in this palm of my hand. It's incredible. And as you can imagine, we're working with our customers to build exciting new form factors around this size of motherboard, not just single screen devices, but dual screen devices and foldables.

And it's not just a point product. Murthy said it was a small step, right? It's like a very small step because it fits in my pocket. It's a small step, but you can see that the DNA is going to be leveraged through the rest of our roadmap, and we're building a swim lane, or a roadmap segment, of products that follow on after Lakefield for these more innovative form factors and devices. And I think it's going to unleash another wave of innovation.

Okay. Here's one I have not talked about, it's new for today. In 2020, we're also going to drive our commitment to accelerate the roadmap with our next generation 10 nanometer plus product that Murthy talked about. It's code named Tiger Lake. I haven't talked about it before.

It's a great product. We're going to make advancements on every vector of computing, and I believe it will really redefine the mobile platform. It has an entirely new CPU core architecture. It's got great new media support for 8K or multiple 4K displays. It's got the same IP that's the basis of that discrete graphics engine we talked about earlier, that Xe graphics; that engine is built into Gen 12 in Tiger Lake.

So we're going to see blistering graphics performance for even more gaming and content creation and productivity usage models in commercial. And we're excited about the new range of devices that will be on Tiger Lake. And I can say as we sit here today, we have Tiger Lake silicon back at Intel. We have already booted Windows and Chrome and we're making great progress on Tiger Lake for 2020. Okay.

So put it all together in this chart. There's lots of performance numbers today, but this one really resonates with me as somebody who builds products for human beings. We're not talking about incremental performance gains of 10%, 15%, 20%. We're talking about huge gains that will be felt by end users and professionals and people around the world: 3x performance gains in wireless connectivity, 4x performance gains in graphics, 2x to 3x in AI, double the speed of encode.

So no matter your focus, professional use or consumer use, I think we're going to deliver a dramatically different computing experience in a relatively short period of time. That's the acceleration just in 2019 and into 2020. Okay. I want to shift gears to the second imperative, which is how we're transforming and pursuing an expanded TAM. We've reframed the opportunity for us: again, no longer an 80 plus percent share player in PC CPUs, but going after a $68,000,000,000 TAM with adjacencies like connectivity, memory and graphics.

And we've really broadened our aperture to build products, and I believe we have significant opportunity to grow beyond the CPU TAM and accelerate our growth into these adjacent areas. Let me just give you a few quick examples. First, in the area of memory: Optane memory, not just in the data center, but also on the client. And in the client, it really accelerates system responsiveness and overall performance. Now, we already started shipping an Optane module last year called the M10 that sat in front of the storage device as kind of an accelerator or a cache, and that's gone well.

But probably more exciting: just last month, we launched our Optane memory H10 module. So we have a single M.2 module that has Optane memory media and a QLC drive on one M.2 module. Very small, can go into any PC form factor, even notebooks, even mobile, even very thin and light systems.

And as a result, you're going to get major performance and responsiveness benefits, and still the storage capacity of QLC NAND, beyond what you can get on a TLC SSD today. In fact, we did some benchmarking for games and application loading, and you're going to see something like 60% faster application and gaming launches with this solution compared to what's in the installed base today. And then beyond that, again, just like in the data center, we're taking our Intel Optane DC persistent memory into our workstation client business to support those large memory footprint workloads on workstations, which we think is going to be a real game changer. The second area is, you're going to see us continue to lean in and lead in connectivity. We are leading the transition in client to Wi Fi 6, formerly 802.11ax, across our notebook, desktop and gateway businesses.

Our gigabit plus portfolio is unrivaled in the industry. And as we said earlier, more than 3 times the bandwidth and support for more devices on the network than what was possible with 802.11ac solutions. Second, we have been working very hard on what I call on this chart ACPC, or the always connected PC segment. We've got 90 LTE designs going to market for holiday this year, far more than anyone else. And finally, as we stated earlier, in terms of our 5G connectivity, yes, we made the decision to exit the 5G smartphone modem business, but we are evaluating options to leverage that IP in support of this client business as well as our IoTG business.

Third, one last area in connectivity that doesn't get a lot of airtime. We are in the Thunderbolt business, and Thunderbolt is a wired, high speed IO connectivity that we defined. We actually have over 1,000 designs on Thunderbolt 3 across PCs and 3rd party devices. That ecosystem is growing and we're going to build on that momentum. As you may have noticed, we released the Thunderbolt 3 specification and protocol, which we expect to form the basis of USB4, which I believe will really accelerate momentum into devices and further give us an opportunity to grow in that ecosystem.

Okay. And finally, last but not least is graphics, an area where we have a long history and a significant opportunity. We are the integrated graphics leader in the business. And as I said earlier, I can't emphasize enough how dramatically we are accelerating the rate and pace of innovation in our integrated graphics; the progression from today to Ice Lake to Tiger Lake is tremendous. The second thing is, in addition to the integrated graphics business, we are on track and we will deliver a discrete client graphics solution in 2020.

So that is just an entirely untapped potential and part of the TAM for us to go attack. Okay. And finally, my final imperative today is really all about driving an advantage and greater value to our customers and to you through what we call platformation, or driving the move to platforms. We continue to work tirelessly to understand end users' unmet needs. We believe we need to already be working on driving the next wave of innovation beyond what we've done today.

And I really boil it down to 3 main things. The PC, the client platform, is the platform where people go to focus. It's where they go to do their most meaningful work, and they want a platform that helps them do that. They also want a platform that helps them always be ready for action.

So they want a platform that's ready to go when they're ready to go. And finally, they definitely want their devices to be more intelligent and to help them adapt to their ever changing roles over the course of the day. That's at the heart of what this platform is all about. And we believe, if you look across the technical capabilities that are needed to make that possible, one, it plays to our strengths, and two, there is ample room left for improvement. We're not nearly done.

Now the first big step in us embarking on this next wave of innovation and this journey is a project, code named Project Athena. You may have heard a little bit about it. It is a multiyear innovation effort we're embarking on with the industry, laying the technical foundation and the optimizations in both hardware and software to support these platforms.

So why innovation and why now? By rallying the industry and leveraging the strengths that we have, we think we can drive innovation in a way that no one can do on their own. And really, we have unrivaled capabilities in order to do this. Not only do we have the core CPU and all those adjacent silicon areas that I mentioned, we have the platform engineering expertise that's absolutely critical across thermals, mechanicals, form factor expertise, etcetera. And we've been out working with customers for years to do highly optimized designs that scale to over 2,000 designs a year.

So we launched this initiative to go drive it. We're very, very excited. We have all of our major industry partners on board with driving forward in the space. We've done it before. We did it with Centrino.

We did it with vPro. We did it with Ultrabook. We're leaning into it again to drive the next wave of innovation. It's not just PowerPoint; we have some of our first designs coming to market for holiday this year, and you'll see us ramp even harder as we get into 2020 and as we approach that redefined experience on Tiger Lake that I talked about. Okay.

So in closing, I just want to share, and I hope you can feel, the excitement that we have for the journey that lies ahead. And I know you heard from Bob at the very, very beginning, Intel's strategy to become a data centric company and to ultimately power the world running on Intel. We in the PC centric space believe we're the human touch point, the human edge in that data centric world. And I fundamentally believe that if we continue to accelerate the pace of innovation, if we create an unmatched portfolio, if we move the industry forward and innovate toward platforms, we can continue to have a healthy business that will help Intel power our ambitions. And with that, thank you all very much.

And Matt or Mark, let Mark come back up. Thanks, Mike. See you.

Speaker 1

Thanks, GB. All right. We're going to go ahead and take one more break now for about 30 minutes. If I can ask everyone to rejoin us here at a quarter till, we'll restart the webcast then. Thank you.

Speaker 9

Good afternoon, everyone. It's great to see you all. Let me add my welcome to Intel's Investor Day. I have to also add that it is my 35th day, and I can't thank Bob enough for the opportunity; having an earnings call, a planning off-site and an Investor Day in my first 35 days has been fantastic. But I am truly honored to join this team, very excited.

We're going to go through some things today that sound like challenges, that sound like we've got a lot to do over the next 3 years to make the most out of them, which we will do. But I can tell you without any hesitation, I'm more excited about the opportunity at Intel today than I was 35 days ago. From everything I've seen and from the people that I've worked with, from looking at the technology roadmaps, from meeting the teams up in Portland, there's just a lot to be excited about. So we'll go through some things, but the opportunity inflection coming out of this period is what we're all focused on, as well as executing thoroughly over the next 3 years. So my agenda today is really to focus on some of the key themes that you heard today and also to put some numbers around them, although Bob was kind enough to preview some of the numbers earlier.

We're going to talk to the challenges to gross margin that have been discussed already, put them in context, and cover how we're going to deal with them. We'll talk about how we're going to get the spending leverage that we've talked about. We're going to look at free cash flow to make sure that even in this period of time, we're going to grow our free cash flow as a percent of revenue. You're going to see that our balance sheet remains very strong, which means we've got a lot of flexibility as we look at and expand the TAM, to do the things that make sense in terms of investment and also, in a time of high investment, to continue to provide attractive returns. So a lot to talk about, and we'll get through it quickly so we can get to Q and A for everybody.

All right, a quick look back. Again, I realize this is history, but you've seen almost 30% growth in revenue over the past 3 years, about a 9% CAGR. Data centric, which we've said is the focus of where we're going as we increase the data centric nature of the company, was up 46% over that time period, about a 14% CAGR. PC, which as GB did a great job of showing isn't quite dead yet, was up 20% over that time period, a CAGR of 5%, but its operating margin was up over 70% and provided a tremendous amount of value for the company.
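As a rough check on the figures above, the compound annual growth rate can be backed out of cumulative growth over a period. This is a minimal sketch of that standard compounding arithmetic, using only the growth figures quoted in the remarks:

```python
def cagr(cumulative_growth: float, years: int) -> float:
    """Annualized growth rate implied by total growth over a number of years."""
    return (1 + cumulative_growth) ** (1 / years) - 1

# ~30% total revenue growth over 3 years -> "about a 9% CAGR"
print(f"revenue:      {cagr(0.30, 3):.1%}")  # ~9.1%
# Data centric up 46% over the same period -> "about a 14% CAGR"
print(f"data centric: {cagr(0.46, 3):.1%}")  # ~13.4%
```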

And what we're really looking at here is kind of the end of the 22 nanometer period going into the 14 nanometer period. And we're going to be talking about the 14 going to 10 to 7 period over the next 3 years. So I look at this chart and it's part of the reason for a lot of excitement. I think you saw the breadth of the product capability of the company that can serve this market. Very few companies, well, actually no company, has that breadth of opportunity and that number of different capabilities it can bring to market.

So my clear conclusion from this is that we're not opportunity constrained. But when you're not opportunity constrained, you've got to be very focused, because we have a significant amount of work to do to get the kind of performance that we expect, not only in our product performance over the next 3 years, but in the significant leap forward in product performance that we see in 7 nanometer and beyond. So if you look at the lengthening process nodes that we're dealing with and the diversity of products that this breadth represents, it also says we've got to be super disciplined on capital efficiency, and that's our intent. Okay. The 2019 outlook, I just wanted to briefly touch on this.

And I think one of the things that's really important here is that the company reacted very quickly when our outlook changed. The team responded. Not only did we communicate immediately as we knew it, but we also took a look at all of the things that we were doing and said, hey, given this outlook and given a desire to minimize the near term performance impact but still invest in the future, what are we going to do? And I think one of the things that you saw is that we were able to increase the operating margin relative to the decline in the gross margin through some fast action and some commitment by the leadership team to refocus spending, making sure that everything we're spending money on serves the purpose of the products and the node transitions over the next 3 years. And everything else, we're just going to be super disciplined on.

For me, that was one of the things that got me very excited about the capability of this team working together. So I think you're seeing the start of this 3 year period. But a lot of the real change is more of a change in the market outlook than it is in our understanding of what's going on in 10 and the process nodes there. And so what we saw was the data centric outlook declined for the year. And obviously, memory has been a situation where you've seen significant declines in memory pricing relative to our expectations.

Those are really the things that led to the change that we had. But the basic fundamental understanding of where we're going, which products we're going to be putting out, all of that is intact, and the company was able to respond quickly to minimize the impact of the change in the market outlook. So on the next few slides, I'm going to look at how we're going to grow performance over the next 3 years despite these headwinds, with an emphasis on disciplined capital management, the way we look at R and D dollars and where they're going, and how we're going to get continuous improvement as we move through the 10 nanometer node into 7. I love this chart. I think it's extremely exciting to think about the types of opportunities this is going to give the company as we move through these nodes and the intranode performance increases.

If you're a customer and you see the breadth of our products and our ability to move at this pace, I think that's a pretty compelling argument. That being said, 2019 to 2021, this is about strengthening products. It's about being very focused on all the disciplines that we have to have in a period where we're seeing increased competition. We've seen the need for capital spending to support more than just 10, more than just 7. When you look at the out periods, you might think we're probably going to spend some money on 5 as we get into 2021, and you'd be right.

We'll talk a little bit about that in a minute. So this next 3 years, for me, is when I think you're going to get a sense for both the engineering product leadership and also the discipline. EPS leverage coming out of this, I think, is one of the strong investment theses for the stock, which is that we're positioning the company to come out of 2021 with a very strong performance ramp. During the 3 years, Bob talked a little bit about this, but again, even though in 2019 the data center itself is going to be down a little bit, with modest growth in data centric, we're saying that we're returning to high single digit growth over that time period. Even as we start to see modem come out of the PC centric numbers, you're going to see we still are going to outperform the market as we look at it, based on the new products coming.

We see it down slightly, but that's actually better performance than the TAM overall. We're not assuming any significant improvement in the economy. In fact, we're assuming that the relatively benign environment that we're in today will continue. Obviously, if something like the China economy improved over this time period, that actually could be a positive. We'll talk a little bit about gross margin, where it's going to bottom out, and we see that in 2021.

And really, that is the confluence of nodes. And then we'll go a little bit more into the capital. But capital discipline: even as we invest across nodes, we're going to keep that contained. And we're going to look at opportunities such as selectively outsourcing to make sure that we're constantly optimizing the capital spend over time. This slide you've seen, but let me just make a couple of points.

The low single digit growth that we've talked about over this time period works out to be about 9% growth off of 2018. It's about 12% growth off of 2019, our guide there. There was some confusion I heard in a couple of questions as to whether low single digit was off of 'eighteen or 'nineteen; it's off of 'eighteen. Again, all of the references tie back to 2018. But again, very strong growth for the core businesses relative to their opportunity set.
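The relationship between the annual "low single digit" rate and the cumulative figure off 2018 is ordinary compounding. A quick sketch, assuming an illustrative 3% annual rate (the exact rate is not stated in the remarks):

```python
def cumulative_growth(annual_rate: float, years: int) -> float:
    """Total growth implied by compounding an annual rate over a number of years."""
    return (1 + annual_rate) ** years - 1

# An illustrative 3% "low single digit" rate compounded over 2018 -> 2021
print(f"{cumulative_growth(0.03, 3):.1%}")  # ~9.3%, consistent with "about 9% growth off of 2018"
```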

Operating margin, the simple math is: if we have gross margins in the range of 57% to 60%, we're at 57% at the bottom in 'twenty one, and we have 25% spending as a percent of revenue, then we're going to be at roughly a 32% operating margin. So let me give you a little bit of the anatomy of the operating margin and gross margin. We spent a lot of time saying that, hey, as we enter some new markets, it's important to understand both what's going on at the gross margin line, but also at the operating margin line. And when I look at this slide, and I'll just give you a moment to look at it, this tells me we've got a lot of levers.
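The "simple math" here is that operating margin is gross margin less operating spending, both expressed as a percent of revenue. A minimal sketch with the figures from the remarks:

```python
def operating_margin(gross_margin: float, opex_pct: float) -> float:
    """Operating margin when gross margin and opex are both fractions of revenue."""
    return gross_margin - opex_pct

# 57% gross margin (low end of the 57%-60% range) less 25% spending -> 32%
print(f"{operating_margin(0.57, 0.25):.0%}")  # 32%
```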

So I don't know exactly how things are going to play out over the next 3 years, but I feel like we have the levers that we need to manage some of these things in front of us. Clearly, if you look at the tailwinds for gross margin and you saw it in both Navin's and GB's presentations, the demand for performance across the portfolio is great. That's going to play to our strength, and that's where we'll see ASP opportunities. We also are investing for performance improvements across the nodes. Murthy spent a lot of time on that.

That actually can lead to better margins. On the headwind side: really, the length of the 10 nanometer period, the investment period for that and absorbing that cost as we go into this time period, as well as having to bring 7 nanometer right on top of that and the investment required for that. Also, some of the adjacent businesses, as I said, are coming in at slightly below the company average in margin. But again, we'll be looking to improve those margins over time. And then finally, in this period, we think there's increasing competition that we have to plan for, that we have to assume could impact ASPs and our ability to recoup some margin there. I think on the OpEx side, we actually have some really important tailwinds to get to 25%.

We were at about 29% in 2018. We said we're going to be at 28% in 2019. What that means is roughly $1,000,000,000 has to come out of spending. And I think I already got one question on how exactly you get $1,000,000,000 in spending out in 1 year. Quite frankly, more than a third of that is really actions that were taken last year in terms of portfolio decisions and other actions that are actually feeding in and helping this year.

We've talked also about the fact that $200,000,000 to $300,000,000, so maybe 20% to 30% of that, is coming from taking out investments that were solely focused on the 5G smartphone. Now that we've made that decision, we can take that cost out. And then the remainder are the things where, one, obviously, when you change your outlook, variable comp is going to come down as a result, as it should. And also, the actions that the team took to respond to the change in outlook to maximize performance make up the remainder. So I feel very good about our ability to get off to the right start in 2019, even though it's a little bit of a difficult year.

And we're going to continue to get SG and A productivity gains as we've done. When Bob talked to me about coming here, he said, look, I expect you to continue to drive the spending leverage that we've seen over the last few years. I expect you to bring a steely focus to capital discipline and return on investment. And I expect you to close the gap on free cash flow as a percent of earnings. And then on top of that, I want to see more diversity in your organization.

So I thought, okay, these are things I can get my arms around, and we'll go get after it. And again, I think we have the levers to do it. But I'll tell you, they're not headwinds. Really, headwind is probably bad terminology on the OpEx side. It's not a headwind to invest in the critical process and product initiatives.

That is what we're here to do. That's what the company exists for. So we're going to fully invest in those. And we'll talk a little bit about R and D in a minute. This is just showing you that we do have a track record.

I'm inheriting a very good track record, and I don't intend to screw it up. Okay. On the free cash flow to earnings gap, this is one where we were much closer to 1 to 1 a number of years ago and came down into the 60s. In a difficult year, we're going to be taking it to 75%, and we have a clear target to get above 80% in the 'twenty one time period, and we're focused on all the ways that we can do that. We expect capacity during this time period to remain tight.
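Free cash flow conversion here is free cash flow divided by earnings. A small sketch with hypothetical dollar figures chosen only to illustrate the 75% ratio (actual earnings are not given in the remarks):

```python
def fcf_conversion(free_cash_flow: float, net_income: float) -> float:
    """Free cash flow as a share of earnings; 1.0 is dollar-for-dollar conversion."""
    return free_cash_flow / net_income

# Hypothetical figures, in billions, for illustration only
print(f"{fcf_conversion(15.0, 20.0):.0%}")  # 75%
```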

Even as we invest, we're going to spend somewhere in the neighborhood of $15,500,000,000 to $16,500,000,000 in capital per year in our plan over this 3 year period. And that capital is roughly balanced between 10 nanometer and 7 nanometer, obviously with 7 nanometer on the rise and 10 nanometer on the decline. And then you're going to see, as I said, 5 nanometer picking up in 2021 and even some 3 nanometer. So Murthy and his team are always looking forward and making sure we've got our activities planned. And then the free cash flow front is actually somewhat balanced with the level of capital.

We expect between $15,000,000,000 and $17,000,000,000 plus over this time period, and we've already said that we expect $15,000,000,000 in 2019. So free cash flow is growing. Capital allocation, our priorities. This is a mom and apple pie slide, so I won't spend too much time on it. But obviously, our number one priority is to invest in the business.

And the most leverage from an ROI standpoint typically comes from the organic investments that most effectively leverage your core capabilities, and we'll make sure that those get funded. Strategic M and A, I'll talk a little bit about that, which is the inorganic support of the business: how we think about it, how this team thinks about it, and what measures we use as we evaluate M and A. And then, obviously, what our practice and our policy are with respect to shareholder returns. So, let's talk about investing in the company organically. This is Murthy's slide, and I'll give you the dollar view of it after this.

95% of our R and D over the time period that we're looking at is committed to these pillars of innovation. So you might say, well, what's the 5%? We have to explore new ideas from time to time. But even with 95%, honestly, we're going to be very focused on whether the returns are justified, even if an investment is in the name of the right pillar, because there's always tension and demand for more capital. We'll never have enough R and D to invest.

And I certainly want to make sure that for the critical things that we need to do, we're moving funds from things that have low returns to things that have high returns as effectively as possible. And what you're going to see, obviously, is that we get help here because of the decision on modem, where we expect to see a significant increase in the savings from modem in 2020. But we're going to see investment shift around within this group. You're going to see much more investment in areas like the XPU expansion that Murthy talked about, and also obviously AI, graphics and accelerators; those are all going to see growth in investment over this time period. And of course, oneAPI is a key investment area on the software side.

Okay, mergers and acquisitions and divestitures. The key thing from our standpoint is that M and A has always been a significant activity within the company. Most of our investment in M and A is the kind of tuck in investments that allow you to move quickly when a new technology comes up and it's easier for you to incorporate it into a platform and create significant value. We'll be looking to make those kinds of acquisitions all the time, and you'll continue to see a lot of activity in this regard. All of those activities, and anything in the larger size, are going to have to be super tied to the TAM direction that we're going in, so that there is a very tight strategic fit.

That means that the whole company is going to be behind and committed to that effort when it comes into the company. We're going to have a DCF discipline, very focused on cash on cash returns. The cost of capital you use will obviously fluctuate depending on the type of asset you're acquiring, but we'll have a very disciplined approach to that. From an integration standpoint, very granular synergy planning and milestone commitments to deliver the business. These are things that are going to be hallmarks of any deal that we might do going forward.

I'm not emphasizing this slide because I'm trying to tell you to expect a large acquisition anytime soon. We have a lot to focus on. I'm emphasizing it because I believe this is a team that can deliver M and A when we need to deliver it, and we will have the disciplines in place to do it. We have a team that can do that, and I'm quite confident of that. And then we're going to exit non strategic businesses.

You've already seen it, and if we come to the decision that something doesn't fit the criteria that Bob laid out, we'll move forward. And again, I wanted just to share a couple of observations that I had, both on Altera and on Mobileye. People probably wonder where we actually are relative to the deal thesis when we made these acquisitions. I would say Altera is largely on track. I think some of the process impacts that have happened in terms of the timing of the roadmap have affected some of the deal results.

But in terms of where we see this going, in terms of what we see over the next 2 to 3 years, everything appears to be trending exactly where we thought, and it's a critical part of our TAM expansion strategy. Mobileye, I would say, is on track, but exceeding. And it's been one of those areas where not only are they exceeding in the things that you think of as their core activity, but just as the tape showed, new markets, new business opportunities, the monetization of data that comes out of this amazing platform, are creating more opportunities than we thought. Okay, return of capital. So this is our policy.

No change in the policy today. Again, we believe it's important, if you're in the semiconductor business and you play at the scale that we do, to be a strong investment grade company to compete and to be relevant with your customers, and also as you look at broadening your opportunities. We'll continue to grow the dividend in line with earnings, at a floor. We're going to make sure that we don't dilute investors from our equity plans.

And we're going to be opportunistic on buybacks from that point forward as we look at all of the opportunities in front of the company. But again, I think the underpinning of that is a relentless focus on free cash flow. That's my challenge. It's the team's challenge. We're up to the challenge.

And then attractive capital return coming out of that. So that's our policy; here's our practice. We've returned on average, whether you look at a 5 year period or a 10 year period, 95% of free cash flow. The dividend is up 25% since 2015, and we've had a history of both consistent and attractive capital return, and I think you should hold that expectation for the company.

So let me just quickly summarize. We're going to be very focused on holding our operating margin from 2019 through 2021. Nothing that we laid out today is an entitlement; there's a lot of work that's going to have to happen to make these numbers happen, including driving spending to 25% of revenue, but I think we have a roadmap to do that. Because of the capital requirements of this confluence of nodes, we've got to be super disciplined on both spending and capital and make sure that the investments that we make serve both the products, first the products, and the process. And then we're going to do this in a way that allows you to have confidence that you're going to see strong capital returns and disciplined M and A if that is something we pursue.

Anyway, with that, I'll close with the summary from Bob's presentation: we've expanded our TAM, and we're accelerating our transformation to a data centric company. The history shows that, and the future shows that. The transition from CPU to XPU is providing us even more breadth as we solve customer problems. It's not an entitlement. We've got to execute, and we have to accelerate innovation, and that's going to require an evolution of culture.

But I have every confidence that this team is on board, and we're going to deliver on that. And so with that, I will stop so we can get to Q and A. But thank you very much for coming today.

Speaker 1

Thanks, George. All right. We're going to take just a couple of minutes and get set up here. We're going to bring some chairs up and I'll invite all of our speakers to join me up on the stage. We'll have mic runners, both Trey and Tushar will be around the auditorium.

They're at the back of the auditorium right now. And just as we do on the earnings call, we'll ask that when you have a question, you ask just one question; that way we can get to as many of you as possible, and, as I said, we'll make our way around the room. So give us just a couple of minutes here and we'll get everybody set up.

Speaker 10

Great.

Speaker 1

All right. Thank you. Okay. We'll start over on this side of the room, Tushar.

Speaker 6

Hi, it's Tim Arcuri, UBS. Thank you. I had a question. I don't know who's going to answer it, but I had a question on the 7 nanometer timeline that you put up. If you sort of look at it, it's nominally roughly 1 year after TSMC is going to ramp their 5 nanometer process. And I just wanted to see if you can compare what your 7 is actually going to look like to what you know about their 5?

Speaker 3

Sure, Tim. Our focus is going to be on dialing in the design point for 7 nanometer to meet the requirements of our data centric portfolio. So we're really looking at driving a design point that wins in HPC. And that's the focus of our timeline. So when you compare relative timelines, I think you have to look at the opportunities and the design targets that are being talked about at that point.

So our 7 nanometer is very clearly dialed in to make sure that we have 7 available in time so that it complements where we are with 10 Double Plus, to continue the product leadership that Navin, GB, Sandra and my other colleagues need in their product roadmaps. So that's really what defines the timing.

Speaker 1

Back over to this side.

Speaker 12

Thank you. Srini Pajjuri from Macquarie. I have a clarification and a question. George, first on your operating margin outlook. I know you said you're taking the 5G savings into account.

I think you said $200,000,000 to $300,000,000. I'm wondering if there are any more 5G savings that you're assuming in your 25% outlook for the next 3 years, outside of what you've told us already? And then my question is for Navin. Navin, on the server DCG data centric business outlook for the next 3 years, you're assuming high single digits even though the business is going to decline, I believe, mid single digits in 2019, which implies basically double digit growth for the next 2 years. I'm just curious, given the increasing competition, what gives you the comfort that you're going to recover to double digit growth in 2020 and 2021?

Speaker 9

Sure. Why don't I go on and talk about the modem savings. As you probably recall, we're going through a process right now to evaluate what's going to be required from the product perspective, what the opportunities are to maximize the opportunity for our employees in that area, as well as to realize the maximum value for the significant IP and patents that we have in that space. So we're not through that. That will inform the savings, but the savings will be significantly higher in 2020, and we'll have a full readout of that when we announce what we conclude with respect to the disposition.

Speaker 6

And on the other part of your question, the high single digit guidance was data centric, not just server. So it was a combination of DCG, IoTG, PSG, NSG and Mobileye. When you add all those together, that's how we got to that high single digit growth rate. The other thing I would say is, remember that we're starting from the perspective that we only have 21% share of that $150,000,000,000 TAM, and we're adding new products on the horizon where we're gaining share from a position of 0. So whether that's things like Optane persistent memory or the GPU, we're adding those products in the out years.

And so you need to sort of keep that in mind as you think about it.

Speaker 1

Let's go back over to the right side of the room.

Speaker 13

Great. Thank you. And thank you for hosting the day, the gathering as I like to call it. Chris Rolland from Susquehanna. George, my question is for you. If I have the math right, gross margins are at 57% in 2021.

Perhaps you can give us a gross margin walk, or an idea of what those headwinds are? How expensive is 10 nanometer versus 14 nanometer? What are the costs of ramping 7 versus what we've seen at 10 or 14? And also, what kind of hit does competition take on gross margin as well?

Speaker 9

Yes. I've probably given more specific guidance on a 3 year outlook than we've ever done before. But my sense is, and this is approximate, that the cost dynamics of the multiple node impact and the length of 10 nanometer are probably the biggest factor. But competition certainly also impacts your ability to drive ASPs to where you'd like them to be, so I think that is certainly a part of it as well.

But I think it's one of the reasons why we think we're going to see a really good inflection coming out of the 7 nanometer period, because the dynamics from the cost standpoint will be enhanced, and the product portfolio that we see tied to that is quite exciting.

Speaker 1

Great. All right. And over to the far left side of the room.

Speaker 10

I wonder if I could follow up on the gross margin question. In 2021, it looks like you're implying about a $33,000,000,000 cost of sales, which is $6,000,000,000 higher. That seems like a lot for start up costs; it's a lot relative to anything you've seen historically. Can you just give us a little color on why the absolute dollars would go up that much? And how revenue dependent is that?

Is that still going to be a $33,000,000,000 cost of sales if revenue is above or below that?

Speaker 9

Well, there's clearly revenue dependence on it. But again, I would say the bulk of it is really the cost dynamics of moving from 14 to 10, reflected in the products.

Speaker 1

All right. And over here.

Speaker 14

Hi, thank you. Ambrish from BMO. Bob, I wanted to come back to gross margin and pricing as well. At your earnings call for 2019, you had explained to us that you felt you had bounded the ASP pressure pretty well. So what changed in your thinking between that time and today, such that you now think the ASP pressure will continue for the next 2 to 3 years?

Are you being more aggressive on pricing? Is that the right way to think about it? Or has something changed? Have you relooked at the product roadmap that AMD has and felt, all right, we need to respond accordingly? Thank you.

Speaker 2

Yes. No, I think back in January and again in April, we reiterated that the competitive dynamics going through the course of 2019, from our vantage point, haven't really changed. We have a great view of our roadmap. We have a reasonably good view of competitive roadmaps, to the extent there are actually products out there. So the dynamics on ASPs didn't really change at all going from January to April.

We said coming into the year that we are going to be aggressive in protecting our sockets on both the client and the server side. So, no change. The change in outlook for revenue was really volume oriented on the data center side, and we explained it; Navin captured it again today, so I won't repeat it. And the second aspect was just pricing in memory. We came into the year thinking ASPs were going to go down in the mid-20s, and the practical reality is they were closer to the mid-40s in Q1.

And we said we don't expect that to get any better during the course of the year. So the only real change in pricing dynamics in 2019 for us was not in the core business, it was really in the memory business. If you go beyond that, and just to pick up on George's comments, we clearly view a more competitive environment. So all else equal, our ability to capture value for the incremental performance will be more challenging. Second, when we transition from one node to the next: we're fairly mature on 14 and we're still maturing on 10.

So that naturally has a gross margin degradation in a more competitive world. And then third, as Murthy laid out, we've got a pretty fast follow to 7 under the accelerated rate of innovation. So as we're coming off a very mature 14, we're ramping 10, and we begin to incur cost of sales on the 7 nanometer process before we get to high volume production in 2022. So: more competitive environment, transition from 14 to 10, and accelerating the pace to get to 7. The combination of those three things takes us through what we believe will be a little bit of a bathtub in 2020, 2021.

And then what you heard today from the teams is the performance of the product on 7. Best we can tell competitively speaking, whether you're looking at others' process capabilities or the 6 pillars that we believe are required for product leadership, we will continue to distance ourselves from a product performance standpoint while we're ramping 7. And that's when the expectation is gross margins will begin to improve, coming out of 2021. Thank you.

Speaker 1

Over here.

Speaker 15

Vivek Arya from Bank of America Merrill Lynch. Thanks for the Analyst Day. I wanted to go back to the comments you made about memory.

So a few questions related to that. What is the sense in being in a capital intensive commodity? I still don't get it. So what is Intel's long term strategy in the memory segment? I think you mentioned that on 3D NAND, you would perhaps not be investing as much on the capacity side.

So what does that exactly mean? What is the implication for 3D NAND sales over the next handful of years that is embedded in your long term growth forecast? And as part of that long term growth forecast, what are you also assuming from the exit from the 5G modem side, and perhaps any actions you might take on the memory side? Are all those contemplated? Because going back to a prior question, yes, there is low single digit growth over 2018 to 2021, but that still implies mid single digit growth in 2020 and 2021.

So what are the underlying assumptions about modem, memory and the overall strategy in memory? Thank you.

Speaker 2

First on the modem, I won't repeat what George said. We're out of the smartphone modem. Cost is ramping down now while we evaluate other alternatives on how we'll deal with the value of the IP that we've built, and assess the alternatives as they relate to IoT and PC. So you can assume that in that, R and D spending for the smartphone modem in particular will drop in the second half and will continue to drop next year. As soon as we have more clarity on what that means, we'll give it to you.

And that's not a 6 month time frame; we're working quickly to get to real clarity on what those trade offs are. But I think just assume that R and D on the smartphone modem drops in the second half of the year and drops as we go into next year. Your first question, on memory: I tried to highlight the criteria that we use.

A technology inflection, a more important role in the success of our customers, and whether we'll make money and generate good returns. Those are the criteria. When you apply them to Optane, and Navin laid this out, GB did as well, we see not a commodity. We see a differentiated technology that, coupled with the CPU in a platform solution, can really enhance the performance of the system overall. And through that, we anticipate it's not a commodity and we'll get differentiated economics with our Optane product to disrupt DRAM.

We need to prove that. But that's our hypothesis, and from the feedback we're getting from customers along the way, we feel pretty good about it. In NAND, the disruptive technology is floating gate, an architecture that we believe will allow us to get down a cost curve on a per gigabyte basis with less capital employed. In essence, with more and more density, we'll drive cost per gigabyte down. And that equation worked pretty well through 2018.

So our NAND business was making good, not great, margins within that sector, even though we were relatively subscale with the capacity in place. The challenge for us is to ramp 64 layer and, as I said earlier, ramp 96 layer in the second half of this year to continue to leverage floating gate technology to drive down cost per gigabyte. The margins in NAND, I would say, are more commodity oriented margins versus Optane. And the key for us is less capital employed because of the technology that we're bringing to the equation. Over time, we have to prove 2 things.

1, rapid adoption of Optane with differentiated margins, because it's not a commodity. And 2, that we can continue to come down the cost curve by going from 64 to 96 layer. We've put lots of capacity in place for memory. And what my comments were: we're not putting any more capacity in place for memory. We've put the capacity in place. Now it's about proving these two concepts.

And over time, when we built out the Dalian facility, the idea was that it was fungible capacity, i.e., if we wanted to, we could move to Optane, leveraging the scale and the technology that were put in place for NAND, and drive the mix for that capacity more towards Optane versus NAND. That's a work in process now. So that's how we think about it. And over time, one of the opportunities we have is always evaluating, in a scale game, whether there's an opportunity to partner with somebody else that would help accelerate and/or enhance the economics of NAND; we will consider that along the way.

And that's one that we'll always do. We're coming off a partnership, so it's not a foreign concept to us. And we'll continue to see whether there's a 1 + 1 equals 3 in bringing scale to our differentiated process technology on NAND. Thanks.

Speaker 1

Let's come over to the right side of the room, John. Yes.

Speaker 16

Thanks, Mark. It's John Pitzer with Credit Suisse. Bob, just going back to the margin side of the story: if you look at your 3 year targets and compare that to the longer term EPS target of $6, if there's no improvement beyond the drive to 25%, it kind of implies gross margins getting back to that 60% level. If there's more improvement beyond the drive to 25%, it actually means gross margins don't get back to 60%. So I guess I'm asking, as you think about this new TAM expansion strategy, the long term margin profile for the company has been 55% to 65%.

For a long time, the company has been operating at the upper half of that range. Are you taking that range completely off the table now? And if you are, and even if you aren't, can you help us understand how we should then think about operating margins? Is 35% the right optimal number? Or could we see a model that supports a 40% operating margin over time?

You're the finance guy.

Speaker 2

Okay. Sure. We think, coming out of the 3 year timeframe, when we migrate to 7 nanometer and further extend our product leadership position, there is more flexibility to capture value as we go to 7. And hopefully, if we impressed anything on you today with product leadership at the center, it's that we're investing across multiple pillars where we think we'll deliver differentiated performance across multiple architectures. With that real differentiation, coupled with migrating to 7, I would never say we're not going to get back to the higher end of the 55% to 65%. And if you just look at what's implied in our 3 year versus our 4 to 5 year outlook, there's a little more growth.

You can assume that with a little more growth, there's a little more leverage. And you can assume that going from managing multiple nodes to managing 2, with more mature 10 nanometer yields, there will be some gross margin improvement, and we'll continue to be disciplined on the spending profile for the company. I hope I caught most of your questions. Thanks, John.

Speaker 1

All right. Let's come down to the front here on the left with Stacy.

Speaker 11

Thanks. Stacy Rasgon with Bernstein. Bob, I can't help but draw some parallels between this meeting and the one when we sat in this room 2 years ago. The messaging is different. Obviously, things have changed. But the messaging last time, if I paraphrase, was basically: guys, we have fabs to fill.

To fill them we have to grow, to grow we have to spend, and eventually it's going to work and you guys will all be happy. Now, during the day, that didn't go over well. And 2 months later, Bob, and you'd only been here for a little bit at that point, you changed the messaging on the earnings call; you put in the 30% spending target, and we've gone from there to here. The messaging I'm hearing today is, again, I guess, the opposite of what we heard 2 years ago.

It's still that we have to grow, but we're going to struggle to maintain operating margins even if we do that, so we can't spend. I mean, your outlook basically assumes OpEx in 2021 flat to 2019 levels with $8,000,000,000 in revenue added. So what is the risk that the strategy of 2 years ago was in fact the right way to go, given all the areas that you do have to spend on? And why is flat OpEx from now until 2021 the right way to go? How do you add that revenue given that spending?

And what's the risk that you actually do need to keep spending more rather than less?

Speaker 2

Yes. I mean, first, strategically, relative to a couple of years ago, I honestly think we've been fairly consistent. We see bigger opportunities to grow. We've been transitioning more and more of what we spend to the higher growth segments that we think have real short, medium and long term growth implications for us. And on our spending coming back from 36% of revenue down to 28%, just to remind you, that was with $1,000,000,000 more R and D.

So during that time frame, as I mentioned earlier, we added $12,000,000,000 in revenue on $130,000,000 less spending, and that was with $1,000,000,000 more R and D. So all other costs came down by a little over $1,250,000,000 while we added R and D. Looking forward, we're going to continue to invest in R and D. We'll make trade offs. We'll be managing multiple nodes: 14, 10 and 7 nanometer.

We're going to be flipping those brilliant 14 nanometer engineers over to 7 as quickly as we possibly can. George talked about focus and intensity on the things that matter the most. If we were at 90% on our 6 pillars, we'll go up to 95%; we'll continue to look at every R and D dollar to make sure it's focused on the prospects for growth that we laid out earlier. And the color underneath the covers of the guide we gave is actually not that complicated. We're going to spend less on the 5G modem.

That will benefit us in the second half of this year, and we indicated there'll be more benefit from that next year. And our spending, after being relatively flat from 2019 to 2020 as we reallocate from 14 to 7 and reallocate some things to graphics, will have modest growth in 2021. It's not that complicated. What I'd like for you to walk away with is that the plans we laid out a couple of years ago, to dramatically reallocate how we spend money and to execute, allowed us to grow much faster while investing more in R and D and dramatically lowering our spending as a percentage of revenue.

And trust us, when we say that we're going to do that going forward, not to shoot ourselves in the foot, but to extend our leadership position, we're pretty confident that we're going to be able to pull that off. Thanks.

Speaker 1

Great.

Speaker 1

Let's go over to David Wong on the right side here.

Speaker 6

Thanks very much. David Wong, Nomura Instinet. How fixed are your spending goals in an uncertain revenue environment? Suppose you have upside on revenue: does that mostly pass through to earnings? Or do your costs actually lift because there are other things that you can invest in?

Speaker 2

Sorry. It will be a function of the opportunities. We took our best view of how the next 3 years are going to play out. If we grow faster, if we make a little more money and great opportunities, to Stacy's point, come along the way, then absolutely, we'll invest in great opportunities that we think are going to enhance our competitive and strategic footprint. On the flip side, much like what you just saw in this last earnings call, if revenue is going to be a little bit slower, we're going to be a little more disciplined, but not at the expense of not investing in the product roadmap that the team laid out today.

So a little more growth gives us a little more capacity. Over the last 3 years, what that meant for you is that the trade off was $1.26 in higher earnings with a little more growth. That's how it played out over the last 3 years. We allocated our money to the higher growth areas. We grew faster than we anticipated.

We continued to invest. But in doing that, we generated $1.26 more earnings because of that higher growth. That will be a trade off we make along the way. We see the opportunity to leverage the technologies that we've built in this company and that we're investing behind to play a much bigger role in the industry at large and in the value we can bring to our customers. And we see those trends those tailwinds to be with us for a while.

So, we're going to invest behind them. But we're going to be extremely disciplined in how we approach it, and we're going to learn things along the way; what seemed like a great idea one day might not be such a great idea the next day, and we'll cut it, we'll learn from it and we'll move on. So higher growth means either more earnings or more opportunities. With lower growth, we'll probably spend less, but not, to Stacy's question, at the expense of constraining R and D. No, we're not going to.

Speaker 7

Great.

Speaker 1

We'll hit Ross here and then we'll have time for one more question.

Speaker 17

Thanks. Ross Seymore from Deutsche Bank over here. One for you, Bob. Lots of questions on OpEx, lots of questions on gross margin, very good questions. But a lot of the pushback on the valuation of your stock is on the EPS versus free cash flow side of things.

So I wanted to focus on free cash flow and CapEx. Apologies that this will be a 3 part question, but hopefully it will be a quick answer. One, if memory spending isn't going to go up, why is your CapEx coming up so much? Is it the multi node conversion or confluence? Is that the answer?

2, how is that gap going to close, as it seems like the CapEx you talked about is growing low single digits at the same time that your revenues and earnings are growing at low single digits? And then 3, and probably most importantly, over time, what is the capital intensity? And is that gap going to stick at 80% free cash flow as a percent of EPS? Or is that going to change for some other reason as we look longer term?

Speaker 2

First, I'd say that we're going to be in a position where the difference between free cash flow and earnings comes down to two good reasons. One is our prospects for growth: if we see more growth going forward, capital, all else equal, will be a little bit higher. And second will be the next node. In the past, we had a bunch of other things in there that explained the degradation.

We want to get back to, I would say, 80% or greater than 80% in 2021. Closing that gap from where we are today comes down to our outlook for growth beyond 2021, i.e., the capacity that we need to put in place and how we think about 7 and 5 nanometer. And we're going to get to a place where we'll be able to explain with real clarity, any time free cash flow is lower than earnings, the why. What we're telling you now is that by 2021, that difference, at greater than 80%, is a function of our view on the growth rate, as a bigger chunk of our business gets tied to higher growth going forward.

The second, more tactical dynamic driving some of the improvement: our gross margins are going to be carrying more depreciation. As you know, we've built up some capital, and it's going to be depreciating more. Inherent in our earnings over the course of the next 3 years is more depreciation, or otherwise put, more cash earnings as a result. So you see that improving over time. This degradation of cash flow versus earnings had a lot to do with the fact that we were bringing on equipment to deploy into 10, but we weren't quite ready, and therefore we didn't depreciate it.

And I think over the course of the next 3 years, we're going to be putting that capital to work. Depreciation is going to go up and our cash earnings will be higher. Thanks.

Speaker 1

We'll take the last question over here, Matt.

Speaker 18

Thank you. Good afternoon. A couple of points on DCG, I think for Navin or for Murthy. I guess the first part is, we talked a lot about memory pricing relative to the memory business at Intel. Navin, I wondered what the memory pricing decline has done to the value prop of 3D XPoint memory and Cascade Lake?

And secondly, I was a little bit surprised, Murthy, to hear that the first product on 7 would be a GPGPU. I think the team had talked about the server business potentially leading you onto the new node at 7. Maybe give us an update on the 2 main businesses and how those timings might look? Thank you.

Speaker 6

Yes, on the first one, Matthew: we didn't plan for DRAM to be at $10 a gigabyte when we started Optane. We had always planned for it to be lower than that. So with the recent dynamics of DRAM pricing going way up and now more recently coming way down, you kind of average those out. When we started the Optane program and we started planning for what the products would be, we weren't assuming this sort of abnormally high DRAM pricing. So the recent reductions in DRAM pricing really don't change any of the dynamics in terms of the value prop.

Remember, the 2 primary value propositions that we have are large capacity memory, which can't be done with DRAM, right, 512 gigabyte Optane persistent memory modules are 3 times the size of what DRAM can do. And 2, persistence, the ability to have that information stored even when the power is out. Those two things combined across a wide range of workloads is where the value prop comes from. And the pricing per gigabyte relative to DRAM is something we watch, but we weren't planning for really high DRAM pricing. And so really the value prop hasn't changed.

The thing we've been focused on is getting the product out into the market, getting it into proofs of concept and getting the ecosystem built. And we're now in that early phase. I'll start on the second one, and then Murthy can add. Remember that the product Murthy talked about on the Xe architecture, the GPGPU, is a data center product. So that product will go into servers.

So we had said, maybe at the last investor meeting 2 years ago, that we were switching to data center first on the process node transitions. Maybe it was implied at the time that that would be a CPU, but we didn't specify that. We said it would be data center first. And having decided to invest in that accelerator market, it turns out that an interesting product to start with is a GPGPU, for a bunch of reasons, in the early part of the process node. Murthy talked a little bit about that.

The redundancy you have in a GPGPU product helps a lot at the front end of a process transition. The next product on 7 nanometer will be a data center CPU. And that's probably all we're going to say on that for now. But a data center GPGPU followed by a data center CPU is the way we're planning the roadmap.

I don't know if you want to add anything.

Speaker 3

Just a few things. First of all, Matt, really take seriously this message that CPUs are foundational, but we're moving towards a philosophy of competitive XPUs across the spectrum. Secondly, exactly as Navin said, for us, AI and HPC in the data center is a huge strategic priority, so we're actually calling out that priority markedly by aligning it with 7. And then of course, the architectural attributes of graphics are more resilient for ramping a process in its early stages, before defect density has fully matured.

So I think it really is this combination of making sure that we think data centric, CPU to XPU, and then picking targets that allow us to get yield learning up very, very quickly, so that we have a fast portfolio of products going through. Remember, we've really simplified the design axiom for 7 nanometer. So you're going to see a much, much faster flow of products through our nodes than you've traditionally measured us by. The idea is: simple processes, balanced scaling, and large product volume through the node at a much earlier time period.

Speaker 1

All right. With that, we will wrap up the webcast, and I'll invite everybody that's here with us in Santa Clara to join us across the street at the headquarters building, the Robert Noyce Building, for a reception; we'll have team members guiding you along the way. Thanks again.
