Good morning. I'd like to welcome you all to Intel's Technology and Manufacturing Day. We're excited to have you here today. Most of you know this already, but Intel's Technology and Manufacturing Organization is the lifeblood of this company, but it's also our most secretive. In fact, this has been the first time in 3 years that we've done a public briefing about our technology development and manufacturing.
And it's also important to note that this is the most in-depth, under-the-hood briefing we've done on Intel manufacturing in my 17 years with the company. So you want to be paying close attention throughout the day. We're going to have many technical disclosures and news items throughout the program. We will be posting information to Intel's newsroom in a couple of batches today, which will help you digest the information. Those will go live at 9:30 a.m. and at noon. As well, after the event concludes, we will be posting the presentations to our newsroom at intc.com. So before I get into the agenda for today, I wanted to talk you through a few logistical details. First, please turn off the ringers on your phones.
Second, we do have Wi-Fi networks. The registration information is on the back of your badge. And with that out of the way, I'm going to touch on a few financial disclosure reminders before we get into the agenda. Today's presentations take place during Intel's quiet period before we announce our 2017 Q1 financial and operating results. While we will not be addressing Q1 information, presentations may contain forward-looking statements.
All statements made that are not historical facts are subject to a number of risks and uncertainties, and actual results may differ materially. Please refer to our most recent earnings release, 10-Q and 10-K for more information on the specific risk factors that could cause actual results to differ. So with that out of the way, we'll now move on to the agenda. And I'll just walk you through a quick overview on that before we get started. Stacy Smith, who leads our manufacturing operations and sales, is going to kick off the day.
He'll set the stage for the subsequent technical discussions and he'll cover our technology and manufacturing strategy and Moore's Law. He'll then be followed by 3 of our top technologists from the Technology and Manufacturing Group.
Mark, Kaizad and Ruth will walk through the innovations we're delivering at 14 nanometers and 10 nanometers to enable significant density and cost per transistor benefits. Then Mark Bohr is going to come back to the stage later this morning, and he'll kick off a portion of our agenda where we'll spend some time on Intel Custom Foundry. Today, you'll really get a sense of the Foundry's role as a business serving both internal and external customers. He will discuss a new process being offered by the foundry called 22FFL, which stands for FinFET Low Power, and we expect it to have a range of IoT and mobile applications where power efficiency and fast time to market are critical. And after Mark wraps up, we'll transition to have Murthy Renduchintala join us.
He'll discuss our IDM model and our approach for co-optimized process, architecture and IP innovation. He'll also address Intel Custom Foundry. Then we'll wrap up the morning with an audience Q&A with Stacy, Murthy and the General Managers of the Technology and Manufacturing Group, Sohail Ahmed and Ann Kelleher, who many of you may not have met before. Then we'll break for lunch. And partway through lunch, we'll start our final session of the day, a panel discussion about Intel Custom Foundry.
Panelists will include executives from ARM, Cadence Design Systems, Synopsys and Intel Custom Foundry. And with that, we'll get the program started. Thank you for joining us today.
We don't stop for naysayers, complacency, competition, boundaries or physics. Okay. Maybe physics. But otherwise, there's no stopping us. Not when we put the silicon in Silicon Valley and change the world.
We make the rules. We made the law. We put everything under one roof, design and manufacturing. That's why our products are years ahead of everyone else.
We're literally pushing bold new tech that blows past the litho barrier. And our pushing doesn't stop in the fab. We're making life better outside. Like this and this or that. We don't stop to wonder what amazing applications deep learning will bring because we're the ones bringing them.
We don't daydream about autonomous cars because we're already driving toward them. We won't ask what new realities VR will unlock or what the capabilities of a 5G network will be. We'll tell people how we unlock them.
Please welcome to the stage Executive Vice President, Manufacturing, Operations and Sales, Stacy Smith.
Good morning, everybody, and welcome to our first ever Technology and Manufacturing Day, where we are going to show you all the secrets of TMG. For those of you that don't know me, I'm Stacy Smith. I actually know a lot of you from my almost 10 years as Chief Financial Officer. As Laura said, now I run manufacturing operations and sales. Since I'm the Chief of Manufacturing Operations and Sales, that makes me CMOS.
That kills in TMG. The sales part of my organization totally doesn't get it, but TMG loves that. Our ability to advance Moore's Law, to make products less expensive and more capable year in and year out is really our core competitive advantage. It's a huge driver of our business. And in truth it's a huge driver of the worldwide economy.
It enables people to connect. It enables people to entertain themselves to play and to learn. Moore's Law helps us solve some of the biggest problems on the planet and it improves people's lives. I'm pleased to be here with you today. You're also going to hear from some of our top technologists and leaders.
You're going to hear from Kaizad Mistry, Ruth Brain, Mark Bohr and Murthy Renduchintala. So it's a great lineup that we have today. Some of the presentations will get into the details of our technology. So for those of you that aren't technologists, bear with us on that, but there's some really important information that we want to disclose. So I'm going to kick off today by answering just a few of the questions that we get, things like whether or not Moore's Law is dead, and whether we still have technology leadership.
Spoiler alert on those: it isn't, and we do. Then I'm going to talk about how our scale has become a bigger and more valuable competitive advantage. And one of the topics you're going to hear about today is that the naming convention people use for process technology nodes has kind of lost its tie to the actual process technology. And so Mark's going to talk to you about a standardized methodology for measuring density in a process, so that we have an objective, standardized way to articulate how Moore's Law is being advanced. Mark will share that with you.
In the lead up to that, I'll show you some of the ideas that we toyed with, but ultimately discarded, in terms of our process naming methodology. So, first, we thought that we could compare to a virus because that's a standardized size. For the record, 14 nanometer is about one-seventh the size of a virus. But our crack team of PR professionals pointed out to me that in the computer industry, no one wants to be compared to a virus. So we discarded that.
Then we came up with a really good idea. The idea was to have a nanomark, and compare everything to Mark Bohr as kind of the godfather of Moore's Law inside of Intel. So, if I did my math right, and Mark confirmed this for me earlier, 14 nanometer would be 11 nanomarks. Just to put it in perspective, some of the competitive processes coming to the marketplace this year would be significantly less dense than that, more like 15 nanomarks. But then Mark pointed out, and Ruth was helping us here, that not only does your height change over your life, but your height changes over the course of a day.
So this was not the standardized metric that we wanted. So it wasn't constant enough. So, we had to let that idea go. So, you'll hear a lot more about a less fun, but real way that we want to measure process technology when we get to Mark's presentation today. All right.
So let me get busy here. I want to take a second and define Moore's Law for you, just starting at the highest level here. Gordon's observation back in 1965 was that the number of transistors per square millimeter was doubling approximately every 2 years. That simple observation has become the heartbeat of technology. It means that the capability of devices that use semiconductors doubles every 2 years.
It's what brings us technology from supercomputers to virtual reality to wearables. It's really the driver of the industry. But at its core, Moore's Law is really an economic law. It says that by advancing semiconductor manufacturing capability at a regular cadence, we can bring down the cost of making semiconductors over time. And since it's a doubling every 2 years, the cumulative effect of Moore's Law has been enormous.
It has literally changed the way we live our lives. To illustrate this in a fun way, we like to look at what would happen if other industries saw innovation at the rate of Moore's Law, a doubling of capabilities every 2 years, starting in the same time frame that Gordon Moore penned his law. So car mileage, if you applied the same metric to that, would be so efficient that you could go the equivalent of the distance between the Earth and the sun on a single gallon of gas.
You could feed the entire planet on a single square kilometer of agricultural land. And space travel would have gotten to the point that you could actually travel at 300x the speed of light. Again, coming back to my new role in manufacturing, operations and sales, I have found that the key to the hearts and minds of our technologists is to show a picture of the Starship Enterprise in a presentation. So I'm going to do that all the time.
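The compounding behind those comparisons is easy to check. Here's a back-of-the-envelope sketch in Python; the 1965 start year and a strict doubling every 2 years are simplifying assumptions for illustration, not Intel's actual node history:

```python
def moores_law_factor(start_year, end_year, doubling_period_years=2):
    """Cumulative capability multiplier from doubling every fixed period."""
    doublings = (end_year - start_year) / doubling_period_years
    return 2 ** doublings

# 26 doublings between 1965 and 2017: roughly a 67-million-fold improvement.
factor = moores_law_factor(1965, 2017)
print(f"{factor:,.0f}x")  # prints "67,108,864x"
```

That 67-million-fold multiplier is what makes the car-mileage and agriculture analogies come out so extreme.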
All right. So let's shift gears now to directly answer one of the key questions that we get from you from time to time, which is whether or not Moore's Law is dead. By the way, I was in the factories in 1990. We were talking about this out in the foyer. And that was about the time that we were progressing lithography to the point that the lines that we were scribing on the wafer were more narrow than the wavelength of light.
As we led up to that, it was seen as this insurmountable technology wall that there was no way we were going to be able to get through. And it wasn't even a blip on the Moore's Law cadence. The reality is that we're always looking out 5 years. We have good insight into how we solve the problems in the next 5 years. You're going to hear a lot of that today.
We do a lot of pathfinding for the 5 years beyond that. And when it's 10 years out, the view is always that we're not going to be able to solve the problems that exist. But as we get there, we do solve those problems. So the short answer, as you look at this chart, is that Moore's Law is not dead. At least it's not for us.
These are our curves. And let me take a second and walk you through the charts behind me, which show that for Intel, Moore's Law is alive and well. Starting on the left-hand side, what that curve shows is how scaling, or density, is improving over time. This is a log scale and it shows how much we can shrink the transistors every generation. This is what drives Moore's Law for us.
The transistors shrink so that we can double the number of transistors at every node. The fact that those dots for 10 nanometer and 7 nanometer are below the historical line is actually quite significant. You'll find that when the technologists in TMG draw dots, they don't do that loosely. And what it says is that we are getting a greater than normal density benefit for those processes. That's an important point and I'm going to come back to that in a minute.
The middle graph shows that the cost per square millimeter of technology goes up. It just gets more expensive to make the wafer. That's not a surprise; it's been a constant in our industry and you'll hear more about that from Mark. And the way that these curves come together is that every generation the cost per square millimeter to manufacture a wafer goes up, but we shrink the transistors. And at the end, we get a decline in cost per transistor.
And you can see all of that come together in the chart on the right. Our cost per transistor is coming down at a slightly better rate than the historical trend. That says that for us Moore's Law is alive and well. Then we get to decide what to do with that benefit, that benefit of a shrinking or a declining cost per transistor. We can either keep the die size constant and add performance capabilities and features or we can decide to shrink the die and reduce the cost of each product.
And the reality for us at Intel is we do both because of the breadth of our products. In some cases, we take that benefit as a smaller die size, so lower cost products to go after new markets. In other cases, we go after more and more performance and features, because we get a sell-up opportunity and we can enable new usage models. The benefit of Moore's Law is that we can do all of that. We can improve performance, we can add features, and we can reduce costs.
In a little while, I'm going to show you some of our actual cost data so that you can see the impact that these declining Moore's Law cost-per-transistor curves have on reducing our costs. Over the course of the rest of the morning, you're going to get a lot of insight into the actual technologies that we're using to continue to enable this. The chart on the previous page showed cost per transistor coming down from process to process, right? But we know that the time between processes, the time between nodes, has gotten longer. It's gotten longer for us and it's gotten longer for the rest of the industry.
So given that, you might wonder whether or not we're getting the same annual benefit to Moore's Law, right? We're showing that we're still coming down node to node, process to process, but as that time gets longer, do we still get the same annual benefit? The short answer to that question is yes, and I'll show you a little bit more about this in a second. We're getting the same year on year improvement even with that longer time as we go from 22 nanometer to 14 nanometer, from 14 nanometer to 10 nanometer. And it goes back to that density curve that I showed you a couple of slides ago.
We're getting a larger than normal density benefit as we go to 14 nanometer and as we go to 10 nanometer. In essence, we're taking bigger steps generation to generation, which is enabling us to stay on the historical trend. And we're able to do that because of a strategy called hyperscaling. There are several underlying technologies that enable it, but there's a really important one called self-aligned double patterning and self-aligned quad patterning. You're going to hear about that in Ruth's and Kaizad's presentations.
It gets a little technical, so I'll give you a spoiler alert for the non-technical people in the room: all you need to know is that it's really, really cool. And we're very lucky that these people work for us. So it's taking us longer to go from node to node. When Gordon penned Moore's Law, the time between nodes was more like 18 months, right?
Over the course of my career, that became 2 years. Now it's more like 3 years to go from node to node. But we're able to take bigger steps in terms of density improvement. And this is what's enabling us and allowing us to stay on that same improvement rate that we've achieved in the past. In addition, we're taking advantage of the longer life at each process node to introduce process optimizations.
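The "bigger steps on a longer cadence" arithmetic can be sketched like this; the numbers are illustrative of the principle, not Intel's disclosed per-node scaling factors:

```python
# To hold the classic "2x density every 2 years" annualized pace,
# a node that arrives every `cadence` years must take a bigger density step.
rate_per_year = 2 ** 0.5  # annualized Moore's Law improvement rate

for cadence_years in (2, 3):
    step = rate_per_year ** cadence_years
    print(f"{cadence_years}-year cadence -> {step:.2f}x density step per node")
# A 3-year cadence needs a ~2.83x step per node to match
# the 2.00x step of a 2-year cadence.
```

So a roughly 2.8x density jump every 3 years delivers the same year-on-year improvement as a 2x jump every 2 years, which is the claim being made here.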
We have a really clever naming convention for these. We call them 14 nanometer plus and 14 nanometer plus plus, and there will likely be 10 nanometer plus and 10 nanometer plus plus, so it'll be easy for all of us to remember. Those optimizations allow us to bundle together process technology improvements, architectural innovations and new IP blocks to enable a cadence of products hitting the market every year. And Murthy is going to talk a lot about that in his presentation. The key for us is an annual improvement so that we can deliver to the customers something new and fresh and enable new usage models for them.
This chart shows the impact of the hyperscaling that I was just talking about. So if you look at the 14 nanometer and the 10 nanometer chips that are to the right, what you see is that the shaded dotted line shows what the die size would be. This is a feature-constant view of the world; we know that sometimes we invest Moore's Law in more features, and I'll come to that in a minute. And the shaded area shows what those die sizes would have been if we were just on the traditional Moore's Law scaling, which by the way is really good scaling.
But with hyperscaling, we're getting that bigger benefit. And you can see the impact on this chart. By the time you get to 10 nanometer because of the cumulative effect of the steps that we're taking at 14 nanometer and at 10 nanometer, we get about half of the die size at 10 nanometer that we would have gotten with normal scaling. That's again assuming that we apply all of the benefit to die size. As a recently reformed CFO, I do feel compelled to show you a couple of actual cost curves.
And I think this really illustrates at the heart how Moore's Law works in practice within Intel. So these are the actual cost curves for us for 22 nanometer and 14 nanometer product families. The first product on a process, as you would expect, is a relatively expensive product, right? You're coming onto a process when the factory is ramping and when yields are a little bit lower. Then the second product typically comes in at the sweet spot of costs and then carries on to hit the optimal cost.
On the left of this chart, you see the 22 nanometer product family. So that's Ivy Bridge and Haswell. As you might recall, Haswell on 22 nanometer was one of our company's lowest cost products ever on one of our highest yielding process technologies ever. On the right-hand side, you see our 14 nanometer product family. And for the first time here, you see that plus and plus plus going on.
So they follow that similar trend as we go from Broadwell, which was the first product on 14 nanometer, to Skylake and then to Kaby Lake. You see here that Skylake, by the point that it's ramped, gets to a similar cost to Haswell at the same time in its life. You see something really interesting with Kaby Lake. If you take out your ruler, and I know some of the financial analysts in the back will, you know who you are, you're going to see that Kaby Lake at launch is actually a little bit lower than Haswell was.
So we're getting a great cost in 14 nanometer as we get to those later waves of product. That actually shouldn't be a surprise. I think you've seen this kind of information from Intel in the past. But there's something that's really important here. Kaby Lake has 800,000,000 more transistors than Haswell.
It scores 30% higher on common metrics like 3DMark. Kaby Lake enables entirely new usage models like immersive virtual reality and those kinds of things. It's an entirely different kind of product, an entirely different class of product, frankly. So the truth of Moore's Law for us resides in those cost curves. We can bring new capabilities to the market at a similar cost to the generation that came before.
The prior graph showed actual product costs, but that can be impacted by things other than Moore's Law. So if you think about it, as we progress our product line from 2 cores to 4 cores, the die size can increase and our unit costs presumably go up in that model. So it can be impacted by mix. Now presumably we get paid for that, but still that cost trend is impacted by that mix. This graph is an all-in cost for our PC CPU products.
So it just isolates that segment of the business and it shows our actual curves of what we've achieved in terms of cost per transistor since 2004. So this is an all-in actual cost for the company for the PC client business. It's cost per million transistors, if you think about it like that. So it's kind of the corollary to Moore's Law, showing cost per million transistors as opposed to density. This is also on a log scale, and this is the first time that we've shown this data publicly.
This chart is the realized benefit of Moore's Law to our cost structure over time. This is cost per million transistors. So it normalizes for mix, entirety of our PC product business. You see something interesting here because we put the node transitions on here. Even with that longer time between nodes, we're staying on the same realized cost per million transistor curve.
This is the benefit of hyperscaling that I was just talking about and the intranode optimizations. And while I'm not showing 10 nanometer on this chart, I can say that based on everything that we know about 10 nanometer, we expect that this trend will continue out through the 10 nanometer generation. Remember, I told you earlier, there are 2 benefits to Moore's Law. One is that costs come down over time, which I just showed you. But the other is that we can improve the capabilities of our products year in and year out. We can do more capable products for data center, for machine learning.
We can do more capable products in the client space. We can go after new markets. And you can see in our financials over the last several years how this Moore's Law leadership has impacted us, not just in cost, but also in the competitiveness of our product line. And the best way to view that is in the gross margin of the company and how it's shifted upward. It's a combination of the costs that we've been achieving and the capability of our products over time.
So we continue to get the benefit from Moore's Law. But as many of you have reported, there is confusion out there around how companies are naming node transitions. So the question out there now is whether or not we still have a lead. I mean others are coming to market with 10 nanometer sometime this year.
So let's hit that question head on.
This chart shows logic area scaling, which is a good way to compare process technologies between companies. If you look at the blue line, which is the Intel line, you can see that we're staying on the historical Moore's Law curve. In fact, at 14 nanometer, we're doing slightly better as a result of hyperscaling that I just talked about. When you look at the pink line or the red line, you see that for others that trend line has flattened out. And you really saw this start to happen as they brought in FinFET transistor structures into their process technology.
They called those node transitions, but they weren't getting a density improvement as they were introducing that. So that caused a flattening out of the curves. And you can see that since that point, they've really diverged pretty significantly and no longer match what we would traditionally call a node advancement. The result of this is that the technology that's coming to the market this year from others is equivalent in density, and you'll get more of that from Ruth today, to our 14 nanometer technology. For us, 14 nanometer is a technology that's been in production for 3 years.
I just showed you those cost curves. And we've shipped on the order of 500,000,000 units on 14 nanometer. And others will just be coming to market sometime later this year with it. So it's apparent that the naming convention has caused some confusion in the marketplace. As a result, we're proposing that we, in conjunction with you, put some weight behind an objective measurement that measures process technology empirically and independently.
It's not quite as fun as nanomarks, but it probably makes more sense for the industry, and Mark Bohr will talk about that in his presentation. So now I'm going to switch gears a little bit and give you a sense of the scale of our worldwide manufacturing operations, which is one of our key competitive advantages. The investment required in this industry is enormous. It costs about $10,000,000,000 to build a single modern fab. You saw this play out a few months ago when we announced that we were going to equip Fab 42.
So that was a factory in Arizona where the shell has been built. And even there, just to equip an existing shell at 7 nanometer is on the order of $7,000,000,000 of investment.
To just give you a sense of the scale of these projects, they're enormous. When we built that Fab 42 shell, we had over 5,000 construction workers working on the factory. It's one of the largest construction projects on the planet. To put this investment into perspective, we've invested around $50,000,000,000 in CapEx over the last 5 years. I use the term invested here on purpose.
This is one of our most valuable competitive advantages, and I believe that we get a really strong return on that investment by achieving product leadership and by being able to bring down our costs generation by generation. One way to measure this, when you're looking at a capital-intensive business, is return on invested capital. Our ROIC over this 5 year horizon has been slightly above 15%, which puts us in the top quartile of the S&P 500. Scale is more than just bragging rights for us. To get a return on that $10,000,000,000 of investment, you need significant volume to be able to fill just one modern factory, not to mention a network of factories, which is also a competitive advantage that I'm going to talk about in a minute.
And you need to have a very significant skill set around technology development. The technologists that work on this stuff are solving some of the most difficult engineering problems on the planet. A decade ago, when you look at this chart, there were 18 companies that had their own leading edge factory. Think about that. 10 years ago, 18 companies.
Today, we're down to 4. It's us, Samsung, TSMC and GlobalFoundries. Of those 4, there are only 3 of us that are really investing in developing our own process technology at scale. And of those, there are only 2 of us left as integrated device manufacturers. And Murthy, in his presentation, is going to talk a lot about the benefit that he gets by being able to partner with the technologists in the manufacturing organization to optimize things for the needs of his product line.
This is hard stuff. You need scale and you need skill, but I'm convinced there's an increasing competitive advantage that's going to accrue to those few players that can do this well. So we have global scale, and we don't just invest in 1 factory, we have multiple factories. We have leading edge factories in the U.S., in Arizona and Oregon, in Ireland and in Israel.
We have a trailing edge factory in New Mexico. In addition, we're building our first wholly owned memory factory in Dalian, China, starting off on 3D NAND, and over time we'll transition to 3D XPoint. In the leading edge factories, we employ a very strict methodology called Copy Exactly! This lets us ramp those factories fast and get to the world's highest yields for the complex devices that we make. It also gives us a model where we have fungible capacity to respond to changes in demand.
And that becomes really important as we think about our supply chain and how quickly the demands of the marketplace can change. There's another element to this ability to respond to changes, and that's that we build in forward reuse of our equipment. So we can roll forward a lot of the equipment from 1 generation to the next, and we build our factories such that we have several generations of future technology that we can put in those factories. That gives us the ability to respond to changes in the marketplace and it gives us the ability to ramp in place without having to build new factories as we go from process technology to process technology. So we only have to build factories when demand grows, or when our business grows to the point that it dictates the building of a new factory.
In addition to our fab capacity, we have assembly and test sites in Malaysia, China and Vietnam. We also do a reasonable amount of outsourced assembly and test, so we have a good sense of what the costs are out there. One thing I'll say about our assembly and test capability is that we do achieve world class costs internally in our assembly and test factories. But even more important, increasingly packaging technology, assembly and test, and some of the associated technologies that come out of that are becoming a really important differentiated capability that gives real value to our customers. Mark and Murthy will talk about something called Embedded Multi-die Interconnect Bridge, or EMIB for short.
It's actually game changing in the way that it allows us to put together different IP blocks in a product that can come from different process technologies, and do that in a way where you get super high performance. So to give you a sense of the numbers involved, we have more than 30,000 people working in the Technology and Manufacturing Group across those 6 fabs and 3 assembly and test sites and the associated technology development that goes into them. We have over 4,000,000 square feet of manufacturing cleanroom space. And we ramp these factories at a blistering pace to a really unprecedented volume. As a data point, we produce over 10,000,000,000 transistors a second out of our fab network.
We also have a very substantial footprint in the U.S., by far the largest footprint of any semiconductor company in the U.S. Of the high-tech manufacturing jobs that I talked about on the prior page, more than half of those employees are based in the U.S.
And in contrast, 80% of our revenue comes from products that are exported to countries outside the U.S. So we're one of those rare companies that does most of our manufacturing in the U.S. to service markets that are mostly outside the U.S.
In total within the U.S., and here I'm going a little bit more broad than just manufacturing, I'm looking at all of our R&D, we have over 50,000 high-tech workers.
And over the last 5 years in the U.S., we've been the largest capital investor in the technology space as we've built and equipped the factories here in the U.S. In addition to the jobs created, we have a very substantial impact on the U.S. economy.
Our contribution to the U.S. GDP was recently sized at around $90,000,000,000. So it's enormous. It takes a finely tuned supply chain to manage this complexity, and we have one of the best in the world. We get wonderful costs as a result of our scale, but even more importantly, we've developed the capabilities to work really closely with our customers, to embed in their supply chains, to help them through transitions, and to delight them as we introduce new products and bring technology to the market.
To put this in perspective, last year $10,000,000,000 of our revenue over the course of the year came from products that didn't exist on January 1. It just shows how fast our product line transitions. So we have to have the supply chain that allows us to adjust to those levels of changes. But more importantly, we have to be able to work with thousands of customers all around the world to help them through the transitions, to get the right inventory in place, to help them with the design work, to make sure that they can respond to the new products that we're bringing to the marketplace and ramp those in the market to deliver ultimate value to the consumers. This is very important to us.
It's very important to them. And I think it's become an increasing competitive advantage for us as the world has consolidated and gotten more complicated.
All right.
So I'm
going to switch gears here, and I'm going to talk about our Foundry business. First, let me show you some data on the addressable market. In 2016, the foundry TAM was over $50 billion. If you look at the leading edge portion of that market, which is the orange bar at the bottom, it's just under half of the overall TAM. And here, we define leading edge as 28 nanometer and below, which would be consistent with how the foundry providers define the leading edge. Getting back to the is-Moore's-Law-dead question, one of the questions I sometimes get is whether SoCs need the kind of leading edge capability that we provide.
When you look at the driver of that orange bar there, why it's been growing so fast, really the dominant driver of that growth has been high end mobile SoCs driving the build out of that capacity. So those customers are valuing the performance and the energy efficiency that they get by being on more advanced technology. One last data point on this chart. When you look at that orange portion, the bottom part, the leading edge portion of the market, there's one company that has about 70% share in that market. So the characteristics of this market are that it's a big TAM, it's growing fast, and the people that we're working with want choice and innovation in the marketplace.
So let me just dissect a little bit the $23 billion portion of that orange bar. What you see is about half of it is on 28 nanometer and the remainder is on the more advanced nodes. This portion of the market has been growing fast. As I said, it's been growing at about a 14% CAGR between 2010 and 2016. The key for us in this space, and this has been a journey for us, has been to develop our capabilities so that we can address what customers want, so that we can move in our journey from being very much a custom partner to having more general purpose capabilities to work with a much broader set of customers.
You can see this journey that we've been on, and it's really accelerated over the last 6 months. As we focused on building out our capabilities, the first place that we focused was in bringing our silicon capabilities to this market. So we've made FinFET transistors, high-k metal gate, and strained silicon all available to our customers. Second, we've been investing to develop a differentiated collection of leadership IP we're making available to customers. And that ranges from Intel Architecture cores in some cases, to FPGAs, optical, and high performance SerDes.
And we're bringing some of our leading interconnect capabilities like EMIB that I talked about to this market, again, in order to create value for the customers that work with us. Third, we're now working very broadly with ecosystem tool, EDA and IP providers to build a robust ecosystem around our foundry. Bottom line, we're trying to provide the technologies, the tools and the services to make it easy and compelling for customers to design on us and to work with us. And we now have the capability to go in and do customer co-optimization on a pretty regular basis, and we've done dozens of those. To give you more insight into this, this afternoon there's going to be a panel on our Foundry business, as Laura said. Our earliest foundry work was primarily for the networking market.
At 14 nanometer, we're seeing interest from networking and mobile SoC customers. One example of this is at Mobile World Congress, we announced in conjunction with Spreadtrum that they're going to be using our 14 nanometer process for mobile SoCs. On 14 nanometer, we also won a very large FPGA customer; it turned out we liked each other's technology so well that we ended up buying them. And now we're deep into the design work on 10 nanometer products. The customers who are interested in this process span the performance segments of both the client and the mobile markets.
So if you think about it, our current offerings target about half of the leading edge portion of that foundry TAM, the left-hand side of that TAM. Today, I'm really excited to announce for the first time publicly a process targeted at delivering high performance and also ultra low power: 22 nanometer FinFET Low Power, henceforth known as 22FFL. This platform blends high performance capability with ultra low leakage that enables our customers to do extreme integration where they need both performance and low leakage on the same product. A few characteristics of 22FFL, and by the way, Mark will talk about this in more detail.
22 FFL can do high performance, but the key change for us is that it can also do that ultra low leakage. To put it in perspective, we're seeing more than 100x lower leakage than leading planar processes, 100x improvement and with much less complexity. This allows us to provide that extreme integration capability to our customers where they can put performance IP and low power IP on the same product,
but they can
do it with a much lower cost of design. They can get to market fast and they can get designs to the market fairly easily. So it's the easiest-to-use FinFET process in the industry. To put this in perspective, some of our internal foundry customers have been able to bring up products in less than half the time and at less than half the cost. Lastly, 22FFL supports full RF integration, and because it's built on our high yielding 22 nanometer process, it's cost competitive with 28 nanometer planar technologies.
So think of 22FFL as FinFET for the masses; it gives us a leadership process to go after the other half of that leading edge foundry TAM. So I'd like to recap some of the things that you've heard from us today and that you're going to hear more about over the course of the rest of the morning. First off, Moore's Law is alive and well. We have a 3 year lead at 14 nanometer over the rest of the industry. The industry is consolidating, and our scale is both unique and a growing competitive advantage.
And very importantly, we're making the investments and growing our capabilities to build a foundry franchise. But before I go, I have just one more thing that's important to the company and that I want to share. One of the reasons that I joined Intel so long ago was that I had a sense that the people here wanted to make a positive impact on the world. And clearly, we do that through our products. It's one of the ways we make a positive impact on the world, because we help people solve important problems and connect with each other.
But we also try to be good stewards of the planet. And I have to say, I'm so proud of what this organization has done and how they conduct their business. One example: over 7 years ago, we started a journey to establish a responsible supply chain for conflict minerals, and we were one of the first companies that was able to validate our products as conflict free. We're the largest voluntary corporate purchaser of green power in the U.S., and we've been number 1 on the U.S. EPA list for 9 consecutive years. And as of last year, we're now also the largest purchaser of green energy in Ireland. We're deep into the work to ensure the human rights of the people who work in our supply chain. We don't do this for the recognition, but we are recognized for our work here.
We're ranked as one of the world's most ethical companies by both Forbes and Ethisphere. We're number 1 on the EPA's National Top 100 Companies List, and we're in the Dow Jones Top Sustainability Index. Bottom line, we're trying to do good as we do well. This is something that's important to Brian, it's important to me, it's important to our Board. More importantly, it's important to the employees of the company.
And it has an impact. It's one of the things that allows us to hire the best and the brightest and put them to work solving some of these tough engineering problems, all the while making a positive impact. So with that message, I'm going to turn it over to some of these smart and committed people to carry on the presentation from here. Thank you very much.
Please welcome Intel Senior Fellow, Technology and Manufacturing Group Director, Process, Architecture and Integration, Mark Bohr.
So good morning. Today, I get to talk to you about my favorite topic, Moore's Law Leadership. But before I do that, let me say that last night, during our dry run, Stacy Smith surprised me with his new density metric, Nanomarks. So trust me, I have a more serious proposal later in my presentation. Let me start with my key messages.
First, Intel leads the industry in introducing innovations that enable scaling. Hyperscaling on 14 nanometer and 10 nanometer provide better than normal scaling while continuing to reduce the cost per transistor. Intel's 14 nanometer technology has about a 3 year lead over other 10 nanometer technologies with similar logic transistor density. Intel's 10 nanometer technology provides industry leading transistor density using a quantitative density metric. And enhanced versions of 14 nanometer and 10 nanometer provide improved transistor performance and extend the life of these technologies.
And finally, the overall message: Moore's Law is alive and well at Intel. So here I show a timeline for when Intel introduced our major technology nodes, going back to 90 nanometers in 2003 up to our 10 nanometer technology coming later this year, and I also show when others introduced their various technology nodes. Now it was back on the 90 nanometer generation for Intel, that's 2003, when we were the first company to introduce strained silicon transistors in production. And for those of you who can remember back to that time, other companies were exploring a biaxial version of strain. Biaxial strain never worked, never went into production.
Intel invented a unique uniaxial strain technique that everybody copied about 3.5 years later. And then in 2007, on our 45 nanometer technology, we were the first company to introduce and produce high-k metal gate transistors. And again, if you remember back to that period of time, other companies were pursuing a gate-first process flow. Intel introduced a gate-last, or replacement metal gate, process flow. And eventually, everybody copied our gate-last flow.
And it was at least 3 years later. Then on our 32 nanometer technology, we were the first company to develop a self-aligned via process flow for our interconnects that allows interconnect pitch to scale better than in the past. And again, it was more than 3 years later before other companies in our industry copied that approach. We were the first company to go into volume manufacturing with FinFET transistors. Of course, we called them TriGate at the time, but we now call them FinFETs.
And it was more than 3 years later before other companies first started producing their FinFET technologies. And then on our 14 nanometer technology, which started volume production early in 2014, we started to use hyperscaling to deliver better than normal scaling techniques. And Ruth Brain, in her presentation that follows mine, will describe in more detail what those hyperscaling techniques were on our 14 nanometer technology. And we expect to see similar techniques on other 10 nanometer technologies coming out sometime later this year. And then finally, later this year, we are coming out with our 10 nanometer technology that introduces some other forms of hyperscaling.
So I think it's fair to say that Intel has developed all of the major logic process innovations used by our industry over the past 15 years. And we've received industry recognitions to reflect this. In 2008, we received the SEMI Award for our strain-enhanced transistors. In 2012, we received the SEMI Award for our high-k metal gate transistors. In 2015, another SEMI Award for the first implementation of FinFET transistors, and then last year, we won the prestigious IEEE Corporate Innovation Award for our pioneering work on high-k metal gate and TriGate, or FinFET, technologies.
So now let me talk about logic area scaling. This is a type of graph you've seen before, logic area on the vertical scale, in this case relative to our 45 nanometer technology from back in 2007. And using the metric on the right, the simple gate pitch times logic cell height metric, we achieved about a 0.49x area reduction on our 32 nanometer technology, and a little bit better than that on our subsequent 22 nanometer technology, with about 0.45x logic area scaling. So this is a pretty easy metric to use and to measure on our wafers and on any other company's wafers.
But I think it's time to look beyond that metric. This gate pitch times cell height metric is deficient in 2 key ways. Number 1, it's not a very comprehensive transistor density metric. There are some important second order design rules that affect the size of logic cells that are not comprehended by logic cell height and transistor gate pitch alone. And the other problem with this metric is that while it's a fairly good relative metric, you can say process A appears to be denser than process B, it doesn't give you a hard number that you can use to really compare technologies, both past technologies and some of the latest.
So I think it's time to move beyond this metric. So the metric we propose is not a new metric. It's a metric that's been used by other companies in the past and then seems to have been forgotten or ignored. This is a metric that measures the area and the transistor density in 2 very common logic cells. On the left hand side is a 2 input NAND cell.
Of course, I'm not referring to NAND memories in this case, I'm talking about a logic cell that performs a NAND logical function. It has 4 transistors and is a fairly small logic cell, but very common in any logic circuit. And on the right is another logic cell, maybe on the other end of the spectrum of complexity and density. That's a scan flip flop logic cell. So whereas the NAND cell on the left has 2 active gate pitches to determine its width, a scan flip flop cell usually uses somewhere between 20 and 25 active transistor gates.
So it's obviously a much larger cell, many more transistors and sometimes slightly different transistor density than the NAND cell on the left. What they do share in common is that they have the same cell height, but the cell width will depend upon whether it's the simple cell or the complex cell. And of course, any logic circuit uses a wide variety of logic cells in between these two extremes, but these two kind of cover the range. And the metric that was used in the past, and that I propose we resurrect and start using again, is this NAND plus scan flip flop metric, where what you do is you calculate the transistor density in the NAND cell, then you calculate the transistor density in the scan flip flop cell, apply weighting factors that historically have been 0.6x for the NAND and 0.4x for the scan flip flop, go through that simple math, and you come up with a number in transistors per square millimeter. So when I apply that metric to Intel technologies, I come up with this trend.
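The weighted-average arithmetic behind the NAND plus scan flip-flop metric can be sketched in a few lines of Python. The cell dimensions and the flip-flop transistor count below are hypothetical placeholders for illustration, not Intel's actual design rules; only the 0.6/0.4 weights come from the metric as described:

```python
def cell_density(num_transistors, width_um, height_um):
    """Transistor density of a single library cell, in transistors per mm^2."""
    area_mm2 = width_um * height_um * 1e-6  # um^2 -> mm^2
    return num_transistors / area_mm2

def weighted_density(nand_density, sff_density,
                     nand_weight=0.6, sff_weight=0.4):
    """The NAND + scan flip-flop metric: a 0.6/0.4 weighted average
    of the densities of the two reference cells."""
    return nand_weight * nand_density + sff_weight * sff_density

# Hypothetical cell dimensions, for illustration only.
nand = cell_density(num_transistors=4,  width_um=0.20, height_um=0.40)  # 2-input NAND
sff  = cell_density(num_transistors=32, width_um=1.40, height_um=0.40)  # scan flip-flop

print(f"{weighted_density(nand, sff) / 1e6:.1f} megatransistors / mm^2")  # 52.9
```

Note that both cells share the same height, as the talk describes; only the width and transistor count differ between the simple and complex cell.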
So again, now the goal is ever higher transistors per square millimeter, ever higher density. So it will be an upward sloping curve, as opposed to the area curve, which is a downward sloping curve. Using this metric, our 32 nanometer and 22 nanometer technologies each delivered roughly the historical doubling in transistor density. And our 14 nanometer technology, which Ruth will talk more about after my presentation, delivered a much bigger gain, using hyperscaling to deliver about a 2.5x increase in transistor density. And then our 10 nanometer technology, which Kaizad Mistry will talk about later this morning, provided an even bigger jump, again with the use of the unique hyperscaling techniques used on our 10 nanometer technology: an increase of 2.7x in transistor density.
So these are big steps; our 14 nanometer and 10 nanometer technologies are bigger than normal steps, providing 2.5x and 2.7x increases in transistor density. And yes, they've taken longer; we've taken bigger steps, we've taken a little bit longer, and we're still on the historic trend rate of roughly doubling transistor density every 2 years. So Moore's Law is alive and well at Intel. And my claim is that any company developing a logic technology should now be willing to quote not only whatever node name they choose for that technology, but also a quantitative transistor density number. So in the case of Intel, our 14 nanometer technology delivers 37.5 megatransistors per square millimeter.
Our 10 nanometer technology provides a little bit over 100 megatransistors per square millimeter. And now using measured data from other companies, here's a comparison of our transistor density rate of improvement versus theirs. As you can see, they're on a slightly slower slope, but this is, again, this is a quantitative metric measured using this NAND plus scan flip flop metric. And I'll point out that in recent years, their rate of improvement has been a lot slower than Intel on their 20 nanometer, 16 nanometer and 14 nanometer technologies. And if you compare their transistor density to our 14 nanometer transistor density, ours is about 30% higher, 1.3x higher.
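The "still doubling every 2 years" claim can be sanity-checked from the two quoted densities. Taking 100 megatransistors per square millimeter as the 10 nanometer figure and assuming roughly a 3-year gap between the 14 and 10 nanometer nodes (an assumption for illustration, not a figure from the talk), the equivalent doubling time works out to about two years:

```python
import math

d14 = 37.5    # megatransistors / mm^2, Intel 14 nm (quoted above)
d10 = 100.0   # megatransistors / mm^2, Intel 10 nm ("a little bit over 100")
years_between = 3.0  # assumed cadence between the two nodes

ratio = d10 / d14  # ~2.67x density gain in one step
doubling_time = years_between * math.log(2) / math.log(ratio)
print(f"{ratio:.2f}x gain -> equivalent doubling time of {doubling_time:.1f} years")
```

A bigger step taken over a longer interval can still sit on the same exponential trend line, which is exactly the argument being made here.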
And I think we can say that now. Here I've added a point, an open circle, for the 10 nanometer technologies from these other companies. Now there have been various reports in the press that yes, they are or maybe are starting production on these technologies, but no product has yet hit the marketplace. We can't actually find one of these chips to reverse engineer. So we don't know exactly when they have or will start volume production. We don't know exactly what the density of their 10 nanometer technologies is.
But based on their public statements, it's somewhere around where I've put that large red circle. And their density is expected to be similar to our 14 nanometer technology but about 3 years later. And again, as I said earlier, any company that develops a logic transistor technology, they can give it whatever node name they want, and I think we all agree that node names have lost a lot of their meaning lately, but they should also be willing to follow it with a quantitative transistor density metric. And as I stated earlier, Intel's 10 nanometer technology delivers about 100 megatransistors per square millimeter. We expect these other technologies to be in the range of around 50 megatransistors per square millimeter, a full generation behind.
All right. Let me remind you what Moore's Law is. It's not a law of physics, it's a law of economics. By scaling transistors, you can deliver lower cost per transistor or you can take those lower cost transistors and add more transistors to your product to provide more functionality, higher performance. So again, Moore's Law can deliver either more cost savings or more performance or some combination of the 2.
So historically, microprocessor die area scaling has been around 0.62x per generation, more than the transistor density improvements that I showed earlier, because a microprocessor is a mixture of several different types of circuits. There are, of course, the logic circuits, which tend to scale the best, but there are also IO circuits and SRAM circuits and maybe some other analog circuits that typically don't scale as well as logic circuits. So that's where the 0.62x number has come from. And note, if we had kept following this normal scaling path and had started with a 100 square millimeter die, our 10 nanometer die size would be just a little bit less than 15 square millimeters. But with hyperscaling used on 14 nanometer and 10 nanometer, we, in reality, achieved much better than that 0.62x area scaling factor for a feature-neutral die.
So we're not adding transistors in this case. And the 10 nanometer die would be around 7.6 square millimeters or about half of what it had been if we had stuck with normal scaling patterns. So what does this mean for cost per transistor? So I show 3 graphs here, starting with area per transistor on the left. And if we had followed the normal 0.62x trend, we would be delivering those open yellow circles at 14, 10 and 7 nanometers.
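The two die-size numbers follow from compounding per-generation area factors over the four node transitions from 45 nanometer down to 10 nanometer. A minimal sketch; the 0.62x normal factor is from the talk, while the two 0.445x hyperscaling-era factors are illustrative values chosen to land near the quoted 7.6 square millimeters:

```python
def compound(start_area_mm2, factors):
    """Apply a sequence of per-generation area scaling factors to a die size."""
    area = start_area_mm2
    for f in factors:
        area *= f
    return area

START = 100.0  # hypothetical 45 nm die, in square millimeters

# Normal scaling: 0.62x per generation for the 32, 22, 14 and 10 nm steps.
normal = compound(START, [0.62] * 4)

# With hyperscaling on the 14 nm and 10 nm steps, the last two factors are
# much better than 0.62x (the 0.445 values are illustrative assumptions).
hyper = compound(START, [0.62, 0.62, 0.445, 0.445])

print(f"normal: {normal:.1f} mm^2, hyperscaled: {hyper:.1f} mm^2")
# normal: 14.8 mm^2, hyperscaled: 7.6 mm^2
```

Because the factors multiply, two better-than-normal generations are enough to roughly halve the feature-neutral die size relative to the historical trend.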
So those are hypothetical points, that's not what we're doing. In the middle graph, there still would be a wafer cost increase to deliver that scaling, maybe a little bit less than with hyperscaling, but still a wafer cost, or cost per area, increase on those generations. And the result in terms of cost per transistor is the graph on the right. We would be deviating from the historic reduction rate, delivering not-so-good cost per transistor, maybe better than the previous generation, but a curve that would be flattening out. Here's one more hypothetical scenario to describe to you.
So again, starting on the left, I'm assuming normal area scaling, 0.62x per generation. So that's a hypothetical for 14, 10 and 7. And then in the middle graph, we could have introduced a wafer size conversion from 300 millimeters to 450 millimeters. And of course, wafer size conversions deliver lower cost per area. So that would really benefit 14, 10 and 7: a one-time reduction in cost per area on 14, but that benefit still applies as you scale forward to 10 and 7.
And then the result of that is the graph on the right, a little bit better CPT improvement than on the original set of slides I showed, and close to the historic trend rate for reducing cost per transistor. But here's what we actually did on 14 and now on 10 nanometer. The use of hyperscaling on 14 and 10, on the left-hand graph, shows much better than normal area scaling. In the middle graph, yes, wafer cost is still going up, maybe at a slightly faster rate. But the result in terms of cost per transistor is shown on that right-hand graph: below the trend line for 14 nanometer and certainly for 10 nanometer.
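The cost-per-transistor argument is just a ratio of two trends: wafer cost per unit area (rising) divided by transistor density (rising faster under hyperscaling). A minimal sketch with hypothetical numbers, chosen only to illustrate the shape of the argument, not to reflect Intel's actual costs:

```python
def cost_per_transistor(wafer_cost_per_mm2, transistors_per_mm2):
    """Cost per transistor is cost per unit area divided by density."""
    return wafer_cost_per_mm2 / transistors_per_mm2

# Hypothetical illustration: wafer cost per area rises ~15% at the new node,
# while hyperscaling raises transistor density by ~2.5x.
old_cpt = cost_per_transistor(wafer_cost_per_mm2=1.00, transistors_per_mm2=15e6)
new_cpt = cost_per_transistor(wafer_cost_per_mm2=1.15, transistors_per_mm2=37.5e6)

print(f"cost per transistor changes by {new_cpt / old_cpt:.2f}x")  # 0.46x
```

As long as the density gain outpaces the cost-per-area increase, the ratio keeps falling, which is the economic content of Moore's Law described here.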
And even 7 nanometers will come in below that long term trend line. So I've talked quite a bit so far about area scaling, density improvements, cost per transistor. But another important benefit of scaling and of Moore's Law is improved transistor performance and lower transistor power. So this graph the graph on the left shows Intel's trend for delivering improved transistor performance over different generations. The graph on the right shows our trend for reducing the dynamic capacitance of those transistors at the same time.
And of course, capacitance affects active power, so you want lower capacitance for lower active power. So the quick message here is that Moore's Law, at least at Intel, continues to deliver higher performance and lower power. But we're not standing still. We're developing performance enhancements on our technologies. So on 14 nanometer, we first developed 14 plus, which delivers improved performance over the original version, and we are now also developing 14 plus plus, with an even bigger performance gain over the original technology.
That performance enhancement is shown on the left-hand graph. Notice on the right-hand graph that the dynamic capacitance is unchanged. So the point there is that we are delivering increased performance without increasing dynamic capacitance and without increasing active power. Another point I want to make is that these improvements and enhancements in performance can be traded off for power. So if you have a transistor that is inherently faster, then you can choose to operate that circuit at a lower voltage; you give up some of that performance, but it's a gain in terms of a much lower active power.
And now on the coming 10 nanometer technology, we also have in place plans for a 10 plus and a 10 plus plus technology, delivering, in each case, improved performance while not sacrificing dynamic capacitance. In addition to developing those enhanced versions of the process technologies, we also develop a wide range of derivative technologies. The table on the left shows the range of device types, transistor types, that we offer. Some products want one set of devices and other products want a different set, but the options include high performance transistors, low leakage transistors, analog and RF transistors, high voltage transistors, high Q inductors, and precision resistors and capacitors. So those are the range of options that we develop for different types of products. We also offer different sets of interconnect stacks: a low cost interconnect stack, a high density interconnect stack and maybe a high performance interconnect stack as well.
So again, a range of process features, a range of process derivative technologies that we offer at each technology node. And today, we have a wide range of products, some of which have been in volume manufacturing for a while, on various derivative versions of our 14 nanometer technology, including larger server and FPGA die and, of course, smaller client and mobile die as well. So looking to our future, the use of heterogeneous integration options will become increasingly important. This is where you can not only combine 2 similar die in a package, but in some cases 2 very dissimilar die. Maybe you'll have a die developed on a technology optimized for high performance computing, another die designed on a different technology optimized for communication circuits or for memory circuits or for many other purposes.
So I think heterogeneous integration will be a bigger part of our future. So what are the process options for implementing heterogeneous integration? Well, you can just use a standard multi-chip package, as shown there on the top, but it has a couple of problems. One is that package substrates have a relatively poor density of connections between the die above and the package below.
And also the interconnects or the traces in the package tend to have a pretty loose pitch, so fairly poor density of the die to die connections through the package. Another option is the image in the middle, the use of a silicon interposer. So you add another large silicon die as an interposer between the actual die up on top and the package substrate below. This approach provides good density for the connections between the die above and the interposer below, also good density of die to die connections through that silicon interposer. But it adds a higher cost because you have a pretty large silicon die below and you have the added cost of through silicon vias.
So the third option, at the bottom, is Intel's embedded multi-die interconnect bridge technology, or EMIB as we call it for short. This technique inserts small silicon bridges inside the package. And then when the die are connected to the package, some of the bumps are the normal loose pitch bumps that connect to the package, but other bumps around the perimeter of the die are a tighter pitch, and they connect to the tight pitch interconnects in that silicon bridge. So the EMIB approach provides good density of the die to bridge connections, good density of die to die interconnects through that silicon bridge, and low cost, because these silicon bridges are small and don't include the cost of any added through silicon vias. So in summary, EMIB technology provides high density, high bandwidth die to die interconnects. This is a magnified view of the embedded bridge technology showing the package substrate below.
You can see the thin silicon bridge embedded in the package, and that has some relatively refined pitch interconnects that can provide the lateral connections from die to die. On top of them are 2 die, and again, they can be similar die or they can be very different die for purposes other than computing; they can be for memory, they can be for communications. And they have finer pitch bumps that connect to the embedded bridge below. And this is not just a process technology or a package technology innovation. It's also a circuit design innovation, because we have designed custom IO circuits on our die that are optimized for use with this embedded bridge technology: optimized for minimum die area, maximum performance, maximum bandwidth and also low IO power.
And just to show you some actual micrographs of the embedded bridge technology, here I show in the middle an expanded view. You can see the silicon die on top. On the far right is one of those loose pitch bumps, the normal package bumps connected to the package substrate below. But in the middle, you see an array of many of the finer pitch bumps that connect directly to that embedded silicon bridge. So there's a much higher density of bumps to the bridge with this embedded bridge technology.
And one more magnified SEM image. Here, I show the actual interconnects used on that embedded bridge. So 4 layers of copper interconnects with certainly a much tighter pitch than you can do in any package technology, but also a loose enough pitch that they provide really good performance as they transmit signals from one die to the other. So again, our vision going forward is more and more heterogeneous products, integrating different chips into a package, each optimized for its various purposes. And embedded bridge is a key technology that enables dense and cost effective in-package heterogeneous integration.
Okay. This is my last slide, so let me wrap up. And these are the same key messages I started with. Intel leads the industry in introducing innovations that enable scaling. Hyper scaling on Intel 14 nanometer and 10 nanometer provide better than normal scaling while continuing to reduce cost per transistor.
Intel's 14 nanometer technology has about a 3 year lead over other 10 nanometer technologies with similar logic transistor density. And our 10 nanometer technology provides industry leading transistor density using a quantitative density metric. And enhanced versions of 14 nanometer and 10 nanometer provide improved performance and extend the life of these technologies. And again, the key message, Moore's Law is alive and well at Intel. Thank you for your attention.
Please welcome Intel Fellow, Technology and Manufacturing Group Director, Interconnect Technology and Integration, Ruth Brain.
Good morning. Wonderful to be here with you this morning. I'm hoping I can talk to you a little bit more about our 14 nanometer technology. My name is Ruth Brain.
I'm going to really dive into 14 nanometer in more detail. I'm going to talk about performance. And then I'm going to dive into some of the innovations that I'm really excited about, and I hope I can convey some of that excitement to you about how we actually put together some of the self-aligned features that really make our interconnects capable of scaling. So let me dive right in with that. So a few key messages.
I do want to go through Intel's 14 nanometer technology in more detail. We do have about a 1.3x density advantage over what you see in others' available 20, 16 or 14 nanometer technologies. Intel's 14 nanometer technology is expected to be similar in density to others' 10 nanometer technology, but 3 years ahead. Intel's 14 nanometer transistors have at least a 20% performance leadership compared to others' available technology. And at the 14 nanometer node, Intel has really developed all these key enablers of hyperscaling, and that really allows us to deliver that cost per transistor benefit.
I will spend some time on that and I hope I can convey a little bit of the physics there because it really is exciting what we're able to do. So let me start with just the density plots and go there first. Let me go through some of the key features that enable our 14 nanometer scaling. If you look at what we do, we have to scale multiple types of different features to enable scaling. Fin pitch, interconnect pitch, the library cell height as well as the gate pitch.
And we do aggressive scaling on all of these different features, so let me walk through those. For the fin pitch on 14 nanometer, we did a 0.7x scaling down to a 42 nanometer fin pitch. At interconnects, we did a much more aggressive scaling of 0.65x down to 52 nanometers. And again, as I get further along in the presentation, I'm going to talk a little bit more about how we enable that aggressive pitch scaling. For the cell height, we were able to use our high performance fins to reduce that cell height by more than 0.5x.
And for the gate pitch, we went from 90 to 70 nanometers for a 0.78x shrink. If you wrap it all up, and I'm going to go through some of those details, you can see that we have an unprecedented 0.37x scaling for our logic area. So let me go through this graph in a little more detail to help you understand it. Now this is an Intel graph relative to our own process. As Mark said, it's one good way for us to measure our own success.
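The arithmetic behind that 0.37x logic area figure can be sketched in a couple of lines. Note that the cell height factor below is inferred from the other quoted numbers, not stated explicitly in the talk:

```python
# Rough sketch: logic cell area scales as (gate pitch scale) x (cell height scale).
# The 0.475 cell height factor is an inferred assumption consistent with the
# "more than 0.5x" reduction mentioned above; it is not a disclosed number.

gate_pitch_scale = 70 / 90         # 90 nm -> 70 nm gate pitch, ~0.78x
cell_height_scale = 0.475          # assumed, see note above

logic_area_scale = gate_pitch_scale * cell_height_scale
print(round(logic_area_scale, 2))  # ~0.37, matching the quoted scaling
```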
And if you look at 45, 32, 22 and 14, you can see, if anything, we've been accelerating our rate of logic area scaling. So when we went from 22 to 14 nanometers, we took advantage of these new inventions in hyperscaling and were able to achieve 0.37x scaling. That really helped us keep the cost per transistor on a very aggressive curve, as I pointed out. Now I'm going to switch to the other metric, because I do also want to put this in absolute terms that can really be measured and quantified, versus the relative terms on the last page. If you look at logic transistor density using this metric that Mark walked through, 14 nanometer provides a much better than normal logic transistor density improvement, from an average of about 2.2x on prior technologies all the way up to 2.5x.
That's over 37,000,000 transistors per square millimeter, and there's a lot you can do with all those extra transistors. If you look at what others' capabilities are and what they call their 20, 16, and 14 nanometer nodes, you can see we're approximately 1.3x better than those nodes as well. And I'm going to go through some more of the details on that so people can really understand the various density metrics that go into this. So let me start with Intel's 14 nanometer technology and others' 20 nanometer technology that was introduced at a similar time. If you look at all the typical simple metrics, gate pitch, logic cell height, fin pitch and metal pitch that I already mentioned, you can see even on those simple metrics, Intel is ahead on every metric in terms of density, in some cases by quite a bit.
But the bottom line, what I really cut down to, is this transistor density metric, where we're more than 30% more dense than anything that was available on what others call 20 nanometer technology. If you go ahead a year, to things that were introduced a year after our 14 nanometer technology, as Mark pointed out, the rate of innovation was slower on some of those, and you can see incremental changes, 1.04x or 1.09x. They made incremental improvements, but you can still see really world class density on Intel's 14 nanometer. It's really at least 25% better when you actually go through those metrics.
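The quantitative density metric referred to here can be sketched as a weighted sum over two standard cells. The 0.6/0.4 weighting follows Intel's published description of the metric; the default scan flip-flop transistor count and any cell areas you plug in are illustrative assumptions, not disclosed values:

```python
# Sketch of the weighted logic transistor density metric: a mix of a small
# 2-input NAND cell and a larger scan flip-flop cell, in transistors per mm^2.
# The 0.6/0.4 weights follow Intel's description of the metric; the default
# scan flip-flop transistor count here is an illustrative assumption.

def logic_transistor_density(nand2_area_um2, sff_area_um2,
                             nand2_transistors=4, sff_transistors=36,
                             w_nand2=0.6, w_sff=0.4):
    """Weighted transistors per mm^2 (1 mm^2 = 1e6 um^2)."""
    nand2_density = nand2_transistors / nand2_area_um2 * 1e6
    sff_density = sff_transistors / sff_area_um2 * 1e6
    return w_nand2 * nand2_density + w_sff * sff_density

# Toy cell areas in um^2, purely for illustration:
print(f"{logic_transistor_density(1.0, 10.0):,.0f} transistors per mm^2")
```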
And even if you go through the basic metrics of just gate pitch and logic cell height, you see the same answer. But what we really like to get to is that very quantitative transistor density metric. And even in those cases, you see that there's a clear benefit. I also want to just reiterate, if you look at what's available with the way things are being named, our 14 nanometer is really expected to be similar in density to others' 10 nanometer. But if you look at the timeline, we've had 14 nanometer in production since 2014, which you can see from this graph.
So if you look at our 14 nanometer, and again, as Mark pointed out, there are not parts available today for us to really quantify. But based on available information, this is what we can see about the road map looking ahead. So we expect they'll have similar density, but really we do have a 3 year lead. And I think the number I heard was we have more than 450,000,000 units out of our 14 nanometer production today. So we really have a lot of history doing that.
Let me jump ahead to performance. I want to spend a little more time on 14 nanometer performance, because we really also have clear performance leadership in this space, and it's important for our products to be able to offer that performance improvement. So this is the same graph Mark showed, and I want to walk into a little more detail on it. What we have done at Intel is we developed an initial technology, but we're still learning things. So we're always enhancing that technology, and we're formalizing that a little bit more by naming some of those enhancements, to enable product teams and others to take advantage of them so that we can deliver world class performance. So if you look at our 14 nanometer introduced in 2014, you can also see a 14 plus technology introduced in 2015 and a 14 plus plus in 2016.
They offer improved performance without increasing capacitance or active power. And as I said, depending on the product type we're looking at, we can trade those off. But this is sort of the basic curve with matched capacitance and what you can do for performance. So let me go through a little more detail on that. One of the key advantages we have here is our knowledge of FinFETs.
FinFET transistors were first introduced at Intel at our 22 nanometer node. And I show here a comparison on the left hand side of a schematic of what those 22 nanometer FinFETs look like versus what they look like on 14 nanometer, which is our 2nd generation. You can see as you compare the schematics in the two cases, we brought the fin pitches closer together, which I mentioned previously, so that we could get the improved logic density. And at the same time, we made them taller. That's what really allows us to continue to offer performance.
Of course, making things taller and thinner is more challenging. It's like we're making a taller skyscraper every time. But that's one of the key inventions that we've really worked out to be able to optimize for performance and density. And you can see on the right hand side, these are actual images of our silicon wafers, which I'm more excited about than just the schematic, showing how much nicer those fins look in production. As you go from 22 nanometers to 14 nanometers, because we have that learning and that underlying knowledge of how to make this work, we really were able to optimize and produce better shapes and taller fins for that technology.
So let me get into the actual performance data. I'm showing a couple of benchmarks of Intel's 14 nanometer performance. On the left hand side is NMOS and on the right hand side is PMOS. And I've marked in the lower right hand corner where better would be: lower leakage current at a higher drive current.
Now Intel's 14 nanometer technology has a tight poly pitch. You see the 70 PP; as I mentioned, we run a 70 nanometer poly pitch. And you can see where we ended up with performance. It was leading performance in 2015.
If you look at what we achieved in 2016, we continued to improve. We don't just stand still on these things. As you look at what we're able to achieve, we continue to create performance enhancements, and a year later we improved both NMOS and PMOS drive current by more than 12%. And this is still at that same tight poly pitch. If you go into 2017, you can see we've added yet another device here.
We've added an 84 nanometer poly pitch device for our highest performance segment. This really enables a 23% to 24% drive current improvement over our original 14 nanometer process. So what we've offered here is the opportunity to have both density and performance. And here is a comparison with what others are able to produce, and of course, I only look at the best available from others. That's where things stand today.
So you can see, even against best available, we have a 20% to 23% performance improvement with the 14 plus plus technology. If you wrap all of this up into a power versus performance set of curves, looking specifically at Intel, I've got our 14 nanometer shown in 2015, our 14 plus in 2016 and 14 plus plus in 2017. And you can see that there's really a huge transistor improvement available, so that we can enhance this on an annual cadence. At a given power, you can choose to have a 26% performance improvement, or you can choose to cut the power in half if you're willing to take lower performance. And we can trade off along those curves to make the best available choice for a given product.
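The shape of those power versus performance curves can be illustrated with a toy alpha-power transistor model. None of the constants below are Intel data; they just show how dropping voltage trades a modest frequency loss for a large power saving:

```python
# Toy V/f trade-off: frequency ~ (V - Vt)^alpha / V, dynamic power ~ V^2 * f.
# VT and ALPHA are illustrative constants, not Intel process parameters.

VT, ALPHA = 0.3, 1.3

def freq(v):
    return (v - VT) ** ALPHA / v

def power(v):
    return v ** 2 * freq(v)

v_nom, v_low = 1.0, 0.8
print(f"perf ratio at lower V:  {freq(v_low) / freq(v_nom):.2f}")   # ~0.81
print(f"power ratio at lower V: {power(v_low) / power(v_nom):.2f}") # ~0.52
```

The point of the sketch is the asymmetry: a roughly 20% frequency sacrifice buys nearly a 2x power reduction, which is the kind of trade the speaker describes making along the curves.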
So let me also talk a little bit about the other capabilities that we offer, because we've talked a lot about the basic process and the FinFET transistors as well as the interconnects. I also want to point out that our 14 nanometer technology really offers a full range of features for product design needs. The schematic to the left shows a cartoon cross section of an interconnect stack, with the silicon substrate at the bottom, transistors just above it, interconnects above that, and a MIMCAP, a metal insulator metal capacitor, that's available. And at the top, we have thick metals that we can use for inductors and other such features. We've added the capability to have high resistivity substrates that are needed for certain products.
We've also added the ability to have triple well or deep n well type devices. We have low leakage and high voltage transistors available as well, which again are needed for certain applications. All of this is available in the silicon substrate. We've added RF transistors, with both templates and modeling, for people that need RF capability. We've added precision resistors.
Again, another key need if you need to hit a precise resistance. And finally, when you get up to the top of the stack, you can see that we've added a high density decoupling capacitor. This is the metal insulator metal layer that's at the very top of the stack. It's a little bit hard to see in that cross section, it's that thin layer with a via cutting through it, but that's a very high density MIM stack.
And finally, at the very top, we can add high Q inductors with our TM1 capabilities. So really, if you look at the stack from the bottom to the top, we offer just a full range of capabilities for whatever the product needs are. This is part of our Intel IDM advantage: we really work with our product teams to understand the basic needs of the technology so that we can create that technology at Intel. I don't want to skip over my interconnect stack, which is one of my favorite portions of the stack. We offer a range there as well.
So we really try to have a mix and match set of stacks. Circled in light yellow is our high performance client stack. A high performance client stack is really characterized by having tight pitches at the bottom and slowly transitioning to looser pitches at the top, so that you can promote global routes that are RC limited. You can also switch to the one I show in green, which is a high density SoC stack, where if you really need to optimize density, we can take all these basic building blocks and put them together into the stacks we want to create. And highlighted in blue is a feature that we can add or take out of the interconnect stack: for stacks that are particularly performance sensitive, we can add air gaps, which reduce the line to line capacitance.
So again, even in the interconnect stack, we have a range of capabilities that we can mix and match for the right product portfolio. So lastly, let me go through some of the key enablers for this. This is really where I want to get into hyperscaling and some of the unique advantages of the things that we develop and invent at Intel. Innovation, and that ability to understand the physics so exquisitely that we can create things, is what really helps us. That to me is one of the key drivers for what I love about working here.
If you look at what we would call traditional scaling enablers, Mark already covered some of these and I know they've been out in the market quite some time now, but let me at least mention them: the high-k metal gate, the self aligned via and FinFET transistors. I think these are now familiar names to everybody in the industry, because we put them together and they really have been able to drive our traditional scaling. We have several generations of learning at this point on those enablers. What we've added at 14 nanometers, and continue to add as we go to new process technologies, is new features and new inventions. The one I'm going to dive into a little bit is the hyperscaling feature at 14 nanometers. So let me talk a little bit about interconnects. If you look at interconnects at 14 nanometers, we did an aggressive pitch scaling.
We went from an 80 nanometer pitch on 22 nanometer to a 52 nanometer pitch at 14 nanometer. We needed to invent a new technique to do that, and it's called self aligned double patterning. It was first introduced into logic manufacturing by Intel, and it's really a key advantage for both density and yield. Let me explain a little bit more about that. If you look at what's available on the market, as many of you know, you can just buy 193 nanometer wavelength immersion single pass tools that typically allow you to print features down to around 80 nanometers.
At 80 nanometers, you're using all sorts of enhancement techniques to get there, but that's what the capability looks like. To go beyond that, what you need to do is invent new patterning techniques. What we're going to talk about in the next few slides is a self aligned double patterning technique, and I'll go through that in detail. And Kaizad will also speak about a self aligned quad patterning technique. So if you look at that range of what's capable, standard immersion single pass can take you down to around 80 nanometers, self aligned double patterning can take me all the way down to 40 nanometers, and self aligned quad patterning can take me down to 20.
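The pitch ranges just described follow a simple halving rule: each self-aligned spacer pass cuts the printable pitch in two. A minimal sketch:

```python
# Minimum achievable pitch per patterning technique, per the talk:
# single-pass 193 nm immersion bottoms out around 80 nm, and each
# self-aligned spacer pass halves the pitch (SADP ~40 nm, SAQP ~20 nm).

LITHO_MIN_PITCH_NM = 80

def min_pitch(spacer_passes):
    """0 = single-pass litho, 1 = SADP, 2 = SAQP."""
    return LITHO_MIN_PITCH_NM / (2 ** spacer_passes)

for passes, name in [(0, "single pass"), (1, "SADP"), (2, "SAQP")]:
    print(f"{name}: down to ~{min_pitch(passes):.0f} nm pitch")
```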
And when you look at this, you don't see any gaps. Basically, if I've invented those techniques, I can choose where I want to be. And that understanding really gets us to where we can make choices about exactly how we want to use our technology. In comparison, what I see out in the marketplace is, again, the 193 immersion single pass that's commercially available; everybody has access to that. And again, the capability is down to around 80 nanometers.
What others have done is use a very simple technique where you do litho etch, litho etch. And if you want to go below that, you need to do it 3 times: litho etch, litho etch, litho etch. So you can see you can just keep stringing those together. But if you look at the comparison, what you can really see is you don't get the same pitch capability range as you do with these real inventions.
They might be sort of very linear thinking about how you go from a previous technology to a current one, but it doesn't really give you the breadth, and that's what that innovation gets us. And let me point out one of the key things: these capabilities that give us a bigger range come at similar cost. So if you look at self aligned double patterning, I have capability all the way down to a 40 nanometer pitch, and yet I've paid the same cost as litho etch, litho etch, which would only get me down to the 60-something nanometer pitch range. That's really one of the key benefits. So let me dive into a little bit more detail on that self aligned double patterning versus the litho etch, litho etch to try to help you understand it.
So here is just a picture of double patterning. To me, it's kind of like a baby picture, in that you ought to see the beautiful baby in the double patterning, and I hope that's what you all see. There's the desired pattern on the left hand side, which is what we draw in the database. And you can see how exactly the double patterning matches that image. You can see how square the line ends are.
You can see how beautiful and smooth the lines are along both dimensions. If you look at the litho etch techniques, you can see that the small features, the ones that are short and narrow, are not as long as they need to be and have rounded ends. They really don't look like what you'd want in a manufacturing process where you're controlling things down to the last nanometer. You can see in these images that it's really just not what you want. So I want to just say, you can see the clear benefits just from looking at images of these.
So let me get through how you do that, because there are other issues with a litho etch technique versus double patterning. Let me start with self aligned double patterning, and let me dive into my excitement about how you actually create it, because this is the good stuff. For self aligned double patterning, you first create a litho pattern on the wafer using resist. What we do after that is a thin film deposition step that we call a spacer, basically wrapping it over the resist, and that's what you can see in blue. Now the beautiful thing about thin film depositions is we can really control them down to the last nanometer.
We really have monolayer film control over a lot of thin film depositions. And so we can deposit that over the litho stack with precision that's just exquisite. What we do next is etch the spacer, and you can see in the next images I've removed the spacer from the top of the resist as well as the bottom. So I've spacerized, that's what we call it in the industry, spacerizing the resist. I then remove that photoresist, and you can see I've cleaned it out, and now I can just etch the trenches, and I basically have nearly perfect control between the 2 sets of patterns I just created.
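The steps just described, pattern the mandrel, deposit and etch the spacer, strip the resist, can be sketched as a one-dimensional toy model showing why the final features land at exactly half the drawn pitch. The specific line and spacer widths are illustrative choices:

```python
# 1-D sketch of self-aligned double patterning. Mandrel (resist) lines are
# drawn at pitch P with width P/4; a spacer of width P/4 forms on each
# sidewall. After the mandrel is removed, the spacers ARE the pattern,
# and their centers land at pitch P/2 by construction.

def sadp_spacer_centers(drawn_pitch, n_mandrels):
    p = drawn_pitch
    centers = []
    for i in range(n_mandrels):
        left_edge = i * p              # mandrel spans [i*p, i*p + p/4]
        right_edge = i * p + p / 4
        centers.append(left_edge - p / 8)   # spacer on the left sidewall
        centers.append(right_edge + p / 8)  # spacer on the right sidewall
    return centers

centers = sadp_spacer_centers(80, 3)
print([b - a for a, b in zip(centers, centers[1:])])  # every gap is 40.0
```

Because both spacers hang off the same mandrel, their spacing is set by a film thickness rather than by a second mask's placement, which is the self-alignment the speaker is describing.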
If you compare that with litho etch, litho etch, that's what I want to show here side by side. When you do litho etch, litho etch, that means you're doing it twice. So you start with the 1st pass of litho and you etch those into the trench. You come back and do another litho pass. Now if you have good placement and you're lucky, you basically get perfect alignment.
That's my good placement view right here. And then you'd etch that second set of trenches and you'd end up with the same pattern. So both of these techniques, let's start by saying, will produce the same pattern. You're just not getting to them the same way. But that's an if, because that's if you had good litho placement with that second pattern.
Or here's the other thing that can happen: when you place that second pattern down, there's nothing really controlling it except how good you are at placing it. And you're trying to place something that is much, much smaller than a human hair with exact precision. So all of a sudden, I look at a placement of 10 nanometers, and that's pretty darn good already. But you can see it's almost 10x looser placement than what I was able to achieve by going through that invention and putting down the spacer. Now when you go through the etch, you can see that it's offset.
And this picture really isn't that far out of scale for what you can end up with. You can now see that my two lines are way too close to the one that was misregistered between them. And not only do I have a yield risk from this, I have a performance risk. Because if you look at the line to line capacitance, you can see the capacitance is different between these lines. For one set of lines, it's much less than the modeling would have predicted, and for the other it's much more.
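The capacitance penalty from a misregistered second pattern can be estimated with a crude parallel-plate model, where coupling per unit length goes as one over the gap. The gap and overlay numbers are illustrative, not process data:

```python
# Toy estimate of line-to-line coupling imbalance caused by overlay error
# in litho etch, litho etch. C per unit length ~ 1/gap (parallel-plate
# approximation). The gap and overlay numbers are illustrative only.

def capacitance_ratio(nominal_gap_nm, overlay_error_nm):
    """Coupling on the tightened side relative to the nominal coupling."""
    return (1 / (nominal_gap_nm - overlay_error_nm)) / (1 / nominal_gap_nm)

# 26 nm nominal gap (52 nm pitch at equal lines and spaces), 10 nm miss:
print(f"{capacitance_ratio(26, 10):.2f}x the expected coupling on one side")
```

Even a "pretty darn good" 10 nanometer miss inflates the coupling on the tight side by over 60% in this simple model, which is the random fab variation designers would otherwise have to margin for.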
And so now, all of a sudden, product designs have to take into account that there's some random fab variation they can't control. So this is really one of the key risks with litho etch, litho etch. It has both yield and performance risks associated with misalignment between those patterns. To put kind of a fun point on it, this is the way I look at it very simplistically. When you do self aligned, you're aligning everything to a single mask.
You're the polar bear standing on a solid sheet of ice. If you look at litho etch, litho etch, for anybody that's a fab person, you're out there with one foot on each piece of ice, trying to hold the ice together. I definitely want to be the polar bear sitting on the one piece of ice. The other one is the one that falls in the drink. So there really are significant benefits.
You can see that in both the clarity of the images I showed and the ability to control the step as we go through the fab and make the manufacturing process work. Let me go into a little bit more about the 14 nanometer technology now, in terms of what we really deliver. Mark talked about this image and a little bit about the details of how we've changed the slope of the area scaling. I want to point out that at 14 nanometer we didn't just optimize this one metric. If you look at our logic area, that's what we talked about in the logic area scaling, but a microprocessor die has SRAM on it as well as IO.
And I show a scale factor for how those scaled as well, along with some weight factors, which are typical of a client die. And you can see that for SRAM and IO as well, which are traditionally very hard to scale, we did very well here, resulting in a less than 0.5x area scale for everything. So it's not that we're just picking one metric. With these new techniques, we're really able to do full feature scaling. And I want to give an analogy for that.
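The weighted scaling idea can be sketched as a dot product of area weights and per-region scale factors. Only the 0.37x logic number comes from the talk; the weights and the SRAM/IO factors below are hypothetical placeholders for the values shown on the slide:

```python
# Sketch: whole-die area scale = sum of (region weight) x (region scale).
# The weights and the SRAM/IO scale factors are assumed for illustration;
# only the 0.37x logic scaling is from the presentation.

def die_area_scale(weights, scales):
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[region] * scales[region] for region in weights)

weights = {"logic": 0.6, "sram": 0.3, "io": 0.1}   # assumed client-die mix
scales = {"logic": 0.37, "sram": 0.55, "io": 0.8}  # sram/io assumed

print(round(die_area_scale(weights, scales), 3))   # lands below 0.5x
```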
People get very excited about wafer size transitions. For the 14 nanometer hyperscaling that we developed, because we were able to choose what pitch we wanted at a given cost, we were able to give 1.4x more units per dollar than traditional scaling. Now that's roughly equivalent to either a 200 millimeter to 300 millimeter wafer size transition or a 300 millimeter to 450 millimeter wafer size transition. It's the equivalent. We got the benefit of a full node of scaling plus this extra.
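The wafer-size analogy works out roughly like this; the 1.6x wafer processing cost factor is an assumption chosen only to show how a 2.25x area gain can net out near the quoted 1.4x units per dollar:

```python
# Wafer-size transition arithmetic: usable dies scale roughly with wafer
# area, i.e. with the diameter squared. The 1.6x processing-cost factor
# is assumed for illustration, not a historical figure from the talk.

def area_ratio(d_from_mm, d_to_mm):
    return (d_to_mm / d_from_mm) ** 2

for a, b in [(200, 300), (300, 450)]:
    print(f"{a} -> {b} mm: {area_ratio(a, b):.2f}x wafer area")

assumed_cost_factor = 1.6
print(f"units per dollar: ~{area_ratio(200, 300) / assumed_cost_factor:.2f}x")
```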
So it really gave us a huge economic benefit, as well as allowing us to pick and choose exactly how we wanted to land the process. So in summary, I hope I've shed some light on our 14 nanometer technology. It's great stuff, if you ask me. It's 1.3x denser than others' available technology. It's expected to be similar in density to others' 10 nanometer, but about 3 years ahead.
And I did want to hit the highlight of that performance advantage: our FinFETs and our ability to control them into a 2nd generation, our ability to make them taller, our ability to choose interconnect pitches and control the interconnect stack. All of those really play into these performance advantages. And really the thing that gets me most excited is that at the 14 nanometer node, we developed all these key innovation enablers for the hyperscaling features we wanted, and delivered those significant cost per transistor benefits. Thank you.
Thank you, Ruth. Before we disperse for the break, just a couple of housekeeping items. We've got beverages outside, stretch your legs, we caffeinate, use the time to do some informal mingling and Q and A with some of our speakers and manufacturing experts. And with that out of the way, enjoy the break. We'll see you back here in 15 minutes.
Thank you.
Please welcome Corporate Vice President, Technology and Manufacturing Group, Co-Director of Logic Technology Development, Kaizad Mistry.
Good morning, everyone. Welcome back from the break. I'm excited to be here today to reveal to you for the first time some of the leadership features of Intel's 10 nanometer logic technology. This is our next step in logic technology evolution after the 14 nanometer technology that Ruth described for you a few moments ago. If there's one key message I want you to walk away with after you see my presentation, it is that all 10 nanometer technologies are not created equal.
Shakespeare said, what's in a name? A rose by any other name smells the same. Well, in this case, I think the great bard was mistaken. And I hope that once you've heard my talk, you will recognize that our rose smells sweeter. So here are the key messages that I'll be going through in this presentation.
First, Intel's 10 nanometer technology has the tightest transistor and metal pitches in the industry. In addition, I'll be describing some unique hyperscaling features, new innovations in our 10 nanometer technology, that provide even greater density than the pitch scaling alone. The result of those factors is that Intel's 10 nanometer technology will be a full generation ahead of what others call 10 nanometer. Next, I'll talk about enhanced versions of our 10 nanometer technology that provide improved power and performance within the 10 nanometer process technology family. And finally, I'll return to the theme of hyperscaling that Mark and Stacy introduced this morning and explain again how hyperscaling allows Intel to continue the economic benefits of Moore's Law while swallowing the cost of the multi pass patterning schemes that are needed to advance Moore's Law.
So first, I'm going to talk about some of the key features of our 10 nanometer process technology. Of course, Moore's Law is built on scaling, and scaling means scaling pitches. It means packing wires closer together, packing gates closer together, and in the modern era of FinFETs, it means packing fins closer together. So in our 10 nanometer technology, we feature aggressive pitch scaling. The fin pitch is scaled from 42 nanometers to 34 nanometers.
The minimum metal pitch is scaled from the 52 nanometers that Ruth described to 36 nanometers. This allows the cell height to scale by better than the traditional 0.7x and our gate pitch is scaled to 54 nanometers. So we feature aggressive pitch scaling and for the first time in the industry, we use self aligned quad patterning and I'll speak more about that in subsequent slides. But we didn't stop with pitch scaling. In our 10 nanometer technology, we have 2 additional innovations that add to the transistor density improvement.
And these are single dummy gate and contact over active gate. And I will spend a few slides in subsequent material going through those 2 unique new innovations. The combination of the aggressive pitch scaling and these new features delivers an unprecedented 2.7x transistor density improvement, significantly greater than the traditional Moore's Law cadence of 2x logic transistor density. And if you look at the result of all these innovations, Intel's 10 nanometer technology provides a transistor density, measured using the metric that Mark described earlier today, of over 100,000,000 transistors per square millimeter, for the first time in our industry's history. So Intel introduced FinFETs for the first time in 2011 on our 22 nanometer technology.
We were the first to do so. And with our 10 nanometer technology, we are now on our 3rd generation of FinFET technology. With each successive generation, we've packed the fins closer together and made them taller. We packed them closer together to improve transistor density and we make them taller to improve transistor performance. So on our 10 nanometer technology, our fins are about 25% taller and 25% more closely spaced than the 14 nanometer technology generation.
In fact, if you go and measure the fins, the fin pitch is 34 nanometers and the fin height is about 53 nanometers in our 10 nanometer technology. And you can see the exquisite fidelity of the patterning of these fins that allows us to extract the full performance and density advantage of FinFETs in our 3rd implementation of this technology. The other key pitch for transistors is the gate pitch and Intel's 10 nanometer technology features a 54 nanometer gate pitch scaled to be tighter than our 14 nanometer technology. If you compare that to the competition, Intel's 10 nanometer gate pitch is the tightest in the industry. We have traditionally had the tightest gate pitch in the industry and we will continue to do so with our 10 nanometer process technology.
So that's the transistors, but you need wires to hook them up. And this shows the minimum interconnect pitch trend for our recent technologies. Intel's 10 nanometer technology features a minimum interconnect pitch of 36 nanometers. We accomplished this with the world's first implementation of self aligned quad patterning. Now Ruth went through the difference between self aligned double patterning that we used on our 14 nanometer node compared to some of the other techniques such as litho etch, litho etch and she illuminated the advantages of the self aligned approach.
We have taken that one step further in our 10 nanometer technology by introducing self aligned quad patterning, meaning the original lithographic patterning is divided into 4 to have a pitch that is 4 times tighter than the original lithographic pattern. And using the same self aligned techniques, we achieve exquisite control in terms of the placement of those lines and the fidelity with which the lines are patterned. If you compare that to the competition, we will have the tightest minimum metal pitch in the industry. Now let me talk about some of the other innovations that give us the really phenomenal transistor density improvement that we have on our 10 nanometer technology. The first of these innovations is called contact over active gate.
And I'm going to spend a few minutes on this slide to explain to you what this innovation is and why it's important. The picture on the left shows a traditional transistor with a gate contact. Now this is not quite a traditional transistor, it's a FinFET transistor, but for us that's become normal. So the gray lines are fins running from left to right and the green line is the polysilicon gate running from top to bottom. Wherever the gate crosses a fin, that's a transistor.
Now, you have to make contacts to transistors; you have to contact the gate of the transistor to be able to turn it on and off. And for the last 40 or 50 years of the semiconductor industry, the gate contact has been made away from the active transistor. You can see it's below the active transistor, where the fins and the gates cross. In our 10 nanometer technology, we allow the contact to be placed directly above the active transistor. This is a unique innovation that we are introducing in our 10 nanometer technology.
There are a number of technology attributes and innovations that we needed to introduce to allow the contact to be placed directly above the active transistor. But you can see the advantage of this immediately. If you're trying to pack transistors closer together, you don't need the space in between the transistors where the contact used to be. It's now placed directly above the transistor, and you can pack those transistors closer together. So we estimate that this contact over active gate technology, this revolutionary new feature that is a first in the semiconductor industry, allows for another 10% transistor density improvement, or another 10% area scaling improvement, over and above the pitch scaling that I described on the previous slides.
But we didn't stop there. We have another key innovation in our 10 nanometer technology called single dummy gate and I'll spend a few minutes again on this slide to explain to you what it is, why it's important and what it buys us. On the left is a 14 nanometer cell, the simple NAND cell that Mark talked about earlier. And one thing you will notice is that at the edges of the cell, you have dummy gates, 1 on the left, 1 on the right. And in the middle, you see in green the active gates that form the active transistors that are switching.
On our 10 nanometer technology on the right, you see that we have what we call a single dummy gate, half on the left, half on the right. And you can immediately see that this offers advantages for transistor density compared to the double dummy gate on the left. Now some will ask what's new about this? Well, it is true that others have offered single dummy gate in prior technologies, where the pitches were looser and prior to the advent of FinFETs. Some of the difficulties with single dummy gate historically have been how do you support single dummy gate on an advanced technology node that has FinFETs?
How do you match the transistor performance for transistors that are in the middle of the cell compared to those that are at the edge of the cell and close to those dummy gates? Those have not historically always been matched, and that presents a modeling or circuit design challenge for the designer. And then finally, how do you do a single dummy gate on an advanced node with a gate pitch as tight as 54 nanometers? So we have introduced unique innovations in our 10 nanometer technology to overcome those difficulties, and we can support a single dummy gate where the performance of the center transistors and the edge transistors is closely matched. We're able to support single dummy gate on a FinFET technology with extremely tight gate pitches.
So this is another key innovation in our 10 nanometer process technology that affords area scaling above and beyond the traditional pitch scaling. And if you look at the metric that Mark proposed earlier, of the 2 input NAND cell and the scan flip flop cell, which represents the broad array of logic cells that are used in a modern chip, you can see that the single dummy gate provides an additional 20% or more effective area scaling benefit compared to the double dummy gate that was used previously. It really is another key innovation that allows really aggressive area scaling on our 10 nanometer technology above and beyond the traditional pitch scaling. So next, I'm going to talk about the hyperscaling benefits of Intel's 10 nanometer technology, taking all of the features that I've just described and telling you what it means as a whole.
First, the fin pitch and metal pitch scaling allow the cell height to scale by better than 0.7x from our 14 nanometer technology. And then we scaled the gate pitch as well. So the traditional pitch scaling affords an area scaling of about 0.5x, which is the Moore's Law pace. So if you combine all the pitch scaling elements that I talked about, including the world's first use of self aligned quad patterning, we get an area scaling roughly consistent with the historical Moore's Law pace of about 0.5x. But we didn't stop there.
We introduced these 2 additional features, contact over active gate and single dummy gate, and the combined benefit of all of these innovations results in an area scaling of about 0.37x. This is the hyperscaling benefit of Intel's 10 nanometer technology, significantly faster than the traditional Moore's Law pace of roughly 0.5x area scaling. And if you look at that in historical context one more time: traditional Moore's Law scaling is around 0.49x, and on our 14 nanometer technology, and then again on our 10 nanometer technology, we used these unique hyperscaling innovations to provide better than normal 0.37x logic area scaling. And if you translate that to the metric that Mark described earlier today, which is the transistor density, which continues to grow, we provide a 2.7x transistor density improvement over our already leading 14 nanometer technology, really an unprecedented transistor density improvement in our 10 nanometer technology.
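The arithmetic behind these figures can be checked with a quick sketch. This is an illustrative reconstruction assuming the stated numbers (roughly 0.5x from pitch scaling, about 10% extra area saving from contact over active gate, about 20% extra from single dummy gate); the exact way the savings compose is an assumption, not Intel's disclosed accounting:

```python
# Illustrative reconstruction of the 10 nm hyperscaling claim from the
# figures stated in the talk; how the savings compose is an assumption.
pitch_scaling = 0.5   # traditional Moore's Law area scaling from pitch alone
coag_saving = 0.10    # contact over active gate: ~10% additional area saving
sdg_saving = 0.20     # single dummy gate: ~20% additional area saving

area_scaling = pitch_scaling * (1 - coag_saving) * (1 - sdg_saving)
density_gain = 1 / area_scaling

print(round(area_scaling, 2))  # 0.36, close to the quoted 0.37x
print(round(density_gain, 1))  # 2.8, close to the quoted 2.7x
```

The small gap to the quoted 0.37x and 2.7x presumably reflects how the savings actually interact at the cell level rather than multiplying independently.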
And again, although it took us longer than 2 years to develop this 10 nanometer technology, we took a much bigger step. So if you look at that in historical perspective, you can see that our 10 nanometer technology, as was the case with our 14 nanometer technology, continues to keep Intel on the Moore's Law pace of roughly doubling transistor density every 2 years. So we continue to maintain the rate of Moore's Law density scaling. And if you compare us against the competition, you can see that with these unique hyperscaling features, the world's first self aligned quad patterning, contact over active gate, single dummy poly on an advanced FinFET node, the combination of all these features says that our 10 nanometer technology will be a full generation ahead of what others call 10 nanometer. Some roses, in fact, do smell sweeter.
Of course, a modern microprocessor contains more than just logic. It is dominated by logic, which is why the correct metric is to look at the logic transistor density, but it contains more than just logic. You have analog circuits, IO circuits, SRAM memory circuits. This graph shows the SRAM memory offerings on our 10 nanometer process technology. For the last many generations, we've offered a range of SRAM memory cells to service either high density, low power or low voltage or high performance circuit applications and our 10 nanometer technology is no different.
We offer high density, low voltage and high performance versions, and each of these is scaled roughly 0.6x in area from our 14 nanometer technology. So if you add it all up, what does full chip scaling look like on our 10 nanometer technology? What I'm showing here is a representation of a typical Intel product. If I look at our client products and our server products, and I take all of the different circuit elements and their weighted usage, IO, analog, memory, logic, and I go through an estimate of weighted average improvement for a typical Intel product, you see that our full chip die area scaling is about 0.43x, significantly better than the traditional or normal Moore's Law trend of around 0.6x. So the hyperscaling benefits of Intel's 10 nanometer technology deliver better than normal microprocessor die area scaling.
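As a sketch of how such a weighted average works: the logic (0.37x) and SRAM (roughly 0.6x) scaling factors below come from the talk, while the area fractions and the analog/IO factor are invented for illustration, chosen so the result lands near the quoted 0.43x:

```python
# Hypothetical die composition; only the logic and SRAM scaling factors are
# from the talk. The area fractions and the analog/IO factor are assumed.
blocks = {
    # block: (fraction of 14 nm die area, area scaling factor to 10 nm)
    "logic":     (0.75, 0.37),
    "sram":      (0.15, 0.60),
    "analog_io": (0.10, 0.65),  # analog and IO circuits scale much less
}

full_chip_scaling = sum(frac * factor for frac, factor in blocks.values())
print(round(full_chip_scaling, 2))  # 0.43 with these assumed weights
```

The point of the exercise is that full chip scaling is always weaker than pure logic scaling, because SRAM, analog and IO blocks dilute the logic-only 0.37x figure.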
Now let me switch gears and talk about enhanced versions of our 10 nanometer technology. So first, a little introduction. What I've said to you so far is that we are taking bigger steps than the traditional Moore's Law pace, but we're doing it at a somewhat longer cadence than the traditional 2 year cadence. The net result in terms of density scaling is that we are on the traditional Moore's Law pace. However, the market still expects improved products on a yearly cadence.
So in order to support that, we need to provide improved versions of our 14 and 10 nanometer technologies. So we provide improved performance, improved power within a given technology node. So this is what's shown here. Mark showed this slide. We're on 14.
We provided 14 plus and 14 plus plus, and the same thing is true on 10: we plan to provide 10, 10 plus and 10 plus plus. So if you look at our 10 nanometer technology first, compared to our 14 nanometer technology, we continue to provide improved power performance relative to the prior generation. You can see that comparing our 14 nanometer technology to our 10 nanometer technology, you can have either a 25% improvement in performance at the same power or a 0.55x reduction in power, almost half the power, at the same performance. But we don't stop there. We provide improved performance within the 10 nanometer family.
And here I'm comparing the initial 10 nanometer offering to the 10 plus plus process that will be supported for future 10 nanometer products. And you can see that we buy a further 15% performance improvement at the same power, or a 30% reduction in power at the same performance, compared to our original 10 nanometer offering. So we provide enhancements within the 10 nanometer generation to allow our products to showcase improved performance on an annual cadence. This slide shows some of the other features that we'll be adding to our 10 nanometer technology to make it a complete technology offering to span the rich range of products that Intel has to offer, both for our internal products as well as for our foundry customers. We plan to add all of the many features that are needed to build the wide range of products, from servers to clients to FPGAs to other types of products, and span the full range on our 10 nanometer technology.
Some of these features include high Q inductors, high resistance substrates, high voltage FinFETs to support the high voltage IOs that are needed on some of our products, as well as many interconnect options to trade off performance, density and cost, as well as precision resistors and other features. So all of these technology features are available for use on a product by product basis as the 10 nanometer technology moves forward. So I've described to you some of the unique innovations in our 10 nanometer process technology. In particular, I've disclosed for the first time today some of the key innovations in our 10 nanometer technology, including self aligned quad patterning, contact over active gate and unique process innovations that allow single dummy poly on an advanced FinFET technology. Now I'm going to return to the theme of hyperscaling that Stacy and Mark introduced and that Ruth explicated in her 14 nanometer presentation.
Hyperscaling is really a technique that allows Intel to continue the economic benefits of Moore's Law. It features logic transistor density increase that is significantly more than the traditional 2x Moore's Law pace, albeit at a longer than 2 year cadence. But it does afford the same rate of transistor density increase per year as traditional Moore's Law scaling and the same rate of cost per transistor improvement as traditional Moore's Law scaling. This is really, really important. The economic benefits of Moore's Law are intact.
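The economic claim here can be put into a toy model. The numbers below are invented purely to show the mechanism, not Intel's actual wafer costs or density steps beyond the 2.7x figure quoted earlier:

```python
# Toy model: multi-patterning raises wafer cost, so cost per transistor
# only keeps improving if density rises faster than cost. Numbers invented.
def relative_cost_per_transistor(wafer_cost_ratio: float,
                                 density_ratio: float) -> float:
    """Cost per transistor of the new node relative to the old one."""
    return wafer_cost_ratio / density_ratio

# Suppose the extra litho/etch passes make the wafer 20% more expensive.
hyperscaled = relative_cost_per_transistor(1.20, 2.7)  # 2.7x density step
modest_step = relative_cost_per_transistor(1.20, 1.5)  # weaker density step

print(round(hyperscaled, 2))  # 0.44: cost per transistor still falls strongly
print(round(modest_step, 2))  # 0.8: most of the benefit is eaten by wafer cost
```

This is the sense in which a bigger-than-2x density step preserves the cost per transistor trend even when each wafer becomes more expensive to pattern.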
In addition, to service the needs of the marketplace for an improved product on a yearly cadence, we provide performance enhancements within each technology node. So let me come back to this. Why do we choose to hyperscale? It's because of what Ruth explained earlier. 193 nanometer immersion lithography with a single pass can only get you down to an 80 nanometer pitch.
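A minimal sketch of that pitch arithmetic, using the 80 nanometer single-pass limit stated here; the division factors are the defining property of the self-aligned double and quad patterning schemes:

```python
# Self-aligned multi-patterning divides the lithographically printed pitch
# by a spacer-defined factor: 2 for SADP, 4 for SAQP.
SINGLE_PASS_LIMIT_NM = 80  # ~minimum pitch for one pass of 193 nm immersion

def pitch_floor(division_factor: int) -> float:
    """Smallest pitch reachable after self-aligned pitch division."""
    return SINGLE_PASS_LIMIT_NM / division_factor

print(pitch_floor(1))  # 80.0: single patterning
print(pitch_floor(2))  # 40.0: SADP, used at Intel's 14 nm node
print(pitch_floor(4))  # 20.0: SAQP opens up pitches below the SADP floor
```

Any minimum metal pitch below the 40 nanometer double-patterning floor is exactly why the quad scheme is needed at 10 nanometers.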
If you want to scale below 80 nanometers, you have to have more than one pass through the lithography and etch tools. When you do that, you add to the cost of fabricating the wafer. If you don't scale faster, you don't make up for that cost. And so to maintain the economic benefits of Moore's Law, we have to invent these new patterning schemes to extract the full cost per transistor benefit of those multiple passes through the lithography tool that you are going to pay for regardless. So hyperscaling really would not be possible without these innovations.
These include self aligned dual patterning at the 14 nanometer node, self aligned quad patterning at the 10 nanometer node, along with some of the other innovations that I've disclosed for the first time today, contact over active gate and single dummy poly with an advanced FinFET transistor. So these hyperscaling innovations really allow Intel to continue the economic benefits of Moore's Law. And this results in this slide, which you've seen now multiple times: Moore's Law is alive and well at Intel. So let me conclude. Intel's 10 nanometer process technology has the world's tightest transistor and metal pitches.
The tightest pitches in the industry, along with unique hyperscaling features, contact over active gate and single dummy poly, really provide us leadership density. We believe our 10 nanometer technology will be a full generation ahead of what others dub 10 nanometers. We support enhanced versions of our 10 nanometer technology to provide improved power performance within the 10 nanometer generation. We do expect our 10 nanometer technology to commence manufacturing in the second half of this year. And returning to the theme of hyperscaling: Moore's Law has been a reality, an economic reality, for us for the last 40 or 50 years of this industry, and Moore's Law is alive and well.
Hyperscaling techniques allow us to extract the full value of the multi pass patterning schemes and allow Intel to continue the economic benefits of Moore's Law, which has really revolutionized our whole world. Thank you for your attention.
Please welcome back to the stage, Mark Bohr.
So good morning, again. So I'm really excited because today I have an opportunity to disclose for the first time details of our new 22 FFL technology. So what is 22 FFL? 22 FFL is the world's first FinFET technology for low power IoT and mobile products. It uses advanced FinFET transistors based on proven 22 and 14 nanometer features, provides a transistor option with more than a 100x leakage power reduction, uses simplified interconnects and design rules based on 22 nanometer technology, offers new levels of design automation and RF design enablement, and it's cost competitive with other industry 28 and 22 nanometer planar technologies.
So this table describes, or illustrates, the design features we either pulled in from 22 or from our 14 nanometer technologies for this 22 FFL process. Of course, all three of these technologies use FinFETs. Intel has been manufacturing FinFET products since our 22 nanometer generation back in 2011. So starting with fin pitch, that's one key feature we pulled in from our 14 nanometer technology. We use a fin pitch of 45 nanometers, slightly relaxed from the 42 nanometer fin pitch used on 14.
But they're the same narrow, tall fins, the same high performance fins that we are presently shipping, and have been for a while now, on 14 nanometers. Gate pitch is relaxed a bit to 108 nanometers, so we have a wide range of options for different gate lengths within that pitch to best tune the design for either high performance or low leakage. We use a 90 nanometer interconnect pitch. That's not only cost effective, because it uses single patterning, but it has good design rule flexibility. We have a 630 nanometer cell library height, so it's shrunk from our 22 nanometer process.
And this technology delivers, using the metric I described earlier, 18.8 megatransistors per square millimeter. Higher density not only than our own 22 nanometer process, but higher density than any other 28 or 22 nanometer technology. And the SRAM bit cell size is 0.088 square microns. So again, this process is based on proven 22 and 14 nanometer features. This technology offers a wide range of device types and features.
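The megatransistors-per-square-millimeter figure uses the logic density metric Mark proposed earlier in the day: a weighted blend of a small 2-input NAND cell and a larger scan flip-flop cell. A hedged sketch of that metric for the 22 FFL discussion; the 0.6/0.4 weights follow the published proposal, while the example cell areas and transistor counts below are placeholders, not real library cells:

```python
# Sketch of the proposed logic-density metric: a weighted average of the
# transistor density of a 2-input NAND cell and a scan flip-flop cell.
# The 0.6 / 0.4 weights follow the published proposal; the example cell
# areas and transistor counts are placeholders, not real library cells.
def logic_density_mtr_per_mm2(nand2_tx: int, nand2_area_um2: float,
                              sff_tx: int, sff_area_um2: float) -> float:
    # Transistors per square micron equal millions of transistors per mm^2.
    nand2_density = nand2_tx / nand2_area_um2
    sff_density = sff_tx / sff_area_um2
    return 0.6 * nand2_density + 0.4 * sff_density

# e.g. a 4-transistor NAND2 in 0.2 um^2 and a 36-transistor scan FF in 1.6 um^2
print(round(logic_density_mtr_per_mm2(4, 0.2, 36, 1.6), 1))  # 21.0
```

Because it averages a dense simple cell with a sparser complex cell, the metric is meant to resist gaming by quoting only the densest structure on the die.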
It offers high performance transistors, ultra low leakage transistors, high voltage IO and power transistors. These transistors provide good device matching and low 1/f noise. We also provide a deep N-well isolation feature, along with precision resistors, MIM capacitors, a high resistance substrate and high Q inductors. So again, 22 FFL provides a wide range of devices for digital, analog and RF design. These are 2 graphs that I first showed back in 2011 when we first introduced our 22 nanometer FinFET technology.
These graphs illustrate the benefits of FinFETs over any planar transistor. The left hand graph is a plot of transistor gate delay on the vertical scale versus operating voltage. So obviously, lower on the vertical scale means a faster transistor, and a lower operating voltage, moving to the left on the horizontal scale, provides lower active power. And that graph, again shown way back in 2011, shows that what we then called tri gate and now call FinFET transistors can provide better performance and power than any planar technology. The graph on the right shows off-state or leakage current, comparing planar versus tri gate.
And FinFET devices are fully depleted transistors, which have a steeper subthreshold slope and thus can provide lower off state leakage than any planar transistor. So here I'm showing the high performance transistors offered on 22 FFL, comparing them to our previous 22 GP technology and also to our 14 nanometer technology, plotting leakage on the vertical scale versus drive current on the horizontal scale. So obviously, being down and to the right means a better transistor. And our 22 FFL transistors are clearly much higher performance than our 22 GP technology, and on roughly the same line as our 14 plus plus, so similar drive currents to what we offer on our 14 plus plus technology.
Now let me expand the vertical scale a bit and add the special low leakage transistors that are offered on 22 FFL, and they provide leakage that is more than 100x lower than our low leakage 22 GP technology. And let me stress an important point on this slide. The vertical scale is not just subthreshold leakage, which is what you normally see on graphs of these types at conferences. I'm plotting on the vertical scale total leakage, which includes all three of the leakage components of a transistor: subthreshold leakage, gate oxide leakage and junction leakage.
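A trivial sketch of the bookkeeping being described; the component values here are invented, in arbitrary units, purely to illustrate the comparison:

```python
# Total leakage is the sum of the three components named in the talk.
# All values are invented, in arbitrary units, purely for illustration.
def total_leakage(subthreshold: float, gate_oxide: float,
                  junction: float) -> float:
    return subthreshold + gate_oxide + junction

standard = total_leakage(subthreshold=100.0, gate_oxide=8.0, junction=2.0)
low_leak = total_leakage(subthreshold=0.6, gate_oxide=0.3, junction=0.1)

# The kind of "more than 100x lower total leakage" comparison on the slide:
print(round(standard / low_leak))  # 110
```

The reason the talk insists on the total rather than just the subthreshold term is visible in the sketch: once subthreshold current is suppressed, the gate oxide and junction terms become a comparable share of the sum.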
And when you get down to these really low leakage levels, all three matter. All three need to be carefully engineered and optimized to get us down to that very low leakage level. And the low leakage devices offered on 22 FFL are the lowest leakage transistors of any mainstream technology. So now let me zero in on the 2 families of transistors that we offer on 22 FFL. We have the high performance transistors that I described earlier.
And those are on the die, of course, to provide high performance logic such as high performance cores, but coexisting, on the same die at the same time, are the low leakage devices down there in the lower left. Those devices are typically used for circuits that don't care as much about performance but really care about ultra low leakage. An example of such circuits would be the always on, always connected circuits that you have. So on 22 FFL, you really have a very broad range of transistor types, from high performance to ultra low leakage. And this technology is supported by Intel System Foundry and our robust ecosystem.
We provide design services, soft IP, advanced IP, foundation IP and design tools and flows. So this is my final slide. 22 FFL provides high transistor drive current, similar to Intel 14 nanometer, and provides special low leakage transistors with more than a 100x lower total leakage than 22 GP. Die area scaling is better than industry 28 and 22 nanometer technologies. We offer a wide range of advanced analog and RF devices.
We make extensive use of single patterning for affordability and ease of design. We have ensured high yield with the use of proven 22 and 14 nanometer features. We are cost competitive with other 28 and 22 nanometer planar technologies, with industry standard design kits: PDK 0.5 is available now, and PDK 1.0 will be available in Q2 of this year. And it will be ready for production in Q4 of 2017. So again, my final summary statement is that 22 FFL is an exciting new technology that provides a compelling combination of performance, power, density and ease of design for low power IoT and mobile products.
So thank you again for your attention.
Please welcome President, Client and IoT Businesses and Systems Architecture, Murthy Renduchintala.
So finally, somebody pronounced my name right. I'll have discussions with Stacy and Laura after the event. Maybe we'll sit down and go through the syllabic deconstruction of my surname. Good morning, everybody. It's really great to be here.
I hope you found this morning a fascinating 3 hours of technology exposition. I have the great fortune and incredibly exciting opportunity, together with my colleagues in the product divisions, to work with all the goodness you've just witnessed and create leadership products. Now the role I have at Intel offers me an exhilarating challenge to drive both business and technology. On the business side, I'm responsible for Intel's client, IoT, connectivity and automated driving businesses. And together, they account for about 60% of Intel's revenues.
On the technology side, I'm responsible for the company wide systems architecture organization, which consists of all of our silicon engineering, our platform development and software enablement. And very importantly, I'm also responsible for working with Stacy, Sohail, Ruth, Mark, Kaizad and many other brilliant people in our technology and manufacturing organization to align our process and product development roadmaps. In my talk today, I'll touch on 3 topics: product cadence, advanced silicon design methodologies, and the benefits of Intel's custom foundry strategy to Intel's product businesses. All of these are emblematic of a new way of describing Intel's product road map. For that description, we're going to do away with the tick tock metaphor and replace it with a metaphor based on waves of innovation.
The metaphor is built upon the principles of predictable annual cadence; co-optimized process, architecture and IP; and delivering compelling user experience improvements to drive an acceleration of product refresh. Now it's important to recognize that products are not exclusively associated with a node transition or a process optimization. Certainly, the innovations that Stacy and the team produce are a huge part of what we do. But nor could we produce world changing products by relying on architecture and design alone. The wavefront of compelling products is formed by the intelligent application of all the technical vectors Intel has at its disposal. Now, perhaps to overuse the metaphor, waves do strike the shore.
So if we view the market as the shore, when the wave strikes, we get meaningful reasons for product refresh. Let's dig a little deeper into the principle of cadence. Those of you that were at Investor Day will have heard me already say this, but I think it's a good set of lines, so I'm going to use them anyway. At Intel, there's a fusion between process technology and product development, resulting in customized transistors and IP libraries for specific product segments.
The tailoring of our process technologies to our product requirements provides us with performance leadership that exemplifies the strength of Intel's IDM model. For example, we're able to use different transistor flavors to meet the specific power performance and area requirements of our central processing unit needs and our graphical processing unit needs in our core road map. This tailoring of performance leadership encompasses the entire portfolio of Intel's products. We're able to optimize our process and products within our technology generation to deliver meaningful performance and maintain an annual product cadence. Let's supplement the rhetoric with reality.
As we move through our 6th, 7th and 8th generation cores, we're delivering significant performance improvement on important industry standard benchmarks. And we've realized all of that with substantial improvements in power performance as well, as we've developed our product range on 14 nanometer. These improvements are made by a combination of the continued 14 nanometer process enhancements that Mark and Ruth touched upon. And most importantly, products are being delivered using this technology on an annual cadence, with waves of process, architectural and design innovation to stimulate refresh. Now our products are really only as good as their constituent IPs, and our process and product teams work together to ensure that IPs map to the right technology wave.
In fact, we stay on a node longer because it has utility across a wider range of products and IP. At the same time, as we transition from 14 nanometer to 10 nanometer for our core road map, we're bringing our modem, our networking and our FPGA products into 14 nanometer. Our Stratix line of FPGAs is already in production in 14 nanometer. Our LTE and 5G modems will ship by the end of this year and through 2018. The IO chips that ship with our core products will be available by the first half of next year, and our networking products will ship in 14 nanometer through the course of 2018.
And we'll continue this pattern in future node transitions. As our core products move to node n+1, the broader portfolio will move to node n. Moving to the second topic of my talk, advanced silicon design methodologies. Let's examine a typical silicon product and deconstruct it for a second. Now the graphic behind me is not meant to reference any particular device or architecture, but I think it's pretty typical of what the industry has traditionally done.
It's monolithic, where all the IPs are constituted in the same process technology. As a result, all aspects of the chip have been implemented in a particular fashion, driven by the necessity of node convergence. That's pretty constraining to Intel, because Intel has a rich portfolio of IP in an array of process nodes, all optimized for localized power, performance and area, or PPA. And this presents an incredible opportunity for Intel in creating heterogeneous SoCs, and that has a profound impact on how we will deliver products to the market now and in the future. For example, we can mix high performance blocks of silicon and IP together with low power elements made from different nodes for extreme optimization.
Another example: we can mix logic together with field programmable gate array technology to provide complex and adaptable functionality, which is going to be key for future technologies such as network function virtualization and software defined networking. Stacy highlighted, and Mark further explained, our embedded bridge technology, or EMIB, in their talks. It's a truly transformational technology for Intel. EMIB is an intelligent, elegant and cost effective approach for high density interconnect of heterogeneous dies. Unlike traditional multi-chip packaging, the interconnect, as Mark described, is running on silicon rather than through a substrate.
And the density of interconnect enabled by EMIB allows relatively simple circuits to be used to connect dies together at multi-hundred gigabyte per second transfer rates. In addition, we're able to deliver a 4x reduction in latency at that transfer rate, and a 5x reduction in power as compared to MCPs in general. And Intel has already brought this technique to market. The Stratix 10, based on EMIB technology, is already shipping. It consists of the world's first 14 nanometer FPGA, which is constituted of 2,800,000 logic elements and nearly 1,000,000 adaptive logic modules.
And it's interconnected via EMIB to up to 16 gigabytes of in-package memory, with up to 512 gigabytes per second aggregate transfer rate. And it's also interconnected via EMIB to 144 high speed transceivers, which can deliver an aggregate IO transfer rate of up to 400 gigabytes per second. Now the exciting thing for me is that EMIB is no more than Chapter 1 of the heterogeneous SoC playbook that Intel is going to move forward with in the future. And we'll share some of the evolutions in our thinking in this matter at points in time when we're close to bringing those technologies to market. But clearly, this is going to be a strategy that is going to play a large part in Intel's product road map moving forward.
Now up until this point, I've talked about how we've established an annual product cadence in all of our businesses and how we're using heterogeneous SoC construction to assist in this regard. Now I'd like to go back and amplify something Stacy discussed, and that's the link between our product business and our foundry. All the technology and IP I've discussed is available at our foundry, and it has the potential to be made available to our strategic partners and customers. The co-optimization and integration of Intel IP with that from 3rd parties leads to leadership products not only for our customers but for Intel as well. Customers gain the advantage of Intel's IDM capabilities and benefit from significant time to market and time to yield for their products.
They also reap the rewards of working with the leader in process technology. Now, I don't know who amongst you was at Investor Day, but there I used an analogy of artisanal baking to describe the linkage between process and product development. Now, truth be told, many really liked the analogy. I think a few didn't, but I'm going to continue with it anyway because I think it serves the purpose that I want to amplify today. At that time, I was really speaking about Intel's internal capabilities.
But the same principles are no less true in the custom foundry context. The flour we use to bake remains highly differentiated from wholesale. As I said at Investor Day, we partner with the farmers in the husbandry of the wheat and select the best, in order to create products from our bakery that have incomparable quality. With Intel Custom Foundry, we're enhancing and enriching the flour we use to bake, and we're expanding access to the bakery to other artisanal bakers. And our bakery is one where bakers learn from the skills of one another, such that everybody becomes a better baker.
An Intel baker becomes a better baker because we're working with other bakers who have skills and IP and techniques to offer us. And that is going to be a unique chemistry that we'll seek to amplify and enhance our product businesses with as we go forward. So in closing, let me emphasize the key points of at least my pitch today. The tailoring of our process technologies to our product requirements provides us with performance leadership and exemplifies the strength of Intel's IDM model. Our product road map has an annual cadence with waves of process and architectural innovation.
And the co optimization and integration of Intel IP with that from many third parties leads to leadership products for both Intel and our customers delivered via our foundry. That is the IDM advantage. Thank you.
Thank you. So, we're going to bring up a couple of chairs on stage before we begin with the Q and A session. As we're getting ready for that, I want to highlight, we've got 4 mic runners for the Q and A. Raise your hands, Kara, Will, Trey and Mark. So they'll start roaming the rooms getting ready.
It could be pretty much dictated by what process flow you're using. So your biggest knob that you have for cost is going to be density. And that's the part that drives everything. If you lose the density, then you have lost the ballgame. You just cannot recover that part by any means of cost cutting.
So that's the advantage we clearly have, as Stacy and the others have shown. So we have a very strong foundation on which we are building, providing better and different types of transistors that could be utilized in different products. So in foundry, this is going to be a big item for us and provides us a clear advantage over our competition.
And a shout out to a pro-investment policy in the U.S. The other big difference is the incentives and the tax structure, right, which dictate where in the world it makes sense to build these factories. Those are really the 2: it's the density curve and then the incentives you get.
Everything else is a common set of equipment. It's relatively low labor content. So the rest of it just washes out. I'll go to Trey, and then I'll come to Kara next.
Thanks, Stacy. It's John Pitzer with Credit Suisse. I guess my first question: when you look at self aligned double and quad patterning, those were innovations in part that were driven by a delay in EUV lithography tools. So I'm kind of curious as to when you think EUV finally comes into the fray. And does that change sort of the competitive advantage you have with self aligned double and quad patterning?
If you look at it, EUV is making good progress. Over the last couple of years, good progress has been made. Our position has been that we'll use EUV when it becomes affordable. We have the tools. But even when you are running EUV, these techniques that we have, you will need them down the road in order to enable it.
All that's happening is we have learned how to do this thing now. When EUV comes in, it will simplify things. But moving forward, further down the road, people talk about high-NA EUV, but that's 5 or 10 years from now. So these techniques that we have developed on immersion are all going to carry over and help us scale better.
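As a rough illustration of why these immersion techniques matter, the pitch-division arithmetic behind self-aligned double and quad patterning can be sketched like this (the single-exposure limit is an assumed round number, not an exact tool spec):

```python
# Sketch of how self-aligned multi-patterning divides pitch.
# A single 193nm immersion exposure bottoms out around ~80nm pitch;
# that figure is an assumed approximation for illustration.

SINGLE_EXPOSURE_PITCH_NM = 80  # assumed immersion litho limit

def achievable_pitch(technique):
    """Minimum pitch for each patterning technique, in nm."""
    division = {"single": 1, "SADP": 2, "SAQP": 4}  # spacers halve/quarter pitch
    return SINGLE_EXPOSURE_PITCH_NM / division[technique]

for t in ("single", "SADP", "SAQP"):
    print(t, achievable_pitch(t), "nm")
# SADP halves the printed pitch; SAQP quarters it -- which is how
# 10nm-class fin and metal pitches are reached without EUV.
```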
That's helpful. And then I guess as my follow-up, can you help us understand how quickly EMIB might penetrate your product portfolio? And might that technology be a driver of smaller die size over time at the wafer level if you can do some of the integration very well on the back end packaging?
So if you look at it, we have a part out already. Stratix 10, which is our FPGA part, is the first part that we got out. We have lots of big plans around EMIB. It's a very enabling technology. And as Mark and Murthy have talked about, it gives us a lot of flexibility to be able to do different things.
And that's our innovation the internal innovation that we have, and we intend to use it as we move forward.
Murthy, do you want to add anything?
Again, I think, John, what was described today, I think, is the tip of the iceberg in the relevance of EMIB to us. I think it represents the first chapter, as I said, in a more generic strategy for us where we go beyond the constraints of monolithic integration and look at multiple techniques
of how
we can essentially get the equivalence of monolithic performance, but without having to have the compromise of everything being in a single node. As many of you in the audience probably know, as you go from node to node, for example, from 14 to 10 to 7, there are many parts of IP that greatly benefit from that. Clearly, our CPUs and our GPUs really benefit from that. But there's a lot of analog circuitry and IO circuitry that is actually somewhat hurt by moving at that scale, because it's really focused much more on leakage than on performance. So that always constrained our ability to move with agility.
And clearly, one of the key things on my mind is how I most effectively use Intel's R and D budget. And what I don't want to be doing is spending a large amount of R and D porting IP from node to node that doesn't get any inherent performance benefit. The ability to be able to decouple that and have my R and D focused on absolutely competitive advantage generating capabilities is where I want to be. So move my CPU and GPU into one technology, but the other technology is how do I basically play maybe in a lagging mode that gives me other performance benefits that allows me to get new products out quicker? That's the kind of thesis that we're driving inside.
And E MIB is just one of many techniques that I think you'll see us bring to the market in the future to show how we're really tackling that conundrum.
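The R and D argument sketched above, porting only the IP that benefits from the new node, can be illustrated with toy numbers (the block names and all costs are invented for this sketch, not Intel figures):

```python
# Sketch of the R&D argument made above: port only the IP blocks that
# benefit from the new node; keep analog/IO blocks on an older node and
# stitch the dies together (e.g., with a bridge like EMIB).
# Block names and cost units are invented illustration, not Intel data.

ip_blocks = {
    # name: (port_cost_units, gains_from_new_node)
    "CPU":    (10, True),
    "GPU":    (8,  True),
    "SerDes": (6,  False),   # analog IO: little benefit from scaling
    "PHY":    (5,  False),
}

port_everything = sum(cost for cost, _ in ip_blocks.values())
port_selectively = sum(cost for cost, gains in ip_blocks.values() if gains)

print(f"monolithic port: {port_everything} units, "
      f"selective port: {port_selectively} units")
# The difference is R&D freed up for the blocks that actually gain.
```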
Rick Merritt from EE Times. A question about your new twenty 2 nanometer node. One for, I guess, either Anne or Sohail, if you could give any specific metrics about how it compares not to 28, but to the 22 nanometer FDSOI. And then my follow-up would be for either Stacy or Murphy to talk about give some clarity on the products that you're targeting here. Is it mainly for foundry?
Or if it's your own products, what kind of your own products?
So this technology is offered both internally and externally. Internal products, at this point I don't think I'm going to talk about them, but there are products being designed on our 22 FFL part. Compared to the FDSOI part, there's nothing out there. People talk about FDSOI.
I've heard it's a good technology, but we will be going into manufacturing by the end of the year on a very strong baseline. Fundamentally, SOI, if you look at it, is a costly technology compared to bulk. So it still has to be proven. What we are offering is, as Stacy showed in his slides, on our FinFET technology we have already processed over 7,000,000 wafers and have shipped greater than 500,000,000 units on 14 nanometers. So the offering that we have is far more compelling and based on a very strong technology that we already have in hand, versus somebody who might be making promises about the future.
Yes. I'd like to add, as somebody that was involved from the very foundational definition of 22 FFL, we didn't develop that process as something we wanted to put on the shelf for our foundry. We developed that process because of the diversity of our product range and the need to get into processes that were really focusing on ultra low leakage, as well as elements of our portfolio that needed high performance. So for us, it's very much going to be a fundamental part of our road map going forward, because our road map going forward is going to be much, much broader than maybe what you've traditionally expected of Intel, moving into IoT, into mobile, into networking, where clearly the attributes of 22 FFL are going to be really, really enabling for us to give differentiated performance. And again, when we spec'd 22 FFL, we had a clear understanding of where 22 FDSOI was going to go.
And I think it's going to be a valuable node not only for the internal divisions, but also the foundry to provide a compelling diversification of options out there for parties who want to go for ultra low leakage implementations. Yes. And internally, just think of
a world where we now have an ultra low leakage process. We apply our leading edge capabilities to ultra low leakage. And then we have unique capabilities to start to put heterogeneous pieces together in a single product. And it really starts opening up a lot of capabilities for us as you think about that going forward. Kara again?
Or no, she's delegating to Trey. Thanks very much. Chris Hemmelgarn, Barclays.
I guess, could you talk a little
bit about the opportunities EMIB opens up for combining Intel products like your modem with external foundry customer products like an FPGA or a processor, or even opportunities for combining an FPGA with some of your server products?
Yes. I mean, I think you're going into the same territory of just looking at the panoply of opportunities we have with EMIB. I think it's also prudent to say, look, MCP is not going away. There are plenty of applications where MCP makes perfect sense. There are also going to be situations where, for example, maybe because of the thermal density of certain pieces of silicon, putting them into one package may not be the right physics.
So we'll use high speed interconnects such as PCIe or other techniques to deliver that. But there are going to be a number of opportunities where the embedded silicon bridge is going to be transformative in the properties that we can deliver. And therefore, again, it gives us a lot of options in terms of thinking about how we deliver the equivalents of monolithic performance. Modem, AP, memory, logic, just a few examples of what you could think of. And clearly, the Stratix 10 has shown how you could put the FPGA together with high speed IO, together with embedded memory, all in one package using silicon bridge technologies, which gives you pretty much close to monolithic levels of performance.
And all of those are on 3 different technologies. And you can basically take 3rd party memory and put that in that package. Not all of that technology necessarily has to be from the same source. They can be from a mix and match approach.
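One way to picture the mix-and-match packaging described here is as a simple data model, with dies on different nodes and from different sources joined by embedded bridges (the die names, node labels, and fields are illustrative only, loosely modeled on the Stratix 10 description above):

```python
# Toy data model for the mix-and-match packaging described above:
# dies from different nodes and different suppliers, linked by
# embedded silicon bridges. All names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Die:
    name: str
    node: str       # process technology the die is built on
    source: str     # "internal" or "third party"

@dataclass
class Bridge:
    a: Die
    b: Die          # an embedded silicon bridge joins two dies

package = [
    Bridge(Die("FPGA fabric", "14nm", "internal"),
           Die("SerDes tile", "28nm", "third party")),
    Bridge(Die("FPGA fabric", "14nm", "internal"),
           Die("HBM stack", "memory node", "third party")),
]

nodes = {b.a.node for b in package} | {b.b.node for b in package}
print(f"{len(nodes)} different technologies in one package")
```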
Got you. So just to make sure I
heard you right, you're definitely very open to taking 3rd party IP, 3rd party blocks and combining that with fully in house Intel stuff?
Yes. Inbound.
Yes. Actually, if you look at Stratix 10, that's already true. You basically have SerDes coming from outside, and the FPGA coming from inside.
It's Nico with Golund. I hope you could shed some light on how you managed to get the contact on top of the active gate. Does that have to do with the taller fins that you're using and the evolved processes in 10 nanometers?
I don't think that's a part we plan to go over the specifics of. We laid out that this is the structure, but we were not going to go over the details of how we did it.
Okay. Point is taken. And on the new silicon interconnect, EMIB, would that also give you an option to have a more cost effective construction of Xeons made up of multiple dies?
That's a great idea. You've been doing this before too much.
All right. Two very good questions that we're not answering. I like it. Where are we? I see Mark.
Yes.
Hi, this is Steven Chin from UBS. A question on the hyper scaling and the cost implications for CapEx. Just given the additional improvements over the longer life of nodes, is there higher CapEx intensity over the full life of a node as a result, for both 14 and 10 nanometers?
Yes. I'll refer you back to some information that Bob Swan showed at the Investor Meeting, which was a month or so ago. Setting memory investment to the side, he showed overall capital investment as a percent of revenue. And what it says is that it stayed pretty constant. It's kind of stayed within what I'll call the historical band.
And if you think about this, the curves that you saw from several of us kind of answer the question. The cost per square inch of silicon goes up, but we get more scaling, and that allows us to do more product kind of per dollar of capital spent. And so at a constant volume level, we can stay pretty constant in terms of overall capital intensity, is the way I would think about that.
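The capital-intensity point can be sketched numerically (the growth factors below are assumed for illustration, not Intel data):

```python
# Sketch of the capital-intensity argument above: capital cost per
# wafer rises node over node, but density rises faster, so capital
# per transistor -- and capex as a percent of revenue at constant
# volume -- can hold roughly flat. Numbers are invented illustration.

capex_per_wafer_growth = 1.3      # assumed: +30% capital cost per wafer
density_growth = 2.0              # assumed: ~2x transistors per wafer

capex_per_transistor_change = capex_per_wafer_growth / density_growth
print(f"capital per transistor scales by {capex_per_transistor_change:.2f}x")
# A value below 1.0 means more product per capital dollar, keeping
# capex/revenue inside the historical band.
```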
Great.
Okay. And my quick follow-up for EMIB, for that silicon bridge that was shown in the pictures, is that silicon bridge, and I can't remember if the detail was mentioned earlier, but is that being manufactured by Intel? Or is that coming from some other third party supplier?
We have an IP on it, and it's manufactured externally.
I think 2 more questions, if there are 2 more. There's one upfront
here. Ian Murphy, Enterprise Times. One of the new challenges we've got is IoT and security. With connected cars in particular, we've got the problem of a whole ecosystem where a manufacturer does not know who else is going to connect to it. What are you guys starting to bring down into the new chips in terms of security components?
It's authentication: how do you actually verify and authenticate either the generator or the consumer of the data. It goes by the acronym of EPID. I can't expand the acronym, I'm sorry. It's a great technology. I know where we're using it, and it's basically very much targeted towards enabling device authentication and attestation.
Quite frankly, I think it's IP that doesn't necessarily have to be only in an Intel part. I think it's our contribution towards this whole discussion on how we essentially get security in the IoT environment. There are other options out there. The key is really being able to have any network infrastructure that's hosting an IoT application to have confidence that it's communicating with valid or authenticated IoT client devices. And that's going to be, I think, a very important piece of technology to enhance IoT.
I don't think it's going to be gate heavy, but I think it's going to be needing to have a broad based discussion around it to make sure there's a degree of industry wide consensus on how we deliver that. So we've put some ideas into the pot. I'm sure others will. And it's a problem that I'm absolutely certain will need to be resolved for IoT to scale.
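The attestation flow being described can be sketched at a conceptual level. EPID itself is a group-signature scheme that lets a device prove it holds a valid credential without revealing which device it is; the HMAC below is only a simple stand-in to show the challenge-response shape of device attestation, not EPID's actual cryptography:

```python
# Conceptual sketch of the device-attestation flow described above.
# EPID is a group-signature scheme; the HMAC here is only a stand-in
# to illustrate challenge-response attestation, NOT how EPID works.
import hmac, hashlib, os

GROUP_KEY = os.urandom(32)  # provisioned to valid devices at manufacture

def device_attest(challenge: bytes) -> bytes:
    """Device side: answer the verifier's challenge with its credential."""
    return hmac.new(GROUP_KEY, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes) -> bool:
    """Network side: accept only devices holding a valid group credential."""
    expected = hmac.new(GROUP_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
assert verifier_check(challenge, device_attest(challenge))
print("device attested")
```

In the real scheme, verification uses a group public key, so the network learns only "this is a valid device," not which one.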
And I apologize. I'm having trouble seeing the timer, and I was going to cut you all short. So we can actually take one more question after this one also and still live within the time and then get you to lunch. So this question and one more, and then I think we're officially out of time.
Is there one more? Awesome. Oh, wait. Okay, Trey?
All right. Welcome back, everyone. Hopefully, you guys enjoyed the lunch, and we're very excited to bring this panel discussion to you today. I'm Zane Ball. I'm Co-General Manager of our foundry along with Siva there at the end.
And we're very proud to have a very distinguished group of folks from the industry to talk a little bit about Intel Foundry. So we have Will Abbey here from ARM. He's Senior Vice President of Strategic Alliances and Sales. We have Lip-Bu Tan, President and CEO of Cadence Design Systems. We have Aart de Geus, who's the co-founder and co-CEO of Synopsys, and Siva, my partner managing the foundry.
So I thought we would just get started today. I have a little cheat sheet here with some of my questions. We heard a lot about Moore's Law being alive and well at Intel. And certainly, Intel's traditional businesses have always benefited from being on that leading edge.
I wanted to get the panelists' thought around what that Moore's Law progress that we heard about today means for foundry customers. And why don't we start with you, Will?
That's the benefit of being seated next to you, I guess. You get ARM and Intel together, you never know what's going to happen.
We at ARM, we're super excited that we're in collaboration with both Intel and Cadence on both 22 nanometer and 10 nanometer technologies. On 22, that's really 22 FFL, we think that's a really exciting technology for multiple reasons. One, it offers the benefit of a FinFET transistor. You've heard a lot of that being talked about this morning.
And also the fact that it has a simpler back end of line process. And so the benefit from an ARM perspective is that we think that will provide a fairly compelling migration path for our high volume, cost sensitive partners in both mobile as well as consumer. We think that as we couple that technology with the physical IP implementations that my group, my former group, is working on, targeting our next generation v8 ARM cores, it will offer a differentiated offering to the market. As I look at 10 nanometer, clearly, you've heard a lot said about that, and I'm not going to echo any of that.
But as we work through the implementation of 10, we can clearly see that it's going to offer compelling performance benefits. But particularly on the density side, as we continue to look at the implementation options, we can see that there's going to be a clear density benefit beyond the so-called 10. We think that you guys are underselling the capability of the technology by labeling it as a 10 nanometer node. So we can see 10 being a good candidate for our next generation high end v8 cores targeting both mobile and high end mobile solutions. So we think that Intel, coupled with the work that the EDA partners are doing and the IP partners are doing, is clearly focused on bringing differentiated value to the foundry space.
Lip-Bu, anything you'd like to add?
Sure. So
I think this morning, you heard about their process technology, their packaging technology and also clearly the low leakage and low power. I think it's a great benefit for our fabless customers and the mutual customers. I think they see that they really can lean on ICF to do that, so that they can really focus on their design content. So that's number 1. Number 2 is, clearly, there's a massive investment into the foundry that they can lean on, and they benefit from it with clear, unique technologies.
And then the other part is they also can further benefit from the Cadence tools and IP optimized for the process nodes that Intel is driving, and we're excited about this collaboration. Thank you. Aart?
Well, you asked about the future of Moore's Law in Intel's hands. And leadership is something very difficult in this field. Many companies don't achieve it, and Intel has for many decades, which is amazing in many ways. And the presentations this morning gave an insight into this all-important trio of PPA, performance, power and area, and area is sort of a substitute for ultimately the manufacturing cost when you multiply it by yield. And what is so interesting is the balance of how these variables keep being moved forward at the very moment that the entire market is just hungry, hungry for being able to give more performance with lower power, because everything is portable, everything needs to be in power saving mode.
And the very fact is that we can today, and we have already, optimize the number of ARM cores in these new technologies very successfully. The 3rd and 4th variables are really risk management and optimization, which is all the things that you do around it. And so this is where the partnership with Intel has to be as good as we can make it, because every time we reduce the risk, it gives more opportunities to customers. Every time we can do a better optimization, be it on the technology or be it on the IP, we actually add value to what our joint customers will do with it.
And Siva?
Yes. I'm super excited to be here. Thank you for being a part of this. And ICF, as you heard, is focused on leading edge, 22 and below.
Here, we are offering Intel's crown jewels to be shared with our foundry customers. That is super exciting from a foundry customer standpoint, having that choice. At ICF, our focus is to offer industry standard IPs and tune them right from the get go on PPA, which is performance, power and area optimization. And we do that while the process technology is in definition, so we can actually tune it. And second is, with all the EDA ecosystem partners, we co-optimize the solutions to make sure they have a choice; whatever may be their selection criteria, they have the full choice.
End of the day, they have to make that decision. And same thing with the design partners. So you've heard that to our portfolio we've added 22 FFL, and that is another great addition to broaden the portfolio of offerings we have. So really looking forward to a great future ahead.
All right. Thank you, guys. So Aart, next question for you. We just announced 22 FFL today, kind of big news, with Intel putting more effort into more mature technologies. What do you think the opportunity from a foundry point of view is for this technology?
I think the opportunity is great. And it's big news. Of course, we've been working on this now for a long time to make it work well together. And it's big news because I think
this technology happens to hit
one of the most important sweet spots: the intersection between the ability to go for high performance, but also to drive down leakage or any other form of power as much as possible. And you only have to look around at the world of mobility, of portability, but also at this coming wave of digital intelligence, which will drive semiconductors like crazy. And it has one demand, which is give me more performance and less power and less power and less power. And this technology has the potential to hit the sweet spot and be adaptable on the curves that you saw.
Now in order to do that, a lot of effort went into making sure that the design flows are ready, that the IP is rolling out as needed by the customers when they need it and, of course, that we can support the customers in different places in the world. And the good news is we've already taped out quite a number of chips together with ICF. And so there's good evidence that we are ready.
Lip-Bu, we've been working together in the foundry. How would you describe our collaboration? And what kind of progress do you see us making in this new initiative?
Sure. So a couple of points. I think one, clearly, we significantly increased our collaboration together, both in breadth and also in depth. And clearly, on the 22 FFL and also the 10 nanometer, I think those are great opportunities.
We work together on the different process nodes and how to optimize based on the process node. And then the other part, in terms of the more depth, is basically we share common customer requirements so that we can really drive the performance and the power, the PPA, to meet the customer requirements. So I think that is a very important point. And then the other part, I think, is clearly the deeper engagement and support for the 3rd party IPs.
Like we have worked closely with ARM, and also some of the Cadence interoperability, interconnect IP and memory IP, so that we can really serve the customer better. I think that's a very important point. And then our field teams have been working together so that we can serve the customer with the local expertise, plus the different experience, so that we can support the customer well locally. I think that's very important. And clearly, I think the overall question of how to really drive the performance tools in a better way to serve the customer is critically important. At the end of the day, we are supporting our customers and making sure that they tape out correctly and then meet the schedule that they require.
And I think that means very deep collaboration from the IP point of view, from the tool point of view and also from the foundry. And I have to say that I think the service orientation from ICF has significantly improved. We see that, and the customers love it. And you have a lot of technology to offer to the customer.
Thank you very much. So Will, our relationship is pretty new. We've been working together for less than a year, but it's been an intense collaboration. What's your take on Intel's ability to compete in the foundry space?
Yes, I think that's a really great question. So I mean in fairness, we have been working together now for 10, maybe 11 months. And it's unquestionable that Intel has great technology. We've heard it played out today very eloquently. I think an important ingredient that's needed to be successful in foundry is a customer centric mindset.
And we went into this journey together wondering whether Intel would embrace what we believe to be a successful ingredient that's required to make a big difference in the foundry space. And so we're pleased that as we've asked questions of Intel in terms of looking at the mosaic of different customer needs, Intel has responded very, very well. And so as we've worked together through those questions of design methodology, EDA tools, EDA views, EDA design capability, we've been pleasantly surprised on 2 fronts: 1, not just the capability of the Intel engineering and its strength and depth, but also the willingness and openness to adopting new approaches and new ways of working. In a recent meeting that we had, I remember sharing with both Siva and Zane that I think you've come to a point where you can really ask yourself, does ICF really describe what you guys are all about now?
Because the C for me stands for custom. And the fact that you've embraced this notion of a standard methodology, right, a standard approach, I think that you should reconsider whether ICF is a good description for where you want to be. So I'm pleasantly surprised
So Intel Standard Foundry or just Intel Foundry? I'll take all the marketing advice I can get.
Either Intel Foundry or Intel Standard Foundry. I think that, 1, we've been pleased that you've been open and receptive to working with us in the way that we think is needed to be successful in foundry. And I think that you're ready for the big time as far as I'm concerned, right? Because foundry is the ability to support a mosaic of different requirements and a mosaic of different customers. And with the work that we've been doing together, I think you're now ready for prime time.
Well, certainly, we've had a lot to
learn entering this new space. So, Siva, question for you. You and I started managing the foundry together, and we're just about coming up on the 2 year anniversary. There have been a lot of things that we've learned as we've gotten into this. What are some of the highlights for you in working with the external environment?
Yes. It's been a pretty exciting ride. And I would say 2 things really stand out in the learnings. First one is at the customer selection stage, first, the customers want to make sure they have the IPs that they're interested in available. That's the first one.
And the second one is to make sure those IPs are tuned, well tuned. We call it co-optimization at Intel; Murthy used the phrase fusion. Whatever word you use, it's really making sure the power, performance and area are fully optimized to take advantage of the features that you've heard about today. That's the second thing they look for. And that's only the selection criteria.
That's a starting point. It gets you started. After that, it's really about understanding the customer needs and how they want to differentiate themselves in the market that they have, and how we can fine tune, in that journey, the IP portfolio. One of the key things that we have heard is, how about the ARM CPUs, not only today's but future ones. So as you heard at the last IDF in August, we've added those to the portfolio.
Will was mentioning a dialogue he was having with Zane and myself recently. And the key part that I want to highlight in that conversation: they were asking about the methodology, should it be tuned or should it be standard? And we insisted that it should be a standard EDA recipe that gives the choice and the power to the hands of the customer. The key leading solutions are both optimized and standardized so that the customers can take full advantage of them. And we made that crystal clear, and that's what it's all about.
I think it's about making the customer succeed. That means they're getting the best results on our technology. They have the choices in their hands. 3rd is, we are open to co optimizing the solutions to solve their specific problem. TD organization has been extremely open in what I call the last mile to bring the optimizations in addition to the technological features to make our customers win.
So that has been really the learnings for me and we're super excited that we have all the support we need from the ecosystem partners and the technology development inside to actually serve our customers. So that's been my learning.
If I can, I just want to stress one important point, which is that the proof of the pudding is always in the tasting. And so from the 10 nanometer and 22 FFL collaborations that we're working on together, at the end of the year we're going to have our 1st silicon tape out on 10. And I think the benefit of that is that our mutual partners will be able to take the solution that we've co-developed and actually see if they can replicate that PPA, which we think will be compelling, in the shortest possible time. So that's from a 10 nanometer perspective. And also on 22 FFL, we're targeting a low leakage, high performance solution for Q1 of 2018.
So again, it's about standard methodology, standard EDA flows. Can you reproduce that in the shortest possible time? So I think all our partners in the industry will be able to see whether the work that we've been doing is just smoke and mirrors or whether it's real. And we think it's going to be very interesting and very compelling.
I'd like to add to that. It's been interesting to observe the learning of ICF, engaging with customers because you can do everything right, but you have to earn the respect of the customer. And so obviously, I won't share details, but ICF does have the first repeat customers. You never get a repeat if it didn't work out, right? And so people watch you in the first engagements very, very carefully because ultimately, this is all about predictability of outcomes.
It is very complex what we do together. And so a single issue can jeopardize the work of the entire group here in no time. And the fact that you have repeat customers that are engaging on the next node, on new technology, on new cores, I think, is a very good sign. And I would encourage you to continue that rapid learning.
Okay. I'd like to maybe turn the topic a little bit to the bigger picture. So we've heard a lot of things today with the advance of Moore's Law, and very sophisticated items going on with contact over active gate and self aligned quad patterning, very, very sophisticated stuff. Obviously, Moore's Law is getting more difficult. It's getting more expensive. On the design side, to achieve these remarkable benefits of better transistor density that drive our industry, we have to work a lot harder for it.
There's more R and D required across the industry. I think that's starting to change the structure of the industry as well, right, with consolidation and things like that. So I'd love to hear each of your thoughts on what you think the future industry structure with foundries looks like. And for bonus points, where you think an Intel foundry could fit into that? So we all know the rich history of the fabless ecosystem, but I think it's changing a lot going forward.
So whoever wants to jump in first.
Sure. Well, for starters, the fact that things are more complex, complexity is our middle name, right? That's what we've made a living of. And Moore's Law has been dead so many times you wouldn't believe, and yet, here we are. Secondly, you say it's more expensive.
It is. Complexity does bring expenses, and you see a consolidation of the market to players that have the ability to continue to invest and survive over time. But then I would add that I think we're entering a phase of the electronics market that is as interesting, as driving, as the computational era was in the '80s and '90s, as the whole mobility era was. And now we're going into this era of smart everything. And if there's one thing that these eras have in common that helps right now in this business, it's this: what's better than smart? Smarter.
And how do you get there? Well, better algorithms. But really what we would like, really what we would like, is another 10x more computational power, or 100x as a matter of fact, and it will open up the door. And I would argue that the cost, as much as one has to deal with it, is not at all the issue. The value is so high that even if the chips were twice as expensive, and I'm not suggesting you do that, it still would absolutely be driving things forward. And in that context, a foundry that can execute at this point in time has an enormous potential, because there are so many people coming up with fabulous software ideas around digital intelligence that if you can support them with hardware that can deliver more performance at low power, I think you're going to be in a great spot.
Just to add on to it, I think clearly the complexity has increased a lot. And it depends on what vertical markets you're going after. In some of them, clearly, like mobile, power becomes very important, low power and leakage, and then density becomes critical because cost is very important. At the end of the day, first time pass is critical. Anytime you do a respin, it's just costly.
And also time to market is critical. And so I think about the execution requirements and also some of these advantages that you have, and how to put them together to support a customer. And some of the verticals have different requirements. And so intelligence on the edge, clearly, everybody is talking about that; the whole industrial revolution is a huge opportunity. Then you look at machine learning, deep learning is a huge, broad application along the way, either in automotive or the cloud infrastructure.
So all these have different requirements. And that's where, I think, from the Intel point of view, how to drive some of this differentiation will show. At the end of the day, the customer success is everything. Time to market, the performance, the PPA, on time, is critical.
Okay. I guess it's left to me. So as I ponder the question, a wise man once said there's nothing new under the sun, right? So in order to predict the future of the foundry space, I think the best place to look is behind us, at where we've come from. And so for me, the important point is that with Intel now entering the foundry marketplace, I think you're going to bring great technology.
So playing in foundry is not a cheap game. It's high cost, high investment, and Intel is not short of capital. And so if we look at our rich heritage, where we've come from, and how innovation has come about through competition, I think the fact that we now have another significant player in the foundry space means the foundry space is in a good set of hands. We've heard a lot talked about the new emerging application areas of a smarter connected world, whether that's automotive or in the home. I think that's going to bring a diversity of applications, a diversity of opportunities.
And as we see more competition, we've heard a lot of talk about 22 already this morning. There are multiple players now at 22. We're seeing the same at 10 and below. And so I think foundry is in a good set of hands. We will continue to do our part in terms of EDA partners and IP suppliers to ensure that solutions are available for the rich potential of application areas that exist.
So from an ARM perspective, we pride ourselves on multiple choices and enabling choice. And so we don't want to see a world where our partners only have limited choices for technology or limited choices for manufacturing excellence. So we think that Intel coming in is just going to allow the foundry space to become a lot more competitive, drive more innovation. And so foundry space is going to be around for a long time. From an ARM perspective, we think it's a really good thing.
I'd like to highlight three things that really excite me. The first one is Intel technology. Mark talked about how he is actually spending his time on the next node after 7, what we call the 7. And the Intel ecosystem has the technology investment to keep going. That's number one that everybody can count on.
The second thing is, compared to a pure-play foundry, Murthy talked about how he called it the bakery, where we can actually share ingredients: the IP, the richness of the IP that we have, we can actually offer in addition to the industry-standard IP. That, I think, is a differentiator that we alone can provide. The third thing that's really exciting is optimizing all of this with an EMIB kind of technology, where you can actually mix and match. You can have the cake and eat it too, so to speak. You have the low power applications in one technology node, and performance-hungry, density-hungry applications such as CPU and GPU, and you put them together. I think that gives a very exciting choice in a world where the number of applications we're seeing is really accelerating, and I hope some of the customers will take advantage of it.
So I'm super excited to see that.
Okay. Well, I think we'll wrap it there. Just a couple of comments. Something I've learned in being part of this foundry effort is that it takes the ecosystem coming together to make the fabless customer successful. And Intel's traditionally been very much the iconic IDM, right, that works very much under our own roof with the help of a few partners, right? But we always succeed when people like you guys who graciously joined our panel today work together.
I think we're learning this game, and we wouldn't have gotten to where we are without your partnership and support. So please accept our very big thanks on behalf
of all of us. Yes, I'd join Zane in thanking all of you. We appreciate your time and presence.
Thank you very much.
And most importantly, partnership.
Thank you. Thank you.
Please welcome to the stage Vice President of Finance and Director of Investor Relations, Mark Henninger.
Welcome back, everyone. Thank you for joining us again. We'll be doing a Q&A, and I'm going to invite a few folks to join us up here: Ruth, Murthy, and I'm going to avoid the risk of mispronouncing surnames
by going with
his first name. Howard?
Yes. Howard and Dave. Regentana.
Mine is short.
We don't have to announce it. So we've got to announce it. And what we'll do is we'll do one question per person, as is our custom. I'll just remind everyone briefly, too, that we are in the quiet period, so we'll try to limit the scope of the questions to the technology and announcements that we have here today. So without any other delay, why don't
we go ahead and kick things off? Why don't we
start right here? Ross?
Hi, Ross Seymore from Deutsche Bank. Murthy, you've mentioned going from a tick-tock to more of a wave in your product innovations. Can you just talk a little bit about the challenge that happens every three years, where a node-based product line hits upon, I guess it would be, the plus-plus product line, and how strategically Intel stacks those to avoid cannibalization and all those sorts of issues?
Yes. I mean, first of all, I think we take a fairly long term view of planning our road map and we sit down with, as a team on the product side and with colleagues such as my 3 eminent colleagues here and talk about where are we in terms of the intra node movement as well as the new technology nodes that are coming up. And we look at the relative performance increments that are coming out of process. We match that against where we believe we're coming up with improvements in architecture or IP derived benefits. We have a clear benchmark around which we want to improve performance on a year by year basis.
And we dial in the right chemistry and the right degree of risk profile for how we deliver on that, and then lock in which node, with which pieces of IP and which architectural framework, we go forward with. And that's how we basically move forward. The goal is essentially to hit that annual cadence. And we basically look at a scenario where, when we're coming up against the final iteration of node N-1 and the first iteration of N, we basically double-drive those endpoints to make sure that we have seamless coverage in terms of where one node reaches apogee and another one starts its
infancy. Great.
Next question.
All right, over here to Stacy, please.
Thanks. I guess following up on those lines, I noticed on your charts, your 14 nanometer plus plus actually had performance that was higher than the initial 10 nanometer. Obviously, the 10 nanometer has lower power. But if process density is so important, why wouldn't you be developing a higher performance 10 nanometer today? And presumably that would be more cost effective and have better traction in the marketplace?
Or is this a statement on where, for example, 10 nanometer yields are?
Let me at least start with that because I did show those 14 nanometer plots. And you did pick up then that, yes, the 14 nanometer is at a relatively constant capacitance, but 10 starts lower. And again, as I showed those various performance curves, we can choose for different products where we want to lie on there. You don't necessarily have to be at lower performance because you can move around on those.
So why would you wait until the 10 plus or the 10 plus plus to actually get the performance of a 10 nanometer product out? Why would you wait to have the advantages of the density improvements? Because it looks like right now you're sticking with 14. Again, is that a statement on where yields are, or is there another driver
of that? You're adding on; maybe I can let Kaizad answer, but I think you're adding on to the point that we continue inventing things. And so of course, we're going to come in and put those in as soon as we have them available too. But you have to go through waves of work to put together all the process innovations.
That's basically correct. I mean, there will be new features, new techniques added. They were done on 14; going from 14 to 14 plus to 14 plus plus didn't just happen automatically. There were experiments needed. It's work.
And the same thing is true on 10. So that's why it doesn't all happen at once.
But I think it's also important to say that I feel that, on the product side, we don't have a knee in our back pushing us towards 10 at a rate of knots before these guys are really comfortable and have dotted the i's and crossed the t's, because they're giving me a really good 14 plus plus node in the meantime to keep my annual cadence. Again, I won't say that product generations are going to be synchronized necessarily with process node evolutions. That may happen, it may not happen. But what won't change is the fact that we want an annual cadence with a predictable performance increment between nodes. And that's really how we're constructing our road map.
Stacy, just to return to your original question. Depending on the type of product, the lower power can be a significant advantage and a reason to go to 10 over 14 plus or plus plus. So depending on the attributes of the product, a power-unconstrained product may prefer to be on one option, or a more power-constrained product would want to be on another. Or if you have an architectural innovation that significantly adds to the transistor count and adds to the power, that would provide performance and may benefit from being on 10 versus 14. So as Murthy is suggesting, each product is going to have its own optimal sweet spot of where it wants to land.
And we're going to look at as they add transistors about where we place ourselves on those curves. And just back to the innovation thing one more time, because I do think that is so critical. I mean, once we invent some new technology, we can understand ways to apply it to a lot of different nodes. We're not just going to stand still and say it only goes here. We're going to look at how we can best apply it.
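The panel's point about choosing where each product lies on a node's power/performance curves can be sketched with the textbook dynamic-power relation, P ≈ C·V²·f. All of the capacitance, voltage, and frequency numbers below are invented purely for illustration; none are Intel process data:

```python
# Dynamic CMOS power scales roughly as C * V^2 * f.  The constants below are
# invented to illustrate the trade-off, not taken from any real process.

def dynamic_power(cap, v, f_ghz):
    """Relative dynamic power for switched capacitance `cap`, supply `v`,
    and clock `f_ghz` (arbitrary units)."""
    return cap * v**2 * f_ghz

# A newer node starting at lower switched capacitance than an older one.
C_OLD, C_NEW = 1.0, 0.75

baseline = dynamic_power(C_OLD, 0.75, 3.0)

# Option 1: same voltage and frequency -- bank the capacitance win as power.
low_power_point = dynamic_power(C_NEW, 0.75, 3.0)

# Option 2: spend the headroom on voltage and frequency for more performance
# at roughly the same power budget.
high_perf_point = dynamic_power(C_NEW, 0.80, 3.4)

print(low_power_point < baseline)   # lower power at the same frequency
print(high_perf_point <= baseline)  # higher frequency at the same power
```

Either operating point is valid silicon; which one a given product picks is the "where we want to lie on those curves" decision described above.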
Great. And let's come back over here.
Murthy, it's John Pitzer with Credit Suisse. I kind of apologize for this question, but I think almost every week there's speculation in the investment community about whether your 10 nanometer is hitting targets or getting pushed out. So can you talk a little bit and just kind of level-set us as to when we should expect to see 10 nanometer first production, and how the ramp of Cannon Lake is going to progress as we exit 'seventeen and go into 'eighteen?
Sure. So as we said at Investor Day, we're still on a trajectory where we believe volume ramp will be in the first half of 'eighteen, and we'll be in production by the second half of 'seventeen. And right now, we're targeting shipment towards the end of the year. Whether it's this side of Christmas or the other side of Christmas is a little too close to call at this stage.
That's helpful. And then as a follow on, at Analyst Day, you talked about DCG ramping more quickly at leading edge nodes. And I guess, Mark and team, can you help me understand because those have historically been larger die sizes and there's a cost penalty to ramping large die sizes on new nodes. Are you doing anything differently? And is this sort of the advantage of EMIB?
And I guess as you think about EMIB, how much can you deconstruct a chip before it starts to become an issue in the back end packaging?
Mark, do you want to take that one?
Well, I'm not quite sure I caught the gist of the question. Was it whether we are going to start ramping 10 on a small die or a large die? Was that the
Well, the issue is, if DCG is going to start taking more of the burden of leading-edge process technologies, those tend to be larger die within your product portfolio. And if you're ramping larger die first, I'm assuming there's a pretty hefty cost penalty to do that. How are you trying to offset that, if I understand what you're trying to accomplish with DCG ramping more quickly on the leading edge of nodes?
Maybe I'll let Bert Ferzi answer that question. How do you see the strategy for the DCG products? Yes.
I mean, clearly, John, you picked up on some of the key messages. To be the first, or one of the first, products on a new node, clearly you're constrained by a die size that needs to be sensibly hit in order to make sure that you're not exacerbating the issues of the defect density being quite high in the early period. And therefore, the ability to go towards more of a disaggregated die construction helps in that regard. And if you look at, for example, the benefits of EMIB together with die partitioning, then you can understand where maybe servers can be complemented by, or benefit from, moving onto a new node first. So you take a 600 square millimeter die, you think about a rational partitioning of that die into smaller tiles, and then you use interconnect technologies such as EMIB to reconstitute a monolithic level of performance.
I think you can see how that all dials into server being able to take advantage of newer nodes quicker.
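The yield logic behind partitioning a big server die into EMIB-stitched tiles can be sketched with the standard Poisson defect-density yield model. The defect density below is an invented illustrative number, not a disclosed Intel figure:

```python
import math

def die_yield(area_mm2, d0_per_mm2):
    """Poisson yield model: probability a die of this area has zero killer defects."""
    return math.exp(-area_mm2 * d0_per_mm2)

d0 = 0.005  # defects per mm^2 -- an assumed early-ramp value, for illustration only

mono_area = 600.0              # one monolithic server die, mm^2
n_tiles, tile_area = 4, 150.0  # the same silicon as four tiles, stitched together

y_mono = die_yield(mono_area, d0)  # roughly 5% of big dice are defect-free
y_tile = die_yield(tile_area, d0)  # roughly 47% of small tiles are defect-free

# Bad tiles are discarded individually before assembly, so the wafer silicon
# consumed per *good* product drops sharply with partitioning.
silicon_per_good_monolithic = mono_area / y_mono
silicon_per_good_tiled = n_tiles * tile_area / y_tile

print(f"{silicon_per_good_monolithic:.0f} mm^2 vs "
      f"{silicon_per_good_tiled:.0f} mm^2 of wafer per good product")
```

This sketch ignores the bridge cost and any yield loss in assembly, which partly offset the gain, but the direction of the effect is why early, high-defect-density nodes favor small dice.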
John, do you
mind repeating that question?
How much can you deconstruct the chip before it becomes a performance issue?
I don't have kind of like a specific checklist there. Right now, I think what we're really looking at is really looking at individual tiles that probably wouldn't be too dissimilar from what you would see in a client sized die and sticking those together. So it's not like small die. They're still fairly major die, but they're clearly a lot smaller than our traditional server die.
Let me come over here
to Tristan.
Tristan Gerhardt. Just as a follow-up to John's question, what's the customer feedback as you move the next node to DCG, versus what you've done in the past? And also, what are the implications of 10 nanometer timing for the PC segment?
So first of all, I run a lot of the engineering that supports the server business. I couldn't really represent what the customer feedback is. It's probably a question for Diane. But in terms of the 10 nanometer question, again, let me repeat what I said to John. We're on a trajectory where we'll be in volume ramp by the first half of twenty eighteen.
We'll be in shipment, sorry, we'll be in internal production ramp by the second half of 'seventeen. As I said, in terms of first shipments, whether it's before the end of the year or just after the beginning of next year, it's too close to call. But I think it's still generally in line with the timing that we have aligned with our customers. So we don't necessarily see any perturbation in launch plans today.
On this side here? Jerome from Exane BNP Paribas. What are your ambitions in foundry? What kind of market share are you targeting of the $23,000,000,000 you mentioned? And are you eventually going to be a foundry for competitors?
So I think specific ambitions, we haven't declared any beyond the fact that I think there are strategic partnerships and co learning that could be available to us that could be mutually beneficial both to the internal product business of Intel as well as essentially providing a degree of extension of our foundry capability for other relationships. In terms of types of customers, I don't think we're setting ourselves up to be a general purpose foundry. We're really looking at strategic partnerships that essentially deliver win win arrangements both for Intel and their customers and a degree of technical cooperation that is part of that win win story.
We've talked about or heard about your density and cost lead for several years now. Is there anything about, I guess, these advances that you outlined today that specifically are there any that stand out in your mind that will leverage into new product categories or improve market share in some categories that you guys have been targeting for some time but haven't gotten that level of traction yet?
Well, scaled transistors, higher performance transistors, lower power transistors, lower cost per transistor: these are technology benefits that I think will benefit the whole range of product lines, from the very smallest mobile chips to the very largest server chips.
And to add to that, what you heard today was not just an explication of Moore's Law, which is providing more and more transistors for the same number of dollars, which allows a broad range of products to benefit. You also heard about our 22FFL technology, which takes our proven FinFET technology to address a new market segment where low leakage is extremely important. And you also heard about the heterogeneous integration technologies, including EMIB, that allow us to stitch different types of IP together. So I think the combination of all those things can and will lead to a broader segment of the market being addressed.
And I think we've recognized for a couple of years now that, from a process technology perspective, one size does not fit all. We're developing derivative versions of each technology, some tuned more towards higher performance, others towards lower power or SoC-type applications. And now maybe the most extreme example, 22FFL, is really optimized for its market segment.
And we do look at those. I don't know if you remember in the 14 nanometer update, I really walked through a whole list of features that we add to the base technology, right? And then we can look at what makes most sense for any given product and work with the product teams to try to put up the right, call it, flavor that's going to make that segment successful. So we are constantly looking at do we have the right technology portfolio to really build whatever we want to build.
And maybe to add to what Ruth said. Exactly as Ruth said, as we diversify the flavors of transistor technology and their optimization points, a node that's coming towards the end of its lifetime for our core products is then in a state where the rest of our product portfolio can take benefit of it. So as I said in my talk, a trend you'll continue to see is that as our cores move out of node N, a number of our other products, such as our modem technology, our networking technology, our FPGA technology, are going to be moving into that vacated node. And I think there's going to be a real sense of virtuous reinforcement: the maturity of the node that a core product portfolio exits is going to be pretty sophisticated for new products to come into and get the benefits of that learning. So I see that very much as being a symbiotic set of events that will repeat from node to node.
And I'll add that I've had the opportunity to visit with many of our foundry customers or potential foundry customers where I hear their feedback on our process technologies, what could be different or better. So it's helping me, helping the process team to better optimize our technology, not only for some of these foundry customers, but for our own internal products as well.
Back to the trade.
Why not introduce a new architecture design sooner than Ice Lake, which is 2019? You guys used to do an architectural redesign every two years. Now, between Sky Lake and Ice Lake, it's going to be like four years, if I'm not mistaken. And architectural redesign used to be independent of nodes, so why not introduce one sooner rather than later?
So I think we need to be really precise about the expansiveness of terms like architecture. Sure, we can talk about new ways of interconnecting the various IP blocks within an SoC to get better performance. But let's also be clear that what we're also doing is upgrading key pieces of IP, such as the CPU or GPU, and looking at different configurations of clocks and memory bus access speeds. So when you look at the move from 6th to 7th to 8th generation, where we're delivering double-digit increases in performance, that's a mixture of modification of the chip-level construct, the improvement of IP, and the transistor process evolution, creating a compound chemistry that actually gives us that performance improvement. Because at the end of the day, what really matters is that when you open the lid of a laptop or you fire up a server, the experience you get in today's product is distinctively better than last year's product.
And that comes from a combined chemistry of all of that stuff. There isn't really a panacea where you basically move towards a complete new architecture that in and of itself is necessarily going to give you all of those gains. So we see our product road map basically being a continuum of continuing to deliver predictable annual progress. But on that
innovation that's ahead of you will come from integrating a lot of different IP blocks, right, some of which will require multi-chip packaging, to your point. Some of this could be on-die and need a re-architecting of the silicon block, the CPU block. So is that something that you feel is not needed for the next four years, and you'll only need it in 2019? Or is there something you guys are thinking about architecturally, from an innovation standpoint, to intercept some workloads that your customers are asking for, as Kaizad was referring to?
I think there are certain attributes of our road map that are very much dialed into 10 nanometer transistor specs, which we basically believe make sense to be launched as that technology becomes more mature. And that's really the thinking on the outer parts of our road map. Thank you. Derek?
Similar question. I guess when I think about EMIB and these tiles, if you will, historically, you guys have embedded a lot of accelerators, video codecs, floating point, all these and embedded it into the Xeon CPU core that's monolithic today. When we think about an EMIB world with these discrete tiles, does that mean more emphasis on individual tiles as an accelerator connected to perhaps a more simplified CPU core. Just wondering if you could help us understand that.
Yes, maybe I'll take that, if you guys have input. First of all, let's be clear, I think EMIB is not a panacea. It's a useful technology where it makes sense. For example, some accelerator technology we may in the future choose to envisage may not have a gate count that makes great use of EMIB, and therefore you might as well just monolithically integrate it. On the other hand, you could see scenarios, for example, where you're looking at really high performance graphics and a really high performance CPU, where the monolithic power profile would not necessarily be able to keep that architecture in thermodynamic equilibrium; then maybe die partitioning via EMIB could be a good way of mastering the physics.
So in that context, you could think about different accelerator engines that may need to be mixed in, possibly using something like EMIB. But at the same time, it might just be better to do a monolithic integration if they're really tightly coupled with a CPU and have a fairly modest gate size.
Well, and this is what Murthy said at the beginning. We really do sit down and do a lot of product-process co-optimization to figure that out. There's no one right answer to that question you just asked, because depending on the product need, depending on what we're doing, we have to figure out exactly the best combination to put that all together.
A standard MCP via-substrate interconnect may be really good enough. It depends on what the data rate of that accelerator is. If it's multiple hundreds of gigabits per second, it's going to be different than if it's tens of gigabits per second.
Just to follow up on that, taking a step back. When we think about your hyperscale customers, and this isn't necessarily EMIB-centric, but more just an architecture question for you: what gives you guys confidence over the long term that, within a hyperscale scale-out data center, having accelerators either on package, on a monolithic die, or via EMIB, choose your variant, connected to the host CPU is what the customers want? And in that world, at the scale of a hyperscale data center, what's the benefit of having that versus a separate discrete accelerator bank for things like virtualized pools of resources, things of that nature?
On that topic, I really don't think there's one ubiquitous, one-size-fits-all answer. I think you're going to see a range of workloads. I think you're going to see workloads that are going to be quite suited to instruction-set-based acceleration on, for example, a standard Xeon server. There are going to be areas where customized ASICs make best sense because it's a highly predictable workload. You're going to see other areas where a GPGPU might make better sense because, essentially, it's a very clear workload that may be adaptable in some scenarios.
And therefore, I think Intel's strategy is very much to have an approach where we have diversity in the way we look at technical solutions. It could be using ISA-based accelerators, it could be using dedicated ASICs in certain scenarios, it could be using GPGPUs. It could be using EMIB technology for interconnect. It could be using MCP. It could be using monolithic integration.
I think the benefit Intel has is that we have a number of IPs in our arsenal to adapt to specific workloads. Therefore, we'll have a very, very case-specific answer to each of those questions.
I'd just like to come back to the slide you showed of the cost per transistor, the density times the price per square millimeter. What I don't understand is: where is the yield assumption? And if there is a yield assumption, how do you assume the yield at the 7 nanometer node and the 10 nanometer node? How do you know this yield?
Well, the assumption is that eventually all of these technologies reach a high, mature yield. That's really what that chart is based on.
But one more piece of this was that Stacy did show the cost per transistor across technologies, and that did include yield in that slide. That really does show the continued improvement, wrapping up density and yield, when he shows that progression across technology nodes.
And to add to that, in Stacy's graph, it was also clear that if you looked at the first product, you didn't see the full benefit he showed. For example, Ivy Bridge as well as Broadwell, which are the first products, which did not get the full cost per transistor benefit both because in the early days yields are not as good as well as in the early days you haven't ramped your factory network to get the wafer costs down as well. And so in the first product, you don't see that benefit. But as you go through into the second and third wave products, the yields improve and you do realize the cost per transistor benefit.
And he also had that second slide, right, because there was a slide that showed, for a given die size, what it looked like, but then a subsequent slide also showed it as a cost item, so that you could see, when you actually wrap it up with the yield included, that the cost was continuing to decrease. Maybe that wasn't clear in that slide, but it does include the yield portion.
So the assumption is that eventually for every node, the yield is going to be the same?
Yes. We get to mature yield.
Yes, we get to mature yield. And if
you look at 14 nanometer today, it's getting to pretty mature yield levels.
And the difference between what we had at the end of 22 and what we have on 14 now doesn't change that cost per transistor picture very much. And today, our 14 nanometer cost per transistor is well below our 22 nanometer cost per transistor.
That's right.
Okay. Understood. Thanks.
Yes. But the cleanest view is that chart Stacy showed, which really shows you the overall picture with everything included, and that takes into account die sizes and the like.
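The point that the per-transistor chart assumes mature yield, and that first-wave products like Ivy Bridge and Broadwell don't see the full benefit, can be made concrete with a toy calculation. Every number below is invented for illustration; none are Intel cost or yield figures:

```python
# Toy model: cost per transistor = wafer cost / (good dice * transistors per die).
# All inputs are made-up illustrative values, not real process data.

def cost_per_transistor(wafer_cost, dice_per_wafer, yield_frac, transistors_per_die):
    good_dice = dice_per_wafer * yield_frac
    return wafer_cost / (good_dice * transistors_per_die)

# Hypothetical "old" and "new" nodes: the new wafer is pricier, but density
# improves faster than wafer cost grows.
old_node = dict(wafer_cost=5000.0, dice_per_wafer=500, transistors_per_die=2.0e9)
new_node = dict(wafer_cost=7000.0, dice_per_wafer=500, transistors_per_die=5.4e9)

early_new  = cost_per_transistor(yield_frac=0.4, **new_node)  # first-wave product
mature_new = cost_per_transistor(yield_frac=0.9, **new_node)
mature_old = cost_per_transistor(yield_frac=0.9, **old_node)

print(early_new > mature_old)   # True: early in the ramp, the new node can
                                # cost more per transistor than the mature old one
print(mature_new < mature_old)  # True: at mature yield, the new node wins
```

This is why the trend line is drawn at mature yield: the per-transistor crossover only appears once yields and the factory ramp have matured.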
Back up front. Hi. I just wanted to go back to Eric's question and Murthy's, I guess, comment in response to that. So if the idea is that Intel has a lot of IP, and depending on what the workload is or what the customer need is, Intel is going to be someone who can provide that tailored solution, is the idea then that it's not about integration, or about total cost of ownership being driven by a monolithic die or by co-packaged processors and things like that, and we're in a world where you need to be best in class at the specific building blocks in order to compete in hyperscale? Does that make sense, what I'm asking?
Yes. I think it really comes down to the fact that right now, workloads are so much in their nascency, so variable, that as you look at how you would predict what platform to launch them on, you're really taking a very early bet in a very formative part of workload formation. So what I'm really saying is, for example, from my own experience, the first smartphones had the imaging, the gaming, and all the graphics done on the CPU. Eventually, those disaggregated into more workload-specific architectures when the use cases became steady state.
Everybody wanted a camera, everybody wanted to watch video, everybody wanted to surf the web, everybody wanted to see playback video. That dictated how we invested in specific silicon. I think when you look in kind of like the roadmap of the future for whether it be the data center or even the client devices, some of the workloads are still very much in an experimental, will this grab the end customers' attention or will that grab the end customers' attention? Will this stick? Will that stick?
There's a lot of experimentation going on. So to commit to specific ways of doing things right now is a little early and a little premature. And therefore, what we're really saying is we have a platform that allows a lot of experimentation. And when that experimentation gets towards steady state use cases, we can actually define an optimized silicon solution, whether it be in terms of power, performance, area or cost. I think that's really what I was really trying
to say.
In the back.
Hi. This is Stephen Chin from UBS. Murthy, I just had a follow-up question to a comment you made earlier about how Intel Custom Foundry is not aiming to be a general purpose foundry. With that in mind, I was wondering if you could provide more insight, from a margin profile standpoint, on where your ICF business might fall longer term relative to Intel's current corporate operating margin structure, or relative to the best of breed out there. Yes.
And Murthy,
would you like me to jump in there?
Absolutely. Absolutely. Since we're in
the quiet period, I think we'll pass on that question for now, but we're happy to follow up with you when we're outside of the quiet period. Thank you.
Just following up on the same question again. Say one of your customers has made a bet longer term that whatever architecture they will use for some of these emerging workloads is the right bet, and you said that it's too early to make long-term bets on these workloads because they're very nascent in their life cycle. If they're committed to that bet, two questions. One, why wouldn't you guys actually be foundry partners to them, even though it kind of competes with your business in some ways?
And two, what would you do differently in order to make sure that you can offer that same architecture for their data center on a go-forward basis?
Look, if we have a customer that has a very clear understanding and clear perspective on how they want to pursue their workload management, then we'll clearly work with them and define a solution that basically meets their requirements. I think if the precision of their need is clear and there's a dialogue in terms of a specific implementation that is necessary, then I think it will be within our way it will be within our strategy of making sure we delight our customers to follow their direction and work with them accordingly. And that goes for any relationship beyond just the data center.
So in other words, you would actually make the part to supply to them and also fab it for them if you...
Well, first of all, it's not my business, so I shouldn't speak on behalf of Diane. And I'm not sure whether she's even had that request. So maybe that's a question you'd best target at her. I wouldn't want to answer on her behalf.
And then just a quick follow-up. I know it took a while to realign all of these different pieces together to finally get things moving at the same cadence. Do you think that all the work is now behind you to align the architecture, the microarchitecture, the manufacturing and the various IP blocks, so that future cadences on a go-forward basis happen in line with what customers want? Or is there a little bit more work to be done?
Yes. Is all the work done? No. Have we made good progress? Yes.
The one thing that is really satisfying to me is that we've got a real degree of philosophical and planning alignment between our process technology and our product roadmap. I spend as much time with these guys as I do with people in my product organization. What we're really talking about is looking at what these guys are recommending against what my team says they need in the product roadmap, and having an aligned discussion on the rules of the game: annual cadence, predictable performance improvement, and making sure the businesses can rely on a contribution from our foundry and process that underpins them. The thing that I'm most pleased about is that we've now really got a way of thinking that essentially unifies the company's thought process. There's still a lot of work to do to go from intellectual alignment to having all parts of our roadmap aligned on that theory.
But I think it's now on a path that I think is on kind of like an autonomous navigation vector.
To return to Murthy's earlier image, we have a quiver full of arrows. We then have to pick the right arrow for each case.
Well, and we really do a lot of work internally. Again, you've asked some specific questions about this or that, but a lot of what we can do internally is really to work with each specific team to understand what we can bring to them.
We come back to the middle here. Yes.
Thanks. Mark, I wanted to go back to a question I asked you kind of offline about EUV. ASML has done a great job with the investment community, kind of convincing all of us that the sun sort of rises and sets in the litho bay. And so I'm kind of curious, because the next development that's going to happen most likely is that TSMC and Samsung will be using EUV in their logic, probably before Intel. How does that change the competitive dynamic if and when that happens?
And if it doesn't really change it, can you just kind of dumb it down for us and help us understand why their going into EUV more quickly than Intel really doesn't change that lead?
Okay. First, I'll say that EUV is a great technology for delivering improved patterning capabilities. You can certainly print smaller patterns with better fidelity with a single EUV exposure than you can with maybe two or three immersion steps. But today, we can't commit to EUV because the manufacturing readiness, the manufacturing maturity of the tool, is just not there. The uptime of the tool isn't yet very good.
The number of wafers per hour through the tool isn't very good. So if we made a public announcement saying, yes, we're committing to EUV, that would be a pretty hollow announcement. It may sound good for a day or a week, but then you've got to ship wafers, and if the tool is not quite manufacturing-ready, you'll suffer. But let me also add that I don't think anybody is ahead of Intel in terms of understanding what it will take to get EUV into manufacturing.
And it's not precluded from being used on our 7 nanometer technology. As I've stated in public before, although we are initially developing our 7 nanometer technology on an all-immersion flow, the design rules are set such that we can put EUV into certain steps, replacing two or three immersion layers with EUV for a cost savings. So we see EUV eventually as a cost savings opportunity.
That's helpful. And then maybe as a follow-up. Since the days that Andy was CFO, it's been drilled into all of our heads that if Intel could move faster down Moore's Law, you would; the economics make sense. I'm just kind of curious, given all the optimization that you've been able to do on 14 and 10, and Murthy, given how the market has changed from this monolithic PC world into something that's much more heterogeneous around SoCs, is moving down Moore's Law as fast as you can still the best economic outcome for Intel? Or is what's happening now with the elongation of Moore's Law, as long as you can keep that competitive, actually a better economic outcome for the company?
Yes. Again, I think it's a set of concurrent strategies that, at a business level, we can cut through and exploit in cross section. We have to drive Moore's Law as fast as possible. But then we also have to accept that that's a process of a great deal of innovation. And with a great deal of innovation comes a degree of approximation in timing.
One thing that's really clear is that the fundamental point of Moore's Law is to get towards an economic equation. And you don't want to forestall the achievement of an exemplary economic equation because you want to shave 6 or 12 months off the schedule. In order to protect that economic path and still achieve that economic rate we talked about, we evolve our current processes to take a little bit of air cover, to give us that ability to optimize. So I'm not compromising my desire to have an annual cadence on my product portfolio, because I've got a make-before-break connection between node N-1 and node N. So for me, it's a much more sophisticated discussion today.
One, because we have a much broader portfolio of products than maybe when Andy made that statement. Two, Moore's Law's importance is ever present for Intel. But what we're also trying to do is get towards the ability to drive all of our business towards a predictable cadence, which means we need to cover some of that innovation risk with agility in which node we land a particular part, so our roadmap seems seamless to the outside world.
Now let me expand upon that answer. As we run as fast as we can down the Moore's Law path, we do not forget to look to our sides as we do so and ask ourselves: are there other ways we can optimize these technologies, do derivative versions, to meet a broader range of products? So again, the main driver is to pursue Moore's Law, but we also look to our sides for ways to develop derivative technologies for a broader range of products.
In a nutshell, Moore and more.
All right. I think we have got time for a couple of more questions.
Thanks. There's been some speculation in the press recently that some of your competitors are exploring a gate-all-around technology, ideally for commercialization next year.
What are your thoughts on that?
We are exploring a broad range of transistor options in our research and development groups, including gate-all-around. But as much as we might like to use an attractive name like that, we make choices based on real, hard engineering data: density, performance and power.
Got it. So if Samsung, supposedly, actually develops something that is gate-all-around next year, where does that put you potentially in terms of your roadmap?
I think still with a better technology.
On a power-performance basis?
Yes. Okay. All right.
For our last question, why don't
we come here to Tristan?
So you talked about 14 nanometer FPGAs being in production now. Fair to say that there were some initial delays at the time when Altera was a standalone company. It looks like Xilinx is going to access 7 nanometer from TSMC, so, we know, not an apples-to-apples comparison with what you're doing, sometime in 2018. Is the plan to accelerate the node migration in FPGAs going forward?
And do you think that you now have ways to do that? Or how should we look at the next node for FPGAs?
Well, I
think we're open minded. I mean, I think
at the end of
the day, what we're really looking at is what is the best technology or profile of IP that lends itself towards being the first to ramp a node. We've talked about technical enablement that has allowed us to reconsider whether a server, for example, should be leading in one of those areas. There's nothing to preclude that discussion encompassing the broad spectrum of Intel's technology. So we may consider whether in the future FPGAs make better sense. Clearly for us, as I said in my talk, all of our product portfolios will ultimately benefit from our transition from node to node.
As I said, when we exit 14 on our core roadmap, FPGAs move into 14 in the mainstream. I wouldn't discount options in the future where we may rethink whether FPGA is the right product to ramp a particular node with. So again, we're looking to make sure that we provide our FPGA business with a process that is aligned with the competitive advantage it wants to generate in the market. And if they believe that in the future there comes a time where they need to be on a leading node, then I see no reason to preclude that from our judgment.
Let me just add that the fact that there's so much debate and so much clamor to be first on a node simply makes the point that Moore's Law is alive and well.
All right. Thank you all for joining us. With that, we'll wrap up the webcast and we appreciate you spending
the day with us.