Good morning to everybody in the room, and thanks for coming to our first Data Center Day. I'd also like to extend a welcome to everybody who's joining via webcast; thanks for joining us here today. I'm going to be pretty brief: I'll share the schedule that we have for today, hit a couple of logistical details, and then turn it over to Diane.
First, if everybody can set their phones to silent. And I just want to let the audience here know that throughout the day, we're going to have charging stations in the back, and we'll have drinks available.
Now, for our risk factors: today's presentations contain forward-looking statements. All statements made that are not historical facts are subject to a number of risks and uncertainties, and actual results may differ materially. Please refer to our most recent SEC filings, including our Form 10-Q, Form 10-K and earnings release, for more information on the risk factors that could cause actual results to differ. Moving on to the agenda, leading off will be Diane Bryant, our Senior Vice President and GM of the Data Center Group. Diane will outline her strategy for data center business growth.
Following Diane, we'll have presentations on several data center market growth drivers. In each of these presentations, we've structured some Q&A time at the end of the session. Following Sandra Rivera's network transformation session at 11:00, we're going to close the webcast for the morning, and then we'll rejoin the webcast at 1:00 in the afternoon Pacific Time. On-site, we're going to do a quick overview of an optional data center tour, and we'll also have a brief update on some key demos that will be available in the afternoon.
So we'll come back on-site from lunch and the data center tour, and at 1:00 we'll restart our webcast. We'll have some afternoon presentations, starting with Jason Waxman on big data; he'll cover the market growth opportunity there. Then we'll follow with several presentations covering data center growth opportunities beyond the CPU.
The afternoon presentations will conclude at 3:00 p.m., and then we'll have a final Q&A opportunity with Diane. We'll bring Diane up to recap the day and take final Q&A, and our webcast will conclude at 3:20. On-site, we'll have some networking time with members of the data center leadership team, and then the opportunity to view some of those key technology demos.
And so with that, I'd like to welcome Diane Bryant to the stage.
Thanks, Chris. Thank you. Let me see if I can get this to work. Hold on. It shouldn't be this complicated.
Can you go back to this one, Gina? One more? Okay. So, first, a huge thank you to all of you for taking so much of your time to be here with us today.
This is really an important day for us, to be able to share with you what we're up to. As Trey mentioned, you have basically all of the Data Center Group staff here, the general managers and vice presidents of the group. So it will be a great opportunity for them to get to know you, and for you to get to know them as well. As Trey said, we are going to cover a lot of ground. We're going to start the agenda with the areas that are really the big growth vectors for the traditional microprocessor business, the CPU business.
And then we'll go into 3 of the areas that are really driving growth beyond the CPU. These aren't the only areas beyond processors that we're investing in, obviously, but we picked these 3 because they're both significant drivers of growth and new areas that may not be as well known to many of you. So we want to give you a drill-down into those new spaces. I always start with this slide, so I apologize if you've seen it 100 times now, but this is the way we look at the data center business.
We look at it from the perspective of the end user, the people that actually procure infrastructure, deploy infrastructure and manage that infrastructure. Each of these has a slightly different set of value propositions that they're looking for, so it's important for us to keep the end user in mind as we develop our products. Of course, we provide server, storage and network technology into each of those end user segments. You can see our market segment share there.
We're now at 96% share of the server market, so it's a very lovely share, obviously. In storage, we're up over 80%. That's traditional SAN/NAS storage as well as the ever-increasing scale-out, or server-based, storage that you hear so much about. And then the network space is the big opportunity for growth.
We're at about 8% share now of a very large market, so a nice opportunity. We also talk a lot about growth by segment, and these are obviously not precise numbers, probably not super helpful on their own, but they do give you a perspective: the enterprise market is going to grow in single digits out through the future, while in the other areas we see continued growth, a CAGR of over 20%. Okay. And last are the 3 big strategic focuses for us.
These are the big strategic growth areas for the industry, so it's not surprising that these are the ones. The first is the move to cloud computing, and when we say the move to cloud computing, we mean it as a computing architecture. It's the move to cloud computing by the big public cloud service providers, by enterprise moving to private clouds, and by the communications industry moving to a cloud-based infrastructure.
So we mean it as a computing architecture that is pervasive. Second is high performance computing: how HPC solutions are being used in new ways that they weren't before, the democratization of HPC, and Raj will talk about that. And then big data as a growth driver, with the massive accumulation of data and the use of that data to drive good business results as well as new services. Now, a recap of the first half, just as a reminder. If you look at the first half of 2015, we grew overall 14%.
That was 19% in Q1 and 10% in Q2. That variability, I think, reflects what we've been talking about for a while, which is that each of those end user segments procures infrastructure at a different beat rate. So you see variability quarter over quarter; in particular, the cloud service providers' acquisitions are driven by different dynamics than, say, enterprise IT's. So you see these ups and downs in the quarters, as we've been talking about. And certainly, you see the underlying growth of cloud computing in the numbers as well.
So that's a very strong first half in the cloud build-out. It's easy to talk about the big 7 cloud service providers, the 4 in the U.S. and the 3 in China, but what we do see is diversification of that market: the next tier moving up into that top 10 space, many more players.
China had a very strong Q2 in the cloud space, with folks like JD.com or Qihoo 360, so you see some really big consumer service growth. And that's actually the other point I want to make here: when we talk about public cloud service provider growth, we see a pretty consistent two-thirds of the cloud infrastructure going to consumer services and one-third going to enterprise-based services; Oracle's or Amazon's workloads, for example, would be heavily enterprise. That mix has been pretty constant over time. We also see very strong growth on the network side, and that's an exciting area for us as the comms infrastructure moves off of proprietary, fixed-function infrastructure onto standard high-volume servers in a virtualized environment, running in a cloud-like, dynamic, automated way. That move is happening.
We see it well, and Sandra will give you lots of insights into that. Then, after 7 quarters of growth in the enterprise space and a very nice Q1, we did see softening of enterprise in Q2. We still expect the year will be good; we're not changing any expectations on 15% growth for the year, but we are watching enterprise carefully. And some of it is likely China.
China is the largest procurer of 4-socket-and-above servers, the very high-end servers, and we certainly saw that slow down in Q2. So we're keeping our eye on that, but the underlying growth drivers remain the same, and we're still confident in the year. So when we talk about how we're going to grow, how we actually deliver that 15% CAGR, there are 3 big ways. The first one is on the server side.
If you have 96% of the server market, you grow with the market, and we believe we have ways to accelerate that growth. It goes back to the Jevons paradox that we love to talk about: if we can make infrastructure more efficient, easier to deploy, easier to consume, new usages will occur and demand will increase. So that's a focus for us, and it's a lot of what you'll hear about on the cloud build-out. You might have heard we announced the Cloud for All initiative and some investments within that, with Rackspace and Mirantis. These are investments that we're making to make it easier to deploy technology.
So we grow with the TAM, and we take actions to accelerate that TAM. On the network side, it is obviously a share opportunity, a big opportunity for growth, as I just mentioned. And the third way that we grow and meet that 15% CAGR is through sell-up. We continue to see movement of ASPs, and I'll talk about that in a minute; and then there's our increasing silicon footprint for a given infrastructure solution, growing our silicon footprint within that box.
Those are the ways, and we'll talk about them later today as well. I want to jump to the one point about continued increasing ASPs, our ability to grow ASPs over time. We now see that 80% of our Xeon volume has moved up the stack since 2010. That number was 70%, if you recall, at the Investor Day in November last year. So we continue to see this trend.
And you can ask: why is that? There are some really good reasons. As technology continues to become more and more critical to your business, even as technology becomes your business — just like Amazon and Airbnb and Uber, right, IT is their business — anything you can do to make that infrastructure more effective and efficient is directly delivering value to your business. IT is no longer a support function and a nice-to-have; IT is a critical business element.
And so we see folks buying up the stack, trying to get greater and greater benefit out of their infrastructure. The way we then continue to deliver greater and greater value, so that they see the benefits and buy further up the stack, takes many forms. We deliver value in the Xeon processor line in many different ways. We do talk a lot at Intel about Moore's Law, because we love Moore's Law, and that is certainly one of the ways: the ever-increasing performance and energy efficiency of the transistors as we move from process generation to process generation. But there are, as you can see, other very significant ways that we deliver value into our products that the end user wants to consume.
One is integration; we've been doing that for years, whether it's adding more cores to a given processor or integrating the floating point unit long ago. Eventually, as we go through the Altera acquisition, integrating FPGAs will be another great way. That delivers big pops of performance gain. Another is new instructions: as a given workload becomes more and more mature, we'll actually add an instruction to the instruction set, and that can deliver huge performance improvements. For instance, TSX, a new instruction set extension we added in the Haswell product line we announced earlier this year, gives a 6x performance increase for applications that are parallel and memory intensive.
So big data analytics will benefit from that: 6x from one instruction set addition, a 6x performance improvement for that given workload. We also deliver greater value through big new feature innovations. You remember when we introduced hyper-threading into our processors, or turbo mode; these give big pops of performance to the end user. And then we have our custom CPU capabilities, where we'll deliver a processor directly to an end user to accelerate their particular workload.
That again gives them significant value that they recognize and are willing to pay for. So there are many different ways that we deliver value, many different ways that we continue to increase the performance we deliver, processor after processor, and we see that reflected in our ASPs. The last point I wanted to make is that we grow by increasing our silicon footprint within a given system. If you look at 2014, 88% of our revenue was from the Xeon processor product line and 12% from other, where other would be Ethernet controllers, chipsets, and our boards and systems group. As we move forward to 2018, you can see 22% of our total revenue will come from other, and that is really a reflection of our 3D XPoint memory solution that you'll hear about, silicon photonics that you'll also hear about, and our increased fabric presence in high performance computing with Omni-Path, which you'll hear Raj talk about.
Okay. In 2014, we were $14.4 billion. If you look at the total logic silicon TAM, it's a $37 billion market; that's the opportunity to grow our footprint. And if you add in memory, with 3D XPoint now, we look at our TAM as even broader: a $49 billion market that we can grow into.
Okay. So just to summarize, this is what we've been saying for a while: 15% CAGR out through time. As you can see, with the non-enterprise businesses growing at the pace they are, enterprise becomes a smaller and smaller portion of our total business. And the way we do this, as I said, is: within the server market, grow the TAM, accelerate the TAM, grow with the TAM; in network, gain share, as Sandra will tell you; and then sell up and grow our silicon footprint. So that's the summary.
And with that — as Trey said, each of the presenters will save time at the end of their session for Q&A, and then I'll come back at the end for Q&A as well. So with that, I want to introduce Raj Hazra. Raj runs the High Performance Computing Group and the Enterprise IT business as well, but he's going to speak about HPC.
All right. Thank you, Diane.
Good morning, everybody. Good to see some familiar faces and to get to know some of you I haven't met before. My name is Raj Hazra, as Diane said. Today, I'm going to talk about our high performance computing business, which, as Diane mentioned, is one of the growth areas for the data center. I'm going to talk about essentially three things.
I'm going to talk about why this market segment is poised for growth and investment. I'm going to talk about why Intel is best positioned, in some respects, to enable and participate in that growth with our products and our innovations. And then I'll talk about our strategies to take those investments out to the ecosystem and help the ecosystem realize that growth. I'll hopefully leave some time at the end for questions. So let's talk a little bit about relevance and investment.
Obviously, investment follows relevance. And when we look at high performance computing, we tend to start with governments, right? That's typically where most of the investment has been, and typically where at least the visible press comes from. And in the last 2 years, as this chart shows — and this is just the tip of the iceberg in some sense — the mantra of "to compete, you must compute" has become universal, rooted in the recognition that we are now going into the digital age and the world of the digital laboratory.
You no longer want to build things, metal models or plastic models; you don't want to manipulate things in the physical world. You actually want to use the digital tools of high performance computing to do that much more effectively and much more efficiently in the digital domain. That's really what HPC does: it turns everything digital. As you see some of the governments here leading this parade of "to compete, you must compute" and investing, I'm going to start with the U.S. National Strategic Computing Initiative.
The President signed an executive order 2 weeks ago for essentially one of the largest Department of Energy programs, to create the next generation of high performance computing technologies, but also the industry and the industrial base around them. So this is not just about creating that faster exascale supercomputer; it's about creating the capability for U.S. industries and the marketplace to use high performance computing and those bleeding-edge technologies at all scales.
China has been in the news recently, and for a while. They've had the number one supercomputer in the world for 2 years now, which in the world of the Top 500, as you'd call it, is actually quite a rare achievement — holding the number one position, because technology moves so quickly in this space — but they've held it. We are actually part of that system: Intel Xeon processors and Xeon Phi coprocessors enable it. And they're investing massively in that same mantra, creating the basic capabilities to both create HPC and consume HPC.
So there's a lot of investment going into applications, and even into training students to be the next generation of programmers and application developers. Europe has taken a very steady, but maybe a little quieter, approach to creating HPC as a tool for fundamentally changing the way its industrial base competes. They recognized that there is a large imbalance between the use of HPC by large European industries and small and medium businesses versus the creation of IP in the space. So they're funding very significant pan-European as well as country-specific programs to create more HPC capability at the European and national levels; that's really the Horizon 2020 program. And then India, about a year ago, announced a $730 million program with a goal very similar to the U.S. program's: the second-largest national investment in science in India since the space program, essentially to upgrade the entire industrial base to a digital base, to make HPC the available tool. So we see not just tremendous innovation, but tremendous policy and other related efforts becoming tailwinds in the acceleration of HPC innovation and HPC adoption. The second space is commercial, right? This is the space where HPC has been used for a long time, and it continues to be; in fact it has accelerated over the last couple of years, driven by 2 things, and this is a theme you'll hear throughout the day.
New technologies give new capabilities, right? For bleeding-edge applications, whether it's a design problem, some basic physics modeling or a chemistry problem, new capabilities allow for new science, better science. And sometimes, even if you're doing the same thing, new capabilities give you a TCO improvement: given the amount of performance we pack in in every generation, refreshing your infrastructure even if you're not going to run new applications is a tremendous benefit in HPC supercomputing or HPC centers. But generally, it's the former: there are application areas waiting for more compute.
In few other cases do we have the luxury of software and algorithms being so far ahead that they're just waiting, right? My customers tell me: give me 10x, 100x, I can use it. The reason is that in certain sectors of science, applications have now hit the glass ceiling of what they can do with the current generation and are poised to turn over to the next generation of algorithms. Overlaying all of this fundamental change is the notion of data-driven high performance computing. This is big data, right?
It is the notion that it is no longer just about physical models. There is tremendous insight and knowledge available in the data at large, whether it's generated by machines or in the course of other transactions, and insight can be gained by processing it, right? That's an add-on, and we believe a new class of workloads, that demands high performance computing infrastructure for certain classes of data analytics and decision support. Time to solution, and the efficiency of that solution, is a differentiator. And you can see there are people doing studies.
One of these is a market participation survey where people are now looking at the business case of deploying HPC: a dollar of HPC investment, what does it return, whether through efficiency, effectiveness, or just innovation and competitiveness? Right, $3.56. This is a fairly broad-based survey, and we can make it available through Trey and company. But what it says is: when you go digital, when you get more efficient, you become more competitive. It's a better and newer way of doing things at all scales.
The third one — and this one is also very much close to our hearts and in our backyard — is the notion of HPC being democratized, the reach of HPC expanding significantly through what Diane called the cloud architecture for computing, right? Just about 2 years ago, if you went to a cloud service provider and asked about HPC, you would have gotten a "what is that?", right? And if you went to a traditional HPC center, a U.S. national lab, they'd say, why cloud?
Well, that world has changed. The TCO benefits of a cloud-based infrastructure are now being recognized in traditional supercomputing and HPC infrastructure. More importantly, the ability of the cloud to take HPC not as this esoteric, difficult thing but as a consumable service, and reach new users, is tremendous. And so, in just about 2 years, we are seeing 15% to 20% — depending on how you count — of HPC actually being consumed in the cloud, and all the major cloud service providers are looking at this as a fundamentally differentiated workload for their infrastructure. Okay?
So with that, we ask: okay, that's obviously not going away — it's growing and becoming more important — so what does that mean for Intel, right? Maybe this foil is a little backwards, in that you need to read it from the bottom up. But the fundamental message is this: we have participated in the market through processors, in the democratization of HPC over the last 10 years. HPC has been democratized by x86 and Linux, right, the open source movement.
It's like the old mainframes going to industry-standard servers; the same thing happened. But as energy efficiency and performance needs have increasingly dominated the effectiveness and efficiency of HPC usage, and as workloads have become more complex, there has been a need to apply that kind of focus to other parts of the system. It's a full system-level design: what kind of compute you need, what kind of fabric you need, what kind of storage you need — all of those become central to providing the HPC user experience. Fundamental to that are 2 things. One is integration.
Diane talked about integration just a few minutes ago as a fundamental way we continue to deliver more value. Nowhere is it probably more critical than in HPC or hyperscale. Why? Integration drives down cost, integration drives down power, and integration allows for better use of system resources to provide higher capability sooner, okay? Then there's the second part that is critical, which is customization.
As you integrate — and I've had this question from several of you in the past: it's all kind of going there, it's commoditizing or horizontalizing at some level, so what happens? — our design methodology and our approach to the system is that as we integrate, we provide the ability to have targeted customization. What we enable our customers to do is differentiate where they can and where they want to, and not have to pay the burden of lifting an entire infrastructure that is not differentiated, right? I call it an R&D OpEx-optimized customization process. We have applied that to the 3 main areas that are fundamental to continuing to drive HPC forward, and for us to continue to grow with the market.
Obviously, Xeon remains a mainstay: Xeon-based systems are now at 94% of the Top 500; it dominates. It's also used down to the departmental cluster size. In 2012, we introduced the Xeon Phi coprocessor, the Knights family. Its biggest win was being part of the number one supercomputer in the world.
But we announced that we will be shipping the second generation of it next year, not just as a coprocessor but also as a processor. And that has received tremendous uptake from, again, that waiting community of highly parallel algorithms that wanted the next generational jump in energy-efficient parallel performance. The third one is fabric. Based on our Cray and QLogic acquisitions from a few years ago, we've created the capability to build an entirely new fabric called Omni-Path, optimized for high performance computing and integrated with our Xeon and Xeon Phi families of processors and coprocessors.
Across these three, we obviously enjoy a very strong position in Xeon; the other 2 are strong headroom for growth. As you can see from this chart, in just about 2 years we've more than doubled our volume market segment share in coprocessors. This is a sign of 2 things: one, that our value proposition is strong with end users, and two, that the market for highly parallel applications has also grown.
We are starting to see that with fabric as well. When we introduced the Intel version of InfiniBand, we were a generation behind our competition in speeds and feeds, yet we almost doubled market segment share because of the value proposition of it being tuned and available in an end-to-end optimized Intel architecture system, right? So we see these as fundamental needs of HPC systems going forward. We have tremendous technology and the ability to integrate and customize these products for a wide variety of workloads. So what's our strategy, right?
I talked a little about the leadership innovation in the silicon components. We will continue to ensure that high performance workloads run best on Intel. And that's not just a compute statement; it's a statement that the system architecture will run best with Intel, whether it's Intel processors, coprocessors, fabric, silicon photonics, next-generation memory or storage. The second is how we take these innovations and continue to facilitate an ecosystem that can build products and solutions around them and deploy them effectively to the marketplace. About 6 months ago, we announced the notion of a single system architecture that takes the ingredients and the architectural approach for building a traditional HPC system — a simulation-based system — and turns that also into a high performance data analytics solution.
Big data is not all HPC, but there are certain applications in big data, like machine learning, that can really benefit from distributed high performance infrastructure; essentially, it's an awesome HPC workload, right? And so that's what we did. Instead of bifurcating, we said: with these ingredients and this architectural approach, we can build a design surface and use the consistency of that surface to optimize for various kinds of workloads. This is tremendous from a software ecosystem perspective, because the ecosystem does not like fragmentation.
The ecosystem wants consistency and the ability to have targeted optimizations. We've also taken that approach, which made us very successful in the traditional infrastructure space, and extended it to the cloud. How does a cloud service provider not only effectively and efficiently implement an HPC solution in their software-defined infrastructure, but also, how do new software developers — SaaS providers, in that world — deploy their software applications as a service on that infrastructure? And then, how does it all come together so that the end user who uses that service does not notice any penalty paid for being in the cloud, because it is a very performance-sensitive infrastructure? The third one is really about applications.
In many ways, this is one of the most important things we've done for the last 30 years, and we will have to continue to do it. This is about working the ever-growing application base that runs on high performance infrastructure and optimizing it for the latest technology innovations: parallelism, threading, vectorization, better use of integrated fabric, the next-generation memories that get integrated — all of these provide very different, interesting models for applications to use. How do we get that code modernized, right? That has been a huge effort for us. Along with the industry, we have more than 50 parallel computing centers where we work directly with the owners of code, whether it's public-domain code or applications written by ISVs, and help optimize them not just at the single-node level but at the cluster level, so that we can actually expose the goodness of things like fabrics and distributed storage.
We have received tremendous support in this from the ecosystem, and we will continue to do it, because it is fundamental to bringing together innovation from us and our partners and realizing value for end users in the marketplace. And with that, I'll stop and take questions.
All right.
Are there questions?
Yes, David?
On Xeon Phi, can you give us any numbers — attach rate of Xeon Phi to HPC systems, the unit growth or the dollar growth that you've seen over the last year — any sort of parameters that you can help us with?
All I'll say is that on generation 1, we've met our expectations: get into the market, establish the programming model. On generation 2, we are seeing very good demand, again in line with what we'd expect for a second-generation product given the state of the applications. One has to realize that use of the Xeon Phi product is much more determined by how many applications are ready for it. And as we work hard on that, the ramp rate of interest is, I'll say, in line with our expectations.
The other place where the initial Xeon Phi attach, or interest, has been is where, obviously, the performance density of Xeon Phi pays off, number one, and, number two, where applications can be written for it. This is typically the high end of supercomputing. Since 2012, when we introduced the first generation, we have introduced more FLOPS into the Top 500 than any other competing accelerator, right? So today, more FLOPS run on Xeon Phi than on any other accelerator in the Top 500.
And that's consistent with the bleeding-edge energy-efficiency needs of performance, and the fact that applications can be quickly rewritten for that architecture.
Blaine?
Could you
just talk about the roadmap for Omni-Path? You said you were a generation behind with the first product. Do you see yourself catching up? And when would that be?
So what we've said publicly is that the current generation of Omni-Path is really InfiniBand based, and the next generation Omni-Path will be competitive both in speeds and feeds and will be world class in terms of application scalability, based on our PSM architecture.
Yes. Srini?
Thank you. Just a clarification. Do you run into any export controls in your business? If so, to what extent do you think that's impacting your growth?
We've operated under U.S. export control rules ever since they existed, and we plan our business to them. They're always to be looked at, we are always compliant, and we don't see them as a fundamental SAM or business limiter.
Yes, Harlan?
Thanks for the presentation. So on Xeon Phi, on the accelerator part of the market, can you just help us understand what the competitive environment looks like? I understand GPUs, Tesla from NVIDIA, for example, and you've got proprietary solutions. So tell us about the distribution of the mix of accelerators relative to your current market share.
And then help us understand how Intel with Xeon Phi is differentiating relative to these other solutions?
So the shares are actually all available from IDC, and it's a little bit of apples and oranges because we are in generation 2 and most of our competition, specifically the GPGPUs, have been in the market for 7 years. Here's what I'll say to the heart of your question, Harlan: why Xeon Phi? Because that's really the question, right, when you have a choice. The reason you have Xeon Phi is for 2 things. 1 is you have the performance, and energy-efficient performance, that's world class.
Number 2 is you have a programming model that is consistent with Xeon. Therefore, while you have to optimize code for a many-core architecture, you don't have to port it, right? So there is already a very large legacy of code that is ready for Xeon Phi optimization, as opposed to all of that code, which is predominantly written for x86, having to first be ported to a different programming model. So the performance as well as the programming model consistency is what's making people go, yes, that's what I want if I want to accelerate my many-core applications beyond what a Xeon can do. And beyond that, if you have like an hour, I can walk through all the beautiful gizmos inside Phi.
Yes. And that's part of the reason that, as you've seen over the last 3 years since 2012, we've gained share rapidly, specifically on the high end of supercomputing, because of the adoption. No one wants to buy one and not use it, so you have to match the acquisition rate with your deployment rate of applications, and that's been easy.
Matt? Thanks, Raj. Regardless of whether you're doing supercomputer deployment or cloud deployment in your business, roughly what percentage of it is government-procured versus enterprise-procured, and what are the different growth rates within that?
It's hard for me to kind of break out the numbers into exactly those silos. But the government and academic portion is a fairly significant double-digit portion of the business, right? And I know that means it could be anywhere from 10 to 99. But yes, in the ballpark, I would say it's somewhere between 40% to 45% of the business. Now that actually surprises some people, because they'd go, well, I thought that's all government bought, right?
I mean, it's universities and national labs. No, there's the Shells, the financial services. And over the last few years, they've actually been buying not the bigger machines but more of the smaller machines, and refreshing faster because of the TCO benefits.
John? Thanks. My guess is this will be addressed later in the day, but I'd be kind of curious: how do I think about where Xeon Phi stops and a PLD accelerator begins? Do you think the Altera assets will be something you can leverage into HPC, or should we be thinking about that along other sectors within DCG? And then a similar question around 3D XPoint. To what extent can you leverage that new technology into the space?
Or how should we think about NVM within the context of HPC?
Okay. So Rob will do the overall taxonomy and the demystifying, but I'll answer your question for the HPC portion. Given the value proposition of FPGAs, it remains a fairly consistent and small portion, and we will just have a product, hopefully a better product, in that space. It fundamentally does not compete with general purpose, because its value prop is different than general purpose. On 3D XPoint, the only thing I'll say is it's a very exciting new technology and, as we've talked about, for HPC particularly it's a game changer for how we architect the storage subsystem.
And it's a little too early to comment on that, but most people look at it as what it could do for compute, whereas very large scale storage, highly parallel fast storage, is completely transformed and revolutionized by technology like 3D XPoint. And in fact, the only thing we've publicly said is that revolution will first be seen in the Coral supercomputer, which will be a first of its kind, I won't say one of a kind, a first of its kind on the storage subsystem side, along with carrying of course the next generation Xeon Phi processor. So yes, very excited. It's transformational.
Hi, Raj. A couple of quick ones. In terms of what type of Xeon processor SKUs the HPC market typically consumes: is it all high-end E7 type processors, or is there a good mix between E5 and E7? And then specifically on the cloud customer base: whenever you introduce a new generation Xeon processor, does all demand immediately shift over to that new generation product because of the good TCO?
Or is there still a long tail of, call it, N-1 or even N-2 generation type purchases?
Okay. I'll let Jason answer that last one, because that's not specific to HPC. I mean, the cloud service providers won't procure just for HPC, so they have a larger homogenized infrastructure. For the first one, our MP, or four-way and higher, is actually a fairly small portion of the HPC business, because HPC today is generally scale-out DP. And then within that, one of the reasons we love HPC is its performance sensitivity: a very significant portion of it is, as Diane showed in the charts, advanced SKUs.
And when people talk about buying up, HPC has been one of the segments that's consistently bought up and consistently bought as soon as product was available. So they're a big part of our early ship programs. Yes, mostly it's E5.
Yes, maybe one more question, if there's one more.
Rob? Yes. We'll get it for the webcast. Thanks.
Can you just comment on the sizes of the 3 markets, processors, coprocessors and fabric, and the relative growth rates, 4-year, 5-year growth rates, of each?
TAM size? Yes. Rates of the 3 buckets within HPC. So the way to look at it is, it changes with integration. So there is an absolute size for coprocessors, but because we are now offering it as a processor, it simply becomes one more SKU of the expanded Xeon line, right?
And we see a good growth rate for the many-core processor, kind of in line with the higher-end SKUs of Xeon. On the fabric, I'm not sure we've actually publicly said what our growth rate is, but our attach assumption to Xeon through integration is fairly significant, because we believe that in this segment every Xeon that goes out will need to be fabric connected. And our roadmap is strong. So we believe it's a fairly high attach rate. And it scales in correlation with the processor count, because obviously the number of switches and ports you have is a function of the size of your system in terms of compute nodes.
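That scaling can be sketched with generic two-tier leaf/spine arithmetic (this is standard fabric math, not Omni-Path product numbers; the 48-port switch radix is an assumed example):

```python
import math

def fabric_switch_count(nodes, radix=48):
    # Non-blocking two-tier fabric: each leaf switch splits its
    # ports half down (to compute nodes) and half up (to spines).
    down_per_leaf = radix // 2
    leaves = math.ceil(nodes / down_per_leaf)
    uplinks = leaves * down_per_leaf      # each uplink needs a spine port
    spines = math.ceil(uplinks / radix)
    return leaves + spines

# Doubling the compute nodes roughly doubles the switch count:
# fabric_switch_count(1000) -> 63, fabric_switch_count(2000) -> 126
```

The point of the sketch is the linearity: switch and port counts grow in step with node count, which is why fabric attach tracks processor volume.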
All right.
Thanks very much, Raj.
Good morning. I'm Jason Waxman. I run the Cloud Platforms Group here at Intel and it's good to see some familiar faces. We've had the chance to chat in the past. I've been at this cloud business for a couple of years already.
And I can tell you that, if anything, it's continuing to get more complex. And one of the things that I want to try and do today is really help with, I think, 2 aspects. 1 is to distill down what's really driving the growth in the cloud, really get to some of the statistics and separate some of the FUD from what's really happening. And then also to make sure that you understand our strategy and why we're doing what we're doing. So those are the 2 things that I hope we'll be able to accomplish.
One of the aspects I want to make sure I get out of the way upfront is that I'm going to be talking about cloud in 2 different contexts. One is we'll talk a lot about the service provider. You tend to think about cloud as Google or Alibaba. I mean that's certainly one of the aspects we'll talk about with regard to cloud growth. But then there's also, as Diane pointed out, the cloud infrastructure growth, which is the combination of hardware and software that are designed to deliver services and that cuts across the cloud service providers, across comms, across the enterprise.
So I'll make sure I'm talking both and being clear whether we're talking in the service provider space or in the infrastructure space as we go through, okay. So just to start off a little bit about the backdrop, the state of the cloud today. So in 2015, when we look at the cloud, a lot of the cloud in terms of technology deployment from a service provider perspective, about 2 thirds of that is consumer driven services. So you think about search applications, you think about things like Hotmail, you think about Uber or PayPal, that drives a tremendous amount of the compute deployment, about 2 thirds as we look at the year 2015. Now one of the reasons we bring that up is that I think you and a lot of folks out there are always trying to figure out for the industry the cloud dynamic.
Is it a good thing or is it a bad thing? And we tend to spend a lot of time very much focused in on the cannibalization of say the enterprise as one of the key topics. And we'll talk a little bit about that. But when you look at 2 thirds of the actual compute deployment going into consumer, that's really net good for the industry. That's overall TAM expansion.
So we still continue to see that being a strong driver. Now when we look at the next 5 years, we actually see that a lot more of cloud is going to be addressing other types of applications, ones that are still pretty nascent right now. So even with as much talk as there is about private cloud, enterprises deploying their own cloud infrastructure, they're still pretty few and far between. And part of the reason is the complexity of the technology. We actually forecast, though, that by 2020 between 65% to 85% of all workloads will be delivered through a cloud infrastructure.
And again, I say cloud infrastructure, not meaning just cloud providers: we see comms service providers, enterprises and cloud service providers all having their own clouds, and the vast majority of workloads being delivered through that combination of technology, because it's highly efficient and also allows you to rapidly deploy new services. So what we're going to talk about is what needs to change in the industry to help enable that next generation of growth across those usages. I am going to spend just another minute, though, on the cloud service provider growth. As Diane highlighted, we look at this as one of the P&Ls within the Data Center Group, and it has had some tremendous growth. I mean, we're talking over 50% compound annual growth since we started tracking the segment back in 2009.
Now one of the things I want to highlight here, and this is really just looking at the cloud service providers in terms of their deployment of technology. The bottom are what we call the hyperscale. These are the large kind of Tier 1 cloud service providers, the biggest in the industry, both in the U. S. And in China.
And one of the things that's pretty remarkable is that you tend to think of the law of large numbers, that at some point that CapEx and that overall growth is going to peter out. And I think from what we've seen, and this is based off of units, right, so CPU units, we're continuing to see very, very strong deployment of technology. But the other element here is that the green bar continues to grow, and it grows actually faster than the blue bar. And this is sort of the next wave of service providers. It's the Ubers, the Airbnbs, even Apple iCloud, which, when you go back to 2010, wasn't really even on the radar screen.
And that green bar over the last 3 years is growing at a 42% compound annual growth rate, as opposed to a 25% compound annual growth rate for the Tier 1. So what we are also seeing, and I think this is good for the industry as a whole, is diversification: there are new entrants coming into the market. And in fact, I was poring through some of the stats last night. The fastest growing one in the green area is growing 8x in terms of their technology deployment year on year this year. So there are some real up and comers.
There's always going to be new innovation in the cycle and that's going to continue to drive this cloud growth, okay. Now as I mentioned, it's not just about the public cloud service providers, that's certainly a strong element of the growth. But when you look at what enterprises are trying to go do, and this pie chart here shows a survey, that if you were an enterprise IT, where do you want to go deploy your workloads? And this is one of many different surveys, but a lot of those enterprises for various reasons could be security, regulation, control, in some cases it's the economics, they want to deploy their workloads within their own private cloud. And so we're seeing a tremendous amount of demand.
And even though Intel doesn't sell directly to end users, we have relationships out in the industry where those enterprises, multinational companies are coming to us and asking us, can you help out with the growth and the deployment of our public clouds and I'm sorry, our private clouds. And the reason for that is that the technology is just too complex. And I'll highlight on some of those barriers in a little bit, but if we can reduce the complexity barriers, it means more people can deploy clouds. And that really was sort of the genesis behind the idea of cloud for all and we really do mean cloud for all. It means public cloud service providers, it means comm service providers and enterprises and really working with the industry to deliver the right technology to allow more cloud growth.
So I'll talk about that in a little bit, but you can see some of the partners that are already part of the program that we've been collaborating with, working on new reference solutions, working on ease of deployment of the technology and actually also attaching Intel Silicon value of course into those clouds. So let me click to the next slide and talk a little bit about the framework for cloud for all. And this basically is the snapshot for what we're trying to go do shifting to our strategy around cloud, that we want to be able to service hyperscale service providers. We want to service the next wave, which is comprised of, say, SaaS providers that are public cloud service providers, but also the comms service providers. And then we also want to be able to see the enterprises being able to go deploy cloud.
So each one of them has a different set of requirements and there are 3 basic strategies that we're employing to help make sure that they all get what they need in terms of deploying the technology. And I'll cover each of those 3 in the subsequent slides. Now I want to distinguish between what it is that each of these different types of customers are requiring and what they're asking for from Intel. So if you look at a hyperscale cloud service provider, let's just take Google for a second. I'd love to say that Google is calling me on a daily basis saying, hey, could you help me out with my software?
We really could use some improvements. But they're pretty capable folks. And at the end of the day, what they're really looking for is they want a more efficient engine to continue to fuel their data centers. They're looking for every little bit of optimization across the entire infrastructure from the silicon to the system to the data center itself. And so when you look and think about hyperscale cloud service providers, what they want from Cloud for All is they want optimization.
They want us to deliver the best cost of delivering the service, and that's why they come to Intel. The next wave obviously still would like that type of efficiency, but what they're trying to do is be competitive, be nimble, and differentiate against, for example, other service providers in the industry. And they're coming to us for 2 different types of things. Some of it's again the hardware and the technology infrastructure, but the other aspect is they would like better software infrastructure out there in the industry. It's a little too hard for them to cobble together all of the integrated parts that, say, a Microsoft has the ability to go do.
So we're trying to make sure that there is both software infrastructure out there as well as hardware infrastructure to allow them to compete. And then when you look at the enterprises that want to deploy their own private cloud, they really just want something that's going to be as easy and as integrated as it possibly can. In some cases, they may want something like a cloud appliance, but they really want a stack that's robust that they can operate like they're used to operating their enterprise, but it can kind of be stood up in a very easy manner. And we are so far from that today and it needs to be addressed. So those are the 3 different things we're trying to accomplish from a customer perspective.
From a strategic perspective, there are 3 things that we're delivering. The first is that we want to optimize infrastructure across a full range of workloads. The cloud that we used to see back in 2009 was largely web storefronts, very, very simple, very, very focused and homogeneous. The cloud we see today is running everything from high performance computing to big data to lightweight cold storage workloads. And it's that heterogeneity that we have to help them go optimize for.
And then the second thing, and particularly this is for the next wave and for the broad enterprise, we have to make the software infrastructure easier to deploy and more robust. And then what we want to do is align the industry. And this is something that we take quite seriously. When you have the type of market segment share that we have, you can grow by gaining share, but it's a lot easier actually to start growing by creating new usage and accelerating the overall market. And the way you do that is by making investments with partners, aligning and kind of creating these coalitions and driving standards.
And so I'll talk through all three of these in just a little bit, okay? So let me start with how we do the optimization for the full range of workloads. And I want to do a little myth busting here. When people think about the cloud, they tend to think of commodity, because it's just a bunch of white pizza box servers all shoved into a rack, and it looks like a giant commodity. But as Diane alluded to earlier, when you are investing that much in infrastructure, if you're, say, buying 100,000 servers on an annual basis and you can get a 10% performance improvement, that's like cutting 10,000 servers out of your infrastructure.
So those little tweaks make a big difference in terms of what they're delivering. And that's actually why, while people think of it as commodity, they're driving for higher and higher performance. And what this chart shows is we looked at our top 7 cloud service providers, the public cloud service providers, and tried to track, from generation to generation, were they staying at the same SKU level or were they buying up the stack. So it's not just benefiting from Moore's Law, but actually trying to beat the benefit they get from Moore's Law. And what you can see is that green shows how much volume of those cloud service providers is actually buying up the stack generation on generation, to the point where last year over 80% of the volume was actually driven up the stack.
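The back-of-envelope behind that 100,000-server claim checks out in a couple of lines (the round 10,000 is the simple fleet-times-gain figure; holding total throughput constant gives a slightly smaller number):

```python
def servers_saved(fleet, perf_gain):
    # Servers needed to deliver the same total throughput with
    # faster machines, versus the original fleet size.
    needed = fleet / (1 + perf_gain)
    return fleet - needed

# 10% faster parts on a 100,000-server fleet:
naive = 100000 * 0.10                         # simple estimate: 10000
exact = servers_saved(100000, 0.10)           # throughput-constant: ~9091
```

Either way, a single-digit performance gain is worth thousands of servers at hyperscale, which is the whole point of buying up the stack.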
So that's been supporting the increased ASPs that we've seen from cloud service providers. But beyond that, it's also been the reason that we started thinking more about customization of SKUs and how we go optimize for performance. And so you can see that right now all of the top cloud service providers are doing some level of SKU customization with us. That by the way is now starting to move into how can you take some of our unique IP and embed it into the platform in one direction or another. So we're continuing to increase on that customization factor.
Now we started to realize that you might be able to do some set of those engagements pretty well, but at some point you want to scale this model. And you think about the way to go scale that kind of customization, particularly at a silicon level, and that's where an FPGA comes in and is really quite a handy asset. Because now, instead of having a specifically customized ASIC embedded into a piece of Intel silicon, you actually have something that's programmable, like a sandbox, if you will, that allows the cloud service provider to innovate and to optimize. So it again fits in very well with this overall strategy. And then the other thing that's been really great about optimizing is that the cloud service providers really just want to see the best silicon.
They just want to see the best platform. And that's why they've been early adopters of our solid state drives. They've been great network customers. And our overall share of wallet in terms of silicon footprint into the cloud service providers is about 2x attach rate versus the rest of the business because they've been early adopters for our next generation technology. We expect to continue to see that.
It's already showing that they are also the parties interested in FPGAs, and then looking at Xeon Phi and other technologies as well, silicon photonics. So they're a great test bed and early adopter for us. So this is bringing together how we optimize the silicon footprint and work with them to deliver the best technology. The second aspect here, again for the broader part of the market, is how do we make that software infrastructure easier to deploy. And there's 2 pieces to that.
The first is making sure that the technology, the stacks, are maturing. So look at a project like OpenStack, and you may be familiar with it. I mean, it's a great idea. It was born out of open source. There's been a lot of momentum and hype around it for years.
The problem is it's this loose confederation of piece parts. And if you go to an enterprise, or even to a lot of cloud service providers, it's just not robust enough. It doesn't scale. I mean, if you're a cloud service provider, you need it to scale to thousands of nodes. Today, it only scales to a couple of hundred.
And you want it to be automated. You can't keep throwing people at the problem, otherwise you're not really delivering a cloud. The whole economics from cloud is that it's highly automated. So those are the problems that we want to address in the software stacks. We work with various partners and we also have a number of Intel engineers that are working on those problems.
But then obviously there's the part that's important to us in supporting our business, which is attaching the silicon footprint. So as we look to the 3D XPoint technology, as we look to Xeon Phi or other accelerators, having a cloud software infrastructure that's aware of that technology is going to be extremely important. You can kind of think of cloud software, I like to think of it, like an air traffic controller. You have a whole bunch of applications that are sort of floating around, and you've got to figure out what's going to be the best fit to go land on that hardware infrastructure. The more intelligent that is, the better you're going to be able to utilize that infrastructure.
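The air-traffic-controller idea can be sketched as a few lines of placement logic (a toy best-fit scheduler; the names and the single "cores" resource are invented for the example, and real cloud schedulers weigh many more signals such as topology, accelerators, affinity, and failure domains):

```python
# Toy "air traffic controller": greedily land each workload on the
# host where it fits most tightly (best-fit), so free capacity
# isn't stranded in small unusable slivers.
def place(workloads, hosts):
    # hosts: {name: free_cores}; workloads: [(name, cores), ...]
    placement = {}
    for wname, cores in sorted(workloads, key=lambda w: -w[1]):
        candidates = [h for h, free in hosts.items() if free >= cores]
        if not candidates:
            placement[wname] = None      # no capacity left: queue it
            continue
        best = min(candidates, key=lambda h: hosts[h] - cores)
        hosts[best] -= cores
        placement[wname] = best
    return placement
```

The better the controller understands the hardware underneath (CPU generation, memory tiers, accelerators), the higher the utilization it can squeeze out, which is exactly the attach argument being made here.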
And obviously, we want to make sure it does it best on not just our Intel CPUs, but the full range of the portfolio. So this is the key to the second piece of our strategy. And then the third is something that we do quite well and has always been kind of a key component to our strategy, which is enable a good healthy horizontal ecosystem. And I want to make sure I leave ample time here for questions. I'm going to just sum this up pretty easy.
We start with reference architectures. These are the recipes. We want to make sure that if you want to deploy a cloud, whether that's an OpenStack cloud or a container-based cloud on Docker, there's a proven recipe to do that at scale, and that's the reference architecture. You then take that reference architecture and you enable industry standards. So we bring together end users, we bring together technology providers, and we make sure that the standards that they want to use will be aligned with those reference architectures.
And that enables choice in the overall ecosystem. And then the last is through our connections, we bring together various parties. And we broker some of these deals. There are companies that you normally would not see standing next to each other if it weren't for Intel bringing them together. And we accelerate that route to market by taking that reference architecture, combining it with hardware from an OEM, maybe turning it into an appliance, and then having value added resellers or system integrators accelerate that deployment across cloud service providers and the enterprise.
So again, just to reframe our strategy: it's about optimizing across the silicon portfolio and across the range of workloads, making the software easier to deploy so we can get the next 10,000 clouds out there in the industry, and then making sure the industry is aligned so that we speed that route to market. And with that, I want to see what kinds of questions I can answer for you. Anything about the cloud industry, the trends or what we're doing? You want me to just repeat the question? Right here.
The prior presenter said that most of the deployments, unit-wise, in HPC were scale-out DP. Is that the same for your group? So when they buy up the stack, are they actually buying just higher-end two-socket SKUs?
Yes. Yeah, exactly. It's higher end Xeon E5 predominantly. And when we do customization, it's sometimes tuning the performance, power and other parameters to help them even get more performance than we would in a broad mass market SKU, but it's same product line.
John? Jason, I know this has been sort of an investor concern for multiple years, but I wonder if you could just give us the update on the ARM camp, especially within the hyperscale customer base? To what extent is true optimization for them actually building their own silicon? And kind of where do you see your value proposition
to prevent them from doing that? Yes. It's a great question. And so one of the things that the ARM camp has going for it is that it certainly has numbers of vendors, right? So they're all sort of jockeying for position and we keep a close eye on them.
We're now in multiple years of kind of watching where it is that they're going to try and get early deployment. And we expect, and this is happening, that there'll be evaluations, right? Someone's always going to look and kick the tires on something new. So there are always press articles about the next deployment. I think our goal is to try and make sure that investors ask, are they real deployments or are they sort of the proof of concept eval type of thing?
There's certainly a lot of evaluations. So far, we haven't seen any big or broad deployment. And to your question, there's 2 pieces to it. 1 is what's driving this? And then second, what are we doing about it?
It goes back to what is it that the cloud service providers prefer and what they buy. So if you look at that chart that I mentioned earlier about why they buy up the stack is they really do value performance. And I think part of the original ARM hypothesis was that you could win on low power and low performance. But the reality is that that's opposite of where they're sort of buying. So I think part of the challenge is sort of maybe targeting the wrong sort of customer value angle.
From an Intel strategy perspective, our goal is to make sure that we deliver the best value across every workload that we possibly can. And we just literally don't leave any seams. And that's why we have Atom-based SoCs to be competitive there. In the mid market, we've got a whole range of Xeon E5 SKUs. And so every time I see an ARM benchmark that comes out, I look and I say, you know what, I have a SKU that's lower cost, higher performance, better power.
And I think we have something that can go meet that customer's needs. And then the last piece of it is just working really, really closely with the customers in this segment. I mean, I know and I respect the fact that they're going to go look for alternatives. That's what makes them great companies is they're always sort of innovating. But we just got to stay fast.
We have to stay focused and make sure that we're delivering a better value at the end of the day.
Thank you. Following up from your point just now, you offer Atom. What percentage of revenues in the cloud is actually Atom? Is it a very small percent? Is it beginning to become significant?
It's extremely small. I would say rounding error. Yes, but like 1% or 2% maybe. And that's in my segment.
Question on the FPGA. You talked about customized workloads at CSPs and the use of FPGAs. Can you talk about what the current usage of FPGAs is? What's the attach rate? And then as you integrate Altera products, where do you see that going?
Yes. So I will say upfront that I'm not going to make a projection on the forward looking stuff with Altera, for a lot of different reasons. I will tell you that we've had an ongoing packaging effort for a while, even before we pursued the acquisition. So I can talk a little bit about that. Currently, the attach rate for FPGAs in cloud is extremely small.
There's been a lot of evaluation and kind of kicking the tires. I think we're now on the cusp of a couple of cloud service providers that have usage models that they're interested in ramping in volume. So based on what I see in the industry, I expect over the next, say, couple of quarters to maybe next year to really start to see that ramp in any meaningful volumes. And even then, it's still relatively small. The usage models for it are where a customer has a workload that they can really, really tune something out of.
So one of the best examples right now is machine learning. And this is based on a public paper: Microsoft did a great job of comparing FPGA usage on a convolutional neural network workload to a GPGPU, and they found an order of magnitude better performance per watt, and that's why they're heading down that direction. But it's those types of workloads, where you're doing a search algorithm or you're looking for images, a very high performance, repeatable action, that's where someone's likely to invest right now in an FPGA. Is it 2020?
Yes. Thank you very much. So I will say, and we have put this out there, that we see this as an opportunity that plays out over several years. But by 2020, when we look at the workloads that could be running in the cloud, we think as much as a third of the cloud market could be utilizing an FPGA.
Thanks. Just wanted to come back to some of the custom SKUs that you're doing for the cloud service providers. Can you give us some sense of what types of features you're customizing? Is it core counts, IOs? What are they looking for?
And then just from a SKU management standpoint, obviously, as you now talk about 39 different SKUs, how do you manage that through the manufacturing network? Thanks.
Yes. So the answer on what knobs we turn is yes to all of them. We started this a couple of years ago with the idea that when someone has a single purpose usage, they don't necessarily have to order off the menu, right? When someone's a big enough customer with a run rate business, we can go do that type of customization. So it will be cores, it will be frequency, it will be power, it could be reliability, and we'll turn all of those different parameters.
We'll also turn off IOs in some cases if they're consuming power the customer doesn't utilize. So we will literally look at pretty much anything we could reasonably do to tweak it. And part of the reason we're able to do that is that when you look at how we produce silicon in general for the mass market, we have to make sure it can run anywhere. It's got to be highly reliable. I always say it's got to be able to run in a data center on top of Machu Picchu, right?
That's the type of range, and it has to run any workload, like a high performance supercomputer on top of Machu Picchu. When you work with Google or with someone that has a very dedicated application, they control their data centers, they know their workload, and you can get much more dialed in, kind of what I call low hanging fruit. You can find ways to further optimize that you might not be able to go do in a mass market SKU. Now with regard to your question about how we manage the number of SKUs, obviously, we have to make sure we're not just cranking them out for low volume, because then that would get unmanageable.
So we do have our hurdles for what we think makes sense to go do customization. We also obviously have a very good factory network. So our technology manufacturing group is tremendous in helping us go do that. But so far, we're okay. I do think though, if we were to try and do this for, say, 100 companies, it would get to be unmanageable.
And that's where, again, having something like an FPGA or other parameters that allow people to innovate will allow us to scale much more broadly.
Hi. A question on your relative ASPs versus enterprise. First of all, is it higher or lower than enterprise? And how do you see that trending over time?
Yes, it is lower than enterprise. The growth rate though of the ASPs is faster than the enterprise. So by deductive reasoning, they're converging over time, but it's still lower.
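As a back-of-envelope sketch of that convergence (all figures below are hypothetical, not numbers from the presentation), a lower starting ASP with a faster growth rate closes the gap on compound growth alone:

```python
def years_to_converge(cloud_asp, ent_asp, cloud_growth, ent_growth):
    """Count full years until the faster-growing cloud ASP
    catches the enterprise ASP (both compound annually)."""
    years = 0
    while cloud_asp < ent_asp:
        cloud_asp *= 1 + cloud_growth
        ent_asp *= 1 + ent_growth
        years += 1
    return years

# Hypothetical: a cloud ASP of $400 growing 8%/year versus an
# enterprise ASP of $600 growing 2%/year.
print(years_to_converge(400, 600, 0.08, 0.02))  # → 8
```

With those made-up rates the gap closes in under a decade, which is the shape of the "converging over time" claim, not its actual timeline.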
Here and then we'll go over.
Just had a quick question on the sell-up. You talked a lot about sell-up. If customers are buying up in 2015, why didn't they buy up in 2014? Is it coming out with new SKUs? Is it some other bottleneck?
Actually, they did buy up in 2014 as well. That chart shows the green, and you can see that the same ones that were green in 2014 were in 2015 as well. So literally, if you go back to 2010, they were buying the low power, low performance SKU, then they went to mid range, and then they went to the high end. Then they went to customization level 1, and now they're on kind of customization level 2. Why don't they make the big jump initially?
Yes. That's a very complex thing. And I think it sometimes comes down to, believe it or not, the fact that everybody has behaviors. You have procurement departments. You have systems in some cases.
So, another thing is, I may have designed the system around a certain set of parameters, and so there's only so much I can get out of it in the next generation. Then when they redesign the system, they go, hey, maybe I should redesign it for a different power envelope, or redesign it for something higher performance. So it's both cultural and probably technology related.
Obviously, with the push-out of 10 nanometer and all the things that BK has talked about with the cadence of node upgrades, etcetera, there's been some change. I'd like you to talk a little bit about the upgrade cycle cadence of the customers in your business versus the node upgrade cadence in the factory network, and maybe how much variability there is across the upgrade cycles from big customers to small customers, or how it varies in your business.
Yes. So I'll try and talk a little about some of the dynamics there. One of the dynamics, which I'm glad you brought up, because I didn't, so thanks for prompting me on it, is that they are early adopters. They are the first out of the gate to toggle over. There's one large cloud service provider that literally, within a quarter of us giving them silicon, toggled over to the next generation, which is unheard of when you compare that to enterprises.
So it's nice in terms of helping us get our technology ramp. And I think if you looked at our overall DCG business, you'd see that the ramp to new technology is accelerated. The other thing that we've done, in addition to customization, is early ship types of programs, meaning that when certain customers are ready, they're able to go ahead and adopt these things. So that's part of the dynamic, and it's still consistent with: give me the fastest thing, this is competitive advantage to me. If I can be first with Intel over the next service provider, this is meaningful to us.
So it's been a good overall dynamic for us, I think. Now with regard to upgrade cycles, it's quite a bit different. In an enterprise, they look at some point and say it's time to go retire these dusty machines, they're consuming too much power, and you put them out to pasture and they become lawn art or whatever they do. In the cloud service provider space, they generally tend to leave them in place.
And so it kind of depends on the actual cloud service provider and how they go do it. But for example, if you look at some of the large ones that do infrastructure as a service, they'll just keep leaving that stuff in there, and they'll have legacy customers doing a certain EC2 instance as long as they possibly can. And they manage those prices to sort of continue to utilize that stuff until it fails. So it's really more of continuing to add on. And then when you look at that with our dynamic, they're just rapidly moving to the new stuff.
As soon as that next stuff's out, the other stuff's going to start to become brown bananas. So we got to go and make sure that we're not too far lagging on it. And I think that's really the overarching trend is that they want to be first to node and they want to ramp over quickly and make sure that they do that transition in a way that they don't buy a lot of kind of brown bananas.
All right. Maybe one last question.
Jason, a lot of us in the audience try to build cloud CapEx models by looking at the large hyperscale guys and aggregating up their publicly disclosed CapEx. How do we think about the spend with Intel within that? Because clearly, I think Diane has made the argument that buying up the stack allows cloud players to actually spend less on CapEx. So if we look at, say, Google's dollar CapEx, what percent of that did you get 5 years ago, today, and 5 years from now? Not customer specific names, but just in general, how do we think about that trend within the hyperscale business?
Yes. So roughly, and I always like to cite James Hamilton from Amazon as a great guide. If you don't follow his blog, I highly recommend going there and looking at it. But I'll use some of the rules of thumb that he uses in his pie chart. Basically, for every dollar that large cloud service providers spend, about 50% of that goes to equipment, about 25% of it goes to power and facilities, and then the other 25% is sliced up amongst a whole bunch of other types of things.
And so that 50%, I think, stays relatively constant. I think if anything, it may have increased a little bit, maybe going up to closer to 60%, 65%. And the reason I say that is on the facilities side, they're getting really good at cheap facilities. It used to be that they built a data center. You're going to do a data center tour later.
Now it's basically like they pour concrete and roll in trucks. And so a lot of the overhead of the facility pieces is coming down, and they're spending it on hardware. And I think then they ask, where is the value add to go spend in hardware? And our rough ballpark of that share of wallet can range anywhere from 20% to 40%, depending upon the full portfolio that we're selling.
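Those rules of thumb can be turned into a rough model. The sketch below combines the roughly 50% equipment share Jason cites from James Hamilton with the 20% to 40% share-of-wallet range he gives; the function name and structure are mine, purely for illustration:

```python
def implied_silicon_revenue(reported_capex, equipment_frac=0.50,
                            wallet_share=(0.20, 0.40)):
    """Rough range of silicon revenue implied by a cloud provider's
    reported CapEx: about half of CapEx goes to equipment, and the
    share of that equipment wallet runs roughly 20% to 40%."""
    equipment = reported_capex * equipment_frac
    return tuple(equipment * share for share in wallet_share)

# For every $1B of reported CapEx, the heuristic implies
# roughly $100M to $200M of silicon spend.
low, high = implied_silicon_revenue(1_000_000_000)
print(low, high)
```

The point of the exercise is the range, not precision: the wallet share moves with how much of the platform (CPU, networking, storage, boards) the vendor sells into each account.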
So those are some heuristics that I think you can use that might help you out. The one thing which I will tell you, and I think we've had some of these conversations, is that some of the things we've seen published in the industry about CapEx flattening or coming down, we don't necessarily see. And it may be for 2 different reasons. It might be that some of those reports don't capture the full breadth of service providers. I think they were focused in on a couple of big ones, versus we see, like I said, that company that went 8x this past year, and they may not see that.
And I think the second is that there may be a shift in the dynamic about kind of the other plant and equipment versus the stuff that does the meaningful work, which is where we tend to participate. Okay, great. Thank you.
Thanks, Jason.
Good morning again. My name is Sandra Rivera and I run the networking business for Intel. And I've been in the communications networking industry for a long time. And I can say that now is probably the most exciting time to be in network infrastructure. It is a market that's at an inflection point where we have a huge opportunity for growth at Intel and where we have a strategy to win.
So I'm going to begin by talking about what I mean by the network. When you look at all of the devices that we use every day, out to the data center, which serves up a lot of our content, in the middle is the network, which is made up of thousands of points of presence or sites, hundreds of different node types and protocols, and dozens of different vendors. And that infrastructure up until now has been built on purpose built, fixed function appliances that do what they were deployed to do very well, but can't be adapted to do anything else, certainly not very easily. A close to home analogy would be running your e-mail on one computer, running your spreadsheets on another computer, running your PowerPoint presentations on a third computer, and running your word processing on yet another computer. That's a very inefficient model, but that's a pretty good description of how the network is built out today.
The infrastructure does represent a huge CapEx investment. The network service operators spend roughly $160 billion a year on equipment. And of that, there's about $18 billion of logic silicon TAM. So it's a tremendous opportunity for Intel, and where we're focusing a lot of our investment for growth. So I wanted to set the context of the business problem that we are helping to address.
The service providers, the Vodafones, Verizons, AT&Ts and China Mobiles of the world, run roughly $2.2 trillion of services business. Most of that historically has been voice business. But with more and more users coming onto the network, and more and more devices connecting to the network, you see this explosion of data traffic, particularly if you take into account the different usage models that drive more data, like social media or video processing. What you see, however, is that the ability for those service providers to extract revenue from that network is really flattening out over time. The value proposition of data versus voice, in terms of what consumers and enterprises are willing to pay, is creating what we call an upside down business problem, where at some point in the not too distant future it will cost more to deploy, operate and maintain the network than you're able to extract in revenue from those networks.
So they know they need to do something dramatically different. And they believe that the answer to that problem is to adopt many of the principles of data center and IT infrastructure and cloud that Jason just talked about: take advantage of volume server economics, of virtualization technology, so you can pool the resources and assets across a number of workloads, and of course of cloud technologies, but also cloud business models. So what they want is a much, much better approach to reaching the economies of scale to continue to build out this network capacity and capability. They need a much better total cost of ownership, better OpEx and CapEx efficiency. If you go back to my analogy of having all of these purpose built boxes for each of those workloads, that equipment can sit idle for much of the time.
In the access network, as a matter of fact, they can sit idle for as much as 70% of the time. So it's a very inefficient model, where they're looking to get a lot better asset utilization by adopting these principles of server virtualization and cloud. And not only do they want to manage their bottom line and get better CapEx and OpEx, they also want to grow their top line in terms of services revenue, and the ability to deploy new services and capabilities from which they can then extract more revenue. So in the network infrastructure space, what we're seeing is the evolution of new technologies built on what we know has worked in cloud and data center, and that is virtualizing the appliances. So now, rather than having a dedicated appliance to run the workload, you virtualize it and instantiate it in a virtual machine running on a standard high volume server.
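A toy model makes the utilization argument concrete. Assuming appliances that sit idle 70% of the time, as cited for the access network, and pooled servers run at a 60% target utilization (my number, for illustration), with perfectly poolable load:

```python
import math

def pooled_servers_needed(n_appliances, busy_pct, target_util_pct):
    """Dedicated appliances busy only busy_pct of the time carry a
    total load of n_appliances * busy_pct / 100 server-equivalents.
    Serving that load from a shared pool run at target_util_pct
    utilization needs far fewer machines (this assumes the load can
    be pooled freely across sites, which real networks only
    approximate)."""
    total_load = n_appliances * busy_pct  # in percent-units
    return math.ceil(total_load / target_util_pct)

# 100 dedicated boxes, each idle 70% of the time (30% busy),
# consolidated onto servers run at 60% utilization:
print(pooled_servers_needed(100, 30, 60))  # → 50
```

Halving the machine count under these assumptions is the CapEx and OpEx argument in miniature; real savings depend on how well traffic peaks across sites actually offset each other.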
This is called network functions virtualization. The other principle that's really taking root in the industry is software defined networking: the ability to automatically provision the network and the underlying resources, built again on server technology, to meet the needs of the different applications and workloads. So NFV, we believe, will do for networking what virtualization did for the server industry, which is unleash a lot of new innovation, as Diane talked about: really make it easier for innovators and developers to consume the technology and deploy a lot of new usage models. So we're seeing this embrace of server technology across the entire infrastructure, in the wireless access, in the core of the network, and out into the enterprise networks as well.
And in the wireless access space, as I mentioned, there's a lot of underutilization of assets, because those network nodes are dedicated to a particular function. One of the examples I wanted to draw out here is China Mobile, who is building out their 4G LTE network, and they're representative of many Tier 1, Tier 2 and Tier 3 service providers across the globe. Today, if you look at a cell tower, at the foot of each of those cell towers is a base station. And it's the job of that base station to transform radio signals into digitized data. Those base stations sit, as I indicated, idle for much of the time: if you're in an urban area, everybody goes home at night.
It's really sitting there idle until you come back to work in the morning. Or similarly, if you have a base station in a rural area and everybody has left and gone to work, it goes underutilized. And what we've done with China Mobile over the last several years, and Alcatel-Lucent, which by the way is demonstrating here this afternoon, you'll see some pretty slick demos, is that we've virtualized the workload in that base station to run on a standard server platform. And you get all of the economies of scale in terms of the server capability. You have the opportunity to innovate on top of that server and take advantage of a lot of the application developers and tools that are out there.
And you're able to have an asset now that can be a shared resource across many different physical areas, because now you're pooling those requirements in this cloud capability that you've created in that base station. In the core of the network, the core network running all of this data traffic, delivering all of the content, securing, managing and storing all of our data, requires high performance computing platforms. And we've seen AT&T, which is clearly one of the market innovators and a Tier 1 global service provider, make a statement that by the end of this year, 5% of their network will be virtualized. They're beginning to embrace this approach of using much more high volume server technology versus purpose built, fixed function platforms. And they've also made a statement that by the end of the decade, by 2020, they expect that 75% of their network will be virtualized, not only providing much, much better CapEx and OpEx and better asset utilization, but also this platform of innovation, this ability to innovate new services and deploy them much, much more quickly than you're able to do when you're deploying in hardware.
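Those two AT&T targets imply an aggressive ramp. As a quick sanity check (the arithmetic is mine; AT&T has only stated the endpoints), going from 5% virtualized at the end of 2015 to 75% by 2020 means the virtualized share grows roughly 72% per year:

```python
def implied_annual_growth(start_share, end_share, years):
    """Compound annual growth rate of the virtualized share needed
    to move from start_share to end_share over the given years."""
    return (end_share / start_share) ** (1 / years) - 1

# AT&T's stated endpoints: 5% virtualized (end of 2015),
# 75% virtualized (2020), i.e. a 15x increase over 5 years.
print(round(implied_annual_growth(0.05, 0.75, 5), 3))  # → 0.719
```

That 15x in five years is what "innovation at the speed of software" has to deliver to hit the stated goal.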
This is innovation at the speed of software. When you look at the enterprise network as well, one of the things that we do in our communications networking portfolio is use carrier grade as a design point, which means you have high levels of security built in, high levels of reliability, low levels of latency, and of course high performance packet throughput. All of that capability waterfalls down in terms of value that the enterprise also sees in running their own networks. One good example, of course, is the financial services industry, where if you can take out any of the latency in a transaction, you're talking about many millions of dollars at stake in high performance, low latency trading. So they have all the same types of requirements that we've built into the communications network, the wide area network, and we're able to bring that value proposition out to the enterprise as well.
So we have been investing for growth. Diane has been investing in our business for a number of years now, where we have opened up a market opportunity. I would say it was somewhat prescient that we saw we could use the same strategy in our industry, the communications network industry, that worked so well in terms of unleashing innovation and growth in the data center and the cloud business. And over the last 6 years, we have been outpacing the market in terms of growing share, as Diane talked about, in the communication network infrastructure industry, and we see that continuing out in time. The network compute trends really favor a lot of where Intel architecture has a leadership position. We see that the market needs to continue to grow to keep up with all that traffic, all the demand, all the users, all the machine to machine, the Internet of Things. So this is an industry that's somewhat recession proof.
We continue to build out the infrastructure to service all of those requirements. But the approach that uses fixed function, not only in the platform but in the silicon inside the platform, is a very expensive proposition and takes a long time. And you have to know that you have a lot of volume on a particular workload to make it worthwhile. So we see the industry shifting from ASIC technology and purpose built technology and network processors to CPUs, to general purpose processing that can be harnessed to run those same workloads in a much more cost effective manner. We also see the convergence of workloads.
If you look at any networking workload, it runs four different types of processing: application, control, packet and signal processing. The opportunity to run all of those workloads on a single architecture, on Intel architecture, really brings benefits to the developers, to the solution providers, as well as to the end users, because you get to take advantage of a common tool chain and a broad availability of developers, which lowers your development cost, lowers your development risk and speeds your time to market. So that workload convergence onto Intel architecture is a huge benefit for all the members of the value chain. As I indicated, we have been investing for growth. We've doubled our investment in this segment over the last couple of years.
And just at the end of last year, we crossed the $1 billion mark, and we're on a nice trajectory for this year's results as well. So, what is our strategy for transformation of the network? There are three pillars that we are driving out into the market. The first is a deep commitment to open source and open standards. We see this as a way to unlock much more innovation, increase participation in the market, and have a much broader approach in terms of the level of services and capabilities that are brought to the market.
So we have historically been a huge contributor to open source and open standards. Intel actually is the number one contributor to the Linux kernel. And in this market, what you need is a carrier grade Linux distribution that takes into account unique capabilities in terms of serviceability, scalability and reliability. We also make those contributions to the Linux Foundation. But in addition to that, and Jason touched on this a bit, there's what we're doing for manageability and orchestration: that intelligent placement of application workloads onto the physical substrate that runs the infrastructure.
And that's really where we make contributions into OpenStack. Of course, that includes not only the enterprise capabilities and features that are required, but also the carrier grade features, and something we call enhanced platform awareness, which is again that ability to take advantage of the underlying silicon and software ingredients that make the applications run better, faster and smarter. In addition, we are deeply invested in, and a big believer in, a broad ecosystem. Just 2 years ago, in fact, at the Intel Developer Forum of 2013, Diane announced the creation of the Intel Network Builders community. At that time, we had a couple of dozen member companies and a shared vision around this transformation that was happening in the industry.
We have grown that now to over 170 members, 173 to be exact. And that community is made up of ISVs, software vendors, OEMs, ODMs, system integrators, and ingredient and solution providers of all types. But in the last 9 months, we have extended our focus to also add end users into that community. And the end users see benefit in being in our community because they can help direct and focus the investments and activities of that ecosystem to solve the business problems and technical challenges that they're facing. And so in that end user community, we have companies like Telefonica and America Movil, NTT and SK Telecom, and China Mobile.
So these are global market leaders trying to drive faster innovation and focused investment. And just in the last 3 weeks, we in fact had our first enterprise end user join that community as well. Nasdaq joined Intel's Network Builders community, again to try to focus the investments around their specific technology and business challenges. One other value statement, in terms of the benefit the community derives from being a learning community, from sharing best known methods and approaches and solving these problems together, is that both Cisco and Ericsson joined Intel Network Builders earlier this spring. Cisco, of course, is the number one enterprise networking company in the world, and Ericsson the number one wireless infrastructure company in the world.
And then lastly, one of the things we know is that silicon developments are long and expensive campaigns. Rather than just respond to our direct customers, we in fact decided several years ago, many years ago now, to engage with the end users, so that we can be more informed, anticipate their requirements, and build those into our roadmap, our features and our silicon development investments. And so that started out as technology feasibility studies, proofs of concept and early lab trials. But now that is moving out to field trials and some of the early commercial deployments that we're seeing around the globe.
In fact, just in 2015 alone, we see over 20 new commercial deployments across the globe from communications, cloud and enterprise service providers, as well as more than 50 proofs of concept and trials continuing. So the momentum continues to increase. And of course, built on top of all this is our technology leadership as the foundation. We leverage and borrow everything from our data center investments, our cloud success, our innovation in all things server. We then build on top of that capabilities that are unique to communications networking: unique IOs, more packet processing capability, cryptography for security applications, and compression for storage.
So if you look at just some examples of that, packet processing is one of those workloads that is essential to any communications networking application. With our optimizations in Open vSwitch, which is really how you switch from virtual machine to virtual machine, and our Data Plane Development Kit, which is a set of software libraries built on top of standard server technology for high performance packet processing, we've seen a 12x improvement in packet processing performance just from those two innovations. We also are building more into our roadmap, scaling up through Xeon and down through Atom, and in our SoC capability, by having more integration: integrated Ethernet, integrated IOs, integrated content processing capabilities. And then, as I mentioned, from a software stack perspective, we also are innovating around the requirements for a carrier grade cloud stack, requirements that give you better serviceability, better reliability and better security.
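The scale of the packet processing problem those libraries attack is easy to quantify. A sketch using standard Ethernet framing math (not figures from the talk): at 10 Gb/s line rate with minimum-size 64-byte frames, each frame also costs 20 bytes of preamble and inter-frame gap on the wire, which leaves a single 2 GHz core only about 134 cycles per packet.

```python
def line_rate_pps(link_gbps, frame_bytes=64, overhead_bytes=20):
    """Packets per second at Ethernet line rate; every frame costs
    frame_bytes plus 20 bytes of preamble + inter-frame gap."""
    bits_on_wire = (frame_bytes + overhead_bytes) * 8
    return link_gbps * 1e9 / bits_on_wire

def cycle_budget(cpu_ghz, pps):
    """CPU cycles available per packet for one core keeping pace."""
    return cpu_ghz * 1e9 / pps

pps = line_rate_pps(10)                  # 10 GbE, 64-byte frames
print(round(pps / 1e6, 2))               # → 14.88 (Mpps)
print(round(cycle_budget(2.0, pps)))     # → 134
```

A budget that small is why these frameworks lean on polling, batching and avoiding per-packet kernel transitions rather than a conventional interrupt-driven network stack.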
So what's on deck for the future? Well, the innovation never ends, and 5G represents the true convergence of computing and communications. What we expect to see in the 5G timeframe, which will build out towards the end of this decade, in the next 5 years, is an order of magnitude increase in the number of connected devices. We're sitting at about 5 billion connected devices today, and we expect this to be 50 billion connected devices with the Internet of Things. We see a 50x growth in data rates.
Again, more users, more devices, more data, more traffic. And the expectation in terms of latency reduction is 10x. So this requires more innovation, more collaboration, more broad market participation. And we're beginning those efforts now, today, built on a foundation of our 4G LTE technologies and on the foundation that NFV and SDN give us, and partnering with industry leaders like SK Telecom, who is building out infrastructure in fact with the Winter Games in mind, right? So in 2018, we see kind of this compelling opportunity to demonstrate some of this new technology and capability, as well as with a market leader like Nokia that is building the solutions for these types of service providers.
And we've announced strategic collaborations with both of these companies publicly. We have many more ongoing that haven't been announced. But there's that continued innovation in technology to address those latency requirements, those higher data rates, the massive antenna requirements that we'll need in small cells and in big cells that I spoke about earlier. The investments we've made form foundations, in terms of some of the acquisitions we've done: the LSI Axxia assets to address that macro base station capability, or the small cell assets that we acquired with the MindSpeed acquisition. All of that helps to build out the capability to address these heterogeneous networks going forward, this combination of 4G and 5G technology, and really accelerate the path to our ability to address the 5G innovations.
So just to wrap: the network infrastructure is undergoing a massive transformation, moving from fixed function, purpose built appliances to an approach that embraces general purpose processing, server virtualization and cloud technologies. We have a tremendous growth opportunity at Intel, because what is valued in that market inflection point, in that transformation, are all the areas where we have strong leadership. And we're investing and increasingly playing a role in today's network as well as the networks of tomorrow.
Okay.
All right. Questions?
Thanks for the presentation. So for the three segments that you talked about, wireless access, comms infrastructure and enterprise networking, as they move from fixed function, purpose built to NFV, can you just give us some examples of what these boxes are? What are the purpose built functions that are going away?
Sure. So, I did describe the wireless access network and the base stations. And again at the foot of each of those towers is a base station that lives in this hostile environment, right? It's outside. It's hot.
It's rainy. It's cold. So rather than have all of that sit out there and be dedicated to just the geographic area serviced by that radio tower, what the service providers and the solution providers are doing is pulling that back in and creating an edge cloud, right? They're pulling all of that functionality into the service edge and creating cloudlets and data centers at the edge of the network, which now let you really share that pooled resource across a much broader geographic area, more applications and more users. So that's an example there.
In the core of the network, what you see in the LTE environment is that you have dedicated boxes for packet gateways and signaling gateways and multimedia nodes. And all of that then collapses into one physical set of infrastructure running on server technology. So you don't have these dedicated boxes; you have virtual machines running on a server platform.
Yes. David?
Thank you. Altera, of course, has a huge presence all over the network and all of those customer relationships. With Intel plus Altera together, are there any new capabilities or particular synergies you get in addition to what you can do as two separate companies?
Yes, quite a bit. And, pending the close, we're very excited about the opportunity to bring all of that innovation together. As I mentioned, the industry is moving away from fixed function, purpose built. And that's not only in the boxes, but in the silicon. So as you move away from ASICs, you have the opportunity to move that functionality to CPUs, to merchant CPUs, Intel architecture CPUs, as well as to FPGAs.
Why is that? Because FPGAs, and Rob will go through this in much greater detail this afternoon, are excellent at running algorithms that are either changing or variable depending upon which part of the world you're in. Think about a security application that you may want to run differently in China versus Europe versus the U.S. You don't necessarily want to build out a whole different platform to service your customers in those different parts of the world.
You want to be able to have the same platform and have that programmability built into an FPGA that also can harness the power of a tightly integrated Xeon processor. So that's just one example and Rob is going to go through many more this afternoon.
Yes. Sure.
Thank you. Historically, this is a market that's been dominated by MIPS and PowerPC and some of them seem to be moving to ARM. So just out of curiosity, what was the issue that you never had much presence in this market historically? And what's changing that will help you going forward?
Yes. So historically, the networking workloads required specialized functionality, specialized IO, specialized crypto and compression capabilities, specialized acceleration technology. But over the last 10 years, we have been investing in the Intel architecture roadmap to integrate a lot more of that capability into the processors and the chipsets. And that's really been the work that we've been doing in my organization as part of the Data Center Group, as we take advantage of Moore's Law and innovate around the transistors in the CPUs to include and integrate a lot more capability for IO, crypto, compression and, as I mentioned, acceleration technology.
It's just a better value proposition to buy an IA SoC or IA based capability than to go to what is a very fragmented market in terms of a lot of little players that are focusing on just a particular part of the market. The other unique value proposition that we bring to the market is that of scalability. So this is a market that values performance, and of course, if it's performance, Intel has the highest performing CPU family that exists. But the ability then to scale that down into some of the smaller form factors, some of the small to medium business and enterprise, and to have that capability from Atom through Xeon, is also unique and a place where the market is valuing us and why we've been growing and outpacing the market by 5x, as I mentioned, and taking more share.
Stephen?
Thank you, Sandra. A question in terms of traditional architectures around boxes that have an NPU in there, for example. Is Intel basically going after that socket and also integrating some of the control plane in there as well? And if so, how about the data plane portion of it? And then separately, I didn't hear much about switching.
Is there still, I guess, a reincarnation of the old Fulcrum Micro assets coming along the way?
Okay. So let me answer your first question. I mentioned that there are essentially 4 processing workloads that run in any network equipment: application, control, packet and signal processing. We're not only going after the packet processing sockets, we're actually winning many of those sockets. And that's, as I indicated, through the innovations and the integration that we've built in terms of IO and compression and acceleration technology, both in hardware but also in software, the Data Plane Development Kit, DPDK.
Just in the last 5 years alone, that innovation has created a 25 times performance improvement in packet processing. And that's just software, highly tuned libraries running on general purpose CPUs. So we do see this workload conversion benefiting Intel architecture, because you have a common tool chain by which you can develop. And you probably recall we were in the network processor business. But that business requires strong innovation, unique tools, a lot of low level microcode development, and it just is a difficult business to scale.
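The model behind those tuned libraries is to poll the NIC and handle packets in bursts rather than taking an interrupt per packet. The following is a toy Python illustration of that burst-processing pattern, not actual DPDK code; all names are illustrative:

```python
# Toy sketch of the poll-and-burst pattern used by fast packet-processing
# libraries: dequeue a burst of packets at once and process the whole burst
# in one call, amortizing per-packet overhead.
from collections import deque

BURST_SIZE = 32  # packets pulled per poll in this illustration

def rx_burst(queue, max_pkts=BURST_SIZE):
    """Poll the receive queue and return up to max_pkts packets at once."""
    burst = []
    while queue and len(burst) < max_pkts:
        burst.append(queue.popleft())
    return burst

def process_burst(burst):
    """Handle a whole burst in one call; upper() stands in for real work."""
    return [pkt.upper() for pkt in burst]

rx_queue = deque(f"pkt{i}" for i in range(100))
processed = []
while rx_queue:
    processed.extend(process_burst(rx_burst(rx_queue)))

assert len(processed) == 100
```

In a real implementation the burst loop runs on a dedicated core against NIC ring buffers; the sketch only shows the control flow.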
Intel architecture general purpose processing just delivers a much, much better value proposition. In terms of the acquisition of Fulcrum and the switching assets, switching capability is integral as well to any of our platforms. So we have and we use that switching IP in all of our current solutions and increasingly in our SoC roadmap. So there we have yet another value proposition, another unique capability that we can integrate onto the CPU.
Thanks so much, Sandra. Okay.
Thank you.
So with that, for our webcast, we're going to adjourn. Please come back at 1 p.m. Pacific on the webcast. And so we'll come back after that.
A few things we'll talk about here in the room.
All right. Well, good afternoon and welcome to the afternoon presentations and thanks for joining us again on the webcast. And so we're going to be on the webcast from 1 Pacific Time until 3:20 Pacific Time. And with that, I'd like to start the afternoon presentations and bring up Jason Waxman again to discuss Big Data.
So, one of the things I have to say is I'm very passionate about big data, because I think it is one of those variables that we don't understand well enough yet, and it really could be an unforeseen driver to the overall data center business, not just ours, but in general the tech ecosystem. And so I'm making a bet, and I believe strongly, that when we do this again 4 years from now, we're going to look back and it's going to be one of those things where we thought the market was going to be this big, and then it was this big, and then it was this big. Similar to cloud, right? We keep raising the estimates on it because it surprises us. I think the same thing will be true of big data. So, I want to talk a little bit about some
of the underlying dynamics, what we see
happening, how the market is evolving and then, of course, what it means to Intel from both a strategy perspective and how we might be driving our business. So obviously, one of the big things that we hear when we talk about big data is sort of the connection to all those devices. As IoT becomes a reality, that's going to mean a tremendous amount of ingest, and that's then going to require us to store a tremendous amount of data. Sometimes I think about 35 zettabytes; I'm not really sure exactly what that means, it's just so enormous. But hey, look, who am I to argue with the industry statistics; I certainly haven't come up with a better estimate myself. But obviously the idea is I've got all these devices streaming data, I've got all of this data that I've now stored, and I need a means to go do something with it.
And sort of the three drivers of value when it comes to big data are really thinking about how companies drive revenue growth. So a good example of this is all of the Internet ad placement or recommendation engines, right? They are all about taking big data and insights and turning it into revenue growth. It could be cost savings. So we deal with a lot of manufacturing companies that are looking to instrument their trucks, their factories, looking for ways to improve quality in the supply chain.
1 retailer is instrumenting their entire retail stores so that they know exactly what's moving off the shelf and they can constantly keep things stocked; that obviously is for both revenue growth and cost savings. And then the last piece is just looking for additional margin value. So what we're really doing here is figuring out how we take the value. Most of what has driven our data center business is taking information technology and making it more efficient. And so I tend to look at that as about a $500,000,000,000 to $1,000,000,000,000 industry, figuring out ways to make it more efficient by delivering more performance or making things easier to use. When you look at big data, the thing that I'm excited about, and why it's such a giant opportunity, is you're talking about multi $1,000,000,000,000 economic opportunities in government, in healthcare, in manufacturing, basically the whole economy, and figuring out how to make it more efficient and then expanding and turning that into an infotech opportunity.
And so that's really the underlying dynamic. And then kind of bringing it back to the more nuts and bolts stuff that we do, we've helped to really, I think, enable this revolution by driving 2 things. The first is the cost of a server has gone down over the past decade. Now the thing that this doesn't really show is that, yes, it's down by 40%, and that's pretty impressive. But over the course of that decade, there was a 25 times performance improvement that was driven out of that 40% lower cost server.
So, the economics of compute have dramatically shifted. And when you think back even just a decade ago, data management, data warehouses, they were done by pretty expensive monolithic systems, and now companies are ganging together these more scale out, commodity type systems, driven by the economics, and then the cost of storage continues to come down. So, those two things are the enablers that are going to continue to drive the growth in big data. And what you see on the chart is the big data TAM, which we've broken down now by hardware, software and services. Now looking at the 2013 data for a second, just $13,000,000,000, one of the things that might be striking is actually how big a portion of that TAM the hardware is.
And that's actually very different from the traditional data management type of environment. A lot of traditional data warehouse solutions place very, very high value on the software, proprietary types of solutions, expensive systems, but not a lot of hardware as a percentage of the share of wallet. And we've done the calculations to show that actually, comparing the old kind of traditional data management and analytics and insight to big data, we actually see a 2x share of wallet coming to Intel. So for every bit of data, essentially, that moves over, it's good for us in terms of how much money gets spent on silicon. And then of course the real prize is if we can take that $13,000,000,000 and essentially get it to the $41,000,000,000 that we're projecting out by 2018, and again, a very healthy portion of that is hardware, because even though you're talking about open source software, it's scale out, it's something that has a high value in terms of the amount of performance that's delivered for the type of insight.
So the thing you might be thinking is that we should just be able to sit back at Intel and, as this happens, watch the money rain down. It's just so easy; any monkey could go do this, like myself. But there actually are some challenges to making this evolve, and that's really where this chart comes in, and we've done a lot of work with end users. So, these are the people in the community that are doing data management, doing research, and talking about what makes this so difficult, and it really comes down to 2 big things.
The first thing, and this is the dirty little secret about big data, is no one actually knows what to do with it. They think they know what to do with it. They know they have to collect it. You have to have a big data strategy, of course, but when it comes to actually deriving the insight, it's a little harder to do. And that's what the biggest bar up there is: how do I know, once I've got all this data, what I should go do with it?
And we'll talk about how we address that. All the other bars, which are important, really do come down to complexity. It's hard to integrate. So I mentioned they're commodity systems, but stringing together these clusters of systems to do analytics in real time with open source software and fabrics, and doing it at scale, is hard. Again, this is why it was originated by Google and companies like Yahoo!
But it really still hasn't hit the mainstream as yet, and having talent that can both understand what information you're looking for and set up the computer science problem and do this all together is pretty rare. And so, the implication for us on the right hand side is there are a couple of things we have to overcome when we look at big data. First is the technology challenges of accelerating the TAM. We've got to help with use cases. So when we go find a retailer that knows how to find better insights and how to take their in store experience and combine it with their online experience to drive more revenue, we want to help more people replicate that.
So, driving the use cases. We have to make it easier to deploy, and that again I'll talk about in a little bit. How can we find a way, instead of having people piece together all these things and write a bunch of programs and stitch together big computers, to make it easier to deploy? There also, by the way, are some risks to Intel, and I want to make sure that you realize that we recognize them and we want to be able to address them. One of the big risks is the focus on proprietary solutions. Because this is difficult, the more that these become proprietary solutions, it either stalls the market, because proprietary solutions tend to be more expensive, or they become an easier way of sort of swapping in, say, non Intel silicon or something more proprietary.
And in particular, as we look toward machine learning applications such as image recognition as an example, these are becoming beachheads where both NVIDIA and IBM see it as an emerging workload, similar to the way we do, but want to go ahead and make sure that they are using it as an opportunity to deliver tools and to deliver silicon that should be able to increase their footprint in the data center. So again, these are things that we'll talk about in a little bit, but these are the challenges that we're looking to overcome. Okay. And by the way, just one thing, in case you didn't get this from the cloud portion: I like to end a little early so we have plenty of time for questions. So, please do write down anything that you have in the way of questions.
These are the 4 phases, essentially, to the big data strategy for Intel. The first one, and I start at the bottom, is really about the infrastructure. We need to make sure that as these new categories of applications emerge, whether they're machine learning, stream processing, Spark frameworks, Hadoop, it doesn't really matter. There are all these new open source frameworks; there's Flink, anyone heard of Flink? There's a new one every day.
We need to make sure that we're engaged in that community, we're tuning the software and that it's optimized to run best on Intel architecture. And that means everything from the lowest end CPUs to things like Xeon Phi, but of course also our 3D XPoint technology, use of fabrics. As Raj mentioned earlier, this is a great opportunity to pull through high performance computing infrastructure. The second element is around the data platform, and you can think of the data platform as just where am I storing and how am I managing all of this data. And there are actually 2 ways that you can kind of do analytics in these platforms.
There are the distributed data storage platforms, things like Hadoop, and Spark, which is also emerging as a very popular framework; those are many, many systems where you're sort of distributing the data across those systems. But there also are popular in memory solutions, think of SAP HANA. And so again, we want to make sure that both a high end, high performance solution like SAP HANA as well as a commodity, massively scale out type of solution such as Hadoop are optimized for Intel architecture. And we work with our own software engineers to make that happen. We work with a number of different partners, like I mentioned, SAP. But I think one of the biggest bets that you've seen us make was actually the investment in Cloudera, and I think it was pretty bold and a bit of a departure for us.
I mean, we tend on the whole to do things very horizontally and to invest in many different spaces. We in this particular case saw that big data was going to be going through an inflection, and we needed a partner with us to help ensure that they could be both ubiquitous and drive an optimized Intel architecture roadmap. And so we have a roadmap with Cloudera that brings in security features, that's already going to support 3D XPoint memory technology, that's working on optimizations for things such as Xeon Phi, and we make sure that that happens at the data layer. Now that really isn't enough, because as I mentioned earlier, part of the challenge is making sure that there are solutions, and just putting all of your data into a distributed data store doesn't exactly solve that problem. You need analytics on top of it.
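The distributed analytics model mentioned above, spreading data across many systems and computing over the partitions, can be sketched as a toy MapReduce word count. This is a minimal illustration of the programming model that Hadoop popularized, not Hadoop itself; in a real cluster the partitions live on different machines:

```python
# Toy MapReduce: map over partitioned data, shuffle by key, then reduce.
from collections import defaultdict

partitions = [
    ["big", "data", "big"],      # partition on machine 1
    ["data", "platform"],        # partition on machine 2
]

def map_phase(partition):
    """Emit (key, value) pairs; here, (word, 1) for a word count."""
    return [(word, 1) for word in partition]

def shuffle(mapped):
    """Group all values by key, as the framework would across machines."""
    grouped = defaultdict(list)
    for key, value in mapped:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Combine the values for each key into a final result."""
    return {key: sum(values) for key, values in grouped.items()}

mapped = [pair for p in partitions for pair in map_phase(p)]
counts = reduce_phase(shuffle(mapped))
# counts == {"big": 2, "data": 2, "platform": 1}
```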
The analytics space is a giant jigsaw puzzle right now. If you look in the machine learning segment, there are over 200 different companies that focus on different types of machine learning algorithms, very, very highly fragmented, and by the way, all of them tend to be proprietary. They're not open source. There isn't a broadly available framework. And so, one of the things that I'll talk about in just a second is we need to invest in enabling a platform to make this easier, because again, if I want to see Walmart advancing their big data strategy, they certainly have, by the way, some tremendous data scientists, but it's still a lot of work, and you're still limiting the applicability of big data to the people that really know how to write applications, stitch together programming and layer it on top of big data.
It just shouldn't have to be this hard. And some of you, like me, remember back in the college days where to go write and run a regression analysis or do a Monte Carlo simulation, you actually had to write a program in SAS to do it, and then along came Excel and, voila, drag and drop and you can go do this stuff. We need that equivalent for big data. And one of the other things that we're doing, in addition to trying to create something that's more of a platform, is we need to drive vertical use cases. So even though Intel has absolutely 0 desire to go be an IBM Global Services or a professional services organization.
I said 0; it might be even less than 0, a negative desire to go do this. But we are engaged in a number of these different types of engagements because it helps to inform what we need to develop as a platform. And then the other piece of it is it allows us to help replicate those use cases and grow the overall industry. And so if we're successful, and I'll tie this back together at the end, we've got a great silicon opportunity, we accelerate that overall $41,000,000,000 TAM and we go work with our partners to make that happen.
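The college-days example above, a Monte Carlo simulation that once meant writing a SAS program, is now a few lines in any scripting language. A minimal sketch, estimating pi by sampling random points in a unit square:

```python
# Monte Carlo estimate of pi: the fraction of random points in the unit
# square that land inside the quarter circle approaches pi/4.
import random

random.seed(42)  # fixed seed so the run is repeatable
N = 100_000
inside = sum(
    1 for _ in range(N)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_estimate = 4 * inside / N
assert abs(pi_estimate - 3.14159) < 0.05
```

With 100,000 samples the estimate reliably lands within a few hundredths of pi, which is the "drag and drop" ease the speaker is contrasting with the old workflow.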
Okay. So another major move that we've made recently is the open sourcing of a platform that we call Discovery Peak, and Discovery Peak is what we believe to be the industry's 1st open source analytics platform to really bring together multiple different layers. And the graphic, sorry, the graphic down below was to try and highlight that there are multiple different types of cloud. So it actually is running, as an example, on OpenStack. It also is running on Amazon.
So, the idea also is that you can create an analytics platform that can run across multiple different clouds. And part of the philosophy that we have is, whereas a lot of platforms are trying to aggregate and bring all of the data together, we believe that what actual companies want to do is hold on to their data and do their analytics, but they may actually want to share data or share analytics across multiple different sites. And so as an example, we're working on a healthcare solution with some partners up in Oregon that will allow you to do genomic cancer analysis using the Discovery Peak platform and do it across multiple different sites, where each research institution can hold on to their own genomic data and keep that data confidential, but they can all share the analytics. Again, that's one of the things that makes this platform a bit different. You can see the different layers.
You've got the orchestration services, which essentially allow somebody to deploy this more easily. So it's building off of the cloud based infrastructure that we've talked about. It uses the data platform, and we are obviously heavily leveraging Cloudera in that layer. And then on top of it, we produced a number of different services. Some of them we've actually created inside Intel to do things like graph analytics, but we're also working with a number of different partners on visualization solutions, on ingest and stream processing and a host of other types of applications.
And then on top of it, at the application layer, we've made it extensible. The idea being, instead of having a data scientist try and stitch all these pieces together, they can leverage the platform, they can write their own analytics and code and actually publish it, almost like an app store within their own environment. And so this is really trying to take 90% of the non value added work out and allow the data scientists to get that work done. This can be applied in a number of different areas, and as I mentioned, we're working on different vertical use cases. Some of these are based off of wearables.
We have companies in the wearable space that would like to understand how users or their customers are using some of that stuff, but in some cases it also can be applied to healthcare. We're doing work with the Michael J. Fox Foundation on how wearables can be used in the home to track the progression of Parkinson's disease. We see opportunity in energy and optimizing the overall energy grid. I mentioned some of the cancer research that we are doing.
So, all of these are very interesting use cases, and our goal is to kind of create a solution that we see a big market for and then be able to turn it over to a range of different partners to go ahead and grow the overall footprint. So, last slide: what does this really mean to Intel? When you think about Intel as an investment, what's the opportunity? I think the first thing is that we are optimizing all of these frameworks. We want to make sure, as this build out happens, it's done best on Intel architecture, and as I mentioned, that's the full portfolio of our opportunity. We see that by 2020 there's about $2,000,000,000 to $4,000,000,000 worth of silicon opportunity in big data, and obviously we want to get the lion's share of that.
The second thing is machine learning within big data. If big data wasn't buzzy enough, machine learning is the buzzy part of big data, and this is something that we see as a competitive opportunity. I want to be clear about it: we are going to win. We're tracking about 20 different types of machine learning algorithms. We're looking at how some of those things could be accelerated through an FPGA, again looking for synergy in our product line and our architecture.
We also want to make sure that the market grows and there's 2 pieces to that. 1 is really building off and leveraging the Cloudera investment. They're a great partner. They're very capable. They are growing substantially and we want to continue to build off of that momentum particularly as it's optimized for Intel architecture.
And then last is those use cases and the Discovery Peak platform; they're all about taking that $41,000,000,000 TAM and bringing it in. And already we've seen some of the analyst expectations for how big the market is starting to accelerate. Again, my prediction is that we will continue to see the opportunity in big data be elevated over time, especially as we make these use cases a reality. So, with that, any questions?
Yes. Is this one?
Can you hear me?
Yes. I just need to touch the button.
I guess two questions. The first one, in terms of the market size. So maybe I'm conflating things here, but Altera was thinking about the accelerator market and they were thinking it was going to be $1,000,000,000 by 2020. Is that part of the silicon opportunity you're thinking about there? And then secondly, in terms of thinking about machine learning and big data, how much do you see today of the spending and the attention really more in the research or workstation area?
And then where are we in terms of deployment? And actually then saying, wow, we need to have low latency, we need to have this working in a scale out deployment, and so it's really used in the data center.
Yes. So I'll answer kind of both questions. Yes, there's a portion in the $2,000,000,000 to $4,000,000,000 that would be essentially allocated as opportunity to FPGA. It's not a huge portion of it, but it is a portion, so it's included. But it's not the full $1,000,000,000; that $1,000,000,000 opportunity and ours are sort of, I'd call it, intersecting circles, and that intersection would be included in there.
The second thing is, to your point about the analytics and how much is on the workstation: a lot of data science today, in terms of kind of setting up some of the models, people are using workstations to take a subset of the data and essentially build their model on a workstation. But what I can tell you is that our estimate for how big big data is right now, in terms of real deployment, is hundreds of thousands of CPUs in the data center right now. So it's very sizable. Now the machine learning portion of it is substantially smaller, and it's really still kind of nascent, but we are seeing that accelerate as well. Again, companies like Microsoft, Baidu and others are really starting to gear up their investment, and it will help drive our business.
John? Jason, I wonder if you
could talk a little bit about the implementation of big data and the impact it might have on corporations deciding public versus private cloud, i.e., if data starts to become extremely valuable, could this sort of reverse the trend towards public cloud? Or how are those conversations going presently with CIOs and CFOs?
Yes, that's a great question and I have to say it's very fragmented. It's hard to tell. There are some use cases where because it's I just want to get something going fast, they'll do it for example in Amazon EMR. So you certainly see some cases where people want a platform and because this is difficult to go do, they'll take the easy button and the easy button will be HD Insights from Microsoft or EMR at Amazon. But I definitely am hearing an undercurrent that more companies need and want their own platform.
When we announced the open sourcing of Discovery Peak, which was literally just last week, we've been inundated. I was just talking to my product line manager about that yesterday, and he goes, this is the worst day for me ever. He goes, you have no idea the floodgates that we unleashed, because there is this latent need: people do want to keep their own data, because that is a key value to their business, but they don't necessarily feel like they have the tools to manage it, analyze it and do that. So, I do think, if we can drive the right solutions, it will be a happy medium. Like, I never try to be so black and white and binary, but I do think people will have their own data, they will do their own analytics, but they will be able to federate with the cloud and, by the way, federate with other people's analytics as well; that's part of the goal. There is another question over here.
Ross? Same sort of thing, as far as the value of the proprietary data. How do you balance the competitive threat with people coming in with more proprietary solutions, if whatever customer has the data wants to analyze it in his or her own way? Is the adoption of something from an NVIDIA or an IBM that you mentioned more likely, versus a standardized solution being adopted, because of the nature of the market, at least in the initial stages of big data?
Yes, that's a good question. I think there are 2 things that you're talking about, which are kind of the proprietary piece and then what it means in terms of the silicon piece. I think in general that people want to own their own data, and I think that's the extent of the proprietary part; it's kind of hard to say I'm just going to give away and open source my own data, and how much of a company is left at that point? So that, I think, kind of makes sense to me: own your own data, but then use horizontal, open source, standards based everything else.
And that's why there is I think a desire for an open standards based platform. And in fact, there was a financial services company that came to me and said, hey, we were approached by Watson, by IBM Watson and it's a very interesting platform in some respects, but candidly, I just don't want a proprietary platform and to some degree they're also competing with me on the data science side. So I think there is a lot of data points that we can highlight that say that companies want open platform. Now open platforms in general I think should favor us quite well. It's because we've got a good history of knowing how to enable open source.
We drive standards quite well. And by the way, because it's open, we've got what I call ninjas. We've got these ninjas that just know how to code and tune the heck out of everything. If we get those ninjas in there, they make things fly on Intel architecture. So for us, I think it tends to be a better competitive environment than something that winds up being a proprietary solution, which, to your point, you could attach something else to. It doesn't mean that somebody would choose to do that over Intel architecture, but I think open always tends to favor our strategy in general.
Thanks, Jason.
Okay.
All right. Thanks. Yes. We'll be lost without that. Excellent.
All right. Well, I'm pretty excited to be talking to you today about a silicon photonics update. I'm Alexis Bjorlin, and I'm the General Manager of our Silicon Photonics Solutions Group here in the Data Center Group. I'm responsible for... let me just click. Okay.
All right. As we've heard from Jason, Sandra and Raj already, there's a lot of innovation happening in the data centers today. What's needed by data center operators is a combination of high performance compute, fast storage and, what I'm here to talk about, unconstrained connectivity. Now, most of you went on the tour of Intel's data center earlier, and you can start to get a feel for what the connectivity requirements are in the data centers. There are 3 topics we're going to talk about today.
Number 1 is the transformation in the data center. What you've heard about earlier is Intel's data center transformation, but what I'm going to focus in on is the innovation happening by the cloud service providers that is enabling the mega scale or hyperscale data centers of today and of the future. Building on that innovation requires additional innovation, and what we have done here at Intel with silicon photonics is a key enabler to continue that innovation and to enable the mega data centers of the future.
Finally, we'll focus in on what most of you are here today to hear about, which is the business opportunity this represents for both Intel and for the Data Center Group in terms of non CPU growth. So what do we mean by the data centers transforming? This is a picture here of a Facebook data center in North Carolina. This data center was initially put in place in 2012, and it's 370,000 square feet for just the first building. That's 10 times the scale of the larger of our data centers that you saw just earlier today.
But that wasn't big enough. In 2014, they added the 2nd building, which makes this a 750,000, or just north of 750,000, square foot data center. Inside one of these data centers, as you guys can imagine, and I'd like you to just visualize it, it's just rows upon rows of racks of servers and storage, all interconnected. In fact, over 100,000 servers: it's estimated 60,000 in each building, so 120,000 servers are in this data center alone. The first data center, when it was put down, was 60 megawatts of power.
So we can assume that this is hopefully they've gotten better efficiency and it's less than 120 megawatts, but we're talking huge amounts of power, huge amounts of network infrastructure. Apple and Google also have data centers in North Carolina. That's one of many areas. Mega scale data centers are being deployed across the globe. When you think about the amount of bandwidth that's being processed here at the ONS just in June, Google stated a really resounding fact, which is the amount of data traversing 1 of these mega data centers exceeds that of the entire global Internet.
As you can imagine, traditional approaches are reaching their limits. The network can no longer be the bottleneck, and connectivity can't be a bottleneck. Innovation is required across all vectors and across the entire ecosystem needed to enable mega data centers. So we're going to talk about three pillars of this transformation: bandwidth scale, physical scale, and the traffic pattern shifts happening inside the data centers today. On bandwidth scale, we all know there's been unprecedented growth in bandwidth.
Google also mentioned that over the past 6 years, they've seen a 50 times increase in the amount of traffic handled by any one of their mega data centers. That means there's been unprecedented growth in the number of connected servers and storage to process it. In addition, the physical scale has become quite large. For the Prineville, Oregon data center, which is just a 300,000 square foot facility, there are 21 million feet of optical fiber deployed inside a single data center. What that requires is a huge transformation in the amount of infrastructure and innovation beyond the traditional technologies that have been used.
Finally, we heard from Sandra today at length about software defined networking and network functions virtualization. No longer are data center architectures defined simply by the physical hardware; they're beginning to be defined by software. So an ideal data center is one massive cluster of compute that can be dynamically provisioned to handle any workload. How do you achieve that? You achieve that by having an ideal, fully meshed, fully interconnected data center.
So as opposed to the enterprise data centers of the past, where you can envision, maybe from some of your former workplaces, back cabinets full of Cat 5 cable, these interconnected data centers look different. This is an example of a Facebook data center network design, fully connected. Whereas 80% of the traffic used to be north-south, into and out of a data center, inside a mega data center or cloud service provider data center over three quarters of the traffic is actually traversing the data center in an east-west fashion. I'm giving you all these facts so that you can start to envision what's happening in the data centers and why connectivity is becoming a bottleneck. A new era of faster, denser and longer reach connectivity is required. So now we're going to talk about how silicon photonics is an enabler for next generation data centers.
First, I'm going to step back and say what silicon photonics is. I'm sure most of you in this room know, but for those of you who don't, and I've been surprised by the amount of knowledge in this room today: we all know what silicon is. Silicon is the medium on which the entire semiconductor industry has been built over the past 45 years. Silicon is the base platform for all of Intel's products.
Photonics is the addition of light and light emitting elements to that platform. You could think of it as an extension of the silicon platform. If you step back and think about photonics and where optics has been used, light has been used since the beginning of time to communicate over long distances. You could think about signal fires on the tops of hills where people were sending light. You could think about lighthouses, even before fiber optics existed.
In the 1960s, the semiconductor laser was invented at Bell Labs, and from that the entire fiber optics industry sprang up. We all know that the undersea cables and the network backbones of the United States and of the globe are based on long distance fiber optic transmission. So silicon photonics brings together two very distinct industries: the high volume, high speed, high performance silicon semiconductor manufacturing industry, and the one that enables long distance data transmission. So what does a silicon photonics module look like? Well, within a module there are many elements.
Silicon cannot emit light. So what we've had to do, and we've been at the forefront of silicon photonics innovation for the past 10 years, is find a way of combining a light emitting material with silicon. We do that and then utilize all of the other elements needed to create an optical transceiver. The optical modules sit at the ends of a connectivity link.
They take the electrical data and convert it to optical, and at the far end of the fiber optic link it gets reconverted back to electrical. This is an example of one of our first products here: a very dense, compact 400 gigabit per second transceiver based on silicon photonics. But it's not based on silicon photonics alone; it's based on the entire manufacturing capability that Intel has, which includes all of the processing as well as the packaging. Inside this 400 gigabit per second transceiver, which is just a little larger than a quarter, we've got 16 channels of 25 gigabit per second transmitters and 16 channels of 25 gigabit per second receivers.
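The channel arithmetic behind that transceiver is straightforward; here is a quick sketch using the 16-channel, 25 gigabit per second figures from the talk:

```python
# Aggregate bandwidth of the 400 Gb/s silicon photonics transceiver
# described above: 16 transmit and 16 receive channels at 25 Gb/s each.
CHANNELS = 16
RATE_GBPS = 25  # per-channel line rate, gigabits per second

tx_bandwidth = CHANNELS * RATE_GBPS  # transmit direction
rx_bandwidth = CHANNELS * RATE_GBPS  # receive direction

print(tx_bandwidth)  # 400 Gb/s in each direction
```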
So we've talked a little bit now about silicon photonics, so you can understand its place. It's a high density optical platform based on silicon, and it's used to enable network architectures that previously could not be conceived. Now we'll step back and look at what a data center network looks like. You've just toured one, but a typical mega data center network is comprised of rows and rows of racks of servers that are connected through a top of rack switch, through a layer of aggregation switches, up to a core switch that then communicates out to the wide area network, or WAN. All of what I'm showing here is inside a single data center building.
Inside a single data center, as we said, there could be hundreds of thousands of servers, tens of thousands of switching elements, and over a quarter of a million optical links. We've also said that one of the most important goals is to maximize the utility and the efficiency of all that infrastructure. In order to do that, you need to transmit data across the data center, so there's a lot of communication between one server and another. What does that look like in today's network? Well, you can follow the build here.
The example I've got is a server on one side of the data center that needs to communicate with another server that could be 2 kilometers of fiber away. The traffic can traverse over 7 switching elements, and that takes quite a lot of time: each switch adds congestion and latency. The data centers of today have been designed and developed based on the interconnect technology available that achieves the lowest total cost of ownership. What does the data center look like with the advent of silicon photonics, a high volume, low cost, long reach connectivity made for the scale of these mega data centers? Well, there are many network architectures out there, and this is just one, but the idea is to enable that innovation to continue.
What I'm showing here is that when these two servers need to communicate, you can envision each top of rack switch connected across the data center, utilizing the long links to create a fully meshed architecture. At the top of each rack, you're interconnected and can communicate with any one of the aggregation layer switches. What that enables is communication across the data center in a shorter amount of time with less latency, effectively reducing the number of switching elements required and reducing the total cost of ownership. So how are technology choices made in the data center? There are a lot of technologies to choose from, and the choices are generally based on total cost of ownership.
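To see why removing switch hops matters, here is a rough latency sketch. The 2 kilometers and 7 hops are from the example above; the per-switch delay, the assumed 3-hop meshed path, and the fiber propagation figure (light in fiber travels at roughly 5 microseconds per kilometer) are illustrative assumptions, not numbers from the talk:

```python
# Rough end-to-end latency comparing a multi-tier path (7 switch hops,
# as in the example) with a flatter meshed path (assumed 3 hops).
FIBER_US_PER_KM = 5.0  # propagation delay in fiber, ~5 us/km
SWITCH_HOP_US = 2.0    # assumed per-switch forwarding latency, no congestion

def path_latency_us(distance_km, switch_hops):
    """Propagation delay plus per-hop switching delay, in microseconds."""
    return distance_km * FIBER_US_PER_KM + switch_hops * SWITCH_HOP_US

tiered = path_latency_us(2.0, 7)  # traditional aggregation hierarchy
meshed = path_latency_us(2.0, 3)  # flatter topology enabled by long links

print(tiered, meshed)  # 24.0 vs 16.0 microseconds
```

Under congestion the per-hop penalty grows, so the real-world gap between the two paths would be larger than this idle-network sketch suggests.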
That means power consumption, the number of networking elements required, and the cost of the optical transceivers. So I'm going to go over what the data center connectivity landscape looks like and the choices that exist. The choices are a function of reach and a function of data rate. As you can imagine, the IO is becoming the bottleneck: the input/output feeds, how fast and how fat a pipe we can make to interconnect our servers with the switching elements and interconnect our network as a whole.
Let me talk you through the landscape at 25 gigabits per second. We have many different distance requirements across the data center; hyperscale data centers require up to 2 kilometers of reach. We've got the in-rack, typically 3 meter, connections. We've got connections across the rows, which are between 10 and 100 meters. We've got connections across the data center, which can be anywhere between 100 meters and 2 kilometers. And then we've got the inter data center connections between data centers. In rack, the choice today is copper: the lowest total cost of ownership, the lowest power, and the lowest overall cost. Across the rows, there's a choice between traditional short reach optics, called VCSELs, and silicon photonics, which can also achieve those distances.
When you reach across the data center links, you need a single mode infrastructure, which simply means you need long reach transmission optics. Silicon photonics is best positioned to satisfy that. It could also be satisfied with traditional long reach optics, but remember, the choice is about total cost of ownership, and traditional long reach optics require a huge amount of manual assembly and do not have the cost structure to truly enter the data center at this scale. Silicon photonics is a breakthrough technology that enables future proofing of data center networks. So what happens as we increase the speed?
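The reach tiers just described can be summarized as a simple selection rule. This is only a sketch of the 25 gigabit per second landscape from the slide, with the tier boundaries as described in the talk, not a product decision tree:

```python
# Interconnect choice by reach at a 25 Gb/s line rate, per the tiers above:
# in-rack copper (~3 m), across-row VCSEL or silicon photonics (10-100 m),
# single-mode optics across the data center (100 m to 2 km).
def interconnect_choice(reach_m):
    if reach_m <= 3:
        return "copper"                       # lowest cost and power in rack
    if reach_m <= 100:
        return "VCSEL or silicon photonics"   # across the rows
    if reach_m <= 2000:
        return "single-mode (silicon photonics)"  # across the data center
    return "inter-data-center optics"         # beyond 2 km

print(interconnect_choice(3))     # copper
print(interconnect_choice(50))    # VCSEL or silicon photonics
print(interconnect_choice(1500))  # single-mode (silicon photonics)
```

As the talk notes next, raising the line rate to 50 or 100 Gb/s shrinks the copper and VCSEL tiers, widening the range where single-mode silicon photonics is the choice.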
At 50 gigabits per second, the advantage that silicon photonics has continues to grow. Copper can no longer transmit as far: as you increase the modulation speed, the distance goes down. Short reach VCSELs can only transmit across 50 meters. And as we achieve 100 gigabit per second line rates, the gap widens further. So we've gone over the technology choices that exist, and silicon photonics has a major value add in the data center: it enables not only new network architectures but also the lowest total cost of ownership. So now we're going to talk about why Intel Silicon Photonics is going to capture this market.
Well, who is better positioned than Intel to capture the silicon photonics market? Remember, it's a platform based on semiconductor manufacturing. We've invested 10 years of R and D resources in the silicon photonics platforms and technology. Through that innovation, we've been able to achieve and bring out a unique hybrid integrated laser, which is a simple yet beautiful way of combining light emitting material onto the silicon platform. This is unique to Intel.
One of the traditional hurdles to silicon photonics has been coupling light, which is created by a different material system, into the silicon platform at 300 millimeter wafer scale. The way we do it here at Intel is we put the material system down at the very beginning. We bond the material and then use our traditional manufacturing technologies to lithographically define the lasers and define our products. This is a huge advantage because it enables best in class scalability. When you can use lithography to define and create lasers in silicon, you can imagine creating very dense arrays of laser upon laser.
We can also utilize the unique properties of light emitting materials, namely that you can emit light in different colors. So we have two vectors along which we can scale: we can scale with arrays of densely packed lasers, and we can also scale with the light itself, with WDM, wavelength division multiplexing. Whereas our innovation enables us to put the laser directly onto the silicon, alternate silicon photonics technologies still utilize an outside material system, creating lasers that then need to be separately attached.
So those are more of a hybrid infrastructure, whereas we have the most fully integrated silicon photonics solution. With this, we're able to achieve the highest density and the reach required for mega data centers. Finally, we'll move on to the business opportunity this presents. The data center connectivity TAM is pretty significant. We're at the dawn of a new era of connectivity and new data center architectures.
What I'm showing here is the data center total spend on 100 gigabit per second and 400 gigabit per second aggregate links. In 2016, the advent of 25 gigabit per second line rates enables a $1.2 billion market. By 2020, we're seeing over a 50% compound annual growth rate, to a $5.1 billion market. The markets we're addressing in 2020 are across the data center, across the row, and in rack, which together are a huge portion of that $5.1 billion market. So how are we driving the transformation?
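As a sanity check on those TAM figures: taking 2016 as the baseline, the four-year compound growth to 2020 works out to roughly 44% per year, so the "over 50%" rate quoted presumably measures from an earlier baseline year. The computation below is illustrative only:

```python
# Compound annual growth rate implied by the TAM figures above:
# $1.2B in 2016 growing to $5.1B in 2020, a 4-year span.
def cagr(start, end, years):
    """CAGR = (end / start)^(1/years) - 1."""
    return (end / start) ** (1.0 / years) - 1.0

growth = cagr(1.2e9, 5.1e9, 4)
print(round(growth * 100, 1))  # ~43.6 (% per year)
```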
Intel Silicon Photonics is first targeted at switch to switch interconnects, enabling further innovation in the cloud. We're then going to focus on our highest density scalable arrays to continue to advance high performance computing. And finally, what I'd like to leave you with is a thought about integration. This is an area where Intel has excelled, and it's a core pillar of Intel's approach. When we build silicon photonics on silicon with the highest degree of integration, it allows us to conceive end to end network architectures that bring optics directly onto the CPU or onto switches, which is a very efficient way of combining long reach connectivity with the existing processing power of the silicon manufacturing engine.
So in summary, we've got cloud service providers innovating across the data center. Silicon photonics, and specifically Intel Silicon Photonics, can enable this transformation by delivering the lowest total cost of ownership and making new, previously unheard of network architectures possible. Finally, Intel Silicon Photonics is, as such, an engine for data center growth. Thank you.
Blaine?
Maybe if you could just wrap some timing around this in terms of availability of products, relative to what you've said before, and then just talk about the competitive landscape. You mentioned the 10 years and how long the group has been running, but there have been points where product was expected and it's taken a little longer. It's obviously tricky.
Okay. So, agreed. I joined Intel in November of last year, and I joined specifically for Intel Silicon Photonics. I come from the optoelectronics industry, traditional fiber optics, and what I saw there was a wall.
So joining Intel, I do know that at that time we were ramping towards the product, and we did announce a delay in our product launch. We intend to commercially sample this year and be in production in 2016. Absolutely.
Thank you. Can you go back to the slide you showed, copper, VCSEL, long haul and silicon photonics, and compare the relative cost, however you want to do it? And also, given that you seem to be the only manufacturer, what does it take to bring the cost down if you're not standardizing it?
Okay. I can't seem to reverse the slides. Okay, perfect. Go back to... yes. Okay. I can start talking to it anyway. The relative cost, okay, the relative cost structure between traditional fiber optic approaches and, here we go, we'll bring this up.
As I said, for every distance there's a technology of choice. For in rack today, it's still copper; copper achieves the better cost per bit. VCSELs and short reach technology achieve, I would say, the best combination of cost and power for the very short reach distances. And silicon photonics is already very competitive with VCSELs, even at the short distances.
I can't give you exact cost numbers, but I would say there's a significant gap between silicon photonics and traditional optics in terms of cost structure, especially as you scale up in data rates. What we're looking at deploying is 100 gigabit per second, or 4x25 gigabit per second, links late this year. And at that point in time, silicon photonics becomes the technology of choice.
Gwen?
I just had a follow-up question. Specifically, what will you be sampling? Is it a 100 gig module, a 4x25 configuration? And then second, it looks like you're targeting the switch to switch connectivity first with silicon photonics? Correct. Rather than the server to top of rack connection, will you be sampling solutions in industry standard form factors, CFP28, or are you looking at onboard optics? How are you going to deploy this with the switch vendors?
So our approach moves with time, but yes, our first products are 4x25 gigabit per second, 100 gig links. We are offering the products our customers most value. Initially, the switch to switch links will be in the QSFP form factor and will be MSA compliant. However, there's a lot of drive toward integration, and we have additional product lines more targeted at integration as well as onboard optics. So we have a slew of products in the works, but the first products will indeed be QSFP.
Any other questions? Harlan?
We'll need it for the webcast.
And I apologize if I haven't been keeping up with all the innovations in silicon photonics. But my understanding is that lasers have been based on compound semiconductors; indium phosphide is the one I'm more familiar with. So does your silicon photonics module actually use a silicon based laser?
We are not using a silicon based laser. We're combining indium phosphide based material systems, indium gallium arsenide phosphide, onto silicon in order to create efficient light emission. Silicon is not an efficient light emitter, not yet.
All right.
Thanks, Alexis. All right. With that, welcome, Rob Crook.
Good afternoon, and happy to be here. We're going to go through a little bit of background on where we're going with the business, and then obviously open it up for Q and A, and hopefully we've got plenty of time for that.
So first: we are a data based business, and the explosion of data, and the push to turn that data into information, is driving our business. We are very focused on the data center, and as we look at turning data into information, we see a tremendous opportunity to improve the latency of access to that data so people can take advantage of it. You're familiar with the financial services side of this; having very consistent, high performance transactions is important. You need fast, low latency access to data, preferably in the smallest chunks you can get. Fraud detection is a big, giant use of information, but if it takes too long to find out, it's a lot less useful to detect the fraud.
And of course there's optimizing for healthcare and for advertising; the more we optimize, the more money we can make. When we look at what that's doing, first in terms of NAND: the first step in getting data closer to the CPU in the data center is driving NAND. The NAND based SSD market started the modern version of its evolution in 2008, and in aggregate that market is over $14 billion right now, basically from zero to 14 in 7 years. So it's growing very fast and is projected to keep growing fast. And this chart is showing the gigabyte version of that growth.
So the use of data in data centers is growing at an exponential rate. And it's growing in a way that is embracing the legacy interfaces to storage, things like SAS and SATA. The blue is SATA, which is a traditional hard disk interface, and SAS is another traditional hard disk interface. Those have been the primary drivers for the early portion of the market growth, because you've got to plug into the hole in the box that exists if you're a new technology. But you really want to re-optimize for the new media; you want a lower latency interface that's more suited to silicon based storage, and PCIe is that interface. You can see how it's growing much, much faster and becoming the dominant portion. And so in our business we continue to invest in SATA based storage products, and in SAS together with HGST and our relationship with them, but we have a huge focus on PCIe because we see that as the right interface for storage, and the fastest growing one as well.
And why is that? Because it's lower latency. If you look here, this is a view of a storage access, if you will: how long it takes to get data from a hard disk. What we've done is chop off the hard disk bar, because drawn to scale it would probably go through the roof of the building. When you move to much faster media like NAND, these legacy storage interfaces start to become a problem.
With NVMe based PCIe products, we get a significant reduction in latency and an improvement in efficiency and utilization of the server. It doesn't look that much shorter, but it has a significant impact on the performance of applications in the data center. If you look at a series of vertical applications, just going from hard disks to SSDs has a multiplicative impact. Start with a SATA based SSD, taking a Microsoft SQL example for doing big data analytics: moving from hard disks to NAND based SSDs, you see a doubling in your server efficiency and tremendous improvements in performance, like a 6x improvement, and it drives higher utilization in the data center and better customer service for various applications. Now if you then change that to a PCIe based storage interface, it doubles yet again.
So that improvement in latency is critically important to these transaction based applications. Getting shorter latency access has a big impact: you go from 2x to 4x the server efficiency, and we hope to drive more refresh with that kind of technology improvement, as well as building a business alongside the CPU business. So there are big improvements in application performance with NAND, and NAND is going to grow big, as we indicated before.
Now that's not the end of our story, of course. We want to go beyond NAND and reduce latencies even further. We have come up with a new technology that is intended to work for both memory and storage. It is a new innovation that we call 3D XPoint technology, and it is a new type of memory, a new class of memory; it's not a simple extension of NAND or a simple change from DRAM.
In fact, it's very different both at the physics level and at the memory attribute level. We can use it for both memory and storage, and I want to show you, using that same kind of graph, how that works. But just to give you the three bullet takeaway: this new memory technology is a new class of nonvolatile memory. It is going to be significantly faster than NAND media, and I'll show you what that feels like in just a moment; it's up to 1,000 times faster than NAND.
It has tremendously higher endurance, which means you can use it much more like memory. And when you compare it to DRAM, it's much denser: our first product is 128 gigabit, while today's DRAMs are about 8 gigabit in density, and state of the art NAND is about 256 gigabit. So it's not quite as dense as NAND, but tremendously denser than DRAM, yet nonvolatile. And it stores its data as a material property change, as opposed to electrons stored on either a floating gate or a capacitor.
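The density figures quoted above imply the following ratios; a quick check using the 128, 8, and 256 gigabit per-die numbers from the talk:

```python
# Per-die density comparison from the talk: first 3D XPoint product at
# 128 Gbit, versus ~8 Gbit DRAM and ~256 Gbit state-of-the-art NAND.
XPOINT_GBIT = 128
DRAM_GBIT = 8
NAND_GBIT = 256

print(XPOINT_GBIT / DRAM_GBIT)  # 16x denser than DRAM
print(XPOINT_GBIT / NAND_GBIT)  # half the density of NAND
```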
So physically it's very different, and its attributes are very different. What I wanted to do was show you, on the chart from earlier, with the hard disks going through the ceiling, what happens if I make this orange bar much shorter. It looks different, but I can't show you on the same chart.
So what I'm going to do is take the smallest bar in the chart and move it over here, and not cut it off, because then it would go through the roof of the building if I'm not careful. What you see here, thank you, Diane, is that this is the small bar now, and the NAND media looks like a super long delay. Then we take this new media and put it on there, and you can see why it was so important that we move to the new interface: even the new interface, as much of a huge improvement as it is over NAND based memory, becomes the problem, and what we want to do is move to something even lower latency. We're going to build storage based products, because they are still way faster than NAND based products, but we are also going to build DIMM based products that will have even lower latency, because we've eliminated, if you will, the interface, turned that into hardware, and gotten rid of the software.
And then we've gotten rid of a fair amount of the controller complexity that slows down the media interface as well. So we think this will be a significant advantage for our enterprise platforms. We get significantly lower access latency, about a 10x improvement in performance over NAND based SSDs, and then obviously much, much greater when we get to a short interface path. So we are going to build some DIMMs for Diane's platform, to bring out with the next generation platform. They fit in the socket that is already there for DDR4.
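One way to picture why the DIMM form factor matters is as a latency budget: total access time is media plus controller plus interface plus software stack, and moving to the memory bus removes most of the non-media terms. The numbers below are illustrative assumptions for the sketch, not figures from the talk:

```python
# Illustrative access-latency budgets in microseconds. "media" stands in
# for the 3D XPoint cell access; the point is the shrinking non-media
# overhead on the DIMM path, not the absolute values, which are assumed.
def total_latency_us(media, controller, interface, software):
    return media + controller + interface + software

nvme_ssd = total_latency_us(media=1.0, controller=3.0, interface=2.0, software=4.0)
dimm     = total_latency_us(media=1.0, controller=0.5, interface=0.1, software=0.0)

print(nvme_ssd, dimm)  # 10.0 vs 1.6 microseconds in this sketch
```

The media term is unchanged between the two rows; everything the DIMM path saves comes from collapsing the controller, interface, and software terms, which is exactly the argument being made above.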
It requires our next generation platform in order to take advantage of it, because the platform needs to understand how the memory works. It allows us to solve problems that haven't been solvable before: we get a tremendous increase in memory capacity, and if you have big data problems, a 4x increase in memory is a lot.
So there's a 4x increase in the maximum size of the memory, which has a significant impact. And it will work and look just like either storage or memory, without OS modifications. It looks just like really big memory without an OS modification, or we have storage drivers to make it look like very fast storage as well. So what does that mean?
That has a significant impact on any application that wants really big memory, so massive in-memory database applications, or anything that cares about resiliency, because with a big memory system you're no longer reloading that memory if something goes wrong; you're just doing a consistency check on it and you're off and running. Or if you want very low latency storage, transactional, very fast storage, with of course high endurance, think of database log applications, that sort of thing. And we're excited about what people are going to invent on top of this that we haven't been working on, but we have obviously been working with leading ISVs in the industry to make sure they're ready, on the things you would expect us to work on, database applications and that sort of thing, whether for client based applications or, we think, the huge opportunity in the data center. One of our favorite topics is genomics: it's a big data problem in terms of rapidly and cost effectively doing the genomic work, but also then mining large databases of healthcare data to provide personalized medicine to folks.
So we hope this will be a breakthrough in that area as well, with many, many more applications to come. And I would just say we're really excited about both the NAND based business, which is doing quite well in the data center, both on the existing legacy interfaces and with what's happening with NVMe, a transition in the marketplace that enables us to maintain and differentiate our product lines, and about migrating to 3D XPoint on PCIe and NVMe as well as on the DIMM interface. So with that, I would be happy to take any questions. Sorry if I spoke quickly.
David, you mentioned that the next generation Xeon platform will support XPoint, 3D XPoint. Are you going to tell us which year, or what period, we're going to get the next generation Xeon platform?
Apparently not, no. For the 3D XPoint products themselves, we plan to have storage based products in the marketplace in 2016.
Rob, thanks for the presentation. Two quick questions. One, can you tell me the IO performance differential between XPoint in a DIMM form versus DRAM? That's point number 1. And then secondly, I'm assuming at some point the Koreans are going to have a similar media to XPoint.
You clearly have a lead for now. My question is: in the SSD market today, you compete in a market where the media is a commodity, and yet you do things in the firmware and with the controller technology such that, even though you're using commodity NAND, you're adding a lot more value than just the commodity medium. So when you bring XPoint onto a DIMM, are there other things, like ASICs or firmware, that will protect you if and when the Koreans have a comparable media to XPoint as a memory and not a storage device?
Well, first, thank you for saying we're adding a lot of value in the firmware and the ASICs. One, before we completely commoditize media, we believe there is still significant differentiation in the media itself, particularly going to 3D NAND, driving technology hard in the NAND space to get to more cost effective technology, as well as in 3D XPoint to get to much higher performance technology at great cost as well. So we think there's still plenty of room in pure technology driving. But over and above that, I agree that one of Intel's core assets in driving these storage products into the data center is our deep understanding of computer architecture: where it's going, what the bottlenecks are, how to solve them in both software and hardware, places where we can provide acceleration, places where we need to bridge things around reliability, and that sort of thing. And you'll see some of that come out in what we call platform connections: how we go about solving real problems for the computer architecture and for IT folks with these technologies.
And so there is room, even on the DIMM, for those kinds of differentiation. At a high level, on the performance question, they're of the same order of magnitude; 3D XPoint is going to be slower than DRAM. But if you were to put, say, NAND here and DRAM there, we're a lot closer to DRAM than we are to NAND. And so when you're looking at data access, you're really looking at tiered structures of information: the hit rate in one tier, and then the penalty of going to the next tier.
And if you can have a much bigger close tier and it's not too different than the other one, it's much better to be bigger than it is to be slightly faster.
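The "bigger beats slightly faster" point can be made concrete with an expected-access-time model; the hit rates and latencies below are made-up illustrative values, not numbers from the talk:

```python
# Expected access time for a two-tier memory hierarchy:
#   E[t] = hit_rate * t_near + (1 - hit_rate) * t_far
# A bigger near tier raises the hit rate. Compare a small, very fast
# near tier against a larger, slightly slower one (all values assumed).
def expected_access_us(hit_rate, t_near_us, t_far_us):
    return hit_rate * t_near_us + (1.0 - hit_rate) * t_far_us

small_fast = expected_access_us(hit_rate=0.80, t_near_us=0.1, t_far_us=100.0)
big_slower = expected_access_us(hit_rate=0.99, t_near_us=0.5, t_far_us=100.0)

print(small_fast, big_slower)  # ~20.08 vs ~1.495 microseconds
```

Even though the bigger tier is 5x slower per access in this sketch, its higher hit rate cuts the expected access time by more than an order of magnitude, which is the argument for a dense XPoint tier near the CPU.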
Blaine? Could
you just talk about the cost structure? Obviously, if people could have a faster alternative to NAND, they would go with it in many cases, but obviously the cost is the other equation. You probably won't give exact number, but could you give a relative gauge as to where you are, what you can do, could you do multiple 3x architecture down the road or how close could you bring into a NAND cost or is it always going to be this segment of the market who just wants the fastest and they have to pay up for
it? Yes, I think, for one, it's a new technology, right? And so costs are complicated. But at the sort of architectural level, density is the key element for the long-term, think decade-long, cost focus. Right now NAND is about 256 gigabit at the state of the art and this would be 128 gigabit. So long term, our goal would be to try and stay close to that. And then we believe that the technology scales really well, because the lithographic architecture of it is good for scaling in 2 dimensions, and then it's a stackable technology.
So it gives us ability to stack as well to get the higher densities.
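As a rough sketch of that density gap and how 2D scaling plus stacking could close it (the shrink factor and layer counts here are hypothetical, not an Intel roadmap):

```python
def die_density_gbit(base_gbit, planar_shrink, layers):
    """Die density from shrinking the 2D cell array by `planar_shrink`
    and stacking `layers` memory decks vertically."""
    return base_gbit * planar_shrink * layers

# Starting point quoted above: 128 Gbit vs. 256 Gbit state-of-the-art NAND.
first_gen = die_density_gbit(base_gbit=128, planar_shrink=1, layers=1)  # 128

# One hypothetical 2x lithographic shrink combined with doubling the deck
# count would already pass the 256 Gbit figure:
scaled = die_density_gbit(base_gbit=128, planar_shrink=2, layers=2)     # 512
```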
Matt?
Given the technology is persistent and you guys plan to use it as DIMMs, could you talk a little bit about the power implications?
It's very efficient, how about that? And it fits in the same electromechanical slot that a DDR DIMM does, right?
Any other questions for Rob? Thanks very much, Rob. Hey, no problem.
Hello, am I on? All right. So I'm the other Rob, Rob Hayes. I run strategic planning in the data center group, so I own product roadmap for all of the data center business. And I'm going to talk about accelerators and accelerators demystified, right?
So we're just going to really talk about what are accelerators, what are the different implementations in the industry that people might use to deploy an accelerator, what are the workload requirements, and what kinds of workloads are people using accelerators for today? And then finally, I'll finish with what's Intel's approach to accelerators? How do we support them on the platform, as well as give you a little bit of a sneak peek into how people are using FPGAs to deploy different kinds of accelerators in a very flexible manner, which might hint at some future opportunities. So I thought maybe I would just start with what is an accelerator. I think the term can have many meanings, but what I'm talking about specifically today is hardware assists that can improve the performance of very specific workloads beyond what you could get on a general purpose processor alone.
And we know that general purpose processors are really the most flexible thing you could deploy. They support pretty much any application. They can be programmed pretty much by anybody with the right skills. And they're always the lowest total cost of ownership thing you could possibly deploy just based on the standard high volume economics of sort of standardizing on a single configuration and deploying it at scale. And you saw a lot of that in the data center tour today.
So why would a customer choose an accelerator to add to the platform? It's really because they're looking for better performance. They look at the peak load of a given workload and they see some opportunity to get substantially better performance per total cost of ownership than they could get using general purpose processors alone at scale. And I'm talking much greater; it's not 20%. If it's 20%, they'd probably just buy the next SKU of the CPU, or wait till next month when we launch the better one.
We're talking 2x, 3x, 4x where they see really substantial multiple benefits to accelerate their workload without too much cost added to the platform, which is also a key ingredient or key part of the equation. Generally, they're not offloading an entire workload. They're looking at the overall application that they're running and they're trying to figure out what's the specific sort of portion of the code that is running some kind of routine stable algorithm or calculation that might be accelerated in some kind of a specific or purpose built piece of hardware. And the rest of the application will continue to run on the general purpose processor and that specific portion of the code would be offloaded to some fixed function or purpose built accelerator logic. And there's several different implementation options and that's what I'll show you next, how people might go build these.
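The 2x-to-4x arithmetic follows from how much of the workload's runtime can actually move to the accelerator. A minimal sketch using Amdahl's law (the fractions and speedups below are illustrative, not measurements):

```python
def offload_speedup(offloaded_fraction, accel_speedup):
    """Amdahl's law: overall speedup when a fraction of the runtime moves
    to an accelerator and the remainder stays on the general purpose CPU."""
    return 1.0 / ((1 - offloaded_fraction) + offloaded_fraction / accel_speedup)

# Offload 75% of the runtime to hardware that runs that portion 10x faster:
print(round(offload_speedup(0.75, 10), 2))   # 3.08 -> the "3x territory"

# Offload only 20%, and even a near-infinitely fast accelerator caps out
# around 1.25x -- which is why a small hot spot isn't worth extra hardware:
print(round(offload_speedup(0.20, 1e9), 2))  # 1.25
```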
First of all, let me orient you to my chart here, a simple 2x2. What we have on the left are standardized solutions. Think of these as something you go buy off the shelf, and it pretty much is what it is when you buy it. It may be programmable or it may not be; it may be kind of fixed function. On the right hand side is a customized solution, meaning it may be programmable or not, but it's really something that's flexible for the end user or the OEM or the system provider, that they can configure to do something specific that they want to go do.
Okay, so that's my world here and I've populated it with the 4 typical implementations that you would see in the industry. And these are all hopefully familiar terms to you. So in the upper left, we have processors or CPUs. And these are products that are available in the market. They're really designed to be flexible ways to program through software anything you want based on just sort of common math functions or logical operations, add, subtract, multiply, divide, inverse, and or these kinds of just simple functions and you can pretty much build any application out of those functions and stringing together different instructions.
On the opposite end of the spectrum, in the lower right, we have ASICs, or application specific integrated circuits. These are chips, sort of by definition, built by their user. They're built using whatever proprietary IP they have. They may pull from an ecosystem of other IP, but they configure a chip that's purpose built and captive, or proprietary, for their own use. And then, different from ASICs, we start to see a growing market and a shift from ASICs towards what we call ASSPs, or application specific standard products.
And this is where vendors, Intel or others look at what kind of functions are people doing in ASICs. Are there things that are kind of common enough in the industry that there might actually be a market for selling a chip that does that function, right? So this could be like an Ethernet switch or a GPU or something like that where it's a pretty common usage and you can offer that product on the merchant market and anyone can use it. And that's in ASSP. Pretty much fixed function really purpose built.
And then in the upper right hand we have FPGAs. Now FPGAs are similar to an ASIC in that you can customize them. You could pretty much build whatever you want out of one. But they're also similar to an ASSP or a processor in that they're a merchant product. You can go buy it.
You don't have to build the silicon yourself, you buy the silicon and then you program it in hardware, you program the hardware to do the function that you want. So those are the definitions. And if that's not clear, I thought I might use an analogy to talk about what it's similar to. So the ASIC, these have been around since the beginning of sort of silicon time. This is where the world began, which is people use the silicon technology to build whatever they want.
It's like a stone tablet, right? If you have the skills and the tools and the time, you can pretty much make it say whatever you want. But once you've written it, it's done. It's going to be the same, right? So it's pretty much etched in stone.
You could also, as an ASSP, say, hey, I could build kind of a rubber stamp, right? I've got this operation. I'm going to do it over and over and over again, exactly the same operation. I could build and sell you a stamp and you can just hammer through all of those different documents very quickly using something you bought at Staples. An FPGA is kind of like a printing press, right?
I can configure the typeset, I can have it say whatever I want and then I can just print a bunch of books, a bunch of newspapers and it can go very rapidly. And tomorrow when I want to print the next newspaper, I can reconfigure it and I can print that one. So there's some setup time and some overhead in reconfiguring it, but it's reconfigurable and you can get very high throughput once it's configured. And then the CPU is I'll call it like the word processor, right, which is anybody can use it, you can do whatever you want with it. You can scale it with high volume printers and things like that, but it takes a little bit of effort, but it's very flexible.
Sandra is nice enough to point out, you can also have your spreadsheet and your e mail and your video and everything else on that too. So it's a very flexible solution. So hopefully that's a little more clear now. Okay. Why would you choose each of these, right?
So a CPU would be chosen where you want performance across a broad range of workloads. You want to build an infrastructure, think a cloud infrastructure, a high performance compute cluster, a telco or communications infrastructure, that's very flexible, and you want to be able to support all of those workloads and the innovation, and you want to do it at a very low total cost of ownership. That's what CPUs are best for. ASICs, on the other hand, are best for when you, as a user or a system vendor, have some proprietary value that you want to go implement to differentiate yourself, and it has enough value that it is worth the R&D cost, the manufacturing cost, and the time to go implement that in silicon, okay? If you don't have that kind of time and skills, etcetera, or you don't see that kind of value in your proprietary logic, then you would either buy an ASSP or you would buy an FPGA.
And the difference there would be: you would buy the ASSP if you had very stable logic that you were trying to serve and you saw a merchant product that met that need, and the easiest and fastest thing to do is then go buy that purpose built piece of silicon and deploy it. An FPGA would be similar in a lot of those regards, but if your workload isn't stable, in other words, it will change over time, and I don't necessarily mean over minutes or seconds but over maybe days, weeks, months, you might want to buy an FPGA instead, because then you have the opportunity to change the functions and the calculations that it's doing. And/or you could put your proprietary logic into that just like you could in an ASIC, but you can do it much faster and cheaper from an R&D perspective than building your own piece of silicon, right? So I can program something off the shelf. It's still proprietary and differentiated and I can ship it.
So that kind of explains some of the different value props of each of these and there is a market and an opportunity in a world where each of these exist. And then you think about discrete versus integrated. Most accelerators start as discrete, right? So there's I've drawn a picture here on the left. So this is kind of a high level server, if you will.
And you've got a general purpose processor in it and that processor, like I said, has general purpose kind of execution units in it. And then you can put an accelerator off the host bus, which would typically be PCI Express in a modern server. And in that accelerator, you would custom craft your function for whatever string of calculations or logical operations you want to run inside that accelerator. And a lot of the code and operating system things like that would run on the base CPU and then it would offload the portions of the code over the accelerator to get the better performance and they would work together in a kind of a co processor kind of model. This is great for configuration flexibility.
If you're building a system, you can choose to put or not put the accelerator in there, or choose different kinds of accelerators. It's good for time to market in that the CPU product cadence can be on one time schedule and the accelerator product cadence can be on a different time schedule. And you're not compromising the costs or making trade offs in any one chip or the other. So that would be the benefit of discrete. On the other hand, a lot of accelerators get integrated, and integration provides the basic benefits of improved performance, because you don't have to slow down to go off chip onto the next chip, and improved energy efficiency, because it's lower power to transmit data on die than it is off die.
And you also minimize the system cost because there's benefits of just cost reduction through integration that we've realized for years. So we look at accelerators, discrete or integrated. And over time, where we see scale of discrete accelerators that have enough scale in the market and opportunity then we integrate them and we've done that for years and we continue to do that. So then you sort of think about, okay, great. So what workloads might I accelerate, right?
And we, as a large player in the server market, of course, have to look at a whole basically all the workloads, a very broad range of workloads and really try to understand a few things. Number 1 is, what is the market demand for those workloads or how many people are deploying those workloads and what are the growth rates, so we can kind of understand where the market is going. We also need to understand what are the technical characteristics or attributes of those workloads and what opportunity might we have to make them run faster. And so I've shown here a bunch of bubbles and these all represent different classes of workloads and it's not exhaustive. These are some of the common examples here.
And you can see that we look at things that are IO intensive, so how much data are they sending in and out the chip, how many things are compute intensive or CPU intensive, meaning how many calculations are they running, pattern matches, things like that or how many are memory intensive. In other words, they've got large sets of data that they're reading and writing and operating on. And you can see that there's a broad range of different combinations of sort of sensitivities of these. And that's really important to understand if you're going to go accelerate something, you got to know what you're going to accelerate, how you're going to accelerate it. So we try to understand that.
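One common way to formalize the compute-intensive versus memory-intensive distinction, my framing here rather than the speaker's, is the roofline model: compare a workload's arithmetic intensity to the machine's flops-per-byte balance. The hardware numbers below are hypothetical:

```python
def bound_by(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline-style classification: a workload whose arithmetic intensity
    (flops per byte moved) falls below the machine balance is limited by
    memory bandwidth rather than by the execution units."""
    intensity = flops / bytes_moved
    machine_balance = peak_flops / peak_bw
    return "compute-bound" if intensity >= machine_balance else "memory-bound"

# Hypothetical server: 1 TFLOP/s peak compute, 100 GB/s memory bandwidth.
print(bound_by(flops=1e12, bytes_moved=1e10, peak_flops=1e12, peak_bw=1e11))
# compute-bound: dense math / pattern-matching style kernels
print(bound_by(flops=1e9, bytes_moved=1e9, peak_flops=1e12, peak_bw=1e11))
# memory-bound: streaming over large data sets
```

Which side of the line a workload falls on tells you whether an accelerator should add execution resources or attack data movement.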
We look for the technical opportunity to accelerate either in the base CPU or in an accelerator. And then we also look again at which workloads are growing. And what I've shown here on the top are the 3 fastest growing classes of workloads that we just have to nail, right? These are the things where a lot of innovation is going on in the industry, including improvements in actually developing accelerators themselves.
And I'll share some of those examples in a minute. And everything starts with just improving general purpose computing performance, and Diane showed this chart earlier today, but it's a constant march built on Moore's Law, turning a lot of different architectural knobs that we have in the platform to constantly improve the general purpose performance that we get out of the platform for that broad range of workloads. We target a few workloads with special instructions or optimized feature innovations to get a big pop on the general purpose CPU. But then beyond that, we look at additional things we might go do at the platform level to even further accelerate performance at those sort of multiples, 2x, 4x and beyond. So the first thing we do is we optimize software for IA.
So we have a whole bunch of different libraries, math kernel libraries, data acceleration libraries, data plane development kits, storage acceleration libraries, a whole bunch of different developer tools and reference code, where we've basically put our ninjas, as Jason called them, on the software problem and said, here's an algorithm, here's a common mathematical function, linear algebra or whatever, go really make sure that thing screams on IA, taking advantage of the latest and greatest technologies that we have. And then we'll put those into the libraries, we'll release them out to our developer ecosystem, and they can build on top of those building blocks and have confidence that they're going to get the best performance out of the platform they can before they've even added in any accelerators. After that, if we see opportunity to invest in discrete accelerators that could provide even better performance, again for routine calculations, then we do it. And we've got a whole bunch of these that we've done forever and continue to do. Back in the day we did floating point co-processors, math co-processors, that are now integrated.
We've got GPUs that we at one point back in time all used to be discrete. Now they're integrated in a broad portion of our portfolio in both the client and the server roadmap. Today, people are starting to experiment and deploy FPGAs on PCI Express as discrete next to our CPU. But over time, like I said, where the market opportunity hits enough scale and the cost of those deployments come down low enough, we'll integrate those into the CPU. And there's examples of this where we've integrated the graphics processor, like I said before, onto the CPU.
We look at accelerating compression and other types of capabilities on the CPU. And then we offer sort of an even better solution where we've got integration. And then finally, we are constantly evolving the instruction set for Intel architecture. And when we see an opportunity to even further improve the performance or reduce the cost, we will do that: we will extend the instruction set and basically make that a native instruction to reduce the clocks, reduce the cost and so forth.
And so I think of this as just a flow. Everything starts with make software run as best as it can on the hardware you have, supplement with discrete, integrate and integrate further and that's sort of the flow. And we've done this many, many times for as long as I've worked here. So that's the playbook and we continue to execute it. I heard that you guys might be interested in FPGAs and so what I wanted to do is just spend a little bit of time talking about 3 of the big use cases that we see.
And I think these are really the primary use cases that we see. I'm not it's not exhaustive. There's others, but these are the 3 that we hear most from our customers right now. And so let me talk about the yellow line on the top here real quick. So this is sort of in a generic sense, how would you use FPGA and then we'll talk about the specific use cases in a second.
So you could use an FPGA if you have an emerging use case, and when we get to the cloud service providers, image recognition is kind of one example there. You could implement customer specific solutions: if I have a custom way of doing some kind of packet processing or a security algorithm, I can implement it in an FPGA. And then if I have an algorithm that I need to change over time, I have an opportunity to reprogram it and still get the same kind of performance that I wanted out of that accelerator in the FPGA, but it doesn't have to be static over time.
So I can still kind of get more utilization out of that infrastructure. So, three examples of customers here: I've got a cloud service provider example, an OEM or system vendor example, and a comm service provider example. What we're seeing, and Jason alluded to this earlier, is that one of the primary competitive fronts among public cloud service providers in the search arena is being able to search images and videos and things like that, and find people without people having to tag who they are, and find other objects. There are other applications for image recognition, in cars and robots and other things, that the technology can also be used for.
But in the public cloud, think of image search as the use case I'm talking about. And one of the technologies that has really moved out of academia over the last decade or so and into deployment is deep learning, and convolutional neural networks specifically. And this is a technique that people are using to rapidly have the machine learn and recognize different images and tag people and find people very quickly. And what they're finding is that there are 2 parts of this use case. One is they train.
So they have a small cluster on the side that's a very high performance, kind of HPC-like cluster, and it trains itself to recognize these images and detect who the person is or what the object is and things like that. And once that training phase is done, you've got an algorithm and you want to go deploy that algorithm in production across, say, the 100,000 servers that were in the data center that Alexis showed you today. And you want to be able to just run that algorithm as fast as possible. And so what we see is that FPGAs could be used in that deployment phase, and that's where some of the experimentation is going on right now. And so you get an algorithm, your training cluster says this is it, this is the best way to do image search, and then you deploy it.
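The train-once, deploy-everywhere split just described can be sketched in a few lines. This is a toy stand-in, trivial averaging and a dot product, not a real CNN; the point is only that training is expensive and done once, while scoring is a small, fixed, routine calculation repeated at fleet scale, which is exactly the shape of work suited to an accelerator:

```python
def train(examples):
    """Expensive phase, run once on a small HPC-like cluster: derive weights.
    (Real training is iterative and vastly heavier; this just averages.)"""
    n, dims = len(examples), len(examples[0])
    return [sum(e[i] for e in examples) / n for i in range(dims)]

def score(weights, features):
    """Cheap phase, run billions of times in production: one dot product.
    A fixed, stable calculation like this is what gets programmed into
    the accelerator."""
    return sum(w * f for w, f in zip(weights, features))

weights = train([[1.0, 0.0], [0.0, 1.0]])  # trained once -> [0.5, 0.5]
print(score(weights, [2.0, 4.0]))          # deployed everywhere -> 3.0
```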
And today that's all deployed on typical Xeon servers in most data centers, but it can be accelerated by programming that function into the FPGA. I'll say one thing, because there was a question earlier this morning that's relevant to the topic here, where someone said GPU and FPGA in the same sentence. And I think these are kind of apples and oranges. Today, GPGPUs are used in high performance computing, and we have our Xeon Phi solution, which we think is a great solution, comparable on performance with what people can do with GPGPUs, but with a better developer experience that is more consistent with IA. That's all kind of a competitive front that's happening in the training phase, in these high performance compute clusters where they're trying to learn the algorithm and all that.
When we talk about FPGA accelerators, we're really talking about that scale production, we'll call it scoring phase where you've got the algorithm, now you want to go deploy it. And so they're really not kind of apples to apples. So I thought I'd add that right here. Let's see. So the next one is security appliance vendor.
So you can imagine that you're building a firewall, a gateway, some way to encrypt data between data centers, maybe between countries or within your country. You're selling this appliance, and you have a unique way of encrypting that data or protecting it or doing identity management and things like that. FPGAs are a good way of programming that in. You're getting lots of packets coming off the network at a very high velocity, and you can deploy that algorithm and it can operate very quickly using an FPGA, using encryption. And you may want to change that algorithm over time, by the way, because you may have a better encryption algorithm next week than you have this week.
So you may want an FPGA to be able to reprogram it. And then the next one is the comm service provider. So similar kind of use case in that in network equipment is one of the largest places where FPGAs are deployed today. We see just a lot of high velocity throughput traffic. There's a lot of routine calculations, packet parsing, pattern matching, white listing, things like that, that are going on.
And in an FPGA, you can program in your specific way of doing that, and then the FPGA can help supplement some of the application processing that's happening on the general purpose CPU with network function virtualization. So it's a good offload model, similar to fixed function accelerators that we offer, like compression and encryption in some of our QuickAssist technologies. So those are different ways of implementing some of our use cases. So the last thing I'll say here is, we've announced publicly previously that we're going to be integrating FPGAs on package with Xeon.
And so we haven't announced when that product is coming out, but it's coming out in the near future. And so we see an opportunity not just to embrace FPGAs on the platform as discrete, but to integrate and drive that kind of value proposition I talked about before, with lower power, lower cost, higher performance. And so when the Altera acquisition closes, then we have opportunities to start to execute on that. So we're very excited about that opportunity, and these are some of the initial areas where we're getting a lot of customer interest in really being able to optimize those products to work best together. So in summary, accelerators really apply best, or really only at scale, when they're providing a substantial performance benefit over general purpose processing, and at the lowest total cost of ownership.
So you have to have both those things be true at scale. People implement these in an ASSP, merchant market format; they implement them as an ASIC, a proprietary, I-own-it kind of thing; and they implement them in FPGAs; and they all have their own strengths and weaknesses relative to each other. Our approach is: make sure general purpose computing continues to march forward, delivering better and better performance that benefits everybody; optimize the software to run best on it; offer discrete accelerators in the select areas where we see an opportunity to do so; and integrate, delivering better performance, power and cost benefits in doing so. So with that, I would like to thank you and answer any questions
you have.
Thanks, Robert, for the presentation. Two questions. What do you need to do on the software or the OS side to make the system aware that you have FPGA acceleration, rather than doing it in software on the CPU? And then a second question: the trend towards network function virtualization seems to perhaps work against an FPGA implementation, in that if you want a flexible, industry standard server that can run different workloads, having an FPGA with a fixed function program sounds like it sort of works against the trend towards network function virtualization. So how
do you
see FPGA acceleration playing out in that comm service provider segment?
Okay. So the first question, about what you need to do at the operating system level to be able to take advantage of an FPGA. Today FPGAs are deployed in one of two modes: either they're deployed in some other platform that's not an Intel standard server, and I won't answer that question, or, when you put it into an Intel architecture server, it's a PCI Express device today, so it operates just like any other PCI Express device. The FPGA vendors, Xilinx, Altera, whoever it is, would need to provide a driver that would work with Windows or Linux or something like that.
And then that would be an IO model, so the application would need to know that, hey, I'm going to send or receive data off to this FPGA, and the FPGA would be programmed to do whatever operation it does once it receives that data. And then when it's integrated in the package, we are working on the software solution, but we're basically integrating on the memory interface or the coherent interface in the solution that we have right now. And so we're working on both the software and the hardware modifications that need to be made in the platform and in the operating system to make that just work. And then moving forward, our focus is really less on the system level software and more on the application developer environment, and how we make that a really good experience and really easy to deploy. So that you don't have to have people that are, like, RTL hardware engineers going off and programming these FPGAs, which is kind of what it's been up until now. And it's more like any software developer can write a software application for x86 and they can offload to the FPGA just like they could offload to any other offload today that we might offer, like an encryption engine or compression engine. So that's kind of where we're headed.
Your second question on NFV SDN. So today there are lots of capabilities that exist on the platform that sit next to the CPU. There's IO devices like Ethernet controllers, there's other discrete accelerators like compression encryption devices or FPGAs. And if you think about NFV, it's very much server virtualization, where you have a hypervisor and then you have virtual machines. And there are well established methodologies today on how a virtual machine can access hardware resources outside the CPU, things like single root IO virtualization and other custom techniques that people like VMware and others have invented.
So I think they're complementary. Your network function can operate in the absence of an accelerator and all run in software on top of the general purpose CPU. But if the accelerator is present and the hypervisor exposes it, which it needs to do, to the virtual machine, then the virtual machine has access to those resources. So it's a pretty well understood technical problem. Yes.
David?
You mentioned both in package and on die FPGA accelerator implementations. Do you expect in the long run all your solutions will be on die or will there be a significant proportion of in package even when you do have both choices? Is there an advantage sometimes of having an in package solution? And if so, what are the proportions of applications going to be? Okay.
I don't remember saying on package or on die, did I? Okay. I guess the brief answer is, I don't know what the mix will be. There are benefits to being on die in performance, cost and power, the same as there are benefits to being on package versus on board. The closer you get, the better the benefits are.
And so our current plan is on package. You could imagine that we're looking at on die for the future, but we don't have any current plans that we're ready to announce on that. And I really don't know what the mix is going to be between them. But there's always going to be just the normal trade offs of what are you trying to accomplish and cost performance and things like that that will take into account. So we may offer multiple options or we may converge on a long term on die only.
We'll find out.
Any other questions for Rob?
All right. Okay. Thank you.
All right. Well, with that, we're going to finish up with the final Q and A with Diane.
Yes. So there's just a single slide here to try to recap for you what you heard today. And I hope this has been informative, and hopefully you share in our excitement for the data center business and our growth projections. So, we talked about how our fundamental business is microprocessors. We have 96% share of the server market.
So, our actions here are to accelerate the growth of that market through things like our investments in the cloud build out, the build out of the cloud architecture, not just the cloud service providers, but the comms work that Sandra talked about, moving them to a cloud environment, as well as enterprise, getting enterprise to deploy private clouds. Big data is another way that we're accelerating the market, and you heard Jason talk about what we're doing to accelerate big data deployment, which is too complex today, but we're making investments in things such as Discovery Peak to try to accelerate that market as well. Then the network we talked about: this is a portion of that 15% CAGR, where we are moving into new spaces and growing our share of the market significantly from single digits today, and we have big aspirations obviously, and you heard Sandra talk about that. And then the other way we grow in support of that 15% CAGR is through the new technologies, the new products that we're launching. Thank you, Rob, for coming and talking about 3D XPoint; Silicon Photonics; Omni-Path is another one that is launching soon and will support that growth.
And then there's our continued investment in the capability of the processors we deliver, making them a greater and greater value to a broader variety of workloads. Rob did a nice job talking about accelerators being one of the ways we keep driving incremental value and capability into the fundamental microprocessor. So that's how we are growing the business. We'll just open it up for questions, and you have all of the DCG staff here who can join me in answering your questions if they get too hard. So fire away. Anybody have anything left that's on their mind?
Any questions about our business?
I'll ask a high level question about the longer term growth rate. If we come back here 3 to 5 years from now and growth is 20% CAGR and not 15%, what would be the 2 or 3 reasons in your mind that there's upside to that long term CAGR? And then conversely, assuming there's no big macroeconomic issue, if it ends up being sub 10%, why would you think it could be below kind of that long term 15% growth rate that you have out there, independent of the economy?
So that's a good question. On the first one, you sound like my boss, right, why can't it grow faster. And our Chairman too. If it grew faster, I would have to point to the network transformation. I think we've built in some very realistic expectations for the conversion of that fundamental market off of those proprietary fixed-function boxes onto Intel architecture and onto cloud-based automated infrastructure. And we do talk about it as kind of a 2-step.
I think the world sees that ASICs are becoming very, very expensive, and our products have become better and better. And so the gap between what an Intel architecture CPU plus some accelerators can deliver versus what a custom ASIC can deliver is narrowing, and it's getting harder and harder to justify all of these unique ASIC investments in the network market. So we think we can move them onto Intel architecture and then move to a true cloud architecture, which is a big undertaking. If it's greater than 15%, it's because that move happened faster. And Sandra and I have conversations all the time about what we could do to accelerate that.
I think we've demonstrated in the past that we are very good at seeing industry trends and making investments to help the market get there faster. So that would probably be the big one that would get us above 15%. Aside from macroeconomic conditions, what could make us less than 15%? Well, I guess you'd have to say the cloud build-out in enterprise. I think what we have seen in the last, gosh, maybe just even 9 months is the diversification of the public cloud market. We had been fixated on talking about the big 4 in the U.S. and the big 3 in China, which have been such a dominant portion of the market, and then other players have come in. So there's the diversification of the market and then just this long tail of SaaS, software-as-a-service, offerings that tend to be a very nice bolster to our business and not as price sensitive. But we do need enterprise to continue to procure infrastructure, right. We need that to happen.
That's still 40-ish, 40-plus percent of our overall revenue today. So we need to get them to refresh. I was CIO of Intel; I know that we sat around twiddling our thumbs and said, boy, virtualization is really hard, we'd love to virtualize, but it's so hard. So now they're twiddling their thumbs saying, we'd really love to move to a cloud, but it's so hard. So we've got to make it easy for them to move to the next-generation architecture, which is cloud computing.
I think Jason did a great job talking about the investments that he's making to make it easier for your standard enterprise IT shop to deploy a cloud environment. But if we don't make it easier for them and don't get them to do it, we'll likely see that in the numbers.
Just playing off the answer you just gave: with a relatively smaller customer base, is it going to snowball, where one adopts it and then others rapidly follow? How do you see what could accelerate it? What's different from what you've seen in areas like storage? And the fact that it's a more fragmented competitor base with these individual boxes, how does that play into the mix?
I think you did a nice job hitting on the problem statement, right? It is fragmented. It's a whole bunch of different network functions: load balancers and switches and routers and VPNs and security appliances; you've just got all these different boxes today. You need to move all of those workloads onto Intel architecture. And that's what makes it a slow move.
It's one at a time: redesign, re-architect, port the code, prove it out, deploy it, and rinse and repeat, over and over. So it really becomes a capacity statement. I think the good news is the end users see the value. They're like, okay, we've got to move all of that stuff over onto Intel architecture. So that's good; there's pull. And we have the product. We've got enough proofs of concept, as Sandra said, to prove it works.
So now it's just a capacity issue. Our own capacity for developing more and more custom solutions, OEM capacity, TEM capacity; both OEMs and TEMs are playing in that space now, right. So you have the HPs that are providing network-function-virtualized solutions as well. So their own capacity, and then the end users' capacity to deploy and prove it out with confidence. It really is just a capacity issue.
It's not a technical issue. It's not some big invention needs to occur. It's just a capacity issue. And you had a second part of the question that I lost, I'm sorry. Is that it?
Okay. Yes, sorry, I don't get to pick; Trey always tells me that IR gets to pick who gets to ask the question, right. Mark is so mad at me when I take that away from him.
Thank you, Dan. Just a big picture question. Obviously, in the last few years, you made several acquisitions and you filled some of the gaps in your IP portfolio. As you look out to the next 3 to 5 years, how do you feel about your portfolio right now to achieve that 15% plus growth? And how should we think about M and A, either small or large going forward?
I think we are pretty set. I think the Axxia acquisition that we made, if we can pat ourselves on the back, I mean, I can just pat them on the back, it was a brilliant acquisition in helping that capacity problem I just talked about, growing our capacity to deliver custom ASIC solutions to help move that market over. That was the last big acquisition that we made in the data center space on the hardware side. So I don't think we have any big gaps in delivering that 15% from a product development perspective. And then I should add in the FPGA side, Altera, I mean, that was huge, right?
Oh, yes, that little Altera thing. So yes, that acquisition obviously benefits my business, so I was in there pitching. It also benefits Doug Davis' IoT business, and then it's just a good business given the foundry relationship we have with them. So that also is a core capability that Rob talked about that will fuel our business.
So between the intent to acquire Altera and the Axxia acquisition, I think from a product capability standpoint we're looking pretty good. Where you will see investments is in continuing to drive that software stack, whether it's the big data software stack or the cloud software stack, making it more robust, enterprise-class, and easier to deploy. That's where you'll see continued investments. I think we've demonstrated that with the recent Rackspace announcement, an investment that we're making in Rackspace, both engineering investment in OpenStack as well as deploying large clusters on which the OpenStack community can actually prove out those systems. And then just on Monday, the announcement of the investment in Mirantis.
So, helping Mirantis scale and reach more of those enterprise customers in deploying cloud solutions. You will see more of those; when we announced Cloud for All, we said that you'll hear 15 to 20 more of those types of announcements over the next 12 months. So that's where you're going to see us. As we were saying at lunch, the limiter to accelerating the market, when we talk about item number 1, we know what the server market is, so let's accelerate its growth, that limiter is not the silicon products.
We have very compelling products. We have great acceleration solutions. It's not the product. It's making that software stack easier so that the end user can actually deploy our products. So that's where the investments will be.
Thanks, Diane, for making your whole team available. You've done a great job of laying out the long-term vision and the diversification of growth in your business. No matter how long-term we investors want to appear, we're not the best at having a long-term horizon sometimes. But maybe you could talk a little bit, I asked Stacy this last week at IDF, about visibility in your business in the near term, right? So there was the 19% growth in Q1, 10% in Q2. People have a few concerns about the back half of the year. The length of visibility that you have in your business, both on the enterprise side and with the big cloud guys, that would be really helpful. Thanks.
Yes. So we haven't moved from our 15% growth for the year. Q1's 19% growth, that was huge; 10% is still double digits. We're still comfortable with the second half. And that variability, as I said earlier, is really a reflection of the shift in the end users that are purchasing our products and solutions.
We loved the old days of enterprise IT: very predictable buying patterns. We have 20 years of seasonal buying patterns for enterprise IT, and so we could nail each quarter with great predictability. The public cloud service provider market is much harder to predict, and they will even say they struggle to predict, right. They're tuning their algorithms all the time, they're deploying more infrastructure all the time, they're trying to gauge how much capacity they need, because with a big CapEx spend they don't want to spend without the demand being there. So they even have trouble anticipating what next quarter's demand is going to be.
So that is where you get the variability. I think the network conversion has been extremely predictable. Sandra has hit or exceeded her POR every single quarter for umpteen quarters now. So it's a very predictable market for us. Enterprise is generally very predictable.
I agree that Q2 was softer than we would have liked. The cloud side, as it becomes a bigger and bigger portion of our business, that's where the unpredictability comes from. But we're still confident in the 15%; there's no change to that. China, I will add, I think I mentioned earlier, is clearly the fastest-growing geo and obviously the largest consumer of 4-socket-and-above servers, a significant portion of our business. So that does create a watch item for us, right? We're all watching China.
So that's a little bit of a knob there to turn. But we are working with cloud service providers to get better supply-demand signals in so we can predict it better; it's the maturity of the market, right?
Diane, just coming back to the advent of virtualization: there was a school of thought that processors would become more fully utilized, the number of processors you need would go down, and people were pretty negative on the server business, and that was just wrong.
That wasn't you, was it?
That was not me. To play devil's advocate a little bit: that was mainly an enterprise-driven phenomenon, and the enterprise clearly viewed its IT infrastructure as more of a cost generator than a revenue generator. So what happened is, when utilization went up and the cost of compute went down, it was very easy for people to get VMs, and we saw the number of VMs go up. It was the Jevons paradox.
Yes. Thank you.
When you look at acceleration, especially in the hyperscale market, why are you so confident that that won't dampen the demand for server processors? Because clearly guys like Google and Baidu and Microsoft, they're not underspending in their cloud right now because they view that as a revenue generator. And so if acceleration massively increases their performance, why are you so confident that there is workloads out there that will soak that up?
But I do think, Jeff, look at the number of new services that have been deployed over even just the past year. I mean, Uber, how long has Uber been around, 2 years, something like that? Who would have guessed, right? It's the Jevons paradox, I hate to be a broken record, but if you can make technology cheaper and easier, easier to deploy, cheaper to deploy, easier to consume, new services will emerge. And I think that's why we can point very clearly to the diversification of the cloud service provider market.
There are more services coming online. So it's great that Google and Amazon continue to get more efficient in the way they run their data centers; they're the best in the world at it, Microsoft too. As they make their environments more efficient, that then allows them to deploy new services. And I think that will just continue. You look at enterprise and it's the same story all over again, right.
Enterprises are recognizing that they have the ability to use IT to actually build new businesses, new services for their company, IT-based businesses. We were just meeting with BMW last night, and they have massive new services that they can think of now that they have connected cars, right. Now that the cars in my fleet are connected, and VW is saying 10,000,000 connected cars coming online every year, 10,000,000 devices, well, now that I have this huge car fleet, think of the new business models, the new services I can deploy once I have that cloud computing model. So I do think the more efficient we can make it, just like with virtualization, where server demand went up, not down, when you virtualized your servers.
When you deploy cloud computing and you make it more efficient, people invent new services to deploy across that infrastructure. That's true with enterprise, it's true with the public cloud service providers, and it's true with the comm service providers. They have a list of new services that they're going to deploy and monetize once they get to an automated, cloud-based environment: location-based services that just weren't possible when you have all these dedicated fixed-function boxes.
So I think it's an amazing innovation cycle that will continue. Trey, you need to pick him.
Diane, why isn't memory a source of upside, from 15% to 20%, per the question earlier? If you were to look at the sources of potential upside versus just
Why isn't memory?
Why isn't memory part of that list?
Yes, I could have put memory in there. Sure, I could have put 3D XPoint. In that 15% is obviously the 3D XPoint DIMM, so that is part of that growth. Actually, it's a good point.
Yes, you sound like my boss now; it's a good point. So we had it as a potential investment, and we have baked in relatively conservative attach-rate assumptions between the 3D XPoint DIMM and standard DRAM per CPU. So it could move faster. But yes, that certainly could be another area for upside.
You're right.
But beyond the DIMMs also in the Optane part, which
Oh, solid state drive?
Exactly.
Yes, that's Rob's P and L, so that's why I didn't include it, and Rob is happy to give it to me. Rob will say that solid state drives in the data center, versus client, have been a huge growth area, and the move to 3D XPoint will just accelerate that. So new storage solutions based on 3D XPoint and solid state drives, I agree, will transform the storage side as well. You have a good point there.
Yes, it is very exciting. And I don't know if you were at IDF, but we had our big ISV partners talk about how it will really transform their software. Oracle, with in-memory analytics on Exalytics taking advantage of 3D XPoint memory DIMMs; SAP with in-memory HANA, they're very enthusiastic about it; VMware, since virtualization is bound by memory capacity, so now you use 3D XPoint to increase the memory capacity per processor; and Cloudera.
So, Hadoop scale-out using 3D XPoint. There are lots of big usage models where, you're right, we could be underestimating, but we'll wait and see how it does.
Maybe this isn't a question for you, but I think it is. Back to the subject of 3D XPoint. Do you want to make money selling 3D XPoint? Or do you want everybody to be making 3D XPoint outside Intel, so there'll be all this memory around which will attach to Intel processors?
Well, I think you have to start by saying we invested 10 years of R and D in inventing 3D XPoint. So we definitely want to get a return on that investment. It is a unique technology, right, and so it is an opportunity to monetize that R and D and that unique capability. If we ever thought it was limiting the deployment or usage of the CPU, we could look at different licensing models. But right now it's a wonderful invention that came out of Intel, out of Rob's group, and we want to monetize it and get a return on that investment.
And it is unique. I mean, it's a breakthrough and it's unique. Yes, and we have plenty of capacity, so if you're worried about whether or not he can crank out enough memory chips, we're not. We'll serve the market.
One more question or?
We really, really appreciate all of you being here for this, and we do have some exciting demos that we'd love for you to come see. And also, I think, do we get alcohol with our demos? We do, we have alcohol, yes. Thought that would add some excitement there. Okay.
So thank you very much and Trey.
That closes out the webcast. So thanks for joining us online as well.