Ladies and gentlemen, please welcome the Senior Vice President and General Manager of the Data Center and Connected Systems Group, Diane Bryant.
Good morning. Good morning. Thank you so much for joining us. I appreciate you all getting up and coming here to the city for this. We are announcing several new products and technologies in support of the transformation of the data center.
So let me tell you first what you're going to hear this morning. First of all, you're going to hear some details of the Atom C2000, which is now in production. For those of you that follow our code names, this is the product that was code-named Rangeley and Avoton.
Second, we're going to cover the wide range of C2000 products that are available at launch. So it's not a single product but many products, and they support a wide range of workloads across servers, storage, and network. You're actually going to hear from some of the folks that are leveraging the C2000 in their own environments, and they can talk to you about the benefits that they're seeing. And third, we're going to launch additional new technologies that help drive rack scale architectures.
This is a fundamental transformation of the rack, and it's applicable both to the Atom C2000 and to our Xeon product line. So we're maintaining that beat rate of innovation that helps the industry transform to cloud-based services. Now, IT is moving to a cloud-based service delivery model, and this transformation drives a very clear change in the requirements that the underlying infrastructure has to deliver.
Servers, storage, and network: the infrastructure must change in support of cloud-based services. And what does it mean to deliver cloud-based services? First, you've got to have the ability to rapidly ramp new services, and this requires a high degree of automation across a shared set of infrastructure. Instagram is a good example of that rapid ramp, going from 100 million photos to 1 billion photos in service over just a 10-month period. So a 10x growth.
Also, if you're deploying cloud-based services, you need to be able to support a wide range of workloads. This obviously requires a fine balance between standardization and customization if you're going to maximize performance at the absolute lowest level of operational cost. Amazon Web Services is a great example of this. They doubled the number of services and features delivered to their end user base in just one year, from 2011 to 2012. And in doing so, they're supporting 18 unique server instances on Windows alone.
And then of course scale. Scale is a big challenge. A critical attribute of cloud computing is the appearance, from the end user perspective, of infinite capacity available on demand. An example of dramatic growth and scale to be supported is WeChat, the social media solution from Tencent.
WeChat went from 0 to 300 million users in just 2 years, a tremendous scale. So these use models are just dramatically different from the traditional use models of deploying applications into enterprise IT. Cloud-based services drive a fundamental shift in infrastructure requirements. We've talked about software defined infrastructure before: the need to move from static to dynamic and from manual to automated, so that resources can be deployed to each cloud service based on the needs of that service.
The orchestration layer is obviously a critical element in delivering software defined infrastructure. Not only is the orchestration layer responsible for allocating workloads across those resources, it also needs to monitor the underlying hardware to ensure that service level agreements are met, that the application is not bottlenecked by compute, storage, or IO, and that operational costs are minimized. This requires a very tight connection between the orchestration layer and the underlying hardware. And that's our responsibility: to put the telemetry into the infrastructure that allows the orchestration layer to inspect utilization levels, inspect power consumption levels, and query the security attributes of the underlying infrastructure, everything that's needed to drive up utilization while driving down capital and operational costs, and of course to ensure a consistent end user experience. Running a very efficient data center also requires a strong understanding of the application and workload attributes.
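The monitoring-and-placement loop described here can be sketched roughly as follows. This is a minimal illustration only: the Node class, its field names, and the power budget are invented assumptions, not an actual Intel telemetry API.

```python
# Hedged sketch: an orchestrator reads per-node telemetry (utilization,
# power draw) and places a workload on the least-loaded node that still
# fits within a power budget. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_util: float   # fraction of CPU in use, 0.0 to 1.0
    power_w: float    # current power draw, watts

def place(nodes, power_budget_w):
    # Filter to nodes within the power budget, then pick the least utilized.
    candidates = [n for n in nodes if n.power_w < power_budget_w]
    return min(candidates, key=lambda n: n.cpu_util)

nodes = [
    Node("node-a", cpu_util=0.80, power_w=18.0),
    Node("node-b", cpu_util=0.25, power_w=12.0),
    Node("node-c", cpu_util=0.10, power_w=22.0),  # over a 20 W budget
]
chosen = place(nodes, power_budget_w=20.0)
print(chosen.name)  # node-b: lowest utilization among in-budget nodes
```

A real orchestration layer would of course read this telemetry from hardware interfaces rather than a hardcoded list, but the placement decision has this shape.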
In the world of hyperscale data centers, a single application can run across thousands of servers. So if you're going to gain maximum efficiency, you want that infrastructure to be optimized and tuned for the given workload. And workloads span a wide range of compute and IO demands. What we have seen is the emergence of new applications at the low end of compute: lightweight applications.
One example that you're probably very familiar with is Memcached: the memory caching tier between the front-end web tier and the back-end database tier, caching objects so that you can speed up dynamic web applications. A Memcached deployment obviously requires a large memory footprint but relatively low compute, and it demands low power. So you want high density, low power solutions.
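The caching tier described here follows the classic cache-aside pattern. A minimal sketch, with a plain Python dict standing in for the Memcached cluster and a hypothetical `db_lookup` standing in for the back-end database query:

```python
# Cache-aside sketch of the Memcached tier: check the cache first,
# fall through to the database on a miss, then populate the cache.
# The dict and db_lookup are stand-ins, not a real memcached client.

cache = {}

def db_lookup(key):
    # Pretend this is an expensive database round trip.
    return f"value-for-{key}"

def get(key):
    if key in cache:          # cache hit: served from memory
        return cache[key]
    value = db_lookup(key)    # cache miss: go to the database
    cache[key] = value        # populate the cache for next time
    return value

print(get("user:42"))  # first call misses and hits the database
print(get("user:42"))  # second call is served from the cache
```

The point for hardware is visible in the sketch: the hot path is a memory lookup, so the tier wants lots of RAM and little CPU.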
Another easy example is cold storage as an emerging workload. There's an ever-growing need for a very high capacity, low power, low cost means to store the massive data sets that are emerging. Think about Facebook or Instagram: Facebook states that only 8% of the 300 million photos they store every day are ever accessed again. So you've got a lot of photos to store, and a commitment to the end user that you will store them, but you can store them on a very low compute, low power, high capacity solution. And there are also many workloads in the entry networking space that require just good enough performance, but in a small footprint, high density, highly integrated, low power solution.
So there is an opportunity to target these lightweight workloads with purpose-optimized infrastructure solutions. And that's what we're doing: investing in optimized low power, high density solutions end to end, from the processor to the system and on out to the rack. You may have seen this roadmap at our event in July: we have a full and expanding roadmap of low power processors targeted at these very high density, small form factor, low power workloads. Today, we are formally launching the Atom C2000.
It is now in production, and the C2000 brings a tremendous increase in integration over the prior generation, the Atom S1200, which you may know as Centerton. Compared to the prior generation, we're going from 2 cores to 8 cores. We are integrating the Ethernet fabric, with 4 ports of 2.5 gigabit Ethernet. We are moving to 22 nanometers, with all the goodness that brings in higher performance at greater energy efficiency. And we're adopting the new Atom Silvermont core.
This is the first production product at Intel using the next generation Silvermont core. In July, we also talked about the expansion of this roadmap. We'll obviously continue the roadmap of the Atom system-on-a-chip products; the next generation is code-named Denverton, at 14 nanometers. And we talked in July about expanding this high density, low power roadmap with the inclusion of the Broadwell SoC: taking the Broadwell core and applying the high-integration system-on-a-chip design methodology.
So you get the best of both worlds: all of the high performance of Broadwell with the low power and small form factor of a system-on-a-chip solution. Now I'd like to give you a little more insight into the specifics of the Atom C2000. Relative to our first generation Atom SoC, Centerton, the C2000 provides a tremendous increase in performance, up to 7x, while at the same time delivering a huge gain in energy efficiency, a 6x improvement in performance per watt. That's thanks in part to our 22 nanometer process technology, where we have a substantial lead versus the rest of the industry. And the Atom product family is fully instruction-compatible with the Xeon processor line.
You get all the benefits of software compatibility: millions of data center applications just run on the Atom SoC. And our second generation, just like the first, includes all of the capabilities that are fundamentally required to run inside a data center: things such as 64-bit support, memory error detection and correction, and hardware virtualization capabilities. All of those good things were there in the first generation, and of course you can count on them being in this generation and all future generations.
But what is particularly exciting to me is that this is not just the launch of a single product: the C2000 is the launch of 13 different products, 13 unique solutions launching simultaneously, built off of a common product base. With the C2000, we are clearly demonstrating Intel's move from a general purpose compute solution provider to optimized, targeted products that directly address a range of workloads across servers, storage, and network. Included in the menu we choose from in building out the various derivatives of the C2000 is our QuickAssist Technology, an accelerator for cryptography that we've integrated into the SoC. So it's a very clear move: general purpose solution delivery, which we obviously still do, plus a new capability in Intel's development playbook, the ability to do system-on-a-chip solutions and rapidly turn out derivatives off of the common base, targeting specific workloads to deliver greater efficiency to the data center.
Here are some examples of those 13 products we're launching. They don't all fit on the slide, but here's a sampling. We're obviously launching a cold storage solution: a version of the product with 16 SATA ports at a very attractive power level. We're also launching a line card and switch control plane product, at the bottom there. These products support fanless, very low power operation, provide the 10-year reliability expected in the network area, and come with a commitment of 7-year supply availability.
These are fundamental attributes for the comms industry. We're also launching a router product and a security appliance product. These take advantage of that QuickAssist technology, the crypto acceleration, and they include 4 cores, which gives the additional compute horsepower to support real-time encryption and decryption. And we're launching, obviously, the full 8-core version.
So, all 8 cores enabled, with 4 ports of integrated 2.5 gigabit Ethernet, in support of the micro server segment. There is obviously more, and the list goes on, but it's very clear targeting across core count, IO, accelerators, power levels, and reliability levels: all the different attributes and knobs we turn in order to deliver very targeted solutions for the various workloads running in the data center. We obviously enjoy a very nice share of the storage and server market segments today. With the C2000, we will not only continue to support and meet the emerging workloads for server and storage, but we extend our reach and our ability to serve much of the networking market.
This is a portion of the network space that we were unable to address before. With the C2000, we now have the ability to grow our business into what is a multibillion-dollar entry networking market. So it's a great opportunity for us to bring Intel architecture leadership in performance and energy efficiency into entry networking. And with that, I am thrilled to have Ericsson here today to provide you some direct insight into the value of the C2000 in their environment.
With us is Mats Carlsen. He's Vice President and Head of Architecture and Processes, responsible for the management of hardware and software technologies and architecture, and he's headed up Ericsson's cloud program since 2011. So, Mats, if you want to come up on stage. Thank you.
I'll give you this. Yes. Thank you. Thank you.
So let me start with a short overview of what kind of company Ericsson is. We are number one in mobile infrastructure, and number one in OSS/BSS, services, and media compression and delivery. If you have a mobile phone: 2.5 billion of the world's mobile subscribers are supported by Ericsson systems, and 1 billion subscribers are managed by Ericsson as well. We have roughly 24,000 people in R&D, more than 60,000 in the services organization, and, as I said, we're a company of 110,000 people right now, present in 180 countries.
But our core business is, of course, mobile infrastructure. So let's start with where we are going to use the Atom C2000. I will explain the Ericsson Cloud System. The Ericsson Cloud System is Ericsson's solution for providing a network-enabled cloud: a common cloud execution environment and unified cloud management.
But the thing that is maybe different is that we can actually provide this cloud platform across all the network elements: all the way from the small embedded systems far out in the network, to where the central office resides, and all the way to the large data centers.
And of course, all of these nodes are connected by an elastic and programmable network that provides the dynamic capability needed for the type of systems we are building on the telecom side. With this, Ericsson will provide the operator a cloud platform that spans all the segments. It will be targeted both at telecom applications requiring high SLAs and at new third-party software and business applications. This is going to be launched at the beginning of next year. And speaking of the networking side, Ericsson is happy to announce that we are going to use the Atom C2000 in the blade switches of the Ericsson Cloud System, as the switch controller.
We have been working in very close cooperation with Intel for 7 or 8 years, in the shape of the Ericsson Intel Technology Alignment Program. Over the years, we have been working hard at aligning our server infrastructure, going from a number of server processor architectures to x86 and Linux, and of course removing a number of operating systems along the way. This is actually the first time that we're also introducing Intel architecture within the embedded control space. We have had a lot of other architectures there, but now we're taking the full step of also adding the x86 architecture within embedded control. So why do we do this?
Because Ericsson is a large company with a large diversity of products, and for us, providing a common software framework that can be reused across the different products is very important for getting scale in our development. By now adding Intel on the embedded side as well, we can scale all the investments we have made on the software side down to the embedded side too.
And if you take some of the more important things, like virtualization technology, DPDK, and acceleration, all the things we have been investing in for years on the server side, we can also leverage them in the embedded space. Of course, energy efficiency matters too: we often deploy our equipment in rough environments, so power efficiency is important for Ericsson, and that is a capability we will get with the Atom. So, to finalize: for us, really leveraging one architecture and providing one software stack, the benefit in shortening time to market and reusing software is very good. We are very happy to start using the Atom C2000 in our embedded switches.
Thank you very much.
Thank you so much. Appreciate it. Thank you, Mats. We know that architectural conversions are very thoughtful decisions, and I do want to thank Ericsson for making the decision to utilize the Atom C2000 and make that architectural conversion onto Intel architecture. So thank you.
Our innovation obviously extends beyond the processor to other critical system elements and technologies, and I want to tell you about some of the things we're announcing today. We're announcing new ingredients required for high density servers: Intel products that we're delivering, as well as technologies that we've invented and are delivering to our ecosystem. First, on the left-hand side there, we're very excited to be launching our new high density switch solution. This is the highest density solution on the planet.
72 ports of switching. It's called the FM5224; that really rolls right off the tongue, a great marketing name. The FM5224 is a switch solution purpose-built for the micro server segment, for high density compute. It supports up to 64 Atom C2000 compute node modules, and it obviously also supports Xeon.
It supports 1 gigabit or 2.5 gigabit Ethernet interfaces, with 10 gigabit or up to 40 gigabit uplinks. With this high port density, it enables 30% greater server density than the leading competitor's switch. And not only is it greater density, it also has half the latency of the leading competitor's switch. So it's a very compelling solution.
And we have switch systems available today, running on the open network platform, from Supermicro, NEC, and Quanta. So that's a product fundamentally required for distributed switching, in support of the very high density compute transformation the industry is going through with rack scale architecture. Not stopping at the switch alone, we looked at other elements of the platform and asked where else we need to invest in order to increase density. One area is the management solution. As you can well imagine, with many, many compute nodes inside a given rack (you can fit over a thousand C2000 compute nodes in a rack), the management of those nodes becomes critical. So what we have developed here is a shared management architecture.
It's a multi-node management controller, and with a single chip it supports up to 8 compute nodes. With that, you get a 75 percent reduction in footprint, power, and cost: a very clear opportunity to deliver higher density, while keeping all the power and system management of those nodes that you obviously need. And we've enabled ASPEED as the solutions provider, the product provider of this management chip. Then, not stopping at management, we looked at the memory.
Obviously, we need a higher density memory solution as well to support this high density compute environment. So we have invented a new memory connector that allows a doubling of the memory density: 2x the number of DIMMs within a fixed footprint. That connector design has now been given to our ecosystem partners, so they can make those memory DIMMs available to systems providers and end users. So: innovation on the switch side with our new high density Intel switch product, innovation in memory, and innovation in systems management. We make these investments, both in our own product line and in technologies we deliver to the industry and our ecosystem partners, because we want to make sure there is always a very rich and healthy ecosystem around Intel architecture.
We want to make it easy for our customers, the systems providers, to develop compelling solutions running on Intel architecture that meet the needs of their customers: the telco service providers, the cloud service providers, and enterprise IT. With our C2000, we have over 50 systems that will be launching shortly, all of them using the C2000. That is more than twice the number of systems we had on the prior generation, Centerton, just 9 months ago. I want to say a big thanks to these systems providers. Many of them are here today, set up around the area, and they're very eager to talk to you and answer any questions you might have about their solutions.
As I said, the C2000 is targeted across all lightweight compute solutions, whether micro servers, cold storage, or entry networking. In the micro server space, we have 11 new systems developed by 9 different systems providers. In entry networking, we have 27 new designs, 10 of which are architectural conversions: conversions off of proprietary architectures onto Intel architecture. And in cold storage, we have 11 new systems targeted at that emerging segment of the market.
And the OEMs have innovated well beyond these three segments. Supermicro has launched a small business server utilizing the C2000, and Tyan has launched a small business storage solution leveraging the C2000. So lots and lots of innovation: a breadth of system solutions across micro servers, entry networking, and cold storage. And we have a preeminent leader in web hosting services that we're very happy could be with us today: OVH. With us is their COO, Germain Masse.
Germain has been with OVH for 11 years now, and he's led many of their innovations, particularly in the areas of energy efficient data centers and security. Germain is going to tell us a little bit about the Atom C2000 and how they're going to use it in their environment. Thank you, Diane.
Good morning. Imagine you need servers, physical or virtual servers. Imagine you need to deploy a big data infrastructure, or you need to deploy your own cloud, your own hosted cloud. That preferably takes physical machines. Now imagine you could have hundreds of these servers ready to use, with the operating systems, with the applications, and you could have them in a really, really short time, in a few minutes.
So that's what OVH does. We provide physical servers in the same time it takes others to provide virtual machines. To do that, we have reinvented the way infrastructure should be hosted. We build our own data centers. We design and assemble all our servers.
In one word, we have industrialized the provisioning of physical infrastructure. You probably don't know OVH well, because since our beginning in 1999 we were only present in Europe. But there we became a leading web hosting company, with 700,000 customers and 150,000 servers in our 12 data centers. OVH is still a 100% privately owned company. As far as I know, we are the only infrastructure provider of this size to be private.
It permits us to change the rules. It permits us to use unseen technologies. Let me give you some examples. First of all, we are energy efficient. We have done liquid cooling inside our servers since 2003, a long time before green IT became a trend.
Why do we do liquid cooling? Simply to cut our energy bill in half. Let me say that at the beginning, a lot of people looked at us strangely. What? You put water and electricity in the same server?
Yeah. Fast forward 10 years, and they are singing a really different tune. A lot of them would like to do the same. At OVH, we love innovation. We are an R&D driven company.
So we like to do things ourselves. We design our own network. We run our own optical fiber across Europe and across America. When Intel provided 10 gig network controllers, we immediately redesigned part of our network to be able to provide this really good technology to our customers. Another example: when SSD drives arrived, we immediately knew they would completely change the business of storage.
And once again, we made this technology available to our customers. So when we heard of Avoton, we knew it should be a great CPU. We did some benchmarks, and in the case of multi-threaded applications like web hosting, we noticed a performance increase of 300%. Add to that its great memory capacity and its really low energy footprint.
The Atom C2000 is for sure the perfect CPU for our budget servers. But not only that: we also plan to use the new Atom in some of our specialized servers, like storage servers, Memcached servers, and probably lots more. Long live the Atom C2000. Thank you.
Thank you. Thank you.
Thank you so much. Thank you, Germain. Very nice. So in addition to OVH, there are many other cloud service providers that are investigating and testing the C2000 today. Baidu: we have many R&D projects underway with them today, including the opportunity to augment their Xeon-based storage tier with a low-end cold storage tier running on the Atom C2000.
And 1&1, a large hoster addressing worldwide demand and a long-time partner of Intel's, is adopting the C2000 for their entry-level dedicated hosting solution. They selected the Atom C2000 because it meets the good-enough compute level for low-end hosting services, while delivering a very low power profile and supporting all the data center feature requirements, such as 64-bit and ECC. So those are just a couple more examples of C2000 deployments with cloud-based service providers.
So these data center deployments by OVH and other cloud service providers highlight the need to continue to innovate beyond the system and into the rack. We have been investing in rack scale innovation for some time now, and we have some announcements that we'd like to make today. If you think about delivering against this vision of re-architecting the data center, a new rack level architecture of pooled resources (pooled compute, pooled IO, pooled storage) to deliver optimal efficiency, you need to innovate at every level. We are innovating at every level, and the level we're going to talk about today is the interconnect. So today, we are announcing a new optical fiber solution and a new connector.
This is optimized for Intel's silicon photonics solution, which we've talked about at prior events. Optical interconnect today is built with esoteric materials; we've taken that optical interconnect and moved it into silicon. So you get all the wonderful benefits of a silicon solution: high density, low power, small form factor, and much lower cost than existing optical solutions. That Intel silicon photonics solution requires a new level of cabling, and the technology we've developed here enables very high density compute both within the rack and over long reach. From top of rack, the fiber enables up to 300 meters, which is 3 times the maximum distance that existing optical fiber solutions support.
And the connector supports 64 fibers, each of them at 25 gigabits per second. That gives you an aggregate bandwidth of 1.6 terabits per second per connector, which is huge. If you wanted to, you could download the entire Library of Congress in just 30 minutes with that kind of bandwidth. So tremendous bandwidth, if you chose. And the connector is also extremely simple.
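The aggregate-bandwidth figure is easy to check back-of-the-envelope. Note that the 30-minute claim then implies a data set of roughly 360 terabytes; that implied size is our arithmetic, not a published Library of Congress figure.

```python
# Back-of-the-envelope check of the stated MXC connector bandwidth.
fibers = 64
gbps_per_fiber = 25                       # gigabits per second per fiber

aggregate_gbps = fibers * gbps_per_fiber  # 1,600 Gb/s
aggregate_tbps = aggregate_gbps / 1000    # 1.6 Tb/s per connector

# Data moved through one connector in 30 minutes, in terabytes:
seconds = 30 * 60
terabytes = aggregate_gbps * 1e9 * seconds / 8 / 1e12  # bits -> bytes -> TB

print(aggregate_tbps)  # 1.6
print(terabytes)       # 360.0
```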
The connector is just 7 parts, compared to existing fiber optic solutions that have up to 30 parts. That simplification does two things: it obviously drives down the cost, and it also drives up reliability, making it a very robust data center solution. This is the ClearCurve fiber and the MXC connector, jointly developed with Corning, and we're very happy to be launching this technology today, fundamentally enabling Intel silicon photonics.
And as you all know, moving photons across a thin optical fiber instead of moving electrons across copper delivers lots of benefits: higher bandwidth and longer reach. If you think about it, as we continue to grow the compute density of the rack, if you want to keep connecting within the rack using copper above 10 gig, you're going to be using something like this: obviously a bit prohibitive, and extremely heavy. The move to silicon photonics, both within the rack and from top of rack to inter-row, completely transforms the interconnect. It allows much higher density, much higher bandwidth, and a single connector type across the data center: no more copper in the rack and fiber rack to rack. One single solution also drives efficiency and optimization in running a data center.
I'll put this down now. Excuse me, sorry about that. Okay. So that's what we're announcing today. And now I am thrilled to show you the first ever live demonstration of the rack scale architecture.
This rack scale architecture includes the Atom C2000 micro servers. It includes the new high density 72-port switch solution, the FM5224. It includes the new ClearCurve fiber and MXC connector. It includes all of the things we've been talking about, and it is going to run live. But rather than me doing this, I'll hand over: this innovation happened inside Intel under Jason Waxman, Vice President and General Manager of our Cloud Platform Group. So I'm going to ask Jason to come up, tell you what you're going to see, and show you the demo.
Thanks, Diane.
Thank you.
Good to see you. Yes.
Hey, so when we announced the rack scale architecture just a couple of weeks ago, one of the questions people raised was: can you really put all these things together? The vision of pooled resources is very compelling for that orchestration solution we're talking about, but to make it work you have to bring all of those technologies and pieces together. Well, the team loves a challenge, and they knew we were doing this event today, and they said it would be great if we could do the first live demonstration. So what I'd like to do now is unveil the prototype rack that we have here. It will pop up on the screen, and we'll walk through what you're going to see in the demo.
So the first thing you're going to see in the system is that there are 2 trays of C2000 micro servers. Just as an example, this is what a system looks like. We showed this previously; this was the prototype system. These are all just static.
But even better, while that prototype system held only 30 C2000 cards, the trays we're showing here have 42 C2000 micro servers in a 2U form factor, each with 2.5 gigabit per second Ethernet. In fact, this is the actual card, and there are 42 of those. That's the real one.
You can tell it's a lot more compelling than the pieces of plastic we had in the prototype. We have 2.5 gigabit coming from each of those. And the other thing you'll notice is that green cable right there: that is the silicon photonics link actually connecting the 2 trays of micro servers together. Now, as we pointed out, having resource pools means that sometimes one size doesn't fit all.
And while we've got the Atom micro servers, there are people that want Xeons. And that's the next thing that you see here: a 2U tray of 4 dual socket Xeons. These are the E5-2600 series Xeons. So you can have your high performance workloads running on those and your lighter and more power efficient workloads running on the C2000 micro servers. Those are connected to the switch, which is down, I think, kind of out of view here.
Right below the Xeon servers is a JBOD. And one of the things that we're going to be doing is highlighting how you can, in a resource pooled environment, map that JBOD to those different servers, and I'll show a demo on that in just a second. But one of the things that connects the 2 trays of Atom C2000 micro servers is the silicon photonics module. Yes.
So we have right here the silicon photonics module. So again, this is the first live demonstration of silicon photonics. You can see the module is right here and it's connected by a fiber jumper out to the MXC connector that we were just talking about. So this is all running live.
Great. So those are all the components, and what we want to show is all the functionality. This is just a mock up of the actual rack. I'll go click on that and you can see we've got, to the left, the conceptual Xeon servers and then, to the right, the Atom C2000 micro servers. And in this type of environment, the ideal is that you can go provision your own sets of resources, to be able to map those and then run those workloads very seamlessly across it. So what I'll do here is I'll pick, say, these 2 Xeon servers, and then I'll map these 3 drives to those.
And then I'll pick a couple of the Atom micro servers, and I'll map, say, 3 drives to those. And right now what we have in the rack is a JBOD, but the functionality that we're creating is something that we like to call a PBOD, which is a pooled bunch of disks. And the innovation here is we want to be able to create that JBOD-like pool, but actually do it over standard Ethernet. So we're going to have a protocol that we're developing that's very innovative and will allow you to have the simplicity of JBODs, but doing it over standard Ethernet. Now what I will do is I will take the workloads, and obviously I'm going to take the bigger workload and run that on the Xeon servers, and I'll take the lighter workload and run that on the Atom micro servers, and you can see all those virtual silicon photonic pulses now connecting all of those.
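The mapping Jason walks through, carving drives out of a shared pool and attaching them to individual servers, can be sketched as a toy allocator. Everything here (the class name, the drive IDs, the first-fit policy) is a hypothetical illustration of the pooled-disk idea, not Intel's actual Ethernet protocol, which is not detailed in this talk.

```python
class DiskPool:
    """Toy model of a 'pooled bunch of disks' (PBOD): a shared shelf of
    drives that can be mapped to any server over the fabric, instead of
    being hard-wired to one box."""

    def __init__(self, drive_ids):
        self.free = list(drive_ids)   # drives not yet mapped to any server
        self.mapped = {}              # server name -> list of drive ids

    def attach(self, server, count):
        """Map `count` free drives to `server`; returns the drive ids."""
        if count > len(self.free):
            raise ValueError("not enough free drives in the pool")
        drives = [self.free.pop(0) for _ in range(count)]
        self.mapped.setdefault(server, []).extend(drives)
        return drives

    def detach(self, server):
        """Return all of a server's drives to the shared pool."""
        self.free.extend(self.mapped.pop(server, []))


# Re-creating the demo: 3 drives to the Xeons, 3 to the Atoms.
pool = DiskPool([f"disk{i}" for i in range(8)])
xeon_drives = pool.attach("xeon-1", 3)   # heavier workload
atom_drives = pool.attach("atom-1", 3)   # lighter workload
```

The point of the sketch is only that the server-to-drive binding lives in software (here, a dictionary) rather than in cabling, which is what lets the demo remap the same JBOD to different servers on the fly.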
And actually what that's done is it's spawned the workload. And I will show you here, you can see this is the traffic that's running over that one link. Now that one link is just being driven by 1 C2000 micro server right now. So obviously it's not driving a full 100 gigabit, but you can see all the parts are coming together.
Right. Absolutely. Fabulous. Hey, thanks, Betsy. Thank you, Diane.
Thank you. Okay. It's pretty exciting. Very exciting. So, the transformation of the rack: as you've seen here, we're making headway in this massive transformation.
The rack is the fundamental structure in the data center, and that transformation will continue. Back in January, we contributed to Facebook's Open Compute Project. In April, we joined Alibaba, Tencent and Baidu in their rack level innovation project, which is code named Scorpio. And now taking that innovation to the next level, further optimizing for performance at the lowest possible cost of operation, is Microsoft. And Microsoft is a long time industry leader and innovator.
Microsoft today serves over a billion customers with more than 200 different cloud based services. And you know these services off the top of your head: Azure, Office 365, Xbox. So I'm really excited to have with us Chris Phillips. Chris is from the Windows Server and System Center Group, and Chris is responsible for delivering the Windows Server OS and the underlying infrastructure both to the Microsoft cloud services as well as their external customers. So Chris, thanks a bunch for coming.
Come on up. Thank you, sir. Here you go. Hi.
Thanks, Intel, for inviting me today. When I started with the company, I was reflecting back. I got to Microsoft and we had a service. And for some of you who are old enough, it ran on a network, a cloud that in those days was X.25, and you dialed up and you got on this cloud to see different services: mail, as well as finding out about news. And literally over the last 20 years, and especially in those days, we were experimenting as we transitioned that cloud onto the Internet very quickly. And it's interesting, I was thinking on this slide, there's only one of these services.
This is just a subset, obviously, of what we run; MSN was that service in those days. And success in those days you measured in hundreds of thousands and low millions, maybe even tens of millions of subscribers. And you fast forward 19 years, and now everything is done in billions or beyond. And that's the scale that we operate at. As we've mentioned publicly, we have over a million servers under management.
And what this really means is that when I started in this business, we would put everything in a box, and we would design compute, networking, and storage within one box, and we called it a server; quite frankly, it was a PC we turned on its side and added error correcting memory. Over the years, this has evolved. Obviously, today, you can see the stuff that Intel and their partners are showing. It's massively changed. But fundamentally, the scale at which we used to think about it and how we design has not changed in 30 years. What we're excited about is the disaggregation of these pieces and reinventing a new architecture.
We're really blessed. We're probably the only company in the world that's blessed to be able to operate at massive cloud scale, and for me, who builds the infrastructure and the operating system that runs it, I have this innovation cycle going where I can go to all these cloud properties, especially Windows Azure, who's doing infrastructure as a service and platform as a service, and I can innovate with my brothers and sisters there and bring it into Windows Server and provide it to other service providers, as well as to enterprise customers. And I can do innovation which is really appropriate for enterprise customers and private cloud, and I can bring that to Azure so that people can enjoy it publicly around the world. It's a fantastic feedback loop. And some of the at-scale services we run, we run privately.
Like I actually run a service internally just for my test organization, and we are currently running 120,000 VMs a day, just servicing testers and doing automation to test Windows Server. That would be the equivalent of one of the larger hosters or service providers in the world. And what we find is that the data center is now the computer. I mean, that's how we really think about it. And we used to think about compute, network, and storage as elements in a server.
Now we think of racks as the elements in the data center. We call them, euphemistically, stamps. We buy in multiples of these things. Azure thinks of compute as 20 racks is a stamp, and that's their unit of measure within a data center. So this is the scale that we operate at, and this is why this work matters. We've had a long standing relationship with Intel.
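The "stamp" as a unit of measure can be made concrete with a little capacity arithmetic. The 20-racks-per-stamp figure is from the talk; the servers-per-rack density below is a made-up illustrative number, not a Microsoft figure.

```python
RACKS_PER_STAMP = 20     # Azure's unit of measure, per the talk
SERVERS_PER_RACK = 48    # hypothetical density; varies by generation and SKU

def stamps_needed(server_count):
    """Whole stamps required to host `server_count` servers
    (ceiling division: a partially filled stamp still counts)."""
    servers_per_stamp = RACKS_PER_STAMP * SERVERS_PER_RACK
    return -(-server_count // servers_per_stamp)

# At the 'over a million servers' scale mentioned above:
print(stamps_needed(1_000_000))   # -> 1042 stamps at this assumed density
```

Buying and provisioning in stamp-sized multiples, rather than server by server, is exactly the shift from "the server is the computer" to "the data center is the computer."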
I think that's well known in the industry. We've done things with Intel engineers like bringing them into our data centers, where we can show in a single location 4 generations of data centers for them to really get the gestalt of the challenges we're dealing with and areas where we can do innovation together. It's been a great partnership and I really appreciate it, and we're really blessed and honored to have that. Specifically, a number of things we did in the recent version of Windows Server 2012: some of you are probably aware that we just launched R2. We RTMed it a couple of weeks ago, and it will be GAed later on this fall to the public.
And in 2012, we did a couple of things. We disaggregated networking and storage. We started down that path. We provided SDN in the box, and we've demonstrated publicly in the last few years doing a million IOPS using standard off the shelf parts, and using RDMA technology to take file servers, a remote file server, and give you the performance of a local disk drive. And we do that because we use all kinds of technology from Intel and others in the industry to do RDMA, 10 gig, and run SMB, our protocol for file servers.
And we've managed to put a number of things on top of that besides scale out file servers. We've done things like putting SQL Server behind it, which traditionally needed direct attached storage or a SAN. And we're doing this at price points that are fractions of what you would pay in a traditional SAN architecture. In R2, you'll find additional performance improvements where we get another bump of anywhere from 15% to over 30%, depending on the read/write combination. We're going to continue to push on that, and that's why we're also really excited about the photonics work that Intel just demonstrated.
I mean, clearly, all the wires: as we design our racks for internal use, and in the work we do in the industry with partners, there's a real need to miniaturize and gain greater efficiency and lower costs in simple physical things like cabling. You just don't think about it until you're looking at a data center with 200,000 servers in it, or 100,000 servers in it, but you realize, wow, wires really are hard. They block airflow and they do all these evil things to you, and humans touch them and they screw them up. And so we're excited to be engaged once again with Intel deeply on architecture design, and we look forward to the second half of this decade and reimagining the server.
Thank you. Thank you. Thank you. Very nice. Thank you so much, Chris.
Thanks for that. So today's announcements build upon a wide range of assets that we have at Intel. We obviously have leadership in process technology, almost 2 generations ahead of the industry. This gives us tremendous benefits in energy efficient computing: the highest performing transistors at the lowest power levels. It's what's contributed to the power levels we're able to achieve with the high integration Atom C2000.
We deliver architectural consistency. We hear from the large cloud service providers here today as well as others, as well as from Ericsson, that the means to getting the lowest possible total cost of operation is to have consistency in the data center. So consistency means a single operating system, a single management stack, single developer tools. And so we provide that architectural consistency from the Atom C2000 all the way up to our high end Xeon and Xeon Phi processors. We also provide software compatibility.
There are millions of applications running in data centers around the world built on the x86 architecture, and a tremendous ecosystem around the x86 architecture. And, as I demonstrated today, we have moved from being strictly a general purpose compute solution provider to the SoC development methodology, which means we can turn out many proliferations and derivatives off a common product base to deliver very targeted, optimized processing solutions: high integration targeted at a given workload. So we have the ability to span that entire workload space, across the range of compute demands and the range of IO demands, all running on Intel architecture. And I know you know us as a CPU company, but our technology obviously spans the entire data center. We showed that today with silicon photonics, with our high density switch silicon, with our crypto accelerator solution, and we invest substantially to enable our customers to develop compelling solutions on Intel architecture.
So we continue to invest in the IA, Intel Architecture, ecosystem with things we talked about today such as higher density memory connectors and server management solutions to provide greater density and lower cost. So these are investments that we uniquely make for the industry, and we will continue to make them as we lead the industry through this transformation of the data center to rack scale architecture. So to conclude: the Atom C2000, my little Atom C2000 here, is now in production. This is our 2nd generation Atom system on a chip product. It is not just one product, but 13 different products to support the range of workloads across entry networking, cold storage and micro servers.
It is the first Intel product built on Silvermont, the new Silvermont Atom core that allows us to continue to deliver outstanding performance at an ever lower operating power level. And with over 50 systems in design across a wide range of systems providers, we're enabling new levels of data center efficiency and density. And we will continue to innovate beyond the processor, as you've seen. We will continue to innovate across server, storage and network, from the processor level to the system level to the rack level, in order to support the industry's transformation to hyperscale deployments, millions of servers running, and to continue this build out of cloud based services.