Ladies and gentlemen, please welcome Diane Bryant, Intel Vice President and General Manager, Data Center and Connected Systems Group.
Thank you. Thank you so much for joining us on what is a very big day for us, the launch of the Intel Xeon Processor E5 Family. The E5 is truly the heart of the data center, and we mean that very explicitly. When we say data center, we mean all data centers, whether it is the public cloud service provider data centers, the foundation for fueling those services, whether it's enterprise IT data centers, telco service provider data centers, or the home of large high-performance computing clusters. All of those data centers, all of the infrastructure inside of those data centers is what we're targeting with the E5 Family. If you think about it, we are in a period of incredible growth. Today, there are 2.2 billion internet users worldwide. That's projected, as you've probably heard, to go to 3.1 billion users by 2015.
You think about the growth in the total number of devices; the number of devices per user continues to grow. As we have this massive innovation on the client side, we equally have innovation on the data center side with technologies such as cloud computing. Cloud computing, in and of itself, allows new services to be rapidly invented and rapidly deployed to all of those devices. You can argue the cause and effect either way: is the innovation in the cloud and in devices enabling all these new usages, or are these new usages driving the innovation in the cloud and in the devices? Either way, the result is an ever-increasing demand on the data center infrastructure. The number of innovations in new usages is quite impressive.
That pace of innovation continues in new usages, new services, new capabilities being delivered, both to the consumer side as well as to business. You think about it, right? We used to have a bank in every town, then we had an ATM machine on every corner, and now we have a banking solution on our smartphone in our pocket. Just a massive breadth of new usages and new capabilities. Another great industry that has shown remarkable innovation, completely transforming itself thanks to technology innovation, is the automotive industry. You may have heard last week Intel announced at Mobile World Congress a $100 million venture capital fund to fuel the innovation around the automotive industry. Automobiles are becoming yet another mobile device consuming services on the go.
Whether these services are for consumers, whether these services are for business, whether it's for entertainment, for convenience, for improved productivity, they all require a connection back to the data center. They all drive and fuel data center growth. On the IT side of the house, enterprise IT is transforming itself once again. I just last month finished a four-year stint as Intel CIO, and just in those four years that I was CIO, I saw a remarkable change in the role that IT plays. IT transforms itself every six to eight years, so this is nothing new. What I saw over those four years in IT is a dramatic shift in the role that IT plays. The old days of IT being a support organization, delivering services to the business to a standard service level agreement, homogeneous services independent of what the line of business is, are gone.
IT is the organization that keeps the network up and closes the books each quarter. Those days are gone. Today, IT is truly inextricably linked with the business. IT delivers the business solutions. IT delivers the scale to the business to allow top-line growth, and IT delivers the efficiency and the automation and the speed to deliver bottom-line growth to the company. The pace of the business is continuing to increase worldwide, 24/7, real-time decision making, all automated, all online. It's very clear that businesses today rely on IT more than ever, again, driving the demands on the data center, driving demands on IT infrastructure. All this translates to scale. IT must scale. Whether you're talking about the public cloud service providers or business, core business enterprise IT or telco or high-performance computing, the scientific side of the house, they all require the same capabilities.
They all require the same scale. They all require the same innovation. We have to continue to deliver greater responsiveness, pure performance, being there, responsive on-demand capabilities. Energy efficiency: power is and will continue to be a constraint in the build-out, so a focus on energy efficiency is demanded across the entire spectrum of data centers. Security: we're living in an environment where the security threats just continue to rise. Having the confidence of end-to-end security and knowing that your corporation's data and the personally identifiable information of your employees and consumers and customers is safe is key. Self-service: IT has got to be intuitive, easy to use, available, a complete transformation in the way those services are delivered, whether it's consumer or business. Of course, it's all about scale in a balanced fashion across the data center. It's scale across servers, as well as storage and network.
That balanced environment, that balanced scale of continuing to deliver more and more to our customers. I can't think of a better example of a technology innovator that has transformed and continues to transform the experience you feel when you're in your car than BMW. I'd like to invite up on stage Mario Müller, who's Vice President of IT Infrastructure, both the design and the operation side. Mario, please come up and tell us about how you're using the E5.
Thank you very much, Diane.
Thank you.
Good morning, ladies and gentlemen. A very warm welcome also from my side. The BMW Group is one of the most successful manufacturers of automobiles and motorcycles in the world, with its BMW, Mini, Husqvarna Motorcycles, and Rolls-Royce brands. As a global company, the BMW Group operates 25 manufacturing and assembly facilities in 14 countries. The BMW Group concluded 2011 with its best sales result ever. Worldwide sales of all our brands rose by 14.2% and reached a total of nearly 1.67 million vehicles. The company strengthens, therefore, its position as a leading provider of premium vehicles. The success of the BMW Group has always been built on long-term thinking and responsible action. The company has, therefore, established ecological and social sustainability throughout the value chain, comprehensive product responsibility, and a clear commitment to conserving resources as an integral part of our strategy.
As a result of its efforts, the BMW Group has been ranked industry leader in the Dow Jones Sustainability Indexes for the last seven years. Coming to our IT, the BMW Group IT is 99.9x, I do not know the X exactly, based on Intel machines in our data center, and the same on 99.9xx on our client systems. We have roughly 100,000 clients there in our organization. We have 2,700 IT employees in the company, many projects there, 50,000 mobile phones, 3,800 smartphones right now, and the numbers are growing and growing and growing. We have, of course, huge HPC clusters in our company. All of the development there for computer-aided development, design, crash simulation tests, and many things more rely on very, very strong HPC systems. We are lucky to use now the E5, especially the E5-2660 in our company.
That gives us, of course, the performance we need, the scalability we need, the I/O throughput, and most important also, the security. Those are the things that we need in our company. Besides that, machine-to-machine communication. Our vehicles are connected to our cloud, our internal cloud that we have there at the BMW Group. We offer many services to you wherever you go with your car. There is an internet connection available in the car. There are tele-services available that give you, if you have any issue with the vehicle, direct service functionality. We have BMW tracking, of course. We have connected navigation, intelligent drive. That means there is real-time traffic information also coming through the car. Everything is handled in our own data center.
We have there a connection right now with roughly 1 million vehicles, with 1 million requests a day, and with about 600 MB of data volume. That will grow heavily. Soon, we will have more than 10 million vehicles connected, and that will lead us to 1 TB of data volume a day. Therefore, Diane, it's good that we get the E5 now into our data center.
Thank you.
I have to say thank you very much for your time.
We're very happy to help you with that problem.
Thank you, Diane.
Thank you, Mario.
We are continuing on the path of delivering to our cloud vision for 2015. If you think about the cloud and why it is such a big deal and why we all spend so much time talking about it, it is one of those examples of a true win-win. The users of the cloud, whether it's consumers or business, get the wonderful advantage of on-demand instant availability. On the IT side of the house, the cloud delivers reduced total cost of ownership through driving up utilization with virtualization, through reducing the OpEx cost, through automation. You drive down the cost of running IT. It is a big win-win. There are still limiters today that get in the way of cloud adoption, namely interoperability. None of us want to get locked back into a proprietary solution stack. The second issue is obviously increased security concerns.
In a cloud environment, you're operating in a multi-tenant environment. By definition, your security risk level goes up. There are also regulatory requirements. We need to attest, under Sarbanes-Oxley, that all of our applications are running in a controlled environment. Internal audit likes to walk in and see an application running on a server, sitting in a data center, and see it all very controlled and contained. When you tell internal audit that the app is floating around in the cloud somewhere, it stresses them out a bit. All these issues are limiters today to broader and broader deployment in a cloud environment. That is what the Cloud 2015 vision is all about, eliminating those limiters, addressing those limiters through standards.
For instance, a big focus is on federated clouds, so standards that will allow you the confidence of being able to move your application and your data from one cloud to another, from public to private, without concern of data integration issues or data security issues. In automation, continuing to drive up the automation, truly having the full solution automated, being able to burst not just within one private cloud or one public cloud, but being able to burst between clouds in an automated fashion, taking advantage of that burst capacity. A client-aware cloud is also important: a cloud solution that is aware of the device that the service is being delivered to. What's the security level of the device, and therefore what data should I push to the device versus hold back in the data center? What's the form factor, and how should I display the data?
What's the battery capacity or the compute capacity? How much of the cloud service can be done on the device locally, and how much should be done in the data center? These are the three big focus areas for us in driving a standards-based cloud environment, making it easier, more secure, and more flexible to deploy your solutions in a cloud. Of course, you don't do something like this alone. As we've learned at Intel over many, many years, driving standards across the industry requires strong partnership with the industry at large. There are many industry organizations and alliances that have been formed around this area. We are a technical advisor to the Open Data Center Alliance. We are a founding member of the Open Compute Project, and we are a governing member of the Open Virtualization Alliance.
The goal here is to accelerate standards to address interoperability, security, and management for faster adoption. These alliances are focused on the end user requirements, and through the definition of those usage models, we're able to back them out into technology solutions. You can see we have over 50 technology partners in the Intel Cloud Builders partnership that are developing those solutions consistent with the usage models that the greater end user population demands. In this world of ever-increasing demand on the data center and on the infrastructure at large, we believe that the Intel Xeon Processor E5 Family perfectly addresses these needs and these challenges, specifically not just the data center side, but specifically in the cloud build-out. Obviously, one of the core requirements is pure performance. I think Intel has an excellent track record of continuing to deliver greater and greater performance levels to the industry.
Over just the past decade, we have delivered over 100x in raw performance. We've done very well on that, but we acknowledge that it isn't all about performance. Along with performance, we have to address the need to keep the platform balanced. We need to address I/O, memory, and compute holistically. We need to address server, storage, and network holistically. We need to continue to focus on energy efficiency, as we have done year after year after year, thanks in a big part to Moore's law and the ever-shrinking transistor sizes giving us greater performance at lower power. Of course, the security challenges, as I said. We have an obligation to continue to embed greater and greater security into our hardware platforms. That's what we have done with the introduction of the E5 Processor Family. Generation over generation, we're delivering 80% performance gains.
We are delivering some remarkable architectural innovations in I/O. The first to bring PCI Express 3.0 to market, the first to integrate I/O into the microprocessor. Dramatic, dramatic gains in I/O performance through those innovations. We continue to build upon our security features of Trusted Execution Technology and hardware encryption and decryption. We continue to deliver the best performance per watt. All this goodness is demonstrated by our partners. We want to say thank you to all of them for delivering some amazing world records across a very wide range of our standard benchmarks that you all know and love, demonstrating world record performance, world record energy efficiency, performance across all workloads, whether it's standard enterprise workloads or web-based workloads or high-performance computing workloads. Across the board, you see a total of 15 world records demonstrated on the E5 Processor Family.
A big thank you to our partners for taking our technology and demonstrating the results at a solution level. Let's talk about the really complex, tough scientific and engineering problems and the applications that solve them. With the E5 Processor Family, we are launching new instructions called Advanced Vector Extensions, or AVX for short. These instructions double the number of floating point operations per clock. A significant gain in floating point execution, targeting those really tough high computational workloads, whether they're technical computing, whether they're medical imaging, media processing. The value of these instructions and the results that they deliver can be clearly demonstrated with the fact that we are already in 10 of the top 500 supercomputers. Even though we're launching the product today, last fall, we were already noted in 10 of the top 500 thanks to some early production unit shipments.
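To make the "double the floating point operations per clock" claim concrete, here is a back-of-envelope sketch in Python. The vector widths are the real SSE and AVX register sizes; the core count and clock speed are assumed round numbers for an E5-2660-class part, not official specifications.

```python
# Back-of-envelope peak double-precision FLOPS, before and after AVX.
# Core count and frequency below are illustrative assumptions.

REGISTER_BITS_SSE = 128   # SSE: 128-bit vector registers
REGISTER_BITS_AVX = 256   # AVX: 256-bit vector registers
BITS_PER_DOUBLE = 64

doubles_per_op_sse = REGISTER_BITS_SSE // BITS_PER_DOUBLE  # 2 doubles per instruction
doubles_per_op_avx = REGISTER_BITS_AVX // BITS_PER_DOUBLE  # 4 doubles per instruction

# With separate add and multiply units each issuing every cycle,
# FLOPs per clock per core doubles when the vector width doubles.
flops_per_clock_sse = 2 * doubles_per_op_sse  # 4
flops_per_clock_avx = 2 * doubles_per_op_avx  # 8

# Hypothetical 8-core part at 2.2 GHz:
cores, ghz = 8, 2.2
peak_gflops = cores * ghz * flops_per_clock_avx
print(f"FLOPs per clock per core: {flops_per_clock_sse} -> {flops_per_clock_avx}")
print(f"Illustrative peak: {peak_gflops:.1f} GFLOPS double precision")
```

The doubling comes purely from the wider registers; the same loop, recompiled for AVX, processes twice as many doubles per instruction.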
Really demonstrating right out of the chute the level of outstanding performance we are demonstrating in high-performance computing and even at the very highest supercomputer levels. We're very proud of those results. You can see a couple of quotes here of some of the usages, the real-life usages, whether it's VisionSense delivering better video processing, so you have less intrusive surgery, always a good thing. On the more whimsical side of the house, Face.com using the AVX instructions to allow you to even more rapidly identify your friends online. Something that I particularly probably will not be utilizing, but there's lots of young people out there that I'm sure will value this application. With that, I want to say when you think about animation and content creation and rendering and enormous farms of high-performance computing, it's hard not to think of DreamWorks.
I'm very happy to have Derek Chan here from DreamWorks to talk to you about his experience with the E5 Processor. Derek actually runs all of operations for DreamWorks, including IT, but his responsibility is very broad. I'd like to invite you up. Derek, thank you so much. There you go.
Thank you, Diane. Thank you, everybody. Welcome. It's my pleasure to be here this morning. As Head of Digital Operations for DreamWorks Animation, I get the pleasure of leading our team in delivering the technology for making our animated films. While we care about the overall compute infrastructure, as we've talked about, the balance between server, storage, and networking, really one of the significant challenges that we have is delivering enough processing power for our artists to take advantage of, to be able to continue to raise the bar. You must be sitting there asking yourself, why is a cartoon guy here talking to me about processing power? What does he care about processing power? It actually takes a ton of technology to make an animated feature film. It takes hundreds of artists working for many, many years. It takes over 100 TB of data.
We churn over a TB of data a day. We use thousands of servers. We use hundreds of workstations, all to deliver memorable characters. If I think about our latest film, Madagascar 3, we're going to use over 60 million CPU hours to process that film. On a nightly basis, we'll peak out at over 15,000 cores, spread over many, many data centers, some of which are in the cloud and being pushed outside of our walls. For us, our partnership with Intel is critical to our success. Not only are they working with our teams to optimize our current generation of technology, our renderer, but they are also helping us deliver a new generation of technology that will literally help us transform how we make movies.
As Diane mentioned, there are definitely advances that are coming, and we're seeing it with the E5 platform. If I take our existing renderer and move it onto the E5 platform, we're seeing an over 35% performance gain from that platform family. If I take the AVX 256-bit instruction set and use that with our advanced shading algorithms, we're seeing an over 40% improvement in those technologies. Huge and fantastic results for us that we're very proud of. If I look at all of the things that we've done with Intel and the advances that we're making, we're so excited about the E5 platform and where it's going. It is truly the heart of our data center. As we approach our upcoming film, Madagascar 3: Europe's Most Wanted, we are seeing a ton of opportunity.
It's the first film that gets the advantage of those advancements. It's breakthrough technology. It's breakthrough entertainment. One of the things I wanted to do was to bring you something: we can talk about advancements, but I'd rather show you what the outcome is. What I brought for you today is a never-before-seen clip of what we call breakthrough technology. Why don't we go ahead and roll the clip. Thank you all, and thank you, Diane.
Thank you. That's wonderful. Can't wait for June. As I mentioned earlier, we have had some remarkable innovations on the I/O side in the E5 processor family. I/O performance is critical as we've continued to grow the number of cores per processor and as we continue to increase the performance of each of those cores. In addition to the move to 10 Gigabit Ethernet, the I/O needed a big revamp. Rather than me stand here and do that, although I do like to talk about ones and zeros moving around, we have with us the gentleman who led the E5 engineering team from start to finish, Naveen Narayan. I'd like to give him the opportunity to explain to you what he actually did. Naveen, here you go.
Thank you, Diane.
You bet.
Thank you, Diane. Good morning. I'm really, really excited to show you the breakthrough innovations that we have done in I/O. This simple diagram shows how data flows through the system. The data starts at the network adapter and then flows through a discrete component on the board called the I/O hub in the historical system. From there, it flows to the processor. From the processor, it flows to memory and then back to cache and works all the way back to the network adapter. This is how a previous generation system worked. What we have done is we have done something called Intel Integrated I/O, which is a suite of features that offer dramatic improvements in the I/O. First, we have taken the I/O hub and integrated that into the processor.
With this integration, we reduce the latency of the data traffic by 30% and get the data where it needs to go faster than ever before. As we integrated the I/O, we brought critical storage features, such as non-transparent bridging, hardware RAID support, and asynchronous DRAM refresh, into the processor to improve both the reliability and the performance across the data center. We not only integrated the I/O, we also increased the performance of the I/O system. Intel is proud to deliver the first processor to support PCI Express 3.0. PCI Express 3.0 is the latest spec from the PCI-SIG, and it doubles the I/O bandwidth per port by speeding up the lanes and making architectural improvements over PCI Express 2.0.
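The "doubles the I/O bandwidth per port" claim follows from simple arithmetic on the published line rates and encodings, sketched here in Python: PCI Express 2.0 signals at 5 GT/s with 8b/10b coding, while 3.0 signals at 8 GT/s with the far more efficient 128b/130b coding.

```python
# Why PCI Express 3.0 roughly doubles per-lane bandwidth even though the
# raw signaling rate only goes from 5 GT/s to 8 GT/s: the line-coding
# overhead drops from 8b/10b (20%) to 128b/130b (~1.5%).

gen2_gts, gen3_gts = 5.0, 8.0    # raw transfer rates, GT/s per lane
gen2_eff = 8 / 10                # 8b/10b: 8 data bits per 10 on the wire
gen3_eff = 128 / 130             # 128b/130b: 128 data bits per 130 on the wire

gen2_gbps = gen2_gts * gen2_eff  # 4.0 Gb/s usable per lane
gen3_gbps = gen3_gts * gen3_eff  # ~7.88 Gb/s usable per lane

print(f"Gen2 lane: {gen2_gbps:.2f} Gb/s, Gen3 lane: {gen3_gbps:.2f} Gb/s")
print(f"Speedup: {gen3_gbps / gen2_gbps:.2f}x")
```

A x16 port therefore moves from roughly 64 Gb/s to roughly 126 Gb/s of usable bandwidth in each direction.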
In addition to integrating the I/O and turbocharging it with PCI Express 3.0, Intel has included a new technology called Intel Data Direct I/O, which lets Intel's Ethernet controller talk directly to the processor cache. Again, you can see a historical previous system. The data flows through multiple components: the I/O hub, the processor, and memory. When data flows through all these components, the result is it takes more time for the data to get to where it needs to be, and it also keeps the memory active. With Data Direct I/O, we intelligently re-architected this data flow, such that the processor is the primary destination for network traffic. This means the latency between the processor and the network adapter is very short, which can actually double the I/O capabilities of the Xeon E5 Family, depending on the usage.
This also helps keep the memory in a low power state because we're not using memory when it's not needed. All of these capabilities combine to create what we call Intel Integrated I/O. This is the breakthrough innovation that gets you to your data faster to help you scale and meet the growing demands of the users like Diane has talked about. Thank you.
Thank you. Hey, thanks, Naveen. That's great. It's very exciting. These are some really impressive innovations. The allocation of the cache to that network traffic is done completely in hardware, completely seamlessly, no application involvement, no OS involvement at all. It just happens. Year over year, IT's investment in security continues to grow. I know that firsthand, having been in the IT group for four years and watching the security budget just very predictably grow year over year. The environment we're all living in just continues to increase in intensity. The number of attacks in our environment is doubling every year. The sophistication of those attacks is getting more intense. The sources of those attacks are very, very broad, from cyber criminals to nation-state cyber espionage to even insider threats.
Really, the job of the IT guy in trying to secure the environment and ensure that the corporation's IP and the customer and employee personally identifiable information is secure, that job just becomes harder and harder. As we move to technologies such as cloud computing and greater mobility, more and more networks of differing levels of security, more and more devices of differing levels of security, that just adds to the complexity, adds to the risk in trying to maintain a secure infrastructure, a secure data center environment. The solution is to bring the security closer and closer to the hardware. If you're going to have a truly secure solution, it's going to be built into the hardware platform. If you think about it, the malware has gone from application mode to user mode to now rootkits in the kernel.
The only way to protect against that is to be one level lower sitting in the hardware. That's why our security features that are in the Xeon processor family are so powerful. With Trusted Execution Technology, you're able to ensure that there is a root of trust, a known good software stack. This is incredibly important when you go into a virtualized environment and you have multiple VMs running on a single server. With TXT, should one of your VMs become compromised, you can move all the VMs running on that machine off into a quarantined state, validate that the hypervisor hasn't been compromised by using TXT.
If the hypervisor hasn't been compromised, then you have confidence that only the one VM is at risk, and you can move the other VMs back on the machine and get them back to work immediately. Without Trusted Execution Technology, you would need to bring down the entire system and all the applications that are running on that system to know that you've got a clean and secure environment. This is a dramatic improvement in the confidence of running in a virtualized environment. Then there are the Advanced Encryption Standard New Instructions. The Chief Information Security Officers of enterprise IT organizations have been asking for data encryption, in transit and at rest, for a very long time. They would love to see all data encrypted, but historically, it just hasn't been feasible from a performance perspective.
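The TXT quarantine-and-attest flow described above can be sketched in Python. Everything here — the function names, the measurement strings, the host dictionary — is a hypothetical illustration; a real deployment would rely on a TXT-aware hypervisor, a TPM, and an attestation service rather than these stand-ins.

```python
# Hypothetical sketch of the quarantine-and-attest flow. Names and data
# structures are illustrative only, not a real TXT API.

KNOWN_GOOD_MEASUREMENT = "a3f1..."  # launch-time hash of a trusted stack

def measure_hypervisor(host):
    # Placeholder: in reality the TXT launch measurement is read from the TPM.
    return host["measurement"]

def handle_compromised_vm(host, bad_vm):
    # 1. Quarantine every VM on the affected host.
    quarantined = list(host["vms"])
    host["vms"] = []

    # 2. Attest the hypervisor against the known-good measurement.
    if measure_hypervisor(host) == KNOWN_GOOD_MEASUREMENT:
        # 3. Hypervisor is clean: only the compromised VM stays quarantined,
        #    the rest go straight back to work.
        host["vms"] = [vm for vm in quarantined if vm != bad_vm]
        return f"restored {len(host['vms'])} VMs, isolated {bad_vm}"
    # Otherwise the whole host must be taken down and rebuilt.
    return "hypervisor attestation failed: rebuild host"

host = {"measurement": "a3f1...", "vms": ["web", "db", "mail"]}
print(handle_compromised_vm(host, "mail"))
```

The point of the sketch is step 2: without an attestable root of trust, there is no way to distinguish the "restore the other VMs" branch from the "rebuild everything" branch, so you always have to take the whole system down.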
The guys in IT that own the infrastructure don't want to see the performance of their workloads decline thanks to encryption. With the E5 Processor Family now, you can encrypt using the AES New Instructions. You can encrypt and decrypt your data with no performance impact at all. It is now invisible. The performance of the processor is so outstanding, and with those instructions, it's hardware accelerated in the processor, no issue. With this new processor family, the Chief Information Security Officers finally get their way, and data can be encrypted both at rest and in transit. You can see some examples here of the usage of our security features. TXT increases the confidence in virtualizing and automating your environments.
Even for high-security applications, the level of assurance is high: the quote here from DuPont speaks to their ability to use TXT to confidently meet regulatory requirements around their applications and data. You can see on the AES side the additional security that high-performing encryption gives: GeneX is confident in storing their medical records in a public cloud, which is quite progressive when you think about how security is always the first thing people point to when you talk about moving your apps and data into a public cloud. Knowing that your data is fully encrypted gives GeneX that confidence. Security is always top of mind when you talk about moving to the cloud.
I thought it would be nice to bring a cloud service provider onto the stage to talk about the value that they're getting in running their business and using the new Intel E5 Processor Family. We have with us Alex Rodriguez, not the baseball player, as we learned last night. He's the Vice President of Systems Engineering and Product Development from Expedient Communications. Alex, if you could come up and tell us what you're up to with E5.
Thanks, Diane. Good morning, everyone. Expedient is a cloud service provider providing services in the cloud to customers nationwide. As part of that, we run into some pretty interesting challenges that we worked with Intel to help overcome as part of their new processor launch. Two key areas of focus for us are scale and security in the cloud. If we look at our scale, we're seeing tremendous growth, just unprecedented growth, 167% year-over-year growth. That growth is continuing to accelerate. Additionally, with that growth, we've got customers bringing to us more and more secured environments. They're looking for us to maintain the level of security and meet the ever-increasing regulatory needs of their business. We took these two issues to Intel, and they helped us to understand the platform better and to actually be able to execute and address these challenges. Let's first talk a little bit about scale.
We've heard a lot about these new CPUs and what they can do. As Diane mentioned, it's about a balanced approach. One of the places where we saw a need to increase scale was inside of our I/O. Because virtualization is driving utilization up and the servers are getting so large, we were outstripping the bandwidth that we had to those servers. We looked for a new solution. We needed something that wasn't technically complex, because that complexity adds cost and can potentially cause outages. In addition, we needed to make sure that our cabling costs were low; data center infrastructure costs can be very, very great, especially when we looked at solutions like optical.
We settled on Intel 10 Gb Ethernet as part of our platform for our next-generation cloud offering. We saw some pretty dramatic benefits. We saw a 23% reduction in the number of switch ports and cables that we needed to deploy our solutions. We saw a 14% reduction in the infrastructure costs out of the gate. As that technology matures, we're really expecting to see that further improve. The big thing was we got a 150% increase in the server bandwidth. That was exactly what we were looking for to match the quality and the speed of the new E5 chipsets. This was all wrapped inside a simplified technical architecture, something that our engineers already knew: it's Ethernet. Our servers went from looking like this to looking like this, improving airflow, increasing our capabilities, and keeping it simple so that we could continue to deploy our new cloud environments.
I mentioned security is our other challenge. We have a massive need for encryption at Expedient. Our customers, again, are bringing us very sensitive data sets, and we want to make sure that they're covered by encrypting them. That's also a big part of a lot of these regulatory standards that we're seeing today. Intel turned us on to a feature set inside of the Xeon CPUs called AES-NI. As Diane mentioned, AES-NI is hardware-based acceleration. Previously, we had to go to specialized ASICs to make this work. Now we found that inside the Intel Xeon CPU it was already there, at least in the last release, and that we could access it and get some huge benefits. Let's talk a little bit about what those benefits are. We did a test using AES-NI and doing 256-bit encryption. What's 256-bit encryption?
If I had a machine that could do 72 quadrillion checks a second, and it was set to brute force that piece of data to try to figure out what it was and decrypt it, it's going to take a long time for that decryption to take place. We did four separate tests. The first test was with the previous Intel processor, the 5500 Series. It didn't even offer AES-NI. It didn't have that feature set. We got about 5.3 Gb/s of 256-bit encryption through a dual socket system. We did that same test with the 5600 series, the current generation Intel processor, and we got about 11 Gb/s, a pretty healthy increase. On that same dual socket system, all we did was we accessed that AES-NI instruction set, and the rates went up to 18 Gb/s. The real magic happens when we look at the E5.
When we take the E5 numbers, we're close to 40 Gb/s in a two-socket box. We're encrypting 40 Gb/s inside a standard two-socket system using AES-NI. That's a 118% improvement over the previous generation. What does this mean? With a dual-socket system, I can take that Madagascar DVD we talked about and encrypt it in less than a second. I can take the Library of Congress and, in the better part of a day, go through and encrypt all of it.
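The sub-second DVD claim checks out with simple arithmetic. Assuming a single-layer DVD of 4.7 GB (the capacity is my assumption; the talk doesn't give a size) and the quoted 40 Gb/s AES-NI throughput:

```python
# Encryption time at the quoted 40 Gb/s two-socket AES-NI rate.
rate_gbps = 40.0    # gigabits per second, from the talk
dvd_gb = 4.7        # assumed single-layer DVD capacity, gigabytes

seconds = dvd_gb * 8 / rate_gbps                   # 37.6 Gb / 40 Gb/s
terabytes_per_day = rate_gbps / 8 * 86400 / 1000   # sustained for 24 hours

print(f"DVD: {seconds:.2f} s")                     # under one second
print(f"One day at 40 Gb/s: {terabytes_per_day:.0f} TB")
```

A sustained 432 TB per day is consistent with working through a Library-of-Congress-scale archive in the better part of a day, depending on which size estimate you use for that collection.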
Most importantly to us, I can encrypt and decrypt every packet into and out of any one of our data centers with that single two-socket box. That allows us to address security concerns with our customers more reliably and with less overhead, and it really helps us have an impact on the business and make sure people can adopt the cloud. Those are two areas that, working with Intel, we were able to address, and we're very, very happy with our performance results.
Super compelling. Thank you so much.
Thank you.
Thanks, Alex. Very nice. Great data. That is compelling data. Thank you.
The other key attribute, as we know, in delivering ever-improving results in the data center is energy efficiency. As we all know, power will continue to be a constraint on data center deployment and on the deployment of infrastructure in the data center. Energy-efficient computing is one of the three core pillars of everything we do at Intel. With this generation of Xeon processors, the E5 Family, we have reduced power consumption by 50% for a fixed unit of compute. It's a dramatic reduction in power consumption, and thanks to the brilliance of Naveen's team, they've done it across a wide range of features and capabilities, from transistor-level features to block-level features to system-level features. In this generation of processors, we're announcing Turbo 2.0.
You'll remember Turbo was launched in the prior generation. Turbo 2.0 allows us to increase the frequency of a single core. If you need single-core performance, you can increase the top frequency by 900 MHz, a significant pop in performance thanks to Turbo 2.0. We also have features built into the processor with the intelligence to track the utilization of each CPU core. If a core is not being fully utilized, the processor will ratchet back the frequency of the interfaces: the frequency going out to memory, to QPI, the chip-to-chip interconnect, and the interconnect into the cache. It throttles back all those I/O interfaces, which tend to be higher power consumption interfaces since you're going off and on chip, ratcheting them back when you don't need them based on core utilization.
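As an illustration only (this is not Intel's actual algorithm, and every threshold and frequency below is made up), the utilization-driven interface throttling described above amounts to a policy like this:

```python
# Toy sketch of utilization-driven interface throttling: pick an
# interconnect frequency from a sampled core utilization, dropping the
# link speed when the core is mostly idle. All numbers are hypothetical.
def interface_freq_mhz(core_utilization: float) -> int:
    """Map a core-utilization sample (0.0-1.0) to an interface clock."""
    if core_utilization >= 0.75:
        return 3200   # core is busy: keep memory/QPI links at full speed
    if core_utilization >= 0.40:
        return 2400   # moderate load: step the links down one notch
    return 1600       # light load: minimum link speed to save power

for util in (0.95, 0.50, 0.10):
    print(f"utilization {util:.2f} -> {interface_freq_mhz(util)} MHz")
```

The real hardware does this continuously and transparently, which is why the power savings come without the administrator having to tune anything.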
There's a lot of intelligence built into the processor that allows us to get this kind of dramatic improvement in energy efficiency while delivering outstanding performance. As an old IT guy, I am particularly excited about the features that are coming out now. The enhancements to Intel Node Manager and Intel Data Center Manager are going to completely revolutionize the way we manage data centers in IT. If you think about it, today within IT there's the data center facility guy, and he is incredibly worried about the power and cooling of the data center facility and optimizing it. Then you have the IT infrastructure guys, whose passion is driving up utilization and getting the greatest performance they can out of their infrastructure. Those two are optimizing the data center independently today.
The result is not only suboptimal, it can actually be dangerous. We had a situation inside Intel IT where the infrastructure guys were driving up utilization, getting greater and greater performance out of the infrastructure and feeling quite good about themselves, when suddenly our power load increased by 10% and put the data center in grave danger of shutting down. It's a real-life example of what happens when you allow these two worlds to optimize independently. With Node Manager and Data Center Manager, these two worlds come together. Node Manager gives you visibility into the utilization of the processor as well as the power consumption, at a server level and at a rack level. Through Data Center Manager, you can dynamically see what your utilization level is and what value you're getting out of your infrastructure.
At the same time, you can see your power utilization, and you can optimize across the two worlds. I really want to applaud Dell for their leadership, innovation, and engineering in taking Node Manager and Data Center Manager and incorporating them into their OpenManage Power Center, bringing this capability of effectively and efficiently managing your data center across power and utilization to their customers with the launch of the E5. Thanks very much to Dell for that innovation. As I said, we need to think about the E5 across the entire data center. It is not just an outstanding server solution; it is an outstanding storage solution and an outstanding network solution. You can see that when you look at the number of designs launching today with the E5 processor.
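The joint power-and-utilization management that Node Manager and Data Center Manager enable can be sketched as a simple control loop. This is a toy model, not either product's real behavior, and every name and number in it is hypothetical:

```python
# Toy rack-level power capping in the spirit of Node Manager / Data
# Center Manager: when measured rack power exceeds the cap, tighten the
# per-server power budgets; when there is headroom, relax them so
# utilization can climb. All names and numbers are hypothetical.
def rebalance(rack_cap_w: float, server_power_w: list[float],
              budgets_w: list[float], step_w: float = 10.0) -> list[float]:
    """Return new per-server power budgets given current draw and the rack cap."""
    total = sum(server_power_w)
    if total > rack_cap_w:
        # Over cap: lower every budget by one step (floor at 100 W).
        return [max(100.0, b - step_w) for b in budgets_w]
    if total < 0.9 * rack_cap_w:
        # Comfortable headroom: hand power back to the servers.
        return [b + step_w for b in budgets_w]
    return budgets_w  # within the target band: leave budgets alone

budgets = rebalance(700.0, [260.0, 255.0, 240.0], [250.0] * 3)
print(budgets)  # rack draws 755 W against a 700 W cap, so each budget drops
```

The point of the combined tooling is exactly this feedback: the utilization side can push as hard as it likes, but the power side enforces the cap before the facility trips, avoiding the 10%-spike scenario described above.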
The power of the processor is demonstrated in the number of designs launching with us: over 400 system solutions, two times the number of the prior platform transition, the last time we went through a platform-level change. Two times the number of solutions ready to go. And those solutions are not just in the server space. You might naturally associate the Xeon E5 processor with servers, but it's across servers, storage, and network. Even within the network space, of those 400 designs being launched, 100 of them are in the communications infrastructure. The communications infrastructure is being supported through the E5, including the extended-life requirements of the comms industry as well as the increased temperature requirements.
The E5 supports not only what you would think of as your traditional data center, but also your telco service provider infrastructure. To conclude, we're obviously very excited about the Xeon Processor E5 Family. It brings an 80% performance improvement over the prior generation. It brings an amazing breakthrough in I/O performance, as Naveen talked about: a 30% reduction in latency, a 2x improvement in bandwidth, and a reduction in power consumption through integration, a dramatic change in delivering balanced platform performance. We're bringing Intel Integrated I/O to the industry for the first time, bringing PCI Express Gen 3 to the industry for the first time.
As I talked about, security is really core as we continue to build out data centers using new usage models like cloud. We continue to build upon our security solutions: Intel TXT technology, as well as the hardware encryption and decryption technology with AES-NI, so nicely explained by Alex. We continue to drive the best-ever performance per watt; energy-efficient performance is at our core. I want to translate all of this innovation into something visual. We'll roll the video, please. OK, you can probably guess which is which. If you're struggling, the one on your left is the old-generation microprocessor. If you're running a data center today, I guarantee you have lots and lots and lots of these servers running in your data center.
What we have here is an emulation of streaming media down to 30,000 users. That's what the demonstration is showing you. On the right, what you see is the new Xeon E5 processor family. I hope you would agree that the difference is dramatic and very clear; I don't think I have to describe the difference to you. I do want to describe how it is achieved. I know this is kind of small and kind of far away, but it will be up through the breaks, and you're welcome to come over and check it out. It's really through four big technologies integrated into the Xeon E5 processor that you get these kinds of dramatic results. It starts with the Direct Data I/O that Naveen described.
You can see the 5300 processor on the left without it, running 1 GB, and the processor on the right, the new E5 with Direct Data I/O. Moving down to the encryption space: on the left, given the lack of AES-NI instructions, we're doing it the old-fashioned way with software encryption. You can see how slow it is and how long it takes to encrypt the stream, versus the picture on the right with the new AES-NI instructions and the incredible performance the E5 Family brings. Over at the graphics display, you can see the power of the AVX instruction set. At the top is Node Manager. The Node Manager display tells you the utilization of the processors at any time, and the power being consumed at a server level and a rack level against the power cap for that rack.
Not only is it giving you visibility into the actual usage of the processor, but you then can cap the power and ensure that you maintain the utilization within spec. With that, I want to say thank you again for joining us. It's a big day for us with the E5 launch, and I appreciate your time and coming out to see us and participate in the launch event with us. Thank you very much.