Good morning and welcome. It's wonderful to have you here, and I'm thrilled to have the chance to kick off what we think is an extremely important event for our company. This is historic for Intel as we deliver the first truly data centric portfolio of products for our customers. As most of you probably know, over the last several years we embarked on a journey as a company to transform from what we characterize as a PC centric company to a data centric company, and to build the silicon technologies, in conjunction with our partners, to help our customers prosper and grow in an increasingly data centric world.
And this approach has allowed us to pursue a massive $300 billion silicon based market opportunity. And the launch of the new data centric portfolio of products not only advances our data centric strategy, but more importantly, it helps our customers extract more value out of their data. Today's announcements reflect the transformation we've embarked on to deliver the innovation and the technology that will unleash the power of data. While our overall TAM is roughly $300 billion, our data centric TAM represents almost two-thirds of this. And by 2022, it will be a $200 billion TAM.
And it will be the fastest growing aspect of the silicon market that we serve. The broader industry infrastructure and solution revenue from data center computing represents $1.25 trillion in revenue this year alone, growing at almost 17% a year over the course of the next 5 years, for a total of $7.5 trillion in revenue during that time frame. Our ambitions as a company have never been greater, and we are growing our market share in a bigger TAM than we've ever pursued before. And our quest is to play a bigger and bigger role in the success of our customers, working in conjunction with our partners. And while our ambitions are as big as they've ever been, our focus is actually quite simple.
First, to deliver a high performance data centric silicon portfolio of CPUs plus a suite of accelerator technologies, including FPGAs, memory and more. And second, customer-obsessed engagement with our industry partners to leverage our silicon foundation to move, store and process data, and to build the best technologies and solutions together with our partners. We're truly proud to have many of our partners and industry leaders from across the data centric landscape here with us today to talk about the joint innovations that we've been working on together. We have a great day in store for you. You'll hear from Intel leaders, from our partners, and you'll get to see our data centric portfolio of products in action.
I'm truly excited to be here. And with that short intro, I want to really get this event started and invite our Executive Vice President and General Manager of our data center group, Navin Shenoy. Please, a round of applause for Navin.
All right. Good morning and welcome to Intel's Data Centric Innovation Day. As you heard from Bob, this is a big day for us. This is a big day for our customers. This is a big day for our partners.
It is the first truly data centric portfolio launch in our history. And as you heard a little bit there from Bob, while Intel has long been the foundation for many of the industry's most important data center CPU advancements, today's announcements reflect a broader transition inside the company to drive Intel technology across not only the microprocessor, but also FPGAs, memory, storage, interconnect, software and security. Our goal is to unleash the value of the massive amounts of data in the world, to help our customers move, store and process that data to create value. We're delighted to have many industry leaders with us today to share their stories on this journey of joint innovation. Of course, the underlying driver of the shifts that we're seeing today can be traced back to one thing: data.
Over 50% of the world's data was created in the last 2 years. And we've reached an inflection point where entire industries are being reshaped by leveraging that data. Who would have imagined, for example, that an e-commerce company could reinvent itself and also become a mobile network provider? Well, that's what Rakuten in Japan announced at Mobile World Congress earlier this year. Who would have imagined a few years ago that a 117-year-old company would be using AI to transform the way we perform cardiac MRIs?
That's what Siemens Healthineers is doing. You're going to hear more from these customers, and from many other customers, over the course of the event today. But I think it's important to remember that while there are early emerging examples of this digital transformation, less than 2% of the world's data has actually been analyzed. So there's great optimism about the future, and great untapped opportunities still ahead. We see this data centric era emerging at the confluence of 3 industry megatrends.
The first being the shift to the cloud, the second being the rapid emergence of AI and analytics, and the third being what we call the cloudification of the network and the edge. The shift to the cloud, of course, is probably the most mature of these three. The scale and the efficiency of the cloud architecture started at the major hyperscalers. It was built on Intel architecture. It leveraged innovation like Intel Virtualization Technology, but it's now also fueling the modernization of traditional on premise enterprise data centers.
AI and analytics, which started as the domain of the most sophisticated, data rich companies, is also now reshaping entire industries. And you're going to hear some examples of that later today. And finally, the same concepts and technologies that created the cloud are now transforming the network, cloudifying it, allowing the network itself to flex and scale. And as 5G emerges, we will only see, I believe, that cloudification accelerate, moving compute to the edge, closer to where all that data is being created and where all that data is being consumed. And in the context of those megatrends, we are seeing explosive demand for computing, an almost insatiable appetite for computing.
Over the last 5 years, we've seen a 50% compound annual growth rate in demand for compute cycles, or MIPS in the industry parlance. And maybe more interestingly, our forecast is that we will see another 50% compound annual growth in compute cycle demand over the next 5 years. And just as importantly, we've seen an explosion in the types of workloads that our customers are running, from security to virtualization to database to network to multi cloud to orchestration to HPC to AI and analytics. We're seeing diversifying needs emerge as computing expands. And so given these trends, over the last several years, Intel has been investing in a new approach to the underlying data centric infrastructure.
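To put that growth claim in perspective, a 50% compound annual growth rate over 5 years multiplies demand roughly 7.6x. A quick sanity check (the function name here is mine, for illustration):

```python
# 50% compound annual growth in compute-cycle demand:
# after n years, demand is (1 + 0.5)**n times the baseline.
def demand_multiple(annual_growth: float, years: int) -> float:
    """Cumulative demand multiple after `years` of compound growth."""
    return (1 + annual_growth) ** years

five_year = demand_multiple(0.50, 5)
print(f"{five_year:.2f}x")  # 7.59x -- roughly 7.6 times the baseline demand
```

In other words, the forecast implies the industry will need more than seven times today's compute cycles within five years.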
We have been investing to help move data faster, to help store more data and to process everything. We're architecting the future of the data center. We're architecting the future of the edge. And that required us to broaden our portfolio, to move data faster with a suite of products like our Ethernet portfolio and silicon photonics and high performance fabrics, to store more data with the investments we've made in NAND and Optane and of course processing everything with innovations in the CPU, but also the portfolio of accelerators we've been building with FPGAs and AI ASICs, all of which is underpinned with an increasing investment in software and solutions and system level thinking. Today, we are taking a big step forward with the expansion of an unmatched portfolio to move, store and process data.
At Intel, it's our mission to empower the greatest innovators of our time with the technologies they need to take their ideas further. And it all starts with data. The data centric computing era is marked by the rapid growth of cloud services, the transition to intelligent 5G networks, the explosion of connected devices and the adoption of advanced analytics and artificial intelligence. From large complex applications in the cloud to the edge, our customers are looking for solutions that can move, store and process data faster, turning it into actionable insights that drive business forward. Our customers are bringing the future to life, and we are working to make sure they have access to everything they need to do it.
Together, we push possible forward.
With my colleagues today, we are introducing 7 new data centric products. Today, we are launching the Intel Ethernet 800 Series adapter, which supports 25, 50 and 100 gigabit transfer speeds. Today, we are announcing 24/7 storage availability with the latest Optane SSD and efficient warm storage with QLC NAND. We're delivering breakthrough innovation with our latest Optane data center persistent memory technology. And of course, we're bringing forward our 2nd generation Xeon Scalable product, delivering breakthrough performance with new features like DL Boost for advancements in inference performance.
And we are bringing forward power and space efficient network processing with the Xeon D-1600. We're also announcing a leap forward in programmable acceleration with our first ever 10 nanometer FPGA, the Intel Agilex series. Thank you guys for being here. Thank you very much. We're super excited.
Thank you very much. So we're not just innovating at the product level today. What you'll hear hopefully throughout the day is that we're also innovating at the solution level, bringing these various products together to solve customer problems. We have a wide range of Intel Select Solutions that we'll be announcing today for AI, for virtualization, for security. Together, these products and solutions represent years of Intel engineering investment.
But perhaps what's even more exciting to me is the broad set of partners and customers that are here with us today. We have more customers and partners and ecosystem players than we have ever had at an Intel launch before. Many of them are launching their products and services today. So I'd like to just say thank you and congratulations from all of us at Intel to all of you for pushing the envelope on innovation today. Thank you guys very much.
Okay, so let me dig in to the heart and soul of the data center, starting with Xeon. Today, we are introducing the 2nd generation Xeon Scalable Processor. That was safe. It's very robust. You can see.
The 2nd generation Xeon Scalable builds on our 20 year history of Xeon, where we've delivered advancements year in and year out on a diverse range of customer workloads. We are launching today the most comprehensive Xeon stack we've ever delivered, with over 50 standard SKUs complemented by dozens of custom offerings for our customers. We're delivering 8 core Xeons all the way up to 56 cores, the highest core count we've ever delivered on Xeon. We're delivering glueless 1-, 2-, 4- and 8-socket support with Xeon. And for the first time in the industry today, we're unleashing the value of persistent large memory capacity on the world's applications with Optane data center persistent memory: 4.5 terabytes of memory per socket.
This product represents deep technical investment in both the memory technology as well as the CPU controller. It allows us to break through the memory economics that have held back application developers for decades. We're very excited about that. We've also added some new key architectural elements to the fastest growing workloads on the planet. We've added a new feature called deep learning boost or DL Boost, enabling the first CPU with built in inference acceleration for AI workloads.
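For a sense of what persistent large memory capacity means to an application developer: in its direct-access mode, persistent memory is typically exposed as a file on a DAX filesystem that the application maps straight into its address space, so data is read and written with loads and stores rather than I/O system calls. A minimal stdlib-only sketch of that access pattern, using an ordinary file as a stand-in for a real pmem device (the path and sizes are mine, for illustration):

```python
import mmap
import os
import struct

# Persistent memory in direct-access mode is exposed as a file the app maps
# into its address space. An ordinary file stands in here for a file on a
# real /mnt/pmem DAX mount -- the access pattern is the same.
PATH = "pmem_demo.bin"   # hypothetical stand-in for a DAX-backed file
SIZE = 4096

with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)          # size the backing region up front

fd = os.open(PATH, os.O_RDWR)
buf = mmap.mmap(fd, SIZE)            # load/store access, no read()/write() syscalls
buf[0:8] = struct.pack("<Q", 42)     # write an 8-byte record "in place"
buf.flush()                          # on real pmem: flush CPU caches to media
value, = struct.unpack("<Q", buf[0:8])
buf.close()
os.close(fd)
os.remove(PATH)
print(value)  # 42
```

The point of the economics argument above is that this byte-addressable, persistent tier sits between DRAM and SSDs in both cost and latency, which is what lets much larger working sets stay "in memory."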
We're further accelerating network applications, and we've created a set of Xeon SKUs for the network market, for NFV workloads, network function virtualization workloads, with new technologies like Intel Speed Select Technology. We're also delivering CPU architectural enhancements to provide hardware security mitigations for side channel attacks. And with the smooth platform upgrade that we're delivering here today from our prior generation Xeon, the entire ecosystem, including the companies that you're going to hear from today, is ready to ship now, so that customers can immediately take advantage of all of this innovation. This is why we believe our customers will make the 2nd generation Xeon Scalable the fastest ramping Xeon ever. Our new highest performance addition to the family is the advanced performance Xeon Scalable, the Platinum 9200.
Back in October, we announced the addition of this family to the Xeon set of SKUs. It's been designed from the ground up for data intensive applications: high performance computing, analytics, artificial intelligence. This product, the Xeon Platinum 9200 processor, delivers leadership performance. We previously disclosed SKUs up to 48 cores on this product. And today, I'm very excited to announce that we're actually scaling this product up to 56 cores, or 112 cores in a 2 socket system, with 12 channels of memory support.
It's a very unique design. Each CPU core has access to the expanded memory interfaces. And what's particularly exciting to me is I can remember a day and a time where to deliver a teraflop of performance, we needed this entire room of computers. Today, we have 3 teraflops of compute in the palm of my hand. This thing is truly a beast.
And we're really excited about bringing this product out into the market. Now, as I mentioned, one of the places where we see the most demand for products like that is in the high performance computing domain. And I'm excited to say that some of our customers have already decided to solve some of their biggest data problems on Intel. Let's take a look.
Working in computer science is always very exciting, especially working in high performance computing. My name is Ramin Yahyapour. I'm a professor in computer science at the University of Göttingen, and I'm also Managing Director of GWDG, a joint compute center of Max Planck and the University of Göttingen. HLRN is a quite unique system because it's a computer system funded by 7 of the 16 states that we have in Germany. And therefore, the system caters to a large audience.
HLRN is one system with 2 hosting sites, and the 2 hosting sites bring different expertise into the system. Both hosting sites are quite well known as compute centers. We have long experience in high performance computing. And that's the reason why we have been selected to host the new HLRN-IV system. We are very demanding clients.
A lot of our research is getting more compute and data intensive. We have people from many different disciplines working in mathematics, computer science, chemistry, physics, aerodynamics, and therefore, they have very different application needs on that system. We are not looking for peak theoretical performance, but for real system performance. And the Intel Xeon Scalable Processor advanced performance is giving us this kind of performance. In addition, the CPU is quite good for artificial intelligence and machine learning.
HLRN-IV is a quite large computer system; it has a quarter of a million compute cores. It's quite exciting that we get a 16 petaflop machine, which is a significant increase. In this case, having Intel as one strong partner on the one hand, and having Atos with their Bull system as another partner on the other, is quite crucial to make the whole project a success. It should be one of the fastest machines in the world.
So as we heard there, HLRN is working on really important scientific advancements, and we look forward to working with them, as well as many other customers, as we bring out and ramp the Xeon Platinum 9200. I asked the team to further highlight the capability of the Xeon Platinum 9200, and I challenged them to do something that we don't do very often: set a world record here on stage today. And so what I'm going to show you now, hopefully, is this system right here, a 2 socket server with 56 cores per processor. And what we're going to do is run the SPECfp_rate 2017 benchmark. It's a traditional high performance computing benchmark.
It's CPU intensive. It takes advantage of the core count. It takes advantage of the memory bandwidth that this system provides. And what I did is I started the SPECfp_rate benchmark on this server last night, and it finished just before this event started. You can see on this chart that the current world record is a score of 282, and the timestamp on the right hand side of the chart shows you that the benchmark finished just a couple of hours ago.
So what I want to do now is to show you how the Xeon Platinum 9200 processor did. So let me start that. It's running, and you can see it's reading a log file, and we achieved a score of 522. That new level of performance breaks the world record on that benchmark. I want to just say congratulations to the team for pulling that off.
Thank you guys very much. So we expect many more world records to come. Let me shift now from the beast of the Xeon Platinum 9200 to the bread and butter part of the Xeon lineup. In the volume price points for the next generation Xeon, we are delivering a 30% gen on gen performance improvement for the majority of our customers. To put it in context, this is the biggest generation to generation improvement we've delivered in the mainstream, volume part of the Xeon stack in the last 5 years.
This is a big deal for our customers. We're very excited about being able to do that. In addition, we are also seeing great performance on real world applications that our customers care about. You can see here a small sample of some of the significant improvements our customers are seeing on business analytics, on memory workloads, like databases, on network workloads. So very exciting to see that kind of performance improvement.
In addition to these workloads, we also pushed ourselves to innovate on the fastest growing workload on the planet: artificial intelligence. The rise of AI, of course, has been well documented. We're finally reaching, as I said earlier, an inflection point. We're seeing AI move into broad deployments. It represents already today a multibillion dollar silicon opportunity, and we expect that to grow to about $10 billion by 2022.
Now there's 2 primary types of workloads in the data center for artificial intelligence. There's training and there's inference. Inference is typically a part of other workloads and it lends itself to run very well on the CPU. It also happens to be over 50% of the AI workload set that our customers care about. And so for many years now, our engineers have been pushing hard to innovate on inference performance on the CPU.
With the 1st generation of Xeon Scalable that we introduced in 2017, we gave the CPU more AI strength. We added the AVX-512 engine into Xeon for the first time. And what you saw was a 5.7x improvement on inference workloads, through a combination of software and hardware innovation, from the time we introduced Xeon Scalable in July of 2017 until the end of 2018. Yet our customers were pushing us to do even better than that. And so today, with the release of our 2nd generation Xeon Scalable, I'm very excited to announce that we are further pushing the envelope on deep learning performance with the introduction of DL Boost: built in inference acceleration that extends Xeon's performance as the only CPU architecture with integrated AI acceleration.
With the combination now of hardware and software improvements in DL Boost, we're seeing a 14x deep learning inference performance improvement over the 1st generation Xeon Scalable from July of 2017. And with the Platinum 9200 series we just talked about, we're seeing a further 2x above and beyond that. So to show you DL Boost in action, I thought I'd bring Niv out onto the stage to show you just how great this technology is. Niv, come on out.
How are you doing? Good to see you.
Good to see you too. Thanks for being here.
Thank you.
You've been hard at work on this for a long time now.
That's right. That's right.
What do you have to say?
Yes. So we're very excited to show you a demo with DL Boost. As you know, the team has been working very hard to get here. And I'm happy to report that we now have accelerator-level performance with DL Boost on a variety of AI workloads. One of the most important deep learning conversations is always about image classification, but Intel has been optimizing a lot of workloads in addition to that, like recommender systems, language translation, etcetera, that are important to our customers.
So recommender systems in particular are one of the most popular AI workloads in the cloud today, and account for about 60% of all data center inference. And we've all used this. I mean, you use it every day: Amazon recommending products, Netflix with shows and movies, Facebook with ads. So what we're going to show here is a recommender system, Wide & Deep, and compare the performance of the Xeon Scalable with an NVIDIA GPU, a V100 GPU, which is a high end GPU in the data center. And we're going to do this with Amazon's deep learning framework.
So let's go.
Okay.
So on the left side, you have the FP32 performance on the V100. And on the right, it's the Intel Xeon Scalable, also with FP32. And you see that we're getting a 2.4x speedup over the V100 with just FP32 itself. And this is for a typical small batch size. The small batch size is important because this is what our customers use for getting real time recommendations.
I mean, when you ask Alexa for a recommendation, you're not going to wait forever to get a response.
Sure, sure. Now that's okay, but you didn't turn on DL Boost yet, right?
That's right. I haven't turned on DL Boost yet. So when we do, this is what happens. We get yet another boost in performance. And this neat animation actually shows the magic that our architects did to get to DL Boost. What we did was get about a 3x improvement in performance, with 3x the multiply accumulates per cycle that we had before with FP32. And when you actually run the recommendation, we get a 1.9x speedup over FP32, and ultimately a 4x speedup over the GPU.
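The fused multiply-accumulate being described corresponds to the AVX-512 VNNI instruction VPDPBUSD, which sums four unsigned-8-bit by signed-8-bit products into a 32-bit accumulator in a single step, replacing the three-instruction sequence needed before. A toy, per-lane emulation in plain Python (the function name is mine; the real instruction operates on 16 such 32-bit lanes at once):

```python
def vpdpbusd_lane(acc: int, a_u8: list, b_s8: list) -> int:
    """Emulate one 32-bit lane of AVX-512 VNNI's VPDPBUSD: four
    unsigned-8 x signed-8 products summed into a 32-bit accumulator
    in one fused step (previously three separate instructions)."""
    assert len(a_u8) == len(b_s8) == 4
    assert all(0 <= x <= 255 for x in a_u8)      # u8 operand range
    assert all(-128 <= x <= 127 for x in b_s8)   # s8 operand range
    return acc + sum(x * y for x, y in zip(a_u8, b_s8))

# int8 dot product of two length-8 vectors, processed 4 pairs at a time,
# as a quantized inference kernel would do for activations x weights.
a = [10, 20, 30, 40, 50, 60, 70, 80]   # quantized activations (u8)
w = [1, -2, 3, -4, 5, -6, 7, -8]       # quantized weights (s8)
acc = 0
for i in range(0, len(a), 4):
    acc = vpdpbusd_lane(acc, a[i:i + 4], w[i:i + 4])
print(acc)  # -360
```

Because each lane now retires four int8 multiply-accumulates where an FP32 pipeline retires one, the throughput gain comes on top of the memory-bandwidth savings of 8-bit operands.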
Awesome. Awesome.
So I mean, this is exactly what our customers want with their existing infrastructure. We're able to get the AI inference along with their general purpose workloads as well.
Great work, Niv. Thank you so much.
So those improvements that you saw there have come through close coordination between our engineers and their industry counterparts. Starting today, all of the major AI frameworks, TensorFlow, PyTorch, Caffe, MXNet, PaddlePaddle, and model exchange formats like ONNX, are all ready to go with DL Boost. We're also investing and working hard to make sure that the high growth use cases you heard about from Niv, object detection, image segmentation, recommendation systems, are all fully optimized and easily deployable through our tools, such as OpenVINO. Now, we've worked with the industry leaders in AI, and we've given them early access to 2nd generation Xeon. You can see some of the results they're already seeing.
Microsoft, a 3.4x improvement from 1st generation Xeon to 2nd generation Xeon with DL Boost. JD.com, 2x. Alibaba, 2x to 4x. Target, 4x. Tencent Cloud, 3x. Great to see early results already showing this kind of performance improvement. So we're very excited to have a broad ecosystem of support here to ensure the value of DL Boost can be delivered by companies across the globe. One of those companies who deeply understands the value of technology is Amazon Web Services, or AWS. And I thought it might be interesting to learn more about how AWS is leveraging Intel technology to offer game changing compute for their customers. So please join me in welcoming the Vice President of Compute Services at AWS, Matt Garman, to the stage. So our companies have an awesome track record of technology collaboration, Matt.
It's only fitting to share the stage and this exciting moment with you here today.
Yes, thanks. And yes, it's been a great journey. It's been about 13 years that we've been collaborating together, since we started AWS back in 2006. And the partnership with Intel has been awesome. It's really been key for us getting the most performance for our customers, focusing on custom technologies that we can bring to market and that help us deliver outstanding performance.
And Intel has been a great partner with AWS throughout that whole period as we've been growing our cloud business. But frankly, the collaboration goes beyond just the cloud business and AWS. We're partnering across Amazon: with the retail business, on our devices, we mentioned Alexa earlier, as well as on machine learning and IoT, really across the board of things that we're doing. And specifically, if you think about AWS and EC2, we have over 100 instances in EC2 that leverage Intel processors, targeting every type of workload that you can think of, from startups to enterprises.
We have T3 instances that are targeted at smaller workloads, everything from blogs to small enterprise applications, up to the very biggest scale-up applications like SAP HANA, where we have an instance with up to 12 terabytes of memory. And it's instances like that which are the reason that more SAP implementations are running on AWS than anywhere else. Another highlight that I'd love to talk about, as part of our joint collaboration on bringing innovative instances to market, is the z1d instance. It features a custom Intel Xeon Scalable processor that provides 4 gigahertz of single threaded performance, by far the fastest of any instance available in the cloud, and that's one that we worked on together to bring to market.
This is really great for applications that are performance sensitive and latency sensitive, things like online gaming. But it's also great for applications that have really high per core licensing costs, things like EDA workloads or expensive database workloads like SQL Server. And in fact, many of our Windows customers have been excited about the performance gains as well as the cost benefits that we've given them in the cloud, which is why more Windows workloads run on AWS, in fact twice as many, than on the next closest cloud competitor. Finally, I want to call out HPC as the last example, which I think is a really great example of our long term collaboration. We're on the 5th generation of our compute optimized instances with our C5 instances. And with every single iteration, we make advancements for our customers.
We help improve workload performance, drive down costs and reduce their time to results. Last November at re:Invent, which is our AWS annual conference, we launched our C5n instances. They also offer advanced networking capabilities, where we introduced 100 gigabits of network throughput for a single instance.
So you mentioned earlier, Matt, 100 services on Intel architecture. How are you seeing your customers take advantage of the optimizations that we've worked on together?
Yes. And we have millions of customers across the world and across all different types of industries and different sizes. I thought I'd highlight a couple of them just to give you a sample of how people are using some of these technologies. So the first one is BP, a global energy company that leverages AWS running on Intel processors for many of their HPC workloads. BP uses AWS to run these linear programming models that involve complex calculations based on thousands of different inputs.
One of the cool things, and I think the thing that's really exciting, is that when BP moved to the cloud, they were able to scale out and leverage a lot more compute. A workload that they were running on premise used to take more than 7 hours to complete, but now they can do it in minutes in the cloud. That means they can rapidly adjust to market changes and make decisions in real time. So that's an example of why more and more people are running HPC in a cloud environment. Another example, and this one's kind of fun, is Formula 1.
Now, there are 500 million fans worldwide who really enjoy watching Formula 1 races. But Formula 1 is actually a pretty data driven sport, which is pretty interesting. A single race generates more than 1,500 data points every second, which is pretty amazing. Using Amazon SageMaker, AWS's hosted machine learning service, Formula 1 data scientists train their deep learning models, and they've actually gone back and used 65 years of historical race data, extracting critical details that predict how a race is going to go.
Then, running their inference on AWS and Intel, they try to make race predictions in real time. So as you're watching Formula 1 races, they make predictions on screen to give viewers a more immersive experience. It's kind of a fun way of seeing how ML is really enhancing all sorts of areas, even fun things like races, where you wouldn't expect it.
So that's what we've done in the past together. What do you have to say about what we're going to work on together for the future? Yes.
So we're super excited about the launches you have today, and the Cascade Lake processor in particular, to help further optimize performance and costs, and really push performance on those demanding workloads that customers have, particularly focused on inference, which you mentioned a little bit earlier. We're really excited about what that can provide for customers. In fact, much of this work has already started. As you know, we've been working on this already. Last month, we announced plans for the next EC2 instance, which is the G4 instance.
These instances are custom designed for machine learning, both training and, specifically, inference, as well as things like video transcoding and other demanding applications. And we think that they're going to be able to really take advantage of benefits like DL Boost for increased performance on the inference side in particular. But outside of AWS and across Amazon, others are taking advantage of this technology as well. You mentioned Alexa earlier. For the Alexa team, speed is super important, as you talked about.
And when it comes to delivering a fantastic customer experience, you've really got to have that low latency for those inference workloads. The Alexa team has been benchmarking automatic speech recognition and natural language understanding using the DL Boost technology that comes with Cascade Lake.
Great. Thanks so much, Matt, for the partnership. Thanks for the collaboration. Excited about what we can do for the future. Great.
Thanks for having us here.
Congratulations. Thank you.
Okay. So I talked earlier about the cloudification of the network and the emergence of the edge. And we know that Intel architecture has become the platform of choice for this trend. We've been after this for many years now. And it speaks to the efficiency and cost savings that customers can see by moving to a more virtualized, cloud native network infrastructure on Intel architecture.
We've been at this for many years, as I mentioned. We started the notion of NFV, or Network Function Virtualization, 8 years ago in 2011. We started doing the first proof points in 2013. We open sourced the Data Plane Development Kit, DPDK, in 2017. In 2017 as well, at the 1st generation Xeon Scalable launch, it was exciting to have John Donovan from AT&T on stage talking about how AT&T intended to virtualize 75% of their network infrastructure by 2020.
At MWC, they announced that 65% is already complete. And of course, at MWC this year, we also saw the announcement by the e-commerce provider I mentioned earlier, Rakuten, saying that by October of this year, they would launch the world's first 100 percent cloud native network to deliver new services at not just the speed of the network, but also at the speed of the cloud. So very exciting to see the progress here. And in that context, we're taking advantage of that unique network know how today and we're taking it one step further. We're taking the learnings from those 5 years of NFV proof of concepts, literally hundreds of NFV proofs of concept with customers around the world, and we're offering specific SKUs that are optimized for the network today. We call them the N-SKUs.
They have the right balance between core counts and thermals and frequency to deliver the maximum performance for network workloads. Now these capabilities provide significant performance enhancement generation on generation. We're seeing a 1.7x improvement on those NFV workloads from the 1st generation Xeon Scalable to Cascade Lake or the 2nd generation Xeon Scalable. And it also delivers enhanced quality of service. So I thought it'd be interesting for you guys to see this in action.
And so to demonstrate this and to show you the value of this capability, I'd like to invite up our Senior Vice President for the Network Platform Group, Sandra Rivera to the stage. Welcome, Sandra.
Hey, Sandy.
Hey, Navi.
How are you?
I'm great. Well, as you've been talking about, we have lots of innovations in our 2nd generation Xeon Scalable for networking. And we're really focused on allowing our customers to drive greater network throughput, meet their stringent service level agreements and then deliver a lower TCO. So we're introducing a new feature called Intel Speed Select Technology - Base Frequency that allows our customers to prioritize and optimize their processor for the most demanding workloads. So if you want, I'll show you how all this works.
Let's do it. Okay. So over here in our demo, you see on the left hand side our original Xeon Scalable and then our 2nd generation Xeon Scalable processor. So we're going to deploy an Open vSwitch networking workload, a very common, very stringent workload for networking applications. So I'm going to go ahead and deploy that.
And you can see on the left hand side, we've got the original Xeon Scalable. But on the right hand side with our 2nd generation Xeon Scalable, we're getting a 1.56x, 1.57x performance improvement in terms of processing of those packets. And that's on the right hand side a 20 core CPU with all cores running at 2.3 gigahertz in that same 125 watt TDP envelope. As you mentioned, network workloads tend to be power constrained. So gen on gen, you're getting a 1.56x improvement.
So that's pretty cool. Very cool. But you mentioned the new technology in there,
Speed Select. Speed Select technology.
You haven't turned that on yet.
No, but I'm going to show you that now. So when we deploy the Speed Select technology, now what you've got is 6 of our cores running the Open vSwitch workload locked at 2.7 gigahertz, and the rest of the cores have now been locked at 2.1 gigahertz. So you're still in that same 125 watt TDP envelope, but you're getting the increased performance on that demanding network workload, and you're all the way up to that 1.74, 1.75, 1.76x. So pretty cool improvement.
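The demo can be approximated with a simple budget model: hold the frequency-weighted sum over all cores fixed and trade frequency from the background cores to the priority cores. A toy sketch using the demo's numbers (the linear power assumption is a simplification for illustration, not the real SST-BF firmware interface):

```python
# Toy model of the Speed Select idea from the demo: within a fixed power
# budget, pin a few high-priority cores at a higher base frequency and
# let the remaining cores run lower. Frequencies and core counts are the
# demo's numbers; the linear "budget" is a simplifying assumption.

def split_frequencies(total_cores, hi_cores, uniform_ghz, hi_ghz):
    """Given N cores at a uniform frequency, trade frequency between a
    high-priority group and the rest so the frequency-weighted total
    (a stand-in for the power budget) is unchanged."""
    budget = total_cores * uniform_ghz                 # e.g. 20 * 2.3
    lo_cores = total_cores - hi_cores
    lo_ghz = (budget - hi_cores * hi_ghz) / lo_cores   # what's left over
    return lo_ghz

# 20 cores all at 2.3 GHz vs. 6 priority cores boosted to 2.7 GHz:
lo = split_frequencies(total_cores=20, hi_cores=6, uniform_ghz=2.3, hi_ghz=2.7)
print(f"remaining 14 cores run at about {lo:.2f} GHz")
```

With the demo's 20 cores, boosting 6 of them from 2.3 to 2.7 GHz leaves roughly 2.1 GHz for the other 14, matching the split shown on stage.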
That is super exciting. Thanks Sandra for doing that. Thank you.
All right.
See you. Thanks.
Okay. As you saw from that demonstration, the Speed Select technology helps the communication service providers think about how to deliver more performance for those NFV workloads. And we're well underway with trials with many of the comms service providers around the world. Let's hear from one of those providers, one of the largest mobile network providers in the world, Vodafone, about how they're delivering more value with 2nd generation Xeon Scalable.
Vodafone is one of the world's largest telecommunications providers. We operate in 25 countries. We've been moving aggressively to the cloud using x86 based applications and processors to enable much greater speed to market, but also so we can gain access to the data to really optimize our services and automate things end to end for our customers. Intel has been really important to us in helping us make this transformation journey. When we first conceptualized putting these really, really specialist applications on general purpose hardware, frankly, we weren't sure it was possible. So we've had a lot of work from Intel, but as well the entire ecosystem that Intel sponsors and populates around the x86 architecture.
Being an early adopter of the new Xeon Scalable processors is actually really important to us because we get to bring forward performance and economy as early as possible in the journey. Intel is integrating the thing that was maybe hardest, which was the core accelerators for the network workloads. Network workloads have always been one of the hardest things to actually inject into a general purpose processor. We have a very specific problem around low latency, high throughput, the most demanding things. So actually the new architecture that integrates the network functions to give us an acceleration, that injection in the architecture, is what really is going to unlock the growth of the network cloud for us. We're really excited about the transformation that we're driving in the network. What it enables us to do is bring a higher degree of automation, a much higher degree of reliability, but also to let us scale to a degree that exactly fits what our customers' needs are at that moment. And that ultimately will make us the best service provider on the planet.
So those new network optimized SKUs are a good example of how we're innovating to get data in and out of the CPU more efficiently, process data more efficiently. But what happens when the data isn't getting to the CPU fast enough? Thanks to the array of assets that we've been developing over the years at Intel, we're able to optimize more than just the CPU. We try to take a system level view of the problem. We try to remove performance bottlenecks wherever they exist.
And so what I want to do now is invite one of our partners to the stage, Twitter, to talk about how they unleash CPU performance through that system level approach, that system level optimization. So please join me in welcoming Matt Singer, Senior Staff Hardware Engineer at Twitter, to the stage. How are you doing, Matt? Hi, Navin. How are you?
Thanks for being here. It's great to be here. So, Matt, Twitter handles a ton of data every day. What kind of challenges does that pose for you guys as you think about your infrastructure?
That's right. There are hundreds of millions of tweets every day. And when users interact with those tweets, it turns into actually over a trillion events per day. And that's a lot of data. For example, if I were to tweet right now,
do you have time for a selfie? Sure. You're tweeting that right now? Tweeting that right now. Okay. That's cool. So
those events, if everybody in the audience right now is liking that tweet and retweeting that tweet, all those events are streaming into Hadoop. So Hadoop is an important part of storing all of those events and performing analytics on that data. Our Hadoop clusters collectively have more than an exabyte of physical storage. A typical Hadoop cluster can have over 100,000 hard disk drives, and that translates into 100 petabytes of logical storage in a cluster.
Wow. So that's a ton of data to process, analyze, store. What is your top challenge in scaling out that Hadoop cluster you talked about?
So, because of their affordability, hard drives are the workhorses of Twitter's clusters. But while hard drive capacities have increased over time, their IOPS have remained essentially flat. And that's resulted in a storage bottleneck. As you can see here, the typical flow in and out of a Hadoop server is composed of 2 parts: HDFS, which is the way that data is stored in Hadoop, and temporary data managed by YARN. Those two data flows often occur at the same time and they cause contention to access the hard drives.
This results in a performance bottleneck. To look at how we might overcome this bottleneck, we worked in collaboration with an Intel engineering team to run a series of experiments. It turns out that by selectively caching the YARN temporary data to a fast SSD, we're no longer competing for the same resources. So hard drive utilization dropped and Hadoop could process data faster.
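One blunt way to express the split described here is to point YARN's NodeManager scratch directories at the SSD while leaving HDFS data directories on the hard drives. The property names below are standard Hadoop; the mount paths are illustrative, and Twitter's actual setup used selective caching rather than a wholesale move:

```xml
<!-- yarn-site.xml: point NodeManager scratch space at a fast SSD so
     YARN temporary data stops contending with HDFS on the hard drives.
     Mount paths here are illustrative; the properties are stock Hadoop. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/mnt/ssd0/yarn/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/mnt/ssd0/yarn/logs</value>
</property>
<!-- HDFS data directories stay on the hard drives (hdfs-site.xml). -->
```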
So that's fantastic. What does that translate into in terms of your data center footprint?
Sure. Removing that storage IO bottleneck enables us to greatly reduce the number of racks in our cluster, which gives us a much smaller data center footprint. First, we achieved that by moving from 12 smaller to 8 larger hard drives per system, which reduces the number of hard drives in a cluster without negatively impacting performance. By removing the hard drive performance bottleneck, we could now take advantage of a lot more CPU horsepower. So we moved from 4 core processors to 24 core 2nd generation Intel Xeon Scalable Processors.
Well, the combination of the larger hard drives and higher core count processors means we can reduce the number of systems, hard drives and racks in each Hadoop cluster in our data center, which leads to reduced maintenance cost and much better energy efficiency. It's really great that we can have the same result for about 75% less energy consumption. Wow. That's awesome. We expect that caching temp data and bumping up processor core counts results in up to 50% faster run times and the increased density results in 30% lower TCO.
This sets us up to be ready for continued growth of our data, while still delivering the great experience our users expect.
So that's a great story, Matt. You added a fast SSD, you added smart caching, you were then able to go from 4 cores to 24 cores, boost performance and reduce cost. I guess at the end of the day, that's what matters the most. So congratulations on that. I'm glad we were able to work together to find a better solution.
And congrats on 2nd gen Xeon helping you guys out. So appreciate your time.
Thanks, Nivy.
Thanks for being here.
We really appreciate Intel's collaboration.
Thank you very much. Thank you.
So that's a great example of how optimized storage can result in better system performance and lower TCO. And that's why we continue to innovate with new storage products and innovative memory technologies. Today, we're disclosing 2 new storage products. The first is a new Optane data center SSD. This drive brings dual port capability for the most mission critical, 24/7 access kind of applications. And we're also launching the industry's first EDSFF compliant drive, a QLC NAND ruler drive, which brings a petabyte of storage into the very small ruler form factor that you saw earlier.
And that's going to enable a 20x storage rack consolidation when you compare it to the hard drives of today. And we're not stopping there. There are other problems to solve. 1 of the biggest problems the industry has faced for decades is that DRAM doesn't support the capacity that application developers need. It doesn't allow for persistence, where the data continues to be stored when the power goes out.
And NAND doesn't provide the speed that the data centric world we live in today demands. And so we've been at work on the invention of something brand new, something highly disruptive. And I'm excited today to be bringing that to the market: Optane data center persistent memory. This is a decade in the making for us. And we've reached this milestone not just for Intel, but for the entire industry, with thousands of Intel engineers working on this technology and deep industry engagement.
We're excited to be delivering Optane data center persistent memory together with the 2nd generation Xeon Scalable. Now this is a great example of something that we are uniquely positioned to deliver. This required us to invent a new Optane media on the memory side. It required us to reinvent the memory controller in the CPU. It required us to develop new firmware to make these pieces work together.
And we had to invest heavily in bringing together the software and solution optimization from the entire ecosystem. What I'd like to show you now is the largest memory module in the world. This is the Optane data center persistent memory module. I'm holding 512 gigabytes of memory in my hand. This is 2 to 4x what you can get with DRAM.
We expect system capacity in a server system to scale to 4.5 terabytes per socket, or 36 terabytes in an 8 socket system. Again, that's 3 times larger than what we were able to do with the 1st generation Xeon Scalable. And this larger capacity then allows us to unlock performance and reduce those bottlenecks that we talked about earlier. There's a couple of examples here on the slide: Redis virtualized instances, 8x more instances per system, while still meeting the sub millisecond SLAs that Redis commits to their customers; and SAP HANA delivering new levels of record database performance, 9,100,000,000 records in a single database using Optane persistent memory. Now, if that sheer capacity wasn't amazing enough, what may be the most interesting part of this technology is that we're going to break through memory economic bottlenecks.
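In App Direct mode, persistent memory is typically exposed to applications as memory-mapped files on a DAX filesystem: data is read and written with plain loads and stores, and it survives restarts with no reload step. A sketch of that programming model, using an ordinary temp file as a stand-in for a pmem mount (real deployments would map a path like /mnt/pmem and use a library such as PMDK for cache-flush guarantees):

```python
# Sketch of the load/store programming model persistent memory exposes.
# An ordinary temp file stands in for the DAX-mounted pmem region here.
import mmap, os, struct, tempfile

path = os.path.join(tempfile.mkdtemp(), "records.pmem")

# Create and size the "persistent" region.
with open(path, "wb") as f:
    f.truncate(4096)

# Writer: store a record directly through the memory mapping.
with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 4096)
    buf[0:8] = struct.pack("<Q", 9_100_000_000)  # record count, little-endian
    buf.flush()            # msync here; on pmem this would flush CPU caches
    buf.close()

# Reader (e.g. after a restart): the data is still there, no reload step.
with open(path, "rb") as f:
    buf = mmap.mmap(f.fileno(), 4096, access=mmap.ACCESS_READ)
    count = struct.unpack("<Q", buf[0:8])[0]
    buf.close()
print(count)  # 9100000000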
Remember that DRAM represents up to 60% of the total cost of a system. Optane is going to change the economics of memory. And our industry collaboration is what has fueled our work from day 1 here. We have been working with ISVs, OEMs, cloud service providers to bring the value to our customers. I'm excited today to bring on stage one of the partners that's been with us from day 1, and that is SAP.
We've worked together with SAP for over 20 years now. The last 6 years have been highly focused on delivering value from persistent memory to SAP HANA. So please join me in welcoming the Senior Vice President and Head of Database at SAP, Dirk Bozenoch to the stage.
Good to see you, Navin.
Thanks for being here. It wouldn't be right to launch the 2nd generation Xeon Scalable and Optane data center persistent memory without you guys. We valued working closely with you guys to bring the value of persistence to SAP HANA. So I'm
happy you're here. Thanks for being here. Yes. We've taken advantage of our early access to the technology to deliver the 1st major database platform optimized for persistent memory with our SAP HANA 2.3 release last year already. We are glad to see that persistent memory is now broadly available.
We really feel our HANA customers will be delighted with the increased memory capacity, lower TCO and improved business continuity that persistent memory enables. So Dirk, can you explain some of the early use case scenarios for persistent memory that you're seeing? Well, persistency, or nonvolatile memory, is the place to start. We are seeing data loading times at startup go from 50 minutes down to 4 minutes on a 6 terabyte system. We are also seeing customers delivering more efficiency through consolidation of multiple systems into one system.
And lastly, customers are innovating with disaster recovery, consolidation, system replication and use cases that need more resource intensive capacity, like our built in HANA machine learning and predictive analytics capability. So how have your customers responded to this value that you've been able to deliver so far? Yes.
We had a terrific early response from our customers. For example, Evonik, a global chemical company, is using the increased capacity offered by persistent memory to consolidate systems, reduce operating costs and improve their S/4HANA performance. Another example is an engagement we have with Geberit, a large German appliance manufacturer. We are seeing opportunities to further reduce complexity and consolidation opportunities going forward. I'm looking forward to sharing more of those use cases and stories as we arrive at the broad availability of persistent memory.
That's great. So how are you guys going to continue to push? How are you going to continue to innovate with persistent memory in the future? Yes. Well, to start with,
SAP HANA 2.4, the release really coming out this month, is further enhanced with more persistent memory features and benefits, and look for more innovation to be showcased at our SAPPHIRE conference in May in Orlando. Additionally, we plan to support 4.6 terabytes per processor, or the maximum allowable, which is up to 6 times larger than what the first generation of the Intel Xeon Scalable platform offered in a 4 socket configuration.
That's awesome. Dirk, thanks for sharing the momentum on SAP HANA. We look forward to hearing more at SAP SAPPHIRE next month. Thanks for being here.
Thanks. Appreciate it. So we're really proud of the ecosystem we've developed around Optane persistent memory. We have over 50 of the top ISVs, OEMs and cloud service providers all engaged and ready to go on Optane persistent memory. One of those companies, I handed off the first production Optane DIMM to them last year for early beta testing, and that was Google.
And I'm excited today to talk a little bit more about the momentum they've already been able to build since that point in August of last year. So please join me in welcoming the Google Vice President of Platforms, Bart Sano, to the stage.
Bart, come on up.
Thanks for being here. I'm delighted. So Bart, last summer, you and I had a historic moment. I handed you the very first Optane persistent memory module. What have you guys been doing with it?
Well, can you tell us how it's been going?
It would be a pleasure. So, as you know, Google Cloud strives to be the 1st provider of all of your new technology and to get it to market as fast as possible. And Optane is one of those perfect technologies for certain workloads, and those workloads are in memory database applications, as was talked about with SAP HANA. And so what we've been doing is testing with that technology, and SAP HANA has been great. SAP has been great about trying to take full advantage of all the capabilities for our customers.
And so we've been running that technology with some customers, with Colgate and ATB Financial, and they've been actively testing with the technology, and we've actually opened it up wider to Redis Labs and to Aerospike.
That's great. And why was that a specific focus for you guys as you were doing your testing?
Well, we believe that Google Cloud offers the best infrastructure for SAP applications in the enterprise space. And I think there are three reasons why, and you touched on some of them. One of them is the sheer capacity that this Optane technology allows for. You're able to have large data sets in memory, at in memory speeds. So that's one thing.
The data persistence is another very important thing. We are able to quickly recover from either reboots or whenever you're doing upgrades and things like that. So it's very fast. And finally, I think that Optane provides a credible alternative to DRAM. And I think that now customers don't have to balance against operational efficiency versus cost.
And so we're really happy with things that are going on. I think our customers are very happy right now and we can't wait to actually deploy this wider in our global infrastructure.
So this is a great example of how Google Cloud and Intel, through our alliance, are driving value for your customers and the GCP instances fueled by Intel technology, right, Bart?
That's right. I mean, this alliance has been a very productive alliance. And I think that back in 2017, when we introduced Skylake together as one of the first to market with that, it really demonstrated how this landscape is changing. That is to say, the cloud platform is basically the fastest platform for introducing this technology. And so, building upon that, last year in October we actually introduced the 2nd generation Xeon processor, which is also known as Cascade Lake.
And building upon that is what we offered at that time. And so I'm really excited today to announce that we're also offering 2 new offerings, one for compute optimized and one for memory optimized VMs. The compute optimized VM instances are powered by these new processors, and they provide over 40% improvement over all of our prior generations of VM instances. And so that's a huge offering.
And then secondly, the memory optimized VMs, also known as M2, offer the highest memory configuration for these processors, up to 12 terabytes of memory capacity and 416 vCPUs, otherwise known as virtual processors. And so these will be able to run virtually any scale up application. So we're really excited for all of this technology and really want to congratulate you on bringing it to market.
That's exciting news, Bart. Thank you for that and congratulations. Thanks for being a valued partner for us. I noticed something here. I don't know what that is, but do you want to tell people what that is?
So when you gave us that first DIMM, it was such a precious thing and it really meant
a lot to us. So
we actually have a little magical group that makes swag. That's what Google is known for, making swag. And so this little plaque says, Optane, 1st off the line and 1st in the cloud. And I promised to put it through its hardware qualifications. We finished all that.
I've used it as much as I could. So I kind of broke it.
So I
thought I'd return
it to you.
You still have to pay for it.
Oh, well, okay. Not if it's a no trouble found return. But anyway, no, in all seriousness, we actually have 2 of these plaques, one for your team, one for our team. We can go and sign our names on it, put our little logos on it. I really want to commemorate that time.
It's a very important time and it's a long journey that we're embarking on here to introduce a brand new memory technology and it's not every day that you do that. So I just wanted to make sure that you got some sort of plaque.
Well, thank you, Bart. I appreciate the swag. Thanks for your partnership. Thank you very much. Okay.
So it's been an exciting morning so far. What you've heard is that we're bringing forward the broadest data centric portfolio we've ever brought to market, all designed to help move data faster, to store more data and to process everything from the cloud to the edge. You heard about the 2nd generation Xeon advanced performance parts with up to 56 cores. You heard about our 2nd generation Xeon Scalable with DL Boost for AI acceleration. You heard about new optimizations for network function virtualization and network workloads.
You heard about Optane persistent memory and new Optane and NAND SSDs. You're going to hear more about the rest of the portfolio, from Ethernet solutions to FPGAs. So we're thrilled about the broad partner and customer support that you saw today, but we're just getting started. And so it's my pleasure now to continue with our data centric launch and to introduce our Vice President of the Data Center Group and General Manager for Xeon Products, Lisa Spelman, to the stage to keep going.
Thank you.
I didn't bring a plaque. I didn't know. That was cute. So thank you, Navin, and thank you all for being here. Now that we have covered our core data center workloads, I'd like to take a little trip with you to the edge.
So as data generation continues to scale and service delivery requirements grow and grow, more processing is going to happen out at the edge at the point of data creation or data collection. The build out of the industrial IoT edge and the network edge implementations will include tons of use cases across a bunch of different industries. We're going to see smart factories, stadiums, edge cloud service delivery. To give a sense of scale, by 2021, there will be 7,000,000 service delivery edges in play. And that's in addition to 39,000 core data centers, all running data centric workloads.
1 of the earliest network edge deployments to reach scale is content delivery networks. In less technical terms, that's what supports your binge watching of shows. Today, 97,000 hours of entertainment are streamed every minute by Netflix. I find that astounding. It's estimated that CDNs are responsible for greater than 60% of Internet traffic.
So this is obviously a really big workload. With growth at this speed, service providers and operators are looking for as much capacity and scale as possible to satisfy your relentless demand for content. Along with that, they need memory and new modern storage technologies to help push more and more out to the edge.
So let's take a look at how solutions with Xeon Scalable and with Optane data center persistent memory are going to help change and satisfy this need. We've been working with Qwilt, a CDN edge cloud developer, who is focused on driving the future of online content and application delivery. Video content has traditionally been held in traditional storage devices. Accessing that content can at times be slow or inefficient, and it leads to a bottleneck and also a not very good user experience at times. This limits the availability and possibility for these content providers to scale their ever growing catalog of content.
So let's see what happens when we increase the capacity and remove some of those bottlenecks. Okay. I'm going to show a demo here. Let me get it started. And what you're seeing up on the screen is the difference between edge services being delivered with DRAM alone and those also being delivered with Optane data center persistent memory.
You can see, as you move to streaming of sports games, the content is requested by the user. And in the DRAM only use case, it hasn't yet been pushed to the edge. The capacity isn't there to hold that full game of content. And so you see a worse user experience, with some jitter and some latency built in there. In contrast, with the large memory, you have the ability to hold that data closer to the processor and closer to the point of use. Again, you don't need to go back to the point of origin or the data center to get the content that you need.
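The DRAM-versus-large-memory contrast in the demo boils down to cache capacity at the edge: the more of the catalog the edge node can hold, the fewer requests fall back to the origin. A toy LRU model with made-up capacities and a made-up request stream:

```python
# Toy model of the CDN demo's point: a bigger memory tier at the edge
# holds more of the catalog, so fewer requests fall back to the origin.
# Capacities and the request stream are invented for illustration.
from collections import OrderedDict

class EdgeCache:
    """LRU cache standing in for the edge node's memory tier."""
    def __init__(self, capacity):
        self.capacity, self.store, self.origin_fetches = capacity, OrderedDict(), 0

    def get(self, content_id):
        if content_id in self.store:
            self.store.move_to_end(content_id)  # edge hit: low latency
        else:
            self.origin_fetches += 1            # miss: fetch from origin
            self.store[content_id] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used

requests = [i % 8 for i in range(200)]          # 8 popular titles, cyclic

small, large = EdgeCache(capacity=4), EdgeCache(capacity=8)
for cid in requests:
    small.get(cid)
    large.get(cid)
print(small.origin_fetches, large.origin_fetches)  # 200 8
```

With the cyclic access pattern, the undersized cache misses on every request while the cache that fits the whole catalog only fetches each title once, which is the jitter-free behavior shown on the Optane side of the demo.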
We see incredible potential for this solution in the CDN space, and leaders like Qwilt agree. They're planning to deploy this technology to help scale their edge services as well, and they aren't the only one. Comcast has done early testing with us, and they had this to say about the results that they're seeing. As you can see, they're seeing better access to their content, they're driving down their total cost of ownership and they're significantly increasing their performance, a great result. Now that we've gone through one of the biggest use cases of the network edge, I'd like to talk a little bit about the intelligent IoT edge.
Again, in every industry we see customers building out edge inference systems that consolidate vast amounts of data. And these can be complex data types. Again, you're seeing video, images, audio and voice, all happening at that point of content creation. Enabling better healthcare, smarter cities, automated traffic and parking, which I personally look forward to very much, improving retail services and inventory management, improving the quality and the safety in factories and manufacturing lines: these are all critical use cases that will drive this performance. They require low latency and they require data security, but they face a lot of current challenges, whether that's bandwidth constraints, the actual cost of the service implementation, or a lack of persistent and reliable connectivity. Our data centric portfolio has been built out to deliver many products to satisfy these use cases. We have our Intel Movidius vision processing units for AI edge inference. We have our Mobileye autonomous driving solutions.
We have our Atom processors. Specialized silicon is a core part of our strategy for satisfying these workloads, but a massive amount of edge services will be delivered on standard Xeon processors. In recognition of that, we are delivering 2nd generation Xeon Scalable processors with SKUs that are specifically configured for IoT use cases. You see extended temperature ranges, the ability to handle ruggedization and longer life. But they also contain, and this is really important, the full suite of capabilities of the architecture.
So they include DL Boost, which is going to be critical for that edge inference. It gives our customers a great way to deliver those services end to end. By some estimates, up to 50% of inference will be done at the edge over the next several years. In recognition of this mega trend and this shift in the workload, we've invested not just in the hardware, but in the software as well. An example of that is our open source OpenVINO toolkit, which is a suite of framework agnostic software tools that fundamentally drive up inference performance, whether you're working at the edge, in your edge cloud, or in the cloud.
And again, they work with all of the frameworks, so it's added performance on top of your baseline AI architecture. To make it real, I'd like to share an AI use case in healthcare with you today. The health and life sciences industry is leveraging AI to accelerate clinical workflows, improve the accuracy of diagnoses, which is so important, and reduce hospital costs in order to better serve our global economies and the citizens of the world. To share how we've worked together to optimize cardiac MRI segmentation models, it is my pleasure to welcome to the stage the Senior Director of MRI R and D Collaborations from Siemens Healthineers, Stuart Schmeets. Welcome, Stuart.
Thank you for coming.
Thank you, Lisa. It's a pleasure to be here. It's a very exciting day.
Yes, it is. It's a big day.
We all know that the effects of heart disease have a tremendous impact on our society. To put this into perspective, someone in the United States has a heart attack every 40 seconds. In fact, every minute someone dies from a cardiovascular related event. And that's not to mention the impact on the quality of life and the productivity of people who live with these diseases.
Those are amazing and scary statistics, honestly. So I wanted to ask, how is the reality of those statistics driving the work that you're doing at Siemens Healthineers?
Yes, absolutely. So advances in our MRI technology are already enabling access to important information in the diagnosis and monitoring of cardiovascular diseases. These advances are an excellent example of how we're expanding precision medicine and transforming care delivery. But there's a challenge. And that is, while the accuracy and the efficiency of our technology are improving, the exams, the volume of data, and the number of patients that have access are going way up.
And that's greatly increasing the burden on the clinicians that are having to evaluate this data.
Okay.
And that's one reason why at Siemens, we believe that digitalizing healthcare is so critically important.
It's the only way to keep up with that scale. And this data problem represents an opportunity really. So can you share with everyone what we've been working on together?
Gladly. Now through our research collaboration with Intel, we have demonstrated the potential for artificial intelligence at the edge, in the cardiology and radiology departments, at the exam console. And so we're able to bring near real time MRI imaging analysis to the exam evaluation. So let me start the demo real quick and I can show you what I mean.
Near real time is great. And I hear you're going to show us some data from your colleague here.
Yes, absolutely. And I never get tired of these images. This is an image of the beating heart, very exciting. And so what you're seeing in the demo here, on the left and the right, is the same picture of the heart. The green circle with the red center is the left ventricle, which pumps blood to the entire body. And in both examples, we have an algorithm that's segmenting and analyzing all of the images to produce the clinical results.
But what you'll see is that on the right, on your right, using 2nd generation Intel Xeon Scalable Processors with Intel Deep Learning Boost and the Intel Distribution of OpenVINO Toolkit, we're achieving over 200 frames per second. And what's cool about that is that it represents a 5.5 times increase in speed over Intel's previous generation processor.
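As a rough illustration of how such figures are derived (a hypothetical sketch, not Siemens' or Intel's actual benchmark harness), throughput and speedup reduce to simple timing arithmetic:

```python
import time

def measure_fps(infer_fn, frames, warmup=8):
    """Time an inference callable over a batch of frames; return frames/sec."""
    for f in frames[:warmup]:       # warm up caches before timing
        infer_fn(f)
    start = time.perf_counter()
    for f in frames:
        infer_fn(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

def speedup(new_fps, baseline_fps):
    """Relative speedup: 200 fps against a ~36 fps baseline is ~5.5x."""
    return new_fps / baseline_fps

# Using the figures quoted on stage: over 200 fps, 5.5x the prior generation.
print(round(speedup(200, 36.4), 1))  # 5.5
```

The baseline of ~36 fps here is simply back-calculated from the quoted 5.5x; only the 200 fps and 5.5x numbers come from the talk.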
Yes. It's impressive to see how technology can accelerate these patient outcomes, especially as I've waited for an MRI result myself. So thinking about it, near real time is a game changer.
Absolutely. With this rate of acceleration, Siemens can potentially develop solutions that will further reduce time intensive analysis, leading to faster results that can be analyzed almost in real time. And with that, doctors can make faster decisions about the care of their patients. It's like what we were talking about just previous to this. AI performance must keep pace with clinical workflows in the hospital environment.
But one of the things that we think about every day is while technology improves and while quality goes up, we can never stop or lose focus on keeping costs down. And that's why I'm excited that the DL Boost technology, I believe with that we can increase the efficiency and productivity of our MRI systems without the additional expense of specialized accelerator cards. And that's truly groundbreaking.
That's fantastic. So it's so exciting to see this vital work that the Siemens Healthineers team is doing and delivering not just to the industry, but to the world, making these healthcare solutions better and we're motivated and committed to work with you to continue to advance there. To put this in context for the audience, I don't know if you know this, but Siemens Medical Imaging Devices touch 240,000 patients every hour. So it's a huge scale.
Absolutely. And thank you so much, Lisa. We really appreciate Intel's collaboration in helping us bring the benefits of new technologies to the healthcare system and keeping our promise. What we've promised our customers is to work with them to expand precision medicine, transform care delivery and improve the patient experience. And to do that, we believe we have to digitalize healthcare.
Okay.
We'll do it together.
Absolutely. Thank you, Stu. I appreciate you coming out today. That was a great example from the Siemens team of an edge inference use case, but we recognize that the workload landscape is vast and wide and the demands are as well. So we've invested in some additional products to further accelerate our customers' data opportunities. We have our Intel Ethernet 800 series, we have our Intel Xeon D-1600 series and we have our Intel Agilex FPGAs, featuring a new brand.
Let's start with the FPGAs. So FPGAs are fundamentally a flexible-by-design compute resource. They deliver investment protection: 5G standards are continuing to evolve and will keep evolving over the next several years, and with FPGA implementations, you have the ability to have your hardware evolve with those changing standards. But they also have incredible value in many other data centric use cases where customers just need more performance, lower power and higher levels of integration to provide that ultimate agility and flexibility.
For this discussion, I'd like to invite to the stage our Vice President and General Manager of Product Marketing and our Programmable Solutions Group here at Intel, Patrick Dorsey. Please help me welcome him.
Hi, Patrick. Hi, Lisa.
Yes. Thank you for coming. All right.
So it's great to be here. I'm super excited. Thanks for having me out.
I know you've got some good stuff to share. So we've seen a lot of great traction. We were at Mobile World Congress this year on the current generation of FPGA products for 5G and that's been fantastic. But I think you have some new news to share with us about everything else we're doing.
I do. I have some great new news to share.
But one thing first. What we're seeing in the market is some of the broadest adoption of FPGAs that we've seen in years. I've been in the industry for 25 years and we're seeing, in terms of both the revenue and more importantly in terms of design wins, just real strength. And now today, we're accelerating that strength. Today, we're announcing a new brand, a new FPGA brand for a new class of FPGA. We call that brand the Agilex FPGA.
So a couple of key points, if I could.
Yes, I like it. Okay. So one, this is the first FPGA on Intel's 10 nanometer process technology. This is going to create incredible performance breakthroughs: a 40% increase in performance, 40% lower power, 40 teraflops of DSP performance. So we could go through a lot of metrics, but incredible performance.
Okay, fantastic.
The other thing we're doing is any to any integration. What we mean here is multi-die. We mean EMIB, or embedded multi-die interconnect bridge, which allows us to bring heterogeneous functionality right into one device. And then it's about any developer. So classically, you develop with an FPGA as a hardware developer.
We're bringing the software developer in with the full suite of Intel tools.
Okay, that's fantastic. So with all this innovation going into Agilex, what are the customers going to see? What does it mean for them?
Yes, so for customers, what you're seeing with FPGAs again is this broad adoption. You're seeing it in infrastructure, you're seeing it in networking, you're seeing it in storage. So all these applications are taking advantage of the inherent parallelism of an FPGA, which brings you performance and power advantages. What we're now seeing with Agilex and this any to any integration is workload convergence, being able to address many of these applications on a single Agilex platform and in many cases in a single device. That saves a lot of TCO, a lot of cost, for our customers.
Okay, that's fantastic. The acceleration of the diversity of workflows truly resonates. It's what we've been talking about all day. But I love Xeon. Can you tell me a little bit more about how these Agilex FPGAs are going to integrate with the data centric portfolio and our Xeon product?
Yes, of course. I love Xeon too. Okay. So integration across the Intel portfolio is our focus. I'll say this is the first FPGA 100 percent developed by Intel.
And when I say that, I mean the process technology, I mean the architecture and then, as you suggest, the connectivity. So the integration that we're doing, number 1, it starts with the developer, as I said before. So there are hardware and software developer capabilities like OpenVINO and a common developer experience not only for the FPGA, but for the rest of the Intel portfolio. And then if you look at the Optane DC persistent memory, our ability to pull that in and make that a natural part of the memory and storage hierarchy with the FPGA is critical. And then last, I'll say it's the first FPGA to fully take advantage of Compute Express Link, which is this coherent, low latency connectivity to the Xeon, which is going to take performance much, much higher.
So I think as you can see, very exciting for customers.
No, it's a great opportunity and it'll be interesting to see what those developers do once they realize they have access to memory through the FPGA and the CPU together using CXL. So lots of promise there.
I agree. It's going to be fun.
Yes. No, it's going to be great. So I just want to thank you for sharing all of this exciting news with us. And we're definitely looking forward to seeing the team's continued progress as you bring these to market and how we put them all together into solutions.
All right. All right.
Thanks, Patrick. Appreciate you coming. Okay. So moving on then. 4 years ago, you guys may remember, we announced our first Xeon D Processor, and I'm happy to announce today the newest member of the family, the Intel Xeon D-1600 Processor. Featuring a high density, highly integrated system on a chip design, the Xeon D Processor has unique capabilities that are really well suited for power and space constrained network, edge and storage environments.
It goes down to 27 watts, again for that low power use case, and has integrated Intel QuickAssist Technology that allows you to accelerate encryption. These are used for control plane applications, for security and for mid range storage devices, and they deliver the performance of Xeon and the efficiency of an SoC design. Let's set that one down there. Intel is also engineering innovation for network connectivity from the data center out to the edge. As one of the original inventors of Ethernet, did you know that?
We have been delivering foundational network adapters for over 35 years. With the delivery of our Intel Ethernet 800 series, I'll also show that to you here, we continue our long history of innovation. This foundational NIC supports port speeds of up to 100 gigabits per second. It has support for RDMA with both RoCE v2 and iWARP. We're also introducing an innovative feature called application device queues and I'll share a little more about that.
The application device queues or ADQ provides applications dedicated express lanes, if you will, for critical traffic. So you can choose which traffic is your highest priority. The result for that application is increased performance and reduced latency. And this isn't just a theoretical feature that we've invented here. We have application providers that are already doing the work to optimize for this feature.
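The "express lanes" idea can be sketched with a toy model (purely illustrative; real ADQ lives in the NIC hardware and kernel networking stack, not in application code like this):

```python
from collections import deque

class DeviceQueues:
    """Toy model of per-application 'express lanes': packets for a
    registered high-priority app get a dedicated queue that is always
    drained before the shared best-effort queue."""
    def __init__(self):
        self.express = {}            # app name -> dedicated queue
        self.best_effort = deque()   # everyone else shares this one

    def register_app(self, app):
        self.express[app] = deque()

    def enqueue(self, app, packet):
        # Unregistered apps fall through to the shared queue.
        self.express.get(app, self.best_effort).append(packet)

    def poll(self):
        # Express lanes are serviced first, then the shared queue.
        for q in self.express.values():
            if q:
                return q.popleft()
        return self.best_effort.popleft() if self.best_effort else None

dq = DeviceQueues()
dq.register_app("redis")
dq.enqueue("web", "p1")      # shared best-effort traffic
dq.enqueue("redis", "p2")    # prioritized traffic
print(dq.poll())  # p2 — the prioritized packet jumps the line
```

The point of the sketch is the scheduling policy, which is what reduces tail latency for the chosen application: its traffic never waits behind the shared queue.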
The example I'm showing up here is Redis, a leading open source database provider. They're seeing with their optimizations a 45% reduction in latency and a 30% increase in data throughput. So we're looking forward to not only the results that Redis is delivering with this Ethernet feature, but what all of our application providers are going to be able to do as they optimize for this advanced feature that we're introducing right now. You've heard several examples today of the benefit of optimizing software for new hardware, just like the one I shared. These real customer workloads represent modern hardware running on modernized code, which is a fantastic experience.
As someone who managed Intel IT infrastructure, I can assure you all that that is not always the reality that we face in the real world. Many companies need to address a sobering reality of their data center infrastructure that is aging and has hardware and software that are incapable of harnessing and providing insights to them in this data revolution. According to IDC, the average age of servers operating in data centers today is 5 years old and that's the average. The costs associated with aging hardware and software infrastructure can often exceed the cost of new acquisition of both. And you further complicate your environment when you move past optimal life cycles.
At the same time, it's estimated that the improved performance and agility of a refreshed server brings over $100,000 of additional business value when deployed. So 5 years ago, it's estimated that 9,000,000 servers were sold. So if you'll do a little math with me, 9,000,000 servers at $100,000 of business value per server means that we have the opportunity together to unleash at least $900,000,000,000 of untapped business value in 2019 alone. I'd say it's time for a refresh. So let's take a look at what some of that refresh performance will look like for customers.
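The arithmetic from the talk, written out (the $100,000-per-server business value is the estimate quoted above, not a measured figure):

```python
def untapped_value(servers_sold, value_per_server):
    """Back-of-the-envelope from the talk: servers hitting the 5-year
    refresh window times the estimated business value per refresh."""
    return servers_sold * value_per_server

# 9,000,000 servers sold 5 years ago x $100,000 of value per refresh.
total = untapped_value(9_000_000, 100_000)
print(f"${total:,}")  # $900,000,000,000
```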
For those 5 year old servers being refreshed to the latest generation of Xeon Scalable, you're going to see greater density, lower cost and just increased performance, plain and simple, across the broadest range of workloads for network, storage and compute use cases. And this is not just with the Platinum SKUs. As Navin talked about, customers using the Gold and Silver products will see a great combination of performance and value to power their general purpose workloads. At Intel, we're working with partners across the industry to drive IT modernization because we think it's so important. There is no better partner for us in the industry than Microsoft.
To discuss more about how we're working together to accelerate that IT modernization, I would like to invite to the stage the General Manager of Azure Infrastructure at Microsoft, Arvind Shah.
Hi, Arvind.
Thank you for coming.
Thank you for having me. Yes.
It's great.
This is a great reminder, this lineup.
Yes.
We've been working for decades on delivering solutions to customers, PCs, servers, cloud. So just great to be here.
Yes, we got the whole thing. So before we go into some of today's exciting announcements, I was wondering if you could talk to us and share a little bit about how Microsoft is viewing the world of enterprises today?
Yes, I mean, there's no doubt the cloud is growing. Yes. I mean, people are moving workloads there. We're seeing great success. Customers are innovating with Azure.
Yes.
But we're also seeing loud and clear that customers also want hybrid options. So they want to run workloads in their data center and in emerging categories like edge computing. They want to have applications near users, near data, so there's no latency and they can do inferencing at the edge. So some of the use cases we talked about before. And our goal is actually to make sure that we deliver the most comprehensive and consistent experience.
So even you talked about aging infrastructure.
I did.
So customers who have Windows Server 2008 or SQL Server 2008, as they come to end of life, Intel and Microsoft have options for customers. They can either move or migrate to Azure and get free extended security updates, and it's 5 times more cost effective than any other cloud, or customers can modernize their infrastructure with the latest software and hardware. So really excited about giving all these options to customers.
I think that you're hitting on a key point about what our customers are really seeking: they need the ability and the performance of the cloud, but they do want choice. They want the ability to work in the public cloud. They want their hybrid multi cloud opportunity. So you have to address the whole suite to really help them.
That's right.
Okay.
And one of the unique things that we did with Intel is actually bring Azure Stack to market. So in 2017, we delivered this integrated system. And today, I'm happy to say we have customers in more than 60 countries worldwide across verticals.
That's right.
And so people can literally bring well, not literally, but bring Azure into their data center and bring Azure to the edge. And so they can build native applications, manage native applications consistent with Azure. And so customers like KPMG, Airbus and even the government of Malta are using Azure Stack today. And last week, along with Azure Stack, we made available now Databox Edge.
At the Edge, okay.
The Databox Edge enables AI at the Edge and is powered by Intel FPGA.
Yes. No, that's great. It's been a great partnership. And you made some additional announcements this last week. You've been busy.
I have been very busy. Again, working with Intel, one of the things that customers tell us is, hey, Microsoft, that's great. We love the cloud. We love Azure Stack. But boy, I have some traditional applications.
And I'd like to modernize my infrastructure. So last week, we announced Azure Stack HCI. Azure Stack HCI is modern hyperconverged infrastructure, which can lower customers' costs and improve performance. Customers love it because it actually helps them with cost and performance. They can use their existing skills and still take advantage of modern infrastructure.
And the beautiful thing about Azure Stack HCI is it's the same foundation as Azure Stack. And it also allows customers to use Azure services for their virtualized traditional workloads. So Azure Backup, Azure Site Recovery, Azure Monitoring, you can do all of that. And of course, people still want Azure Stack and Azure, but now you also have Azure Stack HCI for those traditional workloads.
Okay. So again, you're solving for that choice that the customers
Comprehensive, right. Yes.
So in our collaboration, we've been working, getting ready for the launch and all the work that goes into it. So we've been working on the 2nd generation Xeon Scalables, having you guys test them. We've had Optane data center persistent memory in your team's hands. So can you share some of the results that you guys are already seeing?
Yes, I mean we're seeing some amazing results. And just to give you some examples, with Windows Server and Intel Optane DC persistent memory, we're seeing double the IOPS at 13,700,000 IOPS while using 25% fewer servers.
Fewer servers to deliver it.
So again, the performance cost value is incredible and it's industry leading. And with SQL Server, with persistent memory, with your technology, we're seeing much faster insights. And on the AI front, which we know is one of the megatrends, we're both investing in the ONNX Runtime. And so we're able to see a 3.4x increase using ONNX Runtime with Intel DL Boost, which I think Navin talked about before.
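The two Windows Server figures compound: doubling total IOPS on 25% fewer servers implies roughly 2.7x the IOPS per server. A quick sketch of that arithmetic:

```python
def per_server_gain(iops_ratio, server_ratio):
    """Combine the quoted figures: total IOPS ratio divided by the
    server-count ratio gives the per-server improvement."""
    return iops_ratio / server_ratio

# 2x the IOPS on 0.75x the servers -> ~2.67x IOPS per server.
print(round(per_server_gain(2.0, 0.75), 2))  # 2.67
```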
Yes. No, no, it's good. It's always impressive to see the collaboration across so many of the technical domains. And I think it's just great partnership that helps our customers in their data centers, in their hybrid capabilities. So let's move to some of the innovation you're doing actually in your cloud service.
So what can you share about what's new and what's happening?
Yes. And one of the things that we've been working with Intel on is how do we enable customers to bring over some of their most complex workloads. And so examples are how we worked with Intel and Cray to bring a bare metal supercomputer into Azure. So imagine just being able to have Cray compute in the cloud. So that's super unique. The other thing we've uniquely done with Intel is work with Intel Security to offer the industry's first security enclaves through Azure confidential computing.
There are some workloads, some algorithms, that are super sensitive in certain industries. And now you can run them confidently in Azure, and again, an industry first there as well.
That's fantastic.
And in terms of other ways we're taking advantage of 2nd generation Intel Xeon Scalable Processors and Intel Optane DC. You did that so well. That was fantastic. I want to say Cascade Lake. I'm dying to say Cascade Lake.
Some announcements that I'm actually excited to make, and you guys are hearing it for the first time, is that we are delivering some of the largest SAP infrastructure, which is our bare metal instance using some of the latest technology. And again, customers can deploy SAP and get up to 24 terabytes of memory. And so that's just fantastic. And in addition to supporting SAP on our bare metal instances, we're also refreshing a lot of our virtual machine families. So Azure Fsv2 virtual machines, which are compute intensive, good for web and application servers, analytics and gaming.
We will be refreshing those with the latest technology. Also Dv3, which is our virtual machine lineup for enterprise class production workloads, will get a refresh, as well as Ev3, which is great for large in memory business critical applications. So excited for all that to come later this year.
It's a lot. So I'm excited that we are going to work together on accelerating adoption of all these new technologies and just the innovation that we've been working on together. So thank you so much for coming and sharing that with us.
Thank you, Lisa. Thank you, everyone.
Bye, Arvind. Okay, that's a lot going on. So Arvind touched on this, but of course, you guys, we all know that pure workload performance is not the only reason that IT organizations consider a refresh of aging data center infrastructure. Security has become a focus for every customer in every industry that we work with, and it's our focus as well. We're deeply committed to driving security innovation into every generation.
Navin already talked about the updates we're providing in the CPU architecture to enhance the side channel mitigations and protections and minimize their performance impacts. In addition, we're delivering data protection. And one of our recent announcements is an Intel SGX card. That's what Arvind was just talking to you about; it's the base technology for that confidential computing. This is a technology that allows users to create secure memory enclaves for their most sensitive data.
Having this in an add-in card form factor allows customers to pair it with their existing and their new Xeon Scalable infrastructure for even broader deployments. New today, we're also announcing our Intel Security Libraries, a set of open source software libraries that expose and enable Intel's hardware based security features. Again, putting it in hardware isn't the final step. We have to do that industry optimization to make sure that software can take advantage of all of that hardware goodness and investment.
These are modular and they have a consistent interface, which allows developers to become more proficient and efficient in utilizing that technology. And it makes it just fundamentally easier for them to deploy hardware based security. Finally, we're taking all of this innovation and we're working with industry leaders to deliver optimized solutions. To share news about one of our newest security solutions, I am going to invite to the stage the Vice President and General Manager of Digital Transformation and Scale Solutions at Intel, my friend, Lisa Davis. Hi.
How are you? Hi. Thanks for coming. My pleasure. Yes.
So absolutely. And we all know that 2 Lisas is better than 1. Definitely. Yes. Okay.
So I heard you had some exciting news to share with us.
Yes, Lisa. We are so excited about today. As you know, security is the number one CIO imperative. It is.
Right? In today's environment, the security exploits are increasing and they're growing in complexity. As a former CIO, we approached security very much by layering in software controls from the perimeter down to the server. The problem with that is that the controls don't always integrate. And frankly, it puts CIOs and CSOs in what I would call a reactive posture. Yes. So we've got 2 former IT leaders up here.
Yes. Yes. This is good.
I like it. Great. Okay. So we are so excited to be partnering with Lockheed Martin and HPE to deliver a hardened virtualization platform for enhanced data center security. This solution uses a rich set of Intel technologies that are now available in our 2nd generation Intel Xeon Scalable platform.
This is a game changer, very exciting. We're now establishing a hardware root of trust from boot to runtime, putting the CIO or CSO back into a proactive position. That's fantastic.
And I think it's just a great example of how our partners have used the foundational Intel hardware technologies and their software innovation to deliver protections for the most sensitive workloads in their environment. So, can you show it to us in action?
Yes, let's see it. Okay.
So we are actually over here. We have an upcoming HPE DL380 server featuring our new 2nd generation Intel Xeon Scalable Processor, okay? Okay. So I'm going to click the screen and essentially we're looking at a single server with full stack protection with a VM running. These technologies isolate the cores, the cache and the memory, protecting the VMs from potential attacks.
A common challenge in the data center is not only the time it takes to patch the environments, which we know is very cumbersome, but also being aware of new or unknown vulnerabilities. So what we're going to do is launch a known zero-day attack against this VM. And that's an attack for which the VM has not yet been patched.
Is this your first day as a black hat? Is this the first time you've ever attacked anything?
No. You're supposed to say yes. Okay.
All right. Okay, sounds good. Let's see what happens. I'm not pushing the button.
Okay. I'm pushing the button. Okay. So here we see the attack on the VM and if it were successful it would be capable of causing the VM to become completely unresponsive. We've completely blocked the attack and protected the data.
Not only is the VM still live, but as you can see by the blue bar, the performance is unaffected. And the log files show us capturing the attack. Yes. And for those of you that don't spend all day just reading through log files for fun, you can see we tried to highlight it there, but where you get the exit reason equals 0, that's actually showing the unsuccessful attack. It didn't get in and the VM, you heard it, it's still running. Absolutely. Okay.
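Scanning logs for that "exit reason equals 0" marker is straightforward; a minimal sketch, assuming a hypothetical log format (the real hypervisor log lines will differ):

```python
import re

# Hypothetical log lines for illustration only; not the actual demo output.
LOG = """\
vmx: guest entry ok
vmx: violation detected, exit reason=0, action=blocked
vmx: guest resumed
"""

def blocked_attacks(log_text):
    """Return the lines reporting exit reason 0, i.e. a blocked attack."""
    return [ln for ln in log_text.splitlines()
            if re.search(r"exit reason=0\b", ln)]

print(len(blocked_attacks(LOG)))  # 1
```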
Great. So now our users can deploy this out of the box solution from our partners at HPE.
Yes. That's awesome. And thank you so much for coming and sharing that with us.
Thanks for having me.
All right. Okay. What Lisa just shared, I think, is a great example of solution innovation that delivers real value for our customers. We believe that as the complexity of IT grows, a solutions focus is required and increases in importance. In a recent Forrester study of over 1,400 global IT executives, pre configured and verified IT infrastructure was identified as a key strategy for addressing that solution complexity.
More than 45% expected or had already experienced both better system performance and faster time to value by using a solution reference design. At Intel, our work is not complete until solutions are consumable by customers and consumable at scale. The path to consumability is delivery of these pre configured and verified configurations to the whole industry. Our answer to this challenge is Intel Select Solutions. We have partnered over the years with a deep investment with the industry as we collectively deliver workload optimization, ideal configurations and that pre verified performance.
This also leaves opportunity and creates opportunity for system differentiation with solution provider enhancements. First introduced in the middle of 2017, we've seen incredible momentum with partners in the market and we're delivering over 50 verified solutions today. These select solutions are helping customers speed solution evaluation and deployment. Today, I'm happy to introduce our portfolio of 21 additional select solutions. Highlights of the portfolio that we're launching today and you see up here include new SAP HANA and Microsoft SQL with Windows Server and Visual Cloud Delivery Networks.
Those will be great examples benefiting from not just Xeon but our Intel SSDs and our Optane data center persistent memory. In addition, the Lockheed hardened security solution that we just saw demonstrated with Lisa, as well as new solutions for workloads such as network function virtualization infrastructure, artificial intelligence inferencing to take advantage of that DL Boost, and VMware's vSAN storage. One of the solution partners that has been with us since the beginning is VMware. To learn more about this long standing technical engagement, I'd like to invite to the stage the Vice President of Engineering for the Storage and Availability Business Unit at VMware, Esmerari. All right.
Hi, Smurari. Thank you for coming.
It's great to be with you today on such a significant day for Intel Technology Innovation.
It is. It's a significant day for both of us. So it's been so great working with your team leading up to this launch. Your engineering crew and ours have been working across a bunch of innovation targets. Maybe you can share a little.
Thanks, Lisa. This collaboration, of course, reflects a history of relationship that goes back to the early days of VMware, almost 20 years ago. Our company was founded on the premise of maximizing the delivery of IT value to customers via virtualization to data centers, and our focus has evolved to the delivery of multi cloud solutions. But one thing has remained constant over the years, which is our deep commitment to engineering between the two companies.
I think that's fantastic. Today, we've continued to work together and one of the big engineering collaborations we've been focused on is that redefinition of the memory and storage hierarchy.
That's right. We've worked together to take advantage of your latest technologies to drive optimization in ESXi and in particular vSAN, our fastest growing storage solution with over 19,000 customers. This includes not only our traditional work on feature differentiation and performance, but also working together on taking advantage of Intel Optane DC persistent memory, now in ESXi and vSAN.
Yes, we took it to a new level.
That's right. We've also introduced new storage architectures utilizing Intel SSD innovations.
No, thank you for that collaboration across the whole of the portfolio. It's a lot. And all that engineering work that your teams have done to deliver these vSAN solutions has led to our new vSAN Select Solution that takes advantage of that memory capacity that's now available and wasn't before. As we collectively talk to our shared customers about how to use vSAN, we hear an emerging trend around the growth in mission critical VM requirements and also in larger VM sizes. So it's an exciting area for our teams to co engineer on behalf of customers.
So I'd like it if you would do the honor of sharing the results of some of that joint engineering work.
So at VMware, we're focused on ensuring that our customers can rapidly deploy the technologies they need to fuel their business in an efficient manner. With this Select configuration in a 4 node vSAN cluster, we are seeing a minimum of 25% higher VM density on large memory configurations. And thanks to Intel Optane DC persistent memory, we are able to demonstrate up to a 30% cost reduction per VM. I'm looking forward to working together with the industry to deliver these Select configurations and ramp deployments of vSAN, Xeon and Optane.
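To see how higher density alone moves per-VM cost (illustrative numbers, not VMware's pricing): hosting 25% more VMs on a cluster of roughly the same cost cuts per-VM cost by about 20%; the quoted 30% figure also reflects additional savings from the Optane-based memory capacity itself.

```python
def cost_per_vm(cluster_cost, vm_count):
    """Cluster cost amortized over the VMs it hosts."""
    return cluster_cost / vm_count

# Hypothetical cluster cost, same before and after; 25% more VMs after.
baseline = cost_per_vm(100_000, 100)
denser = cost_per_vm(100_000, 125)
print(round(1 - denser / baseline, 2))  # 0.2 — a 20% per-VM cost reduction
```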
Okay. I think it's going to be great. With that total cost of ownership going down and the performance going up, I think the ramp is not really going to be a problem. Thank you so much for coming back.
Thank you.
Great to be here.
Appreciate it. As you saw, Select Solutions come to life through partner delivery of solutions in the market. We're delighted to feature 35 partners today, 10 of which are new to the program since we first announced it. They've invested their own engineering resources to refine the configurations, benchmark the solutions that we're providing and ensure that we deliver the very best user experience. I want to thank them for their technical work and we look forward to working with them in 2019.
I thought it would be fun to see one of these in action. So I picked a workload, a new and emerging one, where a solution configuration can add a lot of value and save a lot of time: blockchain. It's a rapidly growing workload and businesses are set to spend $2,900,000,000 on blockchain in 2019. Hyperledger Fabric is a blockchain framework that offers a unique approach to consensus that enables performance at scale while also preserving privacy, which is so critical to customers.
In order to best show that performance at scale, Intel has been collaborating with IBM to optimize the latest 1.4 version of Hyperledger Fabric for Intel's next-generation Xeon Scalable. And we're seeing some amazing results. Okay, so let's get this thing started here. If I can get it.
Okay, there we go. So it was important to get it started. This is a preview of our Select Solution delivered and running on a Lenovo ThinkSystem SR650. We have been able to achieve a blazing 1,200 transactions per second with a combination of both hardware and software improvements in this generation. As you can see in this demo, we're hitting a 5.1x improvement over the 1st generation, again just showing the power of combined hardware and software system innovation to deliver that increasing performance.
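Taking the two numbers quoted in the demo at face value, the 5.1x speedup implies a first-generation baseline of roughly 235 transactions per second. A quick sanity check of that arithmetic (a derived figure, not one measured on stage):

```python
new_tps = 1200   # transactions/sec quoted for 2nd Gen Xeon Scalable
speedup = 5.1    # quoted improvement over the 1st generation

baseline_tps = new_tps / speedup
print(f"implied 1st-gen throughput: {baseline_tps:.0f} TPS")  # ~235
```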
Available later this year, these new levels of throughput will offer corporations an opportunity to deliver excellent performance while saving precious IT resources and talent from having to configure their own solutions. Lenovo has also certified a number of Intel Select Solutions for Microsoft SQL, including a 200 terabyte SQL enterprise data warehousing solution. That's big. Now we're nearing the end of my talk, but the day wouldn't be complete for me if I didn't have the opportunity to show you one last thing, some big iron. Where is the fun if we skip that?
On stage, we're showcasing the refreshed Lenovo ThinkSystem here. It's an SR950 8-socket server supporting 2nd Generation Xeon Scalable and our Optane DC persistent memory, and it's packed with 24 terabytes of that memory. It's a very dense 4U rack form factor, packing absolute maximum performance into a single chassis. So let's take a look at the system in action.
So what we're going to show here is response times of queries using an open source database tool, HammerDB. This is a proxy for what a power user would run against a large data warehouse. And when I say large, I mean it: this is 30 terabytes of raw data being ingested and consumed here. The system it's being compared to is an earlier-generation Lenovo 8-socket platform running on the E7 v3 family, so a couple of years old.
What you start to see here is that the query run time drops to about 120 seconds, versus 275 seconds for the older system, leading to a 56% reduction in run time and a 129% increase in query throughput. This has a double benefit for IT: it's lower total cost of ownership, and on top of that, almost more importantly, it's more effective use of your data scientists' time, which is again an incredibly precious skill set. Okay, we've done it. We have discussed a tremendous amount of Intel and industry innovation to address this data opportunity that our customers face.
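The reduction and throughput percentages follow directly from the two timings quoted in the demo; a quick check of the arithmetic:

```python
old_runtime = 275.0  # seconds per query on the prior-gen E7 v3 system (quoted)
new_runtime = 120.0  # seconds per query on the refreshed system (quoted)

runtime_reduction = 1 - new_runtime / old_runtime    # less time per query
throughput_increase = old_runtime / new_runtime - 1  # more queries per hour

print(f"run-time reduction:  {runtime_reduction:.0%}")    # ~56%
print(f"throughput increase: {throughput_increase:.0%}")  # ~129%
```

Note that a 56% cut in run time more than doubles throughput, because throughput scales with the reciprocal of run time.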
There are a few things I want to leave you with today. First is our enhanced data-centric portfolio, an unmatched portfolio that services workloads from the edge to the cloud and back again. Second, we have delivered our 2nd Generation Intel Xeon Scalable processor with DL Boost. It delivers inference leadership that our customers can count on, and you saw so many of them already delivering results with it today at launch.
Third, we've delivered network-optimized products to further cement Intel's technology leadership across the industry. And finally, Intel Optane Data Center Persistent Memory, a product I care so deeply about that I'm even wearing Optane DC persistent memory earrings. Navin held up the 512 gigabyte module; mine's a little smaller, but that's okay. These products are going to break through performance barriers that have faced the industry and application developers for as long as any of us have been working in the industry. Together with the industry leaders in the room, we have this opportunity to unleash all the power of that data, all the capabilities, all the insights and all the