Intel Corporation (INTC)

Investor Update

Sep 26, 2019

Speaker 1

Ladies and gentlemen, please welcome Senior Vice President, Intel Corporation, Rob Crooke.

Speaker 2

Well, good morning and thank you for joining us, all of you, here in the lovely JW in Seoul, both those who drove across town and those of you who flew together with us to get here today. We do indeed, as MS mentioned, thank you, have a packed agenda for you today. We have brought some of our best and brightest product people and technologists to go through the next level of detail in these technologies with you, as well as provide some additional disclosures. And I've brought along our CEO and some of our key business unit leaders, my coworkers on staff, in video form to share their view on what's happening in memory and storage and how it fits into our platform strategy.

And I will personally give you a high level view of how we view memory and storage, along with some increased disclosures on what it is we're doing, and connect it to our platform strategy. Now we all know that the rapid growth of data, and the potential to improve the world with the insights that come from it, have put us at an exciting moment in time for the industry overall. And that's especially true for us at Intel, because we're in the process of transitioning from a PC centric company to a data centric company to better solve the biggest challenges facing the industry. Now we've been through a few transitions like this before: from a DRAM company to a microprocessor company, from a microprocessor company to a PC company. And I personally have been at Intel 30 years.

I've been in the industry longer than that, and I've personally had the opportunity to experience some of those transitions. And they're pretty exciting to be part of. They can be anxious moments because there's uncertainty. In hindsight it looks very obvious that we should do these things, but in the moment they're super exciting.

But for those who come with us and look forward at the potential of what we can do here, it's tremendously rewarding. When we made the transition from a microprocessor company to a PC centric company, we connected a billion people around the world through the Internet and fundamentally changed society. And those that participated together with us as an industry saw tremendous growth, along with impacting the world around us. And now we have a new opportunity to change the world as we help our customers unleash the value in the oceans of data coming at them. Now the world's ability to generate data, and the insatiable appetite for the insight that comes from that data, continue to grow, and we all know that.

Whether it's captured in the data center or increasingly on the edge by billions of connected devices, we will need more connectivity, more storage capacity and more analytic horsepower to deliver on that insight. Compounding this, there's this increasing need and desire for the value and the insight that comes from that in real time at high performance. And yes, that means we will need more computational capability, and that's great for us as a microprocessor vendor and a computing vendor, but it also means we will need faster access to bigger data sets and the memory and storage technologies that are needed to drive that computational need to advance. Now we see this data centric era as emerging at the intersection of 3 technology megatrends: the shift to the cloud, the rapid emergence of AI and the cloudification of the network. Now the scale and the efficiency of the cloud architecture that started with the cloud service providers or the hyperscalers was built on Intel architecture and innovations like Intel Virtualization Technology.

Now it also drove the penetration of NAND based nonvolatile memory into the data center storage hierarchy to feed those many virtual machines on the cloud platform. And you saw some demonstrations outside here. Now that cloud architecture continues to advance at a rapid pace with optimized computing solutions for storage centric and computing centric platforms. AI will give us new tools and insights to generate economic value from that vast amount of data, today and into tomorrow. And finally, the concepts and technologies that created the cloud are now transforming the network, the cloudification of the network, allowing it to flex and scale, moving more computing to the edge, closer to where the data is created and consumed, to alleviate network congestion, reduce delays and deliver real time insight at the edge.

Now Intel is taking a holistic approach to delivering on those platform capabilities. Our investments in technologies across the 6 pillars of innovation in computer architecture will support the data growth driven by cloud computing, artificial intelligence and network cloudification. And my good friend and coworker, Raja Koduri, is Intel's Chief Architect. He's been clear about the 6 pillars that are driving growth for all of computing. And we have significant investments in roadmaps planned for all 6 of these pillars, one of which is memory and storage, which is our focus today, obviously.

But before we dig into memory and storage, let's hear from Raja how Intel is changing the fabric of computing with those 6 pillars.

Speaker 3

It's an incredibly exciting time to be an architect in the industry, even more exciting to be at Intel where we have access to all the key technology pillars: process and packaging; scalar, vector, matrix and spatial architectures; all levels of memory hierarchy; interconnect; security; and software. Today, in the world we live in, we are generating data at a faster rate than our ability to analyze, understand, transmit, secure and reconstruct it in real time. There is immense demand for architectures that scale up exponentially. Moore's Law continues to deliver exponential growth in compute capability. We have leveraged Moore's Law across multiple architectures like CPU, GPU, FPGA and other accelerators to deliver better than exponential compute to the world.

However, the memory bandwidth that's required to feed the compute only grew at a linear rate. We have been working around the memory wall for years with innovations in caches and other techniques. But we have reached a point where we need to do disruptive innovations to keep up with the compute and data demands. At Intel, we started this disruption by looking at the memory hierarchy. We saw opportunities to increase the bandwidth by 10x or reduce the latency by 10x.

These are disruptive changes. We have a roadmap to leadership performance and capacity at all levels of memory hierarchy, coupled with an architecture to deploy scalable software and hardware in a very ecosystem friendly way. We are just at the beginning of a 10 year journey. Optane is already demonstrating the potential across many, many workloads. And you'll see many more workloads taking advantage of Optane soon.

Thank you.

Speaker 2

Yes. As Raja said, the world is overwhelmed with oceans of data. We all know this. IDC's Data Age and Global DataSphere studies estimate that there'll be 175 zettabytes of data by 2025. That's exponential growth on a massive scale.

But more interestingly, IoT devices are projected to create 90 zettabytes of that data by 2025, meaning that devices are projected to drive more data than people. And by 2025, only about 15%, which is still a big number, will be stored on endpoint devices. The rest, though, will be in data centers, 50% stored in the public cloud and the rest of that in enterprise. And while it's growing at an amazing rate, we're not really unleashing the value of it. Only a small amount of the world's data is being analyzed.

And we need that disruption that Raja was talking about in order to get access to that data. And you can visualize that data growth as a collection of spheres, each representing a core data center with its own massive ocean of data. And we have to innovate with architecture and technology in order to store more, move faster and process everything, so that we unleash the value of this data. And I've brought along Navin Shenoy in video form to tell us about the data center strategy and how memory and storage fits into it.

Speaker 1

Data is defining the future of our industry, with over half the world's data being created in the past 2 years alone. We've reached an inflection point where entire industries are being reshaped by leveraging data. However, less than 2% of the world's data has been analyzed, leaving a great untapped opportunity still ahead. Our goal at Intel is to unleash the value of the massive amounts of data in the world by helping our customers move data faster, store more data and process all of that data around the globe. In a world of increasing amounts of data, the standard approach to storing and accessing data is insufficient given the limited scalability of legacy hard drives, the performance limitations of traditional SSDs and the high cost and limited capacity of DRAM.

We're investing in cutting edge memory and storage technologies to address these challenges head on. Intel 3D NAND SSDs provide a foundation for efficient and scalable storage. Intel Optane SSDs deliver up to 40x lower latency and much higher endurance than NAND SSDs, helping to break the storage bottleneck. Intel Optane persistent memory is an entirely new class of memory for data center system architecture, providing increased memory capacity and persistence to data centers around the globe. Thanks to our full data centric product portfolio, we're able to innovate at the system level to unleash more value than we could with any one product by itself.

On hyperconverged infrastructure storage, for example, we can increase storage capacity by 1.9x per node with nearly 70% better response times at 10% system savings using the latest Xeon scalable processor with Optane SSDs for caching and QLC SSDs for storage. Intel Optane data center persistent memory is another great example of system level innovation that we're uniquely positioned to deliver. And I'm excited to say that we're already seeing incredible momentum on Optane persistent memory. We have over 200 proof of concept programs running with customers worldwide and over 300 more in the pipeline. Some recent examples of customer success.

Baidu was able to lower their total cost of ownership while delivering more personalized search results to its users. Verizon Media deployed Optane data center persistent memory in their content delivery Hadoop clusters, delivering faster system responsiveness and lower TCO. We also announced a strategic collaboration with VMware to increase VM density and throughput with Optane persistent memory. The data economy is a once in a decade opportunity to transform computing, networking, and storage and memory. Together with my colleagues at Intel, I'm energized to help our customers unleash the power of data.

Speaker 2

Yes. He's got a lot of energy going there. Thank you, Navin. And as he said, this is a once in a decade opportunity. The confluence of technology innovation and the need to derive value from that data is reshaping the workloads and architectures that drive business today.

Artificial intelligence is driving transformational changes across a broad swath of industries. We're seeing the integration of new classes of workloads based on the cloudification of the network. And we're seeing data centers look for increased levels of densification and improved agility while lowering cost. And what does all that mean? It means that workloads are not homogeneous, and we must increasingly customize store, move and process architectures based on performance, quality of service, new capabilities and efficiency requirements.

We need an integrated platform view across CPU, storage and networking to optimize for these various workloads. Intel is driving innovations across our data centric portfolio, like Navin mentioned, to unleash the value of this data. Now if we could put all of that data right next to the CPU in the Level 1 cache, we would. But that's not realistic. It's not technologically possible.

So memory and storage has historically been built in a hierarchical way. This has been true ever since John von Neumann first described the stored program computer in the 1940s. Today, closest to the CPU there's the CPU cache, then DRAM, then SSDs, then hard disks. And the basic rule Raja talked about is: any time you can get more than a 10x capacity improvement at less than a 10x performance loss, or vice versa, a 10x performance improvement at less than a 10x capacity loss, we have the opportunity to insert a new layer. And at any given moment, we tend to view this hierarchy as static.
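That 10x rule of thumb is simple enough to sketch in code. This is my own illustrative model, with made-up latency and capacity figures rather than Intel's data; `fills_gap` and the tier numbers are hypothetical.

```python
# Illustrative sketch of the "10x rule" for inserting a new tier into the
# memory/storage hierarchy. All names and numbers are made up for
# illustration; they are not Intel specifications.

def fills_gap(faster, slower, candidate):
    """Return True if `candidate` justifies a new layer between the
    faster (smaller) tier and the slower (bigger) tier, under the rule
    of thumb: >10x capacity at <10x latency cost, or vice versa."""
    cap_gain = candidate["cap_gb"] / faster["cap_gb"]    # vs the faster tier
    lat_cost = candidate["lat_ns"] / faster["lat_ns"]
    speed_gain = slower["lat_ns"] / candidate["lat_ns"]  # vs the slower tier
    cap_cost = slower["cap_gb"] / candidate["cap_gb"]
    return (cap_gain > 10 and lat_cost < 10) or (speed_gain > 10 and cap_cost < 10)

# Order-of-magnitude example: DRAM, NAND SSD, and a persistent-memory tier.
dram = {"lat_ns": 100, "cap_gb": 256}
nand_ssd = {"lat_ns": 100_000, "cap_gb": 8_000}
pmem = {"lat_ns": 350, "cap_gb": 3_000}

print(fills_gap(dram, nand_ssd, pmem))  # → True: this tier earns a slot
```

A candidate that offers, say, only 2x the capacity at 5x the latency fails both tests and stays out of the hierarchy.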

But in reality, it's been constantly evolving. It's really a natural occurrence, and it has happened over and over again across computing history. For example, in the early days when I first joined Intel, we had the 386 microprocessor, and it had no cache. Then Intel introduced the Level 1 cache with the 486 microprocessor, then 2 levels. And now it's common for CPU architectures to have 3 or more layers of SRAM cache inside the microprocessor before we even get to DRAM.

And those levels of cache were driven by the rate at which the CPU was accelerating its performance compared to the capability of the next level of memory. That 10x gap existed, and a new layer of memory and storage had to be inserted. Another example that's a little more contemporary for all of us in the memory and storage industry was the insertion of solid state drives as a new layer in the data center storage hierarchy over the last 10 years. Having many VMs on a single CPU created an IO blender: we lost the locality of data on a given CPU.

And virtualization drove the need for much higher levels of random IO capability from the storage to bring the computing back into balance. There was a need for something between DRAM and hard disks with high random read performance, and hard disks were just not designed for that. To solve this problem, a new layer of storage, NAND based SSDs, was inserted into the hierarchy to unleash and rebalance the data center platform. And over those 10 years, the SSD industry went from less than $1 billion to a $20 billion industry. A business problem, an application performance issue, a technological solution delivered at the platform level, driven to a business opportunity both for the technology product supplier and as a solution for the data center.

Now 10 years later, there are new gaps in the memory and storage hierarchy, and that's what Raja was talking about. And any time there's a gap, there's an opportunity for the industry. And since this is Memory and Storage Day, I'm going to talk you through 2 data center nodes as examples. They're generic in nature, but one is focused on compute and one on storage, to show these emerging gaps and how we will unleash the value of data in each of them. So I want to start with the compute node.

The data center compute node is all about performance, and at the rate its speed is improving, it's no longer going to benefit from the hard drive. The data is just too far away from the CPU in a hard disk to be useful in a compute node. And the evolving workloads of data analytics and artificial intelligence are much more data intensive than traditional applications. Analytics relies on a constant stream of data to the microprocessor, and AI workloads can have much more unpredictable mixes of random and sequential reads and writes at various sizes, including some fairly small ones. And traditional storage is not well suited for this.

Also, the DRAM scaling that Raja mentioned has slowed, falling off the Moore's Law pace significantly, doubling only every 4 years. And NAND SSDs have actually been outpacing Moore's Law recently, enabled by 3D NAND and increasing bits per cell, but they're still well short of DRAM performance. So couple DRAM's slowing growth with the fact that Xeon Scalable processors are still growing core count at a Moore's Law rate, and with increasing data set sizes, and you can see the gap here. Large working data sets are getting pushed farther away from the microprocessor, and the power of its analytics is not being utilized. Innovation in the storage and memory hierarchy becomes essential.
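The arithmetic behind that widening gap fits in a few lines. In this sketch, the starting values (28 cores and 768 GB per socket) are my own illustrative assumptions, not figures from the talk: if cores double roughly every 2 years while DRAM density doubles only every 4, memory per core halves every 4 years.

```python
# Compound-growth sketch of the DRAM gap: core counts on a ~2-year
# doubling pace, DRAM density doubling only every ~4 years. Starting
# values are illustrative assumptions, not Intel figures.

def growth(initial, doubling_period_years, years):
    return initial * 2 ** (years / doubling_period_years)

cores0, dram_gb0 = 28, 768
for t in (0, 4, 8):
    cores = growth(cores0, 2, t)
    dram_gb = growth(dram_gb0, 4, t)
    # GB of DRAM per core falls by half every 4 years in this model
    print(f"year {t}: {dram_gb / cores:.1f} GB of DRAM per core")
```

After 8 years the model leaves each core with only a quarter of the memory it started with, which is the gap a new tier has to fill.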

Now for us, we have taken Optane technology into this gap. The DRAM gap is filled with Optane persistent memory, providing much larger data sets than DRAM, with hundreds of times lower latency than the NAND SSDs in the tier below. And today, we're disclosing more about our 2nd generation of Optane data center persistent memory, code named Barlow Pass, that will be coming to data centers in 2020. You'll hear more about this and future generations of Optane persistent memory from Christy Mann at 11. For larger amounts of wicked fast storage, we need to fill the storage performance gap, because NAND SSDs are just too far from the microprocessor for many real time applications. And of course, data center persistent memory is nonvolatile, and it helps fill this gap.

Optane SSDs bring data another 10x closer to the processor than NAND, and at very high capacity, because we put them on the NVMe interface, which gives us more freedom to build bigger drives. And Optane SSDs will continue to evolve. Today, we're going to disclose more details about our next generation Optane SSD, code named Alder Stream. It's based on our next generation Optane controller and our next generation Optane media. Frank's going to show you a demonstration of its remarkable performance.

And when I said it was 10x faster or more than NAND, you're going to see that in many cases, it's much, much faster than that. You'll be blown away by the progress we've made on the 2nd generation of Optane media and the 2nd generation of the Optane controller, and how we combine them for phenomenal levels of performance. Now to advance the next generation of Optane media, we've established a center of Optane technology advancement in Rio Rancho, New Mexico, where hundreds of the best and brightest are working on the next generations of the technology. And one of the things that we wanted to talk about today is that we've begun to run wafers for that 2nd generation of Optane technology inside that facility.

And soon, those folks will be working on the 3rd and 4th generations of Optane media and some cool innovations even beyond that. But let's take a look inside.

Speaker 4

It was impossible until we invented it. It couldn't be built until we built it. Consistent high performance and low latency, designed to bring new and unforeseen computing possibilities to a variety of markets. And it all starts here. Intel, redefining impossible with Optane technology.

Speaker 2

Now you can see that Fab 11X is a very large facility, and we're starting that 2nd generation there as part of a pilot for technology development. We're not announcing where HVM will be for the next generation of the technology. But certainly, that facility has the potential, as do many other sites inside Intel once we have the technology running in New Mexico. But now I'd like to shift from the compute node to a data center storage node. And of course, a data center storage node has compute capability, but it's optimized for storage.

Storage with performance to enable more analytics; hard disks are just too far away. There's a gap between the CPU's capability and hard disk latency, even with dramatic improvements in things like deduplication, compression and higher density media as the backing store on nonvolatile memory. And to fill this cost performance gap, we need innovation across multiple areas: higher 3D NAND layer counts, more bits per cell, new form factors, faster interfaces and software innovation that exploits all of these advancements. Our NAND technology strategy is focused on being a key part of that solution, driving increased density at world leadership levels, especially when combined at a system level together with Optane.

And Intel's floating gate technology has the best areal density in the industry. In fact, we've had the highest areal density on 32 layer 3D NAND, 64 layer 3D NAND and 96 layer 3D NAND, based on the published papers we've seen, public statements and our measured die sizes, and you'll be able to see SSDs running on that outside. But more than that, today I'm disclosing to you that our next generation of 3D NAND will be not 128 but 144 layers of active NAND cells, which we believe will continue to give us areal density leadership, based on all that we know from our die sizes and what we've heard from the industry. Now all those areal density comparisons are based on 3 bits per cell. And beyond that, our robust and scalable memory cell has allowed us to move to 4 bits per cell, or QLC, in both client and data center SSDs.
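The layer-count and bits-per-cell gains compound multiplicatively, which is easy to sanity-check. A back-of-the-envelope sketch, with a deliberate simplification on my part: it counts raw bits per cell stack only, while real areal density also depends on cell pitch and die layout.

```python
# Relative bit density from layer count x bits per cell, normalized to
# 64-layer TLC (3 bits per cell). A simplification for illustration:
# real areal density also depends on cell pitch and die layout.

def relative_bit_density(layers, bits_per_cell, base_layers=64, base_bits=3):
    return (layers * bits_per_cell) / (base_layers * base_bits)

print(relative_bit_density(96, 4))   # → 2.0  (96-layer QLC vs 64-layer TLC)
print(relative_bit_density(144, 4))  # → 3.0  (144-layer QLC vs 64-layer TLC)
```

By this crude measure, the jump from 96-layer to 144-layer QLC alone is another 1.5x in bits per stack.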

And we'll be shipping 96 layer QLC SSDs to customers in production next quarter, and you can see a demonstration of that outside, with improved performance compared to our 64 layer. We've made tremendous progress on that 144 layer technology, both on the technology itself and the products that are going on it, and those products will be in production next year in SSD form. Now QLC 3D NAND is a very fast growing technology, but it's early, and there hasn't been a lot of data on what's been going on in the marketplace. Recently, though, Forward Insights published some data on the size of the QLC market.

By comparing what our sales look like against the data they've produced, we conclude that 80% of the QLC gigabytes sold as SSDs today are based on floating gate technology shipped by Intel. And if we were to add in the other floating gate suppliers' output of QLC SSDs, it's clear that floating gate technology has an overwhelming lead in QLC capability. Beyond the consistent TLC density leadership, our cell is solid and has great scalability to 4 bits per cell with high yields in a consistent manner, giving us areal density leadership. And you're going to hear a lot more about that from Pranav later this afternoon. Now we're innovating at the SSD product level as well.

And Intel has been first out of the gate with a new SSD form factor to help fill that cost performance gap. It's something we, and the industry, call the E1.L form factor, or EDSFF, which rolls off the tongue. We have called it the ruler, and you can imagine why. And this innovative new form factor is designed for high capacity.

We've got more area on the drive. It's designed for better scalability in the rack: from the outside of the rack, it has a very small surface area, if you will, yet it has excellent cooling capability, as the airflow flows nicely through the rack, as opposed to trays of 2.5 inch drives that block the airflow.

The facts are that floating gate technology has delivered the highest areal density across 4 successive generations of technology, with strong promise for the future. System innovation on top of that is exactly what's needed for this cost performance scaling. Now the backbone of those data centers is connected to billions of machines. And when machines talk to machines, data is created at a rapidly expanding rate. And insights from the analysis of that data will be extremely valuable to the new data economy, as Navin mentioned.

And the technologies we are developing for the cloud apply to the capability we're adding to the software defined network, all the way from the edge to the Internet of Things. The cloudification of the network allows for better network agility and better processing of the data anywhere: at the edge for the real time, lowest latency workloads, at various points inside the network, and back at the tremendous processing power of the core data center. The versatility of the cloudification of that network will be important to many IoT applications. And these devices are rapidly becoming the reason massive amounts of data stored in data centers are growing at this exponential rate. Now the PC client is a key part of the data universe, consuming massive amounts of data, maybe 15%.

To tell us about what's happening there, let's hear from Gregory Bryant, the leader of our PC client group and a good friend of mine, on how the PC is evolving its own memory and storage hierarchy and his vision of where it's going.

Speaker 5

Each and every day, I think about how PCs are used and will be used in the future. It's a constant conversation with our partners and customers to determine how we can continue to provide more value and improve PC users' daily lives. I'm always looking for opportunities to drive innovation, not only at the processor, but more importantly at the platform level. Intel Optane memory brings new capabilities, features and responsiveness to the platform. As you know, we introduced this new capability as part of our 7th generation platforms: Intel Optane memory with SSD like performance, enabling users to create, game and produce with less waiting.

Since then, we've continued to drive innovation in this space with the release of Intel Optane memory H10 with solid state storage, which delivers the responsiveness of Optane memory combined with high capacity Intel QLC 3D NAND storage, all in a single M.2 module. In doing so, we can now address space constrained designs, thin and lights, 2 in 1 notebooks, mini PCs and all in one PCs that typically only have a single M.2 slot for storage, and deliver an SSD only solution optimized for realistic workloads and a personalized SSD experience.

These attributes position Intel Optane memory H10 with solid state storage as a great revolutionary solution to accelerate SSD based systems. In the near future, we're excited about the potential for Optane memory in enabling and improving new experiences with large memory footprints. The capabilities enabled with Optane will be disruptive for PCs that process large workloads, workstations or content creation PCs, with systems that are instantly available and have the ability to switch seamlessly.

Speaker 2

And the structure of the client memory and storage hierarchy is similar to the data center's, but innovations in form factors and specialized functions in the PC client require optimizations in power, performance and form factor. Now first, in that memory and storage hierarchy, the hard disk is becoming less and less relevant to the PC client. It's very rare in notebook form factors, because 3D NAND costs are coming down and densities are increasing, leaving little need to compromise with a hard disk and give up the form factor and performance innovations of an SSD. And NAND SSDs are becoming increasingly popular in desktops as well, as capacities at the terabyte level hit affordable price points, particularly with QLC.

Now for the value PC segment, Intel's 3D NAND is continuing to get more cost effective while becoming more and more dense, expanding the footprint. And as I mentioned earlier, we are shipping our 96 layer QLC 3D NAND SSDs to customers next quarter, and our next generation 144 layer QLC NAND SSDs will ship to customers next year. And I forgot to mention earlier something important about what's happening with the presentation here today. Given what I just said, what do you think I'm about to say? We've made tremendous progress on that 144 layer SSD, and this presentation is being run from a 144 layer QLC SSD right here, right now, and we're going to bring it outside for you later to see next to the 96 layer and 64 layer systems as well.

Now this drive was developed with performance tuned for value segment SSDs, and its density and higher capacity enable us to expand SSD usage into bigger footprints. An example of that: we introduced our first QLC SSD a little more than a year ago, focused on the North American channel just to get started when we launched the technology. Our MSS in the terabyte capacity segment was about 12% at the time, and it's grown to over 50% in the last year. Clearly, a great solution for high capacity footprint drives, with solid performance optimized for PC client usages. Now in the premium segment, as GB mentioned, our H10 blurs the line between Optane memory and 3D NAND, providing the wicked fast performance of Optane with the density and cost benefits of QLC 3D NAND, all on one small M.2 drive, to create smaller form factor and lower power systems with performance tuned for the PC client.

Later on today, you'll be able to see side by side demos in Dave Lindell's discussion with some competitive products, showing how much better responsiveness you get from this platform. Now what does the future hold for the PC client? As GB said, we're actually going to create a new layer of Optane memory, enabling new experiences with large memory footprints.

Or in persistent memory mode, it can actually eliminate the need for storage altogether, enabling exciting new usages. Think of features like changing applications just by switching a pointer: no need to load applications, they already exist in persistent memory. Think of features like instant power on and virtually unlimited standby time, because now the memory is nonvolatile, or gigantic numbers of apps and browsers open simultaneously with instant switching between them.
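To make the "applications just already exist in memory" idea concrete, here's a toy sketch of the programming model. It uses an ordinary memory mapped file as a stand-in for persistent memory (real persistent memory software typically maps a DAX filesystem or uses a library such as PMDK); `bump_counter` and the file layout are my invention for illustration.

```python
import mmap
import os
import struct

def bump_counter(path, size=4096):
    """Increment a counter stored at offset 0 of a memory-mapped file.

    With byte-addressable persistent memory, the mapping *is* the
    storage: there is no separate load or serialize step, so each run
    simply resumes from whatever value is already in memory.
    """
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, size)               # new files start zero-filled
    buf = mmap.mmap(fd, size)
    value = struct.unpack_from("<Q", buf, 0)[0] + 1
    struct.pack_into("<Q", buf, 0, value)
    buf.flush()                          # on real pmem: flush CPU caches
    buf.close()
    os.close(fd)
    return value
```

Run it twice against the same file and the second call returns 2: the state survives process exit (standing in for power off) with no explicit save or load step.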

And you're going to hear more about the architecture and the technology that underlies this later today in Frank Hady's presentation. Now picture vast oceans of data growing exponentially across thousands of data centers, enabled by the cloudification of the network stitching together billions of devices, with the ability to process, store and analyze their data, enabling insights that we can only imagine today. We are committed to delivering the technologies and products, along with collaboration with our software and system partners and our customers, to help make that a reality. Now I'd like to share Intel CEO Bob Swan's perspective on the importance of solving these problems for our customers.

Speaker 6

In our journey to transition from a PC centric company to a data centric company, to a company that builds the technologies that power the world, we recognize our customers have critical challenges to capture, transmit and process their massive quantities of data. It will require meaningful changes in the storage and memory hierarchy for computing, changes that we think Intel is in the best position to deliver. Solving these challenges for our customers is an essential part of our platform strategy. We're excited and committed to the journey to support customers by ensuring the technologies we develop and the solutions we enable with our platforms are the best for powering the world.

Speaker 2

Exciting stuff. Now we've given you some insight into new products and the advancements in technology we have underway, and you're going to hear more from the folks that come after me, Frank and Pranav and crew. And as you've heard Bob say, we are customer obsessed. What really inspires us is seeing what our customers do with these products. And we've got a lot of customers doing work with these products; you're going to hear more about some of that from some of the other speakers.

But I wanted to share just a few of them here with you so you could see the real impact of these products. I want to start with a newer company called VAST Data. They're a company on a mission, a mission to address storage differently than any other company has done before, because of the market demands. I heard a little bit of that from one of our customers earlier today. And VAST Data is changing the paradigm with universal storage, made possible by QLC 3D NAND combined with the incredible performance of Optane data center SSDs, enabled by essential software to maximize the benefit of that solution.

Now we talked about the pyramid in the storage and memory hierarchy within a given platform. But at the enterprise level, there's a pyramid as well, and what they're trying to do is turn that pyramid upside down. It's better to hear it directly from them, so we'll let them tell that story.

Speaker 7

The way scale-out storage systems have always worked, there is a storage pyramid: fast systems are up top, and they're very expensive, very fast, but relatively small because they're so expensive. And then you have many tiers of storage that grow in capacity and shrink in price. The newer, more AI-based applications require fast access to the entirety of the data set, so in effect they would like to see that pyramid flipped on its head. VAST Data is a next generation storage company.

What we found is that new applications such as analytics and AI and machine learning and deep learning need a new type of infrastructure. We're building that new type of storage system.

Speaker 8

What we're introducing with VAST is essentially a paradigm shift.

Speaker 7

Our vision is one of universal storage: a single system cheap enough and big enough that you can throw all your data at it, yet fast enough that everything is accessible at sub-millisecond latency.

Speaker 8

So you no longer need to make a choice between performance and capacity.

Speaker 7

Our system is primarily based on Intel parts. We use Intel QLC Flash. The servers are Intel servers. Inside those servers, we use Intel 2nd generation Xeon Scalable CPUs. And we could not have built this system without Intel Optane technology.

All of the data that comes into our system lands on Optane, and then we migrate it off to the QLC flash in the background. We use that Optane technology to have the time and the ability to understand how to place the data, where to place it on flash, and to treat the flash in a friendly manner so we're able to use QLC. The competitors are one to three generations behind Intel when it comes to CPUs and QLC flash, and they simply don't exist when it comes to Optane technology.
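The tiering scheme described here, where writes land on a small fast tier and are migrated to the capacity tier in the background, can be sketched in a few lines. This is an illustrative model only, not VAST's implementation; all class and variable names are hypothetical.

```python
# Sketch of a two-tier write buffer: writes land in a fast tier, then a
# background pass drains them to the capacity tier in coalesced batches,
# the kind of large, infrequent writes QLC flash tolerates best.

MIGRATION_BATCH = 4  # hypothetical number of writes coalesced per flush

class TieredStore:
    def __init__(self):
        self.fast_tier = {}      # stands in for the Optane landing zone
        self.capacity_tier = {}  # stands in for QLC flash

    def write(self, key, value):
        # Every incoming write is absorbed by the fast tier at low latency.
        self.fast_tier[key] = value

    def read(self, key):
        # Recently written data is served from the fast tier; older data
        # from the capacity tier.
        if key in self.fast_tier:
            return self.fast_tier[key]
        return self.capacity_tier[key]

    def migrate(self):
        # Background job: drain buffered writes in batches so the capacity
        # tier sees few, large writes rather than many small ones.
        while self.fast_tier:
            batch = list(self.fast_tier.items())[:MIGRATION_BATCH]
            for key, value in batch:
                self.capacity_tier[key] = value
                del self.fast_tier[key]

store = TieredStore()
for i in range(10):
    store.write(f"block-{i}", i)
store.migrate()
print(store.read("block-3"))  # served from the capacity tier after migration
```

The buffering window is what gives the system time to decide data placement before it reaches flash, which is the point the speaker makes above.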

Speaker 8

So you find VAST in bioinformatics, in financial services, and also in life sciences environments where people are pushing the boundaries of things like cancer research, and these high-scale analytic workloads are absolutely enhanced by all-flash infrastructure.

Speaker 7

Once we came in, we enabled them both to simplify their stack and to improve the training of that AI model so it can better predict and better analyze going forward. Today, the same algorithms that were available 30 or 40 years ago are starting to bear fruit because machines have much more access to data, and much faster access to data. They show results that we didn't see in the past.

Speaker 2

Very cool. Now we also have a long and strong partnership with Dell EMC. The Dell EMC PowerMax system is the first to market with dual-port Optane SSDs, which they use as storage class memory inside their systems for persistent storage. And the dual-port nature of it is part of the reliability of their system.

So there are two ports into the SSD, such that if any one thing goes down, they still have access to that data. We worked with them for over a year on bringing that product to market. Now PowerMax has a built-in AI machine learning engine that automatically places data onto the correct storage media based on incoming host I/Os. That machine learning engine is capable of analyzing and forecasting 40 million data sets in real time and driving 6 billion decisions per day with no additional overhead. The result is that Optane SSDs inside the PowerMax solution deliver 50% better response time with lower latency, while still scaling up to 4 petabytes with 256 front-end ports and 15 million IOPS, with the same reliability that customers have come to expect from Dell EMC and the PowerMax. That is an endorsement in itself of Optane reliability, but more importantly for their customers, it means 50% better response time. A great collaboration with a long-term partner, Dell EMC.
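The core idea behind placement engines like the one described above is to score each data extent by its recent I/O activity and keep the hottest extents on the fast media. The following is a deliberately simplified sketch of that heat-ranking step, not Dell EMC's algorithm; the function and trace are hypothetical.

```python
# Heat-based placement sketch: count recent I/Os per extent and pin the
# hottest extents to the fast tier, leaving the rest on capacity media.

from collections import Counter

def place_extents(io_trace, fast_capacity):
    """Given a list of accessed extent IDs and the number of extents the
    fast tier can hold, return (fast_extents, capacity_extents)."""
    heat = Counter(io_trace)                     # I/O count per extent
    ranked = [e for e, _ in heat.most_common()]  # hottest first
    fast = set(ranked[:fast_capacity])
    cold = set(heat) - fast
    return fast, cold

trace = ["a", "b", "a", "c", "a", "b", "d"]
fast, cold = place_extents(trace, fast_capacity=2)
print(fast)  # the two most-accessed extents: {'a', 'b'}
```

A production engine would forecast future access rather than just count past I/Os, but the ranking-by-heat structure is the same.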

Intel is also working closely with Cisco on their HyperFlex all-flash NVMe solutions, which use Optane as a caching layer and 3D NAND as the capacity drive. This has led to a new class of performance in hyperconverged infrastructure, with improved IOPS and lower latency in hyperconverged environments. The solution is available today and is being used by hospitals, some of the best sporting teams for real-time management of what's going on inside their arenas, real-time stock trading environments, heavy equipment vendors and many, many more customers around the world. Now Verizon, of course, is a U.S.

telecommunications company using Optane data center persistent memory to improve the total cost of ownership of their cloud infrastructure for their content delivery media properties, places like Yahoo!, Huffington Post, TechCrunch and many more. But it's best told by Hugo Gunnarsson from Verizon's Performance Engineering department.

Speaker 9

We have every single application running in the company going through my team, figuring out what hardware to get. Then we work closely with them to adapt their application to the hardware, or the hardware to suit the application, and also to understand how to use the new technology. What my team is actually doing is trying to understand what gear gives each property the best ROI for what we're buying for them, while at the same time giving them the best possible performance. So my mindset is that I always want to look at what's coming down the road: if I have a 6-month, 12-month, 18-month cycle to implement new technology, I won't be ready when it's available. One of the challenges we have had is that a lot of applications now require a huge amount of memory. One way to handle that in the past was to scale out, where we end up just adding more and more servers.

But until fairly recently, you were limited by the size of memory, which typically ended up being somewhere between 0.5 and 1 terabyte. Now, with Intel Optane DC persistent memory, it can go way beyond that: it can go to 3 terabytes, 6 terabytes and even further in the future. There are a lot of changes I expect in the next 3 to 5 years, and challenges we will have to adapt to. So a lot of the time is actually spent understanding not just the need of the property or the application of the day, but also how we can apply the new technology, totally change how we run the code, and get much more bang for the buck.

Speaker 2

Yes, very cool. On the client workstation front, Hyundai, which is of course based here in Korea and one of the top five automakers in the world, had a research and development challenge. They wanted to deploy workstations to their engineers that required very large capacities, but they had a big data performance problem. They used Optane memory together with Rapid Storage Technology, an Intel software capability, together with high density storage, and they were able to get the best of both worlds: higher performance, greater capacity and efficiency, while enhancing their security. A great example of a win-win between Optane and high density storage. Now, for those of you who have heard us talk about Optane before, you may have heard us talk about the amazing impact our technology is having on health care, because that is a way for us to really connect with the impact of our products on people.

I've talked about our Optane SSDs helping reduce brain MRI scans in research from 42 minutes down to 4 minutes, and for anyone who has spent time in an MRI, you understand the difference 10x makes there. It enables us to deliver sequencing of a whole human genome in just 2 hours instead of 50, letting us use that technology to provide better health care to folks. And we are currently working with a hospital in the United States that provides health care to children. They've implemented that Cisco HyperFlex solution we were talking about earlier, with Optane SSDs, to manage the business challenges they had around customized settings for the children's health care, and they were able to use that technology to seamlessly integrate those custom settings into their care plans.

Optimizing workloads, storage and memory and compute nodes, product announcements: they're all pretty awesome and cool for those of us in the industry. But when we look at the technology's impact, helping children facing scary and challenging problems, having a role in making their lives just a little bit easier, that is pretty inspiring for us. Now, you will hear more detail from the key leaders coming up after me. Make sure you hang in there, for we've got some of our best and brightest coming up: Pranav and Frank and Mohammed, talking about the technology. We come to work every day to tackle wicked hard technology problems through a holistic platform strategy, working together with our partners and customers in a way that changes the world.

And I think you will see some of that come through when you hear what these technologists have to say.
